A clinical decision tool serves as a framework for providers to counsel patients about mode of hysterectomy. Data are limited on outcomes of laparoscopic hysterectomy with morcellation in patients with unsuspected uterine sarcomas, which complicates discussions between physicians and patients about management of uterine fibroids. A shared clinical decision tool described in an article published in the European Journal of Obstetrics and Gynecology and Reproductive Biology may help in counseling patients about optimal management of large fibroids while taking into consideration risks and benefits as mandated by the Food and Drug Administration. Writing about their use of the tool in a hypothetical population, the authors indicate that “women and their providers can use this tool to weigh the benefits of a minimally invasive procedure against the risk of dissemination of a rare but serious cancer.” They caution, however, that the decision aid has not been validated, is not comprehensive, and that not all of the incidence parameters are generalizable to all patient populations. The objective of the research, performed by physicians from Beth Israel Deaconess Medical Center, the Institute for Technology Assessment at Massachusetts General Hospital, and Harvard Medical School, was to compare risks and benefits of laparoscopic hysterectomy with morcellation versus abdominal hysterectomy without morcellation for large fibroids. The shared clinical decision tool was designed to serve as a framework for providers to counsel patients about mode of hysterectomy and to facilitate shared decision-making between patient and provider. Risks and benefits were estimated from the literature, including surgical complications (venous thromboembolism [VTE], small bowel obstruction, adhesions, hernia, surgical site infections, and transfusions), uterine sarcoma risks, and quality-of-life (QoL) endpoints. The tool was applied to a hypothetical population of 20,000 patients with large uterine fibroids, of whom 10,000 underwent laparoscopic hysterectomies and 10,000 had abdominal hysterectomies. The authors calculated that abdominal hysterectomy would result in 50.1% more adhesions, 10.7% more hernias, 4.8% more surgical site infections, 2.8% more bowel obstructions, and 2% more VTEs than laparoscopic hysterectomy. An abdominal procedure also would result in longer hospital stays (2 days), slower return to work (13.6 days), greater postoperative day 3 narcotic requirements (48%), and lower SF-36 QoL scores (50.4 points lower). Looking at risks associated with unsuspected cancers, the authors estimated that 0.28% of patients undergoing hysterectomy for fibroids would have occult uterine sarcomas. In these women, laparoscopic hysterectomy with morcellation would reduce 5-year overall survival rates by 27% and recurrence-free survival by 28.8 months. Because it is not possible to exclude the presence of an occult malignancy with imaging or statistical models, physicians should use their clinical judgment and consider contained tissue extraction at the time of surgery.
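For readers who want to see the scale of these estimates at a glance, the arithmetic on the hypothetical cohort can be sketched in a few lines of code. This is purely an illustration that organizes the figures quoted above; it is not the authors' decision tool, and the layout of the table is an assumption made for the example.

```python
# Illustrative sketch only: organizes the figures reported in the article.
# This is NOT the authors' validated decision tool.

total_patients = 20_000        # hypothetical cohort from the article
per_arm = total_patients // 2  # 10,000 laparoscopic, 10,000 abdominal

# Differences reported for abdominal vs. laparoscopic hysterectomy
reported_differences = {
    "adhesions": "50.1% more",
    "hernias": "10.7% more",
    "surgical site infections": "4.8% more",
    "bowel obstructions": "2.8% more",
    "venous thromboembolism (VTE)": "2% more",
    "hospital stay": "2 days longer",
    "return to work": "13.6 days slower",
    "postoperative day 3 narcotic requirement": "48% greater",
    "SF-36 quality-of-life score": "50.4 points lower",
}

print(f"Abdominal vs. laparoscopic hysterectomy ({per_arm:,} patients per arm):")
for outcome, difference in reported_differences.items():
    print(f"  {outcome}: {difference}")

# Occult uterine sarcoma: the article estimates 0.28% of patients undergoing
# hysterectomy for fibroids would harbour an unsuspected sarcoma.
occult_sarcoma_rate = 0.0028
expected_cases = total_patients * occult_sarcoma_rate
print(f"Expected occult sarcomas in {total_patients:,} patients: about {expected_cases:.0f}")
```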
https://www.contemporaryobgyn.net/view/tool-helps-decision-making-fibroid-management
Gallery hours: Monday, Tuesday, Wednesday and Friday, 10 AM - 5 PM; Thursday 10 AM - 7 PM; Saturday 12 PM – 5 PM. Creativity Explored, the premier San Francisco nonprofit visual art gallery and studio for artists with developmental disabilities, presents Mind Place, a new exhibition exploring the psychology of place. From depictions of Ferris wheels that evoke memories, to African landscapes, to psychedelic abstractions of subjects that inhabit another space entirely, the exhibition leads the viewer on a journey to ethereal environments that are physical yet based in the mind. The exhibition includes selected works by Ian Adams, Jay Herndon, Camille Holvoet, Laron Bickerstaff, Kathy Wen, and Marilyn Wong. Curated by Visual Arts Instructor Leeza Doreian. Evoking both physical and imaginary worlds, the multimedia artwork in this exhibition offers the viewer diverse perspectives into the production of a visual space that is both interior and exterior. Emotional, provocative, and narratively inviting, the more than two dozen artworks included challenge traditional viewership by expanding visual depictions of a place or space into the subjective territory of a "mindscape." Ian Adams, 27, was inspired to draw pictures of places he wants to visit. His work is imbued with an appreciation of these locations that gives his subjects an air of the mythological. Adams is currently in the middle of a two-and-a-half-year project to make at least one work for each country in Africa. Using images he finds online, he renders his affection into his subjects with playful perspectives that capture the essence and awe of viewing an unknown world. Jay Herndon, 60, usually starts a work by layering marks until a subject takes form. This process is further illustrated in his vibrant mobile sculpture that activates as viewers examine the shapes from multiple angles and perspectives. The work began with a series of individual landscapes on wood blocks. After creating the series, the pieces were wired together to create a multidimensional mobile sculpture. His other pictorial works included in the exhibition draw from reference materials, meticulously translating what he sees to paper, with almost microscopic intensity. The short, sketchy lines that Jay favors endow his drawings with energy and staccato movement. Camille Holvoet, 65, depicts deceptively sweet subjects with a practice tending to draw on remembrances of life’s anxieties and forbidden desires. Her Ferris wheels elicit a contradiction of emotions that sway between elation and unease. Holvoet expounds on her inspiration, “I draw Ferris wheels because it’s my symbol because I used to ride on them. The third time when I went to Chinatown at night, and it felt fun and funny feelings, and scary and a long drop. It was the longest drop. It was like riding a building down and around.” Holvoet’s Girl’s Dormitory When I First Went to the Hospital invites the viewer to relive an intimate memory, depicting an interior scene replete with an unsettling proximity of bright colors and ornate iron bars on windows. Her work often occupies a space between dream and memory, sowing the seeds of a narrative both real and imaginary. Laron Bickerstaff, 47, creates stream-of-consciousness text-based artwork, exemplified in the animation Laron’s Home. Communicating in American Sign Language, his work often depicts observations of life with a deep awareness of the visual characteristics of language. 
His pictorial works in Mind Place utilize a process of layered repetitive mark making to produce an abstraction of the subject. These abstractions are blanketed in perspectives and architectural forms, enveloping the viewer in a maze of structures and colors. The seascapes of Kathy Wen, 32, are energized by her concerns for environmental welfare. The flow and wash of vibrant watercolor contrast with the meticulous patterning of highlight and shadow, producing a faceted abstract quality. Wen’s bold yet detailed slices of color perforate traditional perspective and bring to mind striations in geological formations that unite the land, sea, and air. Marilyn Wong, 68, has said of her work that she is “painting her mind.” Medical textbook illustrations first inspired her unique style of abstraction. Wong takes an introspective approach to illustrating her subjects, often producing works that interplay light and form with grids of circles and shapes that render the subject within various planes of perspective. Wong’s Kundo Lion in the Zoo features her characteristic style of dissecting and reconstructing limbs and parts into an expansive layout of lines and shapes, reminiscent of aerial photography. While typically formal in subject matter, Wong’s cartographic elements and style of depiction merge the subject with an unseen and expansive environment that unfolds over time.
http://www.creativityexplored.org/press-room/3659/mind-place
Network Ports And Firewall Information Which network ports does Fleet Maintenance Pro use? We utilize TCP ports 12010 and 12011 for the network version. The ports for the SQL version depend on how your SQL database is set up, but it is TCP 1433 by default. Are there any particular firewall rules that need to be set up? On the server computer, make sure that incoming and outgoing traffic on those ports is allowed on your local network. On the workstations you will also want to make sure incoming/outgoing traffic on these ports is allowed. I've opened the ports but I'm still having trouble getting connected from a workstation. How do I test if a port is open? Our ElevateDB Server service (network edition only) accepts Telnet connections on port 12010. If you have the Telnet Client enabled on your workstation (you can turn it on via Windows features), use this command prompt command: telnet server-name 12010 Replace 'server-name' with whatever your actual server name is before running the command. If successful, you should get a blank window. If unsuccessful, you will get a connection error message. How do I open a port in my firewall? The instructions for this will vary, as there are many different firewall types. Software firewalls include Windows Firewall, which is built into the Windows operating system, along with any other antivirus or security software suite that you may have installed. There are also hardware firewalls for more complicated IT setups, which are either their own piece of network equipment, or simply settings within your router (generally hardware firewalls only affect external communication, but can be set to block local traffic as well). You will want to make sure any firewall you use is configured appropriately on both the server and the workstations. If you are unsure of how to open ports in your specific firewall solution, you may want to get in touch with your IT person or your antivirus/security software vendor. How do I open a port in the Windows Firewall? As the Windows firewall is the one most commonly used (since it comes built in), we have included the steps on how to configure it below.
- Click on Start and go to Control Panel. Click on "Windows Firewall".
- Click "Advanced Settings" on the left-hand side.
- In the Windows Firewall with Advanced Security dialog box, click "Inbound Rules" in the left pane, then click "New Rule" in the right pane.
- Select the radio button for "Port" and click Next.
- Keep the option for TCP. Select "Specific local ports" and type in 12010, 12011.
- Select the option for "Allow the connection".
- Apply the rule to your network type.
- Name the rule and click Finish.
- Back in the Windows Firewall with Advanced Security window, click Outbound Rules on the left and click New Rule on the right. Repeat the steps above to set up your Outbound rule.
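If the Telnet client isn't available, the same connectivity check can be scripted. Below is a minimal sketch in Python (not an official Fleet Maintenance Pro utility); 'server-name' is a placeholder that you would replace with your actual server name or IP address.

```python
import socket

def port_is_open(host, port, timeout=5):
    """Try a TCP connection; return True if the port accepts it within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace 'server-name' with your actual server name or IP address.
for port in (12010, 12011):
    state = "open" if port_is_open("server-name", port) else "blocked or closed"
    print(f"TCP port {port}: {state}")
```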
https://support.mtcpro.com/article/497-network-ports-and-firewall-information
For the first time since 2008, the GMP Annex 1 guidance for the manufacturing of sterile medicinal products has undergone a profound revision. The recently finalised document is more than a simple update: the actual guideline has been completely rewritten, as DuPont's Steve Marnach explains. Not only has the length of Annex 1 been increased from 16 to 50 pages, but the whole approach has changed, which will have repercussions on the technologies and the procedures used in pharmaceutical manufacturing. It anticipates that all pharmaceutical manufacturing activities will be governed holistically by quality risk management (QRM) principles and documented in the contamination control strategy (CCS). The CCS will become a living document, based on a data-driven scientific approach, which should be continuously updated and improved in order to control potential risks to quality. The new draft is calling for a proactive approach: simply reacting to and correcting detected contamination will no longer be enough. Manufacturers will be expected to fully understand their processes and procedures, so that they can identify the potential risks to quality, put in place all the technical and procedural means to control these risks and aim for continuous improvements. Since cleanroom garment systems are a critical part of sterile and aseptic manufacturing, they need to be managed under QRM principles too. Quality risk management starts with an analysis and understanding of all the risks to quality associated with cleanroom operators wearing cleanroom garments. A complete data-based analysis will allow for design certification, qualification, validation and monitoring procedures which have quality built into them, thus being part of a holistic contamination control strategy. A risk analysis is needed to understand the contamination risks coming from operators wearing cleanroom garments. Operators represent the biggest source of contamination inside cleanrooms, responsible for 75% of all contaminants. This contamination comes both from the operators themselves and from their cleanroom garments. Operator contamination is due both to our human nature (an average person sheds 40,000 particles per minute and 10% of them carry microorganisms) and human behaviour. While it is possible to mitigate the latter aspect through careful operator selection, training, slow movements or impeccable hygiene, the fact is that operators will always be shedding particles, as multiple studies have proven. There is just one way to prevent particles generated by operators from contaminating the cleanroom: use cleanroom garments. They are the only barrier between the operator and the production environment. The 2020 draft of Annex 1 clearly points this out: “(the cleanroom garments should) retain particulates shed by the body”. Cleanroom garments themselves may be a source of contamination and this risk needs to be assessed too. For example, the material used for making the garments (non-woven for single-use garments or woven for reusables) can shed more or fewer particles depending on the nature of the fibres or filaments used, their resistance to abrasion or their construction. The trims (zipper, buttons, elastic or sewing threads) may also be a source of contamination. The design of the garment plays a role too and should be evaluated. 
One detail which is often neglected is the packaging in which the cleanroom garments come, which could be an additional source of contamination (e.g. paper-backed bags vs. plastic bags). Once the risks have been evaluated, they should be removed or replaced by technical or organisational means as far as possible, and the residual risks mitigated as much as possible using a validated cleanroom garment system. The EU general guidance on validation (GMP Annex 15) provides the general framework which can be applied to the qualification of cleanroom garment systems as well. This validation approach consists of five steps: the definition of the User Requirements Specification (URS), the Design Qualification (DQ), the Installation Qualification (IQ), the Operational Qualification (OQ) and the Performance Qualification (PQ). While the DQ and IQ have the highest impact on the quality achieved, the other stages should not be neglected, and it is important to proceed step by step. While the user requirements are not formally part of the validation process, it is important to define upfront the requirements on the cleanroom garment system from the users and the environment they work in. The URS will define the critical requirements against which the garment system needs to be assessed so that they will be in line with the risk assessment. For example, a trained operator may have to be able to work at least 3 hours in the same set of cleanroom garments without causing unacceptable (cGMP) levels of contamination of the garments and the aseptic working environment. The garment’s packaging system may have to be suitable for the layout of the cleanroom and its material pass-through systems, or may have to be suitable for manual spray disinfection. Sometimes, the operator may also need chemical or biological protection against the substances they are handling inside. The compliance of the cleanroom garment system with cGMP must be demonstrated and documented during the DQ, which aims to confirm that the selected cleanroom garment is qualified for the intended use. As the new Annex 1 will require a data-driven scientific approach, the DQ should include tests to simulate the intended use and the performance of the garments. As recommended by ISO 11607-1, the DQ should be split into four key areas: Material qualification, Performance testing, Stability testing and Usability evaluation. For reusable garments this needs to be extended to the garment maker’s subcontractors, suppliers and service providers. M. Pavičić and T. Wagner have listed properties which should be assessed in table 1 below. In this article, only a couple of these properties will be highlighted to show their importance and the scientific test methods which may be used to assess the performance of cleanroom garment systems. 1. Material qualification: In order to ascertain whether the garments are truly sterile, it is important to check if the manufacturer is following a validated sterilisation process and can guarantee a sterility assurance level of 10⁻⁶ as per ANSI/AAMI/ISO 11137-1 and document this in a certificate of sterility. A simple certificate of irradiation or a document attesting an internal autoclaving process is not enough. Since the cleanroom garments need to be a barrier against the human contamination generated by the operators, it is important to assess the filtration efficiencies of the materials (non-woven or reusable polyester fabrics) used for making the garments. 
The particle filtration efficiency (PFE) against dry particles can be assessed with the test method EN 143 (TSI 8130), which measures the filtration efficiency using salt particles with a diameter of 0.3 µm, while the bacterial filtration efficiency (BFE) can be assessed with the test method ASTM F2101. 2. Performance testing: The Helmke Drum test method as per IEST-RP-CC003.4 is a good way to assess the particle shedding of cleanroom garments, especially for garments that are washed multiple times. The Body Box test (IEST-RP-CC003.4) is the only test method available to assess particle shedding while a garment is being worn by an operator. It allows evaluation of both the particle shedding of the garment and its PFE and BFE against the particles shed by the operator. 3. Stability testing: It is important to check how the garment characteristics and properties will change over time (due to ageing, wear, and wash-dry-sterilisation cycles). Therefore, the performance characteristics listed above should be validated under worst-case conditions, i.e. for single-use garments, assessing garments from different batches and at the end of their shelf-life, and for reusable garments after 10, 20, 30, 40 and 50 wash-dry-sterilisation cycles to assess the end-of-life of the garments (studies by F. Romano, B. Ljungqvist and B. Reinmüller have demonstrated that repeated washing deteriorates garment performance). 4. Usability evaluation: It is important to go through the user scenarios and to assess the packaging of the garments to ensure that the cleanroom garments can be used with acceptable remaining contamination and safety risks. While this is typically done by the end-user, suppliers can also evaluate and supply data to users. Even though the IQ is a formal check to verify that all required elements of the cleanroom gowning system are present, it is important to check the following in order to eliminate unforeseen risks: Are the gowning and de-gowning facilities in order? Did the supplier provide the required certificates of conformance and/or analysis, supplier instructions, etc.? Have the SOPs for gowning and de-gowning been written or adapted? Have the logistical processes for garments and accessories been validated? Have the operators’ training and qualification plans been established? The OQ aims to qualify the gowning and de-gowning concept, including logistics and material pass-throughs, and the aseptic presentation of the garments (i.e. folding, packaging). The PQ is typically done under worst-case conditions, which must be determined based on a risk assessment to validate the performance of the cleanroom garment system when it is used. The requirements specified in the URS must be complied with fully. They include the aseptic gowning qualification and the validation of the microbiological quality of the gowned personnel with the garments and other accessories during the actual work. Of course, it does not end there: periodic revalidation of the garment system, constant monitoring and critical review of changes to the garments or the system are important to demonstrate the state of control. Cleanroom garment systems are a critical part of the contamination control strategy and process validation. A risk- and science-based Quality-by-Design approach and verification is the correct strategy to control contamination risks related to people and offer designed-in risk reductions. This approach is an adequate response to the latest regulatory requirements.
https://www.cleanroomtechnology.com/news/article_page/Selection_criteria_for_cleanroom_garments_with_the_new_Annex_1/204009
Interested candidates kindly apply before 04/05/2022. CV must be in PDF format, saved with your full name. Job Description: Computer Vision Engineer
- Researching and developing scalable computer vision and machine learning solutions for complex problems
- Collaborate in cross-functional teams to integrate image analytics algorithms and software, as well as develop system prototypes
- Proficiency with vector quantization and clustering to build search engines and systems
- Understanding of computational geometry, statistics, and linear algebra
- Maintain and develop software modules for large-scale image and LiDAR data processing in the cloud
- Analyze and improve the efficiency and stability of various deployed systems
- Work well in a collaborative team environment
What We Look For
- 3+ years of experience with computer vision, deep learning, and machine learning
- 2+ years of experience in vectorization of raster data using computer vision
- 2+ years of experience working with drawing/CAD/GIS file formats (extraction, manipulation, and generation of DXF, SVG, shapefiles, etc.)
- 2+ years of experience with programming languages like C, C++, Matlab and Python
- Experience with Unix and Linux command-line tools and scripts
- Experience with OpenCV or other computer vision libraries, as well as deep learning frameworks like TensorFlow
- Experience with point cloud meshing, image segmentation, and 2D marker tracking
- Experience with three-dimensional (3D) imaging concepts
- Proficiency with CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks)
- Excellent analytical, mathematical, communication, and problem-solving skills
- Keeps up with the latest advancements in technology related to the field
- B.S., M.S. or Ph.D. in computer science, computer vision, machine learning or other related fields
These Would Also Be Nice
- Experience with optimizing algorithms for embedded devices using OpenGL and OpenCL
- Experience with JIRA, Git, and agile project management
- Experience with WebRTC, ARM, and CUDA
We are no longer accepting applications for this ad.
https://www.radicaltechnologies.co.in/jobopenings/hiring-for-computer-vision-engineerb-s-m-s-or-ph-d-in-computer-science-computer-vision-machine-learning-or-other-related-fieldsexp-3-years-in-computer-vision-deep-learning-and-machine-learn/
Piazza message board: Please ask all questions on Piazza! The discipline of artificial intelligence (AI) is concerned with building systems that think and act like humans or rationally on some absolute scale. This course is an introduction to the field, with special emphasis on sound modern methods. The topics include knowledge representation, problem solving via search, game playing, logical and probabilistic reasoning, planning, machine learning (decision trees, neural nets, reinforcement learning, and genetic algorithms) and machine vision. Programming exercises will concretize the key methods. The course targets graduate students and advanced undergraduates. Evaluation is based on programming assignments, a midterm exam, and a final exam. Prerequisites are CSE 132, CSE 240, and CSE 241, or permission of the instructor; if you are unsure about any of these, please speak with the instructor. Knowledge of Python will be critical to complete the programming assignments. If you do not know Python, or are rusty, you may find some resources to help below. Some basic knowledge of statistics, probability theory, and first-order logic is also expected. Please post all questions to Piazza! You can find autograder information here. There is also a website where you can upload your own images to be visualized as "deep dreams". This course is based on the CS 188 course at UC Berkeley. You may find lectures, slides, and more there. The required book for this course is Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. Either the second or third edition is fine. This is a classic textbook and highly recommended! Another good reference for reinforcement learning is Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto. The book is available online here. The official Python tutorial is quite comprehensive. There is also a useful glossary. The tutorial for the CS 188 AI course at Berkeley also contains a bit of information related to Project 0. There is an interactive lesson plan available on Codecademy. The Washington University library has electronic copies of the O'Reilly book Learning Python available for viewing online. See here.
https://www.cse.wustl.edu/~garnett/cse511a/fall_2016/
This policy sets out the SRA's approach to dealing with people who may be witnesses in serious cases but who have (or may have) themselves been involved in alleged misconduct, and to provide transparent criteria and processes. Some cases investigated by the SRA involve very serious misconduct in circumstances where proving the misconduct is extremely difficult, or is likely to be reasonably possible only when supported by evidence from a witness close to, and possibly involved in, the behaviour of concern. It is important that the public and potential witnesses understand how the SRA will approach such situations to ensure that the public is protected. The SRA receives information from confidential informants and protects their identity so far as it properly can as a matter of law. This policy is about more formal disclosure, including formal evidence being given by a potential witness. Read further information on how to provide information to the SRA. This policy relates to the regulation of persons by the SRA. Agreements entered into with the SRA are separate to action which may be taken by other regulators or enforcement agencies in appropriate cases. As a matter of general principle, the SRA can decide not to pursue misconduct by a person on grounds of proportionality, including in response to relevant mitigation. Where a sanction is to be considered by another body (such as the Solicitors Disciplinary Tribunal), the SRA acknowledges the assistance provided and that it constitutes mitigation. Relevant factors include, for example, whether the witness has paid any financial penalty or costs due to the SRA or has entered into an arrangement to do so (and abides by that arrangement). Mitigation may be particularly substantial if the witness is responsible for the behaviour of concern having been brought to the attention of the SRA for the first time or for providing credible evidence of it. This could include a report which is made via a firm’s Compliance Officer for Legal Practice (COLP) or Compliance Officer for Finance and Administration (COFA). Under Outcome 10.4 (O(10.4)), you report to the SRA promptly serious misconduct by any person or firm authorised by the SRA, or any employee, manager or owner of any such firm (taking into account, where necessary, your duty of confidentiality to your client). While there should not be a material delay in a report being made to the SRA, you may also have an obligation to report certain matters internally within your business. The SRA may approach a potential witness on its own initiative or enter into discussions at the instigation of the witness. The SRA is under no obligation to enter into discussions or reach any agreement. Discussions will be ‘without prejudice’ as between the SRA and the potential witness, save that the SRA may have to act upon factual information representing a risk to the public or the regulatory objectives of the Legal Services Act 2007. Subject to any potential claim to public interest immunity, the agreement reached with the witness will be disclosable to a respondent (i.e. another person being investigated by the SRA in respect of whom the evidence relates) if evidence from the witness forms part of the SRA case before any court, tribunal or in the SRA's internal decision-making processes. Reports which are made to the SRA on a confidential basis do not generally involve the more formal co-operation at which this policy is aimed and so will not normally be appropriate for a co-operation agreement. 
The SRA's view is that communications with the witness prior to and leading up to the agreement will usually be irrelevant and should not be disclosable to any other party, subject to orders of the court or a relevant tribunal. The potential witness must make full and frank disclosure to the SRA of alleged misconduct of which he or she is aware and in particular of his or her involvement in misconduct. If it is subsequently discovered that full and frank disclosure has not been made, the SRA may re-investigate and/or take action against the potential witness for any original misconduct or consequential misconduct (such as misleading the SRA). The SRA will not generally commit to any outcome or likely outcome until satisfied that it is in full possession of all relevant facts including the involvement of the potential witness in the misconduct and how that relates to the involvement of others (such as the identity of the ‘ringleader’ or any person coercing others into the behaviour in question). The potential witness must provide continued and good faith co-operation including giving evidence where necessary. Failure to co-operate may also result in the SRA reinvestigating or taking action as set out above. The potential witness will generally be required to enter into a regulatory settlement agreement which will include a full admission of any misconduct and a declaration that full disclosure has been made (with consequential acknowledgment of potential action if disclosure has not been full). This is a statement of policy and not a formal document to be interpreted closely as if it were a rule or delegated legislation.
http://sra.org.uk/solicitors/enforcement/solicitor-report/cooperating.page
Your furry friend can end up in an accident or situation that may require immediate help. Sometimes it may be something that seems trivial but persistent. Instead of just staying at home and worrying, it is important to know what to look out for. This will help you make rational decisions when needed. If you are in doubt, please contact your vet for advice. Here are some situations that qualify as pet emergencies: If you realize that your pet is open-mouth breathing, wheezing, coughing, weak/shallow breathing, raspy breathing, or choking, you need to get your pet to the vet. Do not delay since breathing difficulties are severe and possibly life-threatening. In the process of seeking help for your pet, handle them with care. According to the CDC, even the gentlest of animals can bite and scratch when injured due to fear. If you suspect that your pet has eaten something they should not, take them to the vet immediately. This will help the vet deal with it before the toxic substances are digested and absorbed into the pet’s body. Some poisonings can happen without your knowledge. The signs to look out for are: seizures or collapse, vomiting, diarrhea, excessive salivation, or skin damage due to corrosive substances, among others. Trauma is another emergency; this includes falls, bites, accidents, and gunshot wounds. Even if your pet seems fine, it may be hard to assess the extent of the internal damage. Wounds may also be deeper than they appear. Thus, you must get your pet to the vet as soon as possible after the trauma. Pain may also be caused by the trauma, which is another good reason to ensure that your pet is seen by the vet. If your pet is repeatedly straining to urinate or defecate, it is essential to take them to a vet and find out why. Animals do not often show that they are in pain, so the issue can develop into a larger and potentially life-threatening one. GDV (gastric dilatation-volvulus) is probably one of the most serious non-traumatic emergencies for any dog. Bloating will cause swelling behind the rib cage from an enlarged stomach filled with gas. It then develops into GDV when the stomach twists upon itself, causing a volvulus that blocks the opening and exit of the stomach. In the early stages, your pet will appear restless after a large meal and try to vomit without success. Eye problems are also emergencies: if left unattended, they could lead to the loss of an eye or blindness. Signs include excessive tearing, redness of the eye, discharge, squinting, pawing at the eye, swelling, or a closed eye. Even if it is just a foreign body, a scratch on the cornea could impair vision. Some of the situations that would warrant rushing your pet to the vet include:
- Heatstroke
- Pale gums
- Excessive bleeding
- Weak or rapid pulse
- Change in body temperature
- Seizures
- Difficulty standing
- Loss of consciousness
If your pet experiences any of the above symptoms or situations, call South Willamette Veterinary Clinic in Creswell, Oregon immediately. You can reach us at 541-313-3352 to schedule an appointment.
https://www.swvetclinic.com/blog/what-qualifies-as-a-pet-emergency.html
A B O U T Carim Nahaboo is a London-based illustrator specialising in accurate depictions of natural history subjects as well as more imaginative, conceptual themes. Drawing, as a way of recording and understanding the world, tangible or not, has been a constant necessity from an early age. The fascination with varied recording techniques goes hand in hand with an equal passion for the natural world. Much of the work is commission-based or referential material; many pieces are available as prints, or as the original work where possible. Past work has included entomological identification material, medical illustration, concept design for creatures and props, tattoo design and wall murals. T-shirts are available at Zazzle.co.uk/cnahaboo For updates on work and events, please follow the Facebook page or Twitter below.
https://www.carimnahaboo.com/about
The LEVERAGE partner organizations are professional affinity groups that support the professional development, persistence and success of diverse engineers traveling along academic career pathways, allowing their careers to be set within their cultural context. Collectively, these organizations have 300 years of experience supporting and celebrating women and the historically underrepresented in pursuing engineering careers. LEVERAGE programs provide mechanisms for strengthening informal professional networks, for professional development, and for creating discipline- and/or research-specific affinity group affiliations. LEVERAGE programs are created to have a direct impact on participant productivity and well-being. Thanks for posting this video of an important effort. I understand how mentoring will assist early career faculty to improve their opportunities to succeed. You also mention professional development activities. Can you tell me more about these? May 17, 2018 | 02:10 p.m. lol... Agreed. I definitely don't miss the parking issues faculty face! Following up on George's question, those are certainly important topics to cover for anyone at this career stage. I am wondering then how this is specifically tailored to the group you are serving? Do you know whether any of the issues you address in the PD is more or less of a barrier for this specific group? Are there other mentoring programs that you used as a model that perhaps do not target the specific group you are targeting? Is the difference the content of the mentoring/PD or the group that is receiving it? The large and connected group of resources (organizations) that you have brought together for this project is amazing. May 17, 2018 | 02:23 p.m. Hi Rachel, Thanks so much for the follow-up questions. Yes, they are important topics for anyone at this career stage, but often the diverse faculty served by LEVERAGE do not have ready access to these resources on their campuses. The diversity-serving professional organizations are able to provide cultural context for the PD/mentoring, particularly for the face-to-face events offered at their conferences. The cultural context also comes out in the rich nature of the questions asked of speakers at webinars and workshops, and in the candid nature of the discussions. For many participants, they are "the only..." on their campus, in their college, and/or in their department. They continually express how powerful it is to receive this PD and mentoring in a context where they are the majority. They also often express appreciation for having a safe place to ask questions that they would worry about asking on their campus or of their department heads or colleagues. Makes sense, thank you. Your project is a good reminder that STEM for All is a lifelong process. May 18, 2018 | 05:16 p.m. The intention with LEVERAGE is to build and test the infrastructure to focus on early-career faculty and then to expand up and down the pathway to other career stages. I like how your project immediately benefits underrepresented faculty with the intention to ultimately benefit all engineering students. We all profit from more voices in the conversation! May 17, 2018 | 02:24 p.m. Sarah, thanks for your comment and for watching the video. Agreed! ... and we are looking forward to realizing those voices in those conversations. May 20, 2018 | 03:39 p.m. This is such an important issue! 
Thank you for developing a thoughtful process for supporting historically underrepresented faculty in the field of engineering; it is key to the next generation of engineers. In our STEM work with K-12 teachers (http://videohall.com/p/1182) we find that the teachers often feel unprepared to address engineering issues. Our professional development team includes a recent engineering doctoral graduate who works with us in the schools and inspires the teachers. It seems, however, that we need to engage all types of engineers from diverse backgrounds -- including engineering faculty -- with our K-12 schools. Our K-12 students, teachers and administrators need models. May 21, 2018 | 01:39 p.m. Thanks for your comment and for the work you are doing in the K-12 space to address engineering issues. I agree with you that we need to engage a diverse group of engineers to make a difference, and that LEVERAGE likely has a unique community to help do so. The challenge is developing ways to make it efficient for the diverse faculty to do so. Engineering desperately needs them to successfully earn tenure, and outreach work often doesn't "count" in the tenure process. It would be interesting to think about what types of processes we could create to "automate" the process so that they could contribute in ways that are impactful without impacting their performance in other, more tenure-relevant areas. May 21, 2018 | 05:31 p.m. I agree, Kimberly, with everything you have said. This is certainly a creative challenge for those of us committed to engagement at all levels. You seem to be suggesting that we think about some imaginative ways of capturing the voices and perspectives of LEVERAGE fellows without taking them away from their primary academic pursuit of tenure. Perhaps technology could play a role in documenting the story of some of your early-career faculty in a way that can be shared with younger audiences, without distracting faculty themselves from their principal mission. All good wishes for your ongoing work!
https://stemforall2018.videohall.com/presentations/1284
The European Parliament, Council and Commission have celebrated the launch of an annual ‘EU organic day’. The three institutions signed a joint declaration establishing from now on each September 23 as EU organic day. This follows up on the action plan for the development of organic production, adopted by the Commission in March 2021, which announced the creation of such a day to raise awareness of organic production. At the signing and launch ceremony, Agriculture Commissioner Janusz Wojciechowski said: “Today we celebrate organic production, a sustainable type of agriculture where food production is done in harmony with nature, biodiversity and animal welfare. “23 September is also the autumnal equinox, when day and night are equally long, a symbol of balance between agriculture and environment that ideally suits organic production. “I am glad that together with the European Parliament, the Council, and key actors of this sector we get to launch this annual EU organic day, a great opportunity to raise awareness of organic production and promote the key role it plays in the transition to sustainable food systems.” The overall aim of the Action Plan for the development of organic production is to boost substantially the production and consumption of organic products. It is being seen as contributing to the Farm to Fork and biodiversity strategies’ targets such as reducing the use of fertilisers, pesticides and anti-microbials. To boost consumption, the action plan includes actions such as informing and communicating about organic production, promoting the consumption of organic products, and stimulating a greater use of organics in public canteens through public procurement. Furthermore, to increase organic production, the Common Agricultural Policy (CAP) will remain a key tool for supporting the conversion to organic farming. It will be complemented by, for instance, information events and networking for sharing best practices and certification for groups of farmers rather than for individuals. Finally, to improve the sustainability of organic farming, the Commission will dedicate at least 30 per cent of the budget for research and innovation in the field of agriculture, forestry and rural areas to topics specific to or relevant for the organic sector. Organic production comes with a number of important benefits: organic fields have around 30 per cent more biodiversity, organically farmed animals enjoy a higher degree of animal welfare and take fewer antibiotics, organic farmers have higher incomes and are more resilient, and consumers know exactly what they are getting thanks to the EU organic logo.
https://farmweek.com/eu-organic-day-raises-sustainable-food-awareness/
People with pent-up anger are typically responding to underlying feelings of unworthiness or resentment, resulting in destructive behaviors. If their emotions explode, it’s often because it seems like there is no other way to release them. Fortunately, if you’re dealing with pent-up anger, you can cope effectively by talking to a therapist, increasing awareness, enhancing assertiveness skills, and practicing relaxation techniques. Causes of Pent-Up Anger Typically, pent-up anger, also called repressed anger, can be triggered by external factors like getting mad at a specific person or situation, or internal factors like ruminating on a complicated problem or hurtful memory. For many individuals, it may stem from learned behaviors and habits, miscommunication, or an underlying mental health issue. People with pent-up anger may suppress this emotion for several reasons:
- Fear that their anger will get out of control and harm others
- Trying to adhere to moral values
- Assuming their feelings will be dismissed
- Fear of being scolded
- Not wanting to cause conflict in relationships
Suppressing anger may temporarily keep it at bay, but eventually, it will likely become destructive to the person withholding it and those around them.1 How Can Pent-Up Anger Affect Your Physical & Mental Health? During prolonged and frequent occurrences of anger, specific parts of the nervous system become highly activated. Subsequently, your blood pressure and heart rate increase, staying elevated for long periods of time. The strain on your body can create a variety of health problems, like cardiovascular disease, hypertension, and a weakened immune system.1,2,3 Suppressed anger can also lead to chronic stress, depression, and anxiety.1 While anger itself is not considered a disorder, it is a prominent symptom in numerous psychiatric conditions such as bipolar disorder, obsessive-compulsive disorder (OCD), trauma- and stressor-related disorders, impulse disorders, personality disorders, and more.4 Research shows that when anger is present in these psychological conditions, there is a higher severity of symptoms, as well as an unfavorable response to treatment.5 5 Ways to Cope With Pent-Up Anger We all possess the remarkable capability to train our brains to become more emotionally balanced.1 As such, there are effective methods and skills to support yourself and express and manage anger in a positive way. This strategy involves talking to a therapist, creating anger awareness, learning and using assertiveness and conflict-resolution skills, engaging in calming activities, and enhancing positive supports.2 Here are constructive ways to better cope with your pent-up anger: 1. Talk to a Therapist Talking to a mental health clinician and entering individual therapy is an effective approach to addressing anger issues. Individual counseling provides one-on-one attention, allowing the therapist to assess your distinctive anger response. Moreover, this sets the stage to create a validating environment in which to process your thoughts, feelings, and bottled-up frustrations.1 Engaging directly with a counselor also opens the door to developing a collaborative and supportive relationship. Doing so will make the therapeutic process more effective. You will likely be encouraged to identify triggers, as well as faulty thoughts and core beliefs that sustain your tendency to suppress anger. With a variety of interventions, your counselor can help you develop functional behavioral responses.1
2. Anger Awareness Through Writing Writing or journaling can be used as a tool for expression when you are struggling to identify your emotions. It can also help you put things into perspective and regulate any uncomfortable feelings.6 When journaling about your suppressed anger, try to understand what’s happening in your mind, and why you may be burying angry emotions in the first place. 3. Effective Communication & Conflict Resolution Skills Increasing respectful communication with healthy assertiveness and conflict resolution will allow you to verbalize your feelings in a controlled and constructive manner.1,2,7,8 Conversely, stuffing your anger down can cause interpersonal conflicts, increased resentful feelings, self-harming behaviors, passive-aggressive responses, explosive reactions, and more. Here are some tips to effectively communicate and resolve conflicts related to anger:
- Address and communicate your emotions as they arise in a direct, respectful way
- Establish and enforce boundaries for yourself and others
- Use “I” statements, active listening, and empathy to manage interpersonal problems
- Examine the source of what may be causing the conflict
- Link feelings that may be related to the conflict
- Identify the advantages of forgiveness and the disadvantages of holding on to anger
- Identify how the impact of the problem contributes to the conflict
- Decide whether to resolve the conflict
- Work toward finding common ground or a resolution to the conflict
4. Relaxation Practices Different relaxation exercises will calm your senses, giving you time and perspective to allow your angry feelings to subside.3 You can regularly practice relaxation techniques, not only to decrease your anger, but also to promote long-term physical and emotional benefits.1 Use the following techniques when you feel pent-up anger building:
- Deep breathing: As you focus on breathing deeply and slowly, your heart rate slows down and your body relaxes. Deep breathing shifts your attention, helps you clear your mind, and allows you to move past anger without neglecting it.3
- Mindfulness and meditation: Grounds you in the moment as you gain control of your emotions.1,9 It also enables you to recognize negative thought processes associated with your pent-up anger, sit with your emotions, and change your relationship to these thoughts while not reacting.
- Progressive muscle relaxation: In this self-soothing exercise, you slowly tense and then relax each muscle in your body. With practice, it can give you an instantaneous feeling of calm. Your awareness also heightens, allowing you to identify when you are tense.3
5. Enhancing Social Support Research shows that people who have a supportive network can better deal with their challenges, like pent-up anger. Supportive connections are also an integral part of managing your struggles in the long term.3 When looking for support, be specific about what you need, and give feedback or show appreciation when appropriate. You can find social support in the following places or ways:
- Self-help groups
- Personal relationships
- Spiritual or religious affiliations
- Coworkers
- Local community organizations or non-profit agencies
When to Get Professional Help For Pent-Up Anger Everyone has experienced situational surges of anger, but if your pent-up anger is intense, frequent, and/or expressed inappropriately, you should seek professional help. 
In addition to being harmful to you, pent-up anger is physically and emotionally detrimental for those closest to you.3 It can become especially problematic when it co-occurs with other mental disorders or substance abuse issues, increasing the risk for negative consequences.4 Who Should I Consult For Dealing With Pent-Up Anger? Consider working with a professional based on the areas of your life that are most affected. If you are having marital difficulties, consult with a marriage and family therapist. If your personal life is the primary concern, a social worker or licensed professional counselor may be most suitable.1 Cognitive behavioral therapy (CBT) can be an efficient intervention for individuals with anger issues.1,3,8,10 Whether online or in-person, it is best to work with a CBT-trained professional to help you reframe certain anger-triggering events, develop a renewed perspective, and respond in healthier ways. Another option is group therapy, a practical form of treatment that is often affordable and effective. A group program is aimed at teaching anger coping skills, reducing possible aggressive behaviors, strengthening self-control over thoughts and actions, and providing support and feedback from other members.3,10 How to Find a Therapist If you are considering counseling, you can begin your search in an online directory like Choosing Therapy. This resource allows you to filter your preferences and specific needs to choose a therapist who is the right fit for you. Once in counseling, a motivated individual can usually attain favorable outcomes within 8-12 weeks. Moreover, the effects of therapy seem to have a long-lasting impact on the person being treated.1,3 The cost of treatment may range from $50 to $150 per session without health insurance. However, if you have behavioral health coverage through your insurance provider, the out-of-pocket costs per session may be much lower. Final Thoughts on Dealing With Pent-Up Anger Remember that anger is a valid emotion, but if you find yourself struggling with pent-up anger, you are not alone. You can work through these feelings in constructive ways by seeking help from a therapist or reaching out to a trusted friend or family member. Doing so can dramatically improve your overall well-being and help you relate to those closest to you in a healthier way.
https://www.choosingtherapy.com/pent-up-anger/
That being said, I was very upset to see a Native act in the Shrine Circus this year. The scene is a man on a horse in a costume and war bonnet and some women dancing to some very mocking Native music. After talking with several local Shriners after the show, I realized they did not see the harm in this act. If you ask my children, they will say it is demeaning without a second thought. I don’t think the Lincoln Shriners knowingly engaged in racism. After pointing some of these concerns out, they did agree to remove the headdress the man on the horse was wearing in the rest of the shows in Lincoln. Baby steps, I guess. Items of great spiritual significance to many Native people, such as feathers found in the headdress, are trivialized when improperly used by non-Native people for secular purposes. We would all agree that a man in a priest costume running around sprinkling holy water on the crowd and tossing wafers would be offensive. People need to just lighten up and enjoy the humor and fun at circuses. The U.S. has gone way overboard in their political correctness and in being offended at every little thing. To K. Ross wrote on March 26, 2009 7:26 am: Every time I read some scathing letter overreacting to cultural differences it makes me take a step back and want to stay away for fear of offending someone. Distancing ourselves doesn't bring on understanding, just paranoia and distrust. I didn't see the act, but have been to the Shriners circus many times where my own sex is wearing shiny and skimpy attire. I have never complained to anyone that it promotes stereotypes. I don't dress like that, but I would imagine that the circus is for fun, suspending belief for a few hours for the purpose of entertainment? Did your ticket read "documentary?" From everything I have read and the famous artwork, natives did wear headdresses, and the scottish wore kilts sometimes....so what's your point? Ed H wrote on March 26, 2009 8:34 am: Actually Scotious i believe Don is saying this is entertainment. Not a historical re-enactment. If we demanded everything we view as entertainment be historically/culturally accurate then it wouldn't be entertainment. Also people realize what they view as entertainment is not representative of a people. Bill part Chippewa says wrote on March 26, 2009 8:39 am: ....with respect to kris ross.....get over it. The political correctness is amazingly going overboard in this country. I played with Indian costumes, Indian figurines with cowboys, pounded a drum while doing an Indian rain dance and I seem to be just fine today. I didn't even think of this until you brought it up. Explaining the obvious to these conscious or unconscious racists, I posted a couple of rejoinders: People have documented the harm of Native stereotypes over and over. Denying the harm because of ignorance doesn't change the evidence that proves the harm. Actually, Ed, your statement that "people realize what they view as entertainment is not representative" is basically false. Most people think Indians dressed like Plains chiefs, were primitive savages, and barely exist today. They think this because the only "Indians" they see are phony ones like these Shriners. A few more comments. Some comments I didn't post: Does "To K. Ross" think no one has ever criticized women in skimpy costumes? That's just plain dumb. A few Natives did wear headdresses. Most didn't then and don't now. The point is that this attire isn't representative of most Indians; it's stereotypical. 
"Bill part Chippewa" grew up emulating stupid stereotypes and now is so brainwashed that he doesn't even recognized them. But he thinks he's okay. If ignorance is bliss, Bill must be happy. For more on the subject, see the Stereotype of the Month contest.
http://newspaperrock.bluecorncomics.com/2009/03/indians-dance-in-shriner-circus.html
Murray turned professional in 2005, but his big breakthrough at the very top level of the game came at the 2012 US Open. Prior to Murray’s win at Flushing Meadows, where he beat world number one Novak Djokovic in five sets, the Scot had suffered defeat in four previous Grand Slam finals. Murray’s first Grand Slam final came at the 2008 US Open, and back-to-back final defeats at the Australian Open in 2010 and 2011 were followed by another at Wimbledon in 2012. He did, however, achieve the feat of becoming the first British male to make it into a Wimbledon final since 1938. However, a four-set defeat against Roger Federer meant that he would have to wait a while longer for his first Grand Slam title. The resiliency and determination of Murray won out, breaking through for his maiden Grand Slam title in an epic encounter against Novak Djokovic. With the win, Murray became the first British male to earn a Grand Slam title since 1936. The first ATP Tour title for Andy Murray came in 2006 when he won in San Jose. Further breakthroughs in his career came in winning his first ATP World Tour Masters 1000 titles in 2009, in Canada and Miami. Murray was also successful at the 2012 London Olympic Games, beating Swiss tennis star Roger Federer in straight sets in the final, claiming his first Olympic gold. Murray added the honour of becoming the first British player to win the Olympic tennis gold in singles since 1908. Andy Murray was awarded an OBE in the 2013 New Year Honours list by Queen Elizabeth II.
http://www.oddsbetting.co.uk/odds-history/Tennis/Andy-Murray
SHARJAH, Children occupy an important place within the strategy of the Sharjah Museums Authority (SMA), which makes World Children’s Day, falling on November 20 every year, an important occasion on which it celebrates and engages children through a range of dedicated activities and events. The authority gives children great attention in its annual plans, drawing on the vision of H.H. Dr. Sheikh Sultan bin Muhammad Al Qasimi, Supreme Council Member and Ruler of Sharjah, that young people and children are the future of the nation. The SMA’s plans include an integrated package of entertainment and educational activities and programmes for children, with the aim of raising their awareness of the importance of museums and their educational status, allowing them to unleash their imaginations and enhance their abilities and knowledge. As part of its efforts to achieve its goals in the field of children’s awareness, the authority organised 4,635 comprehensive programmes for schools and families last year, in addition to 25 community programmes and a series of workshops, tours and cultural discussions in the various museums. It also launched the characters of “Hamdan and Alia”, ambassadors of the museums affiliated with the SMA, with the aim of connecting children and the new generation to museums through an enjoyable educational method. The series of stories was published in both Arabic and English in 8 museums, including the Sharjah Science Museum, Al Mahatta Museum, and Sharjah Heritage Museum. The authority also launched its unique programme “Ambassadors of Sharjah Museums”, which targets children and adolescents and seeks to raise students’ awareness of the importance of guidance, promote this culture, and introduce them to the importance and status of museums in societies. It is worth noting that the SMA strives to organise entertainment and educational events and programmes that contribute to enriching children’s memory with a wealth of knowledge about the history and heritage of the Emirate of Sharjah.
https://www.emiratesnewsgazette.com/world-childrens-day-an-important-occasion-at-sma/
Well, stating the obvious, 2020 has been a year like no other. What started off as what we thought was a normal year soon changed, and life as we knew it became a whole new experience. And what of the fashion industry? With stores opening and closing, it’s been through its own tumultuous journey, as we look back and reflect on a year that none of us could have ever imagined. The High Street: Few of us will forget Boris Johnson’s first stay-at-home order back in March. Non-essential retailers were told they had to shut their doors, making our local high streets a no-go zone. Some stores, such as Topshop, Selfridges, John Lewis and more had already taken that decision, but Boris’ statement left the rest of the retailers with no choice. What was originally set to be three weeks left shops shut for almost three months, as online trade was relied on to keep them afloat. However, this wasn’t enough for some companies to survive: we lost TM Lewin, whilst Oasis, Warehouse and Cath Kidston, amongst other high street favourites, went into administration, with some surviving only as online businesses. After stores were able to reopen, the High Street did what it could over the summer and early autumn to draw people back in. This, alongside Rishi Sunak’s Eat Out to Help Out scheme, brought people back into our city centres, but it also started to drive Covid cases back up. By October, and the start of the colder season, numbers were edging higher and higher, until a second lockdown was announced for the month of November, meaning that shops would be shut at the busiest time of the retail year. This lockdown was the final nail in the coffin for some of the biggest names in fashion, with Arcadia plummeting into administration, and, as a result of that, Debenhams’ buyers dropping out (Arcadia is their biggest concession), changing the High Street as we know it forever. Now, as 30% of the country is in a new, higher Tier, which is effectively another lockdown, we wait to see what will happen for the rest of the country, and what that will do for our High Streets in the future. Online Retailers: Whilst it’s been a bad year for our High Streets, this year the shift to online shopping has meant online retailers have kept many companies afloat. Some of our favourite brands had to completely rethink their marketing strategies to put the focus on online, whilst strengthening their relationships with their consumers. Social media has been more important than ever to develop brand trust, and to also see our favourite shops as lifestyle platforms, engaging with them directly as ‘friends’. This year, during perhaps the most important shopping weekend in retail – Black Friday – the pressure was on online alone to perform, with physical stores unavailable. Whilst numbers were up for online, there was still a big fall in UK sales compared to last year. When physical stores are shut, retailers are still able to offer a click-and-collect service. Some brands, such as Ted Baker and Whistles, also have the option to Ship from Store, meaning they’re able to sell their store stock and post it, rather than sending warehouse stock, which means they can help their physical stores to profit. Whilst we know the pandemic is set to come to an end next year with the implementation of the vaccine, we don’t know the fate of our stores over the coming months. With the possibility that retail will be closed again, it will be interesting to see how online retailers tempt their customers post-peak.
For Jobseekers: How has this all affected the job market? At the start of the pandemic, understandably, things drew to a halt. As the world was filled with uncertainty, so were hiring managers, who reassessed their need for new employees. This was far from ideal for those hoping for entry-level fashion jobs once they finished university in the summer, but it also left others unemployed, after they gave notice to leave their jobs and, due to the pandemic, were unable to start new ones. In fashion particularly, jobs were affected as profits fell. Even fashion ecommerce jobs were hit, despite the boom in online retail. The furlough scheme meant jobs were able to be saved, but as stores reopened, many companies had to rethink their strategies, and restructures cost thousands of jobs in the industry, as many brands had to streamline their business to cut costs. However, once our favourite brands picked up the pieces, the job market picked up, and, in fact, new and exciting roles have appeared. Fashion jobs have always been competitive, but, with us having more spare time on our hands, candidates have had the chance to make their applications more creative than ever, and to impress future employers into remembering them. As we look forward to 2021, I see the job market continuing to evolve. This year has undoubtedly changed all our perspectives, and brands will carry on shifting their narratives to fit into this. Whilst it’s been a tough year for the fashion industry, people have still turned to it, seeing retail as a therapy as well as a necessity. We may have been sporting more loungewear than heels, but clothing has continued to be important to us, and as long as that’s the case, our industry will continue to reign supreme.
https://www.angelaharper.co.uk/reflecting-on-how-2020-has-affected-the-fashion-industry/
Many different groups, including the media, governments, businesses, scientists, and United Nations agencies, often use the term sustainable development (sometimes called sustainability) in describing how to create a better world for all. There are almost as many definitions of sustainable development as there are groups talking about it; however, a common theme runs through the various definitions. Sustainable development is often explained as balancing three components: environment, society, and economy. The well-being of each of these three areas depends on the well-being of the others. In other words, it’s impossible to have a vibrant, healthy environment and society if the economy is very weak. Perhaps the most popular definition of sustainable development is the one used in the famous 1987 publication Our Common Future (a.k.a. the Brundtland Report, named after the commission’s chair Gro Harlem Brundtland) by the World Commission on Environment and Development. It defined sustainable development as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs” (World Commission on Environment and Development, 1987, p. 43). Learn more about sustainable development: • Peruse some fantastic resources on many sustainable development issues at the United Nations Cyberschoolbus website. • See the full text of the report Our Common Future in an easy-to-read format provided by the Centre for a World in Balance (some of the text is written in complicated terms). • Visit the TakingITGlobal website to find toolkits, books, blogs, projects and more on sustainable development. • Read Global Challenge, Global Opportunity: Trends in Sustainable Development, which summarizes the issues tackled at the World Summit on Sustainable Development in Johannesburg in 2002. Source: Ministry of Sustainable Development, Environment and Parks of Quebec, Canada.
http://greenwave.cbd.int/resources/sustainable_development
To determine eligibility for accommodations and services, you may be required to submit verifying documentation. The purpose of disability documentation is to provide SDS staff with historical information to inform accommodation planning and determine a student’s eligibility for accommodations in accordance with the Americans with Disabilities Act as Amended and Section 504 of the Vocational Rehabilitation Act. A qualified professional must conduct the evaluation. The name, title and professional credentials of the evaluator, including information about license or certification as well as the area of specialization, employment and state/province in which the individual practices, should be clearly stated in the documentation. It is not considered appropriate for professionals to evaluate members of their own family. The documentation must include a clear diagnostic statement that describes how the condition was diagnosed, provide information on the functional impact, and detail the typical progression or prognosis of the condition. The documentation must include a description of the diagnostic criteria, evaluation methods, procedures, tests and dates of administration, as well as a clinical narrative, observation, and specific results. When appropriate to the nature of the disability, having both summary data and specific test scores within the report is essential (e.g., for learning disabilities). The documentation must be recent (within the past 5 years) and age-appropriate so as to determine the need for accommodations and/or services based on the individual’s current level of functioning in the educational setting. The diagnostic report should include specific recommendations for accommodations and/or academic adjustments as well as an explanation as to why each accommodation/adjustment is recommended. The evaluator should describe the impact the diagnosed disability has on a specific major life activity as well as the degree of significance of this impact on the individual. The evaluator should support recommendations with specific test results or clinical observations. It is the responsibility of the student to obtain their documentation and to upload documentation into the SDS online portal SAM: Students Accessing Miami. If assistance is needed, please contact the SDS office on your campus. Any correspondence regarding the adequacy of the submitted documentation will be sent to the student’s MU email account. It is the student’s responsibility to obtain additional information or clarification if requested. Psychological assessment (minimally, an individual intelligence test such as the Wechsler Adult Intelligence Scale [WAIS], Woodcock Johnson Tests of Cognitive Abilities, or Stanford-Binet Intelligence Scales) with subtest and composite standard scores included. If the student also has a dual diagnosis with AD(H)D, additional behavioral measures may be helpful to support the diagnosis. The report should include the professional’s credentials and contact information, standard scores, composite scores, a summary of the results that supports the clearly stated diagnosis, and a description of the functional limitations impacting learning for each recommended accommodation. Please note: Screening instruments such as the WASI (Wechsler Abbreviated Scale of Intelligence) or WRAT (Wide Range Achievement Test) and child-normed tests such as the Wechsler Intelligence Scales for Children (WISC) may not be sufficient for full approval, but if available, may assist in providing provisional accommodations.
A copy of an IEP or 504 Plan alone is also not sufficient to establish full eligibility unless it includes items 1-4 above. The SDS office does not provide psychoeducational testing services. Please see our list of providers in the local area.
http://miamioh.edu/student-life/sds/student-tools/documentation-guidelines/index.html
Over 1,000 people have been killed and more than 400 abducted in the past six months in intercommunal violence in South Sudan, amid fears that tensions may worsen with the onset of the dry season, the UN envoy for the country has said. David Shearer, Special Representative of the Secretary-General for South Sudan, warned of increased risk of conflict with the start of the dry season, in December-January, as people start moving towards sources of water for their cattle. “I think we can anticipate increased tensions”, he said at a press conference on Tuesday, explaining that losses of cattle in floods earlier this year and poor economic conditions could exacerbate the situation. The problems need to be “nipped in the bud” before they escalate into violence, added Mr. Shearer, calling for the appointment of officials at the county level “to fill the vacuum of power that has existed since the transitional government was formed.” Mr. Shearer, who heads the UN Mission in South Sudan (UNMISS), also underlined an “urgent need to breathe fresh life into the peace process, which is currently stalled.” That was the message he conveyed to all major players and stakeholders, added the senior UN official. Juba POC sites ‘re-designated’ as IDP camps: Mr. Shearer also announced that, as of Monday, the protection of civilian (POC) sites in the capital, Juba, have been re-designated as camps for internally displaced persons (IDPs). The POC sites were set up by UNMISS to provide thousands of families – who had fled to UN bases in fear of their lives – with sanctuary when civil war erupted across South Sudan in 2013. Many lives were saved as a result. The closure of the Juba sites “has followed a long and careful process, planning alongside humanitarians, and in consultation with national and local government, the security services, and of course the displaced community themselves”, added Mr. Shearer. “The Government now has sovereign responsibility for the sites as it does with many other IDP camps across the country.” ‘Being nimble’ to protect civilians: The re-designation has allowed UNMISS to gradually withdraw troops from static duties at the sites where there is no threat so they can be redeployed to conflict hotspots where people’s lives are in real danger, said Mr. Shearer. “Our approach to the protection of civilians is about being proactive, about being nimble, and being robust,” he added. “That means we need to relocate our troops and staff who facilitate reconciliation and peacebuilding into areas of tension, and hopefully address that tension before conflict erupts.” Throughout the coming dry season, UN peacekeepers will be located in new temporary bases and carry out long-duration patrols to places like Manyabol, Likongule, Duk Padiat, Yuai, and Waat, where tensions between communities are high. Troops and civilian staff will work together with local communities to deter violence, promote reconciliation, and build peace so families get the opportunity they deserve to rebuild their lives, added Mr. Shearer. Building roads: Alongside this, the UN mission will be rebuilding around 3,200 kilometres of roads across the country in the next few months, not only improving access to markets and services, but also boosting trade, creating jobs, and bringing communities together. “Through roads, peoples from different communities can communicate with each other, and by communication they can build trust and deter conflict,” said the senior UN official.
Mission engineers will also assist with plans to open the border between South Sudan and its northern neighbour Sudan by improving roads between Renk and Aweil, as well as crossing points. Africa Today Partnership with Private Sector is Key in Closing Rwanda’s Infrastructure Gap The COVID-19 (coronavirus) pandemic has pushed the Rwandan economy into recession in 2020 for the first time since 1994, according to the World Bank’s latest Rwanda Economic Update. The 17th edition of the Rwanda Economic Update, The Role of the Private Sector in Closing the Infrastructure Gap, says that the economy shrank by 3.7 percent in 2020, as measures implemented to limit the spread of the coronavirus and ease pressures on health systems brought economic activity to a near standstill in many sectors. Although the economy is set to recover in 2021, the report notes that growth is projected to remain below the pre-pandemic average through 2023. Declining economic activity has also reduced the government’s ability to collect revenue amid increased fiscal needs, worsening the fiscal situation. Public debt reached 71 percent of GDP in 2020, and is projected to peak at 84 percent of GDP in 2023. Against this backdrop, the report underlines the importance of the government’s commitment to implement a fiscal consolidation plan once the crisis abates to reduce the country’s vulnerability to external shocks and liquidity pressures. “Narrowing fiscal space calls for a progressive shift in Rwanda’s development model away from the public sector towards a predominantly private sector driven model, while also stepping up efforts to improve the efficiency of public investment,” said Calvin Djiofack, World Bank’s Senior Economist for Rwanda. According to the Update, private sector financing, either through public-private partnerships or pure private investment, will be essential for Rwanda to continue investing in critical infrastructure needed to achieve its development goals. The analysis underscores the need to capitalize further on Rwanda’s foreign direct investment (FDI) regulatory framework, considered one of the best on the continent, to attract and retain more FDI; to foster domestic private capital mobilization through risk-sharing facilities that would absorb a percentage of the losses on loans made to private projects; and to avoid unsolicited proposals of public–private partnership (PPP) initiatives; as well as to build a robust, multisector PPP project pipeline, targeting sectors with clearly identified service needs such as transport, water and sanitation, waste management, irrigation, and housing. While the report’s findings clearly establish the gains of public infrastructure development for the country as a whole, it also stresses that these gains tend to benefit urban and richer households most. “Rwanda will need to rebalance its investment strategy from prioritizing large strategic capital-intensive projects toward projects critical for broad-based social returns to boost the potential of public infrastructure to reduce inequality and poverty,” said Rolande Pryce, World Bank Country Manager for Rwanda.
“Any step toward the Malabo Declaration to allocate 10 percent of future infrastructure investment to agriculture, allied activities, and rural infrastructure, will go a long way to achieving this goal.” Africa Today Greenpeace Africa responds to the cancellation of oil blocks in Salonga National Park On Monday the UNESCO World Heritage Committee decided to remove Salonga National Park in the Democratic Republic of the Congo from the List of World Heritage in Danger. The decision follows clarification “provided by the national authorities that the oil concessions overlapping with the property are nul[l] and void and that these blocks will be excluded from future auctioning.” Oil blocks overlapping with Salonga were awarded by President Joseph Kabila in the twilight of his regime. Greenpeace Africa has repeatedly demanded their cancellation, while local leaders voiced their opposition to the project in light of its impacts on communities. “A decision by President Felix Tshisekedi to cancel all oil blocks in Salonga Park must be followed by a decision to cancel oil blocks in Virunga Park and across the Cuvette Centrale region. These are vast areas rich in biodiversity that provide clean water, food security and medicine to local communities and which render environmental services to humanity,” says Irene Wabiwa Betoko, International Project Leader for the Congo Basin forest. The Salonga National Park, which is Africa’s largest tropical rainforest reserve, was inscribed on the World Heritage List in 1984. The park plays a fundamental role in climate regulation and the sequestration of carbon. The park is also home to numerous endemic endangered species such as the pygmy chimpanzee (or bonobo), the forest elephant, the African slender-snouted crocodile and the Congo peacock. Salonga had been inscribed on the List of World Heritage in Danger in 1999, due to pressures such as poaching, deforestation and poor management. The government of DRC later on issued oil drilling licences that encroached on the protected area, posing a threat to the wildlife-rich site. “DRC’s auctioning of oil blocks has not only been scandalously lacking transparency and menacing for particularly sensitive environmental areas – they neither benefit Congolese people nor the planet. Instead of privileging a small group of beneficiaries of the toxic fossil fuels industry, diversifying the DRC’s economy should be done through renewable energy investments that will make energy accessible and affordable for all,” Irene Wabiwa concluded. Greenpeace Africa urges full transparency from both UNESCO and the DRC government and calls for the publication of all supportive documents regarding the decision to cancel the aforementioned oil blocks, as well as the map of the nine oil blocks that are still being auctioned in the Cuvette Centrale region. Africa Today Domestic violence, forced marriage, have risen in Sudan Deteriorating economic conditions since 2020 and the COVID-19 pandemic have fuelled an increase in domestic violence and forced marriage in Sudan, a UN-backed study has revealed. Voices from Sudan 2020, published this week, is the first-ever nationwide qualitative assessment of gender-based violence (GBV) in the country, where a transitional government is now in its second year. Addressing the issue is a critical priority, according to the UN Population Fund (UNFPA) and the Government’s Combating Violence against Women Unit (CVAW), co-authors of the report. 
“The current context of increased openness by the Government of Sudan, and dynamism by civil society, opens opportunities for significant gains in advancing women’s safety and rights,” they said. Physical violence at home: The report aims to complement existing methods of gathering data and analysis by ensuring that the views, experiences and priorities of women and girls are understood and addressed. Researchers found that communities perceive domestic and sexual violence as the most common GBV issues. Key concerns include physical violence in the home, committed by husbands against wives, and by brothers against sisters, as well as movement restrictions which women and girls have been subjected to. Another concern is sexual violence, especially against women working in informal jobs, but also refugee and displaced women when moving outside camps, people with disabilities, and children in Qur’anic schools. Pressure to comply: Forced marriage is also “prominent”, according to the report. Most of these unions are arranged between members of the same tribe, or relatives, without the girl’s consent or knowledge. Meanwhile, Female Genital Mutilation (FGM) remains widespread in Sudan, with varying differences based on geographic location and tribal affiliation. Although knowledge about the illegality and harmfulness of the practice has reached community level, child marriage and FGM are not perceived as key concerns. Women’s access to resources is also severely restricted. Men control financial resources, and boys are favoured for access to opportunities, especially education. Verbal and psychological pressure to comply with existing gender norms and roles is widespread, leading in some cases to suicide. The deteriorating economic situation since 2020, and COVID-19, have increased violence, especially domestic violence and forced marriage, the report said. Harassment in queues for essential supplies such as bread and fuel has also been reported. Data dramatically lacking: Sudan continues to move along a path to democracy following the April 2019 overthrow of President Omar Al-Bashir, who had been in power for 30 years. Openly discussing GBV “has not been possible for the last three decades”, according to the report. “GBV data is dramatically lacking, with no nation-wide assessment done for the past 30 years, and a general lack of availability of qualitative and quantitative data,” the authors said. To carry out the assessment, some 215 focus group discussions were held with communities and 21 with GBV experts, alongside a review of existing studies and assessments. Research was conducted between August and November 2020, encompassing 60 locations and camps, and the data was scanned through software for qualitative analysis, following a model first used in Syria.
https://moderndiplomacy.eu/2020/11/21/violence-insecurity-continues-to-plague-south-sudan-communities/
Summary/Abstract: Psychological content analysis techniques developed to distinguish truthful from fabricated allegations (Statement Validity Assessment, Reality Monitoring, Scientific Content Analysis) show some promise in distinguishing truthful from fabricated statements. It is argued, however, that they are not accurate enough to be admitted as expert scientific evidence in courts. A new, innovative formal assessment procedure, the Multivariable Adult’s Statement Assessment Model (MASAM), was proposed. A group of 43 raters trained in statement content analysis rated witnesses’ accounts. The studies showed that, with the use of MASAM, it is possible to select 96.87% of truthful accounts, and the conditional probability for content analysis results based upon MASAM analysis is 91.85%. As regards the assessment of false statements, content analysis with the use of MASAM also proved superior, with a conditional probability of 69.23%, while the three other content analysis techniques compared led to wrong decisions in more than 50% of cases.
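To make the reported percentages concrete, the following is a minimal sketch, in Python, of the two kinds of figures quoted above: the share of truthful accounts a procedure selects (its sensitivity) and the conditional probability that an account classified as truthful or fabricated really is so (its predictive value). The confusion-matrix counts are invented for illustration only; they are not the counts from the MASAM study, and the original paper should be consulted for the exact definitions used.

```python
def statement_metrics(tp, fn, fp, tn):
    """Summarise a truth/fabrication classification.

    tp: truthful accounts classified as truthful
    fn: truthful accounts classified as fabricated
    fp: fabricated accounts classified as truthful
    tn: fabricated accounts classified as fabricated
    """
    sensitivity = tp / (tp + fn)  # share of truthful accounts selected
    ppv = tp / (tp + fp)          # P(account is truthful | classified truthful)
    npv = tn / (tn + fn)          # P(account is fabricated | classified fabricated)
    return sensitivity, ppv, npv


# Hypothetical counts, used only to show the arithmetic.
sens, ppv, npv = statement_metrics(tp=80, fn=20, fp=10, tn=90)
print(f"sensitivity = {sens:.2%}, PPV = {ppv:.2%}, NPV = {npv:.2%}")
```

On this reading, the 96.87% figure would correspond to a sensitivity-style quantity and the 91.85% and 69.23% figures to conditional (predictive) probabilities for the truthful and false cases respectively.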
https://www.ceeol.com/search/article-detail?id=270358
If I were to lie in a court of law, I would go to jail. But it's entirely legal for an MP to lie to Parliament. Doesn't sound right, does it? I've spent years highlighting the corruption, greed and hypocrisy in our system for my TV show “The Revolution Will Be Televised”. But right now, I’m making a documentary for the BBC about why young people aren’t interested in Westminster politics. We should be able to trust our MPs – but what’s clear to me in making this documentary is that too many people just don’t. That’s why I’ve started this petition. I'm not launching it as part of a campaign to have this law passed. I want to start a debate about the importance of the truth in politics. If enough of you sign, it will send a message that we have had enough of broken promises, massaged statistics and the doublespeak we feel we've been subjected to over the years. Today, less than 1 in 4 people trust MPs. This can't go on. Next year it's the 800th anniversary of the signing of the Magna Carta - a document that gave birth to the rights and freedoms we enjoy today. Isn’t it time for a new Magna Carta? It would be a Magna Carta 2.0 – including a measure that would make lying in Parliament totally illegal (and maybe have lying MPs thrown to the lions!!!). In order to gauge the support for stopping MPs from lying, please sign the petition here and let's start a public debate about how we can make politicians’ lies history.
An unanswered question in hepatology is why only a significant minority of liver patients progress to severe symptomatic states, whereas the majority remain relatively healthy. When we have the answers to this question patient management will be genuinely transformed, with anticipated advances in disease prognosis, patient stratification, and therapeutics. There are undoubtedly genetic influences which become ever more apparent from genome-wide association studies (GWAS). However, we still lack robust genetic explanations for population variability in the progression of liver disease to cirrhosis, hepatocellular carcinoma (HCC), and organ failure. Deeper GWAS may better illuminate the mechanistic basis for differential disease progression, as could the discovery of rare genetic polymorphisms that map to fibrogenesis and tumorigenesis. But additionally, there are many epigenetic influences on cell phenotype and disease. These signals operate literally above ("epi" meaning "upon" in Greek) the DNA sequence. Epigenetic mechanisms can operate at a genome-wide level to influence gene expression and cell behavior, they are highly dynamic, responsive to the cellular microenvironment, and exhibit considerable molecular diversity at the cellular, tissue, and organismal levels. Brief Description of Epigenetics and Its Constituent Regulatory Mechanisms ========================================================================== The consensus modern definition of an epigenetic trait is "*a stably inherited phenotype resulting from changes in a chromosome without alterations in the DNA sequence*." By "inherited" this definition holds true for transmission of phenotype by both mitosis and meiosis. Most current reviews on epigenetics provide a narrow description usually focusing on three constituent regulatory systems; DNA (CpG) methylation, histone posttranslational modifications, and microRNAs (miRNAs). In fact, the reader should be aware of additional epigenetic influences such as transcription factors, histone remodeling complexes, and the entire gamut of noncoding RNAs, including the long noncoding RNAs (lncRNAs) that have recently emerged as regulators of chromatin structure and function ([Fig. 1](#fig01){ref-type="fig"}). Critically, there is substantial functional crosstalk between these distinct epigenetic elements, which combines to determine cell phenotype. ![An overview of epigenetic mechanisms influencing gene expression. DNA is packed into histone octamers (or nucleosomes) that are depicted as "beads" or "spools" on the DNA "string." The degree of compaction of nucleosomes at regulatory sequences is controlled by modifications to the histone tails such as phosphorylation (P), acetylation (Ac), and methylation (Me). Engagement of RNA polymerase II and associated transcriptional regulatory factors with chromatin at the gene promoter is reliant upon spacing and organization of nucleosomal structure as dictated by chromatin remodeling proteins (SWI/SNF). As transcription proceeds the ability of RNA polymerase II to read through the gene and elongate the nascent transcript is dependent on accessibility to downstream DNA, which is controlled by additional histone signatures. As an example, the presence of the signature H3K27me3 enables the recruitment of polycomb group complexes (PcG) which under the guidance of long noncoding RNAs (lncRNAs) can bring about chromatin compaction at specific loci. 
Such chromatin compaction in downstream regions of a gene will be inhibitory to RNA polymerase II transcriptional elongation leading to stalled or terminated transcription.](hep0060-1418-f1){#fig01} DNA Methylation --------------- CpG methylation is a common DNA modification that has a repressive influence on gene expression. It is regulated by DNA methyltransferases (DNMT1, DNMT3a, and DNMT3b) of which DNMT1 is a maintenance methyltransferase necessary for faithful copying of methyl-CpG marks to daughter DNA strands during mitosis.[@b1] Methylated CpGs repress transcription by inhibiting the binding of transcription factors to DNA or by recruiting methyl-DNA binding proteins (MBDs) that influence chromatin structure.[@b1] The recent discovery of the Tet enzymes that catalyze oxidation of methyl-CpG to generate hydroxymethyl-CpG has revealed the dynamic nature of DNA methylation.[@b2] The hydroxymethyl-CpG modification is not only an intermediate step in the pathway to CpG demethylation, it also has its own regulatory properties and, in contrast to methyl-CpG, can stimulate gene transcription. DNA methylation has a major influence on phenotype; recent genome-scale DNA methylation profiling in three distinct human populations (Caucasian-, African-, Chinese-Americans) highlighted the contribution of differences in DNA methylation towards natural human variation.[@b3] Histone Code ------------ DNA is packaged by histones into chromatin, which can take two forms: compacted transcriptionally inactive *heterochromatin* or lightly packaged transcriptionally permissive *euchromatin*. The basic structure of chromatin ([Fig. 1](#fig01){ref-type="fig"}) has been famously depicted as "beads on a string"; DNA being the *string* and the *beads* representing nucleosomes consisting of 147 bp of double-stranded DNA (dsDNA) loosely wrapped around a core of eight histone molecules (two copies each of H2A, H2B, H3, and H4). The unstructured tail extensions of histones can be modified by phosphorylation at serine residues, methylation of arginine and by acetylation, methylation (mono, di, and tri), ubiquitination, sumoylation, and ADP-ribosylation at numerous lysine residues. Histone acetylation relaxes histone-DNA interactions and is associated with transcriptionally active chromatin. Histone lysine methylation plays a modulatory role in gene regulation and, depending on the lysine residue involved, will either suppress or promote transcription.[@b4] Trimethylation of lysine 4 of histone 3 (H3K4me3) and H3K36me2/3 are usually associated with euchromatin. By contrast, H3K9me3 and H3K27me3 are associated with heterochromatin and silenced genes. However, the transcriptional activity of a gene is determined by the cumulative influences of multiple histone modifications (or a histone "code"). The histone code is actively involved in control of cell phenotype, is highly dynamic, and under the regulatory control of enzymes that either add ("writers") or remove ("erasers") posttranslational modifications. The code is then interpreted by mediator proteins ("readers") that affect histone-DNA interactions and nucleosomal organization.[@b4] Nucleosome structure can also be regulated by the exchange of core histones with one or more histone variants.[@b5] As an example, exchange of H2A for H2A.Z is functionally important in gene activation and silencing.
Chromatin Remodeling and Compaction ----------------------------------- The dense spacing and packaging of nucleosomes needs to be remodeled to allow access to transcription factors. This is carried out by the activities of adenosine triphosphate (ATP)-dependent chromatin remodeling complexes (e.g., mammalian SWI/SNF or BAF) that alter nucleosome-DNA contacts, promote nucleosome repositioning, or regulate the incorporation of variant histones into the nucleosome.[@b6] By contrast, silencing of genes involves compaction of nucleosomes into dense chromatin. The polycomb group (PcG) proteins represent a global gene silencing system that plays a critical role in cell determination and fate. PcG proteins contribute to two multiprotein complexes known as polycomb repressor complexes 1 and 2 (PRC1 and PRC2). PRC2 and its constituent enzyme EZH2 stimulate H3K27 trimethylation[@b7]; this epigenetic mark recruits PRC1, a stimulator of chromatin compaction. Loss of PcG function is implicated in cancers, in particular, EZH2 is overexpressed in many human cancers where it silences expression of tumor suppressor genes such as the Ink4/Arf locus.[@b7] Transcription Factors --------------------- Some transcription factors can exert a global influence on gene expression and cell phenotype in the liver, including members of the PPAR and CEBP transcription factor families and regulators of the circadian clock (e.g., CLOCK/BMAL) involved in control of hepatic metabolism and bile synthesis. These proteins are important in the epigenetic machinery since they are directly wired into signaling events that are downstream of extracellular receptors for environmental cues such as microbes, nutrients, hormones, growth factors, and xenobiotics. Moreover, a great many transcription factors engage in crosstalk with the chromatin regulatory machinery through their direct interaction with coactivators (e.g., histone acetyltransferases) or corepressors (e.g., histone deacetylases) at target genes. Of relevance to hepatologists, the bile acid sensor farnesoid X receptor (FXR) regulates gene transcription in cooperation with a number of coregulators such as SRC-1, SIRT1, Brg-1, CARM1, PRMT1, and N-coR.[@b8] Noncoding RNAs (ncRNAs) ----------------------- Until recently, the majority of the noncoding genome was thought to be "junk" DNA, but we are now aware that it carries important regulatory information transmitted by way of the ncRNAs. Most readers will be familiar with miRNAs, which are short (22 nucleotide) molecules that function in gene silencing and have already been manipulated for the design of antivirals or as cancer drug targets.[@b9],[@b10] It is important to also be aware of the long-noncoding RNAs (lncRNAs), which are anticipated to be extremely abundant and that are generated from a complex network of overlapping sense and antisense transcripts often including protein-coding loci.[@b11],[@b12] The lncRNAs are implicated in almost every step of gene regulation including transcription, splicing, and translation. Furthermore, lncRNAs are able to recruit chromatin-modifying complexes to specific genomic loci, thus playing a fundamental epigenetic role. 
Not unexpectedly, relationships between lncRNAs and a variety of human diseases including HCC are emerging.[@b12] Epigenetics and Hepatocellular Carcinoma ======================================== DNA Methylation and HCC ----------------------- Typical epigenetic lesions in liver cancer include genome-scale changes in the DNA methylation landscape, loci-specific DNA hypermethylation, dysfunction of histone-modifying enzymes, and abnormal expression of ncRNAs. Cancer-related changes in DNA methylation are attractive as biomarkers since they can be readily detected and quantified from fixed tissues. As a consequence, there are many published studies reporting DNA methylation patterns specific to liver cancers of different etiologies, including recent genome-wide studies. Combined DNA methylation and transcriptome mapping in human HCC identified 230 genes whose promoters were hypomethylated and had elevated expression in HCC (epigenetically induced), and 322 genes that were hypermethylated and underexpressed in tumors (epigenetically repressed).[@b13] Epigenetically induced genes were mapped to pathways driving cellular differentiation and transformation, tumor growth, and metastasis. Repressed genes mapped to apoptosis, cell adhesion, and cell cycle progression. A study of hepatitis B virus (HBV)-induced HCC compared DNA methylation profiles between tumor and adjacent tissue; this identified 1,640 hypomethylated and 684 hypermethylated CpG sites in the tumor.[@b14] Using a similar approach, Song et al.[@b15] reported that 62,692 loci displayed differential methylation between HCC and surrounding tissue, of which a remarkable 61,058 were hypomethylated. In a more focused study, tumor suppressor genes were identified that are hypermethylated in the early stages of HCC.[@b16] Eight genes (*HIC1, GSTP1, SOCS1, RASSF1, CDKN2A, APC, RUNX3* and *PRDM2*) displayed significantly increased methylation in early HCCs and were associated with shorter time to HCC occurrence. Despite the excitement surrounding genome-wide DNA methylation studies, there are a number of caveats to be considered that urge caution regarding the clinical and biological significance of the emerging datasets. Perhaps of most importance is that tumors have high cellular heterogeneity; as such, observed differences in DNA methylation patterns may simply reflect differences in the ratio of tumor to normal cells rather than identifying epigenetic signatures relevant to cancer biology. For DNA methylation profiling to deliver definitively relevant clinical data, it will be necessary to carry out quantitative analysis on small numbers of histologically verified tumor cells captured from tissue by a technique such as laser dissection microscopy or high-speed cell sorting. A further caveat is that we must not assume a simple relationship between changes in DNA methylation and altered gene expression, even where this is indicated by overlaid transcriptome data. A direct functional correlation would require *in vivo* experimental manipulation of DNA methylation in a site-directed manner and demonstration of an associated change in the rate of gene transcription. Viruses as Drivers of Epigenetic Changes Underlying HCC ------------------------------------------------------- Cancers of viral origin can provide insights into relationships between epigenetics and tumor biology.
The oncogenic HBx protein of HBV induces the expression of DNMT1 and recruits DNMT1, 3a, and 3b to stimulate hypermethylation of *IGFBP-3* and *p16^INK^*.[@b17] One mechanism by which HBx has been proposed to induce DNMT1 is by down-regulating microRNA miR-152, which directly targets the DNMT1 transcript.[@b18] Overexpression of miR-152 results in global DNA hypomethylation, whereas inhibition of miR-152 caused global hypermethylation and increased DNA methylation at the *GSTP1* and *CDH1* tumor suppressor genes. HCV has also been shown to stimulate alterations in DNA methylation; for example, the Gadd45β promoter is hypermethylated in HCV transgenic mouse liver and in cells infected with the JFH1 strain of HCV.[@b19] Gadd45β is expressed at reduced levels in HCV-infected patients and in tumor tissue; this is functionally significant given the role of Gadd45β in the control of cell cycle, growth arrest, and DNA repair. Studies in HBV- and HCV-induced HCC have identified common functional mutations in the SWI/SNF-like ATP-dependent chromatin remodeling enzymes *ARID1A* and *ARID2*.[@b20]--[@b22] Exome sequencing in an HCC tumor and adjacent nontumor tissue of HBV/HCV origin discovered missense mutations in genes encoding the H3K4 methyltransferases *MLL, MLL2, MLL3*, and *MLL4*.[@b23] These enzymes are important for remodeling chromatin into a transcriptionally active state. *MLL4* is of particular interest, being a recurrent hotspot for HBV integration and considering its role as a regulator of p53 target genes.[@b23] The PRC2 methyltransferase EZH2 and its structural partners EED, SUZ12, and RBP7 are expressed at elevated levels in human HCC and contribute to tumorigenesis by silencing multiple miRNAs.[@b24] The PRC2-regulated miR-125b is a transcriptional corepressor of the H3K9 methyltransferase SUV39H ([Fig. 2](#fig02){ref-type="fig"}), which regulates heterochromatin formation.[@b25] SUV39H is overexpressed in human HCC and when knocked-down in HCC cell lines inhibits proliferation and migration. SUV39H is a repressor of miR-122, which is best known for its ability to stimulate translation of HCV RNA ([Fig. 2](#fig02){ref-type="fig"}). However, miR-122 is decreased in HBV infection and when genetically deleted in mice results in spontaneous hepatosteatosis, inflammation, fibrosis, and HCC.[@b26],[@b27] HBx represses miR-122 by recruiting peroxisome proliferator-activated receptor gamma (PPARγ) and its associated SUV39H-containing corepressor complex to the miR-122 promoter.[@b28] Hence, altered expression or mutation of histone methyltransferase genes in HCC disrupts multiple regulatory networks, including a large number of miRNAs involved in posttranscriptional control. Drugs that modulate the activity of one or more of these epigenetic enzymes may be of considerable therapeutic potential. ![An example of complex epigenetic crosstalk and its impact on liver physiology and disease. The histone methyltransferase SUV39H plays a central regulatory role in liver physiology; on the one hand, negatively regulating the expression of the microRNA miR-122, which orchestrates the epigenetic control of gene networks involved in lipid metabolism and HCC. In addition, miR-122 is critical in the life cycle of HCV and is considered a therapeutic target. On the other hand, SUV39H regulates histone modifications at genes encoding regulators of cell proliferation and migration, its overexpression is associated with HCC. 
Expression of SUV39H is posttranscriptionally regulated by miR-125, which in turn is transcriptionally under the influence of the polycomb group complex PRC2 and its H3K27 methyltransferase EZH2.](hep0060-1418-f2){#fig02} A number of miRNAs are of mechanistic relevance in HCC and are described in detail elsewhere.[@b14] LncRNAs have so far received considerably less attention; however, several are emerging as potentially important in HCC. Highly Up-regulated in Liver Cancer (HULC) is a 500-nucleotide lncRNA that was discovered from a screen of noncoding RNAs expressed in HCC. HULC is expressed in normal human hepatocytes but is strongly induced in HCC tissue.[@b29] Elevated HULC expression is also a feature of HBV infection[@b30] and is found in liver metastatic tissue of colorectal cancer origin. HULC regulates HCC proliferation and expression of a number of HCC-associated genes and is detected in the sera of HCC patients, the latter raising potential for biomarker development. HOTAIR is expressed in HCC and is associated with a higher risk of tumor recurrence following therapeutic transplantation.[@b31] Depletion of HOTAIR inhibits tumor cell proliferation, stimulates apoptosis, and generates significant antitumor effects *in vivo*.[@b32] MALAT1 is a very large (8,000 nt) nuclear lncRNA expressed in HCC and is associated with high risk of posttransplant recurrence.[@b33] Knockdown of MALAT1 in HCC cell lines has similar behavioral effects to those described for depletion of HOTAIR. Future work with lncRNAs is greatly anticipated and expected to lead to exciting new insights into hepatic gene regulation. Epigenetics and Liver Fibrosis ============================== Hepatic stellate cell (HSC) transdifferentiation to a profibrogenic myofibroblastic phenotype is a pivotal event in fibrogenesis. Transdifferentiation requires global epigenetic remodeling to bring about the suppression of adipogenic differentiation factors, *de novo* expression of regulators of the myofibroblast phenotype, and cell cycle entry. According to Waddington's famous epigenetic landscape model,[@b34] reversion or conversion of a differentiated state is energetically costly for the cell and this ensures the stability of cell phenotype and tissue organization. Hence, the HSC may need to overcome energy-dependent epigenetic barriers to adopt the myofibroblast phenotype. In this respect, it is noteworthy that autophagy, a mechanism by which the cell recycles its intracellular components to generate energy, is critical for HSC activation.[@b35] Small-molecule epigenetic inhibitors such as the DNMT1 inhibitor 5-azadeoxycytidine (5-AzadC) and the EZH2 inhibitor 3-deazaneplanocin A (dZNep) potently inhibit HSC activation *in vitro* and *in vivo*.[@b36],[@b37] Studies in our laboratory have described an epigenetic relay pathway that must be activated in order to drive HSC transdifferentiation.[@b37] Mice lacking MeCP2 are protected from liver fibrosis and *mecp2-*deficient HSC display multiple defects in their fibrogenic phenotype including reduced expression of collagen I, TIMP-1, and α-SMA. MeCP2 may be a generic core-regulator of tissue fibrosis since *mecp2*-deleted mice are also protected from pulmonary fibrosis.[@b38] MeCP2 operates two concurrent mechanisms to ensure epigenetic silencing of PPARγ and HSC transdifferentiation. MeCP2 directly binds to methyl-CpG-rich regulatory regions in the PPARγ promoter and recruits H3K9me3-modifying enzymes that suppress transcription initiation.
MeCP2 is also required for expression of EZH2 and H3K27me3 modifications in the downstream coding region of the gene that impede transcriptional elongation. These two mechanisms help explain the ability of 5AzadC and dZNep to inhibit HSC transdifferentiation. More recently, our laboratory described how MeCP2 can promote transcription of multiple profibrogenic genes through its control of the expression of the H3K4/H3K36 methyltransferase ASH1.[@b39] Changes in DNA methylation during HSC activation have been reported at specific loci such as the PTEN tumor suppressor and Patched 1 (PTCH1) genes; in both cases the genes become hypermethylated and this corresponds with diminished expression in the myofibroblast.[@b40] To date, we lack genome-wide studies of changes in DNA methylation during HSC activation. However, a landmark study recently published from the Diehl laboratory interrogated 69,247 differentially methylated CpG sites in liver biopsy material from nonalcoholic fatty liver disease (NAFLD) patients stratified into advanced (F3-4) versus mild (F0-1) disease.[@b41] 76% of the differentially modified CpG sites became hypomethylated in advanced disease, while 24% underwent hypermethylation. The mechanistic basis for these NAFLD-associated changes in DNA methylation was not investigated. The DNA methylome data were overlaid with transcriptomics data from the same biopsies; this led to the discovery of several key fibrogenic genes that were both hypomethylated and overexpressed in advanced NAFLD. However, when using whole tissue for DNA methylome analysis there is a risk that observed differences may simply be reflecting cellular and/or architectural changes in the tissue rather than identifying molecular changes that are driving fibrogenesis. There is also no proof that the alterations in DNA methylation are directly responsible for the observed differences in gene expression; the two may simply be coincidental. It is highly likely that noncoding RNAs of all sizes and activities will play fundamental functions in the determination of HSC phenotype and liver fibrosis. Numerous miRNAs regulating proliferation, apoptosis, TGFβ1 signaling, and collagen expression have been described as regulators of HSC phenotype and fibrosis progression.[@b42] We await investigations into the functions of lncRNA species. Epigenetics and Nonalcoholic Steatohepatitis (NASH) =================================================== The associations between nutrition, epigenetics, and metabolic disease are firmly established. The phenotype of the Agouti mouse, which includes obesity and predisposition to cancer, is prevented by supplementation of the maternal diet with methyl donors.[@b43] Diets depleted of methyl donors promote DNA hypomethylation and the development of steatosis in rodents. By contrast, supplementation of high-calorie diets with methyl donors prevents NAFLD, suggesting that epigenetic changes that alter hepatic fat metabolism may be related to dynamic alterations in DNA methylation.[@b44],[@b45] There is a close association between lipid metabolism and circadian rhythm, the latter being controlled by the CLOCK machinery. The CLOCK-BMAL1 circadian transcription factors regulate hundreds of genes including the PPARs; hence, metabolic genes regulated by PPARs are rhythmically expressed.[@b46] Mice lacking the expression of *Clock* are hyperphagic, obese, and develop NASH.[@b47] The deacetylase SIRT1 forms a chromatin complex with CLOCK-BMAL1 and its activity is regulated in a circadian manner. 
This CLOCK-SIRT complex determines the degree of histone acetylation and amplitude of transcription for circadian and metabolic genes. Moderate overexpression of *Sirt1* in mice protects from high-fat-diet-induced metabolic disease.[@b48] These data are of relevance when considering epidemiological data in humans with disturbed circadian rhythms, such as shift workers, who have a high risk of metabolic disorders.[@b49] However, clinical studies investigating epigenetic reprogramming in NASH are only just beginning to emerge. Ahrens et al.[@b50] carried out DNA methylation and transcriptome analysis on liver biopsies from lean controls, healthy obese individuals, and NASH patients. Analysis of 45,000 CpG sites revealed 467 dinucleotides where methylation deviated from lean controls. By overlay with transcriptome data, eight genes were identified (*GALNTL4, ACLY, IGFBP2, PLCG1, PRKCE, IGF1, IP6K3* and *PC*) that displayed obesity-related alterations in expression correlating in an inverse manner with altered CpG methylation. Noteworthy is that all of these genes are regulators of metabolism and candidates for NAFLD disease drivers. As part of the same study, a similar analysis was also carried out with paired liver biopsies obtained 5-9 months following bariatric surgery. The gene encoding protein-tyrosine phosphatase epsilon (*PTPRE*), a negative regulator of insulin signaling, became hypermethylated and transcriptionally down-regulated with weight loss. This finding provides an interesting mechanistic link between weight loss and control of hepatic insulin sensitivity. Progression to steatohepatitis is a critical step in the continuum of NAFLD towards fibrotic disease and/or HCC. A plausible hypothesis for progression of benign steatosis to steatohepatitis is the perturbed regulation of inflammatory cytokines (e.g., interleukin [IL]-6, IL-1, and tumor necrosis factor alpha [TNFα]). Hepatocytes cultured with free fatty acids overexpress the ATP-dependent chromatin remodeling proteins Brg1 and Brm, which upon recruitment to proinflammatory genes stabilize nuclear factor kappa B (NF-κB) binding and help remodel chromatin.[@b51] Impressively, experimental depletion of Brg1 in MCD-fed mice suppressed steatosis, inflammation, and fibrosis, indicating a pivotal role for Brg/Brm chromatin remodeling proteins in the progression of NAFLD. Investigations of noncoding RNAs in NASH have so far been limited to miRNAs. On the order of 100 miRNAs are reported to be differentially expressed in NASH and these have vast functional diversity, including control of lipid and glucose metabolism.[@b52] Noteworthy is miR-122, which plays important regulatory functions in lipid and cholesterol metabolism and is closely linked to the circadian clock system. miR-122 is abundantly expressed in healthy liver but down-regulated in NASH, and in experimental studies in mice has been functionally implicated in NAFLD pathology.[@b53] Epigenetics and Liver Disease Imprinting ======================================== The concept that environmental cues may induce stable adaptive traits that can be passed between generations and influence phenotype has its theoretical roots in so-called "Lamarckian" inheritance. Until recently, there was limited convincing experimental evidence for epigenetic transgenerational effects in mammals and most studies were of maternal origin, where it can be argued that contributory *in utero* events confound data interpretation.
By contrast, paternal effects are more attributable to true epigenetic inheritance, as fathers usually only contribute sperm to offspring. Of relevance, there are intriguing recent reports for paternally transmitted heritable epigenetic adaptations that impact on liver function. Male inbred mice fed a low-protein diet gave rise to offspring that exhibited elevated hepatic expression of genes regulating lipid and cholesterol metabolism.[@b54] Liver fibrosis in male outbred rats triggers the multigenerational transmission of changes in the expression of genes regulating hepatic stellate cell activation, with the phenotypic consequence being suppression of liver fibrosis.[@b55] Interestingly, in both of these studies DNA methylation and gene expression for PPARα and PPARγ were altered; this may indicate that these nuclear hormone receptors provide an epigenetic hub for integrating ancestral environmental information. Evidence for Lamarckian-like inheritance is rare in humans, although intergenerational inheritance of metabolic disorders following the 1944-1945 Dutch famine does provide a striking example. As reported by Veenendaal et al.,[@b56] body mass index (BMI) and weight are increased in F2 (grandchildren) of males exposed to *in utero* famine, suggesting that epigenetic adaptations to dietary factors may be stable and transmissible across multiple generations. Whether such ancestral epigenetic imprinting contributes to population variability in liver diseases remains to be determined. Summary and Future Prospects ============================ Genetic and epigenetic variants combine to influence observed differences in disease susceptibility and variable disease progression. Based on the experience with GWAS, it is inevitable that epigenome-wide association studies (EWAS) in liver diseases will be undertaken. However, conducting meaningful EWAS will be challenging, as epigenetic signatures are highly plastic, display differences between cells within a tissue, and are modified by aging and multiple environmental factors. But if carefully conducted, EWAS combined with GWAS offers the rewards of unparalleled mechanistic insights into disease pathology, improved patient stratification, and new prognostic tools. Unlike genetic drivers of disease, unhealthy epigenetic modifications may be modified, thus offering the prospect of epigenetic therapies. There are already ongoing clinical trials with HDAC inhibitors in cancer, the miR-122 inhibitor miravirsen in chronic HCV, and first-into-man trials with a mimetic of miR-34, a powerful tumor suppressor.[@b57],[@b58] But again, enthusiasm must be tempered with what are significant hurdles to be overcome, not the least in finding drugs with sufficient specificity for a given epigenetic modifier to ensure efficacy and prevent clinical toxicities. Notably, very few HDAC inhibitors have completed phase II testing due to adverse side effects including fatigue, constipation, diarrhea, and dehydration.[@b59] Ideally, combined GWAS and EWAS information will arm clinicians and dieticians with the tools to design evidence-based lifestyle modification strategies tailored to prevent liver disease developing in at-risk individuals and from passing on unhealthy epigenetic traits to future generations. 
GWAS: genome-wide association studies
lncRNAs: long noncoding RNAs
MBD: methyl-DNA binding protein
miRNAs: microRNAs
PcG: polycomb group

[^1]: The author is funded by the UK Medical Research Council (MR/K001949/1), National Institutes of Health NIAAA (U01AA018663), and the Newcastle Biomedical Research Centre funded by NIHR, and is in receipt of grant funding from GlaxoSmithKline.

[^2]: View this article online at <wileyonlinelibrary.com>.

[^3]: Potential conflict of interest: Prof. Mann consults for and received grants from GlaxoSmithKline. He consults for Novartis and UCB.
Laverne Cox lauded Caitlyn Jenner — who recently shared her story on the cover of Vanity Fair — for helping the transgender movement take another monumental step towards equality in a sharp, poignant note on her Tumblr that also served as a reminder of the numerous obstacles many transgender men and women continue to face. “It feels like a new day, indeed, when a trans person can present her authentic self to the world for the first time and be celebrated for it so universally,” wrote Cox, who, almost a year ago to the day, appeared on the cover of Time proclaiming the “Transgender Tipping Point.” “Many have commented on how gorgeous Caitlyn looks in her photos, how she is ‘slaying for the Gods.’ I must echo these comments in the vernacular, ‘Yasss Gawd! Werk Caitlyn! Get it!'” The Orange Is the New Black star, however, quickly turned a critical eye towards the problematic nature of photo shoots and her own participation in them. While Cox copped to her desire to create and project “empowering images of the various sides of [her] black, trans womanhood,” she wrote, “I also hope that it is my talent, my intelligence, my heart and spirit that most captivate, inspire, move and encourage folks to think more critically about the world around them.” Cox wrote that it was these intangibles that she found most inspiring and beautiful in Jenner’s story, especially Jenner’s ability to make herself vulnerable before the entire world and “her courage to move past denial into her truth so publicly.” Still, Cox was aware that physical beauty often takes precedent over these qualities. While Cox said she had been flattered by remarks that she looked “drop dead gorgeous” on the cover of Time, she noted that such a compliment was rooted in her ability to embody cisnormative standards of beauty.
https://www.rollingstone.com/culture/culture-news/laverne-cox-lauds-caitlyn-jenner-talks-fight-for-trans-rights-66092/
- The college library functions with the objective of providing quality service to its members and promoting excellence in education by housing the latest editions of books, journals, monographs, magazines, etc. Library resources are available to encourage research, engage students in pleasure reading, support the curriculum, and address individual needs and interests.
- The college has a Main Library and eight departmental libraries that collectively support the teaching, research, and extension programs.
- The library is open on all working days from 7:30 a.m. to 5:30 p.m.
- The college library has a spacious reading room that is adequately lighted and contains furnishings appropriate to the student population.
- It houses over 1,23,267 book titles and subscribes to 68 journals/periodicals and 15 newspapers in three languages.
- It also has more than 125 educational CDs and about 20 sets of encyclopedias.
- Students are provided with computers with high-speed connectivity and access to e-journals.
- All e-resources available on UGC-N-LIST are accessible to college students and staff.
- The library is fully automated and uses a bar code system and standard integrated Library Management Software.
- Books for competitive examinations are issued to students.
- Book exhibitions are organized on campus from time to time.
- Faculty members encourage their research scholars to use e-resources hosted through INFLIBNET services.
- The library has a novel DOOR (Digital Online Offline Resources) facility for students.
- The college library also has a Book Bank facility that provides books to needy students for the whole session.

The Central Library has the following facilities:

- INFLIBNET
- Access to e-books and e-journals
- Reference books
- Periodicals, journals, magazines
- Separate study room for researchers
- Reprographic facilities:
  - a) Computers
  - b) Bar code printer
  - c) Printer
  - d) Scanner
  - e) Photocopying machine
- Competitive Exam Section - Dr. A. P. J. Abdul Kalam Vachan Katta
- Innovative DOOR - Digital Online Offline Resources
https://www.psgvpasc.ac.in/library/
Essay on Pollution in English: Pollution is the release of dangerous materials into the environment. These harmful materials are called pollutants. Pollutants can sometimes be natural, like volcanic ash. Pollution can also be caused by human activity, such as waste or runoff produced by manufacturers. Pollutants harm many natural things, like the quality of air, water, and land. Essays on pollution in English are an important part of school and college tests and exams for students and teachers. Essay writing on pollution is important because it deals with a pressing environmental problem. This pollution essay in English will help students and teachers with writing essays on pollution in exams and other school activities.

Essay on Pollution in English 50 words for Classes 1, 2 & 3 Kids

Pollution is very dangerous for the environment. Pollution is the presence of harmful things in the environment. There are many different types of pollution on the earth. Three major types are water pollution, air pollution, and noise pollution. Pollution affects our daily routine and is also bad for our health.

Read Essay on My Country India

Essay on Pollution 100 words for Classes 4 & 5 Kids or Children

Pollution means making the environment unclean. Pollution is the release of unwanted substances into the environment. Pollution can damage our Earth. Pollution of the environment is a serious issue in every society. Industrial growth and the green revolution have had a harmful effect on the environment. On Earth, the four major types of pollution are water pollution, air pollution, noise pollution, and soil pollution. All types of pollution are the outcome of careless human activities. People have turned the natural support system shared by all living beings into their own resource and have badly upset the natural equilibrium.

Essay on Pollution 200 words for Classes 6, 7 & 8 Kids or Children

Today, the earth faces its biggest pollution problem. Unwanted and unnecessary things affect the environment negatively. Today I am writing an essay on pollution to help save our earth. There are different types of pollution, but I will write about three major types in this essay: water pollution, air pollution, and land pollution. All kinds of pollution are very harmful to humans, and each kind affects human life in its own way. Water pollution causes many types of illness. Air pollution can cause a variety of damaging health effects; it increases the risk of infections, heart disease, and lung cancer. Land pollution can damage the human body in many different ways. The problem is rising day by day because of many polluting causes, including the machines humans have created. Pollution prevention is necessary because it protects the environment by saving and protecting natural resources. Deforestation, the dumping of garbage, and the use of dangerous chemicals are causes of soil pollution. Prevention also brings financial benefits by allowing manufacturing to become more efficient and by reducing the amount of garbage that must be managed by households, businesses, and communities.

Essay on Pollution in English 250 words for Classes 9, 10, 11 & 12 Students

Pollution is increasing day by day and corrupting the natural environment. We should be serious about saving life on earth. Some of the most common kinds of pollution are soil pollution, water pollution, and air pollution.
All types of pollution are very harmful to our environment. Water pollution is largely created by industries. The discharge of poisonous wastewater from manufacturers and industries directly into rivers and other water bodies is the main cause of water pollution. Drinking polluted water directly affects health. Soil pollution is generated by the presence of dangerous chemicals in the natural soil. It is produced by industrial projects and by the use of farming chemicals. These chemicals directly impact our health. Air pollution is very dangerous for our health. It is mainly caused by vehicles emitting harmful and toxic gases; factory smoke is another major cause of air pollution. Air pollution is not only harmful to humans; it is also harmful to animals and plants. Pollution control is very important for life on earth. There are several ways we can control pollution: do not allow life-threatening substances to escape into the atmosphere; recycle substances that could be dangerous if discharged into the environment in excessive amounts; and stop processes that continue to release harmful substances into the air. All of these pollution control options require the right technology to be applied. Remember, pollution control does not mean reducing existing productivity.

Different Types of Pollution in English

- Air Pollution
- Water Pollution
- Soil Pollution
- Garbage Pollution

Air Pollution: Air pollution is the release of impurities, harmful gases, dust, and smoke into the air that can damage the health of humans, animals, and plants. The atmosphere contains a set balance of gases, and any addition to or reduction of this composition is harmful to life.

Water Pollution: Water is necessary for life and is one of the most important natural resources on the planet. Without water, no one can live on earth. Water pollution is very harmful to life; using polluted water causes many deaths and illnesses every year.

Soil Pollution: Soil pollution is defined as the presence of harmful chemicals in the soil. In simple terms, the contamination of natural soil by human activities is termed soil pollution. It is a serious environmental problem for human health.

Garbage Pollution: Garbage includes all types of food waste, household refuse, plastics, and shipping debris. Common sources of this pollution are everyday items such as bottles, cigarettes, plastic bags, and cans.

Pollution Frequently Asked Questions

Q.1 What are the impacts of pollution?
Ans.1: All types of pollution affect the quality of human life. Air pollution directly affects breathing. Water pollution causes many types of infections, especially in children.

Q.2 How many types of pollution are there?
Ans.2: The five main kinds of pollution are water pollution, air pollution, light pollution, soil pollution, and noise pollution.

Q.3 What are five ways to reduce pollution?
Ans.3: Five ways to reduce pollution:
- Reduce forest fires and smoking.
- Avoid using products loaded with chemicals.
- Recycle products and reuse them.
- Minimize air pollution from vehicles.
- Do not discharge chemicals into waterways.
https://eduenations.com/essay-on-pollution-in-english/
Encryption is the process of translating data into a secret format so that only authorized parties can understand the information. Plain text, or readable data that is not encrypted, is converted into cipher text, or scrambled data that is unreadable. Encryption takes readable data and alters it so it appears random. This is done to protect and secure the confidentiality of data transmitted through a network. To read an encrypted file, the recipient must have access to a secret key or password that enables them to translate the information back to its original form. This process is called decryption. Although it appears random, encryption requires the use of an encryption key. This key consists of a string of characters used in combination with an algorithm to transform the plain text into cipher text and vice versa. Each key is unique.

Types of encryption

There are two main types of encryption: asymmetric (also known as public key cryptography) and symmetric. The biggest difference between the two is that symmetric encryption uses one key for both encryption and decryption, while asymmetric encryption uses a public key for encryption and a private key for decryption. Symmetric encryption is the simpler and more widely used technique. While asymmetric encryption takes longer to execute because of the complex logic involved, it's a better choice from a security standpoint.

Examples of encryption

- Data Encryption Standard (DES): DES is a low-level encryption standard that was established by the United States government in 1977. DES uses a 56-bit key and the block cipher method, which breaks text into 64-bit blocks and encrypts them. Because of technological advances, DES is relatively obsolete for protecting sensitive data.
- Triple DES: Triple DES runs DES encryption three times. It encrypts, decrypts, then once again encrypts data. It strengthens the original DES standard.
- RSA algorithm: RSA stands for Rivest, Shamir, and Adleman, the inventors of the technique. The algorithm is based on the assumption that there is no efficient way to factor very large numbers. Deducing an RSA key, therefore, requires an extraordinary amount of computer processing power and time.
- Advanced Encryption Standard (AES): As of 2002, AES is the United States government standard, replacing DES. It works at multiple network layers simultaneously and is used worldwide.
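To make the symmetric/asymmetric distinction concrete, here is a minimal sketch in Python using the third-party `cryptography` package (a library choice of my own; the article does not prescribe any particular tool). Fernet demonstrates symmetric encryption with a single shared key, while RSA with OAEP padding demonstrates encrypting with a public key and decrypting with the matching private key.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric: the same secret key both encrypts and decrypts.
key = Fernet.generate_key()
f = Fernet(key)
token = f.encrypt(b"plain text")            # cipher text looks random
assert f.decrypt(token) == b"plain text"    # decryption restores the original

# Asymmetric: the public key encrypts, only the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(b"plain text", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"plain text"
```

Fernet uses AES under the hood, so the symmetric half of this sketch also illustrates the AES standard listed above.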
https://www.webopedia.com/definitions/encryption/
Hundreds Chart

Unit of Study: Strengthening Critical Area: Understanding Place Value to Add and Subtract

Global Concept Guide: 3 of 3

You may have noticed that Unit 16's GCGs look different from the previous units. Unit 16 was provided as a way to strengthen student understanding of the critical area of place value. Your students' performance on the Unit Assessment and Performance Task will be your guide for the activities you select as you progress through this unit.

Content Development
• Students need to see multiple representations and strategies to add and subtract using place value. They have had many chances to find patterns and count by 1's and 10's using a 100/120 chart, and can build on these experiences to add/subtract larger numbers.
• Using hundreds charts to add/subtract larger numbers can build students' understanding and set a foundation for addition/subtraction with regrouping.
• Make connections to open number lines and adding on a hundreds chart.

Day 1
Essential Question: How does visualizing the structure of a hundreds chart help you add or subtract?
• Build a Wacky Hundreds Chart - a cooperative whole-group activity where students use number cards to build a hundreds chart on the floor. Students will need to communicate about the relationship of numbers (1 more, 1 less, 10 more, 10 less).
• Students independently complete the Hundreds Chart Puzzle. Look for students who struggle with the pattern on the hundreds chart. Be sure to provide support when needed.
By the end of Day 1, students will be able to build a hundreds chart and describe the structure and pattern of the chart.

Day 2
"I started at 73. I had to add 25 more, so I moved down two tens and counted on 5 ones."
Essential Question: How does using a 120 chart help you compose and decompose numbers?
• Addition and Subtraction on the 120 Chart - Students should be using a completed 120 chart to model addition and subtraction. Boards can be laminated or placed in plastic sleeves so students can track their thinking.
• Have students explain their thinking in terms of tens and ones, and how they moved on the hundreds chart.
By the end of Day 2, students will be able to model adding and subtracting on a 120 chart as well as explain the pattern. Students should make connections between adding and subtracting on a hundreds chart and their previous work with open number lines. This will be required on the Performance Task. The Performance Task will be given at the end of Day 2.

Enrich/Reteach/Intervention
• Reteach/Intervention
• Using a Hundreds Chart to Add 1 more/1 less – interactive activity
• Subtracting on a Hundreds Chart
• Colored Hundreds Chart – helps with visualizing patterns
• Enrich
• Provide number sentences for adding and subtracting and have students model them on a hundreds chart.
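For teachers who want a quick way to generate worked examples like the Day 2 student explanation above, here is a small sketch in Python. It is not part of the unit materials; the function name and output format are my own. It simply models addition the way students move on a hundreds chart: down one row per ten, then counting on the ones.

```python
def add_on_hundreds_chart(start, addend):
    """Model hundreds-chart addition: one row down per ten, then count on the ones."""
    tens, ones = divmod(addend, 10)
    value, moves = start, []
    for _ in range(tens):
        value += 10                          # moving down one row adds ten
        moves.append(f"down a ten -> {value}")
    for _ in range(ones):
        value += 1                           # moving one square adds one
        moves.append(f"count on one -> {value}")
    return value, moves

total, steps = add_on_hundreds_chart(73, 25)
print(total)             # 98
print("; ".join(steps))  # the two jumps of ten (83, 93), then the five ones
```

Running it for 73 + 25 lists the two jumps of ten and then the five ones, matching the sample explanation a student might give.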
https://fr.slideserve.com/akiko/hundreds-chart
The conference offered so much to so many, as did the exhibition floor. This year, a multitude of vendors – 160 in total – displayed their offerings and discussed their technologies with attendees. Among the exhibitors were many of the industry's leading vendors, including Autodesk, Nvidia, AMD, Maxon, and others. Yet, according to show management, one quarter of those exhibiting were first-timers to SIGGRAPH. I spent some time traversing the aisles and checking out the technologies on display in search of those selected for CGW's annual Silver Edge Awards, chosen for the impact they are making, or their potential impact, on the industry. For the first time, we have selected a platinum winner, as well, whose technology is in a category all its own.

Silver Edge Award – Platinum Winner
- Nvidia -- Quadro RTX Turing-powered raytracing GPUs

Silver Edge Award Winners
- AMD -- Radeon Pro WX 8200 for real-time visualization
- Blackmagic Design -- DaVinci Resolve 15 with Fusion integration
- Chaos Group -- Project Lavina (photoreal real-time raytracing)
- Epic -- Further development of Unreal Engine 4 and pushing the boundaries of real-time development across the industry and in cutting-edge virtual productions
- Foundry -- Athera cloud-based VFX platform
- Lenovo -- ThinkPad P1 lightweight yet powerful mobile workstation
- Maxon -- Cinema 4D Release 20 extensive DCC platform
- Pixar -- RenderMan 22 renderer
- SideFX -- Houdini 16.5's dynamic simulation tool updates
- Unity -- Unity 2018 real-time 3D engine
- Vicon -- Origin VR suite for location-based entertainment
http://www.cgw.com/Press-Center/Siggraph/2018/CGW-Selects-Best-of-Show-for-SIGGRAPH-2018.aspx
While on your professional journey to become a designer/modeller and digital artisanal craftsman, you must have tried numerous tools, processes and workflows. Outsiders to this digital craftsmanship are not aware of the numerous "digital tools" of the trade for 2D or 3D. How do you find your way through the countless tools to determine those that deliver the visions you want to realize?

Becoming a digital artist requires a strong commitment. One must practice day in, day out, just like an athlete in a constant training regime, getting better every day. After achieving the needed level comes the maintenance stage: keeping the skill sharp, warm, and ready to be used when required. Part of the digital artist's maintenance responsibility is always to be curious and aware of what is out there and what is new.

How many software packages have you tested in your professional path? For me, personally, that's a tough question to answer. In my professional life, and even just a bit before, I have tried countless 2D and 3D packages, and trust me, while writing these words, I can honestly say without a doubt, it's a bottomless rabbit hole. Still, it amazes me how much software did not survive to this day, or how often I did not survive the transition, spending the time to learn a tool without it ever becoming part of my workflow. Some fitted like a glove, and some just did not. This type of human-software coexistence and co-evolution is part of our life in this digital era.

The adoption rate for software, in my opinion, depends on a few vital aspects:
1. Solves a problem (the easy way to solve the problem vs the best way of solving it)
2. Intuitive (UI, function logic, 2clicksAway…)
3. The learning curve (community and learning material availability)
4. Boosts your creativity (FYI, there are many excel creative minds out there)

As the Konzepthaus Academy Director, I have carefully selected a list of software which I personally find an absolute pleasure, and which answers the following question: If I were a young designer today, what would I be interested in learning or upskilling? Some of the selections are related to my personal and creative workflow, but I think at this point in time we should give this question a broader scope and address not just what is out there, but what could make a difference!

1. Blender

Number one on my list. Yes, and it is here to stay. Blender must have been doing something right. Since the introduction of the new, simple, intuitive Blender user interface, we are experiencing a significant shift to Blender. Different profiles of users, from industrial and fashion to automotive designers, are rapidly adopting the most downloaded 3D software ever. If you are not aware, in the open-source and free category, Blender is the most robust 3D Swiss-army knife out there. In the past decade, Blender went from hobbyist-level 3D software to a full-scale film 3D modelling, production and compositing tool. In this one-stop-shop tool you are able to mix endless processes seamlessly: modelling / sculpting / Cycles rendering / EEVEE real-time PBR engine / rigging / animation / 2D Grease Pencil / VFX / video editing. To sum it up: an intuitive way into 3D, free and powerful, with a vast user community, backed by open-source developers worldwide participating in this huge 3D software project.

Entry-level: 2

2. Substance Painter

3D texturing was never so much fun, or as powerful as it is now, thanks to this fantastic product within the Adobe suite.
I know there are other software packages out there, but for me, this is the most comfortable entry point to 3D visualisation for realistic LookDev of your product. It makes total sense that all industries alike are attracted to Substance Painter, and the product design results speak for themselves. Add to this the fact that Substance/Adobe is aware of the growing community and keeps pushing the tool to be integrated into as many industries as possible, investing time and resources into meeting the needs of their users. Win-win. Having Substance Painter under Adobe's umbrella makes sense after all! If you are familiar with Adobe Photoshop or similar products, you will find your way in easily. It is very intuitive and quick to master, and you can easily gain an understanding of how 3D materials are layered while creating them digitally. When you master Substance Painter (aka SP), I am sure you will go the extra mile for more procedural material exploration with Substance Source or even a full parametric node workflow with Substance Designer. In a nutshell: a fabulous and fun way into 3D, materials and material R&D.

Entry-level: 1

3. Grasshopper

Full disclosure here! I am very biased when it comes to Grasshopper, as it's close to my heart. An excellent and complex tool, with a very high level of personal maintenance (practice). But when the magic happens, the personal reward is at its best. With Grasshopper, you will be able to automate your daily repetitive tasks and learn how to create your own visual scripting code (code without coding). In its early stages, architects adopted Grasshopper, and the direct outcome of using the tool was a full-throttle revolution in all aspects of current architectural processes. Let me tell you a small secret: Grasshopper is not only for automotive parametric texture modelling! It has much more to offer, from design conception (form-finding), complex 3D and additive manufacturing R&D, optimization, and C&T pattern explorations, to, of course, the iteration part of each process. Learning Grasshopper might be tricky; the goal is to find what you are comfortable with while still ensuring personal growth. In a nutshell, Grasshopper is hard to learn but super rewarding once mastered. Future designers should experience parametric design and even code as part of their workflow.

Entry-level: 3.5

4. ZBrush

Monster software that I absolutely love. Blood, sweat and tears went into learning this one. Even though these days my design process only utilises probably 3% of what ZBrush has to offer, it is always a fun place to go. My curiosity about ZBrush started back in 2011 while trying to create more robust jewellery designs, which forced me to let go of poly modelling and merge sculpting into my daily design workflow. Nine years on, I'm still a fan and find ZBrush to be one of the most creative places for me to express my artistic, abstract design process. Pixologic is investing many hours in online content that will get you up and running, as well as maintaining a vast fan-based community while leading the 3D sculpting industry for the film sector. To cut a long story short: after you get "in", it's super fun. Not for the faint-hearted, and I wouldn't recommend it to everyone, but it has massive potential for sculptors and clay modellers.

Entry-level: 3

5. Unreal

You can't get more epic than Unreal. Unreal Engine has captured the attention of almost everyone who utilises a real-time visualisation tool!
OK, Unreal can't be mentioned without discussing its rival engine Unity, which splits the automotive world in two. Are you Team Unity or Team Unreal? There is no black-or-white answer to this. I think it's a fact that half of the automotive design community is a passionate console gaming community, and I can imagine that GTA rings a bell with each and every one of us. In any case, selecting a quality program for real-time visualisation is a must for the industry. Visually, the results of Unreal are stunning. More and more companies are integrating Unreal into their visualization process. Unreal also has the added benefit of helping studios become future communication centres: through Virtual Reality, it integrates technologies that allow multiple participants in different locations worldwide to take part. And the most potent integrations are still emerging, virtually testing specialized processes before physically developing them. My wrap-up: a visualization starter and a fast way in that delivers excellent results in no time.
https://www.konzepthaus-consulting.com/my-5-favourite-design-programs-why/
Meiji Shrine is a Shinto shrine in Tokyo.

History

The Simpson family saw the Meiji Shrine from their room at the Royal Tokyo.

Non-canon

The contents of this article or section are considered to be non-canon and therefore may not have actually happened or existed.

The Simpsons: Tapped Out

This section is transcluded from The Simpsons: Tapped Out decorations/Foreign buildings and architecture.
https://simpsonswiki.com/wiki/Meiji_Shrine
An organization that seeks to empower a particular people should have characteristics that project its cause and its intention of assisting a given underprivileged group. First and foremost, it should not seek to benefit financially from the people, as that would render it a financial institution rather than a social organization. It should seek to unite the disenfranchised people towards their common goal. The group should also have leadership drawn from the people it seeks to help rather than foreign individuals. It should have a clear mission statement and be very transparent in its activities. The Federation for American Immigration Reform (FAIR) is such an organization, seeking to ensure that immigration laws and policies are revised to suit the American people and the immigrants. The organization's mission statement is to examine immigration trends and effects, to create awareness among American citizens of the benefits of controlling high-volume immigration, and to propose and advocate for policies that will be in the best interests of America's societal, environmental and economic systems (FAIR n.o). FAIR seeks to address the issues of border security and illegal immigration and to advocate for immigration policies that will be of mutual benefit to the United States and to the immigrants.

I am particularly interested in this issue considering the plight of illegal immigrants and the negative effects on the economy and on the American people in general brought about by uncontrolled immigration. Illegal immigration poses a risk to the immigrants as well as to Americans' security and economic well-being. Some individuals tend to take advantage of illegal immigrants, who provide cheap labor or are sometimes even enslaved. This disrespects human dignity and is contrary to the provisions of the constitution. Illegal immigrants also render Americans jobless, as employers unscrupulously employ the immigrants at wages below the national requirement. Illegal immigrants risk losing their lives while migrating to America, since some go to extreme measures to get there. Those who manage to arrive can pose a serious security threat to citizens. It is therefore necessary to have clear policies and laws that guide this issue.

Marc Zimmerman asserts that the three key components of empowerment are efforts to gain access to resources, group participation, and a critical understanding of the issue in question (Zimmerman 282). FAIR possesses all of these qualities and characteristics. FAIR brings together American citizens in identifying the need for better immigration policies. With better immigration policies, it becomes easier to allocate jobs legally, and both citizens and immigrants are protected by the law and able to freely exercise their rights. The organization thus meets Zimmerman's criteria.

Works cited

Federation for American Immigration Reform (FAIR).

Zimmerman, M. A. (2000). Empowerment theory: Psychological, organizational and community levels of analysis. In Rappaport, J., & Seidman, E. (Eds.), Handbook of community psychology. New York: Kluwer/Plenum Publishing.
https://www.wowessays.com/free-samples/free-course-work-on-rights-and-opportunities/
Recycle Forum’s first community recycling project to be launched this week The Recycle Namibia Forum (RNF) will launch its very first Community Recycling Project – in celebration of Global Recycling Day on 18 March. The project is run in conjunction with Development Workshop Namibia (DWN) and City of Windhoek’s Solid Waste Management division in the Samora Machel constituency in Windhoek. The RNF sourced three recycling “igloos” in early 2020 as part of its commitment to enable communities (where no household recycling takes place) to have a collection point for their recyclables. Given the RNF motto of “Taking hands today for a cleaner Namibia tomorrow”, it was decided to partner with DWN, which already has its Community Led Total Sanitation (CLTS) programme in this constituency, together with the support of City of Windhoek’s Solid Waste Management Division (SWM). According to DWN, “The collaborative igloo community recycling initiative between DWN, RNF and City of Windhoek SWM came at the right time. Solid waste is a massive challenge for most informal settlements, and introducing community recycling in Hadino Nghishongwa of Samora Machel constituency, is a milestone for piloting sustainable sanitation enterprises which makes provision of solid waste management facilities and creating educational awareness on how to manage and recycle waste.” The collection and sorting of the recyclables will also remain within the community – with a resident waste collector responsible for the processing of the recyclables. In order to create awareness about the importance of good waste management and the value of recycling, educational teams from both the City of Windhoek and DWN will be active within the community over the next few weeks, to introduce and encourage the sorting of their household waste. Anita Witt, coordinator of the RNF: “With Windhoek growing at a rate of approximately 5% annually, the collection of waste places a great demand on our municipal services. Creating awareness for proper waste management and the importance of recycling, the launching of this collection bin is a highlight for the RNF – not only enabling the residents to do the right thing, but also minimizing what goes to Kupferberg landfill.” “Recycling initiatives in the informal settlements of Windhoek is silent but it is also an opportunity and solution to challenging issues of solid waste ending up into riverbeds. Although many communities need awareness on the importance of this before any initiations, the CLTS project areas have been sensitized on waste management and therefore are ready to pilot the igloo recycling drum,” according to Sheya Gotlieb Timo, Project Coordinator of DWN. “The introduction of Solid Waste Management in the CLTS project areas has opened doors for more innovative options that could be used for piloting. With improved understanding and change of social behaviour toward managing household waste in an appropriate way, it has proved to work after community sensitization and outreach awareness programmes. The launch of the recycling igloo within this community will be beneficial in these areas: Keep Windhoek clean and minimizing what goes to Kupferberg Landfill, which is nearing full capacity; provision of the igloo will create a physical space of awareness on how and the importance to recycle and the waste type that could be recycled. 
Furthermore, it will set the scene for an educational component, with the possibility of turning the idea of recycling into various creative, viable opportunities for local entrepreneurs in solving the solid waste management issues in their communities.”

Within days of being placed in the Samora Machel area, residents have been keenly separating their waste, making use of both the skip for general waste and the recycling container for their recyclables.
New Summit School is one of our 2015 award winners for their exceptional work in education. An innovative learning environment, they have taken the lead with a highly acclaimed dyslexia program. Their dyslexia program has yielded such success in helping children improve their reading skills that they are overflowing with applicants. You can learn more about their exceptional work here: http://newsummitschool.com

Welcome to New Summit School!

New Summit School was established in 1997 as a full-time, special purpose, co-educational school offering classes for students in Kindergarten through 12th grade. Although New Summit School (NSS) is designed similarly to traditional schools, emphasis is placed on keeping instructional groups small and teaching to the individual student in a positive and stimulating environment. NSS is committed to an educational philosophy that considers the diverse needs of today's students and families. Our goal is to promote academic achievement and social growth within an environment enriched by economic and ethnic diversity. In addition to offering the compulsory subjects required by an accredited school, New Summit School incorporates innovative educational approaches into the daily curriculum model. Our experienced educators and counselors take pride in reaching all of our students equally, while taking into account the various learning styles and skill levels that are represented. Through an initial assessment, the staff of New Summit School determines each student's instructional needs and custom-designs an educational plan suited to meet those needs. At New Summit School, we strive to encourage our students to realize their full academic and creative abilities, develop strong values, and eventually determine how they will contribute to the community in their own individual way. New Summit School offers many options:
http://themagicofbooks.com/literacy-champions/new-summit-school/
The Kunming Master Chefs cooking contest, in which the culinary skills of leading chefs were put to strict tests, has announced its winner. The team from the Kunming Hi-Tech Zone won the competition, which was held in Kunming, capital of Yunnan province. The contest comprised theoretical and practical tests of Chinese culinary skills. Chefs' cooking of specific dishes and making of flour-based foods, as well as the serving skills of waiters and waitresses, were all put to the test. Altogether 157 competitors from 25 teams, including those of star hotels and vocational schools, took part in the contest. The Kunming Hi-Tech Zone sent to the contest a team formed of chefs, waiters and waitresses from the Yunnan Dynasty International Hotel, located in its administrative area, which eventually achieved impressive success. Its team members won six individual prizes, including the titles of “the best Chinese cuisine cook” and “the best flour-based food maker”. The team was also awarded the “best team” prize.
http://www.21food.com/news/detail66096.html
See fossilized dinosaur eggs, babies Bone that's turned to stone. That's what fossils are. It's not uncommon to see dinosaur fossils in museums, but what about dinosaur eggs? Did you know scientists have unearthed fossilized eggs from these ancient creatures? These eggs are fascinating to look at, and many will be on view in an interactive exhibit called, "Tiny Titans: Dinosaur Eggs and Babies," at the Yale Peabody Museum of Natural History in New Haven. It opens Saturday, Feb. 8. Richard Kissel, 39, a paleontologist and director of public programs at the Peabody, has been a dinosaur fan since childhood. "I was one of those kids who loved dinosaurs; I just never outgrew it," said Kissel, who still remembers a short stack of dinosaur books he had as a child. He flipped through them so many times, he said, he probably memorized them. He wasn't into "The Flintstones" so much though. For him, it was more about the real thing. "It's very hard to become a fossil," he said. "It's a very rare process." First a creature must be buried quickly (otherwise it decays or is picked apart by scavengers). Then, if it's in a protected area, minerals in the groundwater percolate through its bones, and the original material in those bones is replaced by the minerals. "The bone turns to stone; that's why there's different colors, it all depends on the minerals," he said. "Tiny Titans" offers visitors a glimpse into secrets that have been revealed since 1923, when the first dinosaur eggs were discovered, entombed in the Flaming Cliffs of Mongolia. That 75-million-year-old find was followed by others. Fossilized dinosaur eggs and nests have been recovered from around the world, along with the bones of tiny hatchlings and embryos. Such discoveries help scientists learn how dinosaurs lived their lives. Another highlight is the extremely rare embryonic skin preserved inside an egg, a scientific first. Fossils of embryos are among the rarest of dinosaur remains, but the fossilization of soft tissue, such as skin, is even more so because it usually decays soon after burial. A short film tells the story of the discovery of "Baby Louie," one of the first known "articulated" dinosaur embryos, meaning Louie's fossilized bones are intact and lying in the same position as they would have been in life. A cast of Louie's bones and a feathered reconstruction are on display. "With skin, it's not actually skin that's preserved, but you get impressions of skin," said Kissel. "The original material is replaced with minerals; the original shape and texture is preserved by minerals." Kissel has done excavation work in Europe and Texas. At one site, he helped recover the skeleton of a 50-foot-long crocodile-type creature. "The skull alone was 5 feet long," he said. "It was discovered by one of our crew members ... You walk around and look for pieces of bone sticking out of the ground." He said it was "lots of fun" because piecing fossils together is like working on a puzzle. "You're trying to reconstruct a picture of what life was like hundreds of millions of years ago. Imagine you were given a week and 10 pieces from a 100-piece puzzle, and not the picture from the box, and you had to determine what the picture was from those 10 pieces." At the exhibit, there's plenty for visitors to touch and feel, as well as a dig pit for smaller children, more than 150 eggs, and beautiful artwork depicting dinosaur life. Also, live emus will be hatching -- right in the exhibit.
https://www.newstimes.com/news/article/See-fossilized-dinosaur-eggs-babies-5211329.php
For many, eating is a time when you share moments of your life with the people you love. Sharing a meal means sharing your love. In the Torres family, food has always been special. And it's this love of food and family that led two brothers to embark on a journey which would bring them to New York City to eventually become Executive Chefs at Raymi. A venture that has involved the entire family, and has also welcomed new friends into their circle along the way. Even though their backgrounds are as varied as snowflakes, the common thread is a love for Peru. Destiny had plans for these two brothers to unite and bring their varied experiences and knowledge into the creation of a place that pays homage to their common love. Growing up, Jaime and Felipe, along with their other siblings, were exposed to many gastronomical experiences. One that stands out for them is the time a young Chef came to their house to cook. Armin, the head of the family, met a young man, Virgilio Martinez, working at Astrid & Gaston in Bogota, Colombia. Over the years they developed a good friendship; Armin invited the young Chef into his home to share his culinary gift with the family, and so he became a welcomed guest in the Torres home on a regular basis. Curious, Felipe and Jaime would watch and help Virgilio, cutting vegetables, stirring pots, and happily performing any other tasks they were asked to do. Soon they became passionate about this vocation. According to the Torres family, creating a dish is a form of art. Back then the young chef couldn't imagine how greatly he was influencing this family with a love for Peruvian food. This young Peruvian-born Chef is now the owner and chef of Central in Lima. At the time he was barely 20 years old, but his talent was clear. The Torres boys loved having him cooking in their home, watching and learning from him, not fully realizing that they were learning from a man who is now one of the most renowned Chefs in the world. It was around the age of 12 that Felipe Torres decided that he wanted to be a chef. The exposure to culinary arts fostered his creative side, and making a dish and serving it to his family, who enjoyed food as much as he enjoyed preparing it, was very fulfilling. Of course, his family supported his decision, so his career in culinary arts began. After graduating from the New York Institute of Culinary Education and the International Culinary Institute, Felipe began a series of stints in New York City, all of them in outstanding restaurants, learning all about how a kitchen runs firsthand. These experiences at Eleven Madison Park, Jean George, and Esca, to name a few, ignited a spark: he would love to have his own kitchen to run one day. His enduring commitment to his trade led Felipe down a path that would eventually bring him to Peru. After spending a few years in Northern Italy working in a Michelin-star restaurant called Il Sole, Felipe began to hear about a new, up-and-coming cuisine from Peru. Remembering his first inspiration from childhood, his interest was piqued, and he set out on an adventure to Peru. Once there, he felt the warmth of the people and how proud they were of this unique cuisine. He knew that this was something he wanted to explore further. Jaime Torres fell into culinary arts while studying business administration in Colombia. Jaime decided to change his career path to be inside the kitchen just like his brother. Once Jaime found out about Felipe's trip to Peru, he too decided to explore this unique cuisine.
Now both Torres boys were making their way through the culinary scene: Jaime worked his way into the kitchen of Astrid & Gaston in Madrid; Felipe did the same in Peru. It was then that Jaime decided to complement the business side of his studies with his culinary and hospitality knowledge. As his desire to learn about gastronomy grew, he studied in many places around the world, including New York City's Institute of Culinary Education and the Instituto de Formación Empresarial de la Cámara de Comercio e Industria de Madrid, where he additionally learned the administration side of the culinary business. With his interest in Peruvian cuisine piqued by his time spent at Astrid & Gaston in Madrid, Jaime flew to Peru and worked with many great chefs at Astrid & Gaston and La Mar in preparation for his next opportunity. This opportunity arose at Raymi, where new, creative, and passionate Chefs were needed to help build the menu and the concept in order to create New York City's best Peruvian restaurant. Bienvenidos a nuestra casa y buen provecho! (Welcome to our home, and enjoy your meal!)
https://www.rayminyc.com/the-team
This is part 3 of a 3-part series: Intro to Statistics and Evidence by Eli Hymson. The aim of these articles is to equip debaters with some debate-applicable knowledge from the field of statistics. The list of subjects is nowhere near comprehensive but reflects a grab-bag of areas which have the potential to improve the quality of in-round evidence comparison and out-of-round research practices.

- Experimental Design and Modeling

Much of the science of statistics has been developed to analyze results of carefully designed controlled experiments in which researchers manipulate experimental conditions, treatment combinations, and the selection of experimental units to examine causal relationships without other noisy factors involved. Debaters rarely have access to, or need for, this type of research. The majority of quantitative work used by debaters comes from observational studies where the data was collected from a process over which the researcher had no control. The best we can do in these circumstances is try to use variation in observed features to explain variation in some other variable of interest. Enter regression analysis.

Regression models are ubiquitous in fields utilizing quantitative research. I will focus on the elements of regression modeling which might be important for debaters, but there is a rich background of theory involved that I encourage you to learn about too. Let's discuss multivariate regression, coefficients and standard errors, statistical significance, p-values, and reading regression tables.

- Multivariate regression

The goal of single-variable regression is to draw a line through a cloud of data points on the X-Y plane which best represents the linear relationship between the explanatory variable, X, and the outcome variable, Y. X is called a regressor. The slope and intercept of this line are chosen so as to minimize the sum of squared errors, i.e. the line minimizes the total squared vertical distance by which it misses the points in the observed data. The slope and intercept parameters are sample estimates of the population's parameters. What we want to do is obtain some measure of how closely these parameters estimated from the data represent the entire population, without observing the entire population. A simple linear regression estimates an equation of the form Yi = B0 + B1*Xi, where Y is the outcome variable, B0 is the intercept of the line, and B1 is the slope coefficient describing how a 1-unit change in X changes the expected value of Y. You can add many regressors which might be related to Y. Our linear regression equation then looks like Y = B0 + B1*X1 + B2*X2 + … + Bp*Xp. Each B can be interpreted as follows: a one-unit increase in the corresponding X is associated with a change in Y of magnitude B, holding all other regressors constant. Even though I've described it as an increase in Y, B can be positive or negative depending on the association between Y and X. Note that these coefficients are NOT correlation coefficients. Correlation coefficients range between -1 and 1, only measure straight-line relationships between two variables, and cannot control for partial effects of other variables. Correlation analysis is a weak technique relative to regression analysis. When working with multivariate regression, it is important to keep in mind what the researcher wants to accomplish.
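To make the mechanics concrete before turning to inference, here is a minimal sketch of fitting a multivariate regression in Python with statsmodels. This is not from the article; the data and variable names are purely illustrative, and the point is only to show where the intercept B0, the slope coefficients, their standard errors, and the p-values discussed next come from.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data: two regressors (x1, x2) and an outcome y.
rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 2.0 + 1.5 * df["x1"] - 0.8 * df["x2"] + rng.normal(size=200)

X = sm.add_constant(df[["x1", "x2"]])  # adds the intercept term B0
model = sm.OLS(df["y"], X).fit()       # chooses B's to minimize the sum of squared errors
print(model.params)                    # estimated B0, B1, B2
print(model.bse)                       # standard error of each coefficient
print(model.summary())                 # full table: coefficients, std errors, p-values
```

Each value in `model.params` is read exactly as described above: the expected change in y for a one-unit change in that regressor, holding the other regressor constant.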
The researcher often wants to conduct inference about a regressor and its coefficient, meaning they want to draw statistically valid conclusions about X's relationship with Y for the population. There are many assumptions which a regression model must satisfy in order for any inference to be considered valid. Some problems researchers often run into, which you might read about in a study's methodology section, include multicollinearity/high variance inflation factors, patterns in residuals, and endogeneity (think confounding variables). You can read more about the Gauss-Markov assumptions and their violations elsewhere if you are so inclined.

Let's talk about the ingredients of a research paper's description of its regression analysis. The regression table discussed below is taken from Kouba and Mysicka, "Should and Does Compulsory Voting Reduce Inequality?" Alright, lots of numbers here. What if there's some useful information in here, but the researchers did not bother to write a concise, easy-to-understand paragraph summarizing their findings in words for the debaters of the world to use in their cases? Let's explain and make sense of what's happening here, focusing only on Model I.

Let's first look at some coefficients, specifically the coefficient next to the Gini index. There is one number, -51.27, with some stars next to it, and then there's another number below it (7.02) surrounded by parentheses. What do these mean? Since the Gini coefficient ranges from 0 to 1, the number -51.27 means that for every 0.01 increase in a country's Gini coefficient, we expect the percentage of invalid votes to decrease by about 0.51, or half a percentage point. This is not a causal relationship, mind; this is only a measure of association. Generally, in the data set, districts with higher income inequality have lower levels of invalid voting, quantified by this coefficient and holding the effects of other variables constant. The number in parentheses, 7.02, is something called a standard error, which quantifies how variable we expect that -51.27 estimate would be if we ran this same regression for many different samples.

What about the stars? These help us determine whether the -51.27 coefficient is actually different from zero. In other words, we want to determine whether the relationship between invalid voting and the Gini index is statistically significant. In analyzing regression model output, we test the hypothesis that the coefficient is equal to zero. These tests are conducted separately for each regressor. Rejecting the null hypothesis means our data suggests the parameter truly does not equal zero for the population, i.e. the parameter is statistically significantly different from zero. Statistical significance is tricky business with complicated theory, so stick with me. Under the null hypothesis that the true coefficient is zero, the coefficient divided by its standard error approximately follows the standard normal distribution for large samples. The standard normal distribution has a well-known feature that roughly 95% of its mass is located within two standard deviations of the mean, which equals zero. That means only the 5% most extreme outcomes live more than two standard deviations away from zero. This is where p-values come into play. The p-value summarizes how far out into the extreme tails of the distribution our observed outcome falls. You may have noticed the tiny font in the bottom-left corner of the table. The first part says that p is less than .05 if a coefficient has one star next to it.
Translation: if your coefficient has one star, then that value is further from the mean of zero for its distribution than at least 95% of all possible values. If your coefficient has three stars, that value is further away from zero than 99.9% of all possible values! So how far away from zero do we need our estimated parameter to be before we decide it truly isn’t zero? Researchers usually decide this in advance and typically use p-value thresholds of .05 and .01. Any values lower than these thresholds indicate we should reject the null hypothesis and conclude our data provides evidence that the population parameter is not zero. Phew, you survived! One last thing: standardized coefficients. All that means is that instead of measuring the expected change in Y for a one-unit change in X, we’re measuring the expected change for a one standard deviation change in X. Some people find this interpretation more useful. When reviewing regressions analyses for debate purposes, make sure you review not only the coefficient estimate, but also whether it is statistically significant and at what threshold. Also keep in mind whether the model uses appropriate control variables to avoid lumping the effects of other factors in with the regressor the researcher is actually interested in. General notes This article does not, by any means, cover all of statistics, let alone all of statistics that debaters might find useful. The explanations of many terms and ideas will also not satisfy more curious debaters who may already have taken statistics classes. However, it covers an assortment of topics which might improve the quality of evidence comparison, cross-examination, and out-of-round research. Rather than resorting to flowery language and doomsday rhetoric in weighing impacts, statistical research allows debaters to precisely quantify and rigorously support impact scenarios, which I guarantee judges will be appreciative of and impressed by. Though this type of research may seem inaccessible and arcane at first glance, any debater who uses or encounters quantitative studies should understand how they function, which parts can be used most effectively, and where intimidating numbers and figures might gloss over fundamental errors in reasoning and design. Many topics will have a vast literature of empirical research supporting either side. Debaters should be equipped with the tools to intelligently discuss and defend their quantitative evidence against contrary positions just as they would any ethical philosophy text. Eli holds a Master of Science degree in Statistics from Texas A&M University. He competed in Lincoln-Douglas debate for Stoneman Douglas HS in Parkland, Florida, reaching the TOC his senior year.
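As a closing illustration of the coefficient-to-p-value arithmetic walked through above, here is a small sketch of my own (not from the article) that simply plugs in the Model I numbers for the Gini index: the z-statistic is the coefficient divided by its standard error, and the two-sided p-value measures how far into the tails of the standard normal distribution that value falls.

```python
from scipy.stats import norm

coef, se = -51.27, 7.02        # Gini index estimate and standard error from Model I
z = coef / se                  # approximately standard normal under H0: coefficient = 0
p_value = 2 * norm.sf(abs(z))  # two-sided p-value
print(round(z, 2), p_value)    # z is about -7.3; p is far below the .05 and .001 cutoffs
```

A value this extreme would earn three stars under the thresholds described above, consistent with a strongly significant relationship between inequality and invalid voting in that model.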
https://nsdupdate.com/2020/intro-to-statistics-part-3-by-eli-hymson/
The Common Household Prison in Toni Morrison’s Song of Solomon

In Song of Solomon, oppression takes on various forms, the most prevalent being racial oppression, or so it seems. The author Toni Morrison laces the oppression of women throughout the plot in a way that is not clearly evident at face value but becomes concrete upon detailed analysis. The system of patriarchy that exists defines women’s role as the caretaker of the house, the children, and the men, which places their focus on everyone but themselves. Morrison demonstrates the negative impacts that patriarchal culture has on women in order to challenge these ideals, which are deeply rooted in society today. In Song of Solomon, Toni Morrison uses relationships between men and women to reveal how the patriarchy, which is maintained through traditions and societal norms, causes the destruction of women’s self-image and sanity at the expense of men’s success.

Traditions like songs and familial teachings are among the modes used in the novel to uphold expectations for the roles of women. Toni Morrison repeatedly uses songs as a means to represent tradition that is passed from generation to generation. In the beginning of the novel the song’s lyrics are “sugarman done gone” (Morrison 9), whereas at the end they are “solomon done gone” (303), demonstrating a shift in the subject but not the content. The lack of thematic change in the songs illustrates how their content, which alludes to the idea of women’s abandonment by men, has not faded despite the chronological shift. Another mode by which gender roles are maintained is familial teachings which uphold traditional values. This is seen through the passing down of values from Macon II to Milkman. Macon enforces the value of ownership, including the ownership of “other people” (55). This statement displays the ideals of masculinity and patriarchy that are taught to Milkman as he approaches manhood. As a result of these teachings, Milkman subconsciously embodies the ideals and values of gender roles that his father upholds.

Along with traditions, societal norms continue to enforce and secure the roles of women. Corinthians and Lena are seen mindlessly making artificial roses because this is what is expected of them. It is an unspoken expectation that girls stay at home while the men perform the real business outside of the house. Although real roses are often associated with love and happiness, these artificial roses represent the opposite. The roses that they make are described as “bright, lifeless” (10), revealing a complex dynamic between two contrasting descriptions. Although the roses may appear bright in color, they truly remind Corinthians and Lena “of death” (198), more specifically their own. The making of artificial roses is a symbol for the endless belittling tasks that women are expected to perform. The task may be interpreted as bright and lively, but Morrison reveals its dark and degrading aspects.

The same societal norms reinforce ideas of how women should dress to impress men. Products in the women’s beauty industry display ideals of the perfect woman. Hagar comes across a perfume bottle labeled “Myrurgia for primeval woman who creates for him a world for tender privacy” (311). The label reveals how, in an ideal world, women live to create a better world for men.
In reality, beauty products should be for making a woman feel better about herself, but Morrison’s specific description of the label reveals that the beauty industry has been built for the purpose of serving men’s needs. Patriarchy is indoctrinated into society through traditional teachings and the beauty industry; these ideals lead to women’s lack of self-direction and worth.

Morrison demonstrates the damage that internalizing societal expectations does to women through various relationships in the novel. As women embody their role as the standard housewife, they tend to lose sight of who they are as individuals and become trapped in the patriarchal system. Hagar devotes herself to Milkman, and when he leaves her she is no longer able to cope with her life in his absence. Hagar says, “he don’t like hair like mine” (315), without ever receiving concrete evidence that this is the reason Milkman left her. This quote reveals Hagar’s falsely construed self-image, which is a result of societal expectations of beauty. In addition, Hagar turns to criticize herself rather than Milkman due to his more powerful status as a man. Morrison uses Hagar’s critique of herself to display the damaging effects this system of oppression has on women. The role Hagar has accepted as a woman acts to make her “a puppet strung up by a puppet master” (301). A clear image of Hagar being controlled by someone is evoked through this quote. The puppet master referred to is the established set of gender roles, or, more specifically, Milkman himself. The comparison of Hagar to a puppet is crucial, as it reinforces the idea that these gender roles do not allow women to live for themselves. Puppets are inherently used for the entertainment of others, which may also point to the underlying message that Hagar is performing all acts for the enjoyment of others rather than for herself.

Generational similarities can be seen through Ruth and Hagar. Ruth is compared to a “prisoner automatically searching out the sun as he steps into the yard for his hour of exercise” (11) as she similarly searches for the watermark on her dining table several times a day. This watermark reminds Ruth of her father, on whom she depends for validation, as Hagar does on Milkman. The simile suggests that Ruth is like a prisoner, trapped by the dependency that she once had on her father. Although her father is dead, the table is stained, as she is stained by this relationship that does not allow her to be free to live for her own enjoyment. The house that Ruth lives in is described as “more prison than palace” (10). The repeated comparison of the household to a prison and Ruth to a prisoner is used to signify the entrapment of women that patriarchy has imposed upon them. Although Ruth lives in a bigger house than most others in her neighbourhood, the wealth that she possesses does not prevent her oppression as a woman.

Morrison paints the common household as a place of the destruction of women. It is where they are expected to perform tasks that do not improve their own wellness but rather everyone else’s. Both Ruth and Hagar lose sight of how they can achieve happiness and purpose through means which do not revolve around the men in their lives. Throughout the novel, women are continually described as deteriorating as the men in their lives abandon them in acts to free themselves from their life circumstances.
In multiple instances, men are commended for their acts of self-liberation at the expense of the women who have internalized the role of providing endlessly for them. These acts of liberation comment on the selfish condition that is socially accepted for men. Imagery and actual attempts at flight are presented throughout the novel specifically by men. Guitar corrects Milkman for mistaking a peacock for a girl by saying “that’s a he” (178). Guitar does this to ensure that Milkman understands that “the male is the only one got that tail” (178), implying that the importance of the tail is that it enables the male peacock to fly. The use of the peacock to assert that only men are capable of flight gives this gender-specific role a natural quality, implying that it is set in nature as true. Another instance in which male flight is seen as natural is when Milkman dreams of his own flight “in the relaxed position of a man lying on a couch reading a newspaper” (298). The imagery of a man on a couch suggests that flight for men is as easy and second nature as lying on the couch, something they do in their everyday life. Morrison’s description alludes to flight as an intrinsic act for men due to their social ranking, which allows them to fulfill all of their wants and needs without repercussions.

Repeatedly, men are seen as heroic because of their attempts at flight. Milkman is beyond exhilarated at the thought that his “great-granddaddy could flyyyyyy” (328). This excitement is expressed through the exaggerated spelling of fly as well as Milkman’s non-stop talking about the topic. Milkman’s inability to recognize the harm which Solomon’s flight caused Ryna is then reflected in his relationship with Hagar.

Flight by men in the novel is attached to the downfall of women in numerous instances. The repetition of this condition reinforces the prevalence of gender roles in women’s lives. Solomon’s song is one of the repeated modes implying that as men free themselves from their reality, they often leave behind the women in their lives. “Solomon don’t leave me here” (303) comments on the desperation and downfall of the women as the men leave in their own acts of flight. The liberation which the men feel is balanced by the destruction that the women experience. The women who are left behind by the men’s flight are described as having “lost their minds, or died or something” (323). The deaths of Ryna and Hagar both follow the series of events stated by Solomon’s song. Although the circumstances that lead to the self-destruction of these two women differ greatly, their reactions align very closely. This similarity is due to the gender roles that have transcended generations and caused women to lose their sense of direction and self-fulfillment in the absence of men.

In Toni Morrison’s Song of Solomon, the oppression of women is created through a patriarchal system. This oppression prevents women from performing roles other than what is expected of them. As women accept the oppression that is forced upon them, they may develop a lack of self-worth and sanity. Familial traditions and the beauty industry act to maintain women’s proper roles in relation to those around them, mostly men. Men’s natural tendency to seek freedom from their current circumstances is repeatedly described through the use of flight. Flight is able to free one from one’s surroundings, but it also leaves behind those who cannot fly.
Morrison repeatedly describes men as being capable of flight, while women are imprisoned by this societal system of patriarchy. The condition of men’s freedom at the expense of women’s entrapment is made apparent by Morrison. Morrison’s critique of the premeditated roles of men and women in her novel poses the question of how prevalent this condition is in today’s society.
https://literatureessaysamples.com/the-common-household-prison-in-toni-morrisons-song-of-solomon/
What is the learning theory in criminology?

The social learning theory of crime argues that some people learn to commit crimes through the same process through which others learn to conform. The theory assumes that people, at birth, have neither a motivation to commit crime nor to conform.

How is criminal behavior learned?

1. Criminal behavior is learned from other individuals. 2. Criminal behavior is learned in interaction with other persons in a process of communication. … A person becomes delinquent because of an excess of definitions favorable to violation of law over definitions unfavorable to violation of the law.

What theory best explains criminal behavior?

Rational choice theory: People generally act in their self-interest and make decisions to commit crime after weighing the potential risks (including getting caught and punished) against the rewards.

What are the 3 theories of criminal behavior?

Broadly speaking, criminal behavior theories involve three categories of factors: psychological, biological, and social.

How does learning occur in social learning theory?

Social learning theory is a theory of learning process and social behavior which proposes that new behaviors can be acquired by observing and imitating others. … In addition to the observation of behavior, learning also occurs through the observation of rewards and punishments, a process known as vicarious reinforcement.

What are the key concepts of social learning theory?
https://shalinemiller.com/qa/quick-answer-how-does-learning-theory-explain-criminal-behavior.html
HONOLULU (HawaiiNewsNow) – Governor David Ige on Thursday signed five education bills that will fund efforts to create better learning environments for traditional and non-traditional students.

“Collectively, these measures allow our public schools to focus on workforce development and ensure schools have the resources they need to provide a healthy and safe learning environment,” Ige said.

House Bill 2000 will allocate $200 million to the School Facilities Authority for the construction of pre-school facilities in fiscal year 2022 to 2023. This is the largest investment in public preschools in state history, Ige said. The bill also seeks to strengthen and improve conditions for receiving eligible children in public kindergartens.

Senate Bill 2182 will create a school garden coordinator position within the Department of Education. The state hopes that building on Hawaii’s farm-to-school programs will influence improvements in student health, the farm workforce, and education on the farm.

Senate Bill 2818 will create a Summer Learning Coordinator position within the DOE. The bill states that more than 60% of elementary and middle school students are behind in their studies due to the pandemic, necessitating support for summer programs. The coordinator will be responsible for all summer school-based programs, including public programs, online schools, credit recovery, and alternative summer learning programs.

Senate Bill 2862 earmarks $10 million to provide air conditioning units to public school classrooms that have not yet received them. The bill states that more than 5,000 classrooms still need to be upgraded. Ige hopes that cooling classrooms, especially during recent hot summers, will create an “attractive and engaging environment for learning”. Studies show that classrooms in Hawaii have recorded temperatures over 100 degrees during certain times of the year, as reported in the bill. In 2016, $100,000 was earmarked for the DOE, which, according to the bill, funded improvements for more than 1,300 public school classrooms.

House Bill 1561 establishes funding for a workforce readiness program within the DOE, specifically for mature or non-traditional students. The bill directs the department to designate schools that can participate in the program. Ige said he hopes it will create a “nurturing community for everyone”. Successful programs already exist at McKinley and Waipahu schools, according to Ige.

Copyright 2022 Hawaii News Now. All rights reserved.
https://sites4students.com/governor-signs-bill-providing-historic-investment-in-state-funded-preschools/
It is worth noting that situations similar to those described in this birth injury case could just as easily occur at any of the healthcare facilities in the area, such as Kaiser Permanente, UC Davis Medical Center, Mercy, or Sutter. (Please also note: the names and locations of all parties have been changed to protect the confidentiality of the participants in this medical malpractice case and its proceedings.)

This doctrine was again reviewed in the sentinel case of Haft v. Lone Palm Motel (1970) 3 Cal.3d 756, where defendants attempted to characterize imputing parental negligence as an intervening or superseding cause. In Haft, the negligent party attempted to argue that the alleged negligence of the father, Mr. Haft, in the death of his five-year-old son was a causation issue. They claimed that his failure to appropriately supervise was an intervening and superseding cause which broke the chain of proximate causation with respect to the deaths of father or son. (Haft v. Lone Palm Motel, supra, 3 Cal.3d 756, 769.) In response, the court stated as follows:

The fallacy of defendants’ contentions as to “superseding cause” is perhaps most clearly illuminated by its application to the cause of action relating to the death of five-year-old Mark. In that context the claim that defendants’ responsibility to Mark was cut off by Mr. Haft’s alleged negligence is in reality no more than an attempt to resurrect the doctrine of imputed contributory negligence between a minor and his parent, a theory which the California courts have long repudiated. (Crane v. Smith (1943) 23 Cal.2d 288, 295, 144 P.2d 356; Zarzana v. Neve Drug Co. (1919) 180 Cal. 32, 34–37, 179 P. 203) [FN15] The imputed contributory negligence formula transferred the negligence of a parent (in not carefully supervising his child, for example (see Hartfield v. Roper (N.Y. 1838) 21 Wend. 615, 34 Am.Dec. 273)) to a plaintiff child so as to bar the child’s recovery against an admittedly negligent defendant; defendants seek to obtain a like dispensation through the jury’s application (in reality, misapplication) to the nebulous superseding cause doctrine. This argument has no more merit phrased in superseding cause terms than it had in the context of imputed contributory negligence. (See Adamson v. Traylor (1962) 60 Wash.2d 332, 335–336, 373 P.2d 961, 963; Rest.2d Torts, s 452(1), com. b.) (Haft v. Lone Palm Motel, supra, 3 Cal.3d 756, 770.)

This is the same argument that Dr. Black’s testimony suggests: that by not returning to Dr. Hill or other physicians, Mrs. Smith’s actions became a superseding cause and a break in the causation chain. This argument and this testimony have no merit in the context of this litigation. (See Part 5 of 6.)

For more information you are welcome to contact Sacramento personal injury lawyer, Moseley Collins.
https://blog.moseleycollins.com/sacramento-doctors-try-to-blam/
Core Differences Between Interpreter and Translator

The primary distinctions between an interpreter and a translator may be found in each service’s medium and skill set: interpreters verbally translate spoken language, whereas translators translate the written word. However, both demand thorough cultural and language awareness, in-depth topic knowledge, and the capacity to communicate effectively. Although the names are sometimes used interchangeably, knowing the differences between these closely related language domains is critical when selecting the service you want.

Interpretation

Interpretation is a service that is provided in real time. It is provided live, without using scripts, dictionaries, or other reference materials, either at the same time as (simultaneous) or immediately after (consecutive) the original speech. Professional interpreters must transpose the source language into context, keeping the original meaning while rephrasing idioms, colloquialisms, and other culturally unique allusions in a way that the target audience can comprehend.

Translation

The fact that most professional translators employ computer-assisted tools in their work is perhaps the most significant distinction between interpreters and translators. This entails turning the original content into an easy-to-work-with file type (usually RTF), using a translation memory (TM) to automatically translate everything the program has already translated before, and filling in the remaining gaps from scratch.

Here are the critical differences between interpreter and translator.

Interpreters deal with conversation.

An interpreter’s primary responsibility is to communicate in another language. To be accurate in their interpretation, they must have native-level fluency in at least two languages. This means they must be fluent in your language, such as English, as well as the language in which you are seeking to converse. Translators can also translate conversations. However, they’re more concerned with precisely documenting whatever you say in the language you speak. They aren’t there to make typical real-time discussion easier.

Translators work with written language.

A translator can help you translate from one language to another. This is not an area in which interpreters excel. Technical manuals, official correspondence, and other material may all be handled this way. You don’t need to know how to write in another language to be a good interpreter. Most of them do, but it isn’t a deal-breaker in that field. Instead, being conversational is more crucial.

Conversation

Translators can deal with spoken language and sometimes do. They can translate phone calls, for example. However, when spoken conversation has to be handled quickly and efficiently, interpreters are hired. This is why world leaders use interpreters. A translator can read through a document before translating it; interpreters must move seamlessly from one language to the next. This necessitates a set of abilities that is not required of a translator.

Use of Correct Words

You ask a translator to translate something from one language to another and then return it to you. A translator must have a wide range of expertise to do this. A single word in the wrong place at the wrong time may radically transform the meaning of a phrase. Interpreters are concerned with correctness as well, but the essential thing to them is the meaning.
Real-time communication works in an entirely different way than textual communication. Learn more like this: https://24x7offshoring.com/blog/

Working Conditions

Interpreters operate in high-traffic settings such as conference rooms, military bases, and diplomatic gatherings. They must travel frequently, and many are on call. A single critical phone call may have them leaving the country that night. The requirements for translators are less stringent. If a translator is self-employed, they may be able to work from home. To finish a project, a translator may be required to work long hours. If you’re considering hiring a translator, do your homework beforehand. Many of the requirements for interpreters and translators are similar.

The number of languages available

You need to know two languages to be an effective interpreter. An interpreter, for example, may be a native Spanish speaker whose job is translating their original language into English. Translators frequently speak more than one language. Linguistics is a vast subject. The practical applications are numerous, and facilitating communication is critical. To be an excellent translator, you need to be fluent in at least a few languages, preferably ones that are geographically adjacent to one another.

Context and Culture

It’s simple to acquire context from written materials. They express themselves clearly and are frequently written in technical terminology. This eliminates some, but not all, of the cultural and contextual influences on communication. To be more precise, an excellent translator will leverage the context of the communication. You must have a basic understanding of a culture to be an accurate interpreter. This is because you must understand why and what people are saying when conversing. There are sayings and analogies in every culture and language that don’t translate word for word, or, if they do, don’t make much sense.

A good writer is required of a translator.

A translator must be able to write correctly in order to do their work well. This is because they must be able to write with both precision and grammatical expertise. Because interpreters are primarily concerned with oral translation, they are not required to be outstanding writers. This is a significant distinction in the debate between interpreter and translator. Translators must also possess cultural intelligence in order to comprehend linguistic variances: French Canadian and European French, for example. A translator can be more accurate if they understand the difference between the two.

Translations are time-consuming.

Because interpreters and translators must have a thorough understanding of linguistics, the majority of them hold bachelor’s degrees. However, they employ languages in distinct ways. To translate the written word, translators require time. They must rely on their own understanding as well as research to convey a country’s cultural subtleties accurately.
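The translation-memory idea described above, reusing segments the tool has already translated and only sending genuinely new text to the human translator, can be illustrated with a small sketch. This is a simplified illustration of the general concept, not the behaviour of any particular CAT tool; the example sentences, the Spanish translations, and the 75% match threshold are all invented for demonstration.

```python
# Simplified sketch of a translation memory (TM) lookup: compare a new source
# segment against previously translated segments and reuse the stored
# translation when the fuzzy match is close enough. Example data is invented.
from difflib import SequenceMatcher

translation_memory = {
    "The device must be switched off before cleaning.":
        "El dispositivo debe apagarse antes de la limpieza.",
    "Do not expose the device to water.":
        "No exponga el dispositivo al agua.",
}

def tm_lookup(segment, threshold=0.75):
    """Return (stored translation, similarity) for the best match, or None."""
    best_score, best_translation = 0.0, None
    for source, target in translation_memory.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score > best_score:
            best_score, best_translation = score, target
    return (best_translation, best_score) if best_score >= threshold else None

match = tm_lookup("The device must be switched off before any cleaning.")
if match:
    print(f"Fuzzy match ({match[1]:.0%}): {match[0]}")   # reuse the stored translation
else:
    print("No match: this segment goes to the translator to work from scratch.")
```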
https://24x7offshoring.com/services/differences-between-interpreter-and-translator
math
Please someone help!! A rectangular table is two times as long as it is wide. If the area is 98 ft^2, what is the width and length of the table?

- Algebra
The area of the conference table in Mr. Nathan’s office must be no more than 175 ft2. If the length of the table is 18 ft more than the width, x, which interval can be the possible widths?

- algebra 1
Evan is making a table that will be created in the shape of the figure below. The table top is a triangle attached to a rectangle. To purchase the right amount of paint, he needs to know the area of the table top. He can only

- science
Why did a portion of the tabletop get cold when Natasha placed her cold drink on the table? (1 point) The table’s molecules were vibrating more slowly in that area. The table’s molecules had spread further apart in that area.

- Algebra 2
A rectangular table is five times as long as it is wide. If the area is 245 ft^2, find the length and width of the table.

- Science
Students measure the length of a paper clip whose actual length is 3.2 cm. Whose measurements are precise but not accurate? samuel: 2.7, 2.7, 2.7, 2.8; jose: 2.2, 2.6, 3.0, 3.3; nikita: 3.1, 3.1, 3.2, 3.2; anne: 2.6, 3.2, 3.2, 3.6

- algebra
A rectangular table is six times as long as it is wide. If the area is 54 ft^2, find the length and the width of the table.

- physics
A thin 2 kg box rests on a 6 kg board that hangs over the table. The length of the board on the table is 30 cm and the length of the board hanging off the table is 20 cm. How far can the center of the box be from the end of the

- Physics
A uniform chain of total mass m is laid out straight on a frictionless table and held stationary so that one-quarter of its length, L = 1.79 m, is hanging vertically over the edge of the table. The chain is then released.
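As a worked sketch of the first question above (my own illustration, not an answer posted on the forum): let w be the width of the table in feet, so the length is 2w and the area condition gives

```latex
% Width w, length 2w, area 98 ft^2; the negative root is discarded since a width must be positive.
\[
  w \cdot 2w = 98
  \;\Longrightarrow\;
  2w^{2} = 98
  \;\Longrightarrow\;
  w^{2} = 49
  \;\Longrightarrow\;
  w = 7~\text{ft}, \qquad \text{length} = 2w = 14~\text{ft}.
\]
```

The same setup (width times a stated multiple of the width equals the area) works for the other rectangular-table questions in the list.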
https://www.jiskha.com/questions/1749442/mang-jose-wants-to-make-a-table-which-has-an-area-of-6m-2-the-length-of-the-table-has-to
I’d like to start this blog by sharing some ideas about teaching free improvisation to musicians who are new to it. I have one exercise, “Forest Duos,” that has proved to be an effective introduction to some of the key elements of group improvisation and choices one has as an improviser: listening, making transitions and endings, choosing material, choosing among varieties of interaction (the spectrum from imitative counterpoint to independence and contrast; use of space), development, flow, free and metric rhythm, dynamics, timbre, etc.

I’ll be part of a panel discussion on this subject at the 2019 JEN (Jazz Education Network) Conference in Reno with Ryan Meagher (organizer), Dawn Clement, Ralph Alessi, and Samantha Boshnack. It’s called “Coloring Outside the Lines: How We Can Encourage Our Students to Truly Explore Improvisation,” and it’s Saturday, January 12, 2019, 2:00 PM – 2:50 PM in Sierra EL.

Here’s a handout I prepared for a similar panel on teaching free improvisation from the IAJE (International Association of Jazz Educators) conference in New Orleans in January 2000. Graham Collier and Ed Sarath were also on the panel. This one-page summary includes some listening suggestions and some beginning ensemble teaching techniques:

Teaching Group Free Improvisation – IAJE 2000

Here’s a full explanation of my Forest Duos idea. It’s both an exercise with step-by-step instructions, and a composition or framework that can be used in performance:

Forest Duos – Group Improvisation – Chase – 2019

And here is a sample template and a blank template for use in structuring a Forest Duos performance.

Forest Duos – Sample Template – Chase
Forest Duos Blank Template – Chase
Forest Duos Blank Template – Chase – Word Doc to Download

As I say in the Forest Duos document, this evolved out of my studies with some of the great innovators in free jazz and free improvisation at the Creative Music Studio in Woodstock, NY, from my work with Tom Hall and other members of Your Neighborhood Saxophone Quartet over almost four decades of playing together, and from my work with student ensembles, mostly septets and octets: Berklee “large avant-garde ensembles” (1983-88), NEC Duo Ensembles (2003-9), and particularly this graduate ensemble I taught at New England Conservatory in 1995-6 (who helped name the elements like Forest, etc.):

(The ensemble that first worked on Forest Duos and helped develop the idea. Back row: Joel Springer, Thomson Kneeland, Joe Karten, Zach Buell, Russell Mofsky. Front row: Eric Rasmussen, Allan Chase, JC Sanford. Not pictured: Satoko Fujii (piano). NEC 1995-6.)

When we talk about teaching free improvisation, a few frequently asked questions are:

• What is free improvisation?

The phrase is a commonly-used shorthand for improvisation that is open in form, where the form is improvised or flexible rather than specified in detail in advance; also, usually, there is no precondition about tonal or modal harmony. It doesn’t mean there can’t be any agreed-upon structure, any criteria or values, or any predetermined (stated or understood) guidelines for playing together, or that “anything goes.” Free jazz is usually used for music that has more characteristics of jazz — for example, in the roles of rhythm section instruments, or the way a composed theme is used — but has an improvised form (not specified choruses of predetermined length) and/or freedom to move anywhere tonally.

• How can you teach something that you can’t evaluate?
What basis could you have for assessment of free improvisation in an educational setting? This question seems to be based on the premise that bebop improvisation is the norm in jazz education, and it is measurably right or wrong (the student is making the chord changes or not). There’s some truth to that, although simply making the changes is a small part of the art of bebop improvisation. But many things are taught where the form and details are not predetermined or strictly measurable as right or wrong: contemporary classical composition, creative writing, abstract visual art, modern dance choreography, and many other things. Teachers are not afraid to teach these subjects because they can’t assess every aspect of them quantitatively. If you’re concerned about assessment and measurable learning outcomes, you can make a grading rubric that weighs musical aspects — success in achieving the goals of the piece or project — as well as participation, effort, improvement, and productivity, as you might for a visual art or writing assignment in a class where students may have a range of ability and experience.

• How can we do this while maintaining control of the classroom?

I think this is determined largely by the messages the teacher conveys. If the teacher or leader says this is serious but fun, a creative but structured experience, and we’re going to make something interesting, and gives the right amount of structure (process, duration, roles to play), then students can get started on a positive track, and they’ll respond well to greater freedom later. On the other hand, if the teacher is apologetic, anxious, disrespectful of the music (a free section was often called a “freakout” when I was a young jazz student in the early 1970s, with predictable results), or implies that something out of control is about to happen, then the results will probably reflect those expectations that the teacher has created.

• Who likes this music?

Not all jazz educators have learned to listen to and appreciate free jazz and free improvised musics. Some have a tolerance or sincere liking for Ornette Coleman’s early quartets and/or the freer music of contemporary players who have ostensibly “proven themselves” in post-bebop harmonically determined music — the Wayne Shorter Quartet, David Liebman, Joe Lovano, some ECM artists, etc. — but they may not “get” the music of Cecil Taylor, Albert Ayler, the AACM, or the European free improvisation scenes, for example. It’s important to know that the appreciation of this music is gained by listening and curiosity. Learning about the historical and cultural context and the artists’ biographies can help, too. You don’t have to like this, or any style of music, but it is a significant 60-year-old tradition at this point (free jazz has been around for more than half the history of recorded jazz) and there’s a worldwide audience for it which I would estimate is as large or larger than the audience for, say, traditional bebop instrumental jazz today. For example, the Big Ears Festival takes over Knoxville, Tennessee for four days each year with multiple simultaneous sold-out theaters full of people listening to a Milford Graves solo drum set concert, Evan Parker’s Electro-Acoustic Ensemble, Roscoe Mitchell’s quartet, the ROVA saxophone quartet and guests playing “Ascension,” and dozens of others. The Moers Festival in Germany, Victoriaville in Quebec, and several others have been successful for decades, as are many small record labels and publications dedicated to this music.
There are free improvisation venues and scenes in almost every major city, and in some small cities around the world. It’s a small portion of the music industry, of course (as is jazz as a whole), but it’s vibrant and ongoing, and comparable in size to many other traditional and avant-garde music scenes.

I think it’s important to point out that “free improvisation” and the related “free jazz” are not single styles of music. There have been, and are, subcultures of free improvisors and free jazz players that have developed quite distinct aesthetics, practices, and materials, and often they don’t interact with one another easily. The differences, it seems to me, are bigger than, for example, the differences among the swing-era players and New York and Los Angeles bebop players who appeared together on Jazz at the Philharmonic jam sessions. They shared a repertoire and enough assumptions about form, harmony, instrumental roles, and interaction to play music effectively together, even without rehearsal. Major innovators of free jazz and free improvised music coming out of jazz — for example, Ornette Coleman, Sun Ra, Cecil Taylor, Roscoe Mitchell, Evan Parker, and John Zorn — have rarely performed with one another and have quite different musical ideas and repertoires. There are many prominent circles of players in free jazz and free improvisation that have had little overlap of players over decades. Some of this is social and geographical, but it’s also because they have different ideas about music. There have been some interesting encounters between these groups, but they are the exceptions rather than the rule.

I don’t use the term “non-idiomatic improvisation”* because, as others have pointed out, if it’s recognizably, audibly a thing, then it has musical characteristics (including the absence of certain common musical elements) and is an idiom or style. Everything I can think of that’s been called “non-idiomatic” has sounded like post-1945 new music in the international (initially European and American) style due to a tendency to avoid tonal, conjunct, conventionally metric material. There are also certain recognizable traits in the pace and types of interaction and development over time. Often subgroups of improvisers within this field have very specific ideas about repetition, development, imitation, and metric agreement, or the avoidance of them. It seems a little inaccurate and perhaps self-flattering to suggest that this recognizable body of music is the only one that is not idiomatic. But categories and names for musical styles (including, of course, “classical” and “jazz”) are always incomplete and contentious and their boundaries are fuzzy, as they should be. The names of musical styles are just nicknames for loose groups of musics that have a family resemblance to one another.

I’ll write about teaching the history and analysis of free jazz, and about the music of Ornette Coleman, Cecil Taylor, and Sun Ra (the subject of my M.A. thesis) in future posts. I’ll also try to follow up with a list of further resources for teachers: books, websites, listening materials, colleagues’ ideas…and free jazz playalongs (yes, they exist).
https://allan-chase.com/2018/01/01/first-blog-post/
Divorce can be a time-consuming process in Kentucky. By the time someone has managed to work his or her way through child custody, support and more, there may not be much energy left to fully deal with matters like property division. This can be problematic, as dividing up marital assets can have financial implications for one’s future. Here are a few things to watch out for when heading into this part of divorce.

Some assets that seem equal in value can have different tax implications. For example, there is a difference between taking $100 cash and a stock that is worth $100, because there are capital gains taxes to consider when selling stock. Stocks are not the only taxable asset to keep an eye out for during property division either, so it is important to carefully evaluate marital assets before agreeing to anything.

Retirement accounts also pose potential problems, especially 401(k) accounts. Distributions are taxed, so it leads to the same problem as the stock — one spouse might get much less than the other. Other problems with 401(k) accounts arise when one spouse tries to simply withdraw part of the funds as part of the divorce settlement, which involves a tax withholding of 20%. One solution is to draft a qualified domestic relations order — a QDRO — which can avoid taxes by directly rolling funds into a different account.

Failing to consider taxes during property division can have negative, long-term implications. Unfortunately, it can be difficult to fully consider the outcome of certain decisions when there are many different pressing matters at hand. Speaking with an attorney who is experienced in Kentucky family law can prove helpful for those who are unsure of how to approach these types of financial decisions.
https://www.bluegrassfamilylaw.com/blog/2020/12/remember-taxes-during-property-division/
# KBUN-FM KBUN-FM (104.5 FM, "Sports Radio FM 104.5 The Fan") is a sports radio station based in Bemidji, Minnesota, licensed to nearby Blackduck, Minnesota. It is owned by Hubbard Broadcasting, Inc. The Bemidji studios are located at 502 Beltrami Avenue, downtown Bemidji. The transmitter site is north of Lake Bemidji on Sumac Road. The station signed on May 1, 2008 as WQXJ, as the newest station to Omni Broadcasting's Paul Bunyan Broadcasting group. Originally, it was an affiliate of The True Oldies Channel. It later derived most of its programming from Westwood One's Classic Hits. KBUN-FM is the local home to the Minnesota Twins broadcasts, along with its sister station KBUN. Hubbard Broadcasting announced on November 13, 2014 that it would purchase the Omni Broadcasting stations, including WQXJ. The sale was completed on February 27, 2015, at a purchase price of $8 million for the 16 stations and one translator. On October 26, 2015, WQXJ changed its program format to sports, branded as "Sports Radio FM 104.5 The Fan" under new call letters, KBUN-FM.
https://en.wikipedia.org/wiki/KBUN-FM
Patients with SCT are everywhere in Northeastern Brazil. In the state of Sergipe, the prevalence of HbAS among blood donors is 4.1%.9 However, the prevalence among newborn infants in the general population, as well as their spatial distribution, is not known. The Newborn Screening Program (NSP) is a public health project that screens all babies for a range of conditions, including phenylketonuria, congenital hypothyroidism, SCD, and cystic fibrosis.10 The use of geotechnologies and spatial analysis of HbAS SCT may enable effective health care planning to address the needs of this population. This study aimed to describe the spatial distribution of individuals with HbAS SCT by using data from the NSP for hemoglobinopathies in the state of Sergipe, Northeastern Brazil.

METHOD

This study was conducted in the state of Sergipe, the smallest federal unit in terms of territorial extension in Brazil (21,910 km2). Sergipe is in the Northeastern region and comprises 75 cities, grouped into three mesoregions: Eastern, Agreste, and Sertão Sergipano. Its Human Development Index (HDI) is 0.681, life expectancy at birth is 72.1 years, and the infant mortality rate is 18 per 1,000 live births.11 Approximately 50% of the population lives below the Poverty Index.12 The population has many ethnic and national origins, including especially Portuguese, Germans, Italians,13 and black Africans.14

The studied population included live births in the first year of implementation of the NSP for hemoglobinopathies in Sergipe (October 2011 to October 2012), consisting of about 80% of the live births in the state during this period. The remaining 20% of newborn babies were screened at private outpatient clinics, and data from this population are not available. The screening program collected heel prick samples up to 30 days after birth, and babies with positive screening were retested. Tests were carried out in basic health units and forwarded to the University Hospital laboratory, where they were analyzed. Isoelectric Focusing Electrophoresis was performed to identify HbAS SCT, as recommended by the Brazilian government directive MS 822/01. Data regarding gender, ethnicity, birth date, and zip code of birth were also collected. Information about population estimates and self-reported ethnicity was gathered from the Brazilian Institute of Geography and Statistics,15 available on the website of the Technology Department of the public health system (Departamento de Informática do Sistema Único de Saúde - DATASUS).16

We calculated the cumulative incidence of HbAS SCT as the proportion of new cases during the period of study divided by the total population at risk. The existence of spatial patterns of HbAS SCT in Sergipe was measured by Moran’s Index (I). Analyses were made by area (cities). We adopted the global spatial autocorrelation Moran’s I statistic to assess the degree of similarity between a certain location and its neighboring units. Positive values (between 0 and +1) were associated with spatial clustering patterns, whereas negative values (between 0 and -1) indicated a spatial dispersion pattern. A Moran’s I value close to zero represented a random pattern of distribution. We elaborated a Moran scatter plot to visualize the results. Observations in the lower left (Low-low) and upper right (High-high) quadrants represented potential spatial clusters, while observations in the upper left (Low-high) and lower right (High-low) suggested potential spatial outliers.
The slope of the scatter plot corresponded to the value for global Moran’s I.17 The location of clusters or hotspots of HbAS SCT were examined by the Local Indicators of Spatial Association (LISA) cluster map. We used the Benjamini-Hochberg False Discovery Rate (FDR) to adjust the p values. We also adopted the Kernel method, a statistical non-parametric interpolation technique, in which a distribution of points or events is transformed into a “continuous risk surface”. This procedure allowed us to filter the variability of the data set, without, however, changing its local characteristics in an essential way.18 The analyses were performed with the R-language (R 2.8.1) and TerraView (4.2.2).19 We considered significant a p<0.05. The local Research Ethics Committee approved this study under the Approval Protocol CAAE-06347012.0.0000.0058.

RESULTS

The study included 32,906 children born in the state of Sergipe from October 2011 to October 2012. A total of 921 children had HbAS SCT, 242 had hemoglobin A (HbA) with another non-S hemoglobin, and 21 were diagnosed with SCD (Table 1). Figure 1 shows the spatial distribution of 921 cases of HbAS Sickle Cell Trait in Sergipe State.

| Result | Frequency | % of cases | *Incidence coefficient |
| --- | --- | --- | --- |
| AFS | 921 | 76.6% | 2.70 |
| AFC | 234 | 19.4% | 0.70 |
| AFA2 | 18 | 1.5% | 0.05 |
| FS | 16 | 1.3% | 0.04 |
| AFD | 7 | 0.5% | 0.02 |
| FSC | 4 | 0.3% | 0.01 |
| FC | 1 | 0.0% | 0.003 |
| AF indeterminate variant | 1 | 0.0% | 0.003 |
| Total | 1,202 | 100% | - |

AFS: heterozygous for Hg S and A; AFC: heterozygous for Hg C and A; AFA2: thalassemia; FS: homozygous for Hg S; AFD: heterozygous for Hg A and D; FSC: double heterozygous for Hg S and C; FC: homozygous for Hg C; AF indeterminate variant: heterozygous for Hg A and indeterminate variant. *number of detected traits or hemoglobinopathies divided by all newborns tested in the period multiplied by 100.

HbAS presented a positive spatial correlation with the percentage of self-identified non-white individuals, indicating that, in Sergipe, both conditions have clustering characteristics (Moran’s Index 0.2339; p<0.001). According to the Moran scatter plot, the first quadrant of the coordinate system represented the spatial connectivity of the high observed value area unit surrounded by the high observed value region (High-high). The High-high clustering areas are mainly located in Aracaju (capital and main city of the state of Sergipe) and surrounding cities (Figure 2). By using LISA, we detected hotspots of HbAS in the Agreste and Eastern regions, especially in Aracaju and surrounding cities (Figure 3). Most cases of HbAS SCT were found in the cities of Aracaju (n=273; 22.7%), Nossa Senhora do Socorro (n=102; 8.4%), São Cristóvão (n=58; 4.8%), Itabaiana (n=39; 4.2%), Lagarto (n=37; 4.01%), and Estância (n=46; 4.9%) (Figure 4).

DISCUSSION

The incidence of HbS has been detected by neonatal screening in several Brazilian states,20,21,22 and this study was the first to report it for Sergipe. Universal neonatal screening could identify affected babies before any symptoms, as well as asymptomatic heterozygous individuals, who can still transmit the gene to their offspring. The geographical distribution of this “silent” population is of extreme interest. Sergipe still has quilombola communities, which are rural, suburban, or urban communities where enslaved descendants live and share a strong link to their African origins.23 Quilombola communities contributed to the maintenance of HbS areas because they used to be isolated and had many consanguineous marriages.
They are still quite closed communities. The lack of miscegenation in these regions might have allowed the maintenance of the high incidence of HbS.

Geographical distribution has been used to study diseases in epidemiological analyses.24,25,26,27 This tool becomes important in the study of genetic diseases when there is a possibility of intervention with genetic guidance, as in the case of an increased incidence of healthy carriers of the gene that causes the condition. Therefore, knowing the regions of Sergipe with a higher incidence of heterozygous individuals is useful to guide the planning of health care actions for patients with SCA, as well as informing asymptomatic carriers of their situation and counseling their families, which may change the incidence profile of SCD in this population. Screening for β-thalassemia trait in countries around the Mediterranean Sea region led to a drastic drop in the incidence of thalassemia cases because the affected families were informed about the carrier condition and had the opportunity to decide on their reproductive future.28 This experience raises the expectation that a similar approach with SCT individuals and their families could eventually affect the incidence of SCD. Regardless of the impact of screening in reducing the incidence of cases, individuals should be informed about their SCT condition so they can better weigh their reproductive decisions.

The distribution of new cases of HbS detected by the NSP is similar to that of cases of SCD in cities of Sergipe previously reported using another strategy.29 This result reinforces the need for support by health services in cities with a higher number of patients, focusing on treatment of acute events and medical follow-up, and on informing asymptomatic heterozygotes and their families about the carrier condition.30 The incidence of SCT in Sergipe, according to the NSP, was 2.7%, which is lower than the value estimated by Vivas9 in Aracaju (4.1%). This probably happened because the previous study estimated the HbAS proportion among blood donors, who may have agreed to a blood donation request from relatives with SCD.

The result of the universal screening in Sergipe reveals a departure from spatial randomness (p-value for the proportion of black and multiracial people <0.05). We found a positive spatial correlation; that is, high values of a variable tend to be surrounded by high values of the same variable in their adjacency. The association between SCD and black individuals has been present since the beginning of the disease characterization.31

In order to obtain the benefits of universal neonatal screening for hemoglobinopathies, besides the screening test, adequate medical follow-up should be available for patients with SCD, and their families should be informed about the condition.32 The implementation of these actions should be based on spatial distribution, prioritizing the regions with a higher incidence of HbS.

We emphasize that data collection started in 2011, when the public health system implemented universal neonatal screening in Sergipe. However, the population distribution has changed since 2011 due to migration, which may have modified the findings of this study. Also, we only assessed individuals treated in the public health system, so the frequency of hemoglobinopathies in the private system is not known. Despite these limitations, we found a positive spatial correlation between the incidence of HbAS SCT and a large proportion of black and multiracial people, indicating a clustered characteristic of the condition in the state of Sergipe.
We detected hotspots of HbAS SCT in the Agreste and Eastern regions, especially in Aracaju and surrounding cities.
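The global Moran's I reported above (0.2339) can, in general, be computed from per-area values and a spatial weights matrix describing which areas are neighbours. Below is a minimal Python sketch of the statistic itself; it is an illustration only, not the study's actual R/TerraView workflow, and the toy values and neighbourhood matrix are invented purely to make the example runnable.

```python
# Minimal sketch of the global Moran's I statistic used to test for spatial
# clustering of a rate across areas. `values` stands in for a per-city rate
# (e.g. HbAS incidence) and `w` is a binary contiguity matrix (w[i, j] = 1 if
# areas i and j are neighbours). The numbers below are invented toy data.
import numpy as np

def morans_i(values, w):
    """Global Moran's I: near +1 = clustering, near 0 = random, near -1 = dispersion."""
    n = len(values)
    z = values - values.mean()                 # deviations from the mean
    s0 = w.sum()                               # sum of all weights
    num = (w * np.outer(z, z)).sum()           # cross-products between neighbouring deviations
    den = (z ** 2).sum()
    return (n / s0) * num / den

# Toy example: four areas arranged in a line, neighbours share an edge.
values = np.array([2.7, 2.5, 0.9, 1.0])        # e.g. incidence per 100 newborns
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

print(f"Moran's I = {morans_i(values, w):.3f}")  # positive: similar values sit next to each other
```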
https://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-05822020000100427&lng=en&nrm=iso&tlng=en
Currently there is no consensus on the impact of dietary protein on calcium and bone metabolism. This study was conducted to examine the effect of increasing protein intake on urinary calcium excretion and to compare circulating levels of IGF-I and biochemical markers of bone turnover in healthy older men and women who consumed either a high or a low protein food supplement for 9 wk. Thirty-two subjects with usual protein intakes of less than 0.85 g/kg.d were randomly assigned to daily high (0.75 g/kg) or low (0.04 g/kg) protein supplement groups. Isocaloric diets were maintained by advising subjects to reduce their intake of carbohydrates. Selected biochemical measurements were made at baseline and on d 35 and either d 49 or 63. Changes in urinary calcium excretion in the two groups did not differ significantly over the course of the study. The high protein group had significantly higher levels of serum IGF-I (P = 0.008) and lower levels of urinary N-telopeptide (P = 0.038) over the period of d 35-49 or 63. We conclude that increasing protein intake from 0.78 to 1.55 g/kg.d with meat supplements in combination with reducing carbohydrate intake did not alter urine calcium excretion, but was associated with higher circulating levels of IGF-I, a bone growth factor, and lowered levels of urinary N-telopeptide, a marker of bone resorption. In contrast to the widely held belief that increased protein intake results in calcium wasting, meat supplements, when exchanged isocalorically for carbohydrates, may have a favorable impact on the skeleton in healthy older men and women.
https://www.ncbi.nlm.nih.gov/pubmed/15001604?dopt=Abstract
I was reading an elementary book on dark matter (in fact, a historical perspective), and it described how the scientific community reacted to the idea of dark matter, proposed as a solution to the observed discrepancy between the actual mass of astronomical systems and the mass predicted by Newton's theory. I was wondering where Einstein's theory stands in relation to dark matter: did it somehow predict it, or does dark matter prove the incompleteness of Einstein's theory? And what about dark energy?

Comments:
- Please don't cross-post. (HDE 226868, Dec 4 '15 at 23:04)
- Are you looking to find out about WIMPs? en.wikipedia.org/wiki/Weakly_interacting_massive_particles (Dec 11 '15 at 20:20)

Answer:

Dark matter was originally hypothesised because there is a greater degree of rotational energy in galaxies than visible matter would allow for - crudely, they rotate so fast that they ought to spin apart, and therefore it was hypothesised that there is an additional source of gravitational force in some form of invisible matter. This is not a result of a discrepancy in Einstein's General Relativity. Newtonian gravitation is a very good approximation of Einstein's GR at most ordinary scales and energies, and this is true in this case also.

I am not sure, exactly, what your "what about dark energy" question means, but assuming you mean "does this imply an incompleteness in GR", then again the answer is no, or at least not necessarily, but it's a more subtle point. Dark energy is the hypothesised source of the energy that causes the universe to expand at an accelerating rate. It can be inserted as a term into the field equations of GR (other explanations of DE exist though) - but that is just a mathematical term, rather than an explanation of the physicality of it. Einstein originally inserted such a term into his solutions - to predict an essentially static universe. When it was shown that the universe was expanding he described this as his "greatest mistake". The evidence for the accelerating rate of expansion is relatively recent and came long after Einstein's death, so he never lived to see the essential idea - of a "cosmological constant" in his equations - revived.
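To make the rotation-curve argument in the answer above a bit more concrete, here is the standard textbook sketch (my own addition, not part of the original answer). For a star on a roughly circular orbit of radius r around a galaxy with enclosed mass M(r), Newtonian gravity supplies the centripetal acceleration:

```latex
% Standard Newtonian rotation-curve argument (textbook sketch, not from the answer above).
\[
  \frac{v^{2}}{r} = \frac{G\,M(r)}{r^{2}}
  \quad\Longrightarrow\quad
  v(r) = \sqrt{\frac{G\,M(r)}{r}} .
\]
```

If essentially all of the mass were the visible matter concentrated toward the centre, M(r) would level off at large radii and v(r) would fall as 1/sqrt(r); the observed roughly flat rotation curves instead require M(r) to keep growing roughly in proportion to r, and that unseen extra mass is what "dark matter" names.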
https://astronomy.stackexchange.com/questions/12764/whats-the-relation-between-einsteins-gravitational-theory-and-dark-matter
Research

The stellar group's research topics include the evolution of single and binary stars, nucleosynthesis, stellar populations, supernovae and gamma-ray bursts, and stellar and circumstellar (magneto)hydrodynamics. Below are some examples of what our members have been involved with.

BOB: B-Fields in OB Stars

Massive stars are key agents in the Universe, driving the evolution of star-forming galaxies through their photons, winds and violent deaths at all redshifts. The BOB survey collects and studies spectropolarimetric observations for a large number of massive, early-type stars in order to study the occurrence rate, properties, and ultimately the origin of magnetic fields in massive stars.

VLT-FLAMES Tarantula Survey

This is an international collaboration studying about 900 massive stars in the LMC cluster 30 Doradus. The survey tries to answer questions such as the effects of stellar rotation and binarity on the evolution of stars and the binary fraction of massive stars, as well as studying the gas and stellar dynamics to provide input for models of star and cluster formation. VFTS 682 is a very bright and massive star in the neighbourhood of the Tarantula nebula, a possible 'runaway star'. The star's apparent isolation is exceptional, because very massive stars are usually found only in dense star cluster environments.

ISM Project

Our group is interested not just in stars themselves, but also in how they affect their surroundings. The Inter-Stellar Matter project investigates the feedback effects of winds and radiation from massive stars on the interstellar medium. We also model the effects of stellar explosions (supernovae) at the end of their lives.
“Unbanked” is an informal term for adults who do not own bank accounts and avoid using banks in any other capacity. The term ‘unbanked’ should not be confused with “underbanked,” which is used to describe people who do not have sufficient access to financial services. The problem of being unbanked is widely researched in the USA, where unbanked people represent 7.7 percent of the population. According to US research, most of the US’s unbanked are white, native-born Americans. They choose to be unbanked for various reasons, including involvement in illegal activities, distrust of financial institutions and extreme poverty. Banking the unbanked is an important issue and can help the poor to gain financial stability. Blockchain-based technologies are often referred to as a way of providing the services of financial institutions to people who don’t want to get involved or can’t be involved in traditional organizations.

Top News
- News: Arcane Research says that Lightning Network usage has been on a steep upward trajectory since late last year, but in September, growth went parabolic off the ba... (Arcane Research predicts 700 million Lightning Network users by 2030)
- Opinion: With the proper regulation, stablecoins could potentially fulfill their promise and enable more funds to reach those in greatest need. (Stablecoin adoption and the future of financial inclusion)
- Experts Answer: Here’s what crypto and blockchain experts think about the impact of blockchain technology on lesbian, gay, bisexual, transgender and int... (How will blockchain and crypto improve the lives of LGBTQ+ people? Experts answer)
- Analysis: Bitcoin ATMs may make it easier for the mainstream and unbanked to access crypto, but will security risks hamper adoption? (Bitcoin for cash: Do crypto ATMs make buying BTC easier for the mainstream?)
- Experts Answer: Here’s what crypto and blockchain experts think about Salvadoran President Nayib Bukele’s announcement that Bitcoin is now legal tender.... (What is really behind El Salvador’s ‘Bitcoin Law’? Experts answer)
- News: The fintech firm hopes to expand financial products for the unbanked. (Mobile money platform Pngme raises $3M to expand across Africa)
- Opinion: Without proper crypto regulation, the U.S. government might create a massive roadblock to financial inclusivity.
https://cointelegraph.com/tags/unbanked/amp
During the discussion, I found CRISPR to be a really interesting technology, in how it can generate and edit genes and how this can change someone's life. Before the discussion in class today I had no idea that there was such a thing, so I did more research on my own and found this video about CRISPR that I thought was helpful. As we discussed last class, technologies such as CRISPR are becoming more and more accessible to the general population. Because of this, gene editing and gene modification will become more and more prominent in the near future. But CRISPR is not the only tool with which we use technology to enhance and better our lives. In this post, I attempt to consider some of the common fears people have toward human alteration. In Melbourne, Australia, 13 birds, descendants of the common rock pigeon, have been brought to life. They have the Cas9 gene embedded in their genomes, which will be passed on from generation to generation. The common rock pigeon had been extinct until that moment. CRISPR technology, which we discussed during class, was one of the most interesting technologies that can generate and manipulate genes, that is, gene editing. Through this system, a virus can be removed, or one's life can be extended. Ethical concerns are raised because CRISPR is able to alter human genomes. Even though it does not affect certain types of cells and tissues, it has the potential to change egg and sperm cells. This would allow altering humans' basic traits, such as height or intelligence (U.S. National Library of Medicine). I found the topic of last class to be extraordinarily interesting in regards to learning about CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) gene editing technology. The CRISPR-Cas9 system is often compared to a pair of molecular scissors, allowing for the precise cutting of a particular strand or piece of DNA. It has established an entirely new realm of science: genome engineering. Here are some of the resources that were talked about yesterday! FUNGI, BACTERIA, AND CONSTRUCTION https://docs.google.com/presentation/d/1ydprAxlD5l1yszgAo4ja4v4QXXflxUh2NwNcmQMBwMM/edit?usp=sharing This week, we visited the ArtSci installation entitled The Noise Aquarium. Entering the simulated experience was extremely intense. The noises were very loud and the screen was eerie. The experience was also interactive for its audience, as a person could step on a pad and balance to better understand noise pollution and the microplastics that flood the ocean. The entire set-up was very immersive and almost surreal. I could tell a lot of work had been put in to really affect the viewer as they entered. Like all things encased in science, there tends to be a devotion among its believers to follow epochs of fantasy, and in the science of genetic engineering it is no different. From the beginning of time, fiction has been delving into what it means to manipulate the makeup of our human species, in alignment with its cause to make something it feels obligated to perfect. The ancient myths even allow for the mutations of other species to be elevated as god forms, each within their specific canons. While dealing with ideas of noise pollution in the ocean, I came across a couple of interesting articles revolving around sound and water. The first is a recent development where Stanford scientists successfully created the loudest possible sound in water (1). This was a sound that reached over 270 decibels!
The reason this is the loudest sound possible is that at such a high amplitude, the pressure breaks down the medium of water itself and it instantly vaporizes. Previously, the loudest possible sound was discussed in terms of air, which can only carry sound up to around 194 decibels.
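A quick sanity check on that 194-decibel ceiling for air (my own back-of-the-envelope arithmetic, not from the article): sound pressure level is defined as $SPL = 20\log_{10}(p/p_0)$ with reference pressure $p_0 = 20\ \mu\mathrm{Pa}$, and in air the peak pressure of a wave cannot usefully exceed about one atmosphere ($\approx 101{,}325$ Pa), since the low-pressure half of the wave would otherwise have to drop below a vacuum. Plugging that in:

$$20\log_{10}\!\left(\frac{101{,}325\ \mathrm{Pa}}{20\times 10^{-6}\ \mathrm{Pa}}\right) \approx 194\ \mathrm{dB}.$$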
https://biotechdesign.artscinow.com/week/week-6
The details for this job are shown below; if you would like more details or have any questions, please use the contact details provided. Johnson Matthey Fuel Cells is a global business dedicated to the supply of high-performance catalysed components to customers making fuel cells for major markets such as buildings, transport, and remote power sources. The business, headquartered in Swindon, UK, is a leader in its chosen markets with rapidly growing sales supported by substantial R&D and manufacturing capabilities. We now have a vacancy for a Mechanical Design/SOLIDWORKS CAD Engineer.
Key Areas of Responsibility:
• Ensure health and safety of the workplace has the highest consideration in all aspects of the role
• Support the mechanical design of the product, ensuring that customers' requirements are realised and can be manufactured capably
• Provide a technical support function internally to JMFC operations
• New part and project drawing generation, associated tooling design, and procurement
• Maintain existing new product development processes and procedures
• Monitor and improve mechanical product quality, feeding back into new developments
• Ensure an up-to-date knowledge of lean manufacturing techniques, actively promoting them within the business
Candidate Requirements:
• Degree in appropriate Engineering discipline or relevant experience
• Proven experience of SOLIDWORKS model/drawing generation
• Up-to-date knowledge of lean manufacturing techniques
• Proven experience in a mechanical design function within a fast-moving, evolving manufacturing environment
• Proven experience of positive interaction with customers and a demonstrated ability to build successful working relationships within a diverse business
• Demonstrated ability to identify and deliver improvement projects to planned cost and schedule in an environment of rapidly evolving technology
Qualification: Degree in appropriate Engineering discipline (or equivalent)
Johnson Matthey Plc is an equal opportunities employer and positively encourages applications from suitably qualified and eligible candidates regardless of sex, race, disability, age, sexual orientation, religion or belief.
Salary Range: Competitive
Location: Swindon
Industry: Manufacturing
Placement Type: Temporary (6 month FTC)
If you are interested in applying for this job or would like to find out more, please send a CV along with an introductory email identifying this position to the contact details below:
https://www.solidsolutions.co.uk/solidworks-Services/Jobs-Directory/september-2012/johnson-matthey-fuel-cells-temp-contract-swindon.aspx
Last month I blogged about the Hour of Code, which occurs during Computer Science Education Week. Little did I know that it would prompt further conversations geared around wondering whether or not an hour makes a difference. My post was by no means the be-all and end-all of coding or computational thinking, but was meant to spark conversations, perhaps an interest, and possibly support educators for whom coding or computational thinking might be new. To be clear, I know that coding for an hour during that week might not have a significant impact in the grand scheme of things, but the opportunities that it provided for my students certainly had a significant impact. While these opportunities should exist on a daily basis, let's face it, weeks like this often allow for conversations amongst educators to be had and provide spaces for collaboration. This was the case for me and my students. I think that we sometimes forget that there is a continuum of learning – even for educators – and while everyone has strengths and areas of need, those strengths and areas of need vary from person to person. Unless we're willing to start somewhere and be vulnerable with colleagues, we can miss out on the chance to learn with incredible colleagues. This year, my students had the chance to participate in coding activities with three other classes, and for them it was an exercise in developing greater empathy, growing in clear communication, and problem solving. At the end of the week, coding was the tool that facilitated this learning for my students, and they were able to help younger students develop their own set of problem solving and computational skills. That being said, this post (part 2) is really meant to go a little deeper into what I believe computational thinking is about. I've always seen coding as one creative way to help students develop computational thinking skills. I've learned that computational thinking is about solving problems using methods similar to those a computer would use. There are four skills that make up computational thinking:
- Algorithmic Thinking – using algorithms to show the different steps in a solution or process. This can be applied across subject areas and can help to outline the process by which something is accomplished. When students are using some of the coding activities mentioned in my previous post, they are thinking about the steps needed to move through a maze or a specific sequence to achieve a goal. In language, students are often taught procedural writing. These procedures are used in recipes and in instruction manuals. In Science, we can think of this as the execution of an experiment. While students have the opportunity to hypothesize based on what they know, they may be required to follow procedures as they gain new skills for their experiment. Again, it's that specific sequence of events that needs to take place to accomplish a task.
- Decomposition – the breaking down of big problems into smaller ones. When broken down into smaller parts, tasks become less daunting. With large projects, when students can solve one task at a time, they're better able to achieve success with the overall project. Knowing how to break down big challenges into smaller, more manageable parts is really a skill. When we help students with this, they are better able to become more autonomous, knowing specifically the next step that they need to take in order to succeed. During our coding activities, Code.org's Dance Party was a hit!
As students navigated through the challenges, they realized that they were gaining the skills required to ultimately create their own dance sequence. When they got to the end, they understood the functions of all of the blocks and were really excited to create, and I must say that a few even replicated their dances in small groups.
- Abstraction – the idea of using a simple model to explain more complicated systems. By taking away minute details, we are more easily able to understand the overall concept by making sense of the important parts in the model before us. I often think of this as making things more concrete before moving into the abstract. We can do this for ourselves when planning a unit. It might be daunting to understand all of what has to be taught, but if we think first about the big ideas, we can then understand what is most important for students to understand and work backwards from there. When we were working on coding activities with the kindergarten students, it was amazing to see how my students were helping them to physically move around the space in order to understand direction. When you first gain a grasp of direction and understand it clearly, perhaps moving around the physical space is no longer needed as much and you can move on to other skills as you learn.
- Pattern Recognition – helps determine probability by interpreting data and identifying patterns. Scientists are recognizing patterns and are able to more effectively predict outcomes for things like diseases and weather. Why not get students identifying patterns in everyday life and see what they might be able to make sense of in the world? In my teaching practice, I have found Math so much more meaningful to students when they are able to see and identify the concepts being taught in real life. By looking at patterns, they understand and can identify why some structures might be more stable than others and can make more accurate predictions based on data they have collected. Lightbot was one of the activities we tried with younger students, and it was a great way for my students to help the younger students see that by creating a program once, they could repeat it, much like the core in a repeating pattern (I've put a tiny code sketch of this idea at the very end of the post, for anyone curious). It took us a minute but it was amazing when the "ah ha" moments came.
As with all things, I am growing in my understanding of computational thinking and coding. My first post was merely a conversation – and perhaps an activity – starter as we think about helping students to develop these skills. Doing or looking to do amazing things in your classroom in this area? Please share it in the comments! I would love to know more and grow with you.
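For anyone who wants to see the Lightbot "repeating core" idea written out as actual code, here is a tiny sketch. It is purely illustrative (my own example, not the blocks my students used), but it shows pattern recognition and looping in a handful of lines:

```python
# A tiny, purely illustrative example of "find the core of a repeating pattern":
# the same idea the students met in Lightbot and Dance Party, where one short
# program can be repeated instead of writing every step out by hand.
def find_repeating_core(steps):
    """Return the shortest chunk that, repeated, reproduces the whole sequence."""
    for size in range(1, len(steps) + 1):
        core = steps[:size]
        if len(steps) % size == 0 and core * (len(steps) // size) == steps:
            return core
    return steps

dance = ["clap", "spin", "jump", "clap", "spin", "jump"]
print(find_repeating_core(dance))  # -> ['clap', 'spin', 'jump']
```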
https://heartandart.ca/?p=8333
Abstract: This research sought to explore the connection between a small-group intensive reading comprehension project and students' performance in two sample English national exit exams (ENEEs) developed by the Ministry of Public Education, Costa Rica. The data were gathered from an intervention plan that combined the theoretical principles of schema theory, scaffolded reading comprehension, and intensive reading. The study adopts an action-research approach and uses a mixed design that combines quantitative and qualitative data in the analysis and interpretation of results. Participants included twelve students from a public high school in the Western Area of Costa Rica who needed special preparation for the ENEE, which narrows the research scope down to this population only. The data collection techniques included two sample ENEEs, field notes, and research artifacts. Findings reveal positive effects of scaffolded reading comprehension on student ENEE performance, but they also warn that generalizations to larger populations are not possible. The study yields implications at theoretical and practical levels, and it calls for further investigation as a way to tackle the limitations identified.
https://revistas.ucr.ac.cr/index.php/aie/article/view/27204
BACKGROUND
1. Technological Field
2. Description of the Related Art
SUMMARY
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EMBODIMENTS
[Configuration of Image Forming Apparatus]
[Operation of Image Forming Apparatus]
<Data Acquisition Process>
<Future Image Output Process>
[Advantageous Effects of Embodiments]
[First Modification]
[Second Modification]
The present invention relates to an image forming apparatus and a program. An electrophotographic image forming apparatus is known to develop horizontal and vertical streaks on formed images after long-term use. When such defects occur, components related to image forming (image forming components) are replaced. Various technologies to let a user(s) know a replacement timing of each image forming component in advance have been investigated and proposed, such as a technology which allows a user(s) to determine the replacement timing on the basis of a life indicating value(s) calculated from the number of formed images or from a driven distance of a belt, and a technology which obtains a capability value(s) related to the life of a photoreceptor, and predicts the remaining life of the photoreceptor therefrom (for example, as disclosed in JP 2016-145915 A). However, even if the replacement timing of each image forming component is predicted in advance, in actual cases, a user(s) tends to keep using the image forming components until the defects occur in order to reduce downtime of the image forming apparatus. Because timings when the defects occur are unpredictable, and also a desired image quality level varies from user to user, it is difficult to predict a replacement timing which is suitable for such cases of use. For example, the technology disclosed in JP 2016-145915 A predicts the replacement timing on the basis of the capability value only, and cannot determine whether or not the user accepts the images produced at the time when the replacement timing arrives. Furthermore, if the defects occur suddenly, the user needs to call a technician, which may lead to downtime longer than that in the case of replacing an image forming component at its predicted replacement timing. Objects of the present invention include providing an image forming apparatus and a program which allow a user(s) to determine the replacement timing of each image forming component suitable for the image quality level that the user(s) desires. In order to achieve at least one of the abovementioned objects, according to a first aspect of the present invention, there is provided an image forming apparatus including: an output device; an information obtaining unit which obtains estimated deterioration information on an estimated degree of deterioration of image quality of an image; an image data generator which, based on the estimated deterioration information obtained by the information obtaining unit, generates, from arbitrary image data, image data of a future image having a predicted image quality; and an image output controller which causes the output device to output the future image based on the image data of the future image generated by the image data generator.
According to a second aspect of the present invention, there is provided a program to cause a computer to function as: an information obtaining unit which obtains estimated deterioration information on an estimated degree of deterioration of image quality of an image; an image data generator which, based on the estimated deterioration information obtained by the information obtaining unit, generates, from arbitrary image data, image data of a future image having a predicted image quality; and an image output controller which causes an output device to output the future image based on the image data of the future image generated by the image data generator. The advantages and features provided by one or more embodiments of the present invention will become more fully understood from the detailed description given hereinbelow and the appended drawings, which are given by way of illustration only and thus are not intended as a definition of the limits of the present invention, wherein: FIG. 1 is a schematic diagram of an image forming apparatus; FIG. 2 is a block diagram showing functional configuration of the image forming apparatus; FIG. 3 is a flowchart showing a data acquisition process; FIG. 4A is an example of a data table; FIG. 4B is an example of a data table; FIG. 5A is an example of a data table; FIG. 5B is an example of a data table; FIG. 6 is an example of a data table; FIG. 7 is a flowchart showing a future image output process; FIG. 8 is an example of a graph in which, about a frequency, power spectrum values are plotted with respect to the numbers of formed images; FIG. 9 is an example of a graph in which, about a main-scanning-direction position, luminosity values are plotted with respect to the numbers of formed images; FIG. 10A is an example of data for explaining a modification; FIG. 10B is an example of data for explaining the modification; FIG. 10C is an example of data for explaining the modification; and FIG. 11 is an example of data for explaining the modification. Hereinafter, one or more embodiments of the present invention will be described with reference to the drawings. However, the scope of the present invention is not limited to the disclosed embodiments. First, configuration of an image forming apparatus according to this embodiment is described. FIG. 1 is a schematic diagram of an image forming apparatus 100 according to this embodiment. FIG. 2 is a block diagram showing functional configuration of the image forming apparatus 100. As shown in FIG. 1 and FIG. 2, the image forming apparatus 100 includes a print controller 10, a controller 11 (information obtaining unit, image data generator, image output controller), an operation unit 12, a display 13, a storage 14, a communication unit 15, an automatic document scanner 16, an image processor 17 (image data generator), an image former 18, a conveyor 19, and an image reader 20. The print controller 10 receives PDL (Page Description Language) data from a computer terminal(s) on a communication network, and rasterizes the PDL data to generate image data in a bitmap format. The print controller 10 generates the image data for respective colors of C (cyan), M (magenta), Y (yellow), and K (black). The controller 11 includes a CPU (Central Processing Unit) and a RAM (Random Access Memory). The controller 11 reads a program(s) stored in the storage 14, and controls each component of the image forming apparatus 100 in accordance with the program(s).
The operation unit 12 includes operation keys and/or a touchscreen provided integrally with the display 13, and outputs operation signals corresponding to operations thereon to the controller 11. A user(s) inputs instructions, for example, to set jobs and to change processing details with the operation unit 12. The display 13 is a display, such as an LCD, and displays various screens (windows) in accordance with instructions of the controller 11. The storage 14 stores programs, files and the like which are readable by the controller 11. A storage medium, such as a hard disk or a ROM (Read Only Memory), may be used as the storage 14. The communication unit 15 communicates with, for example, computers and other image forming apparatuses on a communication network in accordance with instructions of the controller 11. The automatic document scanner 16 optically scans a document D which is placed on a document tray and conveyed by a conveyor mechanism, thereby reading the image of the document D by forming, on the light receiving face of a CCD (Charge Coupled Device) sensor, an image of the reflected light from the document D, and generating image data for respective colors of R (red), G (green), and B (blue), and then outputs the data to the image processor 17. The image processor 17 corrects and performs image processing on the image data input from the automatic document scanner 16 or from the print controller 10, and outputs the data to the image former 18. Examples of the image processing include level correction, enlargement/reduction, brightness/contrast adjustment, sharpness adjustment, smoothing, color conversion, tone curve adjustment, and filtering. The image former 18 forms an image on paper on the basis of the image data output from the image processor 17. As shown in FIG. 1, the image former 18 includes four sets of an exposure device 18a, a photoconductive drum 18b, a charger 18c, and a developing device 18d, for respective colors of C, M, Y, and K. The image former 18 also includes an intermediate transfer belt 18e, secondary transfer rollers 18f, and a fixing device 18g. The exposure device 18a includes LDs (Laser Diodes) as light emitting elements. The exposure device 18a drives the LDs on the basis of the image data, and emits laser light onto the photoconductive drum 18b charged by the charger 18c, thereby exposing the photoconductive drum 18b. The developing device 18d supplies toner onto the photoconductive drum 18b with a charging developing roller, thereby developing an electrostatic latent image formed on the photoconductive drum 18b by the exposure. The images thus formed with the respective colors of toner on the four photoconductive drums 18b are transferred to and superposed sequentially on the intermediate transfer belt 18e. Thus, a color image is formed on the intermediate transfer belt 18e. The intermediate transfer belt 18e is an endless belt wound around rollers, and rotates as the rollers rotate. The secondary transfer rollers 18f transfer the color image on the intermediate transfer belt 18e onto paper fed from a paper feeding tray t1. The fixing device 18g heats and presses the image-formed paper, thereby fixing the color image to the paper. The conveyor 19 includes a paper conveyance path equipped with pairs of conveying rollers.
The conveyor 19 conveys paper in the paper feeding tray t1 to the image former 18, and conveys the paper on which the image has been formed by the image former 18 to the image reader 20, and then ejects the paper to a paper receiving tray t2. The image reader 20 reads the image formed on the paper, and outputs the read data to the controller 11. The image reader 20 includes a first scanner 21A, a second scanner 21B, and a spectrophotometer 22. The first scanner 21A, the second scanner 21B, and the spectrophotometer 22 are provided on the downstream side of the image former 18 along the paper conveyance path so that they can read, before the paper is ejected to the outside (paper receiving tray t2), the image-formed side(s) of the paper. The first scanner 21A and the second scanner 21B are provided where the first scanner 21A and the second scanner 21B can read the back side and the front side of the paper being conveyed thereto, respectively. The first scanner 21A and the second scanner 21B are each constituted of, for example, a line sensor which has CCDs (Charge Coupled Devices) arranged in line in a direction (paper width direction) which is orthogonal to the paper conveying direction and horizontal to the paper face. The first scanner 21A and the second scanner 21B are image readers which read an image(s) formed on paper being conveyed thereto, and output the read data to the controller 11 (shown in FIG. 2). The spectrophotometer 22 detects spectral reflectance at each wavelength from the image formed on the paper, thereby measuring color(s) of the image. The spectrophotometer 22 can recognize color information highly precisely. Next, operation of the image forming apparatus 100 according to this embodiment is described. The image forming apparatus 100 allows the user to recognize the replacement timing of each image forming component by outputting an image having a quality predicted to be in the future (future image). More specifically, in order to output the future image, the image forming apparatus 100 performs a data acquisition process to periodically acquire and store noise information on a predetermined image and a future image output process to actually form the future image on paper or display the future image on the display 13. FIG. 3 is a flowchart showing the data acquisition process. First, the controller 11 determines whether or not a predetermined timing has arrived (Step S11). Examples of the predetermined timing include a timing when a predetermined number of images (e.g. 1,000 images) has been formed since the last data acquisition process, and a timing when the image forming apparatus 100 is turned on. The user can change the predetermined timing as desired. When determining that the predetermined timing has not yet arrived (Step S11: NO), the controller 11 ends the data acquisition process. On the other hand, when determining that the predetermined timing has arrived (Step S11: YES), the controller 11 determines whether or not an instruction to form an image for detecting image noise (noise detecting image) has been input (i.e., whether or not an instructing operation for making such an instruction has been performed; the same applies hereinafter) through the operation unit 12 (Step S12). The noise detecting image is an image having a predetermined gradation (e.g. 50% density), and is set beforehand. More specifically, the controller 11 displays a pop-up image (e.g.
pop-up window) on the display 13 for the user to choose whether or not to form the noise detecting image, and in accordance with the user's choosing operation on the pop-up image through the operation unit 12, determines whether or not an instruction to form the noise detecting image has been input. There may be provided a switch (e.g. on the operation unit 12) which, in accordance with the user's operation on the switch, switches to or from a control mode in which the noise detecting image is automatically formed at the predetermined timing. When determining that an instruction to form the noise detecting image has not been input (Step S12: NO), the controller 11 ends the data acquisition process. On the other hand, when determining that an instruction to form the noise detecting image has been input (Step S12: YES), the controller 11 causes the image former 18 to form the noise detecting image on paper (Step S13). Next, the controller 11 causes the image reader 20 to read the noise detecting image formed on the paper and measure luminosity of the image (Step S14). More specifically, the controller 11 causes the spectrophotometer 22 to measure the luminosity (L*) of the noise detecting image on the paper. The luminosity is measured at a plurality of dots which are indicated by positions in the main scanning direction (main-scanning-direction positions) and positions in the sub-scanning direction (sub-scanning-direction positions). The read/measured data (luminosity values) is stored in a data table T1 in the storage 14. FIG. 4A shows an example of the data table T1. As shown in FIG. 4A, the data table T1 stores the luminosity values (L*) measured at a plurality of dots each indicated by the main-scanning-direction position and the sub-scanning-direction position. Next, the controller 11 calculates a luminosity distribution profile in the main scanning direction by averaging the read data, which is stored in the data table T1, in the sub-scanning direction (Step S15). The calculated data (luminosity distribution profile) is stored in a data table T2 in the storage 14. FIG. 4B shows an example of the data table T2. As shown in FIG. 4B, the data table T2 stores the data calculated by averaging the read data, which is stored in the data table T1, in the sub-scanning direction. Additionally or alternatively, a luminosity distribution profile in the sub-scanning direction may be calculated by averaging the read data, which is stored in the data table T1, in the main scanning direction. Next, the controller 11 calculates frequency characteristics of streaks by performing discrete Fourier transform on the calculated luminosity distribution profile (Step S16). FIG. 5A shows an example of a graph of the calculated frequency characteristics (F(ω)(t0)). FIG. 5B shows an enlarged view of a region R in the graph of FIG. 5A. Next, the controller 11 obtains power spectrum values with respect to frequencies [cycles/mm] from the calculated frequency characteristics, associates the obtained data with the date and the number of formed images at the time, stores (saves) the same in a data table T3 in the storage 14 (Step S17), and ends the data acquisition process. FIG. 6 shows an example of the data table T3. As shown in FIG. 6, the data table T3 stores, for respective frequencies (ω), the power spectrum values (noise information) associated with the date(s) and the number(s) of formed images.
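To make Steps S15 to S17 concrete, here is a minimal sketch of the kind of computation being described, written with NumPy purely for illustration (the array shape, the 600 dpi dot pitch, and all variable names are my own assumptions, not values from the patent):

```python
import numpy as np

# Hypothetical read data for the noise detecting image (data table T1):
# luminosity L* at each (sub-scanning, main-scanning) dot. The 256 x 1024 shape
# and the 600 dpi dot pitch below are illustrative assumptions only.
rng = np.random.default_rng(0)
luminosity = rng.normal(50.0, 1.0, size=(256, 1024))

# Step S15: luminosity distribution profile in the main scanning direction,
# obtained by averaging the read data in the sub-scanning direction (data table T2).
profile = luminosity.mean(axis=0)

# Step S16: discrete Fourier transform of the profile gives the frequency
# characteristics of streaks.
dot_pitch_mm = 25.4 / 600.0
spectrum = np.fft.rfft(profile - profile.mean())
freqs_cycles_per_mm = np.fft.rfftfreq(profile.size, d=dot_pitch_mm)

# Step S17: power spectrum values per frequency, which would be stored in data
# table T3 together with the date and the current number of formed images.
power = np.abs(spectrum) ** 2
noise_record = dict(zip(freqs_cycles_per_mm.round(3), power))
```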
The data acquisition process forms the noise detecting image at each predetermined timing, and consequently data is obtained and accumulated. Instead of the luminosity, density may be used. FIG. 7 is a flowchart showing the future image output process. First, the controller 11 determines whether or not any of counter values of continuous use hours for the respective image forming components, such as the photoconductive drum(s), the charger(s) and the developing device(s), has exceeded its corresponding predetermined value (Step S21). The predetermined value is set, for example, at 80% of predetermined maximum hours (life) of use. Counter values for the transfer devices (intermediate transfer belt and secondary transfer rollers) and the fixing device may be included. When determining that none of the counter values has exceeded their corresponding predetermined values (Step S21: NO), the controller 11 ends the future image output process. On the other hand, when determining that at least one of the counter values has exceeded its (or their) corresponding predetermined value(s) (Step S21: YES), the controller 11 determines whether or not an instruction to output the future image has been input through the operation unit 12 (Step S22). When determining that an instruction to output the future image has not been input (Step S22: NO), the controller 11 ends the future image output process. On the other hand, when determining that an instruction to output the future image has been input (Step S22: YES), the controller 11 receives an instruction on a point of time in the future to predict the image quality (prediction point: the number of images to be formed) through the operation unit 12 (Step S23). More specifically, the controller 11 displays an input section (e.g. input window) on the display 13 for the user to enter/input the number of images to be formed, and in accordance with the user's input operation to the input section through the operation unit 12, receives an instruction on a point of time in the future to predict the image quality. Next, the controller 11 performs calculation for prediction of frequency characteristics (Step S24). More specifically, first, the controller 11 refers to the data table T3 stored in the storage 14 and, for each frequency, plots the power spectrum values with respect to the numbers of formed images, and then obtains an approximation straight line therefrom. Next, the controller 11 obtains a power spectrum value at the specified prediction point using the obtained approximation straight line. The power spectrum value at the prediction point is estimated deterioration information on an estimated degree of deterioration of image quality of an image. FIG. 8 is an example of a graph in which, about a frequency (ω1), the power spectrum values are plotted with respect to the numbers of formed images (images formed and images to be formed). The approximation straight line is represented by a solid line. In the case shown in FIG. 8, the prediction point is set at when 10,000 more images are formed from the present point, and a power spectrum value at the prediction point (estimated deterioration information) is obtained. Similarly, the controller 11 obtains power spectrum values at the prediction point about the other frequencies (ω2, ω3, ...). Next, the controller 11 receives an instruction on an output method through the operation unit 12 (Step S25).
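A minimal sketch of the Step S24 prediction, assuming an ordinary least-squares fit for the "approximation straight line" (the numbers and variable names below are invented for illustration, not taken from the patent):

```python
import numpy as np

# Hypothetical history from data table T3 for one frequency (omega_1):
# power spectrum values recorded at various numbers of formed images.
n_images = np.array([1_000, 5_000, 10_000, 20_000, 40_000], dtype=float)
power_w1 = np.array([0.8, 1.1, 1.5, 2.4, 4.0])

# The "approximation straight line": an ordinary least-squares fit.
slope, intercept = np.polyfit(n_images, power_w1, deg=1)

# Prediction point: 10,000 more images from the present point (here 40,000 so far).
prediction_point = 40_000 + 10_000
predicted_power_w1 = slope * prediction_point + intercept  # estimated deterioration info
print(f"Predicted power spectrum value at {prediction_point:,} images: {predicted_power_w1:.2f}")
```

The same fit would simply be repeated for each of the other frequencies (ω2, ω3, ...) recorded in the table.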
The output method is either forming an image on paper with the image former 18 or displaying an image on the display 13. The controller 11 displays a choice section (e.g. choice window) on the display 13 for the user to choose an output method, and in accordance with the user's choosing operation on the choice section, determines the output method. Next, the controller 11 reads image data of an image to be output as the future image (Step S26). The image to be output by the future image output process may be any image, while it is preferable to always use the same image so as to make comparison easy considering that this future image output process is repeated. Next, the controller 11 performs frequency filtering on the read image data with the image processor 17 (Step S27). More specifically, to display an image on the display 13, the controller 11 performs F(ω)(t1) filtering on the read image data. Meanwhile, to form an image on paper, the controller 11 performs F(ω)(t1-t0) filtering on the read image data. In this case, where an image is formed on paper, the controller 11 performs the process which adds only the difference of F(ω)(t1-t0) because the image forming apparatus 100 at this point forms an image which already includes noise of F(ω)(t0). Using the filtered image data, the controller 11 outputs the future image either by displaying the image on the display 13 or by forming the image on paper with the image former 18 (Step S28). Next, the controller 11 determines whether or not an instruction to continue the future image output process has been input through the operation unit 12 (Step S29). When determining that an instruction to continue the future image output process has been input (Step S29: YES), the controller 11 returns to Step S23 to repeat Step S23 and the subsequent steps. That is, when the controller 11 determines that an instruction to continue the future image output process has been input, the future image will be output at a different prediction point. On the other hand, when determining that an instruction to continue the future image output process has not been input (Step S29: NO), the controller 11 ends the future image output process. The future image output process outputs the future image of the user's desired timing (prediction point) on the basis of the data reflecting usage conditions of the image forming apparatus 100. Hence, the user can accurately recognize in advance until when his/her desired image quality will be maintained. As described above, according to this embodiment, the controller 11 obtains estimated deterioration information on an estimated degree of deterioration of image quality of an image; with the image processor 17, on the basis of the obtained estimated deterioration information, generates, from arbitrary image data, image data of a future image having a predicted image quality; and causes an output device to output the future image on the basis of the generated image data of the future image. This allows the user to check a future image quality with the output future image and determine the replacement timing of each image forming component in advance, the replacement timing being suitable for the user's desired image quality. 
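One plausible way to picture the frequency filtering of Step S27, sketched under simplifying assumptions (one-dimensional filtering along the main scanning direction, with a hypothetical gain table derived from the predicted spectrum; this is my own illustration, not the patent's actual implementation):

```python
import numpy as np

def apply_streak_filter(image, gain_vs_freq, dot_pitch_mm=25.4 / 600.0):
    """Apply a frequency-dependent gain along the main scanning direction of each
    row, superimposing predicted streak noise on an image. `gain_vs_freq` is a
    hypothetical (N, 2) array of (frequency in cycles/mm, gain) pairs, sorted by
    frequency; it is not an interface defined in the patent."""
    filtered_rows = []
    for row in image:
        spectrum = np.fft.rfft(row)
        freqs = np.fft.rfftfreq(row.size, d=dot_pitch_mm)
        gain = np.interp(freqs, gain_vs_freq[:, 0], gain_vs_freq[:, 1])
        filtered_rows.append(np.fft.irfft(spectrum * gain, n=row.size))
    return np.clip(np.array(filtered_rows), 0.0, 255.0)

# Example: boost a narrow band around 1 cycle/mm by 50% (made-up values).
# For an on-screen preview, the full predicted characteristic F(omega)(t1) would
# be applied; for printing, only the difference F(omega)(t1 - t0), since the
# printed page already contains the present noise F(omega)(t0).
gain_table = np.array([[0.0, 1.0], [0.9, 1.0], [1.0, 1.5], [1.1, 1.0], [12.0, 1.0]])
preview = apply_streak_filter(np.full((64, 1024), 128.0), gain_table)
```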
Furthermore, according to this embodiment, the image forming apparatus 100 includes the data tables T1, T2, and T3 (noise information storages in the storage 14) where noise information on a predetermined image is accumulated, wherein the controller 11 obtains the estimated deterioration information on the basis of the noise information accumulated in the data tables T1, T2, and T3. Thus, the image forming apparatus 100 is configured to accumulate noise information in the data tables T1, T2, and T3, and obtain the estimated deterioration information by using the noise information. Furthermore, according to this embodiment, the noise information is a frequency characteristic(s) of luminosity of the predetermined image in the main scanning direction or the sub-scanning direction. Thus, the image forming apparatus 100 is configured to use, as the noise information, frequency characteristics of luminosity of the predetermined image in the main scanning direction or the sub-scanning direction. Furthermore, according to this embodiment, the controller 11 generates the image data of the future image by performing, on the arbitrary image data, frequency filtering fit for the estimated deterioration information. Thus, the image data of the future image can be generated from arbitrary image data. Furthermore, according to this embodiment, the output device is the image former 18 which forms the future image on paper. This allows the user to check the future image formed on paper. Furthermore, according to this embodiment, the output device is the display 13 which displays the future image. This allows the user to check the future image displayed on the display 13. Furthermore, according to this embodiment, the image forming apparatus 100 includes the image reader 20 which reads the predetermined image, wherein the controller 11 obtains the noise information on the basis of read data obtained by the image reader 20 reading the predetermined image. Thus, the image forming apparatus 100 is configured to obtain the noise information from the actually read and measured data. In the above embodiment, the controller 11 uses the power spectrum values with respect to the frequencies [cycles/mm] (shown in the data table T3) to obtain the estimated deterioration information. Instead of the frequency characteristics, the controller 11 may use the luminosity distribution profile (shown in the data table T2) to output the future image. That is, the controller 11 may use the luminosity distribution profile as the noise information. In this case, the controller 11 refers to the data table T2 stored in the storage 14 and, for each main-scanning-direction position, plots luminosity values with respect to the numbers of formed images, and then obtains an approximation straight line therefrom. The controller 11 determines/obtains a luminosity value at a specified prediction point using the obtained approximation straight line. FIG. 9 is an example of a graph in which, about a main-scanning-direction position (X1), the luminosity values are plotted with respect to the numbers of formed images (images formed and images to be formed). The approximation straight line is represented by a solid line. In the case shown in FIG. 9, the prediction point is set at when 10,000 more images are formed from the present point, and a luminosity value at the prediction point is obtained as the estimated deterioration information.
Similarly, the controller 11 obtains luminosity values at the prediction point about the other main-scanning-direction positions (X2, X3, ...). To display an image on the display 13, the controller 11 performs γ (gamma) correction which adds the luminosity values at the respective main-scanning-direction positions, X1t1, X2t1, ..., to the image data to output. Meanwhile, to form an image on paper, the controller 11 performs γ correction which adds the luminosity differences at the respective main-scanning-direction positions, X1t1 - X1t0, X2t1 - X2t0, ..., to the image data to output. Thus, the future image is output as with the above embodiment. In the above embodiment, the data acquisition process is performed. Without the data acquisition process, known experimental data prepared beforehand may be used. For example, FIG. 10A, FIG. 10B, and FIG. 10C are a data table T11 for chargers, a data table T12 for developing devices, and a data table T13 for photoreceptors, respectively. The data tables T11, T12, and T13 (durability information storages) each store, for respective frequencies (ω), power spectrum values (durability information) obtained by simulation or the like and associated with the numbers of formed images. The data tables T11, T12, and T13 are stored in the storage 14. For example, to estimate the frequency characteristics with the charger 18c having 20,000 as the number of formed images, the developing device 18d having 30,000 as the number of formed images, and the photoconductive drum 18b having 10,000 as the number of formed images, the total value for each frequency is calculated as shown in FIG. 11 and used to obtain the estimated deterioration information. Embodiments to which the present invention is applicable are not limited to the abovementioned embodiment(s) or modifications, and can be appropriately modified without departing from the scope of the present invention. Although some embodiments of the present invention have been described and illustrated in detail, the disclosed embodiments are made for purposes of illustration and example only and not limitation. The scope of the present invention should be interpreted by terms of the appended claims.
I would like to present the following questions as a structural prelude: a) How and for what purpose is technology created? b) How does technology serve humanity? c) What does humanity expect from technology? d) How are those relations regulated, and by whom? This mode of approach to 'searching for truth,' fortunately, begets more inquiries than any other. The issue of identity in each case will become self-evident at every turn. So, we ask and search:
1. How do human organizations, as designed by humans, govern polities?
Current website analyses indicate that medical sites register the heaviest use. Humans are concerned with their health in a variety of iterations. If you will, it is the choice of the marketplace. But humans must tend to the business of life. Humans live in communities, which necessarily choose definitions for their polities. Polities cannot exist without explicitly appointed and generally known socio-legal laws. In defining those rules, societies decide how they are going to be organized and ruled—either consciously or by default—and how the common functions of the community are going to be financed. Either the members of the polity take the matter into their own hands and write a charter, or they allow—by not taking any action, such as not revolting against an invading force—the overlord (open or secret) to write the rules for them. There is no polity that can live without taxing itself. Communal functions need financing, just as individuals do in their private lives. The only question is how that taxation is going to be arranged. That is, the flavor of governance defines how the communal spending decisions will be made in the polity. This taxation may be in the form of forced labor, part of the crops raised, or cash. The earliest codifications of communal rules, such as Hammurabi's laws, Asoka's columns, the Roman twelve tables, Solon's laws, and the Ten Commandments, do not always openly address the issues of taxation. All those codifications are meant, in the first instance, to secure a society living in relative peace and order, regulating interpersonal relations. Even though there certainly was taxation in all of the named polities, the relations between public finance and securing harmony in those societies were not directly linked. The American idea of "no taxation without representation" is perhaps the first time the questions of polity governance and public finance were brought to the same platform. The Magna Carta of 1215, signed between the Barons of the English polity and the King, was also an attempt to restore harmony at a higher level, among and within the governing strata, rather than being directed strictly at the public good. The Napoleonic codes, to a certain extent—whether influenced by the American declarations or not—(not forgetting the Swedish example), followed the thought that it was necessary for the government to spend part of the tax income toward constructing state infrastructure such as roads, ports, and so on. This construction of infrastructure was meant to stimulate the economy, so that more income would yield greater tax receipts, as well as to organize the polity for future wars. It was recognized, by experience, that the increasing cost of fighting wars, defensive or offensive, required maximum use of all available resources. And the state—or the ruling strata—could not accomplish that task alone; participation of the members of the polity was imperative, with or without their consent. Thus, the nature of governance determines the nature of public finance.
"No taxation without representation" model gives the taxpayer a say in the tax rates, and how and for what those receipts are to be spent. If the governance turns out to be authoritarian, then the state or the designated agencies thereof will dictate terms to the populace. In an authoritarian polity, a very small percentage of individuals who manage to appoint themselves as the guardians of public good will decide what is good or permissible. The remainder, the majority, will often have no choice but to obey, until they "rebel," because they do not posses any meaningful input into the process. And, it is undeniably their resources that are being spent. 2. Inherent conflicts between authoritarianism and pluralism Authoritarianism and pluralism have always been polar opposites and formed the 'outside boundaries' of human governance modes from the earliest times. The primary motive for organizing a governance system within the polity was survival; either against the forces of nature, or as a defense against armed neighbors (immediate, meaning, next door; or long-distance, across the border). It is intended to make life easier for the polity. Likewise, in any sub-community, such as settlements, individuals usually seek means to make their lives easier for themselves. What is now known as technology is no more than methods and techniques developed by able individuals to perform a task with greater alacrity and efficiency. The tendency has been to replace human (or biological) effort—muscle power—with mechanical operation. We gather, initially it was the humans what pulled the plough to till the soil. Then, oxen or horses replaced the humans; later steam-powered tractors called locomobile, took over. All were supplanted, in time, with machinery powered with an internal combustion engine. As far as the landowner was concerned, the mechanical replacement of human power was beneficial to the users; it reduced overall costs. But, this replacement began a new stratification in the society. Only those with the necessarily large land holdings could afford the mechanical contraptions, which, in turn, increased the crop yield and accrued greater disposable incomes. To combat the disparities in income between the small-holders and large farm owners, some polities instituted, at various times and localities, "state farms." These institutions ostensibly worked for the benefit of all members of their society. Yet, at the same time, the state or communal farms transferred the mode of production from private to public means. Now, some governments were controlling the food production directly. That led to the direct political control of the populations by the governing strata. Transfer of resources from private hands to public also had additional repercussions. Rather than individuals creating new technology (means of labor saving), governmental bureaucracies obtained the funds from tax receipts to conduct research and development work. The matter is further compounded, when the government pouring public money into technology development happens to have an authoritarian flavor. The production of technology in such an authoritarian society will also be within the monopoly of the state. Any and all access to knowledge—including education and developmental laboratories—will be tightly controlled according the perceptions and goals of a central administrative apparatus. And, the uses of technology will also be dictated by those high bureaucrats. 
Even though the process involved is the classical "guns-or-butter" issue, as defined by Paul Samuelson, in authoritarian polities it is not the general public but the bureaucracy that decides the percentage. When the polity decides that it will instead have a governance system we term "pluralistic," then the decisions may be made accordingly. Pluralism will allow for much more individuality, provided the decisions made by a single individual do not curtail the rights of others. Rather than governmental agencies or bureaucrats, persons with ideas and energy will begin the process of harnessing innovative technology. The aim of the creative individual here, of course, is to make and accumulate personal wealth—as opposed to increasing the direct power of the state. This, the creative individuals may choose to effect by means of Mercantilist Monopoly. In that case, all the applicable identity issues and approaches will be identical to those of the authoritarian state.
3. Role of technology in the human conflict between authoritarianism and pluralism
A short overview of authoritarianism and pluralism may be beneficial. An authoritarian governance system comes in several flavors, and can be organized around a belief system (Judaism, Buddhism, Christianity, Islam, Confucianism, et al.); a social order (communism, socialism, mercantilism); military leadership (juntas of various degrees and social orientations); a philosophical strain (utopianism, stoicism, realpolitik, opportunism); or commercial interests (mercantilism, capitalism, 'mixed' economy). The ruling stratum of an authoritarian society is usually very small, and seldom allows participation of any kind from the masses it controls. It is generally inflexible and doctrinaire, and seeks to impose a particular set of rules on the society no matter what the cost. Pluralism, on the other hand, has rarely achieved widespread application in the practical sense. Republicanism and democracy came closest, but not entirely. True pluralism would allow all the voices in a polity an equal hearing. This aspect makes pluralism a highly contentious system, requiring moderation by a category of individuals we might term opinion leaders, drawn from various specialties. Anyone may aspire to join the governing process and make a contribution. It can also be noted that pluralism provides the most flexible approach to problem solving, but it is also the most expensive (and, some say, the noisiest) means of governance. It takes a long time to make policy and mobilize large resources for the good of all. However, pluralistic governance best harnesses the energies of a society or polity. The allergy or dislike some societies have for pluralism stems from the fact that it takes a long time to find and apply solutions to problems facing the polity; no one sub-group is allowed to dominate; the cost of decision-making is the largest; and the end product rarely satisfies everyone. Essentially, every participant is required to compromise at some point. Yet the process facilitates living together—provided compromises are spread among all participants according to their population proportions and the general living conditions. Either authoritarianism or pluralism may emerge from any of the above enumerated belief, social, military, philosophical, or commercial systems. That is an outcome of particular conditions in the life of the spawning polity in the given time-frame. Technology, amidst this tug-of-war, may serve to consolidate the rule of one system over the other.
The outcome of this competition between two diametrically opposed systems depends on the ability of the polity to balance the ensuing partisanship. "Nationalism," often in extreme forms, under whatever guise or terminology, emerges under such conditions, to redress imbalances—either real or imagined. 4. Strains between the interests of the technologists and society In considering the relations between society and technology, we must remind ourselves of the inherent natures of both. The governance methods (authoritarianism or pluralism—or anything in between those) are well-defined. There are only so many types of governance systems, all evolved to their present stages since humans appeared on earth. For all practical purposes, general governance methods are unchanging. Almost everything about them is known—at least for those who care to pursue that knowledge. Therefore, governance becomes personal, in that authoritarianism or pluralism may acquire and display the face of a single individual, symbol or token. Technology, on the other hand, continually changes. What drives technology (of course, apart from a desire for personal advantage) is the discovery of the laws of nature by individuals. And the design of methods by which those laws can be applied to solve problems—be those problems fictive or natural. The laws of nature have always been there; but not necessarily known to humans. This is unlike laws of governance, which have always been available. Technology, too, may thus present the face of an individual or institution to the world. For the purpose, there are even more "named awards" in the world of technology celebrating the accomplishments of individuals who manage to understand the laws of nature. Here, we reach a paradox: It is a human that discovers a heretofore unknown law of nature, or designs a means of doing a task more efficiently. Some of those designers and discoverers become so attached to the results they have achieved, they begin to disregard the effect their product will have on their immediate polity, or the humanity at large. At this juncture, the 1990s televised debate between three technologists and three humanists come to mind. All six were eminent in their fields; some even were household names. Over the course of an hour, the technologists insisted that, as they put it, "Technology is where it is at. It is the future." In return, the humanists shot back with the statement that "The technologists did not get it." Neither side could "see" the other's point of view. And none attempted to elaborate on their viewpoints. They parted without changing their own understanding in the least, let alone the public in general. Upon reflection, one could observe that the technologists were referring to the way humans are living their daily lives, and the influence of technology on every activity of individuals in a given polity. But the humanists, mainly historians, were thinking slightly more broadly. They were envisioning that, an authoritarian government can easily monopolize all available technology. In doing so, that authoritarian government can restrict the rights and actions of its citizens. The examples abound. What is important is how humans govern themselves. If the governance system is not pluralistic, all technology becomes a hazard—not a benefit— to the community. It is not the technologists who decide the fate of polities, or construct the methods of governance. 
The technologist only becomes a tool or designer for either side; depending on the awareness or lack of the same by the affected polity. In the end, it is not the technology all by itself that tips the balance between authoritarianism and pluralism; it is the use of technology in the hands of the partisans of either side. 5. Possible resolution to the tensions One might take a popular spectacle, say a science fiction yarn, as an example, and there are many of those available to the general public. In those presentations, the "empire," or a "government" uses technology to destroy the individuality of the so-called "rebels" who may happen to be pluralists wishing to live their chosen lives. And the technology constitutes the main support of the emperor or the "leader" in his quest to stay as emperor, by destroying the home worlds and societies of those who wished to have another means of governing themselves. Of course, the subject matter is a movie, or several movies, by definition, a fiction. Or, is it? By admission of the writers of those movie scripts, the events forming the backbone of the stories were taken from the experiences of past societies, from real life past and present. Only the technology depicted on the screen was new. In fact, so new, it did not even exist. In actual historical societies, the invention of the iron smelting was a tremendous technological innovation. That new weapon gave the advantage to their owners and users. So was the cross-bow. But the struggle was still the same: what kind of a society were they going to have; authoritarian or pluralistic? Who shall govern: an emperor, or the members of the polity, through their representatives or votes? That there are variations of both the authoritarian and the pluralistic modes of governance, for example, mercantilism and capitalism; democracy and socialism, does not materially alter this equation. The same issue was contested once again when the firearms were introduced by technologists. The cycle was repeated yet again, with the invention of the nuclear weapons, not to mention numerous others in between those. In the future, the contest will continue, with whatever new technological ways are invented. The interactions among the society and technology are not limited to the nature of governance, the eternal struggle between authoritarianism and pluralism. There are tensions—essentially— between society and technology within a pluralistic society. Let us take the very recent case of genetically engineered foods and plants. Genetically engineered plants provide higher yields, resist traditional (natural) maladies afflicting the non-modified varieties. As a result, the technology companies and the food-processors can earn higher returns on their products based on the genetically altered foodstuffs. But, what about the effects of those altered foods on the humans? What happens when the consumers decline to buy them? The European Union refused to allow US origin genetically engineered soybeans and fruits. That is their choice. Can the companies overcome the consumer resistance by insisting that there is no danger from genetically altered foods? Who decides, and how? The stated objectives of those genetic engineering companies may be read in their stock prospectuses. Of course, all were established legally for pecuniary interest. Yet, the products they develop already had influence on the society, usually without the knowledge and consent of the polity. 
When those companies develop tastier, longer-lasting tomatoes suitable for packaging, most consumers (and farmers, individual or corporate) benefit. When the product begins to interfere with nature's cycles, then the results must be audited by those who will be affected. For example: What happens when lawn grass, genetically altered not to grow more than a few centimeters in an entire year, produces pollen that interferes with food crops? When crops fail because the altered pollen of one species stunts their growth, who will answer? Who will pay for the cost of feeding the populace? And who will pay when that leads to the extinction and extermination of various species, be it plant or animal, that had, up to that point, earned a livelihood for individual farmers? Examples abound, and are scarcely limited to genetic engineering. One can cite the present court cases involving two technology companies charged with predatory and monopolistic practices. One revolves around a company that copied an operating system and made it a monolith, and the other became dominant among charge cards by threatening to raise service fees and driving out competition (to raise the fees in earnest later, without competition). We already discussed the inherent struggle between the authoritarian and pluralistic modes of governance. The basis of pluralistic governance is the ability of individuals to make choices without interference. But, those decisions cannot interfere with the rights and benefits of other members of the polity either; or with those of other polities. If there is such interference, then, that will form the basis of another conflict; usually technology will be involved. The governance system of the polity will include provisions to make that right of choice available. But nothing can be permanent if the members of the polity do not defend their rights, including their right of choice. We humans have seen, many a time, a (more or less) pluralistic governance system turning authoritarian. We have observed republics becoming empires, and choices being restricted or altogether eliminated. Technology, therefore, is called upon by societies to solve the problems facing humans; not to create new ones, or aid the repression of societies by new means. This issue is valid not only for authoritarian societies, but also for pluralistic ones. For example, does a company, a private entity, have the right to curtail the right of choice in a given society? Does the fact that companies may insist that they are not restricting the right to choice change that? The right to pursue the development and exploitation of technology does not mean having the right to restrict or eliminate the right to choice by the society. The aforementioned anti-monopoly cases must be viewed from that point of view as well. Returning to the case of the debating technologists and humanists, we can now begin to place events, and issues, into a larger and proper perspective. There is no question that technology will progress, as it is part of human nature to be inquisitive. On the other hand, technology will have to be in aid of humanity, and not the other way around. Humans cannot allow technology to dictate terms, precisely because the technologists are humans, and must live in polities and societies. Therefore, it is incumbent upon all members of a society to engage in a continuous dialogue, without doctrinaire or inflexible approaches. 
Humans are capable of learning, provided they wish to acquire the knowledge that will lead them to a life affording more and responsible choices. That betterment requires an intake rich in variety if it is to yield more choices. Just as a human body biologically requires a wide range of foods to sustain its metabolism, the human mind is also in need of multiple sources of stimulus to maintain its humanity. A single-source diet leads to defects. And in the case of the mind, a single-track approach will yield low returns. The remedy lies in acquainting the mind with sources from the collective experiences of humanity, without forgetting the cost of that wisdom. The "Borg," another TV series, is the creation of human minds as well. That program, too, attempts to represent another facet of authoritarian governance system. Against which, humanists, the crew of a starship, even if they are technologically very advanced, are fighting to preserve the right to choose. The Starship crew also learned their humanity from the large body of humanistic literature available to them—and, yes, through technology, on-line. NOTE: *As can be readily inferred, the reference is to: Tracts of Mr. Thomas Hobbs of Malmsbury containing I. Behemoth, the history of the causes of the civil wars of England, from 1640 to 1660, printed from the author's own copy never printed (but with a thousand faults) before, II. An answer to Arch-bishop Bramhall's book called the catching of the Leviathan, never before printed, III. An historical narration of heresie and the punishment thereof, corrected by the true copy, IV. Philosophical problems dedicated to the King in 1662, but never printed before. Thomas Hobbes, London, 1682 Printed for W. Crooke.
http://historicaltextarchive.org/print.php?action=section&artid=752
News broke yesterday on ESPN and several other outlets that a Miami-based attorney and Heat fan was filing a class action lawsuit against the San Antonio Spurs for violating Florida's deceptive and fair trade practices law. Kiko Martinez, a friend of Project Spurs and writer for the San Antonio Current interviewed the fan, Larry McGuinness about his reasons for the lawsuit and why he waited two months to file the lawsuit after the early season game. This game was almost two months ago. Why are you filing this lawsuit now?Well, I thought somebody would’ve done it before I did. I was surprised nobody had done anything about it. It was stuck in the back of my mind. It was something that needed to be done at least from a fan’s perspective. [NBA] Commissioner [David] Stern did what he needed to do, which he was entitled to, but this is more about the fans. As we talked about on our special edition of the Sports Roundtable on News4WOAI and on Project Spurs, the NBA and David Stern apologized to the fans and fined the Spurs a quarter of a million dollars. But while Stern made the fans the reason for his decision to fine, there has been no record of any of the money fined going back to fans, who may have ponied up a bit more for a big game against the Spurs. The NBA had every opportunity to use any portion of the $250,000 to distribute ticket vouchers, upgrade future seats and make good on what paying fans could have lost out on. Now, McGuinness is hoping to recoup some of those losses for himself and other fellow fans. What kind of compensation are you looking for from the Spurs? Do you just want the difference between what you paid for your two tickets and what you would’ve paid if the Heat were playing, say, the Charlotte Bobcats?That would be one component. Another would be for those folks who are out a bit more. For example, I just talked to someone who flew down from South Carolina with nine of his family members, specifically to see this game because they were huge Spurs fans. Needless to say, he and his family were very disappointed. They didn’t see the stars they had come so far to see. Martinez asked McGuinness if he was worried about other fans suing NBA teams who choose to rest their stars if his case is won to which he replied that his issue wasn't with resting players, but resting an entire starting lineup without notifying someone. Read the full interview over at the Current for McGuinness' thoughts on the game and suing a team he "loves watching."
https://projectspurs.com/mcguinness-i-don-t-have-a-problem-with-resting-players/
Page Updated on January 11, 2023 If you’re training your new first-time managers, it’s important to provide them with proper training so that they manage effectively from the start. With training in mind, below are seven training programs that your first-time managers might find invaluable. 1. Communication Skills Great communication skills are essential for first-time managers in today’s workplace. With the ever-increasing demands of the job, it is more important than ever for new managers to be able to communicate effectively with their team members, superiors, and other managers. Furthermore, with a hybrid model of working with a mix of Working From Anywhere (WFA) and working from the office, managers now need to communicate across geographical and cultural barriers. By honing their communication skills, managers can ensure that they are getting the most out of their team, aligning their goals with those of their superiors, and collaborating effectively with other managers. Many new managers lack the experience and training in key communication skills techniques including in topics such as Intercultural Communication. 2. Listening Skills Also a form of communication, listening skills are essential as a manager. For first-time managers, one can argue that listening skills are even more essential than ever, in order to understand the characteristics and needs of each person that you will manage and need to get to know in your team. Managers who fail to listen enough fail to notice key signals and information, that could otherwise help them to identify issues and also opportunities. Listening helps you to: - Be more aware of staff issues and unhappiness - Understand what motivates each person - Come across as a more compassionate and understanding leader 3. Delegation Skills Delegation skills are critical for new managers. The ability to delegate tasks effectively helps managers to stay organized and focused on meeting their goals. When delegating tasks, it is important to consider the skills and abilities of the person you are delegating to. You need to delegate tasks that are within the capabilities of the person you are delegating to. This will help to ensure that the task is completed effectively and efficiently. It is also important to provide clear instructions when delegating tasks. This will help to minimize confusion and maximize the chances of a successful outcome. When used effectively, delegation can be a powerful tool for new managers. 4. Conflict Management When managing other people, conflicts will inevitably emerge and the skill is in tackling and dealing with these conflicts early on, and effectively. The problem for many news first-time managers is that they really lack the experience and training in techniques for dealing with conflict. The key to conflict management is to develop a constructive approach that meets the needs of all parties involved. When approaching conflict, it is important to be aware of your own emotions and to stay calm. It can also be helpful to actively listen to the other person’s perspective and avoid making assumptions. By taking the time to understand the situation and the needs of all involved, you will be better equipped to find a resolution that everyone can agree on. With practice, you will develop the skills needed to effectively manage conflict in the workplace. 5. Leadership Skills If you have never led other people before then the task of becoming a leader can be daunting. 
You might be an expert in your field and have great skills in terms of job tasks, but managing a team requires a different skill set. You need to be good at: - motivating others - being able to lead by example - building rapport A number of studies on first-time managers have in fact shown that a high percentage really struggle with managing other people. If you yourself are a new manager or you are in charge of HR and need to get your new manager/s trained, you have two main options and these are to: - Get your new managers onto an online course that they can study in their own time such as the Harvard business school leadership course. - Or offer training in-house with a trainer (in-house or hire an external trainer) and provide workshops on several of the leadership titles we list here, including Coaching for Leaders and Managers. 6. Effective Time Management As a new manager, one of the most important skills you can have is time management. With so many demands on your time and resources, it can be difficult to stay organized and keep on top of everything. However, effective time management is essential for keeping business running smoothly. By staying organized and on top of deadlines, you can help to reduce stress levels and improve your overall productivity. Time management is an important skill for any manager, but if you can master it from the start, you will learn how to do it effectively and set the right kind of example from day one. 7. Hiring and Firing One of the most important skills for any business owner or manager is learning how to hire and fire employees. This can be a difficult task, as it requires both strong people skills and a firm understanding of the law. However, it is essential to get the hiring and firing process right in order to build a strong and successful business. There are a few key considerations when hiring or firing staff. - First, it is important to make sure that you are following all applicable laws and regulations. - Second, you need to be clear about your expectations and requirements for the position. - Finally, you need to be able to communicate effectively with both prospective and current employees. In Conclusion Managers are responsible for ensuring that their team is productive and efficient. They need to have strong leadership skills and be able to motivate their employees. In addition, managers need to be able to handle conflict and make decisions that are in the best interests of their team. First-time managers may not have all of the necessary skills to be successful in their new role at first. That’s why it’s important for them to receive training on how to effectively manage a team. There are many different types of training that first-time managers can receive. Some companies offer specific management training programs, while others allow managers to attend workshops or seminars. Additionally, many online resources offer helpful tips and advice for first-time managers. No matter what type of training a first-time manager receives, it’s important that they are prepared to take on their new role.
https://symondsresearch.com/training-first-time-managers/
The next time you hear something about the Sri Lankan cat snake Boiga ceylonensis’ population from India, be sceptical for it could be a case of mistaken identity, advise researchers on the back of a discovery of a cat snake species from India. The latest addition to the cat-eyed Boiga genus, Thackeray’s cat snake or the Boiga thackerayi sp. nov, described in a paper in the journal of the Bombay Natural History Society, is genetically distinct but looks like – or, in more complicated words, is morphologically similar to – the Sri Lankan cat snake. “This discovery from the northern Western Ghats is going to change our historical perception of Boiga ceylonensis from India,” Varad Giri, director of the Pune-based Foundation for Biodiversity Conservation and one of the authors of the paper, told Mongabay. Boiga is one of the most diverse colubrid genera that occur in the Indian subcontinent. Colubridae is the largest snake family and includes about two-thirds of all known living snake species. Of the 34 species of Boiga currently regarded as valid, 16 are known from India and five from Sri Lanka. Thackeray’s cat snake or the Boiga thackerayi sp. nov with its tiger-like stripes is the first known species of Boiga, that feeds on frog eggs and exclusively eats climbing frogs. This is the second species of Boiga, after B dightoni, that is endemic to the Western Ghats and the first new species of Boiga described after 125 years from the Western Ghats. The discovery highlights the importance of cryptic diversity – when two or more distinct species that are lumped together as a single one because they look very similar – and implications in conservation measures. “If you look at the identification of so-called Boiga ceylonensis is India, it is merely based on morphology,” said Giri. “The type locality [the place from where the first individuals were used in describing this species] of B ceylonensis is Ceylon [Sri Lanka]. Eventually, individuals observed in India, that have similar morphological features, were also considered as the same species. So the apparent distribution of B ceylonensis is Sri Lanka and the entire Western Ghats.” The first individual of the cat snake was spotted in Koyna Wildlife Sanctuary, which is part of the Sahyadri Tiger Reserve in Maharashtra. It was named after Shiv Sena Chief Uddhav Thackeray’s son and wildlife researcher Tejas Thackeray, for his contribution to the find. Most of the earlier and recent studies indicate that the species that occur in wet zones in India and Sri Lanka show endemism and are confined to respective landscapes. The species which are in the dry zones are widely distributed and are known from both the countries. “The B ceylonensis and B thackerayi both are endemic to the Western Ghats, hence we decided to check their phylogenetic positions,” said Giri. “And the genes provided a real story, where it is proved that both these species are genetically very distinct and unrelated, not the same as morphology suggests. In the future, all the populations considered as B ceylonensis from India will be taken with a pinch of salt.” Sri Lankan herpetologist DMS Suranjan Karunarathna commended the identification. He told Mongabay that identification of Indian species will be extremely helpful to pin-point Sri Lankan counterparts as well. Two of a kind The morphological similarity between Sri Lankan cat snake and Thackeray’s cat snake is due to their proclivity to a specific habitat. 
“By being arboreal [living in trees] the snakes of the genus Boiga are habitat-specific and have a set of morphological characters that suit them in being so,” explained Giri. “This selection pressure which is similar across their range would have resulted in morphological conservatism and cryptic diversification, forcing them to look similar.” The researchers underscore the need for more evidence when dealing with the identification of species, especially cryptic species. It took almost 15 years for the researchers to ascertain Thackeray’s cat snake’s true identity after an individual of the species was first spotted in 2005. The researchers were being cautious with the taxonomic identification, which was supported by molecular data. “In dealing with cryptic species one needs to be confident about the proper identity of species you are dealing with,” Giri added. “I did not want to rush with limited understanding. Hence I contacted Frank Tillack [of Leibniz Institute for Research on Evolution and Biodiversity] who is an authority on this group. He provided me with the morphological data of type specimens of all Boiga species. This was needed to confirm the identification.” The first individual of the cat snake was spotted in Koyna Wildlife Sanctuary, which is part of the Sahyadri Tiger Reserve in the west Indian state of Maharashtra. The cat snake’s tiger-like stripes are also reflective of Tejas Thackeray’s father’s political party Shiv Sena’s symbol, the tiger. “The first individual of this snake, which was a baby, was spotted by me in 2005 near a stream in Koyna,” Giri elaborated. “Then it fell from a tree next to us when we were doing a nocturnal survey to study breeding behaviour of an endemic frog, Nyctibatrachus humayuni. Then Tejas spotted a few more individuals and this time adults. He realised that this snake looks very different and for further observation collected a few individuals.” Thackeray, who had the necessary permits, observed that they are always seen close to the streams in Koyna Wildlife Sanctuary. He sent those specimens to Giri for further studies. “The specimens on which we described the species were collected by Tejas, he provided crucial information about their natural history and habitat and he supported us in doing this study in providing much-needed support,” said Giri, acknowledging Thackeray’s contributions. While most of the cat snakes in India are generally known to feed on lizards, birds, and frogs, this was the first report of a cat snake feeding on the eggs of frogs. During monsoons, forest streams in the northern Western Ghats, where this snake is seen, are flooded with Humayun’s wrinkled frogs, Nyctibatrachus humayuni, the researcher noted. “These frogs are again exclusive in laying their eggs outside water, on leaves, rocks, stems, and overhanging stems,” Giri reasoned. “Their number is large and naturally their eggs as well. This large amount of ‘protein supply’ in their habitat would have been an easy diet for these snakes.” The apparent habitat specificity also made this snake choosier in its diet. A single individual kept in captivity for observations only fed on frogs which are good climbers – for example, the Indian leaping frog (Indirana sp), common tree frog (Polypedates maculatus) and Ghate’s bush frog (Raorchestes ghatei). It avoided geckos and other terrestrial species of frogs, the authors noted. 
Gap in information With Thackeray’s cat snake slithering its way into the limelight, researchers feel this discovery, as well as others prior to this, should catalyse efforts to expand understanding of reptiles in India. “Reptiles are poorly studied in India and for the amount of diversity of reptiles we have, there are a few dedicated efforts to document this diversity,” said Giri. “Our present-day understanding of reptile diversity is mainly based on historical studies with limited and dedicated efforts in recent years. This understanding has many lacunae as the understanding of taxonomy is revised with the advent of recent tools like molecular phylogeny.” The dedicated efforts in the last decade on frogs and a few groups of lizards proved that our historical understanding of Indian reptiles was wrong, as it is composed of cryptic species diversity, especially in the context of biogeography, maintained Giri. Efforts on studying frogs and lizards resulted in the description of many new species, as they were genetically distinct. “We now know that many supposedly widely distributed species are species complexes,” he said. “We do have 10 biogeographic zones in India and every region has its own biogeographic history. The new discovery of snake also highlights that similar kind of work is warranted for snakes as well.” The Western Ghats is a biodiversity hotspot and is considered to be one of India’s well-explored landscapes. But in the last two decades, a new family and many more new species of amphibians and reptiles were described from the Western Ghats. “Discoveries like this highlight the importance of this landscape,” Giri added. “The northern Western Ghats are considerably poor in terms of species diversity and endemicity as compared to the southern part. The presence of species like B thackerayi proves the uniqueness of this landscape highlighting the need for its conservation.” This article first appeared on Mongabay.
https://scroll.in/article/943832/thackerays-cat-snake-and-a-case-of-mistaken-identity
History knows Giovanni Battista Piranesi as an 18th-century Italian artist. Piranesi made many fine etchings of monuments, often magnificent old buildings with grand statues in huge halls beneath soaring ceilings. His fame also rests largely on his depiction of a series of fictitious but impressive prisons, thick stone walls and crumbling carving included. At no stage in Susanna Clarke’s Piranesi is this historical Piranesi even mentioned, but it does not take much effort to see where Clarke draws her inspiration for the House, the setting of this novel. The House stretches across many, many halls. The highest, atop massive staircases, reach up to the clouds, while the lowest are swept by the seas. The farthest are so distant that the lone full-time inhabitant of the House has never even got close to them, though he’s been as far as 20 km out from the First Vestibule. Besides this man (who is the narrator), there are lots of birds in the House. And, kept carefully in niches and alcoves among the thousands of grand statues that fill every hall of the house, there are 13 skeletons. All of this is documented by ‘the Beloved Child of the House’, as he styles himself: the narrator who meticulously maintains a journal, documenting everything he sees and observes in the House, which is the world for him. A world he shares sometimes with a regular visitor, the man he calls the Other, and who in turn refers to him, the journal-keeper, as ‘Piranesi’. It is not his name, the Other admits, but Piranesi is willing to submit to being called by a strange name, since he knows nothing of himself anyway. Not his real name, not where he’s come from or who the Other is. What is this world, this House that Piranesi reveres and adores? Is it fantasy? Or a bit of science fiction, a parallel world? Is it, perhaps, just an elaborate figment of Piranesi’s own imagination? Piranesi is a brilliant example of storytelling at its finest. Susanna Clarke, using primarily the thoughts and words of one character—‘Piranesi’ is our only insight into what happens in the House and even outside of it—conjures up a world that’s eerily magnificent. And in that world she sets a story of great suspense: how, after all, did Piranesi and the Other arrive in this world? Whose are the skeletons Piranesi so carefully looks after? What lies beneath? The truth, when it begins to emerge, draws the reader in, completely and inexorably. Clarke builds up the suspense superbly, leaving one tantalising clue at a time, increasing the pace of the revelations, and taking us into the mind of a man who is only now beginning to unearth the secret of his presence in the House. Along with the suspenseful story and the vivid descriptions, Piranesi is marked by a fine attention to detail. Note, for instance, the subtle way in which Piranesi’s writing changes as time passes, or how something as minor as Piranesi’s grooming of his hair changes with his changing situation and his growing awareness. A fantastic book, well worth the wait of more than 15 years since Clarke’s last book, Jonathan Strange & Mr Norrell. 
The purpose of this lesson is twofold. Firstly, the lesson aims to teach students about several key physical concepts including forces, potential and kinetic energy, kinematics, and projectile motion utilizing gravity as the foundational piece and narrative for these topics. Secondly, the lesson focuses on teaching the subject matter via an integrative lecture/experiment format. This format is preferential in that it will train the students to become better scientists and thinkers allowing them to synthesize facts, data, and observations into meaningful approaches to answering questions and reaching solid conclusions. Hopefully such an approach will engage the students and stimulate them to translate the question asking skills and connection making abilities required in experimenting to the realm of static lectures. Essential questions include: - What is gravity? - What is a force? - What is acceleration? velocity? - What is the difference between potential and kinetic energy? Most importantly: how do all of these ideas relate to each other and how can one utilize information about one to solve questions involving the others? Performance / Lesson Objective(s) By the end of this lesson, students should feel comfortable explaining the above mentioned topics as well as be able to solve projectile motion, gravity, and conservation of energy physics problems. Lesson Materials Anti-Gravity Water Demo: - Tall glass with round edge - A handkerchief - A pitcher of water - A marble - A wine glass - A hand - Small bottle with a long neck - A short length of cotton rope (about 1 foot long) - A small piece of aluminum foil (about 6 square inches) - Super ball (bouncy ball) - Meter stick - Timer - Marble - Ramp - Ruler Lesson Motivation - Physics provides a wonderful backdrop for practicing reasoning and mathematical skills. Furthermore, gravity functions as a wonderful narrative for what science is: making an observation and then trying to explain it. Lesson Activities The lesson will open with three demonstrations attempting to discredit gravity. - Water antigravity demo - Marble spinning in a cup demo - Genie in a bottle demo These demos will be done in front of the entire class. Then students will be split into groups for the two experiments: - The first experiment involves attempting to measure the acceleration due to gravity by dropping an object and timing its descent. - The second experiment involves using concepts of potential and kinetic energy along with projectile motion formulas to determine where a marble dropped on a ramp will land on the floor. Procedure - Open the lesson by asking the entire group of students what gravity is. For this purpose, describe gravity as the attractive force on objects by the earth. - Ask the students if they believe in gravity. Ask them what would be required to make them not believe in it. - Perform the three demos that attempt to "cheat" gravity. (See attached documents) - Separate students into groups. Begin the Measuring the Acceleration due to Gravity experiment. - This opens with a demo illustrating that objects fall at the same rate. Wrap up a feather in aluminum foil and drop from same height as a brick. First ask the students which they think will fall faster. - Preface the experiment with some information on what gravity is (a force) and explain what a force is. Here present Newtwon's second law: F = ma. Now explain that when falling, there is a force on you. (Ask students what that force is, they should say gravity). 
Now by algebra show that this force is equal to your weight, which is proportional to your mass times some constant. Call the constant g. So Weight = mg. But the weight is a force, so Weight should also equal ma, where a represents your acceleration falling down. (At this point, or earlier, explain that acceleration is the rate of change of velocity over time.) You can go over the kinematic equations at this point if you like. Basically, mg = ma. The masses cancel, and you see that this constant is the acceleration due to gravity. - Perform the experiment: see attached. - Now perform the projectile motion experiment (see attached). - In this case, explain the concepts of gravitational potential energy and kinetic energy. Explain conservation of energy. Try to pull out of the students how we can use the two equations and the fact that energy is conserved to calculate the speed of something dropped from a given height. Also, talk about kinematic equations. Rather than tell the students, have them try to synthesize how we put all these formulas together to determine where a marble will land if it rolls down a ramp and then shoots horizontally off of a table. - Everyone meets again in the main room for the wrap-up discussed above. Wrap up / Conclusion All the students will reconvene in the main classroom to discuss their results. A brief discussion will be given on extending gravity to planets, and the gravitational force equation will be explained. The class will then be posed the question: If all objects that have mass exert a gravitational force on other objects, why don't two people (or any other objects in the room) orbit around each other? A quick discussion will take place. Follow up At the next meeting, give a sample physics problem and see if students can solve it. Pre Assessment Plan Give a kinematics problem. Post Assessment Plan Give a similar kinematics problem with different numbers.
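For instructors who would like a worked numerical companion to the two experiments, the short script below (Python, purely illustrative) carries out both calculations. All measured quantities in it (drop height, fall time, ramp height, and table height) are made-up sample values standing in for the students' own measurements, not data from this lesson.

```python
import math

G_ACCEPTED = 9.81  # m/s^2, accepted value used for comparison

# --- Experiment 1: estimate g from a timed drop ---
# From d = (1/2) * g * t^2 it follows that g = 2d / t^2.
drop_height_m = 2.0      # hypothetical measured drop height
fall_time_s = 0.64       # hypothetical stopwatch reading
g_measured = 2 * drop_height_m / fall_time_s ** 2
percent_error = abs(g_measured - G_ACCEPTED) / G_ACCEPTED * 100
print(f"Measured g: {g_measured:.2f} m/s^2 ({percent_error:.1f}% error)")

# --- Experiment 2: predict where the marble lands ---
# Energy conservation on the ramp: m*g*h = (1/2)*m*v^2, so v = sqrt(2*g*h).
# Projectile motion off the table: fall time t = sqrt(2*H/g), range x = v*t.
ramp_drop_m = 0.15       # hypothetical vertical drop along the ramp
table_height_m = 0.90    # hypothetical height of the table edge above the floor
launch_speed = math.sqrt(2 * G_ACCEPTED * ramp_drop_m)
flight_time = math.sqrt(2 * table_height_m / G_ACCEPTED)
landing_distance = launch_speed * flight_time
print(f"Predicted landing distance from the table edge: {landing_distance:.2f} m")
# Note: a rolling marble also carries rotational kinetic energy, so the
# measured landing distance will usually fall a little short of this estimate.
```

Students can compare the script's predicted landing distance with where the marble actually hits the floor, and discuss why the simple energy model slightly overestimates the range.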
https://www.brown.edu/academics/science-center/outreach/stem_orc/lessons/detail/13121ab6-14a0-88e4-15cd-f64b23915704
By conducting building water audits, fixing leaks and installing low-flow fixtures, the university has reduced potable water use by nearly 29% — by 379 million gallons — since 2015. In 2021, the university increased its reduction target to 10% from the original 5% per capita by 2025. Home to more than 100,000 students, faculty, staff and visitors each day, Ohio State is comparable to a medium-sized city. The university uses more than 1.3 billion gallons of water per year — enough to fill Ohio Stadium more than 3.5 times. Ohio State aims to further increase water efficiency by making infrastructure improvements and reducing water consumption. Additional accomplishments and initiatives - In fiscal years 2018 and 2019, the university used acoustic leak detection technology and fixed two leaks in its water main shutoff valves, resulting in an annual savings of 50 million gallons. - The university conducts ongoing audits of water use in 10 to 20 campus buildings per semester. - College of Pharmacy lab changes reduced once-through water, or water used only one time and then disposed of, resulting in a savings of 16 million gallons annually. - Low-flow fixtures are being installed in restrooms across campus. Student Life replaced 4,788 showerheads and 225 faucets in the summer of 2019. - Continued education and communication from Facilities Operations and Development helps the campus community learn about water consumption. Water Consumption Baseline (2015): 21,755 gallons per weighted campus user (population adjusted to how intensively community members use the campus) Water Consumption Performance (2020): 15,478 gallons per weighted campus user Water Consumption Goal (2025): 15,224 gallons per weighted campus user Water Consumption Reduction (To Date): 28.9% Take action - Take short showers rather than baths. - Install water-saving shower heads. - Report leaks from faucets and pipes when you see them. - Run your washing machine and dishwasher only when you have full loads. - Avoid ice use at drink stations.
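As a quick arithmetic check, the consumption figures quoted above are internally consistent; the minimal calculation below (Python, purely illustrative) reproduces the stated 28.9% reduction and shows that meeting the 2025 goal would correspond to roughly a 30% reduction from the 2015 baseline.

```python
baseline_2015 = 21_755      # gallons per weighted campus user
performance_2020 = 15_478
goal_2025 = 15_224

reduction_to_date = (baseline_2015 - performance_2020) / baseline_2015
reduction_at_goal = (baseline_2015 - goal_2025) / baseline_2015

print(f"Reduction to date: {reduction_to_date:.1%}")                  # ~28.9%
print(f"Reduction if the 2025 goal is met: {reduction_at_goal:.1%}")  # ~30.0%
```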
https://si.osu.edu/campus/water-efficiency
Applying a keen understanding of the balance of public-private interface, Brown Rudnick’s Government Law & Strategies team navigates through the most complex politics and policies, across both the legislative and administrative branches of government. We work with our clients to articulate, develop and implement strategies that reflect the specific needs and interests of each client. As part of a 250-attorney international law firm and with a nationwide network of political contacts, we have the reach and the resources to identify and advance strategic opportunities. Our economic, political and regulatory climates are in a state of change. Clear vision and active participation in these changes are needed. Caught unaware, business professionals can be significantly affected by legislative and policy decisions. In this high-stakes environment, accurate information, strong advocacy and sound relationships with key decision makers are crucial. We manage these processes and relationships for you, facilitating access to the information you need and the opportunities to make your voice heard on issues of importance to your business. Our unwavering focus is to see that your interests are fairly represented and that you are positioned to make informed decisions on matters vital to your business.
https://directory.ctnewsjunkie.com/directory-listing/brown-rudnick/
FIELD OF THE INVENTION BACKGROUND OF THE INVENTION BRIEF DESCRIPTION OF THE INVENTION BRIEF DESCRIPTION OF THE DRAWINGS DETAILED DESCRIPTION OF THE INVENTION The present subject matter relates generally to wind turbines and, more particularly, to a system and method for controlling the operation of wind turbine in a manner that avoids overspeed and/or runaway conditions due to rapidly changing wind conditions. Wind power is considered one of the cleanest, most environmentally friendly energy sources presently available and wind turbines have gained increased attention in this regard. A modern wind turbine typically includes a tower, generator, gearbox, nacelle, and one or more rotor blades. The rotor blades are the primary elements for converting wind energy into electrical energy. The blades typically have the cross-sectional profile of an airfoil such that, during operation, air flows over the blade producing a pressure difference between its sides. Consequently, a lift force, which is directed from the pressure side towards the suction side, acts on the blade. The lift force generates torque on the main rotor shaft, which is geared to a generator for producing electricity. US 2009/0206605 A1 In many instances, wind turbines are operated at locations with significantly varying wind conditions. For example, wind turbines are often subject to sudden wind gusts, high turbulence intensities and/or abrupt changes in the direction of the wind. Such rapidly changing wind conditions make it difficult to control the operation of a wind turbine in a manner that avoids tripping of the turbine due to overspeed and/or runaway conditions. For instance, when there is an abrupt change in the wind direction at a wind turbine site, a wind turbine located at the site perceives the change in wind direction as a drop in wind speed. As a result, the typical control action implemented by the turbine controller is to pitch the blades in a manner that provides increased efficiency at the perceived, lower wind speeds. Unfortunately, for a wind turbine site with rapidly changing wind conditions, the wind direction may shift back to the original direction in a very short period of time, thereby immediately subjecting the wind turbine to increased wind speeds. Such an abrupt increase in the wind speed following a control action to pitch the rotor blades to a more efficient pitch angle can lead to overspeed and runaway conditions for the wind turbine, which may necessitate tripping the turbine to avoid component damage and/or unsafe operation. Document shows an example of a method for operating a wind energy plant. Accordingly, an improved system and method that allows for the operation of a wind turbine to be effectively and efficiently controlled despite substantially varying wind conditions would be welcomed in the technology. Aspects and advantages of the invention will be set forth in part in the following description, or may be obvious from the description, or may be learned through practice of the invention. In one aspect, the present subject matter is directed to a method according to claim 1 for controlling the operation of a wind turbine. The method may generally include monitoring a current yaw position of a nacelle of the wind turbine, wherein the current yaw position is located within one of a plurality of yaw sectors defined for the nacelle. 
In addition, the method may include monitoring a wind-dependent parameter of the wind turbine and determining a variance of the wind-dependent parameter over time, wherein the variance is indicative of variations in a wind parameter associated with the wind turbine. Moreover, the method may include determining at least one curtailed operating setpoint for the wind turbine when the variance exceeds a predetermined variance threshold, wherein the curtailed operating setpoint(s) is determined based at least in part on historical wind data for the yaw sector associated with the current yaw position. In another aspect, the present subject matter is directed to a method for controlling the operation of a wind turbine. The method may generally include monitoring a current yaw position of a nacelle of the wind turbine, wherein the current yaw position is located within one of a plurality of yaw sectors defined for the nacelle. The method may also include monitoring a generator speed of the wind turbine, monitoring a wind speed associated with the wind turbine, and determining a standard deviation of the generator speed over time, wherein the variance is indicative of variations in the wind speed. In addition, the method may include determining at least one curtailed operating setpoint for the wind turbine when the variance exceeds a predetermined variance threshold and when the wind speed exceeds a predetermined wind speed threshold, wherein the curtailed operating setpoint(s) is determined based at least in part on historical wind data for the yaw sector associated with the current yaw position. Moreover, the method may include operating the wind turbine based on the curtailed operating setpoint(s). In a further aspect, the present subject matter is directed to a system according to claim 10 for controlling the operation of a wind turbine. The system may generally include a computing device including a processor and associated memory. The memory may store instructions that, when implemented by the processor, configure the computing device to monitor a current yaw position of a nacelle of the wind turbine, wherein the current yaw position is located within one of a plurality of yaw sectors defined for the nacelle. The computing device may also be configured to monitor a wind-dependent parameter of the wind turbine and determine a variance of the wind-dependent parameter over time, wherein the variance is indicative of variations in a wind parameter associated with the wind turbine. In addition, the computing device may be configured to determine at least one curtailed operating setpoint for the wind turbine when the variance exceeds a predetermined variance threshold, wherein the curtailed operating setpoint(s) is determined based at least in part on historical wind data for the yaw sector associated with the current yaw position. These and other features, aspects and advantages of the present invention will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. FIG. 1 illustrates a perspective view of one embodiment of a wind turbine. FIG. 2 FIG. 1 illustrates an internal view of one embodiment of a nacelle of the wind turbine shown in FIG. 
3 illustrates a schematic diagram of one embodiment of a turbine controller suitable for use within a wind turbine in accordance with aspects of the present subject matter; FIG. 4 illustrates a flow diagram of one embodiment of a control algorithm that may be implemented by a turbine controller in order to control the operation of a wind turbine in accordance with aspects of the present subject matter; FIG. 5 illustrates an example of how the yaw travel range for a nacelle may be divided into a plurality of individual yaw sectors; and FIG. 6 FIG. 4 illustrates a flow diagram of one embodiment of a method for controlling the operation of a wind turbine in accordance with aspects of the present subject matter, particularly illustrating method elements for implementing an embodiment of the control algorithm shown in . A full and enabling disclosure of the present invention, including the best mode thereof, directed to one of ordinary skill in the art, is set forth in the specification, which makes reference to the appended figures, in which: Reference now will be made in detail to embodiments of the invention, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the invention, not limitation of the invention. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope of the invention. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present invention covers such modifications and variations as come within the scope of the appended claims. In general, the present subject matter is directed to a system and method for controlling the operation of a wind turbine. In several embodiments, the disclosed system and method may be utilized to curtail or de-rate the operation of a wind turbine when the turbine is being subjected to rapidly changing wind conditions. Specifically, in one embodiment, the turbine controller of a wind turbine may be configured to monitor the variability of one or more wind-dependent parameters of the wind turbine, which, in turn, may provide an indication of variations in one or more wind parameters associated with the wind turbine. For example, the turbine controller may be configured to calculate the standard deviation in the generator speed occurring over a relatively short period of time (e.g., over 5 seconds). A relatively high standard deviation for the generator speed (e.g., higher than a predetermined variance threshold defined for the generator speed) may indicate that the wind turbine is currently experiencing rapidly changing wind conditions, such as abrupt changes in the wind speed and/or wind direction, sudden wind gusts and/or increased turbulence intensity. In such instance, the turbine controller may be configured to de-rate the wind turbine by selecting one or more curtailed operating setpoints for the wind turbine, such as a reduced generator speed setpoint or a reduced generator torque setpoint. Once the variability in the generator speed is reduced, the turbine controller may then be configured to up-rate the wind turbine back to its normal or non-curtailed operating setpoints. 
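To make the de-rating logic concrete, the following sketch (Python, not part of the disclosure) shows one minimal way such a controller loop might be structured: a short rolling window of generator-speed samples is kept, the standard deviation over that window is compared against a threshold, and the speed setpoint is curtailed while the threshold is exceeded and restored once variability settles. The window length, threshold, and setpoint values are invented placeholders, and the claimed method may additionally gate curtailment on a wind speed threshold, which is omitted here for brevity.

```python
from collections import deque
from statistics import pstdev


class VarianceCurtailment:
    """Sketch of the de-rate / up-rate logic described above.

    Watches a rolling window of generator-speed samples and, when the
    standard deviation exceeds a threshold (a proxy for rapidly changing
    wind), switches to a curtailed speed setpoint. All numeric values are
    illustrative placeholders, not values taken from the disclosure.
    """

    def __init__(self, window_samples=50, std_threshold_rpm=25.0,
                 rated_setpoint_rpm=1500.0, curtailed_setpoint_rpm=1350.0):
        self.window = deque(maxlen=window_samples)   # e.g. 5 s of data at 10 Hz
        self.std_threshold_rpm = std_threshold_rpm
        self.rated_setpoint_rpm = rated_setpoint_rpm
        self.curtailed_setpoint_rpm = curtailed_setpoint_rpm

    def update(self, generator_speed_rpm: float) -> float:
        """Add one speed sample and return the speed setpoint to apply."""
        self.window.append(generator_speed_rpm)
        if len(self.window) < self.window.maxlen:
            return self.rated_setpoint_rpm        # not enough history yet
        speed_std = pstdev(self.window)
        if speed_std > self.std_threshold_rpm:
            return self.curtailed_setpoint_rpm    # de-rate while wind is erratic
        return self.rated_setpoint_rpm            # up-rate once variability settles
```

In an actual controller the same pattern could equally be applied to other wind-dependent parameters (power output, rotor speed, blade loads), and the curtailment could adjust a torque setpoint rather than a speed setpoint.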
Additionally, in several embodiments, when de-rating the wind turbine due to high variability in the monitored wind-dependent parameter, the turbine controller may be configured to take into account historical wind data associated with the yaw sector within which the nacelle is currently located to select an appropriate curtailed operating setpoint(s) for the turbine. Specifically, the yaw range of travel of the nacelle (e.g., a 360 degree circle) may be divided into a plurality of different yaw sectors. In such embodiments, the turbine controller may be provided with or may be configured to collect wind data associated with each wind sector. For example, wind data related to the variability in the wind speed, wind direction, wind gusts and/or turbulence intensity experienced by each yaw sector may be stored within the controller's memory. The turbine controller may then reference the historical wind data when selecting the curtailed operating setpoint(s) for the wind turbine. In particular, if the yaw sector within which the nacelle is currently located typically experiences rapidly changing wind conditions, the controller may set a setpoint limit(s) for the operating setpoint(s) that provides a relatively high operating margin in order to avoid overspeed and/or runaway conditions. However, if the historical wind data indicates that the yaw sector is typically not subjected to rapidly changing wind conditions, the controller may set a setpoint limit(s) for the operating setpoint(s) that provides a lower operating margin. FIG. 1 FIG. 2 Referring now to the drawings, illustrates a perspective view of one embodiment of a wind turbine 10 in accordance with aspects of the present subject matter. As shown, the wind turbine 10 generally includes a tower 12 extending from a support surface 14, a nacelle 16 mounted on the tower 12, and a rotor 18 coupled to the nacelle 16. The rotor 18 includes a rotatable hub 20 and at least one rotor blade 22 coupled to and extending outwardly from the hub 20. For example, in the illustrated embodiment, the rotor 18 includes three rotor blades 22. However, in an alternative embodiment, the rotor 18 may include more or less than three rotor blades 22. Each rotor blade 22 may be spaced about the hub 20 to facilitate rotating the rotor 18 to enable kinetic energy to be transferred from the wind into usable mechanical energy, and subsequently, electrical energy. For instance, the hub 20 may be rotatably coupled to an electric generator 24 () positioned within the nacelle 16 to permit electrical energy to be produced. FIG. 2 FIG. 2 The wind turbine 10 may also include a turbine control system or turbine controller 26 centralized within the nacelle 16 (or disposed at any other suitable location within and/or relative to the wind turbine 10). In general, the turbine controller 26 may comprise a computing device or any other suitable processing unit. Thus, in several embodiments, the turbine controller 26 may include suitable computer-readable instructions that, when implemented, configure the controller 26 to perform various different functions, such as receiving, transmitting and/or executing wind turbine control signals. As such, the turbine controller 26 may generally be configured to control the various operating modes (e.g., start-up or shut-down sequences) and/or components of the wind turbine 10. 
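The per-yaw-sector selection described above can be sketched in the same spirit. In the hypothetical example below, the 360 degree yaw range is split into twelve 30 degree sectors, each sector carries an invented historical turbulence score, and gustier sectors receive a deeper curtailment (a larger operating margin). The sector count, scores, and margins are illustrative assumptions only; in practice they would be derived from wind data collected for each sector over time.

```python
NUM_SECTORS = 12                      # 360 degrees split into 30-degree sectors
SECTOR_WIDTH_DEG = 360.0 / NUM_SECTORS

# Hypothetical historical turbulence score per sector (higher = gustier).
historical_turbulence = [0.08, 0.10, 0.22, 0.31, 0.27, 0.12,
                         0.09, 0.07, 0.11, 0.18, 0.25, 0.15]


def yaw_sector(yaw_position_deg: float) -> int:
    """Map a nacelle yaw position (degrees) to its yaw sector index."""
    return int((yaw_position_deg % 360.0) // SECTOR_WIDTH_DEG)


def curtailed_speed_limit(yaw_position_deg: float,
                          rated_speed_rpm: float = 1500.0) -> float:
    """Pick a curtailed speed limit with a larger margin for gustier sectors."""
    sector = yaw_sector(yaw_position_deg)
    turbulence = historical_turbulence[sector]
    # Historically calm sectors get a shallow de-rate; gusty sectors a deep one.
    margin = 0.05 if turbulence < 0.15 else 0.15
    return rated_speed_rpm * (1.0 - margin)
```

For instance, with these placeholder values, curtailed_speed_limit(95.0) returns 1275 rpm, because yaw positions near 95 degrees fall in a sector with a high historical turbulence score and therefore receive the larger operating margin.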
For example, the controller 26 may be configured to adjust the blade pitch or pitch angle of each rotor blade 22 (i.e., an angle that determines a perspective of the blade 22 with respect to the direction of the wind) about its pitch axis 28 in order to control the rotational speed of the rotor blade 22 and/or the power output generated by the wind turbine 10. For instance, the turbine controller 26 may control the pitch angle of the rotor blades 22, either individually or simultaneously, by transmitting suitable control signals to one or more pitch drives or pitch adjustment mechanisms 32 (FIG. 2) of the wind turbine 10. Similarly, the turbine controller 26 may be configured to adjust the yaw angle of the nacelle 16 (i.e., an angle that determines a perspective of the nacelle 16 relative to the direction of the wind) about a yaw axis 44 of the wind turbine 10. For example, the controller 26 may transmit suitable control signals to one or more yaw drive mechanisms 46 (FIG. 2) of the wind turbine 10 to automatically control the yaw angle. Referring now to FIG. 2, a simplified, internal view of one embodiment of the nacelle 16 of the wind turbine 10 shown in FIG. 1 is illustrated. As shown, a generator 24 may be disposed within the nacelle 16. In general, the generator 24 may be coupled to the rotor 18 for producing electrical power from the rotational energy generated by the rotor 18. For example, as shown in the illustrated embodiment, the rotor 18 may include a rotor shaft 38 coupled to the hub 20 for rotation therewith. The rotor shaft 38 may, in turn, be rotatably coupled to a generator shaft 40 of the generator 24 through a gearbox 42. As is generally understood, the rotor shaft 38 may provide a low speed, high torque input to the gearbox 42 in response to rotation of the rotor blades 22 and the hub 20. The gearbox 42 may then be configured to convert the low speed, high torque input to a high speed, low torque output to drive the generator shaft 40 and, thus, the generator 24. Additionally, as indicated above, the controller 26 may also be located within the nacelle 16 (e.g., within a control box or panel). However, in other embodiments, the controller 26 may be located within any other component of the wind turbine 10 or at a location outside the wind turbine 10. As is generally understood, the controller 26 may be communicatively coupled to any number of the components of the wind turbine 10 in order to control the operation of such components. For example, as indicated above, the controller 26 may be communicatively coupled to each pitch adjustment mechanism 32 of the wind turbine 10 (one for each rotor blade 22) via a pitch controller 30 to facilitate rotation of each rotor blade 22 about its pitch axis 28. Similarly, the controller 26 may be communicatively coupled to one or more yaw drive mechanisms 46 of the wind turbine 10 for adjusting the yaw angle or position of the nacelle 16. For instance, the yaw drive mechanism(s) 46 may be configured to adjust the yaw position by rotationally engaging a suitable yaw bearing 48 (also referred to as a slew ring or tower ring gear) of the wind turbine 10, thereby allowing the nacelle 16 to be rotated about its yaw axis 44. In addition, the wind turbine 10 may also include one or more sensors for monitoring various operating parameters of the wind turbine 10.
For example, in several embodiments, the wind turbine 10 may include one or more shaft sensors 60 configured to monitor one or more shaft-related operating parameters of the wind turbine 10, such as the loads acting on the rotor shaft 38 (e.g., thrust, bending and/or torque loads), the deflection of the rotor shaft 38 (e.g., including shaft bending), the rotational speed of the rotor shaft 38 and/or the like. The wind turbine 10 may also include one or more blade sensors 62 (FIGS. 1 and 2) configured to monitor one or more blade-related operating parameters of the wind turbine 10, such as the loads acting on the blades 22 (e.g., bending loads), the deflection of the blades 22 (e.g., including blade bending, twisting and/or the like), the vibration of the blades 22, the noise generated by the blades 22, the pitch angle of the blades 22, the rotational speed of the blades 22 and/or the like. Additionally, the wind turbine 10 may include one or more generator sensors 64 configured to monitor one or more generator-related operating parameters of the wind turbine 10, such as the power output of the generator 24, the rotational speed of the generator 24, the generator torque and/or the like. Moreover, the wind turbine 10 may also include various other sensors for monitoring numerous other turbine operating parameters. For example, as shown in FIG. 2, the wind turbine 10 may include one or more tower sensors 66 for monitoring various tower-related operating parameters, such as the loads acting on the tower 12, the deflection of the tower 12 (e.g., tower bending and/or twisting), tower vibrations and/or the like. In addition, the wind turbine 10 may include one or more wind sensors 68 for monitoring one or more wind parameters associated with the wind turbine 10, such as the wind speed, the wind direction, wind gusts, the turbulence or turbulence intensity of the wind and/or the like. Similarly, the wind turbine 10 may include one or more hub sensors 70 for monitoring various hub-related operating conditions (e.g., the loads transmitted through the hub 20, hub vibrations and/or the like), one or more nacelle sensors 72 for monitoring one or more nacelle-related operating conditions (e.g., the loads transmitted through the nacelle 16, nacelle vibrations, the yaw angle or position of the nacelle 16 and/or the like) and/or one or more gearbox sensors 74 for monitoring one or more gearbox-related operating conditions (e.g., gearbox torque, gearbox loading, rotational speeds within the gearbox and/or the like). Of course, the wind turbine 10 may further include various other suitable sensors for monitoring any other suitable operating conditions of the wind turbine 10. It should be appreciated that the various sensors described herein may correspond to pre-existing sensors of a wind turbine 10 and/or sensors that have been specifically installed within the wind turbine 10 to allow one or more operating parameters to be monitored. It should also be appreciated that, as used herein, the term "monitor" and variations thereof indicates that the various sensors of the wind turbine 10 may be configured to provide a direct measurement of the operating parameters being monitored or an indirect measurement of such operating parameters. Thus, the sensors may, for example, be used to generate signals relating to the operating parameter being monitored, which can then be utilized by the controller 26 to determine the actual operating parameters.
For instance, measurement signals provided by the generator sensor(s) 64 that measure the power output of the generator 24 along with the measurement signals provided by the blade sensor(s) 62 that measure the pitch angle of the rotor blades 22 may be used by the controller 26 to estimate one or more wind-related parameters associated with the wind turbine 10, such as the wind speed. Referring now to FIG. 3, a block diagram of one embodiment of suitable components that may be included within the controller 26 is illustrated in accordance with aspects of the present subject matter. As shown, the controller 26 may include one or more processor(s) 76 and associated memory device(s) 78 configured to perform a variety of computer-implemented functions (e.g., performing the methods, algorithms, calculations and the like disclosed herein). As used herein, the term "processor" refers not only to integrated circuits referred to in the art as being included in a computer, but also refers to a controller, a microcontroller, a microcomputer, a programmable logic controller (PLC), an application specific integrated circuit, and other programmable circuits. Additionally, the memory device(s) 78 may generally comprise memory element(s) including, but not limited to, computer readable medium (e.g., random access memory (RAM)), computer readable non-volatile medium (e.g., a flash memory), a floppy disk, a compact disc-read only memory (CD-ROM), a magneto-optical disk (MOD), a digital versatile disc (DVD) and/or other suitable memory elements. Such memory device(s) 78 may generally be configured to store suitable computer-readable instructions that, when implemented by the processor(s) 76, configure the controller 26 to perform various functions including, but not limited to, implementing the control algorithm(s) 100 and/or method(s) 200 disclosed herein with reference to FIGS. 4 and 6. Additionally, the controller 26 may also include a communications module 80 to facilitate communications between the controller(s) 26 and the various components of the wind turbine 10. For instance, the communications module 80 may include a sensor interface 82 (e.g., one or more analog-to-digital converters) to permit the signals transmitted by the sensor(s) 60, 62, 64, 66, 68, 70, 72, 74 to be converted into signals that can be understood and processed by the processor(s) 76. Referring now to FIG. 4, a diagram of one embodiment of a control algorithm 100 that may be implemented by a turbine controller 26 in order to control the operation of a wind turbine 10 is illustrated in accordance with aspects of the present subject matter. As indicated above, the disclosed algorithm 100 may, in several embodiments, be advantageously applied when a wind turbine 10 is subject to one or more substantially varying wind parameters, such as a wind speed, wind direction, wind gust and/or turbulence intensity that varies significantly over time. In particular, the algorithm 100 described herein may allow for the operation of a wind turbine 10 to be de-rated or curtailed in an efficient and effective manner in instances in which the local wind parameter(s) for the turbine 10 are changing significantly within a relatively short period of time. Such de-rating or curtailment of the wind turbine 10 may allow for overspeed and/or runaway conditions to be avoided despite the occurrence of sudden or rapid changes in the wind parameter(s) associated with the wind turbine 10.
As shown in FIG. 4, the turbine controller 26 may be configured to receive one or more input signals associated with one or more monitored operating parameters of the wind turbine 10, such as one or more wind parameters 102 and one or more wind-dependent parameters 104. For example, the disclosed algorithm 100 will generally be described herein with reference to the turbine controller 26 receiving input signals associated with a monitored wind speed for the wind turbine 10. However, in other embodiments, the turbine controller 26 may be configured to monitor any other suitable wind parameters 102 associated with the wind turbine 10, such as the wind direction, the turbulence intensity of the wind, wind gusts and/or the like. Additionally, the disclosed algorithm 100 will generally be described herein with reference to the turbine controller 26 receiving input signals associated with a monitored generator speed. However, in other embodiments, the turbine controller 26 may be configured to monitor any other suitable wind-dependent parameter(s) 104 that provides an indication of the variability in one or more of the wind parameter(s), such as the power output of the wind turbine 10, the generator torque and/or the like. In several embodiments, the controller 26 may be configured to apply one or more suitable filters or S-functions (as shown at box 106) to the monitored wind parameter(s) 102. For example, as indicated above, the turbine controller 26 may be configured to estimate the wind speed based on one or more other monitored operating parameters of the wind turbine 10, such as by estimating the wind speed based on the pitch angle of the rotor blades 22 and the power output of the generator 24. In such embodiments, the estimated wind speed provided by the turbine controller 26 may be highly variable. Thus, in several embodiments, application of the corresponding filter(s) and/or S-function(s) may allow for the variations in the estimated wind speed to be accommodated within the system. For example, in one embodiment, the controller 26 may be configured to input the monitored wind parameter(s) 102 into a low-pass filter. As is generally understood, the low-pass filter may be configured to filter out the high frequency signals associated with the monitored wind parameter(s) 102, thereby providing more reliable data. For instance, the low-pass filter may be configured to pass low-frequency signals associated with the monitored wind parameter(s) 102 but attenuate (i.e., reduce the amplitude of) signals with frequencies higher than a given cutoff frequency. Additionally, in one embodiment, the filtered or unfiltered wind parameter(s) 102 may be input into an S-function to smooth or stabilize the input signals associated with the wind parameter(s) 102. As is generally understood, the S-function may correspond to a mathematical equation having an S-shape. For example, in one embodiment, the S-function may be represented by: y = k/(1 + a*exp(b*x)), wherein k, a, and b are parameters of the S-curve, x is the input, and y is the output. Of course, it should be understood by those skilled in the art that the S-function may also be any other suitable mathematical function, e.g., a Sigmoid function. Referring still to FIG. 4, the turbine controller 26 may also be configured to calculate a variance in the wind-dependent parameter(s) 104 over time (indicated at box 108), with the variance generally being indicative of the variability in the monitored wind parameter(s).
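To make the filtering step at box 106 concrete, here is a minimal sketch (the filter coefficient and S-curve parameters are assumptions for illustration, not values from this disclosure) of a first-order low-pass filter and the S-function y = k/(1 + a*exp(b*x)) described above:

import math

class LowPassFilter:
    """First-order low-pass filter (exponential smoothing) for a noisy signal."""
    def __init__(self, alpha):
        self.alpha = alpha   # 0 < alpha <= 1; smaller values give heavier smoothing
        self.state = None

    def update(self, sample):
        if self.state is None:
            self.state = sample
        self.state += self.alpha * (sample - self.state)
        return self.state

def s_function(x, k=1.0, a=1.0, b=-1.0):
    """S-shaped mapping y = k / (1 + a*exp(b*x)); k, a, and b are tunable parameters."""
    return k / (1.0 + a * math.exp(b * x))

# Example: smooth a noisy estimated wind speed before it is used downstream.
wind_filter = LowPassFilter(alpha=0.1)       # cutoff tuning is an assumption
raw_estimates = [8.2, 11.7, 7.9, 12.4, 9.1]  # m/s, hypothetical estimator output
smoothed = [wind_filter.update(v) for v in raw_estimates]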
Specifically, fluctuations in one or more of the wind parameter(s) 102 associated with the wind turbine 10 may result in corresponding variations in one or more of the wind-dependent parameters 104. Thus, by calculating the variance in the monitored wind-dependent parameter(s) 104 over time, such variance may provide a strong indication of the instability or variability in the associated wind parameter(s) 102. In several embodiments, the variance calculated by the turbine controller 26 may correspond to a standard deviation of the wind-dependent parameter(s) 104 occurring across a given time period. For example, the generator speed may be continuously monitored and stored within the controller's memory 78. The stored data may then be utilized to calculate the standard deviation of the generator speed across a relatively short period of time (e.g., 5 seconds). A high standard deviation may indicate that one or more of the wind parameter(s) 102 is rapidly changing whereas a low standard deviation may indicate that the wind parameter(s) 102 is remaining relatively stable over the specific time period. Additionally, the turbine controller 26 may, in several embodiments, be configured to apply one or more adaptive filters (not shown) to smooth and/or stabilize the calculated variance 108 so as to improve the overall system stability. In such embodiments, the adaptive filter(s) may correspond to any suitable type of filter(s), such as a low-pass filter, high-pass filter and/or band-pass filter. As shown in FIG. 4, based on the calculated variance and the wind parameter(s) input, the controller 26 may be configured to select or calculate one or more operating setpoints for the wind turbine 10, such as a generator speed setpoint and/or a generator torque setpoint. In doing so, the turbine controller 26 may be configured (at box 110) to compare the monitored wind parameter(s) to a predetermined wind parameter threshold and the calculated variance to a predetermined variance threshold in order to determine whether to apply the normal or non-curtailed operating setpoints typically provided for the wind turbine (indicated at box 112) or to instead apply one or more curtailed operating setpoints so as to de-rate the wind turbine 10 (indicated at box 114). Specifically, in several embodiments, the threshold values for the wind parameter and variance thresholds may be selected such that, when each input parameter exceeds its corresponding threshold, it is indicative of operating conditions in which there is a high likelihood that the wind turbine 10 may experience an overspeed or runaway condition. In such instance, the turbine controller 26 may be configured to select a reduced operating setpoint(s) that curtails or de-rates the operation of the wind turbine 10, thereby allowing the turbine 10 to ride through the unstable operating conditions with greater safety or operating margins. For example, in a particular embodiment, a predetermined variance threshold may be utilized that corresponds to a standard deviation value for the generator speed above which it can be inferred that the wind turbine 10 is being subjected to dynamic, rapidly changing wind conditions. Similarly, in such an embodiment, the predetermined wind parameter threshold may, for example, correspond to a wind speed value above which there is an increased likelihood for the wind turbine 10 to be placed in a potential overspeed or runaway condition given the dynamic, rapidly changing wind conditions.
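A minimal sketch of the comparison at box 110 (the threshold values are assumptions; the disclosure leaves them to the particular embodiment) might look like this:

def choose_operating_mode(wind_speed, speed_std,
                          wind_speed_threshold=12.0,   # m/s, assumed value
                          variance_threshold=8.0):     # rpm, assumed value
    """Box 110: apply curtailed setpoints only when both inputs exceed their thresholds."""
    if wind_speed > wind_speed_threshold and speed_std > variance_threshold:
        return "curtailed"       # box 114: de-rate to ride through unstable conditions
    return "non_curtailed"       # box 112: keep the normal operating setpoints

As noted in the next passage, a maximum threshold could also be added so that curtailment is only applied when the wind parameter falls within a given range.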
As such, when the standard deviation for the generator speed exceeds the corresponding variance threshold and the wind speed exceeds the corresponding wind speed threshold, the turbine 10 may be de-rated by applying a reduced or curtailed operating setpoint(s) in a manner so as to prevent the overspeed/runaway condition. For instance, the generator speed setpoint may be reduced in a manner that provides for an increased speed margin for the wind turbine 10, thereby allowing the turbine 10 to continue to be safely operated despite the dynamic and varying wind conditions. It should be appreciated that, in several embodiments, the threshold values associated with the variance and the wind parameter correspond to minimum threshold values. Additionally, in several embodiments, a maximum threshold value may also be associated with the variance and/or wind parameter for determining when to apply the curtailed operating setpoint(s). For example, in a particular embodiment, it may be desired that the monitored wind parameter (e.g., wind speed) fall within a given range of values (e.g., a range bound by a predetermined minimum threshold and a predetermined maximum threshold) prior to applying the curtailed operating setpoint(s). Additionally, as shown in FIG. 4, the turbine controller 26 may be configured to analyze yaw sector data associated with the wind turbine 10 (indicated at box 116) when selecting a curtailed operating setpoint(s) for the turbine 10. Specifically, in several embodiments, the yaw travel range for the nacelle 16 may be divided into a plurality of yaw sectors, with each yaw sector corresponding to an angular section of the entire travel range. For example, FIG. 5 illustrates a plurality of yaw sectors 140 defined for a nacelle 16 having a 360 degree yaw travel range (indicated by circle 142). As shown in FIG. 5, the yaw travel range 142 has been divided into sixteen different yaw sectors 140, with each yaw sector 140 corresponding to a 22.5 degree angular section of the travel range 142. However, in other embodiments, the yaw travel range 142 may be divided into any other suitable number of yaw sectors 140 corresponding to any suitable angular section of the overall travel range. For example, in one embodiment, each yaw sector 140 may correspond to an angular section of the yaw travel range ranging from about 10 degrees to about 30 degrees, such as from about 15 degrees to about 25 degrees and all other subranges therebetween. For each yaw sector 140 defined for the wind turbine 10, the turbine controller 26 may be configured to store historical wind data corresponding to one or more monitored wind parameter(s) for the yaw sector. For example, historical wind speed measurements, wind gust measurements, wind direction measurements, turbulence intensity measurements and/or the like may be collected and stored within the controller's memory 78 for each yaw sector 140. As a result, it may be determined whether a given yaw sector 140 is typically subjected to varying wind conditions based on its historical wind data. For example, the historical wind data may indicate that a particular yaw sector 140 is subject to recurring wind gusts or systematically experiences sudden shifts in wind direction. In several embodiments, the historical wind data may be utilized to define one or more setpoint limits for the curtailed operating setpoint(s).
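The yaw sector bookkeeping described above can be sketched as follows (a simplified illustration: the sixteen 22.5 degree sectors follow FIG. 5, while the per-sector statistics and their names are assumptions):

NUM_YAW_SECTORS = 16                          # per FIG. 5: 360 / 16 = 22.5 degrees each
SECTOR_WIDTH_DEG = 360.0 / NUM_YAW_SECTORS

def yaw_sector_index(nacelle_yaw_deg):
    """Map the current nacelle yaw position to one of the yaw sectors 140."""
    return int((nacelle_yaw_deg % 360.0) // SECTOR_WIDTH_DEG)

# Hypothetical per-sector history accumulated in the controller's memory 78.
sector_history = {i: {"mean_turbulence_intensity": 0.0, "gust_events": 0, "samples": 0}
                  for i in range(NUM_YAW_SECTORS)}

def record_wind_sample(nacelle_yaw_deg, turbulence_intensity, is_gust):
    """Update the running statistics for the sector the nacelle currently occupies."""
    stats = sector_history[yaw_sector_index(nacelle_yaw_deg)]
    stats["samples"] += 1
    stats["gust_events"] += int(is_gust)
    # Incremental running mean of the turbulence intensity for this sector.
    stats["mean_turbulence_intensity"] += (
        (turbulence_intensity - stats["mean_turbulence_intensity"]) / stats["samples"]
    )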
Specifically, as indicated above, the controller 26 may be communicatively coupled to one or more sensors (e.g., a nacelle sensor(s) 72) that allow for the yaw angle or position of the nacelle 16 to be monitored, which may then allow the controller 26 to identify the yaw sector 140 within which the nacelle 16 is currently located (e.g., the current location of the nacelle 16 is indicated by arrow 144 in FIG. 5, such that the nacelle 16 is currently located within the cross-hatched yaw sector 140). The turbine controller 26 may then reference the historical data stored for the relevant yaw sector 140 to determine whether such yaw sector 140 typically experiences substantially varying wind conditions. If the data indicates that the yaw sector 140 is typically not subjected to rapidly changing wind conditions, the turbine controller 26 may infer that the high variance calculated for the wind-dependent parameter(s) 104 may be due to another factor(s) or may simply correspond to an atypical operating event. In such instance, the setpoint limit(s) selected for the curtailed operating setpoint(s) may correspond to a relatively high operating setpoint(s) given that the variance is probably not due to recurring variations in the wind conditions. For example, the setpoint limit for the generator speed setpoint may be defined as a speed value that is only slightly less than the generator speed setpoint that would otherwise be commanded if the turbine controller 26 was utilizing its normal or non-curtailed operating setpoints. Alternatively, if the data indicates that the yaw sector 140 has historically been subjected to rapidly changing wind conditions, the turbine controller 26 may infer that the high variance calculated for the wind-dependent parameter(s) 104 is due to the varying wind conditions. In such instance, the setpoint limit(s) selected for the curtailed operating setpoint(s) may correspond to a lower operating setpoint(s). For example, the setpoint limit for the generator speed setpoint may be defined as a speed value that is significantly less than the generator speed setpoint that would otherwise be used if the turbine controller 26 was commanding its normal or non-curtailed operating setpoints, thereby allowing for a larger speed margin to be provided for the wind turbine 10 given the increased likelihood of substantially varying wind conditions. Referring back to FIG. 4, in several embodiments, the turbine controller 26 may also be configured to apply one or more suitable filters or S-functions (indicated at box 118) to the operating setpoint(s) determined by the controller 26 in order to smooth and stabilize the operation of the wind turbine 10 when transitioning between normal and curtailed operation. For example, in one embodiment, a low-pass filter may be utilized to limit the rate at which the wind turbine 10 is de-rated when transitioning from the use of non-curtailed operating setpoints to the use of curtailed operating setpoints. Similarly, the low-pass filter may also be utilized to limit the rate at which the wind turbine 10 is up-rated when transitioning operation back from the use of curtailed operating setpoints to the use of non-curtailed operating setpoints. As shown in FIG. 4, the turbine controller 26 may then command (at box 120) that the wind turbine 10 be operated at the resulting operating setpoint(s). For example, the turbine controller 26 may command that the wind turbine 10 be operated at a given generator speed setpoint and a given generator torque setpoint.
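Tying the sector history to the setpoint limit and to the smoothing at box 118, a simplified sketch might be (all numeric values are assumptions, and a plain rate limiter stands in for the low-pass filter described above):

def curtailed_setpoint_limit(sector_stats,
                             normal_setpoint_rpm=1600.0,   # assumed non-curtailed value
                             gusty_ti_threshold=0.18):     # assumed turbulence cutoff
    """Pick a setpoint limit: a larger speed margin for historically gusty sectors."""
    if sector_stats["mean_turbulence_intensity"] > gusty_ti_threshold:
        return 0.85 * normal_setpoint_rpm   # well below normal: large speed margin
    return 0.97 * normal_setpoint_rpm       # only slightly below the normal setpoint

def rate_limited_setpoint(previous_rpm, target_rpm, max_step_rpm=5.0):
    """Limit how quickly the commanded setpoint moves between normal and curtailed values."""
    step = max(-max_step_rpm, min(max_step_rpm, target_rpm - previous_rpm))
    return previous_rpm + step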
In doing so, the turbine controller 26 may be configured to implement any suitable control action that allows for the wind turbine 10 to be operated at the commanded setpoints. For instance, the controller 26 may de-rate or up-rate the wind turbine 10, as the case may be, by commanding that one or more of the rotor blades 22 be pitched about its pitch axis 28. As indicated above, such control of the pitch angle of each rotor blade 22 may be achieved by transmitting suitable control commands to each pitch adjustment mechanism 32 of the wind turbine 10. In other embodiments, the controller 26 may implement any other suitable control action in order to de-rate or up-rate the wind turbine 10 to the commanded setpoints, such as by modifying the torque demand on the generator 24 (e.g., by transmitting a suitable control command to the associated power converter (not shown) in order to modulate the magnetic flux produced within the generator 24) or by yawing the nacelle 16 to change the angle of the nacelle 16 relative to the direction of the wind. Referring now to FIG. 6, a flow diagram of one embodiment of a method 200 for controlling the operation of a wind turbine is illustrated in accordance with aspects of the present subject matter. In general, the method 200 will be described herein with reference to implementing aspects of the control algorithm 100 described above with reference to FIG. 4. However, in other embodiments, the method 200 may be utilized in connection with any other suitable computer-implemented algorithm. Additionally, although FIG. 6 depicts steps performed in a particular order for purposes of illustration and discussion, the methods discussed herein are not limited to any particular order or arrangement. One skilled in the art, using the disclosures provided herein, will appreciate that various steps of the methods disclosed herein can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure. As shown, at (202), the method 200 includes monitoring a current yaw position of the nacelle. As indicated above, by monitoring the yaw position of the nacelle 16, the turbine controller 26 may be configured to determine the yaw sector 140 within which the nacelle 16 is currently located. Additionally, at (204), the method 200 includes monitoring at least one wind-dependent parameter (e.g., generator speed) and at least one wind parameter of the wind turbine (e.g., wind speed). Moreover, at (206), the method 200 includes determining a variance of the wind-dependent parameter(s) over time. For example, as indicated above, the controller 26 may be configured to calculate a standard deviation of the generator speed occurring over a relatively short period of time, which may be indicative of the variability of the monitored wind parameter across such time period. Further, at (208), the method 200 includes determining whether the calculated variance exceeds a predetermined variance threshold and whether the monitored wind parameter exceeds a predetermined wind parameter threshold. If so, at (210), the method 200 includes determining at least one curtailed operating setpoint for the wind turbine based at least in part on historical wind data for the yaw sector associated with the current yaw position of the nacelle.
Specifically, as indicated above, the turbine controller 26 may be configured to take into account the historical wind data for the yaw sector 140 within which the nacelle 16 is currently located in order to determine whether such yaw sector 140 typically experiences rapidly changing wind conditions. If so, the controller 26 may be configured to establish a lower setpoint limit(s) for the operating setpoint(s) in order to provide an increased operating or safety margin for the wind turbine 10. Alternatively, if the yaw sector 140 is not typically subjected to rapidly changing wind conditions, the controller 26 may be configured to establish a higher setpoint limit(s) for the operating setpoint(s), such as a setpoint limit(s) near the normal operating setpoint(s) typically set for the wind turbine 10. Additionally, at (212), the method 200 includes controlling the operation of the wind turbine based on the curtailed operating setpoint(s). This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the
“American mythologist Joseph Campbell (d. 1987) suggested that old myths are no longer satisfying the needs of modern humankind. Those myths served as truth to the people of that era. However, amazing new discoveries about the universe, our natural environment and our minds and bodies are constantly nudging us toward a more realistic understanding of our place in the cosmos. The realities of the natural universe are the same for all cultures. From early childhood exposures we develop constructs of myths or gods. As we mature through personal experiences and expanding knowledge, we strive to keep our beliefs in sync with new discoveries and developing ideas.
http://epubco.com/shop/products/wisdom-for-a-new-era-balancing-nature-science-and-belief-21-contemporary-dialogues-by-benjamin-c-godfrey/
You can check out the RAW image of the pyramid here. Ufologists and researchers have already started speculating about what this artifact is, and according to many it might actually be the cornerstone of a much larger pyramid buried under the Martian sand. Even though many people suggest that this is just another coincidental rock formation on the red planet, others believe that the perfect geometry of the structure suggests it is an artificially created construction and is not in any way pareidolia or a trick of light and shadow. Is this pyramid among the clearest photographic evidence that points toward the existence of artificial structures on Mars built by an earlier civilization? Many believe it is. The “perfect” symmetry of the structure is outstanding, and the “pyramid” stands out from the rest of the Martian rocks. The angles and lines of the pyramid are believed to be a telltale sign that differentiates a natural formation from a manmade one. The image was taken by the Curiosity rover's Mastcam on Sol 978, May 7th on Earth. Despite the remarkable symmetry of the pyramid, none of the Curiosity rover's subsequent pictures, taken at intervals of a couple of minutes or hours, includes the mysterious Martian pyramid. It seems that the team in charge of Curiosity deliberately chose not to zoom in on the mysterious object on the Martian surface for further imagery. If they did in fact take more images of the mysterious “Martian pyramid,” these were not released to the general public. Many believe that we have not seen everything that actually happens on the Red Planet. Former NASA employees have come forward suggesting that the agency operates under a veil of secrecy and that much of what happens on the red planet has remained highly confidential information kept secret within a very tight circle inside the agency. Lately there have been many important discoveries on Mars which could point to the existence of life on the red planet in the distant past. Is it possible that this “newly discovered” pyramid is the crucial piece of evidence that has been missed all along? Is it possible that the pyramid is in fact the product of extraterrestrial civilizations on the red planet in the distant past? Many suggest that the evidence of intelligent life on the red planet has been abundant, yet we have all decided to ignore it for some reason. Will the discovery of a pyramid on Mars spark the interest of other researchers? Will it change the way we look at the red planet? There are many questions surrounding the red planet that have yet to be answered; now, one of these questions surely is whether this mysterious structure, which clearly resembles the great pyramid of Giza, is the product of ancient civilizations present on the red planet.
http://mysteriousearth.net/2017/08/09/nasas-curiosity-rover-finds-a-pyramid-on-mars/
This article discusses the use of Artificial Intelligence to ease the replacement of elements in pictures and videos. Introduction Using Artificial Intelligence to ease the replacement of elements in pictures and videos is of great importance in many situations (like video editing, video segmentation, video enhancement, media watermarking, etc.). These operations have long been considered tedious and complicated. However, in recent years, many works have shown how to employ artificial intelligence to handle these tasks. We chose not to go into the details of how these techniques work, as we are focusing on demystifying how AI is currently used to help us transform images the way we want, more efficiently and without needing technical editing skills. However, we are providing links to interesting articles for each of these use-cases in case you want to learn more! 🎨Note: This article is part of our series on content creation. We hope to provide you with a better idea about the uses of AI for content creators, across a wide range of domains. Don't hesitate to check our other articles! Background removal and image relighting In this first example, AI helps with the removal of the background in pictures. It is a common task when you need to replace the backdrop in an image or use an element of the picture somewhere else. For a long time, this operation had to be performed manually. It required knowledge and experience with image-manipulation software to manually cut out the whole face pixel by pixel to extract it from the picture. Thanks to Artificial Intelligence, it is now possible to automatically extract the foreground of a picture within a few seconds and with promising results! No more tedious manual labor is needed. Upload your image, and an AI will automatically detect which objects to extract and cut them out with near perfection! Content creators can apply the same process to videos. Background removal of videos It is common in the cinematographic industry to use a green screen to put actors in another context. Actors perform in front of a vast green background. Then, we can easily change the green pixels of the videos for a new environment. However, it is now possible to perform such background replacement on videos that are already in context, without using green screens. The substitution is done with machine learning-based algorithms that learn how to separate the people and objects of interest from what appears to be the background. View example here The process is similar to the one used to remove the background in pictures; however, we can mention two important points: - The algorithm can benefit from the fact that the video has multiple frames to better understand where the foreground is, which helps it produce better results by having access to more information. - The algorithm has to maintain temporal coherence throughout the video, which means that it has to consider multiple neighboring images from the video to provide good results. Otherwise, the result may look like some parts of the foreground are blinking, as they are sometimes detected as part of the foreground and sometimes not. Now, we have to cover one more problem that may appear, and artificial intelligence can help here as well. Extracting yourself from an image to put yourself on a different background is fantastic, especially when you can do it with a single click in a split second.
But what happens if the lighting is very different in the image where you want to put yourself? Well, it results in very unrealistic-looking images. You are simply taking your picture in a specific lighting environment and pasting it into a different scene, expecting it to look natural. Professionals could manually adjust the lighting, shadows, and highlights to make you look like you were there. But to make it available to anyone, artificial intelligence can also help. This task is called image relighting. Image relighting Google recently shared an exciting research paper achieving excellent results: it can automatically adapt the lighting of human foregrounds to new scenes. This relighting is challenging because the algorithm has to understand how the foreground reacts to the surrounding light and define a coherent lighting scheme with the new background. Such an understanding of how the foreground reacts to surrounding light is complex for a computer that only has the information of 2-dimensional images instead of the actual 3D world. We humans can easily imagine the depth of different objects based on our visual perception and memories. That is what we train the algorithms to achieve: replicate how humans understand a scene by understanding depth, the different items, lighting sources, etc. We are excited to see and participate in more of such advances in the future! We've now seen the extraction of objects from images or videos and how to adapt the lighting to a new background. But what if you would like to remove the friend who photobombed your profile picture where you look incredible? Artificial intelligence can also help with this, with a process called image inpainting. Image inpainting Image inpainting is a process similar to background removal. Still, instead of extracting a foreground to do something else with it, it removes an element from a picture and fills in the missing part. With traditional techniques, you would need to manually extract this part and create a plausible replacement with the appropriate textures or objects. You could also manually draw something that would seem realistic as the replacement. Fortunately, we no longer need to go through such a tedious process over pictures. Artificial Intelligence allows us to automate this process as well! The algorithm will understand what's happening in the image and create a plausible replacement for the element we want to remove, considering specific textures, borders, and objects to make the new picture as realistic as possible. (Image: Stalin-era inpainting, showing that inpainting is by no means a new concept.) A similar application of image inpainting is to "extend the canvas" to make a picture bigger. Indeed, it uses the same process of understanding the image and adding information that would fit well with it. Image inpainting can be helpful for anyone to change the aspect ratio of images intelligently. This operation can also remove undesired objects from your pictures. Of course, this algorithm is just like the image upscaling one that we also covered on our blog. It has the same underlying problem: it cannot know what was really behind an object you removed. Thus, the reconstruction will be a simple (but good-looking) guess and no more than that. Unfortunately, we cannot dive into a picture and check behind objects as of now. But this might come much faster than we think! Conclusion These were some of the most exciting applications of AI we wanted to share, covering image and video editing.
Still, many more are out there, and many more are to come. We invite you to read our other articles about content creation. They are highly related to this one, using similar algorithms with different media types, all with the common goal of helping creators produce their vision more efficiently.
https://designstripe.com/blog/artificial-intelligence-image-and-video-editing
Hemochromatosis is the most common progressive (and sometimes fatal) genetic disease in people of European descent. Hemochromatosis is a disease state characterized by an inappropriate increase in intestinal iron absorption. The increase can result in deposition of iron in organs such as the liver, pancreas, heart, and pituitary. Such iron deposition can lead to tissue damage and functional impairment of the organs. In some populations, 60-100% of cases are attributable to homozygosity for a missense mutation at C282Y in the Histocompatibility iron (Fe) loading (HFE) gene, a major histocompatibility (MHC) non-classical class I gene located on chromosome 6p. Some patients are compound heterozygotes for C282Y and another mutation at H63D. The invention is based on the discovery of novel mutations which are associated with aberrant iron metabolism, absorption, or storage, or, in advanced cases, clinical hemochromatosis. Accordingly, the invention features a method of diagnosing an iron disorder, e.g., hemochromatosis, or a genetic susceptibility to developing such a disorder, in a mammal by determining the presence of a mutation in exon 2 of an HFE nucleic acid. The mutation is not a C→G missense mutation at position 187 of SEQ ID NO:1, which leads to a H63D substitution. The nucleic acid is an RNA or DNA molecule in a biological sample taken from the mammal, e.g., a human patient, to be tested. The presence of the mutation is indicative of the disorder or a genetic susceptibility to developing it. An iron disorder is characterized by an aberrant serum iron level, ferritin level, or percent saturation of transferrin compared to the level associated with a normal control individual. An iron overload disorder is characterized by abnormally high iron absorption compared to a normal control individual. Clinical hemochromatosis is defined by an elevated fasting transferrin saturation level of greater than 45% saturation. For example, the mutation is a missense mutation at nucleotide 314 of SEQ ID NO:1, such as 314C, which leads to the expression of a mutant HFE gene product with amino acid substitution I105T. The I105T mutation is located in the α1 helix of the HFE protein and participates in a hydrophobic pocket (the “F” pocket). The alpha helix structure of the α1 domain spans residues S80 to N108, inclusive. The I105T mutation is associated with an iron overload disorder. (SEQ ID NO:1; GENBANK® Accession No. U60319; residues 1-22 = leader sequence; α1 domain underlined; residues 63, 65, 93, and 105 indicated in bold type.) Other mutations include one at nucleotide 277 of SEQ ID NO:1, e.g., 277C, which leads to expression of mutant HFE gene product G93R, and one at nucleotide 193 of SEQ ID NO:1, e.g., 193T, which leads to expression of mutant HFE gene product S65C. Any biological sample containing an HFE nucleic acid or gene product is suitable for the diagnostic methods described herein. For example, the biological sample to be analyzed is whole blood, cord blood, serum, saliva, buccal tissue, plasma, effusions, ascites, urine, stool, semen, liver tissue, kidney tissue, cervical tissue, cells in amniotic fluid, cerebrospinal fluid, hair or tears. Prenatal testing can be done using methods used in the art, e.g., amniocentesis or chorionic villus sampling. Preferably, the biological sample is one that can be non-invasively obtained, e.g., cells in saliva or from hair follicles.
The assay is also used to screen individuals prior to donating blood to blood banks and to test organ tissue, e.g., a donor liver, prior to transplantation into a recipient patient. Both donors and recipients are screened. In some cases, a nucleic acid is amplified prior to detecting a mutation. The nucleic acid is amplified using a first oligonucleotide primer which is 5′ to exon 2 and a second oligonucleotide primer which is 3′ to exon 2. To detect a mutation at nucleotide 314 of SEQ ID NO:1, a first oligonucleotide primer which is 5′ to nucleotide 314 and a second oligonucleotide primer which is 3′ to nucleotide 314 are used in a standard amplification procedure such as the polymerase chain reaction (PCR). To amplify a nucleic acid containing nucleotide 277 of SEQ ID NO:1, a first oligonucleotide primer which is 5′ to nucleotide 277 and a second oligonucleotide primer which is 3′ to nucleotide 277 are used. Similarly, a nucleic acid containing nucleotide 193 of SEQ ID NO:1 is amplified using primers which flank that nucleotide. For example, for nucleotide 277, the first primer has a nucleotide sequence of SEQ ID NO:3 and said second oligonucleotide primer has a nucleotide sequence of SEQ ID NO:4, or the first primer has a nucleotide sequence of SEQ ID NO:15 and said second oligonucleotide primer has a nucleotide sequence of SEQ ID NO:16. Table 3, below, shows examples of primer pairs for amplification of nucleic acids in exons and introns of the HFE gene. Mutations in introns of the HFE gene have now been associated with iron disorders and/or hemochromatosis. By “exon” is meant a segment of a gene the sequence of which is represented in a mature RNA product, and by “intron” is meant a segment of a gene the sequence of which is not represented in a mature RNA product. An intron is a part of a primary nuclear transcript which is subsequently spliced out to produce a mature RNA product, i.e., an mRNA, which is then transported to the cytoplasm. A method of diagnosing an iron disorder or a genetic susceptibility to developing the disorder is carried out by determining the presence or absence of a mutation in an intron of HFE genomic DNA in a biological sample. The presence of the mutation is indicative of the disorder or a genetic susceptibility to developing the disorder. The presence of a mutation in an intron is a marker for an exon mutation, e.g., a mutation in intron 4, e.g., at nucleotide 6884 of SEQ ID NO:27, is associated with the S65C mutation in exon 2. A mutation in intron 5, e.g., at nucleotide 7055 of SEQ ID NO:27, is associated with hemochromatosis. In some cases, intron mutations may adversely affect proper splicing of exons or may alter regulatory signals. Preferably, the intron 4 mutation is 6884C and the intron 5 mutation is 7055G. To amplify a nucleic acid molecule containing nucleotide 6884 or 7055, primers which flank that nucleotide, e.g., those described in Table 3, are used according to standard methods. Nucleic acid-based diagnostic methods may or may not include a step of amplification to increase the number of copies of the nucleic acid to be analyzed.
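As a simple illustration of a position-based check for the exon 2 variants named above (314C/I105T, 277C/G93R, 193T/S65C), the sketch below inspects a patient-derived sequence assumed to be already aligned to the SEQ ID NO:1 coordinate system; the reference sequence itself is not reproduced here, and the function is illustrative only, not a validated assay:

# Exon 2 variants named in the text: 1-based position in SEQ ID NO:1 -> (variant base, protein change).
EXON2_VARIANTS = {
    314: ("C", "I105T"),
    277: ("C", "G93R"),
    193: ("T", "S65C"),
}

def call_exon2_variants(aligned_patient_seq):
    """Report which of the listed variants are present in a patient-derived sequence
    already aligned to SEQ ID NO:1 coordinates (illustrative sketch only)."""
    found = []
    for position, (variant_base, protein_change) in EXON2_VARIANTS.items():
        if len(aligned_patient_seq) >= position and \
                aligned_patient_seq[position - 1].upper() == variant_base:
            found.append(protein_change)
    return found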
To detect a mutation in intron 4, a patient-derived nucleic acid may be amplified using a first oligonucleotide primer which is 5′ to intron 4 and a second oligonucleotide primer which is 3′ to intron 4, and to detect a mutation in intron 5, the nucleic acid may be amplified using a first oligonucleotide primer which is 5′ to intron 5 and a second oligonucleotide primer which is 3′ to intron 5 (see, e.g., Table 3). In addition to nucleic acid-based diagnostic methods, the invention includes a method of diagnosing an iron overload disorder or a genetic susceptibility thereto by determining the presence of a mutation in an HFE gene product in a biological sample. For example, the mutation results in a decrease in intramolecular salt bridge formation in the mutant HFE gene product compared to salt bridge formation in a wild type HFE gene product. The mutation which affects salt bridge formation is at or proximal to residue 63 of SEQ ID NO:2, but is not amino acid substitution H63D. Preferably, the mutation is between residues 23-113, inclusive, of SEQ ID NO:2 (Table 2); more preferably, it is between residues 90-100, inclusive, of SEQ ID NO:2; more preferably, it is between residues 58-68, inclusive, of SEQ ID NO:2; and most preferably, the mutation is amino acid substitution S65C. Alternatively, the mutation which affects salt bridge formation is a mutation, e.g., an amino acid substitution, at residue 95 or proximal to residue 95 of SEQ ID NO:2. Preferably, the mutation is G93R. Such an HFE mutation is detected by immunoassay or any other ligand binding assay such as binding of the HFE gene product to a transferrin receptor. Mutations are also detected by amino acid sequencing, analysis of the structural conformation of the protein, or by altered binding to a carbohydrate or peptide mimetope. A mutation indicative of an iron disorder or a genetic susceptibility to developing such a disorder is located in the α1 helix (e.g., which spans residues 80-108, inclusive, of SEQ ID NO:2) of an HFE gene product. The mutation may be an addition, deletion, or substitution of an amino acid in the wild type sequence. For example, the mutant HFE gene product contains the amino acid substitution I105T or G93R, or a mutation in the loop of the β sheet of the HFE molecule, e.g., mutation S65C. Isolated nucleic acids encoding mutated HFE gene products (and nucleic acids with nucleotide sequences complementary to such coding sequences) are also within the invention. Also included are nucleic acids which are at least 12 but less than 100 nucleotides in length. An isolated nucleic acid molecule is a nucleic acid molecule that is separated from the 5′ and 3′ sequences with which it is immediately contiguous in the naturally occurring genome of an organism. “Isolated” nucleic acid molecules include nucleic acid molecules which are not naturally occurring. For example, an isolated nucleic acid is one that has been amplified in vitro, e.g., by PCR; recombinantly produced; purified, e.g., by enzyme cleavage and gel separation; or chemically synthesized. For example, the restriction enzyme Bst4C I (Sib Enzyme Limited, Novosibirsk, Russia) can be used to detect the G93R mutation (point mutation 277C); this enzyme cuts the mutated HFE nucleic acid but not the wild type HFE nucleic acid. Such nucleic acids are used as markers or probes for disease states.
For example, a marker is a nucleic acid molecule containing a nucleotide polymorphism, e.g., a point mutation, associated with an iron disorder disease state flanked by wild type HFE sequences. The invention also encompasses nucleic acid molecules that hybridize, preferably under stringent conditions, to a nucleic acid molecule encoding a mutated HFE gene product (or a complementary strand of such a molecule). Preferably the hybridizing nucleic acid molecule is 400 nucleotides, more preferably 200 nucleotides, more preferably 100, more preferably 50, more preferably 25 nucleotides, more preferably 20 nucleotides, and most preferably 10-15 nucleotides, in length. For example, the nucleotide probe to detect a mutation is 13-15 nucleotides long. The nucleic acids are also used to produce recombinant peptides for generating antibodies specific for mutated HFE gene products. In preferred embodiments, an isolated nucleic acid molecule encodes an HFE polypeptide containing amino acid substitution I105T, G93R, or S65C; also included are nucleic acids the sequences of which are complementary to nucleic acids encoding a mutant or wild type HFE gene product. Also within the invention are substantially pure mutant HFE gene products, e.g., an HFE polypeptide containing amino acid substitution I105T, G93R, or S65C. Substantially pure or isolated HFE polypeptides include those that correspond to various functional domains of HFE or fragments thereof, e.g., a fragment of HFE that contains the α1 domain. Wild type HFE binds to the transferrin receptor and regulates the affinity of transferrin receptor binding to transferrin. For example, a C282Y mutation in the HFE gene product reduces binding to the transferrin receptor, thus allowing the transferrin receptor to bind to transferrin (which leads to increased iron absorption). The polypeptides of the invention encompass amino acid sequences that are substantially identical to the amino acid sequence shown in Table 2 (SEQ ID NO:2). Polypeptides of the invention are recombinantly produced, chemically synthesized, or purified from tissues in which they are naturally expressed according to standard biochemical methods of purification. Biologically active or functional polypeptides are those which possess one or more of the biological functions or activities of wild type HFE, e.g., binding to the transferrin receptor or regulation of binding of transferrin to the transferrin receptor. A functional polypeptide is also considered within the scope of the invention if it serves as an antigen for production of antibodies that specifically bind to an HFE epitope. In many cases, functional polypeptides retain one or more domains present in the naturally-occurring form of HFE. The functional polypeptides may contain a primary amino acid sequence that has been altered from those disclosed herein. Preferably, the cysteine residues in exons 3 and 4 remain unchanged. Preferably the modifications consist of conservative amino acid substitutions. The terms “gene product”, “protein”, and “polypeptide” are used herein to describe any chain of amino acids, regardless of length or post-translational modification (for example, glycosylation or phosphorylation).
Thus, the term “HFE polypeptide or gene product” includes full-length, naturally occurring HFE protein, as well as a recombinantly or synthetically produced polypeptide that corresponds to a full-length naturally occurring HFE or to a particular domain or portion of it. The term “purified” as used herein refers to a nucleic acid or peptide that is substantially free of cellular material, viral material, or culture medium when produced by recombinant DNA techniques, or chemical precursors or other chemicals when chemically synthesized. Polypeptides are said to be “substantially pure” when they are within preparations that are at least 60% by weight (dry weight) the compound of interest. Preferably, the preparation is at least 75%, more preferably at least 90%, and most preferably at least 99%, by weight the compound of interest. Purity can be measured by any appropriate standard method, for example, by column chromatography, polyacrylamide gel electrophoresis, or HPLC analysis. Diagnostic kits for identifying individuals suffering from or at risk of developing an iron disorder are also within the invention. A kit for detecting a nucleotide polymorphism associated with an iron disorder or a genetic susceptibility thereto contains an isolated nucleic acid which encodes at least a portion of the wild type or mutated HFE gene product, e.g., a portion which spans a mutation diagnostic for an iron disorder or hemochromatosis (or a nucleic acid the sequence of which is complementary to such a coding sequence). A kit for the detection of the presence of a mutation in exon 2 of an HFE nucleic acid contains a first oligonucleotide primer which is 5′ to exon 2 and a second oligonucleotide primer which is 3′ to exon 2, and a kit for an antibody-based diagnostic assay includes an antibody which preferentially binds to an epitope of a mutant HFE gene product, e.g., an HFE polypeptide containing amino acid substitution I105T, G93R, or S65C, compared to its binding to the wild type HFE polypeptide. An increase in binding of the mutant HFE-specific antibody to a patient-derived sample (compared to the level of binding detected in a wild type sample or a sample derived from a known normal control individual) indicates the presence of a mutation which is diagnostic of an iron disorder, i.e., that the patient from which the sample was taken has an iron disorder or is at risk of developing one. The kit may also contain an antibody which binds to an epitope of wild type HFE which contains residue 105, 93, or 65. In the latter case, reduced binding of the antibody to a patient-derived HFE gene product (compared to the binding to a wild type HFE gene product or a gene product derived from a normal control individual) indicates the presence of a mutation which is diagnostic of an iron disorder, i.e., that the patient from which the sample was taken has an iron disorder or is at risk of developing one. Individual mutations and combinations of mutations in the HFE gene are associated with varying severity of iron disorders. For example, the C282Y mutation in exon 4 is typically associated with clinical hemochromatosis, whereas other HFE mutations or combinations of mutations in HFE nucleic acids are associated with disorders of varying prognosis. In some cases, hemochromatosis patients have been identified who do not have a C282Y mutation.
The I105T and G93R mutations are each, on their own, associated with an increased risk of iron overload (compared to, e.g., the H63D mutation alone), and the presence of both the I105T and H63D mutations is associated with hemochromatosis. Accordingly, the invention includes a method of determining the prognosis for hemochromatosis in a mammal suffering from or at risk of developing hemochromatosis by (a) detecting the presence or absence of a first mutation in exon 4 in each allele of an HFE nucleic acid, e.g., patient-derived chromosomal DNA, and (b) detecting the presence of a second mutation in exon 2 in each allele of the nucleic acid. The presence of the first mutation in both chromosomes, i.e., an exon 4 homozygote such as a C282Y homozygote, indicates a more negative prognosis compared to the presence of the second mutation in one or both chromosomes, i.e., an exon 2 heterozygote or homozygote. An exon 4 mutation homozygote is also associated with a more negative prognosis compared to the presence of the first mutation (exon 4) in one allele and the second mutation (exon 2) in one allele, i.e., a compound heterozygote. Other features and advantages of the invention will be apparent from the following detailed description, and from the claims.
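To make the comparison in steps (a) and (b) concrete, here is a minimal sketch in Python of how detected allele counts could be mapped to the relative prognoses described above. It is an illustration only, not part of the patent text and not a validated clinical tool; the function name and labels are hypothetical.

# Hypothetical illustration of the exon 4 / exon 2 prognosis comparison
# described above; not a clinical decision tool.
def hfe_prognosis(exon4_mutant_alleles: int, exon2_mutant_alleles: int) -> str:
    """Map counts of mutant alleles detected in exon 4 (e.g. C282Y) and
    exon 2 (e.g. H63D) of the HFE gene to the relative prognoses above."""
    if exon4_mutant_alleles == 2:
        return "exon 4 homozygote (e.g. C282Y/C282Y): most negative prognosis"
    if exon4_mutant_alleles == 1 and exon2_mutant_alleles >= 1:
        return "compound heterozygote (exon 4 + exon 2): less negative than an exon 4 homozygote"
    if exon2_mutant_alleles >= 1:
        return "exon 2 heterozygote or homozygote: less negative than an exon 4 homozygote"
    return "no exon 2 or exon 4 mutation detected by this comparison"

print(hfe_prognosis(2, 0))
print(hfe_prognosis(1, 1))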
The International Refugee Congress was initiated by a group of 10 civil society organisations united by a desire to strengthen the participation of refugees and host communities in international policymaking processes. This group includes refugee-led organisations and networks, national civil society organisations working on refugee issues, and international NGOs. It has since brought together 76 organisations from 25 countries in five thematic working groups. Our shared goal is to ensure that the voices of refugees and the communities that host them are at the heart of the decisions being made about their lives. All over the world, many groups and organisations are working to put the voices of refugees and host communities at the heart of decisions which are being made about their lives. Despite this, opportunities for doing so have mainly been ad hoc, and these efforts have stopped short of creating a mechanism that ensures the inclusion of refugees and host communities in policymaking discussions and negotiations. As a result, the refugees and host communities based in the countries that host the highest numbers of refugees have largely been underrepresented in decision making processes that impact their lives. The International Refugee Congress addressed this underrepresentation by bringing together a diverse range of organisations working in support of refugee rights, with a particular focus on refugee-led organisations and civil society from the world’s major refugee-hosting countries. We brought these voices together and established shared strategies and policy options to shape the decisions which affect the lives of refugees and host communities. Find out more about the organisations behind the International Refugee Congress In order to better understand the diverse priorities of refugees and their host communities around the world, and to expand the pool of organisations involved in this initiative, we asked the experts - refugees and groups with first-hand experience of hosting them – to share their perspectives. Through an independent, international consultation we reached out to representatives from refugee-led organisations and national civil society from the world’s major refugee-hosting countries and beyond. Over 500 individuals and organisations responded. You can read the full report from the consultation here. On the basis of the consultation findings, five thematic working groups were convened to lead the development of policy positions on the following themes:
https://www.refugeecongress2018.org/who-we-are/
Another one of my favourite plants. I adore ferns and am surrounded by them living in the countryside. They come in so many different sizes and shapes. The plant pots and vases I have used are also drawings of my own collectibles. This design is a mix of linear drawings and contemporary colourful shapes to lift any interior. Printed on slightly textured 310gsm fine art paper with the finest pigment inks. All our prints come unframed - frames are for display photos only. Each A3 print will be shipped in a clear plastic sleeve inside a stiff protective envelope with a sturdy backing board for protection. Larger sizes are sent in a postal tube. This is a digital/giclée print.
https://www.niamhgillespiedesign.com/products/fern
Finding a delivery plan for cancer radiation treatment using multileaf collimators operating in "step-and-shoot mode" can be formulated mathematically as a problem of decomposing an integer matrix into a weighted sum of binary matrices having the consecutive-ones property - and sometimes other properties related to the collimator technology. The efficiency of the delivery plan is measured by both the sum of weights in the decomposition, known as the total beam-on time, and the number of different binary matrices appearing in it, referred to as the cardinality, the latter being closely related to the set-up time of the treatment. In practice, the total beam-on time is usually restricted to its minimum possible value (which is easy to find), and a decomposition that minimises cardinality subject to this restriction is sought. Many polynomially solvable combinatorial optimization problems (COPs) become NP-hard when we require solutions to satisfy an additional cardinality constraint. This family of problems has been considered only recently. We study a new problem of this family: the k-cardinality minimum cut problem. Given an undirected edge-weighted graph, the k-cardinality minimum cut problem is to find a partition of the vertex set V into two sets V1, V2 such that the number of edges between V1 and V2 is exactly k and the sum of the weights of these edges is minimal. A variant of this problem is the k-cardinality minimum s-t cut problem, where s and t are fixed vertices and we have the additional requirement that s belongs to V1 and t belongs to V2. We also consider other variants where the number of edges of the cut is constrained to be either less or greater than k. For all these problems we show complexity results in the most significant graph classes. Given an undirected, connected network G = (V,E) with weights on the edges, the cut basis problem asks for a maximal number of linearly independent cuts such that the sum of the cut weights is minimized. Surprisingly, this problem has not attained as much attention as its graph-theoretic counterpart, the cycle basis problem. We consider two versions of the problem, the unconstrained and the fundamental cut basis problem. For the unconstrained case, where the cuts in the basis can be of an arbitrary kind, the problem can be written as a multiterminal network flow problem and is thus solvable in strongly polynomial time. The complexity of this algorithm improves on the complexity of the best algorithms for the cycle basis problem, such that it is preferable for cycle basis problems in planar graphs. In contrast, the fundamental cut basis problem, where all cuts in the basis are obtained by deleting one edge each from a spanning tree T, is shown to be NP-hard. We present heuristics, integer programming formulations and summarize first experiences with numerical tests. It is well known that some of the classical location problems with polyhedral gauges can be solved in polynomial time by finding a finite dominating set, i.e. a finite set of candidates guaranteed to contain at least one optimal location. In this paper it is first established that this result holds for a much larger class of problems than currently considered in the literature. The model for which this result can be proven includes, for instance, location problems with attraction and repulsion, and location-allocation problems.
Next, it is shown that the approximation of general gauges by polyhedral ones in the objective function of our general model can be analyzed with regard to the subsequent error in the optimal objective value. For the approximation problem two different approaches are described, the sandwich procedure and the greedy algorithm. Both of these approaches lead - for fixed epsilon - to polynomial approximation algorithms with accuracy epsilon for solving the general model considered in this paper. Location problems with Q (in general conflicting) criteria are considered. After reviewing previous results of the authors dealing with lexicographic and Pareto location, the main focus of the paper is on max-ordering locations. In these location problems the worst of the single objectives is minimized. After discussing some general results (including reductions to single-criterion problems and the relation to lexicographic and Pareto locations), three solution techniques are introduced, each exemplified using one class of location problems: the direct approach, the decision space approach and the objective space approach. In the resulting solution algorithms, emphasis is on the representation of the underlying geometric idea without fully exploring the computational complexity issue. A further specialization of max-ordering locations is obtained by introducing lexicographic max-ordering locations, which can be found efficiently. The paper is concluded by some ideas about future research topics related to max-ordering location problems.
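As a rough illustration of the step-and-shoot setting described in the first abstract above, the following Python sketch computes the minimum total beam-on time of an integer intensity matrix under the basic consecutive-ones model (no additional collimator constraints), using the standard "sum of positive row increments, maximised over rows" bound. The cardinality minimisation that the abstract identifies as the hard part is not attempted here, and the example matrix is made up.

# Minimal sketch: minimum total beam-on time for a consecutive-ones
# decomposition, computed row by row (no extra collimator constraints).
def min_beam_on_time(intensity: list[list[int]]) -> int:
    best = 0
    for row in intensity:
        padded = [0] + row  # imagine the leaves closed before the first column
        increments = sum(max(0, padded[j + 1] - padded[j]) for j in range(len(row)))
        best = max(best, increments)
    return best

example = [
    [4, 5, 2, 0],
    [1, 3, 3, 1],
]
print(min_beam_on_time(example))  # prints 5 for this example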
KE Group has more than 40 years experience in designing sheet metal solutions. Our design team works closely with our clients to ensure their needs are met, often to comply with tight launch deadlines. We continually invest in the most modern design technology to provide optimum efficiency and total customer solutions. KE aim to offer our customers the most efficient and effective route to production. Our design, development and production teams work closely from initial concept through to prototype and finally to finished product, completing full product performance testing in order to ensure total quality and customer satisfaction. Combining the latest 3D modelling technology with CAD software allows us to supply our client’s needs accurately, quickly and efficiently. Our project management solution approach from design concept through to competitively priced, quality assured products gives our customers a distinct advantage in the market place.
http://ke-group.com/index.php/services/design-engineering-services
Friday, Jul 03, 2020 19:11 Several business models in Viet Nam are heading towards the circular economy, based on the principles of eliminating waste and recycling resources, Nguyen The Chinh, head of the Institute of Strategy and Policy on Natural Resources and Environment, told a conference in HCM City. The models include recycling villages and eco-industrial parks, he said. Eco-industrial parks in Ninh Binh Province, Can Tho and Da Nang save US$6.5 million a year, he revealed. But not all waste is reused and recycled, and the technologies available for it are outdated, he said. Nguyen Tu Anh of the Party Central Committee’s Economic Commission said changing from the traditional to the circular economic model is vital for Viet Nam. The country has seen environmental and water pollution, and “The circular economy will help Viet Nam address these challenges.” A legal framework would be created for circular economic development, but more technologies to aid it would be needed, he said. The conference was followed by the Viet Nam National University-HCM City announcing the establishment of the Institute for Circular Economy Development to aid the change from the traditional economic model. Assoc Prof Dr Nguyen Hong Quan, the institute’s head, said it would recommend policies that could bring economic and social benefits like environmental protection and achieving sustainable development goals based on the proper use of materials and energy. Assoc Prof Dr Huynh Thanh Dat, president of the Viet Nam National University-HCM City, said the institute would advise on methods and policies for the country’s sustainable development, and connect enterprises, the Government and universities. — VNS
http://bizhub.vn/news/viet-nam-pivots-to-circular-economy_317013.html
Over the summer we have been working hard to update our outdoor area with more challenging and enriching areas, including a mud kitchen, baby terrace, and a growing and nature area. All of these provide opportunities for our children to engage in learning through outdoor and natural play. The mud kitchen has been busy and popular with the children as they explore the variety of resources, creating concoctions like mud and water transformed into coffee with sugar or chocolate cake. As the children engage in conversation they are also developing their personal, social and emotional skills, forming friendships through play whilst developing their vocabulary. The children have loved planting potatoes in the growing area. They have taken on roles in their play as they have watered the vegetables and plants to make sure they grow, ready for the children to harvest. Wendy our chef enhanced this experience by helping the children to make a potato salad. The children have been exploring herbs too; this has allowed them to make discoveries about the world around them. Whilst engaged in conversation, they have asked questions such as “What can I smell?” and our practitioners are able to extend the children’s knowledge and language through discussion. We have created a baby garden too with lots of space to explore. We have enhanced the area with a sandpit, herb garden and mud kitchen. Babies have the opportunity to develop their large motor skills as they play with the footballs or engage in messy play. They have enjoyed watering the garden and making new discoveries through taste and touch. The baby garden has provided more opportunities for them to see siblings throughout the day too, as everyone plays outside together.
https://townhouse-childcare.co.uk/2016/11/01/changes-to-our-garden-at-audley-road/
The website of the Regional Centre for Handicrafts (CRAA) aims, in essence, to promote and publicize certified handicraft products. The Regional Centre for Handicrafts is an executive service of the Vice-Presidency of the Regional Government of the Azores, responsible for implementing regional policy in the areas of development and appreciation of traditional products, including regional handicrafts and handicraft production units, professional training, and the coordination of multifunctional initiatives developed in the local environment. CRAA develops a consistent and diverse annual plan, acting on four areas that are considered fundamental: Research/Certification, Training, Artisan Support and Promotion. The development of these areas involves the certification of products, editing publications, organizing exhibitions, promoting workshops and training activities, and providing support to the artisans, namely through the incentive system. The training in handicrafts responds to the needs of the market, while contributing to the formation of a new generation of artisans endowed with technical and scientific knowledge that remains linked to traditional know-how. The training is a commitment to the revitalization of the traditional crafts characteristic of each island, and to the re-creation of more contemporary products, through the project Hora do Ofício (Craft Hour). CRAA also develops a diversified annual plan to add value to the handicraft products of the Azores, with the goal of commercializing and promoting the region's excellent craftsmanship, participating in and organizing several fairs and markets, such as the Handicraft Shows – M.ART’s, the Urban Market of Handicrafts – MUA, the AçorExpo, the Market of Azorean Sweets – Dias Doces and the Handicraft Festival of the Azores – PRENDA. An artisan is a worker who carries out manual labor, on their own account or on behalf of another person, within a recognized handicraft production unit; the work requires mastery of the knowledge and techniques inherent to the activity in question, together with a keen aesthetic sense and manual skill. Artisan status is recognized through the attribution of a title called “carta de artesão” (artisan’s letter). The collective brand of origin “Artesanato dos Açores”, introduced in 1998, aims, by adding value to the products it covers, to distinguish, disseminate and commercialize them more effectively, especially in foreign markets. The projection of the brand also relies on the responsibility of the artisans themselves, who have the greatest interest in distinguishing their products in the market by acquiring the seal of certification.
http://artesanato.azores.gov.pt/en/
10 Ways to Build Emotional Intimacy in a Relationship The strongest relationships are built on love, trust, and respect, but in order to have them, couples also need one vital element from the very start of their life together: emotional intimacy. That is because a connection built on an emotionally secure bond is both sacred and unbreakable. Many couples split up because they failed to recognize the importance of building emotional intimacy in their relationship while focusing only on its physical side. Some of them deliberately choose the latter; others simply don't know how to do the former. If you want to know the best ways to build emotional intimacy in a relationship, this article will tell you where to start. Here they are. 1. Explore the past with an open mind. The past is a rather sensitive topic, especially for couples who are just starting to build a relationship. Sometimes it can even be painful just to narrate the stories that broke your heart, but it is an important step in building emotional intimacy with your current boyfriend or girlfriend. By sharing what you have been through, they will understand why you act and react the way you do. Why is this important? Because no matter how hard you try, your past experiences and the people you shared them with shape your present beliefs and values – and helping your partner understand this is a huge and vital step. 2. Acknowledge the things that hurt them. Just as you want your partner to understand the experiences that helped you grow as a person, you also have to acknowledge their own experiences and the things that hurt them. To build an emotionally strong relationship, be aware of the situations, actions, and even words that make them feel hurt or unloved. Emotionally intimate couples are sensitive enough to know the right words to say and when to say them. 3. Know the things that make them feel alive. Aside from clearly understanding what hurts them, you also have to know the things, places, people, and activities that make them feel alive. This way, you get a glimpse of who they are when they are at their best. Such knowledge and understanding can eventually guide you in the basics of making them feel excited and happy in your relationship, especially when you are facing difficult times. 4. Accept and get to know the people who make them happy. Accept the people who were there for them even before you were together. They were there first, helping your partner grow and enjoy life before you entered the picture – so if you want to build an emotionally intimate relationship, show them that you are not the jealous type. Emotional intimacy begins when you feel secure about your own place in life. It only comes when you are not troubled by negativities like jealousy and insecurity. 5. Spend the good and the bad times holding hands. Emotionally intimate couples know the importance of facing bad times together rather than leaving each other alone and vulnerable.
In fact, they understand that only couples who face the storms together can create that rare yet beautiful relationship built on a strong emotional bond. These same couples are the ones who get to celebrate the good days with genuine joy and pride. 6. Break down all your walls and let them in. Intimacy begins when both people trust each other despite their fears, weaknesses, and vulnerability. That is why it is important to break down all your walls and let your partner in. Trust them to keep you safe and to protect you from the risks of life and love. Emotional intimacy can only exist if you have this trust in your partner, and the only way to build it is to take off that armor and let them see the real you. 7. Bond over the mundane, day-to-day things and just have fun. Emotional intimacy can be a deep and solemn aspect of your relationship, but that doesn't mean you can't build it through the most mundane and simplest things you do as a couple. In fact, spending the day together, doing everyday things while having fun, is a quick and sure way to make them feel emotionally bonded to you. So cook together, visit your local museum, bond over a romantic movie, or just sit there and relax. It's essential that you feel comfortable doing these things together if you want your love to last. 8. Be genuine and sincere in everything you say and do. Emotional intimacy can only be present in couples who practice sincerity and honesty in everything they do. Be genuine when expressing love and care to your partner – and this doesn't take a lot of effort if you are truly in love. Make sure they feel that they don't need to doubt your words and your intentions. Let them trust you simply by being true to what you feel. 9. Have meaningful conversations and just be you. Deep and meaningful conversations help you create wonderful memories as a couple – and the best relationships have often started from these kinds of interactions. Talking with someone not only reveals who they really are but also gives you a glimpse of the things they try to hide from other people. These conversations help couples build emotional intimacy because they serve as a bridge to reach each other, to understand what is really going on in their minds, and to understand why they hurt, laugh, and love. 10. Master the art of humor – make them laugh! Emotional intimacy is present in couples who are good at making each other laugh. Humor is a huge factor in a lasting relationship, and many people can agree that they don't get tired of being with someone who can easily crack them up. Emotional intimacy may sound like a serious subject, but laughter can make a huge difference in helping couples develop a strong emotional bond in a more fun and light-hearted way. Building emotional intimacy in your relationship means opening yourself up to another person who may or may not share the same views and interests as you. But as long as you care for and love each other, you should not be afraid to let another person see your soul.
It is the only way to create a genuinely strong relationship and build a lasting emotional connection that will help your relationship grow.
http://www.wynajem-autokaru.com/10-how-to-build-emotional-intimacy-in-a/
Next year will be an exciting year for the Internet of Things (IoT), writes Joanne Phoenix, interim executive director of Sensor City. IoT has continued to proliferate across multiple sectors and applications, with many high-tech companies and leading research firms anticipating a general expansion. It is clear that the question is no longer whether, but when, we will realize a fully connected future through IoT for commercial operations. However, technology is only part of what is needed for IoT to succeed: strong leadership is perhaps the most essential element for companies seeking to adopt IoT. The challenge for companies is that the evidence points to a possible shortage of strong IoT leadership capacity: a 2019 Microsoft study cited lack of leadership support and attention as the main cause of IoT project failure. I would suggest that there is still a way to go when it comes to equipping leaders with the knowledge and tools they need to successfully implement IoT. That said, the success of IoT cannot be achieved by just one person: business leaders must take the entire company on the path of adoption to create success. Why? Creating buy-in from internal and external teams is crucial to optimize long-term implementation and effectiveness in all parts of the business. To do this, leaders must provide a solid strategy and direction, backed by clear communication. The challenge for leaders is to ensure that they understand what IoT is and how it can benefit their businesses, so they can provide this direction to their teams. What leaders must do to unlock the full potential of IoT The understanding of the benefits and implementation of IoT varies greatly from one company to another. This means that there is no consistent journey or experience from which business leaders can take a lead when creating their own strategy, and without a clear digital strategy, organizations will have difficulty harnessing the full potential of IoT. It is likely that companies will not only spend significant money upfront on IoT, but also spend significant time in a trial-and-error process to find the right solution. However, the potential of IoT presents a great opportunity: the 2019 Microsoft IoT Signals report says that IoT adopters predict a 30% increase in return on investment (ROI) within two years. Understanding how IoT can benefit your business is essential to creating a coherent strategy and guiding your organization and teams to IoT success. Consider where your business is currently, where it should be, and what will have the greatest impact on operations. You may need to invest in skills or set goals for your IoT investment. To develop this understanding and communicate the need for IoT with confidence, we suggest working through the following checklist. In this way, you will be equipped with the tools to successfully develop an IoT strategy and lead your team to success. The leader's checklist for IoT success Evaluate your current position: Have you started investing in IoT or are you working from scratch? What is working so far? What is not working? What data are you collecting already? What data do you need to collect in the future, and why? Set your objectives: Be clear about what you want to get from IoT implementation, from a business perspective. Make sure that IoT aligns with your business plan. Consider what benefits it will bring to the company, its staff or its customers. Make an informed investment: You need to calculate your possible ROI.
Identify your direct and indirect costs and savings, cash flow and initial costs, and the minimum return, or "hurdle rate", necessary for your business to be profitable. Look at all the elements of your digital strategy: Business leaders should consider their current technological infrastructure, assess where there will be skills gaps, and decide where they should adapt to IoT technologies. Create a technology roadmap: It should address the areas of implementation and where IoT will have the greatest impact. Seek to create a plan for what technology will be implemented, when, and how it will integrate with existing technology. Have a communication strategy: The implementation of IoT is not just about a technological change: success also depends largely on cultural change. Communication is key: you are likely to be asked whether any team members will have to be retrained, whether it is necessary to replace or recruit skills, and why the change is needed in the first place. The author is Joanne Phoenix, interim executive director of Sensor City, a global center for the development of sensor and IoT technologies. For more guidance on IoT adoption, download the free Sensor City 2020 guide to IoT adoption.
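As a rough illustration of the "Make an informed investment" item in the checklist above, here is a minimal Python sketch of a simple ROI and hurdle-rate check. All figures and names are hypothetical placeholders, not data from the article or the Microsoft report, and a real business case would also discount cash flows over time.

# Hypothetical ROI / hurdle-rate check for an IoT investment case.
def simple_roi(total_gain: float, total_cost: float) -> float:
    """Return ROI as a fraction, e.g. 0.30 for a 30% return."""
    return (total_gain - total_cost) / total_cost

upfront_cost = 250_000      # devices, connectivity, integration (placeholder)
annual_savings = 180_000    # e.g. reduced downtime and energy use (placeholder)
years = 2
hurdle_rate = 0.25          # minimum acceptable return over the period

roi = simple_roi(annual_savings * years, upfront_cost)
print(f"Two-year ROI: {roi:.0%}")
print("Clears hurdle rate" if roi >= hurdle_rate else "Below hurdle rate")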
https://www.funzen.net/2019/12/19/strong-leadership-is-driving-iot-adoption/
INTBAU MONTENEGRO is a non-governmental organization, founded by Mirjana Lubarda and based in Cetinje. The Montenegrin Chapter aims to assist local, state and international communities and all other stakeholders whose activities are aimed at preserving traditions and identity, preserving and enhancing cultural heritage and transmitting its authentic form to future generations. The main goal of the association is the promotion and dissemination of knowledge and experience about the value and importance of cultural heritage, the implementation of activities which are aimed at the promotion and development of architecture, urban planning, cultural heritage and the entire society. Bearing in mind the potentials of Montenegrin cultural heritage, recognition of the need to preserve it in a proper manner is of great importance. Intensive historical events have left us a number of traces of past life – witnesses of social and cultural shifts in the area of Montenegro. Thus through preservation and promotion, raising awareness of local and state community will be the main goal, which will further allow sustainable development. In order to achieve its vision and mission, INTBAU MONTENEGRO will perform the activities in the fields of architecture, urban planning, town planning, cultural heritage protection, visual arts, culture, design and public advocacy. These activities will include: - Promoting principles and knowledge about the value and importance of cultural heritage, with a tendency to understand the cultural heritage in its broadest sense, as something that contains hints that testify about the activities and achievements of man through time - Promoting values through educational and training programs - Establishing cooperation with universities and other institutions interested in the cultural heritage in Montenegro and abroad - Organisation of scientific, professional and research projects in the fields of interest with all stakeholders in Montenegro and abroad - Participation in the development of projects related to the rehabilitation of cultural heritage - Architectural examination of the current state of cultural heritage, the collection of documentation and archival materials of the cultural heritage, and systematization of collected data, and compiling of data - Creation of protection studies aimed at improving the situation of cultural heritage - Provision of consulting services to representatives of non-governmental organizations, local and national communities and all other stakeholders - Publishing of printed and electronic material in the fields of its activities - Promotion of NGOs activities through multiple means, such as radio, television, public meetings, conferences, publications in journals, books, DVD, etc - Organisation of meetings with representative offices from abroad in order to exchange experiences, materials and knowledge - Identifying problems, promoting and participating in finding solutions for devastated and endangered localities through specific projects (research, studies, workshops) - Performing other activities related to the achievement of objectives of the organisation Chapter board: Mirjana Lubarda (Chair) Milena Latković, architect – conservator Andreja Mugoša, architect – conservator Aleksandar Dajković, architect – conservator Vildan Ramusović, architect – conservator Milan Jovićević, architect – conservator Petra Zdravković, archaeologist Balša Perović, lawyer Lidija Vujović, sociologist Chapter contact: Mirjana Lubarda Chair, INTBAU Montenegro:
https://www.intbau.org/chapters/montenegro/
Here is a list of top tourist attractions in the Central African Republic. Only the topmost tourist destinations are presented here. To see other destinations, please check the images from the Central African Republic section. Curious whether any of these places from the Central African Republic made it onto our list of the best tourist attractions in the world? Read the aforementioned article to find out. You can also view all tourist attractions in the Central African Republic and other countries on our tourist attractions map. Manovo-Gounda St. Floris National Park is a national park and UNESCO World Heritage Site located in the Central African Republic prefecture of Bamingui-Bangoran, near the Chad border. It was inscribed on the list of World Heritage Sites in 1988 as a result of the diversity of life present within it. Notable species include black rhinoceroses, elephants, cheetahs, leopards, red-fronted gazelles, and buffalo; a wide range of waterfowl species also occurs in the northern floodplains. The site is under threat due to its rare wildlife dying off and animal species being wiped out. The site was added to the List of World Heritage in Danger after reports of illegal grazing and poaching by heavily armed hunters, who may have harvested as much as 80% of the park's wildlife. The shooting of four members of the park staff in early 1997 and a general state of deteriorating security brought all development projects and tourism to a halt. The government of the Central African Republic proposed to assign site management responsibility to a private foundation. The preparation of a detailed state of conservation report and rehabilitation plan for the site was recommended by the World Heritage Committee at its 1998 session. People are working on breeding programs to revive the natural wildlife. The Bamingui-Bangoran National Park complex is a national park and biosphere reserve located in the northern region of the Central African Republic. It makes up part of the Guinea-Congo Forest biome. It was established in 1993. The Vassako Bolo Strict Nature Reserve is in the midst of the park. The Mbaéré-Bodingué National Park is found in the Central African Republic, and covers 872 km².
http://countrylicious.com/central-african-republic/tourist-attractions
Fahrenheit 451 is a dystopian novel by American writer Ray Bradbury, first published in 1953. It is regarded as one of his best works. The novel presents a future American society where books are outlawed and “firemen” burn any that are found. The book’s tagline explains the title: “Fahrenheit 451 – the temperature at which book paper catches fire, and burns…” The lead character, Guy Montag, is a fireman who becomes disillusioned with his role of censoring literature and destroying knowledge, eventually quitting his job and committing himself to the preservation of literary and cultural writings. The novel has been the subject of interpretations focusing on the historical role of book burning in suppressing dissenting ideas. In a 1956 radio interview, Bradbury said that he wrote Fahrenheit 451 because of his concerns at the time (during the McCarthy era) about the threat of book burning in the United States. In later years, he described the book as a commentary on how mass media reduces interest in reading literature. In 1954, Fahrenheit 451 won the American Academy of Arts and Letters Award in Literature and the Commonwealth Club of California Gold Medal. It later won the Prometheus “Hall of Fame” Award in 1984 and a “Retro” Hugo Award, one of only six Best Novel Retro Hugos ever given, in 2004. Bradbury was honored with a Spoken Word Grammy nomination for his 1976 audiobook version. Adaptations of the novel include François Truffaut‘s 1966 film adaptation and a 1982 BBC Radio dramatization. Bradbury published a stage play version in 1979 and helped develop a 1984 interactive fiction computer game titled Fahrenheit 451, as well as a collection of his short stories titled A Pleasure to Burn. HBO released a television film based on the novel and written and directed by Ramin Bahrani in 2018.
https://knightsofacademia.org/fahrenheit-451-by-ray-bradbury/
Chalcanthite, whose name derives from the Greek, chalkos and anthos, meaning copper flower, is a richly-colored blue/green water-soluble sulfate mineral CuSO4·5H2O. It is commonly found in the late-stage oxidation zones of copper deposits. Due to its ready solubility, chalcanthite is more common in arid regions. Chalcanthite is a pentahydrate and the most common member of a group of similar hydrated sulfates, the chalcanthite group. These other sulfates are identical in chemical composition to chalcanthite, with the exception of replacement of the copper ion by either manganese as jokokuite, iron as siderotil, or magnesium as pentahydrite. Uses of Chalcanthite As chalcanthite is a copper mineral, it can be used as an ore of copper. However, its ready solubility in water means that it tends to crystallize, dissolve, and recrystallize as crusts over any mine surface in more humid regions. Therefore, chalcanthite is only found in the most arid regions in sufficiently large quantities for use as an ore. Secondarily, chalcanthite, due to its rich color and beautiful crystals, is a sought after collector’s mineral. However, as with its viability as an ore, the solubility of the mineral causes significant problems. First, the mineral readily absorbs and releases its water content, which, over time, leads to a disintegration of the crystal structure, destroying even the finest specimens. It is critical to store specimens properly to limit exposure to humidity. Second, higher quality crystals can be easily grown synthetically, and, as such, there is a concern that disreputable mineral dealers would present a sample as natural when it is not. Many Chalcanthite specimens offered for sale are artificially grown.
http://rocksandmineralstrader.com/chalcanthite/
Gypsum is a soft sulfate mineral composed of calcium sulfate dihydrate, with the chemical formula CaSO4·2H2O. It is widely mined and is used as a fertilizer and as the main constituent in many forms of plaster, blackboard/sidewalk chalk, and drywall. A massive fine-grained white or lightly tinted variety of gypsum, called alabaster, has been used for sculpture by many cultures including Ancient Egypt, Mesopotamia, Ancient Rome, the Byzantine Empire, and the Nottingham alabasters of Medieval England. Gypsum also crystallizes as translucent crystals of selenite. It forms as an evaporite mineral and as a hydration product of anhydrite. The Mohs scale of mineral hardness defines gypsum as hardness value 2 based on scratch hardness comparison. The word gypsum is derived from the Greek word γύψος (gypsos), "plaster". Because the quarries of the Montmartre district of Paris have long furnished burnt gypsum (calcined gypsum) used for various purposes, this dehydrated gypsum became known as plaster of Paris. Upon adding water, after a few dozen minutes, plaster of Paris becomes regular gypsum (dihydrate) again, causing the material to harden or "set" in ways that are useful for casting and construction. Gypsum was known in Old English as spærstān, "spear stone", referring to its crystalline projections. (Thus, the word spar in mineralogy is by way of comparison to gypsum, referring to any non-ore mineral or crystal that forms in spearlike projections.) In the mid-18th century, the German clergyman and agriculturalist Johann Friderich Mayer investigated and publicized gypsum's use as a fertilizer. Gypsum may act as a source of sulfur for plant growth, and in the early 19th century, it was regarded as an almost miraculous fertilizer. American farmers were so anxious to acquire it that a lively smuggling trade with Nova Scotia evolved, resulting in the so-called "Plaster War" of 1820. In the 19th century, it was also known as lime sulfate or sulfate of lime. Gypsum is moderately water-soluble (~2.0–2.5 g/l at 25 °C) and, in contrast to most other salts, it exhibits retrograde solubility, becoming less soluble at higher temperatures. When gypsum is heated in air it loses water and converts first to calcium sulfate hemihydrate (bassanite, often simply called "plaster") and, if heated further, to anhydrous calcium sulfate (anhydrite). As with anhydrite, the solubility of gypsum in saline solutions and in brines is also strongly dependent on NaCl (common table salt) concentration. The structure of gypsum consists of layers of calcium (Ca2+) and sulfate (SO42-) ions tightly bound together. These layers are bonded by sheets of water molecules via weaker hydrogen bonding, which gives the crystal perfect cleavage along the sheets. Gypsum occurs in nature as flattened and often twinned crystals, and transparent, cleavable masses called selenite. Selenite contains no significant selenium; rather, both substances were named for the ancient Greek word for the Moon. Selenite may also occur in a silky, fibrous form, in which case it is commonly called "satin spar". Finally, it may also be granular or quite compact. In hand-sized samples, it can be anywhere from transparent to opaque. A very fine-grained white or lightly tinted variety of gypsum, called alabaster, is prized for ornamental work of various sorts. In arid areas, gypsum can occur in a flower-like form, typically opaque, with embedded sand grains called desert rose.
It also forms some of the largest crystals found in nature, up to 12 m (39 feet) long, in the form of selenite. Gypsum is a common mineral, with thick and extensive evaporite beds in association with sedimentary rocks. Deposits are known to occur in strata from as far back as the Archaean eon. Gypsum is deposited from lake and sea water, as well as in hot springs, from volcanic vapors, and sulfate solutions in veins. Hydrothermal anhydrite in veins is commonly hydrated to gypsum by groundwater in near-surface exposures. It is often associated with the minerals halite and sulfur. Gypsum is the most common sulfate mineral. Pure gypsum is white, but other substances found as impurities may give a wide range of colors to local deposits. Because gypsum dissolves over time in water, gypsum is rarely found in the form of sand. However, the unique conditions of the White Sands National Park in the US state of New Mexico have created a 710 km2 expanse of white gypsum sand, enough to supply the US construction industry with drywall for 1,000 years. Commercial exploitation of the area, strongly opposed by area residents, was permanently prevented in 1933 when President Herbert Hoover declared the gypsum dunes a protected national monument. Gypsum is also formed as a by-product of sulfide oxidation, amongst others by pyrite oxidation, when the sulfuric acid generated reacts with calcium carbonate. Its presence indicates oxidizing conditions. Under reducing conditions, the sulfates it contains can be reduced back to sulfide by sulfate-reducing bacteria. This can lead to accumulation of elemental sulfur in oil-bearing formations, such as salt domes, where it can be mined using the Frasch process. Electric power stations burning coal with flue gas desulfurization produce large quantities of gypsum as a byproduct from the scrubbers. Orbital pictures from the Mars Reconnaissance Orbiter (MRO) have indicated the existence of gypsum dunes in the northern polar region of Mars, which were later confirmed at ground level by the Mars Exploration Rover (MER) Opportunity.
Country           Production   Reserves
China             132,000
Iran              22,000       1,600
Thailand          12,500
United States     11,500       700,000
Turkey            10,000
Spain             6,400
Mexico            5,300
Japan             5,000
Russia            4,500
Italy             4,100
India             3,500        39,000
Australia         3,500
Oman              3,500
Brazil            3,300        290,000
France            3,300
Canada            2,700        450,000
Saudi Arabia      2,400
Algeria           2,200
Germany           1,800        450,000
Argentina         1,400
Pakistan          1,300
United Kingdom    1,200        55,000
Other countries   15,000
World total       258,000
Commercial quantities of gypsum are found in the cities of Araripina and Grajaú in Brazil; in Pakistan, Jamaica, Iran (world's second largest producer), Thailand, Spain (the main producer in Europe), Germany, Italy, England, Ireland, Canada and the United States.
Large open pit quarries are located in many places including Fort Dodge, Iowa, which sits on one of the largest deposits of gypsum in the world, and Plaster City, California, United States, and East Kutai, Kalimantan, Indonesia. Several small mines also exist in places such as Kalannie in Western Australia, where gypsum is sold to private buyers for additions of calcium and sulfur as well as reduction of aluminum toxicities on soil for agricultural purposes. Crystals of gypsum up to 11 m (36 feet) long have been found in the caves of the Naica Mine of Chihuahua, Mexico. The crystals thrived in the cave's extremely rare and stable natural environment. Temperatures stayed stable, and the cave was filled with mineral-rich water that drove the crystals' growth. The largest of those crystals weighs 55 t and is around 500,000 years old. Synthetic gypsum is produced as a waste product or by-product in a range of industrial processes. Flue gas desulfurization gypsum (FGDG) is recovered at some coal-fired power plants. The main contaminants are Mg, K, Cl, F, B, Al, Fe, Si, and Se. They come both from the limestone used in desulfurization and from the coal burned. This product is pure enough to replace natural gypsum in a wide variety of fields including drywall, water treatment, and cement set retarder. Improvements in flue gas desulfurization have greatly reduced the amount of toxic elements present. Gypsum precipitates onto brackish water membranes, a phenomenon known as mineral salt scaling, such as during brackish water desalination of water with high concentrations of calcium and sulfate. Scaling decreases membrane life and productivity. This is one of the main obstacles in brackish water membrane desalination processes, such as reverse osmosis or nanofiltration. Other forms of scaling, such as calcite scaling, depending on the water source, can also be important considerations in distillation, as well as in heat exchangers, where either the salt solubility or concentration can change rapidly. A new study has suggested that the formation of gypsum starts as tiny crystals of a mineral called bassanite, and that this process occurs via a three-stage pathway. The production of phosphate fertilizers requires breaking down calcium-containing phosphate rock with acid, producing calcium sulfate waste known as phosphogypsum (PG). This form of gypsum is contaminated by impurities found in the rock, namely fluoride, silica, radioactive elements such as radium, and heavy metal elements such as cadmium. Similarly, production of titanium dioxide produces titanium gypsum (TG) due to neutralization of excess acid with lime. The product is contaminated with silica, fluorides, organic matter, and alkalis. Impurities in refinery gypsum wastes have, in many cases, prevented them from being used as normal gypsum in fields such as construction. As a result, waste gypsum is stored in stacks indefinitely, with significant risk of its contaminants leaching into water and soil. To reduce the accumulation and ultimately clear out these stacks, research is underway to find more applications for such waste products. People can be exposed to gypsum in the workplace by breathing it in, skin contact, and eye contact. Calcium sulfate per se is nontoxic and is even approved as a food additive, but as powdered gypsum, it can irritate skin and mucous membranes.
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for gypsum exposure in the workplace at a TWA of 15 mg/m3 for total exposure and a TWA of 5 mg/m3 for respirable exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 10 mg/m3 for total exposure and TWA 5 mg/m3 for respirable exposure over an 8-hour workday. Gypsum is used in a wide variety of applications. In the late 18th and early 19th centuries, Nova Scotia gypsum, often referred to as plaster, was a highly sought-after fertilizer for wheat fields in the United States. Gypsum provides two of the secondary plant macronutrients, calcium and sulfur. Unlike limestone, it generally does not affect soil pH.
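For readers unfamiliar with how an 8-hour time-weighted average (TWA) is computed, the following Python sketch works through the standard formula (sum of concentration times duration over the shift, divided by 8 hours) and compares the result against the total-dust limits quoted above. The sample readings are hypothetical, and a real assessment would follow the sampling and analytical methods prescribed by the relevant agency.

# Hypothetical 8-hour TWA calculation for total gypsum dust exposure.
OSHA_PEL_TOTAL = 15.0   # mg/m3, total dust, 8-hour TWA (as quoted above)
NIOSH_REL_TOTAL = 10.0  # mg/m3, total dust, 8-hour TWA (as quoted above)

def eight_hour_twa(samples: list[tuple[float, float]]) -> float:
    """samples = [(concentration in mg/m3, duration in hours), ...];
    unsampled time within the 8-hour shift counts as zero exposure."""
    return sum(conc * hours for conc, hours in samples) / 8.0

shift = [(12.0, 3.0), (6.0, 4.0), (0.0, 1.0)]  # e.g. mixing, cleanup, break
twa = eight_hour_twa(shift)
print(f"TWA = {twa:.1f} mg/m3")
print("Within OSHA PEL" if twa <= OSHA_PEL_TOTAL else "Exceeds OSHA PEL")
print("Within NIOSH REL" if twa <= NIOSH_REL_TOTAL else "Exceeds NIOSH REL")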
https://everything.explained.today/Gypsum/
What is liberal feminist perspective? Liberal feminism, also called mainstream feminism, is a main branch of feminism defined by its focus on achieving gender equality through political and legal reform within the framework of liberal democracy. What do feminists think about the nuclear family? Feminists argue that in nuclear families women are expected to take on a restrictive role, which involves tasks such as housework and childcare. These are not recognised as ‘real’ work and so women can be seen as being exploited. What is the main view of feminist theory? It aims to understand the nature of gender inequality, and examines women’s social roles, experiences, and interests. While generally providing a critique of social relations, much of feminist theory also focuses on analyzing gender inequality and the promotion of women’s interests. What is a feminist view? Feminist theory aims to understand gender inequality and focuses on gender politics, power relations, and sexuality. While providing a critique of these social and political relations, much of feminist theory also focuses on the promotion of women’s rights and interests. What is liberal feminism in sociology? Abstract. Liberal feminism is one of the earliest forms of feminism, stating that women’s secondary status in society is based on unequal opportunities and segregation from men. … Liberal feminists create change by working within existing social structures and changing people’s attitudes. What is liberal feminism Brainly? Answer: Liberal feminism is an individualistic form of feminist theory, which focuses on women’s ability to maintain their equality through their own actions and choices. Explanation: Feminist movements and ideologies. A variety of movements of feminist ideology have developed over the years. What is the dark side of the family? The dark side of the family is often overlooked by the main sociological theories. This side of the family addresses internal issues within the unit, such as domestic violence and abuse (sexual, physical and emotional). What are the three main feminist approaches? Nevertheless, it is possible to identify three main ways in which feminists have conceptualized power: as a resource to be (re)distributed, as domination, and as empowerment. What are the main points of feminism? Feminism works towards equality, not female superiority. Feminists respect individual, informed choices and believe there shouldn’t be a double standard in judging a person. Everyone has the right to sexual autonomy and the ability to make decisions about when, how and with whom to conduct their sexual life. What are feminist values? It begins by establishing a link between feminine gender and feminist values, which include cooperation, respect, caring, nurturance, interconnection, justice, equity, honesty, sensitivity, perceptiveness, intuition, altruism, fairness, morality, and commitment.
https://twofeministsblog.com/womens-rights/question-what-is-the-liberal-feminist-view-of-the-family.html
In 1988, chefs Mark Gaier and Clark Frasier opened Arrows Restaurant, a sustainable dining experience set in a restored eighteenth-century Maine farmhouse. It was chosen as "One of America's 10 Most Romantic Restaurants" by Bon Appétit and “One of America’s Top 50 Restaurants” by Gourmet. The chefs championed old-world practices that included growing and foraging their own crops, curing their own meats, and making their own cheese. They cultivated over two acres of gardens, including an abundant greenhouse, which supplied the restaurant with its handpicked produce and herbs. - In 2002, Mark and Clark's cookbook celebrating their flagship restaurant, Arrows, was published. - In 2005, Mark and Clark opened MC Perkins Cove in Ogunquit, Maine, a sleek and more casual restaurant dedicated to classic New England fare with stunning ocean views. - In 2007, the chefs partnered with Marriott to open Summer Winter in Burlington, MA. - In 2010, the James Beard Foundation named the duo "Best Chefs in the Northeast." They have been featured in Bon Appétit, Food & Wine, Time, Travel + Leisure, and Vanity Fair, and they have made frequent television appearances, including on the CBS Early Show and NBC’s TODAY show. - In 2011, Mark and Clark published their second cookbook, Maine Classics, a nod to their favorite bistro-style dishes. - In 2012, Mark and Clark participated in Top Chef Masters, Season 4. - In 2014, M.C. Spiedo opened in the Seaport District of Boston in the Renaissance Boston Waterfront Hotel. Inspired by their love of the Italian Renaissance, discovered during their personal travels, the restaurant is named for the style of spit-roasting or rotisserie cooking, which was a popular technique at that time. The menu incorporates Mark and Clark’s fascination with the old-world flavors of Florence, Bologna and Venice, along with their focus on farm-fresh ingredients and contemporary cooking techniques. Mark Gaier grew up in a big family near Dayton, Ohio, where his mother inspired him to begin cooking and even baking bread by the time he was fourteen. He went on to work in publishing, where his favorite part of the job was putting on dinner parties for the staff and advertisers at the magazine. He decided to go back to school and study culinary arts under Jean Wallach in Boston. Gaier worked at the Whistling Oyster under Michael Allen before meeting Frasier at Stars Restaurant in San Francisco.
http://www.markandclarkrestaurants.com/chef-mark
We all want to be happy. Behind the gym workouts, long commutes, office politics, internet dating and life struggles reside our deepest desires for joy. The optimists among us struggle to stay vigilant when outcomes disappoint. The pessimists find temporary solace in pointing out that they were right. Work, school, alarm clocks and meetings intrude in all our lives. In the end, everyone soldiers on. Just as perfection can be the enemy of completion, endless routine can become the enemy of personal growth. “As long as habit and routine dictate the pattern of living, new dimensions of the soul will not emerge.” — Henry Van Dyke To break the routines and work cycles, we celebrate the weekends and schedule our vacations. We hold on to family time, and squeeze in a few hours here and there for our passions and personal enrichment. Amazingly, there are some people who seem to navigate this life journey with greater ease and contentment. The question is, what’s their secret? — — — A new elevation We live in an age of rapid communication, omnipresent technological advances and great opportunity. While it is true that some industries and professions are dying, new industries and professions are emerging. “I can’t change the direction of the wind, but I can adjust my sails to always reach my destination.” — Jimmy Dean The changing landscape of work creates uncertainty and stress. Adding to this is the transient nature of employees today. People come and go, searching for that perfect job or better promotional positioning. Unfortunately, even when people land that perfect job or promotion, they’re not always happy. They find things to complain about. Many of us seem to spend more time focusing on negativity than positivity. We get hung up on the small stuff. Fortunately, the frequency of happiness in our lives can be increased by reorienting our perspectives and reaching for a new elevation. Individuals can raise the bar in both their personal habits and interactions with others. By avoiding known pitfalls, learning from the mistakes of others, devoting oneself to constant learning, loving and pursuing one’s passions, a new elevation can be achieved. The result will be greater self-esteem, well-being and yes, even happiness. But before you can attain any of these things, you have to develop one essential quality. — — — The essential quality Who among us has not encountered the negative coworker, always complaining about real or imagined problems. Or the whiny friend who gossips about everyone and bristles at perceived slights. Such individuals stumble through life, colliding with one drama after another. Ill-equipped to efficiently work around inconveniences and disappointments, they remain mired in their unhappiness. Many of their problems could easily be traversed with half the energy they put into complaining. Some of these people come with entitlement mentalities, convinced the world owes them everything. “Every day we have plenty of opportunities to get angry, stressed or offended. But what you’re doing when you indulge these negative emotions is giving something outside yourself power over your happiness. You can choose to not let little things upset you.” — Joel Osteen So, what is the essential quality you need to attain new heights and achieve your dreams? Emotional maturity. In my 26-year law enforcement career, low emotional maturity was a common theme I found in miserable people. They seemed to find no end of complaints or injustices to rally around.
I remember some officers in my own police department who forever butted heads with management. They would complain to anyone who listened and tried to enlist supporters and foment anger and dissatisfaction. They adopted adversarial postures, derailed morale and created a strained work environment. They might have won a few battles here and there over policy differences, but in the long run, they remained miserable. And because of their poor attitudes, they usually were passed over for promotion. Was it worth it? Years ago, I attended a lecture by Dr. Kevin Gilmartin, author of the book Emotional Survival for Law Enforcement. Dr. Gilmartin, a twenty-year police veteran, expertly explained how idealistic cops turn into cynical, bitter cops. Central to Dr. Gilmartin’s message is the importance of letting go of things you can’t change, and focusing more on your family, health and personal passions. In his lecture, Dr. Gilmartin told the story of a police sergeant who quit his job over a management rule he disagreed with. Officers in the police department used to be able to wear goatees, but a newly appointed police chief disallowed them. The sergeant rallied supporters and battled the new police chief over the rule, eventually quitting his job in protest. Dr. Gilmartin ran into this sergeant years later, working in an auto-repair shop. He asked the sergeant, “Was it worth it?” The old sergeant said, “No, it was stupid of me.” The problem was that the sergeant lacked emotional maturity. Rather than letting go of things he could not change, he decided to battle them. And it unnecessarily cost him his job. Are there some things in life worth going to battle over? Certainly. Some injustices are too big to ignore. But life is full of daily indignities, small injustices and things we disagree with. Emotionally mature individuals figure out how to circumvent these irritants and get on with their lives. — — — How’s that working for you? How about you? Do you declare war on every management decision? Do you swathe yourself in self-righteousness? Do you enlist fellow mutineers and shop stewards to rock the boat at work? If so, how’s that working for you? Are you happier? The happiest and most successful people I knew in my career found joy and pleasure in their work product. Whatever the rules, whether they agreed or not, they focused on exemplary work. And they had family, friends and passions outside of work that sustained them. These people possessed emotional maturity. No tantrums. No angry, anonymous letters to management. No hastily called union meetings to circle the wagons and proclaim a hostile work environment. These people did great work, ignored minor inconveniences, and were genuinely happier. Please don’t misread the message here. Yes, there are some leaders and management teams that are truly malevolent. There comes a time when employees need to take a stand and change things. But for the day-to-day decisions you might not agree with, is it worth the stress and strain to cater to your emotional immaturity? Should you really quit because the boss said “no” to goatees? We’re all striving for new elevations. Higher levels of achievement and personal happiness. To get there, we have to sidestep a lot of stuff we disagree with. The essential quality that will move us closer to our goals and dreams is emotional maturity. It’s all about self-restraint, not sweating the small stuff or giving in to petty grievances. It’s about keeping our eye on the prize. 
Namely, a healthy, fulfilling life. Embrace emotional maturity, and a new path will emerge for you. It’s a path worth taking. Before you go: I’m John P. Weiss. I draw cartoons, paint, and write about life. Thanks for reading.
https://www.theladders.com/career-advice/the-essential-quality-you-need-to-reach-a-new-elevation
Our modern neurosciences now tell us that the human psyche adapts and evolves according to social and environmental impacts and influences. In other words, how we communicate as a “social animal” wires our neuronal brains and influences the formation of our psyche. As many of us are nowadays living embedded in vast networks and exchanges of information, perhaps this has helped to usher in not only new and novel forms of social organization, but has also influenced how we have “wired” our brains. When our mind and attention are focused in specific ways, we create neural firing patterns that link and integrate with previously unconnected areas of the brain. In this way, synaptic linkages are strengthened, the brain becomes more interconnected and the human mind becomes more adaptive. According to psychologist Daniel Siegel, the brain undergoes genetically programmed “neural pruning sprees,” which he says involve removing various neural connections to better organize brain circuitry. In other words, the neural connections that are no longer used become disconnected (de-activated), thus strengthening those regularly used synaptic connections, which helps the brain to operate more efficiently. As the phrase goes, “Neurons that fire together, wire together.” What is also significant is that the human brain is not solely an organ situated in the head. Physiologically, the human mind is embodied throughout the whole body to regulate the flow of energy and information. Siegel tells us that neural networks throughout the interior of the body, around the heart and our various organs, are intimately interwoven and send sensory input and information to the brain. This input from the body forms a vital source of our intuition and, says Siegel, also influences our reasoning and the way we create meaning in our lives. In this sense, the body forms an extended mind — only that the brain is the receiver and interpreter of the signals. In this context we see that physical organs, through their neural and sensory networks, form another type of extended brain, or distributed mind. This view helps us to see how living systems, rather than involving separate actions, are in fact a field-network of integrated processes. This type of knowledge, I suggest, will gradually become to represent how we view our dynamic, living universe – a relational system of integrated processes and energies Our modern self-aware extended mind has clearly evolved to root us in our social world: a world of extended relations and social networks. Humanity, it can be said, has been hard-wired to extend our linkages, connections, and communication networks. We are also hard-wired to adapt physically in response to experience, and new neural processes in our brains can come into being with intentional effort — with focused awareness and concentration. This capacity to create new neural connections, and thus new mental skill sets, through experience has been termed “neuroplasticity.” Siegel notes that if we compared the neural structure of an adult brain in today’s modern society with that of an adult brain from 40,000 years ago, there would be huge differences. This is because the human brain in each context would have responded differently to the energy and information flows present within the environment and cultural experiences. By being aware of our experiences and influences, we can gain understanding of how our brain and thinking becomes patterned. 
Siegel calls this awareness “mindsight” and suggests that how we focus our attention greatly shapes the structure of our brains, and that the ability to grow new neural connections is available throughout our lives, not only in our young formative years. This knowledge encourages us to nurture our mindfulness, to establish greater self-awareness, and to pay attention to our intentions and thinking patterns. Neuroplasticity also encourages us to be more reflective about our connections with others, and to develop the reflective skills that underlie empathy and compassion. These new “wired connections” are exactly what are being activated through the rise of new media and communication tools. So when we, our friends and family, spend hours in front of a computer screen, it may not only be a case of lazy surfing or eye-tiring work. We may be involved in rewiring our neural connections to make our brains more receptive to a modern, highly connected and networked lifestyle. We are quite adaptable creatures, after all.
https://kingsleydennis.com/rewiring-our-minds-for-a-modern-lifestyle/
We are proud to announce the first episode of our new cleantech series! The Cleantech Race is an FCA-produced mini-series that aims to tackle some of the most challenging issues and technological innovations needed to decarbonize the hard-to-abate sectors. Each episode features highly qualified guest speakers from politics, industry, or research. This first episode – “Hydrogen – Out of touch with reality?” – zooms in on the complex process of decarbonizing existing hydrogen applications – and investing public and private resources in those sectors that cannot become sustainable without the use of hydrogen. We are pleased to have Dr. Klaus Grellmann, Managing Director at goetzpartners, as co-host of this episode. Together, we have developed five theses on competitive hydrogen – covering topics such as cost-effectiveness, scalability, and relevance for climate policy. We look forward to hearing your take on the potential of hydrogen for decarbonization. You can find more information on the five theses here. You can find more information on The Cleantech Race here.
https://fcarchitects.org/fca-launches-new-series-the-cleantech-race/
Irrigation systems are essential elements of landscapes for institutional and commercial facilities. When properly installed, operated, and maintained, irrigation systems ensure that grounds look their best and remain healthy. But grounds managers also can run into costly leaks, unsightly damage to landscapes, and waste of valuable water resources if just one component of the system does not work properly. It is extremely important that technicians pinpoint and fix problems as soon as they arise. Regular maintenance can ensure that irrigation systems run as efficiently as possible. In addition to complete monthly system checks, weekly visual checks can ensure that technicians identify potential problems before they cause significant damage. Comprehensive maintenance of an irrigation system will help managers determine the system's health, and, if problems exist, it will provide information to help managers determine whether repair or replacement is the wisest decision. Effective maintenance should focus on key system components. Controllers. When an irrigation system is not working correctly, the first component technicians need to check is the controller, which tells the system when to turn on and for how long it should run. In most cases, a programming error is to blame. The most common controller issues involve the controller starting over after it has finished and the irrigation operating at strange times of the day. In both cases, a simple programming error is the cause. Technicians should check the controller's programming to be sure it contains only the desired start time. They should delete unwanted start times and make sure the start times are correct as to a.m. and p.m. In some cases, the controller reverts to its default settings. The remedy is to reprogram it to the desired start and stop times. When using electronic controllers, a power surge can cause them to freeze or lock up. To reset the controller, unplug it from its power source for two to three minutes. If it has a backup-battery feature, make sure to unplug the backup as well, which allows the unit's microprocessor to reset itself. After waiting a short time, the technician can plug the controller back in and reprogram it to the usual settings. Sprinklers. Irrigation systems feature two types of sprinklers — rotary and stationary. A rotary sprinkler is designed for use on large areas and sends a spray of water rotating in a circle. A stationary sprinkler is used for smaller areas and sends a mist in all directions simultaneously. It is hidden in the ground until the system is pressurized, which makes the sprinkler head pop up. The most visually obvious problems with irrigation systems are associated with sprinkler heads and result in uneven water coverage and high volumes of water waste. Nozzle heads should pop up completely when the water is on and fully retract when the water is off. If the spray is uneven, intermittent, or nonexistent, the sprinkler head might be clogged. Dirt, grass and other debris can build up on sprinkler heads and block or redirect the water. Spray nozzles also can get knocked out of adjustment and require regular inspection. Other common causes of spray disruption include damage from lawn mowers, normal wear and tear over time, and improper installation. Technicians need to make sure sprinklers are vertical and flush with the soil grade and that water pressure is steady. 
They also should make sure sprinklers are spraying onto the grass and not on sidewalks, driveways, or streets. Valves. Water leaks in irrigation systems can result from weather — freezing and thawing — damage from shovels and other sharp tools, vandalism, invasive tree roots, and normal wear and tear. Large leaks are obvious to spot, but smaller leaks might not show up immediately and require careful investigation of system components. Technicians can look for several common electrical problems related to valves. The wiring connection at the valve has been corroded from failing to use waterproof connectors. The solenoid has failed. Or the wiring between the valve and the controller is damaged. Among the common hydraulic problems with valves is that dirt or debris has gotten inside the valve, or that the diaphragm has a hole or tear. Pipes. Problems with underground pipes generally result in visible pools of water or large wet areas on the surface. If no wet area appears, but the zone has low pressure, technicians look for an area of grass that is significantly greener than the surrounding area. That might indicate the source of the system's leak. While many larger leaks are visible above the surface, some occur below ground and require digging, which might call for a local irrigation contractor. The difficult part of underground repairs is exposing and cleaning the damaged area and keeping the repair area clean while making the repair. Finally, an uneven spray pattern within a particular zone is a sign of pressure problems, which could be the result of a valve or pipe leak. One easy way for technicians to check for a leak is to monitor the water meter. If it is constantly moving, the system more than likely has a leak.
https://www.facilitiesnet.com/groundsmanagement/article/Irrigation-Systems-Maintaining-for-Savings--15225?source=previous
An EPA insider's illustration of how public policy is too often influenced by breach of scientific integrity in high places, with a solution to the problem spelled out. by J. William Hirzy , Ph.D. The author is a charter member and past-President of the labor union which represents professional employees at the headquarters offices of the U.S. Environmental Protection Agency, where he has been employed as a senior scientist in risk assessment since 1981. Prior to that, he was employed by Monsanto Company in research and development and environmental management positions. He is currently Senior Vice-President of National Treasury Employees Union Chapter 280, representing the U.S. EPA employees mentioned above. In this commentary, I attempt to explain why the union which represents professional employees at the headquarters office of the U.S. Environmental Protection Agency (EPA) considers the EPA's Principles of Scientific Integrity (Principles) so important. The Principles are a policy statement that was adopted by EPA's National Partnership Council and promulgated by Administrator Carol Browner in 2000. The union was the driving force behind the acceptance of the Principles, having worked for over a decade to secure their adoption by the Agency. The Principles appear at the end of this essay. When I worked as a research and process development chemist in the private sector, I was almost always involved in research work that had a specific, management-directed goal in mind - make a plasticizer that won't mar polystyrene finishes on refrigerators, that won't degrade to produce odors that will affect food flavors - make a non-toxic plasticizer for blood bags that won't migrate into blood lipids, yet forms a stable plasticized container system at all temperatures - develop a manufacturing process for a phosphorus-based flame retardant that won't blow up the plant, yet produces high-quality product at a cost customers will bear, etc., etc. Management never told me to lie about how well a plasticizer performed in an extraction test or whether it marred a test surface. Management never told me to change the yield figures to make process economics look good. They never had to. It is obvious that faking it under such circumstances simply is not an option - as Richard Feynmann put it when he investigated the Challenger disaster - "Mother Nature will not be fooled." Those of us who work at EPA headquarters mostly don't have the pleasure of working at a research bench, asking Mother Nature questions, observing her answers and then deciding what next to do: publish what we think she is saying, if our confidence in our interpretation is high enough; or go back and ask more questions until our understanding and confidence are sufficient to publish. The way we deal with science at headquarters is quite different from that ideal. There may be a court-ordered schedule of rule-making facing our Office management, and it might involve setting what amounts to a "safe" exposure level for humans or other species. On occasion, a manager has his or her own idea of what that "safe" level should be, or the manager gets orders from up the line, perhaps even the White House or Capitol Hill, that the "safe" level is some particular value. This is not hypothetical. This has happened and continues to happen. The manager then comes to a staff scientist and says, "This is the safe level that we are going to propose in the Federal Register. Write me a justification for it." 
What is sometimes overtly stated, sometimes not, is, "And I don't care what the literature says, my bosses have given me instructions on this, and if you want to stay on my good side, and if you want to see some award money, you will craft for me an elegant justification for this 'safe' level." This situation, while not always the norm, does happen. When the literature does not support what management wants to do, it is a gut-wrencher for ethical scientists whose work involves reading the literature, making value judgments about the merits of the published work of other scientists, and writing technical support documents for Agency rule-making (or overseeing contractors who do that). Certain statutes permit management to set "safe" levels based on factors other than the physical and biological sciences, for example, Maximum Contaminant Levels (MCL) for drinking water. MCLs are supposed to be set as close as possible to the Maximum Contaminant Level Goal (MCLG), which is supposed to be based solely on scientific considerations of toxicity, but the MCL can be set at a different level if economic, feasibility or other factors so indicate. We have no quarrel with that situation - it is, after all, what the law passed by Congress and signed by the President and adjudicated by the Courts, mandates - it is a Constitutionally right-on situation, and we are sworn to uphold the Constitution. Where we do have a quarrel, however, is when management orders up a phony MCLG so that a politically dictated MCL will have scientific "cover." We do have a quarrel when management arranges it so that an EPA toxicologist is prevented from attending a pathology review at which all malignant tumors get down-graded so that an economically important pesticide can achieve a lower cancer rating. And when management collects data on indoor air pollutants within its own buildings, conducts and publishes a major survey showing that its own employees were sickened by the pollutants, privately (and in a newspaper article) admits that those pollutants made the employees sick - and then disavows these results and statements - all to protect a large industry and avoid "getting involved in lawsuits," do we ever have a quarrel. These are just a few high-profile, real-life examples of what scientific integrity means - or doesn't mean - at headquarters. We organized this union at headquarters in the early 1980's to fight the sort of distorted use of science described above, which impinges negatively on our working conditions, on our reputations, and ultimately on public health. It took almost two decades and much blood and tears finally to sway one set of senior EPA managers to acquiesce to establishing a set of professional ethics for EPA scientists, now called the Principles of Scientific Integrity. But the job of establishing the Principles as a working policy is not complete, because a key component is lacking. There is no agreed-upon method of resolving disputes that arise involving the Principles. Under these conditions, the Principles are not much more than pretty window dressing to which EPA management can point, thump its chest and claim to the world that quality science underpins all of its regulatory work. The union has filed two grievances, citing violations of the Principles, and both times management has alleged that the grievance process doesn't reach to enforcing the Principles. 
We can, and we may yet, test this allegation before an arbitrator, but at the present time we are using another method to bring the Principles to fruition - public pressure. Our work with EPA Headquarters employees who have taken ethical stands against distorted science has become public knowledge. Citizens and groups outside the Agency have inquired about our work with these employees and we have responded in ways that were useful to these citizens. When EPA tried to limit our ability to defend ethical employees and the public interest in the early 1990's, we called upon those citizens for help. We were successful then, and we have begun a new campaign on behalf of the Principles. We are asking citizens for help in making EPA see the need to make the Principles of Scientific Integrity more than mere window dressing. This campaign began on May 1, 2002 and was triggered by an incident in which a supervisor told a member of our bargaining unit, "It's your job to support me, even if I say 2+2=7." In 1981, with the advent of an administration expected to be hostile to environmental regulation and to labor, professional employees at the U.S. Environmental Protection Agency headquarters decided to organize. After considering various options, the organizational structure chosen was a labor union. In the federal sector, the right to organize a labor union is protected, and unions have the statutory right to bargain over certain working conditions. We considered then, and we consider now, that doing professional work in protecting the environment and public health according to high standards of professional ethics to be a working conditions issue. While the three federal US government branches are headed by people elected by our citizens, or, in the case of the courts, directly appointed and approved by those who have been elected, the day-to-day functions of each branch are carried out by people who are not elected or directly appointed by those who have been elected; i.e. these functions are carried out by government employees in the military and the civil services. These day-to-day governmental tasks range from staffing Congressional offices, to processing Social Security paperwork, to conducting military operations in defense of the Nation. While they are not elected or appointed officials, all of these government employees in the military and Civil Service are none the less "officers" of the government, and as such they take the same oath of office as the elected and directly appointed officers of government, the President, Members of Congress and U.S. Court Judges. That oath, among other things, binds all of us who serve in the federal government to "...preserve, protect and defend the Constitution of the United States against all enemies, foreign and domestic..." This oath is much more than a mere formality. The oath actually binds, by personal honor, each person who takes it to the faithful performance of duty to uphold the Constitution. A person's specific duty depends on which branch the person serves and that branch's responsibility under the Constitution. We Civil Service scientific employees at the headquarters of the U.S. Environmental Protection Agency are scientific advisers, in essence, to the Administrator and her subordinate EPA managers, i.e. 
those government officers who have been elected or directly appointed to their positions and who carry the Constitutional authority and responsibility to faithfully administer environmental laws passed by Congress, signed by the President and adjudicated by the Courts. What we do on a day-to-day basis is use the best scientific principles to honestly and ethically evaluate scientific research work done by other scientists so that work can be applied to the laws EPA administers. Our duty also requires us to be on guard to see that our work is not distorted or misused to subvert environmental laws - our oath to preserve, protect and defend the Constitution requires this of us. Anyone who would misuse our work to subvert environmental laws is a "domestic enemy" referred to in the oath. The Principles of Scientific Integrity adopted by EPA give us a tool for carrying out this element of our duty "within the family" of EPA. There are provisions in a number of other statutes that proscribe falsifying information or discriminating against employees who blow the whistle on management attempts to subvert the law, but these statutory tools are often cumbersome and difficult to use and can involve great personal risk to conscientious employees. Having the Principles of Scientific Integrity as a fully functioning internal EPA mechanism to both resolve disputes and admonish employees and managers against less than faithful execution of the law will be a giant leap forward in improving the administration of the Nation's environmental laws and in making EPA headquarters a professionally satisfying workplace. It is essential that EPA's scientific and technical activities be of the highest quality and credibility if EPA is to carry out its responsibilities to protect human health and the environment. Honesty and integrity in its activities and decision-making processes are vital if the American public is to have trust and confidence in EPA's decisions. EPA adheres to these Principles of Scientific Integrity. Ensure that their work is of the highest integrity - this means that the work must be performed objectively and without predetermined outcomes using the most appropriate techniques. Employees are responsible and accountable for the integrity and validity of their own work. Fabrication or falsification of work results are direct assaults on the integrity of EPA and will not be tolerated. Represent their own work fairly and accurately. When representing the work of others, employees must seek to understand the results and the implications of this work and also represent it fairly and accurately. Respect and acknowledge the intellectual contributions of others in representing their work to the public or in published writings such as journal articles or technical reports. To do otherwise is plagiarism. Employees should also refrain from taking credit for work with which they were not materially involved. Avoid financial conflicts of interest and ensure impartiality in the performance of their duties by respecting and adhering to the principles of ethical conduct and implementing standards contained in Standards of Ethical Conduct for Employees of the Executive Branch and in supplemental agency regulations. Be cognizant of and understand the specific, programmatic statutes that guide the employee's work. Accept the affirmative responsibility to report any breach of these principles. 
Welcome differing views and opinions on scientific and technical matters as a legitimate and necessary part of the process to provide the best possible information to regulatory and policy decision-makers. Adherence by all EPA employees to these principles will assure the American people that they can have confidence and trust in EPA's work and in its decisions.
https://slweb.org/hirzy-commentary1.html
The current process for creating voluntary product safety standards has recently been criticized in the media in connection with a debate over whether rare-earth magnets can be adequately regulated through the voluntary standard process in order to protect children from swallowing the magnets. Regardless of opinions about how the process works, the CPSC treats them as a “floor” for consumer safety measures, so manufacturers should incorporate any applicable voluntary standards into safety compliance programs to guarantee compliance. Here’s a refresher on what voluntary standards are and how they are used. Voluntary Standards: FAQs What are voluntary standards? A voluntary consensus safety standard (also known as a “non-government consensus standard”) is a safety standard for consumer products that establishes consumer product safety practices recommended to be followed by product manufacturers, distributors, and sellers. Voluntary standards can provide much-needed consistency and buy-in from industry stakeholders. Who makes voluntary standards? Voluntary standards are established by collaboration between industry groups, consumer groups, government agencies, and a private-sector body like ANSI, ASTM International, CSA Group, UL, etc. The CPSC commonly engages in the creation of voluntary standards, and a list of which voluntary standard activities CPSC staff is currently involved in can be found on the CPSC’s website. Summary reports on the status of CPSC’s staff involvement on each of the voluntary standards that CPSC staff is tracking are also available on the CPSC’s website. Why are voluntary standards important? While voluntary standards are not a mandatory requirement, product manufacturers, distributors, and sellers should be aware of and comply with voluntary standards for the products they make. CPSC staff considers the guidance contained in voluntary standards to be a safety floor from which products are designed, and staff representatives have said: “The commission expects all consumer products – including promotional products – to be fully compliant with applicable voluntary standards.” While voluntary standards are not mandatory laws enforceable by the CPSC in all instances, the agency sometimes enforces them through recalls of noncompliant products. If no voluntary standard exists, the CPSC still expects companies to consider voluntary standards in place for any similar products. From a tort liability standpoint, voluntary standards are frequently the basis for expert testimony in court as to what is customary in industry. In products liability, voluntary standards can serve as an important defense to negligence claims, and a failure to use them is customarily used to indicate a manufacturer is not following industry standards. How should voluntary standards be used? - When designing a consumer product, companies should review voluntary standards available for that product (and any similar products). - Companies should make sure that their product conforms to the standards that exist. The CPSC expects companies to consider the guidance contained in voluntary standards and use the standard as a “floor” for consumer safety measures. - Companies should think through the use or foreseeable misuse of their product’s design and determine if there are additional safety standards that should be put into place above and beyond those included in the voluntary standards.
https://www.retailconsumerproductslaw.com/2020/02/voluntary-standards-treat-as-voluntary-at-your-own-risk/
Borrowing a phrase from NSIDC’s Dr. Mark Serreze, phytoplankton are now apparently in a “Death Spiral”. See Death spiral of the oceans and the original press release about an article in Nature from a PhD candidate at Dalhousie University, which started all this. I’m a bit skeptical of the method which they describe in the PR here: A simple tool known as a Secchi disk has been used by scientists since 1899 to determine the transparency of the world’s oceans. The Secchi disk is a round disk, about the size of a dinner plate, marked with a black and white alternating pattern. It’s attached to a long string of rope which researchers slowly lower into the water. The depth at which the pattern is no longer visible is recorded and scientists use the data to determine the amount of algae present in the water. Hmmm. A Secchi disk is a proxy, not a direct measurement of phytoplankton. It measures turbidity, which can be due to quite a number of factors, including but not limited to phytoplankton. While they claim to also do chlorophyll measurements, the accuracy of SD measurements made by thousands of observers is the central question. From the literature: The Secchi disk transparency measurement is perhaps one of the oldest and simplest of all measurements. But there is grave danger of errors in such measurements where a water telescope is not utilized, as well as in the presence of water color and inorganic turbidity (source: Vollenweider and Kerekes, 1982). I’ll have more on this later. – Anthony ====================================================== Phytoplankton need cap and trade By Steve Goddard Yesterday, Joe Romm reported: Nature Stunner: “Global warming blamed for 40% decline in the ocean’s phytoplankton” “Microscopic life crucial to the marine food chain is dying out. The consequences could be catastrophic.” That sounds scary. Does it make any sense? Phytoplankton thrive everywhere on the planet from the Arctic to the tropics. One of the primary goals of this year’s Catlin expedition was to study the effect of increased CO2 on phytoplankton in the Arctic. They reported: Uptake of CO2 by phytoplankton increases as ocean acidity increases. That sounds like good news for Joe! We also know that phytoplankton have been around for billions of years, surviving average global temperatures 10C higher and CO2 levels 20X higher than the present. http://ff.org/centers/csspp/library/co2weekly/2005-08-18/dioxide.htm Phytoplankton growth/reduction in the tropics correlates closely with ENSO. El Niño causes populations to decline, and La Niña causes the populations to increase. During an El Niño year, warm waters from the Western Pacific Ocean spread out over much of the basin as upwelling subsides in the Eastern Pacific Ocean. Upwelling brings cool, nutrient-rich water from the deep ocean up to the surface. So, when upwelling weakens, phytoplankton do not get enough nutrients to maintain their growth. As a result, surface waters turn into “marine deserts” with unusually low populations of phytoplankton and other tiny organisms. With less food, fish cannot survive in the surface water, which then also deprives seabirds of food. During La Niña conditions, the opposite effect occurs as the easterly trade winds pick up and upwelling intensifies, bringing nutrients to the surface waters, which fuels phytoplankton growth. Sometimes, the growth can take place quickly, developing into what scientists call phytoplankton “blooms.” The phytoplankton must be loving life now! 
The author of this study (Boris Worm) also reported last year “if fishing continued at the same rate, all the world’s seafood stocks would collapse by 2048” So we know that phytoplankton have survived for billions of years in a vast range of climates, temperatures and CO2 levels. Apparently they have become very sensitive of late – perhaps from all the estrogens being dumped in the oceans? Or maybe they have been watching too much Oprah? The standard cure for hyperventilation is to increase your CO2 levels by putting a bag over your head.
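A note on the Secchi-disk objection raised above: turning a transparency reading into anything about phytoplankton requires an empirical conversion, and the published conversions assume that algae, rather than water color or inorganic turbidity, control what the observer sees. The sketch below is illustrative only. It uses two widely cited relations, the Poole and Atkins (1929) rule of thumb linking Secchi depth to light attenuation, and Carlson's (1977) freshwater trophic state index, whose coefficients were fitted to lakes, not oceans; the 8 m Secchi depth is a hypothetical input, and the point of the exercise is simply that the chlorophyll number falls straight out of an assumed regression.

import math

def attenuation_from_secchi(secchi_m):
    # Poole-Atkins rule of thumb: Kd * Z_SD is roughly constant (about 1.7 for clear water)
    return 1.7 / secchi_m

def carlson_tsi_from_secchi(secchi_m):
    # Carlson (1977) trophic state index from Secchi depth, developed for freshwater lakes
    return 60.0 - 14.41 * math.log(secchi_m)

def implied_chlorophyll(secchi_m):
    # Invert Carlson's chlorophyll relation, TSI = 9.81 * ln(chl) + 30.6, to see what
    # chlorophyll (ug/L) a given transparency implies IF algae alone set the turbidity
    tsi = carlson_tsi_from_secchi(secchi_m)
    return math.exp((tsi - 30.6) / 9.81)

# hypothetical 8 m Secchi reading: prints the implied attenuation coefficient and chlorophyll
print(attenuation_from_secchi(8.0), implied_chlorophyll(8.0))

If the water is colored or carries suspended sediment, the same 8 m reading corresponds to a different chlorophyll value, which is precisely why single-observer Secchi data are a noisy proxy.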
https://wattsupwiththat.com/2010/07/30/now-its-phytoplankton-panic/
Stephen King must be doing something right with his novels, short stories and novellas, since a lot of them have been adapted for the big screen. Many are highly praised and regarded as some of the best in film history thanks to the collaborators. The Shawshank Redemption is no different. Based on the novella Rita Hayworth and Shawshank Redemption, the adapted film is featured on the AFI (American Film Institute) top 100 list, as it should be; it is one of the greatest films of all time. The Shawshank Redemption is a story that focuses on one central idea: Hope. Andy Dufresne (Tim Robbins) is charged with the murder of his wife and her lover in 1947. Sentenced to Shawshank penitentiary, where everybody incarcerated is innocent, Andy befriends Red (Morgan Freeman), the resident smuggler who can get just about anything. The one thing prison gives you is time, time to let your mind wander and go insane or time to focus on a hobby to keep yourself preoccupied. Giving any amount of time to someone who is motivated enough to not want to be in that place can be dangerous. Andy holds on to hope the entire time he’s incarcerated, whereas Red views hope in an entirely different light. Red sees hope as something that can destroy a person, breaking them down over the course of their life. Over the course of Andy’s imprisonment, his hope is nearly destroyed when he is raped, and it returns to him after he helps the guards with legal issues, including tax returns and other projects. It’s the most prominent theme that builds around every character in the film. Hope is crafted differently from each character’s perspective and how they view it. Brooks (James Whitmore) hopes for a life on the outside but is too afraid of the differences and therefore cannot handle it. “The funny thing is, on the outside I was an honest man. Straight as an arrow. I had to come to prison to be a crook.” Shawshank is a love letter to hope and persistence blended with sincere friendships. That much hasn’t changed in the 26 years since the film was released. Despite the prominent themes of hope, persistence and friendship throughout, Shawshank is brutal. There are several scenes that depict the brutality of prison, with rape and murder, but hope ultimately is the good that prevails. These few scenes that depict the evilness of prison are absolutely necessary for the narrative, grounding the film so it is not a complete fantasy in which nothing bad ever happens. Some people who end up in prison are actually bad; it’s a reality that is handled beautifully by director Frank Darabont. Paired with the outstanding direction is the hauntingly intimate cinematography of Roger Deakins. The use of the sepia tone brings us right back to the time period as if we are actually transported in a time machine. Deakins is a master at capturing the emotion of the film and portraying it in a way that we can relate to, regardless of whether these are bad men or not. Completing the trifecta of Shawshank are the performances by Tim Robbins and Morgan Freeman. Both bring a certain inspiration and reverence to their characters, respecting who they are as individuals. As much as Freeman and Robbins shine in the spotlight, the ensemble cast gives equally powerful performances. As Andy gets deeper into his prison sentence, a sort of acceptance is achieved. He accepts the fact that he’s almost untouchable, looking to push the boundaries when he blasts opera music in warden Norton’s (Bob Gunton) office. 
He even accepts the fact that he’s guilty of the crime even though he didn’t pull the trigger that killed his wife and her lover. When the evidence is provided by Tommy (Gil Bellows), Andy fully reclaims his innocence and even pushes for a new trial. But it’s the corruption of Norton that keeps Andy down, ultimately forcing Andy to plan his escape and expose the murder of Tommy by Norton and Byron Hadley (Clancy Brown). “There’s not a day goes by I don’t feel regret. Not because I’m in here, or because you think I should. I look back on the way I was then: a young, stupid kid who committed that terrible crime. I want to talk to him. I want to try and talk some sense to him, tell him the way things are. But I can’t. That kid’s long gone and this old man is all that’s left. I got to live with that. Rehabilitated? It’s just a bullshit word. So you can go and stamp your form, Sonny, and stop wasting my time. Because to tell you the truth, I don’t give a shit.” Shawshank can be interpreted in many different ways. Andy can be seen as a prophet in the scenes where he gets beer for his fellow inmates while working on the roof or when the library is built. He’s ultimately a good person who gets caught up in a bad situation, but he never loses that spark inside him that holds on to the hope of one day being proved innocent. Being in prison doesn’t help that innocence, as the warden does everything possible to keep him there. Andy exposes Warden Norton, who has been laundering money through a person who doesn’t exist, turning the warden into the criminal who is handed the life sentence. The town of Zihuatanejo can be interpreted as heaven – a place with no memory, where your past doesn’t exist nor define who you are. Overall, The Shawshank Redemption is poetic in the way it’s shot, edited, directed and acted. It can feel quite lengthy and slowly paced, but the positives heavily outweigh the minor negatives. It’s crafted in a way that stands the test of time, and the film has aged extraordinarily well. The themes and ideas presented in Shawshank hold up; they are just as relevant today as they were 26 years ago. Shawshank should be seen by everyone; it’s an absolute classic film in every sense of the word, with the ability to inspire anyone who watches it. If I were to rate The Shawshank Redemption, I’d rate it a 5 out of 5. So, tell me guys, have you seen The Shawshank Redemption and if so, what do you think about it? Do you agree or disagree with me? Comment below or send me an email and let me know what you think. The Shawshank Redemption is written by Frank Darabont and Stephen King, directed by Frank Darabont, rated R, and holds a 91% on Rotten Tomatoes. The Shawshank Redemption was released on September 22, 1994 and has a runtime of 2 hours and 22 minutes. The Shawshank Redemption can be bought from online retailers such as iTunes, Google, Amazon and Vudu. If you guys like what you’re reading, please subscribe and check out my Patreon to support the blog in a different way.
https://reelinterpretations.com/2020/11/11/theshawshankredemption1994/
FRQ 2000.1 Homework Research—(Energy Consumption & Sulfur Emissions) A large, coal-fired electric power plant produces 15 million kilowatt-hours of electricity each day. Assume that an input of 15,000 BTUs of heat is required to produce an output of 2 kilowatt-hours of electricity. 1. Showing all steps in your calculations, determine the number of: · BTUs of heat needed to generate the electricity produced by the power plant each day. · Pounds of coal consumed by the power plant each day, assuming that two pounds of coal yields 6,000 BTUs of heat. · Pounds of sulfur released by the power plant each day, assuming that the coal contains 2% sulfur by weight. 2. The Environmental Protection Agency (EPA) standard for power plants such as this one is that no more than 1.2 pounds of sulfur be emitted per million BTUs of heat generated. Using the results in part (1), determine whether the power plant meets the EPA standard. 3. Describe three ways a fuel-burning electric power plant can reduce its sulfur emissions. 4. Why are sulfur emissions from coal-fired power plants considered an environmental problem? What are two negative effects on an ecosystem that can be associated with these emissions? FRQ 2001.1 Homework Research (Energy Consumption) For the following problems, show all steps of your calculations. Include units! Use the information listed below to answer all questions: · The house has 3,000 square feet of living space. · 70,000 BTUs of heat per square foot are required to heat the house for winter. · Natural gas is available at a cost of $4.00 per thousand cubic feet. · One cubic foot of natural gas supplies 2,000 BTUs of heat energy. · The furnace in the house is 85% efficient. 1) How much natural gas (in cubic feet) does the house need for heat for one winter? 2) What is the cost of heating the house for one winter? 3) What are three actions that the residents of the house could take to conserve heat energy and lower the cost of heating the house? 4) The residents decide to supplement the heating of the house by using a wood-burning stove. What are two environmental impacts—one positive and one negative—of using a wood-burning stove? FRQ 2004.1 Homework Research (Biomagnification) · What are two human activities that release mercury into the environment? How is mercury transported from these sources to an aquatic system? · What are three things that can be done to reduce the amount of mercury released by the two activities that you listed? · Why is there a greater health risk associated with eating large predatory fish than with eating small non-predatory fish? · What are two toxic metals, other than mercury, that have a negative impact on human health, and how are they introduced into the environment? What effects can these toxic metals have on humans who are exposed to them? FRQ 2005.4 Homework Research (Drilling in ANWR) The Arctic National Wildlife Refuge (ANWR) on Alaska’s North Slope is frequently in the news because petroleum geologists estimate that there are billions of barrels of economically recoverable oil beneath the surface of its frozen tundra. According to a 1998 United States Geological Survey (USGS) estimate, ANWR could contain up to 10 billion barrels of technically recoverable oil. Oil company officials advocate opening the refuge to oil exploration and the subsequent development of its petroleum resources. 
Environmentalists argue that oil exploration and development will damage this fragile ecosystem and urge Congress to protect ANWR by designating it as a wilderness area. 1) The United States consumes approximately 22 million barrels of oil per day. According to the USGS estimate, for how many days would the technically recoverable oil resource in ANWR supply the total United States demand for oil? 2) Describe 3 characteristics of arctic tundra that make it fragile and explain how these make the tundra susceptible to damage from anthropogenic impacts. 3) What are 3 activities that are associated with the development of ANWR petroleum resources, and how would each of these activities environmentally impact ANWR? 4) What are 2 major end uses of the 22 million barrels of oil that the US consumes each day? For each of these, what is a conservation measure that would reduce US consumption? FRQ 2007.1 Homework Research (Sewage Treatment) · Name one component of sewage that is targeted for removal by primary treatment and one component targeted for removal by secondary treatment. For each of these, explain how the pollutant is removed in the treatment process. · Two methods of disposing of solid waste from sewage treatment plants are transporting it to a landfill or spreading it onto agricultural lands. What is an environmental problem associated with each of these? · During the final step in sewage treatment, disinfection, what are two pollutants targeted? What is a commonly used method of disinfection? · Name a US federal law that requires monitoring the quality of the treated sewage that is discharged into rivers. FRQ 2007.3 Homework Research (Ozone) 1) What is the class of chemical compounds that is primarily responsible for the thinning of the stratospheric ozone layer? Describe 3 major uses for which these chemicals were made. How does this class of chemical compounds destroy stratospheric ozone molecules? (Show the chemical equations for this!) 2) What is the major environmental problem caused by the depletion of stratospheric ozone? Describe 3 effects on ecosystems and/or human health that can result. 3) Ozone formed at ground level is a harmful pollutant. What are 3 effects that ground-level ozone can have on ecosystems and/or human health? FRQ 2008.3 Homework Research (Fire Suppression) 1) What are 3 characteristics of forests that develop when fires are suppressed? Explain why fire suppression increases the risks of intense & extensive forest fires. 2) The Healthy Forests Initiative (HFI) was legislation passed in 2003. Its effects are expected to extend beyond fire reduction. Aside from fire reduction, describe one positive and one negative effect likely to result from the implementation of the provisions of the HFI. 3) Identify 3 ecosystem services that forests provide to humans. How does clear-cutting affect each service you identified? 4) What is a specific type of plant community or biome (other than a forest) that is naturally maintained by fire? How does the fire maintain the community or biome?
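The arithmetic in FRQ 2000.1 and in part 1 of FRQ 2005.4 reduces to chained unit conversions. The Python sketch below is offered purely as an illustration of how those conversions fit together; every number comes from the prompts above, and the variable names are my own rather than anything specified in the assignment.

# FRQ 2000.1 -- coal plant energy and sulfur arithmetic (illustrative sketch)
kwh_per_day = 15_000_000                          # electricity produced each day
btu_per_kwh = 15_000 / 2                          # 15,000 BTUs of heat yield 2 kWh
btu_per_day = kwh_per_day * btu_per_kwh           # 1.125e11 BTUs of heat per day

btu_per_lb_coal = 6_000 / 2                       # two pounds of coal yield 6,000 BTUs
lb_coal_per_day = btu_per_day / btu_per_lb_coal   # 3.75e7 pounds of coal per day

lb_sulfur_per_day = 0.02 * lb_coal_per_day        # coal is 2% sulfur by weight -> 7.5e5 lb/day

# EPA standard: no more than 1.2 lb of sulfur per million BTUs of heat generated
allowed_lb_per_day = 1.2 * btu_per_day / 1_000_000   # 1.35e5 lb/day allowed
meets_standard = lb_sulfur_per_day <= allowed_lb_per_day   # False: the plant exceeds the standard

# FRQ 2005.4, part 1 -- days of total US demand covered by ANWR's estimated 10 billion barrels
days_of_supply = 10_000_000_000 / 22_000_000      # roughly 455 days

print(btu_per_day, lb_coal_per_day, lb_sulfur_per_day, meets_standard, round(days_of_supply))

Writing out the intermediate quantities this way mirrors the "show all steps" requirement: each line carries its units in the variable name, so a dropped conversion factor is easy to spot.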
http://lbranch.bajaru.com/ap-environmental-science/ap-environmental-science-2/apes-summer-assignment/frq-summer-hw-research.html
Did you ever imitate your parents when you were a kid? Or better yet, imitate your parents imitating someone else? Or try to win favor by pretending to be well-behaved children talking in tones like, “yes father, I would be delighted to pick up the common area of all of my things”? “Real love amounts to withholding the truth, even when you’re offered the perfect opportunity to hurt someone’s feelings” “I won’t put in a load of laundry, because the machine is too loud and would drown out other, more significant noises – namely, the shuffling footsteps of the living dead.” “She’s afraid to tell me anything important, knowing I’ll only turn around and write about it. In my mind, I’m like a friendly junkman, building things from the little pieces of scrap I find here and there, but my family’s started to see things differently. Their personal lives are the so-called pieces of scrap I so casually pick up, and they’re sick of it. More and more often their stories begin with the line “You have to swear you’ll never repeat this.” I always promise, but it’s generally understood that my word means nothing.” I fell in love with David Sedaris’ writing when I listened to his book Let’s Explore Diabetes With Owls. I then went on to listen to Me Talk Pretty One Day, and his line about the youth in Asia (euthanasia) still cracks me up when I think about it. It was only natural for me to crave his style of humor again. And trust me – David makes anything funny. He talks about anything from what to eat for dinner to doing laundry and I find myself giggling. Be warned though – nothing is sacred. NOTHING. You never quite know what David will say next. While Dress Your Family In Corduroy and Denim is funny, it is not my favorite of the three I have listened to. It is good, and it filled my David fix. I will definitely be listening to him again. If you are planning to give his books a try, I highly recommend listening to them on audio. David narrates them himself, and he has just the right tone and pause in his voice that makes it all the better.
https://bookjourney.net/2014/02/07/dress-your-family-in-corduroy-and-denim-by-david-sedaris/
BACKGROUND AND SUMMARY OF THE INVENTION This invention relates to a safety device for use with a protective sports helmet and more particularly to a football-type helmet having a fully break-away face mask portion. Modern football helmets now generally include an outwardly extending face mask permanently connected to the helmet in such a manner so as to protect the face of the wearer from injury during maneuvers associated with such sport. However, with the introduction and common use of such face mask construction there has been a consequential rise in neck injuries caused by opponents' grasping the face mask portion of the helmet in attempts to tackle or otherwise displace the position of the wearer. Generally, injuries of this type are caused by a twisting motion imparted to the head and subsequently the wearer's neck. This drawback has been recognized and attempts have been made to provide helmets which eliminate or reduce such effect. Accordingly, helmet face guard combinations in which the face guard is somewhat resiliently mounted so as to absorb shock or distort upon being grasped are shown in U.S. Pat. No. 3,170,164. Other attempts to alleviate the above-indicated problem include the provision of a pivotal face guard such as set forth in U.S. Pat. No. 3,139,624 or a fully break-away face guard as disclosed in U.S. Pat. No. 3,283,336. The citation of the above-indicated patents along with the discussion thereof constitutes applicant's Prior Art Disclosure and in that regard, a copy of each such patent is enclosed with this application. Such prior art attempts to alleviate the above-discussed problems have not, however, met with complete success, and accordingly the need still exists for a face mask helmet combination which effectively reduces the above-discussed type of neck injury. It is accordingly a primary object of the present invention to provide a safety device for use with a protective sports helmet including an open grid-like mask adapted to be connected thereto but fully removable therefrom when grasped by another player no matter what the direction of the pull may be. A further object of the present invention is the provision of a safety device of the aforementioned type in which the mask portion thereof is normally engaged with the helmet by a frictional snap lock in a relatively uncomplicated fashion and which does not alter the overall appearance of the helmet. These and other objects of the invention are accomplished by the fully break-away connection of a mask having a peripheral rod-like base portion over the generally central facial opening of a helmet and having downwardly extending opposed side portions and an upper central connecting portion. The mask/helmet connection is accomplished by attachment means having a plurality of spaced attachment points including an element having a forwardly directed undercut channel opening provided on each of said opposed side portions and said upper central connecting portion. In alternate embodiments of the invention, the attachment points may either take the form of a plurality of spaced blocks disposed about the periphery of the facial opening of the helmet, or the form of a single element extending about the periphery of said opening. Other objects, features and advantages of the invention shall become apparent as the description thereof proceeds when considered in connection with the accompanying illustrative drawing. 
DESCRIPTION OF THE DRAWING In the drawing which illustrates the best mode presently contemplated for carrying out the present invention: FIG. 1 is a side elevational view of a sports helmet incorporating one form of the present invention; FIG. 2 is an enlarged view of a portion of FIG. 1; FIG. 3 is a side sectional view taken along the line 3--3 of FIG. 2; FIG. 4 is a perspective view showing the configuration of one of the blocks forming an attachment point whereby the face mask of the present invention may be attached to a helmet; FIG. 5 is a side elevational view of an alternate form of the invention with the face mask attached to the helmet; FIG. 6 is a view similar to FIG. 5 with the face mask portion removed from the helmet; FIG. 7 is an enlarged view of a portion of FIG. 5; and FIG. 8 is a sectional view taken along the line 8--8 of FIG. 7. DESCRIPTION OF THE INVENTION Turning now to the drawing and particularly FIGS. 1 through 4 thereof, one form of the invention is depicted. Therein, a helmet 10 is provided with a face mask such that the mask can be fully broken away from the helmet when separately grasped and a force having a significant forward and/or downward component applied thereto. The helmet 10 is of generally conventional configuration and includes opposed downwardly extending side portions 14 and an upper central connecting portion 16 which cooperatively define a forward or facial opening 18 through which the wearer may observe and thereby participate in the game or sport being undertaken. The purpose of the mask 12 is, of course, to protect the wearer's face from injury as by contact with hand, arm, knee or foot portions of other participants in such sport. The mask 12 is of generally grid-like configuration and includes a peripheral rod- like base portion 20 configured so as to conform to portions of the helmet adjacent the opening 18 thereof. The mask 12 further includes a forwardly extending guard portion 21 to accomplish the above-stated purpose. In order to attach the mask to the helmet, a plurality of connecting blocks are attached to the helment at spaced locations about the periphery of the opening 18. Such blocks are of various configurations dependent upon their position with regard to the helmet. Accordingly, the device includes at least a pair of side blocks 22 attached to each side 14 of the helmet 10. Each such side block 22 includes a pad 24 having an opening 26 provided therein and at the rear end thereof with an upwardly extending generally C-shaped head 28. The C- shaped head in conjunction with the pad 24 form a reduced dimension opening 30 for a longitudinally extending channel 32. A screw 34 or other fastening means is adapted to extend through the pad opening 26 directly into the side surface 14 of the helmet 10, and accordingly mount the side block 22 in position as shown in FIGS. 1 and 2. The connecting means further includes front and corner face mask attachment blocks 36 and 38 respectively. In such regard, the corner blocks 38 include upper and lower components 40 and 42 respectively which cooperate to form a generally U-shaped longitudinally orientated channel 44 in which a corner 46 of the base portion 20 of the mask is adapted to be received. Similarly, the front block 36 includes head 48 and pad portion 50. Suitable holes are provided in both the components of the side block 38 and through the pad portion 50 of the front block 36 so as to permanently affix such members directly to the helmet 10 as in the case of side blocks 22. 
The base portion 20 of the mask is generally of rod-like construction and accordingly exhibits a somewhat circular cross-sectional configuration. Portions of the face mask are adapted to be received in snap frictional engagement within the forwardly facing open channels of each of the connecting blocks 22, 36 and 38. The blocks are preferably each formed of a somewhat elastic or flexible material such as molded plastic resinous compositions, i.e. polyethylene or polypropylene, which provide a certain give to the head portions thereof. Accordingly, in this manner, the face mask 12 may be snap locked in the desired position on the helmet 10 by forcibly pushing the base portion 20 thereof rearwardly when so positioned on the helmet. The face mask will be retained in such position unless it is forcibly grasped, as by an opposing player, and subjected to a force having a significant forward and/or downward component, at which time the mask will be forced out of the various channels and completely break away from the helmet. It is contemplated that the extent of the undercut or reduced opening of the several channels may be varied such that the force necessary to break the mask away from the helmet can be varied according to the sport and the degree of expertness or strength of the player participants. Similarly, the materials from which the blocks are formed might also be varied so as to vary the force required to accomplish such break-away action.

Turning now to FIGS. 5 through 8 of the drawing, an alternate embodiment of the invention is disclosed wherein the attachment means by which the mask 12a thereof may be connected to the helmet 10 includes an elongated member 52 outwardly extending from outer surface portions of the helmet 10 and in effect extending about the periphery of the front opening 18 thereof. Such member 52 is affixed to the helmet by fastening means such as the screws 54 shown. The overall configuration of the member 52 is somewhat U-shaped and exhibits forwardly facing side edge surfaces 56 and a similarly disposed front edge surface 58. Outwardly extending from each of such surfaces 56, 58 is at least one forwardly projecting headed member 60 which is adapted for frictional snap engagement with an open pocket 62 provided within the rearwardly extending edge 64 of the base portion 20a of the face mask 12a. The pockets 62 are spaced along the periphery of the base portion 20a in spaced relation conforming to the disposition of the headed members 60 about the periphery of the member 52. It should thus be apparent that the pockets 62 roughly correspond to the recesses or channels 32 of the previous embodiment, and accordingly a similar frictional snap engagement may be provided by reason of the reduced extent of the lead-in opening 66 thereto. Also, while in the alternate embodiment depicted in FIGS. 5 through 8 the pockets are shown disposed in the base portion 20a, a variant construction thereof may include the pockets within the member 52 and the headed members rearwardly extending from the rear surface 64 of the base portion 20a without departing from the spirit of the invention disclosed herein. Furthermore, the spacing between the member 52 and the base portion 20a may be varied, and in some cases the opposed surfaces thereof may contact each other so as to provide increased lateral and rear stability to the face mask 12a.
While there is shown and described herein certain specific structure embodying the invention, it will be manifest to those skilled in the art that various modifications and rearrangements of the parts may be made without departing from the spirit and scope of the underlying inventive concept and that the same is not limited to the particular forms herein shown and described except insofar as indicated by the scope of the appended claims.
For individuals, the decision over whether they are granted a loan can either enable or frustrate key life decisions, affecting everything from houses to livelihoods. For the economy, lending money facilitates the mechanism by which money is created safely. Prudent lending is a prerequisite for a stable and growing economy.

Consumer credit decisions have, for the past 50 years or so, been automated based on statistical models of past customer behaviour. Over that time, the data and computing power available to make these important decisions have grown exponentially. It is now quite possible to accurately infer some of the most private and sensitive characteristics (such as race and religion) from the data captured automatically as the exhaust of a digital lifestyle. As we look to choose which of the available data to use, it is essential for us to do so fairly. While fairness is a concept we learn from an early age, it is difficult to encode what it is, specifically, that justifies whether a decision is 'fair' from a legal perspective, or indeed to specify this as a robust mathematical statement. Lenders need a pragmatic approach to support good credit decisions that are both accurate and legitimately defensible, based on the available data.

Equality paradox
One of the most natural ways to tackle the issue of fairness is to attempt to exclude manifestly unfair decision processes. To this end, most societies have a legal framework that makes it illegal to discriminate based on protected characteristics, such as race, gender and religion. While equality laws have been effective in eliminating examples of past unfair discrimination, there is less clarity on whether such rules would have the same effect on future cases. However, there are several complicating factors that spring to mind here:
- There are multiple ways of considering equality, such as equality of outcome vs. accuracy vs. opportunity. It turns out that, when specified mathematically, not all of these can necessarily hold simultaneously, leaving decision makers with a paradox as to which measure of equality to comply with, and which inequalities they are happy to live with (a toy calculation at the end of this article illustrates the tension).
- While protected characteristics have been laid down in law, human society is constantly changing and therefore laws will change over time too. The current debates regarding the rights and treatment of non-binary genders and the distinctions around LGBT+ sexual orientation suggest that views are not fixed as to how protected characteristics should be defined. The fairness of a decision, however, should be resilient to such societal changes.
- On a pragmatic level, assessing equality on a protected characteristic requires the capture, storage and analysis of that characteristic. That seems deeply personal and, for some, offensive. Certainly, making it mandatory to disclose your protected characteristics would be very unfair, even if the objective was only ever to prove that decisions were not discriminatory. Making the disclosure voluntary would almost certainly introduce material bias, as discriminated-against groups are likely to be over-represented among non-disclosers, rendering the whole exercise pointless.

Concept of fairness
Given that fairness is not the same as equality, can we identify some other attributes of fairness that would provide a guide as to which equality measures are usable in different contexts?
A review of the philosophical literature on the concept of fairness highlights the following attributes as key considerations:
- Causality: if something is a direct cause of the outcome, it may be reasonable to base a fair decision on it, in line with the magnitude of the effect.
- Relevance: even an attribute that is not causal must at least be associated with the outcome to be fair, and it must be foreseeable that the information would be used in a decision.
- Volition: the subject is more likely to consider a decision fair if they have discretion over the behaviours being used to make it (for example, if the speed at which a person chooses to drive affected their car insurance).
- Reliability: fair decisions must be as predictable as possible and must not include unnecessary randomness from poor-quality data.
- Non-private: subjects should have control over their private data, and decisions should not be made using information that they have not chosen to share.

Fairness defined
Rather than using equality measures, the following approach should be used to demonstrate fairness in decision-making models. Importantly for businesses, this approach meets the key requirements of the General Data Protection Regulation and the EU's proposed Fair AI legislation.
- Type 1: are there data items that are reasonably available and causal of the outcome being targeted? If so, the feature must be used equitably in the decision process. Causality can be determined by causal discovery algorithms and/or supported by human opinion.
- Type 2: are there data items that are reasonably foreseeable and volitional? If so, they may be used equitably in the decision-making process.
All other characteristics should be excluded from a fair decision.
Focusing on these general rules as to which characteristics can be used in models ensures that key aspects of fair decisions are implemented in a consistent and practical manner. This will naturally exclude legally protected classes, which tend not to be causal or volitional, as well as other characteristics that are not formally protected but would nonetheless be unfair to use. Provided the model is constructed using a statistically unbiased fitting criterion, it will also fulfill an equality-of-accuracy condition. In addition, normal good modelling practice should ensure that only good-quality and reliable data is used.
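To make the Type 1/Type 2 screening concrete, here is a minimal sketch in Python. It is not an implementation from the article: the feature names and the availability/causal/volitional flags below are hypothetical placeholders, and in practice each flag would come from causal discovery plus expert and legal review.

```python
# Minimal sketch (not from the article) of the Type 1 / Type 2 screening rule.
# All feature names and flags are hypothetical examples for illustration only.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    available: bool      # reasonably available to the lender
    causal: bool         # judged causal of the repayment outcome
    foreseeable: bool    # subject could foresee its use in a credit decision
    volitional: bool     # subject has discretion over the behaviour
    shared: bool         # subject has chosen to share it (non-private)

def classify(f: Feature) -> str:
    """Return 'type_1' (must use), 'type_2' (may use) or 'exclude'."""
    if not f.shared:
        return "exclude"            # non-private attribute: never use
    if f.available and f.causal:
        return "type_1"             # causal and available: must be used equitably
    if f.foreseeable and f.volitional:
        return "type_2"             # foreseeable and volitional: may be used
    return "exclude"                # everything else is excluded

candidates = [
    Feature("repayment_history", True, True,  True,  True,  True),
    Feature("current_debt_load", True, True,  True,  True,  True),
    Feature("postcode",          True, False, False, False, True),
    Feature("inferred_religion", True, False, False, False, False),
]

for f in candidates:
    print(f"{f.name:20s} -> {classify(f)}")
```

The point of the sketch is only that the screening step is a simple, auditable filter applied before any model fitting, rather than a property checked after the fact.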
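Returning to the equality paradox noted earlier, a toy calculation makes the tension between equality of outcome and equality of accuracy concrete. The repayment rates below are invented purely for illustration and do not come from the article.

```python
# Toy illustration (invented numbers) of the "equality paradox": when two groups
# have different underlying repayment rates, equal approval rates and perfect
# accuracy in both groups cannot hold at the same time.

REPAY_RATE = {"group_A": 0.8, "group_B": 0.5}   # hypothetical base rates

def random_policy_accuracy(repay_rate: float, approval_rate: float) -> float:
    """Accuracy when a random share of the group is approved:
    correct decisions = approved repayers + declined defaulters."""
    return approval_rate * repay_rate + (1 - approval_rate) * (1 - repay_rate)

# Policy 1: a perfect classifier approves exactly those who will repay.
# Accuracy is 1.0 in both groups, but approval rates equal the base rates,
# so equality of outcome fails (0.80 vs 0.50).
for group, p in REPAY_RATE.items():
    print(f"perfect   {group}: approval_rate={p:.2f}, accuracy=1.00")

# Policy 2: approve a random 65% of each group to force equal outcomes.
# Approval rates now match, but accuracy differs between the groups.
for group, p in REPAY_RATE.items():
    acc = random_policy_accuracy(p, 0.65)
    print(f"equalised {group}: approval_rate=0.65, accuracy={acc:.2f}")
```

In this toy setup, forcing equal approval rates when base rates differ rules out perfect accuracy in both groups, which is the sense in which the different equality measures cannot all be satisfied at once.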
https://www.globalriskregulator.com/Subjects/Reporting-and-Governance/Fair-enough-Justifying-fair-use-of-data-in-credit-risk
Q: Finding the coefficient of friction

I'm asked to find the coefficient of friction before the bar starts sliding. Here is my solution:

∑M = 0: -mg * 1.4863 + 1.8159*T = 0 (the T*1.8159 term I found by using a cross product: (-2.9726i + 2.6765j) cross (-T sin21 i + T cos21 j))
∑Fx = 0: -T sin21 + Ffr = 0
∑Fy = 0: -mg + N + T cos21 = 0

1) from ∑M = 0, T = 8.02m
2) from ∑Fx = 0, -T sin21 + uN = 0
3) from ∑Fy = 0, N = mg - T cos21

Substituting 3) into 2):
4) -T sin21 + umg - uT cos21 = 0
Substituting 1) into 4):
8.02sin21m + umg + u*7.487m = 0

From here I get u = 0.166. But the correct answer is 1.24. I have checked it over 10 times, but everything seems correct. Please, can someone tell me where I made a mistake?

A: In your equation (1), $T = 8.02 m$, and in your equation (4), $-T\sin{21^{\circ}} + \mu mg - \mu T \cos{21^{\circ}} = 0$. Rearranging (4) and then substituting the value of T from (1) gives $$\mu = \frac{T\sin{21^\circ}}{mg - T\cos{21^\circ}}= \frac{8.02(0.358)}{9.8-8.02(0.934)}=1.24$$ There is a minus sign in the denominator. You must have added instead of subtracting, to arrive at 0.166. I use the cross product for the torque because I tend to make too many mistakes the other way. It helps to make a good sketch (free body diagram) of all the force components; without one, I need to use the cross product.
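A quick numeric check of the answer's algebra (a sketch, assuming g = 9.8 m/s² and the 21° angle from the question) shows how the sign in the denominator flips the result between the two values:

```python
# Re-evaluates the two versions of equation (4): the correct one with a minus
# sign in the denominator, and the sign-error version from the question.
import math

theta = math.radians(21)
T_over_m = 8.02          # from the moment equation, T = 8.02*m, so m cancels
g = 9.8

mu_correct = T_over_m * math.sin(theta) / (g - T_over_m * math.cos(theta))
mu_sign_error = T_over_m * math.sin(theta) / (g + T_over_m * math.cos(theta))

print(round(mu_correct, 2))     # ~1.24 (subtracting T*cos(21) in the denominator)
print(round(mu_sign_error, 2))  # ~0.17 (adding it, the mistake in the question)
```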
The Good Doctor, Ken Ravizza
I'm grateful to be given the time and space to write about the work of Dr. Ken Ravizza from reading his chapter in Expert Approaches to...

The Good Doctor, Ken Ravizza
Emotional Regulation for Coaches
Why do you do what you do? And how? Ok, what do you do again? Philosophy of Practice.
Change is coming. Will you be ready?
Athletes, remember: You are NOT your sport!
What are you willing to get fired over?
You have the Answers! But what are the Questions?
Train the Mind with an Attitude of Gratitude
Accept your thoughts, Change your behaviors
PREP: Create Routines, Create Consistency
"PREP" - Pre-Performance Routines
Mindfulness in Sport - Catch the Moment, Don't Get Caught in it
Showing Up Day in, Day out
You are the Grandmaster of your Life
Shift Your Focus, Grow Your Game
Athletic Identity: You are NOT your sport!
Now is the Moment to Be with the Moment
Celebrate the "Little Wins"
What Time is it? Team Time Woo!
https://www.hoopsminded.com/blog/categories/mental-performance
Influence of External Forces and the Rise of Nationalism

This topic discusses the foreign contribution to nationalism and decolonization in Africa. It was this external contribution that helped African countries gain their independence.

Role of the principle of self-determination in the development of nationalism and the struggle for independence
The principle of self-determination was among the Fourteen Points of US President Woodrow Wilson, on which the Versailles Treaty of 1919 was based. The principle states that all people have the right to choose the form of government under which they will live. The principle of self-determination advocated the restoration of sovereign rights and self-government to peoples who had been colonized. The following were the roles of the principle of self-determination in the development of nationalism and the struggle for independence:
1. It raised awareness: The principle of self-determination brought awareness to African nations by recognizing that colonialism was wrong and had no place in Africa. This is why these nations started movements to end colonialism.
2. It promoted unity: The principle of self-determination unified Africans so that they could fight colonialism. For example, political parties like the CPP (Ghana) and TANU (Tanganyika) and militant movements like the Mau Mau in Kenya were formed to end colonialism.
3. Commitment of the UN: The principle of self-determination was incorporated into the UN Charter. By doing so, the UN took responsibility for ensuring respect for the principle by challenging colonialism so that colonial subjects could attain self-governance.
4. It attracted the attention of the superpowers: The USA and the USSR endorsed the principle of self-determination. They influenced the decolonization of Africa in pursuit of their own interests.
5. It inspired Africans to demand participation in colonial government: Africans demanded constitutional reforms to allow or increase their representation in the legislative councils and their participation in colonial administration. This was a good foundation for nationalism and independence, as it gave them experience of and a desire for self-rule.
6. It strengthened Pan-Africanism: After the principle of self-determination was proclaimed, Pan-Africanism gained momentum. For example, the Manchester (1945) and Accra (1958) conferences demanded the immediate decolonization of Africa.

How some nationalists in Africa used the principle of self-determination to demand independence for their countries
1. Formation of political parties: Political parties like the CPP (Ghana), TANU (Tanganyika), KANU (Kenya) and many others were formed by African nationalists to mobilize the masses in the struggle for independence. The parties were an expression of national consciousness and a will for self-determination.
2. Demand for fair representation in the legislative councils: Africans demanded representation in the legislative councils equal to that of the Europeans. The aim was to defend their interests and rights.
3. Militant nationalism: The unwillingness of some colonial powers to grant independence forced nationalists in some colonies to turn to militancy, for example the Mau Mau movement in Kenya.
4. Approaching the UN: The UN lent a hand in decolonization through its Decolonization Committee by pressing colonial powers to grant independence to African countries.
5. Conferences and demonstrations: Nationalist leaders organized conferences and demonstrations.
For example, after the invasion of Ethiopia, West African students organized street marches in Lagos, Nigeria, to demonstrate their bitterness against imperialism.
6. Formation of pressure groups: These included welfare associations and religious movements that rose to fight for Africans' socio-political and economic rights. Their aim was to liquidate colonialism and set African people free to decide for themselves.

Role of Pan-African conferences in the development of nationalism and the struggle for independence in Africa
Pan-Africanism is a movement for the unity of people of African descent throughout the world in a revolutionary struggle for the total liberation of Africans from colonial rule and any kind of oppression. The first Pan-African conference was held in London (1900), the second in Paris (1919), the third in London (1921), the fourth in London and Lisbon (1923), the fifth in New York (1927), and the sixth in Manchester (1945). The following are the roles of the Pan-African conferences in the development of nationalism and the struggle for independence in Africa:
1. They strengthened the consciousness of black people throughout the world: Pan-Africanism awakened Africans to realize that they were being exploited and that now was the right time to fight.
2. They provided a forum: Pan-Africanism provided a platform for people to talk about their problems. For example, the Manchester congress proposed appropriate liberation strategies, urging Africans to first use peaceful means and, only if those failed, resort to militant strategies.
3. They united people: Pan-Africanism united Africans in the diaspora and those in Africa. They became one and worked together to solve their problems.
4. They created nationalist leaders: Those who attended the Manchester Pan-African congress in 1945, such as Nkrumah, Azikiwe and Kenyatta, came back to Africa energized to lead their countries to self-rule.
5. Formation of the Organisation of African Unity (OAU): Pan-Africanism was the forerunner of the OAU. Among the major aims of the OAU was to speed up decolonization in Africa. To this end, the OAU sponsored liberation movements in a number of countries such as Mozambique, Angola, Zimbabwe, Namibia and South Africa.
6. They restored Africans' dignity: They helped Africans see themselves as people who deserved respect like anyone else.

Impact of the Second World War on the development of nationalism and the struggle for independence
1. Decline of the European economies: The Second World War led to the collapse of the European economies. Factories, farms and infrastructure were destroyed. For reconstruction, the colonial powers intensified exploitation of the colonies. The result was an escalation of anti-colonial feelings in the colonies.
2. The ex-soldiers: Africans who participated in the Second World War gained new experience. They also found that white soldiers were weaker and more fearful than they had previously thought. On returning home, they started movements to overthrow colonialism, like the Mau Mau in Kenya under men like Dedan Kimathi and the FLN in Algeria under Ahmed Ben Bella.
3. Emergence of new superpowers: The Second World War made the USA and the USSR superpowers. They campaigned against colonialism, using their veto power, and they also supported African nationalism both morally and materially. For example, the USSR gave financial and military support to the liberation movements of many states such as Angola, Mozambique, Zimbabwe, Namibia and South Africa.
4. Formation of the UNO: The UNO was formed immediately after the Second World War to replace the League of Nations, which had failed to keep world peace. In respect of the right of self-determination, the UNO formed the Decolonization Committee and the Trusteeship Council, which were assigned the duty of pressing colonial powers to grant independence to their colonies.
5. Independence of Asian countries: The war led to the attainment of self-rule by Asian countries soon after its end: Indonesia in 1945, India and Pakistan in 1947, and Burma in 1948. This was because the war was fought directly in these countries and weakened their colonial masters militarily and economically. Their independence was a boost to the independence struggle in Africa.
6. The Bandung Conference and NAM: The Second World War resulted in the dangerous Cold War. To fight imperialism, the Afro-Asian conference was convened at Bandung in 1955, and in 1961 the Non-Aligned Movement (NAM) was formed. The two forged Third World solidarity against colonialism and neo-colonialism.

Contribution of the economic decline of European capitalism to the decolonization of Africa
1. Excessive exploitation of the colonies: After the collapse of the European economies, the colonial powers began to exploit Africans to recover from war losses. For example, land alienation increased and intensive labour exploitation was introduced. This aroused grievances among Africans, who began to fight for their independence.
2. Rise of the USA as the leading capitalist power: After the USA became a superpower, Europe became dependent on the USA for financial aid. The USA used this opportunity to make the decolonization of Africa a condition for the European nations.
3. Economic weakness: The economic crisis affected the colonial powers both at home and in their colonies. They therefore decided to decolonize Africa so that they could rebuild their collapsed home economies; they were no longer able to run their colonies.
4. Rise of anti-colonial groups in Europe: Anti-colonial feelings emerged among politicians, the bourgeoisie, socialists and the public, who now saw colonies as a burden, since some were not rich in resources and metropolitan taxpayers' money was spent on running them.
5. Military weakness: The colonial powers directly involved in the war, Britain and France, were militarily weak after the war. Their military weakness emboldened Africans to intensify liberation campaigns like the Mau Mau in Kenya and the FLN in Algeria.
6. Change in ideology: The heavy economic burden faced by the colonial powers Britain and France compelled them to shift from colonial policy in favour of neo-colonialism.

Role of the USSR in the decolonization of Africa and the nationalist struggle for independence
1. Provision of military support, such as guns and bombs, to African fighters.
2. Condemnation of colonialism on the world stage, as in the UNO, where the USSR used her veto power against colonialism.
3. Training of nationalist fighters.
4. Provision of moral support: they encouraged the fighters to keep fighting and never give up.

Role of the USA in the decolonization of Africa and the nationalist struggle for independence
1. The USA condemned colonialism on the world stage in international forums like the UNO.
2. Provision of moral support.
3. Provision of material support; for example, she supported Jonas Savimbi in Angola.
4. The use of the Marshall Plan: this was an economic recovery plan introduced in 1947 by the USA for the European nations affected by the Second World War.
One of the conditions for European countries to get loans was the decolonization of African and Asian countries. This pressure made them agree to prepare their colonies for independence.

Contribution of the independence of India and Burma to the development of African nationalism in the struggle for independence
India got her independence in 1947 and Burma in 1948. The independence of these Asian countries had a great impact on the struggle for self-rule in Africa, as explained below:
1. Indian independence was a great lesson and very inspirational to African nationalist fighters: Mahatma Gandhi believed in non-violence and taught it to the Indians. Through this method they were able to drive the British out of their land. Many nationalists, like Kenneth Kaunda of Zambia and Julius Nyerere of Tanganyika, copied the non-violent method and gained their independence early.
2. Moral support: For example, Gandhi paid inspirational visits to the fighters in Africa. He sat with them and gave speeches that enlightened many of them. This was moral support, and Africans came to believe that if independence was possible for the Indians, then it was no doubt possible for the oppressed Africans.
3. African nationalists learned from the independence of India that unity was decisive: they had to bring their people together and face the British as one.
4. Asian independence weakened European imperialism: Asian countries like India and Burma were economically valuable to the European powers as sources of markets, cheap labour and raw materials. Their independence was a blow to the European economies.
5. Pressure through the UNO: The independence of Asian countries pressed the UNO to push colonial powers to grant independence to their colonies. Using the UN platform, they condemned colonialism as an abuse of the fundamental rights of nations.
6. Formation of NAM: The independence of Asian countries strengthened Afro-Asian solidarity, leading to the Bandung Conference of 1955, which laid the foundation for the formation of NAM in 1961. NAM condemned imperialism, laid down strategies and solicited support to fight colonialism.
https://www.mwalimumakoba.co.tz/2020/02/influence-of-external-forces-and-rise.html