I have worked with children and families for over 20 years as a counselor and teacher, helping individuals draw out their excellence. I have worked with a variety of individuals, couples, and families ages 5 and above from a variety of backgrounds. I use person-centered, creative and evidence-based interventions to help individuals use their strengths and passions to overcome their struggles. My services incorporate optimism, appropriate humor, transparency, and acceptance. I have a PhD in Human Services with a focus in Counseling Studies from Capella University. I have a Master's in Marriage and Family Therapy from Liberty University, and I am comfortable doing Christian Counseling. I can speak and understand French, Spanish, and sign language. I am working on becoming a certified play therapist, getting a license in professional counseling, and getting a license in marriage and family therapy. I enjoy quilting, crochet, knitting, yoga, dancing, Pilates, animals, crafts, board games, and going to the beach. In network with: - Anthem - BCBS - Highmark - Humana - Magellan - Optum - UBH - United Healthcare In process of being in network with: - Aetna - Beacon Health Options - Cigna - Medicare - Multiplan - Optima - Tricare Self Pay Rate: $65 Brittany Morris, MSW, LCSW I earned my Master of Social Work at Virginia Commonwealth University in Richmond, VA in 2014, but I have been working in the mental health field since 2009. During these ten years, I have worked in inpatient, intensive outpatient, in-home, and school settings, and I am looking forward to transitioning into an outpatient setting. I prefer to work individually with children, adolescents and adults, as well as with groups. I also enjoy working with non-traditional, culturally diverse and military families. I have special experience working with individuals with chronic illness, eating disorders (anorexia, bulimia, orthorexia, binge eating disorder, diabulimia) and disordered eating, trauma survivors, and LGBTQ adolescents. I also have experience working with individuals with depression, anxiety, grief and loss, and autism, and with general life coaching. My goal as a clinician is to create a collaborative environment to help clients be successful. I work to provide person-centered treatment, allowing for personalized treatment to make YOU the most successful version of yourself. I prefer to utilize a multi-disciplinary approach including but not limited to CBT, DBT, and play and art therapy techniques to address issues in the way that feels best for each person. I work hard to create a non-judgmental, warm and non-traditional therapy environment in hopes of breaking down the stigma of mental health and creating overall healthier communities for us to be a part of. In network with: - Anthem - Beacon Health Options - Blue Cross Blue Shield - Cigna - Highmark - Humana - Humana-Military - Magellan - Medicare - Tricare In process of being in network with: - Aetna - Multiplan - Optima - UBH Self Pay Rate: $65 Char Bentley, LCSW I am a licensed clinical social worker (LCSW) and a Board-Certified Diplomate (BCD) in Clinical Social Work with over 10 years of clinical experience. I received my Bachelor’s degree in Psychology from the University of Pittsburgh and earned my Master’s degree in Social Work from the University of Maryland. I began my professional career by providing counseling services at a non-profit therapeutic foster care program. I later worked with adults, adolescents, and children at a community-based mental health program in Maryland.
For six years, I served as an officer and clinician in the United States Navy where I continued to provide counseling in various settings (Department of Defense schools, large military treatment facility, and a remote, outpatient clinic overseas). I am eager to continue my pursuit of supporting children, adolescents, and adults from ages 4-65 years old as they overcome obstacles, build resiliency, and achieve peace. Most of my professional experience has focused on assisting those who struggle with the aftermath of trauma, but I also have extensive experience working with anxiety disorders, mood disorders, adjustment disorders, grief and loss, and phase of life problems. I believe mental and emotional well-being is essential to one’s overall health and thus participating in counseling can be an integral part of achieving a healthy and fulfilling life. In network with: - Aetna - Anthem - Blue Cross Blue Shield - Highmark - Humana - Magellan - Medicare - United Dr. Amanda N. Trent, Psy.D Dr. Amanda N. Trent is excited to be part of the Thriveworks team after moving back to the Hampton Roads area. Originally from Michigan, Dr. Trent moved to Virginia Beach in 2005 to complete her graduate school training at Regent University where she received both her Master’s and Doctorate degree in Clinical Psychology. Since being licensed as a Clinical Psychologist, she has worked in private practice with children, teens, and adults, providing individual therapy, psychological testing, neurofeedback, and group therapy in various places around the state. She also has experience as a Clinical Director of a community-based mental health agency and a rape crisis center. One of her specialties is using principles of Dialectical Behavior Therapy (DBT) in her work with clients. She also specializes in spiritually-integrated therapy which helps clients develop their spiritual resources and use their faith as a way to cope and experience post-traumatic growth. Dr. Trent has also been trained in Trauma Focused Cognitive Behavioral Therapy, EMDR Level 1 and 2, Sexual Identity Therapy Level 1 by M. Yarhouse, and infralow neurofeedback training. She has received trauma training through Green Cross Academy of Traumatology and is a Compassion Fatigue Educator and Therapist. Dr. Trent is passionate about providing excellent clinical care to anyone who desires to build a life worth living, especially those with a trauma history. In network with: - Aetna - Anthem - Blue Cross Blue Shield - Beacon - Cigna - Highmark - Humana - Magellan - Medicare - Multiplan - Tricare Self Pay Rate: $99 Patricia Jones, Psy.D. Dr. Jones previously practiced as a Licensed Psychologist in Maryland prior to relocating to the Hampton Roads area in Virginia during the Spring of 2017. She is currently a Licensed Clinical Psychologist in the State of Virginia. She finds it to be very rewarding to assist adults with addressing mental illness and daily stresses of life. Dr. Jones has a passion to help clients become successful in effectively coping with various mental illnesses for diagnoses on a spectrum to include addictions, anxiety, depression, grief and loss, and post traumatic stress disorder, among others. She assists individuals who struggle with substance abuse and gambling addictions, as well as, people with mood disorders, individuals encountering difficulties in daily living, and those who have experienced traumatic events. Dr. Jones primarily works with clients from a Cognitive Behavioral Therapy perspective. 
She works with individuals to assist them in recognizing their negative thoughts that are influential in producing negative emotions, which in turn, involves responding with a negative pattern of behavior. Dr. Jones also integrates Motivational Interviewing, Solution Focused Therapy, and Structural Family Therapy principles to collaboratively work with clients to help them to overcome dysfunctional thinking, emotions, and behaviors that prevent them from living productive and satisfying lives. Dr. Jones’ goal is to assist clients with becoming “their own therapists.” Her clients are able to achieve this by utilizing strategies and solutions that were learned in therapy to solve other problems that may arise. Additionally, she works with clients to aid them in viewing themselves from a holistic perspective by considering aspects of their Physical, Psychological, Emotional, Spiritual, Cultural, Social, and Economic selves. Dr. Jones obtained a Doctorate of Psychology Degree in Clinical Psychology in 2010 from Argosy University, The American School of Professional Psychology in Arlington, Virginia. She has since worked with individuals and conducted group therapy. Helps with: - Addiction - Alcohol Abuse - Anger Management - Anxiety - Coping Skills - Depressed Mood - Drug Abuse - Fear/Phobia - Gambling Problems - Goal Setting - Grief & Loss - Performance Anxiety - Relationships - Self Esteem - Sleep/Insomnia - Social Anxiety - Stress - Substance Abuse - Test Anxiety - Trauma/PTSD - Women’s Issues In network with: - Cigna In process of being in network with: - Aetna - Anthem BCBS - Beacon Health Options - Humana - Magellan - Multiplan - Optima - Tricare - UBH Self Pay Rate: $65 Porcher Jackson, LCSW Greetings my name is Porcher Jackson, LCSW and I’m a Licensed Clinical Social Worker currently licensed in the following states Virginia and Florida. I have experience in providing individual, couple, family, and group psychotherapy services in an array of settings including public, private, military, telephonic, and video-conference settings. I operate from a person-centered concept in which I allow clients to take an active role in the development and implementation of their treatment plan goals and objectives in order to achieve desired outcomes for treatment. In network with: - Aetna - Anthem - Beacon - Cigna - Humana - Magellan - Medicare - Multiplan - Optima - Tricare Self Pay Rate: $65 Stephanie Gore, MSW, LCSW Stephanie Gore, LCSW, is a licensed clinical social worker who treats children, adolescents, and adults from diverse backgrounds using a variety of individual, family, and group treatment modalities. Treatment is tailored to each individual in accordance with his/her strengths, interests, and goals for treatment. Ms. Gore has provided clinical services for over 15 years. Her varied background includes work in foster care and adoption settings, community service boards, juvenile/adult court systems, schools and private practice. Ms. Gore utilizes an eclectic approach to therapy, which includes cognitive-behavioral therapy, solution-focused therapy, empowerment therapy, and strengths-based perspectives. Issues she helps clients address include depression, anxiety disorders, trauma, anger management, child behavior and social skills issues, relationship issues, life transitions, grief and loss, school-related problems and workplace stress. Ms. Gore believes that the knowledge of one’s life helps guide them to the answers that bring about necessary changes. 
If a person is able to identify when a trauma or major life event occurred, they can draw on their strengths to move past it and realize their potential to live a more fulfilling life. Ms. Gore provides a safe environment of dignity, respect, compassion and support, while also giving a realistic view so that individuals can identify and overcome obstacles and flourish in life. In network with: - Anthem - BCBS - Cigna - Highmark In process of being in network with: - Aetna - Beacon Health Options - Humana - Magellan - Multiplan - Optima - Tricare - UBH Self Pay Rate: $65 Tiffany Crayton, Ph.D., LPC-S Dr. Tiffany M. Crayton deeply values creating a space that awakens one’s bravery: a space that is free of shame and helps clients discover their true, authentic selves. Dr. Crayton invites you to show up as you are, with your complexities and all the things that make you who you are. Dr. Crayton works from a non-pathological approach, which means we will look past the diagnosis and work on the root causes of the issues creating stress and hopelessness in one’s life. Dr. Crayton believes most circumstances are solvable and survivable. Dr. Crayton is a strong advocate of connection and of being present in the journey of her clients. She believes humor and authenticity are the elements of a strong relationship; these are tools that are paramount in developing a therapeutic relationship. Dr. Crayton earned a Ph.D. in Counseling Education and Supervision with a concentration in Trauma and Crisis from Walden University. She earned her M.Ed. in Guidance and Counseling from the University of Central Oklahoma. She is a Licensed Professional Counselor (LPC) in the states of Virginia, Texas, and Oklahoma. She is also a National Certified Counselor and an approved LPC Supervisor. In addition to serving as a secondary counselor for over 16 years, she has worked with children, adolescents, couples and families on various life-impacting issues as a mental health professional. Dr. Crayton specializes in working with those who have experienced trauma, intimate partner violence, sexual abuse, anxiety, depression, LGBTQ+ experiences and the value of self. Dr. Crayton is committed to being trauma-informed and healing-centered in her work and honors the trust a client places in her to guide them on their journey. Together we can explore ways to cope with the stressors of life, past or present, and help you move in a positive direction. Dr. Crayton is a strong believer in the inherent worth and value we hold as human beings simply by embracing who we are. I see you, and I have a place for you if you are ready for me to accompany you on your journey to peace, empowerment and the rediscovery of your voice. I look forward to meeting you! In network with: - Anthem - BCBS - Beacon Health Options - Cigna - Highmark - Humana - Medicare In process of being in network with: - Aetna - Magellan - Multiplan - Optima - Tricare - UBH Self Pay Rate: $99 Valerie Proctor, MSW, LICSW In network with: - Aetna - Anthem - Beacon Health Options - Cigna - Humana - Magellan - Medicare - Multiplan - Optima - Tricare Self Pay Rate: $65 James Baughman, LCSW James Baughman is a native of Virginia. Born in 1970, he remained a Virginia resident until 1994. From 1994 until 2002 he served in the United States Army as a mechanic. After completing military service in 2002, he remained in Oklahoma, where he has lived for the last 16 years (2002-2018), raising a family and furthering his college education. He studied social work at the University of Oklahoma.
He focused his studies on clinical social work practice with diverse populations living with severe mental illness. He specializes in working with adults with mood disorders, including bipolar and depressive disorders. He uses a Cognitive Behavioral approach to assist people in modifying cognitive schemas/constructs, and he draws on the therapeutic relationship to help clients find catharsis in solving everyday life problems. As a Licensed Clinical Social Worker, he has spent most of his time in clinical practice working with adults with severe mental illness and/or co-occurring disorders. He has also provided crisis services to adults. His passion is helping adults experiencing mental health problems achieve self-sufficiency, resilience, and optimal health and wellness. He provides individual therapy, group therapy, and couples therapy. He especially enjoys working with individuals experiencing:
https://thriveworks.com/chesapeake-counseling/counselors-and-life-coaches/
A perspective on quantum mechanics calculations in ADMET predictions. Understanding the molecular basis of drug action has been an important objective for pharmaceutical scientists. With the increasing speed of computers and the implementation of quantum chemistry methodologies, pharmacodynamic and pharmacokinetic problems have become more computationally tractable. Historically, the former has been the focus of drug design, but within the last two decades efforts to understand the latter have increased. It takes about fifteen years and over $1 billion for a drug to go from laboratory hit, through lead optimization, to final approval by the U.S. Food and Drug Administration. While the costs have increased substantially, the overall clinical success rate for a compound to emerge from clinical trials is approximately 10%. Most of the attrition can be traced to ADMET (absorption, distribution, metabolism, excretion, and toxicity) problems, which is a powerful impetus to study these issues at an earlier stage in drug discovery. Quantum mechanics offers pharmaceutical scientists the opportunity to investigate pharmacokinetic problems at the molecular level prior to laboratory preparation and testing. This review will provide a perspective on the use of quantum mechanics alone, or coupled with other classical methods, in the pharmacokinetic phase of drug discovery. A brief overview of the essential features of the theory will be provided, and a few carefully selected examples will be given to highlight the computational methods.
In Schenck v. United States, the Supreme Court had to decide whether Schenck was protected by the First Amendment in speaking out against the draft during World War I. The Supreme Court ruled against Schenck, arguing that he was not protected because A. his actions proved that he was a spy for Germany. B. his actions were harmful to the Allies' military victory. C. his actions risked the welfare of the nation during wartime. D. his actions were seen as fraud against the wartime government. User: In Schenck v. United States, the Supreme Court had to decide whether Schenck was protected by the First Amendment in speaking out against the draft during World War I. The Supreme Court ruled against Schenck, arguing that he was not protected because Weegy: C. his actions risked the welfare of the nation during wartime. supernike|Points 610| User: Based on the table, what is the purpose of congressional committees? A. to help Congress perform its function of passing laws B. to help Congress vote on whether laws are constitutional C. to help Congress monitor rulings in the federal court system D. to help Congress set up specialized departments for the president Weegy: Please provide me the link of the table you are referring to regarding your question so that I can further assist you. Thanks.Expert answered|alfred123|Points 2308| Weegy: C. slavery . User: The national government has _____________ powers that are spelled out in the Constitution. A. delegated B. reserved C. concurrent D. denied Weegy: The national government has A. delegated powers that are spelled out in the Constitution. (More) Weegy: B. by issuing executive orders in an attempt to control the U.S. economy User: Why did the Founding Fathers create two houses in the legislative branch? A. to maintain equal representation and power for the states B. to give the central government more power over the states C. to keep the president from having too much power over the states D. to keep power equal between the judicial system and the states Weegy: A. to maintain equal representation and power for the states is the correct answer. (More) Weegy: What is the main purpose of the federal legislative branch? Answer: B. to initiate and approve federal laws User: The Constitution allows each branch of the government checks on the power of the other two branches. Which check does the executive branch have on the judicial branch? A. the power to remove a justice of the Supreme Court B. the power to nominate a justice to the Supreme Court C. the power to increase the size of the Supreme Court D. the power to overrule decisions of the Supreme Court Weegy: B. The power to nominate a justice to the Supreme Court User: One of the many powers given to Congress by the U.S. Constitution is the power to A. lay and collect taxes. B. nominate federal judges. C. pardon criminals for federal crimes. D. appoint heads of executive departments. Weegy: One of the many powers given to Congress by the U.S. Constitution is the power to A. lay and collect taxes. (More) Weegy: A. He rejected a third term as president. User: Which power does the Constitution give the executive branch regarding participation in congressional votes? A. The secretary of state votes for ambassadors. B. The president votes for Supreme Court justices. C. The vice president votes to break a tie in the Senate. D. The secretary of defense votes for war in the House of Representatives. Weegy: C. The vice president votes to break a tie in the Senate. 
User: According to this passage from the Constitution, who has the authority to create lower courts? “The judicial Power of the United States shall be vested in one Supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish.” –U.S. Constitution, Article III, Section 1 A. Supreme Court B. executive branch C. attorney general D. Congress Weegy: The CONGRESS has the authority to create lower courts Thank you for asking Weegy! :) (More) Weegy: The United States Postal Service is an example of D. an independent agency of the Executive Branch. User: All of the following are roles of the President's Cabinet EXCEPT? A. advise the president on any matter about which he seeks information B. Cabinet secretaries are also oversee their departments C. declares actions of the president unconstitutional D. makes sure that the president's policies are carried out Weegy: C. declares actions of the president unconstitutional User: Which Cabinet position was added after 1950? A. Secretary of Agriculture B. Secretary of Health and Human Services C. Secretary of Commerce D. Secretary of Labor Weegy: Which Cabinet position was added after 1950? B. Secretary of Health and Human Services User: Under which Executive Office of the president do the chief of staff and press secretary reside? A. White House Office B. Office of Management and Budget C. Council of Economic Advisers D. Office of Administration Weegy: Under which Executive Office of the president do the chief of staff and press secretary reside? A. [ White House Office ] (More)
Doug’s review published on Letterboxd: Wow. Just wow. David Ehrlich was right. I spent the whole ride back from the theatre trying to catch my breath from how much this floored me. I'm not even sure if I can put it into words, but I'll try. Carol is one of the most utterly fascinating, captivating, polarizing, earnest, powerful, heartbreaking, and fantastic films of the decade, and certainly the best film of 2015. I've always admired Todd Haynes, but he's really outdone himself here. I was captured from the very first frame, and everything from that point on was absolute perfection. Cate Blanchett and Rooney Mara both give the performances of their lives as Carol and Therese, and deserve all the praise they can get. All that aside, the best thing about Carol is how meticulous the whole thing is, and how heavily that plays into its storytelling. The environment is perfectly captured with breathtaking cinematography, production/costume design executed to the very last detail, and a score that plays perfectly into each moment. There's a lot of attention to detail here, perhaps more than can be unpacked in a single viewing. In a film like this, a single glance or facial expression could speak louder than any monologue or speech. The characters and dialogue may play it subtle, but their meaning and impact are explosive. It's crazy to think that if I had watched this film five years ago, I probably would have hated it. Thank god my critical thinking skills have awoken over the years so I could properly appreciate something as brilliant as this. Carol floored me in every way possible from start to finish. Some may gripe about how slowly it moves, but I could've kept watching for hours after it ended. And that ending? Oh my gosh... It's films like these that are the reason why I love cinema. Films like these that put you in a trance and take you away. What can I say, I feel flung out of space.
https://embed.letterboxd.com/urbannerd98/film/carol-2015/
August is Women in Translation month and we wanted to celebrate by sharing a selection of brilliant authors from all over the world whose work we've published. Enjoy! "Who is American? How do we decide, and who decides?" an interview with E.M. Tran SFM: Why are you drawn to the genre of nonfiction? What about its history or form speaks to you? What compels you to write about truth, history, and your own experience? I would consider myself a fiction writer in general, but find a lot of my stories in the seeds of truth, altering experiences I’ve had or crafting characters from people I’ve known. However, even though you’re free to imaginatively invent narrative, there’s a lot about fiction that is conservatively tied to convention and form. There is experimental fiction out there, but often it’s difficult to write in the face of overbearing historical precedent and genre conventions. So, entering into the genre of creative nonfiction has been very liberating in the vastness of its boundaries. I’m bound in other ways to “truth,” but what I can do with that truth, how I position it, and how I convey it are so fluid. Because of that formal freedom, writing about my own experiences feels much more authentic, and I am more open as a result with the truth that I know. It also allows me to look outside of myself and explore other possible truths, which I think ultimately can only enrich my fiction rather than remain separate from it. SFM: Your winning essay, “Miss Saigon,” moves between the story of your mother’s escape from Saigon and your experiences of race and identity. The essay braids your mother’s story with your own, collaging past with present, memory with lived experience, and readers are driven through the piece by your use of form as much as your use of narrative. Why were you drawn to this form? How does the form work with, against, or perhaps because of the subject matter? “Miss Saigon” was as much about my own experience as it was about my mother’s, and also, as much about larger experiences of trauma for many people. Hurricane Katrina effected thousands and continues to leave a visible mark on the city of New Orleans and the Gulf Coast despite it having been more than a decade since its landfall. And the Vietnam War, too, has displaced generations of Vietnamese people, the effects of that war rippling into the future in ways that are innocuous and sometimes invisible. Memories around huge traumas like these are so often elided, forgotten, revised, or representative of the oppressor, and as a victim of a trauma, I am also guilty of engaging in this act of false memory. So are all of us. The essay is really an exploration of how trauma interacts with memory and identity, and I wanted the form to reflect that. We remember in bits and pieces, the different parts of our selves materializing around formative events in ways that only mingle and touch when we can stand back and view in hindsight. It was also impossible to talk about my own experience with Katrina and my mother’s experience with the Vietnam War and Katrina as separate narratives that occur chronologically in time, especially considering our own subjectivities as Vietnamese American women in a predominantly white culture. All of these memories and issues of identity enact themselves simultaneously, so breaking up the narrative in parts where the past and present are told outside of chronological time was necessary to the telling of the story. 
SMF: In his interview with Prairie Schooner, guest judge Kiese Laymon said, “I love essays that imagine a reader different than the reader we are taught to imagine in so-called literary essays. If you imagine a different reader, you produce a different piece with different rhythms, conclusions and questions.” Who is your imagined reader for “Miss Saigon” or your larger body of work? How does this reader impact your vision and craft? For “Miss Saigon” in particular, I imagined my mother as the reader. It forced me to render her in complex terms, to be always aware of the dangers of two-dimensional characterization. In general, I often imagine my family reading my work. As a Vietnamese American writer, naturally I write a lot about the Vietnamese American experience. I want it to feel true to any reader, but especially to the people I am writing about. Sometimes the burden of culture can be taxing—like, am I morally obligated to write about Asian people because I am Asian? Am I unable to write different characters? Is it my onus to bear to expose to a mostly white readership that Asian American stories are important and exist? At the end of the day, I write what I write because I find it compelling and important, and I imagine an audience that is both insider and outsider to the subject matter. This vision of readership pushes me to trust my reader in an immense way in the delivery of information and execution of narrative and character. SFM: Our guest judge also said, “In essay writing, I want to sometimes answer really old questions with different forms in the hopes of getting different answers. The readers of essays are so much closer to the process, too.” Do you see “Miss Saigon” answering old questions? What ongoing conversations is it entering or speaking back against? What “different answers” are you trying to get at in this essay or your other nonfiction? “Miss Saigon” attempts to answer that question, which I also feel is so relevant in this current historical moment, of who is American? How do we decide, and who decides? I think so often we are bombarded with images of whiteness and inculcated with a fear of the immigrant because the immigrant does not conform to those images of whiteness. My mother is very beautiful, but her beauty has no value in a culture that only praises a particular type of beauty. Those larger questions about what our identities are and how we deal with trauma—the answers to those questions are so dependent on place and our environments. SFM: Finally, what projects are you currently working on? What can we look forward to reading? I have been working for the last few years on a novel about sorority women in the south. It is an examination of race and gender when thrown into an environment where standards are often inflexible. Sororities in the south are so rooted in traditions of regional whiteness, and when those traditions are disrupted or challenged, it makes for a very interesting conflict. I also have a short story forthcoming this December in the Iron Horse Literary Review about a young girl and her father building a boat in order to survive a flood in New Orleans. Sarah Fawn Montgomery holds a PhD in creative writing from the University of Nebraska-Lincoln, where she teaches and works as Prairie Schooner’s Nonfiction Assistant Editor. 
She is the author of three poetry chapbooks, Regenerate: Poems from Mad Women (Dancing Girl Press 2017), Leaving Tracks: A Prairie Guide (Finishing Line Press 2016), and The Astronaut Checks His Watch (Finishing Line Press 2014). Her work has been listed as notable several times in Best American Essays, and her poetry and prose have appeared in various magazines including Crab Orchard Review, DIAGRAM, Fugue, The Los Angeles Review, Natural Bridge, Nimrod, North Dakota Quarterly, Passages North, The Pinch, Puerto del Sol, The Rumpus, Southeast Review, Terrain, Zone 3 and others.
https://prairieschooner.unl.edu/blog/who-american-how-do-we-decide-and-who-decides-interview-em-tran
Background: Congenital polydactyly is a common deformity of the limbs, and excision of the extra digit has shown good results in the vast majority of patients. However, this treatment approach may not be suitable for all cases of polydactyly. For rare types of complex polydactyly, some complex surgical procedures are required to achieve satisfactory correction. The aim of this study was to report a rare type of polydactyly and to describe a novel method, a modified on-top plasty technique, for treating it. Results: We performed the first osteotomy at the neck of the metatarsal bone, “grafting” the distal polydactyl digit with the normal axis onto the 5th metatarsal bone. Excision of the duplicated toe was accompanied by simultaneous restoration of the 5th toe axis and a decrease in the width of the forefoot. Finally, both appearance and function could be improved with this surgical approach. Most importantly, the complete osteoarticular structure and weight-bearing structure of the foot were well reconstructed. Conclusions: The modified on-top plasty technique is an effective method for treating this rare type of polydactyly, and it is hoped that this method can provide important assistance in making treatment decisions in similar situations in the future.
https://www.researchsquare.com/article/rs-1631711/v1
The Institute has members who are posted or residing all over India. When they visit Delhi from different parts of the country, the Institute can provide them accommodation within the Institute premises. The Institute also has a few Family Suites for long occupancy, up to a maximum period of six months, mainly for members who come to Delhi on posting or for medical treatment of long duration. Other miscellaneous charges will be billed extra as applicable. CANCELLATION AFTER BOOKING A RESIDENTIAL SUITE:
- Before 04 weeks: 10% of Total Booking Charges
- 04 weeks to 01 week: 20% of Total Booking Charges
- 01 week till 24 hrs: 30% of Total Booking Charges
- Within 24 hrs: 50% of Total Booking Charges
- No occupancy despite booking: 100% of Total Booking Charges
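To make the tiered schedule above concrete, here is a minimal sketch in Python; the function name, the use of hours of notice as the input, and the example amounts are illustrative choices of this write-up, not taken from the Institute's page:

```python
def cancellation_charge(total_booking, hours_notice, no_show=False):
    """Cancellation charge per the tiered schedule above (illustrative sketch)."""
    if no_show:                       # no occupancy despite booking
        rate = 1.00
    elif hours_notice >= 4 * 7 * 24:  # before 04 weeks
        rate = 0.10
    elif hours_notice >= 7 * 24:      # 04 weeks to 01 week
        rate = 0.20
    elif hours_notice >= 24:          # 01 week till 24 hrs
        rate = 0.30
    else:                             # within 24 hrs
        rate = 0.50
    return rate * total_booking

# Example: cancelling three days ahead forfeits 30% of the booking charges.
print(cancellation_charge(10_000, hours_notice=3 * 24))  # -> 3000.0
```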
https://dsoidelhi.org/residential_suite.php
Human physiology is the study of the normal function of body organs and tissues. The organ systems that health care professionals study during their training include the skeletal, muscle, skin, nervous, cardiovascular, respiratory, digestive, renal, and endocrine systems, along with the roles these systems play. A basic understanding of how the normal body maintains a stable internal environment and efficient organ system functioning is important for recognizing the physical stresses that disrupt organ system stability, such as acute and chronic medical problems, diseases, and infections. It is also important for all health professionals to know the normal human body functions and processes that must adjust to physical variants and environmental stressors or imbalances affecting human health.
https://cnazone.com/Physiology-Ceu
All relevant data are within the paper and its Supporting Information files. Introduction {#sec001} ============ Perception of environmental and intracellular cues is an essential feature of life. Signaling pathways enable cells to regulate genetic and biochemical programs for adaptation and survival. Among the most important strategies that bacteria employ for performing those tasks are the two-component systems (TCSs). They comprise a histidine kinase (HK) that autophosphorylates upon perception of a stimulus and then transfers the phosphoryl group to a cognate response regulator (RR) \[[@pone.0194486.ref001]\]. The phosphorylated RR is activated to perform output functions such as modulation of gene expression, interaction with partner proteins, etc. \[[@pone.0194486.ref001]\]. *Caulobacter crescentus* is a gram-negative bacterium that grows in dilute aquatic environments and is a member of the alpha-subdivision of proteobacteria. Much attention has been given to the study of *C*. *crescentus* signaling pathways to describe how they control cellular development and cell-cycle progression \[[@pone.0194486.ref002]\], and also to understand how this oligothopic bacterium is able to display nutritional versatility and to adapt to nutrient-poor environments. For example, a system-level investigation of TCSs showed that at least 39 of the 106 two-component genes are required for cell cycle progression, growth, or morphogenesis \[[@pone.0194486.ref003]\]. Among them, the gene that codes for the RR NtrX (CC_1743) was considered conditionally essential because a mutant strain with the deleted gene could not be obtained in rich medium (PYE) but the deletion procedure performed on minimal medium (M2G) yielded a stable deletion strain \[[@pone.0194486.ref003]\]. Further characterization of this mutant indicated that it has a growth deficiency and fitness disadvantage in phosphate-replete minimal medium (M2G), but this difference with respect to the wild-type strain is not manifested in phosphate-limited minimal medium (M5G) \[[@pone.0194486.ref004]\]. Although these observations suggested that NtrX might be necessary for responding to a signal or metabolite present in the M2G medium, such signal was not identified and the role of NtrX in *C*. *crescentus* biology remains elusive. NtrX forms a TCS with its cognate HK NtrY, which is predicted to be a membrane protein with a periplasmic domain and intracellular HAMP, PAS and HK domains. The NtrY/X pathway has been extensively studied in the pathogen *Brucella abortus*, in which it has been reported that it participates in sensing low oxygen tension and in the regulation of the expression of denitrification enzymes and high-oxygen affinity cytochrome oxidases \[[@pone.0194486.ref005],[@pone.0194486.ref006]\]. This TCS is also present in other microorganisms where it has been involved in a variety of functions that includes: nitrogen fixation and metabolism in *Azorhizobium caulinodans* \[[@pone.0194486.ref007]\], *Rhodobacter capsulatus* \[[@pone.0194486.ref008]\] and *Herbaspirillum seropedicae* \[[@pone.0194486.ref009]\]; regulation of proline and glutamine metabolism in *Ehrlichia chaffeensis* \[[@pone.0194486.ref010]\]; expression of respiratory enzymes in *Neisseria gonorrhoeae* \[[@pone.0194486.ref011]\]; and succinoglycan production, motility, and symbiotic nodulation in *Sinorhizobium meliloti* \[[@pone.0194486.ref012],[@pone.0194486.ref013]\]. 
In this article we report that NtrX expression is induced by 10 mM phosphate and that acidic pH leads to NtrX phosphorylation. We also show that this signal is physiologically relevant since *C*. *crescentus* produces the acidification of the M2G medium upon entry into stationary phase, causing NtrX phosphorylation at this stage of the growth curve. Besides, we demonstrate that *ntrX* deletion produces a decreased viability at stationary phase and a reduced resistance to acidic stress. Finally, we prove that NtrX is also phosphorylated by acidic pH in *B*. *abortus*, pointing out to a potentially conserved role across the alphaproteobacteria class. Materials and methods {#sec002} ===================== Bacterial strains and culture conditions {#sec003} ---------------------------------------- *C*. *crescentus* cells were grown at 30°C in M2G (10 mM phosphate, glucose as carbon source), M5G (50 μM phosphate, glucose as carbon source), M2X (10 mM phosphate, xylose as carbon source) or peptone yeast extract (PYE) media \[[@pone.0194486.ref014]\] supplemented when necessary with nalidixic acid 10 μg/ml, tetracycline 2 μg/ml or kanamycin 5 μg/ml (liquid) or 25 μg/ml (solid). Cultures reached logarithmic phase when their OD~600~ was 0.2--0.3, while stationary phase was defined by an OD~600~ of 1.2 or higher. When required, the pH of the liquid media was adjusted using HCl, unless otherwise indicated. *C*. *crescentus* strains CB15N and Δ*ntrX* were generously donated by Laub MT, Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, USA. *B*. *abortus* cells were grown at 37°C in minimal medium \[[@pone.0194486.ref015]\] or tryptose agar (TA) (DIFCO), supplemented when appropriate with nalidixic acid 10 μg/ml and/or kanamycin 25 μg/ml. The cultures reached logarithmic phase at OD~600~ 0.2--0.4 and stationary phase at OD~600~ \> 1.2. When appropriate, the pH of the minimal medium was adjusted to different values with HCl. *E*. *coli* strains were grown at 37°C in LB supplemented with kanamycin (50 μg/ml). Construction of CC_NtrX~myc~ strain {#sec004} ----------------------------------- To construct a *C*. *crescentus* strain with a chromosomally myc-tagged NtrX protein (CC_NtrX~myc~) we amplified *ntrX* from *C*. *crescentus* CB15N genomic DNA with primers PstI-ntrX~ff~ and ntrX-Myc-PstI~rev~ (`5’-AACTGCAGATGAGCGCCGACGTTCTTGTG-3’` and `5’-TTCTGCAGTTACAGATCTTCTTCCGAGATCAGCTTCTGTTCCTCTTCCTCATCGCCCCGAG-3’`, respectively). Then, the PCR product was digested with PstI and ligated into the pNPTS138 plasmid. The resulting vector was transformed into *E*. *coli* S17-1 and transferred to *C*. *crescentus* CB15N by conjugation. Homologous recombination led to the integration of that plasmid, resulting in *ntrX_myc* in the locus that was previously occupied by the endogenous gene (therefore, under the same transcriptional regulation) and the wild-type endogenous copy of *ntrX* coded now after the integrated pNPTS138 backbone. The integration of the plasmid was selected by kanamycin resistance and verified by PCR. Construction of the complemented strain CC_ΔntrX-NtrX~myc~ {#sec005} ---------------------------------------------------------- DNA encoding full-length tagged NtrX was amplified from pNPTS138-ntrX-myc using primers pMR10-NtrX~ff~ and Myc-pMR10~rev~, `5’-tcctgcagagctctagagtcgagacATGAGCGCCGACGTTCTTGTGGTGG-3’` and `5’-TTAAGTGCGGCCCCCTCGAGGGGGTCTACAGATCTTCTTCCGAGATCAGCTTCTGTTC-3’`, respectively. 
The PCR product was used as a megaprimer in a PCR reaction with the pMR10 plasmid as template, according to the restriction-free cloning method \[[@pone.0194486.ref016]\]. Then, the PCR reaction was digested with DpnI at 37°C for 2 h and the mixture was transformed into competent *E*. *coli* DH5α cells. Selection was carried out on LB-kanamycin plates, and the resulting plasmid (pMR10-ntrXmyc) was isolated and sequenced. Finally it was transformed into *E*. *coli* S17-1 and transferred to *C*. *crescentus* Δ*ntrX* by conjugation. Complemented strains were selected by kanamycin resistance and then NtrX~myc~ expression was verified by Western blot against the tag. The plasmid encoding the mutant protein NtrX~myc~(D53A) was obtained from pMR10-NtrX~myc~ by PCR amplification using primers `5′-GCTTTGCTGGTGCTGGCCATCTGGATGCAGG-3′` and `5′-CCTGCATCCAGATGGCCAGCACCAGCAAAGC-3`′, followed by digestion with the enzyme DpnI. Further steps to obtain the *C*. *crescentus* strain were conducted as detailed in the previous paragraph. Construction of BA_NtrX~myc~ strain {#sec006} ----------------------------------- To construct a *B*. *abortus* strain with a chromosomally myc-tagged NtrX protein (BA_NtrX~myc~) we amplified *ntrX* from *B*. *abortus* 2308 genomic DNA with primers BA-ntrX~ff~ and BA-ntrXmyc~rev~, `5’-AACTGCAGATGGCGGCCGATATTCTTGTTGTTG-3’` and `5’-TTCTGCAGTTACAGATCTTCTTCCGAGATCAGCTTCTGTTCTACGCCGAGAGACTTCAGCTTGCGA-3’`, respectively. Then, the PCR product was digested with PstI and ligated into the pNPTS138 plasmid. The resulting vector was transformed into *E*. *coli* S17-1 and transferred to *B*. *abortus* 2308 by conjugation. Homologous recombination led to the integration of the plasmid, resulting in *ntrX_myc* in the locus that was previously occupied by the endogenous gene (therefore, under the same transcriptional regulation) and the wild-type endogenous copy of *ntrX* now coded after the integrated pNPTS138 backbone. The integration of the plasmid was selected by kanamycin resistance and verified by PCR. Isolation of total RNA from *C*. *crescentus* bacterial cell culture {#sec007} -------------------------------------------------------------------- *C*. *crescentus* wild type and CC_NtrX~myc~ were grown in M2G or M5G at 30°C until stationary phase (OD~600~ 1.0--1.3). After harvest, the supernatant was removed, and the pellet was resuspended in 100 μl of a solution containing 84 μl of TE buffer, 15 μl of 10% SDS and 1 μl of 10 mg/ml proteinase K. The samples were then incubated at 37°C for 1 h and 600 μl of Qiagen RLT lysis buffer was added. Total RNA was isolated following the Qiagen RNeasy Mini Bacterial protocol. DNA was subsequently removed by digestion with RQ1 RNase-free DNAse (Promega) according to the manufacturer's instructions. RNA was quantified using a NanoDrop spectrophotometer (ND-1000, Thermo Fisher Scientific). Real-time quantitative RT-PCR assay {#sec008} ----------------------------------- Reverse transcription was performed with SuperScritpt III First-strand synthesis kit (Invitrogen) using random decamer primers (Invitrogen). Complementary DNA (cDNA) samples were used as templates in quantitative real-time PCRs (qRT-PCRs). Primers were designed with the Primer3 program (<http://www.ncbi.nlm.nih.gov/tools/primer-blast/>) obtaining primers NtrX-RT~ff~ and NtrX-RT~rev~ (`5’-CTGGAGGATGAAGGCTATGC-3’` and `5’-CAGATATCCAGCACCAGCAA-3’`, respectively), which amplify a 101 bp region. 
Real-time PCRs were performed with SYBR Green in 96-well plates in an Mx3005P Stratagene instrument and analyzed with the MxPro program. The results for the target mRNA were normalized to the amount of the *C*. *crescentus* CC_0088 mRNA for which primers `5’-CGGCTCATTCTCGATCTCTT-3’` and `5’-CCTCGACAATGCTGAACTGA-3’` were used. Western blot analysis {#sec009} --------------------- To verify the expression of the NtrX~myc~ protein, the CC_NtrX~myc~ strain was grown under the conditions indicated in the figure legends. Then, the OD~600~ of the cultures was measured and volumes corresponding to the same amount of bacteria were centrifuged. The pellets were resuspended in 1X Laemmli buffer and heated 10 min at 90°C. These samples were loaded in two polyacrylamide gels and subjected to electrophoresis. One of them was stained with Coomassie Brilliant Blue (total protein stain for loading controls) while the other was transferred to a nitrocellulose filter (Millipore). Membranes were probed with monoclonal mouse anti-myc antibody (Cell Signaling Technology) at a 1:2,000 dilution, and a secondary HRP-conjugated anti-mouse antibody (Sigma) used at a 1:3,000 dilution. Blots were developed using SuperSignal^TM^ West Pico Chemiluminiscent Substrate (Thermo Scientific), following the manufacturer's instructions. Signal intensity was measured using ImageQuant LAS4000 (GE Healthcare Life Sciences) and quantified using the ImageJ program. ### Phosphoprotein affinity gel electrophoresis {#sec010} NtrX~myc~ phosphorylation was analyzed in cultures grown and incubated as detailed in the figure legends. The samples were prepared by centrifuging equal amounts of bacteria, according to the OD~600~ of the cultures, and then the pellets were frozen until used. *C*. *crescentus* samples were resuspended in 1X Laemmli buffer and disrupted by sonication, using one pulse of 15 seconds at an output wattage of 2 (QsonicaXL-- 2000 series, Misonix). *B*. *abortus* samples were disrupted using a Precellys24 homogenizer (Bertin Technologies) with 4 cycles of 3 x 30 seconds at 6,500 rpm, incubating on ice between each cycle. The homogenate was centrifuged for 2 min at 5,000 x g at 4°C to remove unbroken cells and precellys beads, and the supernatant was then centrifuged 5 min at 10,000 x g. Laemmli buffer to a final 1X concentration was added to the resulting supernatant. To avoid NtrX dephosphorylation, the samples were not heated and they were loaded after disruption in polyacrylamide gels copolymerized with 35 μM Phos-tag™ and 150 μM ZnCl~2~. Electrophoresis was performed with standard denaturing running buffer at 4°C under constant voltage (150 V). After electrophoresis, the gels were washed with EDTA 1 mM and then the proteins were transferred to a nitrocellulose membrane (Millipore) to perform Western blots as previously described. When appropriate, the bands were quantified with the ImageJ program, and the percentage of NtrX phosphorylation (NtrX\~P %) was calculated as the ratio \[(NtrX\~P)/(NtrX~TOT~)\]x100, where NtrX~TOT~ corresponds to the total intensity of the bands of phosphorylated and unphosphorylated NtrX. Growth curve and determination of bacterial viability {#sec011} ----------------------------------------------------- Overnight cultures of *C*. *crescentus* CB15N, the Δ*ntrX* mutant strain and the CC_Δ*ntrX*-NtrX~myc~ complemented strain were diluted to an OD~600~ of 0.005 in M2G and were incubated at 30°C with agitation (170 rpm). 
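As a worked illustration of the band-based quantification described in the Phos-tag section above, where the percentage of phosphorylated NtrX is computed as \[(NtrX\~P)/(NtrX~TOT~)\]x100 from ImageJ band intensities, here is a minimal sketch; the function name, variable names, and numbers are illustrative only and not from the paper:

```python
def percent_phosphorylated(phospho_band, unphospho_band):
    """NtrX~P % = (NtrX~P / NtrX_TOT) x 100, where NtrX_TOT is the summed
    intensity of the phosphorylated and unphosphorylated bands
    (band intensities as quantified in ImageJ)."""
    total = phospho_band + unphospho_band
    return 100.0 * phospho_band / total

# Illustrative numbers only: a lane where the shifted (phosphorylated) band
# reads 1500 arbitrary units and the unshifted band reads 1000 units.
print(percent_phosphorylated(1500.0, 1000.0))  # -> 60.0
```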
Samples were periodically taken to measure the OD~600~ and to determine the bacterial viability by counting the number of colony forming units (CFU) after plating 10-fold serial dilutions onto M2G plates and incubating them at 30°C for 3 days. Bacterial survival in response to acidic pH stress {#sec012} -------------------------------------------------- The different *C*. *crescentus* strains were grown in M2G medium until they reached logarithmic or stationary phases, according to the OD~600~ values that were previously mentioned. At this point, they were centrifuged, resuspended in the same volume of M2G medium adjusted to pH 4.0, and incubated at 30°C with agitation for 30 minutes. Samples were taken immediately after the addition of the acidic media (time 0) and after the incubation period (time 30) to determine the number of viable bacteria by plating 10-fold serial dilutions on solid M2G plates. The experiment was performed independently three times by triplicate and the percentage of survival was calculated as the ratio between the number of viable bacteria at time 30 and the initial viable bacteria (time 0) multiplied by 100. Statistical analysis {#sec013} -------------------- Statistical analyses were performed using a two-tail Student's t-test, or one-way or two-way ANOVA with a Bonferroni's multiple comparison post-test using GraphPad Prism5. Data are presented as mean ± standard deviation (SD) of the mean. P-values of ≤0.05 were considered significant. Statistical significance levels were defined as follows: \*p\<0.05; \*\*p\<0.01; \*\*\*p\<0.001. Results {#sec014} ======= NtrX expression is induced by high concentrations of phosphate {#sec015} -------------------------------------------------------------- In order to understand the activation of the NtrY/X system in *C*. *crescentus*, we decided to determine under which growth conditions NtrX is expressed. Previous reports indicated that a *C*. *crescentus* Δ*ntrX* strain grows more slowly and has a fitness disadvantage in phosphate-replete minimal medium (M2G, 10 mM phosphate), but not in the phosphate-limited medium M5G (50 μM phosphate) \[[@pone.0194486.ref004]\]. This might suggest that the *C*. *crescentus* NtrY/X pathway is necessary for responding to a signal or metabolite produced in M2G, but it could also imply that NtrX is not present in M5G. To establish the expression of this RR, we measured the levels of the *ntrX* transcript by qRT-PCR in *C*. *crescentus* wild type (CC_WT) grown in M2G and M5G and observed that the expression of the gene is significantly lower under phosphate-limited conditions ([Fig 1A](#pone.0194486.g001){ref-type="fig"}). ![NtrX expression is induced under phosphate-replete conditions.\ The expression of NtrX in different media was determined by qRT-PCR and Western blot. (A) qRT-PCR to determine the level of *ntrX* transcripts in *C*. *crescentus* wild type grown in M2G and M5G until stationary phase. Data represent the mean ± standard deviation of three independent experiments, each performed by triplicate. The *p*-value was determined by a two-tailed Student's t-test (\*\**p*\<0.01). (B) The CC_NtrX~myc~ strain was grown in M2G or M5G and samples of these cultures were analyzed after 16 h. Also, at this point an aliquot of the M5G culture was taken, 10 mM sodium phosphate was added and it was further incubated for 1 h or 24 h when samples were withdrawn to be analyzed by Western blot. 
(C) Initial cultures of CC_NtrX~myc~ were grown overnight in M2G or M5G, and samples were taken at time zero (lane '-'), or they were centrifuged and resuspended in fresh M2G or M5G. After an 8 h incubation aliquots of these samples were analyzed by Western blot. Each experiment from (B) and (C) was performed independently three times, and the result of one of these repetitions is shown.](pone.0194486.g001){#pone.0194486.g001} Then, to verify if there is a correlation between the levels of the *ntrX* transcript and the abundance of the protein, we generated a strain with a chromosomally myc-tagged NtrX (CC_NtrX~myc~) that was grown in M2G and M5G. As a control, we performed qRT-PCR with samples of this strain in the two media. Despite presenting higher levels of the *ntrX* transcript compared to CC_WT, it was confirmed that in the engineered strain the expression of *ntrX* is also lower in the M5G medium ([S1A Fig](#pone.0194486.s001){ref-type="supplementary-material"}). Moreover, Western blot analysis did not detect NtrX~myc~ under phosphate-limited conditions, but it could be demonstrated that the protein is expressed in M2G ([Fig 1B and 1C](#pone.0194486.g001){ref-type="fig"}). Then, CC_NtrX~myc~ was grown in M5G supplemented with phosphate to match the concentration present in M2G and NtrX~myc~ was detected either after 1 h or 24 h of culture indicating that, in fact, phosphate induces NtrX~myc~ accumulation ([Fig 1B](#pone.0194486.g001){ref-type="fig"}). Finally, cultures of CC_NtrX~myc~ were grown in M2G, centrifuged, and incubated in M2G or M5G. We observed that NtrX~myc~ was expressed in the M2G culture before and after resuspending the strain in the same medium, but the protein was no longer detected after 8 h in M5G ([Fig 1C](#pone.0194486.g001){ref-type="fig"}). When we used an initial culture in M5G, NtrX~myc~ was not expressed neither at the beginning of the assay nor after the incubation in the same medium, but it was detected after 8 h in M2G ([Fig 1C](#pone.0194486.g001){ref-type="fig"}). Altogether, our work points out that NtrX expression is induced under phosphate-replete conditions. NtrX is phosphorylated during stationary phase {#sec016} ---------------------------------------------- After verifying that NtrX is expressed in M2G, we studied its phosphorylation status at different stages of growth. To this end, the CC_NtrX~myc~ strain was grown in M2G and samples were taken at different times to measure their OD~600~ and to perform electrophoresis in gels with affinity for phosphoproteins. These gels were prepared with Phos-tag™, a reagent that reduces the migration of phosphorylated proteins, and NtrX~myc~ was identified by Western blot against the tag. Our experiments show that NtrX~myc~ is phosphorylated upon entry to stationary phase, it remains phosphorylated for 10 h and returns to a dephosphorylated state after a prolonged period of time (i.e. 44 h of culture) ([Fig 2A](#pone.0194486.g002){ref-type="fig"}). As a control, we also analyzed a stationary-phase sample of a Δ*ntrX* mutant transformed with a plasmid that codes for NtrX~myc~\_D53A (pMR10-NtrX~myc~\_D53A), in which the phosphorylatable aspartate residue was mutated for alanine. 
In this case, we did not observe a band of the tagged protein with reduced mobility ([S2 Fig](#pone.0194486.s002){ref-type="supplementary-material"}), indicating that the modification that NtrX undergoes during stationary phase is its phosphorylation, and that the Phos-tag^TM^ gels separate the phosphorylated isoform from the unphosphorylated protein. ![NtrX phosphorylation is achieved during the stationary phase of growth.\ Different samples of CC_NtrX~myc~ were analyzed by phosphoprotein affinity electrophoresis and Western blot to determine the presence of phosphorylated NtrX (the phosphorylated and non-phosphorylated forms of the protein are indicated on the left of the gels). (A) An overnight culture was diluted in fresh M2G and samples were taken at the indicated time points to determine their OD~600~ and NtrX phosphorylation. (B) M2G log-phase cultures ('M2G log') were centrifuged and resuspended in fresh M2G or in cell-free supernatants from cultures in stationary phase ('M2G sta'). NtrX phosphorylation was analyzed in samples taken after 0.5 h or 1 h incubations. As controls, aliquots of the original stationary- and log-phase cultures were included. (C) Bacteria grown in M2G until logarithmic phase were centrifuged, resuspended in fresh M2G (control) or in cell-free supernatants from cultures grown until stationary phase in M2G ('M2G sta',), M2X ('M2X sta') or M5G ('M5G sta'). Samples were obtained after an incubation period of 0.5 h. (D) Western blot of bacterial lysates obtained from cultures grown in M2G until logarithmic phase that were centrifuged and resuspended in fresh M2G, or in fresh M2G prepared without ammonium chloride ('M2G --NH~4~') or without glucose ('M2G --gluc'), and incubated for 0.5 h. Each experiment from panels (A) to (D) was performed independently at least three times, and the result of one of these repetitions is shown. However, the bands of all of them were quantified and used to elaborate the histogram presented in (E). The statistical analysis was performed by a one-way ANOVA followed by a Bonferroni's multiple comparisons post-hoc test, comparing 'M2G sta' to 'M2G log', and 'M2G log resuspended in M2G fresh' to all the conditions in which the 'M2G log' culture was resuspended. \*\*p\<0.01, \*\*\*p\<0.001.](pone.0194486.g002){#pone.0194486.g002} To determine if NtrX phosphorylation was a consequence of a modification in the culture medium associated with the bacterial growth, log-phase bacteria were centrifuged and resuspended in cell-free supernatants from stationary-phase cultures. After incubating them for 0.5 h or 1 h, we observed a significant increase in NtrX~myc~ phosphorylation ([Fig 2B and 2E](#pone.0194486.g002){ref-type="fig"}). On the contrary, phosphorylated NtrX~myc~ (NtrX\~P) was not detected after incubation of log-phase bacteria with fresh M2G medium ([Fig 2B and 2E](#pone.0194486.g002){ref-type="fig"}), indicating that NtrX phosphorylation is triggered by an extracellular signal that is present in the supernatants of stationary-phase M2G cultures. Then, log-phase bacteria were resuspended in stationary-phase supernatants obtained from cultures grown in M5G, which produced a significant increase in the phosphorylation of NtrX~myc~ with respect to fresh M2G, but this increment was not as high as that observed with stationary-phase supernatants from M2G cultures (the phosphorylated fraction reached levels of 30% and 60%, respectively) ([Fig 2C and 2E](#pone.0194486.g002){ref-type="fig"}). 
The experiment was repeated by resuspending exponential-phase bacteria grown in M2G in a supernatant obtained after growing *C*. *crescentus* to stationary phase in M2X, a minimal medium with xylose as the carbon source. In this case, we measured a significant increase in the phosphorylated fraction of NtrX~myc~ with respect to fresh M2G, reaching a percentage of NtrX\~P similar to that obtained with stationary-phase M2G supernatants ([Fig 2C and 2E](#pone.0194486.g002){ref-type="fig"}), indicating that the signal that causes NtrX phosphorylation is produced by bacterial metabolism of either glucose or xylose as the carbon source. In order to identify the signal, we tested conditions that are hallmarks of stationary-phase cultures. We incubated log-phase bacteria with fresh M2G prepared without glucose ('M2G --gluc') or without ammonium ('M2G --NH~4~') and found that phosphorylated NtrX~myc~ was not present in any of the cell lysates obtained from these samples ([Fig 2D](#pone.0194486.g002){ref-type="fig"}), and that there was no significant difference with respect to resuspending the bacteria in fresh M2G ([Fig 2E](#pone.0194486.g002){ref-type="fig"}). These results rule out the possibility that scarcity of glucose or ammonium causes NtrX phosphorylation during stationary phase.

NtrX is phosphorylated under acidic pH conditions {#sec017}
-------------------------------------------------

It has been reported that glucose, as the sole organic carbon source in minimal medium, is metabolized by *C*. *crescentus* through the Entner-Doudoroff pathway and that the pH decreases during culture \[[@pone.0194486.ref017]\]. In fact, we measured the pH of supernatants from *C*. *crescentus* cultures in M2G at different times and confirmed that the medium acidifies as the bacteria enter stationary phase, reaching a pH value of 5.0 ([Fig 3A](#pone.0194486.g003){ref-type="fig"}). Therefore, we investigated whether exposure to acidic pH is the environmental signal that leads to NtrX phosphorylation. CC_NtrX~myc~ at exponential phase was resuspended in cell-free supernatants from stationary-phase cultures, with or without their pH adjusted to 7.0. After Phos-tag™ electrophoresis and Western blot analysis of these samples, we observed that NtrX~myc~ was phosphorylated only when the bacteria were incubated in the acidic supernatant ([Fig 3B](#pone.0194486.g003){ref-type="fig"}), confirming that the acidification produced at stationary phase is responsible for NtrX phosphorylation. To determine the pH range at which this event is triggered, log-phase CC_NtrX~myc~ was resuspended in fresh M2G with the pH adjusted to different values. The results show that NtrX~myc~ is phosphorylated under mildly acidic conditions, with maximum phosphorylation between pH 5.0 and 4.5, corresponding to 60% of NtrX~myc~ in the phosphorylated state ([Fig 3C](#pone.0194486.g003){ref-type="fig"}), as was also determined for cultures at stationary phase ([Fig 2E](#pone.0194486.g002){ref-type="fig"}).

![The acidification produced at stationary phase is responsible for NtrX phosphorylation.\
(A) Variation in the pH of the supernatant of a *C*. *crescentus* culture in M2G as a function of the bacterial optical density. (B-E) Phos-tag™ electrophoresis and Western blot of CC_NtrX~myc~ at exponential phase ('M2G log') treated under different conditions.
(B) Bacteria were resuspended in cell-free supernatants from stationary-phase cultures ('M2G sta') with their pH adjusted to 7.0 or left at their original pH ('pH 5.0'), and analyzed after 30 minutes. Bacteria collected from the original cultures at stationary and log phases were used as controls. (C) Exponentially growing cultures were centrifuged, resuspended in M2G with the pH adjusted to different values (indicated in the figure) and incubated for 30 minutes before Phos-tag^TM^ electrophoresis and Western blot (left). The intensity of the bands was quantified in several experiments and used to generate the plot presented on the right of the panel. (D) Kinetics of phosphorylation at pH 5.0. Log-phase CC_NtrX~myc~ was resuspended in fresh acidic M2G (pH 5.0) and aliquots were taken at different times. (E) Kinetics of dephosphorylation at pH 7.0. A log-phase culture was incubated for 30 min with fresh M2G at pH 5.0 to allow NtrX phosphorylation. The culture was then centrifuged, resuspended in fresh M2G at pH 7.0 and incubated for the indicated periods of time, at which aliquots were taken for Phos-tag™ electrophoresis. Each experiment was performed independently at least three times, and the results of one representative repetition are shown.](pone.0194486.g003){#pone.0194486.g003}

Taking into consideration that stationary-phase supernatants obtained in M5G were less efficient at triggering NtrX~myc~ phosphorylation, and that this medium is prepared with a different buffer system (Pipes instead of phosphate), we measured their pH. During early stationary phase, M5G cultures reached a pH of 6.0, a value at which the phosphorylated fraction of NtrX~myc~ is low (approximately 20%, as shown in [Fig 3C](#pone.0194486.g003){ref-type="fig"}), explaining the limited ability of these supernatants to trigger NtrX~myc~ phosphorylation in the earlier experiments (around 30% of NtrX\~P, [Fig 2E](#pone.0194486.g002){ref-type="fig"}). Since our experiments were performed by adjusting the pH of the M2G medium with HCl, it was necessary to exclude the possibility that chloride ions were triggering NtrX~myc~ phosphorylation. For this reason, we repeated our assays by resuspending log-phase CC_NtrX~myc~ in fresh M2G adjusted to different pH values with acetic acid (HAc) or sulfuric acid. Regardless of the acid used, we observed maximal phosphorylation at mildly acidic pH ([S3 Fig](#pone.0194486.s003){ref-type="supplementary-material"}), confirming that acidic pH, and not the presence of chloride ions, is the signal that causes NtrX phosphorylation. Finally, we incubated log-phase CC_NtrX~myc~ bacteria with fresh acidic M2G for different periods of time and observed that NtrX~myc~ phosphorylation takes place as soon as 1 min after the treatment ([Fig 3D](#pone.0194486.g003){ref-type="fig"}). When these treated cultures were then centrifuged and resuspended in fresh M2G at pH 7.0, NtrX~myc~ was dephosphorylated very rapidly (\< 1 min) ([Fig 3E](#pone.0194486.g003){ref-type="fig"}). Therefore, acidic pH acts as a switch that dictates the phosphorylation status of NtrX.

*ntrX* deletion causes a decreased bacterial viability during stationary phase {#sec018}
------------------------------------------------------------------------------

Given that NtrX is phosphorylated during stationary phase in M2G, we wanted to establish whether deleting the *ntrX* gene affects bacterial viability at this particular culture stage. *C*.
*crescentus* CB15N (CC_WT) and Δ*ntrX* (here denoted CC_Δ*ntrX*) were grown in M2G, and samples were taken at different times to measure their OD~600~ and to determine viability by plating on solid media. The CC_WT culture increased in OD~600~ and number of bacteria during exponential phase, presented a slight decrease in viability upon entry into stationary phase, and maintained a stable number of CFU at this stage during the analyzed period ([Fig 4A and 4B](#pone.0194486.g004){ref-type="fig"}). In contrast to previous reports describing a slower doubling time for CC_Δ*ntrX* with respect to CC_WT \[[@pone.0194486.ref004]\], we observed that the OD~600~ and the number of cells of the mutant strain increased during exponential phase at a rate similar to the wild type. However, during stationary phase there was a persistent and significant decrease in the number of viable bacteria compared to the wild-type strain ([Fig 4B](#pone.0194486.g004){ref-type="fig"}). Given that the OD~600~ at stationary phase is similar between CC_WT and CC_Δ*ntrX* ([Fig 4A](#pone.0194486.g004){ref-type="fig"}), the reduction in the number of viable cells is accompanied by an increasing number of dead bacteria that are not lysed. Complementation of the *ntrX* deletion with the wild-type tagged gene (CC_ΔntrX-NtrX~myc~) restores the phenotype of the wild-type strain ([Fig 4A and 4B](#pone.0194486.g004){ref-type="fig"}). Altogether, our results show that entry into stationary phase leads to NtrX phosphorylation and that the presence of this RR is required to sustain viability throughout this stage.

![*ntrX* deletion causes a decreased bacterial viability during stationary phase and under acidic stress.\
Cultures of *C*. *crescentus* CB15N (WT), the Δ*ntrX* mutant strain and the complemented strain CC_ΔntrX-NtrX~myc~ (*ΔntrX*+pMR10-*ntrX*~*myc*~) were diluted to an OD~600~ of 0.005 in M2G and were incubated at 30°C with agitation. Samples were taken periodically to determine the OD~600~ (A) and bacterial viability (B) by counting colony-forming units (CFU) per ml. Each assay was performed in duplicate and the average ± SD of one representative experiment is shown. Statistical analysis was performed by one-way ANOVA followed by Bonferroni's multiple-comparisons post-hoc test. \*\*\*p\<0.001 between CC_WT and CC_Δ*ntrX*. (C) Bacterial survival in response to acid stress. Bacteria grown in M2G until logarithmic or stationary phase were incubated in acidic M2G (pH 4.0) for 30 minutes. The number of viable bacteria was determined at the initial and final time points and the percentage of survival was calculated. The experiment was performed in triplicate and the mean + SD of a representative experiment is shown. Data were analyzed by two-way ANOVA followed by Bonferroni's multiple-comparisons post-hoc test. \*\*p\<0.01.](pone.0194486.g004){#pone.0194486.g004}

*C*. *crescentus* NtrX is involved in acid resistance {#sec019}
-----------------------------------------------------

It has been shown that *C*. *crescentus* presents an increased resistance to acid stress during stationary phase \[[@pone.0194486.ref018]\]. Therefore, we decided to study the relevance of *ntrX* in the response to a sudden exposure to acidic pH when cells are at exponential or stationary phase. The experiment consisted of growing the bacteria in M2G until they reached the desired stage, resuspending them in M2G at pH 4.0, incubating them for 30 min, and determining the number of viable cells.
In accordance with previous reports \[[@pone.0194486.ref018]\], wild-type *C*. *crescentus* presented a significant increase in its resistance to acid stress during stationary phase, while exponential cultures showed a fast death rate ([Fig 4C](#pone.0194486.g004){ref-type="fig"}). When CC_Δ*ntrX* was exposed to pH 4.0 for 30 min, the cultures at exponential phase presented a drastic reduction in their viability, comparable to the percentage of survival of the wild-type strain ([Fig 4C](#pone.0194486.g004){ref-type="fig"}). Importantly, cultures of the mutant strain at stationary phase showed a marked reduction in their viability after the acid stress, reaching a survival percentage significantly lower than that of the wild-type strain at the same culture stage ([Fig 4C](#pone.0194486.g004){ref-type="fig"}). On the other hand, the complemented strain CC_ΔntrX-NtrX~myc~ behaved like CC_WT. This indicates that NtrX is required to elicit a response during stationary phase that leads to the increased acid resistance that characterizes this stage.

NtrX phosphorylation is also triggered by acidic pH in *Brucella abortus* {#sec020}
-------------------------------------------------------------------------

As mentioned above, the NtrY/X system has been implicated in numerous responses to diverse stimuli in different microorganisms. For this reason, we wanted to address whether phosphorylation of the RR by acidic pH is a singular feature of *C*. *crescentus* biology or whether it is conserved in other bacteria. Our group has been studying the role of the NtrY/X system in the physiology and virulence of *B*. *abortus*, which, like *C*. *crescentus*, is an alphaproteobacterium. We reported that this pathway participates in the adaptation to low oxygen concentrations and that the intracellular PAS domain is important for this function \[[@pone.0194486.ref005]\]. *B*. *abortus* is an interesting second model in which to study the signaling triggered by acidic pH because one major mechanism of *Brucella* pathogenesis is the ability to survive in an acidic environment inside macrophages \[[@pone.0194486.ref019]\]. In fact, phagosome acidification is a key intracellular event that induces the expression of virulence genes \[[@pone.0194486.ref020]\]. We constructed a *B*. *abortus* strain with a chromosomally myc-tagged NtrX protein (BA_NtrX~myc~) and grew it in minimal medium (MM). During exponential phase, the pH of the medium was close to 7.0 and it did not change significantly upon entry into stationary phase. We also analyzed samples of BA_NtrX~myc~ at different stages of the growth cycle, observing a low proportion of phosphorylated NtrX~myc~ in both exponential and stationary phases ([Fig 5](#pone.0194486.g005){ref-type="fig"}). Then, exponential-phase cultures were centrifuged, resuspended in fresh media with the pH adjusted to different values, incubated for 30 min and used to prepare cell lysates that were subjected to Phos-tag™ electrophoresis and Western blot. The results show that NtrX~myc~ is barely phosphorylated at neutral pH (consistent with the results obtained in exponential and stationary phases) and that the phosphorylated fraction increases at lower pH values, reaching a maximum at pH 4.0 ([Fig 5](#pone.0194486.g005){ref-type="fig"}). These findings demonstrate that the triggering of NtrX phosphorylation by acidic pH observed in *C*. *crescentus* also takes place in *B*.
*abortus*, pointing to a potentially conserved role across the alphaproteobacteria class.

![NtrX phosphorylation is also triggered by acidic pH in *Brucella abortus*.\
Phosphoprotein affinity gel electrophoresis followed by Western blot of samples of the *B*. *abortus* BA_NtrX~myc~ strain grown in minimal medium until logarithmic phase ('MM log'), resuspended in fresh MM with its pH adjusted to different values and incubated for 30 min. As controls, samples of cultures at log and stationary phases were analyzed ('MM log' and 'MM sta', respectively). The phosphorylated and non-phosphorylated forms of NtrX are indicated on the left of the gels. Two independent experiments were performed and the bands were quantified to calculate the mean ± SD, which are plotted in the graph presented on the right of the figure.](pone.0194486.g005){#pone.0194486.g005}

Discussion {#sec021}
==========

The NtrY/X TCS is an intriguing signaling pathway in bacteria, as it is one of the least characterized systems. It was initially described many years ago \[[@pone.0194486.ref007]\], and important contributions to the general understanding of its regulation and activity have been made recently \[[@pone.0194486.ref005],[@pone.0194486.ref006],[@pone.0194486.ref009]--[@pone.0194486.ref013],[@pone.0194486.ref021],[@pone.0194486.ref022]\]. In the present article we identify a signal that positively regulates the expression of the RR NtrX, another signal that triggers its phosphorylation, and describe their relevance for bacterial physiology. It is important to highlight that our experiments provide, for the first time, direct evidence of the *in vivo* phosphorylation of NtrX. It has been reported that NtrX expression is regulated by proline and glutamine in *E*. *chaffeensis* \[[@pone.0194486.ref010]\], and our group demonstrated that limited oxygen conditions induce the operon that codes for the NtrY/X TCS in *B*. *abortus* \[[@pone.0194486.ref005]\]. Herein, we report that the amount of NtrX in *C*. *crescentus* depends on the availability of phosphate in the medium, with high concentrations of phosphate leading to accumulation of the RR. This effect is due, at least in part, to an upregulation of *ntrX* transcription under phosphate-replete conditions. In this regard, it would be interesting to determine the pathway involved in this induction. For example, the PhoR/B TCS is a conserved signal transduction system that allows bacteria to respond to phosphate limitation \[[@pone.0194486.ref023]\], though *ntrX* has not been identified as a repressed target within the PhoB regulon \[[@pone.0194486.ref024]\]. Therefore, the modulation of *ntrX* transcription by phosphate is not due to direct binding of PhoB to the promoter of the NtrY/X operon, but rather occurs through another transcription factor regulated by PhoB or through a different signal transduction pathway. On the other hand, even though we determined that the level of the *ntrX* transcript in M5G is approximately 30% of that in M2G, the protein could not be detected by Western blot under the same phosphate-limited conditions. In addition, the NtrX~myc~ protein became undetectable when CC_NtrX~myc~ was grown in M2G and then incubated in M5G for 8 h. However, when CC_NtrX~myc~ was grown in M5G and then incubated in M2G for the same period of time, the amount of NtrX~myc~ was not restored to the level of the protein in M2G at time zero.
All these observations might indicate that the concentration of phosphate could also modulate the proteolysis rate of NtrX.

One of the most important findings presented in this article is the triggering of NtrX phosphorylation when *C*. *crescentus* is under acidic conditions, which typically arise during stationary-phase growth in M2G medium. It remains to be determined whether periplasmic protons *per se* are the signal involved, but some of our results support this notion. The fact that fresh acidic M2G medium, which contains exclusively glucose and salts and was adjusted with HCl (not an organic acid), is sufficient to trigger NtrX phosphorylation indicates that the signal sensed is not an organic molecule produced and secreted by the bacteria during stationary phase. In fact, the same effect was obtained when the pH of the medium was adjusted with H~2~SO~4~ or HAc. On the other hand, the fact that neutralophilic bacteria generally maintain their cytoplasmic pH within a narrow range regardless of the external pH \[[@pone.0194486.ref025]\] suggests that the periplasmic pH, rather than the cytoplasmic pH, is the environmental cue relevant to NtrX phosphorylation. Nevertheless, the cytoplasm of some bacteria, such as *Salmonella enterica*, is acidified upon acid stress \[[@pone.0194486.ref026]\], but this phenomenon requires a considerably longer time (approximately 120 minutes to decrease the pH by 0.75 units \[[@pone.0194486.ref026]\]), in contrast to the fast phosphorylation of NtrX that takes place as soon as 1 min after incubation in an acidic medium. Of note, we showed that NtrX is rapidly dephosphorylated when acid-treated bacteria are returned to a neutral-pH medium, indicating that NtrX\~P is the substrate of a phosphatase. Despite our efforts, we could not obtain a *C*. *crescentus* strain carrying both the *ntrY* deletion and a myc-tagged NtrX, which would have allowed us to confirm that NtrY is the sensor kinase that detects acidic pH and consequently phosphorylates NtrX. Other histidine kinases, such as PhoQ \[[@pone.0194486.ref027]\], PmrB \[[@pone.0194486.ref028]\], ArsS \[[@pone.0194486.ref029]\] and EvgS \[[@pone.0194486.ref030]\], have been reported to respond to low pH. All of them contain periplasmic domains that allow the detection of an acidic environment (although it has recently been proposed that the activation of PhoQ occurs in response to a reduction in the cytoplasmic pH \[[@pone.0194486.ref031]\]). NtrY has a periplasmic domain whose predicted secondary structure \[[@pone.0194486.ref032]\] classifies it within the PDC family \[[@pone.0194486.ref033]\], which groups extracellular sensor domains from PhoQ, DcuS and CitA. Taking into consideration that NtrY is a histidine kinase involved in redox sensing through its intracellular PAS domain \[[@pone.0194486.ref005]\], the potential role of its periplasmic domain in detecting acidic pH would imply that NtrY is able to integrate different environmental signals. Overall, our results lead us to postulate that *C*. *crescentus* NtrX orchestrates an adaptive response to acidic pH that initially requires phosphorylation of the RR but is sustained over time without NtrX\~P, given that the protein is phosphorylated upon entry into stationary phase and becomes dephosphorylated after several hours. This initial response would allow the bacteria to survive for a prolonged period in stationary phase, since deleting *ntrX* produces a decrease in viability after 5 h at this stage.
Also, this response would be responsible for the acquisition of the characteristic acid-stress resistance observed in *C*. *crescentus* at stationary phase \[[@pone.0194486.ref018]\], given that the mutant strain CC_Δ*ntrX* does not present this phenotype. We did not observe differences between the mutant and wild-type strains when the acid-resistance experiment was performed with bacteria at log phase, possibly because the stress is very drastic and the bacteria at this stage are too susceptible, dying before the adaptive mechanisms are activated. In spite of our progress, it remains to be elucidated which molecular mechanisms are involved in the response to acidic pH in *C*. *crescentus* and what role NtrX plays in their regulation. It has been proposed that glutamate, arginine and lysine decarboxylases contribute to pH homeostasis in *E*. *coli* \[[@pone.0194486.ref034]\], but these enzymes are not encoded in the *C*. *crescentus* genome. Also, under conditions of acid challenge *E*. *coli* increases the expression of other cytoplasmic enzymes that catalyze proton-consuming reactions and of respiratory chain complexes that pump protons out of the cell \[[@pone.0194486.ref025]\]. It is possible that the role of NtrX in the adaptation to acidic pH is linked to those strategies, since we have described that the NtrY/X TCS of *B*. *abortus* activates the expression of denitrification enzymes (which catalyze reactions that require protons) \[[@pone.0194486.ref005]\] and of the ccoN cytochrome oxidase (which pumps protons out of the cell) \[[@pone.0194486.ref006]\]. Our approach of studying *C*. *crescentus* in defined minimal media proved very useful for dissecting the different components that promote NtrX expression and phosphorylation. However, *ntrX* is essential for growth in rich medium (PYE) \[[@pone.0194486.ref003]\], in which we determined that the pH is neutral and no acidification is produced during bacterial growth. Therefore, unphosphorylated NtrX must have a key role in bacterial physiology that remains to be discovered. Another highlight of our work is the demonstration that acidic pH is also capable of triggering NtrX phosphorylation in the pathogenic bacterium *B*. *abortus*. Since *C*. *crescentus* and *B*. *abortus* belong to the same class but are not closely related (they belong to the orders Caulobacterales and Rhizobiales, respectively), our results could indicate that the phosphorylation of NtrX upon acidification, and its role in the adaptation to low pH, are conserved across the alphaproteobacteria class. Given that low pH acts as an intracellular signal for the expression of genes involved in survival and multiplication of *B*. *abortus* within the phagocytic cell \[[@pone.0194486.ref035]\], it would be interesting to determine whether NtrX is involved in the regulation of this virulence-related transcriptional network. Also, some mechanisms have been proposed to protect *Brucella* against the adverse effects of acidification (such as the expression of urease) \[[@pone.0194486.ref036]\], and it would be important to determine whether NtrX is required for their induction. In conclusion, we have deepened the knowledge of the NtrY/X pathway by identifying phosphate concentration as a signal necessary for the expression of NtrX in *C*. *crescentus*, and acidic pH as a trigger of NtrX phosphorylation in two different species of alphaproteobacteria. We also demonstrate that NtrX has an important role in the adaptation to environments with low pH.
It is noteworthy that we used a direct approach to detect NtrX\~P, which led us to postulate that the environmental pH acts as a switch capable of regulating the phosphorylation status of NtrX. We have therefore outlined an experimental set-up with the RR in two defined states (unphosphorylated in M2G at pH 7.0, and phosphorylated in M2G at pH 5.0) that will be valuable for elucidating the poorly characterized NtrX regulon.

Supporting information {#sec022}
======================

###### Levels of the *ntrX* transcript in the engineered strain CC_NtrX~myc~ and loading controls of [Fig 1](#pone.0194486.g001){ref-type="fig"}.

(A) The strains CC_WT and CC_NtrX~myc~ were grown until stationary phase in M2G and M5G. Total RNA was extracted and the levels of the *ntrX* transcript were determined in both strains and media by qRT-PCR. The data represent the mean ± standard deviation of an experiment performed in triplicate. (B and C) The same volumes of the samples analyzed in [Fig 1B and 1C](#pone.0194486.g001){ref-type="fig"} (Results) were loaded on SDS-PAGE gels that were stained with Coomassie Brilliant Blue. MWM: molecular weight marker. (TIF)

###### Phos-tag^TM^ gels separate phosphorylated NtrX.

A *C*. *crescentus* Δ*ntrX* mutant strain transformed with the plasmid pMR10 coding for NtrX~myc~\_D53A (CC_Δ*ntrX*-NtrX~myc~\_D53A) and the strain CC_NtrX~myc~ were grown until stationary phase in M2G, and samples were subjected to Phos-tag^TM^ electrophoresis and Western blot analysis. (TIF)

###### NtrX is phosphorylated under acidic pH regardless of the acid used to adjust the pH of the medium.

CC_NtrX~myc~ was grown in M2G until logarithmic phase and resuspended in fresh M2G with the pH adjusted to different values (indicated in the figure) with acetic acid (HAc, upper panel) or sulfuric acid (lower panel). After a 30 min incubation, aliquots were removed and analyzed by Phos-tag^TM^ gels and Western blot. (TIF)

We thank Michael Laub for kindly providing the *C*. *crescentus* CB15N and Δ*ntrX* strains. F.A.G. and M.C.C. are researchers at CONICET. I.F. received a fellowship from CONICET and G.S. is a fellow of this institution.

HK : histidine kinase

NtrX\~P : phosphorylated NtrX

OD~600~ : optical density at 600 nm

RR : response regulator

TCS : two-component system

[^1]: **Competing Interests:** The authors have declared that no competing interests exist.
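The band quantification and statistical comparisons described above for Fig 2E and Fig 3C (densitometric intensities converted to a phosphorylated fraction and compared by one-way ANOVA with Bonferroni-corrected post-hoc tests) can be sketched in a few lines of Python. The snippet below is purely illustrative: the condition labels echo the figure legends, but the intensity values are invented placeholders, and the all-pairs comparison is a simplification of the specific contrasts tested in the paper.

```python
# Minimal sketch (not the authors' analysis code): compare phosphorylated fractions of
# NtrX~myc across conditions by one-way ANOVA with Bonferroni-corrected pairwise t-tests.
# The values below are made-up placeholders, not measurements from the study.
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical phosphorylated fraction (shifted band / total signal), three repetitions each.
conditions = {
    "M2G log":             np.array([0.05, 0.08, 0.06]),
    "M2G sta":             np.array([0.58, 0.62, 0.60]),
    "log in M2G sta sup.": np.array([0.55, 0.63, 0.59]),
    "log in M5G sta sup.": np.array([0.28, 0.33, 0.30]),
}

f_stat, p_anova = stats.f_oneway(*conditions.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Bonferroni correction: multiply each raw p-value by the number of comparisons.
pairs = list(combinations(conditions, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(conditions[a], conditions[b])
    print(f"{a} vs {b}: t = {t:.2f}, adjusted p = {min(p * len(pairs), 1.0):.3g}")
```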
The Lister Hill National Center for Biomedical Communications' (LHNCBC) natural language processing (NLP), or text mining, research focuses on the development and evaluation of computer algorithms for automated text analysis. This area of research works primarily with text from the biomedical literature or electronic medical records and examines a wide variety of NLP tasks, including information extraction, literature searches, question answering, and text summarization.

BabelMeSH and PICO (Patient, Intervention, Comparison, and Outcome) Linguist are multi-language tools for searching MEDLINE/PubMed. Thirteen languages, including character-based languages, are supported. Recent enhancements include querying in more than one language and retrieving citations in more than one language.

The consumer health question answering project was launched to support NLM customer services, which receive about 90,000 requests a year from a worldwide pool of customers.

Computational de-identification uses NLP tools and techniques to recognize patient-related, individually identifiable information (e.g., names, addresses, and telephone and Social Security numbers) in text and redact it. In this way, patient privacy is protected and clinical knowledge is preserved.

The Indexing Initiative (II) project investigates language-based and machine-learning methods for the automatic selection of subject headings for use in both semi-automated and fully automated indexing environments at NLM. Its major goal is to facilitate the retrieval of biomedical information from textual databases such as MEDLINE.

Another system automatically augments a patient's Electronic Health Record (EHR) with pertinent information from NLM resources. The software runs as background agents, both at a hospital and at NLM. The hospital uses our APIs to integrate the search setup and to display and store results in its existing EHR system.

LHNCBC's Lexical Systems Group develops and maintains the SPECIALIST Lexicon and the tools that support and exploit it. The SPECIALIST Lexicon and NLP Tools are at the center of NLM's natural language research, providing a foundation for all our natural language processing efforts.

Another project seeks to improve information retrieval from collections of full-text biomedical articles, images, and patient cases by moving beyond conventional text-based searching to combining both text and visual features.

PubMed for Handhelds research brings medical information to the point of care via devices like smartphones. This includes developing algorithms and public-domain tools for searching by text message (askMEDLINE and txt2MEDLINE), applying clinical filters (PICO), and viewing summary abstracts (The Bottom Line and Consensus Abstracts) in MEDLINE/PubMed, as well as evaluating the use of these tools in Clinical Decision Support.
https://lhncbc.nlm.nih.gov/LHC-research/nlp.html
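Several of the retrieval tools mentioned above (askMEDLINE, txt2MEDLINE, the PICO clinical filters) ultimately issue searches against MEDLINE/PubMed. As a generic illustration of how such a search can be sent programmatically, the sketch below calls NCBI's public E-utilities service; it is not the implementation behind any of the NLM tools described here, and the example query term is arbitrary.

```python
# Illustrative only: a minimal MEDLINE/PubMed search through NCBI's public E-utilities.
# This is NOT the code behind askMEDLINE, txt2MEDLINE, or PICO Linguist.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term: str, retmax: int = 5) -> list[str]:
    """Return up to `retmax` PubMed IDs matching `term`."""
    params = urlencode({"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax})
    with urlopen(f"{ESEARCH}?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# Example: a PICO-flavoured therapy question narrowed with a publication-type filter.
print(pubmed_search("hypertension AND exercise AND randomized controlled trial[pt]"))
```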
Skeletons of conjoined twins and legs corkscrewed with rickets. Kidney stones the size of golf balls. The skull of a man who survived a crowbar shot through his head. The Warren Museum at the Harvard Medical School (HMS) knows how to get your attention. But the value of the museum, which recently reemerged after several years in hiding, goes far beyond the gross-out factor. “There’s actual valid medical, valid historical, valid contemporary use for the collection,” says curatorial manager Virginia Hunt, who is on a crusade to make over the museum’s image. Established at the Medical School in 1847, when John Collins Warren donated his personal teaching collection of anatomical models to the school, the museum is one of the few medical museums founded in the 19th century that still exists. The museum’s collection was housed in Gordon Hall for most of the 20th century until it slipped gradually into storage throughout the 1990s. By 1998, the museum’s 15,000 specimens were entirely invisible, stored in warehouses off-site. In September 2000, the museum resurfaced as part of the Countway Library Rare Books and Special Collections Department, displaying 300 of its “greatest hits” in a thoughtful and thought-provoking exhibit on the library’s fifth floor. Now Hunt and collections manager Suzanne Fitz are cataloging the collection, updating century-old paper records into a searchable database that they hope will help entice researchers to mine the museum’s rich resources. Tools to train doctors Modern medicine was a new field when Professor Warren, best known for his involvement in the first demonstration of ether anesthesia at Massachusetts General Hospital in 1846, donated the anatomical teaching models that formed the backbone of the collection. The models are human cadavers, dried and coated with resin, the tissue dissected, the veins and arteries injected with colored wax. Warren also donated wet tissue specimens, which are in storage. “He created the collection because he wanted his students to see what normal anatomy looked like,” says Hunt, who notes that social taboos about dead bodies hindered medical study in Warren’s day. Until 1830, it was illegal to dissect bodies for anatomical teaching in Massachusetts. The museum provides many such glimpses into the history of medicine and its study. Hunt calls the instrument collection the “oh my God, this is how they used to practice medicine stuff.” There are amputation kits that look like they belong in a carpenter’s tool chest and showy lancet cases and surgery tools of gold and ivory – beautiful, but hardly hygienic. Some items, like early stethoscopes and obstetric forceps, are remarkable in their similarity to today’s tools. Phineas Gage’s crowbar skull While Warren’s models, and other models of wax and papier mâché, focus on normal anatomy (the workaday stuff for Victorian-era medical students) later collectors brought abnormal specimens – and gawkers – into the Warren Museum. Headlining at the hall of human horrors is the “crowbar skull” of New Hampshire construction foreman Phineas Gage. Gage, by far the Warren’s most famous exhibit, won this dubious notoriety working on the Rutland & Burlington Railroad in Cavendish, Vt., when, in September of 1848, an accidental explosion fired a 13-pound tamping iron through his head. The three-foot rod, which accompanies Gage’s skull in the exhibit, landed several feet away from Gage. Remarkably, Gage survived the horrendous accident, which tore his scalp and fractured his skull. 
But when he returned home after 10 weeks, it was clear that the once well-liked Gage had undergone a dramatic change in personality. He had become obstinate and impatient; friends described him as “no longer Gage.” Gage unwittingly ushered in new thinking about personality and the brain. Previous studies of personality had been based on phrenology, which analyzed personality traits based on the size and shape of one’s skull. Gage’s accident turned the focus toward the physiology of the brain. New research from old bones At a recent lecture at the Warren, HMS tutor Peter Ratiu presented new research on Phineas Gage’s 150-year-old injury. As the museum regains its public presence, Hunt hopes that more researchers like Ratiu will take advantage of the collection. “It’s a wonderful key to the practice of medicine and thoughts about health at the time,” she says. From anthropologists and historians to forensic scientists, radiologists, and geneticists (who can take DNA from the wet tissue and bone specimens) the researchers who might benefit from the Warren’s collection are many. Collections manager Fitz, who has a background in forensics herself, points out some disturbingly timely applications for the collection: the Warren Museum has wet tissue specimens of smallpox in various stages as well as skin anthrax. Because infectious smallpox has been eradicated in the past 20 years, recent generations of doctors and medical personnel have never seen the disease, which is now a potential bioterrorism threat. Hunt is confident that once word spreads about the Warren Museum’s collection and revitalized catalog, it will be pressed into the sort of scientific service she and Fitz envision. Until then, she’ll be happy to show you the fetal skeletons, mummified hands, and crowbar skull that have captured the public’s imagination for generations.
https://news.harvard.edu/gazette/story/2001/11/beyond-phineas-gage/
The invention discloses a method of preparing nickel cobaltite/carbon nanotube composite materials, i.e. carbon nanotubes loaded with nickel cobaltite nanoparticles. The method comprises the following steps: dissolving Ni(NO3)2·6H2O and Co(NO3)2·6H2O in diglycol to prepare a mixed metal solution A with a Ni2+/Co2+ molar ratio of 1:2; dissolving NaOH and carbon nanotubes in diglycol and dispersing ultrasonically to form solution B; adding solution B dropwise to solution A to obtain a mixed solution; stirring the mixed solution thoroughly at 80 °C, transferring it to a reaction vessel, purging with CO2 and then adjusting the CO2 pressure to 10 MPa; placing the reaction vessel in an oil bath with a stirring rate of 400 r/min, a temperature of 140-220 °C, and a reaction time of 4-10 h; and washing the resulting product with ethanol and distilled water until neutral, centrifuging, and drying at 80 °C to obtain the nickel cobaltite/carbon nanotube composite material. The method causes little damage to the carbon nanotube structure, is simple to operate, and is environmentally friendly. The product is obtained directly in solution, without calcination. When applied to supercapacitor electrodes, the resulting nickel cobaltite/carbon nanotube composite materials exhibit relatively high specific capacitance and good electrochemical stability.
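Because the method fixes the Ni2+/Co2+ molar ratio at 1:2 (the stoichiometry of NiCo2O4), the reagent masses for any batch follow from simple arithmetic. The sketch below illustrates that calculation; the molar masses are standard values, while the batch size is an arbitrary example and not a quantity specified in the patent.

```python
# Illustrative reagent arithmetic for the 1:2 Ni:Co molar ratio used in the method.
# The batch size is a made-up example; it is not taken from the patent.
M_NI_NITRATE_HEXAHYDRATE = 290.79  # g/mol, Ni(NO3)2·6H2O
M_CO_NITRATE_HEXAHYDRATE = 291.03  # g/mol, Co(NO3)2·6H2O

def reagent_masses(n_ni_mmol: float) -> tuple[float, float]:
    """Masses (g) of the two nitrate hexahydrates for a Ni:Co molar ratio of 1:2."""
    n_co_mmol = 2.0 * n_ni_mmol
    mass_ni = n_ni_mmol / 1000.0 * M_NI_NITRATE_HEXAHYDRATE
    mass_co = n_co_mmol / 1000.0 * M_CO_NITRATE_HEXAHYDRATE
    return mass_ni, mass_co

ni_g, co_g = reagent_masses(2.0)  # e.g. 2 mmol Ni2+ requires 4 mmol Co2+
print(f"Ni(NO3)2·6H2O: {ni_g:.3f} g, Co(NO3)2·6H2O: {co_g:.3f} g")
```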
ARTS291 Beginning Sculpture - Section A, John Taylor, Spring 2020

The aim of this class is to develop a sculpture-making process, from the aesthetic idea to the finished state, in the context of the Italian art available in Florence, both historical and contemporary. Each student will be encouraged to find his/her own preferences and develop his/her own sensibilities; even with the difficulties of technical work in a beginning sculpture course, expressing these in your projects will be our priority. We will be exploring the two fundamental methods in sculpture: the additive method versus the subtractive method. Please note: there will be a field trip to the marble quarries of Carrara and Pietrasanta, date to be announced. First, through the creation of a modeled bas relief in clay, we will explore the differences between the additive method of working in clay and the subtractive process of carving a sculpture in a soft cement block. This course will also cover the primary technique in mold making: the waste mold.

Course Assignments

In order to pass the class, beginning students are asked to complete a course proposal of two sculptures of a good standard, with drawings for each project. The first project: a bas-relief in clay, including the making of a one-piece waste mold in plaster. The second project: carving a sculpture of your own choice in a soft cement stone. During museum field trips, beginning students are asked to make sketches and write a formal analysis of a Florentine sculpture of their choice. A series of short slideshows will be given by the instructor during the first half of the semester. Topics will include: the instructor's own work; workshop practice at the time of Canova; modern figurative sculpture; modern abstract sculpture.

Attendance

Attendance is required; absences will affect your grade: two unexcused absences result in a lowering of the grade by 1/3 of a letter; two late arrivals to class are considered the equivalent of one unexcused absence. Refer to the SACI Student Handbook.

Responsibilities

1) Your grade will be affected by two unexplained absences.
2) Be on time for class: the assignment will often be explained at the beginning.
3) If you know that you will be absent on a given date, you should notify the instructor ahead of time.

General Safety & Emergency Instructions

Click here for a pdf of SACI's General Safety and Emergency instructions.

Additional Studio & Safety Rules

It is essential that you leave your work space clean and functional for the next class. STORE your work on the shelves, NOT ON WORK TABLES. DO NOT POUR PLASTER DOWN THE DRAIN IN THE SINK OR DOWN THE TOILET. This will clog the plumbing. Only sculpture students are allowed in the sculpture facilities. Tools are not allowed out of the studio without special permission. Return all tools to their appropriate place in the tool cabinets. The sculpture studios have two air-exchange systems: one for ventilation of dust from plaster and the other for ventilation of dust from marble and stone. When working in the studios, the air-exchange systems must be used to maintain proper ventilation and extraction of undesirable fumes and odors. The control panels are located on the walls of the studios. Once the ventilation is turned on, it will run for 30 minutes. A timer will then shut it down. You are NOT permitted to use machinery unless you have been instructed in its use by the instructor of this course.
It is important to obey safety rules when working with power tools. When using these tools, you are required to wear dust masks, safety glasses, gloves, and smocks provided by SACI. When dealing with jobs that create dust (sanding, grinding, mixing powder-based materials), you will be required to wear a dust mask and to put back in the appropriate containers and/or cabinets all powders, clays, cements, glues, etc.

- Wear the smocks provided by SACI at all times.
- Do not wear jewelry, and be sure to tie long hair back and close to your head when working with or near the grinding wheel, operating other machinery, or using open flames
- Wear eye protection and gloves when working with hand tools or power tools that produce flying chips or airborne particles
- Wear safety shoes provided by SACI when working with heavy materials (20 kilos or more)
- Always use the appropriate ventilation system when working indoors to avoid excessive smoke or dust
- If necessary, use the shower in the sculpture studio to wash off dust or other substances
- Do not use Walkman Personal Stereos or listen to loud music while operating machinery or power tools
- Visitors are not allowed into the sculpture studio
- Smoking is not permitted indoors

Evaluation & Grading

Evaluation is determined by commitment to quality, completion of course assignments, in-class activities, fieldwork assignments, attendance, and your sketchbook.

Graduate Students

Students in MFA, MA, and Post-Bac programs are expected to complete additional assignments and to produce work at a level appropriate for students in a graduate program. They are graded accordingly and, if they successfully complete all course requirements for graduate students, receive graduate-level credit for the course.

Schedule

Please bring portfolios or CDs to the first class session.

Week 1: Monday, January 13
Week 2: Monday, January 20
Week 3: Monday, January 27
Week 4: Monday, February 3
Week 5: Monday, February 10
Week 6: Monday, February 17
Week 7: Monday, February 24 (work in progress; MIDTERM REVIEWS)
Week 8: MIDTERM BREAK (February 29 - March 8)
Week 9: Monday, March 9
Week 10: Monday, March 16
Week 11: Monday, March 23
Week 12: Monday, March 30
Week 13: Monday, April 6
Week 14: Monday, April 13
Week 15: Monday, April 20

Related Field Trips

The following is a list of related field trips, to be decided every Wednesday depending on the weather and the stage of work in progress:

- The doors of the Baptistry
- The Opera del Duomo
- The Donatello pulpit in San Lorenzo
- The Bargello (the national museum of quattrocento sculpture in Florence)
- Casa Buonarroti (Michelangelo's bas relief)
- The Accademia (the David and the unfinished Prisoners)

Recommended Reading

In compliance with the Higher Education Opportunity Act Textbook Provision, SACI provides, when possible, the International Standard Book Number (ISBN) and retail price of required and recommended reading. Note: It is not necessary to purchase the book below. It is available for loan or consultation in the SACI Worthington Library.

Kimon Nicolaides, The Natural Way to Draw, Mariner Books, 1990.
https://saci-florence.edu/arts291-beginning-sculpture-section-john-taylor-spring-2020
- Egoism, Altruism, and Social Justice: Theory and Experiments on Cooperation in Social Dilemmas
  Cooperative or altruistic behavior in the absence of egoistic incentives is an issue that has puzzled many social scientists. In this book an attempt is made to gain more insight into such behavior for a specific type of situation: the social dilemma. ...
- Problems of Power in the Design of Indicators of Safety and Justice in the Global South
  This paper explores the possibility that governance indicators can be harmonized across three levels (within an individual ministry or government department, across government as a whole, and at the level of global governance) and that doing so will produce effective governance.
- Assessment of current anglo-saxon proposals about criminal justice and the justification of legal punishment: Moore, Duff and Finnis on retribution, and the challenging compatibility with the up-and-coming Restorative Justice
  Pending.
- A New Era for Justice Sector Reform in Haiti
  In the months before the January earthquake, Haiti and its criminal justice institutions were the subject of an unprecedented effort by two UN agencies to measure the state of the Rule of Law. Drawing on the results of that pre-quake assessment as well as on post-quake assessments of the justice sector, this paper raises four questions that should guide recovery and further development of the police, courts, and prisons in Haiti: questions that focus attention on the meaning of justice sector reform for the people of Haiti, especially the poor.
- Prison Exit Samples as a Source for Indicators of Pretrial Detention
  Many governments, civil society organizations, and international development agencies today seek to limit the use of pretrial detention in criminal justice. Motivations vary. Some believe that pretrial detention is ordered indiscriminately and employed for unreasonably long periods; others are concerned with the conditions of confinement and the burdens detention places on families; still others worry about the criminogenic effects of pretrial incarceration.
- Students for Social Justice, 1986
  Black-and-white photograph of two female students reading intently. "While attending a Students for Social Justice protest of crimes in Central America, juniors Laurie Laird and Betsy Roemer take time out to read up on the situation." Redwood, 1986.
- Recognition, Justice and Social Pathologies in Axel Honneth's Recent Writings
  The paper discusses Axel Honneth's recent book on Reification and its relation to Honneth's theory of recognition. It critically examines Honneth's hypothesis concerning the existential roots of recognition, and compares two classical concepts of social critique, Reification and alienation, in order to argue for the superiority of the latter over the former.
- Kenya Judicial Sector Assessment: Social Context in the Magistrates Courts
  The proposed judicial sector assessment will focus on justice at the level of magistrate courts in Kenya. In addition to the general challenges faced by magistrates, it will concentrate on the role of the social context in dispensing justice. Social context in this sense means underlying socio-cultural structures and belief systems of a community as well as socio-economic backgrounds.
- Social justice: meanings and politics
  Now that the main British political parties are committed to the ideal of social justice, the political debate will focus on its meaning(s) and how and through which institutions it is best achieved. This article discusses key dimensions of social justice, conceptualised as distributional and recognition claims, with particular reference to poverty, inequality, disability and the perceived tension between diversity and solidarity in the welfare state.
- Not fair for me! The influence of personal relevance on social justice inferences
  In this paper, we argue that the personal relevance of a situation primarily influences spontaneous inferences about social justice, and does not necessarily affect explicit justice judgments. To test this hypothesis, two studies manipulated personal relevance and assessed justice inferences and judgments: participants read descriptions of fair or unfair events happening to stimulus persons referred to with first-person versus third-person pronouns (Experiment 1) or as ''a friend'' versus ''a stranger'' (Experiment 2).
http://temoa.tec.mx/es/search/apachesolr_search/Social%20justice?page=3
To address biodiversity issues in ecology and to assess the consequences of ecosystem changes, large quantities of long-term observational data from multiple datasets need to be integrated and characterized in a unified way. Linked open data initiatives in ecology aim at promoting and sharing such observational data at web scale. Here we present a web infrastructure, named Thesauform, that fully exploits the key principles of the semantic web and the associated data standards in order to guide a scientific community of experts in collectively constructing, managing, visualizing and querying a SKOS thesaurus. The study of a thesaurus dedicated to plant functional traits demonstrates the potential of this approach. A point of great interest is to provide each expert with the opportunity to generate new knowledge and to draw novel, plausible conclusions from linked data sources. Consequently, it is necessary to consider both the scientific topic and the objects of interest for a community of expertise. The goal is to enable users to deal with a small number of familiar conceptual dimensions, or, in other terms, facets. In this regard, a faceted search system based on SKOS collections, enabling thesaurus browsing according to each end-user's requirements, is expected to greatly enhance data discovery in the context of biodiversity studies.
https://link.springer.com/chapter/10.1007%2F978-3-319-08590-6_5
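As an illustration of the faceted browsing the abstract describes, where skos:Collection groupings serve as facets, the sketch below lists the members of each collection in a SKOS thesaurus using the rdflib library. It is a generic example that assumes a hypothetical local file traits.ttl; it is not code from the Thesauform infrastructure.

```python
# Generic sketch of facet-style browsing over a SKOS thesaurus with rdflib.
# "traits.ttl" is a hypothetical local file, and this is not Thesauform's own code.
from collections import defaultdict

from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("traits.ttl", format="turtle")

# Treat each skos:Collection as a facet and gather the preferred labels of its members.
facets = defaultdict(list)
for collection, _, member in g.triples((None, SKOS.member, None)):
    facet_label = g.value(collection, SKOS.prefLabel, default=collection)
    member_label = g.value(member, SKOS.prefLabel, default=member)
    facets[str(facet_label)].append(str(member_label))

for facet, members in sorted(facets.items()):
    print(f"{facet}: {', '.join(sorted(members))}")
```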
The Commercial Case Law Index is a collection of judgments from African countries on topics relating to commercial legal practice. The collection aims to provide a snapshot of commercial legal practice in a country, rather than presenting only traditionally "reportable" cases. The index currently covers 400 judgments from Uganda, Tanzania, Nigeria, Ghana and South Africa. Get started on finding judgments that are relevant to you by browsing the topic list on the left of the screen. Click the arrows next to the topic names to reveal a detailed list of sub-topics. Most judgments are accompanied by a short summary written by subject-matter expert postgraduate students from the University of Cape Town.

The applicants sought an order declaring that the respondent's premature removal of an advertisement from a billboard under the latter's control was unlawful and unconstitutional. The advertisement concerned Israel's occupation of Palestine, depicted by contrasting maps. The applicants contested the removal on several grounds, including freedom of expression, which is entrenched by section 16 of the Constitution of the Republic of South Africa. Because the respondent was not a state entity, this raised questions of when s 16 may be horizontally applied. The respondent justified its conduct in terms of its agreement with the second applicant, arguing that the removal was permissible due to the advertisement's alleged contravention of the City's advertising by-laws, the Practice Code of the Advertising Standards Authority, as well as its own internal policies. The court found no legitimate basis in the parties' agreement, on these facts, for the respondent's removal of the advertisement prior to the stipulated flighting period. As a private body, the respondent was not positively burdened with respecting, promoting and upholding the applicants' right to freedom of expression. However, it still faced a negative duty not to interfere with it. The court granted the application and directed the respondent to reinstate the advertisement, subject to practical qualifications. A portion of 9(h) of the Outdoor Advertising By-Laws of the City of Johannesburg was held to be invalid for exceeding the constitutional limitations on free speech.

The case concerned the extent of the National Media Commission's ('the Commission') legal mandate under the National Media Commission Regulations ('the Regulations'). It was argued that certain provisions amounted to censorship and to control and direction of mass media communication, as they required an operator to seek authorization of content prior to publication on a media platform, and were thus unconstitutional. The issues for determination were: whether the original jurisdiction of the court was properly invoked; whether the cumulative effect of the impugned provisions amounted to censorship; whether the cumulative effect amounted to control and direction over professional functions and operations; and whether the Standard Guidelines issued under the Regulations were vague and unconstitutional. The jurisdictional issue concerned whether the plaintiff sought a striking down of provisions without scrutiny to assist the court in its determination. This issue was to be determined on an examination of the relief sought and the pleadings. What was important was that both raised a case cognizable under the Constitution, which the plaintiff's documents did.
On the second issue, the court held that some form of censorship was permissible under the Constitution; however, where censorship laws are introduced, they must be justifiable by being reasonably required in the interest of national security, public order or public morality, or for the protection of the rights of another. What the second defendant wanted was akin to prior restraint. With reference to case law, the court held that prior restraint was not legally justifiable. Law must be precise and guide future conduct, which was not so in this case. The regulations were contrary to the Constitution. On whether the Commission was empowered to impose criminal sanctions, it was held that Parliament could not delegate this function to the Commission. As regards the third issue, the court had to define 'direction or control' in the context of the Constitution. Control or direction, as used in the provision, had the same meaning and effect as telling operators what they should or should not do in their publications. This function belongs to the media, not the Commission. The plaintiff's claim was upheld.
https://africanlii.org/commercial?f%5B0%5D=sm_vid_Tags%3AMedia%20Law
The spread of Coronavirus disease 19 (COVID-19) has led to many healthcare systems being overwhelmed by the rapid emergence of new cases. Here, we study the ramifications of hospital load due to COVID-19 morbidity on in-hospital mortality of patients with COVID-19 by analyzing records of all 22,636 COVID-19 patients hospitalized in Israel from mid-July 2020 to mid-January 2021. We show that even under moderately heavy patient load (>500 countrywide hospitalized severely-ill patients; the Israeli Ministry of Health defined 800 severely-ill patients as the maximum capacity allowing adequate treatment), the in-hospital mortality rate of patients with COVID-19 significantly increased compared to periods of lower patient load (250–500 severely-ill patients): 14-day mortality rates were 22.1% (Standard Error 3.1%) higher (mid-September to mid-October) and 27.2% (Standard Error 3.3%) higher (mid-December to mid-January). We further show that this higher mortality rate cannot be attributed to changes in the patient population during periods of heavier load.

Introduction

The rapid worldwide spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and of the disease it causes, coronavirus disease 19 (COVID-19), has caused a global pandemic [1] with devastating social and economic consequences. Throughout the pandemic, several health systems were overwhelmed by the rapid emergence of new cases within a short period of time. Notable examples include Lombardy in Italy [2], where ICU capacity reached its limit in March 2020, and New York City in the USA [3]. The heavy workload imposed on hospital services might have negatively affected patients' outcomes and exacerbated mortality rates. The first individual with COVID-19 in Israel was identified on 21 February 2020. In response, the Israeli Ministry of Health (MOH) gradually employed a series of social distancing measures in order to mitigate the spread of the virus [4]. Following the relaxation of these measures in May 2020, the number of new patients increased substantially, and on 10 September 2020 Israel became the country with the highest rate of COVID-19 infections per capita worldwide [5]. This period was referred to as the second wave of COVID-19, and as a result Israel was the first country worldwide to impose a second lockdown, in mid-September 2020. The restrictions were gradually loosened during October 2020, followed by an additional increase in the number of cases that led the government to impose a third lockdown in January 2021. In-hospital mortality of individuals with COVID-19 throughout the study period is described in Fig. 1a. Here, we assessed excess mortality using a model developed for predicting patient mortality based on data of the day-by-day patient disease course [6]. We show that during a peak of hospitalizations in September and October 2020, patient deaths significantly exceeded the model's mortality predictions, reverted to match the predictions as patient load subsided in October, and then showed renewed excess mortality as hospital load increased again from late December. As Israel has a relatively small geographical area and a small population, and national restrictions (such as school closures and lockdowns) were placed on the entire country at the same time, case and hospitalization dynamics were similar across different hospitals, and so the analysis was conducted at the national level.
Hospitalization dynamics for Israel's 15 largest hospitals, which account for almost 75% of national COVID-19 related hospitalizations during this period, are shown in Supplementary Fig. 1.

Results

We applied a prediction model to nationwide hospitalization data originating from the Israeli MOH to predict mortality of hospitalized patients with COVID-19 based on their age, sex, and clinical state on their first day of hospitalization. In-hospital mortality predictions were made using Monte-Carlo methods based on a multistate survival analysis (see below) and a set of Cox regression models, first constructed and validated on a nationwide cohort during the first stages of the pandemic in Israel [6] (see Methods section). Overall, from 15/07/2020 to 20/01/2021, 22,636 individuals were hospitalized with COVID-19 infection in Israel and were included in the analysis. Mean age was 59.8 ± 21.9 years (median 63 years), and 11,070 (48.9%) were female. We first divided the data into 27 weeks and assigned each patient according to week of hospitalization. Patients' characteristics are presented in Table 1. Patients who died on the same day as hospital admission were not included in the analysis. The model, already trained and validated on a cohort of 2703 hospitalized patients early in the course of the pandemic [6], was modified, re-trained and validated on data of 5966 individuals hospitalized from 15/07/2020 to 08/09/2020 (time-period I). The model was then applied to data of individuals hospitalized from 09/09/2020 to 20/01/2021. We divided this interval into three time periods (denoted II, III, IV) according to hospital load: periods II (09/09/2020 to 27/10/2020) and IV (15/12/2020 to 20/01/2021) are those in which the daily number of severe and critical patients exceeded 500; see Fig. 1b. Excess mortality was calculated as the difference between observed and predicted mortality. Strikingly, during time-periods II and IV, the observed 14-day in-hospital mortality was respectively 22.1% (Standard Error (SE) 3.1%) and 27.2% (SE 3.3%) higher than predicted by the model (Table 2), whereas our model accurately predicted in-hospital mortality during periods I and III, where the observed 14-day mortality was 0.6% (SE 5.1%) and 10.8% (SE 5.7%) higher than predicted by the model, respectively; see Fig. 1c and Table 2. Similar trends were observed for 28-day mortality (Table 2). Cumulative expected and actual death curves for each hospitalization week are presented in Fig. 2; Supplementary Fig. 2 presents calibration plots by hospitalization week.

Discussion

In this study, we show that even under moderately heavy patient load (above 500 severely-ill patients hospitalized nationwide), the in-hospital mortality rate of patients with COVID-19 in Israel significantly increased compared to periods of lower patient load (250–500 severely-ill patients). Notably, the threshold defined by policy makers in Israel as the upper bound beyond which the healthcare system would not be able to adequately treat patients was 800 severe and critical patients [7]. In addition, the increase in observed mortality was evident despite the fact that, throughout the pandemic, clinical experience in the treatment of COVID-19 patients increased, along with a better understanding of pharmacologic (such as corticosteroids [8] and remdesivir [9]) and non-pharmacologic (such as proning [10]) treatment modalities that may benefit patients.
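As an aside on the excess-mortality comparison reported in the Results above (observed 14-day deaths relative to model-predicted deaths): the short sketch below illustrates what such a relative excess and an uncertainty estimate might look like in code. This is not the authors' code; the function names, the use of a simple patient-level bootstrap for the standard error (the paper derives its uncertainty from the model's Monte-Carlo predictions), and the example numbers are illustrative assumptions only.

```python
import numpy as np

def relative_excess_mortality(observed_deaths, predicted_deaths):
    """Relative excess mortality: (observed - predicted) / predicted."""
    return (observed_deaths - predicted_deaths) / predicted_deaths

def bootstrap_se(observed, predicted_per_patient, n_boot=1000, seed=0):
    """Rough patient-level bootstrap standard error (illustrative only).

    observed: 0/1 array, death within 14 days for each patient.
    predicted_per_patient: model-predicted 14-day death probability per patient.
    """
    rng = np.random.default_rng(seed)
    n = len(observed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample patients with replacement
        stats.append(relative_excess_mortality(observed[idx].sum(),
                                                predicted_per_patient[idx].sum()))
    return np.std(stats)

# Hypothetical example: 1,000 patients with an average predicted 14-day
# mortality of 10% and 122 observed deaths give ~22% relative excess mortality.
observed = np.zeros(1000)
observed[:122] = 1
predicted = np.full(1000, 0.10)
print(relative_excess_mortality(observed.sum(), predicted.sum()))  # ~0.22
print(bootstrap_se(observed, predicted))
```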
We postulate that the excess mortality is most likely due to the rapid escalation in the number of hospitalized patients with COVID-19 during these time periods in Israel, which may have resulted in an insufficiency of health-care resources, thereby negatively affecting patient outcomes. Theoretically, several other explanations may account for the increased mortality observed. First, it is possible that the excess deaths were driven by a more vulnerable patient population, with a higher risk for mortality, that was admitted for hospitalization around time-periods II and IV. However, the model adjusts for age, sex, and clinical state on the first day of hospitalization, and the predictions therefore take these differences into account. Although our data did not include information on patient comorbidities that may also influence the mortality rate from COVID-19, it was previously shown that this information is not necessarily essential for accurate mortality prediction for hospitalized patients when utilizing a multistate survival model that takes into account the patient's clinical state upon hospitalization [6]. Accounting for clinical state also somewhat reduces the likelihood that the increased mortality was due to deferred hospitalizations. Second, it is possible that during these time periods a more virulent strain was circulating in Israel. However, there is no evidence for the existence of such a strain, and the fact that the period lasted only 7 weeks and was followed by a period in which the model achieved accurate predictions makes this unlikely. The fact that increased mortality is once again observed in time-period IV, in which hospital load is once again high, further strengthens our hypothesis. Our study has several limitations. First, the increased mortality observed during periods of high COVID-19 related morbidity may be due to factors specific to the Israeli healthcare system. The Israeli healthcare system is universal and requires all citizens to join one of the official non-profit health insurance organizations. Regional [11] or financial [12] disparities affecting the availability of health-care resources in other countries, as well as racial variation [13], may affect COVID-19-related mortality. Future studies should be conducted in order to determine whether this effect is also observed in other countries with different healthcare systems, and to identify the specific patient threshold that represents healthcare capacity in each system. Full model code is available for use at https://github.com/JonathanSomer/covid-19-multi-state-model. Second, our findings may be influenced by the availability of diagnostic testing for COVID-19 as well as by accurate and complete documentation of the disease severity state by clinicians. However, testing policy and physicians' documentation practices did not change significantly in Israel throughout the study period. Moreover, we only included data after 13 July 2020, when uniform criteria for COVID-19 disease severity were applied by the MOH in all hospitals in Israel. While tempting to do so, it is a highly challenging task to estimate any functional dose-response relation from these results and data, and it is beyond the scope of this work. Nonetheless, we can say with confidence that above a certain number of hospitalized severe or critical patients (~450), excess death seems bound to occur.
In conclusion, we have shown that the mortality of hospitalized patients with COVID-19 in Israel was associated with health-care burden, reflected by the simultaneous number of hospitalized patients in a severe condition. Our work emphasizes that even in countries in which the healthcare system did not reach the point defined as insufficiency, increased hospital workload was associated with reduced quality of care and increased patient mortality, after ruling out factors related to changes in the hospitalized population. In addition, our study highlights the importance of quantifying excess mortality in order to assess quality of care, and of defining an appropriate carrying capacity of severe patients in order to guide timely healthcare policies and allocate appropriate resources.

Methods

Data

We analyzed data originating from the Israeli MOH on COVID-19 related mortality in Israel from 15/07/2020 to 20/01/2021. Patient data included information on age, sex, date of positive SARS-CoV-2 polymerase chain reaction (PCR) test, date of hospitalization, and clinical outcome (death or discharge from hospitalization) for each individual. In addition, daily information on disease severity during hospitalization was available. Classification of disease severity was based on the following clinical criteria, applied on 13 July 2020 by the Israeli MOH: mild illness – individuals who have any of the various signs and symptoms of COVID-19 (e.g., fever, cough, malaise, and loss of taste and smell); moderate illness – individuals who have evidence of pneumonia by clinical assessment or imaging; severe illness – individuals who have a respiratory rate >30 breaths per minute, SpO2 <93% on room air at sea level, or a ratio of arterial partial pressure of oxygen to fraction of inspired oxygen (PaO2/FiO2) <300 mmHg; and ventilated/critical (denoted in this paper as Critical) – individuals with respiratory failure who require ventilation (invasive or non-invasive), multiorgan dysfunction or shock [14]. These criteria were determined based on NIH [15] and WHO [1] definitions.

Statistical analysis

In order to assess whether mortality of hospitalized patients with COVID-19 in Israel was associated with health-care burden, we applied a multistate prediction model. The model is a modification of a Cox regression-based survival analysis model previously described by Roimi et al. [6], which predicts the clinical course of individual patients. The model adjusts for right censoring, recurrent events, competing events, left truncation, and time-dependent covariates. The original aim of the model was to allow timely allocation of sufficient healthcare resources and skilled medical professionals by medical centers. Weekly predictions of mortality and of the number of severe cases based on the model were presented to and utilized by policy makers in Israel. A hospitalized patient is in one of four clinical states: mild, moderate, severe, or critical; the exact definitions of the states are detailed above. The multistate model has five states: (i) mild or moderate, (ii) severe, (iii) critical, (iv) discharged, and (v) deceased. This multistate model consists of 14 Cox regression models, one for each possible state-to-state transition, shown in Fig. 3. Each of the 14 semiparametric models includes its own set of covariates, possibly including time-dependent covariates. Specifically, we included age, sex, and clinical state at hospitalization as baseline covariates.
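To make the multistate structure concrete, a minimal sketch of the five states, a transition-to-model mapping, and the baseline covariates is given below. This is not the authors' implementation (the referenced repositories contain the actual code): the variable names, the use of a plain Python dictionary, and the particular transitions listed are illustrative assumptions, and the full set of 14 transitions is defined in the paper's Fig. 3.

```python
# Illustrative sketch of the multistate structure described above.
STATES = ["mild_or_moderate", "severe", "critical", "discharged", "deceased"]

# Each (from_state, to_state) pair is modeled by its own Cox regression;
# only a subset is listed here, the full set of 14 is given in Fig. 3.
TRANSITIONS = {
    ("mild_or_moderate", "severe"): "cox_model_1",
    ("mild_or_moderate", "discharged"): "cox_model_2",
    ("severe", "mild_or_moderate"): "cox_model_3",
    ("severe", "critical"): "cox_model_4",
    ("severe", "deceased"): "cox_model_5",
    ("critical", "severe"): "cox_model_6",
    ("critical", "deceased"): "cox_model_7",
    # ... remaining transitions as per Fig. 3
}

# Baseline covariates shared by the transition-specific Cox models.
BASELINE_COVARIATES = ["age", "sex", "state_at_hospitalization"]
```

A Monte-Carlo prediction then repeatedly samples a path through these transitions for each patient, which is what the next paragraph describes.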
We also added time-dependent covariates encoding the hospitalization history of the patient: cumulative days in hospital and whether the patient had been in a severe or critical state before. Making predictions based on our proposed multistate model requires estimating the absolute risks, also known as the cumulative incidence functions. Estimating the absolute risks involves estimating the probabilities of moving between states and the time spent in each state, and integrating over all possible combinations of entry state, exit state, and hospital length of stay. Since hospitalization consists of potentially multiple transitions between transient states (up to 14 transitions for a patient), the absolute risks have no tractable analytic forms. Thus, we performed Monte-Carlo (MC) sampling from the multistate model in order to obtain consistent predictions for each individual patient and for the cohort. A detailed description of the MC sampling procedure is given in Roimi et al. [6]. Results of the Cox survival analysis are shown in Tables 1–4 of the Supplementary Information.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The data that support the findings of this study originate from the Israeli Ministry of Health. Restrictions apply to the availability of these data, and so they are not publicly available.

Code availability

Analysis source code is available at https://github.com/tomer1812/covid19-israel-multi-state-hospitalization-model (https://doi.org/10.5281/zenodo.4567352). Model source code is available at https://github.com/JonathanSomer/covid-19-multi-state-model. All analyses were performed using the statistical software R version 4.0.3 and Python version 3.6.

References

1. WHO. WHO Coronavirus Disease (COVID-19) Dashboard. https://covid19.who.int/ (2021).
2. Odone, A., Delmonte, D., Scognamiglio, T. & Signorelli, C. COVID-19 deaths in Lombardy, Italy: data in context. Lancet Public Health 5, e310 (2020).
3. Peters, A. W., Chawla, K. S. & Turnbull, Z. A. Transforming ORs into ICUs. N. Engl. J. Med. 382, e52 (2020).
4. Miller, D. et al. Full genome viral sequences inform patterns of SARS-CoV-2 spread into and within Israel. Nat. Commun. 11, 5518 (2020).
5. The Times of Israel. Israel has highest rate in world of new coronavirus infections per capita – TV. https://www.timesofisrael.com/israel-has-highest-rate-in-world-of-new-coronavirus-infections-per-capita-tv/ (2020).
6. Roimi, M. et al. Development and validation of a machine learning model predicting illness trajectory and hospital utilization of COVID-19 patients - a nationwide study. J. Am. Med. Inform. Assoc. https://doi.org/10.1093/jamia/ocab005 (2021).
7. The Times of Israel. Seriously ill virus patients top 800, number once cited as max for hospitals. https://www.timesofisrael.com/seriously-ill-virus-patients-top-800-number-once-cited-as-max-for-hospitals/ (2020).
8. WHO Rapid Evidence Appraisal for COVID-19 Therapies (REACT) Working Group. Association between administration of systemic corticosteroids and mortality among critically ill patients with COVID-19: a meta-analysis. JAMA 324, 1330–1341 (2020).
9. Beigel, J. H. et al. Remdesivir for the treatment of Covid-19 - final report. N. Engl. J. Med. 383, 1813–1826 (2020).
10. Zang, X. et al. Efficacy of early prone position for COVID-19 patients with severe hypoxia: a single-center prospective cohort study. Intensive Care Med. 46, 1927–1929 (2020).
11. Ji, Y., Ma, Z., Peppelenbosch, M. P. & Pan, Q. Potential association between COVID-19 mortality and health-care resource availability. Lancet Glob. Health 8, e480 (2020).
12. Wollenstein-Betech, S., Silva, A. A. B., Fleck, J. L., Cassandras, C. G. & Paschalidis, I. C. Physiological and socioeconomic characteristics predict COVID-19 mortality and resource utilization in Brazil. PLoS ONE 15, e0240346 (2020).
13. Bassett, M. T., Chen, J. T. & Krieger, N. Variation in racial/ethnic disparities in COVID-19 mortality by age in the United States: a cross-sectional study. PLoS Med. 17, e1003402 (2020).
14. Israeli Ministry of Health. Uniform definition of disease severity for hospitalized patients with COVID-19 [in Hebrew]. https://www.gov.il/he/departments/publications/reports/mr-294754420 (2020).
15. National Institutes of Health (NIH). Coronavirus (COVID-19). https://www.nih.gov/coronavirus (2021).

Acknowledgements

M.G. received support from the U.S.-Israel Binational Science Foundation (BSF, 2016126). We thank the following for their contributions to our efforts: Meir Bruhim, Strategic Planning, Israeli MOH; Avidan Cohen, Business Intelligence division, Israeli MOH; Linoy Vaknin-Alon, Business Intelligence division, Israeli MOH; and Dr. Danny Eytan, Rambam Health Care Campus and Technion - Israel Institute of Technology.

Competing interests

The authors declare no competing interests.

Cite this article: Rossman, H., Meir, T., Somer, J. et al. Hospital load and increased COVID-19 related mortality in Israel. Nat. Commun. 12, 1904 (2021). https://doi.org/10.1038/s41467-021-22214-z
https://www.nature.com/articles/s41467-021-22214-z
Treatment has been shown to reduce the incidence of hospitalisation or death in people at higher risk of severe illness from COVID-19.[2] Patients must meet Pharmac access criteria and the prescription must be endorsed accordingly.[1] Note that Paxlovid cannot be dispensed to patients who do not fit the access criteria.

Strong interaction potential

Nirmatrelvir is a SARS-CoV-2 3CL protease inhibitor that inhibits viral replication, while ritonavir inhibits cytochrome P450 3A-mediated metabolism of nirmatrelvir, extending the half-life to allow twice-daily dosing.[2] Ritonavir's strong inhibition of CYP 3A4 and 2D6 enzymes may increase blood levels of many concomitant medicines, potentially causing toxicity.[2] Both Paxlovid antivirals are CYP3A substrates, so drugs that inhibit or induce CYP3A may, respectively, increase or decrease Paxlovid concentrations. Thus, there is potential for serious drug–drug interactions and adverse events, loss of virological response, and development of resistance.[2] For certain patients it will not be safe or appropriate to use Paxlovid; however, drug interactions that can be safely managed should not preclude Paxlovid use.[3] Strategies for managing interactions include adjusting or temporarily stopping co-medicine doses, using alternative co-medicines, and increasing monitoring of adverse events or co-medicine drug levels.[2,3] Because many patients may be taking interacting medicines, decisions about prescribing Paxlovid must be made carefully and must consider the individual patient's condition, medical history, comorbidities, all current drug use, and the potential risks and benefits of treatment.[3] Good communication between prescriber, dispenser and patient is essential. Inform patients, and if necessary whānau, of the potential for Paxlovid interactions with other medicines, including over-the-counter, complementary and recreational drugs. Alert patients to signs and symptoms of adverse effects and follow up with written instructions if necessary. Discuss any co-medication changes with patients and notify dispensers. Visit tinyurl.com/HAH-Paxlovid for comprehensive information.

Paxlovid practicalities

Co-packaged nirmatrelvir with ritonavir (Paxlovid) is a new oral antiviral medicine for treating adults with COVID-19 who meet access criteria in the community.[1] Paxlovid is supplied as two individual medicines packaged together; the recommended dose is nirmatrelvir 300 mg (two 150 mg tablets) with ritonavir 100 mg (one tablet), taken twice daily for five days, starting within five days of symptom onset.[2] The nirmatrelvir component should be halved in patients with moderate renal impairment, which will require dispensers to break into the packaging. There is potential for multiple drug–drug interactions with commonly prescribed medicines, which warrants careful consideration of patient suitability and clear communication.[2]

References
1. Pharmac. COVID-19 oral antivirals: Access Criteria. 2022. https://pharmac.govt.nz/news-and-resources/covid19/covid-oral-antivirals Accessed April 2022.
2. Medsafe. New Zealand data sheet. Paxlovid. 2 March 2022. www.medsafe.govt.nz/profs/Datasheet/p/paxlovidtab.pdf Accessed March 2022.
3. National Institutes of Health (US). COVID-19 treatment guidelines. Updated 24 February 2022. www.covid19treatmentguidelines.nih.gov/therapies/antiviral-therapy/ritonavir-boosted-nirmatrelvir--paxlovid-/ Accessed March 2022.
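The dose-selection rules in the "Paxlovid practicalities" section above (standard dose, halved nirmatrelvir in moderate renal impairment) can be summarised in a short sketch. This is illustrative only and not a prescribing tool: the function name and the eGFR cut-offs used here to define moderate (30 to <60 mL/min) and severe (<30 mL/min) renal impairment are assumptions drawn from the Medsafe datasheet cited above, and prescribing decisions must follow the access criteria and the full datasheet.

```python
def paxlovid_dose(egfr_ml_min):
    """Illustrative dose-selection logic for Paxlovid (nirmatrelvir/ritonavir).

    Assumed cut-offs: eGFR >= 60 -> standard dose;
    30 <= eGFR < 60 (moderate renal impairment) -> halved nirmatrelvir;
    eGFR < 30 -> not recommended. Always check current guidance.
    """
    if egfr_ml_min >= 60:
        return "nirmatrelvir 300 mg (2 x 150 mg) + ritonavir 100 mg, twice daily for 5 days"
    if egfr_ml_min >= 30:
        return "nirmatrelvir 150 mg (1 x 150 mg) + ritonavir 100 mg, twice daily for 5 days"
    return "not recommended in severe renal impairment"

print(paxlovid_dose(45))
```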
https://www.akohiringa.co.nz/education/new-antiviral-targets-covid-19
No matter what age, setting healthy boundaries for our children can be confusing... are you confident in your plan?
- Do you feel as connected with your child as you would like to?
- Are friendships difficult for your child?
- Does your child struggle in extracurricular activities?
- Are things getting broken too often?
- Do small moments turn into full meltdowns?
- Does your child agonize over making decisions?
As parents, one of our most important jobs is to create boundaries that help our children be emotionally healthy and prepared for the world. As Intentional Parents, our most important job is to create those boundaries based on building OUR children a strong Safety Net that will support them specifically in achieving SECURE attachment, unwavering TRUST, SUCCESSFUL relationships, and feelings of SAFETY and CONNECTION. I know it can feel very overwhelming and even seem impossible at times. As a parent of 6 children who each had their own set of needs, I often found myself dealing with tantrums and worrying about their self-esteem. I watched some of my children sabotage events and friendships with their behaviors and choices. And many times I felt "pushed back" in our relationship, which caused huge disconnects over and over again. The Safety Net was not yet in place - I had to do something different! The right set of boundaries creates an "environment" in which our children will flourish and succeed. They will acquire better self-esteem, be confident in their ability to make a positive impact on the world, and be more successful in all of their relationships; their ability to trust and depend on us will blossom, and so much more! Just let yourself think of the possibilities! Okay, so the reality is that we all know, one way or another, that our job includes setting (or not setting) healthy boundaries for our children. However, no one teaches how to do that with kids who have any level of struggle with emotional and developmental hurdles. No one talks about intentionally aligning the size of your child's world with his actual abilities, rather than expecting him to already have the abilities that help him manage his world. Nobody explains how we can impact trust by focusing on creating healthy brain chemistry. There is no one else who coaches parents to keep from being diluted in their relationship with their child. When you leave this class, you will have a very specific strategy you can use to create the size of world in which your child can succeed and your relationship can flourish. Among other new clarity and knowledge, you will have the words to explain to others how they can support you in decreasing your child's anxiety and increasing his sense of safety. As Intentional Parents, we know the healing power of having an action plan in place - you will leave this class with the first 5 steps of your plan in place immediately!
WHEN?? Wednesday June 1st at 7:30 pm Central - But remember... if you can't make the date, you get the recording!
https://www.tohavehope.com/building-boundaries-that-foster-emotional-health.html
Talilei Orot: Mizmor L’Todah-Chanukah The Talilei Orot quotes from Rav Chaim Vital: “A Song of Thanksgiving.” The Thanksgiving Offering is not brought because of a sin. It is possible that someone bringing such an offering to express his gratitude for a great miracle in his life, will look at the other people standing in line in the Temple to bring their sin offerings with a certain arrogance. He is coming to express gratitude and not because of a sin!
https://thefoundationstone.org/talilei-orot-mizmor-ltodah-chanukah/
The Power of Intuition Many of us place our trust in things we can see, touch, hear, taste, or smell. “These are the things,” we think, “that truly exist. This is reality.” It’s true that our senses connect us to the world in a very real and meaningful way (part of being present, after all, is being aware of your body and of the physical world around you), but that doesn’t mean they’re the only powers we have at our disposal. We cannot, must not, forget about the power of intuition. What is intuition? Unlike the five senses, which allow us to take in observable, measurable data from our surroundings, intuition comes from within. Your intuition is only available to you, and can only be interpreted by you. And, intuition is subjective. In fact, intuition often flies in the face of observable facts. Say you go out to lunch with a friend. He smiles when he greets you, shares information about his day, and laughs at the funny story you tell. But you know this friend well, and you can see that something is “off.” His smile is a little too wide. He talked a little too quickly about his day. His laugh felt a little forced. A casual observer would see a smiling, talking, laughing man spending time with a friend. But your intuition tells you that that appearance of “normality” is deceiving. There’s evidence in front of you, but deep down, you don’t trust it. You get serious, and ask your friend if something is wrong. He breaks down and out pours all the problems and negativity he’s been holding back. I’ve had this exact scenario play out with friends and family over the years, and it amazes me how my intuition, at least in cases like these, rarely leads me astray. Where intuition comes from Intuition can be (and often is) subconsciously formed and shaped by past experiences. You’ve seen your friend act that way before, and it turned out that he was having a hard time then. Your brain learned from that experience, so that when the data presents itself again, it triggers your intuition alarm. Because of this, I would argue that intuition has greater potential than we sometimes give it credit for. Intuition is a sum of our collective experiences, melding together into a feeling that can, if we let it, guide us in one direction or another. And sometimes, of course, those gut feelings seem to come out of nowhere, presenting us with options we may not have considered otherwise. Why is intuition powerful? Whether intuition is a result of past experiences, or an out-of-the-blue feeling, one thing is for sure: intuition is powerful. Here are a few reasons why. Intuition invites simplicity. Simplicity is, in and of itself, powerful. When things are simple, they’re uncomplicated and easy. With simplicity, life is focused and purposeful. You can move forward without second-guessing or questioning yourself. If you follow your intuition, you’re inviting simplicity. Rather than overthinking every question or problem that comes your way, you trust your gut. What could be simpler than following your intuition? Your intuition always has your best interests at heart. A large part of the power of intuition lies in the fact that your intuition always has your best interests at heart. Remember: your intuition is for you and you alone. It wants what’s best for you, because it’s thinking only of you. That doesn’t mean your intuition will always lead you to be selfish. On the contrary, your intuition could lead you to help someone else or give something up. 
But the point is, it will only tell you to do what it thinks you need. It will do its best to lead you to joy. Following your intuition can help you make better choices. Trying to make a decision based on data alone leads to analysis paralysis. You can’t make a decision because there is data supporting both choices. You need that gut feeling to help you make the data personal. You need to discover and interpret how each possibility makes you feel, and you need to use those feelings to make your decisions. There’s even research that suggests that people who focus on intuition when making big purchases are happier with their decisions in the long run. Trusting your intuition allows you to get to know yourself. Is there anything more valuable than knowing yourself? When you know yourself, you’re able to better understand yourself: why you think, feel, and act the way you do; how to change bad habits into good ones; what motivates you; what makes you happy. Knowing yourself is key to creating a happy life for yourself. Trusting your intuition is trusting yourself. It’s learning to recognize that voice inside your head—the voice that is uniquely yours—and deciding that maybe, just maybe, it knows what it’s talking about. In this way, intuition helps you not only get to know yourself, but also to build confidence in yourself. How to harness the power of intuition Intuition clearly matters, and it’s clearly powerful, but how can you harness the power of intuition so that it can benefit you as a creative and as a person? Know what your gut feeling feels like. Again, self-knowledge is empowering. You have to know what your gut feeling feels like if you’re going to put it to work for you. It can be easy to confuse that feeling with anxiety, fear, criticism, desire, ambition, or any other number of feelings. Learn what your feelings feel like in your body. Isolate that inner voice. Recognize what it feels like and the kinds of things it tells you to do. As you get to know yourself, you’ll get to know your intuition. Track thoughts inspired by intuition. Keep a journal or a running list of things your intuition tells you. You may start to see patterns. Maybe your intuition is very strong when it comes to your career, but less so when it comes to relationships. Maybe you’re more likely to listen to your intuition when it’s pushing you a certain way. What thoughts do you dismiss without even entertaining them? As you track the thoughts inspired by your intuition, you’ll become more familiar with what that inner voice sounds like, allowing you to harness it more easily. Combine feelings with data. I’m not suggesting you suddenly live your life 100% according to how you feel. If I lived and breathed based on my intuition, I would probably stay home all day getting lost in books and ordering takeout. It’s important to balance your intuition with data: “I can’t stay home all day, because I have bills to pay, so I have to go to work.” “My gut is telling me I love that Porsche, but my bank account is telling me to go for the Honda.” Feelings and data are important when it comes to making smart decisions. As you learn to harness the power of intuition, don’t forget or ignore the value of observation, facts, and data. Try intuitive eating. Wanting to do something simple to start tapping into your intuition? Try a little intuitive eating. This is an “anti-diet” philosophy based on the idea that your body knows what it needs and will tell you what that is. 
It will also tell you when it’s hungry and when it’s full, and you need to listen. When you feel hungry, make sure it’s a physical need, not an emotional one. Eat because your body needs it, not because your body thinks it needs it. Don’t focus on eating “good” foods over “bad” foods; instead, focus on eating what your body wants. If it wants something fresh, cold, and crisp, go for the salad. If it needs something hearty and comforting, get yourself some mac-n-cheese. The point is to listen to your body and its cues, and to not pass judgement on whatever it is telling you. When you feel stuck, stop and listen. Another quick hack: when you get stuck, stop and listen. I know I often have moments in my day when I feel tired, physically and mentally, and I’m just not sure what to do next. I want to keep moving and getting things done, but I feel paralyzed or stuck for one reason or another. At times like these, it’s vital that you stop revving your engine so you can pause and get into gear. Find a quiet place. Turn off your phone or other distractions. Breathe deep. Listen to what your intuition tells you. Follow it, even if it seems like something small or insignificant (my intuition has told me more than once that I need a snack; surprisingly, that rise in blood sugar can help tremendously). The power of intuition lies in its ability to empower you. When you listen to your intuition, you are guiding yourself. You are building trust in the one person you’ll always have with you. You are learning to give yourself what you need, instead of what you or other people want. Listen to your intuition. Listen to your inner voice. You are more powerful than you think. Find your inner voice, with Design.org. Get free, personalized coaching to help you tap into your intuition and create a happy life. Start by taking our assessment: it’s free, quick, and easy.
https://design.org/the-power-of-intuition/
© 2015 Ivette Vargas-O'Bryan and Zhou Xun. All rights reserved. Recent academic and medical initiatives have highlighted the benefits of studying culturally embedded healing traditions that incorporate religious and philosophical viewpoints to better understand local and global healing phenomena. Capitalising on this trend, the present volume looks at the diverse models of healing that interplay with culture and religion in Asia. Cutting across several Asian regions from Hong Kong to mainland China, Tibet, India, and Japan, the book addresses healing from a broader perspective and reflects a fresh new outlook on the complexities of Asian societies and their approaches to health. In exploring the convergences and collisions a society must negotiate, it shows the emerging urgency in promoting multidisciplinary and interdisciplinary research on disease, religion and healing in Asia. Drawing on original fieldwork, contributors present their latest research on diverse local models of healing that occur when disease and religion meet in South and East Asian cultures. Revealing the symbiotic relationship of disease, religion and healing and their colliding values in Asia often undetected in healthcare research, the book draws attention to religious, political and social dynamics, issues of identity and ethics, practical and epistemological transformations, and analogous cultural patterns. It challenges the reader to rethink predominantly long-held Western interpretations of disease management and religion. Making a significant contribution to the field of transcultural medicine, religious studies in Asia as well as to a better understanding of public health in Asia as a whole, it will be of interest to students and scholars of Health Studies, Asian Religions and Philosophy.
http://repository.essex.ac.uk/14510/
annual report design, corporate brochure design, print and production

Project Scope
- Design for print
- Image commission and re-touching
- Illustration of graphical data
- Art direction
- Copy editing

Annual Report

What happened
The Help the Aged annual report was usually done in-house, but 2007 was such an important year that the report needed a new lease of life to communicate the charity's successes and hopes to high-level stakeholders.

The brief
The brief was to 'bring the report alive' using content provided by the client, then to design an appealing framework to communicate the charity's achievements, goals and financial standing - all whilst working within a 'tight' budget.

Our solution
We implemented a simple structure and stripped the information back, separating the real-life quotes from the content. We selected strong, emotionally engaging photography and combined it with very strong typography and elegant graphics.

Why it worked
The annual report design was seen as very progressive but still worked within Help the Aged's brand guidelines, keeping type sizes large and meeting accessibility standards. We increased the content by increasing the page count without increasing the print budget.

Why the client loved it
The design had a positive impact, both internally and externally. It succinctly communicated the scope of the charity's work and its wide range of services. The client was particularly impressed that it was produced within a very tight timeframe and budget. "I like working with Navig8; they always push the boundaries of your own perceptions and they are always spot on. Of the moment communications that are clear, well designed and engaging. Thanks Navig8!"
http://annualreportdesignuk.co.uk/annual_report_help_the_aged_2008.html
- The first drone legislation, passed in 2018, legalized drones and dug the foundations for important infrastructures such as the Digital Sky platform,...
- Can You Fly FPV Drones in India? All You Need to Know. Ever since the UAS Rules 2021, India's latest drone regulations, were finalized in March this year, there has been some confusion in...
- What Do The UAS Rules 2021 Mean For Indian Drone Pilots? The new Indian drone regulations, titled UAS Rules 2021, came into effect on 12th March 2021. Along with these regulations came skepticism...
- The UAS Rules 2021: An Overview. NOTE: The following UAS Rules are older regulations and are not applicable now. They have now been replaced by the Drone...
- NPNT Compliant Drones in India. The NPNT (No Permission-No Takeoff) rule is an integral part of legal drone flight in India. Obtaining an NPNT compliance certificate involves...
- How to Become a Drone Manufacturer in India. The Government of India finalized the new drone regulations called UAS Rules on 12th March 2021. The new regulations put in place...
US Regulations
- A Guide to Reading NOTAMs: Part 107 Test Prep. While you study the National Airspace System (NAS), the airspace classes, and sectional charts, it is important that you're aware of NOTAMs...
- A Guide to Reading METARs and TAFs: Part 107 Test Prep. If you're preparing for the FAA's Part 107 test, you might've come across two important topics - METARs and TAFs. Reading METARs...
- How to Study for Part 107 Test: The Complete Guide 2021. If you've wanted to become a commercial drone pilot in the United States, you might have come across the Part 107 test...
- What is Section 2209 and Why Must the FAA Comply With it? As the number of drones in the United States climbs steadily, a group of drone industry advocates has petitioned the FAA to...
- FAA Launches the TRUST Test for Recreational Drone Pilots. The only step for recreational pilots until now was registering their drone with the FAA and not flying in restricted airspace. However,...
- The Best Drone Pilot Training Schools in the US. Becoming an FAA Part 107 certified drone pilot requires you to take the FAA airman knowledge test (Read about the Part 107...
Popular
- DJI Introduces Drone For Precision Agriculture and Land Management. DJI has introduced a multispectral imaging drone designed for precision agriculture and land...
- Best Drone Insurance Providers in the UK: 2021 Guide. If you're a hobby pilot in the UK, you are not legally bound...
- Naxals Use Drones Over CRPF Camp. In Chhattisgarh's south Bastar region, drones were seen hovering over a strategically important...
- Cyprus Deploys Drones to Monitor Turkish Oil Drilling. Drones have been deployed by the nation of Cyprus, to monitor Turkish attempts...
- Drones Are Revolutionizing Mountaineering. A 65-year-old Scottish climber fell off a 30m steep in a solo summit...
- Atlas Dynamics Can Ensure Surveillance Even in Large Crowds. Atlas Dynamics used the AtlasPRO UAS to help the Military Police of Rio...
- What exemptions does GARUD bring from the existing drone policies in India? Government Authorization for Relief Using Drones...
- State-of-the-art drones used in search and rescue operations. Drones are proving to be highly efficient and effective in search and rescue...
- Locust Swarms in India: How Drones Are Tracking and Controlling Locusts. Amid the coronavirus pandemic, a new threat to the agriculture industry has been...
- Commercial drones may soon fly in Japan. The new drone legislation is proposed to be introduced in 2022 by the...
- The Sony Airpeak S1 Drone is Here: Should You Buy It? Announced at CES 2021, the Sony Airpeak S1 (ARS-S1) is a professional drone that is now available for preorder. The ARS-S1 is...
https://blog.flykit.app/
# Nukina Kaioku

Nukina Kaioku (貫名 海屋, 1778–1863) was a Japanese painter and calligrapher. He had many pseudonyms, but Kaioku (海屋) and Sūō (菘翁) are the best known. He was considered a leader in the field of Japanese calligraphy during the Edo period. He was also skilled at painting in the Nanga style, a Japanese artistic style meant to emulate Chinese art and culture.

## Early life

Nukina Kaioku was born on Shikoku, in Awa Province, into a samurai family of hereditary archery instructors to the daimyō of the Hachisuka clan. The typical samurai education included the martial arts, from which Kaioku's physical frailty exempted him, as well as Confucian philosophy, the Chinese classics, calligraphy and painting. He exhibited outstanding talent in calligraphy, and his uncle, who was a priest of the Kōyasan Shingon-shū on Mount Kōya, encouraged his interest in the writing style of Kūkai.

## Legacy and style

By the end of his life, Kaioku was recognized as one of the most outstanding calligraphers of his time and admired as a scholar of Chinese writing styles. Along with Maki Ryoko and Ichikawa Beian, he was one of the three renowned calligraphers of the Bakumatsu period referred to as the Sanpitsu, or "three brushes" (Bakumatsu no Sanpitsu). His mature calligraphy style was conservative and fairly faithful to the orthodox tradition of the 4th-century Chinese master Wang Xizhi. He was also versatile, and his calligraphy shows a solid mastery of the major modes of Kara-e (Chinese-style) writing.
https://en.wikipedia.org/wiki/Nukina_Kaioku
Palaeontology: leg feathers in an Early Cretaceous bird. Here we describe a fossil of an enantiornithine bird from the Early Cretaceous period in China that has substantial plumage feathers attached to its upper leg (tibiotarsus). The discovery could be important in view of the relative length and aerodynamic features of these leg feathers compared with those of the small 'four-winged' gliding dinosaur Microraptor and of the earliest known bird, Archaeopteryx. They may be remnants of earlier long, aerodynamic leg feathers, in keeping with the hypothesis that birds went through a four-winged stage during the evolution of flight.
DS3 Design is a multidisciplinary team that specializes in delivering hospitality, retail and office spaces. We strive to create authentic, sensory and experiential spaces with a strong narrative and a sense of personality. We translate the client's vision by taking a custom approach, whether working with the architect or directly with the client. DS3 Design was founded in 2013 and is led by David Santos III. He has a bachelor's degree in Architecture from the University of the Philippines and practiced design and architecture in Asia (the Philippines, Japan, Hong Kong, China and Singapore) for 15 years. In 2006, he relocated to Phoenix and continued to work for Fitch, one of the world's leading retail and brand consultancies. He has worked across the country from California to New York and has gained knowledge and design inspiration through his travels. We like to get to know our clients and learn all about their business to create a great working relationship and vision for the project. Our designs are developed to be aesthetically pleasing while also being practical for the environment of each project. We will develop a design based on concept, trends and traditions. Each design is created in visual form with sketches and 3D renderings to give a feel for how the space will look at the end of the project.
http://www.ds3design.com/about/
A brief biography of this twentieth-century artist and an explanation of his philosophy of art are followed by analyses of twelve of his paintings. Includes color reproductions of the paintings.
The Russian painter Wassily Kandinsky (1866-1944), who later lived in Germany and France, is one of the pioneers of 20th-century art. Nowadays he is regarded as the founder of abstract art and is, moreover, the chief theoretician of this type of painting. Together with Franz Marc and others he founded the group of artists known as the "Blaue Reiter" in Munich. His art then freed itself more and more from the object, eventually culminating in the First...
The book covers the five stages of the artist's life and work - Russia: the early years 1887-1910; the Paris years 1910-1914; war and revolution in Russia 1914-1923; France and America 1923-1948; and the late work 1948-1985. It also includes a chronology of Chagall's life.
In the first decade of the 20th century, Russian painting opened up like a bursting dam. Some of the artists were to become international figures in 20th-century art, and in a couple of decades Russian art was to take an honored place in the world.
Just as symbolism embraces the period of Russian painting from Ilya Repin to Boris Grigoriev, so too does impressionism shine its direct or diffused light on Vasili Polenov, Wassily Kandinsky, and Konstantin Korovin. It encompasses the earthy expressiveness of Philip Maliavin, the decorative Fauvism of Mikhail Larionov, and the primitivism of Natalia Goncharova. It also includes those who would later be called Rayonists, Cezannists, Cubists and Suprematists.
https://catalogbeta.swanlibraries.net/Search/Results?lookfor=%22Painters%20--%20Russia%20--%2020th%20century%22&searchIndex=Subject
Iron sulfide phases are the ultimate repository of iron and reduced sulfur in sediments. The sulfur isotope geochemistry of pyrite has had much to tell us about modern and ancient Earth environments and it is likely that Fe isotopes will too, once fractionations for key processes are known. We report the results of an experimental study of Fe isotope fractionation on precipitation of FeS, synthetic mackinawite, from excess aqueous Fe(II) solutions by addition of sodium sulfide solution at 2–40 °C. The results show a significant kinetic isotope effect in the absence of a redox process. No detectable effect of temperature on the fractionation factor was observed. The Fe isotope fractionation for zero-age FeS is ΔFe(II)–FeS = 0.85 ± 0.30‰ across the temperature range studied, giving a kinetic isotope fractionation factor of αFe(II)–FeS = 1.0009 ± 0.0003. On ageing, the FeS in contact with aqueous Fe(II) becomes progressively isotopically heavier, indicating that the initial fractionations are kinetic rather than equilibrium. From published reaction mechanisms, the opportunity for Fe isotope fractionation appears to occur during inner sphere ligand exchange between hexaqua Fe(II) and aqueous sulfide complexes. Fe isotope fractionation on mackinawite formation is expected to be most significant under early diagenetic situations where a readily available reactive Fe source is available. Since FeS(aq) is a key reactive component in natural pyrite formation, kinetic Fe isotope fractionations will contribute to the Fe isotope signatures sequestered by pyrite, subject to the relative rate of FeS2 formation versus FeS–Fe(II)(aq) isotopic equilibration.
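As a quick consistency check on the values above, the per-mil fractionation Δ and the fractionation factor α are related by Δ_A–B ≈ 10³ ln α_A–B, so α ≈ exp(Δ/1000) ≈ 1 + Δ/1000 for small Δ. The short sketch below only illustrates that relation with the reported numbers; the variable names are arbitrary and it is not part of the study's analysis.

```python
import math

# Reported kinetic fractionation for zero-age FeS, in per mil (0.85 ± 0.30).
delta_permil = 0.85

# Delta_A-B ~ 10^3 * ln(alpha_A-B), so alpha ~ exp(delta/1000) ~ 1 + delta/1000.
alpha = math.exp(delta_permil / 1000.0)

print(alpha)  # ~1.00085, consistent with the reported alpha = 1.0009 +/- 0.0003
```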
http://orca-mwe.cf.ac.uk/71904/
We have previously written about how the criminal justice system ultimately seeks to balance public safety with the rights of those accused of criminal wrongdoing. In an attempt to facilitate that balance, legislators and judges often grant law enforcement access to certain kinds of information. Unfortunately, some of this access certainly seems to constitute a serious invasion of privacy. One kind of access that individuals may not be aware law enforcement has is access to the utility records of certain suspects. A criminal defense attorney may consider it an unjust invasion of privacy, and grounds for suppressing evidence, if law enforcement simply tries to gain access to the utility records of anyone it suspects of criminal wrongdoing. Citizens are protected against this kind of intrusion in part by the Colorado Open Records Act. However, law enforcement agencies that assert that a utilities information request is "reasonably related to an investigation within the scope of the agency's authority and duties" are granted an exception to this act. As a result, Colorado law enforcement accessed this kind of information more than 5,000 times in the first six months of this year without first being required to obtain subpoenas or warrants. When law enforcement asserts this exception to the Colorado Open Records Act, it may obtain information about where a suspect works, where a suspect lives and/or who a suspect lives with. If you are at all concerned that you may be suspected of criminal wrongdoing, do not wait until you are formally arrested to seek the advice of an experienced criminal defense attorney. An attorney can help you navigate your situation and potentially help to shut down an investigation before law enforcement goes digging through your personal information.
https://www.jurdem.com/blog/2013/07/law-enforcement-can-access-your-utilities-information/
Delighted to Announce ESG PlayBook is a Consultant to SASB

What is the SASB Framework?
The SASB Conceptual Framework sets out the basic concepts, principles, definitions, and objectives that guide SASB in its approach to setting standards for sustainability accounting; it provides an overview of sustainability accounting, describing its objectives and audience.

What Companies Use SASB?
Companies like GM, Merck, Nike and JetBlue were early adopters of the SASB guidelines, using the provisional standards to report on material ESG issues. Since the launch of the formal standards last year, SASB staff have focused on driving awareness and adoption in the issuer community.

Who is SASB?
SASB stands for Sustainability Accounting Standards Board. SASB was founded in 2011 as a non-profit organization focused on independent standards setting. According to its website, the organization's mission is to "help businesses around the world identify, manage and report on the sustainability topics that matter most to their investors." To do this, SASB developed 77 sets of industry-specific standards, gathering feedback from companies, investors, and other market participants. Each set of standards focuses on what SASB has determined to be the most financially material topics for each industry. In their own words, these are "issues that are reasonably likely to impact the financial condition or operating performance of a company and therefore are most important to investors." SASB's industry standards include related accounting metrics (KPIs to measure the company's performance), technical protocols for compiling data, and data units for normalization.

What is the SASB's Materiality Map?
"The SASB Materiality Map® is an interactive tool that identifies and compares disclosure topics across different industries and sectors." In other words, this map highlights the most relevant issues for any given industry and sector. It helps companies identify which issues they should be exploring and reporting on, and it helps investors identify where they should focus when analyzing a given company or industry. The map includes 26 sustainability issues organized under five sustainability dimensions (Social Capital, Human Capital, Business Model & Innovation, Leadership & Governance, and Environment). The list was refined by focusing on the issues (out of all the issues that can be discussed when talking about ESG) that are most likely to have a financial impact on the company.

What does sustainability mean in accounting?
Sustainability accounting covers the activities that have a direct impact on the society, environment, and economic performance of an organisation. Sustainability accounting is often used to generate value creation within an organisation.

What are the 3 principles of sustainability?
Sustainability is made up of three pillars: the economy, society, and the environment. These are also informally referred to as profit, people and planet.

Next Steps
For simplified ESG reporting on one platform, contact us. We'd be happy to hear from you!
https://www.esgplaybook.com/delighted-to-announce-esg-playbook-is-a-consultant-to-sasb/
Fantastic Beasts and Where to Find Them Review
By Lora Williams on November 16, 2016
I will start this review by letting you all know it may be colored by the fact that I'm a HUGE Potterhead... so make of that what you will! How is J.K. Rowling's new movie Fantastic Beasts and Where to Find Them? You're about to find out in my review! Having read every Harry Potter novel and seen every Harry Potter film (and a whole lot of fan fiction), I believed for years that J.K. Rowling could do no wrong. However, after the release of the disappointing Cursed Child script earlier this year, I was nervous for Fantastic Beasts and Where to Find Them. Although not written by Rowling, Cursed Child had her stamp of approval. Was the Harry Potter universe about to suffer over-dilution to the point of ruining everything? No. Fantastic Beasts confirmed, for me, that J.K. Rowling is still a magical writer who can do no wrong. Fantastic Beasts and Where to Find Them follows Newt Scamander, played by Oscar winner Eddie Redmayne, on his first journey to the United States of America as part of finishing his book. It's easy to see why JKR, by her own admission, was so fascinated by Newt Scamander and felt the need to develop his story. He's a quirky underdog who loves creatures more than humans and who, in his own words, people find "annoying." An unlikely hero befitting any JKR story. On his first day in America, he gets arrested by the Ministry of Magic in America, and a No-Maj (Muggle) finds his case of creatures, letting them loose upon New York City. He spends the next few days collecting all of his creatures, and what follows are great little scenes, including a hilarious mating dance, a thieving Niffler, a destroyed Macy's department store, and narrow escapes. It is soon discovered, though, that something even darker is haunting New York City... Like any young adult film, and like Harry Potter itself, Fantastic Beasts is rather predictable and slightly cheesy. Do not go into Fantastic Beasts expecting an elaborate plot with mysteries you can't solve. It's easy to figure out who the bad guys are. However, that didn't stop me from loving the movie and the characters. My group's favorite part, unanimously, was when you got to step inside Newt's briefcase full of creatures. The visual effects team really brought all of Newt's creatures to life. You got to see creatures familiar and unfamiliar, all in their own habitats and with their own individual characteristics, just as Newt sees them. It was like being on Noah's Ark... but with significantly more magic. It was also in this scene that the real star of the movie stole the show: Dan Fogler as No-Maj (Muggle) Jacob Kowalski. Jacob is basically all of us if we found out magic was real: "I wanna be a wizard." He continued to steal the show and quickly became everyone's favorite character. He's another unlikely hero we didn't know we needed (god bless you, JKR). Fantastic Beasts isn't short of strong female characters either. Katherine Waterston as Goldstein and Alison Sudol as her sister Queenie are powerful women who rarely need saving. The President of the Ministry of Magic in America is even a woman... which is pretty bold for the 1920s setting. The movie has great nods to the Harry Potter books and movies (not surprising, since director David Yates directed HP movies 5 to 8), with references to known characters, fights over which wizard school is the best, and a glimpse of the Deathly Hallows mark.
Composer James Newton Howard even wove the original Harry Potter orchestral theme into all the perfect parts of the movie. And as with any JKR story, there are great lessons: love always wins, the greater good must sometimes prevail, and you should always stay true to yourself and what you believe. Even with these references, Fantastic Beasts is not a Harry Potter film, so do not go into it believing that's what it is. It's something different entirely. Being the first movie not based on a known story, it's the beginning of a new era of the wizarding world (as they say often in the trailers), and it really sets the tone and the setting for the next four films, which I predict will be much darker and more interesting. Fantastic Beasts gives HP fans a peek into the past and into Gellert Grindelwald's reign of terror, which is a piece of Potter history we know little about. I imagine the subsequent movies will dive into this history even more, though Fantastic Beasts gives no indication of what future plots may be. It is its own self-contained story. A magical story that will remind you of when you first read or saw Harry Potter. I will definitely be seeing it again.
Introduction {#Sec1} ============ A particular subfield of machine learning, known as deep learning, is currently enjoying its renaissance in the area of artificial intelligence \[[@CR1]\]. For computer vision tasks, the primary motivation of deep learning techniques is the biomimicry of the human visual system, allowing computers to learn from experience and formulate an understanding in terms of a hierarchy of concepts. In the field of medical image processing, deep learning approaches are providing computational solutions to a wide range of automation and classification tasks \[[@CR2]\]. For instance, deep learning techniques have been used in organ \[[@CR3]\] and tumor segmentation tasks \[[@CR4]\], as well as tissue and tumor classification \[[@CR5], [@CR6]\]. The fundamental difference of deep learning methods is that they take a unique approach to solving classical image processing tasks by allowing the computer itself to identify the image features of interest. This is in contrast to traditional machine learning, which requires predefining the features of interest (e.g., image edges, intensity, and/or texture). Based on the successes of deep learning techniques, we sought to explore their potential in solving the difficult task of segmenting the kidneys of patients affected by autosomal dominant polycystic kidney disease (ADPKD). In ADPKD, phenotypic differences between patients include renal size (e.g., renal volumes can vary from \~200 ml to more than 7000 ml), shape, and composition (e.g., the appearance of the border of the kidneys in MR images has highly variable signal intensities, depending on whether the border is composed of simple and/or complex cysts, varying degrees of fibrosis, or healthy renal parenchyma). The natural course of ADPKD is highly variable, is characterized by progressive enlargement of cysts within the kidneys, and is a leading cause of end-stage renal disease (ESRD) \[[@CR7]--[@CR10]\]. Total kidney volume (TKV) has become the main image-based biomarker for following ADPKD progression at early stages of the disease \[[@CR11]--[@CR15]\]. Imaging methods such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) are employed to diagnose, monitor, and predict outcomes for patients affected by ADPKD \[[@CR16]--[@CR19]\]. MRI has become the imaging modality of choice due to its superior soft tissue contrast, use of non-ionizing radiation, and accuracy. Current methods to manually measure TKV using MR images include volume calculation by the ellipsoidal method \[[@CR20]\], stereological approaches \[[@CR21]\], and planimetry tracings \[[@CR22], [@CR23]\]. Due to the large time requirement of manual tracing, automated approaches to segment kidneys are desirable. However, segmentation of ADPKD kidneys is challenging due to a number of factors. For instance, the shapes of the kidneys are highly irregular, and the contrast at the border of the kidney is highly variable at the interface of several different tissue types, including fluid-filled cysts, calcified cysts, renal parenchyma, and fibrotic tissue. In addition, MR acquisition parameters vary widely from institution to institution, requiring a robust approach which can handle not only the wide range of disease presentations but also the drastic difference in tissue contrast due to how the images were acquired. We previously developed both semi- and fully automated segmentation approaches to allow accurate and reproducible measurement of TKV in ADPKD patients \[[@CR24], [@CR25]\].
Fortunately, these developments have allowed for the creation of a database of thousands of reference standard segmentations by which we have been able to explore novel, next-generation image processing techniques in order to finally and fully address the problem of segmentation of the PKD kidney in order to accurately and reproducibly derive TKV. We have developed a deep neural network model that can capture both local and global context within the image. This model is based on a convolutional neural network (CNN) approach that performs a series of downsampling (i.e., max pooling operations which select the maximum value from a patch of features which help to reduce the data dimensionality) and upsampling procedures (similar to autoencoders \[[@CR26]\], which allow classification to be made at the voxel level). The network also incorporates skip connections (similar to a CNN architecture known as U-Net \[[@CR27]\] which connect layers at the same resolution and allow the networks to retain spatial information). The network is a cascade of layers that start by learning low-level features (e.g., edges and lines) and higher-level features (which combine this information to learn what is or is not the kidney). In summary, building a network with these components allows the network to (i) learn both low- and high-order features, (ii) learn both local- and entire image-level context, and (iii) perform voxel-wise classification (i.e., decide whether a voxel belongs to the kidneys or not). Method {#Sec2} ====== MRI Data {#Sec3} -------- Institutional review board approval was obtained for this study. All subjects were appropriately consented for use of bio-sample data for the purpose of identifying methods for improving ADPKD diagnosis and management. De-identified DICOM image data from the TEMPO study \[[@CR28]\] was transferred to our institution and converted to the NIFTI file format by the dcm2nii software. The images have a reconstructed matrix size of 256 × 256 × *Z* (with *Z* large enough to cover the full extent of the kidneys within the imaged volume). Image voxel sizes are most commonly on the order of 1.5 mm in-plane with typically 3--4 mm slice thicknesses. Reference Standard TKV {#Sec4} ---------------------- The pycysticimage viewer toolkit was used by a trained medical imaging analyst, and the MIROS application was used to create initial kidney segmentations \[[@CR24]\]. Afterwards, the segmentations were quality checked and manually corrected when needed. These segmentations were then used with the automated follow-up segmentation approach \[[@CR25]\] to generate segmentations for all patient follow-up examinations. These segmentations were also quality checked and manually corrected when needed. The finalized segmentations were used as the reference standard segmentations by which we judged the accuracy of the fully automated approach. Deep Learning Model {#Sec5} ------------------- We developed a convolutional neural network architecture that is based on a semantic segmentation approach. All algorithms were written in Python, with the Keras library and Theano backend. For developing, training, and testing the neural network models, a high-performance GPU workstation (Exxact Corp., Fremont, CA) with 128 Gb of RAM and 4× NVIDIA GeForce GTX 1080, 8 Gb GPUs was used. The network architecture was first optimized on a small subset of the data (*N* = 200 cases). This optimization consisted of extracting 150 cases for training and validation, and then testing on the remaining 50 cases. 
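The kind of network just described can be prototyped compactly in Keras (the library used in this work). The following is a minimal sketch of a U-Net-style encoder-decoder with skip connections, written against a Keras 2-style functional API; the depth, filter counts, optimizer, and loss are illustrative assumptions, not the exact configuration selected by the grid search described next.

```python
# Minimal sketch of a U-Net-style encoder-decoder in Keras (functional API).
# Layer counts, filter widths, optimizer and loss are illustrative only --
# the published architecture was selected by grid search (see Fig. 1).
from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, UpSampling2D,
                          Dropout, concatenate)

def conv_block(x, filters):
    """Two 3x3 convolutions (ReLU) followed by dropout, as described."""
    x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
    return Dropout(0.35)(x)

def build_model(input_shape=(256, 256, 1)):
    inputs = Input(shape=input_shape)

    # Encoder: convolution blocks with 2x2 max pooling (downsampling)
    c1 = conv_block(inputs, 32)
    p1 = MaxPooling2D((2, 2))(c1)
    c2 = conv_block(p1, 64)
    p2 = MaxPooling2D((2, 2))(c2)

    # Bottleneck
    c3 = conv_block(p2, 128)

    # Decoder: 2x2 upsampling with skip connections to same-resolution layers
    u2 = concatenate([UpSampling2D((2, 2))(c3), c2])
    c4 = conv_block(u2, 64)
    u1 = concatenate([UpSampling2D((2, 2))(c4), c1])
    c5 = conv_block(u1, 32)

    # 1x1 convolution with sigmoid activation -> per-voxel kidney probability
    outputs = Conv2D(1, (1, 1), activation='sigmoid')(c5)

    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model
```

The skip connections concatenate encoder features with upsampled decoder features at the same resolution, which is what lets the voxel-wise output retain spatial detail while the pooled path captures image-level context.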
Exhaustive grid search was then performed to test a range of networks that were shallower and deeper (in terms of layers), thinner and wider (in terms of number and size of kernels), as well as different activation functions (ReLU, tanh). Each network was run for 50 epochs. Based on the best performing network, 11 separate networks were trained (on different data subsets) in order to create an artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys in MR images. For training and validation, 2000 cases were randomly selected and the networks were each trained on different subsets of the data (80% training, 20% validation split). After training, the remaining 400 cases from those not used for training and validation were used for testing the automated segmentation approach. Segmentation Post-processing {#Sec6} ---------------------------- Following the segmentation map generated by the deep learning network, a routine to extract the two largest connected components was performed (i.e., the right and left kidneys). This was followed by an active contour and edge detection method in order to finalize the segmentation \[[@CR24]\]. Evaluation of Automated Approach {#Sec7} -------------------------------- Comparison statistics were generated from the reference standard segmentations and those made by the automated approach. These comparison statistics included voxel-by-voxel correlation-based metrics and comparison of total volume differences. For the voxel-by-voxel comparisons, a number of commonly used segmentation metrics were calculated. These include the Dice coefficient (or similarity index), which is defined as:

$$\mathrm{Dice}=\frac{2\cdot \mathrm{TP}}{2\cdot \mathrm{TP}+\mathrm{FP}+\mathrm{FN}}$$

where TP is true positives (i.e., both reference standard and automated approach classified the voxel as being the kidney), FP is false positives (i.e., the automated approach falsely classified the voxel as being the kidney), and FN is false negatives (i.e., the automated approach falsely classified the voxel as not being a part of the kidney), and the Jaccard coefficient (or overlap ratio), which is defined as:

$$\mathrm{Jaccard}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}+\mathrm{FN}}$$

Both of these indices vary within the range 0 to 1, where a value closer to 1 indicates a closer similarity between the two segmentations. Sensitivity, specificity, and precision are also reported based on voxel-level statistics, and the average maximum distance between the borders of the two segmentations was calculated (*D*~mean~). In addition, percent error of TKV as measured by the different approaches was calculated, and Bland-Altman analysis was performed to compare the automated measurement method to the reference standard. Results {#Sec8} ======= Optimized Network Performance {#Sec9} ----------------------------- The optimal deep learning network architecture is graphically depicted in Fig. [1](#Fig1){ref-type="fig"} and had a training Dice coefficient of 0.97 and a validation Dice coefficient of 0.96. Shown in Fig. [2](#Fig2){ref-type="fig"} are the training and validation curves for the Dice coefficient calculated at each epoch.
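For reference, the voxel-wise overlap metrics defined in the Evaluation section can be computed directly from a pair of binary masks. A minimal NumPy sketch follows (array and function names are illustrative, not taken from the study's code):

```python
# Minimal sketch: voxel-wise overlap metrics from two binary masks
# (reference standard vs. automated segmentation). Names are illustrative.
import numpy as np

def overlap_metrics(reference, automated):
    """Compute Dice, Jaccard, sensitivity, specificity and precision."""
    ref = reference.astype(bool)
    auto = automated.astype(bool)

    tp = np.sum(ref & auto)      # kidney voxels in both masks
    fp = np.sum(~ref & auto)     # voxels falsely labelled as kidney
    fn = np.sum(ref & ~auto)     # kidney voxels that were missed
    tn = np.sum(~ref & ~auto)    # background voxels in both masks

    return {
        'dice':        2.0 * tp / (2.0 * tp + fp + fn),
        'jaccard':     tp / float(tp + fp + fn),
        'sensitivity': tp / float(tp + fn),
        'specificity': tn / float(tn + fp),
        'precision':   tp / float(tp + fp),
    }
```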
Fig. 1 Optimized network architecture consisting of a series of downsampling, upsampling, and skip connections. Each block consists of a series of convolutions (3 × 3 kernels, ReLU activation) and dropout layers (0.35). Both max pooling layers and upsampling layers are of size 2 × 2. The final convolutional layer is a 1 × 1 kernel with sigmoid activation, resulting in classification of each voxel of the input (size 256 × 256). Fig. 2 Training and validation curves for the optimized network. Training and validation Dice coefficients of 0.97 and 0.96 were obtained, respectively. Network weights were monitored and saved based on the best performance on the validation set. Artificial Multi-observer Network {#Sec10} --------------------------------- Next, 11 of these networks were trained using the 2000 cases selected for training and validation. Each network was trained on a different subset of the cases. Each network was run for 100 epochs, and the best model was saved based on the Dice coefficient. These 11 networks were then used in a majority voting scheme to test their ability to accurately segment the 400 test cases not seen during training and validation. Visualization {#Sec11} ------------- Visual examples of the result of the multi-observer ensemble method are shown in Fig. [3](#Fig3){ref-type="fig"} along with the reference standard segmentation. Fig. 3 Examples of segmentations obtained for three different patients. Shown in the *left column* are the MR images, in the *second column* the reference standard segmentations, in the *third column* the automated segmentations, and in the *right column* the segmentations overlaid on one another. Reference standard segmentations are shown in *red*, and automated segmentations are shown in *blue*. Regions of overlap are *purple*. Shown in the *top row* is an average example from the dataset, which had a Dice coefficient of 0.96. Shown in the *second row* is the worst-performing case, which had a Dice coefficient of 0.92. The difficulty in this case is the rarer (in terms of this particular dataset) T2-weighted acquisition (a FISP image), which suffers from image artifacts (particularly banding artifacts resulting from intravoxel dephasing). Shown in the *final row* is an example of a patient with significant polycystic liver disease. Notice how the automated approach does not classify the liver, or the liver cysts, as kidney. Similarity Metrics {#Sec12} ------------------ Table [1](#Tab1){ref-type="table"} summarizes the similarity statistics for the automated approach compared with the reference standard segmentations. The multi-observer ensemble method had an average percent volume error of 0.68%, a standard deviation of percent volume error of 2.2%, and worst cases of min = −8.1% and max = 7.0%. In addition, similarity statistics were as follows: Jaccard = 0.94 ± 0.03, Dice = 0.97 ± 0.01, sensitivity = 0.97 ± 0.02, specificity = 0.99 ± 0.01, and precision = 0.98 ± 0.02 for the unseen test cases.
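As a rough illustration of how the majority-voting ensemble and the connected-component post-processing described above fit together, the sketch below assumes SciPy, a simple 0.5 probability threshold, and illustrative names; the active contour and edge detection refinement step is omitted.

```python
# Minimal sketch: majority voting across an ensemble of per-voxel probability
# maps, followed by extraction of the two largest connected components
# (the right and left kidneys). Names and threshold are illustrative.
import numpy as np
from scipy import ndimage

def ensemble_segmentation(probability_maps, threshold=0.5):
    """probability_maps: list of arrays in [0, 1], one per trained network."""
    votes = np.stack([p > threshold for p in probability_maps], axis=0)
    majority = votes.sum(axis=0) > (len(probability_maps) // 2)
    return keep_two_largest_components(majority)

def keep_two_largest_components(mask):
    """Retain only the two largest connected components of a binary mask."""
    labels, n = ndimage.label(mask)
    if n <= 2:
        return mask
    sizes = ndimage.sum(mask, labels, index=list(range(1, n + 1)))
    keep = np.argsort(sizes)[-2:] + 1    # label ids of the two largest blobs
    return np.isin(labels, keep)
```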
Table 1 Summary statistics for the automated approach compared with the gold standard. Shown are the results for an individual network, as well as the multi-observer approach.

| Statistic (*m* ± SD \[min/max\]) | Individual | Multi-observer |
|---|---|---|
| Jaccard | 0.93 ± 0.03 \[0.78/0.98\] | 0.94 ± 0.03 \[0.85/0.98\] |
| Dice | 0.96 ± 0.02 \[0.88/0.99\] | 0.97 ± 0.01 \[0.92/0.99\] |
| Sensitivity | 0.96 ± 0.02 \[0.79/0.99\] | 0.96 ± 0.02 \[0.89/0.99\] |
| Specificity | 0.99 ± 0.01 \[0.99/1.00\] | 0.99 ± 0.01 \[0.99/1.00\] |
| Precision | 0.97 ± 0.02 \[0.83/1.00\] | 0.97 ± 0.02 \[0.88/1.00\] |
| *D*~mean~ | 0.57 ± 0.46 \[0.18/4.45\] | 0.49 ± 0.36 \[0.17/3.69\] |
| Volume difference % | −1.42 ± 2.75 \[−18.90/15.72\] | −0.65 ± 2.21 \[−8.06/7.04\] |

Automated Measurement of TKV {#Sec13} ---------------------------- Shown in Fig. [4](#Fig4){ref-type="fig"} are the Bland-Altman analysis results for an individual network and the multi-observer ensemble method. For the individual network and the multi-observer ensemble method, the *m* ± SD for the percent volume difference was −1.42 ± 2.75 and −0.65 ± 2.21, respectively. Fig. 4 Bland-Altman analysis of the percent difference of TKV measurements obtained by the automated approach and the reference standard segmentations for both an individual network and the simulated multi-observer approach. The mean difference (*solid line*) and 95% confidence intervals (*dotted lines*) are also shown. For the individual network, the *m* ± SD for the percent volume difference was −1.42 ± 2.75 and the 95% confidence intervals were \[−6.93 to 4.09\]. For the multi-observer approach, the *m* ± SD for the percent volume difference was −0.65 ± 2.21 and the 95% confidence intervals were \[−4.97 to 3.63\]. Discussion {#Sec14} ========== Implications for Research and Clinical Trials {#Sec15} --------------------------------------------- High accuracy was obtained by the automated segmentation approach, and performance comparable to two different people performing segmentations (interobserver variability) was achieved when the automated approach was compared to the manually generated results. The combination of high accuracy without the necessity of human interaction is an important advance for both clinical practice and research trials. In the case of research trials, the ability to efficiently and objectively detect small changes reduces the cost of performing a study and results in a much more rapid decision about a drug's effectiveness. This current study can work harmoniously with our previous work for establishing a baseline measurement \[[@CR24], [@CR25]\] and automatically performing a reread of subsequent scans in the same patient \[[@CR24], [@CR25]\]. Our automatic segmentation approach offers a fast and accurate method to measure the TKV imaging biomarker for patients with diseased kidneys. This automation allows for robust study repeatability and removal of user bias in segmentations and measurement of TKV. The automatic segmentation has useful clinical applications such as following progression of the disease as well as judging the effectiveness of interventions. Once the network is trained, the automated segmentations are computed in a matter of minutes, whereas manual segmentations take 45--90 min. Thus, our method could enable the routine clinical use of TKV data. An important strength of the developed approach is the success that was observed in terms of accurately handling liver cysts and major vasculature (e.g., renal artery and vein).
This differentiation is a difficult task for humans and it appears that there are clearly identifiable imaging features that were derived that allowed the automated approach to successfully differentiate not only the liver from the kidney but also adjacent liver cysts from those pertaining to the kidney. Lastly, having the ability to accurately and reproducibly segment the PKD kidney not only allows for measurement of TKV but also allows characterization of additional imaging biomarkers, such as calculating cystic burden or describing the "class" of cystic distribution \[[@CR29]\], calculating imaging texture features \[[@CR30]\], or measuring parameters derived from quantitative MRI acquisitions \[[@CR31]\]. Limitations {#Sec16} ----------- While the developed approach appears very promising, there exist some limitations that may still require a final quality check by a trained imaging analyst. For instance, renal pelvis delineation appears highly variable. This we attribute to the known high variability of human readers in performing this task. Fortunately, the fact that an automated approach will come to the same conclusion every time will be a helpful step towards improving the reproducibility of TKV measurements. In addition, being able to simulate the results obtained from multiple people performing the segmentations removed outlier cases and resulted in a much more consistent and reproducible measurement of TKV. Conclusion {#Sec17} ========== We obtained high-quality segmentations of severely diseased organs matching human performance with a fully automated computer algorithm which simulates a multi-observer majority voting scheme. This method should be further explored for its utility in research studies and the clinical practice. This work was supported by the PKD Foundation under grant 206g16a, the National Institute of Diabetes and Digestive and Kidney Diseases under NIH Grant/Award Number P30 DK090728 Mayo Translational PKD Center (MTPC), and the National Cancer Institute (NCI) under grant/award CA160045. Otsuka Pharmaceutical Development & Commercialization, Inc., Rockville, MD, USA provided the imaging data used in this study and also supplied partial funding support. Institutional review board approval was obtained for this study.
The Hawthorn Foundation, with its membership's non-partisan support, is working in partnership with the state of Missouri and the governor's office to increase the effectiveness and efficiency of our state government. Hawthorn is bringing together private, nonprofit and public sector leaders from across the state in a variety of strategic initiatives to improve Missouri’s competitiveness. The Governor’s Innovation Task Force was the first of these efforts. The Department of Economic Development and the Hawthorn Foundation established the statewide Branding Task Force to develop a brand for Missouri that will attract more businesses, innovators, students, workers, and visitors to the state. The Task Force will mobilize public and private sector leaders within the economic development, tourism, workforce development, and innovation communities to lead the initiative. By having a cohesive brand to promote Missouri externally, the state will elevate Missouri’s reputation as a thriving innovation hub and great place to live, visit, work, go to school, and do business. The task force launches in June 2018 and is expected to conclude its work by December 2018. See the Hawthorn Workforce Collaborative page for more details. The Missouri Chief Operating Officer, in concert with the Office of Administration and Department of Revenue, convened a team of state leaders from both public and private sectors to confirm best practices for efficiently managing the state held real estate properties and facilities. The Missouri Chief Operating Officer requested the assistance of private sector fleet managers and industry representatives to study vehicle fleet practices within Missouri State Government. The state of Missouri spends approximately $98 million each year to transport state employees for official business. The COO asked for a task force to seek out opportunities to streamline fleet practices and lower the overall costs of transportation utilizing industry best practices. The task force completed its work in December 2017. Read more about the Missouri Fleet Management Task Force. Governor Greitens established the Governor’s Innovation Task Force (GITF) in 2017 to (1) develop a fact-based perspective on the state of innovation and entrepreneurship in Missouri and (2) suggest options for potential state government actions to promote innovation and new technology start-ups across the state. Over 2000 people participated in some way during the GITF process, and the task force completed its summary report within 75 days of launch. Read more about the GITF. In keeping with Governor Parson’s commitment to infrastructure improvement in Missouri, the Hawthorn infrastructure committee has been absorbed into a new governor’s task force. The Governor’s Infrastructure Task Force will meet at regional locations, report out its findings upon completion of its work, assisting in shaping the path forward for Missouri’s infrastructure improvement. This task force will include leaders from transportation, utilities, other significant business leaders, and a number of statewide stakeholders.
https://www.hawthornfoundation.org/ee-task-forces/
Apple Inc. revealed a voice-activated speaker on Monday, thrusting itself into the rapidly escalating fight between the biggest names in technology to control the home through a tabletop device. The growing sophistication of virtual assistants such as Apple’s Siri, Amazon.com Inc.’s Alexa and Alphabet Inc.’s Google Assistant has made it possible to embed artificial intelligence in everyday home devices, letting people unlock doors and dim lights using only their voices. Apple launched Siri in 2011 but has since fallen...
This large Wimbledon garden was in need of a contemporary makeover to incorporate a modern garden room and to bring it in line with the interiors of the house. The garden and Pilates studio, which the garden room was to become, are a haven for this busy family and needed to reflect their personalities, providing them all a space to retreat to, together and individually. Not forgetting the extended family of 5 dogs and a cat. Parts of the garden had already been altered, and with plenty of level changes it was a challenge to integrate the old and the new. When you are planning a garden studio it is important to think about the whole scheme to avoid it looking like an afterthought. With consideration it can blend easily with its surroundings rather than dominating the space. Working with existing trees and levels will give your new garden office or studio instant maturity. We love the varying heights of planting that give a jungly vibe, helped by the existing bank of bamboo screening, both guaranteeing the garden feels secluded and private. Although the site was unusual, it gave us the opportunity to play with levels, shapes, colour and water to create a lush oasis that lets you forget you are in the city. It was also a great reminder that modern gardens can have plenty of curves.
https://reallynicegardens.co.uk/portfolio/wimbledon/
Chemical basis of inflammation-induced carcinogenesis. Chronic inflammation induced by biological, chemical, and physical factors has been associated with increased risk of human cancer at various sites. Inflammation activates a variety of inflammatory cells, which induce and activate several oxidant-generating enzymes such as NADPH oxidase, inducible nitric oxide synthase, myeloperoxidase, and eosinophil peroxidase. These enzymes produce high concentrations of diverse free radicals and oxidants including superoxide anion, nitric oxide, nitroxyl, nitrogen dioxide, hydrogen peroxide, hypochlorous acid, and hypobromous acid, which react with each other to generate other more potent reactive oxygen and nitrogen species such as peroxynitrite. These species can damage DNA, RNA, lipids, and proteins by nitration, oxidation, chlorination, and bromination reactions, leading to increased mutations and altered functions of enzymes and proteins (e.g., activation of oncogene products and/or inhibition of tumor-suppressor proteins) and thus contributing to the multistage carcinogenesis process. Appropriate treatment of inflammation should be explored further for chemoprevention of human cancers.
Executive Assistant - Controllership

LOCATIONS:

At Disney, we're storytellers. We make the impossible, possible. We do this through utilizing and developing cutting-edge technology and pushing the envelope to bring stories to life through our movies, products, interactive games, parks and resorts, and media networks. Now is your chance to join our talented team that delivers unparalleled creative content to audiences around the world.

This Secretary position supports the Corporate VP, Controller for The Walt Disney Company. This role is responsible for maintaining an efficiently functioning office and workflow, supporting projects, effectively producing and managing communications, meeting preparations and meeting notes, and other administrative support including calendar coordination. This position supports a corporate executive leading an organization of ~300 team members across various geographies. The ideal candidate for this role will be an individual with high energy, an independent producer, very proactive and willing to tackle new assignments. This role requires building and maintaining productive relationships across the organization, handling a rapidly changing environment, and regularly managing multiple priorities. The Administrative Coordinator must be flexible, very service oriented and demonstrate strong attention to detail. The role requires strong technology and user software experience and skills.

Responsibilities
- Support the Corporate Controllership leader in all aspects of the work including communications, calendar coordination, meeting preparations and project management support.
- Effectively plan and communicate to the leader, business colleagues and partners on matters associated with the department responsibilities. Must be able to communicate and coordinate with the most senior executive offices and their team members.
- Prepare the executive for meetings, which will include gathering and assimilating materials or data. Be prepared to develop and share written meeting recaps.
- Skillfully, proactively and independently manage the executive's calendar, understanding priorities, addressing recurring changes, and using appropriate mediums for meetings inclusive of video conferencing over various time zones. Coordinate logistics for all participants.
- Participate in and support special projects of the executive that could be administrative in nature or more involved, such as business integrations, reorganizations, research, etc. Will be given direction but expected to be able to build Excel files and produce written documents or research materials when given particular topics. Will produce original work in Excel, Word, and PowerPoint.
- Screen, organize, and determine significance of information, calls and correspondence to prioritize actions and facilitate work to ensure that deadlines are met
- Possess the willingness and ability to identify opportunities and resolve problems independently, and demonstrate good judgment in the handling of issues with a sense of urgency
- Proficiently use Microsoft Office products, including the ability to create, manipulate, and print documents in Word, Excel, and PowerPoint
- Effectively utilize SAP to verify bills, prepare payment requests, act as casual buyer for the department, and research and follow up on payments and payroll questions
- Manage expense reports and processing, including gathering receipts, organizing information, submitting, and tracking to ensure prompt payments
- Organize and maintain both electronic and hard copy files for quick and easy access
- Create and further strengthen a vibrant, team-focused culture
- Lead efforts to ensure a professional office appearance throughout the department
- Develop a keen familiarity with Company policies
- Provide general office management support to the team

Basic Qualifications
- Minimum of five years of experience serving in an executive support role and/or equivalent experience
- Demonstrated exemplary planning and organization skills – able to set priorities, manage details and accurately follow through to meet all deadlines
- Demonstrated strong ability to be proactive and self-motivated and anticipate needs
- Demonstrated ability to independently manage multiple tasks and prioritize work in a fast-paced environment with minimal oversight
- Demonstrated excellent written, verbal and interpersonal skills with the ability to prepare original communications/memos
- Proficiency in Microsoft Outlook, Word, Excel, PowerPoint, SAP, and the internet. Knowledge of, and/or willingness to learn, Microsoft Visio and additional software and mobile devices is a plus
- Ability and eagerness to learn the substance of the work in order to be an effective facilitator within the business unit
- Positive team player with a strong service orientation and enthusiastic attitude
- Demonstrated ability to make independent decisions and exercise good judgment in the handling of issues, including those that involve sensitive and confidential information
- Ability to maintain professionalism and to calmly and smoothly manage multiple and ever-changing demands, details, and deadlines
- Strong partnering and relationship-building skills

Preferred Qualifications
- Previous experience supporting an Accounting/Finance organization a plus

Required Education
- College associate's degree or higher

Company Overview
At Corporate, you'll team with the best in the business to build one of the most innovative global businesses in any industry. Uniquely positioned at the center of an exciting, multi-faceted Company, the forward-thinkers at Disney Corporate constantly pursue new ideas and technologies to help the Company's many businesses drive value, all the while gaining something valuable from the experience themselves. Come see the most interesting Company from the most interesting point of view.

Additional Information: This position is with a legal entity of The Walt Disney Company, an equal opportunity employer.
https://disney2.ongig.com/jobs/the-walt-disney-company-corporate-/glendale-california-united-states/executive-assistant-controllership/520411BR?lang=en_us
Posted September 4th, 2012 in Aquatic Invasive Species, Great Lakes Cleanup, Program Last week, Illinois-Indiana Sea Grant had a two-day meeting and retreat at the Indiana Dunes State Park in Chesterton, Indiana. In addition to devoting some time to planning and discussing current and future projects, we were treated to a couple of informative and scenic tours in the area, learning more about the extensive restoration work to protect the dunes, the state park and national lakeshore, and the water quality of Lake Michigan. Staff members were able to join National Park Service workers on-site to learn about and get their hands dirty at the Great Marsh Restoration Site not far from the dunes. Once very large, the remaining Great Marsh area is approximately 12 miles long and harbors a wide range of plants, animals, insects, and other beneficial organisms. Those native species are threatened by invasive species, however, and work is ongoing to plant and establish native species to bolster the wetlands’ resistance to invasive species and restore the natural balance of the area. Informative, fun, and muddy, the chance to do on-the-ground work in restoring this watershed was a valuable experience for everyone involved, and offered a practical reminder of the importance of restoring and protecting these areas. There are more terrific photographs of the restoration project and the lake shore on Illinois-Indiana Sea Grant’s Facebook page. Head over and check them out, and be sure to plan a visit to the park for yourself.
https://iiseagrant.org/iisg-staffers-get-their-hands-dirty-for-wetland-restoration/
Power. In whose hands does it lie? Change. Who is responsible for enforcing it? Equality. At what point is it afforded? Watching the premiere of the Bahá’í International Community’s film ‘Glimpses into the Spirit of Gender Equality’ was at once profound, inspiring and sobering. The film opened with a quote extolling the assertion that human dignity and nobility is neither male or female: Throughout the 40-minute film, glimpses from Malaysia, India, Zambia, USA and Colombia feature the diversity of ways that the principle of the equality of women and men is being expressed in different contexts. The film follows the advancement towards gender equality that is being made globally and brings us to the present day. After watching the premiere of the film, I was reminded again of the new conversations that must unfold in all the spaces we find ourselves in, if we truly wish to translate the principle of equality into reality. The responsibility to do this falls on all of us. If we want to see a world in which material, human, social, and spiritual prosperity can be enjoyed by all, then we must realise the power we have today, in this moment, to learn where we have come from, and how far we must go. When gender equality is at the forefront of our minds, we are better able to find different ways of advancing it from the grassroots, the national and the international stage. It takes courage to liberate ourselves, our families and our communities from traditions and norms that perpetuate the man-made material measures that divide us. It also takes an all-embracing view of societal transformation, as the film highlights the interdependence of gender equality and education, health, social cohesion, social structures, family life, work life, cultural, traditional and religious practices, and peace-building. Advancing the rights, well-being and opportunities afforded to women and the girl-child serves to drive greater prosperity and human-flourishing for the whole of humanity. To see the end result- “to be a civilization of the future we need both qualities that men and women have to bring”- at the beginning of the process, has profound implications for the tools we are giving young people to be at the forefront of building a better world. “Women have equal rights with men upon earth; in religion and society they are a very important element. As long as women are prevented from attaining their highest possibilities, so long will men be unable to achieve the greatness which might be theirs.” (Paris Talks: Addresses given by ‘Abdu’l-Bahá in Paris in 1911-1912) “What does a spiritual perspective allow us to consider?” ‘Glimpses’ focuses on a number of different stories. One that touched me in particular was that of a Colombian teacher. In her interview, she highlighted that many of the constraints in our communities and cultures come about because individuals are not prepared to make decisions that are fair, equitable and kind- despite having the intellectual knowledge that this is right. She articulates that the spirit that motivates us to make decisions according to these moral values must be strengthened, if we are to achieve our full potential- individually and collectively. Establishing gender equality is part of this potential. In the USA, the young artist interviewed asserted that the concept of divine or spiritual doesn’t exist within the framework of gender; the soul is genderless and surpasses the limitations of this material world. 
She established that we are all of equal moral worth and all contributions are imperative to humanity’s development. These examples, alongside many of the others shared in the film, spoke to me about the potential of women being equal to that of their male counterparts and, if this potential is recognised, the many advancements we could make for humankind. In order to do this, however, we must learn. We must operate in a mode of learning. Without learning, we will continue to perpetuate and endorse antiquated standards for women and men, we will continue to suppress the potential of both women and men, we will continue to have inequality in education, family life, economics, politics, the workplace, and the home. With a multitude of spiritual problems that find their expression in material systems and structures of all societies, education provides a rich opportunity to build a civilization that has both material and spiritual qualities. Just like a lamp without the light within it, the lamp cannot realise its true purpose, and the surrounding remains in darkness. So, let’s embrace, question, learn. The spirit of gender equality is channelled by us all.
https://www.bahai.org.uk/post/glimpses-into-the-spirit-of-gender-equality-a-personal-reflection-on-a-new-film
Jacksonville Roller Derby plays flat track roller derby according to the latest rules and clarifications from the Women’s Flat Track Derby Association. See how many JRD skaters you can spot in their Derby 101 videos! The latest ruleset is available at rules.wftda.com. The objectives of roller derby are relatively simple. Each team fields one point-scoring skater (“Jammer”) whose objective is to lap as many opposing skaters as they can. The remaining skaters who aren’t scoring points work both on offense and defense at the same time — to block the opposing Jammer and to clear a path for their own Jammer. Well-played roller derby requires agility, strength, speed, control, peripheral vision, communication, and teamwork.
https://jacksonvillerollerderby.com/about/rules/
Antonym of libertarian

What is another word for libertarian? humanitarian, liberal, reformist, broad-minded, humanistic, latitudinarian, permissive, freethinking, liberalistic, forward-thinking.

What is the synonym and antonym of liberal? Some common synonyms of liberal are bountiful, generous, and munificent. While all these words mean "giving or given freely and unstintingly," liberal suggests openhandedness in the giver and largeness in the thing or amount given.

What is the antonym of individualism? Opposite of the characteristics determining who or what a person or thing is: collectivism, conformity, statism.

What is the libertarian ideology? Libertarians seek to maximize autonomy and political freedom, and minimize the state's encroachment on and violations of individual liberties; emphasizing pluralism, cosmopolitanism, cooperation, civil and political rights, bodily autonomy, free association, free trade, freedom of expression, freedom of choice, freedom …

What is the opposite of liberal? Conservatives tend to reject behavior that does not conform to some social norm. Modern conservative parties often define themselves by their opposition to liberal or labor parties. The United States usage of the term "conservative" is unique to that country.

What is Conservative policy? They advocate low taxes, free markets, deregulation, privatization, and reduced government spending and government debt. Social conservatives see traditional social values, often rooted in familialism and religion, as being threatened by secularism and moral relativism.

Do libertarians believe in socialism? Libertarian socialism rejects the concept of a state. It asserts that a society based on freedom and justice can only be achieved with the abolition of authoritarian institutions that control specific means of production and subordinate the majority to an owning class or political and economic elite.

What issues do libertarians support? Its cultural policy positions include ending the prohibition of illegal drugs, advocating criminal justice reform, supporting same-sex marriage, ending capital punishment, and supporting gun ownership rights. As of 2021, it is the third-largest political party in the United States by voter registration.

What is a libertarian in simple terms? Libertarianism is a kind of politics that says the government should have less control over people's lives. It is based on the idea of maximum liberty. Libertarians believe that it is usually better to give people more free choice.

What is the antonym of progressive? To understand the word regressive, it's helpful to know that its antonym, or opposite, is progressive. When something is progressive, it tends to get better and more advanced. Something that's regressive, on the other hand, gets less developed or returns to an older state.

Are liberal and progressive synonyms? This word is the opposite of conservative, which means "favoring tradition; resistant to change." Although it's often used in political contexts as a synonym of liberal, progressive can also be used in a more general sense.

What is the opposite in meaning of audacity? Opposite of a willingness to take bold risks: timidity, care, carefulness, caution.

What is regressive attitude? Returning to a previous and less advanced or worse state or way of behaving: Incinerating waste rather than recycling it would be a regressive step. Vigilance is needed to overcome the natural regressive tendency to become complacent.
What do you mean by regressive? Definition of regressive: 1 : tending to regress or produce regression. 2 : being, characterized by, or developing in the course of an evolutionary process involving increasing simplification of bodily structure. 3 : decreasing in rate as the base increases (a regressive tax).

What is a word for not progressive? Traditionalist, ultraconservative/ultra-conservative, unprogressive.

What causes regressive behavior in adults? Insecurity, fear, and anger can cause an adult to regress. In essence, individuals revert to a point in their development when they felt safer and when stress was nonexistent, or when an all-powerful parent or another adult would have rescued them.

What is regression emotion? Regression can vary, but in general, it is acting in a younger or needier way. You may see more temper tantrums, difficulty with sleeping or eating, or reverting to more immature ways of talking.

What is progressivism synonym? Noun: a person or group favoring usually radical change; left-winger, leftist, liberal.

How do you know if you are age regressing? People who practice age regression may begin showing juvenile behaviors like thumb-sucking or whining. Others may refuse to engage in adult conversations and handle issues they're facing. Age regression is sometimes used in psychology and hypnotherapy.

What is age regression a symptom of? Involuntary age regression can be a symptom of mental health disorders such as post-traumatic stress disorder (PTSD), dissociative identity disorder, schizophrenia, or mood disorders. Voluntary age regression is sometimes used to cope or for relaxation. Learn More: What Are Mood Disorders?
https://virtualpsychcentre.com/antonym-of-libertarian/
One of the biggest challenges in GRC is demonstrating the effectiveness of a good program. An effective program identifies the key risks and implements controls where they will have the greatest impact. If everything works properly, then nothing happens. Success is commonly measured by the absence of an event such as: After a couple of years, a strong GRC program often faces pressure to reduce expenditure. Regardless of the risk, an excellent track record in controlling it means it is no longer seen as a top threat to the organization, and as a result, funding gets reallocated. The following questions then remain: Setting controls that detect, transfer or mitigate an event is difficult, but possible. The team needs to identify the number and severity of "almost" risk events—the events that were detected and mitigated—and contrast gross and net impacts. Setting controls that prevent an event, on the other hand, is extraordinarily difficult. How does a team identify the number of events that were prevented because a control pre-empted them? How can the organization evaluate the effectiveness of a policy or entity-level control that changed the environment to prevent an attempt? Is that policy still important? Is the training still required? Can/should these programs ever be reduced, or do they just grow indefinitely? The solution to this problem is excellent record keeping. "If you can't measure it, you can't improve it." — Peter Drucker An event management system needs to document and track all events. When a new risk treatment is implemented, the pre- and post-treatment data need to be contrasted, and the pre-treatment data needs to be preserved as a reference point for years to come. This reference can then be used to quantify ROI when evaluating program effectiveness.
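As a toy illustration of the record keeping idea, the sketch below contrasts event counts and losses before and after a hypothetical risk treatment goes live; the event log, field names, and treatment cost are invented for the example.

```python
# Toy sketch of the record keeping idea above: contrast event frequency and
# severity before and after a risk treatment goes live. The event log,
# field names and treatment cost are hypothetical.
from datetime import date

events = [
    {"occurred": date(2016, 3, 14), "loss": 42000},
    {"occurred": date(2016, 9, 2),  "loss": 15500},
    {"occurred": date(2017, 6, 21), "loss": 3800},   # post-treatment "almost" event
]
treatment_live = date(2017, 1, 1)
annual_treatment_cost = 25000

pre = [e["loss"] for e in events if e["occurred"] < treatment_live]
post = [e["loss"] for e in events if e["occurred"] >= treatment_live]

print("Pre-treatment:  %d events, total loss $%d" % (len(pre), sum(pre)))
print("Post-treatment: %d events, total loss $%d" % (len(post), sum(post)))

# A crude reference point: avoided loss versus what the control costs.
avoided = sum(pre) - sum(post)
print("Avoided loss vs. treatment cost: $%d vs. $%d" % (avoided, annual_treatment_cost))
```

In practice the pre-treatment baseline would be preserved in the event management system itself rather than in a script, but the comparison logic is the same.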
https://www.resolver.com/blog/demonstrating-the-effectiveness-of-a-grc-program/
The Myers-Briggs Type Indicator (MBTI) describes personality with a four-letter code drawn from four pairs of preferences: extraversion (E) or introversion (I), sensing (S) or intuition (N), thinking (T) or feeling (F), and judging (J) or perceiving (P). The MBTI is a self-report questionnaire intended to show different tendencies in people's personality and decision making. In this way, the survey aims to describe how each of us fits into the larger pattern of human personality. The MBTI was created by Katharine Cook Briggs and her daughter Isabel Briggs Myers. The idea behind the MBTI is that a person's individual personality is shaped by his or her dominant behavioral style or approach to life. The MBTI itself has several parts. The central component is called the Inventory of Personality Factors, or simply the IFP. It covers 30 different physical and psychological qualities, which are based on the theories of Carl Jung and his school of psychology. The other components of the MBTI include the Theory of Cognitive Processes, the Models of Personality, the idea of Human Needs, and the Need Model. The primary factor, the IFP, is determined by the preferences the person has regarding certain aspects of his or her character. These are called "core beliefs," and they are the basic attitudes and feelings that a person needs in order to function well in his or her chosen field. Another important element of the MBTI is its set of four dimensions, which are catalogued by letters. They are: Dominance, Influence, Steadiness, and Performance. Dominance is closely related to extroversion and is represented by the letter A. The other elements, which all relate to the person's characteristic ways of behaving, are collectively referred to as the "affectional dominance." Besides the mental preferences of the individual, there is also another dimension of the MBTI called the MBTI typing indicator. This is a graphical depiction of the personality types that the person has. The MBTI can be used to work out which sort of personality the person possesses. The four different kinds are represented by the shapes and colors most commonly associated with them. However, anybody can become an "extravert" simply by choosing an extroverted personality type – it is not a trait that is genetically inherited. People whose MBTI results fall in the dominant type have three basic personality types. These are the extroverted, the intuitive, and the thinking people. The extroverted individual is oriented towards activity and experience, while the intuitive person is oriented towards noticing possible connections and taking those into account. The thinking person prefers to use reasoning to deal with problems, while extraverts use emotions and feelings as their primary tool. The instrument itself has no preference as to personality types. The MBTI examination measures five personality types. The most commonly used MBTI instrument is the Myers-Briggs Type Indicator, or MBTI.
It is a widely accepted measure that has been used for decades. More recently, the scale known as the Big Five personality model was introduced to accommodate the different preferences among individuals of different personality types. As a result, the MBTI became more complex, as the varying inclinations among different kinds of people could now be assessed. The MBTI instrument measures the preferences of seven different kinds of individuals, including extraversion, intuition, agreeableness, conscientiousness, emotional stability, and openness to experience. While it may at first seem that this is too much information, it is meant to be used as a guide to help everybody get a grip on their personal relationships. By allowing people to choose which factors they want to focus on, they will be able to better understand themselves and, in turn, better understand others. For example, those with extroverted personalities would have the MBTI variable "Private Eye" (understanding of self) as an important part of their personality. Those with the "poor" personality types would need the "conscientiousness" variable as a good way to understand their environment and others. People who are new to the MBTI and the idea behind it have found it valuable to take the test and give it to a friend or to themselves so that they can see where they might need to adjust their preferences. There are online providers of the MBTI tests. Anyone can access these tests and administer them, whether you are a doctor trying to assess a patient or a student who is looking for a way to better understand his or her environment. In this way, the MBTI letters can become a helpful tool for everyone.
https://dunkingpro.info/mbti-makes-use-of-a-body-of-four-letters-e-x-p-i-r-e-n-t-i-a-l-s-p-e-c-s-l-e-c-l-u-s-a-b-l/
The FPN should prioritize monitoring, strategic communication and knowledge sharing, new study shows When the Forestry Planning Network (FPN) was formalized at the Asia-Pacific Forestry Planning Workshop in early 2017, members discussed a number of challenges that planners continue to face in strategic planning. It was agreed that the FPN should conduct a gaps and needs study to further define its strategy and areas of support to Asia-Pacific forestry planners. The Baseline Review, Gaps and Needs Assessment of Forestry Strategic Planning in the Asia-Pacific Regionwas completed in early 2018. The study covered Cambodia, China, Fiji, Nepal, Philippines, Thailand and Vietnam with the potential to include other economies in future studies. Senior forestry expert Dr. Thomas Enters led the study. It comprised a self-assessment by FPN members, interviews during visits to four economies and a review of plans and related documentation. Based on the gaps and challenges in forestry strategic planning that were identified, the study recommended three main areas of focus for developing FPN support activities: 1. Strengthening monitoring and evaluation One of the biggest weaknesses in current strategic planning practices is in the monitoring and evaluation of implementing the strategic plans. The study recommended for the FPN to support economies in building monitoring and evaluation capacity and ability. One such idea is to assist economies in establishing frameworks for monitoring and evaluation to help planners meet monitoring requirements and policymakers to understand the reasons behind the speed of progress. It was also recommended that as a minimum, the monitoring and evaluation framework should focus on monitoring progress towards the desired conditions of key resources (e.g. biological diversity; soil, water and air; and social and economic benefits), which should be defined by individual economies. The framework should also be sufficiently flexible to accommodate shifting priorities over time. Other areas of support include training and capacity building activities in developing SMART (specific, measurable, achievable, realistic and time bound) indicators and in using the logical framework approach in order to build a solid understanding of strategic planning in general. In addition, the study recommended for FPN to identify opportunities to collaborate with other similar initiatives, such as the FAO’s Executive Forest Policy Course for the Asia and the Pacific. 2. Assisting with strategic communication One of the gaps identified in strategic planning is the communication of the plans and translating it to something that can be understood by the general public as well as decision makers beyond the forestry sector. In the medium-term, the study recommended that the FPN assists economies in developing communication plans that, in a timely and cost-effective manner, can make a case for forests and forestry. This involves building communication ability and skills including: - Supporting with the identification of target audiences at international, national and sub-national levels and their information needs. - Supporting with the selection of broad key messages, based on consultations with key stakeholders (i.e. audiences). - Recommending internal and external communication tools and proposing channels of communication. This may include leveraging on personnel in the media and journalists to strengthen outreach. 3. 
Building knowledge on external issues and linking them to national contexts Over the last decade, several developments at the international level including the UNFCCC Paris Agreement on climate change, the CBD and its Aichi targets, the UN SDGs, and the Bonn Challenge and the New York Declaration on Forests have also impacted forests, forestry, forest policy and/or strategic planning. Many foresters struggle with the constant emergence of new issues, concepts, discourses and themes. The study recommended for the FPN to respond to the demand for clear and easy-to-understand information on such issues and provide access to learning tools, training courses and relevant events. This mode of communication should also be used to enhance the understanding of issues such as: - Cross-sectoral planning (that goes beyond broader consultations); - Governance and rights; - Drivers of deforestation and forest degradation; and - Strengthening private sector engagement. The study also recommended for the FPN blog to review the work of other similar organizations to complement information-sharing efforts and avoid unnecessary duplication. Based on the recommendations of the study, the FPN has developed a work plan which was discussed with members during the annual Forestry Planning Workshop on 28 March 2018 in Beijing. The workshop also included sessions on introducing SMART indicators and their importance in monitoring. The workshop took place as part of the APFNet Conference on Forest Rehabilitation in the Asia-Pacific Region.
http://forestryplanning.net/2018/04/12/the-fpn-should-prioritize-monitoring-strategic-communication-and-knowledge-sharing-new-study-shows/
Available Online June 2015. - DOI - https://doi.org/10.2991/icecee-15.2015.178How to use a DOI? - Keywords - traffic flow; prediction; EMD; wavelet neural network - Abstract - The operation of the urban traffic exist a high degree of complexity and randomness. The key of Intelligent Transportation System is the real-time and accurate traffic flow prediction. Taking effective measures in a timely manner would prevent the occurrence of traffic accidents since traffic congestion brings much traffic inconvenience to people. Real-time traffic flow prediction plays a significant role in easing traffic congestion and guiding convenient travelling. Therefore, considering the characteristics of traffic flow, this paper presents a neural network, wavelet analysis method, and the EMD and wavelet neural network method respectively. Three methods are utilized in simulating the same set of traffic data and then the most effective way of solving traffic congestion is obtained by taking contrast analysis of simulation result. - Open Access - This is an open access article distributed under the CC BY-NC license.
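The hybrid idea sketched in the abstract (decompose the traffic flow series with EMD, model each component, then recombine the component forecasts) can be illustrated roughly as follows. This assumes the third-party PyEMD package and scikit-learn, and is only a sketch of the general approach, not the authors' implementation; the window size and network size are arbitrary.

```python
# Rough illustration of the EMD + neural network idea: decompose a traffic
# flow series into intrinsic mode functions (IMFs), fit a small regressor to
# each IMF, and sum the component forecasts. Assumes the third-party PyEMD
# and scikit-learn packages; window and network sizes are arbitrary.
import numpy as np
from PyEMD import EMD
from sklearn.neural_network import MLPRegressor

def lagged(series, window=6):
    """Build (X, y) pairs: predict the next value from the last `window`."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def fit_and_forecast(series, window=6):
    imfs = EMD()(series)                      # rows: IMFs plus residue
    forecast = 0.0
    for imf in imfs:
        X, y = lagged(imf, window)
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X, y)
        forecast += model.predict(imf[-window:].reshape(1, -1))[0]
    return forecast

# Synthetic example: a daily-cycle-like flow signal with noise
t = np.arange(288)                            # e.g. 5-minute counts over a day
flow = 200 + 80 * np.sin(2 * np.pi * t / 288) + np.random.randn(288) * 10
print("Next-interval forecast:", fit_and_forecast(flow))
```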
https://download.atlantis-press.com/proceedings/icecee-15/24663
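The abstract above does not include an implementation, so the following is only a rough, simplified sketch of the general idea behind decomposition-plus-prediction approaches of this kind: split the traffic series into a smooth component and a detail component, forecast each separately, and recombine. It substitutes an off-the-shelf wavelet decomposition (PyWavelets) and ridge regression for the paper's EMD and wavelet neural network, so it should not be read as the authors' method; the synthetic data and parameters are assumptions for illustration.

```python
import numpy as np
import pywt  # PyWavelets, assumed available (pip install PyWavelets)

def lagged_matrix(series, lags):
    """Build a regression matrix of `lags` previous values and the next-step targets."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    return X, y

def ridge_forecast(series, lags=8, alpha=1.0):
    """One-step-ahead forecast using ridge regression on lagged values."""
    X, y = lagged_matrix(series, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(lags), X.T @ y)
    return series[-lags:] @ w

rng = np.random.default_rng(0)
t = np.arange(600)
# synthetic "traffic flow" with a daily-like cycle (96 fifteen-minute intervals) plus noise
flow = 300 + 120 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 15, t.size)

# 1) decompose: keep only the coarsest wavelet approximation as the smooth component
coeffs = pywt.wavedec(flow, "db4", level=3)
smooth = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")[: flow.size]
detail = flow - smooth

# 2) forecast each component separately, then recombine
prediction = ridge_forecast(smooth) + ridge_forecast(detail)
print(f"one-step-ahead flow estimate: {prediction:.1f} vehicles per interval")
```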
This Taxman, Mr. Thief song page includes lyrics, videos, MP3s and pictures for Cheap Trick's song “Taxman, Mr. Thief.” “Taxman, Mr. Thief” is a song written by Rick Nielsen that was first released on Cheap Trick’s 1977 debut album, Cheap Trick. It is 4:16 in length.
http://cheaptrickfan.com/discography/taxman-mr-thief/
The American Public Works Association (APWA) is a professional association of more than 30,000 members across the United States and Canada, responsible for “making normal happen” in the communities they serve. While the structures may vary from area to area, all communities have public works at some level. Depending on the community, these quiet public works professionals can cover a wide range of responsibilities including drinking water production and distribution, wastewater collection and treatment, storm water management, roads and highways, parks and nature areas, public facilities and buildings, fleet management, transit, solid waste, airports, and more. The vital work that public works professionals do every day is so normalized that sometimes even other first responders forget about the role that public works plays to make sure that drinking water (as well as fire suppression and public health) is uninterrupted, wastewater is collected and treated (vital for public health and facility operation), storm water systems operate as designed (which can play a role in hazmat situations), and transportation networks are operational so all resources can get where they need to be. The role that public works professionals play in constructing, operating, and maintaining some of the critical lifelines for their community cannot be overstated, but protecting those critical lifelines is a collaborative effort between many groups in both the public and private sectors. Just as the Internet of Things (IoT) has become ingrained in everyday life for many people, it also continues to become more and more integrated into public works operational systems. Cybersecurity is a critical issue for public works for a number of reasons. The systems that are in place to construct, operate, and maintain critical infrastructure all have some component of technology involved. The ways in which connected technology is integrated range from things as simple as databases used to store information, to supervisory control and data acquisition (SCADA) systems, or control systems for irrigation or athletic field lighting. The challenge in protecting these systems from cyber threats comes from the varied ways in which they are connected to networks; as a result, the hardware and software barriers, as well as who is managing them, are not consistent. This is really where the collaborative effort of the public and private sectors must join alongside public works in assisting with cybersecurity. How those efforts look will depend on the systems and needs of individual public works departments, but even just assessing the cyber threats and vulnerabilities would be a good starting point. Beyond the cyber-actor threat, preparing for a disaster or other unplanned event is another layer of cybersecurity, in the sense that redundancy of these systems is also vital. Because computer network systems are just that, connected, any one of those computers serves as a potential failure point. This issue applies to more than just public works. For example, some public works radio communication systems, such as 800 MHz land mobile radio, are supported by networks, and if those systems are compromised for whatever reason, critical communication capabilities can be lost. Similarly, cellular networks provide vital communication and operational control of critical infrastructure. If the cellular network is overwhelmed or has failed, public works professionals still need to have operational control of critical infrastructure facilities.
For cellular networks, this is one of the essential reasons that including public works in the FirstNet program is vital to the operational readiness of our communities. For public works, it is not just about communication but about being able to monitor and operate critical infrastructure facilities via the cellular network. Outside of cybersecurity efforts for systems, one of the ways public works departments assist preparedness is by spreading cybersecurity awareness through social media accounts. While cybersecurity is not the core of the public works mission, we can help prepare our communities by sharing important messages with our followers. This would not be possible without the effort and production of educational materials (social media and otherwise) by other public, private, and nonprofit groups. This joint education model, in which the Department of Homeland Security and other partners (public, private, nonprofit) develop important messages and public works shares them, has many applications beyond cybersecurity, as it relates to all hazards. Leveraging these collaborations during non-emergencies, for topics such as cybersecurity, is a great way to develop and exercise these relationships for when disaster strikes. In addition to the cyber realm, physical security is another big factor for public works. Some people see physical security as fences, locks, and bollards, but it is so much more than that, and most of it happens every day. Ensuring the physical security of critical infrastructure systems must occur on many levels. One of the big challenges in security is based on who is encountering the critical infrastructure. For the general public it is more an issue of safety, and this is where features such as fences and locked doors are utilized. But along roads and highways there is another potential security measure that some may not normally recognize as physical security: guardrails and concrete walls. It is true that these devices are intended to keep errant vehicles on the road for the occupants' safety from whatever is off the road, but they also serve to protect roadside facilities such as communication buildings, radio towers, and road weather information systems from errant vehicles. For operators, contractors, or other authorized personnel involved in critical infrastructure, there are processes in place intended to provide a layer of security. Some communities require anyone working in or placing facilities in the public right of way to get a permit. This permitting process allows the local unit of government to know what proposed action will be taking place and who the contact is, and to manage the process as appropriate. If conflicts or other issues are identified as part of the permitting phase, then costly delays or emergencies during construction can be avoided. Another key security feature that is widely advertised is 811 Call Before You Dig (One Call Systems International). Calling before an excavation takes place is required under state law across the United States. When someone calls 811 and communicates their proposed excavation location and scope of work, the local utility companies in the area are notified of the work and required to mark the location of existing underground utilities using paint and/or flags. This way the person or company doing the work has an idea of the potential conflicts they need to watch for while digging.
The utility locate process is key to providing for the physical security of critical infrastructure, including cyber infrastructure, because so much of the communications backbone runs along fiber networks buried below the ground. One simple scoop of an excavator could rip a fiber line right out of the ground, and the system that line was supporting would be rendered useless until repairs are made or a redundant system is made operational. For public works, even the permitting process for work in the public right of way has a role in providing for critical infrastructure protection. The operations side of public works also has many layers of security that are seemingly basic or routine. For water (drinking and waste) treatment systems this includes regular monitoring and testing. Backflow preventers can be used to make sure that water does not flow the wrong way through a pipe system. The important thing to note with these devices and practices is that, because they are routine, the public works professionals doing the work must not become complacent. This shared mindset of security is an area that can provide a real opportunity for public works and other security professionals to come together. Public works professionals work to “make normal happen” in their communities on a daily basis and have a key role in preparing for, mitigating against, responding to, and recovering from disasters and other events (planned or unplanned). When things go bad, it is the critical lifelines that public works provides that need to be restored for actual recovery to begin. The role of cyber and physical security in preparing for all hazards is continually evolving. It is only through strong partnerships and collaboration at all levels, between public works professionals and others in the homeland security enterprise, that the United States can become a more resilient nation. Mark Ray is Director of Public Works for Crystal, Minn., and chairman of the Emergency Management Committee, American Public Works Association.
https://www.hstoday.us/subject-matter-areas/infrastructure-security/the-critical-role-of-cybersecurity-in-keeping-public-works-infrastructure-operational/
Research Area: (1) Optimization Algorithm (2) Machine Learning (3) Software Testing Topic: Sequence Covering Array (SCA): The evolution of State Transition Testing Abstract: State transition testing is a type of black-box testing that caters to the need to test the behaviour of a system when it changes state. However, state transition testing often generates test cases that test only individual transitions (i.e. a transition from one state to another). With the advancement of current technology, testing individual transitions alone seems insufficient: bugs or defects are usually triggered by combinations of more than one transition. As a result, researchers have extended the concept of state transition testing to n-switch state transition testing in order to cater to the need to test more than individual transitions. With n-switch state transition testing, testers can generate test cases that cover all n-transitions within the System Under Test (SUT). Although n-switch state transition testing is able to generate test cases that cover all n-transitions, it can potentially omit negative tests (i.e. testing invalid transitions). To overcome this limitation, researchers came up with the idea of applying sequence covering arrays (SCAs) in testing state transitions. The talk will cover the evolution of state transition testing and at the same time highlight the current state-of-the-art in SCA research.

Keynote Speaker 2 Prof. Daowen Qiu School of Data and Computer Science, SYSU, China Topic: A distributed semi-quantum computing model: A method of quantum-classical hybrid computing Abstract: It is still difficult to design a large-scale universal quantum computer nowadays. Even for some special-purpose quantum computers, the cost of manufacture is very high. So another way is to consider how to use small quantum computers to solve some problems with an essential advantage over classical computers. In this talk, I would like to present a distributed semi-quantum algorithm for phase estimation which has a better time complexity even than the conventional quantum algorithm. The basic idea is to use distributed micro quantum computers to process, respectively, a small quantity of quantum states and then communicate with a given classical computer via a classical channel to transport the results. Furthermore, we will mention other quantum-classical hybrid devices: quantum finite automata with classical states, and semi-quantum key distribution.

Keynote Speaker 3 Dr. Nirmalya Thakur University of Cincinnati, Cincinnati, Ohio, USA Research Interests: Human-Computer Interaction, Data Science, Machine Learning, Artificial Intelligence, Internet of Things, Assistive Systems, Affective Computing and their applications in the context of solving real-world problems to improve the quality of life of users in a Smart Home, with a specific focus on the elderly population. Topic: Applications of Distributed Artificial Intelligence for Enhancing User Experience in Smart Homes Abstract: To meet the needs of the constantly increasing world population, advanced urban development policies equipped with sound infrastructures and modern technologies, such as Internet of Things (IoT)-based Smart Homes, are necessary to create better living experiences in people's day-to-day lives. In these IoT-based living spaces, human behavior will involve working and collaborating with gadgets, technologies, robots and other smart agents.
The user experience and effectiveness of human-technology partnerships with these systems depend on the multimodal components of user interaction. These components include cognitive load, mental models, behavioral responses, touch-based components, gesture-based interactions, emotional responses, and verbal and non-verbal components, to name a few. Distributed Artificial Intelligence and its applications in IoT-based Smart Homes have the potential to address and augment these user interactions with the aim of enhancing user experience in the context of the different activities performed in the IoT environment. This talk aims to introduce the concept of Distributed Artificial Intelligence and discuss its applications in IoT-based living spaces to augment multimodal components of user interactions, enhance user experiences, improve quality of life and foster human-technology partnerships in the future of Smart Homes. Several state-of-the-art works in these fields will be reviewed and discussed. Recent and ongoing research in these fields at the University of Cincinnati will also be briefly outlined. The talk will conclude by presenting some of the open challenges in this field to the audience.
http://ispds.org/keynote
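As a rough illustration of the sequence covering array idea mentioned in the first keynote abstract, the sketch below greedily builds a small set of event orderings in which every ordered pair of events appears, in order, in at least one test sequence (strength 2). It is a generic, hypothetical example written for this summary, not code from the talk; the state names are invented, and real SCA generators use considerably more sophisticated construction strategies.

```python
import itertools
import random

def uncovered_pairs(events, tests):
    """Ordered pairs (a, b) not yet appearing, in order, in any test sequence."""
    pairs = set(itertools.permutations(events, 2))
    for seq in tests:
        for i, j in itertools.combinations(range(len(seq)), 2):
            pairs.discard((seq[i], seq[j]))
    return pairs

def greedy_sca(events, candidates=200, seed=0):
    """Greedily pick random permutations that cover the most remaining pairs."""
    rng = random.Random(seed)
    tests = []
    while True:
        remaining = uncovered_pairs(events, tests)
        if not remaining:
            return tests
        best = max(
            (rng.sample(events, len(events)) for _ in range(candidates)),
            key=lambda seq: sum(
                (seq[i], seq[j]) in remaining
                for i, j in itertools.combinations(range(len(seq)), 2)
            ),
        )
        tests.append(best)

states = ["idle", "loading", "active", "paused", "closed"]
for test in greedy_sca(states):
    print(" -> ".join(test))
# every ordered pair of states appears, in order, in at least one printed sequence
```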
The utility model discloses an environment-friendly crushing and granulating device with a high granulation rate. The device comprises a box body, a first-layer crushing structure and a second-layer crushing structure, wherein the two crushing structures are arranged one above the other in the inner cavity of the box body, and a distributor is arranged in the space between them. The equipment is divided into two layers, and steel plate screens with different apertures are selected according to the properties of the materials; an embossing plate mounted on the main shaft crushes the materials in a wiping and pressing manner, and after primary coarse crushing and secondary fine crushing the crushed materials show a high granulation rate and high production efficiency, so that the production cost is correspondingly reduced. A stamped steel plate screen with increased thickness is adopted; because the rotating speed of the equipment is low, the screen does not break and the service life is long. The low rotating speed also keeps the material temperature essentially unchanged, so the material remains cool and can be packaged directly. The sealing effect is good, dust is avoided, and environment-friendly production requirements are met.
This painting is featured in Heather Chontos' "Lontano" exhibition taking place at The Colony Palm Beach. Chontos’ collection of paintings consists of mixed media works on canvas created during her Artist-in-Residency at Palazzo Monti in Brescia, Italy. As a painter, Chontos focuses her studio practice on the ambiguity and unique beauty of Abstract Expressionism. She creates artwork that explores compositions of invisible light, like a secret language only spoken through her various mediums, palettes and forms. Focusing on gestures and connecting color through relentless movement, Chontos’ resulting forms are interpretations of her surrounding environment, delicate details, vulnerable landscapes and moments of light. Through these works she inspires an intimate dialogue with the viewer and her emotional connection to impulsive mark making. She is driven by an intuition that guides each gesture and builds the complex movement between the many layers on her canvas.
https://www.1stdibs.com/art/mixed-media/heather-chontos-nuvola/id-a_3882832/
Minerals are substances formed through natural processes in the Earth, with a definite chemical composition and crystal structure. Over 2,000 minerals have been identified by earth scientists, and reference tables describe the most important ones, their chemical composition, and their classification into groups. The Earth's crust is made up of about 95% igneous and metamorphic rocks; granite is the most common igneous rock found at the Earth's surface, and igneous rocks are identified in part by their mineral composition. Weathering of primary minerals at the Earth's surface changes the mineral composition of rocks and soils, and a mineral's stability at the surface depends on its composition. Diatomaceous earth is a naturally occurring siliceous sedimentary mineral compound formed from the microscopic skeletal remains of unicellular algae-like plants called diatoms. Estimates of the composition of the Earth's core and mantle are constrained by the abundances of the elements that compose prevalent mantle minerals and by how seismic waves behave in the mantle.
https://www.municipality-watchdog.co.za/crushers/19066/earth-mineral-composition-.html
Cardiff School of Engineering, Cardiff University, Cardiff CF24 3AA, UK e-mail: [email protected] LSIS, Arts et Metiers ParisTech, Lille 59046, France Arts et Metiers ParisTech, Lille 59046, France Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332 1Corresponding author. Contributed by the Manufacturing Engineering Division of ASME for publication in the JOURNAL OF MICRO- AND NANO-MANUFACTURING. Manuscript received January 26, 2017; final manuscript received July 3, 2017; published online September 28, 2017. Assoc. Editor: Martin Jun.

This paper reports a feasibility study that demonstrates the implementation of a computer-aided design and manufacturing (CAD/CAM) approach for producing two-dimensional (2D) patterns on the nanoscale using the atomic force microscope (AFM) tip-based nanomachining process. To achieve this, simple software tools and neutral file formats were used. A G-code postprocessor was also developed to ensure that the controller of the AFM equipment utilized could interpret the G-code representation of tip path trajectories generated using the computer-aided manufacturing (CAM) software. In addition, the error between a machined pattern and its theoretical geometry was also evaluated. The analyzed pattern covered an area of 20 μm × 20 μm. The average machining error in this case was estimated to be 66 nm. This value corresponds to 15% of the average width of the machined grooves. Such machining errors are most likely due to the flexible nature of AFM probe cantilevers. Overall, it is anticipated that such a CAD/CAM approach could contribute to the development of a more flexible and portable solution for a range of tip-based nanofabrication tasks, which would not be restricted to particular customised software or AFM instruments. In the case of nanomachining operations, however, further work is required first to generate trajectories that can compensate for the observed machining errors.

Figures: designed 2D patterns; the particular implementation of the CAD/CAM approach adopted; SEM micrographs of patterns 1, 2 and 3; the seven curved sections with their machining order and direction; comparison of the achieved tip trajectory (gray) with the theoretical one (black).
http://micronanomanufacturing.asmedigitalcollection.asme.org/article.aspx?articleID=2650821
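The paper's own postprocessor is not reproduced in this summary, so the following is only a hypothetical, minimal sketch of what converting a 2D tip path into G-code moves might look like. The coordinates, feed rate, engage depth and the helper name `polyline_to_gcode` are all invented for illustration and will not match the authors' implementation or the command set of any particular AFM controller.

```python
def polyline_to_gcode(points_um, feed_mm_per_min=0.3, engage_depth_mm=-0.00005):
    """Convert a 2D tip path given in micrometres into simple linear G-code moves."""
    lines = [
        "G21 ; units in millimetres",
        "G90 ; absolute coordinates",
    ]
    x0, y0 = points_um[0]
    lines.append(f"G00 X{x0 / 1000:.6f} Y{y0 / 1000:.6f} ; rapid move to start of path")
    lines.append(f"G01 Z{engage_depth_mm:.6f} F{feed_mm_per_min:.3f} ; engage the tip")
    for x, y in points_um[1:]:
        lines.append(f"G01 X{x / 1000:.6f} Y{y / 1000:.6f} ; scratch along the path")
    lines.append("G00 Z0.000100 ; retract the tip")
    return "\n".join(lines)

# a 20 um x 20 um square, matching the pattern size analysed in the paper
square_um = [(0, 0), (20, 0), (20, 20), (0, 20), (0, 0)]
print(polyline_to_gcode(square_um))
```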
As educators, we understand the importance of ensuring that children develop a sense of belonging when starting at our learning service. We are equipped with knowledge, strategies, skills and confidence when supporting young children as their sense of belonging develops over time. We know this sense of belonging is vital in order for children to learn and thrive in our environment. What we may not be as confident with, or knowledgeable about, are ways to make sure that the families of children also develop a sense of belonging in our environment. We know this is important, and we no doubt have strategies for working to build relationships with families, but for some educators this does not come naturally. They have studied child development. They are skilled at working with children. They are confident working with children. Strategies for working with adults were not something that was covered in depth at university. We know that to truly work in partnership with families, we must have a trusting and respectful relationship with them. This can be hard to do when parents dash in and out of your learning service to drop off and pick up their child, often pressed for time. As educators, you may need to be creative in finding ways to connect with, and establish relationships with, families. Finding ways they can develop a sense of belonging in your environment is a great start.
- Have a safe space where families can put their belongings so that they are free to stay and spend time with their child. Spending time in your environment will help them feel more comfortable, and will open up opportunities for conversation with you.
- Have facilities for tea and coffee for parents. Have you considered having a dining room table where families can sit, catch up with each other, and have a cup of tea or coffee before they leave for the day? It is a great way to foster not only individuals' sense of belonging, but also to support parents in getting to know each other.
- A family photo wall is a tried and tested way to support families to develop a sense of belonging in your space. It shows that they matter, and it helps their child feel that their family belongs.
- Provide a space where mothers can breastfeed their children comfortably.
- Invite parents to share their skills, knowledge and passions. You probably have a wealth of assets amongst your families. Often all it takes is a simple invitation to share for parents to realise that your learning service is a space where everybody works collaboratively.
- Create a parent library of books and resources on a range of different topics that are relevant to parenting.
- Host a shared dinner or get-together with the families in your learning service. A relaxed, unhurried atmosphere allows educators to take more time to chat with parents. It is also a fantastic opportunity for families to get to know each other!
- Reply to parents' stories on Storypark. Storypark is a fantastic way to connect with families that don't spend a lot of time in your learning service. The conversations that happen in the comments section can be so valuable and informative!

There are no doubt many other creative and inspiring ideas for supporting families to feel a true sense of belonging in your learning service. Feel free to add them in the comments section below!
https://blog.storypark.com/2016/08/ways-help-families-feel-sense-belonging-learning-service/
Crises in a Globalised World

In a globalised world, basic malfunctions of economic, social, political, and ecological systems are often interconnected in complex ways that endow crises with a new quality. If we are to cope with crises successfully, a comprehensive assessment of their causes is needed as a basis for concerted action. Such an advance is only possible if exchange between scholars, political decision-makers, and the wider public is intensified. Against this background, the distinctive mission of the Leibniz Research Alliance ‘Crises in a Globalised World’ is to cut across three sets of boundaries: between disciplines, between the different thematic fields in which crises figure, and between the academic, political, and societal spheres. The Alliance’s strategic goal is to gather together the expertise that the different participating institutes have accumulated on financial and debt crises, environmental crises, food crises, and crises of political orders, and to use this to generate generalisable knowledge about the processes, dynamics, and patterns of crises, about their systemic character, and about the suitability and social acceptability of policies devised to cope with them. This knowledge can then be applied in the resolution of future crises.

Ongoing Projects

Governing the Corona Crisis. A cooperation project of the LRA ‘Crises in a Globalised World’. The objective of this project is to assess the impacts of government responses to the coronavirus pandemic using key indicators of food security and socio-economic conditions in rural and peri-urban areas of the three countries Brazil, Iran and Tanzania, and to compare findings between the countries and with Germany.

Determinants of Iran’s Red Meat Crisis: A Multidisciplinary Analysis of Supply Chain Governance. The meat crisis, and the lack of impact of the countermeasures taken, show that a comprehensive study of the meat supply chain (MSC) in Iran is essential. The project aims to use production network analysis (PNA) and a global value chain governance approach to understand the factors exacerbating the meat crisis in Khorasan Razavi province on Iran's eastern border.

Knowledge transfer

The Health Data Space between the competing priorities of medical care, academic freedom, and data security. Crisis Talk | Stream & Event Review of 1 July 2021. With Prof Dr Indra Spiecker genannt Döhmann (Goethe University Frankfurt), Axel Voss (Member of the European Parliament) and Yiannos Tolias (European Commission, DG SANTE, legal lead on AI and AI liability).

Lessons from the pandemic – EU European and Global Crisis Management. Crisis Talk | Stream & Event Review of 23 June 2021. With Prof Dr Nicole Deitelhoff (Leibniz Peace Research Institute Frankfurt / Speaker, Leibniz Research Alliance ‘Crises in a Globalised World’), Jens Gieseke (tbc) (Member of the European Parliament) and Kim Eling (Deputy Head of Cabinet of the Commissioner for Crisis Management, European Commission).
https://www.leibniz-krisen.de/en.html?option=com_acym&ctrl=frontmails&task=setNewIconShare
As the winners are announced, here are the highlights from the prestigious award’s nine shortlisted photographers, who tell visual stories from the world’s little-known corners Women fix a volleyball net at the state prison in Maracaibo, Venezuela, in 2018. Photograph by the main category winner, Ana María Arévalo Gosen ANA MARÍA ARÉVALO GOSEN Sara Rumens Thursday November 04 2021, 7.00pm, The Times Today Leica announces the winners of the prestigious Oskar Barnack award. Now in its 41st year, it continues to attract the world’s exceptional photographers who each present a body of their personal work; their projects are carried out over years, slotted in between editorial and commercial assignments, until a project is ready. They cover very different subject matter, but what they share is the ability to look deeply, taking the time to submerge themselves in their projects so they can share these stories with us, building the trust of the people in the images so we might know what is happening in the world around us. The Leica Oskar Barnack award (LOBA) comes with a considerable prize of €40,000 (£34,000) and a Leica camera —
Urban and regional planning happens within the context of the principles of sustainability as enshrined in Agenda 21, the global blueprint for sustainability agreed at the United Nations Conference on Environment and Development in 1992 (the Rio Earth Summit). These principles are:
- The precautionary principle - where there are threats of serious or irreversible damage to the community's ecological, social or economic systems, a lack of complete scientific evidence should not be used as a reason for postponing measures to prevent environmental degradation. In some circumstances this will mean actions need to be taken to prevent damage even when it is not certain that damage will occur.
- The principle of intergenerational equity - the present generation must ensure that the health, integrity, ecological diversity, and productivity of the environment is at least maintained, and preferably enhanced, for the benefit of future generations.
- The principle of conserving biological diversity and ecological integrity - which aims to protect, restore and conserve native biological diversity and to enhance or repair ecological processes and systems.
- The principle of improving the valuation and pricing of social and ecological resources - the users of goods and services should pay prices based on full life-cycle costs (including the use of natural resources at their replacement value, the ultimate disposal of any wastes and the repair of any consequent damage).

Zenith Town Planning Pty Ltd acknowledges and adheres to the principles of sustainability. The four core principles, given credence through Agenda 21, guide all research and recommendations. Triple bottom line assessment methods are applied to all projects to ensure that the results are socially, economically and environmentally responsible. In carrying out business and professional activities, Zenith Town Planning Pty Ltd also aims to contribute to realising the Sustainable Development Goals of the UN 2030 Agenda for Sustainable Development. These goals will build an inclusive, sustainable and resilient future for people and the planet through economic growth, social inclusion and environmental protection. Allen, the director of Zenith Town Planning Pty Ltd, is the leader of the Eurobodalla Branch of Surfrider Foundation Australia, a not-for-profit environmental group dedicated to the protection of Australia's waves and beaches.
http://zenithplan.com.au/index.php/about/sustainability
This study provides an overview of practices for quantifying and reporting avoided energy-water costs from demand-side measures. It also summarizes the regulatory guidance for incorporating water savings into cost-effectiveness screening for energy efficiency programs. Resources Showing results 1 - 100 of 134 The adoption of intelligent efficiency applications is increasing across multiple sectors of the economy. This report analyzes over two dozen of these applications in the buildings, manufacturing, transportation, and government sectors. We describe the technologies involved, characterize their use, and quantify their deployment. We also look at several enabling and cross-cutting technologies and the use of intelligent efficiency in utility-sector energy efficiency programs. This guide is designed to help environmental agencies better understand the array of Lean methods and when to consider using each method. The guide focuses primarily on Lean production, which is an organizational improvement philosophy and set of methods that originated in manufacturing but has been expanded to government and service sectors. This report updates ACEEE's 2013 assessment of multifamily energy efficiency programs in US metropolitan areas with the most multifamily households. Using housing, policy, and utility-sector data from 2014 and 2015, this report documents how these programs have changed in the context of dynamic housing markets and statewide policy environments. The report also offers an analysis of the number, spending, offerings, and targeted participants of current programs and their potential for further expansion. Putting Your Money Where Your Meter Is: A study of pay for performance energy efficiency programs in the United States This report examines the history of pay-for-performance (P4P) energy efficiency approaches. As the report describes, there is a diverse spectrum of pay-for-performance programs but, at the most basic level, these programs track and reward energy savings as they occur, usually by examining data from a building's energy meters -- as opposed to the more common approach of estimating savings in advance of installation and offering upfront rebates or incentives in a lump-sum payment. The report finds that P4P has some important opportunities for increasing energy savings, but also key limitations that will need to be better understood through piloting and experimentation. Trends in the Program Administrator Cost of Saving Electricity for Utility Customer-Funded Energy Efficiency Programs This technical brief presents trends in the cost of saved electricity for energy efficiency programs between 2009 and 2013. For this report, LBNL collected and analyzed more than 5,400 program years of data collected in 36 states from 78 administrators of programs funded by customers of investor-owned utilities. These administrators provide efficiency programs to customers of investor-owned utilities that serve about half of total U.S. electricity load. This report focuses on six energy efficiency areas for state and local governments to improve the energy efficiency of existing commercial and multifamily buildings, which include strengthening market demand and expanding public-private partnerships. The NorthernSTAR and U.S. Department of Energy Building America Program partnership investigated a new model to deploy building science-guided performance solutions to homeowners. This research explored three aspects to market delivery: 1. 
Understand the homeowner's motivations regarding investing in building science-based performance upgrades. 2. Determine a rapidly scalable approach to engage large numbers of homeowners directly through existing customer networks. 3. Access a business model that will manage all aspects of the contractor-homeowner performance professional interface to ensure good upgrade decisions over time. Utilities and regulators increasingly rely on behavior change programs as essential parts of their demand side management (DSM) portfolios. This report evaluates the effectiveness of currently available programs, focusing on programs that have been assessed for energy savings. This report focuses on behavior change programs that primarily rely on social-science-based strategies instead of traditional approaches such as incentives, rebates, pricing, or legal and policy strategies. The objective is to help program administrators choose effective behavior change programs for their specific purposes. Best Practices in Developing State Lead-by-Example Programs and Considerations for Clean Power Plan Compliance This paper is intended to guide state governments on Clean Power Plan compliance and shows how leading by example in state and local government programs communicates an agency’s commitment to reducing energy consumption, protecting facilities, and protecting taxpayer dollars. This report details opportunities for scaling up program activity and increasing savings from programs reaching the people who need it most. It discusses best practices from existing programs for overcoming many of the key challenges that program administrators face, including how to address housing deficiencies that prevent energy efficiency upgrades, how to address cost effectiveness challenges, and how to serve hard-to-reach households. This study focused on barriers to, and opportunities for, solar photovoltaic energy generation; opportunities for access to other renewable energy by low-income customers; contracting opportunities for local small businesses in disadvantaged communities; and access by low-income customers to energy efficiency and weatherization investments, including those in disadvantaged communities. It also provides recommendations on how to increase access to energy efficiency and weatherization investments for low-income customers. This report explores how governments and energy efficiency implementers could help stakeholders better analyze and act upon building performance data to unlock savings. This paper presents results from three surveys of homeowners, renters, and contractors, which compared their perceptions and priorities for healthy housing to the principles of indoor air and environmental quality. Survey results indicate that nearly one quarter of homeowners had some concern about healthy-home problems or risks; homeowners cited indoor air quality issues as their leading concern, followed by water quality, harmful materials and chemicals, and indoor environmental quality (such as noise or light pollution). Behavioral change programs are not necessarily a separate category of efficiency efforts; rather, behavioral approaches can be effectively integrated into all programs in residential, commercial, or industrial settings. As increased connectivity within homes and businesses expands opportunities to provide energy information, the role of behavior will likely become even more prominent. Consortium for Energy Efficiency, Inc. (CEE) provides this webpage dedicated to behavior change resources.
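To make the pay-for-performance idea described above a little more concrete, here is a small, hypothetical sketch of the basic meter-based calculation such programs rely on: compare weather-normalized consumption before and after an upgrade and pay on the measured difference. The meter readings, degree-day figures, payment rate and function names are invented for illustration and do not come from any of the reports summarized here; real programs use far more rigorous normalization methods.

```python
def normalized_use(monthly_kwh, monthly_hdd):
    """Very crude weather normalization: average kWh per heating degree day."""
    return sum(k / d for k, d in zip(monthly_kwh, monthly_hdd)) / len(monthly_kwh)

def metered_savings(pre_kwh, pre_hdd, post_kwh, post_hdd, annual_hdd=6000):
    """Estimated annual kWh saved, based on metered pre/post consumption."""
    return (normalized_use(pre_kwh, pre_hdd) - normalized_use(post_kwh, post_hdd)) * annual_hdd

# illustrative winter meter readings (kWh) and heating degree days, before and after a retrofit
pre_kwh, pre_hdd = [1200, 1350, 1100], [800, 950, 700]
post_kwh, post_hdd = [950, 1050, 880], [790, 940, 710]

saved_kwh = metered_savings(pre_kwh, pre_hdd, post_kwh, post_hdd)
incentive = saved_kwh * 0.05  # a hypothetical $0.05 payment per metered kWh saved
print(f"estimated savings: {saved_kwh:.0f} kWh/year -> pay-for-performance incentive ${incentive:.2f}")
```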
Critiques of Energy Efficiency Policies and Programs: Some Truth But Also Substantial Mistakes and Bias Several recent studies purport to show that particular energy efficiency programs and policies do not work or are too expensive. This short paper is written for people who are not evaluation experts and are trying to understand what conclusions they can take from these studies. We examine many of these papers and find that while they do have some useful findings, they often include a variety of unreasonable assumptions or outright mistakes that undermine their conclusions. Based on this review, we offer several recommendations on ways we can constructively move forward. Through field-testing and analysis, this project evaluated whole-building approaches and estimated the relative contributions of select technologies toward reducing energy use related to space conditioning in new manufactured homes. Three lab houses of varying designs were built and tested side-by-side under controlled conditions in Russellville, Alabama. The tests provided a valuable indicator of how changes in the construction of manufactured homes can contribute to significant reductions in energy use. The primary objective of the quantitative research phase of this survey was to gather market-based feedback and insights to help the industry better serve its constituents, including insights into the major challenges the industry is facing, the support that organizations could provide, and feedback on how industry organizations could add value for constituents in the future. This literature review describes what is currently known about the occupant health benefits resulting from residential energy efficiency or work that is consistent with home performance upgrades. Of particular interest are the occupant health impacts associated with work typically conducted by the home performance industry, such as: air sealing and insulation; properly sized, selected, matched, and installed energy efficient heating, ventilation, and air conditioning (HVAC) systems; identification and correction of moisture problems; proper whole-house and room ventilation; lighting; and additional services including the replacement of appliances; measurement and installation of whole-house and room air filtration systems (e.g., air purifiers); and basic pest exclusion. The intent of this literature review is to examine research that assessed work that would not be expected to harm residents or workers. A recent cost vs. value report compared the average cost for popular remodeling projects with the value those projects retain at resale in 100 different U.S. markets. This Home Energy article discusses how one of the most valuable remodeling options is one you can’t see: energy efficiency. Indoor Air Quality in Homes: State Policies for Improving Health Now and Addressing Future Risks in a Changing Climate This report discusses indoor air quality issues, including wildfire smoke, dampness, and mold, and the effect of energy efficiency upgrades on these health-related issues. The report describes current state policies and programs in these areas, highlighting approaches for consideration by other jurisdictions. This publication explores the behavioral factors behind individual homeowners' use of energy, and what might change those behaviors.
The chapters cover: (1) Leverage Points for Achieving Sustainable Consumption in Homeowner Energy Use; (2) Evaluating the Theoretical Justification for Tailored Energy Interventions; (3) Quantifying the Value of Home Energy Improvements; (4) Considering the Effect of Incorporating Home Energy Performance Ratings Into Real Estate Listings; (5) Energy Efficiency 101: Improving Energy Knowledge in Neighborhoods; (6) Enhancing Home Energy Efficiency Through Natural Hazard Risk Reduction: Linking Climate Change Mitigation and Adaptation in the Home; (7) Leveraging the Employer-Employee Relationship to Reduce Greenhouse Gas Emissions at the Residential Level; and (8) Increasing the Effectiveness of Residential Energy Efficiency Programs. This document features lessons learned shared by Better Buildings Residential Network members during Peer Exchange Calls held during Autumn 2016. This publication summarizes lessons learned from Peer Exchange Calls about how energy efficiency programs and partners can leverage timing to engage homeowners. Lifting the High Energy Burden in America's Largest Cities: How Energy Efficiency Can Improve Low-Income and Underserved Communities Energy burden is the percentage of household income spent on home energy bills. In this report, ACEEE, along with the Energy Efficiency for All coalition, measures the energy burden of households in 48 of the largest American cities. The report finds that low-income, African-American, Latino, low-income multifamily, and renter households all spend a greater proportion of their income on utilities than the average family. The report also identifies energy efficiency as an underutilized strategy that can help reduce high energy burdens by as much as 30%. Given this potential, the report goes on to describe policies and programs to ramp up energy efficiency investments in low-income and underserved communities. Massachusetts Special and Cross-Cutting Research Area: Low-Income Single-Family Health- and Safety-Related Non-Energy Impacts (NEIs) Study This study assesses and monetizes a sub-set of non-energy benefits experienced by recipients of energy efficiency services residing in income-eligible households in MA, including: reduced asthma; reduced cold-related thermal stress; reduced heat-related thermal stress; reduced missed days at work; reduced use of short-term, high interest loans; increased home productivity due to improvements in sleep; reduced carbon monoxide poisoning; and reduced home fires. This report was developed to help inform national stakeholders about the strategies that have been used to achieve deep energy savings in the multifamily housing sector through energy efficiency upgrades. These strategies could be used as models in areas where utility program administrators and policymakers seek to achieve deep energy savings in the multifamily building stock for the purposes of reducing energy costs, creating comfortable and healthy homes, meeting regulatory requirements, or reducing the environmental impacts of energy consumption. This report includes a national multifamily market characterization, barriers and opportunities for program and policy efforts, and eight exemplary case studies from across the country. The research described in this report holds great potential to significantly improve the process for including energy efficiency in developing and implementing federally funded multifamily rehabilitation projects through the USDA, the U.S. 
Housing and Urban Development (HUD) Low Income Housing Tax Credit, and other programs. Non-Energy Benefits of Energy Efficiency and Weatherization Programs in Multifamily Housing: The Clean Power Plan and Policy Implications This literature review explores how residential energy efficiency and health interventions can confer positive economic, health, and environmental non-energy benefits at the individual and community level, thereby leading to significant savings while improving the quality of life and resiliency of low income households. The paper closes with policy recommendations to unlock the savings of non-energy benefits from smart energy efficient investments. Residential air-source heat pumps (ASHP) are a heating and air-conditioning technology that use electricity to provide a combination of space heating and cooling to homes. A new generation of ASHPs has come to market over the past five years. This report evaluates the key market barriers as well as potential opportunities to leverage. Based on an assessment of the regional ASHP market, it is clear that while ASHPs have established a viable and growing market, there remains a significant opportunity to further accelerate adoption of the technology and in the process achieve energy and cost savings to the Northeast and Mid-Atlantic region. This report, informed by leading research and real-world examples, highlights practical online and in-person tactics that contractors can use to promote social interaction and social comparison among homeowners to make energy upgrades a "must-have" in U.S. homes. Reaching More Residents: Opportunities for Increasing Participation in Multifamily Energy Efficiency Programs The multifamily sector can be hard to reach when it comes to energy efficiency programs. Besides being diverse and complex, the sector presents a unique set of challenges to efficiency investments. The result is that multifamily customers are often underserved by energy efficiency programs. Drawing on data requests and interviews with program administrators, this report summarizes the challenges to program participation and identifies best practices that programs can use to reach and retain large numbers of multifamily participants. Energy efficiency is good for you--and for the air you breathe, the water you drink, and the community in which you live. This fact sheet shows how saving energy reduces air and water pollution and conserves natural resources, which in turn creates a healthier living environment for people everywhere. It includes the stories of a family in Pennsylvania and a hospital in Florida. Shared Renewable Energy for Low- to Moderate-Income Consumers: Policy Guidelines and Model Provisions This report provides information and tools for policymakers, regulators, utilities, shared renewable energy developers, program administrators and others to support the adoption and implementation of shared renewables programs specifically designed to provide tangible benefits to low income and moderate income individuals and households. This report explains the psychology of individual energy efficiency actions, and how large scale behavior change programs can use this research to reduce greenhouse gas emissions. This report identifies sustainable funding sources for asthma-related home interventions. It examines the business case and return on investment for interventions that remedy triggers that can exacerbate asthma. 
This DOE webpage provides an introduction to how home energy management systems can fit into broader smart home and grid modernization efforts. A Changing Landscape: The Regional Roundup of Energy Efficiency Policy in the Northeast and Mid-Atlantic States This report represents NEEP’s annual assessment of the major policy developments of 2014, as well as its look into the immediate future, where NEEP gauges states’ progress toward capturing cost-effective energy efficiency as a first-order resource. While looking at the region as a whole, NEEP also provides summary and analysis of some of the biggest building energy efficiency successes and setbacks from Maine to Maryland, including significant energy efficiency legislation and regulations and changes in funding levels for energy efficiency programs. Accessing Secondary Markets as a Capital Source for Energy Efficiency Finance Programs: Program Design Considerations for Policymakers and Administrators This report is targeted at both policymakers and program administrators who are less familiar with secondary markets and their significance in the energy efficiency context, as well as those that are more familiar with these concepts and may be actively considering secondary market strategies. It covers how efficient access to capital from secondary markets (reselling energy loans to investors to replenish program funds) is being advanced as an important enabler of the energy efficiency industry “at scale.” This series of 19 tip sheets is based on the experience and expertise of EPA’s Climate Showcase Communities. The tip sheets cover a wide range of topics, such as marketing and communications (effective messaging, traditional media strategies, community-based social marketing, and testimonial videos) and working with specific types of stakeholders (institutional partners, contractors, experts, utilities, early adopters, volunteers). Effective Practices for Implementing Local Climate and Energy Programs: Identifying and Working with Experts This tip sheet was inspired by the experiences and expertise of EPA’s Climate Showcase Communities (CSCs). It focuses on working with experts and highlights best practices, helpful resources, and recommended resources for other communities interested in pursuing similar projects. Effective Practices for Implementing Local Climate and Energy Programs: Working with Institutional Partners This tip sheet was inspired by the experiences and expertise of EPA’s Climate Showcase Communities (CSCs). It focuses on working with institutional partners and highlights best practices, helpful resources, and recommended resources for other communities interested in pursuing similar projects. Energy and Environment Guide to Action: State Policies and Best Practices for Advancing Energy Efficiency, Renewable Energy, and Combined Heat and Power The Guide to Action provides in-depth information about over a dozen policies and programs that states are using to meet their energy, environmental, and economic objectives with energy efficiency, renewable energy, and combined heat and power. Each policy description is based on states’ experiences in designing and implementing policies, as documented in existing literature and shared through peer-exchange opportunities provided to states by EPA’s State Climate and Energy Program.
Energy Efficiency Behavioral Programs: Literature Review, Benchmarking Analysis, and Evaluation Guidelines This literature review and benchmarking analysis focuses on electric and gas utility-implemented Conservation Improvement Programs (CIP) in Minnesota that used behavioral techniques. The objective of this effort was to provide the State of Minnesota with information necessary to make informed decisions about the design, evaluation, and claimed savings approaches for these programs. Exploring Potential Impacts of Weatherization and Healthy Homes Interventions on Asthma-related Medicaid Claims and Costs in a Small Cohort in Washington State This report presents results from an analysis of asthma-related health benefits of health and home performance interventions using data collected from 49 households in Northwestern Washington State from 2006 to 2013. Cool Choices layered an experiment atop four engagement games where they used game mechanics to identify high energy users and encourage those high energy users (along with other game participants) to participate in Focus on Energy residential programs. This research effort, called "Find and Flip," explored whether a gamification strategy could identify high energy users and then drive them to Focus on Energy programs. This guide provides recommended benchmarking metrics for measuring residential program performance. This report discusses how low income communities can be transformed through energy efficiency. Many of our fellow citizens face energy costs that are excessive compared with their overall incomes, yet they cannot afford to invest in the energy efficiency measures that would reduce their energy cost burden. Families nationwide are often forced to choose between necessities such as food or medications and paying their energy bills to heat and cool their homes. Private and public resources are available to help Americans, but these resources reach only a small percentage of underserved households. This online guide provides step-by-step guidance and resources for local governments to plan, implement, and evaluate climate, energy, and sustainability projects and programs to reduce greenhouse gas emissions and adapt to climate change impacts. It captures lessons learned and effective strategies used by local governments, breaks down program implementation into concrete steps, and curates resources to help local governments find the information they need. The framework was developed with extensive input from local government stakeholders, including EPA’s Climate Showcase Communities. This guide was developed for local climate and clean energy (i.e., energy efficiency, renewable energy, and combined heat and power) program implementers to help create or transition to program designs that are viable over the long term. The guide draws on the experience and examples of EPA’s Climate Showcase Communities as they developed innovative models for programs that could be financially viable over the long term and replicated in other communities. The MF HERCC Recommendations Report 2015 Update expands the 2011 publication, and delivers explicit and refined recommendations for multifamily energy efficiency program administrators and implementers. Program Design Lessons Learned (Volume 1) draws on the insights DOE gathered from its more than 4 years of administering State Energy Efficient Appliance Rebate Program (SEEARP) and analyzing the nearly 1.8 million rebates and the associated reporting from the 56 state and territory programs. 
Program Results (Volume 2) includes program impact reports summarizing individual state and overall results of the State Energy Efficient Appliance Rebate Program (SEEARP). 2014 ACEEE Summer Study on Energy Efficiency in Buildings: Myths of Low-Income Energy Efficiency Programs: Implications for Outreach Low-income energy efficiency programs provide financially vulnerable utility customers with important energy savings. To date, low-income programs have faced challenges in driving participation -- fueling myths that suggest low-income populations are difficult to reach. This paper explores these myths in turn. 2014 ACEEE Summer Study on Energy Efficiency in Buildings: Valuing Home Performance Improvements in Real Estate Markets This paper describes existing barriers to integrating energy efficiency data into real estate markets, and illustrates recent efforts to address them. National cross-industry collaborations have resulted in standard data collection and transfer tools that allow home performance data to be shared across industries. Real estate markets in some regions have begun including these data in multiple listing services (MLS), making them visible during real estate transactions. This study assesses the benefits of adding health and home performance to a community health worker education program on asthma control in King County, Washington, from October 2009 to September 2010. The study compared homes in the study group, which received community health worker education on health and home performance benefits and interventions, with homes in a historical comparison group, which received only education on asthma control. Over the study period, the percentage of study group children with not-well-controlled or very poorly controlled asthma decreased more than in the comparison group. SEEA created this document to inform the planning, design and delivery of early-stage energy efficiency programs in the Southeast. This document captures general concepts essential to the successful development and implementation of robust program portfolios, as well as lessons learned from prior experience on the regional and national levels. Expanding the Energy Efficiency Pie: Serving More Customers, Saving More Energy Through High Program Participation This report analyzes ten categories of utility-sector energy efficiency programs that have achieved high participation among targeted customer markets. Despite issues with the nature and availability of participation data, the study draws on published data sources and interviews with program contacts and industry experts to identify many examples of programs that have achieved high participation. Financing Energy Improvements on Utility Bills: Market Updates and Key Program Design Considerations for Policymakers and Administrators This report provides an overview of the current state of on-bill programs and provides actionable insights on key program design considerations for on-bill lending programs. Green & Healthy Homes Initiative: Improving Health, Economic, and Social Outcomes Through Integrated Housing Intervention This paper found that improved health outcomes and more stable, productive homes in primarily African American, low-income neighborhoods are related to the mitigation of asthma triggers and home-based environmental health hazards and that upstream investments in low-income housing have the potential for generating sustainable returns on investment and cost savings related to improved health, productivity gains, and wealth retention due to energy conservation.
This report describes and monetizes numerous health and home performance benefits attributable to the weatherization of low-income homes by the U.S. Department of Energy’s (DOE) Weatherization Assistance Program (WAP). This guide assists with developing an implementation plan for a Home Performance with ENERGY STAR program. It covers key elements of the plan, including the scope and objectives of the program and the policies and procedures that will ensure its success, including co-marketing and brand guidelines (section 1), workforce development and contractor engagement (section 3), assessment and report requirements (section 4), installation specifications and test-out procedures (section 5), and quality assurance (section 6). Developed as part of the Residential Building Stock Assessment (RBSA), this report provides overall housing utility and energy statistics for Idaho, and details the type and efficiency of various components such as windows, insulation, appliances and type of heating fuel used in homes within each region of the state. Insights from Smart Meters: Identifying Specific Actions, Behaviors, and Characteristics That Drive Savings in Behavior-Based Programs The report, the second in a series of reports on smart meters, presents concrete examples of findings from behavior analytics research using data that are immediately useful and relevant, including proof-of-concept analytics techniques that can be adapted and used by others, novel discoveries that answer important policy questions, and guidelines and protocols that summarize best practices for analytics and evaluation. This publication presents examples of the value that insights from behavior analytics can provide to programs (as well as pointing out its limitations). This paper explores the State Energy Efficient Appliance Rebate Program (SEEARP) designs and delivery methods used, and provides lessons learned about specific program models and best practices for states, utilities, and energy efficiency organizations to use in designing rebate programs. This paper is a review of recent studies that have explored the relationship between mental health and poor home performance, specifically living in cold and damp homes. This research focuses on intervention studies in which heating and insulation improvements were carried out and impacts on well-being assessed. Developed as part of the Residential Building Stock Assessment (RBSA), this report provides overall housing utility and energy statistics for Montana, and details the type and efficiency of various components such as windows, insulation, appliances and type of heating fuel used in homes within each region of the state. Non-Energy Benefits / Non-Energy Impacts (NEBs/NEIs) and Their Role & Values in Cost-effectiveness Tests: State of Maryland This study is a review of non-energy benefits related to residential weatherization programs. The study estimates the value, in dollar and percentage terms, of non-energy benefits from weatherization programs, and summarizes the ranges and typical values for non-energy benefits. Recommendations for a non-energy benefits strategy for Maryland are provided. This report is a comprehensive research study of energy efficiency in Northwest residential buildings. It includes a metering study, a single-family report, a manufactured homes report, and a multi-family report.
In addition, it includes state-by-state energy use reports, as well as end-use consumption data. Developed as part of the Residential Building Stock Assessment (RBSA), this report provides overall housing utility and energy statistics for Oregon, and details the type and efficiency of various components such as windows, insulation, appliances and type of heating fuel used in homes within each region of the state. Resiliency through Energy Efficiency: Disaster Mitigation and Residential Rebuilding Strategies for and by State Energy Offices Given the many priorities state and local governments and residents face following a disaster, integrating energy efficiency and resiliency into residential rebuilding can be a challenge. Fortunately, research into state experience with energy-efficient and resilient rebuilding in the residential sector has revealed several key strategies that other state and local communities can employ to mitigate the impacts of a natural disaster and plan for coordinated and effective disaster recovery. This report focuses on the experiences of State and Territory Energy Offices as leaders and key players in the rebuilding process. Research reveals a whole range of unmet housing-related desires in America -- gaps between what Americans have and what they say they need or want. The Demand Institute surveyed more than 10,000 households about their current living situation and what’s important to them in a home. The survey represents all U.S. households: renters and owners; movers and non-movers; young and old. It finds that unsatisfied needs and desires cut across the entire population. Unlocking Energy Efficiency for Low-Income Utility Customers: Four Key Lessons from Real-World Program Experience With so much to gain, how can we optimize low-income energy efficiency programs to maximize the benefits for financially vulnerable citizens, as well as program implementers and the broader population of ratepayers? This paper shares four important lessons for engaging low-income customers based on Opower’s experience in partnering with utilities to serve the low-income population. Developed as part of the Residential Building Stock Assessment (RBSA), this report provides overall housing utility and energy statistics for Washington, and details the type and efficiency of various components such as windows, insulation, appliances and type of heating fuel used in homes within each region of the state. Weatherization and Indoor Air Quality: Measured Impacts in Single-family Homes under the Weatherization Assistance Program This report summarizes findings from a national field study of indoor air quality in homes treated under the Weatherization Assistance Program (WAP). The study tested and monitored 514 single-family homes in 35 states, served by 88 local weatherization agencies. The study focused on five indoor environmental quality parameters: carbon monoxide, radon, formaldehyde, indoor temperature and humidity, and indoor moisture. This study examines actual loan performance data obtained from CoreLogic, the lending industry’s leading source of such data. To assess whether residential energy efficiency is associated with lower default and prepayment risks, a national sample of about 71,000 ENERGY STAR and non-ENERGY STAR-rated single-family home mortgages was carefully constructed, accounting for loan, household, and neighborhood characteristics.
The study finds that default risks are on average 32 percent lower in energy-efficient homes, controlling for other loan determinants. This paper describes the changes in indoor environmental quality (IEQ) conditions (air quality and thermal comfort conditions) from health and home performance improvements in 16 apartments serving low-income populations within three buildings in different California climates and seasons. Over the past 30 years, program administrators have concentrated on investment behavior change -- that is, getting their customers to install things like insulation and lighting systems using various behavior change tools such as marketing, education, rebates, and technical assistance to support the investment behavior change. Today, as program administrators move to expand the range of behavior change strategies in their portfolios, it is often difficult to know where to begin. The New York State Energy Research and Development Authority (NYSERDA) began by detailing the range of behavior change strategies and identifying strategic opportunities. This report from NYSERDA details the range of behavior change strategies in the existing portfolio and identifies strategic opportunities in the area of behavior change. This study looks at evidence of capitalization of energy efficiency features in home prices using data from real estate multiple listing services (MLS) in three metropolitan areas: the Research Triangle region of North Carolina; Austin, Texas; and Portland, Oregon. These home listings include information on Energy Star certification and, in Portland and Austin, local green certifications. Our results suggest that Energy Star certification increases the sales prices of homes built between 1995 and 2006 but has no statistically significant effect on sales prices for newer homes. Low-income tenants bear a particularly large burden for energy costs. Because their costs nearly equal those of higher-income renters, energy accounts for larger shares of their incomes and overall housing costs. In 2011, more than one-fourth of all renter households had incomes below $15,000. These lowest-income renters devoted $91 per month to tenant-paid utilities, while renters with incomes above $75,000 paid $135. Residential Customer Enrollment in Time-based Rate and Enabling Technology Programs: Smart Grid Investment Grant Consumer Behavior Study Analysis The U.S. Department of Energy's Smart Grid Investment Grant (SGIG) program worked with a subset of its projects undertaking Consumer Behavior Studies (CBS) to examine the response of mass market consumers (i.e., residential and small commercial customers) to time-based electricity rate programs, in conjunction with the deployment of advanced metering infrastructure (AMI) and associated technologies. The effort presents an opportunity to advance the electric industry's understanding of consumer behavior. This preliminary report summarizes experiences of the different phases of the enrollment process (qualification, solicitation, recruitment, and selection) across nine of the ten SGIG utilities, which collectively undertook 11 consumer behavior studies. It also provides experimental and descriptive results and lessons learned.
The authors describe spending on utility customer-funded programs for the primary utilities in each metropolitan area. Additionally, they identify the specific opportunity in each metropolitan area to scale up multifamily programs based on a three-part analysis of: (1) local housing market characteristics; (2) the scope of current utility customer-funded energy efficiency programs; and (3) the statewide policy environment and potential for local partnerships with non-utility-funded energy efficiency programs. 2012 ACEEE Summer Study on Energy Efficiency in Buildings: Valuing Energy Efficiency in the Real Estate Community The lack of documented value of retrofit measures is a barrier to many homeowners doing upgrades - as most appraisals do not include energy improvements in their comparables, and the home’s future sale can prevent the homeowner from earning a return on their investment via lower energy costs. Once the industry develops a process for valuing the energy improvements, it can unlock the significant potential for retrofit work through market pricing signals (energy efficient homes are worth more) and enhanced access to capital for those purchasing a more efficient home (energy efficient homes improve borrowers’ cashflow because they cost less to operate). Assessment of Electricity Savings in the U.S. Achievable through New Appliance/Equipment Efficiency Standards and Building Efficiency Codes (2010 - 2025) This report provides a forecast of how building energy codes and appliance efficiency standards are likely to capture significant energy efficiency savings through 2025. This report describes the characteristics of fifteen types of single-family homes in the Chicago area and the packages of energy efficiency measures that result in an optimal level of energy savings. This fact sheet provides an overview of how state policymakers, utilities, and regulators can overcome barriers to deploying customer energy information and feedback strategies. This paper examines the energy efficiency of multifamily rentals in comparison to other housing types and its relationship to household income. It analyzes 2005 and just‐released 2009 data from the U.S. Residential Energy Consumption Survey and finds that multifamily rentals were significantly less energy efficient than other types of housing, both nationwide and in every region of the country. Energy Transparency in the Multifamily Housing Sector: Assessing Energy Benchmarking and Disclosure Policies This report is intended to serve as a guide for policymakers and multifamily stakeholders on benchmarking and disclosure rules and regulations. It provides an introduction to the multifamily housing sector, followed by a thorough review of existing benchmarking and disclosure policies and an assessment of continuing policy challenges and opportunities. Forum on Enhancing the Delivery of Energy Efficiency to Middle Income Households: Discussion Summary This document summarizes discussions and recommendations from a forum for practitioners and policymakers aiming to strengthen residential energy efficiency program design and delivery for middle income households. This report describes how customer usage data can help promote the adoption of retro-commissioning polices for public and private commercial buildings. 
The Role of Local Governments and Community Organizations as Energy Efficiency Implementation Partners: A Review of Trends and Case Studies The Value of Green Labels in the California Housing Market: An Economic Analysis of the Impact of Green Labeling on the Sales Price of a Home This is the first study to provide statistical evidence that, holding other factors constant, a green label on a single-family home in California provides a market premium compared to a comparable home without the label. The research also indicates that the price premium is influenced by local climate and environmental ideology. To reach these conclusions, researchers conducted an economic analysis of 1.6 million homes sold in California between 2007 and 2012, controlling for other variables known to influence home prices in order to isolate the added value of green home labels. Tracking Utility Behavior‐Based Energy Programs Against the Behavioral Theories and Principles that Inspired Them This paper explores the drivers of energy use behaviors and the behavior‐based programs adopted by utilities charged with reducing the energy consumption of their residential and small commercial customers. It also presents researchable recommendations on how utilities can improve the effectiveness of behavior‐based energy programs. This report provides policymakers with principles and recommendations to understand and manage concerns about bill and rate impacts resulting from requiring utilities to provide efficiency programs. This document provides an overview of how state policymakers, utilities, and regulators can overcome barriers to deploying customer energy information and feedback strategies.
https://rpsc.energy.gov/resources?f%5B0%5D=program_component%3A2&f%5B1%5D=program_design_phase%3A8&f%5B2%5D=type%3A80&amp%3Bamp%3Bf%5B1%5D=field_state_or_territory%3A742&amp%3Bamp%3Bf%5B2%5D=field_state_or_territory%3A746&amp%3Bf%5B1%5D=field_organization_or_program%3A3337
Malta experienced an 18% increase in fatal accidents on the road between 2001 and 2017, while other European countries experienced a decrease, leading a recent report by the European Data Journalism Network to state “there is something wrong on the island”. The number of people killed in traffic accidents across the EU in that period dropped by 50% – from 54,000 to 25,300. This downward trend was experienced across all Member States, apart from Malta, where the number of fatalities has actually increased. “There is something going wrong on the island”, according to the report entitled ‘Are speed cameras saving lives, and are old cars dangerous?’ The report notes that Malta has the second highest rate of speed cameras, “but that has yet to lead to better results”. Malta may have a mortality rate of 43 per million inhabitants, seven below the EU average of 50, but the country is failing to improve. Even Romania, which has the highest rate of fatalities at 98 per million inhabitants, improved its rate by 20% rather than letting it deteriorate further. According to the data, the age of the car also has little impact on the number of accidents and fatalities. The average age of a car on the Maltese roads is just under 8 years old, similar to Italy, which has fewer speed cameras and fewer accidents. Romania, Bulgaria and Poland have poor accident statistics, despite driving cars that are, on average, newer than Maltese cars. Data from Eurostat shows that Malta has the third highest number of cars per 1,000 inhabitants in the EU, with one vehicle for every 1.6 persons. It ranks after Luxembourg with 662 cars and Italy with 625 cars per 1,000 inhabitants.
https://theshiftnews.com/2019/05/06/18-increase-in-deaths-on-the-road-in-malta/
Graphing Linear Inequalities Worksheets Linear inequalities are just comparisons of two linear expressions. We are accustomed to comparing values with the greater than or less than symbols, but inequalities lack a true value and are rather helpful for making mathematical generalizations. Because of this they use a different symbol that indicates that a value could be greater or less than another, but also equal to it. When we graph linear inequalities, it is fun because not only do we plot the line of the equation, but we also indicate where else (greater or lesser) the answer could possibly be. Graphing linear inequalities is very helpful for representing a relationship that involves a rate of change. These types of graphs are often used to describe the constraints of materials or products, such as the temperature range they should be used in. These worksheets and lessons help students make plots of linear inequalities in a half plane setting. Aligned Standard: HSA-REI.D.12 - Y-Intercept Step-by-step Lesson - This one comes down to shading and that's about the sum of it all. - Guided Lesson - We make the equations a bit more difficult so it is harder to match them to their graph right away. - Guided Lesson Explanation - As you see, you will need plenty of graph paper. - Practice Worksheet - We give you pretty standard inequalities to work with. The standard does call for a bit more. You will see new sheets soon to address this. - Matching Worksheet - Match the inequality to the correct graph. Pay attention to the numbers. - Graphing Systems of Inequalities Five Pack - We ask you to solve it, but the graphs will help you immensely. - Graphing Linear Inequalities Five Pack - You just need to make the graph here. Remember that the intercept gets it all started. - Graphing Inequalities Five Pack - Yeah it's a ditto of the last pack. Obviously different problems though. - Answer Keys - These are for all the unlocked materials above. Homework Sheets Hopefully the shading works out for you. - Homework 1 - The boundary of the graph of y < -5 is a horizontal line. Every y-value on it is -5, including the y-intercept. Start by graphing the line y = -5. - Homework 2 - The slope-intercept form of a linear inequality is like the slope-intercept form of an equation (y = mx + b), but with an inequality symbol instead of an equals sign. - Homework 3 - The slope-intercept form of a linear inequality is like the same form of an equation (y = mx + b), but with an inequality symbol instead of an equals sign. Practice Worksheets I found that black and white copies come out better with this color scheme; I'm not sure why. - Practice 1 - If the inequality uses the symbol ≥, be sure to draw a solid line. - Practice 2 - Finally, figure out which region to shade. You could remember that when inequalities start with y > or y ≥, you should shade above the line. - Practice 3 - Graph all the inequalities below. Math Skill Quizzes Flat graphs, then tilted graphs, ending with more flat graphs. - Quiz 1 - You could remember it as, if you are lesser than something, the shade is below you. If you are greater than a value, the shade is above you. - Quiz 2 - Finally, figure out which region to shade. You could remember that when inequalities start with y > or y ≥, you should shade above the line. - Quiz 3 - Graph this inequality completely with shading: y < 5 How to Graph Linear Inequalities By now you must have understood the importance of graphing equations. Linear equations and linear inequalities can both be plotted on a graph.
But plotting graphs of these mathematical entities requires that we have the ability to comprehend what they mean. So, to start with the basics, let's learn what they are first. A linear inequality entails a linear function while being an inequality rather than an equality. A linear inequality is simply a comparison of two linear expressions. Since the comparison between them is not exact, we use a non-exact comparison symbol. So, our final answer is not precise, but we have a general understanding of where it is located on a graph. To appropriately graph a linear inequality, plot the "equals" line on the graph first, then shade in the correct area. Make sure to rearrange the equation, leaving the y-variable on the left and everything else on the right side of the equation. Now consider plotting the 'y =' line; make it a solid line for y ≤ or y ≥, and a dashed line for y < or y >. Now color or shade above the line to represent a "greater than" (y > or y ≥) or below the line for a "less than" (y < or y ≤). You can see how we have plotted the linear inequality y ≥ 2x + 4. We plot it as y = 2x + 4 and draw the line. I find it helpful to just choose random values for x and then plot all the points to find the line. Once the line is connected all that is left to do is shade the greater than (≥) region, which is above the line. This indicates that our solution resides somewhere on or above the line. The overall process is very similar to plotting the equation of a line on a graph. We just need to go one step further and tell where the possible solution is by shading one side or another of the line. Why Do We Graph Linear Inequalities at All? In many cases this skill is presented to students in a matter-of-fact manner. Truly understanding why this skill matters helps students understand where and when to apply it. When you perform this skill you will treat it like any other linear function that you place on a coordinate system. The difference is that you are not just plotting a line but also shading the area on one side of the line that satisfies the inequality. What that means is that we are not entirely sure exactly where the answer lies in the coordinate system, but we are certain of where it could be. Think of it like a search map that you see in all those cheesy action movies. We know that the suspect will be found in the area. That is the same thing as the solution to our inequality: we know that the answer lies somewhere in that region. While this is not an exact science it is very helpful in narrowing down where the solution lies. When you begin to work on more complex projects you will see situations where you have multiple related inequalities. When you put several of these solutions together you can begin to see a pattern emerge that really narrows down your final answer.
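For readers who want to check a hand-drawn graph, here is a minimal sketch of how the worked example above (y ≥ 2x + 4) could be plotted; it assumes Python with numpy and matplotlib installed, and the variable names and axis limits are illustrative choices rather than part of the worksheet materials.

import numpy as np
import matplotlib.pyplot as plt

# Boundary line of the inequality y >= 2x + 4
x = np.linspace(-10, 10, 200)
y_boundary = 2 * x + 4

fig, ax = plt.subplots()
# Solid boundary line because the inequality includes "equal to" (>=)
ax.plot(x, y_boundary, color="blue", label="y = 2x + 4")
# Shade above the line, since the "greater than" solutions lie above the boundary
ax.fill_between(x, y_boundary, 30, color="blue", alpha=0.2)
ax.set_xlim(-10, 10)
ax.set_ylim(-20, 30)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
plt.show()

For a strict inequality (y > or y <) the same sketch applies, except the boundary would be drawn dashed (for example with linestyle="--") and the shading would sit below the line for the "less than" cases.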
https://www.mathworksheetsland.com/algebra/38linehalf.html
Monsters at the End of Time: Alternate Hierarchies and Ecological Disasters in Alaya Dawn Johnson’s Spirit Binders Novels - Author(s): - Anita Harris Satkunananthan - Date: - 2019 - Group(s): - Environmental Humanities, GS Speculative Fiction, TC Ecocriticism and Environmental Humanities, TC Postcolonial Studies - Subject(s): - Speculative fiction, Anthropocene, Postcolonial ecocriticism, Gothic literature, Postcolonialism - Item Type: - Article - Tag(s): - Fantasy fiction, apocalypse, postcolonial Gothic, EcoGothic - Permanent URL: - http://dx.doi.org/10.17613/bfqp-k986 - Abstract: - This paper interrogates the connection between entities that hover in the liminal state between life and death (such as vampires and spirits) and the manner in which these entities relate to Alaya Dawn Johnson’s conjurings of alternate political structures and hierarchies in her Spirit Binders series. Johnson’s alternate hierarchies are compelling primarily because they are both flawed and liminal. These hierarchies contain gateways between life and death, between material reality and spiritual reality. An ecoGothic lens is applied to these texts as they deal with climate-related disasters and the ways in which the texts instigate not just heroism but also monstrosity. In Gothic fiction, supernatural tropes such as the Vampire, spirits, and intermediaries are often signposts towards psychological states such as Terror and its relation to the Sublime. In Gothic fiction, very often, vampires, spirits and other similar creatures are connected to a hierarchy or community of sorts. A postcolonial Gothic reading of Gothicized texts, however, interrogates the power relations, the sense of haunting underscoring the text as well as the discourse of Terror in relation to the Other. I argue that Johnson’s writing enables the reader to peer in between the veils of life and death to unearth the darker sides of human nature, but very often these glimpses are not just about personal choices. These glimpses reveal strategies and missteps that guide the ways in which those hierarchies shape those choices, which Johnson then subverts in her tales. - Published as: - Journal article - Publisher: - Kritika Kultura - Pub. Date: - 2019 - Journal: - Kritika Kultura - Volume: - 33/34 - Page Range: - 524 - 538 - ISSN: - 1656-152x - Status: - Published - License: - All Rights Reserved
https://hcommons.org/deposits/item/hc:28667/
Leading with Compassion The need for leaders to be more compassionate toward their people has never been greater. We are living and working in a time of social unrest, calls for equality, and unprecedented economic and health crises. Employing a little more compassion can go a long way to alleviating anxiety, improving performance, and retaining your best people. In this exclusive 60-minute webinar, you’ll learn how compassion (not empathy) plays a critical role in shaping great leaders and creating sustainable work environments. - Explore the (big) difference between sympathy, empathy, and compassion - Recognize why leading with compassion is crucial in today's diverse workplace - Identify ways to make compassion a core value of the company culture - Learn how to encourage compassionate leadership in your organization Guest Speaker Renée Charles, Learning and Development Specialist, Parris Consulting Renée Charles is a seasoned learning professional who is passionate about helping individuals and organizations successfully achieve their performance goals. Having dedicated more than 20 years to learning and organizational development, her work has focused on leadership development; customer experience management; diversity, equity, and inclusion; and strategies for personal effectiveness. Renée has created impactful learning experiences for organizations in Canada and across the globe. She has demonstrated excellence in learning design and development, workshop facilitation, and performance coaching. Renée holds a Bachelor of Science degree in Neuroscience from the University of Toronto and a certificate in Adult Education from St. Francis Xavier University. Renée’s recent professional development endeavors include completion of a certificate in Applied Positive Psychology.
https://content.ukg.com/Contact/leading-with-compassion
posted by Anonymous on . You are hired as ballistics expert and need to measure the muzzle speed of the bullet (with a mass, m = 20 g) for the gun. You fire a ballistic gun horizontally into a 2-kg ballistic pendulum (M = 2 kg) hanging at rest on a massless rod. After the bullet hits it and becomes embedded in it, the pendulum swings 0.5 m above its original height (h = 0.5 m). Find the speed with which the bullet hits the block (muzzle velocity). Round the answer (use no digits after decimal place). Hint: use both laws of conservation: for energy and momentum.
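Since the problem only hints at the method, here is a minimal sketch of the standard two-step solution (energy conservation for the swing, momentum conservation for the perfectly inelastic collision), written in Python and assuming g ≈ 9.8 m/s²; the numbers simply plug in the values stated in the problem.

import math

m = 0.020   # bullet mass in kg (20 g)
M = 2.0     # pendulum mass in kg
h = 0.5     # height the pendulum rises in m
g = 9.8     # gravitational acceleration in m/s^2 (assumed value)

# Energy conservation for the swing: (1/2)(m + M)V^2 = (m + M)g h
V = math.sqrt(2 * g * h)      # speed of bullet + pendulum just after impact

# Momentum conservation for the collision: m v = (m + M) V
v = (m + M) / m * V           # muzzle speed of the bullet

print(round(v), "m/s")        # roughly 316 m/s with these values

Worked by hand, v = ((m + M) / m) * sqrt(2gh) ≈ 101 × 3.13 ≈ 316 m/s, which matches the rounded output.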
http://www.jiskha.com/display.cgi?id=1298496148
Q: objective-c: return value I'm reading Stephen Kochan's "Programming in Objective-C" (I'm n0000b). Everything has been obvious to me until exercise 4-8. It asks me to modify methods, in an "Accumulator" class created earlier in the chapter, to "return" the value of the accumulator when one of the math methods is used (basically it's a calculator). I took this originally to mean that I want the program to display the result whenever one of the methods is used (+, -, *, /), so I set it up to do so, so that each line displays the cumulative result rather than just the final result:

[deskCalc setAccumulator: 0.0];
[deskCalc add: 200.]; //the result is displayed
[deskCalc divide: 100.0]; //the result is displayed
[deskCalc subtract: 1.0]; //the result is displayed
[deskCalc multiply: 5]; //the result is displayed
NSLog (@"The result is %g", [deskCalc accumulator]);

But after looking up other people's solutions, it appears that "returning the value of the Accumulator" means something different. Can someone describe to me what returning a value means, and what purpose it serves? I have looked through the previous chapter a few times but it is still unclear to me how this will make the program behave differently. Thanks! -Andrew

A: The calculation methods you have in your accumulator class probably look something like this at the moment:

- (void)add:(float)aFloat {
    result += aFloat;
    NSLog(@"%f", result);
}

This method, in its current state, returns nothing (void). It outputs the current total on screen only. That's nice for an exercise, but in real-world programs, a calculation result will probably not be very useful if it's displayed on screen. Instead, you probably will want to do something else with the result, so you want the method to return it. For example, the NSString class has a method length. This method would not be very useful if it were to print the length of the string on screen. Instead, it returns the length, so the program can do something useful with this value (like checking that the string has the correct length):

int length = [tweet length];
if (length > 140) {
    // display a useful error message
    ...
} else {
    // tweet it :)
    ...
}

To modify your calculator methods to return something, you will (a) have to change the method signature to have a return value, and (b) actually return this value. In the method signature, simply change the return type to something other than void. In your example, the correct type would probably be float (or whatever type the calculator is using internally for its current result).

- (float)add:(float)aFloat {
    ...
}

To actually return the current value, you would add a return statement at the end of your method (before the closing }):

return result;

(this assumes that result is the instance variable containing the current calculation result).
This exhibition — one of several highlighting contributions from the museum’s Campaign for Art — introduces the range and quality of these newly committed and gifted works in a multidisciplinary selection that strengthens and deepens SFMOMA’s collection. Among the Painting and Sculpture highlights are two key paintings by Jackson Pollock, important works by Jasper Johns and Robert Rauschenberg, and an entire gallery dedicated to Joseph Beuys. A space devoted to the late work of Diane Arbus showcases a major gift to the Photography department. Media Arts features significant historic pieces by performance and video pioneers Ant Farm, Lynn Hershman Leeson, and Nam June Paik. In Architecture and Design, a selection of chairs, each of a single material, along with experimental works of architecture, bring innovation into focus. Monday, May 16, 2016, 12:30 p.m. Leadership support is provided by Collectors’ Forum, an SFMOMA art experience group. Major support is provided by the Prospect Creek Foundation. Additional support is provided by Robin Wright and Ian Reeves.
https://www.sfmoma.org/exhibition/campaign-art-modern-contemporary/
Communication is a vital part of any business. Details regarding the products and services offered by a business should be communicated to its clientele in order for a relationship between the two parties to be created. The main function of a communications team within a business is to design and execute marketing campaigns and to create strong ties with the media. Here are a few factors that you need to think about when creating a good communications team within a business. In order to establish a functional communications team, you are going to need individuals to conceptualize ideas, to make them a reality and then to execute them in an appropriate way. The first step of this process needs professionals who have both a good knowledge of business and creativity. They should be able to predict the outcome of the communications campaign and give a direction the rest of the team can follow. You will then need a group of writers and art directors who can come up with the basic material to take the campaign forward. You may also need animators and videographers to implement it further. Lastly, you will need a group of individuals who can get the material onto the media in an effective manner. It is always better to hire people who are multifaceted when forming this team. There is a certain culture that brings out the best in everyone when it comes to creative thinking. It is a culture based on acceptance and appreciation. No idea should be considered a bad idea within the team. The input of each individual should be given equal recognition and they should be encouraged to get better at what they do. In order for this culture to be maintained within the competitive nature of a business, strong leadership is of utmost importance. The person in charge should be someone who understands the importance of the work that is done by the team as well as the value of each and every member in it. It is vital that the rest of the business is aware of the work your communications team does. There should be proper protocol on how any other department approaches the team and on how work is assigned to them. Implementing a proper system that allows others in the business to interact with the communications team in an organized way will allow the team to execute their work more efficiently.
http://designwritecommunications.com/how-to-create-a-good-communications-team-within-a-business/
When I was in school I was very good at reading and writing in the English language, but when it came to speaking it I hesitated a lot, I stuttered, my head stayed down, and I felt unable to speak the language and a little jittery. And I noticed that the rest of the students in my class spoke English effortlessly; even my juniors were communicating in this particular language fluently and with confidence. I always asked myself how to speak English fluently and confidently. And now, after a few months, I can talk to anyone in the whole world in this language very easily without any hesitation. Now I have decided that I will share my complete learning journey with you all. So keep calm and read about my successful journey through this article. How to speak English fluently and confidently: Immerse Yourself in the Language: Speaking English constantly is one of the finest strategies to advance your language abilities. You can accomplish this by reading newspapers, books, and articles, streaming English movies and TV shows, and listening to English podcasts. Finding a native English speaker as a conversation partner and consistently practising your speaking with them is another excellent tip. Practice your pronunciation: Speaking English fluently requires good pronunciation. By repeating words and phrases, you can practice the language’s sounds and so enhance your pronunciation. You can practice pronunciation by using applications and activities, listening to native English speakers, and repeating them. Learn Vocabulary: Knowing a wide range of vocabulary is essential for speaking English. A great way to learn new vocabulary is to read and listen to English-language materials. Additionally, it’s important to practice using new words in sentences, so you can see how they are used in context. Practice your grammatical knowledge to improve your ability to speak English fluently. Practice by learning grammatical rules, working on activities, and observing how native English speakers use grammar. Try to Speak Confidently: When speaking any language, confidence is essential. Speaking in front of others is something you can practise to increase your confidence. This can be achieved by public speaking, speaking in front of a mirror, or even giving presentations. Keep in mind that you will feel more at ease and confident the more you speak. Finally, it’s critical to keep in mind that developing fluency in any language requires time and work. Be kind to yourself and don’t lose hope if you don’t notice improvement straight away. Your ability to speak English fluently can be enhanced with determination and regular practise. Best Indian Films to Improve English: There are so many Indian films that can help you to improve your English language skills. But for now I’m sharing just four of them; I hope you like them. 1) Slumdog Millionaire. This critically acclaimed film, directed by Danny Boyle, won an Academy Award in 2009. It is set in India and features a mix of English and Hindi dialogue. The story follows a young man from the slums of Mumbai who becomes a contestant on a popular Hindi version of the “Who Wants to Be a Millionaire” game show. The film’s use of Indian English and its cultural context makes it an excellent tool for learning English. 2) The Lunchbox. This film is a feel-good story about two strangers who connect through a mistaken delivery of a lunchbox.
The film is set in Mumbai and features a mix of English and Hindi dialogue. Its relatable characters and heartwarming story make it a great choice for learning English. 3) The Namesake. This film, directed by Mira Nair, is a coming-of-age story about a young man of Bengali descent growing up in the United States. The film features a mix of English and Bengali dialogue, and its exploration of identity, culture and family dynamics makes it an excellent choice for learning English. 4) English Vinglish. This film is about a housewife who enrolls in an English-language class to improve her skills and gain the respect of her family and society. The film features a mix of English and Hindi dialogue and its relatable story and great performances make it a great choice for learning English.
http://zonsity.com/how-to-speak-english-fluently-and-confidently-2023/
Mara Martin is an extraordinary cook who has enchanted Italian as well as international diners over the past twenty-five years. Her talents have earned her worldwide recognition. Mara’s recipes and methods are truly unique, inspired by her own creativity as well as the rich gastronomic tradition of the Veneto — particularly that of Venice. La Cucina Veneziana’s history hinges upon the Republic’s dominance of the sea, where the East met the West and the city inherited culinary customs and ingredients from civilizations across the globe. The same international spirit that influenced Venice’s great artists transformed its chefs: The art of Mara’s cooking is a special combination of enduring tradition and invention. As a young girl, Mara apprenticed with her grandmother, a celebrated chef in her home town who catered banquets for special occasions. In observing these activities, Mara learned the skills that would one day earn her a prestigious Michelin Star. The secret of Mara’s success in the kitchen is simple, however. It lies in the tastes and culinary wisdom that have been handed down through generations, as every mother teaches her own children. Her passion and dedication resulted in Mara’s cooking being defined as the “new gastronomic art” of today’s Venice: A confluence of the great tradition of Italian cooking – particularly the cooking of the region of Veneto – made “universal” with contemporary international references geared to modern appetites. Today, “Da Fiore” is synonymous with tradition, but also with elegance and innovation. Here, you can relish Venice’s inimitable aromas and tastes that have withstood the test of time. Mara Martin begins every day in the kitchen, making bread. She says it gives her concentration and at the same time, relaxation, inspiration, and meditation. It is almost a spiritual moment.
https://www.dafiore.net/en/chef/
On the surface, it might seem that all these countries have in common is the river that runs through them. The 11 nations that make up Africa’s Nile River Basin represent some of the poorest countries in the world. Home to more than 450 million people, they are rich in culture — and in conflict, including some that arise from the water itself. The Nile Project, launched in 2011, is an attempt to help bring harmony to the region through music. Thirty-five musicians, performing songs in 10 languages, blend their cultures’ unique musical styles in an effort to set the stage for cooperation that transcends sound. “The Nile Project is different things for different people,” Mina Girgis, producer and chief executive officer of The Nile Project, said. “In its simplest form, it is a collaboration among citizens of the 11 Nile countries to work on a solution to one of the challenges facing the Nile sustainability. In the case of the music, it’s musicians that are collaborating to make music that inspires cultural curiosity and environmental understanding of the issues facing the different countries.” In its first five years, members of the project not only performed 85 concerts during tours of three continents, they also conducted 130 workshops on issues ranging from environmental to cultural and social problems. The project’s 2017 U.S. tour includes stops at five campuses within the University of North Carolina system. The Nile Project is scheduled to arrive Wednesday at East Carolina University for the first of more than a dozen concerts, workshops and discussions planned as part of the group’s four-day residency. These include a presentation for area elementary and middle school students and a concert to conclude the S. Rudolph Alexander Performing Arts Series season, both scheduled for April 7. Playing to American and European audiences was not what Girgis had in mind when he conceived the idea for The Nile Project six years ago. Girgis, whose background is in ethnomusicology (the study of music in its cultural context), had just returned to San Francisco following a visit to his native Egypt that came during the uprising of 2011. Attending an Ethiopian concert with a friend, he began to consider how music might be an effective means to start a conversation about water issues in countries surrounding the Nile River. The first musicians’ gathering was held in 2013, and the group began its African tour the next year. “The motivation for this project was not to bring it to the United States,” said Girgis, who has a bachelor’s degree in hospitality administration from Florida State University. “It was mainly to have it perform in Africa to the people from these different countries.” For the first performances, musicians set sail on the river, presenting concerts and sustainability workshops in eight cities across five countries along the Nile. “There’s an upstream downstream dynamic, conflict over water allocation among the 11 countries,” Girgis explained. “When you look at the possible solutions to that problem, you can realize that the efforts of governments will not solve the water scarcity problem. The solution is to find creative approaches to using this water better and that would require involvement of more than governments. “There is a lot of cultural isolation among the countries sharing the river,” he said. 
“The music that we’re making helps the Nile as a watershed and helps people see the cultural connection that these countries share.” For the current tour, 12 musicians from seven countries share the stage, performing a medley of African musical styles, including traditional, dance and religious songs. “You can call it fusion because we combine different musical traditions from the different countries, different rhythms, different scales, different playing styles,” Girgis said. “We kind of visit a lot of different territories in one performance.” Instruments range from Egyptian flutes and Sudanese harps to keyboards and electric guitars. “The music is fantastic and critically acclaimed,” said Michael Crane, associate dean of the College of Fine Arts and Communication. “People should attend the concert just for the novelty, complexity and diversity of the music.” Off stage, the project includes a partnership with half a dozen universities in Egypt, Ethiopia, Kenya, Uganda and Tanzania that is designed to bring together students to collaborate on solving river sustainability challenges. In Greenville, discussions surrounding The Nile Project residency will include representatives of the North Carolina Conservation Network and Sound Rivers in addition to water issues experts from ECU. “It’s very easy to draw comparisons to issues that face everyone, like water rights, safety and sustainability,” Crane said. While water issues in North Carolina may seem to have little in common with those encountered in parts of Africa, Girgis said conversations about the Nile often uncover parallels to other regions. “It’s a good opportunity to get people from the local community to reflect on their context and see how different it is and how similar it is to other parts of the world,” he said. “... We definitely have opened many people’s eyes to their roles in water sustainability, and a lot of people are realizing they can be much more involved in sustainability of their river than they thought.” For more information, visit nileproject.org.
http://www.reflector.com/Look/2017/03/31/More-than-a-band-Musicians-seek-to-bring-harmony-to-Africa-s-water-conflicts.html
This blog originally appeared in ODI’s Development Progress series. ‘Perhaps the most important transformative shift is towards a new spirit of solidarity, cooperation, and mutual accountability that must underpin the post-2015 agenda… It is time for the international community to go beyond an aid agenda and put its own house in order: to implement a swift reduction in corruption, illicit financial flows, money-laundering, tax evasion, and hidden ownership of assets.’ UN High Level Panel on Post-2015, executive summary. With less than two years to go before the deadline for the achievement of the Millennium Development Goals (MDGs), it is time to take stock of what the goals have achieved and, just as importantly, what the goals have overlooked – including finance. The debate on what follows the MDGs – the post-2015 framework – is a chance to focus on two major finance themes that are not reflected in the goals themselves. First, that taxation is the central source of development finance; and second, that illicit financial flows undermine effective taxation and require international action. If this chance is not to be wasted, we need a consensus – and soon – on targets in these interlinked areas. The High Level Panel report on the Post-2015 Development Agenda is an important and early milestone in this debate. As such, it is significant that the report includes not only lengthy discussion of the issues of taxation and illicit flows, but also a proposed target – specifically: ‘12e. Reduce illicit flows and tax evasion and increase stolen-asset recovery by $x’ While welcome, this proposal has a few weaknesses. First, it is not helpful, conceptually, to equate a dollar of illicit flows with a dollar of tax evasion, or a dollar of stolen asset recovery with either. An illicit flow of $100 driven by tax evasion would only imply perhaps $30 of lost tax revenue, so the numbers are not directly comparable; and the development impact per dollar (on, say, governance or public spending) will not be the same either. Similarly, the value of increases in stolen asset recovery goes far beyond the dollars recovered. The real benefit comes through the demonstration and deterrent effect, which will reduce new thefts. As such, it is hard to see any meaningful value of ‘$x’ that would make the proposed target workable. The HLP proposal also implies that, with further technical work, it will be possible to get broad consensus on the quantitative estimates of each component: illicit flows, tax evasion and stolen asset recovery. This seems only realistic for the latter – and even here, would probably exclude the estimate of the total of stolen assets that is needed to make any dollar recovery numbers meaningful as a proportion of the potential total. If asset theft doubles while recovery increases by 10% in nominal terms, could that be called ‘success’? The third issue with the HLP proposal is that it excludes the policy measures that are needed for progress, so any hope of accountability for individual actors could be lost too. The important focus of the report on the primary responsibility of developed countries appears to have been lost in translation somewhere between the rhetoric and the target (recognition of tax avoidance is also dropped). To ensure that the opportunity represented by the HLP proposal is not lost, a broad debate is needed around potential alternatives, as well as some consensus within the intergovernmental discussions. 
Without prejudging that key process, it is possible to outline the structure that a target – or rather, set of targets – could take. Above all, two components will be needed. The first will reflect the quantitative emphasis of the current proposal, with specific suggestions in two areas: first, the actual estimates to be used and, second, the proposed level of reduction in each case. The choice of estimates may be a particularly thorny question. On the one hand there may be no consensus on using those of an individual organisation (e.g. Global Financial Integrity who have led the way in publishing estimates). On the other hand, there is still a lack of expertise in multilateral organisations that could be mandated to perform the role, such as the World Bank. The second component will reflect the policy responsibility of all countries, but especially high-income countries with major (or disproportionate) roles in international trade and financial flows. This is likely to include measures around the transparency of company ownership and those participating in trust and foundation arrangements; the tax transparency of corporate accounts; and the automatic exchange of tax information with all countries, especially those with lower incomes. There’s more detail on these two components in the ‘Fermanagh Declaration’ (pdf) drafted by Owen Barder and myself as a suggestion for the G-8 summit earlier this year; and the new Open Government Guide, launched at the Open Government Partnership Summit in London earlier this month, includes a chapter on tax and illicit flows with a broader set of illustrative policy commitments.
https://www.cgdev.org/blog/financing-progress-independently-taxation-and-illicit-flows
Our sugya deals with the institution of shelichut and its biblical derivation. Various sources are enlisted in this process and the Gemara explains the necessity of each. For example, the Gemara suggests that we could not assume shelichut for kiddushin if the Torah had only informed us of the ability to deliver a get via a messenger, since divorce, in contrast to marriage, does not require the consent of the woman. The Gemara continues that shelichut regarding hekdesh-related issues, such as teruma or korbanot, cannot be derived from the 'mundane' paradigm of kiddushin or geirushin. Furthermore, shelichut for geirushin and kiddushin could not be learned from teruma or korbanot, since the latter are basically mental processes, whereas concrete action is required regarding the former. In order to develop a deeper understanding of the shelichut concept, we will attempt to explain the logic that underlies these distinctions suggested by our sugya. We will begin with the difference between hekdesh and non-hekdesh areas of halakha. In the previous shiur, we established that direct involvement in mitzvot is preferable to delegating the performance to another. Therefore, we cannot assume that the institution of shelichut was extended to areas of hekdesh, which require personal participation. It is therefore necessary to introduce an explicit source, which applies shelichut to areas such as teruma and korbanot. In order to appreciate why shelichut should be limited to mental processes, let us glance at an additional sugya that is critical for our understanding of shelichut: "Given that all tena'im [stipulations that a person makes when performing a given halakhic act] are derived from where - from the tenai of the tribes of Gad and Reuven [see Bemidbar 32], a tenai that can be fulfilled through a shaliach - such as the one there [in the context of Gad and Reuven] - is a valid tenai; that which cannot be fulfilled through a shaliach is not a valid tenai." (Ketubot 74a) Tosafot (s.v. "tenai") assume that this provision reflects an inherent connection between shelichut and tena'im: "This is the reason: since the action [to which the individual wishes to assign a stipulation] is within his power to such an extent that he can even carry it out through an agent, it stands to reason that it lies within his power to assign a tenai to it, as well. But chalitza, which one cannot execute through an agent, is not within his power to assign to it a tenai, either; thus, even if the condition is not met, the action takes effect." In other words, shelichut is only possible in areas in which the individual is in control. Where man is the creator of the new halakhic status, shelichut is applicable. However, in the case of chalitza, the brother does not permit his sister-in-law to remarry. Although he must participate in chalitza, he is not in control. He merely takes part in the ceremony, which results in her license to remarry. His role is merely mechanical and may not even require his da'at; hence, he is unable to dictate the terms of the chalitza. Similarly, he lacks the authority to appoint another to take his place. It would appear that the distinction drawn by halakha between mental processes and those that require action is rooted in this point. It is self-evident that man is the creator of changes in halakhic status that are determined by machshava. Therefore, man who is in control has the ability to appoint a shaliach. However, the authority of man becomes questionable in areas requiring concrete action. 
Perhaps, the individual's role in these instances is merely mechanical, thus eliminating the possibility of shelichut. Even if we view the individual as the source of the status change in situations where action is indispensable, we may nevertheless consider his authority as diminished relative to areas where machshava alone suffices. If so, it would be impossible to extrapolate shelichut from areas that are determined mentally to those which demand concrete action, as well. This understanding will also serve us in appreciating the third distinction raised by our sugya. Namely, shelichut may be limited to halakhic changes that are effected by unilateral action. Clearly, an individual possesses greater control over something that he effects unassisted. Therefore, shelichut cannot automatically be extended to areas dependent upon bilateral agreement. Hence, an explicit biblical source is needed to teach us that even in such areas, man is sufficiently in control to assign a shaliach. 2. Two Types of Shelichut It is plausible that the distinction between bilateral and unilateral areas remains even after the Torah introduces the application of shelichut in both. The Mordekhai in Kiddushin (#505) quotes the position of the Kadosh from Radosh that a shaliach sent to deliver a get has the authority to appoint another shaliach in his place. However, a shaliach sent to marry a woman lacks this authority, since, in contrast to divorce, one cannot marry a woman without her consent. This position seems to suggest that although shelichut applies in both areas, a basic difference nonetheless exists between the two. In fact, we may even claim that the drasha does not merely extend shelichut to bilateral agreements, but rather introduces a new type of shelichut which can apply to those areas. We may explain this position based on the insightful remarks of the Ketzot concerning the precise nature of shelichut (188:2). According to the Ketzot, the Rishonim debated whether to consider a shaliach as merely acting on behalf of the sender, or as actually replacing him. The understanding of shaliach as a replacement awards him independent status. However, if he merely performs the given action for the sender, his appointment does not imply independence. On this basis, we may claim that only in the case of get, where the husband has total control, can he confer on the shaliach independent authority. This status allows the shaliach not only to deliver the get and activate the geirushin, but also to appoint a different shaliach in his place. However, in the case of kiddushin, where the husband is dependent upon the woman's consent, he cannot grant the shaliach independence. The Torah merely allows him to send a messenger to perform the act of kiddushin on his behalf. The shaliach therefore lacks the independent authority to appoint a shaliach in his place and can do no more than the act of kiddushin itself. In summary, the Gemara discussed three possible sources for the concept of shelichut and analyzed the uniqueness of each source. We sought to demonstrate how the unique qualities of each source could affect the extension of shelichut to other contexts. 3. The Requirement of Shlichut to be mafrish Teruma Let us now inspect the specific case of teruma in light of a sugya in Nedarim 36b. 
The Gemara there discusses the rule posited in the Mishna allowing one to separate terumot and ma'asrot on behalf of someone who vowed not to receive any benefit from him: "[The Mishna stated:] He may separate his terumot and ma'asrot with his knowledge. To what case does this refer? If we say that [he separates teruma] from the grain belonging to the owner of the stack [of grain] on behalf of the owner of the stack, then with whose knowledge is this done? If we say with his own knowledge, who appointed him a shaliach [licensing him to separate the teruma]? Rather, it must refer to the knowledge of the owner of the stack - but does he not then provide benefit for him by carrying out his shelichut? As Rava stated, we deal here with a case of one who declares, 'Whoever wishes to come to separate teruma may come and separate teruma '" The Gemara assumes that the individual from whom the noder (the one who took the vow) vowed to not derive benefit may not function as a shaliach for the noder. However, the Gemara appears to conclude that he can be mafrish (separate) the teruma if he does not formally assume that role of shaliach. This is accomplished via a general announcement allowing anyone to be mafrish the teruma. From this discussion it seems that hafrashat teruma is not limited to the owner of the produce. One person can designate the produce of another as teruma so long as he doesn't violate the wishes of the owner; no assignment of shelichut is required. This understanding is quite reasonable: after all, prior to hafrasha the produce is in a state of tevel, which we may define as an actual or potential mixture of teruma and chulin. Therefore, hafrasha merely delineates the teruma within this mixture. In fact, the Talmud Yerushalmi (Terumot 1:1) entertains the possibility that hafrasha does not require ownership (see the Gaon's commentary). However, this position appears to contradict the very foundations of our sugya, which applies the principle of shelichut in order to explain how one can be mafrish teruma for another. The application of shelichut assumes that only the owner or his agent can be mafrish the teruma. One solution to this problem is to suggest that in actuality, only the owner or his messenger may be mafrish teruma, as suggested by our Gemara. Nevertheless, if the owner does not single out a specific shaliach, then even if the mudar hana'a (the one from whom the noder may not derive benefit) chooses to fulfill the shelichut, the neder is not violated. Therefore, in response to a general announcement allowing anyone to separate the teruma, the mudar hana'a may fulfill the shelichut without compromising the neder. Tosafot (Gittin 66a s.v. "kol") suggest this approach: "Although the Gemara states regarding someone from whom another may not derive benefit that he may separate teruma on his behalf with his knowledge etc., and the Gemara explains that this refers to one who declares, 'Whoever wishes to come and separate teruma may come and separate teruma,' this does not mean that if he makes such a pronouncement, he [the one separating the teruma] is not considered fulfilling his shelichut. Rather, specifically with respect to a mudar hana'a we do not consider this shelichut, by which he would be viewed as providing benefit for him, since he did not personally assign him." The Rashba takes a different approach. He concedes that according to the Gemara's conclusion in Nedarim, one may be mafrish teruma for another even without having been appointed a shaliach. 
However, he argues that this applies only within the specific context of that Gemara, which discusses the possibility of designating one's own produce as teruma in order to render the produce of another permissible for consumption. Since he owns his produce, he has the power to designate it as teruma. The Gemara's question relates to one's ability to indirectly affect the produce of another via this designation. According to this understanding, ownership is indispensable for hafrasha. Hence, the Gemara in Nedarim is consistent with our Gemara which demands shelichut to replace the requirement of ownership to allow for hafrasha. By contrast, the Ramban (Gittin 66a) adopts our initial understanding, and denies the need for ownership as a prerequisite for hafrasha. The Ramban requires permission, not shelichut. According to this position, the problem posed by our sugya, which introduces the institution of shelichut to enable one to be mafrish teruma for another, resurfaces. Upon closer inspection, the Ramban's position becomes even more puzzling. He tries to prove that shelichut is unnecessary for hafrasha from the Gemara in Bava Metzia which initially assumes that one can be mafrish for another: "Regarding teruma, even an expression of consent suffices, as the Gemara states in 'Eilu Metziot' [Bava Metzia 22a], '[If the owner finds someone separating teruma for him and says], 'You should have taken from the higher quality produce,' then if, indeed higher quality produce was found [thus proving the sincerity of the owner's comment, and hence his consent to the separation of teruma], then the separation of teruma is valid.'" However, the Gemara explicitly rejects its initial assumption, and concludes that formal shelichut is required: "Rava interpreted it [that beraita] to accommodate Abayei's position, as referring to a case where he appointed him a shaliach. Indeed, this seems reasonable, for if it speaks of a case where he did not assign him as his shaliach, could the separation of teruma be valid? The verse states, 'you - also you' to include one's agent [that he may separate teruma only under the same conditions and terms as the owner himself]. Just as one separates only with knowledge [that he separates teruma], so must the agent separate only with the owner's knowledge." How can the position of the Ramban be reconciled with this sugya, let alone supported by it? Let us return to our Gemara. The Gemara proves that the institution of shelichut applies to teruma from the Mishna in the fourth perek of Terumot (mishna 4). However, already in chapter 3, we find a Mishna which establishes the ability to be mafrish on behalf of another: "When does this apply? When he said nothing. But if he allowed his family member, servant or maidservant to separate teruma, the separation is valid." (Terumot 3:4) Why did the Gemara choose not to cite this Mishna as evidence for the application of shelichut to teruma, selecting instead the Mishna in chapter 4? Moreover, the Gemara cites a longer passage from the Mishna than would appear necessary. It would have been sufficient to simply quote, "If one tells his agent, 'Go and separate teruma,' he separates in accordance with the owner's intention [the amount he figures the owner would have given as teruma]." But the Gemara adds the continuation of the Mishna - "If he does not know the owner's intention [whether he would normally give a larger or smaller amount], he separates the average amount - one-fiftieth."
Why must the Gemara include this passage in its citation? We can resolve all these difficulties by proposing that according to the Ramban, two distinct paths can be taken to be mafrish on behalf of someone else. First, one can be mafrish once the owner indicates his consent. In addition, the owner can also make use of the institution of shelichut. Where shelichut is applied, it is as if the owner himself were mafrish. Permission, by contrast, grants the non-owner the ability to be mafrish in accordance with the wishes of the owner. Based on the above, we can distinguish between these two tracks. The option of a non-owner designating teruma is contingent upon the subjective wishes of the owner. If the whims of the owner are not accommodated, the hafrasha is void. Shelichut, on the other hand, is established via a formal designation on the part of the owner. Once appointed, the shaliach is required to fulfill his shelichut faithfully and may act in this capacity as long as he does not objectively violate this trust. Subjective whims of the owner are irrelevant so long as the shaliach fulfills his task consistent with the norms governing the specific shelichut. If we adopt this distinction, we can easily explain the Ramban's proof from Bava Metzia. The sugya there addresses the question of whether one's intention can be assumed retroactively. The Gemara attempts to resolve this question on the basis of the braita that appears to allow one to be mafrish for another without his knowledge. This perhaps indicates that eventual acquiescence retroactively legitimizes the hafrasha. The Ramban proves from this that the owner's permission is sufficient for hafrasha, since permission is parallel in this regard to intention, and can perhaps be applied retroactively. Appointing a shaliach, however, constitutes a specific halakhic act, which demands expressed da'at and can only be effective proactively. The Ramban understood that the Gemara does not reject this basic premise. Instead, it merely rejects this understanding of the braita. The option of permission is inapplicable if the hafrasha does not correspond to the whims of the owner. According to the braita, the owner's consent is indicated if, upon hearing of the hafrasha, he responds that better quality produce could have been used. Since the hafrasha of the non-owner does not, in this instance, correspond to the wishes of the owner, shelichut is the only option left in understanding the braita. However, the initial premise, which assumed that permission suffices, was never overturned. Similarly, the sugya in Kiddushin proved the possibility of shelichut with regard to teruma from the Mishna in chapter 4 of Terumot. The Gemara goes to the trouble of citing the seemingly irrelevant detail that if the shaliach is unaware of the amount the owner wishes to be mafrish, he may assume the norm. According to our understanding, only from this clause can we prove that the shelichut option is being exercised. The mere fact that one can be mafrish for another, which already appears in chapter 3, can be explained based on the permission option. However, the possibility of a legitimate hafrasha that does not correspond to the wishes of the owner forces us to acknowledge the application of shelichut to hafrashat teruma. Sources and questions for next week's shiur. Sources: 1. Kiddushin 41b "ela lo likhtov... Ka mashma lan." Gittin 23b "amar Rav Asi... bnei brit." 2.
Rambam Hilkhot Sheluchin V'shutfin 2:1-2, Rambam Hilkhot Geirushin 3:15-16, Shiltei Gibborim Gittin [12a in the pages of the Rif] #1 3. Sanhedrin 72b Tosafot s.v. Yisrael, Magen Avraham Orach Chayim beginning of Siman 189, Even Ha-ozer ibid. 4. Rambam Hilkhot Issurei Biah 12:11, 13:14-17. Questions: 1. Regarding what point does the Riaz argue with the Rambam? 2. What halakha does the Magen Avraham derive from Tosafot in Sanhedrin? 3. Is an eved knaani considered a convert to Judaism? 4. What was the status of Shimshon's wives?
https://www.etzion.org.il/en/shiur-18-shelichut
Some of the signs and symptoms of a stroke may include sudden difficulty speaking or understanding speech, confusion, and numbness or weakness in the face or limbs (especially on one side of the body). Other symptoms associated with a stroke can include dizziness, loss of balance, and vomiting. In general, symptoms appear suddenly; often, multiple symptoms are present at the same time. Call 911 immediately if you suspect that you or someone else is having a stroke. Even though a stroke occurs in the unseen reaches of the brain, symptoms of a stroke can be easy to spot. As a general rule, symptoms of a stroke appear suddenly, and often there is more than one symptom present at the same time. Therefore, a stroke can usually be distinguished from other causes of dizziness or headache. The signs and symptoms discussed below may indicate that a person has had a stroke and requires medical attention immediately. For a person having a stroke, the symptoms may vary depending on which part of the brain is affected. Examples of specific symptoms can include: - Sudden numbness or weakness of face, arm, hand, or leg (especially on one side of the body) - Sudden confusion - Sudden trouble speaking or understanding speech - Sudden trouble seeing in one or both eyes (such as double vision, blurred vision, or blindness) - Sudden dizziness, lightheadedness, or trouble walking - Sudden loss of balance or coordination - Sudden severe headache with no known cause - Vomiting - Loss of consciousness - Spinning sensation (vertigo) - Sudden collapse - Seizures (in a small number of cases). If you suspect you or someone you know is having a stroke, do not wait for the symptoms to worsen or improve. Call 911 immediately. There are now effective therapies for stroke that need to be administered at a hospital; however, they lose their effectiveness if they are not received within the first three hours after stroke-related symptoms appear. Also, keep in mind that it is common for a stroke victim to protest or deny that he or she is having a stroke. If you notice a person exhibiting any of the possible symptoms of a stroke discussed above, get help right away.
http://stroke.emedtv.com/stroke/stroke-symptoms.html
Q: While loop not counting correctly

I've been learning programming in Python for the last two weeks and it's going great so far. But now I'm stuck and can't seem to find an answer. I found a really weird behaviour of a while loop that I just can't wrap my head around.

x = 0
step_size = 0.2
while x < 2:
    print x
    x += step_size

This code prints: 0 0.2 0.4 ... 1.8 2.0

2.0 should not be printed, right? When x becomes 2.0 the condition "x < 2" is false, therefore the loop should exit and never print 2.0. And now for the really weird part: it works with other numbers. step_size=0.4 prints up to 1.6, step_size=0.1 up to 1.9. Using "x < 1" as the condition with step_size=0.2 also works. What am I missing? Best regards, Leo

Edit: I'm using Python 2.7.5 and the default IDLE editor v2.7.5

A: It's floating point arithmetic. 0.2 has no exact binary representation, so every addition introduces a tiny rounding error. After ten additions x is not exactly 2.0 but 1.9999999999999998, which is still less than 2, so the loop body runs one more time; Python 2's print rounds that value to "2.0" for display. Running the equivalent loop in Python 3.6, where print shows the full repr, makes the accumulated error visible:

0
0.2
0.4
0.6000000000000001
0.8
1.0
1.2
1.4
1.5999999999999999
1.7999999999999998
1.9999999999999998
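A minimal sketch of common workarounds, not from the original thread and written for Python 3 (so print is a function): drive the loop with an integer counter so rounding error cannot accumulate, stop slightly short of the bound, or use the decimal module, which stores 0.2 exactly.

# Workaround 1: derive x from an integer counter instead of accumulating it,
# so each value carries at most one rounding step.
step_size = 0.2
for i in range(10):          # ten steps of 0.2 cover 0.0 .. 1.8
    x = i * step_size
    print(x)

# Workaround 2: keep the accumulating loop but stop a small tolerance short
# of the bound, so a value equal to 2.0 "up to rounding error" is excluded.
x = 0.0
epsilon = 1e-9
while x < 2 - epsilon:
    print(x)
    x += step_size

# Workaround 3: use decimal.Decimal, which represents 0.2 exactly, so the
# comparison against 2 behaves the way the question expects.
from decimal import Decimal
x = Decimal("0")
step = Decimal("0.2")
while x < 2:
    print(x)
    x += step

The first form is usually preferred when stepping over a fixed range; the tolerance trick is the general-purpose fix when the loop variable really must be a float.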
Communication professor honored with teaching award Laura Guerrero, professor of interpersonal communication in the Hugh Downs School of Human Communication at Arizona State University, has received the 2022 Teaching Award from the International Association for Relationship Research (IARR). This award recognizes excellence in teaching in the field of personal relationships at the undergraduate and/or graduate levels. IARR is an interdisciplinary organization dedicated to advancing the scientific study of personal and social relationships, and encourages collaboration among students, new scholars and experienced scholars. Guerrero was nominated by two colleagues — Associate Professor Amira de la Garza and Assistant Professor Joris Van Ouytsel at the Hugh Downs School — as well as Samantha Shebib, a Hugh Downs School undergraduate alumna, now an assistant professor at the University of Alabama-Birmingham, and Anya Hommadova, one of Guerrero’s many former doctoral advisees, currently an assistant professor at Sam Houston State University. De la Garza said that "Dr. Guerrero has been described by her past students as incredibly caring and assisting them to hone their research and investigation skills towards creating powerful programs of research that sustain their careers." Shebib says Guerrero has always been a source of support, offering her expertise and advice whenever needed. “But most importantly, Dr. Guerrero believed in me before I even believed in myself. I would not be where I am today without her — and there’s zero exaggeration in that sentiment,” she said. Van Ouytsel stated that her courses are highly relevant to students’ lives while also extremely rigorous. “This holds for her graduate seminars as well,” he said. “Projects coming out of Laura’s seminar often end up turning into conference papers, publications or dissertations. She often tells me that she sees her seminars as ‘think tanks’ where new ideas about relationship research and theory are born. Several graduate students told me that they consider themselves interpersonal scholars because of her seminars.” Guerrero has also co-authored three different textbooks on interpersonal communication. “Close Encounters: Communication in Relationships,” on which Guerrero is the lead author with co-authors Peter Andersen and Walid Afifi, is in its sixth edition and is one of SAGE Publication’s best-selling textbooks. This book is used in both undergraduate and graduate relationship courses at universities around the country, across various disciplines. She is also a co-author of a textbook on nonverbal communication with Judee Burgoon and Valerie Manusov, and the lead author of the recently published “Interpersonal Encounters: Connecting Through Communication,” co-authored with ASU alumna Bree McEwan. This new book is focused on introducing students to the ways communication affects their everyday lives, as well as improving their communication skills to foster better personal and professional relationships. “My goal in writing textbooks is to make material relevant and to provide students with solid research-based commentary about how they can be better communicators and have healthier relationships,” Guerrero said. In addition to her classroom teaching and textbook writing, Guerrero is a popular mentor for graduate students. She has worked with a consistent stream of undergraduate honors students and research assistants, chairing or co-chairing 18 PhD students’ dissertations, and has served on another 18 PhD committees. 
“I can't tell you how honored I am to receive this award from an important and prestigious organization like IARR, which represents the very best scholarship on relationships across many different disciplines,” Guerrero said. “I have been a fan of IARR since its inception. It also means a lot to me to have colleagues and former students who took the time to nominate me." “Professor Guerrero is one of our most beloved teachers and active mentors,” said Sarah Tracy, professor and interim director of the Hugh Downs School. “Over the course of her career, she has positively impacted thousands of students, whether that be in the classroom, in the research lab or in co-authoring research publications. What is more, she has served in school leadership positions that directly support successful undergraduate education and champion student diversity, equity, inclusion, justice and belonging.” The IARR awards ceremony will take place this summer at a virtual conference.
https://news.asu.edu/20220506-communication-professor-honored-teaching-award
UiPath Foundation launches the 2020 activity report, presenting the stories and the results of the educational programs carried out in vulnerable communities in Romania and India, as well as the foundation’s response to the challenges of the 2020 pandemic crisis. Since January 2019, UiPath Foundation has supported more than 25,000 children from vulnerable families in Romania and India in gaining access to quality education, and over 3,000 teachers in Romania have benefited from training activities aimed at improving their digital skills, leadership in vulnerable communities and 21st century skills. About 12,450 children participated in educational activities in 6 communities in Romania and India, and more than 15,000 hours were spent in 2020 on online tutoring in Romanian, mathematics and English for children enrolled in the Future Acceleration Program in Romania. The activity report recounts, beyond the impact results, a series of stories about the courage of children, teachers, partners, mentors and members of the UiPath Foundation team in their efforts to open up new opportunities to children in vulnerable communities. “2020 had a strong impact in our communities, amplifying the educational and social gaps already existing in vulnerable environments. We have managed, with great courage, resilience, and flexibility, together with our partners, not to alter the objective of our educational programs and to reconfigure the educational intervention to respond to the context generated by the pandemic. Our objective, since 2019, has been to build integrated and long-term support systems, by providing the necessary educational context to ensure that the unlimited potential of children does not remain trapped in poverty, and each of them can decide their path to the future by being exposed to a multitude of educational alternatives. The year 2020 confirmed that our mission cannot be achieved by a single organization and that synergies between different entities provide the most appropriate educational environment for children from disadvantaged families. We continue, in 2021, in the same spirit of solidarity and collaboration, together with our strategic and educational partners,” says Raluca Negulescu-Balaci, Executive Director, UiPath Foundation. UiPath Foundation develops educational programs that involve integrated, long-term intervention, together with an ecosystem of partners, with the aim of fostering equal access to the future for all children, regardless of their background. ⮚ Future Acceleration Program, the foundation’s flagship program, is dedicated to children aged 11-16, who benefit from an integrated support package designed to cover the needs of children and their families: monthly scholarships, clothes, school items, food, medical services, psychological counseling, but also access to quality educational and development opportunities, both online and offline, for the advancement of the literacy and numeracy level, and language and digital skills. In 2020, with the support of strategic partners in the intervention areas: Policy Center for Roma and Minorities (Bucharest, Ferentari), “Inimă de Copil” Foundation (Galați County), ”Bună ziua, Copii din România” Association (in Vaslui County), Pro Patrimonio Foundation (in Botoşani County), 368 children from these communities benefited from the program.
The foundation’s support was also complemented by the ongoing involvement of 82 mentors and volunteers from UiPath, who spent more than 940 hours with the children in 2020. ⮚ Nurture Teachers’ Potential is the second program implemented, through which UiPath Foundation, together with Teach for Romania, supports teachers from vulnerable communities to adapt their teaching methods to the specific needs of students and to the current requirements. 10 teachers from Brașov, Buzău, Dâmbovița and Galați counties received training on key skills delivered by the Teach for Romania tutors and 30 teachers from the public education system from Galați, Vaslui and Bucharest counties participated in the pilot phase of the Teaching as Leadership training program. In addition, 450 teachers from across the country received scholarships from the UiPath Foundation to develop their much-needed digital skills when transitioning to online school. ⮚ The foundation’s third program, Early Education Forward, implemented with the strategic partner OvidiuRo Association, ensures access to quality early education for preschoolers in vulnerable communities, through training opportunities for teachers and by equipping school spaces with quality educational resources. 365 educators participated in 15 online training sessions, 9,000 books were distributed to children and educators, and 35 KinderLibraries were sent to educators to equip kindergartens in vulnerable rural communities. The 2020 Annual Report also details other activities and partnerships initiated by UiPath Foundation last year, such as the Foundation’s response to the pandemic health crisis and the partnership with the documentary “Acasă, My home”. All the stories presented in the 2020 Annual Report (available here) stand under the sign of courage, the common motto of the strategic partnerships and the key element that made the implementation of all the educational programs possible with the support of volunteers, mentors, parents and teachers. The report captures the results achieved and the experiences of those involved in each educational program. About UiPath Foundation UiPath Foundation is a non-governmental, non-profit, non-political, and non-religious organization that supports children from vulnerable backgrounds in reaching their potential and thriving with their communities through equal access to education and the development of 21st century skills. UiPath Foundation was founded by UiPath in January 2019 and acts as an independent organization. Starting in Romania (Bucharest and the counties of Cluj, Vaslui, Galați, Botoșani and, from 2021, Olt) and India (Bangalore), UiPath Foundation runs, in partnership with local impact-driven non-profit organizations, educational programs that address the multiple needs of children facing poverty.
https://business-review.eu/helpinghands/uipath-foundation-launches-its-2020-activity-report-221329
The Castro Cultural District work-group is proud to have received and celebrated our official designation this Pride! We had a great time marching in the San Francisco Pride Resistance Contingent and speaking to people at our booth! The Castro LGBTQ Cultural District is being created with the intent of preserving, sustaining, and promoting the LGBTQ history and culture of the broader Castro district. The creation of the Castro LGBTQ Cultural District will highlight the structures and sites important to this history; foster racial, ethnic and cultural diversity among its residents and businesses; and create a safe, beautiful, and inclusive space for LGBTQ and allied communities, from those who call this neighborhood home to those who visit it from around the world. The Castro LGBTQ Cultural District is a living, breathing geographic and cultural area with rich political, social, economic, and historical significance to the LGBTQ community. The neighborhood has been recognized worldwide for nearly half a century as a beacon of LGBTQ liberty and an enclave for LGBTQ people to find safety, acceptance, and chosen family. The Castro has long drawn new residents and visitors from every corner of the globe who seek out the neighborhood because of its significance as a center of LGBTQ life. The Castro LGBTQ Cultural District will help sustain this community and the Castro as a center of LGBTQ life. This is important for the movement as a whole, but also important for the Castro and for San Francisco.
https://castrolgbtq.org/2019/07/01/castro-lgbtq-cultural-district/
1- Identify the different reasons people communicate: People communicate for various reasons in a work setting. Communication is a way of making and developing relationships, expressing ideas, letting people know how you are feeling or giving support and encouragement. To build relationships: This is a very important reason for communication because building relationships is imperative with parents and children when they first meet. Communication and how you present it is the first step in gaining trust and confidence in a person. For maintaining relationships: Good communication is vital for maintaining relationships. In order for children of this age to develop and thrive properly they need their parents to supply them with nutrition and a stable home. They need their parents to provide assistance and guidance with homework and parental supervision. Children in middle childhood are self-critical and socially aware (Berger, 2010). Therefore, these children need repeated reassurance to promote their self-esteem. Children in middle childhood depend on their parents to set aside time for them to form friendships with their peers and to develop social adequacy. Not only do they need a parent, but they also need a good one. Parents shape and mould their children into something they want them to be. They instil morals and values that they wish their children could someday possess, like honesty, trustworthiness, compassion, and respect. Those attributes are characteristics we all wish our children had, but it is up to the parents to make that happen. So, if parents are the key to ensuring a well-rounded teenager, what type of parenting style could get the job done efficiently? In order to ensure the best learning environment and wellbeing for the child it is essential for the practitioner to uphold professional relations with the child’s guardians and other colleagues. Most nurseries have specific policies regarding how they keep relations with the child’s parents. As a learner it is important that you indicate this to the parent if they ask you a question or have a query, so that you can pass them on to a member of staff. This relationship between staff and parent is important because the parent needs to be informed on a regular basis about how their child is progressing and whether the child’s needs are being met; parents may also be sending their child to a private nursery and want to know whether the money they are paying is having an impact on their child’s development. By having regular contact with the parent, staff can arrange meetings with the parent to discuss the child's progress and how they can improve the way they work with the child. Communication and professional relationships with children, young people and adults. Understand the principles of developing positive relationships with children, young people and adults. Effective communication is important in order to build positive relationships. We should always check how we approach and respond to other people, as we are more likely to have better relationships if we communicate well with one another. Parents and other adults who come into school are more likely to give better support if communication is strong and effective; in turn, this benefits the children we work with.
It is also important for the children that we model effective communication skills. This means checking what we are saying in moments of stress or excitement; we ask the children to behave in a certain way when communicating and sometimes forget to do so ourselves, and if we do this they will struggle to understand the boundaries of what is acceptable. By doing this you are creating a good example for pupils, who will therefore learn from you how to communicate positively with others. When you communicate successfully with parents and staff you are more likely to work as a team to ensure that pupils are able to gain maximum benefit from learning, which is the ultimate goal as a teaching assistant. 1.2) Explain the principles of relationship building with children, young people and adults. In order to build good relationships with others it is important to have a warm, friendly and caring attitude. Others need to feel relaxed and comfortable in your company and feel as if they can bring up any concerns they have. 2.2 Explain and demonstrate key relationship building strategies and/or skills involved in working with parents in partnership - Good first impression: This is important to build a good relationship and ensure you both know what is expected of each other when working in partnership - Setting aside time: Ensure we always have the time to discuss any needs or concerns with parents when they need it - Value their opinions: make sure the parents know that we value their opinions; have feedback sheets for parents to fill in or listen to the parents when they bring us concerns. - Parents are experts: experts on their own children, make sure they know this and that we
Middle Childhood and Adolescence PSY 280 Sunday, October 29, 2012 Middle childhood and adolescence is a crucial period of development within everyone’s lifetime, but for the child and parent it can become a time of uncertainty. In this era of a child’s life, their brains are developed enough for logic, so they attempt to understand the world around them with answers from their perspective. All children require parents who will do what is necessary to care for them. Parents should act in the best interest of the child’s development, and they should evaluate which parenting methods work well with the personality of the child. Within these years the child’s temperament also begins to have an effect on their lives.
CU1549.1 - Understand the importance of positive relationships for the development and wellbeing of children and young people. Identify the different relationships children and young people may have. Children and young people come into contact with many people in their day to day life, so they would have a range of different relationships. Their relationships start from birth, and the first ever relationship a child has is with their parents; mother and father would be their primary carers and also the first people the child gets to know. This would then be followed by siblings, grandparents, uncles, aunts and cousins.
2.2.2 • Describe with examples the kinds of influences that affect children and young people’s development including: a) Background, b) health, c) environment. There are many factors that can affect and influence the healthy development of children and adolescents, such as background, health and environment. • Background:
- It is important to know the characteristics of the family that bear on the child's development, such as warmth, equal affection between parents and children, and a proper relationship in terms of norms, habits, values, etc. This fosters behavior in the child that is free of conflict and supports effective and positive development from childhood to adulthood. For example, there are children whose parents care about meeting all their needs but sometimes confuse giving material things with showing affection, believing that this is the kind of attention their children need.
https://www.antiessays.com/free-essays/Explain-The-Importance-Of-Relationships-For-Children-P3JQURKRM35T.html
30 questions linked to/from What is the precise meaning of anatta? 15 votes 14answers 2k views If anatta is a reality, then how do you explain Volition or Will? I'm just trying to understand the concept of anatta better here. Buddhism tells me there is the concept of no-self (anatta), and even the so called conditional self is actually an illusion that arises ... 13 votes 11answers 5k views Can a non-Buddhist get Nirvana? No religion teaches what Buddhism teaches at its core. A true Hindu or Christian or Muslim believes in God and soul. Can a Hindu or a Christian or a Muslim, who is not aware of Buddhist philosophy, ... 11 votes 8answers 1k views Learning materials for Dependent Origination (Paṭiccasamuppāda) in Theravada Buddhism I am listening through the talks and guided meditations from the retreat at Amaravati Just One More: Dependent Origination and the Cycles of Addiction Retreat, where there are a number of references ... 9 votes 7answers 940 views What are the three marks of existence? What are the three marks of existence and where are they found in the canon? Is there any fundamental differences in interpretation among the different traditions? 8 votes 10answers 7k views Who am I- according to Buddhism What does Buddha say about the question,"Who am I"? Our body and mind perishes with time. If so then who actually attains the Truth? 7 votes 13answers 561 views Is Nibbana a state of mind or an element (dhamma)? I have four parts of this question, Is Nibbana a state of mind or a dhamma? If Nibbana is a state of mind, is it merely the uprooting of craving? If the answer for the second question is "Yes", then ... 6 votes 11answers 906 views Why is “I have no self” a wrong view? In the Sabbasava Sutta (MN2), the view that "I have no self" is listed as one of the six wrong views and one who holds this view will not be freed from suffering. Questions: Why is "I have no self" ... 6 votes 9answers 3k views Can one follow Hinduism and Buddhism at the same time? I am born Hindu and have been following Buddhism for more than a year. The change has been a life changing experience but now I find myself at the junction of two religions. I sometimes face ... 5 votes 8answers 1k views If there is no self what or who is it that gets enlightened? From reading this answer I come to understand that anatta means the lack of a core that can be conceived as self. If there is no permanent self, then who or what gets enlightened? 5 votes 8answers 505 views Can you explain how cessation of existence is known to be possible? People from other sects may argue against Buddhism on the following grounds. I invite the community of BSE to explain by reasoning or analogy how cessation of existence, which is known as the Nibbana-... 5 votes 3answers 612 views Meaning of “Body is emptiness, emptiness is body” In the Heart Sutra, Avalokiteshvara says to Sariputra this Body itself is Emptiness and Emptiness itself is this Body. This Body is not other than Emptiness and Emptiness is not other than this ... 4 votes 1answer 369 views What is the best translation of Anatta into English? Is the best translation of Anatta "non-self" or "there is nothing that you can take as me, mine, self or non-changing everlasting controllable part which can be identified as me, mine or everlasting ... 
3 votes 5answers 668 views Understanding anatta via “there are no computer programs” analogy I tried to find an analogy that would help me to understand anatta: Just as we can say "there is no self" (there are just mental aggregates interacting with each other and eventually causing some ... 3 votes 5answers 219 views The actualization of Anatta Having attempted to understand the actualization of Anatta, i am at a complete loss. The more i endeavor to understand the meaning, the more confusing it seems. I have read; What is the best ... 3 votes 5answers 104 views Is this talk just a convention? In this question it was said that Buddha said "I, the unexcelled teacher. I, alone, am rightly self-awakened ... I am a conqueror (of evil qualities)." The answer seems to be that Buddha used 'I' for ... 3 votes 6answers 754 views Someone told me Buddha copied almost everything from Brahmanism, how accurate is that? I am fairly new to the Dhamma and this site specifically. I was told by an Indian person that dyana (meditation) was a part of a yoga system which became zen in china, dharma became dhamma, most of ... 3 votes 2answers 129 views What are false unchanging entities? I am having trouble with the concept of "unchanging entities which exist on their own". Attachment to the false view of self means belief in the presence of unchanging entities which exist on their ... 3 votes 3answers 2k views Is Buddhism a syncretic religion, and then what would they say on the Abrahamaic religions? Syncretism is a union or attempted fusion of different religions, cultures, or philosophies — like Halloween, which has both Christian and pagan roots, or the combination of Aristotelian ... 2 votes 3answers 1k views What is the fastest way to reach enlightenment? I am asking for reference request and sutras where a person can cut through samsara quickly and obtain liberation. I have heard a story when Buddha was asked this question and he replied "Sound". ... 2 votes 7answers 172 views As a Buddhist, how shall we make sense of the notion that there is no such thing as a Soul? The three marks of existence is: Impermanence, Suffering, and No-Self. If there is no-self, then there is no Soul. Our cognitive abilities is the result of the physical (Brain organ) and the non-... 2 votes 4answers 475 views How do I refute the claim that the Buddha was actually preaching Vedanta? Some people are obsessed with making a great personality and revolutionary like the Buddha a follower of their own faith by making baseless, historically inaccurate and factually incorrect claims. One ... 2 votes 9answers 290 views Impermanent self I hear people saying this a lot regarding Annatta - "if something is impermanent then it cannot be self." But doesn't this only apply if you're coming from the view that a 'self' must be permanent?... 2 votes 2answers 168 views Equanimity, aversion and anatta Equanimity is described as It refers to the equanimity that arises from the power of observation, the ability to see without being caught by what we see. The definition can be found here. If i dig ... 1 vote 3answers 113 views Responsibility in Buddhism If nothing can be considered 'myself' or 'mine', if nothing is in my complete control (take volition for example), how can people be held responsible for their thoughts, words and deeds, if they are ... 
1 vote 1answer 129 views No self and individual responsibility If there is no self, no "mind" or "I" that can be found, then what does Buddhism call that faculty which exercises personal responsibility? 1 vote 5answers 214 views Other than Nirvana , what else is not changing? It is said that Nirvana is not changing. But I found a text here which states that dharma of conditioned arising is unchanging. The Buddha said to the monk: “Conditioned arising was neither made by ... 1 vote 3answers 88 views When saying that the aggregate is not-self are we not predicting the existence of a thing called self? Parmenides, a presocratic philosopher, said: The only roads of inquiry there are to think of: one, that it is and that it is not possible for it not to be, this is the path of persuasion (for ... 0 votes 4answers 484 views Buddhism and the so called “hard” problem of consciousness Hello Buddhists namaste! Would it be wrong of me to suppose the anatta means at least partly, the following two theses? No explanation of a conscious thought is complete: mind is not absolutely ... 0 votes 2answers 122 views Can Buddhism be called as Atheist, Agnostic or Theistic? I am reading this book currently called Buddhism without beliefs. In this book the author claims that Buddhism is Agnostic. Now Agnosticism is a claim that you are ignorant, or not sure whether God is ... 0 votes 4answers 985 views Can anyone explain non-self or Anatta of Buddhism in simple terms with example? [duplicate] Recently I got to know about 'Non-Self' or Anatta term but online material was not comprehendible. Below are my questions: 1) Can anyone explain it in simple terms with example? 2) If there's no ...
https://buddhism.stackexchange.com/questions/linked/1891?sort=votes&pagesize=30
Background: Colonoscopy visualizes more of the colon than flexible sigmoidoscopy. This study compares the outcomes of an unsedated modified colon endoscopy (MCE) with flexible sigmoidoscopy (FS) in family medicine practice. Methods: We conducted a retrospective chart review of existing clinical data to compare outcomes for 48 patients undergoing MCE and 35 patients undergoing FS at 3 family medicine practices in Los Angeles. Outcomes of interest included completion rates, number of complications, depth reached, anatomic site visualized, and information about the number and nature of clinical findings. Results: No significant differences were found between MCE and FS regarding completion rates (83.3% vs 75%, respectively). Expected statistically significant differences were found between the 2 procedures in the anatomic site visualized (P < .01) and depth reached (P < .01). Clinical pathologies were identified in 58% of MCE patients and 37% of FS patients. Four adenocarcinomas were identified in the MCE group in the proximal region of the colon that could not have been detected by FS. Conclusions: Findings from this study suggest that MCE can be an acceptable alternative to FS in office settings for colorectal cancer screening. Family physicians routinely provide endoscopic screening services to their patients in the form of flexible sigmoidoscopy (FS). More than a decade ago, Selby et al reported a 60% reduction in colorectal cancer mortality among people undergoing screening sigmoidoscopy.1 However, traditional FS only reaches a depth of 60 cm and so excludes 80 to 100 cm of colon from examination. Recent studies have suggested that FS may miss as many as half the lesions in the colon,2,3 a problem that may be particularly pronounced among women. In a recent study comparing the detection of polyps by colonoscopy and sigmoidoscopy, FS identified only 35.2% of women with advanced colorectal neoplasia compared with 66.3% of matched men.4 In contrast, standard colonoscopy allows 100% of the cecum (total colon) to be viewed in approximately 76% or more of procedures3,5,6 and has been shown to be more sensitive than FS for detecting large adenomas and cancers.2,4,7 Although the US Preventive Services Task Force does not yet recommend the use of one particular method of colorectal cancer screening over another, it strongly recommends that clinicians screen adults with average risk for colorectal cancer with one of a variety of different screening methods, including colonoscopies, beginning at age 50 and then again every 10 years.8 The American Cancer Society makes similar recommendations for adults at average risk.9 An excellent overview of colorectal cancer screening recommendations and surrounding controversies is available in Ransohoff's 2005 review of the topic.10 Many patients, particularly those who are uninsured or underinsured, do not have access to colonoscopy as a screening option because of the few trained colonoscopists working in medically underserved areas.11–13 In Los Angeles County alone, community physicians report that their uninsured and publicly insured patients with indications can wait as long as 8 months for a colonoscopy, and that screening colonoscopies are simply unavailable (phone conversation with G. Floutsis, MD, Medical Director Clinica Msr. Oscar A. Romero Community Health Center, November 2005; e-mail communication with RD Yang, MD, PhD, Division of Gastroenterology and Liver Diseases, Keck School of Medicine, University of Southern California, March 2007).
One solution to the limited capacity for screening colonoscopies in the health care system is to train primary care physicians to perform colonoscopies in the primary care settings. Numerous previous studies have shown that, after the completion of appropriate training, family physicians can perform colonoscopies competently and safely in inpatient and outpatient settings with high patient satisfaction, few to no complications, and reliable and valid clinical findings.14–17 Unfortunately, licensing regulations in some states relating to the use of conscious sedation (required for colonoscopy) can make it cost prohibitive for family physicians and other primary care physicians to offer colonoscopy in their practices. In California, full conscious sedation must be administered in a facility that is fully licensed either by the Department of Health Services, the Joint Commission on Accreditation of Healthcare Organizations or the American Association of Ambulatory Health Centers (California Senate Bill 595, as amended August 16, 1999). Several studies have compared unsedated colonoscopy with sedated procedures and with FS in specialist settings and have found the unsedated procedure to be comparable to sedated colonoscopy and FS in terms of patient tolerance, complications, and completion rates. In one of the earliest of these studies, Thiis-Evensen et al (2000) of Norway evaluated the efficacy of colonoscopy without sedation during screening examination in 451 adult patients.18 Completion rates and complication rates for unsedated and sedated colonoscopy with an adult endoscope were comparable. Currently, the procedure is the de facto standard of care for colorectal cancer screening by colonoscopy in small provincial clinics and hospitals in Norway (e-mail communication with E. Thiis-Evensen, MD, Department of Medicine Telemark Central Hospital, Skien; Department A of Medicine, Rikshospitalet University Hospital, Oslo, Norway, December 2003). In a gastroenterology setting, Wu et al (2003) obtained similar findings in a comparison of unsedated colonoscopy with an adult colonoscope and FS, using nursing staff to deliver the procedure.19 Thompson, Springer, and Anderson found no significant differences in patient tolerance and examination duration when comparing unsedated colonoscopy with a pediatric colonoscope and FS.20 Studies comparing pediatric and adult colonoscopes have found few significant differences between the two in time to cecum, patient tolerance, and endoscopist perception of difficulty,21 but found a slight superiority in completion rates for the pediatric colonoscope. Saifudden et al (2000) reported higher completion rates in procedures using the pediatric colonoscope compared with those using adult colonoscopes, especially in women.22 Okamoto et al (2005) found better completion rates with the pediatric compared with the adult colonoscope in patients with fixed, angulated colons.23 In 2002, in response to their uninsured and publicly insured patients’ lack of access to screening colonoscopies, 4 clinicians from 3 family medicine practices involved with LA Net, a primary care practice-based research network, began offering unsedated colonoscopy with a pediatric endoscope to adult patients under guidelines recommended by the US Preventive Services Task Force and American Cancer Society and those outlined in Table 1.
The clinicians opted to use a pediatric colonoscope in the procedure based on evidence demonstrating the basic comparability of the 2 devices and a slightly higher completion rate for procedures conducted using the pediatric endoscope. At each practice, modified colon endoscopy (MCE) was offered to all average-risk adult patients eligible for colorectal cancer screening as an alternative to both already-available on-site FS and referral to an off-site specialty clinic for sedated colonoscopy. In a few rare instances, MCE was offered to patients in higher-risk categories after they were referred for off-site sedated colonoscopy while they were waiting for their appointment. In these instances, the patients were likely to experience very lengthy wait times for an off-site appointment because of their insurance status. All of the family physicians in this study acquired their skills for FS while in residency training. Three of the 4 acquired their skills in colonoscopy over 10 years of practice and continuing medical education procedural courses through the American Academy of Family Physicians and others. One clinician had received formal colonoscopy training during his residency training program before joining the faculty practice. Each received training from the endoscope manufacturer in the use of the equipment and was instructed by the lead investigator (RGH), who has extensive experience in GI endoscopy. All reviewed Hoff's recommendations for conducting unsedated colonoscopy.24 All clinicians were credentialed by the University of Southern California Faculty Practice Credentialing Committee to perform these procedures. The goal of this study was to determine whether MCE and FS conducted in a family medicine practice are comparable in terms of completion rates and number of complications, and to determine whether MCE allows the family physician to visualize more of the colon than FS. Methods Patients Billing records were used to identify all patients who underwent MCE or FS at the 3 family medicine practices between 2003 and 2005. A total of 48 patients underwent MCE and 35 patients underwent FS during this period. Table 2 provides patient demographics. Data Data were abstracted from existing medical records by the lead investigator (RGH) and a research assistant as part of a quality improvement effort. Modified Colon Endoscopy Patients who opted for MCE received instructions about preprocedure colon preparation using a standard protocol. They also received the proper bowel cleansing solutions and tablets and were provided with instructions regarding proper positioning, relaxation techniques based on recommendations made by Hoff,24 and personnel expected to be in attendance. Patients were also instructed to give feedback to the clinician and other members of the team about the level of comfort/discomfort and any other symptoms that might arise, such as nausea, dizziness, and the urge to evacuate gas. The patient was positioned in the left lateral decubitus position and the endoscopist performed a rectal examination to ascertain the presence of internal/external hemorrhoids, prostate size in men, and any possible obstruction. Patients could observe the procedure on a video monitor and were instructed to give feedback to the endoscopist throughout the procedure about their level of discomfort using a 10-point pain index scale in which 1 equated to no discomfort and 10 equated to the worst pain the patient had ever experienced. 
Patients were instructed that they could stop the procedure at any point by saying “stop.” During the procedure, the endoscopist explained the technical and anatomic markers to the patient. During procedures in which technical difficulties were encountered (including severe bowel spasms, persistent looping, obstruction, and poor visualization of the lumen), the endoscopist automatically terminated the procedure. During cases in which biopsy, ablation, or fulguration were indicated, the endoscopist explained these procedures to the patient and what he or she could expect to see on the video monitor. A video processor (model EPX-2200, Fujinon, Wayne, NJ) with a pediatric 170-cm endoscope (EC 250LP5, Fujinon) was used, and standard snare, biopsy forceps, and hot wire snares using an ERBE electrical power unit were also available. Once past the splenic flexure, a stiffening device, 1.4 or 1.6 mm in diameter (model 14700, Zutron Medical, Kansas City, MO), was introduced to avoid excessive looping. The clinician then advanced the scope further to examine the rest of the transverse and ascending colon. Appropriate biopsies and samplings were obtained. All suspicious lesions were biopsied and sent to pathology. Any excessive bleeding sites were electrocauterized. Finally, a reverse look at the rectum and anal verge was performed. After the procedure, the patient was instructed to rest for a few minutes and was allowed to use the bathroom if needed. The endoscopist discussed findings with the patient. The patient was allowed to go home if his or her vital signs were stable. Flexible Sigmoidoscopy Patients who opted for FS received instruction at the time of scheduling for the procedure and were provided with a Fleet enema to be administered 2 hours before arrival for the procedure. The patient was instructed at the time of arrival regarding positioning, members of the team expected to be present, and discomfort. The patient administered another Fleet enema in the office before the procedure. The patient was positioned in the standard left lateral decubitus position and a digital rectal examination was performed to look for any masses and to examine the prostate (in men). The endoscopist then performed a standard sigmoidoscopy using a Fujinon video endoscope with a standard sigmoidoscope. Appropriate biopsy of lesions and electrofulguration of any unusual bleeding sites were performed. A reverse look at the rectum and anal verge occurred. At the end of the procedure, findings were discussed with the patient, who was then discharged if their vital signs were stable. Outcome Variables To evaluate the viability of MCE as an alternative to FS, the procedures were compared across completion rates, complication rates, depth reached, anatomic site visualized, and clinical findings. For purposes of this study, completion of MCE was defined as visualization beyond the hepatic flexure and down the ascending colon with insertion beyond 110 cm, unless there was a foreshortened colon because of colon resection. Completion of FS was defined as reaching the splenic flexure or insertion of the scope to a depth of 60 cm. Site of maximum insertion was determined by direct visualization and identification of an anatomic site, which was correlated with the centimeter marks on the endoscope. Anatomic site visualization and identification were based on landmarks characteristic of the different parts of the colon.
Although the cecum was not always visualized during MCE, completion of the study required visualization of the ascending colon with scope insertion at or beyond the 110-cm mark. During situations in which coiling was suspected, insertion of the stiffening wire resulted in the straightening of the scope and further visualization of the colon. Clinical findings recorded were those noted in any standard colonoscopy textbook, including hemorrhoids, fissures, polyps, and masses. The presence or absence of carcinoma was confirmed through the pathology report. Analysis Patient demographics for the 2 groups were analyzed for equivalence using t test (age) and Pearson's χ2 (gender and ethnicity). Completion data, complications, and clinical findings were compared using Pearson's χ2. Significant group differences in demographics were controlled for in comparison of clinical findings. SPSS software version 13 (SPSS Inc, Chicago, IL) was used for all analyses. Results A total of 83 patients underwent MCE (n = 48) or FS (n = 35) between 2003 and 2005 at the 3 practice sites. A significant difference was found between the 2 groups for age (t(79) = 2.83; P = .006): patients in the MCE group were older than those in the FS group. The reasons for this difference are not clear, but one might speculate that patients in the older age group, specifically those in their 60s and 70s, who previously may have been referred to a gastrointestinal laboratory, may have been more likely to opt for the on-site MCE procedure than their younger counterparts. Self-perceived risk may also have influenced patients to opt for the MCE, with older patients being more likely to perceive greater risk than younger patients and therefore more likely to choose MCE. No statistically significant differences were found between the groups for gender or ethnicity. All further analyses controlled for age. Completion Rates The completion rate for MCE at the 3 practice sites was 83.3%, which is not significantly different from the completion rate (75%) obtained for FS. The main reason cited for failure to complete both procedures was patient discomfort: 8.3% of MCE patients and 11.4% of FS patients did not complete the procedure because of discomfort. In MCE, poor preparation was also cited as a reason for noncompletion in 3 cases (6.3%). Depth Reached and Site Visualized The cecum was visualized but not intubated in 72.9% of the MCE patients. In 6.3% of MCE procedures there was a successful cecal intubation. Because of the limited length of FS equipment, none of these sites could be visualized. Thus, as expected, statistically significant differences were found between the 2 procedures in anatomic site visualized (P < .01). Similarly, when analyzed by depth readings on the endoscopes, MCE reached significantly further into the colon (mean, 130.1 cm; SD, 30.1 cm) than the FS (mean, 50.6 cm; SD, 10.0 cm); again showing an expected and statistically significant difference (P < .01). A summary of findings is provided in Table 3. Complications No complications were reported in either group. Clinical Findings Pathology was identified in 58% of MCE patients compared with 37% of FS patients. Adenocarcinomas were identified in 4 MCE patients compared with none in the FS group, a clinically significant finding. One of these was a younger patient (age 31) with a family history of colon cancer who opted to undergo MCE while he was waiting for traditional sedated colonoscopy through the LA County system. 
The second was a 61-year-old patient with a similarly high-risk family history who also opted to undergo MCE concurrently with a referral to a gastroenterologist for screening. The other 2 were patients with unexpected findings of cancerous polyps in the transverse colon. Tests of significance were not conducted with these data because of the small sample size and the statistically significant difference in the average age of the 2 groups; instead, these data were treated as descriptive. Clinical findings for the 2 procedures are summarized in Table 4. Variations in Outcome by Clinician Among the 4 clinicians who performed MCE across the 3 practice sites, no differences were found in completion rates, reasons for noncompletion, clinical findings, or depth reached (P = .08). Similarly, among the 6 clinicians who performed FS there were no statistically significant differences found regarding completion rates, reasons for noncompletion, or depth reached (P = .29) (Table 5). Discussion and Conclusions MCE achieved completion rates comparable to FS, produced no complications, and allowed family physicians to visualize significantly greater portions of the colon than is possible with FS. Using MCE, the family physicians in this study were able to visualize the cecum 72.9% of the time and to intubate the cecum in 6.3% of the cases. Based on these data, we conclude that MCE can be an acceptable alternative to FS for colorectal cancer screening in family practice. Although its use in family practice is promising, it is important to note that MCE also has significant limitations. The rate of intubating the cecum in MCE is only 6.3%, which is significantly lower than that achieved with regular sedated colonoscopy. Thus, even though MCE improves the family physician's ability to visualize more of the colon, it cannot be viewed as a replacement for traditional sedated colonoscopy. All patients with higher-risk indications should be referred for traditional sedated colonoscopy for screening and, until these limitations are overcome, MCE should only be used for screening adults with average risk and not for diagnostic evaluations. The ability to extend a standard office sigmoidoscopy to encompass a significantly larger segment of the colon has the potential to significantly enhance family physicians’ ability to detect cancers or potentially cancerous lesions in their patients. In the past, family physicians who provided colonoscopy services in their office or in a gastrointestinal endoscopy laboratory had to commit large segments of their time, thus disrupting their practice pattern. In some states, the costs of providing office-based colonoscopy are prohibitive because of regulations governing the use of conscious sedation. MCE greatly reduces the time, effort, and staffing requirements for offering screening colonoscopies in family medicine practices, and it eliminates the need for conscious sedation. Findings from the current study are encouraging and suggest that MCE can be an acceptable alternative to FS in the family medicine practice, and a simpler and more cost-effective alternative to traditional sedated colonoscopy. Limitations to this study include lack of randomization and small sample size. A larger, randomized controlled trial is needed to evaluate the reach, effectiveness, and feasibility of offering MCE in family medicine and its acceptability to patients relative to FS and other methods of colorectal cancer screening. 
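For readers who want to see how the completion-rate comparison reported above could be reproduced, here is a minimal sketch in Python. It is illustrative only: the study itself used SPSS, and the counts below are approximations inferred from the reported percentages rather than figures taken from the paper's tables.

```python
# Illustrative only: approximate counts inferred from the reported percentages
# (MCE: ~40 of 48 completed, 83.3%; FS: ~26 of 35 completed, ~75%).
from scipy.stats import chi2_contingency

# Rows: procedure (MCE, FS); columns: completed, not completed
observed = [
    [40, 8],   # MCE (assumed counts)
    [26, 9],   # FS (assumed counts)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A p-value well above .05 would be consistent with the paper's conclusion
# that the two completion rates are not significantly different.
```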
Acknowledgments We gratefully acknowledge the assistance of Michael Fong, MD; Carmela Lomonaco, PHD; Prapti Upadhyay, MA; and Laura Myerchin, MA, for their help in conducting this study. Notes This article was externally peer reviewed. Funding: none. Prior presentation: Unsedated colonoscopy in primary care practice. North American Primary Care Research Group. October 2005. Quebec City, Quebec, Canada. Conflict of interest: none declared. - Received for publication October 6, 2006. - Revision received March 23, 2007. - Accepted for publication April 2, 2007.
https://www.jabfm.org/content/20/5/444.full
Have you recently been promoted to a managerial role? Or do you have a brand new team to work with at your company? Do you need to learn how to lead a team? Whether you are new to management or a seasoned professional, everyone needs a refresher course on the best ways to lead a team. According to an Inc. article written by Glen Blickenstaff, CEO of The Iron Door company, there are four types of leadership you will need to be aware of including directive, participative, adaptive, and laissez-faire leadership. A directive type of leadership, which has been called autocratic in the past, entails making all of the decisions yourself as the manager, directing your staff to follow specific directions, and expecting employees to complete all tasks on time. The participative form of leadership has been called democratic leadership and it includes working with staff members on the decision-making process while taking an active role in making the final decisions. In this case, the manager carries out making sure that the intended results of a decision come to fruition, according to a blog post from Virgin. Adaptive leadership is a more fluidic type of managerial approach in which the boss takes into account the environment and situation of a particular scenario when leading a team or individual. This type of leadership considers each individual staff member and their needs in terms of the type of leadership he or she would benefit from. Liberal leadership or laissez-faire leadership entails letting a team make the vast majority of decisions. The manager spends little time being involved in particular decisions and leaves everything up to the team members. If the employees are all motivated to achieve an outcome and are able to complete the requirements, a laissez-faire type of leadership can work. If employees lack the motivation, there may be problems with a liberal leadership style. Below, we delve further into laissez-faire leadership and the type of characteristics it entails. >> Recommended reading: 31 Motivational Movies That Will Change Your Perspective on Leadership Key Qualities of the Laissez-Faire Leadership Style The definition of laissez-faire, according to the Merriam-Webster dictionary, is a “practice characterized by a usually deliberate abstention from direction.” This is essentially what management with this leadership style entails. The main qualities surrounding laissez-faire leadership are being hands-off and delegating all types of decision-making and other tasks, according to an Inc. article. Essentially, this means the manager will assign a number of tasks to their employees and have virtually no contact or additional catch-up until the project is completed. The employees will have much more freedom while being expected to come to the manager if any questions or issues come up. Under this type of leadership, for minor problems, team members are more likely to solve issues among themselves. The manager will not be checking in on the team members. Nonetheless, the boss is expected to provide the tools and resources needed to complete a project. Essentially, this form of leadership is the direct opposite of micromanaging and mostly involves a fair amount of delegation. It entails allowing team members to take on leading their own tasks and projects with little oversight. If the staff is motivated and hardworking, the laissez-faire leadership style can succeed at any company. According to St. 
Thomas University, while laissez-faire leaders enable employees to make their own decisions regarding how to finish tasks, they still have control over finalizing organization-wide decisions. The type of characteristics employees need to have in order to succeed under this style of leadership includes experience in a particular occupation, skills, and relevant education. >> Recommended reading: Business Innovation Starts with an Innovative Leader The Benefits of Laissez-Faire Leadership A manager using this leadership style understands the needs of their team and understands the general spirit of their office. When choosing this leadership style, a manager knows whether a particular team member will succeed with a hands-off approach. The benefits of the laissez-faire leadership style include empowering employees and boosting productivity overall. This type of leadership can also help a team become more innovative and improve morale as a whole. Additionally, employees enjoy knowing that their boss has so much confidence in them, which can inspire them to work harder than before in order to help the company boost profits and gain real results. For the managers, this leadership style and its delegation process gives them more time to focus on other high-level tasks. The level of independence associated with the laissez-faire style can help some employees feel more satisfied with their work. This works perfectly in situations where the employees are passionate about their work and have the motivation necessary to get their tasks done. The managers first need to know that their team has the knowledge and skills necessary to complete work with the use of this type of project management. Over time, employers using this type of leadership style end up having more trust in their workers. This sort of leadership is particularly effective in the case of the team members being experts in a particular field while the boss is less knowledgeable. Consider the mayor of a small city that has undergone severe flooding and electrical outages. In this case, the mayor would delegate rescue efforts and rebuilding to the experts. While there are clear benefits to the liberal leadership style, there are also a number of disadvantages that need to be addressed. >> Recommended reading: 360-Degree Feedback: A Great Ally In Your Companies Management The Disadvantages Associated with Liberal Leadership There are potential problems that have been uncovered with the laissez-faire style of leadership such as people not working together as a team and employees not working as hard as when they are being actively led by a manager. This form of leadership will not be very effective if the team members lack the knowledge, experience, and/or skills necessary to complete a project. This will lead to less innovation and teamwork along with the potential for inferior job performance and less job satisfaction among workers. According to the Houston Chronicle, employing this form of leadership among employees with less knowledge and fewer skills may lead to a decrease in productivity and inferior quality in the finished product. It is also more complicated to determine which person is responsible for a successful outcome if all team members are allowed to work completely on their own. Additionally, it is more complex to find out who is at fault if a project is done incorrectly when employing the laissez-faire leadership style. There are also employees who are unable to create their own deadlines or manage projects on their own. 
Without guidance, some workers are also unable to solve problems on their own. Essentially, teams that receive no feedback from their employer may miss deadlines and have projects that do not meet the expectations of the manager. In a team-based environment, the liberal leadership style may not provide enough guidance regarding role awareness and people may be unsure what exactly they should be doing on a group level. The fact that this leadership style keeps managers detached from the group means that there is a lack of cohesiveness and team members may begin to have less interest in a particular project. This form of leadership may also be inadequate for managers themselves since they may take advantage of the laissez-faire style to avoid taking accountability for any misgivings and problems with a project among workers. If the results are inadequate and deadlines are missed, the manager may then assign blame to the workers instead of taking on some responsibility. Some managers may also take advantage of this form of leadership to an extreme in which they may avoid a true form of leadership and become more passive in terms of project management. In these cases, the boss may no longer motivate team members and may not involve the staff in a more team-based approach to completing projects. Additionally, they may not provide any recognition for employees that have done an exemplary job. Whether or not you employ the laissez-faire approach to leadership or take a more directive or a participative style for leading your team, you can’t go wrong by using the software platform Runrun.it, which will enable you to manage your team regardless of the type of leadership style you employ. Runrun.it Software is Useful with Multiple Leadership Styles When considering how to lead a team and which style you prefer, you will find that Runrun.it software will provide you with the tools you need when using multiple different leadership styles. This software product will help you plan out and delegate tasks across your team while also giving greater transparency across a project’s timeline. In fact, in the midst of a liberal leadership style, the Runrun.it software will allow you to determine the team members responsible for successful outcomes and reward superior work.
https://blog.runrun.it/laissez-faire-leadership/
While data have been reported most extensively for fluoxetine and paroxetine, class effects of SSRI therapy appear to include increased sleep onset latency and/or an increased number of awakenings and arousals, leading to an overall decrease in sleep efficiency. Can Prozac cause sleep problems? SSRIs, such as fluoxetine (Prozac), are some of the most commonly prescribed antidepressants. But even though they’re quite effective against depression, they can also make it hard to fall asleep and stay asleep. Should you take Prozac at night? However, when Prozac is given in combination with Zyprexa (olanzapine)—a combination called Symbyax—as a therapy for treatment-resistant depression, it can cause sleepiness, so then it’s recommended to be taken in the evening. Does fluoxetine disrupt sleep? However, at least in short-term treatment, many antidepressants with so-called activating effects (e.g. fluoxetine, venlafaxine) may disrupt sleep, while others with sedative properties (e.g., doxepin, mirtazapine, trazodone) rapidly improve sleep, but may cause problems in long-term treatment due to oversedation. Does Prozac help you sleep? These drugs work by influencing serotonin levels in your brain. By balancing chemicals in your brain, these drugs will likely improve your mood and appetite. They can also enhance your energy levels and help you sleep better. Both medications can reduce anxiety, fear, and compulsive behaviors. How can I stop Prozac insomnia? Insomnia - Take your antidepressant in the morning if your doctor approves. - Avoid caffeinated food and drinks, particularly late in the day. - Get regular physical activity or exercise — but complete it several hours before bedtime so it doesn’t interfere with your sleep. 12.09.2019 What does Prozac feel like when it starts working? If you experience a positive response to Prozac, you might notice a decrease in your anxiety symptoms and feel more like yourself again: More relaxed. Less anxious. Improved sleep and appetite. Why is Prozac bad? The “if depressed, then Prozac” model puts millions of people needlessly at risk of serious side effects. The most dangerous of these is an “overstimulation reaction” that has been linked to compulsive thoughts of suicide and violence. Is Prozac a happy pill? The original “happy pill” was fluoxetine, more commonly known as Prozac. This medication, approved for use in 1987, was the first drug of its kind to be prescribed and marketed on a large scale. The use of this medication is very common, especially for the treatment of depression, but it is not without its risks. Can Prozac cause weight gain? Experts say that for up to 25% of people, most antidepressant medications — including the popular SSRI (selective serotonin reuptake inhibitor) drugs like Lexapro, Paxil, Prozac, and Zoloft — can cause a weight gain of 10 pounds or more. Is Prozac good for anxiety? Prozac, or fluoxetine, is a selective serotonin reuptake inhibitor (SSRI) and a widely used antidepressant. It is considered safe and effective in treating depression, anxiety, and obsessive compulsive disorder (OCD), and bulimia. Why does Prozac take so long to work? Since our brain has plenty of active serotonin transporter molecules when we start taking antidepressants, it takes a while before a suppression of the genes that code for the transporter has an effect on serotonin in the brain. What is the best antidepressant for insomnia and anxiety? 
Sedating antidepressants that can help you sleep include: Trazodone (Desyrel) Mirtazapine (Remeron) Doxepin (Silenor) … Medications - Citalopram (Celexa) - Fluoxetine (Prozac) - Paroxetine (Paxil) - Sertraline (Zoloft) 17.03.2020 What is the most common side effect of Prozac? Common side effects include feeling sick (nausea), headaches and trouble sleeping. They are usually mild and go away after a couple of weeks. If you and your doctor decide to take you off fluoxetine, your doctor will probably recommend reducing your dose gradually to help prevent extra side effects. Is Prozac sedating or activating? Of the SSRIs, Prozac (fluoxetine) is the most likely to cause activation, following by Zoloft (sertraline). The latter is due to the effects Zoloft has on dopamine receptors. Although activation can be troublesome, it can be helpful for those with severe fatigue, low motivation, or excessive sleepiness. Why was Prozac taken off the market? Prozac Generic Recalled Due to Abnormal Testing Results.
https://lucasrichert.com/psychopharmacology/does-prozac-effect-sleep.html
The world’s largest assessment of biodiversity recently shared the alarming news that 1 million species are under threat of extinction. Australia’s extinction record is poor compared to the rest of the world, and our investment into conservation doesn’t do enough to restrain the growing crisis. Currently, 511 animal species, 1,356 plant species and 82 distinct “ecological communities” – naturally occurring groups of native plants, animals and other organisms – are listed as nationally threatened in Australia. And these numbers are increasing. Read more: 'Revolutionary change' needed to stop unprecedented global extinction crisis While much conservation effort focuses on protecting individual species, we are failing to protect and restore their habitats. Our ongoing research into environmental investment programs shows that current levels of investment do not even come close to matching what’s actually needed to downgrade threatened ecosystems. One of the programs we evaluated was the 20 Million Trees Program, a part of the Australian government’s National Landcare Program. For example, we analysed investment targeted at the critically endangered Peppermint Box Grassy Woodlands of South Australia. Fewer than three square kilometres of woodland were planted. That’s less than 1% of what was needed to move the conservation status of these woodlands by one category, from critically endangered to endangered. Restoring communities Conservation efforts are often focused on species – easily understood parts of our complex and interrelated ecosystems. In recent years, some effective measures have been put in place to conserve species that are teetering on the edge of extinction. We have, for instance, seen the appointment of a Threatened Species Commissioner and the release of a Threatened Species Strategy and Prospectus. But we don’t often hear about the 82 threatened ecological communities in which many of these species live. Temperate eucalypt woodlands once covered vast areas of southern Australia before being cleared to make way for agriculture. The Peppermint Box Grassy Woodlands of South Australia, for instance, have been reduced to 2% of their former glory through land clearing and other forms of degradation. These woodlands provide critical habitat for many plant and animal species, among them declining woodland birds such as the Diamond Firetail and Jacky Winter. Andreas Ruhz/Shutterstock Focusing on the conservation and restoration of our threatened communities (rather than individual species) would create a better understanding of how much effort and investment is required to curb the extinction crisis and improve the outcomes of biodiversity restoration. Read more: How many species on Earth? Why that's a simple question but hard to answer A problem of scale Large-scale restoration investment programs are often touted in politics, particularly when these have a national focus. And many recent restoration programs, such as the Environment Restoration Fund, National Landcare Program, Green Army and 20 Million Trees, are important and worthwhile. But in the majority of cases the effort is inadequate to achieve the stated conservation objectives. Underlying threats to the environment often remain – such as vegetation clearing, genetic isolation and competition from introduced pests and weeds – and biodiversity continues to decline. 
Read more: Another Australian animal slips away to extinction The 20 Million Trees program, for example, is the most recent national initiative aimed at restoring native vegetation systems, attracting A$70 million in investment between 2014 and 2020. To place the scale of this investment into context, we analysed the impact of the 20 Million Trees program on the critically endangered Peppermint Box Grassy Woodlands of South Australia. The restoration priority for this community should be to enhance the condition of existing remnant areas. But improving its conservation status would also require more effort to increase the area of land the woodland covers. Even if the full six-year budget for 20 Million Trees (A$70 million) was used to replant only this type of woodland, it would still fall short of upgrading its conservation status to endangered. We estimate that moving the community up a category would require a minimum investment of A$150 million, excluding land value. And Peppermint Box Grassy Woodland is just one of the threatened ecological communities listed for conservation. There are 81 others. Read more: An end to endings: how to stop more Australian species going extinct Although any effort to improve the status of threatened ecosystems (and species) is important, this example shows how current levels of effort and investment are grossly inadequate to have any substantial impact on threatened communities and the species that live there. Our estimates relate to how restoration activities affect land cover. But ensuring they are also of adequate quality would need more long-term investment. Boosting investment Investment in biodiversity conservation in Australia is falling while the extinction crisis is worsening. Protecting and restoring ecological communities will preserve our unique native biodiversity and develop an environment that sustains food production and remains resilient to climate change. But failure to invest now will lead to extinctions and the collapse of ecosystems. To make genuine inroads and have an enduring impact on Australian threatened species and ecosystems, restoration programs must be clear on the amount they expect to contribute to conservation and restoration objectives, along with co-benefits like carbon sequestration. The programs must be at least an order of magnitude larger and be structured to produce measurable outcomes.
Posted in Manufacturing and Production Operates forklift and material handling equipment in distribution center and/or manufacturing area. Receives and moves all materials and products to staging or storage areas and arranges them for proper movement when needed. Performs work under the direction of warehouse supervisor/manager or material handling manager. ESSENTIAL DUTIES AND RESPONSIBILITIES: PHYSICAL DEMANDS / REQUIREMENTS: Operating a forklift or other material handling equipment is considered a "Safety Sensitive" function. A Safety-Sensitive Function, as defined by the Company, is one that requires the operation of motor vehicles, forklifts, or motorized warehouse equipment, or one that involves inspecting, servicing, conditioning, controlling, supervising, loading, or unloading such machinery. SAFETY: The Sealing Line driver will be responsible for individual compliance with all safety rules, regulations, and policy procedures. Employee's responsibilities include but are not limited to: WORK ENVIRONMENT: Employees will be working in a Manufacturing and/or Warehouse environment in all types of weather. Conditions may range from and include: hot, cold, humid, loud, working near airborne particles, moving mechanical parts and forklift traffic. EDUCATION and/or EXPERIENCE: Certifications, Licenses, Registrations
https://www.usdiversityjobsearch.com/jobs/warehouse-forklift-driver-at-genpak-llc-in-middletown-ny-22-429_1657352637
This location exposes a sequence of rocks that are around 32 - 36 million years old. At this time, Te Riu-a-Māui/Zealandia (the continent New Zealand is part of) was still slowly submerging under the sea following its separation from Gondwana. The base of the sequence of rock exposed at Hutcheson's Quarry is now overgrown but has been recorded and described in early geological sketches and photographs as tuff rock (hardened volcanic ash). Overlying this and exposed here are (1) layers of ash and limestone, overlain by (2) a bed of basaltic cobbles within limestone. During the Miocene Epoch, when the area was still under the sea, the Gee Greensand was deposited. This is exposed as a thin layer at the very top of this outcrop. This sequence of rocks has been an important time marker to correlate other outcrops of similar age. First people to Ōamaru Ōamaru is the traditional name of the stream and associated wetlands that were once prominent within this area and was part of the extensive network of kāika nohoaka (settlements) and kāika mahika kai (food-gathering places) located along Te Tai o Araiteuru (the Otago coastline). Rāwiri Te Māmaru, a prominent Kāi Tahu kaumātua, recorded Ōamaru as a pā tūturu (defensible village) and a kāika mahika kai where tuna (eels), īnaka (whitebait) and kōareare (edible root of raupō) were gathered. Sequence of rocks exposed at Hutcheson's Quarry Tuna (eel) A quarry to serve a lime kiln Between 1860 and 1870 David Hutcheson (1826-1882) operated a limestone quarry as well as a lime kiln at this site. A lime kiln is a structure that burns limestone at temperatures of around 1000°C to produce quicklime. When mixed with water, quicklime becomes slaked lime and is the basis for lime mortar (an early form of cement). Hutcheson's operation here would have provided quicklime and mortar to bind together blocks of Ōamaru Stone in the construction of the buildings in the nearby Victorian Precinct and beyond. The quarry has also been known under the name Hutchinsons Quarry. It is not uncommon to find people with differently spelled versions of their surname. Some people changed the spelling on arrival in New Zealand; others did so to distinguish themselves from people with the same surname. While Hutchison and Hutchinson also occur in papers from the time and in later books on local history, Hutcheson is how David Hutcheson (1826-1882) signed his name in his will. Please be aware of the following hazards: waterway, steps, unsealed walking track, and falling debris from the overhang. Fast facts - Volcanic activity from 36 to 32 million years ago produced the volcanic ash and basalt cobbles seen here. - Interpretation signs at the Ōamaru Penguin Colony and the Ōamaru Lookout Point describe other rocks produced by ancient local volcanic activity. - Hutcheson's Quarry is one of New Zealand's first geological reserves. Latitude: -45:05:38.583 Longitude: 170:57:56.154 Easy walk 5 min 300m GETTING THERE To get to Hutchesons Quarry, follow Eden St in for 500m from Thames Highway. There is parking available to the right. Walk into the Glen Warren Reserve and follow the path for a few hundred metres. The quarry will be to your right. Kaitiakitanga Protection and guardianship are at the heart of the Geopark philosophy. We ask you to treat this site with respect, do not remove anything from this site and preserve it for our future generations.
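The coordinates above are given in a degrees:minutes:seconds format. As a small illustration (not from the Geopark site itself), here is one way to convert them into the decimal degrees most mapping tools expect:

```python
# Convert the quarry coordinates from degrees:minutes:seconds to decimal degrees.
# The values below are taken from the listing above; the helper itself is illustrative.
def dms_to_decimal(dms: str) -> float:
    degrees, minutes, seconds = dms.split(":")
    sign = -1.0 if dms.strip().startswith("-") else 1.0
    return sign * (abs(float(degrees)) + float(minutes) / 60 + float(seconds) / 3600)

latitude = dms_to_decimal("-45:05:38.583")   # ≈ -45.0940
longitude = dms_to_decimal("170:57:56.154")  # ≈ 170.9656
print(round(latitude, 4), round(longitude, 4))
```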
https://www.whitestonegeopark.nz/hutchesons-quarry
“Steganographies were used to cipher messages in order to guarantee secrecy and security. However, even though disregarding many terminological details (or differences) used today by the cryptographers, one must distinguish between the activity of coding and decoding messages when one knows the key, or code, and cryptoanalysis (sic); that is, the art of discovering an unknown key in order to decipher an otherwise incomprehensible message. Both activities were strictly linked from the very beginning of cryptography: if a good steganography could decode a ciphered message, it ought to allow its user to understand an unknown language as well. When Trithemius wrote his Polygraphiae, which was published in 1518, before his Steganographia, and did not earn the sinister fame of the latter work, he was well aware that, by his system, a person ignorant of Latin might, in a short time, learn to write in that “secret” language (1518: biiii) (sic). Speaking of Trithemius‘ Polygraphia, Mersenne said (Quaestiones celeberrimae in Genesim, 1623: 471) that its “third book contains an art by which even an uneducated man who knows nothing more than his mother tongue can learn to read and write Latin in two hours.” Steganography thus appeared both as an instrument to encipher messages conceived in a known language and as the key to deciphering unknown languages. In order to cipher a message one must substitute the letters of a plain message (written in a language known by both the sender and the addressee) with other letters prescribed by a key or code (equally known by sender and addressee). To decipher a message encoded according to an unknown key, it is frequently sufficient to detect which letter of the encoded message recurs most frequently, and it is easy to infer that this represents the letter occurs most frequently in a given known language. Usually the decoder tries various hypotheses, checking upon different languages, and at a certain point finds the right solution. The decipherment is made, however, more difficult if the encoder uses a new key for every new word of the message. A typical procedure of this kind was the following. Both the encoder and the decoder refer to a table like this: Now, let us suppose that the key is the Latin word CEDO. The first word of the message is encoded according to the third line of the table (beginning with C), so that A becomes C, B becomes D and so on. The second words is encoded according to the fifth line (beginning with E), so that A becomes E and so on. The third word is encoded according to the fourth line, the fourth according to the fifteenth one . . . At the fifth word one starts the process all over again. Naturally the decoder (who knows the key) proceeds in the opposite way. In order to decipher without knowing the key, if the table is that simple and obvious, there is no problem. But even in cases of more complicated tables the decipherer can try with all possible tables (for instance, with alphabets in reverse order, with alternate letters, such as ACEG), and it is usually only a matter of time before even the most complex of codes are broken. Observing this, Heinrich Hiller, in his Mysterium artiis steganographicae novissimum (1682), proposed to teach a method of learning to decipher messages not only in code, but also in Latin, German, Italian and French, simply by observing the incidence of each letter and diphthong in each language. 
In 1685, John Falconer wrote a Cryptomenysis patefacta: or the Art of Secret Information Disclosed Without a Key, where he noted that, once someone has understood the rules of decipherment in a given language, it is possible to do the same with all the others (A7v).” Umberto Eco, The Search for the Perfect Language, translated by James Fentress, Blackwell. Oxford, 1995, pp. 194-6.
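As a concrete illustration of the word-by-word substitution Eco describes, here is a small sketch (mine, not Eco's) that assumes a plain 26-letter alphabet and the key CEDO: each successive word is shifted according to the table row named by the next key letter, and a simple frequency count of the kind Hiller's method relies on is included for the decipherment side.

```python
# Illustrative sketch of the polyalphabetic scheme described above:
# each successive WORD is enciphered with the alphabet row that begins
# with the next letter of the key (here CEDO), then the key repeats.
import string
from collections import Counter

ALPHABET = string.ascii_uppercase

def shift_word(word: str, row_letter: str, decode: bool = False) -> str:
    shift = ALPHABET.index(row_letter)
    if decode:
        shift = -shift
    out = []
    for ch in word.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)  # leave punctuation untouched
    return "".join(out)

def cipher(message: str, key: str = "CEDO", decode: bool = False) -> str:
    words = message.split()
    return " ".join(
        shift_word(word, key[i % len(key)].upper(), decode)
        for i, word in enumerate(words)
    )

def letter_frequencies(text: str) -> Counter:
    """Frequency count of the kind used to attack a fixed-key substitution."""
    return Counter(ch for ch in text.upper() if ch in ALPHABET)

secret = cipher("ARS OCCULTE SCRIBENDI")      # encode with key CEDO
print(secret)                                  # "CTU SGGYPXI VFULEHQGL"
print(cipher(secret, decode=True))             # recovers the plain text
print(letter_frequencies(secret).most_common(3))
```

Changing the row for every word, as Eco notes, is exactly what blunts the single-table frequency attack: the most common ciphertext letter no longer maps to a single plaintext letter.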
https://therealsamizdat.com/2016/06/21/eco-polygraphies/?shared=email&msg=fail
The establishment of a botanical garden in Pietermaritzburg in 1874 was in response to a growing demand for tree seedlings in the Natal midlands. Many of the ‘Grand Old Trees’ date back to this period, when the lower part of the Garden was laid out in geometric form with blocks of tightly packed trees. By the 1900s the Garden had removed many of the trees and established paths, a rockery and prize collections of azaleas, as well as a pond and tea garden. Camellia Walk was laid out in 1908 and Plane Tree Avenue was planted in 1908 by curator Brian Marriot in response to a suggestion from the Governor of Natal. In the 1970s the formal paths were removed to make way for a more informal display garden. The Useful Plants Garden displays a wide range of indigenous plants traditionally used for healing, charms, crafts, building and food and drink. The Grassland Bed comprises grasses and associated plants from the summer rainfall grasslands which are grown in a ‘naturalistic’ way. The ‘look-listen-feel-smell’ garden was designed to stimulate the senses of both young and old alike. Illustrated on this section of the map is the blue water lily (Nymphaea nouchali) and the Hilton daisy (Gerbera aurantiaca). The blue water lily is found in rivers, lakes and ponds throughout Africa. These beautiful blue- and pink-flowered water plants are used in traditional medicine. The beautiful red Hilton daisy is only found in the KwaZulu-Natal and Mpumalanga mist belt regions. It is endangered due to habitat transformation by commercial forestry and agriculture. The Hilton daisy is the flagship species of the KwaZulu-Natal National Botanical Garden’s Threatened Species Programme. The most famous visitor to this section was President Paul Kruger, who planted a Camellia japonica in April 1891. From 1892 onwards the Garden provided flowers for exhibition at shows. Because of the beautiful flowers in this garden, and an attractive fragrance from surrounding Jasminum and Osmanthus fragrans inside this garden, and the azaleas forming a front hedge with a show of flowers of various colours, the pathway in between has been declared “The Lovers Lane”. The latest addition to this area is a gazebo for a Wedding Garden.For more information about the Wedding Garden please contact Bathabile Ndlovu (Wedding Garden Project Manager) on +27 (0)33 344 3585. The Zulu Pharmacy – Plants used in traditional medicine, health and beauty.These useful plants have various culinary, medicinal and cultural uses. Some plants treat headaches, snakebites etc. Over a thousand different plants are used in Zulu culture for medicinal purposes alone. Many of these plants are becoming threatened due to over-harvesting of wild populations. This exciting garden aims to broaden public awareness on the importance and value of our useful plants. Bartering was a trading method used to share all sorts of crops and ensure that everyone had food. This section aims to educate the public about getting back to basics and producing their own food.This garden also teaches visitors about rituals, ceremonies, songs and music, dances, storytelling and clothing associated with Zulu culture. Gain insight into how the oral tradition of Zulu storytelling and use of lore, proverb helps consecutive Zulu generations in understanding and interpreting the natural world of plants and animals. 
Look at displays of cultural weapons, woven baskets, mats, clay pots, utensils, bead work and musical instruments inside the traditional beehive hut. To book cultural tours and other cultural events please contact Mbuso Zondi (Interpretation Officer) or Siphu Ngqasa (Marketing Officer) on +27 (0)33 344 3585. Vernonias, senecios, helichrysums and various bulbs form lovely colourful displays. Some species found in this section are very rare and some have limited distribution countrywide. Take a glance and discover the hidden treasures of this grassland, and be amazed by the herbaceous layer with its beautiful flowers.
https://www.sanbi.org/gardens/kwazulu-natal/tours/display-garden/
TSA contractors allegedly reviewed contract invoices totaling $265 million Transportation Security Administration contractors improperly reviewed invoices for work performed by other TSA contractors, and one contractor improperly reviewed its company’s own invoices to TSA, according to a new report from the Homeland Security Department’s Office of Inspector General. The April 15 audit evaluated 13 contracts valued at $609 million in fiscal 2009. The IG alleged in the report that TSA contractors inappropriately performed inherently governmental functions with respect to approximately $265 million worth of service contracts. The IG also found that TSA contracting officers did not follow acquisition guidance, nor did TSA maintain a sufficient number of contracting oversight employees. “The TSA did not provide adequate management and oversight of acquisitions for support services for transportation security programs,” the report states. “Contractors were performing inherently governmental functions or roles that closely support the performance of inherently governmental functions, acquisition staff did not follow acquisition guidance, and support services contracts contained vague statements of work. This occurred because the agency did not have an adequate number of properly trained core acquisition staff to administer contracts and oversee support services contractors’ performance.” As a result of the shortcomings, TSA cannot be certain that it is obtaining the best value for taxpayers, the audit concluded. Under the Federal Acquisition Regulation, contract administration is considered an inherently governmental function that is to be performed only by government employees. TSA’s support services contractors performed contract administration in three of the 13 contracts reviewed. In those cases, contractors reviewed invoices to determine whether they were reasonable, correctly charged and allowable, and then recommended the invoices for approval and payment. Those contracts totaled $265 million in value in fiscal 2009. In addition, one of the contractors performed contract support for its own contract, along with reviewing its own invoices, the report states. “When we brought this to the attention of TSA management, they took immediate action to correct the problem,” the IG wrote. However, TSA officials did not agree with the IG’s contention that contractors were performing inherently governmental or nearly inherently governmental work, the report states. Nonetheless, they did agree to initiate corrective actions to address the problems. About the Author Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week. As long as the information in these audits is accurate, why doesn't the IG and GAO name the contractors and the specific contracts in their reports? Until some public-facing facts are released, there is no downside to inappropriate and/or illegal activities on the part of both government and industry. It certainly is inappropriate for a contractor to be performing any contract admin. functions on their own employer's contract. That having been said, there is nothing inherently governmental in reviewing invoices for other contracts. As long as an FTE contracting person, presumably a Contracting Officer, is approving the invoices, this is an allowable practice. 
Having a contractor doing the review is preferable to not adequately reviewing them at all, which is what would otherwise have happened given the shortage of qualified 1102's. What a shock. Employees of an agency whose employees are matched at a ratio of almost 1:1 by contractors is allowing contractors to pay contractors. This is clearly a case of the foxes keeping an eye on the henhouse while the farmer goes to town. Who the heck is in charge at this agency????
https://washingtontechnology.com/articles/2010/04/16/ig-report-says-tsa-contractors-did-inherently-governmental-work.aspx
Employee evaluation is considered vital for every organization since it’s an important HR function. A comprehensive assessment of employee’s performance benefits a company in many ways. It’s a well-known method of evaluating employees’ strengths and weaknesses which further facilitates the management as it acts as a foundation on which they can work and improve the overall organizational performance. Normally the scenario is that the evaluation is done at the end of the fiscal year (just before the increment period) as per the company’s policy. However, some companies evaluate performances on a short term basis (bi annually/quarterly). It has its own advantages, the reason being, if the employees’ performance is not going in the desired direction, the management can correct them mid – way rather than waiting for the whole year to end and then making corrections. Short term evaluation Vs Long term evaluation – which one in your opinion can contribute better, as far as the end – result is concerned. Furthermore, if there are any pros and cons associated with them then kindly highlight those aspects as well. 13 Comments - Short term, project based feedback is ideal for skill development and reinforcement. Long term evaluation provides just that- long term exchange and career development focus. Both are equally important to maintain an active dialogue and keep employee and management focused and engaged. I’ve found it most effective to provide short term results based feedback on projects and periodic performance, and to provide a separate setting and process for long term career development and feedback. Both can be combined and balanced for ‘annual evaluation’ if desired. Ideally,a career development discussion should be led by teh employee; what does he/she want to develop, aspire to tackle? Managers can then provide feedback and highlight opportunities to support these goals as they align to business opportunity. Short term/periodic results should be discussed from the perspective of the manager and the specific objectives outlined for the project. - every minute is important in the competitive edge,if you are not able to read your employee credibility & capacity in the first interview means you will lose something in the future,short is the word which must be in the competitive edge. with regards. abhishek, always look for the result oriented emplyee whether having degree or not it doesn’t matter. - Good Afternoon Salima, Wishing you merry christmas and a very happy and prosperous new year. Coming to the question, in my view both has its merits and demerits. While in the short term evaluation I feel it gives much more control to the employer to evaluate and judge the performances and take corrective actions, while on the contrary I feel it puts an extra pressure (unknown fear of being evaluated every now and then) on the employee as well as will definitely add to the company’s budget. In a long term evaluation pattern an employee who needs some corrective actions will keep on going repeating the same over and over again. Here one word of caution that I feel is at the time of hiring or may be handing over the new project an employee should be assessed for his suitability with the job expected of him/her. - Long term evaluation will determine the future growth of the companies. If companies fail to evaluate long-term goals, I think that these companies are in the troubles because of competition from other companies will use his competitive advantages to win over the companies. 
In addition, I think that set up for the long-term goals for companies to run is very important because it will determine how to survive in this new competitive era. - It really depends on the situation and perhaps duration of employment. I think small periods warrant small reviews and large periods larger reviews. - Changes take time. If you focus only on long-term outcomes, you may become discouraged. In addition to setting your long-term goals, identify desired short-term outcomes. Short-term outcomes provide rapid feedback on the impact of the Action Plan, and are typically easier to measure. If the short-term outcomes look positive, communicating this throughout the organization or team will improve overall morale and commitment to the process of improving psychological safety and health. - An effective evaluation is one which will take a certain amount of effort and time to complete. This effort multiplied across your entire workforce can become quite significant. If you want to start doing it twice per year, it can reaaly add up. I believe in annual evaluations which include the establishment of both long and short term goals. Short term goals can be used for several purposes. One of which is to facilitate an early review of areas where some immediate improvement is warranted. Regardless of whether or not this is the case; regular reviews of an employees’ progress by thier immediate supervisor should be the norm. Not just for disciplinary reasons, but to ensure that an employees’ efforts are being focused in the right direction. As the needs and direction of the company change, so too, should the efforts of the employee. Regular review of progress toward formally established goals is appropriate for keeping an employee on track and inspired as well as an effective means of checks and balances. If there are issues regarding an employee’s performance, then more frequent reviews or evaluations are warranted, but evaluation for the sake of evaluation is more a hindrance than a help. Most employees would, in my opinion, find it intrusive and somewhat overwhelming to have to be evaluated every six months. Informal review and guidance to help an employee meet their goals is much more effective and considerably less work. - Hi Salima, Thanks for bringing up an important aspect related to Performance Management & employee development. I do agree to your observation that many org lay maximum emphasis on the year end performance evaluation & more often than not, most of the managers rely on their memory which leads to a relaince on recency factor towards evaluation of an employees performance. My take on this subject would be to install a mechanism for doing a series of regular short term evaluations which should cumulatively be an input or could be considered as a long term evaluation. For instance, let us consider a process where managers have a documented monthly meetings with their team discussing their performance vis-a-vis their goals, their achievements & areas of development & action plan for the next month which would be reviewed in a similar meeting the next month. With this we would ensure an amalgamation of short term evaluation & will have fact based documented inputs when we get into long term evaluation or annual assessments. Hope this helps. Regards Praveen - You want to do a mix of both–annually is a good time to review two to five year targets, but you need to meet on a regular basis to mark progress against plans or to adjust the plans to current realities. 
This can be done weekly, monthly or quarterly depending on the situation–keeping in mind that an annual goal may have shorter term components that can be identified, measured and tracked. This is a good way to avoid surprises at the annual review. - Below is the clickable link.. ♦♦♦ I support Short Term Evaluation since it allows the corporate to check and see its staff weakness before the year-end, especially those with big human staff, but this requires many efforts and time and a well qualified experts…while yearly Evaluation is good for small companies since they can easily check their small staff performance ♦♦♦ Make Ur Year A Gooood Oneeee F.A.A. - Hi, For me it is more than the evaluation a whole some process of review. If you take an example from a LION wandering in the jungle, for every 12- 14 steps it stops and takes a look around then proceeds. It helps the LION to have full awarness of its surrounding and it can sense and act for any changes accordingly. How ever when it is chasing the prey its strategy is different. Same way timely pause and looking around on how we have tread the path, to what extent we are in line and to what extent we have deviated need to be assessed. This gives a lot of clarity on assessing the individual performance and the performances of the teams. Majority of the times the long term evaluations suffer from the recency effects and the performance and evaluations get colored. Except for a very few specialist and very routine kind of works it is alaways desirable to go for reviews every quarter or at least per six months. Kind regards Dayanand L Guddin Sr. Head – HR BOBST INDIA - You actually need both. Regular ‘feedback’ sessions throughout the year allow you to coach and develop your team, as well as course correct as needed. It’s much easier to address situations requiring constructive feedback in small doses as they occur and the pressure associated with a raise or promotion is eliminated because these are just supportive chats. You stand a better chance of controlling a fire while it’s still just a spark – waiting until it is a destructive force benefits noone. If you’ve been providing feeedback throughout the year, the annual evaluation process becomes anticlimatic – you are all on the same page when it comes to performance because you have been working together on it all year. In fact, the evaluations pretty much write themselves because you have everything already documented. It’s also the fairest way to handle performance reviews in that if you wait until the end of the year, you will only be able to remember the things that happened most recently – this could sway the evaluation to be either more positive or negative than it needs to be simply because a current good or bad event stands out most in your mind. - Hi, Salima – good question. I would rather not even think of employee performance as something to be evaluated solely on a schedule, either short or long term. Putting this important function on a schedule operationalizes it into a procedural item to be checked off a list as “done on time” or “late”. In my opinion, an effective manager is doing this every day,using opportunties to coach, counsel, and manage their employees. Helping others to improve and thrive should be THE focus of a manager and woven into their working life.
https://www.ephlux.com/short-term-vs-long-term-eval/
The invention provides an unsaturated water supply device for simulating underground water, which comprises a soil storage device, a water receiver, a hydrostate, a pressure difference sensor, a soil moisture sensor and a data processing unit, wherein the soil storage device is used to be filled with a test soil column; the water receiver is used for supplying water to the soil storage device; the hydrostate is used for adjusting a water supply pressure of the water receiver after being injected with water; the pressure difference sensor is used for measuring a pressure difference at the upper and the lower ends of the water storage device; the soil moisture sensor is used for detecting the soil moisture content of the test soil column, and the data processing unit is used for automatically collecting, storing and displaying the moisture content data and water supply data of the section of the test soil column according to a set time interval. According to the method provided by the invention, an actual underground water depth can be simulated and a moisture change process can be tested, and the test soil column used in the test process is smaller in volume and lower in cost.
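The abstract describes a data processing unit that samples the pressure-difference and soil-moisture sensors at a set interval and stores and displays the readings. A minimal, purely hypothetical sketch of that kind of logging loop follows; every function and field name here is invented for illustration, since the patent does not specify an implementation.

```python
# Hypothetical data-logging loop for the kind of device described above.
# read_pressure_difference() and read_soil_moisture() are stand-ins for whatever
# interface the real sensors expose; they are not taken from the patent.
import csv
import random
import time
from datetime import datetime

SAMPLE_INTERVAL_S = 60  # the "set time interval" from the abstract; value is arbitrary

def read_pressure_difference() -> float:
    # Stand-in for the real pressure-difference sensor driver (simulated value, kPa).
    return random.uniform(0.0, 5.0)

def read_soil_moisture() -> float:
    # Stand-in for the real soil-moisture sensor driver (simulated volumetric %).
    return random.uniform(5.0, 45.0)

def log_readings(path: str = "soil_column_log.csv", samples: int = 10) -> None:
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "pressure_difference", "moisture_content"])
        for _ in range(samples):
            row = [datetime.now().isoformat(),
                   read_pressure_difference(),
                   read_soil_moisture()]
            writer.writerow(row)   # "storing" the data
            print(row)             # "displaying" the data, per the abstract
            time.sleep(SAMPLE_INTERVAL_S)
```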
The final round of Autumn Nations Cup pool matches take place this weekend and England bid to rubber stamp their place in the play off final when they visit Wales on Saturday. Wales v England, Saturday 28th November, Llanelli, KO 18:00 (South African time) Referee: Romain Poite Assistant Referees: Pascal Gauzere, Alex Ruiz TMO: Brian MacNeice Weather Forecast: Llanelli Wales Wales are third in Pool A on 4 points, level with Ireland but 5 points behind England. They snapped a 6 match losing sequence with an 18-0 win over Georgia in difficult conditions last week but their points difference is still vastly inferior to the 2 sides above them in the standings. In their Autumn Nations opener Wales were beaten 32-9 by Ireland in Dublin, a game where they were allocated around 9.5 points start on the handicap. Wales also had a poor 6 Nations which ended with a 14-10 home defeat to Scotland on 31st October. Earlier in the tournament they lost at home to France (27-23), in England (33-30) and to Ireland in Dublin (24-14) with their only win coming at home to Italy (42-0). Bet on Live Roulette, SA's first online live roulette games England England need to avoid defeat to rubber stamp their place in next weekend's play off final which would most likely be against France. They beat Ireland 18-7 at Twickenham last week (12-0 half time) with Jonny May scoring both their tries. Ireland's late consolation brought the margin of defeat pretty much in line with where the pre-match handicap had settled. England opened their tournament with a 40-0 win over Georgia at Twickenham. England scored 6 tries, 3 of which went to hooker Jamie George and the contest was over by half time with England 26-0 to the good. Earlier last month England also clinched the 6 Nations title and in the outright betting to win this tournament they are now trading at around 1 / 3. Team News Wales: 15 Leigh Halfpenny, 14 Louis Rees-Zammit, 13 Nick Tompkins, 12 Johnny Williams, 11 Josh Adams; 10 Dan Biggar, 9 Lloyd Williams; 8 Taulupe Faletau, 7 James Botham, 6 Shane Lloyd-Hughes; 5 Alun Wyn Jones (captain), 4 Jake Ball; 3 Samson Lee, 2 Ryan Elias, 1 Wyn Jones Replacements: 16 Elliot Dee, 17 Rhys Carre, 18 Tomas Francis, 19 Will Rowlands, 20 Aaron Wainwright, 21 Rhys Webb, 22 Callum Sheedy, 23 Owen Watkin England: 15 Elliot Daly, 14 Jonathan Joseph, 13 Henry Slade, 12 Owen Farrell (captain), 11 Jonny May, 10 George Ford, 9 Ben Youngs, 8 Billy Vunipola, 7 Sam Underhill, 6 Tom Curry, 5 Joe Launchbury, 4 Maro Itoje, 3 Kyle Sinckler, 2 Jamie George, 1 Mako Vunipola. Replacements: 16 Luke Cowan-Dickie, 17 Ellis Genge, 18 Will Stuart, 19 Jonny Hill, 20 Ben Earl, 21 Jack Willis, 22 Dan Robson, 23 Anthony Watson. 6N & RWC Head to Head (any venue) 2020 6N Twickenham England 33-30 Wales (Tries 3-3) 2019 6N Cardiff Wales 13-6 England (Tries 1-0) 2018 6N Twickenham England 12-6 Wales (Tries 2-0) 2017 6N Cardiff Wales 16-21 England (Tries 1-2) 2016 6N Twickenham England 25-21 Wales (Tries 1-3) 2015 RWC Twickenham England 25-28 Wales (Tries 1-1) 2015 6N Cardiff Wales 16-21 England (Tries 1-2) 2014 6N Twickenham England 29-18 Wales (Tries 2-0) 2013 6N Cardiff Wales 30-3 England (Tries 2-0) 2012 6N Twickenham England 12-19 Wales (Tries 0-1) The Betting – Handicap Need to open an account? Claim your 1st deposit bonus of up to R2,000 here Wales +14.5 points at 9/10 widely available England -14.5 points at 9/10 widely available Note, odds quoted are correct at the time of writing but are subject to change. 
Betting Angle
When the teams met in the 6N at Twickenham earlier this year there were only 3 points in it, but Wales scored 2 very late tries and were flattered by the margin of defeat. On form England have Wales stone cold, but I can see Wales putting in a decent shift, and looking at 6N head-to-heads in Cardiff you would have to go back to 2003 for the last time England won by a margin that would see them cover this handicap. Wales plus is the handicap play.
Bet: 2.5 of 5 units Wales +14.5 points at 9/10 widely available
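For anyone new to handicap betting, a quick worked example of how that recommendation would settle (standard fractional-odds arithmetic, not something stated in the preview itself): with Wales +14.5, the bet wins if Wales win, draw, or lose by 14 points or fewer. A 2.5-unit stake at 9/10 would then return 2.5 × 0.9 = 2.25 units of profit (4.75 units back including the stake); if England win by 15 or more, the 2.5-unit stake is lost.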
http://goodforthegame.co.za/Int-Rugby/wales-v-england-autumn-nations-cup-betting-preview.html
Aboriginal and Torres Strait Islander (respectfully, subsequently referred to as Indigenous) children in Australia experience oral disease at a higher rate than non-Indigenous children. A history of colonisation, government-enforced assimilation, racism, and cultural annihilation has had profound impacts on Indigenous health, reflected in oral health inequities sustained by Indigenous communities. Motivational interviewing was one of four components utilised in this project, which aimed to identify factors related to the increased occurrence of early childhood caries in Indigenous children. This qualitative analysis represents motivational interviews with 226 participants and explores parents’ motivations for establishing oral health and nutrition practices for their children. Findings suggest that parental aspirations and worries underscored motivations to establish oral health and nutrition behaviours for children in this project. Within aspirations, parents desired for children to ‘keep their teeth’ and avoid false teeth, have a positive appearance, and preserve self-esteem. Parental worries related to child pain, negative appearance, sugar consumption, poor community oral health and rotten teeth. A discussion of findings results in the following recommendations: (1) consideration of the whole self, including mental health, in future oral health programming and research; (2) implementation of community-wide oral health programming, beyond parent-child dyads; and (3) prioritisation of community knowledge and traditions in oral health programming.
https://ro.uow.edu.au/test2021/2715/
Engaging, accessible and stunning design for your digital product. I’m a professional designer focused on the humanity of our digital products. Users are the focus of the experiences we create, and those experiences should bring joy, be helpful, and foster progress.
Visual Solutions
Simple, frictionless and visually appealing designs that make products work as flawlessly as possible. Iterative creation with user input ensures that your product satisfies the end goal. Taking the stress and mysticism out of visual design for digital products, using accessibility guidelines, proven strategies, and discovery of unique user characteristics.
Diverse Experience
Work in a variety of industry sectors has created a breadth of experience while providing depth of expertise in Product Design itself.
http://www.justinbruss.com/home
TECHNICAL FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
Pixel Array Interconnections
Operation of the Invention
Fault Tolerance
Other Embodiments

This invention relates to spatial light modulators, especially those known as deformable mirror devices, and more particularly to circuitry for controlling the on and off states of individual pixel elements. Spatial light modulators (SLMs) consist of an array of electronically addressable pixel elements and related control circuitry. A typical application is image display, where light from each pixel is magnified and projected to a display screen by an optical system. The type of modulation depends on how the modulator is combined with an optical system. A frequently used type of SLM is the deformable mirror device, in which each pixel element is a tiny micro-mechanical mirror, capable of separate movement in response to an electrical input. Incident light may be modulated in direction, phase, or amplitude for reflection from each pixel. For many applications, the SLM is binary in the sense that each pixel element may have either of two states. An element may be off, meaning it delivers no light, or on, meaning it delivers light at a maximum intensity. To achieve a viewer perception of intermediate levels of light, various pulse width modulation techniques may be used. These techniques are described in pending U.S. Patent Serial No. 07/678,761, entitled "DMD Architecture and Timing for Use in a Pulse-Width Modulated Display System", assigned to the same assignee as the present application. In general, pulse width modulation produces an integrated brightness by switching each pixel on or off, during each frame, for a period that corresponds to a binary number. Pulse width modulation uses various schemes for loading the SLM, such as "bit-frame" loading, in which one bit per pixel for an entire frame is loaded at one time. Each pixel element has a memory cell. The entire array of memory cells is loaded with one bit per cell, then all pixel elements are set to correspond to that bit-frame of data. During the display time of the current bit-frame, data for the next bit-frame is loaded. Thus, for example, for 8-bit pixel brightness quantization, the SLM is loaded eight times per frame, one bit-frame at a time. In one such method, for n-bit brightness quantization, the most significant bit is displayed for 1/2 of a frame period, the second most significant bit for 1/4 of a frame period, etc., with the least significant bit (LSB) representing a display time of 1/2^n of a frame period. A problem with existing pixel loading techniques is that they require at least one memory cell per pixel element. As the number of pixels per frame increases, the memory requirements for the SLM device result in increased costs and reduced manufacturing yields. A need exists for an SLM that has reduced circuitry for controlling the pixel elements. Loading schemes that use a memory cell for every pixel element also limit the minimum time in which a pixel element can be set to the time required to load a bit-frame into the memory array. When pulse width modulation is used, the display time for the LSB is the shortest display time. During this LSB time, the data for the next frame must be loaded. This is the time period when a "peak" data rate is required.
To satisfy this peak data rate, a certain pin count and data frequency on those pins must be available. A high peak data rate translates into a high pin count and/or high frequency, which increases device and/or system costs. A need exists for an SLM that reduces this peak data rate. A first aspect of the invention is a spatial light modulator (SLM) having individually controlled pixel elements, each of which may be set and reset to either of two states depending on the value of a data signal delivered to that pixel element. The SLM has an array of pixel elements, each having two possible states depending on the value of a data signal delivered to it from an associated memory cell. The SLM also has a number of memory cells, each in data communication with a set of pixel elements. Each memory cell stores a data value representing an on or off state of a pixel element of its set and delivers a signal representing this data value to the pixel elements of its set. A number of reset lines are connected to the pixel elements such that a different reset line is in communication with each pixel element of a set. Thus, the reset lines may be used to reset only one pixel element of a set at a time. A technical advantage of the invention is that a single memory cell controls a set of multiple pixel elements. This reduces the circuitry per pixel, which has the effect of reducing device cost and increasing manufacturing yields. Also, the peak data rate at which loading must occur is reduced because there are fewer memory cells to load for any one reset. This has the effect of reducing pin counts and/or lowering data frequency requirements, with the further effect of lower device and/or system costs.
Figure 1 is a block diagram of a portion of an SLM array, having memory cells with a fanout of four pixel elements.
Figure 2 illustrates a memory cell having a fanout of four pixels.
Figure 3 illustrates the bistable operation of a mirror element of an SLM.
Figures 4 and 5 illustrate how reset lines can be easily connected for torsion-hinge type pixel element arrays having conductive mirrors and hinges.
Figure 6 is an example of a data sequence for loading a frame of data into an array of memory cells, each having a fanout of four pixel elements.
Figures 7-9 illustrate enhanced embodiments for providing improved fault tolerance.
Figure 1 is a block diagram of a portion of an SLM array 10, having pixel elements 11 that are controlled with memory cells 12 and reset lines 13. Only a small number of pixel elements 11 with their related control circuitry are shown; a typical SLM array 10 would have thousands of such elements 11. Figure 1 is primarily intended to show how each memory cell 12 serves multiple pixel elements 11. Additional detail about the interconnections between pixel elements 11, memory cells 12, and reset lines 13 is explained below in connection with Figures 2 - 5. SLM 10 is, for purposes of this description, a device known as a deformable mirror device (DMD). DMDs have arrays of tiny micro-mechanical mirror elements, which may be modulated to provide the viewer with a perception of varying intensity. An example of a DMD is the DMD device manufactured by Texas Instruments, Inc. However, the invention is not limited to the use of DMDs for SLM 10, and may be used with other types of SLMs having addressable pixel elements with similar characteristics, namely, operation in accordance with data signals and a reset control signal, as explained below.
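To make the split-reset arrangement concrete, here is a small, purely illustrative Python model of one memory cell with a fanout of four (the class names and simplified behavior are assumptions for illustration, not taken from the patent). Each pixel sees the shared memory-cell value, but only latches it when its own reset line is pulsed:

```python
class Pixel:
    """A bistable pixel element: holds its state until its reset line is pulsed."""
    def __init__(self) -> None:
        self.state = 0  # 0 = off, 1 = on

class MemoryCellWithFanout:
    """One SRAM cell driving a set of pixels, each tied to a different reset line."""
    def __init__(self, fanout: int = 4) -> None:
        self.value = 0
        self.pixels = [Pixel() for _ in range(fanout)]

    def load(self, bit: int) -> None:
        """Load the shared data value; no pixel changes state yet."""
        self.value = bit

    def reset(self, line: int) -> None:
        """Pulse one reset line: only that pixel latches the current cell value."""
        self.pixels[line].state = self.value

# Example: load a '1' and pulse reset line A (index 0); the other three pixels keep their old state.
cell = MemoryCellWithFanout()
cell.load(1)
cell.reset(0)
print([p.state for p in cell.pixels])  # [1, 0, 0, 0]
```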
Pixel elements 11 are operated in a bistable mode, which means that there are two stable states. As explained below in connection with Figure 3, the direction of their movement is controlled by "loading" them with data from their memory cell 12 via address electrodes to "drive" the pixel element 11. As further explained in connection with Figure 3, the state of the pixel element 11 is changed, in accordance with this driving voltage, by applying a differential bias via a reset electrode. The term "reset signal" is used herein to refer to a signal that is delivered to the pixel elements 11 to cause them to change state. Pixel elements 11 are grouped into sets of four pixel elements 11, each set in communication with a memory cell 12. The number of pixel elements 11 in a set associated with a single memory cell 12 is referred to as the "fanout" of that memory cell 12. Thus, in Figure 1, each memory cell 12 has a "fanout" of four pixels. The invention is applicable to other fanout values, but a fanout of four is used herein for purposes of example. Each memory cell 12 may be a conventional SRAM (static random access memory) cell. One of the advantages of many of today's designs for SLM 10 is that they may be easily integrated onto underlying CMOS control circuitry. This description is in terms of memory cells 12, each having a single bit storage capacity. However, the scope of the invention could also include "memory cells" that store more than one bit or that have additional logic circuitry. For example, each memory cell 12 could have a double buffer configuration. Four reset lines 13 control the time when the pixel elements 11 change their state. Once all memory cells 12 for the pixel elements 11 connected to a particular reset line 13 have been loaded, the states of the pixel elements 11 change according to the data with which they have been loaded, simultaneously, in response to a reset signal on that reset line 13. In other words, the pixel elements 11 retain their current state as the data supplied to them from their memory cell 12 changes, and until receiving a reset signal. Each pixel element 11 in the set of four pixel elements associated with a memory cell 12 is connected to a different one of four reset lines 13. Thus, each pixel element 11 in a set can change its state at a different time from that of the other pixel elements 11 in that set. In general, each set of pixel elements 11 associated with a memory cell 12 has the same number of pixel elements, and this number is the same as the number of reset lines 13. However, there could be instances, such as on edges of the pixel element array, where a memory cell 12 is connected to a fewer number of pixel elements. Figure 2 illustrates a set of four pixel elements 11, its memory cell 12 and reset lines 13, and the related interconnections. Each pixel element 11 is labeled in terms of the reset line 13 to which it is connected, i.e. pixel element 11(A) is connected to reset line 13(A). As indicated, either a "1" or a "0" value may be delivered to the pixel elements 11. When the memory cell 12 is switched, either of those values is delivered to all pixel elements 11 to which that memory cell 12 is connected. A signal on the reset line 13 of each pixel element 11 determines whether that pixel element 11 will change state. Figure 3 is a cross sectional illustration of a single pixel element 11 of a typical DMD type of SLM 10. The spatial light modulation is provided by a reflective mirror 31, which tilts in either of two directions. 
The two stable states of mirror 31 are indicated by the dotted lines. In its stable positions, one end of mirror 31 has moved toward one of two landing electrodes 32. Two address electrodes 33 are connected to the outputs of the memory cell 12 whose fanout includes that pixel element 11. A reset voltage is applied to the conductive mirror 31 by means of a reset electrode 34. Address electrodes 33 are used to apply a voltage difference, such that one end of mirror 31 is attracted to its underlying electrode 33 and the other end is repelled. The reset voltage at electrode 34 determines whether the mirror 31 will actually rotate to the corresponding landing electrode 32. Thus, the mirrors 31 are "loaded" via their memory cell 12 and reset via reset lines 13. If tilted in a selected direction, such as toward a display screen, a pixel element will be "on"; otherwise it is tilted so as to direct light elsewhere, such as to a trap. Figure 4 is a top plan view of a portion of an array of pixel elements 11, whose reset connections 13 are made via torsion hinges 41. As in Figures 1 and 2 and as indicated by dotted lines, each pixel element 11 is associated with a memory cell 12 having a fanout of four pixel elements 11. In this embodiment, pixel elements 11 have conductive mirrors 31 and conductive torsion hinges 41, so that the reset can be applied directly to the mirrors 31 via the hinges 41 without special connections or isolations. In Figure 4, where each mirror 31 has a pair of hinges 41 and where pixel elements 11 are aligned so that the hinges 41 are along horizontal lines, connections to reset lines 13 are easily made along these horizontal lines. Figure 5 illustrates an alternative arrangement of SLM 10. As in Figure 4, the fanout of each memory cell 12 is a vertically spaced set of pixel elements 11. However, the reset connections are along diagonal reset lines 13. As in Figures 2 and 3, each pixel element 11 is labeled in terms of the reset line 13 to which it is connected, i.e. pixel element 11(A) is connected to reset line 13(A). This arrangement would be useful in SLMs 10 where it is advantageous to align pixel elements 11 such that their hinges 41 are along diagonal lines. For pulse width modulation, the operation of SLM 10 is generally consistent with existing pulse width modulation techniques in that an n-bit value represents the brightness of each pixel element 11 during a frame period. Each bit of the n-bit value represents a time during which the pixel element 11 is either on or off. The number of bits in the n-bit value is referred to herein as the "bit depth". For purposes of example herein, it is assumed that each pixel element 11 displays light during one frame in accordance with a bit depth of 5 bits. Thus, for example, the four pixel elements 11 in a set associated with a single memory cell 12 might have the following data for a single frame:
pixel 1: A B C D E
pixel 2: F G H I J
pixel 3: K L M N O
pixel 4: P Q R S T
where {ABCDE} represents a 5-bit binary value. The value of each bit is "1" or "0", representing one of two possible states for the pixel element 11. If it is assumed that a "1" in the LSB position represents an "on" value of one time unit, then a "1" in the MSB position represents 16 time units, with the intermediate bits ranging downward as requiring 8, 4, and 2 time units:
bit 4 (MSB): 16 time units
bit 3: 8 time units
bit 2: 4 time units
bit 1: 2 time units
bit 0 (LSB): 1 time unit
With bit 4 as the MSB and bit 0 as the LSB, the greater the 5-bit value, the longer the pixel element 11 is on during a frame, and the brighter it is relative to other pixel elements 11 during that frame. Further details about pulse width modulation techniques are described in U.S. Patent Serial No. 07/678,761, referred to in the background section of this patent application and incorporated herein by reference. The pulse width modulation technique described herein makes use of the fact that some on or off times are long compared to the switching speed capability of memory cells 12. An underlying premise of the invention is that a single memory cell 12 may serve multiple pixel elements 11 if its data loading is sequenced so that no more than one of its pixel elements 11 needs resetting at the same time. In general, the sequencing used to load each frame of data depends on the fanout and the bit depth. Various sequences are possible, but a rule that the sequencing must follow is that no two pixel elements 11 in a set can need loading at the same time. Several "optional" rules, in addition to the rule of the preceding paragraph, may be applied. Where a fanout of m pixel elements is assumed, one such rule is that at the beginning of the sequence, all m pixel elements 11 are loaded in the first m time units. Thus, each pixel element 11 of each set is loaded in a continuous series of initial time slices. This rule results in good separation between frames, with a maximum skew of m time units between the end of one frame and the beginning of the next. Also, the data loaded during the first m - 1 time slices should not be the LSB data. Finally, the data for any one pixel element 11 should begin and end in the same position relative to a frame. This is true because for a bit depth of n bits, the number of data units used for loading data is 2^n - 1 data units. Figure 6 illustrates an example of data sequencing for a memory cell 12 having a fanout of four, and applying all of the above rules. Thus, where m = 4, and it is assumed that each loading step takes one time unit, the four pixel elements 11 associated with a memory cell 12 are loaded with the same data but only one pixel element 11 is reset. The pixel elements associated with a first reset line 13(A) are designated as pixel elements 11(A), etc. The loading sequence of Figure 6 is for 5-bit data frames, as follows:
Load pixels 11(A), bit 4, and reset 13(A)
Load pixels 11(B), bit 3, and reset 13(B)
Load pixels 11(C), bit 2, and reset 13(C)
Load pixels 11(D), bit 3, and reset 13(D)
Skip 2 LSB time units
Load pixels 11(C), bit 4, and reset 13(C)
Skip 2 LSB time units
Load pixels 11(B), bit 0, and reset 13(B)
Load pixels 11(B), bit 1, and reset 13(B)
Load pixels 11(D), bit 1, and reset 13(D)
Load pixels 11(B), bit 4, and reset 13(B)
Load pixels 11(D), bit 0, and reset 13(D)
Load pixels 11(D), bit 2, and reset 13(D)
Skip 1 LSB time unit
Load pixels 11(A), bit 0, and reset 13(A)
Load pixels 11(A), bit 2, and reset 13(A)
Load pixels 11(D), bit 4, and reset 13(D)
Skip 2 LSB time units
Load pixels 11(A), bit 3, and reset 13(A)
Load pixels 11(C), bit 0, and reset 13(C)
Load pixels 11(C), bit 1, and reset 13(C)
Skip 1 LSB time unit
Load pixels 11(C), bit 3, and reset 13(C)
Skip 2 LSB time units
Load pixels 11(B), bit 2, and reset 13(B)
Load pixels 11(A), bit 1, and reset 13(A)
Skip 1 LSB time unit
Buffering with a frame buffer (not shown) may be used to order the data in the correct sequence.
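As a numeric check on the timing arithmetic above, the sketch below (illustrative only, not code from the patent) computes the per-bit display weights, the total frame length in LSB time units, and the theoretical maximum fanout derived further below (2^n - 1 time slices per frame divided by n loading events per pixel):

```python
def bit_weights(bit_depth: int) -> list[int]:
    """Display time, in LSB time units, contributed by a '1' in each bit position."""
    return [1 << b for b in range(bit_depth)]  # bit 0 (LSB) .. bit n-1 (MSB)

def frame_length(bit_depth: int) -> int:
    """Total LSB time units needed to display one full n-bit frame: 2**n - 1."""
    return (1 << bit_depth) - 1

def max_fanout(bit_depth: int) -> int:
    """Theoretical maximum pixel elements per memory cell: (2**n - 1) // n."""
    return frame_length(bit_depth) // bit_depth

print(bit_weights(5))   # [1, 2, 4, 8, 16] -> bits 0..4
print(frame_length(5))  # 31 time units per frame
print(max_fanout(5))    # 6 pixel elements per memory cell at a bit depth of 5
```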
A frame of data (the data that fills an array of SLM 10) is divided into four "split bit-frames". For the first split bit-frame, bit 4 for each pixel element 11(A) in each set associated with memory cells 12 would be appropriately ordered for loading during a time unit, such that 1/4 of the SLM 10 is loaded. Then, all bit 3's for each pixel element 11(B) would be ordered as a second split bit-frame for loading, etc. The overall effect of the data sequencing is that, for each frame, the entire array of pixel elements 11 is reset in groups of pixels, rather than all at once. Thus, resetting occurs in a "split reset" pattern, i.e., those pixel elements 11 connected to a single reset line 13 are switched at the same time. Like prior pulse width modulation techniques, it takes 2^n - 1 LSB time units to display a full n-bit frame. However, each loading step is done with smaller increments of memory and can therefore be done in less time. In the example of this description, 1/4 of a bit-frame is loaded for every reset signal. In other words, four reset signals are used per bit-frame. Each bit-frame, unlike those of prior pulse width modulation techniques, may display data from a different bit. As a result of the loading technique of the invention, the peak data rate is reduced. Also, although loading occurs more frequently per frame, the higher valued bits no longer coincide for all pixel elements 11. Thus, there are no long waits during the display time of these higher valued bits. The average data rate and the peak data rate more closely converge. The maximum fanout per memory cell 12 depends on the bit depth. Where the bit depth is n, the theoretical maximum fanout may be calculated as:
fanout_max = (2^n - 1) / n
The numerator of the above equation reflects that there are 2^n - 1 time slices per frame. The denominator indicates that each fanout requires n events. Computer programs may be developed and used to determine appropriate sequences for varying bit depths and fanouts. A rule-based program will prevent violations of the above-stated rule that prohibits more than one pixel element 11 in a set from needing resetting at one time, as well as of the other optional rules. An enhanced method of the invention combines the above-described "split reset" process with "block clearing". Block clearing has been used with prior pulse width modulation schemes to avoid the problem of having to load an entire bit-frame during a LSB time unit. For block clearing, bit-frames are loaded in whole multiples of a LSB time unit. A mechanism is provided on the SLM 10 to allow all pixel elements 11 to be quickly "cleared", i.e., switched to an "off" state. Thus, those bit-frames whose "on" times are less than the time required for loading can be given their appropriate weight. The total number of time units in a frame exceeds the maximum brightness time by the number of time units used for clearing. Thus, the consequence of having pixel elements 11 in an "off" state during part of loading is a reduction in optical efficiency of the SLM 10. The general aspects of block clearing are described in U.S. Serial No. 07/678,761. Figure 7 illustrates an enhancement of the SLM 10 of Figures 1 - 5, especially with respect to the interconnections between each memory cell 12 and the pixel elements 11 in its fanout. A resistive element, in this case a resistor 71, is included in each data connection for reducing the impact of a failure at any one pixel element 11.
For example, a short at one of the pixel elements 11 will not cause the rest of the pixel elements 11 in the set to fail. As stated above, a feature of many SLMs 10 is that they are easily fabricated using integrated circuit processes. In these types of SLMs 10, resistors 71 could be fabricated from a polysilicon material. Alternatively, a highly resistive material could be used for the electrode contacts. Also, as an alternative to extra resistive areas or elements, the entire fabrication level for pixel element electrodes, such as the electrodes 33 of Figure 3, could be made from a material, such as titanium nitride or titanium oxynitride, having a high sheet resistance. Figure 8 illustrates another fault-tolerant enhancement of SLM 10. Instead of resistors 71, diodes 81 are used as a resistive element to isolate a fault at any one pixel element 11. Figure 9 illustrates a third fault-tolerant enhancement. Fuses 91 are designed to "blow" if there is a shorted pixel element 11. Zener diodes 92 or some other type of breakdown diode provides a high resistance to ground. Although the invention has been described with reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternative embodiments, will be apparent to persons skilled in the art. It is, therefore, contemplated that the appended claims will cover all modifications that fall within the true scope of the invention.
It’s been a long time since my last post so I thought I’d return with something a little different than in the past. Rather than speak about “market research done wrong” (as my former posts tend to dwell on), I’d like to demonstrate some of the applications of predictive modeling/machine learning/[INSERT “DATA SCIENCE” BUZZWORD HERE] within the market research realm. As the industry moves towards methodologies that blend both “hard” and “soft” data (e.g. transaction data and surveys, respectively), these techniques will become more important to learn and apply in order to stay relevant and continue to provide our clients with the best direction and research ROI. The goal of this post is to show a simple demonstration of predictive modeling applied to a common market research question: WHO WILL PURCHASE MY PRODUCT?
AZURE MACHINE LEARNING ENVIRONMENT
While I have spent most of my self-study time learning the R programming language, I wanted to try Microsoft’s Azure Machine Learning. Azure is part of Microsoft’s Cortana Analytics Suite, which is the company’s foray into the data analytics market (now part of Azure’s AI Platform; Microsoft has a way of playing mix-and-match with their various product suites). You may be familiar with the Cortana Personal Assistant that came with the recent Windows 10 deployment (similar to Apple’s Siri or Amazon Echo’s Alexa). All is not lost for R-lovers, though: Microsoft’s acquisition of Revolution R back in 2015 has helped with the integration of an R coding module within the Azure environment (there is one for Python too if that’s your preferred language). To oversimplify, Azure is a drag-and-drop machine learning workflow designer. It removes much of the coding/scripting aspects inherent to a machine learning (from now on, just “ML”) project. While this makes things easier for the user, it can give one a false sense of understanding what exactly their experiment (this is the name given to an Azure project) is doing to achieve its results. I suggest at least getting an intermediate understanding of applied ML before delving into Azure. Azure experiments are created by dragging “modules” onto the work space and connecting them similar to a flow chart. Each module represents an operation such as filtering columns in a data set, applying a ML algorithm, or converting output to a specific format such as .CSV.
INNOCENTIVE’S “PREDICTING PRODUCT PURCHASE” DATA CHALLENGE
Innocentive is a website similar to Kaggle. It offers data science competitions with monetary rewards for creating predictive models used by real-world organizations. Sure, the data is often “cleaner” than in a real-world setting, but for learning the trade (with the incentive of winning some cold, hard cash along the way) they are great sites. Details on this particular challenge are located here. In a nutshell, the project is a classification problem: who will purchase a product based on the given set of features? Participants are given two data sets: a training set used to build the model and a test set to evaluate the model using new data not included in the model-building process. Both sets are sizable (training set: ~643,000 records; test set: ~782,000 records) but shouldn’t require any fancy distributed computing systems (e.g. Hadoop) to analyze. Since Azure is cloud-based this becomes a non-issue because the data is stored out-of-memory. The outcome variable we are trying to predict using the remaining features, “target purchase,” is marked “1” for purchasers and “0” for non-purchasers.
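To make the setup concrete, here is a minimal pandas sketch of the first step of such a project: reading the training file and checking the balance of purchasers versus non-purchasers. The file name and column name are assumptions for illustration; the actual challenge download may use different names.

```python
import pandas as pd

# Hypothetical file name; the Innocentive challenge supplies its own download.
train = pd.read_csv("training_set.csv")

print(train.shape)  # roughly 643,000 rows in the training set described above
# Share of purchasers ("1") vs. non-purchasers ("0") in the outcome column.
print(train["target_purchase"].value_counts(normalize=True))
```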
The challenge requires that models are at least 72% accurate, that is, that at least 72% of cases are classified correctly. One aspect of this data challenge that intrinsically adds difficulty is the fact that the features’ meanings have been obscured and the features given generic names like “C2,” “C3,” etc. All participants are given is a simple metadata file that indicates whether each feature is numeric or character in nature. Because of the somewhat nebulous a priori knowledge we are given about the data, feature engineering and feature selection can be laborious and fraught with a lot of guesswork.
MODEL BUILDING: 1ST ATTEMPT
Admittedly, I made one of the first mistakes in ML: I threw every algorithm I could at the training set without first analyzing the features! While you can do this (and you’ll usually fail fast), keep in mind that one of the most important steps in ML is feature selection and feature engineering. Arguably, data cleansing/preparation is the most important, but these are a close second. Thankfully, the data set provided for training has zero missing values and most of our numeric features share similar standard deviations and means (although we will need to revisit the numeric variables later). Before diving into the “fun” stuff, I did perform a few maintenance routines: fixing metadata so variables are treated correctly according to the file mentioned above, sampling from the training file to reduce the overall size of the data set we are working with (this should help with computation time), and splitting the data set into training and validation sets (while we already have a test set, the validation set will allow us to “pre-test” our model on the training data prior to using it in a “real” evaluation). Azure only has a handful of built-in ML algorithms but they cover most of the commonly used ones. A variety of options are available for two-class problems, so I began with Two-Class Boosted Decision Trees, Two-Class Decision Forest, Two-Class Support Vector Machines, and Two-Class Logistic Regression. Since building any ML model is an iterative process, I started by using the “Sweep Parameters” module for each algorithm. This allows Azure to test a range of tuning parameter options and choose the model with the highest predictive accuracy. On first pass, each of our models scored similarly (~71% – ~74% accuracy). While many may hang their hats up and be satisfied with these results, I believe we can do better! Our best model utilized Two-Class Boosted Decision Trees. While I played around with the tuning parameters for the other models, for the sake of brevity (and I’m already pushing that!) please note that this model held up after several iterations and continued to have the best accuracy among all those tested. I’ll stick with it for the remainder of the post.
Aside: Two-Class Boosted Decision Trees
What exactly is this algorithm? Tree-building algorithms have been around for a long time and are most closely associated with CART (Classification and Regression Tree) models. A basic CART model (for classification rather than regression in this post) iterates over all features and selects the feature that best splits the data by the outcome variable (in our case, “target purchase”). By “best splits” I mean the feature that most accurately divides the data into two subgroups with the greatest proportion of cases falling into the purchaser/non-purchaser classification (i.e. our “1s” and “0s”).
Further, the algorithm recursively applies this splitting procedure using the remaining features within each subgroup and continues this way until some threshold is reached. When the splits no longer help divide the data anymore, we’ve reached a “terminal node” or “leaf” in our tree. After some tweaking, new cases can be passed through the tree to receive classification at a “leaf.” Azure’s Boosted Decision Trees work by building several “weak learner” trees, averaging how well they classify cases as a group, or “ensemble,” then adjusting the parameters to “fix” incorrect classifications in the next iteration of tree building (the “boosting” part). I admit, this is a lousy explanation for a highly technical algorithm (and I don’t pretend to fully understand the specific mathematics behind it), but to simplify, just think of the algorithm adhering to the proverb “if at first you don’t succeed, try, try again” with incremental improvements.
MODEL BUILDING: 2ND ATTEMPT
Now that I’ve decided that our boosted tree model is the way to go, let’s see if there are any improvements I can make by adjusting the feature inputs. Please note, in a typical setting this is working a little backwards. The first round of feature engineering/selection is typically performed prior to modeling, but considering that this data set is anonymized we don’t really have any knowledge of what each feature is measuring. Unsupervised learning methods (clustering, correlation analysis, etc.) could potentially provide some insight, but I tried something a little different. Azure contains a set of modules called “Learning with Counts” which take features in a data set and count the occurrences of unique values grouped by the outcome variable (“target purchase”). This is a sort of feature engineering that uses the frequencies of values rather than the values themselves. It also calculates the difference between the groups’ counts. I gave this a whirl since a lot of the columns marked as “numeric” in the metadata in fact only contain a handful of unique values. I suppose “numeric” should not be confused with “continuous” in all cases with this data set. The transformed data set now contains new features with these count values. Let’s run our boosted tree algorithm again to see if any improvement occurred. Wow! The accuracy shot up enormously! This is the territory I’d be happy in with a ML model. Yet, nothing really matters until we apply the model to predict “target_purchase” on our test set. After applying the same counting transformation to our test set and using Innocentive’s online scoring tool, we are able to upload our predictions for our ~782K-record test data. Drum roll please… Oh no! A dismal 48% accuracy! This is actually worse than just guessing. How can this be? Two things come to mind:
- The counting transformation failed on the test set due to the lack of a “target purchase” outcome variable (this column is blank in this set and the transformation may not have run correctly).
- The model overfit the training data (i.e. the model is so biased towards the nuances of the training data that it only does a good job of modeling that set, not the test set or any other similar set of data we’d send through it).
In order to make our model generalize to unseen data, we will most likely have to sacrifice some predictive accuracy when we train the model.
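Azure’s “Learning with Counts” modules are proprietary, but the underlying idea (replace a raw feature value with how often that value occurs among purchasers and among non-purchasers, plus the difference) can be approximated in a few lines of pandas. The sketch below is a rough stand-in for illustration, not the exact transformation Azure applies, and the column names are hypothetical:

```python
import pandas as pd

def add_count_features(df: pd.DataFrame, feature: str, target: str = "target_purchase") -> pd.DataFrame:
    """Append per-class occurrence counts of a feature's values, plus their difference."""
    counts = (
        df.groupby([feature, target])
        .size()
        .unstack(fill_value=0)
        .rename(columns={0: f"{feature}_count_0", 1: f"{feature}_count_1"})
    )
    counts[f"{feature}_count_diff"] = counts[f"{feature}_count_1"] - counts[f"{feature}_count_0"]
    return df.join(counts, on=feature)

# Example usage on one of the anonymized columns:
# train = add_count_features(train, "C2")
```

A comparable open-source stand-in for Azure’s Two-Class Boosted Decision Trees would be scikit-learn’s GradientBoostingClassifier (or a library such as XGBoost), swept over learning rate and tree depth much as the “Sweep Parameters” module does.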
MODEL BUILDING: 3RD/FINAL ATTEMPT
Although this is titled “3rd Attempt,” I’ve skipped ahead a few steps since this post is becoming a little long. Here is a brief summary of how we got here:
- Rather than using “Learning with Counts” on the numeric variables with only a few unique values (see above), I instead converted these to categorical variables in both the training and test sets. This transformation will not cause the same error that happened before due to the lack of the “target purchase” outcome variable in the test set.
- I’ve identified the optimal model-building parameters to maximize our model’s accuracy on the test set. This means I can now simply use the “Train Model” module to create a single model using one parameter set. Because we’ve reduced the computation time at this step, it is more feasible to use a greater portion of the training data. Therefore, I’ve eliminated the “Partition and Sample” module to allow the algorithm to use all the training data for model building.
Our final trained model is 72% accurate, meaning it correctly classifies both purchasers and non-purchasers 72% of the time. Now the true test… how will it perform on the test set? A dramatic improvement! We went from 48% accuracy to 72% accuracy (you can see from the image that I made several submissions to the online scoring tool before reaching this level of accuracy, so a lot of trial and error was involved). While I would have liked to do better, I was at least able to cross the minimal submission threshold. Yet more uplifting is the fact that my model came in 19th place out of the 735 participants in the competition! Not enough to crack the top ten and give me a chance at the $20,000 prize, but satisfying nevertheless.
CONCLUSION
In a real-world setting…
- …we’d typically have a lot more knowledge about the data we were modeling prior to reaching that step. This could aid tremendously in the feature engineering/selection phase (although tree-based methods like the one we used here inherently have feature selection built into their algorithm). We’d also investigate our features through visualization, which makes it much easier to determine the relationships between our features and the outcome as well as between the features themselves. Azure unfortunately does not have any built-in modules for exploring data through visualization, although both the R and Python modules may allow this (I have yet to try).
- …we’d probably weigh the value of our model based on its sensitivity (how well it predicts true positives) or specificity (how well it predicts true negatives) rather than overall accuracy (see the short sketch at the end of this post). The reason for this is that sometimes the cost of incorrectly classifying a case as positive, and vice versa, will have a bigger impact on the model’s value. For instance, if you build a model that identifies potential purchasers of your product within a zip code for a direct mail campaign, you’ll be more concerned with the model’s sensitivity than its specificity; you want to maximize the number of buyers and are probably not too concerned with a few false positives (Type I Error) ending up in the distribution. Alternatively, a model used to predict the malignancy of potentially cancerous cells may want a more balanced sensitivity/specificity since a false negative (Type II Error) could lead to potentially fatal results; accuracy might be the appropriate measure.
- …our final model would need to balance parsimony with business need.
A great example of a model that failed in this regard is the Netflix Prize, which, in order to achieve such great results, utilized an ensemble of 107 algorithms! Yet the bigger issue came with the shift in Netflix’s business model from mail-delivery DVD/Blu-ray to streaming. This completely changed how people chose what to watch now that they could “sample” content before watching. A model is only as good as its reflection (and impact) on real-time business needs! One of the oldest uses of market research is to determine who will buy a product or not. Traditionally, this type of study would involve long, tedious surveys, minimally informative focus groups, and a turnaround time of several weeks or months. If the Netflix Prize teaches us anything, it’s that business does not stand still for too long anymore and today’s research insights may already be invalid before they are presented to a client. I was able to put together this simple model by myself over the course of one week, and that’s only because I worked on it part time. Writing this post nearly took the same amount of time!
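As a companion to the sensitivity/specificity point in the conclusion above, here is a small scikit-learn sketch computing those metrics from a confusion matrix. It is generic illustration code rather than part of the Azure experiment described in this post; the labels and predictions are placeholders.

```python
from sklearn.metrics import confusion_matrix

# Placeholder labels and predictions; in practice these come from a validation or test set.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall accuracy (the 72% figure above)
sensitivity = tp / (tp + fn)                 # true positive rate: how well purchasers are caught
specificity = tn / (tn + fp)                 # true negative rate: how well non-purchasers are caught

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```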
https://expectedx.com/uncategorized/using-statistical-modeling-to-predict-product-purchase-in-microsoft-azure-2018-6-26/
What happens if humans don’t socialize?
Social isolation can lead to feelings of loneliness, fear of others, or negative self-esteem. Lack of consistent human contact can also cause conflict with (peripheral) friends. The socially isolated person may occasionally talk to or cause problems with family members.

Why do introverts hate socializing?
Even when socializing is enjoyable, introverts still get worn out. Again, this is due to the way our brains are wired; compared to extroverts, we just aren’t as motivated and energized by social rewards.

How important is the family in the socialization of a person?
Family is usually considered to be the most important agent of socialization. They not only teach us how to care for ourselves, but also give us our first system of values, norms, and beliefs.

How do I get better mentally?
Talk about your feelings: talking about your feelings can help you stay in good mental health and deal with times when you feel troubled. Other suggestions: keep active; eat well; drink sensibly; keep in touch; ask for help; take a break; do something you’re good at; and more.

How being alone affects the mind
Loneliness can be damaging to both our mental and physical health. Socially isolated people are less able to deal with stressful situations. They’re also more likely to feel depressed and may have problems processing information. This in turn can lead to difficulties with decision-making and memory storage and recall.

Is it okay to not socialize?
It’s okay to be less social than other people. Some simply have a lower drive to socialize, which can show in a variety of ways: they like to spend a lot of time alone; they’re solitary by choice, not because they want to be around people more often but can’t; and when they do socialize they’re happy to do it in smaller doses.

What are the effects of not socialising?
Here are 11 things that can happen to your body and mind if you don’t socialize for more than a day: poor self-esteem, depression, loss of reality, increased tumor risk, body chills, decreased ability to learn, decreased sense of empathy, inflammation, and more.

Why do introverts hate talking?
Psychologist Laurie Helgoe says introverts hate small talk because it creates a barrier between people. Superficial, polite discussion prevents openness, so people don’t learn about each other. Deeper meaning: Helgoe again, “Introverts are energized and excited by ideas.”

Are introverts lazy?
Although everyone is “lazy” sometimes, when introverts are relaxing in their bedroom, it’s probably because they’re trying to lower their stimulation level and recharge their energy.

Can humans live without socializing?
The answer is yes. The language of survival here implies a continuation in life. For that, you just need water, food and shelter. Socialization was derived from the understanding that energy expenditure would decrease for the individual, as resources would increase and be distributed amongst the group.

Do humans need to socialize?
Socializing not only staves off feelings of loneliness, but also helps sharpen memory and cognitive skills, increases your sense of happiness and well-being, and may even help you live longer. In-person is best, but connecting via technology also works.

Why you should not isolate yourself
Isolation can increase the risks of mental health issues such as depression, dementia, social anxiety, and low self-esteem. Isolation and mental health issues can also interact with one another in a feedback loop.
What is the impact of socialization?
First, socialization teaches impulse control and helps individuals develop a conscience. This first goal is accomplished naturally: as people grow up within a particular society, they pick up on the expectations of those around them and internalize these expectations to moderate their impulses and develop a conscience.

Is socialising good for depression?
Socialization can help improve our mental and emotional health. Studies show, and wisdom confirms, that being social decreases depression. Socialization also improves overall mental health.

How does socialization shape a person’s self-image?
Socialization affects self-image in many ways. Our individual socialization patterns shape our mentalities. The things we individually experience in society directly affect our minds, which explains how our minds register and react to incidents and situations we encounter differently.

Can you imagine life without socialization?
Without socialization, we could not have our society and culture. And without social interaction, we could not have socialization. Our example of a socially isolated child was hypothetical, but real-life examples of such children, often called feral children, exist.

What do you call someone who doesn’t like to socialize?
If you’re reticent about your feelings, you like to keep them to yourself, and you’re probably quiet in rowdy groups where everyone is talking over each other. The original meaning of reticent describes someone who doesn’t like to talk.

How often should you socialize?
Regardless of the age group to which the participants belonged, the results of the research were clear: to have a strong sense of well-being, five to six hours per day of socializing was necessary.

Do introverts hate socializing?
This doesn’t mean introverts hate people; introverts can like socializing, but they certainly don’t place spending time with others on the same level as an extrovert would. In fact, according to this study’s findings, some introverts may even be indifferent to people.
https://fieldofstudyofsociology.com/qa/quick-answer-is-it-unhealthy-to-not-socialize.html
On Saturday Emma Stone performed a very special tribute to Billie Jean King in honor of the 50th anniversary of the tennis icon’s triple crown win in 1967. Together with Tony-nominated singer/songwriter Sara Bareilles, Stone paid tribute to King at a pre-match ceremony at Arthur Ashe Stadium in New York City, where the U.S. Open is currently being held. In 1967, King won her triple crown by finishing first in the women’s singles, women’s doubles, and mixed doubles championships at the U.S. Open. Stone is playing King in the upcoming movie Battle of the Sexes, which tells another remarkable story from King’s career. In 1973, King famously defeated ex-champion Bobby Riggs at the Houston Astrodome in a match nicknamed “Battle of the Sexes.” Riggs thought there was no way a woman could beat him at tennis, but clearly he was wrong. At the time, so many people tuned in to watch “Battle of the Sexes” that it became the most-watched sporting event of all time. Curiously enough, during filming Stone rarely sought King out to discuss her character, although King had an explanation for that. “She didn’t want to because I’m in my 70s, and she said I’m more fully formed as a person,” King said. “She said, ‘I want to know what [Billie] was feeling at 28 and 29 when she played Bobby.’ That’s where she had to concentrate.” Battle of the Sexes will be released in theaters on Sept. 22.
https://people.com/celebrity/emma-stone-honors-50th-anniversary-of-billie-jean-kings-triple-crown-win-at-the-u-s-open/
Land in Crisis, presented by National Geographic. Based on a PBS broadcast, the site includes: Africa for Kids, where Fimi, a youngster from Nigeria, serves as the guide to a variety of fun activities for elementary level students; Photoscope, where older students can look at contemporary Africa in five photo essays; and Africa Challenge, where students can show how much they know by playing a game. Then philosophy migrated from every direction to Athens itself, at the center, the wealthiest commercial power and the most famous democracy of the time. Socrates, although uninterested in wealth himself, nevertheless was a creature of the marketplace, where there were always people to meet and where he could, in effect, bargain over definitions rather than over prices. Similarly, although Socrates avoided participation in democratic politics, it is hard to imagine his idiosyncratic individualism, and the uncompromising self-assertion of his defense speech, without either wealth or birth to justify his privileges, occurring in any other political context. If a commercial democracy like Athens provided the social and intellectual context that fostered the development of philosophy, we might expect that philosophy would not occur in the kind of Greek city that was neither commercial nor democratic. As it happens, the great rival of Athens, Sparta, was just such a city. Sparta had a peculiar, oligarchic constitution, with two kings and a small number of enfranchised citizens. Most of the subjects of the Spartan state had little or no political standing, and many of them were helots, who were essentially held as slaves and could be killed by a Spartan citizen at any time for any reason -- annual war was formally declared on the helots for just that purpose. The whole business of the Spartan citizenry was war. Unlike Athens, Sparta had no nearby seaport. It was not engaged in or interested in commerce. It had no resident alien population like Athens -- there was no reason for foreigners of any sort to come to Sparta. Spartan citizens were allowed to possess little money, and Spartan men were expected, officially, to eat all their meals at a common mess, where the food was legendarily bad -- all to toughen them up. Spartans had so little to say that the term "Laconic," from Laconia, the environs of Sparta, is still used to mean "of few words" -- as "Spartan" itself is still used to mean simple and ascetic. While this gave Sparta the best army in Greece, regarded by all as next to invincible, and helped Sparta defeat Athens in the Peloponnesian War, we do not find at Sparta any of the accoutrements otherwise normally associated with Classical Greek civilization. Socrates would have found few takers for his conversation at Sparta -- and it is hard to imagine the city tolerating his questions for anything like the thirty or more years that Athens did. Next to nothing remains at the site of Sparta to attract tourists (the nearby Mediaeval complex at Mistra is of much greater interest), while Athens is one of the major tourist destinations of the world. Indeed, we basically wouldn't even know about Sparta were it not for the historians (e.g. Thucydides) and philosophers (e.g. Plato and Aristotle) at Athens who write about her.
In the end, philosophy made the fortune of Athens, which essentially became the University Town of the Roman Empire (only Alexandria came close as a center of learning); but even Sparta's army eventually failed her, as Spartan hegemony was destroyed at the battle of Leuctra by the brilliant Theban general Epaminondas, who killed a Spartan king, Cleombrotus, for the first time since King Leonidas was killed by the Persians at Thermopylae. A story about Thales throws a curious light on the polarization between commercial culture and its opposition. It was said that Thales was not a practical person, sometimes didn't watch where he was walking, fell into a well (according to Plato), was laughed at, and in general was reproached for not taking money seriously like everyone else. Finally, he was sufficiently irked by the derision and criticisms that he decided to teach everyone a lesson. By studying the stars (according to Aristotle), he determined that there was to be an exceptionally large olive harvest that year. Borrowing some money, he secured all the olive presses (used to get the oil, of course) in Miletus, and when the harvest came in, he took advantage of his monopoly to charge everyone dearly. After making this big financial killing, Thales announced that he could do this anytime, and so, if he otherwise didn't do so and seemed impractical, it was because he simply did not value the money in the first place. This story curiously contains internal evidence of its own falsehood. One cannot determine the nature of the harvest by studying the stars; otherwise astrologers would make their fortunes on the commodities markets, not by selling their analyses to the public. So if Thales did not monopolize the olive presses with the help of astrology, and is unlikely to have done what this story relates, we might ask if he was the kind of impractical person portrayed in the story in the first place. It would not seem so from all the other accounts we have about him. The tendency of this evidence goes in two directions: First, Thales seems to engage in activities that would be consistent with any other Milesian engaged in business. The story about him going to Egypt, although later assimilated to fabulous stories about Greeks learning the mysteries of the Egyptians (who don't seem to have had any such mysteries, and would not have been teaching them to Greeks anyway), is perfectly conformable to what many Greeks actually were doing in Egypt.
…was living just two thousand years ago. Only China, with a continuous history since the Shang (c. BC), has at least equalled this, but just barely, if we bring Egyptian history down to the last hieroglyphic inscription ( AD). To the Egyptians, Egypt was the "Black Land." Some people think that this referred to the skin color of the Egyptians. However, the Egyptians contrasted.
According to the Children's University of Manchester, the ancient Egyptians developed their writing system, made up of pictures and symbols called hieroglyphs. Learning Egyptian hieroglyphs is an excellent way to start a unit on ancient Egyptian history. The students presented "Rooted in Collaboration: Engaging Middle School Students through Poetry," a collaborative poetry unit that they planned and taught at Washington Middle School in Dubuque, Iowa in the fall, as part of their English Methods course, taught by Hilarie Welsh, Ph.D., assistant professor of education. Signing your name or scribbling a grocery list may seem a simple, mundane activity.
In fact, it is the result of a complex interaction of physical and mental processes involving cooperation among your brain’s cognitive, motor, and emotion areas, down through the brain stem and the spinal cord, and out to your hand. If you really like an activity from a particular book, I encourage you to buy the book in order to see the activity in context and find more lessons that will work well with your students. (Purchasing a book also helps to support the author.)
The Opening Sequence is one of the most notable hallmarks of The Simpsons. The sequence differs from episode to episode, usually with a different Chalkboard Gag, Couch Gag, and a Saxophone Solo from Lisa. And since mid-Season 20, a Title Screen Gag, Billboard Gag, and Lisa with a different.
https://huwuvuxyqos.leslutinsduphoenix.com/hieroglyphics-writing-activity-middle-school-28252fn.html
The VISN 17 Center of Excellence for Research on Returning War Veterans (VISN 17 CoE) is located at the historic campus of the Doris Miller VA Medical Center in Waco, Texas. The Doris Miller medical campus opened in 1932 as one of the first hospitals built by the VA; it was primarily designated as a psychiatric facility and currently hosts 11 mental health programs and our premier research facility. There are currently three congressionally mandated “Centers of Excellence” in the nation: one specializing in the study of suicide prevention (Canandaigua, NY), one specializing in the study of stress-related mental health issues (San Diego, CA), and the VISN 17 CoE, which specializes in the study of the development, progression, and treatment of Posttraumatic Stress Disorder (PTSD). The Waco VA campus was chosen by Congress to host the VISN 17 CoE in 2006 to conduct specialty research on our country’s returning Post-9/11 veteran population, which includes veterans who served in any of the military branches during the Operation Enduring Freedom (OEF), Operation Iraqi Freedom (OIF), and Operation New Dawn (OND) campaigns. With our proximity to Fort Hood (one of the largest active duty U.S. military installations), several reserve and National Guard units, and the heavy concentration of discharged veterans in the Central Texas region, there are more than 35,000 unique OEF/OIF/OND veterans registered in the Central Texas Veterans Health Care System (CTVHCS) and more than 40,000 active duty military service members in our catchment area. Recent census data from Fort Hood indicated that there were an additional 316,380 family members (including spouses and children, retirees, and survivors of deceased veterans) residing in the Central Texas region, providing an additional rich population for evaluating the impact of posttraumatic stress symptoms and resiliency in military families. In alignment with the Department of Veterans Affairs (VA) statutory missions, the VISN 17 CoE conducts several education and training programs for health profession students to enhance the quality of care provided to veteran patients within the Veterans Health Administration (VHA) healthcare systems. In accordance with this mission, "To educate for VA and for the Nation", education and training efforts are accomplished through coordinated programs and activities with academic affiliates in our community. Affiliates include institutions such as Baylor University, Texas A&M Health Science Center, University of Texas-Dallas, and Tarleton State University, where the VISN 17 CoE has developed tiered training programs (see Training Programs) for students ranging from undergraduates to post-doctoral graduates, to develop high quality future clinical providers and research professionals to meet patient care needs within VA and the nation.
Our Mission

The VISN 17 Center of Excellence for Research on Returning War Veterans was tasked by Congress [1, 2] to:
- Study post-traumatic stress disorder and other mental health conditions experienced by Veterans returning from recent military conflicts, including Operations Desert Storm, Enduring Freedom, Iraqi Freedom, and New Dawn
- Leverage the relationship between the Waco VA Medical Center and Fort Hood to further research on post-traumatic stress disorder
- Serve as a specialized facility for mental health and post-traumatic stress disorder, supporting Waco and the Central Texas Health Care System with clinical expertise and educational support

We are true to our mission and unifying theme by remaining clinically focused on new and innovative treatments for the mental health conditions common among returning Veterans, including post-traumatic stress disorder (PTSD) and traumatic brain injury (TBI).

1. Committee Reports, 109th Congress (2005-2006), House Report 109-305
2. Military Quality of Life and Veterans Affairs and Related Agencies Appropriations Bill, 2006

Post-Deployment Problems

Through the study of numerous neurological, genetic, and psychological factors, our Center uses a comprehensive approach to develop advanced methods for assessing and treating several issues often experienced by Veterans returning from theaters of war, including but not limited to:
https://www.mirecc.va.gov/mirecc/visn17/
Construction Progress Inspections and Reports

Comply with building codes. A construction progress inspection involves assessing construction work before completion in order to identify any errors or failures to comply with the BCA (Building Code of Australia).

Multiple inspections

We recommend multiple progress inspections throughout the construction process. Our expert building inspectors can inform you of anything the builder should or shouldn’t be doing and, by doing so, give you the information you need to address the builder with confidence about any sub-standard practice. Waiting until lock-up stage can mean it’s too late to find out what might be hidden behind the plasterboard and white paint.

It’s in the details

With an eye for detail, our building inspectors can identify defects in the structure, surfaces, or waterproofing of your construction project. With years of experience, they know where to look for common mistakes, shortcuts, and vulnerabilities in the construction process. After each progress inspection we will provide you with a report detailing any issues, including photographs of problem areas, within 24 hours.

Is it too late?
https://vitalbuildinginspection.com.au/building-inspections/construction-progress-inspection-report/
It is more timely now than ever to act upon the issues of Muslim religio-political relations, given the increasing spread of Islamophobia. Ever since the September 11, 2001 attacks, the image of Islam has been shaped by what the media portrays it to be. The wrong perception of Islam, embodied in the violent extremism of certain ideologies, is taken as representative of the religion, which has consequently resulted in exaggerated fear, hatred and hostility toward Muslims. In this era of globalisation, Islamophobia is felt in every part of the world, even in countries which have little contact with Muslims or a Muslim populace.

Professor John L. Esposito, in his recent talk “The Clash of Ignorance and Its Implications for Muslim Religio-Political Relations” at USIM, addressed this issue in the context of the clash of ignorance. The notion of the clash of ignorance was coined by Edward Said as a counter-critique to the earlier notion of the clash of civilisations advanced by Samuel Huntington, which originated with Bernard Lewis and was adopted by academics, policymakers and religious leaders in addressing the aftermath of the September 11, 2001 attacks. The notion of the clash of civilisations was criticised as simplistic and as concentrating too much on ‘the West’ and ‘Islam’, maintaining that the real source of conflict lies within Islam. The notion of the clash of ignorance, by contrast, provides a better understanding of the current state of affairs, which is very much the result of a mutual lack of awareness regarding religious belief, history and culture. It must be recognised that conflicts today are largely the result of ignorance rather than of the actual differences themselves.

According to Professor John L. Esposito, among the major ignorant beliefs about Islam are that it is a militant, or potentially militant, religion, and that it is incompatible with democracy and would undermine the very cultural values of society. These beliefs lead to the perception that Islam is peculiarly different and thus cannot be absorbed peacefully and progressively into a mixed society and modernity. The hijacked, distorted version of Islam promoted by extremists should not be taken as an acceptable representation of Islam and of the more than 1.8 billion Muslims in the world today. Honour killing, terrorism and other violent acts are not in line with the teachings of Islam, which clearly prohibits excessiveness in religion (Surah Al-Maidah: 77). Among the basic principles of Islam are that it honours humankind (Surah al-Isra: 70) and provides mercy for all creatures (Al-Anbiya: 107).

Islamophobia results from a failure to understand intercultural dynamics and a lack of willingness to unlearn and relearn about Islam. The expectation of wanting everyone ‘to be like us’ does not work in a multicultural global society. Respect for other peoples’ beliefs and cultures is essential for sustainable co-existence. A big part of ignorance, in the context of Islamophobia, is not wanting to realise that the majority of Muslims are peace-loving, respectful and ordinary people living their day-to-day lives.

The remedy for ignorance is knowledge. The way to the future is to identify the narratives that are wrong within and outside the Muslim community. The narrative of Islam’s glorious past, according to an observation by Dr.
Amr Abdalla in an Addis Ababa University Institute for Peace and Security Studies Policy Brief (2016), can be a driver towards militancy and violence among youths who are convinced that they are fulfilling their religious duty to restore to Islam its “lost glory”. This is also in line with Professor John L. Esposito’s caution about Muslims’ tendency to ‘treat certain ideals as if they were reality’. An example is the understanding of the ummah as transcending politics and nationalities, an ideal which may not be appropriate in the current setting of nation states. Equally important are the divisions within Muslim communities, which must be addressed through mutual understanding, openness and respect. As Muslims, there is a need to create a new narrative by positioning the Shariah as a moral compass. The role of religious leaders is paramount in uniting Muslims, providing constructive input, and educating them out of ignorance about the diversities and commonalities of other cultures and religions. The next generation of Imams must be trained to face new challenges and to stand at the forefront of peaceful dialogue and intercultural communal efforts. For further reading, Professor John L. Esposito’s numerous books include “The Future of Islam”, “What Everyone Needs to Know About Islam” and “Who Speaks for Islam”. This article is based on a Public Lecture Series talk organised by the Faculty of Leadership and Management on Wednesday, April 25, 2018.
https://www.usim.edu.my/news/in-our-words/islamophobia-clash-ignorance/