Across the school, we offer varied incursions to engage and enhance all our students' learning and development.
The Speak Stars program is run by Super Speak (formerly Chatterbox Inc), Melbourne's award-winning drama and public speaking school. The program assists students with their personal development and leadership skills, especially as they prepare for secondary school.
Speak Stars provides students with opportunities to expand their skills in communication and public speaking through activities such as voice work, storytelling, persuasive speaking and informative speaking. The organisation also helps students become more confident through role play, script work, improvisation and characterisation. Students participate in six one-hour workshops over consecutive weeks. The sessions are held at school and conducted by Vicki Skyring, an experienced teacher of Drama, Speech and English. Vicki presents her lessons in a motivating, stimulating and challenging way to maximise the students' learning in a fun, enjoyable and safe environment.
For further information about Super Speak and Speak Stars, please follow the link. | http://www.karoops.vic.edu.au/?page_id=1846 |
As you may have noticed, we’re in the middle of yet another American presidential election (our 57th). The news is full of musings about party primaries and delegate counts and possible brokered conventions, but if things proceed as usual, as many as 130 million Americans will cast votes in November. A winner will be declared based on popular votes in the states as transmuted into a total of 538 electoral votes (if no candidate receives at least 270 such votes, the US House of Representatives chooses the next president).
Seems orderly and natural after 56 such exercises, doesn’t it? But “one person, one vote, the first candidate past the (plurality or majority) post wins” is a polarizing and not very representative way of doing things.
Many of us vote for our second choices — the “lesser evils” — because our first choices “can’t win.”
Many of us could live with either of two or more candidates, but vote for the one who “can win” rather than the one we may like best.
What if you could vote for ALL the candidates you like, instead of just one, secure in the knowledge that your vote(s) would not be “wasted” on a loser, or “spoil” the chances of one of your preferred candidates, resulting in election of the “greater evil?”
You could, if the United States adopted any of several far more rational voting methods. Of the three that come to mind — Instant Runoff, Single Transferable Vote and Approval Voting — I’m going to describe only the last one both to keep this column short and because it’s my own favorite. Here’s how Approval Voting works:
You vote for as few or as many candidates as you like. All the votes are counted. The candidate with the most votes wins. Yes, it’s really that simple.
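The counting rule really is that simple — simple enough to sketch in a few lines of Python. (The ballots below are invented purely for illustration.)

```python
from collections import Counter

def approval_winner(ballots):
    """Tally approval ballots.

    Each ballot is the set of candidates that voter approves of; every
    approval counts as one vote, and the candidate with the most total
    approvals wins.
    """
    tally = Counter()
    for ballot in ballots:
        tally.update(set(ballot))  # a voter counts at most once per candidate
    winner, _ = tally.most_common(1)[0]
    return winner, dict(tally)

# Hypothetical ballots: two progressives approve both Stein and Clinton,
# one libertarian approves McAfee alone, and so on.
ballots = [
    {"Stein", "Clinton"},
    {"Stein", "Clinton"},
    {"Clinton"},
    {"McAfee"},
    {"Trump"},
    {"Trump", "McAfee"},
]
winner, tally = approval_winner(ballots)
# tally: Clinton 3, Stein 2, Trump 2, McAfee 2 -> Clinton wins
```

Notice that approving several candidates never "spoils" anything: each approval is counted independently, so a second or third choice can only help, never hurt, your first choice.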
Assume that this November (as seems likely), your ballot offers you the choice of Republican Donald Trump, Democrat Hillary Clinton, Libertarian John McAfee or Green Jill Stein.
If you’re a progressive, you prefer Stein to Clinton, but reluctantly pull the lever instead for Clinton because you really, really, really don’t like Trump and Stein “can’t win.”
If you’re a libertarian, McAfee’s the only even remotely acceptable choice. Maybe you’ll just stay home and watch re-runs of “Modern Family” instead of bothering to vote for someone who “can’t win.”
Under approval voting, progressives could vote for Stein AND Clinton, libertarians could vote for McAfee alone … and both candidates would likely receive second or third votes from people who also vote for Trump or Clinton. Every vote — every VOTER! — would count.
I’m not sure what effect Approval Voting would have on this year’s presidential race, but over time I suspect we’d start seeing successful independent and third party candidates for seats in the state legislatures and Congress — and eventually the White House.
Better election outcomes require better voting systems. Visit the Center for Election Science (electology.org) to learn more about Approval Voting and how to help put it into action in your city, county or state.
Thomas L. Knapp (Twitter: @thomaslknapp) is director and senior news analyst at the William Lloyd Garrison Center for Libertarian Advocacy Journalism (thegarrisoncenter.org). He lives and works in north central Florida. | https://thegarrisoncenter.org/archives/5083 |
A REVEALing Study of Consumer Genomics Response
Learning one's Alzheimer's risk does not seem to affect depression or anxiety.
By Kevin Davies
September 15, 2009 | During a panel discussion at last year’s Bio-IT World Expo, the Editor-in-Chief of the New England Journal of Medicine, Jeffrey Drazen, an early skeptic of the predictive power of personal genomics, outlined what steps he needed to see from the genetics community. “I’m from Missouri, and you have to show me,” Drazen said. “You’ve got to do the study that shows that making a difference in [genetic] knowledge will make a difference in how people behave.”
Drazen added, “We’re not there yet… I wish you good luck, and send me your papers when you show that it works!” Sitting in the audience, Boston University neurologist Robert Green gladly seized the opening and informed Drazen he was preparing to submit just such a manuscript.
That paper, presenting the findings of the REVEAL (Risk Evaluation and Education for Alzheimer’s Disease) study, was published in the Journal in July. It represents a milestone in judging the public’s attitude to—and ability to cope with—the sometimes adverse results of personal genetic testing.
Green’s group set out to examine attitudes of people with a family history of Alzheimer’s disease to learning their all-important APOE genotype. The apolipoprotein E (APOE) gene on chromosome 19 is a well-known predictor of Alzheimer’s risk. Individuals who inherit one copy of the e4 allele have a 2-3 fold relative risk of the disease, whereas e4 homozygotes have around a 15-fold greater risk.
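As a back-of-the-envelope illustration of what those relative-risk figures mean, a relative risk multiplies whatever baseline risk a person starts with. The 5% baseline used below is an assumed placeholder for illustration only — it is not a figure from the REVEAL study or this article.

```python
def absolute_risk(baseline, relative_risk):
    """Rough conversion of a relative risk into an absolute risk,
    given some baseline risk; capped at 100%."""
    return min(baseline * relative_risk, 1.0)

# Assumed illustrative baseline lifetime risk -- NOT from the study.
baseline = 0.05

one_copy = absolute_risk(baseline, 2.5)   # midpoint of the 2-3 fold range
homozygote = absolute_risk(baseline, 15)  # two copies of the e4 allele
```

With a 5% assumed baseline, a 2.5-fold relative risk works out to about 12.5% absolute risk, while a 15-fold relative risk works out to about 75% — which is why e4 homozygosity is regarded as such a strong predictor.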
The study was performed between 2000 and 2003. Green and his colleagues enrolled 162 adults who had one or both parents diagnosed with AD. All received counseling before the trial began. Of these, 111 were told their APOE genotype, while 51 served as controls in the nondisclosure group. Of the 111 individuals tested, 53 were heterozygous or homozygous for the e4 allele.
Green and colleagues found few if any differences between the two groups with regard to the individuals’ levels of anxiety, depression or distress, even up to one year after the study. “Subjects who learned they were e4 positive… showed no more anxiety, depression or test-related distress than those who did not learn their genotype,” the authors write. (There was a slight short-term but transient increase in anxiety in the e4 group.) The individuals that showed the most dramatic, or clinically meaningful, changes in psychological profile were spread evenly between the control group and the disclosure group (regardless of e4 carrier status).
Despite the lengthy genesis of the study, there are inevitably shortcomings. As Green et al. note, “If APOE genotyping had been provided without genetic counseling or to subjects who had no family history of Alzheimer’s disease, the results might have been different. In addition, the exclusion of subjects with low neurocognitive scores and high depression scores may have influenced the results.”
While advocating more expansive follow-up studies, Green's team draws satisfaction from the REVEAL findings that disclosing genotyping information to individuals who test negative is beneficial, and causes only transient, modest distress to those who test positive. "These data support the psychological safety of disclosing data regarding genetic-counseling protocols" to Alzheimer's family members, "despite the frightening nature of the disease and the fact that the disclosure has no clear medical benefit."
| |
DU Alumnus Shares Extensive Asian Art Collection — and Love of History — in First Public Exhibit
What started as one book, two artifacts and an interest in history has evolved into a millennia-spanning art collection of thousands of rare objects and multiple libraries comprising more than 11,000 volumes on art, all thoughtfully compiled by a University of Denver alumnus.
Today, the Nantucket, Massachusetts, home of David Billings (BSBA Finance, ‘70) and his wife, Beverly Hall Billings, is brimming with their fascinating, 4,000-piece collection of Asian art, compiled over 50 years from sources around the world.
A meticulously curated selection from their collection is currently on display at the Whaling Museum on Nantucket. The exhibit, Asian Treasures from The Billings Collection, features around 300 pieces, many of which are on public display for the first time. The exhibit opened over Memorial Day weekend and runs through Nov. 1.
The exhibit — and the collection as a whole — represent the culmination of half a century of research, travel and dedication to collecting. "My collection covers the whole gamut," Billings said.
The collection is primarily made up of Chinese art covering a span of about 8,000 years, from the prehistoric Neolithic period to the fall of the Qing Dynasty in the early 20th century. Items include scrolls, sculptures, snuff bottles, ceramics, textiles, paintings and more, made from precious materials such as jade, porcelain, glass, and bronze. The collection also includes a rare example of the earliest paper money ever in circulation, dated around 1368 during the Ming Dynasty. Beyond China, the collection also features artifacts from other Asian regions, including what are now Tibet, India, Japan, Korea, and more.
Billings, an East Coast native who says he loved his time at DU, first became interested in Asian art while he was living in Denver in the 1970s, after graduating from DU. His then-mother-in-law — an avid collector of Japanese art — gave him a book on Chinese art as well as two pieces from China and Japan. Through this connection, he became rooted in the Denver art scene and spent ample time in the Asian art galleries at the Denver Art Museum.
Billings already had a deep interest in and passion for history; he recalls a Russian history class in which an instructor discussed why an ancient cup had certain markings and dents. The sheer antiquity of the cup, and the context surrounding what happened to it over the ensuing centuries, captivated Billings. From there, his curiosity and passion only intensified.
“I was fascinated by something that old,” Billings said. “[The U.S.] is only about 300 years old. We’re talking about something that goes back 8,000 years. It’s a totally different feeling.”
After moving back to the East Coast, Billings began to pick up rare books and objects during regular visits to New York City and beyond. As a self-proclaimed “semi-compulsive reader,” Billings’ literary pursuits would continually lead him to explore new aspects of Chinese art, which in turn led him to search for artifacts that reflected his broadening interests. He didn’t intend to assemble a collection, but before he knew it, that’s what started happening.
“There wasn’t a plan to cover them all,” Billings said. “I didn’t have a checklist.”
As his collection blossomed, Billings began loaning objects to institutions around the world before bringing them back to his home on Nantucket. Eventually, several prominent art dealers from New York City came to see the collection for themselves, noting that it was truly one of a kind. Rather than having a narrow focus, as most collections do, his had a far-reaching scope.
“I was surprised when I was told there wasn’t another collection like it,” Billings said. “It was sort of an ‘aha’ moment. You just don’t see a collection [like this] in a private setting.”
Now, the Billingses are sharing that private collection —and their singular expertise in Asian art history — with their fellow art and history buffs through the public exhibit at the Whaling Museum, presented by the Nantucket Historical Association.
The Billingses worked with an expert team that included a curator who helped select the objects for display, along with a designer and mount maker who helped create an exhibit that matches the splendor of the artifacts.
In total, from when they were first approached about creating the exhibit to its opening, the process took about two years, with several delays due to the COVID-19 pandemic. During lockdown, the Billingses wrote a detailed book, Passion and Pursuit: The Billings Collection, about their collection with descriptions and essays about the art and its history. Billings also spent time during lockdown restoring a full jade bodysuit from the Han Dynasty and a Peking opera diorama, all by hand. These two significant artifacts are on display as part of the exhibit.
While the exhibit showcases a portion of the Billingses’ artifacts, the collection is not finished growing. In fact, as a “living collection,” it may never stop growing — the Billingses continue to expand their collection with as much enthusiasm as ever.
“It’s ongoing. It’s not a static thing,” Billings said.
Asian Treasures from The Billings Collection is on display at the Whaling Museum in Nantucket, Massachusetts, through Nov. 1. Join DU and the Billingses for a special event at the Whaling Museum on July 30. Click here to learn more. | https://liberalarts.du.edu/news-events/all-articles/du-alumnus-shares-extensive-asian-art-collection-and-love-history-first-public-exhibit |
What Is The Role Of The Judge In A Personal Injury Lawsuit?
Friday, August 4th, 2017
For someone who has just been injured and may be considering a personal injury lawsuit, the entire process can seem daunting and overwhelming. Most people know to immediately hire a lawyer, but then what happens after the attorney is hired?... | https://www.deanboyd.com/category/lubbock-personal-injury-attorney/page/3/ |
Unit 6 ‒ Writing: Reviews
Ex. 53
Gourmet Magazine
«Dinner by Heston Blumenthal, Melbourne»
Rule number one at Dinner by Heston Blumenthal: don't forgo a predinner cocktail. Even if, like me, you discover the only booking available is at 10pm on a Friday (which possibly means you're having supper rather than dinner at Dinner), there's something so perfectly "golden age of hotel restaurants" about the whole set-up that to refuse a cocktail in such a setting could be read as an act of pitiable self-sabotage. I visited last week, and it was wonderful.
The bar at the Melbourne outpost of Heston Blumenthal's two Michelin-starred London restaurant is the only bar in his repertoire so far. It's intimate, glamorous and adult, with mirrored surfaces, a low-slung banquette and immensely comfortable upholstered armchair bar stools that offer a cinematic vista over the main dining room with its curved leather booths, moss-green armchairs and dramatic backdrop of city lights. It's so close-up ready that you'd only be mildly surprised to see Eva Marie Saint and Cary Grant clinking glasses down one end.
What has changed most dramatically from the pop-up days is the food, even if Dinner's most Instagrammed dish, Meat Fruit, is a close relative of the Alice in Wonderland-like trickery that defined the Duck's dégustation. Another dish is related to The Fat Duck's famed snail porridge, sharing the same vibrant green that comes from garlic- and parsley-loaded butter. Chicken Cooked with Lettuces is truly lovely, brilliant of flavour and texture. The chicken (from Mount Barker in Western Australia) is brined and rolled, cooked sous-vide, then roasted, and arrives sitting on a salty, spiced celeriac sauce and an almost mayonnaise-like onion emulsion made from charred spring onion and pickled-onion juice. It's accompanied by a cos lettuce heart, blanched and dressed with a white truffle and shallot emulsion, and topped with shards of crisp chicken skin, oyster leaves and...
| https://www.cyberessays.com/Term-Paper-on-Writing/123572/ |
Sing Yin have brought together two of our favourite Cantonese dishes in a unique menu celebrating contrasting flavours. Sourced fresh from Hong Kong’s markets, Chef Raymond has concocted a selection of delicious recipes featuring the much loved lobster and chicken. Diners can select one chicken dish and a whole lobster for a delicious combination of flavours from both the land and sea. We loved the sweet, perfectly cooked Sauteed Green Lobster with XO Sauce and the indulgent Steamed Green Lobster in Egg White. Our favourite chicken dishes were Chef Raymond’s signature Lychee Wood Fried Crispy Chicken, and the Marinated Chicken with Shallot. We were thoroughly impressed by the quality of the produce used in each dish and loved the simple flavours which combined to make a delicious meal. Head over to Sing Yin with a few friends so you can order one of everything for a fun chicken and lobster feast. The special menu is available from now until April 30. | http://hungryhongkong.net/2015/03/yin-yang-reinvented-with-lobster-and.html |
*Spend some time on your own experience of love(s): family, friends, intimate partners.
*How do people use the word "knowledge"?
*What does observation "disclose" about reality?
*In what sense do we "see" abstract objects like triangles?
*Socrates' argument from the nature of desire.
*If love is desire and desire is pursuit of what you don't have, then love can't be the fullness and possession of beauty.
*Consider different ways of accounting for Love, especially Christian and naturalist.
:*Focus on some things you are sure of and then begin to explore the basis of your certainty (in light of readings also).
:*Line up some of the theories or points of view that you've read about and consider their strengths and weaknesses (you have to do this for study questions anyway). Can you think of ways to combine parts of theories to avoid problems? Are the S&W really significant?
:*Acknowledge limits and areas where you don't have answers, but try to say what the limit is about and why it's there.
:*Dualism defined by belief in nonphysical.
:*Substance dualism: mind is non-physical substance.
::*Cartesian Dualism: Descartes: matter has length, breadth, height, and spatial position. But the real you is a nonphysical, thinking thing.
::*Main issue: interactionism. Descartes postulated "animal spirits" as the mechanism of mind-body interaction.
:*Popular Dualism: person as "ghost in machine"
::*Might be more plausible since it could be thoroughly interactionist. maybe just some other form of energy. Also, might allow for survival of death.
:*Property Dualism: All is material, but brain realizes special properties.
::*Properties like "having a sensation of red," "thinking that P," "desiring Q," etc. Properties are "emergent" — like "solid, colored, alive" — yet irreducible in some sense.
::*Epiphenomenalism: the idea that consciousness takes place "above the phenomena" of brain activity — an effect of the brain, but not a cause.
:::*Appeals because it seems close to science of brain and yet accounts for 1st person experience.
:::*It seems odd to say that your actions aren't determined by your desires, but epiphenomenalists say just that.
::Typically, the dualist has to claim that mental properties are irreducible (or it isn't dualism, right?), but it seems odd to say properties are emergent yet not reducible. So some say mental properties are as real as physical ones: "elemental-property dualism." But the analogy to electromagnetic force is not promising.
::# Suffering from conditions - refers to suffering from the effects of karma, one's own and others'.
:2 There is the origination of suffering: suffering comes into existence in dependence on causes.
:: First of 12 links: Ignorance: Ignorance of impermanence, of suffering, of nonself.
:: Note the chain of causal connection advanced on p. 22 of Siderits: ignorance ultimately causes suffering, but the intermediate steps are important.
:3 There is the cessation of suffering: all future suffering can be prevented by becoming aware of our ignorance and undoing the effects of it.
:4 There is a path to the cessation of suffering.
:Nirvana is literally "extinction of self" even "annihilation" - What could this mean if there's no self?
:1 Liberation is inherently desirable.
:2 Selfish desires prevent us from attaining liberation.
:3 In order to attain liberation one must train oneself to live without selfish desires.
:4 One does not engage in deliberate action unless one desires the foreseen result of the action.
:In your group work today, start to probe and question the basic teachings of Buddhism. Consider some of the following questions, but also identify some of your own questions and try to figure out how a Buddhist might reply.
::*Try to distinguish "good," "bad," and maybe "indifferent" forms of suffering. Is there some deep and pervasive condition of suffering to existence?
:If thoughts are not physical how can they interact with the physical.
:Physicalism: Everything is physical. Initially counterintuitive to say that thoughts are physical.
:Basic argument: Problem of interactionism. Nothing incoherent about non-physical things affecting physical things, just haven't found any in the last few hundred years.
:Is physicalism an assumption or discovery? Historical point. 3:41 to 4:40 Most candidates for non-physical forces have been eliminated. Principles of conservation of energy lends credibility to physicalism.
::*qualia ("something that it's like" to experience X) are a problem for physicalist.
::*Mary thought experiment: Has Mary learned something new? a phenomenal fact? must be non-physical.
:::*response from physicalist: could acknowledge new experience (new brain process), and allow that she has a new way of experiencing red. ability to recreate experience in imagination.
::*Important that Newtonian physics has room for non-impact forces. (not just particles impacting as in Descartes/Leibniz.) 1700-1990 most scientists were dualists.
*There is no more to the person than the five skandhas (the exhaustiveness claim).
::An entity cannot operate on itself (the anti-reflexivity principle).
::Could just be shifting coalition.
:::Support for this view: Questions of King Milinda - nominalism -- words as "convenient designators"
:*Arguments against the "ineffability of nirvana"
:: Ineffability would imply that no truth can be uttered as ultimate. That's not the case in Buddhism.
:*Arguments against the "punctualist" or "annihilationist" view. Is nirvana living in the present?
:: Punctualism can't be ultimately true because it involves reference to "sets" of skandhas, and a set is a "whole," which for the Buddhist is only a conventional reality. Also, it is still conventionally true that there is a self.
:: Pain suffering, and joy are still "at stake" in one's experience.
:*Nirvana as an achieved and integrated awareness of the relative importance of each standpoint for truth. "unlearning the myth of self, while keeping good practices" -- grounding obligations to self / non-self.
:First - we should obey moral rules because they reflect karmic laws. And we should do that to win release from rebirth. Limit of this is that you have to believe in karma and the motivation is limited to self-interest.
:Second - Doctrine of the three klesas - greed, hatred and delusion. negative feedback loop, therefore need for right speech, right conduct, right livelihood. (Note that for Buddhists, you don't practice virtue because it's the right thing to do, but because it allows you to promote well-being.) Motivation at this level is to attain the liberating insight into the true nature of the self.
:Third, we should be moral because all suffering is ultimately equal.
:*Imagine Brown and Robinson switch brains.
:*Imagine Brown's brain is split and placed in two hosts.
:*Where is Dennett after the first operation?
::*Three possibilities: Hamlet, Yorick, and POV.
:*After the accident: What's the point of Hubert?
|Pantheism - theos/matter, theos in all life, in reason, rationality in nature. Older stoicism believed in cyclical conflagration.
|Virtue (care of the hegemonikon) is the end of life and should satisfy the demand for happiness.
::Notice the different "burdens" of each of these starting points.
:We'll come back to the various positions on this topic, but take notes on them as part of your own background preparation.
::Seems to be supported by general knowledge of the physical world, but leads to puzzles with our intuitions, especially about responsibility. Where is the self in determinism? Can my mental states exert causal power on myself?
::Particle physics seems to suggest that there is indeterminism in physical events at a very small scale (if you haven't heard of [[http://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat Schrödinger's Cat]], now's your chance). But would random change really be enough to account for free will? Free will isn't random, after all.
::This one starts out counter-intuitive to most people. How can determinism and free will be compatible?
:::Versions include "traditional" -- Action caused by agent and not forced and "Deep self" -- Action caused by agent's authentic desire.
::Human agents have special causal powers (agent causation) that determine their free actions.
|Democritean atomism; only evidence for material objects, but recognition of idea of gods.
|If there are gods, they aren't concerned about us. No worry of retribution.
|Pleasure is the good. Virtue is instrumental in helping us understand how to pursue pleasure and is a condition for the successful attainment of pleasure in life.
*Gods -- Should we be afraid of the gods?
*Death -- Should we fear death?
*Pleasure -- the "alpha and omega" of a happy life.
:*Distinction between kinetic and katastematic pleasures. The limits of pleasures.
:*The relationship between virtue and pleasure in Epicurus.
:*How much kinetic pleasure would a good Epicurean pursue? Virtue and the "measure of pleasure" -- Friendship and sociability.
::*Traditional -- Two criteria for free act -- 1) caused by agent and 2) not forced or coerced.
::*Issues: Hard to understand "forced" or "coerced".
::*General Point: Compatibilism takes a common-sense approach to distinguishing "free" and "not free" actions — sort of like the way a juridical process treats culpability. Mitigating factors for a crime include acting under force or coercion.
::*Deep Self Compatibilism -- adds third criterion: 3) and that we identify with the desire underlying the action.
::*Starting point in criticism of Traditional -- How do you know that your desires are really your own, really express your "true self"? Problem of false desire. Problem of desire under conditions of addiction. desire/identity distinction central to this position.
::*Comment on Nietzsche excerpt, p. 103: notion of authentic life as a creative project.
:*The Hot Dog Problem -- tracing the causal history for the decision to buy the hot dog seems to lead to the conclusion that at the moment of decision the guy "could not have acted otherwise" than he did. His desire still determined his action.
::*freedom from desire valued (recall Symposium!) Poss. thought experiment here.
::*Virtue (pursuit of excellence, development of capacities, moral virtue) is a necessary condition for the pursuit of pleasure. See the passage in the Letter to M. All pleasure is good, but not always worthy of choice. Virtue helps with that.
::*Does Epicureanism pose a false or low goal for human beings?
::*Is it paradoxical or wise (or both) to say that the goal of life is pleasure and yet to advocate such a ''moderate'' concept of pleasure? (see PD 18, for example).
::*How should we understand his advocacy of the pursuit of friendship? Doesn't friendship bring worry and anxiety?
| https://wiki.gonzaga.edu/alfino/index.php?title=Spring_2011_Philosophy_of_Human_Nature_Lecture_Notes_2&diff=11210&oldid=10753 |
DRK-12 Research and Products
- Publication | Developed by the Education Development Center and Bank Street College of Education, this professional development program will show general and special education teachers how to collaborate to provide a high-quality, standards-based mathematics education to all students,...
-
STEM Smart Brief: Improving STEM Curriculum and Instruction: Engaging Students and Raising StandardsPublication | “For effective K–12 STEM instruction to become the norm, schools and districts must be transformed.” Read this brief to learn more about curriculum and instructional methods that engage students in the learning process.
- Publication | The concept of criticism as a tool for research, although well established in other educational research traditions, is not well established in the domain of Educational Technology. This book changes all that by substantiating criticism as a way to step back and critically...
-
Compendium of Research Instruments for STEM Education, PART I: Teacher Practices, PCK, and Content KnowledgePublication | The purpose of this compendium is to provide an overview on the current status of STEM instrumentation commonly used in the U.S and to provide resources for research and evaluation professionals. Part 1 of a two-part series, the goal to provide insight into the measurement...
- Tool | Every Learning Experience in Foundation Science begins with a brainstorming activity. This six-minute video explains how brainstorming can be used to determine your students' prior knowledge, introduce new content, and establish a safe classroom culture for sharing ideas.
- Publication
- Publication | This article describes The Concord Consortium's High-Adventure Science Project, which brings frontier science into the classroom, allowing students to explore questions in Earth and space science that scientists are currently investigating.
- Tool
- Publication | “When students from non-mainstream backgrounds receive equitable learning opportunities, they are capable of attaining science outcomes comparable to their mainstream peers. The same is true for mathematics and, presumably, for other STEM subjects, as well.” Read this brief...
- Publication | Despite evidence that it can help students learn higher-order thinking skills and gain deep content knowledge, problem-based learning (PBL) is not deployed on a large scale in K-12 classrooms. This conceptual chapter explores teacher’s past experiences, and resulting...
-
Partnering with Users to Develop STEM Education Materials: Insights from Discovery Research K-12 ProjectsPublication | This brief suggests practical ways of engaging teachers and other “end-users” in projects that develop materials for education in the areas of science, technology, engineering, and math (STEM). Projects described in this brief have benefited from school, district, and state...
- Publication
-
Can Dynamic Visualizations Improve Middle School Students’ Understanding of Energy in Photosynthesis?Publication
- Publication | Like citizen journalists, your students can get to the heart of science literacy—and challenging questions like these—with the “learn by doing” methodology in this innovative book. Front-Page Science uses science journalism techniques to help students become better...
-
Knowledge for Algebra Teaching for Equity (KATE) Project: An Examination of Virtual Classroom Simulation ApproachesPublication | In this paper, we present an overview of the National Science Foundation (NSF) funded Knowledge for Algebra Teaching for Equity (KATE) Project and experiences from preservice teachers who are preparing for teaching middle grades mathematics. We highlight findings from a...
- Publication | “Providing a richness of resources unavailable in any classroom, informal science institutions across the country have developed exemplary partnerships with public schools—and have room for more.” Read this brief to explore how out-of-school learning can complement and...
- Publication | This overview is intended to describe the scope and depth of research and development DR K-12 has funded and to identify areas that could be advanced by further investigations by CADRE. The overview summarizes the 248 projects that met the criteria for inclusion and...
- Tool | A brief overview of teacher tools that promote student discourse in the classroom. A summary of why spend your time on talk in the science classroom and its functions.
- Publication | The nature of energy is not typically an explicit topic of physics instruction. Nonetheless, verbal and graphical representations of energy articulate models in which energy is conceptualized as a quasimaterial substance, a stimulus, or a vertical location. We argue that a...
- Publication | Introductory biology courses form a cornerstone of undergraduate instruction. However, the predominantly used lecture approach fails to produce higher-order biology learning. Research shows that active learning strategies can increase student learning, yet few biology...
- Publication | An essay detailing the purposes, functions and benefits of academically productive talk including the Talk Goals and Moves chart.
- Publication | This study explores what students understand about enzyme–substrate interactions, using multiple representations of the phenomenon. In this paper we describe our use of the 3 Phase-Single Interview Technique with multiple representations to generate cognitive dissonance...
- Publication | In this study, we examine how a professional science news editor and high school teachers respond to student writing in order to understand the values and priorities each bring to bear on student work. These questions guided our work: • How do teachers respond to... | http://cadrek12.org/resources/drk12-research-products?sort_bef_combine=field_resourceform_year_value%20ASC&page=9 |
- Publisher: [S.I.] : Sourcebooks, Inc., 2011.
Content descriptions
Summary:
A fast-paced adventure series featuring The Jungle Girl herself! The stories are inspired and co-created by Bindi Irwin, daughter of the iconic wildlife expert, Steve Irwin. The series features the characters of Bindi, her brother Robert, mother Terri, and the Australia Zoo. Camouflage: Book 4. At the new reptile park in Singapore Bindi and Robert have to blend into their surroundings to help stop some very suspicious activity in the dead of night.
FLOWER DELIVERY SERVICES IN ROMSEY
Romsey is a historic market town in the county of Hampshire, England. Romsey was home to the 17th-century philosopher and economist William Petty and the 19th-century British prime minister, Lord Palmerston, whose statue has stood in the town centre since 1857. The town is located 7 miles northwest of Southampton. The Flower Shop delivers flower bouquets in this area, and its florists always make your flowers beautiful.
Equest Center for Therapeutic Riding, Inc. was established as a State of Michigan non-profit corporation in June, 1990 and received its IRS 501(c)3 designation two months later. The first therapy class began in February, 1991 with 16 riders on 7 acres of land in Rockford, MI. Since that time the facility has grown in many ways – but the goal remains the same: Improving the lives of individuals with special needs through the power of the horse.
Equest is a Member Center of the Professional Association of Therapeutic Horsemanship International (“PATH International”) and is one of only 3 therapeutic riding programs in Michigan that owns its own farm. Our facility is the ONLY year-round facility in the area, which is critical for many individuals with disabilities: a consistent riding therapy schedule throughout the year helps to prevent regression and fosters greater physical, mental, emotional and social improvement.
Today 185+ riders, ranging in age from 2 to 100, enjoy the benefits of Equest each week, with a waiting list of disabled children hoping to participate in our program. The additional land, improved facilities and programming are allowing us to give individuals previously on the waiting list an opportunity to enjoy the benefits equine therapy can offer! Vital to the program’s success has been the contribution of an army of volunteers who donated over 54,200 hours of service last year alone!
Submitter: Purdue University
Location: Rockford, MI
URL: http://equestcenter.org/
Status: Active
Researchers should cite this work as follows:
What Role for Currency Boards?
A currency board is an arrangement under which a country fixes its exchange rate and maintains 100 percent backing of its money supply with foreign exchange. Such boards were common in British colonial territories in the 19th century, but fell into disuse when colonial regimes were dismantled. In 1983, however, Hong Kong, the largest of Britain's few remaining colonies, introduced a currency board. In recent years fixed exchange-rate regimes backed by currency-board-type arrangements were adopted in Argentina (1991), Estonia (1992), and Lithuania (early 1994). Since then, interest has revived in currency boards as a means of stabilizing currencies and bringing order to economic conditions generally, though not without a price. John Williamson argues in this excellent monograph that although much of the recent literature has urged their adoption in many countries - most notably Mexico, Russia and Ukraine - it would be a mistake for those countries to accept such advice.
This study starts by defining what a currency board is and explaining how it differs from a central bank. This is followed by a brief sketch of the historical record of currency boards, from colonial situations to the most recent experiences of the Baltic countries. The heart of the study, however, is the analysis of the advantages and disadvantages of currency boards. Williamson agrees that this type of arrangement has certain virtues. Currency boards assure convertibility, instill macroeconomic discipline that limits budget deficits and inflation, provide a mechanism that guarantees adjustment of balance of payments deficits, and thus create confidence in a country's monetary system. However, a currency board also carries a series of important disadvantages. For example, it may be difficult for a country to gather enough foreign reserves to back the monetary base 100 percent at the outset. Also, there is a danger of the fixed exchange rate quickly becoming overvalued if a currency board is introduced in an attempt to stop high inflation. Moreover, a fixed exchange rate can make adjustment more costly and painful by preventing the use of the exchange rate to facilitate the process. A currency board precludes the active use of monetary policy to stabilize the domestic economy and is unable to act as a lender of last resort when domestic financial institutions face an illiquidity crisis. Furthermore, the ability of a currency board to discipline fiscal policy is critically dependent upon the political willingness of the government to be disciplined.
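The 100 percent backing rule and the automatic balance-of-payments adjustment described above can be illustrated with a stylized sketch: reserves and the monetary base move together, one-for-one, at the fixed rate. All figures and the peg itself are hypothetical; this is an illustration of the constraint, not a model of any actual arrangement.

```python
# Stylized currency board balance sheet. The monetary base is fully
# backed by foreign reserves and can change only through balance-of-
# payments flows at the fixed exchange rate. All figures hypothetical.

FIXED_RATE = 8.0  # domestic units per unit of foreign currency (hypothetical)

class CurrencyBoard:
    def __init__(self, reserves_fx):
        self.reserves_fx = reserves_fx              # foreign-currency reserves
        self.base_money = reserves_fx * FIXED_RATE  # 100% backing at the peg

    def bop_inflow(self, amount_fx):
        """A balance-of-payments surplus adds reserves and expands base money."""
        self.reserves_fx += amount_fx
        self.base_money += amount_fx * FIXED_RATE

    def bop_outflow(self, amount_fx):
        """A deficit drains reserves and contracts base money one-for-one:
        the automatic adjustment (and the loss of monetary discretion)."""
        if amount_fx > self.reserves_fx:
            raise ValueError("cannot run reserves below zero")
        self.reserves_fx -= amount_fx
        self.base_money -= amount_fx * FIXED_RATE

    def backing_ratio(self):
        return (self.reserves_fx * FIXED_RATE) / self.base_money

board = CurrencyBoard(reserves_fx=1000.0)
board.bop_outflow(200.0)      # payments deficit: base money must shrink
print(board.base_money)       # 6400.0
print(board.backing_ratio())  # 1.0 -- backing is preserved by construction
```

The point of the sketch is the constraint: base money cannot be expanded at the authorities' discretion, only by reserve inflows, which is exactly why a currency board can neither conduct stabilizing monetary policy nor act as a lender of last resort.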
Since there are both important advantages and disadvantages in adopting a currency board in place of a central bank, it is to be expected that each arrangement will be preferable under some circumstances. Williamson identifies three situations where a currency board will be superior: a) where the collapse of the local monetary authority has been so complete that only renunciation of monetary sovereignty will serve to restore it; b) where the economy is small and very open to world trade and finance, as in most cases of currency boards, so that the cost of not being able to use the exchange rate as an instrument of adjustment is unimportant; and c) where a country is determined to use a fixed exchange rate as a nominal anchor in stabilizing inflation whatever the cost.
Applying the analysis to some of the countries that have recently been urged to adopt currency boards, Williamson argues that there is little ground for urging such a step on Mexico: it is not small, the monetary crisis was limited in the sense that the demand to hold pesos did not collapse, and it would be ill-advised to resort again to using the exchange rate as a nominal anchor. The case is slightly stronger in regard to Russia and Ukraine, where a currency board might have the benefit of inducing a quicker substitution of domestic for foreign money, but even in these cases a decision to give up the exchange rate instrument would be too much of a gamble in the long run, when it seems that the worst of the monetary crisis is over. There is little doubt that currency board arrangements are making a comeback. However, governments that operate them must accept restrictions on the way they conduct fiscal and, especially, monetary policy. These constraints mean, as Williamson points out, that currency boards are not always the answer for every developing country or transition economy.
CLIMATE CHANGE threatens to trigger widespread floods and storms along coastal areas by accelerating the rates at which the seas and oceans are rising, space agency NASA has warned.
Climate change has significantly contributed to rising sea levels over the last 25 years, but the worst is yet to come. NASA has warned that more heat is being trapped in the oceans as a result of climate change. In turn, the world’s ice caps and polar regions are melting at rates that are expected “to continue accelerating in years to come”.
Arkansas wins the show over East Central, 77-74
FAYETTEVILLE – Arkansas, down 14 with 11 minutes remaining, needed a 20-4 run and free throws from Davonte Davis, Chris Lykes and JD Notae in the final minute to claim a 77-74 victory over East Central University Sunday afternoon at Bud Walton Arena.
East Central, coached by former Arkansas graduate assistant Max Pendery, took the lead midway through the first half and stretched its two-point halftime edge to 14 (60-46) with 11:02 to play. From that point, Arkansas began its 20-4 run. It started with a 6-0 push before a Tigers’ basket. The Razorbacks then scored 10 unanswered points before the Tigers scored again, and four straight points capped the comeback and gave Arkansas a 66-64 lead with 5:09 to go.
Notae scored six during the run, Lykes scored five, Jaylin Williams and Davis each scored four and Au’Diese Toney scored one.
Arkansas led by six (71-65) at 3:21 only to see East Central regain the lead, 72-71, with a 3-pointer from Josh Apple with 57 seconds remaining.
Davis put Arkansas back in front with a pair of free throws seven seconds later. With 22 seconds left, Lykes sank two from the charity stripe. Brennan Burns pulled the Tigers within one, 75-74, with two free throws with six seconds remaining. However, Notae iced the game with two free throws with five ticks remaining for the 77-74 win.
Davis led Arkansas with 20 points while Notae added 17. Toney recorded a double-double with 10 points and a game-high 15 rebounds (nine offensive). He also drew a charge with 35 seconds left, which helped secure the victory. Williams, whose energy and three blocked shots contributed to the victory, narrowly missed a double-double with 10 points and nine rebounds. Lykes finished with 13 points.
Apple led all scorers with 23 points while Jalen Crutchfield scored 15 for East Central.
Next, Arkansas will host North Texas on Saturday, October 30. Tip-off is 4 p.m. at Bud Walton Arena. The game is sold out as it is part of the 2021-22 men’s basketball season pass package.
FIRST HALF: Arkansas 35 – Central East 37
- JD Notae scored the first points of the game and 7 of the team’s first 15.
- East Central took its first lead (21-20) at 5:28 on a 3-pointer from Jalen Crutchfield. The Tigers increased their advantage to five (26-21) with 4:05 left in the period.
- Arkansas regained the lead (27-26) on a quick layup from Davonte Davis at 3:07.
- The lead changed hands before East Central’s AJ Ferguson Jr. made a free throw and a layup for the Tigers’ 3-point lead with 42 ticks remaining.
- Lykes responded with a layup to pull the Hogs within one (36-35) with 29 seconds on the clock. Brennan Burns converted a free throw with 19 seconds left to give the Tigers a 37-35 halftime lead.
- Both teams were 14 of 33 from the field (42.4%), but East Central made six 3-pointers (6 of 17) to the Razorbacks’ one (1 of 9). East Central had nine assists to Arkansas’ four. Arkansas was also just 6 of 13 from the free throw line.
SECOND HALF: Arkansas 42 – Central East 37
- East Central used a 7-0 run in a 14-2 push to lead by 14 (60-46) with 11:02 to go.
- Arkansas answered with a 10-0 run (part of a 16-2 stretch) to tie the game (62-62) at 6:19.
- Arkansas used a steal and a layup from Notae to take their first lead since the first half. The layup brought the score to 66-64 with 5:07 to go.
- East Central regained the lead, 72-71, on a 3-pointer by Josh Apple with 57 seconds remaining.
- Devo Davis responded with two free throws with 50 ticks remaining to put the Hogs up 73-72, and Au’Diese Toney drew a key charge with 35 seconds remaining to force an ECU turnover.
- Lykes sank two free throws with 22 seconds remaining to give Arkansas a 3-point lead, 75-72.
- After two ECU free throws, Notae iced the game with two free throws with five seconds remaining.
REMARKS:
- The starters were Davonte Davis, JD Notae, Au’Diese Toney, Stanley Umude, Jaylin Williams.
- East Central won the opening tip.
- JD Notae scored the first points of the game, a layup at 19:22.
- Chris Lykes, Jaxson Robinson and Connor Vanover were the first substitutes off the bench.
- Arkansas have now won 34 consecutive exhibition games since Nov. 14, 2003.
- Arkansas is now 72-9 all-time in exhibition games.
For more information on Arkansas men’s basketball, follow @RazorbackMBB on Twitter. | https://adventurebase100.org/arkansas-wins-the-show-over-east-central-77-74/ |
ACS Biomaterials Science & Engineering is the society’s publication focusing on biomaterials used in a variety of studies, including areas of regenerative medicine and tissue engineering. The journal also publishes research on technology advancements, such as 3D printing and microfluidic-related biomaterials work. Led by Editor-in-Chief David L. Kaplan, ACS Biomaterials Science & Engineering is currently inviting interested and eligible early-career researchers to apply for membership to its inaugural Early Career Editorial Advisory Board.
Accepted members will have the opportunity to gain insight into the editorial decision-making process on matters regarding journal content and direction. Over the course of a two-year term, members will benefit from key professional and mentoring relationships with our established Editorial Advisory Board members and Associate Editors.
Eligibility requirements:
- Must be employed by an academic or research institution
- Must have been awarded a doctoral degree in any area of biomaterials science and engineering within the past 5 years (excluding career breaks)
Application requirements:
Please submit the following documents:
- Curriculum Vitae including a list of published works and peer review experience to date (3-page maximum)
- Mission statement (~300 words) describing your research goals and why you are interested in becoming a member of the Early Career Board
- Reprint of most significant paper published to date
- One letter of recommendation provided by a professor or advisor
Please submit your applications to Editor-in-Chief David Kaplan at [email protected].
Applications will be accepted through April 16, 2018. We look forward to receiving your applications! | https://axial.acs.org/2018/02/22/biomaterical-ecab/ |
With a surveillance reform bill stuck in the Senate, the federal court overseeing spy agencies on Friday reauthorized the National Security Agency’s controversial bulk collection of Americans' phone records.
Reauthorization from the Foreign Intelligence Surveillance Court (FISC) allows the NSA to continue to warrantlessly collect “metadata” in bulk about people’s phone calls. The records contain information about which numbers people called, when and how long they talked, but not the actual content of their conversations.
“Given that legislation has not yet been enacted, and given the importance of maintaining the capabilities of the Section 215 telephony metadata program, the government has sought a 90-day reauthorization of the existing program,” the Justice Department and Office of the Director of National Intelligence said in a joint statement, referring to the section of the Patriot Act that authorizes the program.
The House passed a bill to end the bulk collection program earlier this year, and instead allow the federal government to search for specific records in phone companies' databases with a court order. Privacy advocates balked, however, warning that the legislation was too broad and would have allowed the NSA to conduct searches for every number in a certain area code, for instance, or every Verizon subscriber.
Since then, lawmakers in the Senate have spent months hashing out a bill.
Senate Judiciary Committee Chairman Patrick LeahyPatrick Joseph LeahyOvernight Defense — Presented by The Embassy of the United Arab Emirates — Missing journalist strains US-Saudi ties | Senators push Trump to open investigation | Trump speaks with Saudi officials | New questions over support for Saudi coalition in Yemen Senators trigger law forcing Trump to probe Saudi journalist's disappearance Justice Kavanaugh will be impartial, not political like his opponents MORE (D-Vt.) introduced a new version of the USA Freedom Act earlier this summer. It has managed to win support from lawmakers on both sides of the aisle as well as privacy advocates, technology companies and the Obama administration.
“The Department of Justice and the Director of National Intelligence support this legislation and believe that it reflects a reasonable compromise that preserves essential intelligence community capabilities, enhances privacy and civil liberties, and increases transparency,” the two agencies said on Friday.
Some members of the Senate Intelligence Committee have opposed the bill, however, warning that it would cut the legs out from under the nation’s spies.
The Senate has yet to take action on the legislation, and it could face an uphill fight amid a busy legislative calendar.
In a statement on Saturday morning, Leahy said that lawmakers should pass his bill and “ensure that this is the last time the government requests and the court approves the bulk collection of Americans' records.”
"We cannot wait any longer, and we cannot defer action on this important issue until the next Congress," he added. "This announcement underscores, once again, that it is time for Congress to enact meaningful reforms to protect individual privacy.”
The NSA’s phone records program needs to be reauthorized by the FISC every 90 days. The current authority expires on Dec. 5.
This post was updated on Sept. 13 at 12:42 p.m. | https://thehill.com/policy/technology/217618-spy-court-renews-nsa-program |
The Missouri State Office of the Natural Resources Conservation Service (NRCS) asked the Ozarks Environmental and Water Resources Institute (OEWRI) at Missouri State University (MSU) to submit a proposed plan and budget for a pilot watershed assessment study for the Lamar Lake - North Fork Spring River Watershed. The project area is a 12-digit hydrologic unit code (HUC-110702070206) watershed located within the larger Spring River Watershed in Barton County, Missouri that includes the City of Lamar and the drinking water supply impoundment located there.
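The 12-digit hydrologic unit code quoted above is hierarchical: each successive 2-digit pair narrows the unit, from the 2-digit region down to the 12-digit subwatershed. A small sketch of how the nested parent units fall out of the code (level names follow the usual USGS convention and are included here for illustration):

```python
# Split a 12-digit hydrologic unit code (HUC-12) into its nested parent
# units. Level names follow the USGS Watershed Boundary Dataset convention.

HUC_LEVELS = ["region", "subregion", "basin", "subbasin",
              "watershed", "subwatershed"]

def huc_hierarchy(huc12: str) -> dict:
    """Return the nested hydrologic units implied by a 12-digit code."""
    if len(huc12) != 12 or not huc12.isdigit():
        raise ValueError("expected a 12-digit hydrologic unit code")
    return {name: huc12[: 2 * (i + 1)] for i, name in enumerate(HUC_LEVELS)}

# The Lamar Lake - North Fork Spring River unit from the text:
for level, code in huc_hierarchy("110702070206").items():
    print(f"{level:>13}: {code}")
# e.g. the 8-digit subbasin (11070207) is the larger unit this
# HUC-12 watershed nests within
```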
This assessment is part of the National Water Quality Initiative (NWQI) aimed at reducing nutrients and sediment in the nation’s rivers and streams. The goal of the NWQI program is for the NRCS and its partners to work with landowners to implement voluntary conservation practices that improve water quality in high-priority watersheds while maintaining agricultural productivity. The purpose of this assessment is to provide NRCS field staff the necessary information on locations within the watershed where soil, slope, and land use practices have the highest pollution potential and to identify conservation practices can be the most beneficial to improve water quality.
Objectives
- Complete a comprehensive inventory of existing data in the watershed including information related to geology, soils, hydrology, climate, land use, and any existing biological or chemical monitoring data available;
- Perform a resource assessment of the watershed that includes analysis of the data gathered in the watershed inventory that includes identification of nonpoint source pollutants, water quality impairments, rainfall-runoff characteristics, and a field-based stream bank conditions assessment;
- Provide NRCS staff with information on the resource concerns within the watershed, specific field conditions that contribute that most to the water quality impairment, and what conservation practices should be implemented for the existing conditions to get the most water quality benefit. | https://oewri.missouristate.edu/2018-lamar-lake-north-fork-spring-river-watershed-nwqi.htm |
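The "rainfall-runoff characteristics" mentioned in the second objective are often screened with the NRCS (formerly SCS) curve-number method. Whether that is the method used in this particular assessment is not stated, so the sketch below is purely illustrative, with hypothetical curve numbers:

```python
def scs_runoff(precip_in: float, curve_number: float) -> float:
    """NRCS (SCS) curve-number runoff estimate, in inches.

    Q = (P - Ia)^2 / (P - Ia + S), with S = 1000/CN - 10 and Ia = 0.2*S.
    """
    if not 0 < curve_number <= 100:
        raise ValueError("curve number must be in (0, 100]")
    s = 1000.0 / curve_number - 10.0  # potential maximum retention (in)
    ia = 0.2 * s                      # initial abstraction (in)
    if precip_in <= ia:
        return 0.0                    # all rainfall retained, no runoff
    return (precip_in - ia) ** 2 / (precip_in - ia + s)

# Hypothetical comparison: the same 3-inch storm over a low-CN land cover
# (good infiltration) versus a higher-CN, more runoff-prone land use.
print(round(scs_runoff(3.0, 61), 2))  # 0.37
print(round(scs_runoff(3.0, 85), 2))  # 1.59
```

Higher curve numbers (less infiltration) turn more of the same storm into runoff, which is the kind of field-condition contrast the assessment is meant to map for NRCS staff.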
Medical professionals were put on trial after the war for their participation in war crimes and crimes against humanity during the Holocaust. The trial sparked questions about medical ethics in the aftermath of the brutal experiments on prisoners in the camp system.
On December 9, 1946, an American military tribunal opened criminal proceedings against 23 leading German physicians and administrators for their willing participation in war crimes and crimes against humanity. This case is known as the "Doctors Trial" (USA v. Karl Brandt et. al). On August 19, 1947, the judges of the tribunal delivered their verdict. But before announcing the guilt or innocence of each defendant, they confronted the difficult question of medical experimentation on human beings.
Several German doctors had argued in their own defense that their experiments differed little from those conducted before the war by German and American scientists. Furthermore they showed that no international law or informal statement differentiated between legal and illegal human experimentation. This argument was a great concern to two US doctors who had worked with the prosecution during the trial, Dr. Andrew Ivy and Dr. Leo Alexander.
As a result, on April 17, 1947, Dr. Alexander submitted a memorandum to the United States Counsel for War Crimes. The memo outlined six points that defined legitimate medical research. The trial's verdict of August 19 reiterated almost all of these points in a section entitled "Permissible Medical Experiments." It also revised the original six points into ten, and these ten points became known as the "Nuremberg Code."
In the half century following the trial, the code informed numerous international ethics statements. Its legal force, however, was not well established. Nevertheless, it remains a landmark document on medical ethics and one of the most lasting products of the "Doctors Trial."
From: Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10. Nuremberg, October 1946–April 1949. Washington, D.C.: U.S. G.P.O., 1949–1953.
THIS BOOK IS STILL AVAILABLE IN PRINT
Please note that although the printed book cannot be ordered directly from this website, it can be ordered from any reputable bookseller. Alternatively, to immediately obtain an eBook version please click on the links below to Amazon Kindle and/or Google Books.
The chapters in this volume do not represent the whole of the Middle East and North Africa, as such a collection would have been too large for one volume. Rather, the selection here is intended to present different perspectives on a range of educational issues, relevant to a particular focus or country, or common to a number of countries in the area. There is no overarching theme beyond that which is common to most of the countries in this area, such as modernity versus tradition; the spread of education effecting sociological changes - most pronounced in the rural and tribal areas; the changing fortunes and roles of women; the aspirations and expectations of youth; and the state having become the major player in providing education. These are all shared by most of the countries represented here.
Acknowledgements, 7
Colin Brock & Lila Zia Levers. Introduction, 9
Barbara Freyer Stowasser. The Qur’an and Women’s Education: patriarchal interpretations of Sura 4:34 and their unreading by educated Muslim women, 21
Serra Kirdar. The Impact of Educational Empowerment on Women in the Arab World, 39
Sally Findlow. Women, Higher Education and Social Transformation in the Arab Gulf, 57
Andre Elias Mazawi. Besieging the King’s Tower? En/Gendering Academic Opportunities in the Gulf Arab States, 77
Golnar Mehran. Religious Education of Muslim and Non-Muslim Schoolchildren in the Islamic Republic of Iran, 99
Iran Mohammadi-Heuboeck. Aspects of Bilingualism in Iranian Kurdish Schoolchildren, 127
Yossi Dahan & Yossi Yonah. Israel’s Education System: equality of opportunity – from nation building to neo-liberalism, 141
Richard Ratcliffe. The Moment of Education: the politics of education among the Negev Bedouin, Israel, 163
Bilal Fouad Barakat. The Struggle for Palestinian National Education Past and Present, 185
Abdelkader Ezzaki. Formal Schooling in Morocco: the hopes and challenges of the current educational reform, 209
Ayse Kok. Computerising Turkey’s Schools, 223
Notes on Contributors, 237
We are happy that, amongst the education systems and issues covered in this study, we have been able to include contributions concerning Israel in general, the Negev Bedouin within Israel, and Palestine. All too often, Israel and issues related to it have been excluded from Middle Eastern collections, while Palestine suffers from not yet technically being a state, with many of its people internally or externally displaced in more than just a spatial sense. Other countries and issues represented are: Iran, Turkey, United Arab Emirates, Saudi Arabia, Morocco, and Algeria. In this volume, connections between education and women - their position and increasing visibility in non-traditional roles - inform four of the chapters. Two such chapters (Findlow and Mazawi) focus on higher education and the empowerment of women as they become key players in the socio-economic changes that are transforming their countries in the Gulf region. Another investigates the role of education in bringing about cultural change and fostering leadership amongst a select group of powerful ‘first generation professional’ Arab, Muslim women (Kirdar). These three chapters illustrate extremely varied fortunes among the different groups and individuals who are the subjects of each piece of research, ranging from the cultural and institutional constraints on women within the Saudi Arabian tertiary sector to the beneficial effects of having experienced and retained complementary aspects of Islam and western education.
We commence the collection, however, with the contribution of Barbara Stowasser (The Qur’an and Women’s Education: patriarchal interpretations of Sura 4:34 and their unreading by educated Muslim women), which examines the general perception of Islam as a religion that accords an inferior position to women. The Qur’an is often quoted to support this position, especially Sura 4:34, in which the treatment of women appears. This first chapter examines the reaction of modern, educated Muslim female university students to this Sura and its interpretation by Islamic scholars. She describes her experience of teaching traditional and modern Qur’anic exegesis (Tafsir) literature at Georgetown University and the role it plays in the educational experience of modern, educated, religiously-minded Muslim women. She states that most of the students in her seminars were female, and active in ‘rethinking Islam’. Some were foreign students, and many others first- or second-generation immigrants to the USA. Her course centred on the study of Arab Tafsir literature in chronological order, as well as focusing on specific Qur’anic themes for textual comparison, such as issues of gender, for example men’s ‘guardianship’ over women, which appears in Sura 4:34. She goes on to examine the classical interpretation of Sura 4:34 through the writings of al-Tabari, al-Baydawi and other pre-modern Arab Sunni scholars, and describes the reaction of her students to the classical interpretations of Sura 4:34, in which male superiority and supremacy is emphasised, as ‘uniformly negative’. She also examines the modern scholars (from Abduh, 1905, and Rida, 1935, to al-Rahman, 1988), and the work of women scholars in the second half of the 20th century. She indicates that, unlike the earlier pioneers among female religious scholars, most of the contemporary women writers are ‘engaged in an effort to “unread” patriarchal interpretations of the Qur’an’.
They consider that in the area of Tafsir women’s contribution is essential to opening up the Islamic discourse on gender as a whole, and that ‘feminism’ within the context of Islam can provide the only path to empowerment and liberation that avoids challenging the whole culture. She touches on the works of Muslim women scholars who at present live in the West and write in English, such as Amina Wadud, Asma Barlas, Riffat Hassan, and Nimat Hafez Barazangi. These and other ‘activist scholars’, she points out, often question or even reject being identified as ‘feminists’, as they consider this term to apply to women in the West who are secularist while fighting male domination. Nevertheless, Stowasser says, there is ‘a liberationist tenor’ to their work that perhaps does merit use of the epithet; their writing draws connections between women’s rights and democracy, human rights and economic justice, and is thus conceived as the absolute antithesis to the patriarchal, paternalistic and hierarchical framings of the pre-modern Tafsir. She ends the chapter by giving the internet sites where the Qur’an and gender are discussed, and concludes that the whole of Sura 4:34 is at present a regular staple in this new world of electronic communications, and that the verse’s electronic interpretations tend to reflect the new hermeneutics of gender-equal readings of the Qur’an.
Serra Kirdar's contribution (The Impact of Educational Empowerment on Women in the Arab World), based on a larger piece of research, illustrates just how far some Arab women have travelled along the emancipation route. Although the record is extremely varied, she has been able to find remarkable cases of outstanding success in a variety of country contexts: Saudi Arabia, Bahrain, Morocco, Egypt and Jordan. The author is not concerned, however, only with a single cultural context; rather, she is examining the effects of a dual religious-cultural experience, what she terms ‘the merging of cultural traditionalism and modernity within individuals’. Her subjects represent Arab professional female role models, having ‘used their educational experiences to redefine their own identities and engender changes’. She places this initially in a global as well as regional perspective and emphasises the significance of female educational development for development overall. In the Arab social context, however, opportunities for women are still class-related; nonetheless, those who have a voice are using it more to challenge the status quo. Indeed, ‘many women who have aligned themselves with the Islamic revival movement in different Arab countries have gone to considerable lengths to proclaim their independent initiatives’. For it is traditional social norms rather than religious regulation that constrain educated women from fully utilising their knowledge and skills. This, Kirdar comments, is not fundamentally different from Western societies, where ‘the glass ceiling’ certainly still exists.
The high-achieving respondents to Kirdar's interviews indicated that while they had been given opportunities to succeed, they had had to do so in a spectacular way, and that once in a key professional position they tended to feel isolated because of gender. But they did not, as Western women have tended to do, 'abandon the feminine in order to ascertain the masculine'. All interviewees believed that further change and opportunity are inevitable but will take time.
Kirdar moves on to consider the issue of female empowerment, possibly due in part to the breakthrough made by the generational band she interviewed. These are elite women, but what about the growing and emerging middle classes? They are looking for a new way, but not necessarily the Western way. They are setting new benchmarks and assisting new middle-class Arab men to progress as well.
Sally Findlow's chapter on Women, Higher Education and Social Transformation in the Arab Gulf looks at how a particular group of Arab women are engaging with rapidly expanding higher education opportunities at home and abroad, and conceptualising this engagement as part of a wider regional project of social change. Set in a historical context and against broader discussions about the social functions and gendered inequalities of higher education, it challenges over-simplistic stereotypes of the opportunities available to Arab, particularly Arab Gulf, women. It draws on policy analysis and the first-hand accounts of local women to describe how the UAE higher education system interacts with issues of tradition, modernity, religion and family to produce complex patterns of aspiration, empowerment and tension among young women.
The marginalisation of women in higher education is also addressed: while in absolute numbers there has clearly been an increase, the fields of study in which the majority of women are engaged are fairly limited and circumscribed. Nonetheless, Findlow shows that this is not uncommon in global perspective.
The Gulf States, the focus of this study, were, even by regional standards, slow starters in development terms, though for at least two decades public rhetoric has extolled the social roles of women. In some instances certain devices have been used to deal with traditional considerations, such as locating a new campus for women miles inland from the main centre or offering distance learning modes. One impressive feature is the apparent success in attracting large numbers of women to technical education.
Sally Findlow summarises this situation as a 'feminist/internationalist orientation, constrained only by residual social conservatism'. She interviewed female university students and found that enrolling in higher education was often seen as an alternative to merely staying at home. It is at least an option available to women, even if, as yet, only to a few of the broader female population. However, barriers to utilising qualifications and experience gained in higher education are widespread, the marriage imperative being the foremost. As Findlow puts it: 'Economics, lifestyle and status are all involved in this mismatch. The more educated the woman is the higher her bride-price'. She also found that mothers encourage their daughters to enrol in higher education and, once there, to be careful to conform to traditional lifestyles. So it would appear that social adjustment to increased opportunities is under way and the voices of educated women in the Gulf are beginning to be heard.
Andre Mazawi (Besieging the King's Tower: en/gendering academic opportunities in the Gulf Arab States) is also concerned with women's higher education opportunities in the Gulf Arab States. As he points out, women constitute more than 75 per cent of all higher education students, but less than 30 per cent of faculty members. Furthermore, their presence in both regards is highly concentrated in certain disciplinary fields. He refers to them, despite their numbers, as 'a subordinate and politically weakened social group'. Their situation is not assisted by their issues being marginalised in the literature - hence the significance of the chapters in this volume by Mazawi and Findlow. This chapter concentrates on three main issues: a) a 'spatial political economy of academic opportunities'; b) 'globalisation and the privatisation of emancipation'; c) 'academic opportunities as topographies of struggle'. The first of the three concentrates on Saudi Arabia, concluding that 'women are generally not allocated to disciplinary fields which are either associated with Saudi Arabia's industry and technology or with the production of religious knowledge'. The second is Gulf-oriented in a wider sense, and concludes that the position of women in Gulf higher education is 'an emblematic representation of a cultural product of modernity and renaissance endorsed and promoted by a benevolent state'. The third focus is on women's increased activism in post-Gulf-war social debate. Overall the picture is one of struggle to gain the maximum possible benefit from such liberalisation as has occurred. Mazawi's overall conclusion is that it is not yet clear how the increased participation of Gulf women in academia will relate to the resolution of issues of social justice, fairness and equity in the wider societies.
Another focus within this collection is Iran, a complex, multi-ethnic and multi-lingual country. One particular feature is the multi-faith nature of the population - while the vast majority are Shi'i Muslims, the remainder are Christians, Jews and Zoroastrians. These aspects of Iranian plural society are examined in the two chapters on that country. Iran was declared an Islamic Republic in 1979, and Iranian schools have ever since aimed at creating pious and politicised schoolchildren, the aim of the Islamic leaders since the Revolution having been to create politically aware and devout Muslim citizens. The education system has been regarded as the principal instrument through which this was to be achieved. In the aftermath of the Revolution, a major revision of school textbooks took place. Islamic themes, references to Islamic personalities, and episodes in the history of Islam were introduced into the majority of school texts, especially those of the humanities subjects. Religious studies itself came to form a major part of the curriculum. Given the overtly Islamic nature of the education system, it may be asked how the minority faiths fare within it. The answer to this question is found in the chapter by Golnar Mehran on Religious Education of Muslim and Non-Muslim Schoolchildren in the Islamic Republic of Iran. She points out that Iran has a predominantly Muslim population, the majority of which are Shi'i. It does, however, also contain minorities who practise other faiths, mainly Christianity, Judaism and Zoroastrianism, these being the only other 'officially recognised' religions in the country.
The system of education in Iran has been highly centralised since the establishment of a modern system in the first half of the twentieth century, and this is still the case under the Islamic Republic. There is a standard curriculum throughout the country and teachers are trained in state-sponsored teacher training centres. The textbooks are also uniform throughout the country, with the exception of religious textbooks. The Ministry of Education prints separate religious textbooks for Muslim and non-Muslim schoolchildren between the ages of 7 and 16, the period of formal compulsory schooling.
The purpose of Mehran's study is two-fold: to identify the goals of state religious education, and to determine whether these are intended to bring unity or disunity among the different faiths. Hence, she undertakes an in-depth study of religious education in schools by examining the pictorial and textual contents of religious education textbooks during the 2004-2005 academic year for both Muslim and non-Muslim students. She addresses the following themes: how religious diversity is treated in the textbooks; the messages conveyed to students of different faiths; and the similarities and dissimilarities of religious education for the majority and minority faiths. Her findings reveal that state religious education aims at bringing about unity rather than division by emphasising 'commonalities', and ignoring dissimilarities, among monotheistic religions. Another of her findings is that religious education in Iranian schools is also characterised by a policy of silence that deliberately excludes potentially divisive issues, and avoids acknowledging and addressing the religious diversity that exists in contemporary Iranian society.
Another prime characteristic of Iran is its multi-ethnic and multi-lingual composition, and in this study Iran Mohammadi-Heuboeck examines the question of ethnicity, language and identity through its impact on the Kurds in contemporary Iran. In her chapter on Aspects of Bilingualism in Iranian Kurdish Schoolchildren she begins by listing the Iranian ethnic groups, all with their different languages, and points out that the Persians, Azeris and Kurds comprise the largest groups. The question then arises: given the highly centralised nature of the state and its education system, and the emphasis on Persian as the language of instruction in schools and of communication in local and national administrative structures, how is this multi-lingual factor accommodated? She examines this question by focusing on a case study of the Iranian Kurds, with particular reference to the new generation of Kurdish youth. With the establishment of a modern centralised state under Reza Shah (1925-41) and the aim of creating a unified country, Persian became the official language and school textbooks came to be published only in Persian. Mohammadi-Heuboeck argues that this development put Persian 'in close contact with a variety of regional languages, giving rise to a politically motivated situation of bilingualism throughout the country'. After the Revolution of 1979 the ethnic peoples of Iran hoped for a certain amount of autonomy, especially with regard to their language and culture. But under the Islamic Republic this was not to be. Despite many debates and discussions, as well as the official recognition (Article XV of the Constitution) of the right of the minorities to teaching in their own language, Persian remains the language of instruction in Iranian schools.
Mohammadi-Heuboeck traces the long periods of struggle by the Kurds for the right to receive education in their own language, and examines the impact of a centralised school system on the new generation of Kurdish youth. She writes: 'the dynamic of identification of the new generation is not the same as it used to be for the previous generation, socialised mainly within the family circle'; school life forms a considerable part of the Kurdish youth experience and the family circle is no longer the only source of identity and cultural references.
Mohammadi-Heuboeck goes on to examine the changing attitude of even the older generation to this dual linguistic identity and the increasing acceptance of Persian as the 'language of our children', even though in many cases their own knowledge of Persian is rather poor. This has become even more widespread under the Islamic Republic as a result of the increase in the population and the Islamic Republic's drive to increase literacy. The transmission of the Kurdish language is therefore gradually declining, and the younger generation's linguistic identity is oriented more and more towards Persian, the official language. As a consequence, a conflict has been created not only within the Kurdish communities but also between their rural and urban areas - the urban increasingly becoming Persian-speaking and therefore regarded as educated, the rural still dominated by Kurdish and regarded as backward. She states that this stigmatised stereotype of the Kurdish language can be seen in schoolchildren who regard Kurdish as the language of farmers and see no reason for themselves, as urban dwellers, to speak it. This attitude of most of the younger generation to the language also applies to Kurdish culture itself, which is regarded as dispensable and no longer relevant to a modern Persian-speaking society. The fact that they speak Persian as their first language is seen as a sign of modernity and social prestige.
She further analyses the sociolinguistic interaction of young or middle-aged parents, from different socioeconomic backgrounds, with their children. The parents' ambition for their children to climb the social ladder influences their attitude to Persian, to the extent of adopting it as the language of communication in the home, even with young children, so as to prepare them for school and interaction with their peers. Her contact with families in the Kurdish communities gives her an insight into the complexity of their lives in the face of the interplay between the old linguistic aspirations and the requirements of the socioeconomic dynamics of today's Iran. She examines the conflict that the young experience in this cultural duality. The majority of the youth, who have assimilated the national identity by speaking Persian, either in a local or a Tehrani accent, like to conceal their Kurdish origin, but nevertheless live under the constant threat that it might be revealed in public. A feeling of discomfort and shame towards their parents or family, who may not speak Persian well, is common among them, and they put much effort into concealing their parents' Kurdish identity.
Mohammadi-Heuboeck points out that this crisis of identity is not limited to Kurdish children speaking Persian, but also extends to children speaking Kurdish itself. The experience of being Kurdish in a school where Persian language and culture are dominant has resulted in the emergence of ethnic nationalism among some of the new generation. They regard the institution of the school as a symbol of political domination by the state in the Kurdish area. Fear of the loss of their Kurdish identity creates either passive or active antagonism towards the central government. They wish to be recognised as Kurds and strongly believe that Kurdish language and literature should form part of the curriculum alongside other subjects. She concludes that this can only be achieved if Article XV of the Constitution, granting the minorities the right to be taught in their own language, is realised in practice.
Yossi Dahan & Yossi Yonah, in their chapter on Israel's Education System: equality of opportunity - from nation building to neo-liberalism, begin by stating that the value of equal opportunity has always been the guiding principle of the founders, as well as the prominent political leaders, of the state of Israel in the formulation and implementation of public policies in the sphere of education. This value was also regarded as essential for the realisation of a cohesive society and the creation of solidarity among the members of the emerging new Jewish state, whose citizens' geographical origins were extremely diverse. They go on to pose the questions: 'how has this value fared in Israel's education system over the years', and 'to what extent does it receive meaningful expression in Israel's education policies?' They begin by examining the State Education Act of 1953, which formally nationalised Israel's education system. This was to provide equal opportunity and equal treatment for children regardless of their ethnic origins and social background. In practice, however, this was not the actual outcome: Dahan and Yonah maintain that from the outset 'geographical segregation and systematic discrimination against various social groups' took place. This was seen in areas such as the allocation of resources, teaching personnel and the school curriculum. The severest form of discrimination was against Arab children. Israeli Arab citizens lived under the military rule that followed the establishment of the state of Israel, and their children were allocated to segregated schools and severely discriminated against in the allocation of resources. In the case of the other Israeli citizens, Mizrahi and Ashkenazi Jews, the pattern of geographical segregation in their lives was upheld by the education system.
Most Mizrahi children attended poorly equipped schools with low academic achievements, while Ashkenazi children, whose parents were mainly of European origin, attended privileged schools with high-quality teaching and facilities, preparing their pupils for higher education and consequently a privileged position in society. Therefore, they state, 'the education system has generally resulted in reproducing existing patterns of geographical segregation and structural inequalities between Mizrahi and Ashkenazi children, thus practically creating two educational sub-systems characterised by a disparity of material conditions and different school curricula'. This led to the emergence of wide scholastic gaps among children belonging to different social groups, which became a worrying factor for education policy makers, who perceived it as 'undermining attempts to cultivate a cohesive Jewish community'. Dahan & Yonah proceed to focus on the reforms that were initiated to remedy the initial shortcomings of the education system, the two most important being the Integration Reform, implemented in 1968, and the 'Dovrat Reform', endorsed by the government in 2005. Both reforms, they state, despite their different ideological and political nature, 'decree that educational policies are desirable only to the extent that they significantly contribute to the realisation of the value of equal opportunity'. According to this value, every child, irrespective of social background - including nationality, ethnicity, race, gender, family milieu and economic status - should be granted educational opportunities. They show that both these reforms fail to implement the 'principle of educational equality of opportunity', considered 'one of the main constitutive building blocks of Israel's state ideology'.
They do, however, meet the aim of creating a strong national Jewish identity by adopting a strongly nationalistic curriculum which rules out any reference to multiculturalism or any expression of the other national identities that exist in the state of Israel.
Richard Ratcliffe's contribution (The Moment of Education: the politics of education among the Negev Bedouin, Israel) is a fascinating insight into certain educational experiences of a very special and distinctive group, the Bedouin of the Negev in Israel, who, he clearly shows, have exhibited a keen political interest in education. As with the gendered chapters discussed above, the issues of concern in education for this group reflect wider social trends and problems. As he puts it, Bedouin educational politics was significant in relation to wider politics within the community at large. Ratcliffe indicates that the Negev Bedouin, until recently, were peripheral within the Arab world, indeed even 'romanticised'. In relation to education in particular they have become politically active. They have, according to Ratcliffe, 'come to symbolise the internal Arab threat', in terms of demography, land, security, and perhaps even the existential impossibility of social integration. He discusses in particular what he terms 'the moment' for Bedouin education: the educational contestation of 1994-2005, including a sustained campaign for the improvement of Bedouin education in Israel, which also brought much 'new knowledge' to light about their plight, by showing clearly that the low status of Bedouin education was caused not by 'cultural reasons' but rather by unequal material conditions and discrimination. This sustained campaign had the additional effect of symbolising the discrimination against Palestinian Arab citizens of Israel in general. Ratcliffe analyses what he terms the technopolitics of the campaign's approach, concentrating on practical issues of neglect and discrimination rather than ideological and rhetorical stances. He describes it as having an 'integrationist logic', revolving around issues of development, land, gender, demography and segmentary politics. It was tactically partial, pliable and patient.
Nonetheless it was still a struggle, and the struggle goes on. For Ratcliffe this campaign, this moment, 'marked the internationalisation of Bedouin politics ... in Israel/Palestine'; with the focus on competing national projects, this transformation is often overlooked.
Bilal Barakat's analysis, The Struggle for Palestinian National Education Past and Present, is also framed as a struggle. It is viewed here in the context of anti-colonialism and modernisation in what he rightly describes as 'a highly exceptional position'. This is of course due to the historical developments behind the present conflictual situation in education, commencing in 1846 with the Ottoman education laws modelled on the French system. He examines the progress of both public and private provision and access, noting the social class implications, including the limited expansion of public schooling, through the British Mandate from the early 1920s to 1948. He shows this partiality to be a deliberate policy, the prime objective being to educate potential teachers and bureaucrats. This approach failed completely to meet the technical needs of the Palestinian community. However, schools were active in other respects, for example in the Arab Revolt of 1936-39, when the British responded by constraining physical access - a forerunner of what has happened with the construction of the wall in contemporary Israel/Palestine. The 1967 seizure by Israel of the West Bank and Gaza Strip greatly increased control of regulated space, subsequent to which 'authority was severely abused'. Provision was reduced to a minimum and systematic development curtailed. Such severe limitation ranged from primary to university levels. Nonetheless, the Palestinian universities have de facto, if not de jure, served as features of a nascent 'national authority' for Palestine in supplying high-level professional expertise. Barakat describes how the Oslo Accords of the early 1990s 'formally transferred control of the educational system in the Occupied Territories to the newly formed Palestinian National Authority'. This enabled provision at primary level to increase to near-universal enrolments, with progress through the secondary stage as well.
The outcome he regards as favourable compared with that of Arab schools within the State of Israel itself, where 'internal colonialism' brings additional constraints. However, in the Occupied Territories there has been considerable physical damage and many human casualties due to military action.
Turning to the issue of education in relation to liberation, Barakat poses the question: whose liberation, and from what? Is the kind of education that assists a revolutionary struggle appropriate for assisting social and economic well-being in a broader sense? Such tension can also be portrayed as lying between education for individuals and education for the community as a whole. But there are many Palestinian communities, as well as social classes. Only in the universities of the West Bank, apparently, have 'the individual' and 'the national' been reconciled.
Issues of identity and historical record have been severely and adversely affected by Israeli destruction of research documents, archives and central records. Through such assaults, the identity of Palestinians 'as political, historical, intellectual or cultural beings' is sought to be minimised if not eliminated. Nonetheless, educational provision in contemporary Palestine has so far helped to prevent the total realisation of Israel's strategy of attacking Palestinian identity. Barakat thus concludes that education remains a potential contributor to the realisation of the Palestinian state and its national development.
Abdelkader Ezzaki, in Formal Schooling in Morocco: the hopes and challenges of the current educational reform, commences by quoting the 2003 UNDP Human Development Report's figures on education. These show that, despite educational expenditure forming more than 25% of the government's total budget (about 5.5% of GDP), Morocco ranks as one of the low-performing countries in North Africa on the human development indicators (126th, with an education index of .50, compared to Egypt at 120 with an index of .63, Algeria at 107 with an index of .69, and Tunisia at 91 with an education index of .73); this ranking is based on adult literacy rates and the combined primary, secondary and tertiary gross enrolment ratio (which forms one of the three indices on which the human development index is built). Clearly, it became necessary to remedy the shortcomings of the educational sector by introducing a reform programme. Ezzaki proceeds to review and discuss the educational reform in Morocco following the setting up of a Special Commission of Education and Training in the mid-1990s, which addressed the problems bearing on all sections of the education sector. This led to the drawing up of a Charter, officially adopted in 1999, which came to form the basis for all the reform initiatives taken by the different educational authorities in the country. It deals with a full range of educational matters such as universal education, the curricula, methods of teaching and evaluation, language teaching, and information and communication technology (ICT). He describes each section of the report against the relevant sector of the education system and examines the extent to which the proposed reforms have been implemented.
Ezzaki finds that, despite improvements in each of these areas, they still fall short of the standards and targets set out under the Charter. The illiteracy rate, for example, is over 40%, one of the highest in the Arab world; about 2.5 million children of school age, mainly females in rural areas, are still out of school; grade repetition rates have increased; and the already weak pre-schooling system has further declined, leading to inadequate development of the language and skills required for the next stage of schooling. In the area of curriculum change the reform stipulated the integration of new areas of study into the curriculum, such as human rights, environmental issues, citizenship, technology and computing. However, Ezzaki regards this reform as problematic in that it leads to the overloading of the curriculum and increases the cost of schooling for parents, while the outcome of this multiplicity of contents may be superficial learning. Another aim was to include practical skills in the curriculum. This is being achieved through such initiatives as the project 'la main à la pâte' (hands-on learning), supported by the French cooperative programme in Morocco, which aims at enabling students to learn academic content through practical activities. The challenge is to incorporate such initiatives formally into the curricula and, as Ezzaki says, 'to create a new pedagogical culture centred around learning "relevance"'.
Having examined each section of the reform against its implementation, Ezzaki concludes that the Charter is a sound document and highly relevant to the needs of the education system in the country. It has brought about much needed improvement in many sectors of education, but certain implementation difficulties are reducing the success level of the reforms. He puts forward a number of policy initiatives to remedy the shortcomings.
Ayse Kok explores a very different theme from the other chapters in Computerising Turkey's Schools. Information technology is of course making its mark throughout the Middle East and North Africa, but in Turkey it coincides with a major project to bring basic educational opportunity to all. In all countries ICT impacts on education in two major senses: it requires a certain threshold of technical competence to install and utilise at all, and once in place it becomes a valuable medium for the development of learning at all levels.
After discussing certain theoretical and technical issues that apply globally, Kok turns to the context of Turkey itself, and especially the social transformation of recent decades, the last two of which have seen the advent of ICT. A number of national bodies have in fact been involved in ICT projects, but the key objective now is to integrate ICT into a centralised education system. This is proving a massive and challenging task, given the disparate levels of infrastructure and understanding across the country. It is not just a technical challenge but a curricular one as well. In general the installation of technical capacity is well ahead of understanding of how it can support curricular development. So two programmes, Basic Education Phase One (1998-2003) and Phase Two (2000 onwards), have been under way, looking to Turkey's future, including possible membership of the European Union. Indeed, this is a driving force behind the ICT development. But as Ayse Kok concludes: 'What this would mean in respect of Turkey as a Middle Eastern country is another issue'.
Bilal Barakat is a doctoral research student at the Department of Education, University of Oxford. Having studied mathematics at Cambridge (BA) and Oxford (MSc), he developed his current research interest in educational planning, especially in developing countries. He has worked as a consultant on various aspects of international and UK educational development, including higher education quality assurance, education in post-conflict settings, and teacher training and recruitment, for UNDP, UNESCO, and for national agencies in the United Kingdom.
Colin Brock is UNESCO Chair of Education as Humanitarian Response at the University of Oxford and a Fellow of St Hugh's College, Oxford. A graduate in geography and anthropology from the University of Durham, he initially taught in secondary schools and subsequently at the universities of Reading, Leeds and Hull before moving to Oxford in 1992. From 1972 to 1974 he was Education Adviser at the Caribbean Development Division of the then Overseas Development Agency, since when he has worked in the field of comparative and international education. Colin has undertaken significant project work in the field in many locations in Africa, South and East Asia, the Americas and the tropical island zones. More recently he has become involved in the Middle East. He is the author or editor of about 30 books and over 100 chapters, articles and research reports.
Yossi Dahan received his PhD from the Philosophy Department, Columbia University, New York, USA. He teaches courses on law and society and labor law at Ramat Gan College of Law in Israel, where he heads the human rights division. He also teaches courses on ethics, political theory and education at the Open University. He is the chairman and one of the founders of ‘Adva Center’, a research and advocacy center devoted to the study of social and economic inequalities in Israel. Dr Dahan is currently working on a book on theories of social justice.
Abdelkader Ezzaki was, until recently, Professor of Education at Mohammed V University-Souissi. He has served as Visiting Professor and as an international education consultant in the USA, in Africa and in the Gulf region. Currently, he is an ‘Education Specialist’ with ALEF, a USAID project that works in Morocco on innovative initiatives in education and vocational training. He holds a PhD from Temple University (USA), and an MA from the University of Wales (UK).
Sally Findlow is a lecturer in Education at Keele University. She gained her first degree in Islamic Studies and her PhD from the University of Cambridge. Sally has lived, worked and conducted fieldwork across several countries in the Arab Gulf and also in Egypt. In research terms she is interested in the role of higher education in the production of culture seen as dynamic and mutable, and responsive to social and policy change.
Serra Kirdar is the Founder and Director of the Muthabara Foundation, which was established in partnership with the University of Oxford Middle East Centre and the Centre for Applied HR Research to help maximise the potential of Arab women to achieve managerial and professional roles in the private sector. Serra gained a BA in Middle Eastern Studies, an MSc in Comparative and International Education and a DPhil., all at the University of Oxford. Her doctoral thesis was entitled: ‘Gender and Cross-Cultural Experience with Reference to Elite Arab Women’. Serra was a founding member of the New Leaders Group for the Institute for International Education (IIE), and also founded the Initiative for Innovative Teaching (INTEACH) under the IIE and Oxford University Middle East Centre. INTEACH aims to develop tailor-made locally geared professional training programmes for public sector teachers in the Arab world with the aim of enhancing pedagogical instruction in the region. She is also a Foundation Life Fellow of St Antony’s College, Middle East Centre, University of Oxford.
Ayse Kok received her BSc degree in Management Information Systems from Bogazici University, Turkey in 2003. She subsequently worked as an IT consultant with Ernst & Young, a professional business advisory services firm, before joining the MSc in E-Learning programme at the University of Oxford. After completing her MSc degree, she worked as a short-term e-learning consultant at the United Nations System Staff College in Turin, Italy. Ayse has also presented several papers about e-learning at international conferences, such as iLearn in Paris (January 2007) and the EDEN workshop in Barcelona (October 2006).
Lila Zia Levers was born in Tehran where she attended primary school, before moving to England for her secondary and university education, graduating in Politics at the University of Exeter. She subsequently held a series of posts in both education and administration, culminating in the post of Graduate Studies Administrator at the Modern Languages Faculty, University of Oxford. As a long-standing member of the British Association for Comparative and International Education, she has presented papers at its conferences on aspects of education in Iran. She presented a paper at the Royal Institute of International Affairs, Chatham House, London on ‘The Iranian Revolution: ten years later’. These papers have been published. Her most recent publication is ‘Ideology and Change in Iranian Education’ in Rosarii Griffin (Ed.) Education in the Muslim World: different perspectives (Oxford: Symposium Books, 2006).
André Elias Mazawi is Associate Professor, Department of Educational Studies, Faculty of Education, University of British Columbia (UBC), Canada. He is Co-Director of the Centre for Policy Studies in Higher Education and Training (CHET) at UBC and serves as Associate Editor and French Editor of the Canadian Journal of Higher Education. He is interested in higher education and educational policy, with particular reference to the Middle East. His recent publications include: ‘‘Knowledge Society’ or Work as ‘Spectacle’? Education for Work and the Prospects of Social Transformation in Arab Societies’, in Educating the Global Workforce: knowledge, knowledge work and knowledge workers, edited by L. Farrell & T. Fenwick, pp. 251-267. (London: Routledge, 2007); and ‘Globalization, Development, and Policies of Knowledge and Learning in the Arab States’, in New Society Models for a New Millennium – the learning society in Europe and beyond, edited by M. Kuhn (New York: Peter Lang, 2007).
Golnar Mehran is Associate Professor of Education at Al-Zahra University in Tehran, Iran. She has also acted as education consultant to UNICEF (Iran, Jordan and Oman), UNESCO, and the World Bank. Her research interest and publications include: ideology and education in post-revolutionary Iran; political socialization of Iranian schoolchildren; presentation of the ‘self’ and ‘other’ in Iranian education; female education in Iran and the Middle East; and religious education in the Islamic Republic of Iran.
Iran Mohammadi-Heuboeck received her PhD in Sociology from the Ecole des Hautes Etudes en Sciences Sociales (EHESS), Paris, for a thesis on the role of school in the process of identity construction among Kurdish schoolchildren in the Islamic Republic of Iran. Her current research focuses on various aspects of contemporary Iranian society: identities of ethnic minorities, sociology of education, women’s studies and questions of religious identity in the Islamic Republic. She works as an academic supervisor at the School of Oriental and African Studies (SOAS) in London.
Richard Ratcliffe is nearing completion of his DPhil on the politics of non-formal education among the Negev Bedouin in Israel, at St Antony’s College, University of Oxford. As part of this study, he spent three years working with different educational institutions and initiatives as a consultant, researcher and teacher. Prior to this, he worked for a number of different human rights organisations in Israel/Palestine and gained an MA in Arabic from Edinburgh University.
Barbara Freyer Stowasser is Professor of Arabic and Islamic Studies at Georgetown University in Washington, DC, USA. She holds an MA in Near East Studies from UCLA and a PhD in Comparative Semitic and Islamic Studies from the University of Münster, Germany. Her publications include Islamic Law and the Challenges of Modernity, co-edited with Yvonne Haddad (AltaMira Press, 2004), a book-length study on Women in the Qur’an: traditions and interpretations (Oxford University Press, 1994), and an edited volume entitled The Islamic Impulse (Center for Contemporary Arab Studies, Georgetown University, 1987, reprinted 1989). Two of her shorter think-pieces appeared as Center for Contemporary Arab Studies Occasional Papers: ‘Religion and Political Development: comparative ideas on Ibn Khaldun and Machiavelli’ (1983, reprinted 2000), and ‘A Time to Reap: thoughts on calendars and millennialism’ (2000). The latter is the text of Dr Stowasser’s address as outgoing president of the Middle East Studies Association (1998-99).
Yossi Yonah received his PhD from the Philosophy Department, University of Pennsylvania, USA. He teaches political philosophy and philosophy of education in the Department of Education, Ben Gurion University of the Negev, Israel. He was Head of the Teacher Training Program there between 1995 and 2002. He is a senior research fellow with the Jerusalem Van Leer Institute. Professor Yonah has published extensively on topics pertaining to moral and political philosophy, philosophy of education and multiculturalism. | https://www.symposium-books.co.uk/bookdetails/66/ |
Our objective is to demonstrate the feasibility of an instrument to measure residual urinary bladder volume non-invasively. The instrument is based on ultrasonics, and would specifically be designed for low cost, direct digital readout, and ease of use by relatively untrained personnel. The primary value of the instrument is in reducing the need for catheterization, thus saving patients from the associated discomforts and risks of trauma and infection. Primary applications are: 1. Diagnosis of diseases which involve residual urine problems in the urological office, outpatient clinic, or emergency room; 2. Management of the bladder in spinal cord injury and other neurogenic problems; 3. Determining the need to catheterize patients recovering from anesthesia. The product has potential for widespread use by urologists, clinics, and spinal cord care centers. The project includes both the demonstration of technical feasibility and clinical calibration and evaluation of an experimental instrument. The clinical evaluation phase has several objectives: Test the instrument and processing algorithms for accuracy against catheterization-derived results. Determine accuracy over enough data points to be significant. Provide data to be used for off-line refinement of the processing. Determine how reliably the instrument can be applied over the range of potential patients, including sex, age, extent of bladder filling, obesity, and extravesical pathology. Investigate clinical value in the following applications: (a) Ambulatory office or clinical diagnosis, (b) Spinal cord injury patient management, (c) Post-anesthetic patient care. Because of the patient benefits and the excellent potential for widespread use, we think the project will be of interest to the National Institute of Arthritis, Diabetes, and Kidney Diseases.
| |
On February 13, 2017, the members of the Dixie Section voted overwhelmingly to change the name of the Section to the Alabama – Northwest Florida Section, Professional Golfers' Association (PGA) of America. The new name gives the Section clearly identifiable boundaries and a practical name for doing business in the 21st century. The Section has a rich history, mostly rooted in the years operating as the Dixie Section from 1965-2016.
The Section boundaries have been Alabama and the Florida panhandle since 1969. Historically, PGA professionals in Alabama, as long ago as 1916, were members of what was previously called the Middle States Section. This Section’s region encompassed the states of Alabama, Georgia, Tennessee, Florida and both North and South Carolina. Of course, this was in the early days of the PGA when fewer members existed and autonomy of a Section was hardly more than a geographical identity. As the PGA grew, the demand for more reasonable and operational boundaries was heard and in 1926, the Southeast Section of the PGA was organized. The Carolinas was established as a separate Section, leaving six states as the Southeast Section. Not much better, but at least the trend was started to reduce the size of the Sections so they could have a more individual identity.
In 1959, several things took place. Tennessee and Florida were each established as new Sections. Additionally, the states of Louisiana and Mississippi joined together to establish the Gulf States Section. This left Alabama and Georgia to form their own Section, then called the Georgia-Alabama Section. During this period, as it is today, the Panhandle of Florida was considered a part of Alabama for geographical identification.
In 1965, the Georgia-Alabama Section was renamed the Dixie Section. The Georgia professionals established themselves as the Georgia Chapter of the Dixie Section, while the Alabama professionals had some work to do in this respect. At this time, there were several organizations and associations in the area that had banded together to also promote golf, pro-ams, tournaments, etc. This was understandable because of the large geographical area to be covered by one organization. In addition to the Professional Golfers of Alabama, there were groups from the Panhandle of Florida, Mobile, AL, and other areas of south Alabama. While these groups included some, if not most, of the PGA members, they also included some golf professionals who were not PGA members. This left the Alabama professionals with work to do in organizing the Alabama Chapter of the Dixie Section as a counterpart of the Georgia Chapter. In 1967, Jackie Maness of Alabama was elected Secretary/Treasurer of the Dixie Section (GA-ALA). Shortly after this, a call went out to the Professional Golfers of Alabama to reestablish themselves as the Alabama Chapter of the Dixie Section, including the professionals in the Panhandle of Florida.
Starting in 1968, the Georgia professionals were making a strong attempt to have the national organization approve the Georgia Chapter as an established Section. Of course, this was met with support from the Alabama professionals. This would finally separate Alabama and the Panhandle of Florida to establish its own Section. Jackie Maness was very instrumental in working with Georgia professionals in this goal. In December 1969 at the PGA Annual Meeting in Scottsdale, Arizona, the Georgia Section PGA was approved as its very own Section.
The Alabama professionals elected to retain the name of the Dixie Section PGA. Jackie Maness was named as President of the new Section and Gene Williams as the Secretary/Treasurer. At this point, there was still much to be done in regards to Chapter organization within the new Dixie Section. In January 1970, a meeting was held in the Panhandle area on this subject. A second meeting followed this several months later at the Pine Harbor CC in Alabama, where it was resolved that the Dixie Section would be divided into two Chapters. One would be called the Alabama Chapter to take in the area North of Montgomery and the other Chapter would be called the Gulf Coast Chapter and take in the area South of Montgomery through the Panhandle. The Chapters were to elect their own officers and conduct their own programs under the guidance and approval of the Dixie Section. By this time, the other non-PGA groups were disbanded as the PGA members pulled out to participate in their respective Chapters.
By 1977, the Dixie Section was one of very few Sections that did not have an established business office and administrative person to handle all of the ever-increasing administrative details of the organization. Section PGA member Chick Ritter, who had recently retired as a Head Professional, was hired in April 1978 as the Executive Secretary and first full-time employee of the Section. Steady progress was made in the affairs and programs of the Section. In 1985, Chick requested retirement again, with a lead time of June 1986 for a handover to his successor. In January 1986, after screening a number of applicants, the Section Board of Directors selected Eddie Webster of Webster and Associates, Inc., an association management firm, as the Executive Director for the Dixie Section. The contract became effective on February 1, 1986. After eight months, Eddie Webster left, and Ralph "Peg" Thomas served as interim Executive Director from October 1, 1986 until February 16, 1987, when Dave Berry, PGA member, assumed the position of Executive Director until his retirement in October 2004. In March of 2005, the Section Board of Directors named Bart Rottier the new Executive Director.
It is interesting to note the membership numbers since the Dixie Section, as we know it today, was established. In 1969, before the split with Georgia, there were 175 members in the overall Georgia-Alabama area. The split left the Dixie Section, as we know it today, with 82 members. There has been a steady increase to the point that in 2017, we had 359 Members and 55 Apprentices.
Many new ideas have grown in the Section with assistance of the PGA National Headquarters. A PGA Financial Assistance Fund began in 1986 to provide college scholarships for children and grandchildren of PGA Members. In April of 2004 the Dixie Section Golf Foundation began. This Foundation was set up in a 501(c)(3) tax exempt status.
Steve Lyles
Royal C. "Bud" Burns
Jackie Maness
Conrad Rehling
Wayne Griffith
Brent Krause
Woody Woodall
Arthur "Butch" Byrd
Chris Rigdon
Jon Gustin
Dave Atnip
Robert B. Barrett
Tommy Burns
Jim Brotherton, Jr. | https://alabamanwfloridapga.com/about/history/ |
Jurriën van Duijkeren (1982), born and raised in Amsterdam, studied architecture at TU Delft and the Technical University in Prague before graduating cum laude in 2008. Prior to founding Common Practice he worked on the design and execution of large housing ensembles, schools and universities in Belgium and The Netherlands. In addition to practice, Jurriën teaches architecture at the Rotterdam Academy of Architecture and Amsterdam Technical School.
Inara Nevskaya (1982) graduated cum laude from the Architectural Institute in Moscow and studied at Shibaura Institute of Technology in Tokyo before relocating to The Netherlands. She has worked on complex projects of various scales, from private homes, to large housing schemes and commercial transformations, before co-founding Common Practice. Inara collaborated with numerous product designers and manufacturers on the development of innovative sustainable materials.
Team - Inara Nevskaya, Jurriën van Duijkeren, Mindy Li, Alexander Fischer
If you are interested in working with Common Practice please e-mail your CV and portfolio to [email protected]. Files should not exceed 5MB.
Common Practice is an office for architecture and design, established in 2017. We are committed to making spaces and buildings that contribute to the sustainable city and enrich the lives of the individuals and communities they serve.
We are curious and open-minded, seeking inspiration in both the simplicity of the everyday and the intelligence of long-standing traditions. Many answers to the challenges of the future can be found in the past. We look carefully at the history and collective experience of a place before making a proposal for the new.
We believe that a good and sustainable built environment emerges as a result of collective effort and enjoy working on assignments as part of a broader team, combining knowledge, aspirations and devotion. We appreciate that architecture can be unpredictable and surprising, as rational and poetic layers come together. | https://www.commonpractice.nl/studio |
15 Skills Every Project Manager Should Have on Their Resume
The ability to successfully execute projects from the initiation stage to completion goes beyond just technical know-how. To be effective, project managers must also work on developing the soft skills necessary to push projects to completion.
Whether you've already begun your career in project management or are hoping to pivot into the field, here are 15 skills you should seek to develop and showcase on your resume:
1. Leadership
At the end of the day, your job as a project manager is to lead your team. You need to set the vision and inspire others to achieve it - if you can do that, all while treating your team members with respect and consideration, you'll find that meeting your other goals as a PM is much easier.
2. Industry Expertise
You should know the ins and outs of your field and your company. Understanding the more technical aspects of the work you're overseeing will help you gain your team's respect, and track and evaluate their work more effectively. From understanding project management software to keeping track of emerging market trends, excelling in your own field is a non-negotiable necessity for a project manager.
3. Negotiation
When a group of people from diverse backgrounds are brought together to work toward a common goal, there is bound to be conflict. That's why one of the top project management skills is the ability to listen to all perspectives and find win-win solutions.
4. Communication
The ability to know what needs to be said and how to say it is the secret for building lasting relationships and a close-knit team.
5. Cost Control
The scope of a project manager's job profile extends beyond human resource management and also includes effectively handling logistics. Since budget management is an important part of the job, cost control is commonly cited as a top project management skill. At the end of the day, delivering a project on time and under budget is your ultimate goal.
6. Team Management
If you want to land a job as a project manager, you need to know how to delegate tasks and keep team members' professional goals aligned with larger organizational goals.
7. Risk Management
Project management is constantly evolving, as is its scope. As a result, you never know what kinds of projects or tasks might be thrown your way. In these situations, having a solid foundation in risk management will help you navigate unfamiliar terrain and make strong decisions.
8. Organization
As a project manager, you could be overseeing different projects of varying natures simultaneously. The ability to stay organized and keep your team operations streamlined by prioritizing multiple complex tasks at the same time is crucial.
9. Critical Thinking
You need to be skilled at analyzing the pros and cons of each decision, and thinking logically in order to reach conclusions.
10. Technical Writing
From updating external stakeholders on projects to internal team communications, your ability to succinctly and clearly explain a project with written updates is crucial.
11. Mentoring
As a project manager, you will work with different teams in different environments, and each team member will possess a different skill set. Considering this diversity, mentoring is another key skill you should aim to master over time. It opens up a whole new window of opportunity to ensure your team reaches its max potential.
12. Adaptability
Given the diverse nature of the job, adaptability is a crucial skill for any project manager. From getting along with team members from diverse backgrounds and experiences, to quickly adapting to new technologies, product trends, geographical locations and user demographics, good PMs know how to adapt to the situation they're in.
13. Resilience
Even though no project manager sets out to watch a project spiral downhill, every PM eventually encounters a project that just doesn't go quite right. The resilience to turn around a project that's performing poorly is key - and showcasing this resilience on your resume will help set you apart from your peers.
14. Quality Control
A project manager must not only see a project through from initiation to closure, but also ensure the end product is in line with the purpose it has been designed for. Therefore, spending time and energy on the quality of your deliverables is a hallmark of a good project manager.
15. Sense of Humor
Humor is an important people skill. For a job that largely entails managing teams of professionals, the ability to laugh and lighten up the situation when things look grim is crucial. A project manager who relies on a good sense of humor to earn goodwill is much more likely to have team members who are willing to go the extra mile when necessary.
Why Do These Project Manager Skills Count?
Reports released by the Project Management Institute indicate that the need for individuals with diverse skills in the field of project management will grow to 87.7 million by 2027, and that the success rate of a project is likely to increase by 40% when the project manager possesses the essential skills to do the job well.
If you are looking at building a career in this field, acquiring and honing these skills - and figuring out how to showcase them on your Project Management resume - will help you stand out from the crowd.
This month, our students are gaining a greater understanding of community helper occupations, such as police officers, mail carriers, medical professionals and firefighters. The children have a lot of fun imagining themselves in these important roles, and incorporating toy versions of the uniforms, equipment and vehicles that go with them.
In addition, October is National Fire Prevention Month, so we place a special emphasis on the importance of fire safety and the role of firefighters. Our classroom activities help the children become more comfortable around emergency responders in uniform, and teach them basics about what to do in case of an emergency.
Here are some ways children learn about community helpers in the classroom, as well as activities for you and your child to do at home.
In the classroom: Toddlers are fascinated with dressing up as doctors, police officers and firefighters, because they have distinct uniforms and roles that children can easily understand. During dramatic play, our teachers provide students with costumes and props, and encourage them to choose the role they want to play.
In the classroom: Our Beginner students learn about the special vehicles that community helpers use, by playing matching games, reading books, and building vehicles using cardboard boxes.
In the classroom: During fire safety lessons, many of our schools invite local firefighters to visit. Students explore the tools firefighters use, learn “Stop, Drop and Roll,” and may have the opportunity to tour a fire truck.
At home: Continue exploring fire safety by practicing “Stop, Drop and Roll” with your child. Ask him, “Who puts out fires?” and discuss what he should do if he hears a fire alarm at home.
In the classroom: Teachers introduce situations when it might be necessary to dial 9-1-1. Students practice finding 9-1-1 on different keypads, such as cell phones and landlines.
At home: Show your child photos of various community helpers and the buildings where they work. Ask him to identify the helpers and their workplaces, and describe the roles the helpers play in our community.
We provide many opportunities for students to learn about community helpers. By setting this foundation, they become more familiar and comfortable around the people that make their neighborhoods a better place. | https://www.merryhillschool.com/links-to-parents-blog/exploring-community-helpers-roles-play/ |
Ho, Samuel B.
Abstract
Background: Early hospital readmission for patients with cirrhosis continues to challenge the healthcare system. Risk stratification may help tailor resources, but existing models were designed using small, single-institution cohorts or had modest performance. Aims: We leveraged a large clinical database from the Department of Veterans Affairs (VA) to design a readmission risk model for patients hospitalized with cirrhosis. Additionally, we analyzed potentially modifiable or unexplored readmission risk factors. Methods: A national VA retrospective cohort of patients with a history of cirrhosis hospitalized for any reason from January 1, 2006, to November 30, 2013, was developed from 123 centers. Using 174 candidate variables within demographics, laboratory results, vital signs, medications, diagnoses and procedures, and healthcare utilization, we built a 47-variable penalized logistic regression model with the outcome of all-cause 30-day readmission. We excluded patients who left against medical advice, transferred to a non-VA facility, or if the hospital length of stay was greater than 30 days. We evaluated calibration and discrimination across variable volume and compared the performance to recalibrated preexisting risk models for readmission. Results: We analyzed 67,749 patients and 179,298 index hospitalizations. The 30-day readmission rate was 23%. Ascites was the most common cirrhosis-related cause of index hospitalization and readmission. The AUC of the model was 0.670 compared to existing models (0.649, 0.566, 0.577). The Brier score of 0.165 showed good calibration. Conclusion: Our model achieved better discrimination and calibration compared to existing models, even after local recalibration. Assessment of calibration by variable parsimony revealed performance improvements for increasing variable inclusion well beyond those detectable for discrimination. | https://repository.mbru.ac.ae/handle/1/646 |
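The modeling approach described above—a penalized logistic regression screened down from many candidate variables, then judged on discrimination (AUC) and calibration (Brier score)—can be sketched as follows. This is an illustrative example on synthetic data only, not the VA cohort or the study's actual code; the feature counts and class balance are borrowed from the abstract for flavor.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the candidate predictors: 174 features,
# ~23% positive class (the reported 30-day readmission rate).
X, y = make_classification(n_samples=5000, n_features=174,
                           n_informative=47, weights=[0.77, 0.23],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# An L1 penalty shrinks uninformative coefficients to exactly zero,
# which is one way a large candidate set is reduced to a sparser model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_train, y_train)

p = model.predict_proba(X_test)[:, 1]
print(f"AUC:   {roc_auc_score(y_test, p):.3f}")     # discrimination
print(f"Brier: {brier_score_loss(y_test, p):.3f}")  # calibration
print("retained variables:", int(np.sum(model.coef_ != 0)))
```

A lower Brier score indicates better-calibrated probabilities (0 is perfect), while AUC above 0.5 indicates discrimination better than chance—the two axes on which the abstract compares models.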
We propose to use single molecule electrophoresis through nanopores to determine the base sequence of nucleic acids. The method utilizes a unique analytical system in which an electric field drives single stranded nucleic acids through the transmembrane pore of alpha-hemolysin. As the polymer traverses the pore, current is partially blocked in a manner dependent upon polymer length, concentration and composition. In practice, a single pore can transport (and potentially analyze) nucleic acid fragments at the rate of 1000 bases per second, at a cost approximately 0.1 percent of traditional methods.
The specific aims of this proposal are the following: 1. We will determine if the five naturally occurring nucleotides have blockade signatures that can be readily distinguished. 2. We will establish the shortest sequence of nucleotides that can be detected with our current prototype instrument. 3. We will determine the ability of the prototype pore to overcome secondary structure by investigating single-stranded DNA with internal hairpin loops. The overall objective of the proposed research is to demonstrate the feasibility of detecting a specific electrical signal from a single nucleotide in a strand of DNA or RNA. Our preliminary results indicate that we are approaching both the time resolution and base-specific signatures required to resolve the sequence of nucleotides in a nucleic acid strand. | https://grantome.com/grant/NIH/R01-HG001826-03 |
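As a purely illustrative toy model of aim 1—distinguishing nucleotides by their blockade signatures—one can imagine each base producing a characteristic mean current blockade and classifying a noisy measurement to the nearest signature. The numeric values and function names below are invented for illustration; real signatures would come from the calibration experiments the proposal describes.

```python
import random

# Hypothetical mean blockade currents (pA) for the five naturally
# occurring nucleotides -- illustrative values, not measured data.
SIGNATURES = {"A": 85.0, "C": 70.0, "G": 60.0, "T": 92.0, "U": 78.0}

def classify(measured_pa: float) -> str:
    """Assign the base whose signature is closest to the measurement."""
    return min(SIGNATURES, key=lambda b: abs(SIGNATURES[b] - measured_pa))

def simulate_read(sequence: str, noise_pa: float = 2.0, seed: int = 0) -> str:
    """Simulate noisy blockade measurements for a strand, then decode."""
    rng = random.Random(seed)
    return "".join(
        classify(SIGNATURES[base] + rng.gauss(0, noise_pa))
        for base in sequence
    )

print(simulate_read("GATTACA"))
```

The sketch makes one point concrete: base-calling accuracy depends on the separation between signatures relative to measurement noise, which is why aim 1 (distinguishable signatures) must be established before sequence-level readout.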
Framework-inspired standards require an integrative approach that interweaves science practices, disciplinary core ideas, and crosscutting concepts. This document offers questions, strategies, and considerations for each phase of item development to aid the item developer in achieving well-aligned, multidimensional items for science assessment.
Collection
STEM
This collection of resources helps parents, educators, and others interested in integrating science, technology, engineering, and mathematics (STEM) into educational settings from early learning through college and career readiness.
Alignment Considerations for Next Generation Science Standards Assessments
Resource Type: Practice guide
2017 PDF (pdf)
Early Learning and Science Standards
Resource Type: Bibliography
2017 PDF (pdf)
This bibliography summarizes a literature review of the landscape in early learning standards and science, specifically centered on the topics of engineering and domain specific learning. The report also notes which states are incorporating detailed recommendations in science for their early learners.
Evidence-Based Practices to Support College and Career Readiness in High School: Early-College High School
Resource Type: Publication
2017 PDF (pdf)
This report includes a description of how Massachusetts used funds to support the implementation of five science, technology, engineering, and mathematics (STEM) Early College High Schools at the district level.
College and Career Readiness and Success Center
Personalized Learning with Digital Devices: Tools for Teachers
Resource Type: Practice guide
2016 PDF (pdf)
This guide provides information on digital tools and practical examples that educators can use in classrooms. The author also describes strategies for using technology in both STEM and STEAM (integrating the Arts) education.
Center on Innovations in Learning
Technical Assistance to Support the Science of Innovation
Type: Project
State(s): Arkansas, Alabama, Bureau of Indian Education, Mississippi
CIL focuses on Innovation Science, and a key aspect of Innovation Science is understanding how an organization (a school, for example) progresses with a “split screen” approach in which it is continuously improving fundamental practices while it is thoughtfully integrating “innovations.” CIL is working with a group of schools in 4 states to determine the effectiveness of this model and has published training modules on Getting Better Together, which guide a district or school’s Leadership Team in the process of improvement and innovation.
Innovation Science: Getting Better Together: A Blended Support Approach to School Improvement
Type: Project
State(s): Alabama, Virginia, West Virginia, Bureau of Indian Education
A project which blends virtual support with face-to-face support of schools implementing a school improvement plan. | https://compcenternetwork.org/collection/4753 |
The dancers wear demon costumes and masks numbering three, five, seven or nine. During the performance, the men and women hold hands. All the dancers follow the footsteps of Ghure, the leader. In certain circumstances, men and women perform separately.
Demon Dance is usually performed in densely populated areas, primarily during local festivals such as Chaitol and Bishu. It is similar to Bhangra, a Punjabi folk dance.
Snyder and collaborating partner 51B know each other's strengths and weaknesses in the studio, and "Thunda" takes the best parts of their respective themes and styles, weaving them into a flowing arrangement with strong replay value.
A true fusion of modern electronic elements and cultural soundbites makes up the technical side of the single, yet the feeling it leaves you with is hard to put into words.
Coming from Chicago, Snyder is gaining steam in the North American performance space and is certainly on a rising trajectory, a soloist to keep an eye on to say the least.
From heaters and furnaces to wood-burning fireplaces, most heating systems rely on an efficient chimney to channel smoke and combustion by-products up and out of the home, which in turn keeps the heating system working efficiently. However, over years of channeling smoke, creosote, a highly flammable substance, can accumulate inside the chimney, making it a fire hazard. Homeowners must therefore have their chimneys thoroughly inspected and serviced regularly.
The Department of Homeland Security and Emergency Services (DHSES) in New York recommends that homeowners conduct annual chimney inspections to safeguard the structure's soundness and safe operation. Proper maintenance can reveal early signs of trouble that could otherwise lead to dangerous house fires, severe property damage, or expensive repairs.
Besides removing accumulated creosote and other obstructions in the chimney, regular inspections also allow homeowners to notice physical defects, such as a tilting chimney. A leaning chimney is one of the common signs of serious structural problems that require immediate repair.
Multiple factors can cause the chimney structure to lean or tilt.
- Masonry Damage
Most chimneys are made with bricks as these are durable enough to withstand heat. However, bricks are made from porous materials, meaning they are solid but contain void spaces. Due to this, bricks are susceptible to water damage.
Weather elements, such as rain, sleet, and snow, can cause tiny cracks in the bricks’ hard outer surface and allow water to seep inside them. The freeze and thaw cycle, which can repeat for years, expands the breaks even further, resulting in the brick and mortar’s deterioration and erosion.
Cracked or missing sections of bricks and mortar can unbalance a chimney, causing it to tilt on one side or, in severe cases, collapse.
- Worn Brackets and Clasps
Some chimneys have brackets and clasps installed to hold the structure's upper segments in place and support them. However, like other components, these are exposed to the weather; therefore, they can wear out and become loose over time.
- Improper Installation
Chimneys are built using materials such as stainless steel, concrete, pumice, clay or ceramic, and plastic. These structures are thick, heavy, and rigid to withstand the fireplace’s heat and flames and other harsh elements, such as weather conditions.
Poor craftsmanship of the chimney footing, the chimney component supporting the whole structure, can cause it to tilt over time and possibly suffer from other types of damage. Additionally, inferior bricks and mortar can also cause the chimney structure to suffer from cracks.
- Insufficient or Inferior Footing
Chimneys are constructed with a concrete footing, or a chimney pad, as these are compact and heavy. The footing provides stability in the chimney structure and prevents it from leaning away from the house.
As the chimney footing plays a significant role in maintaining the structure’s stability, it must be sturdily constructed. The 2020 Building Code of New York State stipulates that a concrete footing for the chimney must be one foot thick and extend not less than six inches on each side.
When the chimney has insufficient footing, or the chimney pad is too small to bear its weight, the structure can fail or lean over time. Similarly, chimney footings made with inferior materials can cause chimney movement: they are more prone to cracks, particularly when exposed to the same moisture and freeze-thaw cycles that can destroy the chimney masonry. Cracks can cause the chimney to angle and lean to one side.
- Soil Issues
Chimneys can experience tilting problems when the structure has been built on poor supporting soils. When the soil supporting the chimney is not sufficient to hold its weight, the structure may begin to sink, crack, or crumble, causing it to separate from the rest of the home and lean inward or outward.
Soils are composed of different ingredients, such as sand, clay, and sandy loam. These ingredients have varying characteristics and behave differently under wet and dry conditions.
Movement in a chimney's footing and structure can result from these differences. Certain soil types can expand and contract significantly and repeatedly, subjecting the foundation to settling or expansive stresses that often result in damage.
In cases where the soil is not adequate to support the weight of the chimney, footings might need to be dug deeper and wider to accommodate the load.
- Improper Water Drainage
When a home has a poorly designed drainage system, the gutters and downspouts may release water directly at the foundation instead of redirecting it away from the house. The water then infiltrates the expansive soil around the house and presses against the home's and chimney's foundations. This can cause cracking throughout the home's foundation, including the chimney's footing.
- Normal House-Settling
The shifting or movement of a home’s foundation is a normal phenomenon. It can happen at any time, most notably when the water around and inside the foundation begins to expand as it freezes.
The foundation’s movement can cause the chimney’s footing to move, resulting in the structure to start leaning or tilting toward or away from the house.
Conclusion
Proper chimney inspection and maintenance allows homeowners to look out for signs of structural deterioration. Although some damage may only be seen through a more in-depth chimney inspection, other damage, including a chimney leaning away from the house, can be spotted through visual inspection.
A leaning chimney is dangerous as it could fall and cause physical injuries to anyone in its path. Depending on the design of the home and chimney, a chimney falling away from the house could create a floor-to-ceiling hole in the exterior wall. Additionally, the chimney's tilting can result in flashing issues and leaks in the structure's interior. Therefore, it is crucial to address a tilting chimney immediately to minimize damage to the house.
The chimney is among the heaviest and most complicated of all housing components. Thus, regardless of the cause of the problem, a licensed chimney contractor should carry out any immediate repair or replacement of the structure. An experienced chimney and masonry professional can guarantee a quality solution to a homeowner's chimney-related problems.
Long Island Roofing and Chimney is a reliable and trustworthy roofing and chimney contractor on Long Island. With over 15 years of high-quality professional services, they are among the best companies to take care of roofing, chimney, gutters, and other home maintenance and improvement projects.
Contact Long Island Roofing and Chimney today at 631-205-6177 (Suffolk County), 516-605-6108 (Nassau County), or click here to request a quote. | https://longislandroofingandchimney.com/common-causes-of-a-leaning-chimney/ |
Going outside in the wintertime is good for you. After all, whether you’re trying to fit in a workout or just in need of some fresh air and sunshine, you don’t want to stay cooped up indoors until spring!
But not so fast—cold weather demands a little bit of extra care, especially if you’re going to be engaging in physical activity. Before you head out the door, be sure to take these precautions:
Check the weather. Remember to consider the temperature before you go outside—and that includes the wind chill. Extreme cold, even when you’re bundled up properly in warm clothing, can be dangerous. Any temperature below 0 degrees Fahrenheit poses a risk. In these situations, avoid outdoor exercise and do an indoor workout instead. The same goes for days when it is precipitating, unless you have waterproof exercise attire.
Layer up properly. There's a science to layering your outdoor winter clothing. Start with an innermost layer in a moisture-wicking fabric (think Under Armour or similar sportswear). Avoid cotton, which does a poor job of retaining body heat and will actually make you colder if it gets wet from snow or perspiration.
After your first layer, put on a warm middle layer of fleece or wool. Take care that your clothing is loose enough that outer pieces will go over inner pieces, but still fitted enough that you won’t get wind trapped between layers.
Your top layer should be something light and waterproof, especially if you’re going out in the snow.
Wear gloves, a hat, and proper socks. While you’re dressing yourself, remember that blood flow concentrates in your core (i.e. your shoulders, chest, stomach and hips), which can mean there’s less blood to warm up your head, hands and feet. Protect your hands and feet by wearing gloves and socks that have a moisture-wicking inner layer of fabric and a heavier, warmer outer layer.
Choose a hat made from wool or fleece that will cover your ears.
If you find that the cold air is burning your lungs, something that individuals with asthma are particularly prone to, you can wear a scarf around the lower half of your face to warm the air you’re breathing in.
Recognize the signs of hypothermia. If your body is losing body heat faster than it can produce it, your body temperature will drop dangerously low. Once it goes below 95 degrees, you are at risk of hypothermia, which can cause such problems as a heart attack, kidney problems, and liver damage.
Watch for signs of hypothermia like pale skin, a puffy face, slowed speech, and slurred words. Late-stage signs of hypothermia include slow movement, clumsiness, shallow breathing, and a slowed heart rate. In extreme cases of hypothermia, you could black out or lose consciousness.
Hypothermia is a medical emergency that demands attention. While you wait for help from medical personnel, try to warm a hypothermic person up by removing any wet clothing, covering the person with a blanket or dry coat, and moving them inside or to a warmer location. | https://whatsupmag.com/health-and-beauty/health/brr-stay-healthy-in-the-cold/ |
PSI AquaPen-C (AP-C 100)
AquaPen-C is a new cuvette version of the FluorPen fluorometer. It is a pocket-sized lightweight device that is very convenient for quick, reliable, and easily repeatable measurements in the field or laboratory. It is equipped with a blue and a red LED emitter, optically filtered and precisely focused to deliver light intensities of up to 3,000 µmol(photon)·m⁻²·s⁻¹ to measured suspensions. Blue excitation light (455 nm) is intended for chlorophyll excitation, i.e., for measuring chlorophyll fluorescence in algal cultures. Red-orange excitation light (620 nm) is intended for excitation through phycobilins and is suitable for measuring in cyanobacteria. Due to its high sensitivity (0.5 µg Chl/l), the AquaPen-C can measure natural water samples containing low concentrations of phytoplankton.
Students are the future of society, and they need a strong foundation for their future lives. Here is a list of powerful affirmations for students that will help show them the right path to their future.
1 to 34 Affirmations for Students
1. Above all, I want to learn.
2. Anything is possible.
3. As my demand for my learning grows, my learning expands.
4. During the exams, I recall information quickly and easily.
5. Education is the gateway to my future! Today I make the most of my academic opportunities.
6. Every day in every way I am becoming more focused on what I do.
7. Every day, I improve myself in some way.
8. Exams are fun.
Read: Powerful Affirmations for kids
9. Focusing comes naturally to me.
10. For today, I am truly attentive to my work.
11. Getting good grades is natural for me.
12. I act kind and courteous to all people.
13. I always clear my exams.
14. I always enjoy my studies.
15. I always learn from my mistakes and they also teach me how to be better.
16. I always manage my time and study schedule wisely, I always start with planning to make everything before the deadline.
17. I always pass exams with flying colors.
18. I always stay focused on my studies
19. I always stay focused on my studies.
20. I am a beautiful person. I matter. I am strong. I am genuine. I can do anything I put my mind to. I’ve got this.
21. I am a gifted student, and I can achieve anything.
22. I am a quick learner and happy all the time.
23. I am a talented and prominent student.
24. I am a talented student, I am going to learn a lot today.
25. I am a very quick learner.
26. I am advancing to new levels by learning more each day.
Read: Positive Affirmations for success
27. I am always open to learning in a better way.
28. I am always relaxed during exams.
29. I am an excellent student.
30. I am blessed to live this life that I have created.
31. I am building my future.
32. I am capable.
33. I am confident I can solve life’s problems successfully.
34. I am easily able to sit for exams without stress or anxiety.
35 to 68 Affirmations for Students
35. I am excited about the chance to be a college student.
36. I am excited to step into a new world.
37. I am focused and concentrated whenever I am studying, nothing grabs my attention away.
38. I am free of distractions.
39. I am good at turning my nervous feelings into high confidence.
40. I am improving my study habits every day.
41. I am in control of my progress.
42. I am kind and courteous to all people.
43. I am learning to enjoy studying.
44. I am motivated to learn more, dig deeper and conduct great research.
45. I am on the journey of becoming a very successful student.
46. I am open and ready to learn.
47. I am recognized as a student with immense focus and determination.
48. I am relaxed during exams.
49. I am truly attentive to my work.
50. I am very focused on my preparation.
51. I am very good at gaining knowledge and making proper use of it.
52. I am well prepared for every exam.
53. I am worthy of deep connections.
54. I am worthy to receive.
55. I begin studying well before exams are scheduled.
56. I believe in myself and I am capable of becoming a great student.
57. I can change the world.
58. I can get through everything.
59. I choose healthy ways to deal with stress.
60. I chose to move forward every day, growing and learning as I go!
61. I concentrate all my efforts on the things I want to accomplish.
62. I create a healthy balance in my life.
63. I easily understand and retain what I study.
64. I embrace life as a student.
65. I enjoy learning more each new day.
66. I enjoy studying for my exams and tests.
67. I enjoy the subjects I am studying.
68. I feel good about myself and my preparations for tests and exams.
69 to 102 Affirmations for Students
69. I feel thankful to be a student and it shows.
70. I focus on one task at a time.
71. I focus on the important tasks first.
72. I focus well to get good grades.
73. I have a sharp mind that makes me a very good student.
74. I have a winner’s mindset and I love accomplishing my goals.
75. I have self-respect and dignity.
76. I know how to thrive under exam pressure.
77. I know what I need to know for this exam.
78. I learn to make studying fun.
79. I learn, comprehend and remember fast and easily.
80. I look forward to a great result of my exams.
81. I love and approve of myself.
82. I love gaining knowledge which helps me in growing to my full potential.
83. I love my student life!
84. I love the challenge of a tough exam.
85. I love to learn and it is quite easy for me.
86. I make a positive impact in other students’ lives.
87. I pass exams easily.
88. I prepare for exams systematically and intelligently.
89. I radiate positive energy.
90. I remove distractions to help me have more focus.
91. I respect my education because it creates a more complete me.
92. I start with a positive mindset.
93. I stay focused while studying for exams.
94. I strive to do my best every day.
95. I study and comprehend fast.
96. I study efficiently, effectively, purposefully, and whole mindedly.
97. I study hard and regularly.
98. I study well.
99. I succeed even in stressful situations.
100. I value my education as it prepares me for a bright future.
101. I will continue to expand my mind.
102. I will do my exams well.
103 to 136 Affirmations for Students
103. I will do well in this exam as I am well prepared.
104. I will focus on the important things, and let the rest go.
105. I will follow my dreams.
106. I will pass my exam!
107. I will win at what I put my mind to.
108. I work both hard and smart to clear my exams.
109. I’m only human and we all make mistakes.
110. It is possible for me to achieve all my goals because my true potential is limitless.
111. It’s okay not to know everything. I can always learn.
112. Learning is life. I love learning and I am good at it!
113. Learning, understanding, and applying come naturally, constantly, and effortlessly.
114. My ability to focus is increasing which is making me a peak performer.
115. My confidence grows when I step outside of my comfort zone.
116. My mind absorbs and processes new information with greater speed.
117. My mind’s ability to learn and remember is increasing every day.
118. My mistakes help me learn and grow.
119. My self-worth is not determined by any number on a scale.
120. My time is valuable.
121. Nothing can stop me from living the life of my dreams.
122. Recalling information while writing in exams is easy.
123. Staying focused now comes naturally to me.
124. Strong wisdom is developed through wise and discerning study.
125. Studying hard comes naturally to me.
126. Studying is easy and pleasant for me, I can see an improvement every day.
127. Studying is very easy for me and I am doing it well.
128. Studying with focus comes easily and naturally to me.
129. Success is not final, and failure is not fatal. It’s the courage to persevere that counts in the end.
130. There is no reason for me to compare myself to others.
131. Today I set aside my fears and achieve all my educational goals.
132. Today I take charge of my education. The more I learn, the more I achieve.
133. Whatever I need to learn always comes my way at just the right moment.
134. When I am exposed to information that benefits me, I absorb it like a sponge!
135. While writing answers, I recall information quickly.
136. With every passing day I am becoming adept at studying. | https://knoansw.com/powerful-affirmations-for-students/ |
Abstract
The introduction of lambdas and first-class function objects forever changed the nature of C++, opening up the floodgates to functional programming. Sometimes the best library design is based on a pattern that's very familiar to a Haskell programmer but alien to a C++ programmer. The new std::future proposal for C++17 hides several functional patterns, including that of a monad -- the boogeyman of generations of imperative programmers. But once seen in action, the monad becomes just one more pattern in the toolbox of a library designer.
Bio
Bartosz Milewski always tries to be where the action is. He started as a quantum physicist when superstrings were hot, joined the software industry to build the search engine for Microsoft Windows, became an avid proponent of object-oriented programming, and wrote a book on C++. He got into concurrency and parallelism when the multicores hit the market, and now he's evangelizing functional programming as the Holy Grail of software development.
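The "monad in your future" idea from the abstract can be sketched outside C++. The toy Future class below is an invented, synchronous stand-in, not the actual std::future API; its then() plays the role of monadic bind, loosely mirroring the implicit unwrapping that the C++ futures proposal describes for continuations that themselves return futures.

```python
class Future:
    """A toy, already-completed future; 'then' plays the role of monadic bind."""

    def __init__(self, value):
        self.value = value

    def then(self, fn):
        result = fn(self.value)
        # If the continuation returns a Future, flatten it (monadic bind);
        # otherwise wrap the plain value (functorial map).
        return result if isinstance(result, Future) else Future(result)

# Chaining continuations instead of blocking on intermediate results:
f = Future(21).then(lambda x: x * 2).then(lambda x: Future(x + 0))
print(f.value)  # 42
```

The point of the pattern is that callers compose steps with then() and never have to care whether each step returns a plain value or another future.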
Related Talks
Scala Monads: Declutter Your Code With Monadic Design
31 minutes
scala
monad
monadic
design patterns
monadic design
dan rosen
marakana
techtv
Code
Demo
In this video tutorial, Dan Rosen will show you how to use Scala's capacity for monadic design to eliminate repetitive boilerplate in your code. You'll learn to recognize places where the perpetual clutter of null checks, try-catch blocks and more can be hidden within a monad, to make your ...
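The talk itself is in Scala; as a rough, language-neutral illustration (here in Python) of the pattern the blurb describes, a minimal Maybe wrapper can hide the null checks inside its bind operation. The users table and the lookup helpers below are invented purely for illustration.

```python
class Maybe:
    """A minimal Option/Maybe monad: wraps a value that may be None."""

    def __init__(self, value):
        self.value = value

    def flat_map(self, fn):
        # The monadic "bind": skip fn entirely if there is no value.
        return self if self.value is None else fn(self.value)

    def __repr__(self):
        return f"Maybe({self.value!r})"

# Hypothetical lookups that may fail; each returns a Maybe.
users = {"dan": {"email": "dan@example.com"}}

def find_user(name):
    return Maybe(users.get(name))

def find_email(user):
    return Maybe(user.get("email"))

# Without the monad: nested "if user is not None: if email is not None: ..."
# With it, the null checks live once, inside flat_map:
print(find_user("dan").flat_map(find_email))      # Maybe('dan@example.com')
print(find_user("missing").flat_map(find_email))  # Maybe(None)
```

Scala's Option with flatMap (and for-comprehensions) gives the same effect: the boilerplate is written once in the monad rather than at every call site.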
Brian Beckman: Don't fear the Monad
an hour
haskell
monads
programming
brian beckman
functional programming
Haskell
Cross-posted from MSDN's Channel 9. Functional programming is increasing in popularity these days, given the inherent problems with shared mutable state that are rife in the imperative world. As we march on to a world of multi- and many-core chipsets, software engineering must evolve to better equip software engineers with ...
Hey Underscore, You're Doing It Wrong!
37 minutes
brian lonsdorf
html5
html5devconf
san francisco
Underscore
Underscore.js
functional programming
marakana
tips
tricks
use underscore.js productively
Brian Lonsdorf has a love-hate relationship with Underscore.js. Yes, it offers a bunch of tools included in today's functional programming paradigm (like map, filter, reduce, take, drop, compose, etc.), but in Underscore the functions are sometimes verbose and unintuitive. It claims to be a functional programming library, but how true ...
RACify Non-Reactive Code by Dave Lee • GitHub Reactive Cocoa Developer Conference
24 minutes
git
github
github training
github foundations
basics
collaboration
git basics
VCS
programming
versioncontrol
version control
open source
software development
mercurial
bazaar
perforce
subversion
CVS
octocat
Facebook's Dave Lee presents a talk about "RACifying" non-reactive code at the 2014 Reactive Cocoa Developer Conference hosted by GitHub. As always, feel free to leave us a comment below and don't forget to subscribe: http://bit.ly/subgithub Thanks! Connect with us. Facebook: http://fb.com/github Twitter: http://twitter.com/github Google+: http://google.com/+github LinkedIn: http://linkedin.com/company/github About GitHub GitHub is the best place to share code with ... | https://www.programmingtalks.org/talk/c17-i-see-a-monad-in-your-future/ |
It was a two-day forum on the South China Sea, held July 10-11 at the Center for Strategic and International Studies (CSIS) in Washington. The speakers, many from the United States, Japan, Vietnam and the Philippines, pointed fingers at China for being "provocative," "aggressive," "coercive" and "changing the status quo."
Chu Shulong, a professor of political science and international relations from Beijing-based Tsinghua University, felt unease.
He pointed out that the US pivot to Asia-Pacific policy has intentionally and unintentionally helped escalate tensions in the South China Sea.
"The simple fact is that tension in the South China Sea has become higher in the last three years since US adopted its pivot strategy, especially since Secretary (of State) (Hillary) Clinton’s Hanoi speech in July 2010," Chu said.
His thoughts are widely shared among the Chinese population, who largely see the US as biased in its role in both South and East China seas. Many believe the US is trying to profit to advance its dominance in the region.
Chu, a well-respected intellectual, has spent three decades studying US foreign policy. He said that the previous US Democratic administration under President Bill Clinton had emphasized three pillars in China policy: economic interest, democracy and human rights.
"But from the pivot strategy in the last few years, we hear too much talk about security, we don’t hear much talk about democracy, human rights issues or economic development and cooperation," he told the audience.
Chu not only believes the US is behind much of the tension in the South and East China seas; he also said the US strategy is imbalanced, unreasonable and biased.
For one thing, in the two-day CSIS conference, no US officials present talked about anything wrong done by other countries, except China, Chu pointed out.
Examples Chu cited included that when the Philippines sent large military vessels to Huangyan Island in the South China Sea, the US kept silent, and when Vietnam passed a unilateral law to change the status quo, the US government again said nothing.
Pointing to the big projector screen on the wall at the CSIS conference room, Chu said all the pictures in the presentations were about construction done by China, not by other countries on the rocks and reefs.
"We never see here the picture of construction by others, so this is great bias, unreasonable and unfair treatment that China cannot accept," he said.
In fact, when visiting Japanese Defense Minister Itsunori Onodera spoke at CSIS on July 11, he also repeatedly used harsh words to blame China for allegedly threatening to change the status quo by force and coercion, but never mentioned or even tried to justify the Japanese government’s nationalization of the Diaoyu Islands in late 2012. Most Chinese believe the nationalization caused the current tensions by changing the status quo since the 1970s and 1980s when the two governments decided to shelve the historical maritime territorial disputes for future generation.
The US has also chosen to keep silent on the unilateral and provocative Japanese action, reinforcing the idea among many Chinese that the US is biased in its policy in the region to favor its military allies.
While international law was one of the key words at the conference, Chu pointed out the irony of the US position. He admitted that China has a long way to go in building the rule of law, but said the US is in no position to lecture on this because it has not joined many international legal instruments, such as the International Criminal Court, the United Nations Convention on the Law of the Sea and some international conventions on human rights.
"So the US is just talking about international law to require others when it’s needed, but feel free when it does not need international law for itself," Chu said, citing the example of the Iraq war.
To Chu, it does not make sense for the US to accuse China of threatening the use of force or coercion when the US is increasing its military presence in the South China Sea. "What we should call this? It’s also threatening the use of force, it’s also coercion."
The bespectacled political science professor, speaking English in his slow and soft voice, reminded the audience that in the last 20 years it was the US which most often used military force and coercion in the world.
He said that unlike what many US officials and academics think, President Xi Jinping’s foreign policy is not a departure from previous Chinese leaders but basically a continuation.
Chu might have looked a bit lonely among the panelists at CSIS last week, but some US pundits, including Bonnie Glaser, a CSIS senior adviser for Asia who has been critical of China on the South China Sea issue, had to admit that the Chinese government policy on South China and East China seas enjoyed wide support among the Chinese people. | https://usa.chinadaily.com.cn/opinion/2014-07/14/content_17761306.htm |
Definition:
Clinical trials are used to assess the safety of new medicines before approval. Pharmacovigilance is the process of collecting safety data after a drug has been approved for use and reporting this information to the regulatory authorities. The data collected include adverse effects associated with use of the drug.
Pharmacovigilance is required after marketing approval to detect potential adverse reactions caused by use of the medicine. In particular, it can detect rare side effects that are not picked up during clinical trials or that affect specific groups of individuals.
A phase 4 clinical trial investigates the occurrence of side effects caused by a new treatment over time, after it has been approved and is on the market.
The CDC’s cloth mask guidance to stop the spread of COVID-19:
- The CDC recommends that fully vaccinated people maximize protection from the Delta variant, and avoid possibly spreading it to others, by wearing a mask indoors in public when in an area of substantial or high transmission. For more, please visit the CDC’s fully vaccinated guidance site.
- For individuals aged 2 and older who are not fully vaccinated, the CDC recommends wearing a mask in indoor public places.
- For more information on masks, please visit the CDC’s guide to masks
Dear Pandemic has answered many reader questions about masks; visit the Dear Pandemic website and search the category "masks" for answers to related questions.
The infrastructure gap in Latin America and the Caribbean
The water and sanitation, electricity, transportation, and telecommunications sectors in Latin America and the Caribbean are projected to require a total investment of US$2.2 trillion, or 3.12% of GDP annually, through 2030.
Due to the COVID-19 pandemic, many economic, social, and environmental problems in Latin America and the Caribbean (LAC) have gotten worse. In response, the Inter-American Development Bank (IDB) has set out in its Vision 2025 a set of guiding principles and priorities that are in line with the Sustainable Development Goals (SDGs) and aim to help achieve sustainable and inclusive economic growth.
To contribute to this end, the IDB conducted a study whose main objective is to estimate the investments that LAC would need to make by 2030 to make progress in meeting the SDGs, which propose comprehensive goals that incorporate criteria of affordability, resilience, and sustainability that require the adoption of public policies beyond the investments needed to provide more and better infrastructure.
This study gives a modular and consistent way to figure out how much infrastructure investment is needed. It can be found at: https://interactive-publications.iadb.org/La-brecha-de-infraestructura-in-America-Latina-y-el-Caribe.
According to the results of the research, it is estimated that Latin America and the Caribbean needs to invest a total of US$2.2 trillion, equivalent to 3.12% of its Gross Domestic Product (GDP) each year until 2030, in the water and sanitation, energy, transportation, and telecommunications sectors to expand and maintain the infrastructure needed to meet the SDGs. Of this investment, 59% should go to building new infrastructure and 41% to maintenance and replacement of existing assets.
It is estimated that 47% of the investment needed in LAC is associated with meeting SDG-9, which includes investments associated with road, airport, and telecommunications infrastructure; in second place are investments related to SDG-7, which represents 26% of infrastructure investment needs by 2030; in third place is SDG-6, which represents investments of around 17% of the total. Finally, meeting SDG-11 requires investment in urban mass transit systems and represents 10% of total estimated investments.
Water and sanitation services, including wastewater treatment, require an average annual investment of 0.5% of the region's GDP. The electricity sector, for its part, must invest 0.8% of GDP annually to achieve universal access to electricity for the entire population and advance the decarbonization of the electricity generation matrix. The transportation sector needs an average annual investment of 1.4% of GDP, while connecting households to the internet through broadband and mobile technologies requires an average of 0.4% annually.
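Taken together, the sector figures above line up with the headline estimate. A quick arithmetic check (a sketch in Python; the sector labels and numbers are simply those quoted in the summary above):

```python
# Average annual investment needs by sector, as a share of regional GDP
# (percent), using the figures quoted in the study summary above.
sector_needs_pct_gdp = {
    "water and sanitation": 0.5,
    "electricity": 0.8,
    "transportation": 1.4,
    "telecommunications": 0.4,
}

# The per-sector figures should add up to roughly the headline 3.12% of GDP.
total = sum(sector_needs_pct_gdp.values())
print(f"Total annual need: {total:.2f}% of GDP")

# Each sector's share of the total annual investment effort.
for sector, pct in sector_needs_pct_gdp.items():
    print(f"  {sector}: {pct / total:.0%}")
```

The small gap between the 3.10% sum and the reported 3.12% presumably reflects rounding of the per-sector figures; transportation dominates the total, consistent with SDG-9 accounting for the largest share.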
Although these estimates are subject to limitations, it is hoped that they will inform decisions on how to close the infrastructure gaps in Latin America and the Caribbean and accelerate progress toward the SDGs. | https://www.mexicanist.com/l/the-infrastructure-gap-in-latin-america-and-the-caribbean/
Scientists discover a process that could enhance our ability to harvest energy from the Sun for electricity and fuels.
A process to enhance the performance of solar technologies such as solar cells and photocatalysts, and potentially make their production cheaper, has been discovered by scientists.
Solar cells take energy from the Sun and convert it into electricity. But energy from the Sun can also be harnessed to create other fuels such as hydrogen, which could be used for example in cars. These ‘solar fuels’ are produced by mimicking photosynthesis, the process used by plants to create energy from sunlight.
Solar fuels could help tackle climate change, as they can be created without producing carbon dioxide, a greenhouse gas. They could also directly replace fossil fuels in many applications.
However, photosynthesis is a complicated process, and there are several challenges to replicating it. One of these challenges is that catalysts – materials that help the reaction proceed – are often expensive and inefficient, preventing the process from being easily scaled up.
In a new study, published in the journal Advanced Materials last week, researchers from Imperial College London and Queen Mary University of London (QMUL) demonstrate that a unique property of the material barium titanate could lead to more efficient solar cells and catalyst systems. | http://www.learn.ca/tag/barium-titanate/ |
Galar is a planet protected by the Asgard. It recently developed its own version of a Stargate program where it explores and tries to make alliances with the cultures it encounters. Experimenting with alien technologies, it modified a Goa'uld memory device to create a way to graft memories in the hopes of cutting down training time for highly skilled professions, such as surgeons and engineers. It was first introduced in the episode 9.12 "Collateral Damage".
Factoids
- Names and Designations: Galar
- Number of Suns: At least one
- Number of Moons: Unclear; may be moon itself in orbit around a ringed planet
- Source of Address: Undetermined, possibly Asgard
- Introduced in Episode: 9.12 "Collateral Damage"
- Earth Cultural/Technological Equivalent: Roughly equivalent to Earth's modern society
- Main Interest: Technology exchange; alliance
- Influenced/Dominated by: Asgard
- History of Stargate: Unknown
Stargate Glyphs
Unknown
Geopolitical Structure and History
Galar was home to a human population originally under the domination of the Goa'uld. At some point generations ago, it came under the protection of the Asgard, and the Galarans were able to develop technologically. They discovered the use of the Stargate, which they called "the Ring", and used it to explore new worlds. The Ring Program was remarkably similar to Stargate Command, run as a classified military project. An Emissary met people from over a dozen worlds and realized that Galar was vulnerable to far more technologically advanced species who understood interstellar travel.
Alien artifacts had also been found on Galar. By modifying a Goa'uld memory device, the Galarans were able to graft memories from one person to another, like an organ transplant. The hope was to copy the knowledge and experience of skilled professionals, such as surgeons, onto other candidates, thereby cutting the necessary training to a fraction of the required time. In this way, they hoped to make technological leaps and catch up to the alien cultures they met.
As for other technology, the Galarans had electrical power, computer equipment, high-rise buildings, and transportation similar to automobiles and trains. At the time SG-1 visited the planet, the population appeared to live in a post-industrial/technological age similar to present-day Earth. The area SG-1 visited was an urban center. | http://stargate-sg1-solutions.com/wiki/Galar
Recently I acquired a Raspberry Pi 4 and decided to build a tiny computer cluster out of it. The goal is to play around a bit with parallel computing technology.
How To Create Julia Apps
A simple step by step guide to creating a Julia app
How To Create Julia Packages
A simple step by step guide to creating a Julia package
QR Decompositions and Rank Deficient Matrices
We discuss the necessary changes to our QR decomposition algorithms to handle matrices which do not have full rank.
QR Comparison with other Implementations
We developed a QR decomposition algorithm, based on the orthogonalisation process of Gram-Schmidt, in a series of posts here, here, here, and here. Let’s have a look at how well this algorithm performs against built-in implementations from Julia and other programming languages.
QR Decompositions with Reorthogonalisation
Problem Formulation: We already discussed QR decompositions and showed that using the modified formulation of Gram-Schmidt significantly improves the accuracy of the results. However, there is still an error of about $10^3 M_\varepsilon$ (where $M_\varepsilon$ is the machine epsilon) when using modified Gram-Schmidt as the base algorithm for the orthogonalisation.
QR Decompositions
We consider the necessary changes to the Gram-Schmidt orthogonalisation to obtain a QR decomposition.
Gram-Schmidt vs. Modified Gram-Schmidt
We compare the accuracy of the classical Gram-Schmidt algorithm to the modified Gram-Schmidt algorithm.
Floating Point Accuracy and Precision
Floating point computations on computers may behave differently than one might expect. Every software developer should be aware of these effects, since computed results may be off by orders of magnitude in the worst case.
PDE-based image reconstruction and compression
It is possible to compress/inpaint images from very little data. In order to obtain reconstructions that are comparable to the original image it is necessary to optimize the underlying interpolation data. | https://www.laurenthoeltgen.name/tag/julia/ |
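Several of the posts listed above hinge on the difference between classical and modified Gram-Schmidt: whether each projection is subtracted from the original input vector or from the running, partially orthogonalised one. A minimal sketch of the modified variant, written here in plain Python rather than the Julia used in the posts (the helper names are mine, not the author's):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def axpy(alpha, u, v):
    # elementwise v + alpha * u
    return [b + alpha * a for a, b in zip(u, v)]

def modified_gram_schmidt(vectors):
    """Orthonormalise a list of vectors.

    Modified Gram-Schmidt subtracts each projection from the *current*
    working vector w (not the original input), which is what gives the
    better numerical behaviour the posts discuss.
    """
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            w = axpy(-dot(q, w), q, w)  # remove the component along q
        n = dot(w, w) ** 0.5
        if n > 1e-12:  # skip (numerically) linearly dependent vectors
            basis.append([a / n for a in w])
    return basis

Q = modified_gram_schmidt([[1.0, 1.0, 0.0],
                           [1.0, 0.0, 1.0],
                           [0.0, 1.0, 1.0]])
# Off-diagonal inner products of an orthonormal basis should be ~0.
print([abs(round(dot(Q[i], Q[j]), 6)) for i in range(3) for j in range(3) if i < j])
```

The classical variant would instead compute each projection coefficient against the original input vector; swapping that in and comparing the off-diagonal residuals on nearly dependent inputs is the kind of accuracy comparison the posts above carry out.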
The National Institute for Health Research (NIHR) Maudsley Biomedical Research Centre (BRC) next week embarks upon the next stage of its development, with the commencement of its latest £66m award from the Department of Health.
Based across the Denmark Hill Campus of King’s College London, with additional units at King’s Guy’s campus, the NIHR Maudsley BRC is a partnership between South London and Maudsley NHS Foundation Trust and the Institute of Psychiatry, Psychology & Neuroscience at King’s College London.
Originally established in 2007 through open competition, the Maudsley BRC brings together scientists, clinicians, mental health professionals, service users and carers. It aims to accelerate the translation of novel scientific discoveries into improved treatments and clinical care in mental health, neuroscience and dementia.
The NIHR Maudsley BRC is dedicated to experimental medicine and translational research in mental health and related disorders across the life span, from childhood through to older adults. Since its original award, it has received two further rounds of five-year funding from the Department of Health through NIHR, with its latest award set to run until March 2022.
Following NIHR’s decision to discontinue the specialist Biomedical Research Units established in 2011, the work of the Maudsley Dementia Biomedical Research Unit (BRU-D) will be continued and expanded upon in the NIHR Maudsley BRC as a theme dedicated to dementia and related disorders.
The new NIHR Maudsley BRC funding represents a substantial uplift compared to the previous BRC funding round. The new funding will allow it both to build on its current work and expand into new areas including substance use, pain and mobile health technologies.
- Driving innovation in the prevention, diagnosis and treatment of ill-health.
- Translating advances in biomedical research into benefits for patients.
- Supporting the contribution of the NHS to the nation’s international competitiveness.
The Government has also pledged £64 million over the next five years to the BRC at Guy’s and St Thomas’ NHS Foundation Trust and King’s. This combined investment of over £130 million will allow the Guy’s and Maudsley BRCs to work across King’s Health Partners to gain new insights into common themes, especially the interface between mental and physical health.
Find out more about the new BRC by attending our next Open Seminar on Tuesday 19 April, which will outline the new BRC’s research programme and priorities, and will also focus on its work in training and informatics. | https://www.maudsleybrc.nihr.ac.uk/posts/2017/march/nihr-maudsley-brc-commences-five-year-research-programme/ |
If you have never heard of derealisation or depersonalisation, it is no surprise: of all mental health issues, it is one of the rarer afflictions. Depression and anxiety are estimated to affect around 1 in 4 people in the UK, whereas derealisation is estimated to affect as few as 1 in 50 people, and most cases go undiagnosed.
Dissociation
This is the feeling that comes as part of derealisation, and it is one that most people have experienced: a feeling that the world we live in isn’t real. Sometimes it can be quite cool to “zone out” and imagine that this whole world is a fictional construct. But there is a big difference between fleeting thoughts and a constant feeling that reality isn’t real!
Derealisation from a personal perspective
One sufferer of the condition likened it to constantly feeling like you are in a dream. She claims that nothing feels real, which can make everything seem pointless. Can you imagine that? The disconnect can have a profound effect on sufferers’ lives. What would be the point of going to work if nothing was real? This also has a dramatic effect on sufferers’ ability to form meaningful relationships, and it can have a severe effect on memory.
Depersonalisation from a personal perspective
This is a mental disconnect with your own body. It can be terribly frightening and disconcerting. One sufferer describes it as looking down at your hands and not feeling like they belong to you. You can also feel like your soul does not belong in your body. This is part of the reason that many sufferers feel suicidal.
What the medical professionals say
Dissociation is like the brain’s parachute, pulling us away from negative feelings. It is nearly always a result of trauma and is a kind of extreme form of repression. Rather than burying the hurtful feelings, the brain tries to separate from them in a more dramatic and pronounced way. Derealisation is a disconnect from your environment, whereas depersonalisation is a disconnect from your physical self. Both are rooted in similar causes.
Causation
As touched upon above, these conditions are often caused by trauma. However, this doesn’t mean that a sufferer must have been through a car crash or been assaulted. Quite often it is an amalgamation of minor traumas that your adult self might look at and deem irrelevant, for example, the time an adult spanked you for making too much noise. While these don’t seem like a big deal to the adult version of you, they were enough for the child version to develop a coping mechanism.
Famous sufferers
As stated, the condition often goes undiagnosed, and as with a lot of mental health conditions, people aren’t always keen to come forward. Chester Bennington, the lead singer of Linkin Park, was allegedly a sufferer, and this can be seen in the lyrics to some of his more iconic songs. The same can be said of Counting Crows frontman Adam Duritz, who is refreshingly open about his condition.
Treatment
Generally, antidepressants aren’t going to cut it here. They can cause emotional blunting, and often the highs and lows of life help a person feel real. There has been a lot of success with a form of treatment called transcranial magnetic stimulation (TMS), which uses magnetic fields to increase blood flow and oxygen levels in the brain. It is complex but has shown up to a 50% success rate for people with the condition. | https://www.theversed.com/95901/1-50-people-suffer-reality-distorting-disorder/
A first look at Latin America would lead us to conclude that it is predominantly Catholic with little religious diversity.
Data supports this notion. According to the Pew Research Center, 90% of Latin America’s population is Christian, while Muslims, Hindus, and Jews represent less than 1% each.
And yet, one of the 12 most religiously diverse countries in the world, Suriname, is in Latin America. With a population of 520,000, Suriname is the smallest state in South America. According to Pew, it has a Christian majority (52% of the population), while the other half of the population is formed by two sizeable minorities: Hindus (close to 20%) and Muslims (about 15%). The rest of the population is made up of folk religions (5.3%), Buddhists (0.6%), and Jews (0.2%). The unaffiliated represent close to 5%.
While exemplary in its diversity, Suriname shows us that the reality of religious diversity in Latin America is complex. So what exactly is meant by “religious diversity”?
Religious diversity remains a descriptive term. It does not necessarily or automatically translate into religious coexistence or tolerance. The days of the Inquisition are long behind us, and many Latin American countries have turned away from monolithic definitions of citizenship, prioritizing multiculturalism and pluralism instead. Nevertheless, grassroots prejudice (fueled largely by not knowing “the other” and the transmission of stereotypes), hate rhetoric, and conflict are still prevalent in some Latin American communities and societies.
Jewish communities, for example, are small minorities that vary in number, integration, participation, visibility, and coexistence with other minority and majority groups. Since Latin America’s Jewish communities started growing in the late 19th and 20th centuries, Jews, by many accounts, have flourished in Latin America, becoming judges and entering politics, economics, culture, and academia. At the same time, however, Jewish communities continue to face prejudice and/or antisemitism, sometimes sanctioned by political parties and authorities.
According to the Anti-Defamation League’s 2013/2014 survey, the number of people who hold anti-Jewish views is lower in Latin America than elsewhere in the world. But with respect to individual countries, those numbers tell a different story. In Panama, the average number of individuals with anti-Jewish views is above 50%. Colombia and the Dominican Republic are both close to 40%, while the average in Venezuela is 30%. In Mexico, where the average is 24%, antisemitism is mainly expressed discursively in the printed press and more recently on social media, while in countries such as Argentina, where the Argentine Israelite Mutual Association (AMIA) was bombed in 1994, and Venezuela, which has experienced a rise in antisemitic rhetoric and acts of vandalism in recent years, it has been expressed violently.
In multicultural regions where religion is a central axis of identification and where religious groups live next door to one another but do not necessarily coexist, there is ample room for creating a foundation for tolerance. This includes teaching and learning about each other, holding positive views, developing interpersonal relations that are respectful, accepting, and appreciative, and expressing one's belief freely and encouraging others to do the same. Schools and other learning communities – including those online - are ideal spaces to create and nourish these behaviors.
Last year, the Pan American Development Foundation (PADF) and Facing History held two online workshops that brought more than 400 Latin American educators, students, and activists from a dozen countries together to examine the historic roots of antisemitism, and its connection with local issues of religious intolerance. As a result, high school and college teachers in Mexico City, Quito, Buenos Aires, and Bogotá have designed lessons that include a case study of antisemitism, this hatred's historic development, and its contemporary expressions in Europe and the Middle East, as well as right here in Latin America.
As we open up the dialogue about religious diversity, starting in little ways in our own communities, more and more people will have the opportunity to engage in honest conversation about myths and stereotypes, history and its legacies, and the specific ways in which individuals can create tolerant communities and societies in an increasingly multicultural region.
Next steps:
Visit the Give Bigotry No Sanction project website for important and timely conversations about religious freedom, religious diversity, and civic identity.
Read George Washington’s 1790 Letter to the Hebrew Congregation in Newport, Rhode Island, a foundational document of religious tolerance.
Download A Convenient Hatred: The History of Antisemitism.
Sign up for an online course on Holocaust and Human Behavior. | https://facingtoday.facinghistory.org/facingtoday/what-exactly-is-meant-by-religious-diversity |
This looks like a great assignment. I like how it allows the student to do some "easy research" first (web browsing, leading to a general overview talk), followed by a more thorough presentation on a single paper. One question I have is, how do you actually run the presentations? In a class of 15-20, I would imagine that even doing 3 per day once per week would take most of the semester. I have a small class this year, and was looking for some new ways to approach the primary literature, so I am thinking about using this activity in a class of 12 students. One thing I will probably change is that I will have the 2nd assignment be written instead of oral.
Adam
Last spring I had 9 students in the class, with 80 min classes 3 times/week (MWF). We did the overview talks over 3 class periods. This was done in early March (before Spring break). I spread out the journal article talks over 5 class periods. I started that in April - and had them mostly on Fridays. It seemed to work well to have 2 days of "normal" class and have Fridays be our "lit discussion" days. It certainly would be harder with a larger class. One thing I have done with different assignments in some of my other larger classes to save class time is to have students write a paper and then do a class poster session. I divide them into 2 groups. Each group presents for half the class period. When they're not presenting, they need to visit other students' posters and write summaries or reviews. It's been a good way to get students to present what they've written to others without having lots of short, in-class, oral presentations.
Sounds like a great way to address the primary literature. I have a small (5 this year) advanced inorganic course where I am doing something similar this year.
Throughout the semester, I will hand out papers (chosen by me) to one student and have them present a summary, describe the concepts, and how they relate to course content. At the end of the semester, they get a list of inorganic chemists, and choose one as their author of choice. They are then asked to search the primary literature, and present a (slightly) more formal talk in the evening doing a similar job.
I think this works fairly well for smaller class sizes, but might have some problems for larger class sizes (especially at the start of the term, before they have a chance to get into the "meat" of the course). It also likely helps that I have 80 minute lectures as well.
Brad
I will be teaching an Advanced Inorganic Chemistry course with 13 students and will use this idea for my class. I love it! | https://www.ionicviper.org/literature-discussion/contemporary-inorganic-chemists |
EAQ Manor Farm is an outdoor and equine-assisted learning centre set in 12 acres of Somerset countryside. The charity works with a wide range of people, including children, young people and adults who are overcoming isolation, exclusion and social and emotional difficulties. Kerry Roberts from the charity told us how a grant of £1,500 helped them to provide virtual and in-person equine therapy and equine assisted learning to ease the impacts of the lockdown.
“During the initial few weeks of lockdown, we were contacted by several families of our regular learners and were successfully able to trial virtual, remote FaceTime sessions from our Centre for their children.
We found that doing virtual sessions had value in a way that went beyond our expectations. It meant that some of our most vulnerable learners, who had found lockdown challenging, were able to interact with the horses and do close observations of their behaviour. This had a hugely positive impact and all our parents agreed that their children were calmer after the sessions.
In the middle of April, we were able to partly re-open for some of our more vulnerable learners, with strict hygiene and social distancing practices in place. The funding from the Somerset Coronavirus Appeal meant we could subsidise the cost of these sessions for the families and offer 80 sessions of both virtual and one-to-one support during lockdown.
Before lockdown, we had been working with a young woman who was attending weekly Equine Assisted Learning sessions with us. She has struggled with extreme anxiety for many years, which was heightened with the onset of lockdown and meant she was unable to leave her home. She became unhappy and anxious about members of her family and would not let them leave the house, even for essential supplies.
Funding from the Somerset Coronavirus Appeal allowed us to offer her virtual remote sessions where she could access our animals and farm from the safety of her own home. Through a series of regular virtual sessions with us, her anxiety was reduced, and her mum was even able to go out shopping during one of our sessions. | https://www.somersetcf.org.uk/case-studies/coronavirus-appeal-stories-eaq-manor-farm |
Biography —
Dr. de Boer studied the physiological basis of insect-plant interactions; he is a faculty advisor with KUUB.
Research —
My research interests center on the physiological basis of insect-plant interactions. More specifically, my research focuses on the chemical and chemoperceptual mechanisms underlying feeding decisions by caterpillars. Many herbivorous insects feed on only a few plant species despite an overwhelming abundance of plants in their environment. Caterpillars are excellent models for studying the physiological basis of feeding behavior because of their keen sense of smell and taste and their relatively simple nervous system which is readily accessible for experimental manipulation. My current research program is very limited due to an appointment in the University Advising Center as a Faculty Advisor. I am no longer accepting graduate students. However, I still serve as a Co-Advisor for students doing research in the areas of Sensory Behavior/Physiology and Insect-Plant Interactions.
Teaching —
My teaching philosophy is based upon my own experience as a student: having an enthusiastic teacher and a learning-friendly environment results in a better understanding of the course material while making the learning process enjoyable. My goal is to stimulate an appreciation for and an understanding of biology by having students connect with the content through constructing a framework of key vocabulary and concepts. I aim to show them connections they can use in related courses, future professions, or their daily lives. Essential teaching components include learning about the scientific method and reading, analyzing, and critiquing the content of primary research papers.
Teaching interests:
- biology
- animal physiology
- insect physiology
- insect-plant interactions and chemical signals
- chemical ecology
What you need to know: One of India’s most sacred plants, tulsi is a powerful adaptogen that helps us adapt to emotional and environmental stress. It provides optimal support for the immune system, increasing stamina and endurance while also reducing inflammation. Tulsi is one of the key herbs used in Ayurveda for improving vitality and promoting longevity, and is the basis for a wide range of therapeutic blends.
Why you should try it: Tulsi, also known as holy basil, has been used for thousands of years for its diverse healing properties. It is an effective herb for supporting the heart, lungs, and liver, which is why it is commonly used for colds, coughs, and flu. Tulsi contains potent levels of antimicrobial oils, making it a powerful antiseptic against many kinds of organisms, such as bacteria, fungi, and parasites. Research has also identified it as anti-diabetic and as influencing the neurochemistry of the brain, making it comparable to antidepressant medications. In addition, studies indicate that tulsi may protect cells from damage caused by radiation and chemotherapy.
This vibrant and dynamic city tops numerous lists for business, entertainment, and quality of life. One of the country’s most popular, high-profile, and “green” cities, Austin was selected as the “Best City for the Next Decade” (Kiplinger), the “Top Creative Center” in the United States (Entrepreneur.com), #1 on the “On Numbers Economic Index” as the fastest growing economy, and #9 on Forbes’ list of “America’s Best Employers”, making the City of Austin the highest-ranking employer in the government services sector. Austin continues to lead the country with its vision of being the “Most livable city in the country”, emerging as a player on the international scene with such events as SXSW, Austin City Limits, and Formula 1, as well as being home to companies such as Apple, Samsung, Dell, Seton Healthcare, and St. David’s Healthcare. From the seat of state government and institutions of higher education to the “Live Music Capital of the World” and its growth as a film center, Austin has gained worldwide attention as a hub for education, business, health, and sustainability. Since 1900, Austin’s population has doubled every 20 years.
Austin City Government
The City of Austin is a progressive, dynamic, full-service municipal organization operating under the Council-Manager form of government. Austin’s mayor is elected from the city at large and 10 council members are elected from single-member districts. Terms of the mayor and council members are four years and are staggered so that a general election is held every two years, with half the council being chosen in each election. Term limits for the mayor and council members provide for two consecutive, four-year terms. The City Council is responsible for appointing the City Manager (the Chief Administrative and Executive Officer of the city), as well as the City Clerk, City Auditor, Municipal Court Judges, and Municipal Court Clerk.
The Mayor, Council Members, and City Manager are committed to delivering the highest quality services in the most cost-effective manner. The vision is to make Austin the most livable and best-managed city in the country.
The Austin Fire Department
The Austin Fire Department (AFD) is the 16th largest fire department in the country, providing prevention, preparedness, and effective emergency response to more than 947,000 citizens in a 271 square mile area, with 1,151 authorized sworn firefighter positions and 106 civilian staff members. Like most fire departments, AFD is responsible for providing a multitude of services, including operations, aircraft firefighting and rescue, communications, maintenance shops, medical operations, emergency prevention, arson investigations, professional standards, community outreach, safety, special operations, and educational services. They receive more than 85,000 calls a year; approximately 70 percent of those are medical in nature. The City of Austin operates a separate EMS Department.
“Our Mission Goes Beyond Our Name” is the cornerstone of the Austin Fire Department. Known as a leader in the fire service, AFD is on the cutting-edge of technology and training.
Our rank structure is as follows: Firefighter, Fire Specialist, Lieutenant, Captain, Battalion Chief, Division Chief, Assistant Chief, and Fire Chief. A Fire Cadet is an individual currently in the 28-week fire training academy. All cadets must pass both the state-administered Emergency Medical Technician (EMT) and Firefighter certification exams before graduating from the academy. After graduation, an individual becomes a probationary firefighter for six months.
The Position
The Fire Chief directs, plans, and coordinates the activities of the Department and serves as the administrative head under the direction of the City Manager, reporting to the Assistant City Manager over Public Safety.
Duties, Functions and Responsibilities:
Essential duties and functions, pursuant to the Americans with Disabilities Act, may include the following (other related duties may be assigned):
- Leads the City's efforts in the preservation of life and property relative to fire prevention;
- Leads the City's community fire prevention and suppression efforts;
- Coordinates and administers daily fire activities through subordinates;
- Oversees the development and implementation of administrative policy, services, and staffing levels; monitors and evaluates the efficiency and effectiveness of methods and procedures;
- Oversees the direct support of firefighter recruiting;
- Serves as lead administrator for all AFD employees (sworn and civilian);
- Participates in corporate initiatives for quality improvement processes and customer service initiatives within the Department;
- Works with community and business leaders to develop partnerships;
- Uses innovative outreach approaches to maintain and build community partnerships within a multi-ethnic and multicultural environment;
- Maximizes citizen support for the Department and involvement in its programs;
- Coordinates multi-agency response activities with other related organizations and agencies;
- Develops departmental programs to implement the City’s management plan;
- Increases quality of service through training, recruiting, and instilling customer service values among employees;
- Develops cooperative relationships with other City departments to foster service delivery improvements and problem-solving initiatives; and
- Responsible for the full range of supervisory activities including selection, training, evaluation, counseling, and recommendation for dismissal.
Knowledge, Skills, and Abilities
- Knowledge of federal, state, and city rules and regulations governing firefighting functions.
- Knowledge of local, state, and federal law and city ordinances.
- Comprehensive knowledge of and first-hand experience in all major aspects of fire operations.
- Comprehensive knowledge of administrative functions and budgetary responsibilities for a medium- to large-sized metropolitan fire department.
- Knowledge of emergency medical operations, particularly first responder services.
- Knowledge of the regional area’s geography, fire hazards, and of codes related to fire safety.
- Knowledge of and experience with the collective bargaining process, labor negotiations, and civil service law.
- Ability to develop, implement, and administer goals, objectives, and procedures for providing effective and efficient services for the City of Austin and surrounding area(s) as appropriate.
- Ability to work in and with a highly diverse community.
- Ability to coach, train, mentor, and discipline subordinates.
- Must have an open-door policy for all department members and remain approachable.
Education and Experience
Qualified candidates must have a Bachelor's degree in Fire Science or a related field, plus a minimum of five (5) years of progressively responsible work in fire management in a large city and/or county government structure, two (2) of which must be at a senior command level. A Master’s degree and graduation from the Executive Fire Officer (EFO) program are preferred.
Required Licenses and Certifications
Certification by the Commission on Fire Protection Standards at the intermediate level or its equivalent as determined by the Commission.
An individual appointed head of a department must be eligible to be certified at the time of the appointment or will become eligible to be certified within one year of the appointment as defined by the Texas Commission on Fire Protection Standards.
The Ideal Candidate
The ideal candidate should have extensive first-hand experience in operations, fire prevention, safety, education, emergency prevention, arson investigations, community outreach, and recruitment. Fiscal and budget management experience is essential.
The ideal candidate should possess visionary leadership and display excellent managerial ability, strategic planning, and decision-making skills. He/she must exhibit strong relationship-building skills with the sworn and civilian employees of AFD, the City Manager, City Council, department directors, and the community. It is essential that the incoming Chief have experience working in a unionized environment and have proven, documented success in establishing collaborative, diplomatic working relations with labor and employee associations.
Effective communication and strong collaboration, negotiation, and team-building skills are necessary for this individual to be successful; advanced written and oral communication skills are imperative. The ability to make organizational changes that improve the operational effectiveness of the department is desired. This individual must adhere to the highest ethical and moral standards and display transparency in all deeds and actions.
Salary
The City of Austin offers a competitive salary and extensive benefits (including a generous pension system) commensurate with experience. Relocation assistance is available for a successful “out of the area” candidate.
How to Apply
Please forward a cover letter and resume to:
[email protected]
Reference: COAFC
Affion Public
2120 Market Street, Suite 100
Camp Hill, PA 17011
888.321.4922
Fax: 717.214.8004
www.affionpublic.com
The City of Austin is an Equal Employment Opportunity Employer. | http://affionpublic.com/positions/fire-chief-city-of-austin-tx |
Outsourcing the Fire Service
This article looks at the trend of cities and towns seeking to reduce the cost of providing services to the community by targeting public safety.
The Scenario - A meeting between the City Manager and the Fire Chief occurred the other day, and the Fire Chief was notified that as of December 31, 2010, the City will no longer provide fire protection services, as it is going to outsource all public services to a private corporation. The City made this decision based on the current budget constraints, the loss of revenue and the need to provide other essential services to the citizens. The City Manager went on to say that the firefighters would be given notice by HR at the end of the meeting, and that the City wished the Fire Chief to remain on board as a consultant to provide a seamless transition for those services.
The Reality – This scenario is actually occurring in many cities across our Nation, and even if it is not occurring in your area at this time, your elected and paid municipal leadership are looking at ways to cut costs, and the fire service has become a target. We have been placed on notice, and we had better become proactive, not reactive, to this reality.
A recent article entitled Outsourcing Safety, written by Autumn Giusti in the electronic periodical American City and County, indicates that municipal budgets are continuing to experience shortfalls and that local governments are essentially out of options. Now the focus for budget reductions is on public safety to balance the local government budget – a balancing act that will cut stations and personnel and look to outside contracting sources to provide these essential services. Cities and local governments, having essentially cut other municipal or county services to the bone, have now targeted Fire, EMS and Police services. I am noticing in my part of the country that smaller communities are outsourcing to share the costs of providing essential emergency and other municipal services. This is a result of the current economic situation, and many more small to medium size communities are acting on the concept of outsourcing their public services – either to surrounding communities or to the private sector.
Outsource Bidding for Fire Protection – The City of San Mateo (CA) is joining a growing list of agencies vying to take over public safety duties in San Carlos, where officials are considering contracting with San Mateo County for police protection and with the state for fire services. The Bay Area city of 28,000 has faced a deficit every year for the past decade, and the City Manager indicated San Carlos has exhausted its budget strategies. In budget meetings there were continual requests to the City directors to reduce their divisions more and more.
One of those directors commented that the City could close a fire station. The city reportedly spends $9 million a year on police and $6.3 million on the fire department it shares with the neighboring city of Belmont (CA). An analysis of a proposed outsourcing of services demonstrated the city could save $3.2 million on the police department and $1 million to $2 million on the fire department by outsourcing. The City of San Mateo indicated it can provide fire service in San Carlos at a cost of $5.3 million per year, according to an informal five-page proposal. That would represent a savings from the $6.2 million San Carlos spends on its joint fire department with Belmont.
The article goes on to say San Mateo's quote is higher than an earlier informal proposal from the California Department of Forestry and Fire Protection, or Cal Fire, which says it can do the job for between $3.5 million and $4.3 million annually. The competing proposals suggest a growing interest in a plan from San Carlos officials to outsource public safety services as they try to cut expenses and reduce a projected $3.5 million deficit in next year's budget.
Taking just the opposite position, the City Council of Milpitas has adopted a position that the city shall NOT outsource any Milpitas Fire Department operations to the State of California or other agencies, despite escalating employee costs to provide such services to residents. The council voted 4-1 on June 15, with one Councilmember dissenting, to approve a request from a Councilman to NOT hand over the city's fire services to Cal-Fire, the state's lead fire agency. The Councilmember proposing the resolution indicated that talk of Milpitas contracting with Cal-Fire, which had surfaced in newspaper advertisements and on a residents’ website, was working against community values. In making the proposal, the Councilmember indicated the community needed to focus on its community values, and that the budget needed to reflect those values rather than closing Police and Fire stations or the community center and library. The council unanimously adopted a total budget of nearly $130.2 million and approved formal agreements with the city's major employee unions that included most city employees agreeing to slash pay by about 7 percent by taking 18 work furlough days, which equates to one and a half days a month.
In Dallas (TX), the City has recently been presented with a proposal from a private ambulance service to outsource its EMS to that service. One of the arguments presented in the proposal is that firefighters should not have to deal with EMS issues and should focus on fire only. The proposal indicated a major cost savings to the City if the private sector is awarded EMS service. This is the tip of the iceberg.
On June 30, 2010, Maywood, California fired all of its full-time employees and will now contract out all of its municipal duties. The reason was that the city's workers' compensation and commercial insurance carrier terminated Maywood's coverage because of its claims history over the last five years as reflected in 2005-2010 Loss Summary Statements, the city says in a statement on its website. "As a result, the City of Maywood will be unable to administer a traditional staff," the statement reads. Shrinking grants and funding from both the state and the federal government also played a role in the decision, the statement says. However, in the statement, the Mayor sought to reassure Maywood residents that they would not experience a loss of service as a result of the decision. "Our community will continue to receive quality services," she says in the statement. "Maywood's streets will continue to be swept, our summer park programs will continue to operate, and our waste will be collected and hauled as scheduled. Further, the community will be protected and patrolled by the Los Angeles County Sheriff's Department."
Where did this all start? Reportedly, the Los Angeles County Sheriff's Department has provided contract police services since 1954 and claims it was the first agency to do so.
Outsourcing and combining municipal services is not a new issue for newly incorporated cities. Many newer cities include the outsourcing of certain services as part of their incorporation plans. For example, when Deltona, Fla., incorporated 15 years ago with 86,540 residents, it relied on the County sheriff’s office for law enforcement. This may be setting the trend for smaller cities and towns choosing to outsource municipal services. The purpose is to save money by consolidating certain municipal services, especially public safety. In Deltona there were major savings in outsourcing those police services, to the tune of about $3 million.
The City of Sammamish (WA), incorporated in 1999, has been a contract city, outsourcing its police protection to the local County Sheriff and its fire protection to a local fire district. Currently there is a push to form a Regional Fire Authority under the applicable State of Washington statutes enabling cities and fire districts to form a single entity with multiple city and municipal partners. The purpose is to create operational efficiency and hold down the cost of services for those cities and fire districts participating in this regionalization.
When we look across the country, read the newspapers and look at our own trade periodicals, the trend we see is that, in this period of declining revenue, cities are starting to seek viable alternatives to public services. In my experience with fire department budgets, about 70% to 75% of a fire department budget is for personnel costs, and those costs are rising every year. Adding other municipal services like police, public works, planning and administration, just to name a few, the cost of providing those services rises faster than the revenue to support them. Those administrators are looking for a way to balance the budget.
In a recent Fire Engineering Legal Issues podcast entitled Cutbacks in the Fire Service, which discussed closing firehouses, reducing the number of firefighters and cutting other essential fire protection services, it was pointed out that there are numerous standards, such as NFPA's, that provide a basis for a city or community to provide a safe fire response, not only for the firefighters but for the citizens. The question was posed, “what is the legal jeopardy for those communities cutting back fire services?” Currently there is no answer, but it appears that common sense, when looking to reduce community budgets, has been tossed out the window when it comes to fire protection services.
From my point of view, politicians are under fire from all sides. The easiest thing for them to do is to look to alternative sources for the same services: they believe they can outsource those services for less money, and the fire service is starting to look like other municipal services - parks, waste management, and public works - only we have greater benefits and bigger pay raises. Our other downfall is our inability to market our own services to our own elected officials. Most politicians do not know what we do or when we do it. They do see, however, our 24-48 hour shifts, side jobs, and firefighters driving high-end vehicles and living well in this economic recession. We are not helping ourselves here.
Another contradiction in our service is the continuation of the myth that we can provide the same level of services with fewer dollars. We continue to reassure the community that we can provide services in spite of decreasing revenues and reductions in firefighters. What the community needs is a dose of reality and to be told the truth – we cannot do the job with fewer dollars, and we are seeking their assistance to safely provide fire and EMS services. We need to tell the community that it may take more time to arrive at their emergency, with fewer resources, and that is the new reality. Is the community willing to take that risk? I think they are – as they continue to vote down tax initiatives for fire protection and other essential community services.
The taxpayer is getting tired of paying more taxes to keep a certain group of government workers working. Taxpayers themselves are already suffering from job loss, loss of home value, layoffs, and reduction or elimination of health benefits, and overall have been adversely affected by the current economy. They will fight – and are fighting – new taxes or the continuation of existing taxes in order to reduce their personal tax burdens.
I believe that the taxpayers are probably willing to play the risk game and not vote for higher taxes and the resulting endgame is reduced emergency services.
What is the future for the fire service in this period of declining revenues, budget cuts and the trend to look at outsourcing as an alternative? Without being apocalyptic, our industry is and will be undergoing tremendous changes over the next five years. Private-sector fire and ambulance services are finding an audience with elected officials. Certainly, in the big cities, the unions are very influential; but as we saw with FDNY, the budget discussion placed 20 firehouses and hundreds of firefighters on the chopping block. Thank God that was avoided, but it should be a wake-up call to the fire service that our municipal managers are targeting the fire service. This is a national trend – budget restrictions, the closing of firehouses and the reduction of firefighters.
As a Councilmember in Milpitas, California indicated, the city recently approved formal agreements with its major employee unions that included most city employees agreeing to slash pay by about 7 percent by taking 18 work furlough days, which equates to one and a half days a month. This is the new reality.
The fire service needs to look inward and work aggressively with elected and appointed city or town officials to find creative solutions to end this crisis. That means doing business differently: differentiating ourselves from the police and public works, working with the unions on cost-cutting or cost-saving measures, and seeking a different, more efficient way to do business. We are talking about real money here. We need to look at alternative sources of funding. Fee for service is the white elephant in the room and has for years been a forbidden funding alternative in the fire service industry, yet it has been a great source of revenue for EMS and other private businesses. We need to change our way of thinking and get out of the box on these issues.
I suggest that we look at ALL options to fund the fire service now before our firefighters are reduced to dangerous levels, our firehouses are closed and in the end, safety is compromised and the community suffers.
First published in fireengineering.com July, 2010
By M2 Resource Group, Inc
ABOUT THE AUTHOR: John K. Murphy, JD, MS, PA-C, EFO
EMS & Fire Subject Matter Expert Witness; Litigation Support; Psychological Testing & Counseling
John K. Murphy, JD, MS, PA-C, EFO, retired as a Deputy Fire Chief after 32 years of career service; is a practicing attorney and is a frequent speaker on legal and medical issues at local, state and national fire service conferences.
Copyright M2 Resource Group, Inc
Disclaimer: While every effort has been made to ensure the accuracy of this publication, it is not intended to provide legal advice, as individual situations will differ and should be discussed with an expert and/or lawyer. For specific technical or legal advice on the information provided and related topics, please contact the author.
Quantbot Technologies LP Takes Position in Signet Jewelers Ltd. (SIG)
Quantbot Technologies LP bought a new position in Signet Jewelers Ltd. (NYSE:SIG) in the 3rd quarter, according to its most recent disclosure with the Securities and Exchange Commission. The fund bought 8,515 shares of the company’s stock, valued at approximately $561,000.
Other hedge funds have also recently added to or reduced their stakes in the company. MetLife Investment Advisors LLC grew its stake in shares of Signet Jewelers by 69.7% during the 2nd quarter. MetLife Investment Advisors LLC now owns 64,016 shares of the company’s stock valued at $3,569,000 after buying an additional 26,285 shares during the period. Wells Fargo & Company MN grew its stake in shares of Signet Jewelers by 3.8% during the 2nd quarter. Wells Fargo & Company MN now owns 368,593 shares of the company’s stock valued at $20,550,000 after buying an additional 13,645 shares during the period. Bank of New York Mellon Corp grew its stake in shares of Signet Jewelers by 11.9% during the 2nd quarter. Bank of New York Mellon Corp now owns 697,621 shares of the company’s stock valued at $38,892,000 after buying an additional 74,022 shares during the period. Swiss National Bank grew its stake in shares of Signet Jewelers by 2.8% during the 2nd quarter. Swiss National Bank now owns 102,800 shares of the company’s stock valued at $5,731,000 after buying an additional 2,800 shares during the period. Finally, Advisors Asset Management Inc. purchased a new stake in shares of Signet Jewelers during the 2nd quarter valued at $214,000.
NYSE:SIG opened at $38.55 on Friday. The company has a current ratio of 2.20, a quick ratio of 0.47 and a debt-to-equity ratio of 0.49. The firm has a market cap of $2.13 billion, a P/E ratio of 10.68, a price-to-earnings-growth ratio of 1.48 and a beta of 0.89. Signet Jewelers Ltd. has a twelve month low of $33.11 and a twelve month high of $71.07.
Signet Jewelers (NYSE:SIG) last posted its earnings results on Thursday, December 6th. The company reported ($1.06) earnings per share for the quarter, beating the Thomson Reuters’ consensus estimate of ($1.08) by $0.02. The business had revenue of $1.19 billion for the quarter, compared to analyst estimates of $1.16 billion. Signet Jewelers had a negative net margin of 3.23% and a positive return on equity of 15.90%. The business’s quarterly revenue was up 3.0% on a year-over-year basis. During the same quarter in the previous year, the business posted $0.05 EPS. Sell-side analysts forecast that Signet Jewelers Ltd. will post 4.28 EPS for the current year.
SIG has been the subject of several research reports. ValuEngine raised shares of Signet Jewelers from a “strong sell” rating to a “sell” rating in a research note on Friday, August 10th. Zacks Investment Research raised shares of Signet Jewelers from a “hold” rating to a “buy” rating and set a $71.00 price objective for the company in a research note on Tuesday, August 21st. TheStreet upgraded shares of Signet Jewelers from a “d+” rating to a “c-” rating in a report on Tuesday, August 21st. Telsey Advisory Group lifted their target price on shares of Signet Jewelers from $56.00 to $63.00 and gave the company a “market perform” rating in a report on Friday, August 24th. Finally, Wells Fargo & Co restated a “hold” rating and set a $65.00 target price (up previously from $50.00) on shares of Signet Jewelers in a report on Thursday, August 30th. One equities research analyst has rated the stock with a sell rating, twelve have issued a hold rating and one has assigned a buy rating to the company. Signet Jewelers has an average rating of “Hold” and a consensus target price of $54.06.
Signet Jewelers Company Profile
Signet Jewelers Limited engages in the retail sale of diamond jewelry, watches, and other products in the United States, Canada, the United Kingdom, the Republic of Ireland, and the Channel Islands. Its Sterling Jewelers division operates stores in malls and off-mall locations primarily under the Kay Jewelers, Kay Jewelers Outlet, Jared The Galleria Of Jewelry, Jared Vault, and various mall-based regional brands, as well as JamesAllen.com, an online jewelry retailer Website. | |
Developing and developed country regulators shared practical experiences on harnessing the potential of financial technology (FinTech) to deepen financial inclusion at a virtual event jointly organized with Bank of Thailand (BOT) on 1-3 December.
Held as part of AFI’s developing-developed country dialogue (3D) platform, the Knowledge Exchange Program-2 explored the current state of open banking, open application programming interfaces (APIs), electronic know your customer (e-KYC) and digital identification. Special focus was given to data privacy and protection, mitigating the impact of COVID-19 and implementing recovery measures.
Addressing participants via live feed, AFI Executive Director Dr. Alfred Hannig said that the ongoing pandemic had accelerated the pace of digitization, creating fresh opportunities to bolster support for vulnerable groups in formal financial systems.
“Given the digital transformation in countries represented by AFI member institutions, innovative digital financial services (DFS) solutions will be key to enhancing the usage and quality dimensions of financial inclusion,” Dr. Hannig told the more than 30 participants from AFI member institutions. Greater financial inclusion, he explained, provided regulators and policymakers with a key solution to mitigate the impact of COVID-19 and build sustainable recovery and resilience.
Citing the network’s wealth of expertise, he affirmed the importance of peer-to-peer learning platforms to reinforce the post-COVID-19 recovery phase and, in particular, to enhance the capacity of innovative and enabling policy environments for digital financial inclusion and FinTech ecosystems.
“AFI’s cooperation model has been at the forefront of responding to the crisis by supporting its members in designing and implementing high-impact tailored practical solutions to build recovery and resilience,” he said. Among such measures are AFI’s COVID-19 policy response dashboard, webinars, publications and in-country implementation efforts.
BOT Deputy Governor Ronadol Numnonda concurred with Dr. Hannig, adding that consistent efforts to promote digitization in Thailand had already yielded positive results.
In pre-recorded opening remarks, Numnonda said that investment in digital had allowed for economic stimulus package payments to be disbursed into electronic wallets. Reflecting a switch in consumer habits away from physical cash, he noted an 85 percent annual jump in electronic payments and fund transfers in May 2020, when the government and health authorities had encouraged physical distancing.
But to fully harness the potential of digital innovations, he said that cooperation was needed among a broad range of stakeholders to ensure effective implementation, regulatory oversight and prevent fragmentation. It would also help protect consumers, particularly the most vulnerable, against cybersecurity threats.
“Financial and digital literacy are crucial enablers in deepening financial inclusion even further,” he said, adding that this made “working closely together to engineer workable solutions” even more beneficial for all.
Member institutions from three continents gathered for the virtual event, including Bangko Sentral ng Pilipinas (BSP), Bank Negara Malaysia (BNM), BOT, Central Bank of the Russian Federation, Mexico’s Comisión Nacional Bancaria y de Valores (CNBV), National Bank of Cambodia and People’s Bank of China. They engaged in dialogue and peer-to-peer learning with colleagues from Accelerate Estonia, European Banking Authority (EBA), Innovation for Poverty Action, Luxembourg’s Commission de Surveillance du Secteur Financier, Russian Electronic Money and Remittance Association and the Office of the UN Secretary-General’s Special Advocate for Inclusive Finance for Development.
Sharing details of the latest developments on open banking and open API within BOT’s jurisdiction was Thammarak Moenjak, director of financial institutions strategy department, who emphasized the importance of “efficient, secure and cost-effective data” to open banking, describing it as a “key engine of the central bank’s drive towards digital finance”.
Reflecting uncertain times, he noted that the central bank had promoted a more flexible approach to financial regulation through four guiding principles: customer centricity, no compromise on financial stability, no anti-competitive behavior, and market conduct and consumer protection.
Echoing this sentiment was BSP’s Melchor Plabasan, officer-in-charge of technology risk and supervision, who said that the central bank had “recognized the benefits of moving to an open finance framework”.
He added that BSP was “undertaking on policy initiatives to further promote sharing of information among new and incumbent third-party players” as part of efforts to encourage industry-wide buy-in.
As with BOT’s Moenjak, Plabasan urged regulators to be more adaptable in the face of new innovation, saying that “there will be new products and services, and we need to be ready”, adding that BSP had adopted a “test and learn” environment.
The Malaysian central bank had published e-KYC guidelines earlier this year aimed at accelerating and streamlining industry practices, explained Ian Lee Wei Xiung, BNM’s financial development and innovation department manager.
“The future is expected to be more digital and lower ‘touch’ economy, and this is why we see e-KYC as a very important regulatory and industry development that we should aim to move towards,” he said. In terms of approaches, he added that BNM opted for being technology neutral to better focus on outcomes that prioritized safe and secure technology and digital onboarding of customers.
Also underscoring regulatory achievements was CNBV’s Mary Pily Loo, directorate-general for operational and technology risk, who said that developments in open banking in Mexico had been given a boost with the passing of a 2018 FinTech law. Despite this and other successes, she was cognizant of potential market risks including information leaks and limited technical capacities among stakeholders to maintain API and supervisory standards.
Growth in open banking was also noted among speakers from developed-country regulators. Two million customers had signed up for open banking in the UK, but more needs to be done to raise awareness among customers, Bank of England’s senior FinTech specialist, Irina Mnohoghitnei, said.
Dr. Dirk Haubrich, EBA’s head of conduct for payments and consumers also spoke about the rising popularity of open banking in Europe, citing some 100,000 downloads per month of an EBA registry that contains information on authorized or registered payment and electronic money institutions. The registry aims to ensure transparency and high levels of consumer protection.
EBA was constantly monitoring and reacting to new developments, including by establishing an industry working group, Dr. Haubrich said, adding that challenges remain – from divergent API models across European Union member states to frictions between banks and account information and payment initiation services.
Reminding participants of the broader goals was UNSGSA Policy Advisor David Symington, who spoke of the opportunities for financial inclusion to achieve Sustainable Development Goals, saying that “greater access to digital financial services is a key enabler for many Sustainable Development Goals.”
The event, the second in AFI’s Knowledge Exchange Program series, was partially funded with UK aid from the UK government. It stemmed from last year’s global dialogue on regulatory approaches for inclusive FinTech, held in Prague, Czech Republic, where enabling FinTech ecosystems were identified as a key area of priority in the global developing-developed country dialogue workstream.
The knowledge exchange is held under AFI’s 3D platform, a unique initiative to share and learn from the practical experiences, best practices and expert opinions between AFI members and their peers in developed countries. | https://new.afi-global.org/newsroom/news/developing-developed-country-peers-share-inclusive-fintech-covid-19-lessons/ |
This notice describes how existing general tax principles apply to transactions using virtual currency. The notice provides this guidance in the form of answers to frequently asked questions.
SECTION 2. BACKGROUND
The Internal Revenue Service (IRS) is aware that “virtual currency” may be used to pay for goods or services, or held for investment. Virtual currency is a digital representation of value that functions as a medium of exchange, a unit of account, and/or a store of value. In some environments, it operates like “real” currency — i.e., the coin and paper money of the United States or of any other country that is designated as legal tender, circulates, and is customarily used and accepted as a medium of exchange in the country of issuance—but it does not have legal tender status in any jurisdiction.
Virtual currency that has an equivalent value in real currency, or that acts as a substitute for real currency, is referred to as “convertible” virtual currency. Bitcoin is one example of a convertible virtual currency. Bitcoin can be digitally traded between users and can be purchased for, or exchanged into, U.S. dollars, Euros, and other real or virtual currencies. For a more comprehensive description of convertible virtual currencies to date, see Financial Crimes Enforcement Network (FinCEN) Guidance on the Application of FinCEN’s Regulations to Persons Administering, Exchanging, or Using Virtual Currencies (FIN-2013-G001, March 18, 2013). | https://tusc.network/forum/discussion/13/irs-notice-2014-21-virtual-currency-guidance |
To fully understand the character of Isabella, we need to know that Measure for Measure is one of Shakespeare’s problem plays. In one sense, it deserves to be considered as a drama of ideas.
Isabella, the heroine of this play, is a problematic character because she displays conflicting moral and legal notions through her attitude to sin, justice, celibacy, marriage, etc. She is a bundle of contradictions: as bright in her intellect as Portia, yet much colder and much less plausible as a real human being.
Shakespeare probably tested some of his ideas through the character of Isabella and failed to some extent to make her a fully realized dramatic figure. Let’s see how.
The Character of Isabella in Respect to Her Brother Claudio
We first hear of Isabella from her brother Claudio, in prison, in Act I, Scene II. Claudio has been arrested and sentenced to death by the over-righteous Angelo for getting a maiden with child. Claudio asks Lucio to go to Isabella and implore her to save his life by making friends with the “strict deputy.” We hear that Isabella is supposed to enter the cloister as a nun on that very day. The contrast between Claudio and his sister Isabella is established with a kind of dramatic irony: Claudio is going to die for violating a maiden’s chastity, while his sister is about to take up the chaste life of a nun, which is itself a kind of physical death. We further hear of Isabella’s nature from the mouth of Claudio:
“For in her youth
There is a prone and speechless dialect,
Such as move men; beside, she hath prosperous art
When she will play with reason and discourse,
And well she can persuade.”
The Dilemma of Isabella
Isabella is put into an absurd situation: she is obliged to defend her brother, who has committed the very sin that she most abhors. In fact, in Act II, Scene II, during her first meeting with Angelo, she begins by confessing the dilemma of her own position regarding the crime her brother has committed:
“There is a vice that most I do abhor,
And most desire should meet the blow of justice;
For which I would not plead, but that I must;
For which I must not plead, but that I am
At war ’twixt will and will not.”
The Strength of Isabella as A Character
Isabella’s strength of character, her persuasive power, and her willful obstinacy are revealed in her two encounters with Angelo. It is interesting to watch how Angelo’s puritanical self-righteousness collapses like a house of cards. She brings in religious as well as secular arguments to prove that mercy is a better principle than justice, and argues that Angelo, being a man like Claudio, might in similar circumstances commit the same sin. Moreover, she harps on the theme of authority and the harshness of tyranny that often goes with it:
“O, it is excellent
To have a giant’s strength; but it is tyrannous
To use it like a giant.”
Does Isabella Prioritize Her Chastity Over her Brother’s Life?
In her second meeting with Angelo, Isabella is given a moral choice: either surrender her chastity to Angelo and thus save her brother’s life, or keep her virginity and let her brother die. The scene reveals Angelo as a lustful, hypocritical impostor, but it also puts Isabella in a trying situation. Isabella refuses to save Claudio by submitting to Angelo’s demand. Her insistence on physical purity makes her inhumanly insensitive to her brother’s fate. Isabella tells herself:
“Then, Isabel, live chaste, and, brother, die:
More than our brother is our chastity.”
But there is an irony: she pleads for mercy from Angelo while she herself is incapable of showing any mercy to Claudio. Moreover, Isabella’s very virtue is made responsible for Angelo’s temptation. Claudio’s miseries are intensified by her outburst of vituperation, which shocks and perplexes him. Isabella’s response to her brother’s misery falls short of the Christian ideal.
The Basic Flaw in Isabella’s Character
Another paradox of Isabella’s character is that although she angrily rejects the demand to sacrifice her virginity, she does not condemn the bed-trick in which Mariana takes her place in Angelo’s bed. The basic flaw in her character is thus self-contradiction. Her rejection of Claudio’s plea to save his life is valid and inevitable, but that does not seem to justify the storm of abuse which she unleashes on him.
Final Note
Finally, we can call Isabella a dynamic character. Although she begins as a flawed character, she eventually learns wisdom and charity. Her acceptance of the bed-trick symbolizes a reversal of her previous values and marks a new access to human understanding. Her marriage to the Duke at the end is the culmination of this humanizing process.
It’s easy to fall into the trap of thinking you are the user when you are wireframing your screens. In most cases you aren’t, so you must use user testing to validate your design decisions.
I was recently working with a client to help them be more user-centred in their design work. On beginning this project it was apparent to me that those who were working in UX design roles saw their job as creating wireframes and sketches.
They also attended the occasional lab-based user testing session where they would observe users with the designs which they and their colleagues in the creative department had created. Some feedback from these sessions would then be incorporated into the designs.
With so much emphasis in the role being put on the designs themselves, there was less consideration for the user. The danger of this is that it can lead to UX designers using themselves as the user. Whilst this may occasionally be valid, it is not good practice.
We can all be very opinionated when it comes to creating the best design for a screen but we must remember to put ourselves in the user’s shoes and inform our design decisions with actual user feedback.
A City Council website could be used as an example here. Two colleagues working in the Council’s UX team would likely draw up different designs if they started to create them without following a true user-centred approach.
One of the team might enjoy leisure activities and therefore make finding local sports facilities a prominent section of the site, whereas their colleague might be mostly concerned with how to get advice and benefits.
I could go on, but the point is that in order to create truly effective designs they need to be user-centred. Before diving in with some detailed designs consider the activities that are going to make these designs more effective. Do user research, create a set of personas, review web analytics, define user journeys and understand the red routes – recurrent and critical activities which users wish to complete on the site.
As mentioned above, there was some user-centred design going into this particular client’s work, with a decent budget for usability testing, and the feedback gained was incorporated into designs. Despite this, when a project did not have the budget for a lab-based session, there was a lack of creativity and passion from the team to get user feedback.
Budgets can often restrict the opportunity to get a user into a lab for formal user testing but there are other ways to gather feedback. As well as being cheaper these methods can also be quicker and more effective.
In the city council example, a logical starting point for qualitative user research might be to visit the waiting room at the council building, where there would likely be a group of people willing to give their thoughts for a small reward.
Budget constraints are a fact of life and often UX is one of the first places hit by them. It is the UX team’s responsibility to show passion and creativity to do what they can to gather user insight before, during and after the design process. | https://theuxreview.co.uk/user-testing-on-a-budget-you-are-not-the-user/ |
If you truly want to become successful then it is essential that you master the skill of effective time management and learn to gain complete control of how you spend every moment of your day.
If you want to develop as a person and take the steps necessary to reach your goals, it is a skill you must attain, and one with many techniques. One key element is boosting your mind by taking breaks from whatever you are doing. You must develop a planned routine that includes regular times for getting away from the task at hand and recharging your batteries.
Here are five useful tips to help you to do so:
Blocks of Time
For any specific task decide exactly how long you are going to work on it, say 30 minutes, then when that time is up stop what you are doing. Take a short break and then go back to it or start on something else. When you use this method you must stick rigidly to the time you have set and stop regardless of what stage your work is at. So you must plan it carefully – only you can know when it is practical to take a break.
Start and Finish Times
Set specific start and finish times for your activities and stick to them. You will then learn how much work makes up a working day and so become more effective at managing your time and putting a value on it. Always finish promptly at the time you have set for the end of your day.
Feed Yourself
Take a regular break for lunch. Have a set time and duration for it and ensure you take time to do something totally unconnected from your current task. You should never work through lunch; you need the time to nourish your body, and your mind!
Short Breaks
Schedule into your day other short breaks of perhaps 10 minutes at a time. This is particularly important if you are working at a computer or doing other work requiring high levels of concentration. It will boost your energy and improve your concentration levels. And it will ensure that you stay healthy!
No Unplanned Breaks
Avoid taking unscheduled breaks. It is essential that you are able to avoid interruptions, and you must set rules accordingly. For example, have set times for checking emails and stick to them. You must also ensure that your family, friends, and work colleagues understand when you are working on a particular task and cannot be interrupted. Instead, agree with them that you will have time set aside for them during the day. Your breaks must always be planned!
Making time for yourself will help to boost your levels of concentration and keep you focused on the tasks that you have to complete. It will also help to keep you healthy and so enable you to work effectively at all times.
So take a break! | https://www.goal-setting-guide.com/5-easy-ways-to-ensure-you-take-a-break/ |
(also known as: appeal to justice)
Description: Accepting evidence on the basis of wanting closure—of wanting to be done with the issue. While the desire for closure is a real psychological phenomenon that does affect the well-being of individuals, using "closure" as a reason for accepting evidence that would otherwise not be accepted is fallacious. This is similar to the argument from ignorance, where one makes a claim based on the lack of information because not knowing is too psychologically uncomfortable. The appeal to closure, however, focuses on accepting evidence, and doing so for the sake of closure.
Logical Form:
Evidence X is presented, and found to be insufficient (or evaluated with a heavy bias due to the desire for closure).
Closure is desired.
Therefore, evidence X is accepted.
Example #1:
After the terrorist attack on the city, the citizens were outraged and wanted justice. So they arrested a Muslim man with no alibi who looked suspicious then charged him with the crime.
Explanation: Unfortunately, unsolved crimes are politically damaging for those in charge, and based on the number and percentage of false arrests, it is clear that appealing to closure has serious consequences for many innocent people.
Exception: It has been stated elsewhere that "agree to disagree" falls under the appeal to closure. This is not the case because agreeing to disagree does not mean that either party is accepting the evidence of the other, in fact, it's the opposite. People can agree to "move on" or "table the issue," for many logical reasons. This is similar to negotiation and compromise. When people compromise, they usually do not agree to accept evidence they wouldn't otherwise accept. For example, if an atheist and theist are debating the existence of the Biblical God, they wouldn't say, "Okay, I'll agree that some kind of creator god exists if you agree that this god does not currently interfere in the universe."
Logically Fallacious is one of the most comprehensive collections of logical fallacies with all original examples and easy to understand descriptions; perfect for educators, debaters, or anyone who wants to improve his or her reasoning skills.
Max Haiven: French realist painter Gustave Courbet is, at first glance, the quintessential modern artistic persona: arrogant, iconoclastic, moody, brilliant and individualistic. He was also an anarchist who, in 1871, served as the short-lived Paris Commune’s de facto Minister of Culture, developing programs that empowered artists and opened museums and galleries to the public. It is towards figures like Courbet that our imaginations are trained to gravitate when we hear the word creativity. But his famous (and, at the time of its first exhibition in 1855, infamous) large-scale The Artist’s Studio, a real allegory summing up seven years of my artistic and moral life reveals something profoundly different about the nature of creativity. Here, the artist depicts himself seated in the centre of the scene, painting a landscape in his studio, surrounded by a cast of dozens of characters. “It’s the whole world coming to me to be painted,” he wrote, “on my right, all the shareholders, by that I mean friends, fellow workers, art lovers. On the left is the other world of everyday life, the masses, wretchedness, poverty, wealth, the exploited and the exploiters, people who make a living from death.” The painting represents a tacit admission that art, and creativity more broadly, is not merely the unique effusion of the tortured genius’ soul, but rather always a partly collective and common process: it relies on a whole community of people.
Readers of STIR will no doubt be familiar with the concept and politics of the commons and the struggle against enclosure, so I will not revisit them here except to say that creativity is an elemental part of the commons and of struggles to defend, expand and reinvent them. Indeed, creativity itself can be understood, at least in part, as a commons.
Consider, for instance, the incredible creative gifts that have emerged from the Black experience in the United States. As historian and philosopher of cultural politics Robin D.G. Kelley has shown in his incredible book Freedom Dreams: The Black Radical Imagination, cultural forms from gospel to blues to jazz to funk to hip-hop emerged collaboratively from popular movements against racism and exploitation. They served, at least initially, as catalysts for common struggles. But, likewise, these cultural forms have each been the subject of enclosure by the music industry, advertising and other capitalist forces eager to transform these into opportunities to sell cultural commodities and, in the course of this process, the history of collective, collaborative creativity is distilled into a lineage of individual figures.
That is to say that creativity always emerges from a context of shared and collectively cultivated cultural and intellectual ‘resources,’ and in turn contributes to that context, and that the politics of creativity are in many ways defined by capitalism’s attempts to conscript, shape, co-opt or charge rent for access to that creative commons. Indeed, this is the key argument of the Creative Commons licensing platform, an open-source initiative that allows creative producers—from musicians to artists to programmers—to “copy-left” their work, acknowledging its shared sources and its contribution to a shared cultural landscape while, at the same time, affording the option of ensuring authorial recognition and preventing future profiteering.
The hidden history of creativity
This argument may sound a bit odd or abstract because we are accustomed to imagining creativity in highly individualistic ways, ways that are fundamentally shaped by a capitalist worldview. Indeed, the idea of creativity, at least in the English language, only emerges as a distinct and recognised term amidst the rise of capitalism, the enclosure of the original commons and the processes of European colonialism and imperialism. This makes disentangling creativity from capitalism and developing a notion of the creativity of the commons fairly difficult, but also well worth attempting.
Essentially, the idea of creativity came into existence primarily to give cultural commodities added value. As the capitalist class was forging itself in the 17th and 18th centuries, largely based on their ability to expropriate and profit from commons lands and resources, they began to demand the means of what French sociologist Pierre Bourdieu called “distinction”: artifacts and social practices by which they could set themselves apart and afford themselves an exalted self-image and class solidarity. Unlike their aristocratic predecessors, the new capitalist class made no pretense towards some sort of inherited biological superiority. They wished to believe that their wealth and success was due to intelligence, cunning, hard work and entrepreneurial spirit. But in order to reproduce this illusion, and to cultivate a community of like-minded ruling class persons, a range of social institutions were required: elite schools and clubs, professional associations and guilds, and, importantly, a sphere of cultural refinement and cultivation. New cultural forms, from the novel to opera to private paintings to fine crafts emerged to meet the demand of a rising class of individuals eager to showcase not only their wealth but also their intellectual and cultural superiority.
The value of these commodities, both in terms of how much money they cost and their usefulness in reproducing ruling class culture, was based, at some fundamental level, on the signature of the unique artist—the authentic and singular mark of the individual that guaranteed the uniqueness of the cultural work in question. Around this figure of the unique artistic persona, the capitalist mythology of creativity grew. Creativity, it came to be understood, emerged from the divine wellspring of the individual soul. The white, male European artist achieved a celebrated status. While some of the earliest proponents of the idea of individualistic creativity posed this romantic ideal against the growing corrosive power of capitalism and in contrast to the crass and base cupidity of the businessman, the archetype was quickly enclosed: The artist came to be seen as the glamorous mirror image of the entrepreneur, the heroic, driven individual who tamed chaos and created profitable beauty and order in the world through force of will.
Such a mythology of individualistic, capitalist creativity depended (and still depends) on the defamation and degradation of its ‘others.’ The emergence of a bourgeois culture based on the ideal of individual creativity was created in contrast to the belittled creativity of the commoners: peasant dances, popular folktales, the music of travelling bards and community tradition, all these were castigated as mindless, derivative and fundamentally uncreative, in large part because they were collective or common practices, which had little place for naming a single original artist or author and were also difficult to commodify. Further, this enclosed form of ‘creativity’ made a fundamental if artificial separation between the fields of arts and culture and the realms of everyday life, discounting the creative work that is an integral part of raising children, cultivating community, telling stories, tending gardens and reproducing social life more broadly. Women, who had long been cultural leaders in commoners’ communities, were now dismissed as incapable of ‘real’ creative genius and excluded from the canon of great artists, authors and creators. The phenomenal cultural work of non-European civilisations was dismissed as merely the semi-conscious playing out of cultures locked in time, unable to achieve true creative innovation, capable only of reproducing old forms. Or, worse: they became the raw aesthetic material for European appropriation and enclosure, as in the case of the ‘primitivist’ art movements, emblematised by painters like Picasso.
This should not lead us to dismiss or reject the incredible European cultural and creative treasures of the modern, capitalist period. Nor should it encourage us to devalue the importance of gifted individual creators. But we ought to recontextualise them. No artist, composer or novelist exists outside a society that produces the food they eat, the clothes they wear, the tools they use and the community on whom they rely. In turn, no creative producer creates in a vacuum: they speak back to that society and help to shape it, often in very subtle but not unimportant ways. Further, while capitalist storytelling encourages us to remember cultural history as a parade of great men, of isolated, iconoclastic creative geniuses, the reality is that, as important as each character may indeed be, each existed as part of a community of other creative producers: critics, collaborators, rivals, friends, patrons, neighbours, and on and on. Each relied on a commons pool of cultural meanings, ideas, forms, styles, and techniques pioneered by previous generations of creative producers, and in turn contributed to this pool.
Enclosures of creativity today
Our individualist, capitalist, colonial method of remembering creativity delivers the idea into the hands of forces that are, today, actively promoting another wave of the enclosure of creativity, a tendency advancing on a number of fronts. In the first place, we are told that the massive transformations of social, technological and economic life going on all around us under the global financialised austerity regime are the inevitable and, indeed, laudable results of capitalism’s magical propensity for ‘creative destruction’: the revolutionary way new innovations and the accelerating drive of capitalist competition relentlessly sweep away the past. The term was coined by economist Joseph Schumpeter in 1942, drawing on the wry observations of Karl Marx. Both thinkers noted the incredible capacity of capitalism to fundamentally transform society, but both were equally concerned by the disastrous impact this could have on people’s lives and the social fabric (and today, we might add, the environment). Yet by the 1990s this cautionary phrase had been rebranded as a celebratory slogan, implying that, in an age of unfettered global neoliberalism, individuals could no longer rely on state services, public institutions or paternalistic corporations. The age of ‘creative destruction’ (or, as it has been rebranded today, ‘disruptive innovation’) is one where the competitive individual is ascendant, where social bonds are little more than opportunities for personal leverage, and where a successful life is characterised by the embrace of multiple, part-time, temporary, precarious contracts and opportunities aimed at the cultivation of one’s ‘human capital.’
And here again, the rhetoric of creativity is enclosed: as cultural critic Angela McRobbie pointed out more than a decade ago, in this brave new world the artist has ceased to be seen as a dubious character at the margins of capitalism but has instead been cast as a ‘pioneer’ of the new economy. Who, more so than the artist, represents the archetypical worker in an age of uncertainty, individualism and reputation-based competition? Who better than the idiosyncratic, iconoclastic artist who refuses to be tied down to a single employer and who distrusts bureaucracies and paternal institutions? Indeed, we are all increasingly encouraged to understand our career aspirations as if we, too, were artists, members of the sanctified ‘creative class,’ seeking to find in work not merely compensation but also ‘intrinsic’ rewards, reputational payback and a whole personality or lifestyle.
Of course, actual work in the creative industries is usually fairly remote from the gushing idealism of the neoliberal boosters with its upscale live-work lofts, lofty airport departures lounges and MacBook-filled cafés. The reality is one of precarious, part-time, temporary and, increasingly, unpaid work as internships become a compulsory rite of passage and as debt becomes the norm thanks to the escalating necessity of expensive university credentials. The vast majority of ‘creative’ workers are essentially subsidising their creative pursuits through other, more banal forms of employment (e.g. waiting tables).
The problem here is evidently more serious than merely the confiscation of a romantic language of creativity we might have once thought of as liberating (though that is a problem, for we have seen our radical lexicon relentlessly colonised by capitalist propaganda such that everything from ‘revolution’ to ‘community’ to ‘sharing’ to ‘the commons’ itself has become fodder for cynical commercial manipulation). Much of the misplaced enthusiasm for the ‘creative class’ the ‘creative city’ and the emergence of the ‘creative economy’ is based on a more or less true observation: that we are all, inherently creative beings. But the reality is that, today, most of us must commit this creativity to the banal routines of daily survival under austerity neoliberalism, one in which all of life’s risks and hardships are downloaded onto the individual and where nearly every sphere of life has been opened to the competitive drives of the market.
It is also true that, in an age of sophisticated computer technologies and ubiquitous mobile phones, many of us now have at easy disposal tools for doing creative work and sharing it in phenomenal new ways, from art and design to music, from online publishing of fiction and poetry to photography and film-making, though we dare not probe too deeply into the seizure of creativity from workers hyper-exploited in the iPhone/MacBook global supply chain that make such creativity possible. Yet we have, somehow been forced to accept these new opportunities at the expense of any collective creative power to transform society, to open up questions of economic and social organisation and, importantly, power to creative critique and transformation. As the opportunities for a highly individualised form of capitalist creativity have in some ways become democratised, the substantial opportunities for common creativity have been largely enclosed.
Towards a common creativity
How then to respond? To my mind, the key is to recognise and valourise the creativity of the commons, to think carefully about the ways in which collective creativity is being practiced and actualised in efforts to defend and expand those common elements of our lives. This is not simply a matter of inserting or integrating more ‘culture’ into our common projects, though of course music, theatre, visual art and design are essential components of common spaces and practices. Indeed, these cultural forms help us build stronger, more resilient and powerful communities because they speak to and reveal, somehow, that ineffable quality of collective action: the odd and inexplicable way that ‘we’ are more than the sum of our parts, the way that, when we work together based on non-hierarchical, grassroots democratic egalitarian principles, we are able to generate tremendous creative capacity and also transform ourselves in profound ways. It is this collective, co-operative capacity that I think we need to value and honour as creativity of the commons. This is the creativity that emerges in the long radical democratic meetings of co-operatives, in the streets as protesters collectively evade and outmaneuver police, in the anti-oppression training sessions where we learn to unpack and undo the forms of privilege and power that divide us, or in the everyday labours of collaboratively reproducing life outside the market’s discipline.
Let me close with a few brief interesting examples of what might be termed art towards the common. I say ‘towards’ to note that all creativity is, as I have been arguing here, always partly common, but that we usually fail to acknowledge it as such. But the practices I will gloss here are explicitly oriented towards using the power and the unique historical and social position of ‘art’ to help us foster and reimagine the commons.
Caroline Woolard is a New York City based artist who specialises in creating common environments and resources for the commons. She describes her work as “research-based and collaborative” and aimed at “co-creat[ing] spaces for critical exchange, forgotten histories, and desire inducing narratives.” For instance, in a 2013 temporary project at New York’s Museum of Modern Art, Woolard established an Exchange Café in which gallery-goers could obtain tea, milk and honey but were advised that “this establishment does not accept credit cards, debit cards, or national currencies. We accept your individual labor, goods, ideas, and/or services in exchange for our products.” But beyond creating unconventional works and experiences for the ‘art world,’ Woolard’s practice extends into collaborations to create new resources for the commons in the broader community. This included helping establish OurGoods.org, a web-based platform to enable bartering in New York City, and TradeSchool.coop, an open-source scheme that allows communities to establish their own schools for commoners to share their skills, talents and knowledge, again, based on the principles of non-monetary exchange. In all this work, Woolard turns her own impressive creative talents and energies towards generating spaces, processes and opportunities to cultivate and activate more common, collective and community-based creative potentials. She is not simply ‘doing’ art with the public, but creating the possibility for commoners to realise and recognise the creative dimensions of working together, and, by extension, the ways in which this creative power and potential is almost everywhere enclosed under today’s capitalist system. Such a practice is rooted in a deep and long-term commitment to building radical democratic community, to communing.
Mi’kmaq artist Ursula Johnson is also actively working on what we might understand as the commons, but in a very different idiom. Based in Halifax and Cape Breton, Canada, Johnson’s work focuses on honouring and reinventing the artistic traditions of her Indigenous ancestors, traditions that have been the target of the Canadian state’s genocidal policies since colonisation. In 2010 Johnson curated an exhibition of the basketwork of her grandmother, famed Mi’kmaq artist Caroline Gould, affirming and valourising both Gould’s unique genius and talent as well as the broader cultural ‘commons’ of knowledge and skill which has been handed down generation to generation for centuries.* But Johnson is also a phenomenal basket-weaver in her own right, and has worked with this form to create works that speak to today’s social and political issues, including queer rights and identity, the violent colonial management and surveillance of Indigenous people (past and present), and the migration of Indigenous peoples to cities. Johnson also uses basketwork in powerful public performances that compel both Indigenous and non-Indigenous audiences to attend to and recognise the ongoing legacies of colonialism and genocide. In this sense, Johnson not only draws on a heritage and history of common creativity, but also seeks to create opportunities for common reflection and reconsideration. Importantly, Johnson’s work does not offer us an easy or celebratory vision of the commons. Rather, it forces us to think about how past and present oppression, inequity and violence presents a barrier to creating new commons. This is art towards the common, but also art towards the uncommon, art that troubles or complicates the common.
To these examples we might add the wide variety of artists, musicians, playwrights, writers and others who are committed to creating non-commercial public events to bring communities together, or who do the important work of either inspiring or questioning common projects. We are seeing, across a range of media, the emergence of new, experimental forms and practices that take as their task not simply the creation and possibilities of aesthetic beauty (which remains important), but also the revelation of our common capacities, the beauty of cooperation, and the possibilities for a world beyond enclosure. In this sense, we are perhaps rediscovering, or at least learning to value once again, the creativity of the commons and the commons of creativity.
*It should be noted that the interpretive frame of “the commons” here is mine: the Mi’kmaq language has other, better, older words to describe with greater precision and care similar themes.
Max Haiven is a writer, teacher and organiser, and an Assistant Professor in the Division of Art History and Critical Studies at the Nova Scotia College of Art and Design.
Source: https://blog.p2pfoundation.net/creativity-and-the-commons/2017/05/12
This month, the Gallery at the Park in Richland presents “Four Northwest Artists,” an exhibit featuring the works of Pamela Claflin, Laura Gable, Deanne Lemley and Melanie Thompson. Though the four artists have a shared interest in beautiful landscape paintings, each brings a unique style and perspective to the collection.
Pamela Claflin’s inspiration arises from a deep love of nature. She often works outdoors, allowing her experiences in nature to influence her paintings, and she uses road trips as a way to generate ideas and collect painting materials. For more information on Claflin, visit ClaflinArt.com.
Meanwhile, Laura Gable’s devotion to landscapes began when she was young, originating from family vacations to national parks and hours spent playing outside in the rural Midwestern sunshine. She is influenced by both nature’s vast spaces and the tiny details that emerge in light and shadow.
Gable portrays the essence of nature in her paintings, using “lyrical trees, rugged basalt cliffs, the soft addition of chamisa and sagebrush, and water’s quiet, meandering fluidity” as inspiration. You can learn more about her at LauraGable.com.
For Deanne Lemley, art is a “celebration of life itself.” She aims to go beyond simply identifying and illustrating an object. Her goal is to transfer everything she sees, thinks and feels to her paintings. Lemley says she often finds it challenging to capture the beauty of a landscape in the medium of a painting, though she enjoys being able to relate nature’s beauty to viewers. Visit Lemley’s website at DeanneLemley.com.
Finally, Melanie Thompson’s interest in landscape painting stems from her background as a wildland firefighter. She spent time hiking, monitoring fire behavior and sleeping under the stars, which gave her the opportunity to appreciate beautiful landscapes and the “fierce beauty of nature.” Trained in design, she uses her skills to create simple yet powerful images.
Her goal is to reflect the pure wonder of the outdoors and allow viewers to lose themselves in her paintings. She says, “I’d like them to taste the desert air, feel the sun on their skin, and see the splendor of the wild places around all of us.” Thompson’s website can be found at MelanieThompsonArt.com.
These artists have an impressive body of experience, with all four being featured in various shows and galleries in Washington, Oregon and across the United States.
The Four Northwest Artists exhibit will be on display at the Gallery at the Park from Oct. 3 to Oct. 27. The reception will be held on Oct. 6 from 6 to 8 p.m.
Adult workshop
On Nov. 18, from 1 to 4 p.m., Gail Roadhouse will hold a workshop on watercolor holiday cards. The fee for the class is $30, and the deadline to register is Nov. 10. If you sign up for this workshop, please contact the Gallery at the Park in advance to receive all the necessary information.
The Gallery at the Park is open from 10 a.m. to 5:30 p.m. Tuesday through Friday and 10 a.m. to 5 p.m. on Saturdays. Visit the Gallery at 89 Lee Boulevard near the entrance to Howard Amon Park in Richland, or visit online at galleryatthepark.org.
Source: https://marisaquirk.com/2017/10/01/four-northwest-artists/
Career Opportunities at Graham Hospital
All offers of employment are contingent upon successfully passing laboratory tests and a physical.
Graham Hospital is an equal opportunity employer committed to all aspects of the employment relationship including but not limited to, employment, promotion, demotion, transfers, terminations, benefits, recruitment and recruitment advertising, rates of pay and any other form of compensation, selection of training, and all hospital sponsored social and recreational programs. It is the policy of the hospital that all applicants for employment be considered, that all persons employed including management staff, professionals, technicians, and all others, will be treated without regard to race, color, age, religion, national origin, physical or mental handicap, or sex, except where these may be a bona-fide occupational qualification.
It is our policy to maintain a working environment for our employees that is free from harassment or discrimination in any form.
We offer a compensation program that provides fair, equitable and market competitive wages. Our program was designed to attract and retain competent, qualified personnel.
Essay writing is the craft of producing a piece of text with a clear purpose: to convey information or to present an opinion. There are three main kinds of essay: the argumentative, the expository, and the descriptive. The most efficient way to identify which type you should write is to determine the goal you are trying to reach.
Argumentative
Argumentative essays require a great deal of research and analysis. You must gather all the available evidence and present your argument persuasively; your goal is to convince the reader to accept your position.
The most effective way to improve your essay-writing skills is to practise, and to seek feedback from your teacher and peers. There are also numerous online resources that provide samples of argumentative essays; these will give you an idea of what to expect and how to structure your own.
An outline for an argumentative essay helps you organise your thoughts and build your writing step by step, and ensures that you do not miss any important elements of the essay.
The first thing to consider in an argumentative essay is choosing a compelling topic. Base your choice on your own interests and the scope of the assignment, and avoid topics that are either too heated or too boring.
After that, select a claim. Claims can be positive or negative, and either can support a strong argument: it may challenge a prior claim or simply pose a question. A good claim acknowledges the worth of differing views, and the writer should explain why readers should take an interest in the issue.
The body of the essay should contain three paragraphs, each addressing the topic from a different angle and building on the argument of the one before. The number of paragraphs can vary, but three is the most common.
The conclusion is the last section of your essay. It should restate your thesis statement, persuade readers to agree with it, and end with a call to action.
A well-written argumentative essay pairs a compelling topic with enough evidence to back up the claim. It should offer an unbiased evaluation, an appropriate style, and an appealing tone and vocabulary.
Expository
An expository essay asks you to produce a well-organised, structured explanation of an issue. There is a range of different types of expository writing, each with its own structure. Here are some useful tips for writing an excellent expository essay.
The first step is to decide on a relevant topic and to research it thoroughly. A relevant, well-researched topic will make your essay compelling.
Once you have decided on a topic, the next step is to craft a thesis statement. This statement serves as the focus of your essay; it should be clear and concise, and it must accurately describe your overall argument.
The body should present the facts and evidence that support your points. Each paragraph should serve a purpose, drawing on anecdotes or real examples, and you can use statistics or logic to back up your arguments. Cite your sources and follow the rules set by your institution.
The introduction should present your topic and thesis statement and outline how the theme will be developed. An expository essay is not the place for your own opinions; your argument should be based on the available evidence.
The concluding paragraph should restate the thesis, reinforce your claims, and highlight the importance of the subject. It should not simply repeat the body paragraphs; summarise them instead, or offer suggestions and recommendations.
A good conclusion will not only boost your essay's grade but also make a lasting impression on the reader. Summarise the body paragraphs, draw your conclusion, and outline what you plan to do next to understand the topic.
One final piece of advice: read your essay over again and again, and edit any passages that are unclear or confusing. If you are unsure about something, your teacher or a friend can help.
Description
Descriptive writing tells a story by using words to stimulate the senses. You can describe a person, an object, or a place; the important thing is to give the reader a clear picture of the subject.
A good description is an exercise in creative thinking, so choose your subject wisely. If you write about a well-known person, for instance, you might describe his or her accomplishments, the things that shaped his or her life, or what has been made of that life since.
A well-written descriptive essay should trigger a strong emotion in the reader. It should also demonstrate a thorough understanding of the subject and its central idea. If you are unsure what to include, think about the major points you want to make and which matter most.
A descriptive checklist can guide you on which details to include: a brief account of the subject's physical features, for example, and perhaps a sketch of his or her personality.
A great descriptive essay needs to leave a lasting impression. You can achieve this by establishing your own point of view and choosing the right words.
An introduction, body paragraphs, and a conclusion are the essential elements of any descriptive essay. The body paragraphs should offer a variety of ideas and images, and the conclusion should tie every point together, explaining the major points of the body text and why they are crucial.
The best way to begin a descriptive essay is to consider your readers' tastes and preferences. Narrow down the subjects you might write about, then create a structure to guide your writing.
A descriptive essay should also include a memorable hook, such as an excerpt from a literary work or an original metaphor, that catches readers' attention and encourages them to keep reading.
A well-constructed framework is the ideal starting point for a descriptive essay. Whether you are writing about a person, an event, or a place, include interesting and relevant details.
Conclusion
There are many ways to write the concluding paragraph of an essay. Make sure the conclusion reads as a piece with the rest of the paper: it should be clear and compelling and leave an impression on the reader.
Be careful not to make your conclusion too specific. Instead, use a few simple but powerful sentences to summarise the key points of your essay, so the reader has something to take away from it.
The concluding paragraph should give readers a sense of closure. It may recall your arguments or suggest wider consequences, but the final sentence should be a brief summary of the whole piece.
Keep in mind that the conclusion is more than a sales pitch; it is also a gift to the reader. It should reflect on the arguments you have presented and offer insight into your subject.
Writing a strong conclusion can be difficult. Many writers fall into the temptation of repeating what they have already said, which makes the whole piece feel repetitive. A clear conclusion, concise sentences, and an appeal to the emotions work best for reinforcing the central point.
The closing paragraph gives the author a chance to show readers how the subject is relevant to their everyday lives, and it can also serve as a call to action. Ideally, it reviews the issue, restates the argument or revisits it in a new way, and strikes a pleasing balance between summary and pitch.
Finally, proofread the essay. Double-check your spelling and accuracy, and ask a teacher or professional editor for help if needed.
The conclusion is a crucial part of your essay, so it must be done well. A poorly written conclusion creates a disjointed effect and leaves the reader without a clear memory of the essay's content; rehashing your thesis is a particularly poor idea in short essays.
Source: https://www.jordancasualshoesonline.com/the-art-of-essay-writing/
Strawberry and buttermilk ice cream
I have made a number of other variations of strawberry ice cream and sorbet (see the ice cream section on this blog), but I like to keep experimenting with variations of the same ingredients for a couple of reasons. The same base ingredient, strawberries, can taste very different when made into an ice cream, a sorbet, or even a salad, which allows the chef to taste the nuances of the fruit when it is presented differently. You also pair the dishes differently: a rich ice cream after a lighter dinner, for example, or a light, flavorful sorbet after a heavy or spicy meal. And finally, as a chef, I do not feel my skills and palate are improving unless I try something new all the time.
No matter the reason, I am so glad I tried this recipe. The fresh strawberries are brought alive by the soft yogurt and are beautifully creamy. You will be glad you did the extra step of removing the strawberry seeds: there is nothing to distract your palate from enjoying the sublime combination of flavors. I do love ice cream, but I will admit, this recipe absolutely dazzled me.
One of the problems of having too many cookbooks is that there are not enough days in a year, or people, or reasons to cook from them all the time. I generally trend toward 275 recipes a year (see my blog). It took me some time to arrive at this recipe, one that I have eyed for some time. Like all her other cookbooks, this one is outstanding; her cookbooks bring a wonderful diversity of flavors and cuisines to my table. Go out and buy them all, I cannot recommend them enough!
For more delicious recipes from this cookbook click here.
1 lb strawberries, hulled and roughly chopped
3/4 cup sugar (superfine, if you have it on hand)
1 teaspoon vanilla essence, I love the Sprig brand
1 1/4 cups buttermilk, or yogurt
1/2 cup sour cream
Pinch of salt
Mix the hulled strawberries, sugar and vanilla in a bowl and allow to sit and macerate for 30 minutes. The strawberries will soften and give out some juices.
Add the strawberry mix (keeping a few tablespoons for garnish if desired) to a small blender and purée to a smooth liquid. Work the purée through a fine-mesh sieve to remove the seeds.
Mix with the buttermilk, sour cream and salt till well combined.
Churn in an ice cream maker as per the manufacturer's instructions. Or you can freeze in the freezer, mixing the ice cream with a fork every hour till set.
Serve garnished with fresh strawberries if desired.
Source: https://www.abowlofsugar.com/post/strawberry-and-buttermilk-ice-cream
Sticking to a regular exercise routine can be challenging. Many people struggle to make it to the gym or lose motivation along the way with a long list of excuses that inevitably involve time and schedule clashes.
Here are six common excuses and fool-proof way to bust them for good.
1. “I’m bombarded with family commitments.”
Time. If only we had more, right? Well guess what? We’re all given the same amount – it’s just a matter of what you choose to do with it. Having no time is by far the biggest barrier to fitness, but it can be overcome with a few simple strategies.
Firstly, if you know your days are jam-packed and the evenings are just as hectic, get up 30 minutes earlier a few times a week to go for a brisk walk or do a lounge room body weight circuit. Secondly, take every opportunity to move more throughout the day by taking the stairs, going to the bathroom on a different floor, or changing the workplace meeting culture by instigating walking meetings with colleagues (best for small groups). Anything to offset the amount of sitting you do will reap health and fitness benefits.
2. “I’m just not seeing results.”
Don’t expect miraculous change after just a few sessions at the gym. The minimum amount of time to start seeing physical changes is at least 6 to 8 weeks, if you couple exercise with a balanced eating plan.
After just one gym session your blood pressure will be lower, your metabolism elevated, and you’ll notice a boost in mood and self-esteem. If you focus on “how fit feels,” you’ll make it a habit to enjoy being active much more and the outcome of your desired weight, shape or performance goal will happen in due course.
3. “I’m self conscious about the way I look.”
Most people at the gym are so consumed with their own routine that they probably won’t notice the weight you’re lifting or how fast you’re cycling on the bike. So stop the self-doubt and comparing yourself to others. A good way to boost your self-confidence is to schedule gym visits during off-peak hours or seek advice from a gym instructor who can ensure you have the right technique and motivate you to try new things.
Once you have mastered a few workouts and feel more at ease, you can experiment with new exercises and pushing your limits, both physically and mentally.
4. “I don’t know where to start.”
If gym jargon has you completely stumped, you’re not alone. All the equipment in the gym can seem daunting and if you don’t know how to use it correctly, it can lead to injury.
Don’t be afraid to ask for help. If it’s an option for you, book in a few personal training sessions so someone can guide you through a balanced program. Otherwise try some intro classes run on the gym floor, such as Freestyle Group Training, or specialised programs like Team Coaching to get your fitness off on the right foot.
5. “I’m afraid I’ll hurt myself.”
This one is easily avoided if you properly warm up and cool down during each session. The body needs time to transition from a resting state, so doing 5-10 minutes of walking or light jogging, stretching or mobility exercises will ensure your muscles are warm and prepared.
6. “It’s cold and raining outside, I can’t be bothered.”
The weather can be a big barrier if you prefer to exercise outdoors, hence why you need a backup plan. On hot days, schedule your workouts during cooler parts of the day, such as early morning, or workout indoors in wet or cold weather. A simple body weight circuit of push-ups, squats, tricep dips, lunges, and back extensions hit all the major muscle groups of your body and can be done anytime, anywhere.
WHAT FITNESS BARRIERS DO YOU FACE?
Visit us today and take the first step on your fitness journey. Our experts are here to keep you motivated and stay ahead of the game. Download a 3 Day Trial here.
Source: http://getthere.fitnessfirst.com.au/fitness/6-common-excuses-and-how-to-bust-them/
Environmental change post-COP26: are we doing enough to invest in tomorrow?
by Sarah Walpole
This month, guest blogger Dr. Sarah Walpole discusses the health impact of the climate crisis, whether global actions post-COP26 are sufficient and the valuable role that health professionals and health services can play as stewards. Sarah is a junior doctor in the North East of England.
“Tackling climate change while protecting and enhancing our natural assets, and the biodiversity that underpins them, is crucial to achieving a sustainable, resilient economy. It is also crucial to maintaining a sustainable and resilient NHS.”
- Sajid Javid’s open letter of 10th Nov 2021 to NHS trust CEOs
As the buzz following last year’s climate conference in Glasgow continues to diminish, one thing is clear: governments alone will not save us. The commitments to climate action made at COP26 were not good enough. We’re on track for disastrous climate change, with associated extreme weather events (including heat and flooding), sea level rise, species (biodiversity) loss, disruption to agriculture and livelihoods, and increased human migration. All of these impacts of anthropogenic environmental change, and many more, contribute to chains of causation that end in harm to health.
Yet, the outcome is not inevitable: every degree, or fraction of a degree, of warming averted counts. Some progress was made at COP26. There is room for hope; and there is need for action. Arguments raised before COP26 still stand and health leaders still have a key role to play. Health professionals, while overwhelmed by the challenges of COVID and its latest variant, continue to recognise the interlinked environmental challenges that we face. As new challenges emerge, unprecedented actions have been taken. Following the Russian invasion of Ukraine on 24th February, the UK health secretary, Sajid Javid, told NHS trusts to end contracts with Russian energy suppliers. Meanwhile, many hospitals are installing solar panels on roofs and the UK government is offering supporting grants, recognising that local generation may contribute to providing energy security. Clinicians are developing ‘environmental literacy’ and understanding of environmental change and its health impacts. This blog provides an update on the key outcomes from COP26, and what they mean for health.
What can we expect if all countries meet their COP26 commitments?
While the majority of countries improved their commitments (or nationally determined contributions, NDCs), following COP26, we are still set to see temperatures rise by around 2.4°C by 2100 in a best-case scenario. This best-case scenario involves countries meeting both their ‘conditional’ commitments (achievement depends on the actions of other countries) and ‘unconditional’ commitments (must be done regardless of the actions of other nations) to reduce emissions by 2030.
This offers a small improvement: to put the predicted 2.4°C rise in the context of previous commitments, if all commitments from the Paris COP21 in 2015 were met, we would have expected to see even greater warming, with a rise of 2.6 to 2.7°C. The difference in 0.2 or 0.3°C may sound like marginal gains, but every 0.1°C counts. The difference between conditional commitments being met or not is also important. If countries only meet their unconditional commitments, we can expect an extra 0.1°C of warming. Action at a local and regional level will help to bring us closer to the emissions reductions that we need.
What will this global heating mean for health?
It is likely (about 70% chance) that a 1.5°C temperature rise above pre-industrial levels (defined as the average over the half-century between 1850 and 1900) will be a reality during one month or more in the next five years. Global temperatures in the coming five years are predicted to be about 1°C above pre-industrial levels. It’s important to remember that the impacts of climate change are not equally distributed. Asia will see more warming than the global average, for example. This will have wide-reaching health impacts, from drought and food insecurity to heat-related mortality. Research has already shown that extreme heat is increasing the risk of pre-term birth, which will have particular impacts for populations with fewer resources, where women work as subsistence farmers and do not have access to air conditioning.
Meanwhile, forty-six countries committed to significantly reduce short-lived climate pollutants (SLCPs), which have health impacts secondary both to local air pollution and to climate change. SLCPs include methane, hydrofluorocarbons (HFCs, the gas used as a propellant in metered-dose inhalers), black carbon (a.k.a. soot, produced by burning coal) and tropospheric ozone. While stratospheric ozone protects us from UV radiation, tropospheric (ground level) ozone causes exacerbations of chronic obstructive pulmonary disease and asthma and is responsible for an estimated 1 million premature deaths worldwide each year. Tropospheric ozone is continuously being produced from methane and other hydrocarbons and lasts hours to weeks in the atmosphere. SLCPs contribute to the 7 million deaths per year caused to air pollution.
Supporting those already battling the climate crisis
A major failing of COP26 was the lack of agreement on financing for adaptation to climate change in countries worst affected by it. In 2019, USD 80 billion was mobilised, but only 25% of this went to adaptation. Way back at COP15 in 2009, a goal was set to ‘mobilise jointly’ (i.e. for richer nations to contribute) USD 100 billion per year of funding to support resilience, adaptation and energy transitions in developing countries. This may sound like a lot of money, but in the context of global financing it’s not a big ask. To put it in context, the US alone has mobilised over USD 4 trillion (that is 40 x 100 billion) on COVID relief funds, USD 100 billion of which has reportedly been stolen fraudulently and USD 500 billion of which is as yet unspent. The discussion about funding for adaptation and sustainable transition has been postponed until COP27 in Egypt and the goal is not expected to be met until 2023.
Health systems as anchor institutions: the example of Clean Energy
One COP26 commitment that hit the news headlines was “the phasedown of unabated coal power and phase-out of inefficient fossil fuel subsidies”. A last minute amendment from India and China changed the words ‘phase out’ of coal to ‘phase down’. Alok Sharma later said he was ‘deeply sorry’ that this significantly weaker wording was in the final agreement. Climate transparency states that to avoid reaching dangerous tipping points and positive feedback loops accelerating climate change, coal must be phased out by 2030 in OECD countries, 2037 in non-OECD Asian countries and 2040 across all countries.
This is achievable. Renewables’ contribution to generating power in G20 countries increased from 19% in 2010 to 27% in 2019. Brazil is ahead of the curve, generating over 80% of its energy from renewables, which are cleaner and have fewer impacts on local air pollution and negative consequences for respiratory health. Health services can lead the charge to cleaner power. As major consumers of electricity, they can purchase renewable energy, sending a strong signal to energy suppliers, and use solar panels and other approaches to local energy generation. As mentioned earlier, University Hospitals of North Midlands has over 1000 solar panels installed on their roofs through a community scheme, which will lead to major financial savings for the Trust.
How far off the mark are COP26 commitments?
To keep within a safe space for human health, we should limit warming to under 1.5 °C, which would require annual emissions to be at least 20 gigatonnes of CO2 equivalent (GtCO2e) lower than the nationally determined contributions from COP26. To keep within 2°C of warming, we’d need countries’ annual emissions to be about 10 GtCO2e less. We’re currently globally emitting over 40 GtCO2e per year. If we keep emitting carbon at this rate, then by 2030 we will have emitted a further 460 GtCO2e – all of the carbon that we can emit to maintain a 50% likelihood of staying within 1.5 °C of warming (according to estimates from the start of 2021).
Who will act?
COP26 has more to offer than the outcomes of government discussions. Fifty nations committed to developing climate-resilient health systems, and forty-five countries committed to low-carbon health systems. These national commitments bolster ongoing efforts at institution level, including the over 1,300 institutions that have joined the Global Green and Healthy Hospitals network. However, there are still many at-risk lives and livelihoods to play for. The future depends not only on the actions that we take with our organisations and our communities, but also on if and how we hold national and global decision makers to account.
Source: https://www.wghuk.org/post/environmental-change-post-cop26-are-we-doing-enough-to-invest-in-tomorrow
Sandra Heleno was born in 1971 in Lisbon, Portugal. She graduated in Physics Engineering and Technology at Instituto Superior Técnico, Technical University of Lisbon, Portugal (IST/UTL), and holds a Ph.D. in Physics Engineering from the same institution. From 1996 to 2008 she conducted research in the area of volcanic seismology while deploying and running volcano and seismic monitoring networks. Since then, her research has addressed the mitigation of natural hazards (e.g. floods, landslides) through remote sensing image processing.
Source: https://cerena.ist.utl.pt/user/674
This position is full-time and works approximately 37.5 hours per week.
Department Name/Job Location
This position is in the Olin Library. This position is for the Danforth Campus.
Essential Functions
POSITION SUMMARY:
The Japanese Studies Librarian engages in, promotes and provides support for scholarly and curricular activities for faculty, students and the community. In addition, this position serves wider discovery needs and research support through the creation and utilization of research tools, research assistance and instruction, collection development and liaison duties in their assigned subject area(s). The Japanese Studies Librarian is expected to continue their professional development in their assigned subject area(s) as part of their work serving students, faculty, colleagues and the community at large. The Japanese Studies Librarian will be involved in various library activities managed by other library programs and within the Research and Academic Collaborations Service Division.
PRIMARY DUTIES AND RESPONSIBILITIES:
- Research services responsibilities. The incumbent in this position is expected to deliver research services by:
- Providing and assessing in-person/virtual reference and research consultation services to fulfill user research needs
- Liaising with faculty, students and others in assigned subject areas to facilitate information about library resources/activities through regular meetings and appropriate communication channels
- Liaising with faculty, students and others in assigned subject areas to solicit material collection or service related needs
- Promoting the role of research libraries to the university and community at large by participation in, but not limited to, campus, regional and national committees and projects
- Appreciating and understanding the evolving nature of scholarship, scholarly output and the research data cycle and appropriately sharing that knowledge with faculty and students
- Connecting researchers, when applicable, with others within the libraries in a collaborative manner addressing scholarly communication needs including, but not limited to, publishing, data management and repository services
- Collaborating with Special Collections, Data Services and other areas within the Libraries providing cross-divisional partnerships and subject expertise to ensure the success of projects, collections, etc.
- Collection development. In collaboration with the Collection Services Librarian and with colleagues within the Libraries, the incumbent develops, maintains, and assesses subject specific collections for the libraries. Responsibilities include, but aren’t limited to:
- Consultations with subject faculty, students and staff in selecting materials
- Manage, assess and maintain collection budgets, collection materials and collection approval plans
- Professional awareness in subject area and collection criteria
- Collaboration with other areas in the libraries ensuring successful ordering and cataloging of assigned collections
- Instruction. Incumbent collaborates with colleagues within and beyond the libraries to deliver, plan and assess strategic and effective instruction and learning materials to meet the curricular needs of faculty and students. Responsibilities include, but aren’t limited to:
- Design, deliver and assess effective instructional sessions and activities through the solicitation of user and instructor feedback
- Design, implement and assess effective online learning and digital tools
- Develop and enhance teaching skills, as appropriate
- Professional development. Incumbent will further their professional knowledge not only in their subject specialty, but also in the field of librarianship and areas relevant to their position by:
- Participation in professional library or subject related organizations and committees
- Attending or participating in professional webinars/instruction sessions
- Stay current in their subject areas through various methods
- Perform other duties as assigned or appointed
Required Qualifications
- Master’s in East Asian, Japanese Studies or related field and two years of experience.
- Fluent command of both written and spoken English and Japanese.
Preferred Qualifications
- MLS from an accredited library program or PhD in related field.
- Working experience in an East Asian library or an academic library.
- Reading knowledge of Korean.
- Cataloging experience with ALA-LC standard for Japanese and Korean Romanization.
Salary Range
The hiring range for this position is $47,151 – $61,289 annually.
Department Summary
The Washington University Libraries comprise 12 libraries on the university's Danforth, West, and Medical School Campuses. Reporting to the Vice Provost and University Librarian and working in association with 3 campus partners, the libraries operate as a collaborative system in support of the research, teaching, and learning mission of the university. The Libraries house more than 3.6 million books, journals, and other print materials; 2.5 million microforms; 50,000 AV titles; and have access to more than 65,000 electronic journals and more than 1.8 million e-books. The libraries' 135 dedicated professional and support staff serve an increasingly diverse community and exemplify the libraries' commitment to meet the needs of faculty, staff and students for the present – and for years to come.
Benefits
- Retirement Savings Plan
- 22 vacation days
- 8 Paid Holidays
- Sick Time
- Tuition benefits for employee, spouse and dependent children
- Free MetroLink/Bus pass
- Free Life Insurance
- Health, Dental, Vision
- Health Savings Accounts (HSA)
- Long Term Disability Insurance
- Flex Spending Plan
- Other Benefits
Human Resources website (hr.wustl.edu)
EOE Statement
Washington University is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity or expression, national origin, genetic information, disability, or protected veteran status.
3 Qualities to Consider When Hiring an Architect
If you’re constructing a commercial building, you have a lot of decisions to make. One of these is selecting the right architect to work with. The professional you choose plays a huge role in how your building will function and look, so it’s important to select one who meets all the necessary requirements. Here are three considerations to take into account.
3 Tips for Hiring a Commercial Architect
1. Look at Previous Projects
Reading customer reviews and looking at pictures of previous projects will tell you a lot about an architect's style and how well liked they are by past clients.
2. Talk Budget
One of the most important factors to consider before hiring an architect is your budget. Ultimately, it will give you major guidelines as to who you can and cannot hire. When discussing your budget with an architect, ask about the up-front costs and any additional fees. If you don't go over all the details before signing, you could end up paying more than you anticipated to finish the project.
3. Ask About Building Departments
Hiring an architect who can get a project through the building department with only one or no revisions is important when you want to finish on time. Each round of revisions that is not passed through the building department requires re-drawings, revisions, resubmissions, and reviews that take precious time, so it's crucial you hire a professional who is skilled with this portion of the design process.
How massive stars die - what sort of explosion and remnant each produces - depends chiefly on the masses of their helium cores and hydrogen envelopes at death. For single stars, stellar winds are the only means of mass loss, and these are chiefly a function of the metallicity of the star. We discuss how metallicity, and a simplified prescription for its effect on mass loss, affects the evolution and final fate of massive stars. We map, as a function of mass and metallicity, where black holes and neutron stars are likely to form and where different types of supernovae are produced. Integrating over an initial mass function, we derive the relative populations as a function of metallicity. Provided single stars rotate rapidly enough at death, we speculate upon the stellar populations that might produce gamma-ray bursts and jet-driven supernovae.
Recommended Citation
Please use publisher's recommended citation.
Qatar's fashion industry has thrived over the past decade, from local entrepreneurial designers making a name for themselves one hashtag at a time, to local investment firms acquiring Valentino and the French premium fashion house of Balmain. To ensure the fashion industry's growth in the region, Qatar has invested in arts education. In 1998, Virginia Commonwealth University School of the Arts opened a branch campus in Qatar (VCUQ). The college offers five different art majors, including fashion design. This was the first step Qatar took to show it is investing in its creative youth to contribute to the industry in the years to come. It has been almost two decades since VCUQ opened its doors in Qatar. Here, contributor Alessandra Al Chanti looks to the stories that have progressed beyond the campus walls.
Following her passion and starting up her own local abaya brand was what pushed her to pursue her dream of becoming a fashion designer, by degree. Her brand focuses on creating “simple yet timeless” abayas that women can feel empowered and comfortable in.
The lack of job opportunities for local designers, especially fresh graduates, has not stopped them from making a name for themselves; in fact, it seems to be the motivating factor to create their own brands and see their craft come to life regardless of the obstacles in their way.
Please read carefully and completely the terms of the agreement that follows. By accessing the ANZSCTS website you agree to be bound by the terms of this agreement. If you do not wish to be bound by the terms of this agreement, you must not access the site.
Every effort is made to provide information that is accurate. However, protocols, courses and other matters contained in this website are subject to change at any time by appropriate action of the Society. We give no assurance or warranty that information on this site is current, and take no responsibility for matters arising from changed circumstances or other information or material that may affect the accuracy or currency of information on this site.
This website is provided on an ‘as is’, ‘as available’ basis without warranties of any kind, express or implied, including, but not limited to, those of title, merchantability, fitness for a particular purposes or non-infringement or any warranty arising from a course of dealing, usage, or trade practice. No oral advice or written information provided shall create a warranty; nor shall members or visitors to the site rely on any such information or advice. This publication is not intended to be a contract, explicit or implied, and the ANZSCTS reserves the right to make changes in the information contained.
The user assumes all responsibility and risk for the use of this website and the Internet generally. Under no circumstances, including negligence, shall anyone involved in creating or maintaining this website be liable for any direct, indirect, incidental, special or consequential damages, or lost profits that result from the use or inability to use the website and/or any other websites that are linked to this site. We accept no liability or responsibility to any person as a consequence of any reliance upon the information contained in this site. Nor shall they be liable for any such damages including, but not limited to, reliance by a member or visitor on any information obtained via the website; or that result from mistakes, omissions, interruptions, deletion of files, viruses, errors, defects, or any failure of performance, communications failure, theft, destruction or unauthorised access. States or countries that do not allow some or all of the above limitations of liability, liability shall be limited to the greatest extent allowed by law.
Users must observe and maintain the confidentiality of all security features relating to use of the website (including passwords, access arrangements etc) as notified. No data transmission over the Internet can be guaranteed as totally secure. Whilst the Society strives to protect such information the Society does not warrant and cannot ensure the security of information which users transmit. Accordingly, information is transmitted at the user’s risk.
Reference to any person, products, services, hypertext link to the third parties or other information by trade name, trademark, supplier or otherwise does not constitute or imply its endorsement, sponsorship or recommendation by the Society, nor is an endorsement of the Society implied by such links. Any external links on the Society website are for convenience only, as an index in a public library.
The Society may collect identifiable information such as contact details. Further, access to member only services has been made possible by using information contained in Society databases. Accordingly, we collect personal information to facilitate the granting of access to member only services.
The Society will not knowingly make an attempt to identify users or their browsing activities. However, in the unlikely event of an investigation, a law enforcement agency or other government agency may exercise its legal authority to inspect our Internet Service Provider’s logs, and thus gain information about users and their activities.
All of the identified information that the Society has used to grant access can be viewed and changed by users when they view their personal details. In addition, users may contact the Privacy Officer at any time to access personal information about themselves. They will be required to fill out a form to access this information. Access will be provided unless the request is unreasonable or the applicable privacy legislation permits or requires the Society to decline that access. As permitted by law, a fee may be requested to cover the cost of access.
The Society engages third parties to perform certain business functions. Therefore, it is sometimes necessary to disclose personal information to those suppliers. Disclosures may also be made to other third parties, including advisors and regulatory authorities. Where disclosure takes place, the Society seeks to ensure that personal information is handled appropriately.
We endeavour to ensure the Society website is secure through the use of firewalls. Personal data is maintained under strict security and can only be accessed internally by the Society employees who have permission to do so.
Any concerns about the Society’s handling of personal information should be directed to the Privacy Officer on +61 3 9249 1200 or at [email protected]. Requests may be required in writing and resolution of concerns will be sought as promptly as possible.
The Australian Government’s Privacy Commissioner is an additional source of information (see http://www.privacy.gov.au/), as is the New Zealand Government’s Privacy Commissioner (see www.privacy.org.nz).
The Society reserves the right to modify or amend this privacy statement at any time, provided that those modifications or amendments comply with applicable laws.
The Society may provide further relevant privacy information to users at the point of collection, in which case, such information should be read in conjunction with this policy. | https://anzscts.org/privacy-policy/ |
Three experiments using a parametric, single-subject design investigated gambling behavior in eight adult humans on a slot-machine simulation. Participants were staked with credits exchangeable for money prior to each session. Experiment 1a was a systematic replication of Weatherly and Brandt (2004), which investigated the effects of percentage payback (the amount of money gained as a proportion of the amount of money bet) on gambling. Percentage payback was varied from 50% to 110% across conditions. Consistent with Weatherly and Brandt, gambling did not vary systematically across percentage-payback conditions. Experiment 1b replicated Experiment 1a but also included forced-exposure sessions prior to experimental sessions to guarantee a minimal exposure to the percentage-payback conditions. The results were similar to Experiment 1a. In Experiment 2, win probability and size were manipulated across conditions. Only one of three participants showed sensitivity to this manipulation. In all experiments, most participants tended to place fewer bets as the experiment progressed. Most participants reported the use of a gambling strategy that was consistent with their performance on the gambling task. Overall, these results highlight the utility of studying gambling with procedures that give participants extensive experience with gambling conditions.
Recommended Citation
Brandt, Andrew Ellis, "Gambling on a Simulated Slot Machine under Conditions of Repeated Play" (2005). Masters Theses. 4602.
This application claims the benefit of U.S. Provisional Application No. 61/845,824, filed Jul. 12, 2013, U.S. Provisional Application No. 61/899,048, filed Nov. 1, 2013, and U.S. Provisional Application No. 61/913,040, filed Dec. 6, 2013, the entire contents of each of which is incorporated herein by reference.
This disclosure relates to video encoding and decoding.
Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, so-called “smart phones,” video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), the High Efficiency Video Coding (HEVC) standard presently under development, and extensions of such standards. The video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.
Video compression techniques perform spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (i.e., a video frame or a portion of a video frame) may be partitioned into video blocks. Video blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures. Pictures may be referred to as frames, and reference pictures may be referred to as reference frames.
Spatial or temporal prediction results in a predictive block for a block to be coded. Residual data represents pixel differences between the original block to be coded and the predictive block. An inter-coded block is encoded according to a motion vector that points to a block of reference samples forming the predictive block, and the residual data indicates the difference between the coded block and the predictive block. An intra-coded block is encoded according to an intra-coding mode and the residual data. For further compression, the residual data may be transformed from the pixel domain to a transform domain, resulting in residual coefficients, which then may be quantized. The quantized coefficients, initially arranged in a two-dimensional array, may be scanned in order to produce a one-dimensional vector of coefficients, and entropy coding may be applied to achieve even more compression.
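As a hedged illustration of the generic predict/residual/quantize pipeline described above (not the HEVC algorithm itself; the transform step is omitted and the function names are illustrative), the per-pixel residual, a uniform quantizer, and a zigzag scan into a one-dimensional coefficient list might be sketched as:

```python
# Illustrative sketch of hybrid-coding residual handling: residual = original
# minus prediction, then quantize, then scan the 2-D block into a 1-D list.

def residual(block, prediction):
    """Per-pixel difference between the original block and its prediction."""
    return [[o - p for o, p in zip(row_o, row_p)]
            for row_o, row_p in zip(block, prediction)]

def quantize(coeffs, step):
    """Uniform quantization of residual coefficients."""
    return [[round(c / step) for c in row] for row in coeffs]

def zigzag_scan(block):
    """Scan a square 2-D block into a 1-D list along anti-diagonals."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):                 # s indexes the anti-diagonal
        idx = range(max(0, s - n + 1), min(s, n - 1) + 1)
        for i in (idx if s % 2 else reversed(list(idx))):
            out.append(block[i][s - i])
    return out

orig = [[10, 12], [14, 16]]
pred = [[ 9, 12], [13, 18]]
res = residual(orig, pred)                     # [[1, 0], [1, -2]]
print(zigzag_scan(quantize(res, step=1)))
```

In a real codec the residual is first transformed (e.g., by a DCT-like transform) before quantization and scanning; the sketch skips that step to keep the data flow visible.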
A multiview coding bitstream may be generated by encoding views, e.g., from multiple perspectives. Some three-dimensional (3D) video standards have been developed that make use of multiview coding aspects. For example, different views may transmit left and right eye views to support 3D video. Alternatively, some 3D video coding processes may apply so-called multiview plus depth coding. In multiview plus depth coding, a 3D video bitstream may contain not only texture view components, but also depth view components. For example, each view may comprise one texture view component and one depth view component.
Techniques of this disclosure relate to palette-based video coding. In palette-based coding, a video coder (e.g., a video encoder or a video decoder) may form a so-called “palette” as a table of colors or pixel values representing the video data of a particular area (e.g., a given block). In this way, rather than coding actual pixel values or their residuals for a current block of video data, the video coder may code index values for one or more of the pixels values of the current block, where the index values indicate entries in the palette that are used to represent the pixel values of the current block. A current palette for a current block of video data may be explicitly encoded and sent to the video decoder, predicted from previous palette entries, predicted from previous pixel values, or a combination thereof.
According to the techniques described in this disclosure for generating a current palette for a current block, the video decoder first determines one or more palette entries in a predictive palette that are copied to the current palette, and then determines a number of new palette entries that are not in the predictive palette but that are included in the current palette. Based on this information, the video decoder calculates a size of the current palette to be equal to the sum of the number of the copied palette entries and the number of the new palette entries, and generates the current palette of the determined size including the copied palette entries and the new palette entries. A video encoder may perform similar techniques to generate the current palette for the current block. In addition, the video encoder may explicitly encode and send pixel values for the new palette entries to the video decoder. The techniques described in this disclosure may also include techniques for various combinations of one or more of signaling palette-based coding modes, transmitting palettes, predicting palettes, deriving palettes, or transmitting palette-based coding maps and other syntax elements.
In one example, this disclosure is directed toward a method of coding video data, the method comprising generating a predictive palette including palette entries that indicate pixel values, determining one or more of the palette entries in the predictive palette that are copied to a current palette for a current block of the video data, determining a number of new palette entries not in the predictive palette that are included in the current palette for the current block, calculating a size of the current palette equal to the sum of a number of the copied palette entries and the number of the new palette entries, and generating the current palette including the copied palette entries and the new palette entries. The method further comprises determining index values for one or more pixel values of the current block that identify the palette entries in the current palette used to represent the pixel values of the current block.
In another example, this disclosure is directed toward an apparatus for coding video data, the apparatus comprising a memory storing video data, and one or more processors configured to generate a predictive palette including palette entries that indicate pixel values, determine one or more of the palette entries in the predictive palette that are copied to a current palette for a current block of the video data, determine a number of new palette entries not in the predictive palette that are included in the current palette for the current block, calculate a size of the current palette equal to the sum of a number of the copied palette entries and the number of the new palette entries, and generate the current palette including the copied palette entries and the new palette entries. The processors are further configured to determine index values for one or more pixel values of the current block that identify the palette entries in the current palette used to represent the pixel values of the current block.
In another example, this disclosure is directed toward an apparatus for coding video data, the apparatus comprising means for generating a predictive palette including palette entries that indicate pixel values, means for determining one or more of the palette entries in the predictive palette that are copied to a current palette for a current block of the video data, means for determining a number of new palette entries not in the predictive palette that are included in the current palette for the current block, means for calculating a size of the current palette equal to the sum of a number of the copied palette entries and the number of the new palette entries, means for generating the current palette including the copied palette entries and the new palette entries, and means for determining index values for one or more pixel values of the current block that identify the palette entries in the current palette used to represent the pixel values of the current block.
In a further example, this disclosure is directed toward a non-transitory computer-readable medium storing instructions thereon that, when executed, cause one or more processors to generate a predictive palette including palette entries that indicate pixel values, determine one or more of the palette entries in the predictive palette that are copied to a current palette for a current block of the video data, determine a number of new palette entries not in the predictive palette that are included in the current palette for the current block, calculate a size of the current palette equal to the sum of a number of the copied palette entries and the number of the new palette entries, generate the current palette including the copied palette entries and the new palette entries, and determine index values for one or more pixel values of the current block that identify the palette entries in the current palette used to represent the pixel values of the current block.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, drawings, and claims.
This disclosure includes techniques for video coding and compression. In particular, this disclosure describes techniques for palette-based coding of video data. In traditional video coding, images are assumed to be continuous-tone and spatially smooth. Based on these assumptions, various tools have been developed such as block-based transform, filtering, etc., and such tools have shown good performance for natural content videos.
In applications like remote desktop, collaborative work and wireless display, however, computer generated screen content (e.g., such as text or computer graphics) may be the dominant content to be compressed. This type of content tends to have discrete tones and to feature sharp lines and high-contrast object boundaries. The assumption of continuous-tone and smoothness may no longer apply for screen content, and thus traditional video coding techniques may not be efficient ways to compress video data including screen content.
This disclosure describes palette-based coding, which may be particularly suitable for screen generated content coding. For example, assuming a particular area of video data has a relatively small number of colors, a video coder (a video encoder or video decoder) may form a so-called “palette” as a table of colors or pixel values representing the video data of the particular area (e.g., a given block). For example, the palette may include the most dominant pixel values in the given block. In some cases, the most dominant pixel values may include the one or more pixel values that occur most frequently within the block. In addition, in some cases a threshold value may be applied to define whether a pixel value is included as one of the most dominant pixel values in the block. According to this disclosure, rather than coding actual pixel values or their residuals for a current block of video data, the video coder may code index values indicative of one or more of the pixels values of the current block, where the index values indicate entries in the palette that are used to represent the pixel values of the current block.
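The palette idea described above can be sketched as follows. This is a hypothetical minimal version: it picks the most frequent pixel values of a block as the palette and maps each pixel to the index of its exact-match entry; the entry-selection thresholds, palette size limits, and escape-pixel handling of a real codec are more involved.

```python
# Hypothetical palette construction: the palette is the set of most dominant
# (most frequent) pixel values in the block; pixels become palette indices.

from collections import Counter

def build_palette(block, max_size):
    """Palette = the up-to-max_size most frequent pixel values in the block."""
    flat = [px for row in block for px in row]
    return [value for value, _ in Counter(flat).most_common(max_size)]

def index_map(block, palette):
    """Replace each pixel value by the index of its palette entry."""
    lookup = {value: i for i, value in enumerate(palette)}
    return [[lookup[px] for px in row] for row in block]

block = [[200, 200, 50],
         [200,  50, 50],
         [200, 200, 50]]
palette = build_palette(block, max_size=4)     # dominant values first: [200, 50]
print(index_map(block, palette))
```

Because screen content typically uses few distinct colors per block, the index map plus a short palette can be far cheaper to code than the raw pixel values.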
For example, the video encoder may encode a block of video data by determining the palette for the block (e.g., coding the palette explicitly, predicting the palette, or a combination thereof), locating an entry in the palette to represent one or more of the pixel values, and encoding the block with index values that indicate the entry in the palette used to represent the pixel values of the block. In some examples, the video encoder may signal the index values in an encoded bitstream. A video decoder may obtain, from an encoded bitstream, a palette for a block, as well as index values for the pixels of the block. The video decoder may relate the index values of the pixels to entries of the palette to reconstruct the pixel values of the block.
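The decoder-side mapping described above reduces to a table lookup. A minimal sketch, with escape pixels and any run-length coding of the index map omitted:

```python
# Minimal decoder-side sketch: given the palette and the per-pixel index
# values obtained from the bitstream, rebuild the block by table lookup.

def reconstruct_block(index_values, palette):
    """Map each index back to its palette entry to rebuild the block."""
    return [[palette[i] for i in row] for row in index_values]

palette = [200, 50]
indices = [[0, 0, 1],
           [0, 1, 1]]
print(reconstruct_block(indices, palette))   # [[200, 200, 50], [200, 50, 50]]
```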
The examples above are intended to provide a general description of palette-based coding. In various examples, the techniques described in this disclosure may include techniques for various combinations of one or more of signaling palette-based coding modes, transmitting palettes, predicting palettes, deriving palettes, or transmitting palette-based coding maps and other syntax elements. Such techniques may improve video coding efficiency, e.g., requiring fewer bits to represent screen generated content.
For example, a current palette for a current block of video data may be explicitly encoded and sent to the video decoder, predicted from previous palette entries, predicted from previous pixel values, or a combination thereof. According to the techniques described in this disclosure for generating a current palette for a current block, the video decoder first determines one or more palette entries in a predictive palette that are copied to the current palette, and then determines a number of new palette entries that are not in the predictive palette but that are included in the current palette. Based on this information, the video decoder calculates a size of the current palette to be equal to the sum of the number of the copied palette entries and the number of the new palette entries, and generates the current palette of the determined size including the copied palette entries and the new palette entries. A video encoder may perform similar techniques to generate the current palette for the current block. In addition, the video encoder may explicitly encode and send pixel values for the new palette entries to the video decoder.
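The palette-prediction scheme just described can be sketched as below. This is a hedged illustration, not the normative syntax: the per-entry reuse flags and the list of new entries stand in for whatever the bitstream actually signals, and the key property shown is that the current palette size is derived as the sum of the copied-entry count and the new-entry count.

```python
# Sketch of decoder-side palette generation from a predictive palette:
# copy the signaled entries, append the explicitly signaled new entries.

def generate_current_palette(predictive_palette, reuse_flags, new_entries):
    """Current palette = copied predictive entries followed by new entries."""
    copied = [entry for entry, reused in zip(predictive_palette, reuse_flags)
              if reused]
    current = copied + list(new_entries)
    # The size is derived, not signaled directly: copied count + new count.
    assert len(current) == sum(reuse_flags) + len(new_entries)
    return current

predictive = [120, 45, 200, 78]
flags      = [1, 0, 1, 0]          # copy entries 120 and 200
print(generate_current_palette(predictive, flags, new_entries=[13, 255]))
```

Reusing predictive entries means only the new entries' pixel values need to be transmitted, which is the source of the bit savings this scheme targets.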
In some examples of this disclosure, the techniques for palette-based coding of video data may be used with one or more other coding techniques, such as techniques for inter-predictive coding or intra-predictive coding of video data. For example, as described in greater detail below, an encoder or decoder, or combined encoder-decoder (codec), may be configured to perform inter- and intra-predictive coding, as well as palette-based coding. In some examples, the palette-based coding techniques may be configured for use in one or more coding unit (CU) modes of High Efficiency Video Coding (HEVC). In other examples, the palette-based coding techniques can be used independently or as part of other existing or future systems or standards.
High Efficiency Video Coding (HEVC) is a new video coding standard developed by the Joint Collaboration Team on Video Coding (JCT-VC) of ITU-T Video Coding Experts Group (VCEG) and ISO/IEC Motion Picture Experts Group (MPEG). A recent draft of the HEVC standard, referred to as "HEVC Draft 10" or "WD10," is described in document JCTVC-L1003v34, Bross et al., "High Efficiency Video Coding (HEVC) Text Specification Draft 10 (for FDIS & Last Call)," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 12th Meeting: Geneva, CH, 14-23 Jan. 2013, available from: http://phenix.int-evry.fr/jct/doc_end_user/documents/12_Geneva/wg11/JCTVC-L1003-v34.zip.
With respect to the HEVC framework, as an example, the palette-based coding techniques may be configured to be used as a CU mode. In other examples, the palette-based coding techniques may be configured to be used as a PU mode in the framework of HEVC. Accordingly, all of the following disclosed processes described in the context of a CU mode may, additionally or alternatively, apply to a PU. However, these HEVC-based examples should not be considered a restriction or limitation of the palette-based coding techniques described herein, as such techniques may be applied to work independently or as part of other existing or yet to be developed systems/standards. In these cases, the unit for palette coding can be square blocks, rectangular blocks or even regions of non-rectangular shape.
FIG. 1 is a block diagram illustrating an example video coding system 10 that may utilize the techniques of this disclosure. As used herein, the term "video coder" refers generically to both video encoders and video decoders. In this disclosure, the terms "video coding" or "coding" may refer generically to video encoding or video decoding. Video encoder 20 and video decoder 30 of video coding system 10 represent examples of devices that may be configured to perform techniques for palette-based video coding in accordance with various examples described in this disclosure. For example, video encoder 20 and video decoder 30 may be configured to selectively code various blocks of video data, such as CUs or PUs in HEVC coding, using either palette-based coding or non-palette based coding. Non-palette based coding modes may refer to various inter-predictive temporal coding modes or intra-predictive spatial coding modes, such as the various coding modes specified by HEVC Draft 10.
As shown in FIG. 1, video coding system 10 includes a source device 12 and a destination device 14. Source device 12 generates encoded video data. Accordingly, source device 12 may be referred to as a video encoding device or a video encoding apparatus. Destination device 14 may decode the encoded video data generated by source device 12. Accordingly, destination device 14 may be referred to as a video decoding device or a video decoding apparatus. Source device 12 and destination device 14 may be examples of video coding devices or video coding apparatuses.
Source device 12 and destination device 14 may comprise a wide range of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video gaming consoles, in-car computers, or the like.
Destination device 14 may receive encoded video data from source device 12 via a channel 16. Channel 16 may comprise one or more media or devices capable of moving the encoded video data from source device 12 to destination device 14. In one example, channel 16 may comprise one or more communication media that enable source device 12 to transmit encoded video data directly to destination device 14 in real-time. In this example, source device 12 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to destination device 14. The one or more communication media may include wireless and/or wired communication media, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form part of a packet-based network, such as a local area network, a wide-area network, or a global network (e.g., the Internet). The one or more communication media may include routers, switches, base stations, or other equipment that facilitate communication from source device 12 to destination device 14.
In another example, channel 16 may include a storage medium that stores encoded video data generated by source device 12. In this example, destination device 14 may access the storage medium via disk access or card access. The storage medium may include a variety of locally-accessed data storage media such as Blu-ray discs, DVDs, CD-ROMs, flash memory, or other suitable digital storage media for storing encoded video data.
In a further example, channel 16 may include a file server or another intermediate storage device that stores encoded video data generated by source device 12. In this example, destination device 14 may access encoded video data stored at the file server or other intermediate storage device via streaming or download. The file server may be a type of server capable of storing encoded video data and transmitting the encoded video data to destination device 14. Example file servers include web servers (e.g., for a website), file transfer protocol (FTP) servers, network attached storage (NAS) devices, and local disk drives.
Destination device 14 may access the encoded video data through a standard data connection, such as an Internet connection. Example types of data connections may include wireless channels (e.g., Wi-Fi connections), wired connections (e.g., DSL, cable modem, etc.), or combinations of both that are suitable for accessing encoded video data stored on a file server. The transmission of encoded video data from the file server may be a streaming transmission, a download transmission, or a combination of both.
The techniques of this disclosure are not limited to wireless applications or settings. The techniques may be applied to video coding in support of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions, e.g., via the Internet, encoding of video data for storage on a data storage medium, decoding of video data stored on a data storage medium, or other applications. In some examples, video coding system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
Video coding system 10 illustrated in FIG. 1 is merely an example and the techniques of this disclosure may apply to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between the encoding and decoding devices. In other examples, data is retrieved from a local memory, streamed over a network, or the like. A video encoding device may encode and store data to memory, and/or a video decoding device may retrieve and decode data from memory. In many examples, the encoding and decoding is performed by devices that do not communicate with one another, but simply encode data to memory and/or retrieve and decode data from memory.
In the example of FIG. 1, source device 12 includes a video source 18, a video encoder 20, and an output interface 22. In some examples, output interface 22 may include a modulator/demodulator (modem) and/or a transmitter. Video source 18 may include a video capture device, e.g., a video camera, a video archive containing previously-captured video data, a video feed interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources of video data.
Video encoder 20 may encode video data from video source 18. In some examples, source device 12 directly transmits the encoded video data to destination device 14 via output interface 22. In other examples, the encoded video data may also be stored onto a storage medium or a file server for later access by destination device 14 for decoding and/or playback.
In the example of FIG. 1, destination device 14 includes an input interface 28, a video decoder 30, and a display device 32. In some examples, input interface 28 includes a receiver and/or a modem. Input interface 28 may receive encoded video data over channel 16. Display device 32 may be integrated with or may be external to destination device 14. In general, display device 32 displays decoded video data. Display device 32 may comprise a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.
This disclosure may generally refer to video encoder 20 "signaling" or "transmitting" certain information to another device, such as video decoder 30. The term "signaling" or "transmitting" may generally refer to the communication of syntax elements and/or other data used to decode the compressed video data. Such communication may occur in real- or near-real-time. Alternately, such communication may occur over a span of time, such as might occur when storing syntax elements to a computer-readable storage medium in an encoded bitstream at the time of encoding, which then may be retrieved by a decoding device at any time after being stored to this medium. Thus, while video decoder 30 may be referred to as "receiving" certain information, the receiving of information does not necessarily occur in real- or near-real-time and may be retrieved from a medium at some time after storage.
Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combinations thereof. If the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable storage medium and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered to be one or more processors. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device.
In some examples, video encoder 20 and video decoder 30 operate according to a video compression standard, such as the HEVC standard mentioned above and described in HEVC Draft 10. In addition to the base HEVC standard, there are ongoing efforts to produce scalable video coding, multiview video coding, and 3D coding extensions for HEVC. In addition, palette-based coding modes, e.g., as described in this disclosure, may be provided for extension of the HEVC standard. In some examples, the techniques described in this disclosure for palette-based coding may be applied to encoders and decoders configured to operate according to other video coding standards, such as the ITU-T H.264/AVC standard or future standards. Accordingly, application of a palette-based coding mode for coding of coding units (CUs) or prediction units (PUs) in an HEVC codec is described for purposes of example.
In HEVC and other video coding standards, a video sequence typically includes a series of pictures. Pictures may also be referred to as "frames." A picture may include three sample arrays, denoted SL, SCb and SCr. SL is a two-dimensional array (i.e., a block) of luma samples. SCb is a two-dimensional array of Cb chrominance samples. SCr is a two-dimensional array of Cr chrominance samples. Chrominance samples may also be referred to herein as "chroma" samples. In other instances, a picture may be monochrome and may only include an array of luma samples.
To generate an encoded representation of a picture, video encoder 20 may generate a set of coding tree units (CTUs). Each of the CTUs may be a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples, and syntax structures used to code the samples of the coding tree blocks. A coding tree block may be an N×N block of samples. A CTU may also be referred to as a "tree block" or a "largest coding unit" (LCU). The CTUs of HEVC may be broadly analogous to the macroblocks of other standards, such as H.264/AVC. However, a CTU is not necessarily limited to a particular size and may include one or more coding units (CUs). A slice may include an integer number of CTUs ordered consecutively in the raster scan. A coded slice may comprise a slice header and slice data. The slice header of a slice may be a syntax structure that includes syntax elements that provide information about the slice. The slice data may include coded CTUs of the slice.
This disclosure may use the term “video unit” or “video block” or “block” to refer to one or more sample blocks and syntax structures used to code samples of the one or more blocks of samples. Example types of video units or blocks may include CTUs, CUs, PUs, transform units (TUs), macroblocks, macroblock partitions, and so on. In some contexts, discussion of PUs may be interchanged with discussion of macroblocks or macroblock partitions.
To generate a coded CTU, video encoder 20 may recursively perform quad-tree partitioning on the coding tree blocks of a CTU to divide the coding tree blocks into coding blocks, hence the name "coding tree units." A coding block is an N×N block of samples. A CU may be a coding block of luma samples and two corresponding coding blocks of chroma samples of a picture that has a luma sample array, a Cb sample array and a Cr sample array, and syntax structures used to code the samples of the coding blocks. Video encoder 20 may partition a coding block of a CU into one or more prediction blocks. A prediction block may be a rectangular (i.e., square or non-square) block of samples on which the same prediction is applied. A prediction unit (PU) of a CU may be a prediction block of luma samples, two corresponding prediction blocks of chroma samples of a picture, and syntax structures used to predict the prediction block samples. Video encoder 20 may generate predictive luma, Cb and Cr blocks for luma, Cb and Cr prediction blocks of each PU of the CU.
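The recursive quad-tree split described above can be illustrated with a short sketch. The split decision is left abstract (in a real encoder it is driven by rate-distortion optimization); the function name, the callback interface, and the minimum block size are assumptions for illustration only.

```python
def quadtree_partition(x, y, size, split_decision, min_size=8):
    """Recursively split an N x N coding tree block into coding blocks.

    split_decision(x, y, size) -> True to split a block into four quadrants.
    Returns a list of (x, y, size) leaf coding blocks.
    """
    if size > min_size and split_decision(x, y, size):
        half = size // 2
        blocks = []
        for dy in (0, half):
            for dx in (0, half):
                blocks += quadtree_partition(x + dx, y + dy, half,
                                             split_decision, min_size)
        return blocks
    return [(x, y, size)]

# Example: split only the 64x64 root of a CTU, yielding four 32x32 CUs.
cus = quadtree_partition(0, 0, 64, lambda x, y, s: s == 64)
# cus == [(0, 0, 32), (32, 0, 32), (0, 32, 32), (32, 32, 32)]
```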
Video encoder 20 may use intra prediction or inter prediction to generate the predictive blocks for a PU. If video encoder 20 uses intra prediction to generate the predictive blocks of a PU, video encoder 20 may generate the predictive blocks of the PU based on decoded samples of the picture associated with the PU.
If video encoder 20 uses inter prediction to generate the predictive blocks of a PU, video encoder 20 may generate the predictive blocks of the PU based on decoded samples of one or more pictures other than the picture associated with the PU. Video encoder 20 may use uni-prediction or bi-prediction to generate the predictive blocks of a PU. When video encoder 20 uses uni-prediction to generate the predictive blocks for a PU, the PU may have a single motion vector (MV). When video encoder 20 uses bi-prediction to generate the predictive blocks for a PU, the PU may have two MVs.
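The uni- and bi-prediction described above can be sketched for the simplest case of integer-sample motion vectors. This is an illustrative simplification, not the HEVC interpolation process, which also handles fractional-sample positions and weighted prediction; the function names and argument layout are assumptions.

```python
def motion_compensate(reference, x, y, size, mv):
    """Fetch a predictive block from a reference picture at an integer MV.

    reference: 2-D list of samples; mv: (dx, dy) integer motion vector.
    """
    dx, dy = mv
    return [[reference[y + dy + r][x + dx + c] for c in range(size)]
            for r in range(size)]

def bi_predict(ref0, ref1, x, y, size, mv0, mv1):
    """Bi-prediction: average two motion-compensated blocks, one per MV."""
    p0 = motion_compensate(ref0, x, y, size, mv0)
    p1 = motion_compensate(ref1, x, y, size, mv1)
    return [[(p0[r][c] + p1[r][c] + 1) // 2 for c in range(size)]
            for r in range(size)]
```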
After video encoder 20 generates predictive blocks (e.g., predictive luma, Cb and Cr blocks) for one or more PUs of a CU, video encoder 20 may generate residual blocks for the CU. Each sample in a residual block of the CU may indicate a difference between a sample in a predictive block of a PU of the CU and a corresponding sample in a coding block of the CU. For example, video encoder 20 may generate a luma residual block for the CU. Each sample in the CU's luma residual block indicates a difference between a luma sample in one of the CU's predictive luma blocks and a corresponding sample in the CU's original luma coding block. In addition, video encoder 20 may generate a Cb residual block for the CU. Each sample in the CU's Cb residual block may indicate a difference between a Cb sample in one of the CU's predictive Cb blocks and a corresponding sample in the CU's original Cb coding block. Video encoder 20 may also generate a Cr residual block for the CU. Each sample in the CU's Cr residual block may indicate a difference between a Cr sample in one of the CU's predictive Cr blocks and a corresponding sample in the CU's original Cr coding block.
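The residual computation above is a per-sample difference and applies identically to luma, Cb and Cr. A minimal sketch (function names are illustrative):

```python
def residual_block(original, predictive):
    """Residual: per-sample difference between the original coding block
    and the predictive block, as described above."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predictive)]

orig = [[100, 102], [98, 101]]
pred = [[99, 100], [99, 100]]
res = residual_block(orig, pred)   # [[1, 2], [-1, 1]]
# The decoder reverses this by adding the residual back to the prediction.
recon = [[p + r for p, r in zip(prow, rrow)]
         for prow, rrow in zip(pred, res)]
assert recon == orig
```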
Furthermore, video encoder 20 may use quad-tree partitioning to decompose the residual blocks (e.g., luma, Cb and Cr residual blocks) of a CU into one or more transform blocks (e.g., luma, Cb and Cr transform blocks). A transform block may be a rectangular block of samples on which the same transform is applied. A transform unit (TU) of a CU may be a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax structures used to transform the transform block samples. Thus, each TU of a CU may be associated with a luma transform block, a Cb transform block, and a Cr transform block. The luma transform block associated with the TU may be a sub-block of the CU's luma residual block. The Cb transform block may be a sub-block of the CU's Cb residual block. The Cr transform block may be a sub-block of the CU's Cr residual block.
Video encoder 20 may apply one or more transforms to a transform block to generate a coefficient block for a TU. A coefficient block may be a two-dimensional array of transform coefficients. A transform coefficient may be a scalar quantity. For example, video encoder 20 may apply one or more transforms to a luma transform block of a TU to generate a luma coefficient block for the TU. Video encoder 20 may apply one or more transforms to a Cb transform block of a TU to generate a Cb coefficient block for the TU. Video encoder 20 may apply one or more transforms to a Cr transform block of a TU to generate a Cr coefficient block for the TU.
After generating a coefficient block (e.g., a luma coefficient block, a Cb coefficient block or a Cr coefficient block), video encoder 20 may quantize the coefficient block. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the transform coefficients, providing further compression. After video encoder 20 quantizes a coefficient block, video encoder 20 may entropy encode syntax elements indicating the quantized transform coefficients. For example, video encoder 20 may perform Context-Adaptive Binary Arithmetic Coding (CABAC) on the syntax elements indicating the quantized transform coefficients. Video encoder 20 may output the entropy-encoded syntax elements in a bitstream. The bitstream may also include syntax elements that are not entropy encoded.
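Quantization as described above can be illustrated with uniform scalar quantization. Note this is a simplified stand-in: the HEVC quantizer additionally uses QP-derived step sizes and scaling lists, and the function names here are assumptions.

```python
def quantize(coeffs, step):
    """Uniform scalar quantization of a transform-coefficient block."""
    return [[int(round(c / step)) for c in row] for row in coeffs]

def dequantize(levels, step):
    """Inverse quantization: scale the levels back up (lossy overall)."""
    return [[l * step for l in row] for row in levels]

block = [[40, 9], [-13, 2]]
levels = quantize(block, 8)      # [[5, 1], [-2, 0]] -- fewer distinct values
approx = dequantize(levels, 8)   # [[40, 8], [-16, 0]] -- not exactly `block`
```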
Video encoder 20 may output a bitstream that includes the entropy-encoded syntax elements. The bitstream may include a sequence of bits that forms a representation of coded pictures and associated data. The bitstream may comprise a sequence of network abstraction layer (NAL) units. Each of the NAL units includes a NAL unit header and encapsulates a raw byte sequence payload (RBSP). The NAL unit header may include a syntax element that indicates a NAL unit type code. The NAL unit type code specified by the NAL unit header of a NAL unit indicates the type of the NAL unit. An RBSP may be a syntax structure containing an integer number of bytes that is encapsulated within a NAL unit. In some instances, an RBSP includes zero bits.
Different types of NAL units may encapsulate different types of RBSPs. For example, a first type of NAL unit may encapsulate an RBSP for a picture parameter set (PPS), a second type of NAL unit may encapsulate an RBSP for a coded slice, a third type of NAL unit may encapsulate an RBSP for supplemental enhancement information (SEI), and so on. NAL units that encapsulate RBSPs for video coding data (as opposed to RBSPs for parameter sets and SEI messages) may be referred to as video coding layer (VCL) NAL units.
Video decoder 30 may receive a bitstream generated by video encoder 20. In addition, video decoder 30 may obtain syntax elements from the bitstream. For example, video decoder 30 may parse the bitstream to decode syntax elements from the bitstream. Video decoder 30 may reconstruct the pictures of the video data based at least in part on the syntax elements obtained (e.g., decoded) from the bitstream. The process to reconstruct the video data may be generally reciprocal to the process performed by video encoder 20. For instance, video decoder 30 may use MVs of PUs to determine predictive sample blocks (i.e., predictive blocks) for the PUs of a current CU. In addition, video decoder 30 may inverse quantize transform coefficient blocks associated with TUs of the current CU. Video decoder 30 may perform inverse transforms on the transform coefficient blocks to reconstruct transform blocks associated with the TUs of the current CU. Video decoder 30 may reconstruct the coding blocks of the current CU by adding the samples of the predictive sample blocks for PUs of the current CU to corresponding samples of the transform blocks of the TUs of the current CU. By reconstructing the coding blocks for each CU of a picture, video decoder 30 may reconstruct the picture.
In some examples, video encoder 20 and video decoder 30 may be configured to perform palette-based coding. For example, in palette-based coding, rather than performing the intra-predictive or inter-predictive coding techniques described above, video encoder 20 and video decoder 30 may code a so-called palette as a table of colors or pixel values representing the video data of a particular area (e.g., a given block). In this way, rather than coding actual pixel values or their residuals for a current block of video data, the video coder may code index values for one or more of the pixel values of the current block, where the index values indicate entries in the palette that are used to represent the pixel values of the current block.
In one example, video encoder 20 may encode a block of video data by determining a palette for the block, locating an entry in the palette having a value representative of the value of one or more pixels of the block, and encoding the block with index values that indicate the entry in the palette used to represent the one or more pixel values of the block. In some examples, video encoder 20 may signal the index values in an encoded bitstream. A video decoder may obtain, from an encoded bitstream, a palette for a block, as well as index values for the pixels of the block. The video decoder may relate the index values of the pixels to entries of the palette to reconstruct the pixel values of the block.
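The encode/decode round trip above can be sketched for a single-component block. This is a toy illustration: it picks the nearest palette entry for every pixel and omits the escape path for out-of-palette pixels that is discussed later; all names are hypothetical.

```python
def palette_encode(block, palette):
    """Map each pixel of the block to the index of the nearest palette
    entry (the 'escape' handling for out-of-palette pixels is omitted)."""
    return [[min(range(len(palette)), key=lambda i: abs(palette[i] - px))
             for px in row] for row in block]

def palette_decode(indices, palette):
    """Relate each index value back to its palette entry."""
    return [[palette[i] for i in row] for row in indices]

palette = [20, 128, 250]
block = [[20, 20, 250], [128, 128, 250]]
idx = palette_encode(block, palette)   # [[0, 0, 2], [1, 1, 2]]
# Lossless here because every pixel value appears in the palette.
assert palette_decode(idx, palette) == block
```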
In another example, video encoder 20 may encode a block of video data by determining prediction residual values for the block, determining a palette for the block, locating an entry in the palette having a value representative of the value of one or more of the prediction residual values, and encoding the block with index values that indicate the entry in the palette used to represent the prediction residual values for the block. Video decoder 30 may obtain, from an encoded bitstream, a palette for a block, as well as index values for the prediction residual values of the block. Video decoder 30 may relate the index values of the prediction residual values to entries of the palette to reconstruct the prediction residual values of the block. The prediction residual values may be added to the prediction values (for example, obtained using intra or inter prediction) to reconstruct the pixel values of the block.
As described in more detail below, the basic idea of palette-based coding is that, for a given block of video data to be coded, a palette is derived that includes the most dominant pixel values in the current block. For instance, the palette may refer to a number of pixel values which are assumed to be dominant and/or representative for the current CU. Video encoder 20 may first transmit the size and the elements of the palette to video decoder 30. Video encoder 20 may encode the pixel values in the given block according to a certain scanning order. For each pixel location in the given block, video encoder 20 may transmit a flag or other syntax element to indicate whether the pixel value at the pixel location is included in the palette or not. If the pixel value is in the palette (i.e., a palette entry exists that specifies the pixel value), video encoder 20 may signal the index value associated with the pixel value for the pixel location in the given block, followed by a "run" of like-valued consecutive pixel values in the given block. In this case, video encoder 20 does not transmit the flag or the palette index for the following pixel locations that are covered by the "run" as they all have the same pixel value.
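The index-plus-run signaling described above can be sketched on a flattened scan of palette indices. Per-pixel flags and the escape path for out-of-palette pixels are omitted; the function name and the (index, run) tuple representation are assumptions for illustration.

```python
def encode_index_runs(indices):
    """Signal each palette index once, followed by a 'run' counting the
    like-valued consecutive positions that follow it in scan order."""
    runs, i = [], 0
    while i < len(indices):
        j = i
        while j + 1 < len(indices) and indices[j + 1] == indices[i]:
            j += 1
        runs.append((indices[i], j - i))  # (index, run of following pixels)
        i = j + 1
    return runs

# Scan order flattens the block; four 0s then two 1s need only two pairs:
scan = [0, 0, 0, 0, 1, 1]
assert encode_index_runs(scan) == [(0, 3), (1, 1)]
```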
If the pixel value is not in the palette (i.e., no palette entry exists that specifies the pixel value), video encoder 20 may transmit the pixel value or a residual value (or quantized versions thereof) for the given pixel location in the given block. Video decoder 30 may first determine the palette based on the information received from video encoder 20. Video decoder 30 may then map the received index values associated with the pixel locations in the given block to entries of the palette to reconstruct the pixel values of the given block.
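The decoder-side mapping just described can be sketched as the inverse of index-plus-run signaling: expand the runs back into per-pixel indices, then look each index up in the palette (names and the (index, run) representation are illustrative assumptions).

```python
def decode_index_runs(runs):
    """Expand (index, run) pairs back into per-pixel palette indices."""
    indices = []
    for idx, run in runs:
        indices += [idx] * (run + 1)  # the signaled pixel plus its run
    return indices

def reconstruct(runs, palette):
    """Map the expanded index values to palette entries (pixel values)."""
    return [palette[i] for i in decode_index_runs(runs)]

# Round trip of the earlier scan [0, 0, 0, 0, 1, 1] with a 2-entry palette:
pixels = reconstruct([(0, 3), (1, 1)], [20, 250])
# pixels == [20, 20, 20, 20, 250, 250]
```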
Palette-based coding may have a certain amount of signaling overhead. For example, a number of bits may be needed to signal characteristics of a palette, such as a size of the palette, as well as the palette itself. In addition, a number of bits may be needed to signal index values for the pixels of the block. The techniques of this disclosure may, in some examples, reduce the number of bits needed to signal such information. For example, the techniques described in this disclosure may include techniques for various combinations of one or more of signaling palette-based coding modes, transmitting palettes, predicting palettes, deriving palettes, or transmitting palette-based coding maps and other syntax elements. Particular techniques of this disclosure may be implemented in video encoder 20 and/or video decoder 30.
Aspects of this disclosure are directed to palette prediction. For example, according to aspects of this disclosure, video encoder 20 and/or video decoder 30 may determine a first palette having a first set of entries indicative of first pixel values. Video encoder 20 and/or video decoder 30 may then determine, based on the first set of entries of the first palette, a second set of entries indicative of second pixel values of a second palette. Video encoder 20 and/or video decoder 30 may also code pixels of a block of video data using the second palette (i.e., using the second set of pixel values).
When determining the second set of entries of the second palette based on the first set of entries, video encoder 20 may encode a variety of syntax elements, which may be used by video decoder 30 to reconstruct the second palette. For example, video encoder 20 may encode one or more syntax elements in a bitstream to indicate that an entire palette (or palettes, in the case of each color component, e.g., Y, Cb, Cr, or Y, U, V, or R, G, B, of the video data having a separate palette) is predicted from (e.g., copied from) one or more neighboring blocks of the block currently being coded.
The palette from which entries of the current palette of the current block are predicted (e.g., copied) may be referred to as a predictive palette. The predictive palette may contain palette entries from one or more neighboring blocks, including spatially neighboring blocks and/or neighboring blocks in a particular scan order of the blocks. For example, the neighboring blocks may be spatially located to the left (left neighboring block) or above (upper neighboring block) of the block currently being coded. In another example, video encoder 20 may determine predictive palette entries using the most frequent sample values in a causal neighborhood of the current block. In another example, the neighboring blocks may neighbor the block currently being coded according to a particular scan order used to code the blocks. That is, the neighboring blocks may be one or more blocks coded prior to the current block in the scan order. Video encoder 20 may encode one or more syntax elements to indicate the location of the neighboring blocks from which the palettes are copied.
In some examples, palette prediction may be performed entry-wise. For example, video encoder 20 may encode one or more syntax elements to indicate, for each entry of a predictive palette, whether the given palette entry is included in the current palette for the current block. If video encoder 20 does not use prediction to populate an entry of the current palette for the current block, video encoder 20 may encode one or more additional syntax elements to specify the non-predicted entries, as well as the number of such entries, in the current palette for the current block.
As described above, for a current block, e.g., a CU or PU, the entries in its palette may be predicted from entries in a predictive palette including palette entries from one or more previously coded neighboring blocks. This disclosure describes several alternative techniques to predict the palette for the current block.
In one example, a predictive palette includes a number of entries, N. In this example, video encoder 20 first transmits a binary vector, V, having the same size as the predictive palette, i.e., a vector of size N, to video decoder 30. Each entry in the binary vector indicates whether the corresponding entry in the predictive palette will be reused or copied to a current palette for a current block. For example, video encoder 20 may encode one or more syntax elements including the binary vector. In some cases, video encoder 20 encodes the binary vector including a one-bit flag for each of the palette entries in the predictive palette that indicates whether a respective palette entry is copied to the current palette. In other cases, video encoder 20 encodes a losslessly compressed binary vector in which the indications for the entries in the binary vector are compressed or combined together instead of being sent individually as one-bit flags. In this way, video decoder 30 determines the one or more of the palette entries in the predictive palette that are copied to the current palette.
In addition, video encoder 20 transmits a number, M, that indicates how many new entries will be included in the palette for the current block, and then transmits pixel values for the new entries to video decoder 30. For example, video encoder 20 may encode one or more syntax elements indicating the number of the new palette entries that are included in the current palette using one of unary codes, truncated unary codes, Exponential-Golomb codes, or Golomb-Rice codes. In this way, video decoder 30 determines the number of new palette entries not in the predictive palette that are included in the current palette for the current block.
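Two of the variable-length codes named above for signaling the count M can be sketched directly. The encodings are standard (unary and order-0 Exponential-Golomb); the function names are illustrative, and the strings stand in for the actual bitstream bits.

```python
def unary(n):
    """Unary code: n ones followed by a terminating zero."""
    return '1' * n + '0'

def exp_golomb0(n):
    """Order-0 Exponential-Golomb code for an unsigned integer n:
    (bit-length of n+1, minus one) leading zeros, then n+1 in binary."""
    return bin(n + 1)[2:].rjust(2 * (n + 1).bit_length() - 1, '0')

# e.g., signaling M = 3 new palette entries:
m3_unary = unary(3)        # '1110'
m3_eg0 = exp_golomb0(3)    # '00100'
```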
In this example, the final size of the current palette for the current block may be derived as equal to M+S, where S is the number of entries in the predictive palette that are reused in the palette for the current block. Video decoder 30 may calculate a size of the current palette to be equal to the sum of a number of the copied palette entries and the number of the new palette entries. Once the size of the current palette is determined, video decoder 30 generates the current palette including the copied palette entries from the predictive palette and the new palette entries explicitly signaled from video encoder 20.
To generate the palette for the current block, video decoder 30 may merge the received M new palette entries and the S copied palette entries that are being reused from the predictive palette. In some cases, the merge may be based on the pixel values, such that the entries in the palette for the current block may increase (or decrease) with the palette index, for example, when a separate palette is used for each component. In other cases, the merge may be a concatenation of the two sets of entries, i.e., the copied palette entries and the new palette entries.
In another example, video encoder first transmits an indication of a size of a palette, N, for a current block to video decoder . Video encoder then transmits a vector, V, having the same size as the palette for the current block, i.e., a vector of size N, to video decoder . Each entry in the vector indicates whether the corresponding entry in the palette for the current block is explicitly transmitted by video encoder or copied from a predictive palette. For the entries that are copied from the predictive palette, video encoder may use different methods to signal which entry in the predictive palette is used in the palette for the current block. In some cases, video encoder may signal the palette index indicating the entry to be copied from the predictive palette to the palette for the current block. In other cases, video encoder may signal an index offset, which is the difference between the index in the palette for the current block and the index in the predictive palette.
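This per-entry signaling variant can be sketched as follows (hypothetical helper; 'new' entries stand for explicitly transmitted values and 'copy' entries carry a predictive-palette index):

```python
def build_palette_from_vector(predictive_palette, entry_specs):
    """entry_specs holds one spec per entry of the size-N current palette:
    ('new', value) means the value is explicitly transmitted, while
    ('copy', idx) means the entry at index idx in the predictive palette
    is reused (an index offset could be signaled instead)."""
    palette = []
    for kind, payload in entry_specs:
        if kind == 'new':
            palette.append(payload)
        else:  # 'copy'
            palette.append(predictive_palette[payload])
    return palette
```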
In the two above examples, the one or more previously coded neighboring blocks, from which the predictive palette used for the prediction of the current palette for the current block is formed, may be spatially neighboring blocks of the current block and/or neighboring blocks of the current block in a particular scan order of the blocks. For example, the neighboring blocks may be spatially located above (i.e., top-neighboring blocks) or to the left (i.e., left-neighboring blocks) the current block. In some examples, a candidate list of neighboring blocks may be constructed, and video encoder transmits an index to indicate one or more of the candidate neighboring blocks and associated palettes are used to form the predictive palette.
For certain blocks, e.g., CUs at a beginning of a slice or at other slice boundaries or leftmost CUs of the slice or a picture of video data, palette prediction may be disabled. For example, when the current block of video data comprises one or a first block in a slice of video data or a leftmost block of the slice or a picture of the video data, video encoder and/or video decoder may disable copying of palette entries in the prediction palette to the current palette for the current block.
In an additional example, video encoder transmits an indication of a number of entries included in a palette for a current block to video decoder . Then, for each of the palette entries, video encoder transmits a flag or other syntax element to indicate whether the palette entry of the palette for the current block is explicitly transmitted by video encoder or whether the palette entry is derived from a previously reconstructed pixel. For each of the palette entries of the palette for the current block that are derived from a previously reconstructed pixel, video encoder transmits another indication regarding a pixel location of the reconstructed pixel in the current block or a pixel location of the reconstructed pixel in a neighboring block that corresponds to the palette entry. In some cases, the reconstructed pixel location indication may be a displacement vector with respect to the top-left position of the current block. In other cases, the reconstructed pixel location indication may be an index into a list of reconstructed pixels that can be used for specifying the palette entry for the current block. For example, this list may include all the reference pixels that may be used for normal intra prediction in HEVC.
In some examples, techniques for predicting an entire palette may be combined with techniques for predicting one or more entries of a palette. For example, video encoder may encode one or more syntax elements in a bitstream to indicate whether the current palette is entirely copied from the predictive palette (for example, the palette for the last palette-coded block). If this is not the case, video encoder may encode one or more syntax elements in a bitstream to indicate whether each entry in the predictive palette is copied.
In some instances, the size of the palette may be a fixed value specified in the video coding standard applied by video encoder and video decoder , or may be signaled from video encoder to video decoder . In the case where each of the color components has a separate palette, video encoder may separately signal the sizes for the different palettes. In the case of a single palette for all the color components, video encoder may encode a single size for the single palette. In another example, instead of signaling the number of entries and the palette values, video encoder may signal, after signaling each palette value, a flag to indicate whether the signaled palette value is the final palette entry for the palette. Video encoder may not signal such an “end of palette” flag if the palette has already reached a certain maximum size.
Video encoder may encode one or more syntax elements to indicate whether palette prediction is enabled and/or active. In an example for purposes of illustration, video encoder may encode a pred_palette_flag to indicate, for each block (e.g., CU or PU), whether video encoder uses palette prediction to predict the palette for the respective block. In some examples, video encoder may signal a separate flag for each color component (e.g., three flags for each block). In other examples, video encoder may signal a single flag that is applicable to all color components of a block.
Video decoder may obtain the above-identified information from an encoded bitstream and may use the information to reconstruct the palette. For example, video decoder may receive data indicating whether a particular palette is predicted from another palette, as well as information that allows video decoder to use the appropriate predictive palette entries.
In some instances, additionally or alternatively, video encoder and/or video decoder may construct a palette "on-the-fly," i.e., dynamically. For example, video encoder and/or video decoder may add entries to an empty palette during coding. That is, video encoder may add pixel values to a palette as the pixel values are generated and transmitted for positions in a block. Pixels (e.g., pixels having values that have previously been added and indexed within the palette) that are coded relatively later in the block may refer to earlier added entries of the palette, e.g., with index values associated with pixel values, instead of transmitting the pixel values. Likewise, upon receiving a new pixel value for a position in a block, video decoder may follow the same process as video encoder and include the pixel value in a palette. In this way, video decoder constructs the same palette as video encoder. Video decoder may receive, for pixels having values that are already included in the palette, index values that identify the pixel values. Video decoder may use the received information, e.g., pixel values for the palette and/or index values, to reconstruct the pixels of a block.
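The on-the-fly construction above, in which encoder and decoder grow the palette in lockstep, can be sketched as follows (illustrative only; the symbol tuples stand in for actual bitstream signaling):

```python
def code_block_on_the_fly(pixels):
    """Both encoder and decoder grow an initially empty palette in the
    same order, so they stay synchronized without the palette itself
    ever being transmitted."""
    palette, symbols = [], []
    for value in pixels:
        if value in palette:
            # Later occurrence: refer to the earlier added entry by index.
            symbols.append(('index', palette.index(value)))
        else:
            # First occurrence: send the pixel value and add it to the palette.
            palette.append(value)
            symbols.append(('value', value))
    return palette, symbols
```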
In some instances, video encoder and video decoder may maintain a palette of a fixed size. For example, video encoder and video decoder may add the most recent reconstructed pixel values to the palette. For each entry that is added to the palette, the entry that was added to the palette the earliest is discarded. This is also sometimes referred to as First-in-First-out (FIFO). This process of updating the palette may be applied only to blocks that are coded using the palette mode or to all the blocks irrespective of the coding mode.
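A minimal sketch of the fixed-size FIFO palette update, assuming a deque-based palette (the maximum size shown is arbitrary):

```python
from collections import deque

def update_fifo_palette(palette, new_value, max_size=4):
    """FIFO palette update: the most recent reconstructed pixel value is
    appended and, once the fixed size is exceeded, the entry that was
    added earliest is discarded."""
    palette.append(new_value)
    if len(palette) > max_size:
        palette.popleft()
    return palette
```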
The techniques described above generally relate to video encoder and video decoder constructing and/or transmitting a palette for palette-based coding. Other aspects of this disclosure relate to constructing and/or transmitting a map that allows video encoder and/or video decoder to determine pixel values. For example, other aspects of this disclosure relate constructing and/or transmitting a map of indices that indicate entries in a palette that specify pixel values of a block of video data.
In some examples, video encoder may indicate whether pixels of a block have a corresponding value in a palette. In an example for purposes of illustration, assume that an (i, j) entry of a map corresponds to an (i, j) pixel position in a block of video data. In this example, video encoder may encode a flag for each pixel position of a block. Video encoder may set the flag equal to one for the (i, j) entry to indicate that the pixel value at the (i, j) location is one of the values in the palette. When a pixel value is included in the palette (i.e., the flag is equal to one), video encoder may also encode data indicating a palette index for the (i, j) entry that identifies the corresponding entry in the palette that specifies the pixel value. When a pixel value is not included in the palette (i.e., the flag is equal to zero), video encoder may also encode data indicating a sample value (possibly quantized) for the pixel. In some cases, the pixel that is not included in the palette is referred to as an “escape pixel.”
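The per-pixel flag plus index (or escape value) signaling can be sketched as follows (illustrative; actual signaling would be entropy coded rather than carried as tuples):

```python
def encode_map(block, palette):
    """For each pixel: flag = 1 plus a palette index when the value is in
    the palette; flag = 0 plus the (possibly quantized) sample value for
    an "escape pixel" not represented in the palette."""
    out = []
    for value in block:
        if value in palette:
            out.append((1, palette.index(value)))
        else:
            out.append((0, value))  # escape pixel: send the sample itself
    return out
```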
Video decoder may obtain the above-described data from an encoded bitstream and use the data to determine a palette index and/or pixel value for a particular location in a block. For example, video decoder may decode one or more syntax elements indicating whether each of the pixel values of the current block has a corresponding pixel value in the current palette, decode one or more syntax elements indicating the index values for the one or more pixel values of the current block that have corresponding pixel values in the current palette, and decode one or more syntax elements indicating the pixel values for the one or more pixel values of the current block that do not have a corresponding pixel value in the current palette.
In some instances, there may be a correlation between the palette index to which a pixel at a given position is mapped and the probability of a neighboring pixel being mapped to the same palette index. That is, when a pixel is mapped to a particular palette index, the probability may be relatively high that one or more neighboring pixels (in terms of spatial location) are mapped to the same palette index.
According to aspects of this disclosure, video encoder and/or video decoder may determine and code one or more indices of a block of video data relative to one or more indices of the same block of video data. For example, video encoder and/or video decoder may be configured to determine a first index value associated with a first pixel in a block of video data, where the first index value relates a value of the first pixel to an entry of a palette. Video encoder and/or video decoder may also be configured to determine, based on the first index value, one or more second index values associated with one or more second pixels in the block of video data, and to code the first and the one or more second pixels of the block of video data. Thus, in this example, indices of a map may be coded relative to one or more other indices of the map.
In some examples, video encoder may encode one or more syntax elements indicating a number of consecutive pixels in a given scan order that are mapped to the same index value. The string of like-valued index values may be referred to herein as a “run.” In some examples, a pixel value may be associated with exactly one index value in a palette. Accordingly, in some instances, a run of values may also refer to a string of like-valued pixel values. In other examples, as described with respect to lossy coding below, more than one pixel value may map to the same index value in a palette. In such examples, a run of values refers to like-valued index values. In this scenario, on the decoder side, runs of like-valued index values may correspond to runs of pixel values that correspond to the index values.
In an example for purposes of illustration, if two consecutive indices in a given scan order have different values, the run is equal to zero. If two consecutive indices in a given scan order have the same value but the third index in the scan order has a different value, the run is equal to one. Video decoder may obtain the syntax elements indicating a run from an encoded bitstream and may use the data indicated by the syntax elements to determine the number of consecutive pixel locations that have the same index value.
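The run semantics just described — a run of zero when two consecutive indices differ, one when two match but the third differs — can be sketched as:

```python
def encode_runs(indices):
    """Encode a scan of palette indices as (index, run) pairs, where the
    run counts how many of the *following* indices in scan order repeat
    the same value (so two equal consecutive indices give run = 1)."""
    out, i = [], 0
    while i < len(indices):
        run = 0
        while i + run + 1 < len(indices) and indices[i + run + 1] == indices[i]:
            run += 1
        out.append((indices[i], run))
        i += run + 1
    return out
```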
In some examples, all pixel locations in the current block having pixel values that are in the palette for the current block are encoded with a palette index followed by a “run” of the pixel value at consecutive pixel locations. In the case when there is only one entry in the palette, the transmission of the palette index or the “run” may be skipped for the current block. In the case where the pixel value at one of the pixel locations in the current block does not have an exact match to a pixel value in the palette, video encoder may select one of the palette entries having the closest pixel value and calculate a prediction error or residual value between the original pixel value and the prediction pixel value included in the palette. Video encoder may quantize, encode and transmit the residual value for the pixel location to video decoder .
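The closest-entry selection with a residual, for lossy coding, can be sketched as follows (single-component pixel values assumed for simplicity; quantization and entropy coding of the residual are omitted):

```python
def lossy_map_pixel(value, palette):
    """Lossy palette coding of one sample: choose the palette entry with
    the closest pixel value as the prediction, and compute the residual
    between the original value and that prediction."""
    best = min(range(len(palette)), key=lambda i: abs(palette[i] - value))
    residual = value - palette[best]
    return best, residual
```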
Video decoder may then derive a pixel value at the pixel location based on the corresponding received palette index. The derived pixel value and the residual value (received from video encoder) are then used to predict the pixel value at the pixel location in the current block. In one example, the residual value is encoded using an HEVC method specified by HEVC draft 10, such as applying a residual quad-tree (RQT) to transform the residual value, quantize the transform coefficients, and entropy encode the quantized transform coefficients. In some cases, the residual values may be quantized directly without applying a transform. As an example, video decoder may decode one or more syntax elements indicating the index values for the one or more pixel values of the current block, where the index values identify corresponding pixel values in the current palette as prediction pixel values, and decode one or more syntax elements indicating residual values between the one or more pixel values of the current block and the identified prediction pixel values in the current palette. In some cases, the above examples may be referred to as lossy coding.
Additionally or alternatively, according to aspects of this disclosure, video encoder and video decoder may perform line copying for one or more entries of a map. The entries may also be referred to as "positions" due to the relationship between entries of the map and pixel positions of a block. The line copying may depend, in some examples, on the scan direction. For example, video encoder may indicate that a pixel value or index map value for a particular position in a block is equal to the pixel or index value in a line above (e.g., preceding) the particular position (for a horizontally oriented scan) or the column to the left of (e.g., preceding) the particular position (for a vertically oriented scan). Video encoder may also indicate, as a run, the number of pixel values or indices in the scan order that are equal to the pixel values or indices in the line above or the column to the left of the particular position. In this example, video encoder and/or video decoder may copy pixel or index values from the specified neighboring line (or column for vertical scan) and for the specified number of entries for the line (or column for vertical scan) of the block currently being coded.
In some instances, the line (or column for vertical scan) from which values are copied may be directly adjacent to, e.g., above or to the left of, the line (or column for vertical scan) of the position currently being coded. In other examples, a number of lines of the block may be buffered by video encoder and/or video decoder , such that any of the number of lines of the map may be used as predictive values for a line of the map currently being coded. Similar techniques may be applied to previous columns for a vertical scan. In an example for purposes of illustration, video encoder and/or video decoder may be configured to store the previous four rows of indices or pixel values prior to coding the current row of pixels. In this example, the predictive row (the row from which indices or pixel values are copied) may be indicated in a bitstream with a truncated unary code or other codes such as unary codes. With respect to a truncated unary code, video encoder and/or video decoder may determine a maximum value for the truncated unary code based on a maximum row calculation (e.g., row_index−1) for horizontal scans or a maximum column calculation (e.g., column_index−1) for vertical scans. In addition, an indication of the number of positions from the predictive row that are copied may also be included in the bitstream. In some instances, if the line (or column in the case of vertical scans) from which a current position is being predicted belongs to another block (e.g., CU or CTU) such prediction may be disabled.
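Line copying from a buffered previous row can be sketched as follows (horizontal scan assumed; rows_up selects one of the buffered predictive rows, as signaled with, e.g., a truncated unary code):

```python
def decode_line_copy(index_map, row, start_col, run, rows_up=1):
    """Copy `run` index values into the current row from the row
    `rows_up` lines above (the analogous operation applies to columns
    for a vertical scan)."""
    for c in range(start_col, start_col + run):
        index_map[row][c] = index_map[row - rows_up][c]
    return index_map
```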
As another example, video encoder may signal an instruction, such as “copy from up line left half” or “copy from up line right half,” indicating the neighboring line and the number or portion of entries of the neighboring line to copy to the line of the map currently being coded. As an additional example, the map of index values may be re-ordered before coding. For example, the map of index values may be rotated by 90, 180 or 270 degrees, or flipped upside down or left-side right to improve coding efficiency. Thus, any scan may be used to convert the two dimensional array of pixel or index values into a one dimensional array.
The techniques for coding so-called runs of entries may be used in conjunction with the techniques for line copying described above. For example, video encoder may encode one or more syntax elements (e.g., a flag) indicating whether the value of an entry in a map is obtained from a palette or the value of an entry in the map is obtained from a previously coded line in the map. Video encoder may also encode one or more syntax elements indicating an index value of a palette or the location of the entry in the line (the row or column). Video encoder may also encode one or more syntax elements indicating a number of consecutive entries that share the same value. Video decoder may obtain such information from an encoded bitstream and use the information to reconstruct the map and pixel values for a block.
As noted above, the indices of a map are scanned in a particular order. According to aspects of this disclosure, the scan direction may be vertical, horizontal, or at a diagonal (e.g., 45 degrees or 135 degrees diagonally in block). In some examples, video encoder may encode one or more syntax elements for each block indicating a scan direction for scanning the indices of the block. Additionally or alternatively, the scan direction may be a constant value or may be signaled or inferred based on so-called side information such as, for example, block size, color space, and/or color component. Video encoder may specify scans for each color component of a block. Alternatively, a specified scan may apply to all color components of a block.
In some examples, video encoder may not transmit runs of like-valued index values in a given scan order to video decoder . Instead, video encoder and/or video decoder may implicitly derive the values of the runs in order to determine the entries of the map. In this case, video encoder may signal to video decoder that a run of a given index value occurs, but may not signal a value of the run. For example, the value of a run may be a constant value or may be derived based on side information for the current block of video data being coded such as, for example, the block size. In the case where the value of a run depends on the block size, the run may be equal to the width of the current block, the height of the current block, the half-width (or half-height) of the current block, a fraction of the width and/or the height of the current block, or a multiple of the width and/or the height of the current block. In some examples, video encoder may signal the value of a run to video decoder using high level syntax. In some examples, the phrase “high-level syntax” refers to syntax in parameter sets, e.g., picture parameter sets (PPSs), sequence parameter sets (SPSs), and video parameter sets (VPSs), and slice headers.
Additionally or alternatively, video encoder may not even need to transmit the map to video decoder . Instead, video encoder and/or video decoder may implicitly derive a start position or location of each run of index values included in the map. In one example, the video coding standard applied by video encoder and/or video decoder may determine that a run can only start at certain locations. For example, the run may only start at the beginning of each row, or the beginning of every N rows of the current block. The start location may be different for different scan directions. For example, if the vertical scan is used, the run may only start at the beginning of a column or the beginning of every N columns of the current block. In another example, the start location may be derived depending on side information for the current block. In the case where the start location of a run depends on the block size, the start location may be the mid-point of each row and/or each column of the current block, or a fraction of each row and/or column of the current block. In some examples, video encoder may signal the start position to video decoder using high level syntax.
In some examples, the implicit start position derivation and the implicit run derivation, each described above, may be combined. For example, video encoder and/or video decoder may determine that a run of like-valued index values in the map is equal to a distance between two neighboring start positions. In the case where the start position is the beginning (i.e., the first position) of every row of the current block, then video encoder and/or video decoder may determine that the length of the run is equal to the length of an entire row of the current block.
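The combined implicit derivation — runs starting only at the beginning of every row, with run length equal to the distance between neighboring start positions — can be sketched as (raster-scan positions assumed):

```python
def implicit_runs(block_width, block_height):
    """When runs may only start at the beginning of each row, both the
    start positions and the run length are implicitly derived from the
    block size; neither needs to be transmitted."""
    starts = [r * block_width for r in range(block_height)]
    run_length = block_width  # distance between neighboring start positions
    return starts, run_length
```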
In some cases, described in more detail below, one palette is generated and shared for multiple color components in the current block. For example, for each pixel location in the current block, the pixel values in three color components (e.g., Y luma and both U and V chroma components) may form a vector (i.e., a color vector). Then, a palette may be formed by selecting a certain number of vectors to represent the current block. It may be possible to have one palette of pixel values for the luma component, and another palette of pixel values for the chroma components. The line copying described in more detail above may also work with a single palette. With a shared palette, a palette entry may be a triplet of (Y, U, V) or (Y, Cb, Cr) or (R, G, B). In this case, the palette index for each pixel location is signaled as being equal to the palette index of the row above, if the scan is horizontal, or the column on the left, if the scan is vertical, and then the associated number of palette indices is also copied from the previous row or column based on the run.
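Forming a single shared palette of color vectors can be sketched as follows (the most-frequent-vector selection is an illustrative heuristic, not the method of this disclosure):

```python
from collections import Counter

def build_shared_palette(pixels_yuv, max_entries):
    """Select a certain number of (Y, U, V) color vectors to represent
    the current block as one palette shared by all three components;
    here, simply the most frequent vectors."""
    counts = Counter(pixels_yuv)
    return [color for color, _ in counts.most_common(max_entries)]
```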
In the case of either a shared palette for two or more color components or of separate palettes for each of the color components, geometric information may be shared between the color components. Usually there is high correlation between edge locations of collocated blocks in different color components because the chroma components may have been downsampled from the luma components in a pre-defined way, such as 4:2:2 or 4:2:0 sampling.
For example, in palette-based coding, run coding may be used to indicate geometry information for the current block because an edge of the current block will break the run. In case of the 4:4:4 chroma format, the run may be generated once and used for all color components. The run may be generated based on one of the color components, or the run may be generated using more than one of the color components. In case of the 4:2:2 chroma format or the 4:2:0 chroma format, the run used for the luma component may be downsampled for application to the chroma components.
The techniques of this disclosure also include other aspects of palette-based coding. For example, according to aspects of this disclosure, video encoder and/or video decoder may code one or more syntax elements for each block to indicate that the block is coded using a palette coding mode. For example, video encoder and/or video decoder may code a palette mode flag (PLT_Mode_flag) to indicate whether a palette-based coding mode is to be used for coding a particular block. In this example, video encoder may encode a PLT_Mode_flag that is equal to one to specify that the block currently being encoded (“current block”) is encoded using a palette mode. A value of the PLT_Mode_flag equal to zero specifies that the current block is not encoded using palette mode. In this case, video decoder may obtain the PLT_Mode_flag from the encoded bitstream and apply the palette-based coding mode to decode the block. In instances in which there is more than one palette-based coding mode available (e.g., there is more than one palette-based technique available for coding) one or more syntax elements may indicate one of a plurality of different palette modes for the block.
In some instances, video encoder may encode a PLT_Mode_flag that is equal to zero to specify that the current block is not encoded using a palette mode. In such instances, video encoder may encode the block using any of a variety of inter-predictive, intra-predictive, or other coding modes. When the PLT_Mode_flag is equal to zero, video encoder may transmit additional information (e.g., syntax elements) to indicate the specific mode that is used for encoding the respective block. In some examples, as described below, the mode may be an HEVC coding mode, e.g., a regular inter-predictive mode or intra-predictive mode in the HEVC standard. The use of the PLT_Mode_flag is described for purposes of example. In other examples, other syntax elements such as multi-bit codes may be used to indicate whether the palette-based coding mode is to be used for one or more blocks, or to indicate which of a plurality of modes are to be used.
When a palette-based coding mode is used, a palette is transmitted by video encoder , e.g., using one or more of the techniques described herein, in the encoded video data bitstream for use by video decoder . A palette may be transmitted for each block or may be shared among a number of blocks. The palette may refer to a number of pixel values that are dominant and/or representative for the block.
The size of the palette, e.g., in terms of the number of pixel values that are included in the palette, may be fixed or may be signaled using one or more syntax elements in an encoded bitstream. As described in greater detail below, a pixel value may be composed of a number of samples, e.g., depending on the color space used for coding. For example, a pixel value may include luma and chrominance samples (e.g., luma, U chrominance and V chrominance (YUV) or luma, Cb chrominance, and Cr chrominance (YCbCr) samples). In another example, a pixel value may include Red, Green, and Blue (RGB) samples. As described herein, the term pixel value may generally refer to one or more of the samples contributing to a pixel. That is, the term pixel value does not necessarily refer to all samples contributing to a pixel, and may be used to describe a single sample value contributing to a pixel.
In some examples, a palette may be transmitted separately for each color component of a particular block. For example, in the YUV color space, there may be a palette for the Y component (representing Y values), another palette for the U component (representing U values), and yet another palette for the V component (representing V values). In another example, a palette may include all components of a particular block. In this example, the i-th entry in the palette may include three values (e.g., Y, U, V). According to aspects of this disclosure, one or more syntax elements may separately indicate the size of the palette for each component (e.g., Y, U, V, or the like). In other examples, a single size may be used for all components, such that one or more syntax elements indicate the size of all components.
Video encoder and/or video decoder may perform palette-based coding in a lossy or lossless manner. That is, in some examples, video encoder and/or video decoder may losslessly code video data for a block using palette entries that match the pixel values of the block (or by sending the actual pixel values if the pixel value is not included in the palette). In other examples, as described in greater detail with respect to FIG. 5 below, video encoder and/or video decoder may code video data for a block using palette entries that do not exactly match the pixel values of the block (lossy coding). Similarly, if the actual pixel value is not included in the palette, the actual pixel value may be quantized in a lossy manner.
According to techniques described in this disclosure, video encoder and/or video decoder may perform palette-based coding of predicted video blocks. In one example, video encoder first derives a palette for a current block based on the pixel values in the current block, and then maps the pixel values in the current block to palette indices for encoding. The mapping may be one to one (i.e., for lossless coding) or multiple to one (i.e., for lossy coding). Video encoder also maps reference pixel values in a previously coded block that will be used to predict the pixel values in the current block. Once the pixel values of the current block have been mapped to palette indices, video encoder may encode the current block with palette indices using regular encoding methods, e.g., regular intra coding in the HEVC standard.
In the above example, the current block with palette indices is treated as if the current block were an original block with pixel values. Similarly, the palette indices of the reference pixels are used for performing regular intra prediction on the current block with palette indices. Video encoder transmits the prediction error or residual values to video decoder. After encoding the current block, video encoder converts the indices of the reference pixels, prediction pixels, and the residual values back to the pixel values for reconstruction of the current block and the normal prediction of future blocks. Video decoder may obtain the encoded residual values for the current block from the bitstream, and decode the current block using a regular decoding method to obtain the current block with palette indices. Video decoder may then determine the pixel values of the current block based on the pixel values in the palette that are associated with the palette indices.
In another example, video encoder may generate a palette for a current block where the palette includes entries that indicate prediction residual values for the given block. The prediction residual values for the given block may be generated using any prediction mode, e.g., regular inter-prediction or intra-prediction in the HEVC standard. The prediction residual values for the given block may be residual pixel values or residual transform coefficient values. In either case, the prediction residual values may be quantized. In this example, video encoder maps the prediction residual values for the current block to the index values that indicate entries in the palette for the current block used to represent the prediction residual values for the current block, and encodes prediction residual values using the index values. Video decoder may obtain the block of index values from the bitstream, and determine the prediction residual values for the current block based on the corresponding prediction residual values in the palette that are identified by the index values. Video decoder may then reconstruct the pixel values of the current block using regular decoding methods based on the prediction residual values and previously coded reference pixel values.
In some examples, video encoder 20 and/or video decoder 30 may perform the palette-based video coding with video block prediction by applying the intra prediction mode (i.e., the prediction only uses previously coded pixel information in the current picture). In other cases, video encoder 20 and/or video decoder 30 may apply the inter prediction mode (i.e., the prediction is from pixels in a previously coded picture). In some cases, video encoder 20 and/or video decoder 30 may determine the prediction residual values for the current block using only a subset of prediction mode processes for either the inter prediction mode or the intra prediction mode.
In another example, video encoder 20 and/or video decoder 30 may perform no prediction for the current block. In this case, video encoder 20 instead maps the pixel values to palette indices, and encodes the indices using entropy coding without prediction. In an additional example, video encoder 20 and/or video decoder 30 may perform residual differential pulse code modulation (RDPCM) using pixel values of the current block that are mapped to palette index values. In this example, no prediction from pixels outside the current block is used, and horizontal or vertical prediction may be used for line copying index values within the current CU.
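The mapping between pixel values and palette index values described above can be sketched as follows. This is an illustrative simplification, not the normative coding process: a real codec would also handle "escape" pixels whose values are absent from the palette, whereas this sketch assumes full palette coverage.

```python
# Sketch: map a block's pixel values to palette index values, and back.
# Assumes every pixel value appears in the palette (no escape coding).

def map_block_to_indices(block, palette):
    """Replace each pixel value with the index of its palette entry."""
    lookup = {value: index for index, value in enumerate(palette)}
    return [[lookup[pixel] for pixel in row] for row in block]

def reconstruct_block(indices, palette):
    """Inverse mapping: palette indices back to pixel values."""
    return [[palette[i] for i in row] for row in indices]

palette = [20, 60, 200]          # entries indicating pixel values
block = [[20, 20, 60],
         [20, 200, 60]]
indices = map_block_to_indices(block, palette)
assert indices == [[0, 0, 1], [0, 2, 1]]
assert reconstruct_block(indices, palette) == block
```

The index map, rather than the raw pixel values, is what would then be entropy coded.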
In some examples, the techniques for palette-based coding of video data may be used with one or more other coding techniques, such as techniques for inter- or intra-predictive coding. For example, as described in greater detail below, an encoder or decoder, or combined encoder-decoder (codec), may be configured to perform inter- and intra-predictive coding, as well as palette-based coding.
FIG. 2 is a block diagram illustrating an example video encoder 20 that may implement the techniques of this disclosure. FIG. 2 is provided for purposes of explanation and should not be considered limiting of the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video encoder 20 in the context of HEVC coding. However, the techniques of this disclosure may be applicable to other coding standards or methods.
Video encoder 20 represents an example of a device that may be configured to perform techniques for palette-based video coding in accordance with various examples described in this disclosure. For example, video encoder 20 may be configured to selectively code various blocks of video data, such as CUs or PUs in HEVC coding, using either palette-based coding or non-palette based coding. Non-palette based coding modes may refer to various inter-predictive temporal coding modes or intra-predictive spatial coding modes, such as the various coding modes specified by HEVC Draft 10. Video encoder 20, in one example, may be configured to generate a palette having entries indicating pixel values. Furthermore, in this example, video encoder 20 may select pixel values in a palette to represent pixel values of at least some positions of a block of video data. In this example, video encoder 20 may signal information associating at least some of the positions of the block of video data with entries in the palette corresponding, respectively, to the selected pixel values. Video decoder 30 may use the signaled information to decode video data.
In the example of FIG. 2, video encoder 20 includes a video data memory 98, a prediction processing unit 100, a residual generation unit 102, a transform processing unit 104, a quantization unit 106, an inverse quantization unit 108, an inverse transform processing unit 110, a reconstruction unit 112, a filter unit 114, a decoded picture buffer 116, and an entropy encoding unit 118. Prediction processing unit 100 includes an inter-prediction processing unit 120 and an intra-prediction processing unit 126. Inter-prediction processing unit 120 includes a motion estimation unit and a motion compensation unit (not shown). Video encoder 20 also includes a palette-based encoding unit 122 configured to perform various aspects of the palette-based coding techniques described in this disclosure. In other examples, video encoder 20 may include more, fewer, or different functional components.
Video data memory 98 may store video data to be encoded by the components of video encoder 20. The video data stored in video data memory 98 may be obtained, for example, from video source 18. Decoded picture buffer 116 may be a reference picture memory that stores reference video data for use in encoding video data by video encoder 20, e.g., in intra- or inter-coding modes. Video data memory 98 and decoded picture buffer 116 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 98 and decoded picture buffer 116 may be provided by the same memory device or separate memory devices. In various examples, video data memory 98 may be on-chip with other components of video encoder 20, or off-chip relative to those components.
Video encoder 20 may receive video data. Video encoder 20 may encode each CTU in a slice of a picture of the video data. Each of the CTUs may be associated with equally-sized luma coding tree blocks (CTBs) and corresponding CTBs of the picture. As part of encoding a CTU, prediction processing unit 100 may perform quad-tree partitioning to divide the CTBs of the CTU into progressively-smaller blocks. The smaller blocks may be coding blocks of CUs. For example, prediction processing unit 100 may partition a CTB associated with a CTU into four equally-sized sub-blocks, partition one or more of the sub-blocks into four equally-sized sub-sub-blocks, and so on.
Video encoder 20 may encode CUs of a CTU to generate encoded representations of the CUs (i.e., coded CUs). As part of encoding a CU, prediction processing unit 100 may partition the coding blocks associated with the CU among one or more PUs of the CU. Thus, each PU may be associated with a luma prediction block and corresponding chroma prediction blocks. Video encoder 20 and video decoder 30 may support PUs having various sizes. As indicated above, the size of a CU may refer to the size of the luma coding block of the CU and the size of a PU may refer to the size of a luma prediction block of the PU. Assuming that the size of a particular CU is 2N×2N, video encoder 20 and video decoder 30 may support PU sizes of 2N×2N or N×N for intra prediction, and symmetric PU sizes of 2N×2N, 2N×N, N×2N, N×N, or similar for inter prediction. Video encoder 20 and video decoder 30 may also support asymmetric partitioning for PU sizes of 2N×nU, 2N×nD, nL×2N, and nR×2N for inter prediction.
Inter-prediction processing unit 120 may generate predictive data for a PU by performing inter prediction on each PU of a CU. The predictive data for the PU may include one or more predictive sample blocks of the PU and motion information for the PU. Inter-prediction unit 121 may perform different operations for a PU of a CU depending on whether the PU is in an I slice, a P slice, or a B slice. In an I slice, all PUs are intra predicted. Hence, if the PU is in an I slice, inter-prediction unit 121 does not perform inter prediction on the PU. Thus, for blocks encoded in I-mode, the predictive block is formed using spatial prediction from previously-encoded neighboring blocks within the same frame.
If a PU is in a P slice, the motion estimation unit of inter-prediction processing unit 120 may search the reference pictures in a list of reference pictures (e.g., “RefPicList0”) for a reference region for the PU. The reference region for the PU may be a region, within a reference picture, that contains sample blocks that most closely correspond to the sample blocks of the PU. The motion estimation unit may generate a reference index that indicates a position in RefPicList0 of the reference picture containing the reference region for the PU. In addition, the motion estimation unit may generate an MV that indicates a spatial displacement between a coding block of the PU and a reference location associated with the reference region. For instance, the MV may be a two-dimensional vector that provides an offset from the coordinates in the current decoded picture to coordinates in a reference picture. The motion estimation unit may output the reference index and the MV as the motion information of the PU. The motion compensation unit of inter-prediction processing unit 120 may generate the predictive sample blocks of the PU based on actual or interpolated samples at the reference location indicated by the motion vector of the PU.
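The search for a reference region that "most closely corresponds" to the current block can be illustrated with a minimal sum-of-absolute-differences (SAD) search. This is our own one-dimensional simplification for clarity, not the HEVC motion estimation process; the function name and the SAD cost metric are illustrative assumptions.

```python
# Sketch: find the offset within a 1-D reference window whose samples
# best match the current block, using SAD as the matching cost. The
# winning offset plays the role of the motion vector (displacement).

def best_offset(current, reference, max_offset):
    best = (float("inf"), 0)
    for offset in range(max_offset + 1):
        window = reference[offset:offset + len(current)]
        sad = sum(abs(c - r) for c, r in zip(current, window))
        best = min(best, (sad, offset))
    return best[1]

reference = [10, 10, 50, 60, 70, 10]
current = [50, 60, 70]
assert best_offset(current, reference, 3) == 2  # exact match at offset 2
```

A real encoder searches a two-dimensional window over reference pictures and may interpolate sub-sample positions, but the cost-minimizing structure is the same.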
If a PU is in a B slice, the motion estimation unit may perform uni-prediction or bi-prediction for the PU. To perform uni-prediction for the PU, the motion estimation unit may search the reference pictures of RefPicList0 or a second reference picture list (“RefPicList1”) for a reference region for the PU. The motion estimation unit may output, as the motion information of the PU, a reference index that indicates a position in RefPicList0 or RefPicList1 of the reference picture that contains the reference region, an MV that indicates a spatial displacement between a sample block of the PU and a reference location associated with the reference region, and one or more prediction direction indicators that indicate whether the reference picture is in RefPicList0 or RefPicList1. The motion compensation unit of inter-prediction processing unit 120 may generate the predictive sample blocks of the PU based at least in part on actual or interpolated samples at the reference region indicated by the motion vector of the PU.
To perform bi-directional inter prediction for a PU, the motion estimation unit may search the reference pictures in RefPicList0 for a reference region for the PU and may also search the reference pictures in RefPicList1 for another reference region for the PU. The motion estimation unit may generate reference picture indexes that indicate positions in RefPicList0 and RefPicList1 of the reference pictures that contain the reference regions. In addition, the motion estimation unit may generate MVs that indicate spatial displacements between the reference location associated with the reference regions and a sample block of the PU. The motion information of the PU may include the reference indexes and the MVs of the PU. The motion compensation unit may generate the predictive sample blocks of the PU based at least in part on actual or interpolated samples at the reference region indicated by the motion vector of the PU.
In accordance with various examples of this disclosure, video encoder 20 may be configured to perform palette-based coding. With respect to the HEVC framework, as an example, the palette-based coding techniques may be configured to be used as a CU mode. In other examples, the palette-based coding techniques may be configured to be used as a PU mode in the framework of HEVC. Accordingly, all of the disclosed processes described herein (throughout this disclosure) in the context of a CU mode may, additionally or alternatively, apply to a PU mode. However, these HEVC-based examples should not be considered a restriction or limitation of the palette-based coding techniques described herein, as such techniques may be applied to work independently or as part of other existing or yet to be developed systems/standards. In these cases, the unit for palette coding can be square blocks, rectangular blocks or even regions of non-rectangular shape.
Palette-based encoding unit 122, for example, may perform palette-based encoding when a palette-based encoding mode is selected, e.g., for a CU or PU. For example, palette-based encoding unit 122 may be configured to generate a palette having entries indicating pixel values, select pixel values in a palette to represent pixel values of at least some positions of a block of video data, and signal information associating at least some of the positions of the block of video data with entries in the palette corresponding, respectively, to the selected pixel values. Although various functions are described as being performed by palette-based encoding unit 122, some or all of such functions may be performed by other processing units, or a combination of different processing units.
Palette-based encoding unit 122 may be configured to generate any of the various syntax elements described herein. Accordingly, video encoder 20 may be configured to encode blocks of video data using palette-based coding modes as described in this disclosure. Video encoder 20 may selectively encode a block of video data using a palette coding mode, or encode a block of video data using a different mode, e.g., such as an HEVC inter-predictive or intra-predictive coding mode. The block of video data may be, for example, a CU or PU generated according to an HEVC coding process. Video encoder 20 may encode some blocks with inter-predictive temporal prediction or intra-predictive spatial coding modes and encode other blocks with the palette-based coding mode.
Intra-prediction processing unit 126 may generate predictive data for a PU by performing intra prediction on the PU. The predictive data for the PU may include predictive sample blocks for the PU and various syntax elements. Intra-prediction processing unit 126 may perform intra prediction on PUs in I slices, P slices, and B slices.
To perform intra prediction on a PU, intra-prediction processing unit 126 may use multiple intra prediction modes to generate multiple sets of predictive data for the PU. When using some intra prediction modes to generate a set of predictive data for the PU, intra-prediction processing unit 126 may extend values of samples from sample blocks of neighboring PUs across the predictive blocks of the PU in directions associated with the intra prediction modes. The neighboring PUs may be above, above and to the right, above and to the left, or to the left of the PU, assuming a left-to-right, top-to-bottom encoding order for PUs, CUs, and CTUs. Intra-prediction processing unit 126 may use various numbers of intra prediction modes, e.g., 33 directional intra prediction modes. In some examples, the number of intra prediction modes may depend on the size of the region associated with the PU.
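The idea of extending neighbor samples across the predictive block can be sketched for the two simplest directions. Only the vertical and horizontal modes are shown here; this is an illustration of the principle, not the normative HEVC angular prediction (which defines 33 angular modes plus planar and DC, with reference-sample filtering).

```python
# Sketch: directional intra prediction for an n×n block. "Vertical"
# copies the row of reconstructed samples above the block downward;
# "horizontal" copies the column to the left rightward.

def intra_predict(top, left, mode):
    n = len(top)
    if mode == "vertical":
        return [list(top) for _ in range(n)]     # extend top row down
    if mode == "horizontal":
        return [[left[r]] * n for r in range(n)] # extend left column right
    raise ValueError("unsupported mode: " + mode)

pred = intra_predict(top=[10, 20], left=[30, 40], mode="vertical")
assert pred == [[10, 20], [10, 20]]
```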
Prediction processing unit 100 may select the predictive data for PUs of a CU from among the predictive data generated by inter-prediction processing unit 120 for the PUs or the predictive data generated by intra-prediction processing unit 126 for the PUs. In some examples, prediction processing unit 100 selects the predictive data for the PUs of the CU based on rate/distortion metrics of the sets of predictive data. The predictive sample blocks of the selected predictive data may be referred to herein as the selected predictive sample blocks.
Residual generation unit 102 may generate, based on the coding blocks (e.g., luma, Cb and Cr coding blocks) of a CU and the selected predictive sample blocks (e.g., predictive luma, Cb and Cr blocks) of the PUs of the CU, residual blocks (e.g., luma, Cb and Cr residual blocks) of the CU. For instance, residual generation unit 102 may generate the residual blocks of the CU such that each sample in the residual blocks has a value equal to a difference between a sample in a coding block of the CU and a corresponding sample in a corresponding selected predictive sample block of a PU of the CU.
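The sample-wise difference just described is straightforward to express directly. The function below is a sketch of that operation under our own naming, not the codec's internal implementation.

```python
# Sketch: each residual sample is the difference between a coding-block
# sample and the corresponding predictive-block sample.

def residual_block(coding_block, predictive_block):
    return [[c - p for c, p in zip(crow, prow)]
            for crow, prow in zip(coding_block, predictive_block)]

coding = [[100, 102], [98, 101]]
prediction = [[100, 100], [100, 100]]
assert residual_block(coding, prediction) == [[0, 2], [-2, 1]]
```

Small residual magnitudes, as in this example, are what make the subsequent transform and quantization stages effective.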
Transform processing unit 104 may perform quad-tree partitioning to partition the residual blocks associated with a CU into transform blocks associated with TUs of the CU. Thus, in some examples, a TU may be associated with a luma transform block and two chroma transform blocks. The sizes and positions of the luma and chroma transform blocks of TUs of a CU may or may not be based on the sizes and positions of prediction blocks of the PUs of the CU. A quad-tree structure known as a “residual quad-tree” (RQT) may include nodes associated with each of the regions. The TUs of a CU may correspond to leaf nodes of the RQT.
Transform processing unit 104 may generate transform coefficient blocks for each TU of a CU by applying one or more transforms to the transform blocks of the TU. Transform processing unit 104 may apply various transforms to a transform block associated with a TU. For example, transform processing unit 104 may apply a discrete cosine transform (DCT), a directional transform, or a conceptually similar transform to a transform block. In some examples, transform processing unit 104 does not apply transforms to a transform block. In such examples, the transform block may be treated as a transform coefficient block.
Quantization unit 106 may quantize the transform coefficients in a coefficient block. The quantization process may reduce the bit depth associated with some or all of the transform coefficients. For example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m. Quantization unit 106 may quantize a coefficient block associated with a TU of a CU based on a quantization parameter (QP) value associated with the CU. Video encoder 20 may adjust the degree of quantization applied to the coefficient blocks associated with a CU by adjusting the QP value associated with the CU. Quantization may introduce loss of information, thus quantized transform coefficients may have lower precision than the original ones.
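The relationship between the QP value and the degree of quantization can be sketched with a simple scalar quantizer. The step-size formula below (doubling roughly every 6 QP steps) mirrors the general HEVC behavior but is a simplification; the normative rules use integer scaling tables and different rounding, so treat this purely as an illustration of how a larger QP discards more precision.

```python
# Sketch: scalar quantization controlled by a QP value. Higher QP gives
# a larger step size, hence coarser (lossier) quantized levels.

def quantize(coefficients, qp):
    step = 2 ** (qp / 6)                 # step roughly doubles per 6 QP
    return [round(c / step) for c in coefficients]

def dequantize(levels, qp):
    step = 2 ** (qp / 6)
    return [level * step for level in levels]

coeffs = [64, -17, 5, 0]
levels = quantize(coeffs, qp=12)         # step = 4
assert levels == [16, -4, 1, 0]
# Quantization is lossy: dequantization only approximates the input.
assert dequantize(levels, qp=12) == [64, -16, 4, 0]
```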
Inverse quantization unit 108 and inverse transform processing unit 110 may apply inverse quantization and inverse transforms to a coefficient block, respectively, to reconstruct a residual block from the coefficient block. Reconstruction unit 112 may add the reconstructed residual block to corresponding samples from one or more predictive sample blocks generated by prediction processing unit 100 to produce a reconstructed transform block associated with a TU. By reconstructing transform blocks for each TU of a CU in this way, video encoder 20 may reconstruct the coding blocks of the CU.
Filter unit 114 may perform one or more deblocking operations to reduce blocking artifacts in the coding blocks associated with a CU. Decoded picture buffer 116 may store the reconstructed coding blocks after filter unit 114 performs the one or more deblocking operations on the reconstructed coding blocks. Inter-prediction processing unit 120 may use a reference picture that contains the reconstructed coding blocks to perform inter prediction on PUs of other pictures. In addition, intra-prediction processing unit 126 may use reconstructed coding blocks in decoded picture buffer 116 to perform intra prediction on other PUs in the same picture as the CU.
Entropy encoding unit 118 may receive data from other functional components of video encoder 20. For example, entropy encoding unit 118 may receive coefficient blocks from quantization unit 106 and may receive syntax elements from prediction processing unit 100. Entropy encoding unit 118 may perform one or more entropy encoding operations on the data to generate entropy-encoded data. For example, entropy encoding unit 118 may perform a CABAC operation, a context-adaptive variable length coding (CAVLC) operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a Probability Interval Partitioning Entropy (PIPE) coding operation, an Exponential-Golomb encoding operation, or another type of entropy encoding operation on the data. Video encoder 20 may output a bitstream that includes entropy-encoded data generated by entropy encoding unit 118. For instance, the bitstream may include data that represents an RQT for a CU.
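Of the entropy coding operations listed above, 0th-order Exponential-Golomb coding is simple enough to show in full: a non-negative value v is coded as the binary representation of v + 1, prefixed by one fewer leading zeros than that representation has bits.

```python
# 0th-order Exp-Golomb encoding of a non-negative integer, as used for
# various syntax elements in H.264/HEVC bitstreams.

def exp_golomb_encode(value):
    code = bin(value + 1)[2:]            # binary form of value + 1
    return "0" * (len(code) - 1) + code  # prefix of len-1 zeros

assert exp_golomb_encode(0) == "1"
assert exp_golomb_encode(1) == "010"
assert exp_golomb_encode(4) == "00101"
```

Small values get short codewords, which suits the skewed distributions of typical syntax elements; CABAC and the other listed schemes pursue the same goal with adaptive probability models.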
In some examples, residual coding is not performed with palette coding. Accordingly, video encoder 20 may not perform transformation or quantization when coding using a palette coding mode. In addition, video encoder 20 may entropy encode data generated using a palette coding mode separately from residual data.
According to one or more of the techniques of this disclosure, video encoder 20, and specifically palette-based encoding unit 122, may perform palette-based video coding of predicted video blocks. As described above, a palette generated by video encoder 20 may be explicitly encoded and sent to video decoder 30, predicted from previous palette entries, predicted from previous pixel values, or a combination thereof.
In one example, palette-based encoding unit 122 of video encoder 20 determines one or more palette entries in a predictor palette that are copied to a current palette for a current block of video data, and determines a number of new palette entries that are not in the predictor palette but that are included in the current palette. Based on this information, video encoder 20 calculates a size of the current palette to be equal to the sum of the number of the copied palette entries and the number of the new palette entries, and generates the current palette of the determined size including the copied palette entries and the new palette entries. Video encoder 20 may transmit the determined information regarding the copied palette entries and the new palette entries to video decoder 30. In addition, video encoder 20 may explicitly encode and transmit pixel values for the new palette entries to video decoder 30. Palette-based encoding unit 122 of video encoder 20 may then encode the current block by determining index values for one or more pixel values of the current block that identify the palette entries in the current palette used to represent the pixel values of the current block.
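The palette construction just described can be sketched as follows. The flag-vector signaling (one reuse flag per predictor entry) is our simplification of the signaled "information regarding the copied palette entries"; the size relation, however, is exactly the one stated above: palette size = copied entries + new entries.

```python
# Sketch: build a current palette from a predictor palette plus
# explicitly signaled new entries. copy_flags marks which predictor
# entries are copied into the current palette.

def build_current_palette(predictor, copy_flags, new_entries):
    copied = [e for e, keep in zip(predictor, copy_flags) if keep]
    palette = copied + list(new_entries)
    # Size equals number of copied entries plus number of new entries.
    assert len(palette) == sum(copy_flags) + len(new_entries)
    return palette

predictor = [10, 50, 90, 200]
palette = build_current_palette(predictor, [1, 0, 1, 0], [128])
assert palette == [10, 90, 128]   # 2 copied + 1 new = size 3
```

Only the flags and the new entries' pixel values need to be transmitted, which is the bit-rate saving this prediction scheme targets.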
The techniques described in this disclosure may also include techniques for various combinations of one or more of signaling palette-based coding modes, transmitting palettes, predicting palettes, deriving palettes, or transmitting palette-based coding maps and other syntax elements.
FIG. 3 is a block diagram illustrating an example video decoder 30 that is configured to implement the techniques of this disclosure. FIG. 3 is provided for purposes of explanation and is not limiting on the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video decoder 30 in the context of HEVC coding. However, the techniques of this disclosure may be applicable to other coding standards or methods.
Video decoder 30 represents an example of a device that may be configured to perform techniques for palette-based video coding in accordance with various examples described in this disclosure. For example, video decoder 30 may be configured to selectively decode various blocks of video data, such as CUs or PUs in HEVC coding, using either palette-based coding or non-palette based coding. Non-palette based coding modes may refer to various inter-predictive temporal coding modes or intra-predictive spatial coding modes, such as the various coding modes specified by HEVC Draft 10. In one example, video decoder 30 may be configured to generate a palette having entries indicating pixel values. Furthermore, in this example, video decoder 30 may receive information associating at least some positions of a block of video data with entries in the palette. In this example, video decoder 30 may select pixel values in the palette based on the information and reconstruct pixel values of the block based on the selected pixel values.
In the example of FIG. 3, video decoder 30 includes a video data memory 148, an entropy decoding unit 150, a prediction processing unit 152, an inverse quantization unit 154, an inverse transform processing unit 156, a reconstruction unit 158, a filter unit 160, and a decoded picture buffer 162. Prediction processing unit 152 includes a motion compensation unit 164 and an intra-prediction processing unit 166. Video decoder 30 also includes a palette-based decoding unit 165 configured to perform various aspects of the palette-based coding techniques described in this disclosure. In other examples, video decoder 30 may include more, fewer, or different functional components.
Video data memory 148 may store video data, such as an encoded video bitstream, to be decoded by the components of video decoder 30. The video data stored in video data memory 148 may be obtained, for example, from computer-readable medium 16, e.g., from a local video source, such as a camera, via wired or wireless network communication of video data, or by accessing physical data storage media. Video data memory 148 may form a coded picture buffer (CPB) that stores encoded video data from an encoded video bitstream. Decoded picture buffer 162 may be a reference picture memory that stores reference video data for use in decoding video data by video decoder 30, e.g., in intra- or inter-coding modes. Video data memory 148 and decoded picture buffer 162 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 148 and decoded picture buffer 162 may be provided by the same memory device or separate memory devices. In various examples, video data memory 148 may be on-chip with other components of video decoder 30, or off-chip relative to those components.
Video data memory 148, i.e., a CPB, may receive and store encoded video data (e.g., NAL units) of a bitstream. Entropy decoding unit 150 may receive encoded video data (e.g., NAL units) from video data memory 148 and may parse the NAL units to decode syntax elements. Entropy decoding unit 150 may entropy decode entropy-encoded syntax elements in the NAL units. Prediction processing unit 152, inverse quantization unit 154, inverse transform processing unit 156, reconstruction unit 158, and filter unit 160 may generate decoded video data based on the syntax elements obtained (e.g., extracted) from the bitstream.
The NAL units of the bitstream may include coded slice NAL units. As part of decoding the bitstream, entropy decoding unit 150 may extract and entropy decode syntax elements from the coded slice NAL units. Each of the coded slices may include a slice header and slice data. The slice header may contain syntax elements pertaining to a slice. The syntax elements in the slice header may include a syntax element that identifies a PPS associated with a picture that contains the slice.
In addition to decoding syntax elements from the bitstream, video decoder 30 may perform a reconstruction operation on a non-partitioned CU. To perform the reconstruction operation on a non-partitioned CU, video decoder 30 may perform a reconstruction operation on each TU of the CU. By performing the reconstruction operation for each TU of the CU, video decoder 30 may reconstruct residual blocks of the CU.
As part of performing a reconstruction operation on a TU of a CU, inverse quantization unit 154 may inverse quantize, i.e., de-quantize, coefficient blocks associated with the TU. Inverse quantization unit 154 may use a QP value associated with the CU of the TU to determine a degree of quantization and, likewise, a degree of inverse quantization for inverse quantization unit 154 to apply. That is, the compression ratio, i.e., the ratio of the number of bits used to represent the original sequence and the compressed one, may be controlled by adjusting the value of the QP used when quantizing transform coefficients. The compression ratio may also depend on the method of entropy coding employed.
After inverse quantization unit 154 inverse quantizes a coefficient block, inverse transform processing unit 156 may apply one or more inverse transforms to the coefficient block in order to generate a residual block associated with the TU. For example, inverse transform processing unit 156 may apply an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotational transform, an inverse directional transform, or another inverse transform to the coefficient block.
If a PU is encoded using intra prediction, intra-prediction processing unit 166 may perform intra prediction to generate predictive blocks for the PU. Intra-prediction processing unit 166 may use an intra prediction mode to generate the predictive luma, Cb and Cr blocks for the PU based on the prediction blocks of spatially-neighboring PUs. Intra-prediction processing unit 166 may determine the intra prediction mode for the PU based on one or more syntax elements decoded from the bitstream.
Prediction processing unit 152 may construct a first reference picture list (RefPicList0) and a second reference picture list (RefPicList1) based on syntax elements extracted from the bitstream. Furthermore, if a PU is encoded using inter prediction, entropy decoding unit 150 may extract motion information for the PU. Motion compensation unit 164 may determine, based on the motion information of the PU, one or more reference regions for the PU. Motion compensation unit 164 may generate, based on sample blocks at the one or more reference blocks for the PU, predictive blocks (e.g., predictive luma, Cb and Cr blocks) for the PU.
Reconstruction unit 158 may use the transform blocks (e.g., luma, Cb and Cr transform blocks) associated with TUs of a CU and the predictive blocks (e.g., luma, Cb and Cr blocks) of the PUs of the CU, i.e., either intra-prediction data or inter-prediction data, as applicable, to reconstruct the coding blocks (e.g., luma, Cb and Cr coding blocks) of the CU. For example, reconstruction unit 158 may add samples of the transform blocks (e.g., luma, Cb and Cr transform blocks) to corresponding samples of the predictive blocks (e.g., predictive luma, Cb and Cr blocks) to reconstruct the coding blocks (e.g., luma, Cb and Cr coding blocks) of the CU.
Filter unit 160 may perform a deblocking operation to reduce blocking artifacts associated with the coding blocks (e.g., luma, Cb and Cr coding blocks) of the CU. Video decoder 30 may store the coding blocks (e.g., luma, Cb and Cr coding blocks) of the CU in decoded picture buffer 162. Decoded picture buffer 162 may provide reference pictures for subsequent motion compensation, intra prediction, and presentation on a display device, such as display device 32 of FIG. 1. For instance, video decoder 30 may perform, based on the blocks (e.g., luma, Cb and Cr blocks) in decoded picture buffer 162, intra prediction or inter prediction operations on PUs of other CUs. In this way, video decoder 30 may extract, from the bitstream, transform coefficient levels of a significant coefficient block, inverse quantize the transform coefficient levels, apply a transform to the transform coefficient levels to generate a transform block, generate, based at least in part on the transform block, a coding block, and output the coding block for display.
In accordance with various examples of this disclosure, video decoder 30 may be configured to perform palette-based coding. Palette-based decoding unit 165, for example, may perform palette-based decoding when a palette-based decoding mode is selected, e.g., for a CU or PU. For example, palette-based decoding unit 165 may be configured to generate a palette having entries indicating pixel values. Furthermore, in this example, palette-based decoding unit 165 may receive information associating at least some positions of a block of video data with entries in the palette. In this example, palette-based decoding unit 165 may select pixel values in the palette based on the information. Additionally, in this example, palette-based decoding unit 165 may reconstruct pixel values of the block based on the selected pixel values. Although various functions are described as being performed by palette-based decoding unit 165, some or all of such functions may be performed by other processing units, or a combination of different processing units.
Palette-based decoding unit 165 may receive palette coding mode information, and perform the above operations when the palette coding mode information indicates that the palette coding mode applies to the block. When the palette coding mode information indicates that the palette coding mode does not apply to the block, or when other mode information indicates the use of a different mode, palette-based decoding unit 165 decodes the block of video data using a non-palette based coding mode, e.g., an HEVC inter-predictive or intra-predictive coding mode. The block of video data may be, for example, a CU or PU generated according to an HEVC coding process. Video decoder 30 may decode some blocks with inter-predictive temporal prediction or intra-predictive spatial coding modes and decode other blocks with the palette-based coding mode. The palette-based coding mode may comprise one of a plurality of different palette-based coding modes, or there may be a single palette-based coding mode.
According to one or more of the techniques of this disclosure, video decoder 30, and specifically palette-based decoding unit 165, may perform palette-based video decoding of predicted video blocks. As described above, a palette generated by video decoder 30 may be explicitly encoded by video encoder 20, predicted from previous palette entries, predicted from previous pixel values, or a combination thereof.
In one example, palette-based decoding unit 165 of video decoder 30 determines one or more palette entries in a predictive palette that are copied to a current palette for a current block of video data, and determines a number of new palette entries that are not in the predictive palette but that are included in the current palette. Video decoder 30 may receive the information regarding the copied palette entries and the new palette entries from video encoder 20. In addition, video decoder 30 may receive explicitly encoded pixel values for the new palette entries transmitted from video encoder 20. Based on this information, palette-based decoding unit 165 calculates a size of the current palette to be equal to the sum of the number of the copied palette entries and the number of the new palette entries, and generates the current palette of the determined size including the copied palette entries and the new palette entries. Palette-based decoding unit 165 of video decoder 30 may then decode the current block by determining index values for one or more pixel values of the current block that identify the palette entries in the current palette used to represent the pixel values of the current block.
The techniques described in this disclosure may also include techniques for various combinations of one or more of signaling palette-based coding modes, transmitting palettes, predicting palettes, deriving palettes, or transmitting palette-based coding maps and other syntax elements.
As described above, in some examples, video encoder 20 and/or video decoder 30 may perform palette-based coding of predicted video blocks. In one example, video encoder 20 first derives a palette for a current block based on the pixel values in the current block, and then maps the pixel values in the current block to palette indices for encoding. The mapping may be one-to-one (i.e., for lossless coding) or multiple-to-one (i.e., for lossy coding). Video encoder 20 may also map reference pixel values in a previously coded block that may be used to predict the pixel values in the current block. Once video encoder 20 maps the pixel values of the current block to palette indices, video encoder 20 encodes the current block using regular encoding methods, e.g., regular intra coding in the HEVC standard.
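The palette derivation and pixel-to-index mapping described above can be sketched as follows. This is an illustrative, non-normative sketch: a real encoder would choose palette entries by rate-distortion criteria, whereas here the palette is simply the most frequent pixel values, and the lossy multiple-to-one mapping picks the nearest palette entry.

```python
from collections import Counter

def derive_palette(block, max_size=4):
    """Derive a palette from a block's pixel values, keeping the most
    frequent values (a simplified stand-in for the encoder's derivation)."""
    freq = Counter(v for row in block for v in row)
    return [value for value, _ in freq.most_common(max_size)]

def map_to_indices(block, palette):
    """Map each pixel to the index of the nearest palette entry.
    Exact matches give a one-to-one (lossless) mapping; otherwise the
    mapping is multiple-to-one (lossy)."""
    return [[min(range(len(palette)), key=lambda i: abs(palette[i] - v))
             for v in row] for row in block]

block = [[10, 10, 20], [10, 20, 30], [30, 30, 10]]
palette = derive_palette(block)      # [10, 30, 20] by frequency
indices = map_to_indices(block, palette)
```

The resulting index block can then be fed to a regular coding method in place of the original pixel values, as the text describes.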
In the above example, the current block with palette indices is treated as if it were an original block with pixel values. Similarly, the palette indices of the reference pixels are used for performing regular intra prediction on the current block with palette indices. Video encoder 20 may transmit the prediction error or residual values to video decoder 30. In some cases, the prediction error or residual values may be transformed, quantized and entropy encoded into the bitstream. In other cases, it is also possible that the transform and quantization are disabled for the palette coding mode. After encoding the current block, video encoder 20 may convert the indices of the reference pixels, prediction pixels, and/or the residual values back to the pixel values for reconstruction of the current block and the normal prediction of future blocks. Video decoder 30 may obtain the encoded residual values for the current block from the bitstream. Furthermore, video decoder 30 may decode the current block using a regular decoding method to obtain the current block with palette indices. Video decoder 30 may then determine the pixel values of the current block based on the pixel values in the palette that are associated with the palette indices.
In another example, video encoder 20 may generate a palette for a current block. The palette may include entries that indicate prediction residual values for the given block. The prediction residual values for the given block may be generated using any prediction mode, e.g., regular inter-prediction or intra-prediction in the HEVC standard. The prediction residual values for the given block may be residual pixel values (possibly quantized) or residual transform coefficient values (possibly quantized). In this example, video encoder 20 maps the prediction residual values for the current block to index values that indicate entries in the palette for the current block that are used to represent the prediction residual values for the current block. In this example, video encoder 20 may encode the index values for one or more positions in the current block, where the index values indicate the entries in the palette for the current block that specify the prediction residual values for the current block. Video decoder 30 may obtain the encoded block of index values from the bitstream, and determine the prediction residual values for the current block based on the corresponding prediction residual values in the palette identified by the index values. Video decoder 30 may then reconstruct the pixel values of the current block using regular decoding methods based on the prediction residual values and previously coded reference pixel values.
In some examples, video encoder 20 and/or video decoder 30 may perform the palette-based video coding with video block prediction by applying the intra prediction mode (i.e., the prediction only uses previously coded pixel information in the current picture). In other examples, video encoder 20 and/or video decoder 30 may apply the inter prediction mode (i.e., the prediction is from pixels in previously coded pictures). In one example, the prediction residual values for the current block may be residual pixel values for the current block calculated from the pixel values of the current block and the previously coded reference pixel values. The residual pixel values may be quantized. In another example, the prediction residual values for the current block may be residual transform coefficient values for the current block calculated from the pixel values of the current block and the previously coded reference pixel values, and then transformed and possibly quantized.
In some cases, video encoder 20 and/or video decoder 30 may determine the prediction residual values for the current block using only a subset of prediction mode processes for either the inter prediction mode or the intra prediction mode. For example, in the case of the intra prediction mode, the DC, horizontal, and/or vertical prediction processes may be enabled, but other intra prediction mode processes may be disabled. The disabled processes may include the filtering in the intra prediction mode, e.g., one or more of mode-dependent intra-smoothing (MDIS), 1/32-pel bilinear interpolation, the edge filter, and/or the DC filter (a background introduction can be found in U.S. Provisional Application No. 61/890,844, filed Oct. 14, 2013, entitled "Adaptive Filter Control for Intra Prediction in Video Coding," Applicant reference number 1212-671USP3/134960P3). As a further example, in the case of the inter prediction mode, the average of pixels process, e.g., one or more of the weighted prediction, the bi-prediction, or the sub-pel interpolation, may be disabled.
In another example, video encoder 20 and/or video decoder 30 may perform no prediction for the current block. In this case, video encoder 20 instead maps the pixel values to palette indices, and encodes the indices using entropy coding without prediction. In an additional example, video encoder 20 and/or video decoder 30 may perform residual differential pulse code modulation (RDPCM) using pixel values of the current block that are mapped to palette index values. In this case, no prediction from pixels outside the current block is used, and horizontal or vertical prediction may be used for line copying index values within the current CU. For example, when using the vertical prediction, the locations in the first row of the current block are not predicted, and the locations in the subsequent rows may be predicted using values in the previous rows, e.g., values in row i (i>0) equal to x(i, j) are predicted using x(i−1, j). When using the horizontal prediction, the locations in the first column of the current block are not predicted, and the locations in the subsequent columns may be predicted using values in the previous columns.
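The vertical prediction just described, where x(i, j) is predicted from x(i−1, j), can be sketched on a small index map. This is an illustrative sketch of the arithmetic only; the actual RDPCM signaling and entropy coding are not modeled, and horizontal prediction would be the transpose of the same operation.

```python
def rdpcm_vertical(indices):
    """Vertical RDPCM on a block of palette index values: the first row
    is unpredicted; each later row is predicted from the row above,
    i.e. x(i, j) is coded as the residual x(i, j) - x(i-1, j)."""
    residuals = [list(indices[0])]  # first row sent as-is
    for i in range(1, len(indices)):
        residuals.append([indices[i][j] - indices[i - 1][j]
                          for j in range(len(indices[i]))])
    return residuals

def rdpcm_vertical_inverse(residuals):
    """Reconstruct the index map by accumulating residuals down each column."""
    indices = [list(residuals[0])]
    for i in range(1, len(residuals)):
        indices.append([residuals[i][j] + indices[i - 1][j]
                        for j in range(len(residuals[i]))])
    return indices

idx = [[1, 2], [1, 3], [2, 3]]
res = rdpcm_vertical(idx)                  # [[1, 2], [0, 1], [1, 0]]
assert rdpcm_vertical_inverse(res) == idx  # lossless round trip
```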
In some examples, the techniques for palette-based coding of video data may be used with one or more other coding techniques, such as techniques for inter- or intra-predictive coding. For example, as described in greater detail below, an encoder or decoder, or combined encoder-decoder (codec), may be configured to perform inter- and intra-predictive coding, as well as palette-based coding.
FIG. 4 is a conceptual diagram illustrating an example of determining a palette for coding video data, consistent with techniques of this disclosure. The example of FIG. 4 includes a picture 178 having a first coding unit (CU) 180 that is associated with a first set of palettes (i.e., first palettes 184) and a second CU 188 that is associated with a second set of palettes (i.e., second palettes 192). As described in greater detail below and in accordance with one or more of the techniques of this disclosure, second palettes 192 are based on first palettes 184. Picture 178 also includes block 196 coded with an intra-prediction coding mode and block 200 that is coded with an inter-prediction coding mode.
The techniques of FIG. 4 are described in the context of video encoder 20 (FIG. 1 and FIG. 2) and video decoder 30 (FIG. 1 and FIG. 3) and with respect to the HEVC video coding standard for purposes of explanation. With respect to the HEVC framework, as an example, the palette-based coding techniques may be configured to be used as a CU mode. In other examples, the palette-based coding techniques may be configured to be used as a PU mode or a TU mode in the framework of HEVC. Accordingly, all of the following disclosed processes described in the context of a CU mode may, additionally or alternatively, apply to a PU or a TU. However, it should be understood that the techniques of this disclosure are not limited in this way, and may be applied by other video coding processors and/or devices in other video coding processes and/or standards.
In general, a palette refers to a number of pixel values that are dominant and/or representative for a CU currently being coded (e.g., CU 188 in the example of FIG. 4). First palettes 184 and second palettes 192 are shown as including multiple palettes. In some examples, according to aspects of this disclosure, a video coder (such as video encoder 20 or video decoder 30) may code palettes separately for each color component of a CU. For example, video encoder 20 may encode a palette for a luma (Y) component of a CU, another palette for a chroma (U) component of the CU, and yet another palette for the chroma (V) component of the CU. In this example, entries of the Y palette may represent Y values of pixels of the CU, entries of the U palette may represent U values of pixels of the CU, and entries of the V palette may represent V values of pixels of the CU.
In other examples, video encoder 20 may encode a single palette for all color components of a CU. In this example, video encoder 20 may encode a palette having an i-th entry that is a triple value, including Y_i, U_i, and V_i. In this case, the palette includes values for each of the components of the pixels. Accordingly, the representation of palettes 184 and 192 as a set of palettes having multiple individual palettes is merely one example and not intended to be limiting.
In the example of FIG. 4, each of first palettes 184 includes three entries 202-206, having entry index value 1, entry index value 2, and entry index value 3, respectively. Entries 202-206 relate the index values to pixel values including pixel value A, pixel value B, and pixel value C, respectively. It should be noted that each of first palettes 184 does not actually include the indices and column headers; each palette includes only the pixel values A, B and C, and the indices are used to identify the entries in the palette.
As described herein, rather than coding the actual pixel values of first CU 180, a video coder (such as video encoder 20 or video decoder 30) may use palette-based coding to code the pixels of the block using the indices 1-3. That is, for each pixel position of first CU 180, video encoder 20 may encode an index value for the pixel, where the index value is associated with a pixel value in one or more of first palettes 184. Video decoder 30 may obtain the index values from a bitstream and may reconstruct the pixel values using the index values and one or more of first palettes 184. In other words, for each respective index value for a block, video decoder 30 may determine an entry in one of first palettes 184. Video decoder 30 may replace the respective index value in the block with the pixel value specified by the determined entry in the palette. Video encoder 20 may transmit first palettes 184 in an encoded video data bitstream for use by video decoder 30 in palette-based decoding. In general, one or more palettes may be transmitted for each CU or may be shared among different CUs.
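The decoder-side lookup described above, replacing each index value in the block with the pixel value of the corresponding palette entry, reduces to a simple table lookup. The sketch below mirrors the three-entry palette of FIG. 4 (index values 1-3 mapped to pixel values A, B, C); the dictionary representation and the sample index block are illustrative choices, not the normative data structures.

```python
# Palette mirroring first palettes 184 in FIG. 4: entry index -> pixel value.
palette = {1: "A", 2: "B", 3: "C"}

def reconstruct(index_block, palette):
    """Replace each respective index value in the block with the pixel
    value specified by the corresponding palette entry."""
    return [[palette[idx] for idx in row] for row in index_block]

index_block = [[1, 1, 2], [3, 2, 1]]          # index values parsed from the bitstream
pixels = reconstruct(index_block, palette)    # [["A","A","B"], ["C","B","A"]]
```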
According to aspects of this disclosure, video encoder 20 and video decoder 30 may determine second palettes 192 based on first palettes 184. For example, video encoder 20 may encode a pred_palette_flag for each CU (including, as an example, second CU 188) to indicate whether the palette for the CU is predicted from one or more palettes associated with one or more other CUs, such as neighboring CUs (spatially or based on scan order) or the most frequent samples of a causal neighbor. For example, when the value of such a flag is equal to one, video decoder 30 may determine that second palettes 192 for second CU 188 are predicted from one or more already decoded palettes and therefore no new palettes for second CU 188 are included in a bitstream containing the pred_palette_flag. When such a flag is equal to zero, video decoder 30 may determine that palettes 192 for second CU 188 are included in the bitstream as a new palette. In some examples, pred_palette_flag may be separately coded for each different color component of a CU (e.g., three flags, one for Y, one for U, and one for V, for a CU in YUV video). In other examples, a single pred_palette_flag may be coded for all color components of a CU.
In the example above, the pred_palette_flag is signaled per-CU to indicate whether the entire palette for the current block is predicted. This means that second palettes 192 are identical to first palettes 184 and no additional information is signaled. In other examples, one or more syntax elements may be signaled on a per-entry basis. That is, a flag may be signaled for each entry of a palette predictor to indicate whether that entry is present in the current palette. As noted above, if a palette entry is not predicted, the palette entry may be explicitly signaled. In other examples, these two methods could be combined. For example, first the pred_palette_flag is signaled. If the flag is 0, a per-entry prediction flag may be signaled. In addition, the number of new entries and their explicit values may be signaled.
When determining second palettes 192 relative to first palettes 184 (e.g., pred_palette_flag is equal to one), video encoder 20 and/or video decoder 30 may locate one or more blocks from which the predictive palettes, in this example first palettes 184, are determined. The predictive palettes may be associated with one or more neighboring CUs (spatially or based on scan order), or the most frequent samples of a causal neighbor, of the CU currently being coded, i.e., second CU 188. The palettes of the one or more neighboring CUs may be associated with a predictive palette. In some examples, such as the example illustrated in FIG. 4, video encoder 20 and/or video decoder 30 may locate a left neighboring CU, first CU 180, when determining a predictive palette for second CU 188. In other examples, video encoder 20 and/or video decoder 30 may locate one or more CUs in other positions relative to second CU 188, such as an upper CU, CU 196. In another example, the palette for the last CU in scan order that used the palette mode may be used as a predictive palette.
Video encoder 20 and/or video decoder 30 may determine a CU for palette prediction based on a hierarchy. For example, video encoder 20 and/or video decoder 30 may initially identify the left neighboring CU, first CU 180, for palette prediction. If the left neighboring CU is not available for prediction (e.g., the left neighboring CU is coded with a mode other than a palette-based coding mode, such as an inter-prediction mode or intra-prediction mode, or is located at the left-most edge of a picture or slice), video encoder 20 and/or video decoder 30 may identify the upper neighboring CU, CU 196. Video encoder 20 and/or video decoder 30 may continue searching for an available CU according to a predetermined order of locations until locating a CU having a palette available for palette prediction. In some examples, video encoder 20 and/or video decoder 30 may determine a predictive palette based on multiple blocks and/or reconstructed samples of a neighboring block.
While the example of FIG. 4 illustrates first palettes 184 as predictive palettes from a single CU (i.e., first CU 180), in other examples, video encoder 20 and/or video decoder 30 may locate palettes for prediction from a combination of neighboring CUs. For example, video encoder 20 and/or video decoder 30 may apply one or more formulas, functions, rules or the like to generate a predictive palette based on palettes of one or a combination of a plurality of neighboring CUs (spatially or in scan order).
In still other examples, video encoder 20 and/or video decoder 30 may construct a candidate list including a number of potential candidates for palette prediction. In such examples, video encoder 20 may encode an index into the candidate list to indicate the candidate CU in the list from which the palette for the current CU is predicted (e.g., from which the palette is copied). Video decoder 30 may construct the candidate list in the same manner, decode the index, and use the decoded index to select the palette of the corresponding CU for use with the current CU. In another example, the palette of the indicated candidate CU in the list may be used as a predictive palette for per-entry prediction of a current palette for the current CU.
In an example for purposes of illustration, video encoder 20 and video decoder 30 may construct a candidate list that includes one CU that is positioned above the CU currently being coded and one CU that is positioned to the left of the CU currently being coded. In this example, video encoder 20 may encode one or more syntax elements to indicate the candidate selection. For example, video encoder 20 may encode a flag having a value of zero to indicate that the palette for the current CU is copied from the CU positioned to the left of the current CU. Video encoder 20 may encode the flag having a value of one to indicate that the palette for the current CU is copied from the CU positioned above the current CU. Video decoder 30 decodes the flag and selects the appropriate CU for palette prediction. In another example, the flag may indicate whether the palette of the top or left neighboring CU is used as a predictive palette. Then, for each entry in the predictive palette, it may be indicated whether that entry is used in the palette for the current CU.
In still other examples, video encoder 20 and/or video decoder 30 determine the palette for the CU currently being coded based on the frequency with which sample values included in one or more other palettes occur in one or more neighboring CUs. For example, video encoder 20 and/or video decoder 30 may track the colors associated with the most frequently used index values during coding of a predetermined number of CUs. Video encoder 20 and/or video decoder 30 may include the most frequently used colors in the palette for the CU currently being coded.
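The frequency-based derivation just described can be sketched as a running tally over previously coded CUs. The class name and method interface below are illustrative assumptions; the point is only that both encoder and decoder can maintain identical counts and therefore derive identical palettes without extra signaling.

```python
from collections import Counter

class ColorFrequencyTracker:
    """Track how often each color is used across previously coded CUs and
    build a palette from the most frequently used colors (a sketch of the
    frequency-based derivation described above)."""

    def __init__(self):
        self.counts = Counter()

    def observe_cu(self, pixel_values):
        # Called once per coded CU with the colors used in that CU.
        self.counts.update(pixel_values)

    def palette(self, size):
        # Most frequently used colors form the palette for the next CU.
        return [color for color, _ in self.counts.most_common(size)]

tracker = ColorFrequencyTracker()
tracker.observe_cu([1, 1, 2])
tracker.observe_cu([2, 2, 3])
derived = tracker.palette(2)  # color 2 used 3 times, color 1 twice
```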
As noted above, in some examples, video encoder 20 and/or video decoder 30 may copy an entire palette from a neighboring CU for coding a current CU. Additionally or alternatively, video encoder 20 and/or video decoder 30 may perform entry-wise based palette prediction. For example, video encoder 20 may encode one or more syntax elements for each entry of a palette indicating whether the respective entries are predicted based on a predictive palette (e.g., a palette of another CU). In this example, video encoder 20 may encode a flag having a value of one for a given entry when the entry is a predicted value from a predictive palette (e.g., a corresponding entry of a palette associated with a neighboring CU). Video encoder 20 may encode a flag having a value of zero for a particular entry to indicate that the particular entry is not predicted from a palette of another CU. In this example, video encoder 20 may also encode additional data indicating the value of the non-predicted palette entry.
This disclosure describes several alternative techniques for predicting a palette for a current CU. In one example, a predictive palette that includes palette entries from one or more previously coded neighboring CUs includes a number of entries, N. In this case, video encoder 20 first transmits a binary vector, V, having the same size as the predictive palette, i.e., size N, to video decoder 30. Each entry in the binary vector indicates whether the corresponding entry in the predictive palette will be reused or copied to the palette for the current CU. For example, V(i)=1 means that the i-th entry in the predictive palette for the neighboring CU will be reused or copied to the palette for the current CU, which may have a different index in the current CU.
In addition, video encoder 20 may transmit a number, M, that indicates how many new palette entries are included in the palette for the current CU, and then transmits a pixel value for each of the new palette entries to video decoder 30. In this example, the final size of the palette for the current CU may be derived as equal to M+S, where S is the number of entries in the predictive palette that may be reused or copied to the palette for the current CU (i.e., V(i)=1). To generate the palette for the current CU, video decoder 30 may merge the transmitted new palette entries and the copied palette entries reused from the predictive palette. In some cases, the merge may be based on the pixel values, such that the entries in the palette for the current CU may increase (or decrease) with the palette index. In other cases, the merge may be a concatenation of the two sets of entries, i.e., the new palette entries and the copied palette entries.
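The decoder-side merge described above, combining the reuse vector V with the M explicitly signaled new entries into a palette of size M+S, can be sketched as follows. The concatenation variant of the merge is shown; the value-sorted variant would simply sort the combined list. The function name and argument layout are illustrative, not the normative syntax.

```python
def build_current_palette(predictive_palette, reuse_vector, new_entries):
    """Build the current CU's palette from a predictive palette of size N,
    a binary reuse vector V (V[i] == 1 means entry i is copied), and M
    explicitly signaled new entries. The final size is M + S,
    where S = sum(V)."""
    copied = [entry for entry, reuse in zip(predictive_palette, reuse_vector)
              if reuse == 1]
    # Concatenation variant of the merge: copied entries, then new entries.
    palette = copied + list(new_entries)
    assert len(palette) == len(new_entries) + sum(reuse_vector)  # M + S
    return palette

pred = [10, 20, 30, 40]   # N = 4 entries from previously coded neighboring CUs
V = [1, 0, 1, 0]          # reuse entries 10 and 30, so S = 2
current = build_current_palette(pred, V, [55])  # M = 1 -> palette size 3
```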
In another example, video encoder 20 first transmits an indication of a size of a palette, N, for a current CU to video decoder 30. Video encoder 20 then transmits a vector, V, having the same size as the palette for the current CU, i.e., size N, to video decoder 30. Each entry in the vector indicates whether the corresponding entry in the palette for the current CU is explicitly transmitted by video encoder 20 or copied from a predictive palette. For example, V(i)=1 means that video encoder 20 transmits the i-th entry in the palette to video decoder 30, and V(i)=0 means that the i-th entry in the palette is copied from the predictive palette. For the entries that are copied from the predictive palette (i.e., V(i)=0), video encoder 20 may use different methods to signal which entry in the predictive palette is used in the palette for the current CU. In some cases, video encoder 20 may signal the palette index of the entry to be copied from the predictive palette to the palette for the current CU. In other cases, video encoder 20 may signal an index offset, which is the difference between the index in the palette for the current CU and the index in the predictive palette.
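This second signaling scheme, where V is sized to the current palette and distinguishes explicit entries from copied ones, can be sketched as a decoder-side reconstruction. The palette-index variant of the copy signaling is shown (the index-offset variant would add the offset to the current position instead); the argument layout is an illustrative assumption about how the parsed values might be grouped.

```python
def build_palette_from_signals(N, V, explicit_values, copy_indices,
                               predictive_palette):
    """Build a size-N palette for the current CU: entries with V[i] == 1
    are taken from the explicitly transmitted values, in order; entries
    with V[i] == 0 are copied from the predictive palette at the next
    signaled palette index."""
    palette = []
    explicit = iter(explicit_values)
    copies = iter(copy_indices)
    for flag in V:
        if flag == 1:
            palette.append(next(explicit))          # explicitly transmitted
        else:
            palette.append(predictive_palette[next(copies)])  # copied entry
    assert len(palette) == N
    return palette

pred = [10, 20, 30]
current = build_palette_from_signals(
    N=3, V=[1, 0, 0], explicit_values=[99], copy_indices=[0, 2],
    predictive_palette=pred)  # entry 0 explicit, entries 1-2 copied
```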
In the two above examples, the one or more previously coded neighboring CUs used to generate the predictive palette used for prediction of the palette for the current CU may be a top-neighboring (i.e., upper) CU or a left-neighboring CU with respect to the current CU. In some examples, a candidate list of neighboring CUs may be constructed, and video encoder 20 transmits an index to indicate which candidate neighboring CUs and associated palettes are used for palette prediction for the current CU. For certain CUs, e.g., CUs that are positioned at a beginning of a slice or at other slice boundaries or leftmost CUs in the slice or a picture of video data, palette prediction may be disabled.
In an additional example, video encoder 20 transmits an indication of a number of entries included in a palette for a current CU to video decoder 30. Then, for each of the palette entries, video encoder 20 transmits a flag or other syntax element to indicate whether the palette entry is explicitly transmitted by video encoder 20 or whether it is derived from a previously reconstructed pixel. For example, a one-bit flag set equal to 1 may mean that video encoder 20 explicitly sends the palette entry, and the one-bit flag set equal to 0 may mean that the palette entry is derived from a previously reconstructed pixel. For each of the palette entries that are derived from a previously reconstructed pixel, video encoder 20 transmits another indication regarding a pixel location of the reconstructed pixel in the current CU or a neighboring CU that corresponds to the palette entry. In some cases, the reconstructed pixel location indication may be a displacement vector with respect to the top-left position of the current CU. In other cases, the reconstructed pixel location indication may be an index into a list of reconstructed pixels that can be used for specifying the palette entry for the current CU. For example, this list may include all the reference pixels that may be used for normal intra prediction in HEVC.
In the example of FIG. 4, second palettes 192 includes four entries 208-214 having entry index value 1, entry index value 2, entry index value 3, and entry index value 4, respectively. Entries 208-214 relate the index values to pixel values including pixel value A, pixel value B, pixel value C, and pixel value D, respectively. According to one or more aspects of this disclosure, video encoder 20 and/or video decoder 30 may use any of the above-described techniques to locate first CU 180 for purposes of palette prediction and copy entries 1-3 of first palettes 184 to entries 1-3 of second palettes 192 for coding second CU 188. In this way, video encoder 20 and/or video decoder 30 may determine second palettes 192 based on first palettes 184. In addition, video encoder 20 and/or video decoder 30 may code data for entry 4 to be included with second palettes 192. Such information may include the number of palette entries not predicted from a predictive palette and the pixel values corresponding to those palette entries.
In some examples, according to aspects of this disclosure, one or more syntax elements may indicate whether palettes, such as second palettes 192, are predicted entirely from a predictive palette (shown in FIG. 4 as first palettes 184, but which may be composed of entries from one or more blocks) or whether particular entries of second palettes 192 are predicted. For example, an initial syntax element may indicate whether all of the entries are predicted. If the initial syntax element indicates that not all of the entries are predicted (e.g., a flag having a value of 0), one or more additional syntax elements may indicate which entries of second palettes 192 are predicted from the predictive palette.
According to some aspects of this disclosure, certain information associated with palette prediction may be inferred from one or more characteristics of the data being coded. That is, rather than video encoder 20 encoding syntax elements (and video decoder 30 decoding such syntax elements), video encoder 20 and video decoder 30 may perform palette prediction based on one or more characteristics of the data being coded.
In an example, for purposes of illustration, the value of pred_palette_flag, described above, may be inferred from one or more of, as examples, the size of the CU being coded, the frame type, the color space, the color component, the frame size, the frame rate, the layer id in scalable video coding or the view id in multi-view coding. That is, with respect to the size of the CU as an example, video encoder 20 and/or video decoder 30 may determine that the above-described pred_palette_flag is equal to one for any CUs that exceed or are less than a predetermined size. In this example, the pred_palette_flag does not need to be signaled in the encoded bitstream.
While described above with respect to the pred_palette_flag, video encoder 20 and/or video decoder 30 may also or alternatively infer other information associated with palette prediction, such as the candidate CU from which the palette is used for prediction, or rules for constructing palette prediction candidates, based on one or more characteristics of the data being coded.
According to other aspects of this disclosure, video encoder 20 and/or video decoder 30 may construct a palette on-the-fly. For example, when initially coding second CU 188, there are no entries in palettes 192. As video encoder 20 and video decoder 30 code new values for pixels of second CU 188, each new value is included in palettes 192. That is, for example, video encoder 20 adds pixel values to palettes 192 as the pixel values are generated and signaled for positions in CU 188. As video encoder 20 encodes pixels relatively later in the CU, video encoder 20 may encode pixels having the same values as those already included in the palette using index values rather than signaling the pixel values. Similarly, when video decoder 30 receives a new pixel value (e.g., signaled by video encoder 20) for a position in second CU 188, video decoder 30 includes the pixel value in palettes 192. When pixel positions decoded relatively later in second CU 188 have pixel values that have been added to second palettes 192, video decoder 30 may receive information such as, e.g., index values, that identify the corresponding pixel values in second palettes 192 for reconstruction of the pixel values of second CU 188.
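The on-the-fly construction just described can be sketched as a single pass over the pixels in scan order: the palette starts empty, each first occurrence of a value is signaled explicitly and appended to the palette, and each repeat is coded as an index into the palette built so far. The tagged-tuple "bitstream" below is a purely illustrative stand-in for the real signaling.

```python
def code_on_the_fly(pixels):
    """On-the-fly palette construction: the palette starts with no
    entries; each new pixel value is added as it is encountered, and
    pixels whose values are already in the palette are coded as index
    values rather than signaled explicitly."""
    palette, symbols = [], []
    for v in pixels:
        if v in palette:
            symbols.append(("index", palette.index(v)))  # value seen before
        else:
            palette.append(v)
            symbols.append(("new", v))  # value signaled explicitly
    return palette, symbols

palette, symbols = code_on_the_fly([5, 5, 7, 5, 7])
```

A decoder running the same rule reproduces the identical palette, which is why no palette needs to be transmitted up front in this scheme.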
In some examples, as described in greater detail below, video encoder 20 and/or video decoder 30 may maintain palettes 184 and 192 at or below a maximum palette size. According to aspects of this disclosure, if a maximum palette size is reached, e.g., as second palettes 192 are constructed dynamically on-the-fly, then video encoder 20 and/or video decoder 30 perform the same process to remove an entry of second palettes 192. One example process for removing palette entries is a first-in-first-out (FIFO) technique in which video encoder 20 and video decoder 30 remove the oldest entry of a palette. In another example, video encoder 20 and video decoder 30 may remove the least frequently used palette entry from the palette. In still another example, video encoder 20 and video decoder 30 may weight both FIFO and frequency of use processes to determine which entry to remove. That is, removal of an entry may be based on how old the entry is and how frequently it is used.
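The eviction policies described above (FIFO, least-frequently-used, or a weighted combination) might be sketched as follows; the scoring weights are illustrative assumptions, not values from the disclosure:

```python
def evict_entry(palette, ages, counts, w_age=0.5, w_freq=0.5):
    """Pick a palette entry to remove when the maximum size is reached.

    Scores each entry by a weighted mix of its age (FIFO component) and its
    use count (frequency component); the highest-scoring entry is evicted.
    Setting w_freq=0 gives pure FIFO; w_age=0 gives least-frequently-used.
    """
    def score(i):
        return w_age * ages[i] - w_freq * counts[i]

    victim = max(range(len(palette)), key=score)
    palette.pop(victim)
    ages.pop(victim)
    counts.pop(victim)
    return victim

# Entry 0 is oldest but heavily used, so the weighted rule evicts entry 1.
palette, ages, counts = [5, 6, 7], [3, 2, 1], [10, 1, 1]
assert evict_entry(palette, ages, counts) == 1
assert palette == [5, 7]
```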
According to some aspects, if an entry (pixel value) is removed from a palette and the pixel value occurs again at a later position in the CU being coded, video encoder 20 may encode the pixel value instead of including an entry in the palette and encoding an index. Additionally or alternatively, video encoder 20 may re-enter palette entries into the palette after having been removed, e.g., as video encoder 20 and video decoder 30 scan the positions in the CU.
In some examples, the techniques for deriving a palette on-the-fly may be combined with one or more other techniques for determining a palette. In particular, as an example, video encoder 20 and video decoder 30 may initially code second palettes 192 (e.g., using palette prediction to predict second palettes 192 from first palettes 184) and may update second palettes 192 when coding pixels of second CU 188. For example, upon transmitting the initial palette, video encoder 20 may add values to the initial palette or change values in the initial palette as pixel values of additional locations in the CU are scanned. Likewise, upon receiving an initial palette, video decoder 30 may add (i.e., include) values to the initial palette or change values in the initial palette as pixel values of additional locations in the CU are scanned.
Video encoder 20 may, in some examples, signal whether the current CU uses transmission of an entire palette, or on-the-fly palette generation, or a combination of transmission of an initial palette with updating of the initial palette by on-the-fly derivation. In some examples, the initial palette may be a full palette at maximum palette size, in which case values in the initial palette may be changed. In other examples, the initial palette may be smaller than the maximum palette size, in which case video encoder 20 and video decoder 30 may add values to and/or change values of the initial palette.
According to one or more aspects of this disclosure, the size of palettes, such as first palettes 184 and second palettes 192, e.g., in terms of the number of pixel values that are included in the palette, may be fixed or may be signaled using one or more syntax elements in an encoded bitstream. For example, according to some aspects, video encoder 20 and video decoder 30 may use unary codes or truncated unary codes (e.g., codes that truncate at a maximum limit of the palette size) to code the palette size. According to other aspects, video encoder 20 and video decoder 30 may use Exponential-Golomb or Rice-Golomb codes to code the palette size.
According to still other aspects, video encoder 20 and video decoder 30 may code data indicating the size of the palette after each entry of the palette. With respect to second palettes 192 as an example, video encoder 20 may encode a stop flag after each of entries 208-214. In this example, a stop flag equal to one may specify that the entry currently being coded is the final entry of second palettes 192, while a stop flag equal to zero may indicate that there are additional entries in second palettes 192. Accordingly, video encoder 20 may encode stop flags having a value of zero after each of entries 208-212 and a stop flag having a value of one after entry 214. In some instances, the stop flag may not be included in the bitstream upon the constructed palette reaching a maximum palette size limit. While the examples above disclose techniques for explicitly signaling the size of palettes, in other examples, the size of palettes may also be conditionally transmitted or inferred based on so-called side information (e.g., characteristic information such as the size of the CU being coded, the frame type, the color space, the color component, the frame size, the frame rate, the layer id in scalable video coding or the view id in multi-view coding, as noted above).
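The stop-flag signaling described above can be illustrated with a short sketch. The serialized symbol format is hypothetical; per the text, the flag after the final entry is omitted only when the palette has reached the maximum size limit:

```python
def signal_palette_with_stop_flags(palette, max_size):
    """Serialize palette entries, each followed by a stop flag (0 = more
    entries follow, 1 = final entry). The flag is omitted when the palette
    has already reached the maximum size limit, since the decoder can infer
    termination at that point.
    """
    symbols = []
    for i, entry in enumerate(palette):
        symbols.append(("entry", entry))
        last = (i == len(palette) - 1)
        if last and i + 1 == max_size:
            continue                      # size limit reached: flag inferred
        symbols.append(("stop", 1 if last else 0))
    return symbols

# Three entries with a limit of four: flags 0, 0, 1 are signaled.
out = signal_palette_with_stop_flags([40, 41, 42], max_size=4)
assert [v for k, v in out if k == "stop"] == [0, 0, 1]
```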
The techniques of this disclosure include coding data losslessly, or, alternatively, with some losses (lossy coding). For example, with respect to lossy coding, video encoder 20 may code the pixels of a CU without exactly matching the pixel values of palettes to the actual pixel values in the CU. When the techniques of this disclosure are applied to lossy coding, some restrictions may be applied to the palette. For example, video encoder 20 and video decoder 30 may quantize palettes, such as first palettes 184 and second palettes 192. That is, video encoder 20 and video decoder 30 may merge or combine (i.e., quantize) entries of a palette when the pixel values of the entries are within a predetermined range of each other. In other words, if there is already a palette value that is within an error margin of a new palette value, the new palette value is not added to the palette. In another example, a plurality of different pixel values in a block may be mapped to a single palette entry, or, equivalently, to a single palette pixel value.
Video decoder 30 may decode pixel values in the same manner, regardless of whether a particular palette is lossless or lossy. As one example, video decoder 30 may use an index value transmitted by video encoder 20 for a given pixel position in a coded block to select an entry in the palette for the pixel position, without regard to whether the palette is lossless or lossy. In this example, the pixel value of the palette entry is used as the pixel value in the coded block, whether it matches the original pixel value exactly or not.
In an example of lossy coding, for purposes of illustration, video encoder 20 may determine an error bound, referred to as a delta value. A candidate pixel value entry Plt_cand may correspond to a pixel value at a position in a block to be coded, such as a CU or PU. During construction of the palette, video encoder 20 determines the absolute difference between the candidate pixel value entry Plt_cand and all of the existing pixel value entries in the palette. If all of the absolute differences between the candidate pixel value entry Plt_cand and the existing pixel value entries in the palette are larger than the delta value, video encoder 20 may add the pixel value candidate to the palette as an entry. If an absolute difference between the pixel value entry Plt_cand and at least one existing pixel value entry in the palette is equal to or smaller than the delta value, video encoder 20 may not add the candidate pixel value entry Plt_cand to the palette. Thus, when coding the pixel value entry Plt_cand, video encoder 20 may select the entry with the pixel value that is the closest to the pixel value entry Plt_cand, thereby introducing some loss into the system. When a palette consists of multiple components (e.g., three color components), the sum of the absolute differences of the individual component values may be used for comparison against the delta value. Alternatively or additionally, the absolute difference for each component value may be compared against a second delta value.
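The delta-comparison rule for building a lossy palette can be sketched for single-component pixel values as follows (non-normative; a multi-component palette would compare a sum of absolute component differences, as noted above):

```python
def build_lossy_palette(pixels, delta):
    """Add a candidate entry only if it differs from every existing palette
    entry by more than delta (single-component values, for illustration).
    Candidates within delta of an existing entry are absorbed by that entry,
    introducing the controlled loss described in the text.
    """
    palette = []
    for cand in pixels:
        if all(abs(cand - entry) > delta for entry in palette):
            palette.append(cand)
    return palette

# With delta = 2, the values 101 and 102 collapse into the entry for 100.
assert build_lossy_palette([100, 101, 110, 102, 111], delta=2) == [100, 110]
```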
In some examples, the existing pixel value entries in the palette noted above may have been added using a similar delta comparison process. In other examples, the existing pixel values in the palette may have been added using other processes. For example, one or more initial pixel value entries may be added to a palette (without a delta comparison) to start the delta comparison process of constructing the palette. The process described above may be implemented by video encoder 20 and/or video decoder 30 to produce luma and/or chroma palettes.
The techniques described above with respect to palette construction may also be used by video encoder 20 and video decoder 30 during pixel coding. For example, when encoding a pixel value, video encoder 20 may compare the value of the pixel with the pixel values of entries in the palette. If the absolute pixel value difference between the value of the pixel and one of the entries in the palette is equal to or smaller than a delta value, video encoder 20 may encode the pixel value as the entry of the palette. That is, in this example, video encoder 20 encodes the pixel value using one of the entries of the palette when the pixel value produces a sufficiently small (e.g., within a predetermined range) absolute difference versus the palette entry.
In some examples, video encoder 20 may select the palette entry that yields the smallest absolute pixel value difference (compared to the pixel value being coded) to encode the pixel value. As an example, video encoder 20 may encode an index to indicate a palette entry that will be used for the pixel value, e.g., the palette pixel value entry that will be used to reconstruct the coded pixel value at video decoder 30. If the absolute pixel value difference between the value of the pixel and all of the entries in the palette is greater than delta, the encoder may not use one of the palette entries to encode the pixel value, and instead may transmit the pixel value of the pixel (possibly after quantization) to video decoder 30 (and possibly add the pixel value as an entry to the palette).
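The encoder-side choice between signaling a palette index and transmitting the pixel value itself might look like the following sketch (the ("index"/"escape") representation is hypothetical):

```python
def code_pixel(value, palette, delta):
    """Return ("index", i) for the closest palette entry when it lies within
    delta of the pixel value; otherwise return ("escape", value), signaling
    the pixel value itself rather than a palette index.
    """
    if not palette:
        return ("escape", value)
    # Closest entry = smallest absolute pixel value difference.
    best = min(range(len(palette)), key=lambda i: abs(palette[i] - value))
    if abs(palette[best] - value) <= delta:
        return ("index", best)
    return ("escape", value)

palette = [100, 110]
assert code_pixel(101, palette, delta=2) == ("index", 0)     # close to entry 100
assert code_pixel(105, palette, delta=2) == ("escape", 105)  # no entry within 2
```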
In another example, video encoder 20 may select an entry of a palette for encoding a pixel value. Video encoder 20 may use the selected entry as a predictive pixel value. That is, video encoder 20 may determine a residual value representing a difference between the actual pixel value and the selected entry and encode the residue. Video encoder 20 may generate residual values for pixels in a block that are predicted by entries of a palette, and may generate a residue block including respective residual pixel values for the block of pixels. Video encoder 20 may subsequently apply transformation and quantization (as noted above with respect to FIG. 2) to the residue block. In this manner, video encoder 20 may generate quantized residual transform coefficients. In another example, the residue may be coded losslessly (without transform and quantization) or without transform.
Video decoder 30 may inverse transform and inverse quantize the transform coefficients to reproduce the residual block. Video decoder 30 may then reconstruct a pixel value using the predictive palette entry value and the residual value for the pixel value. For example, video decoder 30 may combine the residual value with the palette entry value to reconstruct the coded pixel value.
In some examples, the delta value may be different for different CU sizes, picture sizes, color spaces or different color components. The delta value may be predetermined or determined based on various coding conditions. For example, video encoder 20 may signal the delta value to video decoder 30 using high level syntax, such as syntax in PPS, SPS, VPS and/or slice header. In other examples, video encoder 20 and video decoder 30 may be preconfigured to use the same, fixed delta value. In still other examples, video encoder 20 and/or video decoder 30 may adaptively derive the delta value based on side information (e.g., such as CU size, color space, color component, or the like, as noted above).
In some examples, a lossy coding palette mode may be included as an HEVC coding mode. For example, coding modes may include an intra-prediction mode, an inter-prediction mode, a lossless coding palette mode, and a lossy coding palette mode. In HEVC coding, as noted above with respect to FIGS. 2 and 3, a quantization parameter (QP) is used to control the allowed distortion. The value of delta for palette-based coding may be calculated or otherwise determined as a function of QP. For example, the above-described delta value may be 1<<(QP/6) or 1<<((QP+d)/6), where d is a constant and "<<" represents the bitwise left-shift operator.
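Assuming integer division of QP by 6, the QP-dependent delta formulas above can be computed as:

```python
def palette_delta(qp, d=0):
    """Delta as a function of QP, per the formulas in the text:
    1 << (QP / 6), or 1 << ((QP + d) / 6) with a constant offset d,
    using integer division. Delta doubles every 6 QP steps, mirroring how
    the HEVC quantizer step size scales with QP.
    """
    return 1 << ((qp + d) // 6)

assert palette_delta(0) == 1     # QP 0  -> delta 1
assert palette_delta(24) == 16   # QP 24 -> 1 << 4
assert palette_delta(22, d=2) == 16
```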
Generation of a palette using the lossy coding techniques described in this disclosure may be performed by video encoder 20, video decoder 30 or both. For example, video encoder 20 may generate entries in a palette for a CU using the delta comparison techniques described above and signal information for construction of the palette for use by video decoder 30. That is, video encoder 20 may be configured to signal information indicating pixel values for entries in a palette for a CU, and then encode pixel values using the pixel values associated with such palette entries. Video decoder 30 may construct a palette using such information, and may then use the entries to decode pixel values of a coded block. In some examples, video encoder 20 may signal index values that identify palette entries for one or more pixel positions of the coded block, and video decoder 30 may use the index values to retrieve the pertinent pixel value entries from the palette.
In other examples, video decoder 30 may be configured to construct a palette by applying the delta comparison techniques described above. For example, video decoder 30 may receive pixel values for positions within a coded block and may determine whether absolute differences between the pixel values and the existing pixel value entries in the palette are larger than a delta value. If so, video decoder 30 may add the pixel values as entries in the palette, e.g., for later use in palette-based decoding of pixel values for other pixel positions of the block using corresponding index values signaled by video encoder 20. In this case, video encoder 20 and video decoder 30 apply the same or similar processes to generate the palette. If not, video decoder 30 may not add the pixel values to the palette.
In an example for purposes of illustration, video decoder 30 may receive index values or pixel values for various pixel positions in a block. If an index value is received for a pixel position, for example, video decoder 30 may use the index value to identify an entry in the palette, and use the pixel value of the palette entry for the pixel position. If a pixel value is received for the pixel position, video decoder 30 may use the received pixel value for the pixel position, and may also apply the delta comparison to determine whether the pixel value should be added to the palette and then later used for palette coding.
On the encoder side, if a pixel value for a position in a block produces an absolute difference between the pixel value and an existing pixel value entry in the palette that is less than or equal to the delta value, video encoder 20 may send an index value to identify the entry in the palette for use in reconstructing the pixel value for that position. If a pixel value for a position in a block produces absolute difference values between the pixel value and the existing pixel value entries in the palette that are all greater than the delta value, video encoder 20 may send the pixel value and may add the pixel value as a new entry in the palette. To construct the palette, video decoder 30 may use delta values signaled by the encoder, rely on a fixed or known delta value, or infer or derive a delta value, e.g., as described above.
As noted above, video encoder 20 and/or video decoder 30 may use coding modes including an intra-prediction mode, an inter-prediction mode, a lossless coding palette mode, and a lossy coding palette mode when coding video data. According to some aspects of this disclosure, video encoder 20 and video decoder 30 may code one or more syntax elements indicating whether palette-based coding is enabled. For example, at each CU, video encoder 20 may encode a syntax element, such as a flag PLT_Mode_flag. The PLT_Mode_flag or other syntax element may indicate whether a palette-based coding mode is to be used for a given CU (or a PU in other examples). For example, this flag may be signaled in an encoded video bitstream at the CU level, and then received by video decoder 30 upon decoding the encoded video bitstream.
In this example, a value of this PLT_Mode_flag equal to 1 may specify that the current CU is encoded using a palette-based coding mode. In this case, video decoder 30 may apply the palette-based coding mode to decode the CU. In some examples, a syntax element may indicate one of a plurality of different palette-based coding modes for the CU (e.g., lossy or lossless). A value of this PLT_Mode_flag equal to 0 may specify that the current CU is encoded using a mode other than palette mode. For example, any of a variety of inter-predictive, intra-predictive, or other coding modes may be used. When a value of PLT_Mode_flag is 0, video encoder 20 may also encode additional data to indicate the specific mode used for encoding the respective CU (e.g., an HEVC coding mode). The use of the PLT_Mode_flag is described for purposes of example. In other examples, however, other syntax elements such as multi-bit codes may be used to indicate whether the palette-based coding mode is to be used for a CU (or PU in other examples) or to indicate which of a plurality of modes are to be used for coding.
In some examples, the above-described flag or other syntax elements may be transmitted at a higher level than the CU (or PU) level. For example, video encoder 20 may signal such a flag at a slice level. In this case, a value equal to 1 indicates that all of the CUs in the slice are encoded using palette mode. In this example, no additional mode information, e.g., for palette mode or other modes, is signaled at the CU level. In another example, video encoder 20 may signal such a flag in a PPS, SPS or VPS.
According to some aspects of this disclosure, video encoder 20 and/or video decoder 30 may code one or more syntax elements (e.g., such as the above-described flag) at one of the slice, PPS, SPS, or VPS levels specifying whether the palette mode is enabled or disabled for the particular slice, picture, sequence or the like, while the PLT_Mode_flag indicates whether the palette-based coding mode is used for each CU. In this case, if a flag or other syntax element sent at the slice, PPS, SPS or VPS level indicates that palette coding mode is disabled, in some examples, there may be no need to signal the PLT_Mode_flag for each CU. Alternatively, if a flag or other syntax element sent at the slice, PPS, SPS or VPS level indicates that palette coding mode is enabled, the PLT_Mode_flag may be further signaled to indicate whether the palette-based coding mode is to be used for each CU. Again, as mentioned above, application of these techniques for indicating palette-based coding of a CU could additionally or alternatively be used to indicate palette-based coding of a PU.
In some examples, the above-described syntax elements may be conditionally signaled in the bitstream. For example, video encoder 20 and video decoder 30 may only encode or decode, respectively, the syntax elements based on the size of the CU, the frame type, the color space, the color component, the frame size, the frame rate, the layer id in scalable video coding or the view id in multi-view coding.
While the examples described above relate to explicit signaling, e.g., with one or more syntax elements in a bitstream, in other examples, video encoder 20 and/or video decoder 30 may implicitly determine whether a palette coding mode is active and/or used for coding a particular block. Video encoder 20 and video decoder 30 may determine whether palette-based coding is used for a block based on, for example, the size of the CU, the frame type, the color space, the color component, the frame size, the frame rate, the layer id in scalable video coding or the view id in multi-view coding.
While the techniques of FIG. 4 are described above in the context of CUs (HEVC), it should be understood that the techniques may also be applied to prediction units (PUs) or in other video coding processes and/or standards.
FIG. 5 is a conceptual diagram illustrating examples of determining indices to a palette for a video block, consistent with techniques of this disclosure. For example, FIG. 5 includes a map 240 of index values (values 1, 2, and 3) that relate respective positions of pixels associated with the index values to an entry of palettes 244. Palettes 244 may be determined in a similar manner as first palettes 184 and second palettes 192 described above with respect to FIG. 4.
Again, the techniques of FIG. 5 are described in the context of video encoder 20 (FIG. 1 and FIG. 2) and video decoder 30 (FIG. 1 and FIG. 3) and with respect to the HEVC video coding standard for purposes of explanation. However, it should be understood that the techniques of this disclosure are not limited in this way, and may be applied by other video coding processors and/or devices in other video coding processes and/or standards.
While map 240 is illustrated in the example of FIG. 5 as including an index value for each pixel position, it should be understood that in other examples, not all pixel positions may be associated with an index value that indicates an entry of palettes 244 that specify the pixel value of the block. That is, as noted above, in some examples, video encoder 20 may encode (and video decoder 30 may obtain, from an encoded bitstream) an indication of an actual pixel value (or its quantized version) for a position in map 240 if the pixel value is not included in palettes 244.
In some examples, video encoder 20 and video decoder 30 may be configured to code an additional map indicating which pixel positions are associated with index values. For example, assume that the (i, j) entry in the map corresponds to the (i, j) position of a CU. Video encoder 20 may encode one or more syntax elements for each entry of the map (i.e., each pixel position) indicating whether the entry has an associated index value. For example, video encoder 20 may encode a flag having a value of one to indicate that the pixel value at the (i, j) location in the CU is one of the values in palettes 244. Video encoder 20 may, in such an example, also encode a palette index (shown in the example of FIG. 5 as values 1-3) to indicate that pixel value in the palette and to allow video decoder 30 to reconstruct the pixel value. In instances in which palettes 244 include a single entry and associated pixel value, video encoder 20 may skip the signaling of the index value. Video encoder 20 may encode the flag to have a value of zero to indicate that the pixel value at the (i, j) location in the CU is not one of the values in palettes 244. In this example, video encoder 20 may also encode an indication of the pixel value for use by video decoder 30 in reconstructing the pixel value. In some instances, the pixel value may be coded in a lossy manner.
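The per-position flag-plus-index signaling described above might be sketched as follows (the tuple representation is hypothetical and entropy coding is omitted):

```python
def encode_position(value, palette):
    """Per-position signaling: flag 1 plus a palette index when the value is
    in the palette (the index is omitted for a single-entry palette, since it
    can be inferred), otherwise flag 0 plus the pixel value itself.
    """
    if value in palette:
        if len(palette) == 1:
            return (1,)                    # single entry: index inferred
        return (1, palette.index(value))
    return (0, value)

palette = [10, 20, 30]
assert encode_position(20, palette) == (1, 1)   # in palette: flag + index
assert encode_position(25, palette) == (0, 25)  # escape: flag + pixel value
assert encode_position(10, [10]) == (1,)        # single-entry palette
```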
The value of a pixel in one position of a CU may provide an indication of values of one or more other pixels in other positions of the CU. For example, there may be a relatively high probability that neighboring pixel positions of a CU will have the same pixel value or may be mapped to the same index value (in the case of lossy coding, in which more than one pixel value may be mapped to a single index value).
Accordingly, according to aspects of this disclosure, video encoder 20 may encode one or more syntax elements indicating a number of consecutive pixels or index values in a given scan order that have the same pixel value or index value. As noted above, the string of like-valued pixel or index values may be referred to herein as a run. In an example for purposes of illustration, if two consecutive pixels or indices in a given scan order have different values, the run is equal to zero. If two consecutive pixels or indices in a given scan order have the same value but the third pixel or index in the scan order has a different value, the run is equal to one. For three consecutive indices or pixels with the same value, the run is two, and so forth. Video decoder 30 may obtain the syntax elements indicating a run from an encoded bitstream and use the data to determine the number of consecutive locations that have the same pixel or index value.
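The run semantics defined above (a run counts the additional consecutive positions sharing a value, so two equal values give a run of one) can be sketched as:

```python
def runs(values):
    """Split a scan into (value, run) pairs, where run counts the ADDITIONAL
    consecutive positions sharing the value: two equal values -> run 1,
    three equal values -> run 2, and a run of 0 means the next value differs.
    """
    out = []
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[j + 1] == values[i]:
            j += 1
        out.append((values[i], j - i))
        i = j + 1
    return out

# Matches the line 248 example below: five 2s then three 3s
# -> signal (index 2, run 4) followed by (index 3, run 2).
assert runs([2, 2, 2, 2, 2, 3, 3, 3]) == [(2, 4), (3, 2)]
```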
In some examples, all pixel locations in the current CU having pixel values that are in the palette for the current CU are encoded with a palette index followed by a "run" of the pixel value at consecutive pixel locations. In the case where there is only one entry in the palette, the transmission of the palette index or the "run" may be skipped for the current CU. In the case where the pixel value at one of the pixel locations in the current CU does not have an exact match to a pixel value in the palette, video encoder 20 may select one of the palette entries having the closest pixel value and may calculate a prediction error or residual value between the original pixel value and the prediction pixel value included in the palette. Video encoder 20 encodes and transmits the residual value for the pixel location to the video decoder. Video decoder 30 may then derive a pixel value at the pixel location based on the corresponding received palette index, and the derived pixel value and the residual value are then used to predict the original pixel value at the pixel location in the current CU. In one example, the residual value is encoded using an HEVC method specified by HEVC draft 10, such as applying a RQT to transform the residual value, quantize the transform coefficients, and entropy encode the quantized transform coefficients. In some cases, the above example may be referred to as lossy coding.
In an example for purposes of illustration, consider line 248 of map 240. Assuming a horizontal, left to right scan direction, line 248 includes five index values of "2" and three index values of "3." According to aspects of this disclosure, video encoder 20 may encode an index value of 2 for the first position of line 248 in the scan direction. In addition, video encoder 20 may encode one or more syntax elements indicating the run of consecutive values in the scan direction that have the same index value as the signaled index value. In the example of line 248, video encoder 20 may signal a run of 4, thereby indicating that the index values of the following four positions in the scan direction share the same index value as the signaled index value. Video encoder 20 may perform the same process for the next different index value in line 248. That is, video encoder 20 may encode an index value of 3 and one or more syntax elements indicating a run of two. Video decoder 30 may obtain the syntax elements indicating the index value and the number of consecutive indices in the scan direction having the same index value (the run).
As noted above, the indices of a map are scanned in a particular order. According to aspects of this disclosure, the scan direction may be vertical, horizontal, or at a diagonal (e.g., 45 degrees or 135 degrees diagonally in the block). In some examples, video encoder 20 may encode one or more syntax elements for each block indicating a scan direction for scanning the indices of the block. Additionally or alternatively, the scan direction may be signaled or inferred based on so-called side information such as, for example, block size, color space, and/or color component. Video encoder 20 may specify scans for each color component of a block. Alternatively, a specified scan may apply to all color components of a block.
For example, with respect to a column based scan, consider column 252 of map 240. Assuming a vertical, top to bottom scan direction, column 252 includes one index value of "1," five index values of "2" and two index values of "3." According to aspects of this disclosure, video encoder 20 may encode an index value of 1 for the first position of column 252 in the scan direction (at the relative top of column 252). In addition, video encoder 20 may signal a run of zero, thereby indicating that the index value of the following position in the scan direction is different. Video encoder 20 may then encode an index value of 2 for the next position in the scan direction and one or more syntax elements indicating a run of four, i.e., that the index values of the following four positions in the scan direction share the same index value as the signaled index value. Video encoder 20 may then encode an index value of 3 for the next different index value in the scan direction and one or more syntax elements indicating a run of one. Video decoder 30 may obtain the syntax elements indicating the index value and the number of consecutive indices in the scan direction having the same index value (the run).
According to aspects of this disclosure, video encoder 20 and video decoder 30 may additionally or alternatively perform line copying for one or more entries of map 240. The line copying may depend, in some examples, on the scan direction. For example, video encoder 20 may indicate that a pixel or index value for a particular entry in a map is equal to a pixel or index value in a line above the particular entry (for a horizontal scan) or the column to the left of the particular entry (for a vertical scan). Video encoder 20 may also indicate, as a run, the number of pixel or index values in the scan order that are equal to the entry in the line above or the column to the left of the particular entry. In this example, video encoder 20 and/or video decoder 30 may copy pixel or index values from the specified neighboring line and from the specified number of entries for the line of the map currently being coded.
In an example for purposes of illustration, consider columns 256 and 260 of map 240. Assuming a vertical, top to bottom scan direction, column 256 includes three index values of "1," three index values of "2," and two index values of "3." Column 260 includes the same index values having the same order in the scan direction. According to aspects of this disclosure, video encoder 20 may encode one or more syntax elements for column 260 indicating that the entire column 260 is copied from column 256. The one or more syntax elements may be associated with a first entry of column 260 at the relative top of map 240. Video decoder 30 may obtain the syntax elements indicating the line copying and copy the index values of column 256 for column 260 when decoding column 260.
According to aspects of this disclosure, the techniques for coding so-called runs of entries may be used in conjunction with the techniques for line copying described above. For example, video encoder 20 may encode one or more syntax elements (e.g., a flag) indicating whether the value of an entry in a map is obtained from a palette or the value of an entry in the map is obtained from a previously coded line in map 240. Video encoder 20 may also encode one or more syntax elements indicating an index value of a palette or the location of the entry in the line (the row or column). Video encoder 20 may also encode one or more syntax elements indicating a number of consecutive entries that share the same value. Video decoder 30 may obtain such information from an encoded bitstream and may use the information to reconstruct the map and pixel values for a block.
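Combining runs with line copying, a decoder-side reconstruction of the index map might be sketched as follows; the symbol names ("index", "copy_above") are hypothetical stand-ins for the signaled syntax elements:

```python
def decode_map(symbols, width, height):
    """Reconstruct an index map from ("index", i, run) and ("copy_above", run)
    symbols, scanning rows left to right. As in the text, run counts the
    ADDITIONAL consecutive positions coded by the same symbol.
    """
    flat = []
    for sym in symbols:
        if sym[0] == "index":
            _, idx, run = sym
            flat.extend([idx] * (run + 1))       # palette index plus its run
        else:
            _, run = sym
            for _ in range(run + 1):             # copy from the row above
                flat.append(flat[len(flat) - width])
    return [flat[r * width:(r + 1) * width] for r in range(height)]

# First row signaled explicitly; second row copied entirely from the row above.
rows = decode_map([("index", 1, 2), ("index", 3, 0), ("copy_above", 3)],
                  width=4, height=2)
assert rows == [[1, 1, 1, 3], [1, 1, 1, 3]]
```

A corresponding encoder would choose, per position, between a palette-index run and a copy-from-above run, e.g., by rate/distortion comparison as noted below.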
In an example for purposes of illustration, consider rows 264 and 268 of map 240. Assuming a horizontal, left to right scan direction, row 264 includes five index values of "1" and three index values of "3." Row 268 includes three index values of "1," two index values of "2," and three index values of "3." In this example, video encoder 20 may identify particular entries of row 264 followed by a run when encoding data for row 268. For example, video encoder 20 may encode one or more syntax elements indicating that the first position of row 268 (the left most position of row 268) is the same as the first position of row 264. Video encoder 20 may also encode one or more syntax elements indicating that the next run of two consecutive entries in the scan direction in row 268 is the same as the first position of row 264.
In some examples, video encoder 20 may also determine whether to code the current pixel or index value relative to a position in another row (or column) or to code the current pixel or index value using a run syntax element. For example, after encoding the one or more syntax elements indicating the first position of row 264 and the run of two entries (noted above), video encoder 20 may encode, for the fourth and fifth positions in line 268 (from left to right), one or more syntax elements indicating a value of 2 for the fourth position and one or more syntax elements indicating a run of 1. Hence, video encoder 20 encodes these two positions without reference to another line (or column). Video encoder 20 may then code the first position having an index value of 3 in row 268 relative to upper row 264 (e.g., indicating a copy from upper row 264 and the run of consecutive positions in the scan order having the same index value). Hence, according to aspects of this disclosure, video encoder 20 may select between coding pixel or index values of a line (or column) relative to other values of the line (or column), e.g., using a run, coding pixel or index values of a line (or column) relative to values of another line (or column), or a combination thereof. In some examples, video encoder 20 may perform a rate/distortion optimization to make the selection.
Video decoder 30 may receive the syntax elements described above and may reconstruct row 268. For example, video decoder 30 may obtain data indicating a particular location in a neighboring row from which to copy the associated index value for the position of map 240 currently being coded. Video decoder 30 may also obtain data indicating the number of consecutive positions in the scan order having the same index value.
In some instances, the line from which entries are copied may be directly adjacent to the entry of the line currently being coded (as illustrated in the examples of FIG. 5). However, in other examples, a number of lines may be buffered by video encoder 20 and/or video decoder 30, such that any of the number of lines of the map may be used as predictive entries for a line of the map currently being coded. Hence, in some examples, the pixel value for an entry may be signaled to be equal to a pixel value of an entry in a row immediately above the current row (or a column immediately to the left of the current column), or in a row two or more rows above (or a column two or more columns to the left).
In an example for purposes of illustration, video encoder 20 and/or video decoder 30 may be configured to store the previous n rows of entries prior to coding a current row of entries. In this example, video encoder 20 may indicate the predictive row (the row from which entries are copied) in a bitstream with a truncated unary code or other codes. In another example, video encoder 20 may encode (and video decoder 30 may decode) a displacement value between the current line and the predictive line of map 240 used as a reference for coding the current line. That is, video encoder 20 may encode an indication of a particular line (or column) from which an index value is copied. In some examples, the displacement value may be a displacement vector. That is, let c[0], c[1], . . . , denote the indices of the current line of map 240 and let u[0], u[1], u[2], . . . , denote the indices of a predictive line of map 240, such as an upper neighboring line. In this example, given a displacement vector d, the index value for c[i] may be predicted from u[i+d], or from u[i−d] to avoid d taking negative values. The value of d may be coded using unary, truncated unary, exponential Golomb or Golomb-Rice codes.
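The displacement formulation can be sketched as follows (hypothetical function name; a minimal illustration, not the disclosed syntax), where positions that fall outside the predictive line are left unpredicted:

```python
def predict_from_upper(u, d, length):
    """Predict a line of indices from an upper neighboring line u,
    shifted by displacement d: c[i] is predicted from u[i + d].
    Positions falling outside u are returned as None; the encoder
    would have to code those entries by other means."""
    return [u[i + d] if 0 <= i + d < len(u) else None
            for i in range(length)]
```

With d = 0 this reduces to plain line copying from the upper neighboring line.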
As another example, video encoder 20 may signal an instruction, such as “copy from up line left half” or “copy from up line right half,” indicating the neighboring line and the number or portion of entries of the neighboring line to copy to the line of the map currently being coded. As an additional example, the map of index values may be re-ordered before coding. For example, the map of index values may be rotated by 90, 180 or 270 degrees, or flipped upside down or left-side right to improve coding efficiency.
In other examples, video encoder 20 may not transmit runs of like-valued index values of map 240 to video decoder 30. In this case, video encoder 20 and/or video decoder 30 may implicitly derive the values of the runs. In one example, the value of a run may be a constant value, e.g., 4, 8, 16, or the like. In another example, the value of a run may be dependent on side information for the current block of video data being coded such as, for example, the block size, the quantization parameter (QP), the frame type, the color component, the color format (e.g., 4:4:4, 4:2:2, or 4:2:0), the color space (e.g., YUV or RGB), the scan direction and/or other types of characteristic information for the current block. In the case where the value of a run depends on the block size, the run may be equal to the width of the current block, the height of the current block, the half-width (or half-height) of the current block, a fraction of the width and/or the height of the current block, or a multiple of the width and/or the height of the current block. In another example, video encoder 20 may signal the value of a run to video decoder 30 using high level syntax, such as syntax in a picture parameter set (PPS), a sequence parameter set (SPS), a video parameter set (VPS) and/or a slice header.
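A minimal sketch of such implicit run derivation is shown below (hypothetical rule names; in practice the applicable rule would be fixed by the coding standard or signaled once in high-level syntax rather than chosen per call):

```python
def derive_run(width, height, rule="width"):
    """Implicitly derive a run length from block-size side information,
    so that no run value needs to be signaled per run.  The rules shown
    (block width, block height, half-width, constant) mirror some of
    the options discussed above."""
    rules = {
        "width": width,        # run spans one full row
        "height": height,      # run spans one full column
        "half_width": width // 2,
        "constant": 8,         # fixed run length
    }
    return rules[rule]
```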
Additionally or alternatively, video encoder 20 may not even need to transmit map 240 to video decoder 30. Instead, video encoder 20 and/or video decoder 30 may implicitly derive a start position or location of each run of index values included in map 240. In one example, the video coding standard applied by video encoder 20 and/or video decoder 30 may determine that a run can only start at certain locations. For example, the run may only start at the beginning of each row, or the beginning of every N rows of a current block being coded. The start location may be different for different scan directions. For example, if the vertical scan is used, the run may only start at the beginning of a column or the beginning of every N columns of the current block.
In another example, the start location may be derived depending on side information for the current block such as, for example, the block size, the QP, the frame type, the color component, the color format (e.g., 4:4:4, 4:2:2, or 4:2:0), the color space (e.g., YUV or RGB), the scan direction and/or other types of characteristic information for the current block. In the case where the start location of a run depends on the block size, the start location may be the mid-point of each row and/or each column, or a fraction (e.g., 1/n, 2/n, . . . , (n−1)/n) of each row and/or column. In another example, video encoder 20 may signal the start position to video decoder 30 using high level syntax, such as syntax in a PPS, an SPS, a VPS and/or a slice header.
In some examples, the implicit start position derivation and the implicit run derivation, each described above, may be combined. For example, video encoder 20 and/or video decoder 30 may determine that a run of like-valued index values of the map is equal to a distance between two neighboring start positions. In the case where the start position is the beginning (i.e., the first position) of every row of a current block, then video encoder 20 and/or video decoder 30 may determine that the length of the run is equal to the length of an entire row of the current block.
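The combination of implicit start positions and implicit runs can be sketched as follows, assuming runs may start only at the beginning of every N rows in raster-scan order (hypothetical helper name; a simplification for illustration):

```python
def implicit_runs(width, height, start_every_n_rows=1):
    """Derive run start positions and run lengths implicitly: a run may
    only start at the beginning of every N rows, and each run extends to
    the next start position, so neither starts nor lengths need to be
    signaled.  Positions are raster-scan indices into the block."""
    starts = [r * width for r in range(0, height, start_every_n_rows)]
    total = width * height
    lengths = [b - a for a, b in zip(starts, starts[1:] + [total])]
    return list(zip(starts, lengths))
```

With N = 1 each run is exactly one row long, matching the case described above where every run begins at the first position of a row.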
FIG. 6 is a conceptual diagram illustrating examples of determining a geometric edge 270, 272, or 274 of a video block using a run of palette indices for the luma component adaptively downsampled for the chroma components, consistent with techniques of this disclosure. In FIG. 6, the luma samples are illustrated as un-filled circles, and the chroma samples are illustrated as one of the luma samples overlaid with an x-symbol. FIG. 6 illustrates examples of different run values for luma and chroma components based on a location of geometric edge 270, 272, or 274 of the video block.
In some cases, one palette is generated and shared for multiple color components in the current block, and in other cases, separate palettes are generated for one or more of the color components. In one case, one palette may be generated for the luma component and another palette may be generated for both the chroma components. In either case, the geometric information may be shared between the color components. Usually there is high correlation between edge locations of collocated blocks in different color components because the chroma components may have been downsampled from the luma components in a pre-defined way, such as 4:2:2 or 4:2:0 sampling.
For example, in palette-based coding, run coding may be used to indicate geometry information for the current block because an edge of the current block will break the run. In case of the 4:4:4 chroma format, the run may be generated once and used for all color components. The run may be generated based on one of the color components, or the run may be generated using more than one of the color components. In case of the 4:2:2 chroma format, the run used for the luma component may be horizontally downsampled by a factor of two for application to the chroma components. In the case of the 4:2:0 chroma format, the run used for the luma component may be horizontally and vertically downsampled by a factor of two for application to the chroma components.
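A simple fixed-factor version of this run downsampling can be sketched as follows (hypothetical helper name; integer division by two stands in for the horizontal downsampling, and the vertical factor for 4:2:0 is omitted for brevity):

```python
def downsample_run(luma_run, chroma_format):
    """Derive the chroma run from the luma run for a block whose
    geometry (run structure) is shared between color components.
    For 4:4:4 the run is reused as-is; for 4:2:2 and 4:2:0 the
    horizontal chroma resolution is halved, so the run is divided
    by a factor of two (a simple fixed rule)."""
    if chroma_format == "4:4:4":
        return luma_run
    if chroma_format in ("4:2:2", "4:2:0"):
        return luma_run // 2
    raise ValueError("unknown chroma format")
```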
In some cases, the run downsampling method can be adaptive to a chroma downsampling method. In this case, the downsampled run value for the chroma components may be differently calculated according to the location of the edge, e.g., edge 270, 272 or 274, of the video block as shown in FIG. 6. In a first example, FIG. 6 illustrates a geometric edge 270 between two neighboring video blocks that is positioned such that a run for the luma component has a value of “1” in the left-hand block and a value of “3” in the right-hand block. In this case, the downsampled run for the chroma components has a value of “1” in both the left-hand block and the right-hand block. In a second example, FIG. 6 illustrates a geometric edge 272 between two neighboring video blocks that is positioned such that a run for the luma component has a value of “2” in both the left-hand block and the right-hand block. In this case, the downsampled run for the chroma components has a value of “1” in both the left-hand block and the right-hand block. In a third example, FIG. 6 illustrates a geometric edge 274 between two neighboring video blocks that is positioned such that a run for the luma component has a value of “3” in the left-hand block and a value of “1” in the right-hand block. In this case, the downsampled run for the chroma components has a value of “2” in the left-hand block and a value of “0” in the right-hand block.
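These three cases can be reproduced with a small sketch that counts how many chroma sample positions fall on each side of the edge, assuming four luma samples per row with chroma co-sited at luma positions 0 and 2 (an assumed layout for illustration; `adaptive_chroma_runs` is a hypothetical name):

```python
def adaptive_chroma_runs(luma_left_run, chroma_positions):
    """Compute the edge-adaptive downsampled chroma runs.

    The geometric edge sits after `luma_left_run` luma samples;
    `chroma_positions` lists the luma positions that carry a chroma
    sample (e.g., positions 0 and 2 for 2:1 horizontal downsampling).
    Returns (left chroma run, right chroma run)."""
    left = sum(1 for p in chroma_positions if p < luma_left_run)
    return left, len(chroma_positions) - left

# Luma runs (1, 3) -> chroma runs (1, 1); (2, 2) -> (1, 1); (3, 1) -> (2, 0),
# matching the three edge locations described above.
```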
In addition to the geometric information, it may also be possible to have a single palette for the pixel values of all color components. For example, for each pixel location in the current block, the pixel values in three color components (e.g., the Y luma component and both the U and V chroma components) may form a vector (i.e., a color vector). Then, a palette may be formed by selecting a certain number of vectors to represent the current block. It may be possible to have one palette of pixel values for the luma component, and another palette of pixel values for the chroma components. In some cases, it may also be possible to combine the two methods of sharing geometry information and having a single palette of pixel values using a color vector.
In some examples, the line copying described in more detail elsewhere in this disclosure may also work with a single palette. In this case, the palette index for each pixel location is signaled as being equal to the palette index of the row above, if the scan is horizontal, or the column on the left, if the scan is vertical, and then the associated run of palette indices is also copied from the previous row or column. With a shared palette, a palette entry may be a triplet of (Y, U, V), so that the Y, U, and V values may later be reconstructed from the palette index. The reconstructed values may serve as the decoded pixel values or may serve as prediction values that are combined with residual values to derive the decoded pixel values. In the 4:2:2 chroma format and the 4:2:0 chroma format, the chroma components are downsampled compared to the luma components. In the example of a 2:1 downsampling, the luma positions may be at 0, 1, 2, . . . , and the chroma positions may be at 1, 3, 5, . . . or at 0, 2, 4, . . . . For positions where chroma components do not exist, the U and V components in the palette entry may be discarded.
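A sketch of reconstruction from a shared triplet palette, including the discarding of U and V components at positions with no chroma sample, is shown below (hypothetical helper names; the chroma sample positions are an assumed input):

```python
def reconstruct_444(indices, palette):
    """Reconstruct (Y, U, V) samples from palette indices using one
    shared palette of color-vector triplets (4:4:4 case)."""
    return [palette[i] for i in indices]

def reconstruct_422(indices, palette, chroma_positions):
    """As above, but for 2:1 horizontally downsampled chroma: at
    positions carrying no chroma sample, the U and V components of
    the palette entry are discarded and only Y is kept."""
    out = []
    for pos, i in enumerate(indices):
        y, u, v = palette[i]
        out.append((y, u, v) if pos in chroma_positions else (y,))
    return out
```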
FIG. 7 is a flowchart illustrating an example process for encoding prediction residual video data using a palette-based coding mode, consistent with techniques of this disclosure. The example process illustrated in FIG. 7 is described herein with respect to palette-based encoding unit 122 of video encoder 20 from FIG. 2. In other examples, one or more other or additional components may perform the example process illustrated in FIG. 7.
Video encoder 20 receives video data of a current video block to be encoded using palette-based video coding of predicted video blocks, and sends the video data to palette-based encoding unit 122. Palette-based encoding unit 122 determines prediction residual values for the current block based on pixel values of the current block and previously coded reference pixel values (280).
Palette-based encoding unit 122 may calculate the prediction residual values using any prediction mode, e.g., an inter-prediction mode or an intra-prediction mode of the HEVC standard. In one example, palette-based encoding unit 122 may use inter-prediction processing unit 120 to predict pixel values of the current block using previously coded pixel values in a reference block. In another example, palette-based encoding unit 122 may use intra-prediction processing unit 126 to predict pixel values of the current block using previously coded pixel values in the current block.
In some cases, palette-based encoding unit 122 may determine the prediction residual values for the current block using only a subset of prediction mode processes. For example, in the case of the intra prediction mode, the DC, horizontal, and/or vertical prediction processes may be enabled, but other intra prediction mode processes may be disabled. The disabled processes may include the filtering in the intra prediction mode, e.g., one or more of MDIS, 1/32-pel bilinear interpolation, the edge filter, or the DC filter. As a further example, in the case of the inter prediction mode, the average of pixels process, e.g., one or more of the weighted prediction, the bi-prediction, or the sub-pel interpolation, may be disabled.
In one example, the prediction residual values for the current block may be residual pixel values for the current block. In this example, palette-based encoding unit 122 calculates the residual pixel values for the current block from the pixel values of the current block and the previously coded reference pixel values. Palette-based encoding unit 122 then proceeds to encode the residual pixel values for the current block using palette-based video coding as described in the following steps.
In another example, the prediction residual values for the current block may be residual quantized transform coefficient values for the current block. In this example, palette-based encoding unit 122 calculates residual pixel values for the current block from the pixel values of the current block and the previously coded reference pixel values, and then sends the residual pixel values to transform processing unit 104 and quantization unit 106 to be transformed and quantized into residual quantized transform coefficient values for the current block. Palette-based encoding unit 122 then proceeds to encode the residual quantized transform coefficient values for the current block using palette-based video coding as described in the following steps.
Palette-based encoding unit 122 generates a palette for the current block including one or more entries that indicate the prediction residual values for the current block (282). Palette-based encoding unit 122 maps one or more of the prediction residual values for the current block to index values that identify entries in the palette used to represent the prediction residual values in the palette for the current block (284). Palette-based encoding unit 122 encodes the index values for one or more positions in the current block (286). The encoded index values indicate the prediction residual values included in the palette for the current block that are used to represent the prediction residual values for the current block. Video encoder 20 then transmits the index values for the one or more positions in the current block.
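The palette-generation and mapping steps (282 and 284) can be sketched as follows for the lossless case, where the palette simply enumerates the distinct residual values of the block (hypothetical helper name; a real encoder would limit the palette size and may quantize entries):

```python
def palette_encode_residuals(residuals):
    """Build a palette of the distinct prediction residual values of a
    block, then map each residual to the index of its palette entry."""
    palette = sorted(set(residuals))
    index_of = {v: i for i, v in enumerate(palette)}
    return palette, [index_of[v] for v in residuals]
```

The returned index values are what the encoder would go on to code, e.g., with the run-based techniques described earlier.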
FIG. 8 is a flowchart illustrating an example process for decoding prediction residual video data using a palette-based coding mode, consistent with techniques of this disclosure. The example process illustrated in FIG. 8 is described herein with respect to palette-based decoding unit 165 of video decoder 30 from FIG. 3. In other examples, one or more other or additional components may perform the example process illustrated in FIG. 8.
Video decoder 30 receives a bitstream representing coded video data using palette-based coding, and sends the entropy decoded video data to palette-based decoding unit 165. Based on one or more syntax elements included in the decoded bitstream, palette-based decoding unit 165 generates a palette for a current block of video data including one or more entries that indicate prediction residual values for the current block (290). Palette-based decoding unit 165 then decodes index values for one or more positions in the current block (292). The decoded index values indicate the prediction residual values included in the palette for the current block that are used to represent the prediction residual values for the current block.
Palette-based decoding unit 165 determines one or more of the prediction residual values for the current block based on the index values that identify entries in the palette that represent the prediction residual values for the current block (294). Palette-based decoding unit 165 may determine the prediction residual values using any prediction mode, e.g., an inter-prediction mode or an intra-prediction mode of the HEVC standard. In one example, palette-based decoding unit 165 may use motion compensation unit 164 to predict pixel values of the current block using previously coded pixel values in a reference block. In another example, palette-based decoding unit 165 may use intra-prediction processing unit 166 to predict pixel values of the current block using previously coded pixel values in the current block.
In some cases, palette-based decoding unit 165 may determine the prediction residual values for the current block using only a subset of prediction mode processes. For example, in the case of the intra prediction mode, the DC, horizontal, and/or vertical prediction processes may be enabled, but other intra prediction mode processes may be disabled. The disabled processes may include the filtering in the intra prediction mode, e.g., one or more of MDIS, 1/32-pel bilinear interpolation, the edge filter, or the DC filter. As a further example, in the case of the inter prediction mode, the average of pixels process, e.g., one or more of the weighted prediction, the bi-prediction, or the sub-pel interpolation, may be disabled.
Video decoder 30 then determines pixel values of the current block based on the prediction residual values for the current block and previously coded reference pixel values (296). In one example, the prediction residual values for the current block may be residual pixel values for the current block. In this case, video decoder 30 reconstructs the pixel values of the current block using the residual pixel values and the previously coded reference pixel values. In another example, the prediction residual values for the current block may be residual quantized transform coefficient values for the current block. In this case, palette-based decoding unit 165 first sends the residual quantized transform coefficient values to inverse quantization unit 154 and inverse transform processing unit 156 to be inverse quantized and inverse transformed into residual pixel values for the current block. Video decoder 30 then reconstructs the pixel values of the current block using the residual pixel values and the previously coded reference pixel values.
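The decoder side of this flow, for the simpler case where the palette entries are residual pixel values (no transform or quantization), can be sketched as follows (hypothetical helper name):

```python
def palette_decode_block(index_values, palette, predictions):
    """Look up each prediction residual in the palette by its decoded
    index value, then add the previously coded reference (prediction)
    pixel values to reconstruct the block."""
    residuals = [palette[i] for i in index_values]
    return [p + r for p, r in zip(predictions, residuals)]
```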
FIG. 9 is a flowchart illustrating an example process for generating a palette for palette-based coding, consistent with techniques of this disclosure. The example process illustrated in FIG. 9 is described herein with respect to palette-based decoding unit 165 of video decoder 30 from FIG. 3. In other examples, the process may also be performed by palette-based encoding unit 122 of video encoder 20 from FIG. 2. The example process for generating a palette for palette-based coding may be used to generate a palette including palette entries that indicate pixel values. In other examples, a similar process may be used to generate a palette including palette entries that indicate prediction residual values.
Video decoder 30 receives a bitstream representing coded video data using palette-based coding, and sends the entropy decoded video data to palette-based decoding unit 165. Palette-based decoding unit 165 generates a predictive palette including palette entries that indicate pixel values (300). In some examples, palette-based decoding unit 165 generates the predictive palette to include palette entries from one or more previously coded blocks of the video data. The previously coded blocks may include neighboring blocks of a current block, including spatially neighboring blocks and/or neighboring blocks in a particular scan order of the blocks.
Palette-based decoding unit 165 next determines, from the entropy decoded video data, one or more of the palette entries in the predictive palette that are copied to a current palette for the current block (302). More specifically, palette-based decoding unit 165 may decode one or more syntax elements indicating whether each of the palette entries in the predictive palette is copied to the current palette. In one example, the one or more syntax elements comprise a binary vector including a flag for each of the palette entries in the predictive palette that indicates whether a respective palette entry is copied to the current palette. In another example, the one or more syntax elements comprise a losslessly compressed version of the binary vector, where an uncompressed version of the binary vector includes a flag for each of the palette entries in the predictive palette that indicates whether a respective palette entry is copied to the current palette.
Palette-based decoding unit 165 also determines, from the entropy decoded video data, a number of new palette entries not in the predictive palette that are included in the current palette for the current block (304). Palette-based decoding unit 165 may decode one or more syntax elements indicating the number of the new palette entries that are included in the current palette. In some examples, palette-based decoding unit 165 decodes the syntax elements using one of unary codes, truncated unary codes, Exponential-Golomb codes, or Golomb-Rice codes. After determining the number of new palette entries that are included in the current palette, palette-based decoding unit 165 decodes one or more syntax elements indicating a pixel value for each of the new palette entries.
Based on the information determined from the entropy decoded video data, palette-based decoding unit 165 calculates a size of the current palette to be equal to the sum of the number of the copied palette entries and the number of the new palette entries (306). After determining the size of the current palette, palette-based decoding unit 165 generates the current palette to include the copied palette entries and the new palette entries (308). In one example, palette-based decoding unit 165 generates the current palette by concatenating the copied palette entries and the new palette entries.
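The palette derivation of steps 302 through 308 can be sketched as follows (hypothetical helper name; the binary reuse vector and the new entries are assumed to have been entropy decoded already):

```python
def build_current_palette(predictive_palette, reuse_flags, new_entries):
    """Copy the predictive-palette entries whose reuse flag is 1, then
    concatenate the signaled new entries.  The current palette size is
    the number of copied entries plus the number of new entries."""
    copied = [e for e, f in zip(predictive_palette, reuse_flags) if f]
    current = copied + list(new_entries)
    assert len(current) == sum(reuse_flags) + len(new_entries)
    return current
```

The same merge of reused and new entries also covers the neighboring-block variant described later in this section.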
Palette-based decoding unit 165 is then able to perform palette-based coding of the current block using the current palette. For example, palette-based decoding unit 165 determines index values for one or more pixel values of the current block that identify the palette entries in the current palette used to represent the pixel values of the current block (310). In the case where one or more pixel values of the current block do not have a corresponding pixel value in the current palette, video encoder 20 may use the escape pixel concept to indicate which of the pixel values are not included in the current palette, and explicitly transmit these pixel values. Palette-based decoding unit 165 in video decoder 30 may then decode one or more syntax elements indicating the pixel values for the one or more pixel values that do not have a corresponding pixel value in the current palette.
In another example, video encoder 20 may not use the escape pixel concept, but instead may identify pixel values included in the current palette as prediction pixel values for each of the one or more pixel values of the current block, and transmit residual values between the pixel values of the current block and the prediction pixel values in the current palette. Palette-based decoding unit 165 in video decoder 30 may then decode one or more syntax elements indicating the index values that identify the corresponding prediction pixel values included in the current palette, and the residual values between the one or more pixel values of the current block and the identified prediction pixel values in the current palette.
This disclosure also describes several alternative techniques for generating a palette for palette-based coding, which may be used to generate a palette having entries that associate index values with either pixel values or prediction residual values for a current block. In one example, palette-based decoding unit 165 decodes an indication of a size of the palette for the current block, decodes a vector having the same size as the palette for the current block, where each entry in the vector indicates whether an associated palette entry is transmitted or copied from a predictive palette, and, for the one or more palette entries copied from the predictive palette, decodes an indication of a position of the entry in the predictive palette. In another example, palette-based decoding unit 165 decodes an indication of a number of entries in the palette for the current block, decodes a one-bit flag for each of the palette entries that indicates whether the palette entry is sent explicitly or derived from a previously reconstructed pixel, and, for each of the one or more palette entries derived from the previously reconstructed pixel, decodes an indication of a position of the reconstructed pixel that corresponds to the respective palette entry. In that example, the indication of the position of the reconstructed pixel may be a displacement vector with respect to the top-left position of the current block or it may be an index into a list of reconstructed pixels that may include all the reference pixels used for normal intra prediction.
In another example, palette-based decoding unit 165 starts with a predictive palette for a neighboring block having a given size, and decodes a binary vector having the same size as the predictive palette, where each entry in the vector indicates whether an associated palette entry is reused from the predictive palette. Palette-based decoding unit 165 also decodes an indication of the number of new entries to be transmitted, and receives the new entries from video encoder 20. Palette-based decoding unit 165 then merges the reused entries and the new entries to generate the new palette for the current block.
It is to be recognized that depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially. In addition, while certain aspects of this disclosure are described as being performed by a single module or unit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with a video coder.
Certain aspects of this disclosure have been described with respect to the developing HEVC standard for purposes of illustration. However, the techniques described in this disclosure may be useful for other video coding processes, including other standard or proprietary video coding processes not yet developed.
The techniques described above may be performed by video encoder 20 (FIGS. 1 and 2) and/or video decoder 30 (FIGS. 1 and 3), both of which may be generally referred to as a video coder. Likewise, video coding may refer to video encoding or video decoding, as applicable.
While particular combinations of various aspects of the techniques are described above, these combinations are provided merely to illustrate examples of the techniques described in this disclosure. Accordingly, the techniques of this disclosure should not be limited to these example combinations and may encompass any conceivable combination of the various aspects of the techniques described in this disclosure.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram illustrating an example video coding system that may utilize the techniques described in this disclosure.
FIG. 2 is a block diagram illustrating an example video encoder that may implement the techniques described in this disclosure.
FIG. 3 is a block diagram illustrating an example video decoder that may implement the techniques described in this disclosure.
FIG. 4 is a conceptual diagram illustrating an example of determining a palette for coding video data, consistent with techniques of this disclosure.
FIG. 5 is a conceptual diagram illustrating examples of determining indices to a palette for a video block, consistent with techniques of this disclosure.
FIG. 6 is a conceptual diagram illustrating examples of determining a geometric edge of a video block using a run of palette indices for the luma component adaptively downsampled for the chroma components, consistent with techniques of this disclosure.
FIG. 7 is a flowchart illustrating an example process for encoding prediction residual video data using a palette-based coding mode, consistent with techniques of this disclosure.
FIG. 8 is a flowchart illustrating an example process for decoding prediction residual video data using a palette-based coding mode, consistent with techniques of this disclosure.
FIG. 9 is a flowchart illustrating an example process for generating a palette for palette-based coding, consistent with techniques of this disclosure.
Definition: The Entrepreneur is a change agent that acts as an industrialist and undertakes the risk associated with forming the business for commercial use. An entrepreneur has an unusual foresight to identify the potential demand for the goods and services.
The entrepreneurship is a continuous process that needs to be followed by an entrepreneur to plan and launch the new ventures more efficiently.
Entrepreneurial Process
- Discovery: An entrepreneurial process begins with idea generation, wherein the entrepreneur identifies and evaluates business opportunities. Because identifying and evaluating opportunities is a difficult task, an entrepreneur seeks inputs from everyone, including employees, consumers, channel partners, technical people, etc., to arrive at an optimum business opportunity. Once the opportunity has been decided upon, the next step is to evaluate it.
An entrepreneur can evaluate the efficiency of an opportunity by continuously asking himself certain questions, such as: Is the opportunity worth investing in? Is it sufficiently attractive? Are the proposed solutions feasible? Is there any competitive advantage? What are the risks associated with it? Above all, an entrepreneur must analyze his personal skills and hobbies, and whether these coincide with his entrepreneurial goals.
- Developing a Business Plan: Once the opportunity is identified, an entrepreneur needs to create a comprehensive business plan. A business plan is critical to the success of any new venture since it acts as a benchmark and the evaluation criteria to see if the organization is moving towards its set goals.
An entrepreneur must dedicate sufficient time to its creation. The major components of a business plan are the mission and vision statement, goals and objectives, capital requirements, a description of products and services, etc.
- Resourcing: The third step in the entrepreneurial process is resourcing, wherein the entrepreneur identifies the sources from where the finance and the human resource can be arranged. Here, the entrepreneur finds the investors for its new venture and the personnel to carry out the business activities.
- Managing the company: Once the funds are raised and the employees are hired, the next step is to initiate the business operations to achieve the set goals. First of all, an entrepreneur must decide the management structure or the hierarchy that is required to solve the operational problems when they arise.
- Harvesting: The final step in the entrepreneurial process is harvesting wherein, an entrepreneur decides on the future prospects of the business, i.e. its growth and development. Here, the actual growth is compared against the planned growth and then the decision regarding the stability or the expansion of business operations is undertaken accordingly, by an entrepreneur.
The entrepreneurial process is to be followed, again and again, whenever an entrepreneur takes up any new venture; it is, therefore, a never-ending process. | https://businessjargons.com/entrepreneurial-process.html
Wrapper methods are performed by taking subsets and training learning algorithms. Based on the results of the training, we can select the best features for our model. And, as you may have guessed, these methods are computationally very expensive. The Wrapper methodology considers the selection of feature sets as a search problem, where different combinations are prepared, evaluated, and compared to other combinations.
A predictive model is used to evaluate a combination of features and assign model performance scores.
A wrapper method will perform the following:
Wrapper methods start by searching through different subsets of features, then creating a model with each. They follow a greedy search approach, evaluating every possible combination of features against the evaluation criterion. The evaluation criterion is simply the performance measure, which depends on the type of problem; these procedures are normally built around the concept of the greedy search technique (or algorithm). They evaluate the resulting models to select the best one, and afterward they iterate to define a new subset based on the previous best subset.
Deciding when to stop this search comes down to monitoring whether performance improves, or degrades, beyond a certain threshold, depending on the method you're using. These thresholds are often arbitrary and defined by the user.
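As a concrete illustration, here is a minimal, hypothetical sketch of that greedy loop in Python. The `score_fn` here is an assumption standing in for whatever model-evaluation measure you would actually use (for example, cross-validated accuracy), and the stopping threshold is the arbitrary, user-defined cutoff mentioned above:

```python
def forward_selection(features, score_fn, min_improvement=0.001):
    """Greedy (sequential forward) selection: start from an empty subset,
    repeatedly add the single feature that most improves the score, and
    stop once the best gain falls below a user-chosen threshold."""
    selected, remaining = [], list(features)
    best_score = float("-inf")
    while remaining:
        # Build and score every candidate subset that adds one more feature.
        scored = [(score_fn(selected + [f]), f) for f in remaining]
        candidate_score, candidate = max(scored)
        if selected and candidate_score - best_score < min_improvement:
            break  # improvement fell below the threshold: stop the search
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = candidate_score
    return selected, best_score

# Toy stand-in for a real evaluation measure (e.g. cross-validated accuracy):
# features "a" and "b" help; everything else slightly hurts.
useful = {"a": 0.4, "b": 0.3}
score = lambda subset: sum(useful.get(f, -0.05) for f in subset)

best, best_score = forward_selection(["a", "b", "c", "d"], score)
print(best)  # ['a', 'b']
```

Backward elimination works the same way in reverse: start from the full feature set and greedily drop the feature whose removal hurts the score least.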
I’ll discuss these procedures in more detail for specific wrapper methods.
The most commonly used techniques under wrapper methods are: | https://ml-concepts.com/2021/10/07/wrapper-methods-in-machine-learning/ |
Figure out your debt-to-income ratio.
Striking a balance between the debt you owe and the income you earn is vital. If you’re carrying too much debt, chances are your financial health is suffering because your income can’t sustain the monthly payments. The opposite occurs if the debt you’re carrying is low and your income can comfortably cover it.
How do you figure out if your debt is too high or low relative to your income? You need to figure out your debt to income ratio.
Before we go any further let’s test your skills on this subject.
KOFE Break!
Test your know-how before you start to see how much help you need. Can you skim this section or do you need to take some time on this topic?
What is the formula for calculating DTI?
A) Total Monthly Income - Total Monthly Debt
B) (Number of Collection Calls + Overdraft Fees) / Your Sanity x 100
C) (Total Monthly Debt / Total Monthly Income) x 100
D) Liabilities / Assets x 100
C) (Total Monthly Debt / Total Monthly Income) x 100
What's the maximum amount of DTI you should have, not including your mortgage?
A) No more than 5%
B) No more than 20%
C) No more than 25%
D) As long as you have more income than debt, you're golden!
B) No more than 20%
What is a debt-to-income ratio?
A debt-to-income ratio is a financial formula that compares a person’s debt payments to their total monthly income. A high debt-to-income ratio signifies financial trouble. A low ratio signifies financial balance and stability.
The bottom line is, if you can’t strike a healthy balance between your debt and income, you’re inviting financial turmoil into your life. And the consequences vary, as you’ll discover further down the page.
How to calculate your debt-to-income ratio
This may sound a bit confusing, but it’s very simple. Just follow these directions:
- Using your budget, add up your total monthly household income, including income from work, tips and commissions, any alimony or child support you receive, rental income, government benefits and so on. If your spouse works or receives income from other sources, include that too.
- Next, use your budget again and total up your monthly debt payments. Base the calculations on the minimum payments due for each debt; if you pay more than the minimum, check your statement for the current minimum payment required. Do not include your mortgage payment, which is not part of this ratio. This is your Total Monthly Debt.
- Divide your Total Monthly Debt by your Total Monthly Income. The result will be a percentage. This is your debt-to-income ratio.
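The three steps above boil down to a single formula. Here is a minimal sketch in Python; the income and payment figures are hypothetical, purely for illustration:

```python
def debt_to_income(monthly_debt_payments, monthly_income):
    """DTI = (Total Monthly Debt / Total Monthly Income) x 100."""
    total_debt = sum(monthly_debt_payments)
    return round(total_debt * 100 / monthly_income, 1)

# Hypothetical household: $4,000/month income; minimum payments on a
# credit card ($250), a car loan ($120) and a student loan ($90).
dti = debt_to_income([250, 120, 90], 4000)
print(dti)  # 11.5 -> comfortably under the 20% guideline
```

The same household with $1,000 in monthly debt payments would land at 25%, over the guideline discussed below.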
KOFE Break! (new quiz)
Do lenders care if you have a high or low DTI?
A) No, they just want to loan money and get it back with interest
B) Yes, because they want someone who can manage their debt
C) No, they need as many customers as possible to meet their monthly quota
D) Yes, they care because they are caring people
B) Yes, because they want someone who can manage their debt
What can you do if your DTI is high?
A) Just jot in a lower percentage and no one will know
B) Add more debt to your family finances
C) Cut your debt as much as possible
D) Take a pay cut
C) Cut your debt as much as possible
Do you have a good or bad DTI?
If your DTI is less than 20%, then you’re in good financial shape. If it’s higher, it means you’re carrying too much debt. This ratio is cut and dry. There’s no wiggle room.
Use our Debt to Income Ratio Worksheet to calculate your DTI.
Lending institutions such as banks and credit card companies frown upon a high DTI. If you’re lucky enough to get approved for a loan or line of credit, you probably won’t qualify for the attractive terms they offer. You’ll be stuck paying much higher rates, which puts you further into debt.
Advantages of a low DTI
- Appeals to new creditors and lenders when you apply for a new loan or line of credit
- Indicates financial stability, so you are confident your finances are healthy
- Allows you to build savings because all of your money isn't going to debt payments
Disadvantages of a high DTI
- There is little-to-no money left in your budget for savings
- Increases financial and personal distress in your life
- You won't be able to qualify for loans and credit cards, or if you do qualify you'll face much higher interest rates
How to lower your DTI
If your DTI is above 20 percent, it’s imperative that you do something about it now. You have two choices, (or you can do both):
- Decrease your debt load. Pay off credit cards and other debt such as car loans. Be aggressive. Make sacrifices if it’s necessary – such as reducing spending in other areas of your budget so you can apply that money to your monthly payments.
- Increase your income level. Take a side job, do freelance work or ask for more hours if you're an hourly employee. Sell stuff in your home that you don’t use on eBay. Be creative.
Debt-to-income that includes housing costs
There’s another DTI - this one includes mortgage payments (or rent, if you don't own). Housing costs are usually the biggest part of most budgets. The monthly costs can more than double your debt load when you add it in; this is the number that mortgage lenders use to decide if you qualify for a mortgage on a new home.
They will then take your regular DTI and add the payments for the home you want to buy. As long as your DTI (including housing costs) is less than 41 percent, then you’ll most likely qualify for the mortgage.
The first step is getting your regular DTI under 20 percent. | https://keysfcu.kofetime.com/count-your-beans/debt-to-income-ratio/ |
One Fine Day
On an inauspicious morning at a Dutch library, a librarian makes an unexpected find in the overnight return box. The pantheon of book-borrowing sins holds no precedent for the box’s contents: a much mistreated Baedeker’s guidebook 123 years overdue. Even without compound interest, this tardiness merits a tidy fine, and in Underneath the Lintel (Soho Playhouse), playwright Glen Berger’s latest, our librarian hero determines to track down the miscreant. After many international adventures, he hires a theater for one night, to offer “an impressive presentation of lovely evidences” detailing his quest.
Berger’s monologue, subtitled The Mystery of the Abandoned Trousers, hardly slacks. Mailing a fine to the long-lived scofflaw in question proves difficult, as the borrower listed his name only as “A.” In an effort to run him to earth, the librarian, who has never left his native town of Hoofddorp, zips to China, Australia, Germany, and America. He eats sweets, greases palms, sees Les Miserables in three languages, and fritters away all his accumulated vacation days. He has the time of his life, or, perhaps, for the first time actually has a life.
T. Ryder Smith, hair powdered and face contorted with fastidiousness and fanaticism, plays the librarian with verve. He shuffles about the stage in a ragged coat, caressing the date stamper around his neck, proudly displaying his exhibits, and speaking in a bizarre accent—purportedly Dutch. Under Randy White’s direction, these mannerisms and characteristics never quite add up to a fully realized character, but this is never as bothersome as it ought to be.
You might say something similar of Berger’s play. His tendencies toward sentimentality and occasional cutesy cleverness mar the play, but never terribly. Troubles arise toward the end when the librarian, having identified his quarry, indulges in some metaphysical speculation and pointing up of metaphor. This heavy-handedness doesn’t intrude too much on the play’s good-naturedness. Yet you can’t help wishing Berger would lighten up and include more scenes like the one in which the librarian goes swing dancing in New York. Ryder’s beatific expression as he coerces his limbs into a jerky elegance is hilarious and affecting. These are the best scenes, when the librarian, so desperate for a thread of A.’s life, unexpectedly discovers the fabric of his own. —Alexis Soloski
Everyone Into the Royalty Pool!
Write what you know. Chay Yew and Lisa Peterson have gleefully violated this cardinal rule of Writing 101 with The Square (Ma-Yi Theater Company at the Public Theater). The creators asked 16 high-powered dramatists—Asian and not, from Ping Chong and David Henry Hwang to José Rivera and Kia Corthron—to pen a 10-minute playlet about the Asian American experience, set in a fictional Chinatown square. Yew and Peterson randomly assigned each artist a decade (1880s, 1920s, 1960s, present) and the number and ethnicity of the characters.
The intention is worthy, and all the usual subjects show up—immigrant pangs, discrimination, assimilation angst—along with clichés and sentimentality. Variations on the white master/Chinese houseboy team surface in Yew’s Scissors and Diana Son’s Handsome, where scissors also figure (though with a little more wit). These and other back-in-history pieces (Han Ong’s Untitled and Mac Wellman’s My Old Habit of Returning to Places) feel inauthentic, either too familiar or too far-out.
Other playwrights took the spirit of the project literally and wrote theater exercises: Maria Irene Fornes’s The Audition, about unemployed Asian actors trying to get work as Mexicans, and Robert O’Hara’s The Spot, where the characters role-play racial conflict. Both score as mildly amusing but unsatisfying.
In two of the best pieces, the authors write about what they know, but weave in an Asian character, ethnicity adding flavor. Craig Lucas’s Examination depicts a first encounter between two gay men, one a Chinese American doctor, the other a patient with an agenda. Sensitively acted by Ken Leung and Hamish Linklater, they project the welter of feelings in such meetings—vulnerability, attraction, longing. When the doctor’s immigrant parents suddenly enter jabbering in Cantonese, Lucas can scratch off part of his homework task, but the bit feels gratuitous.
Constance Congdon’s New takes us on a time-capsule tour of election eve, 1960. Two schoolgirl supporters of Nixon—one Japanese American and tipsy (Jennifer Ikeda), one WASP-y and sober (Fiona Gallagher)—comfort each other in the wee hours. It’s a nifty comedy of manners, as these girls in their colored wool suits and flip hairdos bond over politics—Catholics in the White House, the Japanese internment camps—while teasing their hair and spraying everything in sight. Congdon also offers a glimpse into the later ’60s with an ending that takes an ironic bite out of their—and our own former—innocence.
Peterson directs all the pieces with energy and inventive flourishes, aided by the design talents of Rachel Hauck, Christianne Myers, James Vermeulen, and Fabian Obispo. The design especially stars in Jessica Hagedorn’s Silent Movie, which takes a clever, film-noir look at 1920s Chinatown, a den of opium and iniquity. In one nifty bit, an Irish mistress and her maid make decadent love while a period movie is projected onto the red satin sheet covering them. | https://www.villagevoice.com/2001/10/30/theater-44/ |
Governing has been rolling out a series of articles tied to its 30-year anniversary. Most of them are worth checking out, but three in particular caught our eye.
The first is a retrospective piece, “What’s Changed (and What Hasn’t) Since Governing Started 30 Years Ago.” It is not a particularly optimistic look at the future, but it shows how the federal government’s relationship with state and local governments has changed and what those changes mean for the future.
The second outlines three seminal policy events that define modern federalism. They are not unexpected, but the piece provides a useful overview of the ways that 9/11, the American Recovery and Reinvestment Act, and the Affordable Care Act affect our federal government model.
The third looks ahead, predicting that:
- The current stresses and predicted stresses of the aging population should begin to level off by 2030.
- The emphasis on balanced budgets at the state level will be replaced by a new focus on sustainable budgets.
- The challenges created by failing infrastructure and cybersecurity will continue to be costly and require systematic policy solutions.
A recent Pew report on how Americans get their news finds that:
- The gap between television and online news consumption is narrowing. As of August 2017, 43% of Americans report often getting news online, a share just 7 percentage points lower than the 50% who often get news on television.
- Nonwhites and the less educated increasingly say they get news on social media. About three-quarters of nonwhites (74%) get news on social media sites, up from 64% in 2016.
- Online news that comes via emails and texts from friends or family is the type of news encounter most likely to result in a follow-up action.
The report’s final finding was particularly interesting. “An analysis of nearly 2,700 different search terms associated with the water crisis in Flint, Michigan, shows that online searches can be a good proxy for the public’s interests, concerns or intentions.” The Center and our colleagues at EdNC have been thinking a lot about the information feedback loop and how reader interests and preferences drive coverage. The Pew report suggests that search term inquiries can reflect public interest, direct media coverage, and ultimately lead to public policy responses.
CityLab has an accessible overview of the European statistical office’s annual report. While we tend to focus on domestic and in-state trends, it is interesting to see how Europe is evolving and how its evolution relates to that of the U.S.
EducationNC and the Reach NC Voices initiative was recently included in the News Integrity Initiative’s first cohort of grant recipients. Click here for more information about Reach NC Voices or the recent award.
Should we dumb down tech? | https://nccppr.org/fridayfive/october-6-2017/ |
Forum Photo by Michael V. Cusenza
In a pact with the City, Revel has agreed to close monitoring of its operations and will institute stricter safety protocols.
By Forum Staff
The moped-sharing company Revel recently resumed its operation following a month-long cessation of service, the City Department of Transportation announced Thursday.
In a pact with the City, the company has agreed to close monitoring of its operations and will institute new stricter safety protocols—including around rider training, account security, helmet use, and its hours of operation.
According to DOT, the changes result from Revel’s dramatic growth over the last year, during which it grew to 3,000 electric mopeds serving Brooklyn and Queens, from 1,000 mopeds serving those areas and sections of Manhattan and the Bronx. The growth came with a surge in ridership, but also with increasing concerns over the company’s safety record, including 330 overall crashes with injuries in the first seven months of 2020.
DOT officials also noted that Revel voluntarily ceased operations on July 28, and has now agreed to enhance its safety requirements by implementing a new Safety & Rider Accountability Protocol. The new protocol includes the following changes:
- Revel will enhance rider training by requiring all current and new members to complete a 30-question safety training. With data showing that inexperienced riders are at a higher risk of being involved in a crash, the company will increase access to in-person riding lessons tenfold, from 112 class slots per week to about 1,164.
- The company will introduce measures to ensure that riders are wearing helmets, a requirement under state law. Revel will require riders and passengers to certify that they are wearing helmets via a mandatory “selfie” submission prior to each ride.
- New rider accountability, monitoring and account-sharing policies have also been created. Revel will increase its penalties for bad behavior, use data from its mopeds to identify riders who ride in parks, or the wrong way down one-way streets, and introduce a package of measures to combat account sharing. The protocol will also include the company’s new community reporting tool, enabling members of the public to report dangerous riding.
- For the first 60 days of the relaunch, Revel will suspend operations between midnight and 5 a.m., a period during which DOT found a higher rate of crashes. Revel will revisit this policy with DOT and the City at the end of the trial period.
- Revel will provide anonymized trip, training, and incident data to DOT, so that the agency may better monitor its performance and compliance with the protocol.
If Revel does not follow the protocol’s commitments, the City will move to immediately suspend the service until further notice, DOT officials said. | http://theforumnewsgroup.com/2020/09/04/dot-gives-revel-scooters-green-light/ |
Key information:
At Lumbertubs we believe safeguarding is everyone's responsibility. We have a duty to keep children safe and to be alert at all times to their welfare. The governing body has a duty of care to pupils and all members of the school community. We ensure there are consistent and effective safeguarding procedures in place to support families, children and staff at school. Any concern about a child, however small, is reported through the MyConcern recording system to the Safeguarding Team.
The Safeguarding Team is:
Designated Safeguarding Lead (DSL) - Helena Georgiou.
Deputy Designated Safeguarding Lead- Ceri Cook
Deputy Designated Safeguarding Lead-Emilie Harbottle.
Safeguarding Governor - Anne Partridge
If you have a safeguarding or welfare concern, our Senior Inclusion Support Manager can be contacted by email at [email protected], or by phone on 07922065625 or 01604 408147.
Safeguarding is not just about protecting children from deliberate harm. For Lumbertubs Primary School it includes such things as pupil safety, bullying, racist abuse and harassment, radicalisation, educational visits, intimate care, children missing education and internet safety. The witnessing of abuse can also have a damaging effect on those who are associated with any person who may have suffered abuse, as well as the child subjected to the actual abuse. This can and often will have a significant impact on the health and emotional well-being of the child. Abuse can take place in any family, institution or community setting. It can be by telephone or on the internet also. Abuse can often be difficult to recognise as children may behave differently or seem unhappy for many reasons, as they move through the stages of childhood or if their family circumstances change. However, it is important to know what the indicators of abuse are and to be alert to the need to act upon any concerns.
All adults, including the designated safeguarding lead, have a duty of care by law to refer all known or suspected cases of abuse to the relevant agencies, including social services or the police. Where a disclosure is made to a visiting staff member from a different agency, such as the School Nurse, it is the responsibility of that agency to formally report the referral to the school's Designated Person in the first instance.
Safer Recruitment & Selection
It is a requirement for all agencies to ensure that all staff recruited to work with children and young people are properly selected and checked. At Lumbertubs Primary we will ensure that we have a member of the Senior Leadership Team on every recruitment panel who has received the appropriate recruitment and selection training. We will ensure that all of our staff are appropriately qualified and have the relevant employment history and checks to ensure they are safe to work with children in compliance with the Key Safeguarding Employment Standards.
News
Childline’s new tool helps young people remove nude images shared online
The NSPCC’s service for children and young people, Childline, has launched the Report Remove tool with the Internet Watch Foundation (IWF) to help young people remove nude images of themselves from the internet.
The Report Remove tool can be used by any young person under 18 to report a nude image or video of themselves that’s appeared online. The IWF then review these reports, and work to have the content removed if it breaks the law.
There are many reasons a young person may have shared a self-generated sexual image. Some may have sent an image for fun, or to a boyfriend or girlfriend, which has then been shared with others or on platforms without their consent. They may also have been groomed online or blackmailed into sharing this content.
Using Report Abuse to report nude images
First piloted in February 2020, the Report Remove tool can be found on the Childline website.
To report a nude image or video, the young person has to first verify their age. Our Childline service ensures all young people are safeguarded and supported throughout the process.
Young people can expect the same level of confidentiality they would from all their interactions with Childline; they do not need to provide their real name to Childline or IWF if they don’t want to.
The tool has been developed in collaboration with law enforcement to make sure that children will not be unnecessarily visited by the police when they make a report.
Supporting your child if their nude images are being shared
If a child has had a nude image shared online it’s vital they know who to turn to for support and our Report Remove tool is available for them.
You can also find more ways to support your child if they've been sexting here.
https://www.nspcc.org.uk/about-us/news-opinion/2021/childline-tool-remove-nude-images-online/
Lumbertubs Safeguarding and Child Protection
What is the Multi-Agency Safeguarding Hub?
The Multi -Agency Safeguarding Hub (Mash) deals with referrals from professional and member of the public who may have concerns about a child's welfare. Various professionals from different organisations work together in the MASH, including social care, police and the NHS. By working together, They are able to gather and collate information quickly to make informed decisions about the risks posed to a child and decide on the most appropriate response to your concern.
The leaflet below has advice for parents and carers for what happens when a referral is made.
PREVENT
At Lumbertubs Primary we want to protect our children and families from being drawn into terrorism, becoming terrorists, or supporting terrorism or extremism in any form or guise. As a school, there is no place for extremist views of any kind, whether from internal sources (pupils, staff or governors) or external sources (the school community, external agencies or individuals). We recognise that extremism and exposure to extremist materials and influences can lead to poor outcomes for children. PREVENT is about preventing children from falling prey to radicalisation or extremism, and it forms part of our Safeguarding Children approach.
Our pupils and staff should see our school as a safe space to explore controversial issues safely and our teachers encourage and facilitate this to happen through their Personal, Social, Health and Education and wider curriculum .
Lumbertubs Primary aims to minimise the risk and prevent them from being drawn in. We ensure that all staff are PREVENT trained and are aware of the signs and indicators of radicalisation and extremist behaviours. We work closely with the police and other agencies where any concerns are raised.
Protective Behaviours
Protective Behaviours is a practical down to earth approach to personal safety. It is a process that encourages self-empowerment and brings with it the skills to raise self-esteem and to help avoid being victimised. This is achieved by helping individuals to recognise and trust their intuitive feelings (Early Warning Signs) and to develop strategies for self-protection. The Protective Behaviours process encourages an adventurous approach to life that satisfies the need for fun and excitement without violence and fear. Every year each child in our school is given four lessons following the 'Taking Care' model. The sessions are around these two key messages: ‘We all have the right to feel safe all the time' and 'There is nothing too awful (or too little) we can't talk about it with someone'.
Protective Behaviours Training Partnership Publications
- Let's Talk Magazine
- All Have The Right To Feel Safe Booklet
- Practical Advice For Keeping Children Safe
Please see below for a poster giving advice and warning signs for 'safe' and 'unsafe' secrets.
Had Tupac Shakur not been murdered 20 years ago, he would be 45 now, around the same age as Snoop Dogg and Jay Z.
With each passing year, his music and legacy continue to grow. He did not just have an influence on hip-hop artists; he had an influence on everyone. Some people equate rap to poetry, but only a few artists actually live up to this. Tupac created literature out of universal ideas, something that has been lost among current rappers, whose major concern for getting on the radio is beats, not lyrics. That's another story for all you Migos, Young Thug and Lil' Yachty fans, though.
Unlike the image that is pushed by the music industry today, Tupac rapped about personal, yet relatable issues, which is why two decades later, he is still considered one of the most important rappers. One of his most prominent legacies was his ability to express social issues in his music. Sure, his music was problematic at times, but it doesn’t erase the fact that he rapped about real life issues that other mainstream musicians would rather ignore today.
Diving into his timeless lyrics, there are messages that are relevant today. In his 1991 single, "Brenda's Got a Baby," Tupac tells the story of a 12-year-old girl who is forced into prostitution, and then struggles to provide for herself and her baby after her cousin molests and impregnates her. This is not a light topic at all, yet Tupac detailed the struggle of a single mother in a radical way. Topics such as child prostitution, abuse and poverty are hard to take on, but Tupac showed empathy towards those who suffered and brought it to the forefront with his music.
Another example of his revolutionary songs is his single, “Changes,” recorded in 1992, but remixed and released in 1998. Known as one of his most definitive works, Tupac highlights the war on drugs, the war on poverty and tackles African-American social issues. The most intense moments in his song are when he references police brutality and institutional racism, issues that still need to be addressed today. Repetitively expressing his desire for change, Tupac would be disappointed to see that few things he wrote about have actually changed.
One of his most compassionate and well known songs, Tupac’s 1993 single “Keep Ya Head Up” is dedicated to women, specifically African-American women. He calls on men of color to treat their women and children with respect, while encouraging women to stay strong, even though it is difficult to survive in a one-parent family. He also criticizes the government’s role in perpetuating poverty among people of color. His lines about rape and sexual assault cut deep as they also continue to resonate as serious issues today along with inequality.
The three songs mentioned above are a few of his socially conscious raps that reveal life for minorities in the United States. Despite the hard-hitting, depressing topics he wrote about, he was often inherently hopeful. Considering that his songs were recorded two decades ago, you wonder if he could see into the future.
As his legacy continues to make an imprint on society today, and his face continues to appear on clothing at Urban Outfitters, I encourage you to first and foremost listen and appreciate the legend’s lyrics for free on Spotify before purchasing a t-shirt.
The views expressed in this column are those of the author and not necessarily those of The Observer.
The Fried Breadfruit Recipe is a delicious healthier snack or side dish idea.
Goan Food Recipes
Food Recipes from Goa or Goan influenced. Homemade Goan dishes, meals, and DIY. All dishes here are either traditional Goan or Goan-inspired meals and recipes.
Indian Chicken Curry Recipe
Learn how to make this easy Indian chicken curry with the step by step video and recipe further below!
Masoor Dal Curry Recipe
Healthy Masoor Dal Recipe, all from scratch and ready within minutes! This easy dal recipe is a one-pot vegan Indian main course meal. The lentils are briefly soaked and the recipe doesn't require a pressure cooker.
Guava Paste Recipe
Guava Paste (aka Guava Cheese, Goiabada) is a sweet delicacy which is common in former Portuguese colonies and several tropical regions in this world.
Sorak Curry Recipe - basic Goan coconut curry
Sorak Curry is one of the most basic curries, which a vast majority of Goans make at home on a very regular basis, especially during the monsoon when fish is scarce or expensive.
Chapati Recipe - How to make Indian Chapati with Ghee
The Indian Chapati, which is also known as roti, is a delicious but simply put together soft flat bread prepared with whole wheat flour.
Spiced Snake Gourd Recipe
This is a quick and easy spiced snake gourd recipe also known as a bhaji or sabzi in India.
Hog Plum Curry Recipe
Hog Plum Curry is a typical dish during the Hindu Ganesh Chaturthi festival in Goa and Maharashtra, yet this deliciously sweet, spicy and tangy hog plum curry is commonly enjoyed whenever the hog plum is in season.
Goan Egg Curry Recipe
The Egg Curry is a 20-minute no-brainer and a flavorful Goan egg curry made with coconut.
Homemade Ginger Garlic Paste
Ginger Garlic Paste is an essential in any Indian and Asian kitchen.
Asian Beef Stir Fry Recipe
Beef Stir Fry is a hot Asian dish which you can serve with rice and other Asian side dishes.
Stuffed Okra with Goan Recheado Paste
The vegan stuffed okra, also known as ladyfinger/bhende, are quickly prepared and fried in a pan with little oil and add flavor and zing to your palate.
How to roast Cashews - How to prepare Cashew Fruit
This post is meant to show how to roast small quantities of raw cashews in the outdoors, and you will understand why further below.
Moringa Pod Curry
This is a vegan, gluten free and low carb moringa pod vegetable curry gravy with coconut recipe.
Quick Guar Beans Stir Fry Recipe
This Guar Beans stir fry is a healthy Indian side dish, commonly served with rice and curry.
Goan Shrimp Curry Recipe
Goan Shrimp Curry is a well-known delicacy and if you have visited Goa in the past, you will have had this curry most probably while enjoying the sunset in a beach shack.
Basic Goan Curry Paste Recipe
The basic Goan Curry Paste is a red essential paste in any kitchen in Goa, India.
Spicy Omelette Sandwich
The Spicy Omelette Sandwich is a common fast food snack.
Red Amaranth Leaves Stir Fry Recipe
This recipe is a Goan, Indian side dish served with curry rice and other vegetable sides in a thali.
Goan Pork Sorpotel Recipe - Indian Pork Curry
Goan Pork Sorpotel is an essential Indian pork curry from the former Portuguese colony and west Indian state of Goa.
One of the practitioners I have met over the years and really enjoyed getting to know is Keith Berndston, MD. We first met through the ILADS conference and reconnected recently at the CIRS - Cutting Edge In Diagnosis and Treatment conference hosted by SurvivingMold.com.
He is a Shoemaker-certified CIRS doctor in addition to his work with Lyme disease. This combination is a rare find and often critical to unlayering the issues that must be explored to recover health.
With his permission, you can view his recent Biotoxin Pathway 2.0 presentation slides by clicking on the image below.
I express gratitude to Dr. Berndston for being open to sharing with us! Enjoy...
Last month, I took a weeklong trip to the Yellow River's second-largest tributary, the Fenhe River in Shanxi province.
I traveled the Fenhe from its source to the middle and lower reaches before arriving at the point where it flows into the Yellow River.
The water quality of the river, environmental conditions along its length and the appearance of nearby mountains have changed significantly in the past three years.
In 2017, I visited Taiyuan, the provincial capital, for short stays, but the air was thick with particulate matter and the waters of the Fenhe were dark.
Shanxi is known for its rich coal and mining resources, but overdevelopment had taken its toll on the environment.
However, on my latest visit, I saw cities covered in greenery, with new trees planted, grass sown and parks established. Mining sites have been rejuvenated and rivers are flowing freely.
Throughout my stay, the air quality was good and there were blue skies.
I learned from the locals that the environmental improvements were the result of rising awareness, stricter law enforcement and positive action.
As I spoke to government officials and business owners, they frequently used terms such as "sustainable development", "severe punishment", "environmental investment" and "not giving preference to development over the environment".
They referred to President Xi Jinping's trips to Shanxi in 2017 and this year. They also said the provincial government set a goal to prioritize environmental protection in response to Xi's call to build a "beautiful Shanxi" and make the Fenhe River "water-abundant, cleaner and prettier".
Awareness of environmental protection has improved-being passed down from the central government to provincial authorities, and to businesses and local residents.
The provincial government set a goal to improve water quality to Grade V this year at 13 national-level inspection centers monitoring the Fenhe River. This grade is the lowest level on the national five-tier water quality system.
The goal was reached in June, way ahead of schedule.
Meanwhile, the Shanxi People's Congress Standing Committee has announced guidelines for environmental protection in the province for the next three years.
Wang Zhigang, deputy director of the committee's urban construction and environmental protection department, said 43 legislative projects are planned.
This year, Li Guiqin, a television anchor from Shanxi and also a deputy to the National People's Congress, the country's top legislature, proposed harnessing the Yellow River in the province to prevent water loss and soil erosion.
A native of Taiyuan, the provincial capital, Li had witnessed local rivers drying up, envying the clear waters and green mountains whenever she traveled to southern China.
"Environmental protection is so important, as it affects people's lives from all perspectives," she said.
Ji Yongli, executive deputy head of Lingshi county, Shanxi, said the local authority has increased investment for environmental projects, and now that awareness has improved, such work is backed by laws, regulations and financing.
The county has established a sewage treatment network to handle waste from urban and rural areas.
Domestic sewage is the main pollutant in the Fenhe River. However, industrial discharges have now improved, with enterprises severely punished if they ignore environmental protection laws and regulations, Ji said.
She added that although it has been expensive to take such action, the authorities and businesses recognize that it has been necessary to achieve sustainable development.
Since 2009, Shanxi Jinfeng Heat Supply, a privately owned company in Shanxi, has invested 1.1 billion yuan ($162 million) in restoring Yuquan Mountain, a once-barren area damaged by widespread mining in suburban Taiyuan. The company has planted 5.5 million trees, covering 13 square kilometers.
Sun Zhanliang, deputy head of Yuquan Mountain Forest Park, said the project has yet to make a profit, but has "responded to the government's call to protect the environment".
With increased awareness, stricter laws and regulations, and decisive action, the Fenhe River is now cleaner and Shanxi is gradually shaking off its reputation of being a "polluted province".
In 2015 Hahnenkamm, LLC agreed to sell a 39.25-acre tract of land overlooking Lake Tahoe in Nevada to the U.S. Forest Service for $5.03 million. Soon after the sale, Hahnenkamm began to suspect that the sale price, which was supposed to be based on an independent, professional appraisal made by the Forest Service, was less than the property’s fair market value. Hahnenkamm therefore sued the Government, asserting a breach of the sale contract and of the two statutes that authorized the Forest Service’s acquisition of the property—the Santini-Burton Act and the Southern Nevada Public Land Management Act of 1998. The contract and the statutes required the Forest Service to base its purchase price on the fair market value of the property and an independent appraisal prepared in compliance with the Uniform Appraisals Standards for Federal Land Acquisitions (the Yellow Book).
First, the Government moved to dismiss the lawsuit, arguing that it failed to state a cognizable contract claim and that neither the Santini-Burton Act nor the Southern Nevada Public Land Management Act was money-mandating. The Court denied that motion and set the case for trial. Then the Government moved for summary judgment, arguing that Hahnenkamm had waived its rights by agreeing to sell the property, or was estopped from bringing its claims on equitable grounds. The Court refused summary judgment, and set the case for trial.
Following a six-day trial, the Court held that the Government had breached its duty to pay fair market value, and had not set the purchase price based on an independent, Yellow Book-compliant appraisal. Sharply criticizing the Forest Service's appraisal, the Court found that it improperly relied for comparables on forced sales, sales to the government, and sales of parcels much smaller than the subject property. In addition, the Forest Service's appraisal was not prepared independent of substantive input from the Forest Service:
Significantly, the Forest Service Review Appraiser, Ms. McAuliffe, had significant and ongoing communications with Mr. Dore while Mr. Dore prepared his appraisal, provided comparable sales data, and was permitted not only to see Mr. Dore's draft appraisal before it was finished but also to make comments and suggestions that resulted in substantive changes to the report's conclusions of fair market value. That all of these events led to an estimated appraisal value that was almost identical to what Forest Service personnel budgeted to acquire [the subject property] supports the inference that Mr. Dore's appraisal, if not controlled, was influenced by the Forest Service's involvement. . . . This fact is highlighted by defendant's decision to withdraw Mr. Dore from its witness list and not call him to testify.
Having ruled for Hahnenkamm on liability, the Court awarded damages to Hahnenkamm for the difference between the amount the Forest Service paid for the subject property and the actual fair market value of the property.
Read full decision here.
Falling in Love With Chinese Cuisine
Chef and James Beard award-winning food writer Fuchsia Dunlop is an expert when it comes to Chinese food and culinary culture. The native Brit was the first foreign student, and one of only a few women, to graduate from the acclaimed Sichuan Institute of Higher Cuisine. Since then, she’s mastered the Mandarin language and written four books, including a memoir, Shark’s Fin and Sichuan Pepper, and her most recent cookbook Every Grain of Rice, in which she divulges the approachable, vegetarian-friendly side of Chinese home cooking. Fuchsia talks Sichuan’s famed street food, her new cookbook and essential ingredients for authentic Chinese dishes.
AndrewZimmern.com: What drew you to Chinese cooking in the first place?
Fuchsia Dunlop: The extraordinary flavors of the Sichuanese capital, Chengdu. I went there as a university student (and a keen cook since childhood), and was delighted to find that the normal, everyday fare in Sichuan was more delicious than any Chinese food I’d previously encountered. The tastes of the local cuisine were just dazzling, and the food was so fresh and vibrant. I wanted to learn how to make a few dishes, and persuaded some restaurateurs to let me study in their kitchens. Very quickly, I was hooked. I ended up spending far more time on my extra-curricular culinary activities than on my academic work. And eventually I enrolled at the local cooking school.
AZ.com: Tell us about your experience at the Sichuan Institute of Higher Cuisine. What did you take away from your training?
FD: It was fascinating, fun and challenging. I was the only foreign student they had ever had, and one of three women in a class of about fifty. Our textbooks were in Chinese, and classes were conducted in Sichuan dialect. So it was a steep learning curve for me, but a wonderful experience. Every day we attended a theory class and a cooking demonstration, and were then let loose in the practice kitchen. Gradually we learned about the arts of cutting, flavor-mixing and dumpling-making, and the all-important control of heat. I came away with an invaluable grounding in Sichuanese cuisine, the ability to make many of the classic dishes, and an initiation into the language of the Chinese kitchen.
AZ.com: What has been your most memorable meal in China?
FD: There have been so many memorable meals in China that it is hard to choose! Possibly the one that persuaded me to go and live in Chengdu in the first place: a feast of local dishes in a small restaurant near the bus station – fish in chilli bean sauce, fire-exploded kidney flowers, preserved duck eggs, fish-fragrant eggplants. It was a revelation, and was an important factor in my decision to choose Sichuan University for my studies.
AZ.com: Your new book, Every Grain of Rice is filled with Chinese home cooking recipes. What are the principle qualities of Chinese home cooking?
FD: Chinese home cooking tends to involve quick, economical and healthy dishes in which vegetables and steamed rice or noodles are the star, and meat and fish play a secondary role. Dishes are generally stir-fried, boiled or steamed, and there is little deep-frying. There is no dessert course, but fresh fruit may be eaten after the meal. When I cook and eat as my friends in southern China, especially Sichuan, have taught me, I feel healthy and satisfied, and I generally lose weight, without any sacrifice of the pleasures of food. I find I can't get enough of vegetables, especially leafy greens cooked the Chinese way (Chinese cooks are able to make the healthiest ingredients taste irresistibly delicious). In the West, we always talk about the Mediterranean diet as offering a model of healthy, balanced eating: I think we should be talking about the Chinese diet in this way too.
AZ.com: What are three must-have food experiences in Sichuan province?
FD: Mapo tofu (‘Pock-Marked old woman’s tofu’): this classic dish is named after the much loved Qing Dynasty restaurateur who is said to have invented it, and it’s a perfect example of the rich, hearty flavors of Sichuanese cooking, with its deep red, chilli bean sauce and scattering of roasted Sichuan pepper, which will make your lips tingle delightfully. It’s also an example of the ability of Sichuanese cooks to transform a cheap, nutritious food that most people think is bland and boring (tofu) into a mesmerising delicacy.
A feast of Chengdu street snacks, including Zhong boiled dumplings in their chilli oil sauce, Lai glutinous riceballs, Dan Dan noodles and a whole host of other titbits that were once served on the streets of the city. Several restaurants serve set menus of these snacks: go with a group and your entire table will be covered in a dizzying variety of tiny dishes and bowls.
Sichuan numbing-and-hot hotpot (ma la huo guo): an experience that is not for the faint-hearted, but is a riotous introduction to Sichuanese social life and the fierier side of the local cuisine. Sit with your friends around a seething potful of chillies and Sichuan pepper, and cook your own food in the bubbling broth. Chongqing is the original home of this type of hotpot, but hotpot restaurants are now found all over Sichuan province.
AZ.com: Westerners tend to have a skewed view of Chinese cuisine, what are the biggest misconceptions?
FD: The biggest misconception is that it’s unhealthy. Of course the Chinese love deep-fried foods on occasion like everyone else, and banquets may involve extravagant amounts of meat, fish and poultry, but the everyday diet is stunningly healthy and dominated by vegetables and grains. The other misconception is that there’s one standard Chinese cuisine (although this has faded considerably in recent years with the growing regionalisation of Chinese restaurants in big cities in the West). China is a vast country, with staggering variations in climate, terrain, ingredients and cuisines: the difference between, say, Cantonese and Sichuanese cuisine is as great as the difference between northern French and southern Italian food.
AZ.com: What ingredients should every cook have in the pantry in order to prepare authentic Chinese food? Essential equipment?
FD: Ingredients: Soy sauce, Chinese vinegar (I generally use Chinkiang brown rice vinegar), ginger, garlic, scallions, toasted sesame oil, Shaoxing wine, Sichuan chilli bean paste, dried chillies and Sichuan pepper. Equipment: You need very little specialist equipment. A wok is probably the most important piece of kit, and a steamer and a Chinese cleaver come in handy.
AZ.com: Favorite recipe from Every Grain of Rice?
FD: Fish-fragrant eggplants (yu xiang qie zi).
AZ.com: What’s in your fridge?
FD: Fresh vegetables, ginger and scallions. Sichuanese pickled vegetables and home-made glutinous rice wine. Cheese and butter. Some vacuum-packed Spanish ham.
Check out Fuchsia’s recipe for Fish-Fragrant Eggplant from her cookbook Every Grain of Rice.
Fuchsia Dunlop is an award-winning cook and food-writer specializing in Chinese cuisines. She was the first Westerner to train at the Sichuan Institute of Higher Cuisine, speaks Mandarin, and has spent most of the last two decades exploring Chinese food. She is the author of four books, Land of Plenty (on Sichuanese cuisine), Revolutionary Chinese Cookbook (on the food of Hunan Province), Shark’s Fin and Sichuan Pepper: A Sweet-Sour Memoir of Eating in China, and, most recently, Every Grain of Rice: Simple Chinese Home Cooking. Fuchsia has received several awards for her books and journalism, including the 2012 James Beard Award for writing on Food Culture and Travel.
Photograph by Colin Bell.
Michael Palmer, one of my very favorite guilty pleasure writers, unfortunately passed away last winter, and I can’t tell you how much I will miss his books. This one joins a fledgling series featuring Dr. Lou Welcome, coming after a long string of successful stand-alones. Stand-alone or series, Palmer’s formula rarely varied and it’s so crazy enjoyable I had a hard time putting any of his books down. There’s always a misunderstood doc at the center of the action, oppressed by whatever evil force Palmer came up with for a particular book, and the odds are always stacked against the doc. (Also, a clue: whenever another character calls him “doc,” that character is to be trusted).
In this novel, Dr. Lou Welcome, a recovered addict who now works with other doctors in recovery, is on a work junket. His boss’ wife is in the hospital and he’s been elected to take over and give a speech for his boss. Lou has brought his best friend (and AA sponsor) Cap along for the trip, a trip that begins each day with a rigorous trail run. Cap runs a boxing gym and is always pushing Lou to work harder.
On one of the mornings Lou is lucky enough to get a tour of the CDC (Centers for Disease Control) as part of his trip, and he gets a front-row view of all the nasty bacteria being fought by the scientists at the CDC. The book has an opening segment with a young woman fighting off just such a bacteria, and, not being a Palmer newbie, I knew she was toast (I was right). This serves Palmer's purpose though: it made me invested in the outcome.
Then the unforeseeable happens – Cap and Lou are out for a run and a terrible accident shatters Cap's leg. It's not a difficult leap to figure that Cap will get infected with the very same bacteria that killed the girl, putting Lou in a race against the clock. His foe, though he doesn't know it, is a group called the 100 Neighbors, who are trying to bring down what they feel is an unnecessary government by any means, including the spread of a deadly virus that only they can cure. Unfortunately the virus has mutated and spread a little more quickly than they had planned, with no cure in sight.
The various characters and paths it takes for Lou to come to the rescue are both imaginative and entertaining. Palmer's narrative genius is a combination of compelling characters and impossible situations, with a very real medical knowledge behind his writing. Michael Palmer was in real life a doctor and recovered addict, and he brought those things to bear on his writing. From all I can tell his death was a loss in every way to both the mystery community, to his family, and to the world in general. I am very grateful for his books and hope they will continue to enjoy a lengthy shelf life. (Michael Palmer, Oct. 1942-Oct. 2013).
Theodore Ziras - Territory4
Guitarist Theodore Ziras talks about Territory4, his 2009 CD release, "This is my fourth instrumental solo album. I felt it was time to move into some new territory, by starting to use new scales and modes, while experimenting with new grooves and time signatures... generally, to move on. Territory4 is literally and directly what the title states. I tried to create a piece of art that blends various styles, grooves and sounds. On this CD you'll hear modern metal, shred and fusion all mixed together in a "unique" way. I am very happy and proud that I recorded this music with three of the most gifted and talented musicians on the planet. My main ambition was to entertain and inspire you... I hope you enjoy it. Welcome to my Territory4."
News - Important To Know !
Since January 2017, the Russian Federal State Statistics Service has conducted a sample survey of the workforce among the population aged 15 years and over (until 2017, aged 15-72 years).
Following the survey in May 2018, the workforce numbered 76.1 million people, or 52% of the country's total population, including 72.5 million people employed in the economy and 3.6 million people without employment but actively looking for it (in accordance with the methodology of the International Labor Organization, they are classified as unemployed) ...
Our Services
- Company Registration
- Market Research
- Sell In Russia
- Business Travel
- Real Estate Purchase Or Rent
- Document Translation In All Of CIS Languages
- Exhibition Presence Organization
- Direct-Sales Business Establishment
Why Russia?
Russia, the largest country in the world by area, has massive potential that remains largely undiscovered.
The population of Russia is over 146,880,432 people; the country covers more than one-eighth of the Earth's inhabited land area and is the ninth most populous country in the world...
Expand your angle of view underwater with this WCL06 SS Bayonet Wide-Angle Conversion Lens from Sea & Sea. Designed for use with the Sea & Sea DX-6G underwater housing, this model will offer an angle of coverage of 94° underwater, or 150° on land. It has a depth rating of 197' for deep dives, and its bayonet attachment makes it easy to attach and detach. The lens is built with a corrosion-resistant anodized aluminum alloy body.
A pioneer in the study of the gastrointestinal and nutritional complications of HIV infection and AIDS for more than a quarter-century. Was the principal investigator for multiple government- and industry-sponsored studies in HIV-associated malnutrition and intestinal disease. Performed nutritional studies that measured the body’s composition and energy metabolism, characterized nutritional alterations, and defined treatment strategies for individuals with HIV and AIDS. Also conducted studies delineating the direct effects of HIV on the intestine, while identifying and treating specific infectious complications of AIDS. Concepts from his findings have shown applications in other areas of research, including end stage liver disease, chronic viral hepatitis, and liver disease associated with severe obesity.
Vice President, Board of Directors of the AIDS Community Research Initiative of America; Trustee, Royal S. Marks Foundation; Member, Board of Directors of the National Center for the Study of Wilson's Disease, Medical Advisory Board of the Wilson's Disease Association of America, American Gastroenterological Association, New York Academy of Science, American Association for the Advancement of Science, International AIDS Society, and New York Academy of HIV Medicine.
Introduction
============
Tuberculosis, caused by *Mycobacterium tuberculosis*, is a major health problem that has increased significantly in recent years ([@B1]).
Historically, the tuberculin skin test (TST) has remained the most common diagnostic method for evaluating previous contact with mycobacteria or tuberculosis infection ([@B2], [@B3]).
In this test, the delayed-type hypersensitivity response to an intradermal injection of purified protein derivative (PPD) is analyzed and interpreted within 48-72 hr ([@B3], [@B4]).
The diagnostic potential of the TST for *M. tuberculosis* infection is not well defined, and it is considered a poorly sensitive method. False-positive results may occur in BCG-vaccinated subjects or healthy medical workers who are exposed to environmental or non-tuberculosis mycobacteria ([@B3], [@B4]).
Limited data exist on the immunological parameters of healthy PPD-positive workers. Given the occupational risk of infection in staff who have direct contact with mycobacterial species, we investigated their immunological parameters and compared them with those of healthy PPD-negative volunteers.
Materials and Methods
=====================
***Participants***
The study population consisted of twenty (16 male and 4 female) healthy workers at the tuberculin unit of the Razi Vaccine and Serum Institute with positive reactions to PPD. All of them had potential contact with mycobacterial antigens and no history of active tuberculosis. Twenty-five (18 male and 7 female) healthy subjects with negative PPD skin test results were selected as controls. The average age of subjects was 36.2 years.
All participants consented to take part in this study, and ten milliliters of heparinized peripheral blood was collected from each.
***Peripheral blood mononuclear cells (PBMCs) isolation, culture and cytokine assay***
PBMCs were isolated by Ficoll-Hypaque density gradient centrifugation. The cells were washed, suspended in complete RPMI-1640 medium (10% human AB serum + 100 U penicillin-streptomycin/ml) and counted.
A total of 1×10^6^ cells/well was cultured in flat-bottomed 24-well plates in duplicate with or without PPD (10 μg/ml).
The culture plates were incubated at 37°C in an atmosphere of 5% CO~2~. After 4 days the supernatants were recovered and stored at -70°C.
The concentrations of IFN-γ and IL-4 were measured using a commercial enzyme-linked immunosorbent assay (ELISA) kit (R&D) according to the manufacturer's specifications.
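As a concrete illustration of the readout step, the sketch below shows how sample cytokine concentrations are typically interpolated from an ELISA standard curve. The standard concentrations and optical densities are hypothetical, not taken from the kit used in this study.

```python
import numpy as np

# Hypothetical IFN-gamma standard curve: known standard concentrations
# (pg/ml) and their measured optical densities (OD450), in ascending order.
std_conc = np.array([0.0, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = np.array([0.05, 0.12, 0.21, 0.38, 0.70, 1.30, 2.40])

def od_to_conc(od: float) -> float:
    """Linearly interpolate a sample's OD against the standard curve."""
    return float(np.interp(od, std_od, std_conc))

# A supernatant whose OD falls between two standards:
print(od_to_conc(0.50))  # ~171.9 pg/ml
```

Commercial kits often recommend a four-parameter logistic fit rather than piecewise-linear interpolation; this is just the simplest reasonable sketch of the idea.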
***Flowcytometry procedure***
T and B lymphocyte and natural killer (NK) cell surface markers were enumerated by two-color flow cytometry. For this purpose, a panel of monoclonal antibodies conjugated with fluorescein isothiocyanate (FITC) or phycoerythrin (PE), consisting of HLA-DR, CD14, CD19, CD3, CD4, CD8, CD22 and CD16+CD56 (all from DAKO), was used.
Peripheral blood samples were transported in sodium heparin tubes and were stained with a combination of monoclonal antibodies.
After incubation, washing and fixation, samples were quantified in a partec flowcytometer.
Data analysis was performed using winMDI 2.9 software and percentage of each marker expression was determined based on events in lymphocyte gate.
***Statistical analysis***
Data were reported as the mean ± standard error of the mean (SEM). Comparisons between groups were usually performed using unpaired two-tailed Student's t-test. A difference was considered to be statistically significant for a *P* value lower than 0.05 (*P*\< 0.05)
Results
=======
***Lymphocyte analysis***
To determine the lymphocyte subpopulation, we analysis the surface expression of lymphocyte markers with flow cytometry.
According to [Table 1](#T1){ref-type="table"}, data showed that PPD-positive tuberculin workers have a higher percentage of CD3+/CD8+ positive T lymphocytes compared to PPD-negative controls (38.33 ± 7.61 vs. 27.63±8.11, *P*\<0.05). The percentage of CD3+/CD4+ positive T cells was slightly lower in PPD-positive group (41.66±5.94 vs. 45.27±7.51, *P*\>0.05). However, the CD4/CD8 ratio was declined.
There were no differences in the percentage of B lymphocyte expressing CD22 (6.33 ± 3.11 vs. 7.24 ± 3.87, *P*\>0.05) and NK cells expressing CD16 and CD56 molecules (10.76±3.21 vs. 11.63±2.6, *P*\>0.05) in two separated groups ([Table 1](#T1){ref-type="table"}).
{#F1}
{#F2}
######
Percentage of selected lymphocyte markers in PPD positive and negative subjects
Lymphocyte marker PPD positive PPD negative
------------------- -------------- --------------
CD3+/CD4+ 41.66 ± 5.94 45.27 ± 7.51
CD3+/CD8+ 38.33 ± 7.61 27.63 ± 8.11
CD4/CD8 ratio 1.09 ± 0.24 1.54 ± 0.36
CD22+ 6.32 ± 3.11 7.24 ± 3.87
CD16+/CD56+ 10.76 ± 3.21 11.63 ± 2.6
***Cytokine production by PBMCs***
PBMCs from PPD-positive tuberculin workers and controls were cultured in the presence of PPD antigen and the levels of IFN-γ and IL-4 production was assayed by ELISA.
The results showed that, IFN-γ concentration in PPD-positive subjects was significantly greater than that of PPD-negative controls (194.41 ± 53.34 vs. 133.11 ± 40.6, *P* \<0.05) ([Figure 1](#F1){ref-type="fig"}).
Furthermore, production of IL-4 was similar in PPD-positive and negative groups (52.93 ± 10.06 vs. 48.38 ± 12.2, *P* \>0.05) ([Figure 2](#F2){ref-type="fig"}).
Discussion
==========
Tuberculosis is endemic in Iran and many individuals are sensitized to tuberculosis or non-tuberculosis mycobacterial antigens. This is revealed by positive skin reaction to PPD antigens.
However, to our knowledge, no report concerning the immune responses pattern of PPD positive healthy workers who have consistent exposure to mycobacterial antigens exist. They are good subjects for study of immune responses in non-tuberculosis mycobacteria (NTM).
Based on the majority of reports, mycobacteria infection could predominantly induce Th1 cells and CD8^+^ cytotoxic T cells with a Th1-like cytokine profile of elevated IFN- levels ([@B6]-[@B8]). Here, we studied the lymphocyte subpopulation and PPD-specific cytokine production in PPD positive subjects who have been worked in tuberculin production unit of Razi Vaccine and Serum Research Institute (RVSRI).
In the first step of this study, the lymphocyte subpopulation was determined by double-color flow cytometry. Based on our findings, the percentage of CD8^+^ lymphocytes were significantly greater in the PPD positive tuberculin workers compared to PPD negative control group. The CD8+ lymphocytes have an important roles for defense against intracellular pathogens, such as mycobacteria, and several studies are reported on increase of these cells in both tuberculosis infected or PPD positive subjects ([@B8]-[@B10]).
However, in this study no difference was observed in CD4+ lymphocytes. This could emphasize the role of mycobacterial antigens for over stimulating and increase of CD8+ lymphocytes and shifting of immune responses toward theses cells.
IFN-γ is a crucial cytokine for controlling of intracellular infection. This cytokine could be secreted from activated TH1 and CD8+ or NK cells ([@B11]- [@B13]).
In agreement with previous studies ([@B12], [@B13]), the present results show that IFN-γ production by PBMCs were greater in the PPD-positive donors in response to PPD antigen compared to the controls. However, no significant difference was found between PPD positive and control groups for production of IL-4, as an important cytokine that down-regulates Th1 immune responses. This is contradicting with some previous studies that reported the elevated levels of IL-4 in lymphocytes stimulated by *mycobacterial* antigens ([@B5], [@B13]).
Conclusion
==========
Overall, based on these data, we suggest an initial dominant Th1 response with elevated IFN-γ and CD8+ T cells count in PPD positive individuals who are constantly affected by mycobacterial antigens. This may be responsible for the elevated cell mediated immunity in these individuals and could interfere with some potentially pathologic criteria or diseases such as autoimmunity or allergic reaction. This could be mentioned for future studies.
We are grateful to Mehdi Hejazi (RVSRI) for assistance in collecting the blood samples and RAZI Vaccine and Serum Research Institute for financial support.
| |
If you’ve ever been close to a projector or placed your hand over one when it’s running, it is likely that you’ve felt some heat being generated by the device.
Sometimes a room can even get warm and feel uncomfortable to stay in after the projector has been running for a couple of hours.
This may lead you to wonder whether projectors are supposed to get hot, especially since we know that heat is not good for electronic devices.
It is normal for a projector to get hot, however, it shouldn’t overheat. Most of the heat is generated by the projector’s bulb which needs to warm up to a temperature of 200-300 degrees in order to display clear, brightly colored images.
Overheating in projectors can cause premature failure of certain parts such as the bulb and should be dealt with.
How do I know if my projector is overheating?
You can know that a projector is overheating if the temperature light flashes or turns red, the fans become very noisy from spinning at maximum speed, or the projector suddenly shuts down.
1. Temperature light flashes or turns red
Most projectors have a built-in temperature sensor that detects when a projector is overheating. When the internal temperatures exceed a certain set threshold the temperature light turns red to alert the user.
In some projectors, such as those from Epson, the temperature light will turn amber to alert you that the projector is too hot and then turn red when the projector overheats.
2. Projector Gets Noisy
A noisy projector is also another indication that a projector is overheating. This noise is produced by the projector’s cooling fans as they spin extremely fast in an attempt to dissipate heat away from the interior.
3. Sudden Shutdown
A projector has a built-in safety mechanism that automatically shuts it down when it overheats to prevent permanent damage to electronic components, such as the processor, that is soldered onto the motherboard. This also prevents the bulb from burning out prematurely.
What causes projectors to overheat?
A projector can overheat due to poor air circulation, dirty air filters, faulty fans, wrong orientation, and operating in a hot environment.
1. Operating in a Hot Environment
A projector can overheat if the temperature of the room it is placed in is so high that it does not facilitate cooling. The environment can get hot during the summer season or if there is a heat-generating appliance such as a space heater, oven, etc. in the room.
2. Lack of Proper Ventilation
Lack of proper ventilation can cause a projector to overheat since it relies on surrounding air circulation to cool its interior. Improper ventilation can be caused by having obstacles too close to the projector, dust clogging up the air vents, or being placed in a confined space that has little to no airflow.
3. Defective Air Filter
Over time, air filters can become filled with dust, thus limiting the amount of air that gets in and out to cool down the projector.
In addition, a defective air filter can allow dust and other debris into the projector’s interior, thus affecting its ability to dissipate heat effectively.
4. Wrong Orientation
Projectors have air vents placed strategically to let air flow in and out of their enclosure in the most efficient way possible. In fact, most projectors should not be tilted at an angle of more than 30 degrees forward or back.
Therefore, when placed in the wrong orientation, such as in a vertical position, the air vents can get blocked or the fans may blow air in the wrong direction, thus causing the projector to run hot.
How can I stop my projector from overheating?
You can prevent your projector from overheating by taking the following simple measures:
1. Improve Airflow
Improving ventilation around your projector is one of the first and easiest things you can do to prevent it from overheating. Ventilation around a projector can be improved in the following ways:
>Ensure that there is a space of at least 20 centimeters away from other objects, all around the projector, so that it is properly ventilated.
>Don’t put your projector in an enclosed space such as a cabinet or where the air is not able to flow freely.
>Ensure the projector’s air vents are not covered or blocked by any object so that it is able to draw in cool air from the surrounding and push out hot air from its interior freely.
>Replace the air filter if it is old or clogged up with dust or gunk so that the projector is able to “breathe in and out” freely.
2. Replace Fans if they aren’t working properly
Over time, fans can get damaged and fail to work as efficiently as they used to. One of the signs of a faulty fan is noise. Faulty fans should be repaired immediately to stop your projector from overheating and further damage.
3. Control Room temperature
If your room has windows, keep them open when running your projector to let in cool air from outside. During summer when temperatures get too high, it may be necessary to turn on an air conditioning unit or any other cooling system to bring temperatures down in order to use your projector in a favorable environment.
Also, consider moving any heat-producing electric appliances such as TVs and space heaters away from the projector. You shouldn’t place your projector directly above or below any electronic equipment.
4. Check how you Use your projector
Using your projector in the brightest setting causes it to produce the most amount of heat. Therefore, try reducing the brightness and see if this fixes the problem of overheating.
In addition, do not tilt your projector past the tilt angle that is recommended by the manufacturer. Doing so may affect how well a projector is able to push out and draw in air.
Can projectors catch fire?
Most projectors have cooling mechanisms and safety features to prevent them from catching fire, such as automatic shutdown. However, a projector can catch fire if:
- Excess heat causes internal parts to catch fire.
- The wrong power supply is used.
- Another material other than the lens cover is used to cover the lens during projection, material such as paper could easily catch fire.
- Water or any other liquid that conducts electricity gets spilled and gets into the interior and causes a short circuit.
- There is a bug in the software responsible for sensing heat, regulating fan speed, and shutting down the projector when it overheats.
It is important to carefully read the safety instructions that are printed in a projector’s user manual in order to avoid anything that could cause your projector to pose a fire hazard.
See also:
- Can I put a projector screen in front of a window?
- Are all projectors ceiling mountable?
- Do portable projectors need to be plugged in?
Do laser projectors get hot?
Laser projectors do get hot, however, they don’t get as hot as lamp projectors. This is because they produce light using LEDs which consume significantly less energy and consequently less heat compared to standard lamps.
Laser projectors only produce the light that’s needed to project an image, making them more energy efficient. If the necessary measures such as ensuring proper ventilation, replacing clogged air filters, and ensuring the fans run properly are not followed laser projectors can also overheat.
The video below shows how the different types of projectors work:
How long does it take for projector lamps to cool down?
A projector lamp takes 10-20 minutes to cool down, after which you can move it or cover it without causing damage or getting burned. If you don’t allow the lamp to cool down it could get damaged or shatter from shock and vibration.
Conclusion: Is it normal for projectors to get hot?
It is normal for a projector to get hot, even when the fans are running. Most of the heat is produced by the projector lamp which produces the light required to project images on a screen.
However, this does not mean that a projector should overheat. In fact, if the excess heat from a projector is not controlled, it couldn’t lead to the projector posing a fire hazard.
Fortunately, it is easy to keep a projector from overheating by following the steps outlined above. Preventing overheating extends the life of a projector and keeps it safe to continue using. | https://techusersguide.com/are-projectors-supposed-to-get-hot/ |
So it also has a positive effect. This demonstrates how the success of an environmental policy is highly dependent on the environmental and social context under which it is being implemented. Investigations into the late Pleistocene and Holocene history of vegetation and climate in Santa Catarina S Brazil. Using training to moderate chimpanzee aggression during feeding. This oscillation is driven by rarity-based perception of land state value: as forest becomes rare, the number of individuals preferring forest over grassland increases, and eventually the result is net conversion of grassland to forest.
The Ganga, the country's most important river, has become the world's most polluted one. Annu Rev Ecol Syst 20, 171—197 1989. They regularly burned the vegetation from the land in different patterns to help control or prevent wild fires. The dam has also helped Egypt avoid droughts and floods. As a result of the dam, farmers can now have two or three harvests per year rather than one. Hum Ecol 21, 1—21 1993. The effects of unfamiliar humans on aggression in captive chimpanzee groups.
Training to reliably obtain blood and urine samples from a diabetic chimpanzee Pan troglodytes. Primates: The Road to Self-Sustaining Populations. Many people believe that Australia's economy is resource dependent. An instantaneous point-sampling technique with five-minute focal animal test sessions and a 15-second intersample interval was supplemented with ad libitum recording of aggressive interactions and other behaviours of short duration. Ann Probab 3, 643—663 1975.
Data collection methods were identical to those used in the prior evaluation of straw and forage material. This was shown when rabbits were introduced and nearly wiped out Australians natural marsupials. J Veg Sci 3, 293—300 1992. This is its exact location. Ecol Appl 17, 2024—2036 2007. An example of human-environmental interaction in Brazil is the deforestation of the Atlantic forest. Conserv Biol 10, 977—990 1997.
In the present study, groups received 60-150 minutes of interaction, depending on its size. Because fire frequency drops off sharply at a specific threshold in forest cover ,, we will assume w F to be sigmoidal. Hence, human activities have the potential to change the composition of a mosaic ecosystem in a variety of ways. Temperture varies throughout the year. In sensitivity analysis we explored the impact of using the nonlinear version.
The test condition involved a familiar caretaker spending an additional 10 minutes per day, 5 days a week, with each chimpanzee. These characteristics may be human, physical, or cultural. Another example of human-environment interaction is the introduction of new plants to Australia from immigrants. Modeling the forest transition: Forest scarcity and ecosystem service hypothesis. In comparison to these factors, the amount and manner of human interaction with chimpanzees is a relatively neglected variable in behavioural management.
Data collection during the human interaction phase was begun three months after the onset of the phase. This suggests that as a species becomes more rare, its conservation value may increase in the eyes of the public, leading to efforts to protect and restore the species. Besides, deforestation has pushed several animal species to the brink of extinction. By recycling you can reduce landfills making the environment for animals better. Furthermore people often reorganize existing ecosystems to achieve new ones that seem to be more effective in serving their needs. But, in looking at history, these changes occurred over many thousands of years.
Australia has many natural resources. However, when F 0 are stable. Irrigation canals even keep some fields in continuous production through the use of artificial fertilizers. Human social systems have to adapt to their specific environment. Both variants caused few changes to the weak human influence scenario, but resulted in more parameter sets giving rise to oscillations in the strong human influence scenario. They also create air poll … ution, most notably dust.
Many species are becoming endangered and even worse, extinct. This boy is doing a positive interaction to the environment by recycling. But if the age of exploration hadn't happened the united states wouldn't be here. Moreover, many of these oscillations were sufficiently large to correspond to complete removal of either forest or grassland in the extremes of the cycle. Non-oral abnormal behaviours, already at very low levels 0.
Often these landscape types compete directly for resources , , , , ,. This is amazing--and it also has many problems. This resulted in few changes in the weak human influence case, but significant changes under the strong human influence case: the parameter regime giving rise to a simultaneously existing stable equilibrium and stable limit cycle was significantly reduced, meaning that dynamics were less sensitive to initial conditions. Available space per individual ranged from 24. Moreover, the relative composition of grassland versus forest may vary over time according to current preferences. As a result, the government borrowed heavily against the future sale of its oil. Examples of such consequences are the recreational use of mountainous regions for skiing or hiking, or ecotourism. | http://georgiajudges.org/positive-examples-of-human-environment-interaction.html |
In this practice we take complaints very seriously and try to ensure that all our patients are pleased with their experience of our service. When patients complain, they are dealt with courteously and promptly so that the matter is resolved as quickly as possible. This procedure is based on these objectives.
Our aim is to react to complaints in the way in which we would want our complaint about a service to be handled. We learn from every mistake that we make and we respond to customers’ concerns in a caring and sensitive way.
- The person responsible for dealing with any complaint about the service that we provide is Miss Pamela Murphy, our Complaints Manager.
- If a patient complains on the telephone or at the reception desk, we will listen to their complaint and offer to refer him or her to the Complaints Manager immediately. If the Complaints Manager is not available at the time, then the patient will be told when they will be able to talk to the dentist and arrangements will be made for this to happen. The member of staff will take brief details of the complaint and pass them on. If we cannot arrange this within a reasonable period or if the patient does not wish to wait to discuss the matter, arrangements will be made for someone else to deal with it.
- If the patient complains in writing the letter will be passed on immediately to the Complaints Manager.
- If a complaint is about any aspect of clinical care or associated charges it will normally be referred to the dentist, unless the patient does not want this to happen.
- We will acknowledge the patient’s complaint in writing and enclose a copy of this code of practice as soon as possible, normally within three working days.
- We will seek to investigate the complaint within ten working days of receipt to give an explanation of the circumstances which led to the complaint. If the patient does not wish to meet us, then we will attempt to talk to them on the telephone. If we are unable to investigate the complaint within ten working days we will notify the patient, giving reasons for the delay and a likely period within which the investigation will be completed.
- We will confirm the decision about the complaint in writing immediately after completing our investigation.
- Proper and comprehensive records are kept of any complaint received.
- If patients are not satisfied with the result of our procedure then a complaint may be made to:
- The Dental Complaints Service, The Lansdowne Building, 2 Lansdowne Road, Croydon, Greater London, CR9 2ER. Telephone: 08456 120 540 www.dentalcomplaints.org.uk
- The General Dental Council, 37 Wimpole Street, London. W1N 8DQ. Telephone: 0845 222 4141, the dentists’ regulatory body for complaints about professional misconduct
Data Protection Code
Data protection code of practice for patients
Keeping your records
- This practice complies with the Data Protection Act 1998 and this policy describes our procedures for ensuring that personal information about patients is processed fairly and lawfully.
The personal data that we hold
- To provide you with a high standard of dental care and attention, we need to hold personal information about you. This personal data includes:
2.1 Your past and current medical and dental condition; personal details such as your age, National Insurance number/NHS number, address, telephone number, date of birth and your general medical practitioner
2.2 Radiographs, clinical photographs and study models
2.3 Information about the treatment that we have provided or propose to provide and its cost
2.4 Notes of conversations/incidents about your care, for which a record needs to be kept
2.5 Records of consent to treatment
2.6 Correspondence with other health care professionals relating to you, for example in the hospital or community services.
Reasons for holding this information
- We need to keep comprehensive and accurate personal data about our patients to provide them with safe and appropriate dental care.
How we process the data
- We will process personal data that we hold about you in the following way:
Retaining information
- We will retain your dental records while you are a practice patient and after you cease to be a patient, for at least eleven years or, for children, until age of 25, whichever is the longer.
Security of information
- Personal data about you is held in the practice’s computer system and/or in a manual filing system. The information is not accessible to the public; only authorised members of staff have access to it. Our computer system has secure audit trails and we back-up information routinely.
Disclosure of information
- To provide proper and safe dental care, we may need to disclose personal information about you to:
7.1 Your general medical practitioner
7.2 The hospital or community dental services
7.3 Other health professionals caring for you
7.4 HM Revenue and Customs
7.5 Private dental schemes of which you are a member.
- Disclosure will take place on a ‘need-to-know’ basis. Only those individuals or organisations who need to know in order to provide care to you – or in order to ensure the proper administration of Government (whose personnel are covered by strict confidentiality rules) – will be given the information. Only the information that the recipient needs to know will be disclosed.
- In very limited circumstances or when required by law or a court order, personal data may be disclosed to a third party not connected with your health care. In all other situations, disclosure that is not covered by this Code of Practice will only occur when we have your specific consent.
- Where possible, you will be informed of these requests for disclosure.
Access
- You have the right of access to the data that we hold about you and to receive a copy. Access may be obtained by making a request in writing and the payment of a fee of up to £10 (for records held on computer) or £25 (for those held manually, including non-digital radiographs). We will provide a copy of the record within 21 days of receipt of the request and fee (where payable) and an explanation of your record should you require it.
If you do not agree
- If you do not wish personal data that we hold about you to be disclosed or used in the way that is described in this Code of Practice, please discuss the matter with your dentist. You have the right to object, but this may affect our ability to provide you with dental care.
Data Security Policy
This Dental Practice is committed to ensuring the security of personal data held by the practice. This policy is issued to existing staff with access to personal data at the practice and will be given to new staff during induction. Should any staff have concerns about the security of personal data within the practice they should contact Dr Gurpreet Midha.
All members of the team must comply with this policy.
Confidentiality
- All employment contracts and contracts for services contain a confidentiality clause, which includes a commitment to comply with the practice confidentiality policy.
2. Access to personal data is on a “need to know” basis only. Access to information is monitored and breaches of security will be dealt with swiftly by Dr Gurpreet Midha.
3. We have procedures in place to ensure that personal data is regularly reviewed, updated and deleted in a confidential manner when no longer required. For example, we keep patient records for at least 11 years or until the patient is aged 25 – whichever is the longer.
Physical security measures
- Personal data is only taken away from the practice premises in exceptional circumstances and when authorised by Dr Gurpreet Midha. If personal data is taken from the premises it must never be left unattended in a car or in a public place.
5. Records are kept in a lockable fireproof cabinet, which is not easily accessible by patients and visitors to the practice.
6. Efforts have been made to secure the practice against theft by, for example, the use of intruder alarms, lockable windows and doors.
7. The practice has in place a business continuity plan in case of a disaster. This includes procedures set out for protecting and restoring personal data.
Information held on computer
- Appropriate software controls are used to protect computerised records, for example the use of passwords and encryption. Passwords are only known to those who require access to the information, are changed on a regular basis and are not written down or kept near or on the computer for others to see.
9. Daily and weekly back-ups of computerised data are taken and stored in a fireproof container, off-site. Back-ups are also tested at prescribed intervals to ensure that the information being stored is usable should it be needed.
10. Staff using practice computers will undertake computer training to avoid unintentional deletion or corruption of information.
11. Dental computer systems all have a full audit trail facility preventing the erasure or overwriting of data. The system records details of any amendments made to data, who made them and when.
12. Precautions are taken to avoid loss of data through the introduction of computer viruses. | https://salforddentalpractice.com/complaints-and-data-protection/ |
In the last few weeks there has been both good news and bad news in the battle between total dependence on renewable “green” energy and a sensible energy policy in New York state.
First the bad news. I’m sure that many of you have read the recent OBSERVER story that the price of electricity spiked during the recent July heat wave. This was just another one of those unintended consequences that politicians are so adept at creating when they begin meddling in things, they know little about.
According to Assemblyman Andrew Goodell during the heat wave the price of electricity was driven up due to congestion in the transmission system that brings power to western New York, most of which now comes from heavily polluting coal fired plants in Pennsylvania. Goodell pointed out that the transmission system in our area was originally designed to deliver power from the now silent Dunkirk steam plant and the equally silent Huntley Station in Tonawanda and not from plants in Pennsylvania which is the reason for the congestion during periods of high demand like heat waves.
I’m sure we all remember the jubilation that met the governor’s 2013 announcement that the Dunkirk plant would remain open and be converted to cleaner burning natural gas. Don’t politicians say the craziest things when they need our votes?
Getting back to renewable energy, Goodell said that the Public Service Commission is aware of the problem but that repairs will take time. Unfortunately, I find that the repairs to relieve congestion are just one of many repairs, updates and rebuilds that engineers will be facing so that all those new wind and solar farms can be tied into the grid in order that wind power from Chautauqua County and solar power from the Mohawk Valley can be sent to areas of high demand
Problems have already arisen in transmitting some forms of renewable power where it is needed. In 2018 New York produced less renewable energy then in 2017 with a drop of 2.5 percent.
According to those in the know the reason for this was that our electric grid was unable to deliver renewable energy to regions where it was needed. In 2018, 70 gigawatts of wind energy went to waste because the grid couldn’t move it from upstate to areas of high demand. The governor was warned, but apparently has chosen to ignore, that as the state grows more dependent on renewable energy, we will face additional problems transmitting wind and solar energy where it needs to go and the state may face more brownouts and blackouts in the future
Besides the “blips” that local electric customer will see on their bills after future heat waves the cost of transitioning to renewable energy will cost all New Yorkers more for their electricity. What should disturb all New Yorkers is that the Cuomo administration has never made an effort to make even a rough estimate of the fiscal and economic impact of dependency on renewable energy. In light of that we should be prepared for large and unexpected increases in our electric bills that are already 43 percent above the national average.
As the cost of electricity goes up it will only accelerate what has been happening in upstate New York for years as businesses and citizens flee the state. Of course, supporters of green energy tell us that there will be new jobs in the green energy field to take up the slack. What kind of jobs will those be, I wonder? Just picture someone tethered to a 500 foot high tower cleaning dead birds off turbine blades or sweeping dust and dirt from solar panels in the summer and snow in the winter. We only have to look north to Tesla’s Solar City plant in Buffalo to understand that job claims in green energy are often wildly inflated.
The good news in energy is that the Chautauqua County Legislature recently passed a resolution opposing the construction of wind turbine farms on Lake Erie as part of the governor’s misguided and quixotic campaign to make the state dependent on renewable energy. As the resolution states, wind turbines on the lake will have a negative impact on bird and fish populations, and they will also have an adverse impact on recreational use of Lake Erie by local residents and visitors.
Opponents of wind and solar farms can take heart in the growing opposition to them among upstate citizens and municipalities. Local residents and organizations in these areas are working to save their landscapes and wildlife from the blight of towering wind turbines and rapidly spreading fields of solar panels.
Not surprisingly, as opposition grows, it has been reported that state officials have begun taking the questionable step of using state resources to help renewable-friendly local officials change municipal codes to smooth the way for turbines and solar farms. So much for the duty of state government to place the well-being of its citizens first. | https://www.observertoday.com/opinion/commentary/2019/08/charging-ahead-with-bad-energy-policy/
Italian coastal sites have the advantage of climatic conditions favorable to mixed renewable energy sources, such as solar and wind. Harbors are safe places to install wind turbines, with wind conditions close to those found offshore. Space-borne remote sensing can provide the information needed to determine solar and wind energy production potential more cheaply than conventional observational campaigns, helping to identify and assess suitable areas. Here, we present a case study of satellite-based assessment of both energy resources in harbors. | https://orbit.dtu.dk/en/publications/using-remote-sensing-data-for-integrating-different-renewable-ene
Kristofer Dittmer is a PhD student in Ecological Economics at ICTA-UAB. He holds undergraduate degrees in Environmental Science and Environmental Economics from Gothenburg University, and a Master’s degree in Environmental Studies from the Universitat Autònoma de Barcelona. His doctoral research focuses on the usefulness of local currencies in achieving socio-ecological change.
Research interests: local currencies, social and solidarity economy, degrowth.
About local currencies:
Local currencies are often proposed by degrowth advocates as innovations that facilitate the creation of convivial and ecologically sustainable societies. They are alternatives or complements to legal tender money that are mostly created by civil society and sometimes by public authorities, and that circulate in a more limited area than conventional money. Common examples are Local Exchange and Trading Schemes (LETS), time banks, and convertible and non-convertible paper currencies. Over the last few decades, local currencies have been variously viewed as policies for social inclusion, tools for local economic development, and survival mechanisms in times of economic crisis. Our research on local currencies focuses on the feasibility of alternative monetary systems and on their relevance to the socio-ecological transition called for by degrowth proponents. We draw inspiration from the postcapitalist ‘diverse economies’ project in economic geography as an antidote against the pessimism induced by discursive constructions of omnipresent Capitalism, tempered however by a fresh awareness of its frightful sibling: Fossil Energy Civilization. Research on local currencies at an interesting juncture between the two is currently conducted in Venezuela (see introductory article ‘Communal currencies in Venezuela’ by Kristofer Dittmer).
Selected publications: | https://degrowth.org/2011/11/28/kristofer-dittmer/ |
by Tom Orrell with contributions from Beata Lisowska.
At the Friday Seminar that preceded this year’s UN Statistical Commission, Open Data Watch’s Eric Swanson asked me a challenging yet pertinent question following my presentation to the plenary. He asked: “The definition and principles of ‘open data’ are quite clear and simple but the principles of joined-up data are less clear. Can you enunciate five principles of joined-up data that could serve as a practical guide for others?”
This is a question that we at the Joined-Up Data Standards (JUDS) project have begun to answer through our discussion papers, blogs and consultation paper. That said, Eric touched on a real gap in concrete guidance when it comes to a commonly recognised list of principles for interoperability – the ability to access and process data from multiple sources without losing meaning, and integrate them for mapping, visualisation, and other forms of analysis – at a global level.
This blog builds on the answer that I gave to the Friday seminar and sets out five core interoperability principles:
Proposed checklist for new standards from our consultation paper
- Is there a clear need and demand?
- Does it duplicate the efforts or compete directly with standards that already exist?
- Are the design of the architecture and individual elements intellectually, logically and methodologically sound?
- Do components (building blocks) within the standard adopt other existing standards wherever possible?
- Is it designed to ensure comparability and interoperability with other standards?
- Will the data be available through open, sustainable and easily accessible channels?
- Is there political buy-in from the institutions that need to produce the data?
- Are timelines for development, implementation and adoption realistic?
- Does the data that can feed into the standard already exist?
- Is it realistic to expect that new data can be produced to feed the standard?
- Does any historical data exist that can act as a ‘rear-view mirror’ for the standard?
Principle 1: Use and Reuse existing data standards
Perhaps the most basic principle that underpins joined-up data is the notion that new classifications – how data are described – and standards – schema into which data are input – should not be developed unless absolutely necessary. Where possible, those seeking to develop a new standard should spend time considering what is already out there and whether an open data standard already exists that can simply and easily be adapted to their needs. This principle is implicitly recognised within our consultation paper, where we suggest a ‘checklist for new data standards’ as a guide for anyone seeking to produce a new data standard. Moreover, any new standard developed must be compatible with existing standards.
Principle 2: Don’t forget metadata
Metadata standards are arguably the most important prerequisite to joined-up data. Metadata includes information on the source of a piece of data, its author, the version being published and the link to the original dataset. Taken together, this information is crucial to ensuring that both machine and human users can discover, identify and contextualise data. Ensuring that machine-readable metadata formats are standardised and used across data producing institutions and bodies therefore greatly enhances the ability of data to be joined-up.
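As a sketch of what this looks like in practice, the snippet below serialises a minimal metadata record in JSON. The field names are illustrative only (loosely in the spirit of common metadata vocabularies such as DCAT), not a specific standard, and the dataset described is invented:

```python
import json

# Illustrative metadata record: source, author, version, and a link
# back to the original dataset, serialised in a machine-readable form.
metadata = {
    "title": "National Health Expenditure 2016 (example)",
    "publisher": "Ministry of Health (example)",
    "author": "Statistics Unit",
    "version": "1.2",
    "issued": "2017-03-01",
    "landingPage": "https://example.org/datasets/health-expenditure-2016",
    "license": "CC-BY-4.0",
}

# Any tool that understands JSON can now discover who published the
# data, which version it is, and where the original lives.
record = json.dumps(metadata, indent=2, sort_keys=True)
print(record)
```

Because the record is structured rather than free text, a harvester can check provenance fields automatically before deciding whether to trust and join the data.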
These attributes make metadata particularly important for the official statistics community as it starts to consider how statistical data can be made open by default. As my colleague Beata Lisowska recently put it in another blog, when it comes to metadata, “in essence, we’re really asking: can we trust this data?”
Principle 3: Use common classifications wherever possible
As more and more data are made open and proactively published by governments, international institutions, private sector actors, open standard initiatives and others, we need to make sure that the language used – or the classifications to which data are published – is the same. Often, similar information is classified using slightly different definitions, which hinders the machine-readability and so interoperability of that data. Within the international development sector, it’s crucial that data standards are fit for purpose and actively used, or at least linked to, by all stakeholders producing data.
Classifications of organisations and time formats are two cases in point where the absence of universally agreed definitions can seriously inhibit broad-scale interoperability. The identify-org.net site succinctly explains why the issue of organisational identifiers is important: “If my dataset tells you I have contacts with ‘IBM Ltd’. ‘International Business Machines’ and ‘I.B.M’ – how many firms am I working with?” Unique identifiers would go a long way to overcoming basic semantic challenges like this.
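The problem can be made concrete with a small sketch (the alias table and the `org-001` identifier below are invented for illustration): naive string matching counts the three spellings as three firms, while resolving each alias to a shared identifier reveals a single one.

```python
# Three spellings of the same firm, as they might appear in three datasets.
contacts = ["IBM Ltd", "International Business Machines", "I.B.M"]

# Without identifiers, each distinct string looks like a separate firm.
assert len(set(contacts)) == 3

# A hypothetical lookup table mapping each known alias to one
# organisation identifier resolves them to a single entity.
org_ids = {
    "IBM Ltd": "org-001",
    "International Business Machines": "org-001",
    "I.B.M": "org-001",
}

resolved = {org_ids[name] for name in contacts}
print(len(resolved))  # 1 -- one firm, three spellings
```

Maintaining such alias-to-identifier mappings is exactly the kind of work that initiatives like identify-org.net aim to make unnecessary by agreeing on identifiers up front.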
The United Nations Statistics Division has published a registry of classifications that it maintains at UN Classifications Registry. However, the list does not include other international classifications such as UNESCO’s International Classification of Education (ISCED), WHO’s International Classification of Disease, or many other important classifications. A comprehensive inventory of all international and relevant national classification systems would be a boon to interoperability.
Principle 4: Publish data in machine-readable formats
For joined-up data solutions to offer real efficiency gains and value, it’s imperative that a machine is able to do most of the hard work in joining up the data. This is already possible but requires many data publishers to change the way they currently publish their data. Publishing data only in PDF format is not enough; data must also be published in machine-readable formats such as RDF, XML and JSON. Publishing in these formats would enable a computer to access, identify and filter data in an automated way, making it far simpler and less time-consuming for data users to put data to good use.
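To illustrate why this matters (the figures below are invented), a few lines of code can filter and re-serialise data published as CSV or JSON, something that cannot be automated against a PDF:

```python
import csv
import io
import json

# A tiny dataset as it might be published in machine-readable CSV form.
raw = """country,year,value
Kenya,2016,120
Kenya,2017,135
Uganda,2017,90
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# A machine can select just the 2017 figures and re-emit them as JSON,
# with no human retyping involved.
latest = [r for r in rows if r["year"] == "2017"]
print(json.dumps(latest))
```

The same table locked inside a PDF would require manual extraction before any of this could happen, which is precisely the efficiency loss the principle warns against.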
Principle 5: Ensure standards are user-driven
The explosion in open data publication that has taken place over the last twenty-odd years has happened with the key consideration of ‘openness’ at its heart. Whilst this is great and important, openness does not automatically equate to usability. For data to be usable they must be driven by the needs of users themselves. Take the Humanitarian eXchange Language (HXL) standard for example. Its beauty and functionality emanate from its incredible simplicity and ease of use. The process of ensuring that an interoperable standard is ‘usable’ can be a complex one that requires trial and error. Sticking with the HXL example, a linked-data approach was tried and tested but failed given the complexity of user needs in the humanitarian space. A hashtag approach was later agreed, which put user-needs at the heart of the endeavour.
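HXL's hashtag approach can be sketched in a few lines (the dataset and organisation are invented; the hashtags follow the style of HXL's published tags, though the exact tag set here is illustrative): a row of hashtags sits between the human-readable header and the data, so tools locate columns by tag no matter how each publisher names its headers.

```python
import csv
import io

# Human-readable headers vary by publisher; the hashtag row is stable.
raw = """Location,Organisation,People affected
#adm1,#org,#affected
Coast Province,Example Relief Org,3200
Lake Region,Example Relief Org,1800
"""

lines = list(csv.reader(io.StringIO(raw)))
header, tags, data = lines[0], lines[1], lines[2:]

# Find the '#affected' column by tag, not by header text.
idx = tags.index("#affected")
total = sum(int(row[idx]) for row in data)
print(total)  # 5000
```

The simplicity is the point: any spreadsheet user can add the hashtag row by hand, which is why this design won out over the more complex linked-data approach.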
These are some of the core principles of interoperability that we’ve uncovered during our research. They offer a starting point for further discussion and we will continue to explore these issues and others with the various stakeholders involved. One thing that we can be sure of already is that to find solutions to interoperability challenges, political will and policy coordination between governments, international organisations, open standard setters and others is key.
Up until the end of March 2017, the Joined-up Data Standards project is inviting feedback on their consultation paper, and will be publishing an updated paper building on these principles and other aspects of their work in the summer of 2017. | https://opendatawatch.com/blog/what-are-the-principles-of-joined-up-data/ |
"By sequencing the DNA from ten skeletons from the late Iron Age and the Anglo-Saxon period, we obtained the first complete ancient genomes from Great Britain," said Dr Stephan Schiffels, first author from the Wellcome Trust Sanger Institute, Cambridgeshire and the Max Planck Institute in Germany. "Comparing these ancient genomes with sequences of hundreds of modern European genomes, we estimate that 38% of the ancestors of the English were Anglo-Saxons. This is the first direct estimate of the impact of immigration into Britain from the 5th to 7th Centuries AD and the traces left in modern England."
Previous DNA studies have relied entirely on modern DNA and suggested anything between a 10% and 95% contribution to the population. One such study suggested that Anglo-Saxons didn't mix with the native population, staying segregated. However, this newly published study uses ancient genetic information and disproves the earlier idea, showing just how integrated the people of Britain were. The ancient skeletons from Cambridgeshire were carbon dated, confirming they were from the late Iron Age (approximately 50 BC) and from the Anglo-Saxon era (around 500-700 AD). Complete genome sequences were then obtained for selected DNA samples to determine the genetic make-up of these Iron Age Britons and Anglo-Saxons.
"Combining archaeological findings with DNA data gives us much more information about the early Anglo-Saxon lives. Genome sequences from four individuals from a cemetery in Oakington indicated that, genetically, two were migrant Anglo-Saxons, one was a native, and one was a mixture of both. The archaeological evidence shows that these individuals were treated the same way in death, and proves they were all well integrated into the Oakington Anglo-Saxon Community despite their different biological heritage." said Dr Duncan Sayer, archaeologist and author on the paper from University of Central Lancashire.
"We wanted to determine where ancient DNA samples would fit with respect to a modern population model and to map individuals into that model. This study, using whole-genome sequencing, allowed us to assign DNA ancestry at extremely high resolution and accurately estimate the Anglo-Saxon mixture fraction for each individual," said Richard Durbin, senior author at the Sanger Institute. "More full genome sequences and further improvements in methodology will allow us to resolve migrations in even more detail in the future." | https://www.eurekalert.org/pub_releases/2016-01/wtsi-agr011516.php |