'The Ares I test is supposed to take place this year' No it's not. The first test of Ares 1 is due in 2011.

NASA has announced a successful test related to its new Ares I astronaut-carrying rocket, the planned successor to the space shuttle. A key part of the structure, critical to safe parachute recovery of the discarded first stage, successfully blew itself up last week during a trial on the ground in Utah. The Ares I/Orion stack …

"once lit there is no off, there is no throttle either Solid fuel is either on - full burn, or off" I could be mistaken, but I thought they were supposed to change it so [somehow] the solid booster can be shut off in some way. Still no way to throttle it, though. And what is different about this skirt design vs the one used on the Apollo rockets? Haven't they already messed with pyrotechnically separating different sections? Or is it the vibration of the solid boosters that requires a stronger design...

The first Ares test IS scheduled for this year. Ares I-X is a test of the first stage solid rocket. The rest of the stack (i.e. second stage and crew exploration vehicle) will just be dummy mass though. See PDF from NASA: http://www.nasa.gov/pdf/257539main_aresIx_fs_070808.pdf It goes on to say: "The Ares I-X test is part of a larger flight test program that will include five tests of the Orion launch abort system between 2008 and 2011, a follow-on Ares I-Y test in 2012, and an integrated test of both the launch vehicle and spacecraft, called Orion 1, in early 2013."

There are advantages to solid fuel, such as it being more stable, less complex, more thrust for the same size, etc. The first stage doesn't need variable thrust - it just needs to get the rocket as high as possible. The second stage is liquid, and allows for varied thrust. The space shuttle has been using a combination of solid fuel and liquid fuel since its inception - it has two solid boosters on either side of a liquid tank. There have been 124 space shuttle missions, with two shuttles destroyed - one by a fault in a solid rocket booster, and one because of a piece of foam insulation used to keep the liquid fuel cold. So two accidents, one due to each fuel type.

The Ares I-X test is planned for July 2009, although some delay looks likely. A four-segment SRB with a dummy fifth segment. A test of the first-stage flight control systems, parachute recovery system and the system that separates its first and second stages. Parts are mock-ups, but an aerodynamically exact mock-up of the real Ares 1. @ Solid fuel???? Yes. Very similar to the two used for every Shuttle flight.

Huh, I thought it really WAS going to be a rocket blowing up on the pad, with some NASA goon (the bureaucrats rather than the scientists) saying "Guh, it was supposed to do that." What do you know, it appears it was a REAL test.

RE: solid fuel? It's a major problem if the rocket goes off course. Otherwise, it's safe and reliable. There's no "off", but there's ALSO not all this complicated plumbing with liquid oxygen and propellant, no valves, no O-rings (Challenger's downfall), no problems with the mixture, and so on. They already know how much fuel is needed to get launched, so it has that much. From what I've read, the tanks alone for a liquid fuel setup add significantly to the weight, which is a big issue for a launch into space. If there's an off-course condition, the launch vehicle has basically an ejection system*, then the rocket can be remotely blown up. (*Is the ejection system survivable? It's supposed to be, and it's definitely better than being rocketed back into the ground.)

When they go bang, they do so without *any* warning whatsoever, unlike liquids, which usually let you know in time to exit stage left. We already lost one Shuttle crew to solids, why are we doing this? Because NASA admin are idiots, and again not listening to their engineers. My tax dollars "at work" as usual. We're hoping Obama tosses Griffin out on his ear, and we can scrap this POS, but so far, he hasn't.

"Upper skirt extension"? First there's that screw-up with the nappy-wearing loony astronaut, and NOW the US taxpayers are funding some sort of high-tech exploding transvestism. Well *I'm* not impressed by the direction government-funded projects have taken under the new administration. So much for the increased oversight and responsible spending President Obama promised. Disgusted of Tunbridge Wells.

Quote: "Solid fuel???? By Fred Posted Monday 2nd February 2009 18:35 GMT Flame The primary stage is to be solid fuel?! WTF - once lit there is no off, there is no throttle either Solid fuel is either on - full burn, or off Seems like an accident waiting to happen" It's not like the astronauts need to slow down a spacecraft on the way up to space, that's why the boosters always are solid fuel and this is nothing new. Spacecraft manoeuvring thrusters, on the other hand, are always liquid fuel because they are only fired when needed for short durations.

Actually, Ares uses Mediated Solid Fuel (or MSF), which uses engineered "smart particles" distributed throughout the oxidising substrate to accelerate or decelerate solid fuel burn. By applying targeted high energy radio waves at the fuel source, various characteristics of the smart particles can be tuned in real time - including volume, and reactivity. These can be changed gradually to change flight characteristics (speed adjustment), or rapidly to - almost instantaneously - render the solid fuel source effectively inert and "shut down" the solid fuel engine. More information can be found at: http://en.wikipedia.org/wiki/AFakeWikiPage

Yup. Before the Shuttle, NASA considered solid fuel rockets to be too dangerous to be used for human flights. It was only after Congress told them that they could only have 1/2 of what they were asking for to design what became the Shuttle that money won out and they decided to use them (and due to more political finagling we ended up with the Challenger disaster).

Actually the SRB on Challenger never exploded; the o-ring failed (due to extreme cold on the launch pad) on one of the section joints, allowing exhaust gas to escape, which cut through the supports which held the SRB to the external tank. They had had this o-ring fail on previous flights but the hole was blocked by debris, which did happen on Challenger but was dislodged due to large amounts of steering caused by wind shear. In all likelihood, if the o-ring had failed on the other side of the SRB then there wouldn't have been an accident. I suspect that if something similar happened with Ares I then it would not cause a major failure.

Yep - it was actually the liquid fuel that exploded after the leak from the booster melted through the liquid fuel tank. It wasn't debris and wind shear that caused it though. The low temperature (18°F) prevented the o-rings from being able to flex as needed, which allowed hot gases to escape. In previous flights, the secondary o-ring was able to prevent leaks when the primary failed.

Do they need to blow it apart? Surely a fails-under-tension-loading type of coupling would be best here? I.e. just stack it on top; when stationary, gravity ensures compression attachment; during launch the thrust ensures compression; after the solid fuel is burnt the connection goes into tension and fails...

Solid motors (the Shuttle SRBs at least) do have some capability to be switched off in flight. The burn rate of these propellants is strongly dependent on internal pressure. Opening a large vent port (once only, with explosives) at the top of the booster drops the pressure and reduces burn rate to a crawl. This is already done on the Shuttle when the SRBs are jettisoned. It makes little difference to Ares (I've no idea if such a system is even fitted), but it's a workable bit of rocket motor technology.

"But there's only one of them - so it's twice as safe as the Shuttle!" <!CRAP MATHS ALERT! DO NOT READ ON IF YOU HAVE A SLIGHT CLUE!> Or more accurately, half as dangerous. 1% chance of failure with 2, 0.5% chance with 1 (I'm making these up as an example), so a safety increase from 99.0% to 99.5% - not really doubling. And that's ignoring the combinational statistics involved (that 1% is made up of one failing, the other failing, or both failing). </pedant> :-) Though as Jimmy Carr mentioned on QI, the odds are always 50/50. It either blows up, or it doesn't. (A quick numerical version of this arithmetic appears just after the thread.)

Thank god someone here noticed that tiny little detail. The major design flaw that caused the loss of both space shuttles was the side-by-side design. If NASA had used their previous stacked-on-top design (which they have reverted to here) the blow-out on the booster would not have hit anything, and the foam loss would not have occurred either. The first incident then becomes potentially survivable, and the latter a non-incident. With regard to the skirt, I don't believe that the Apollo first stages worried about recovery, and therefore did not need to care about parachute deployment.

A Nonymous said: "that's why the boosters always are solid fuel and this is nothing new." Hah, it's only been used on the shuttle, with disastrous consequences in 1986. It had happened before, and can be seen in various launch videos, but in 1986 the burn-through was pointed at the big tank, and that blew up at 73 seconds, killing all aboard as they fell back to earth. Solid booster technology is a cheap way to go, but a dangerous way to go, and no one outside NASA management is happy to see it again in this Ares I stack. It would save a ton more money if they just used a Delta rocket to launch the crew, and an Atlas V to launch their lunar lander. I have no idea why Griffin thought we needed another damned launch option, when we have so many already. Lockheed can build the capsule in the next few years, and with a launcher off the shelf, we would not have the multi-year gap they project now.

All you "solid fuel=teh sux" proponents seriously need to review the design requirements of a liquid fuel booster before you weigh in, especially the bit about how to move the fuel and oxidiser quickly enough to suit the application. I thought everyone in our business understood the basic idea that high speed moving parts in the critical path are a bad thing, to be moved away from if any viable alternative presents itself. As for the chance of explosion doing away with the crew - rockets are inherently super dangerous. Only a fool would think that so much energy piled into a pipe could be called "safe" after we set fire to the contents. The thing is *designed* to explode, for Azathoth's sake, just very slowly in a "controlled" manner. Perhaps it is possible to get the crew away from the fireball of an out-of-design-spec excursion event in time. I seriously doubt it, and from the suggestions I've seen coming later than Gemini (Apollo used older "Mercury-era" tech) the whole astronaut safety thing is a smoke and mirrors affair designed to allow politicians to speak in public about things they manifestly know nothing about while spending public money venally to benefit themselves and their constituents, and not at all the people sitting in the cabin at t=0. If only the general public could be educated that Real Life <> Star Trek and that travelling to the ISS isn't like nipping down to the shops for twenty Rothmans (unless you smoke 'em), perhaps the money to properly investigate the useful applications of space tech would become available (hint: cheap power is still do-able with tech we already have to hand) and we could have the nifty new frontier (which is the bit I want) as a side product.

For the love of Arthur. It's nearly 2010 and still no Space Wheel.

I agree with Craig Matthews, who says: Instead of trying to rush forward and get this new vehicle human-rated, what is wrong with resurrecting the Apollo/Saturn technology? Aside from space and size limitations, what was wrong with Apollo? bccmeteorites.com/misconduct-planetary.html

The J2-X engine proposed for Ares' upper stage is a direct descendant of the J2s used in the Saturn V, but with the sort of gains in efficiency and manufacturing that might be expected after 55 years. A stack design in which all the potentially explosive stuff is as far away from the crew as possible is, again, identical to Saturn V, and the Orion module is largely based on Apollo (it even looks identical at a glance), albeit with the experience of those 55 years built in, not to mention a good deal more computing power. Repurposing the Shuttle SRBs makes good sense: the plants are up and running already and the SRBs are known to work. They can do the job, so why reinvent the wheel? Getting Saturn V manufacturing up and running again would be no mean feat, and would doubtless take longer and cost more. The more pertinent question is, and has been for a long time, just how NASA got into a position where it had no credible plan in place to replace the Shuttle.

"But there's only one of them - so it's twice as safe as the Shuttle!" In defence of Martin, your apparently reasonable skills at arithmetic and pedantry aren't matched by the level of irony-detection and humour-appreciation required for reading these comments. Do we really all need to use the joke icon as a defence against the irony deficient?

<quote>So much for the increased oversight and responsible spending President Obama promised.</quote> Wow - I knew some people thought that Obama is an all-seeing god, but even god will take some time to go through the entire US budget. What has he been in office - 16 days???? Oops! Sorry, I did not mean to tell everyone that Obama is god.... what if that is widely known??? Time to leave before the natives get upset.

The story that I heard was that the engineers wanted to build single-piece solid rockets on site in Florida - but because of the pork issues the rockets "needed" to be built in Utah. That required that they be made in two parts so that they could send them to Florida on the train... Sounds like the way Chrysler products are designed and built.
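As a quick sanity check on the booster-count arithmetic in the "crap maths" comment above, here is a minimal sketch of the combined probability that at least one booster fails, assuming each booster fails independently with the same probability. The numbers are made up, exactly as the commenter's were, and are not real reliability figures.

```python
# Toy check of the "one booster vs two boosters" failure arithmetic from the thread.
# Assumes independent failures with an invented per-booster probability p.

def prob_any_failure(p: float, boosters: int) -> float:
    """Probability that at least one of `boosters` independent units fails."""
    return 1.0 - (1.0 - p) ** boosters

p = 0.005  # per-booster failure probability (illustrative only)
one = prob_any_failure(p, 1)
two = prob_any_failure(p, 2)

print(f"one booster : {one:.4%} chance of a booster failure")
print(f"two boosters: {two:.4%} chance of at least one failure")
print(f"ratio       : {two / one:.3f}  (close to, but not exactly, 2x)")
```

For small per-booster failure probabilities the two-booster risk is just under twice the single-booster risk, which is the "combinational statistics" caveat the commenter alludes to.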
https://forums.theregister.co.uk/forum/all/2009/02/02/nasa_ares_i_stage_sep_bang_test/
Neuroscience is one of the most rapidly advancing fields in medicine, with highly detailed imaging offering new insights into the way the brain works and direct brain interfaces enabling weapons to be targeted and fired with just a thought. The technology is not speculative – just last week scientists unveiled an implant called BrainGate that enabled a woman who had lost the use of her limbs after a devastating stroke to control a robot arm using thought processes alone.

Neuroscience, conflict and security

The new report, "Neuroscience, Conflict and Security", formed part of a series that examined the impact of neuroscience on society, dealing specifically with the potential application of advances in neuroscience to the armed forces and security personnel. It was chaired by Prof Rod Flower FRS, professor of biochemical pharmacology at the William Harvey Research Institute, Queen Mary University of London, and brought together international experts to discuss new developments in the field and the laws and ethics that apply to their application in a military and civil context. According to Flower, it was the first time the Ministry of Defence’s (MoD) Defence Science and Technology Laboratory (Dstl) had worked with academia in neuroscience. "The people we were in contact with there were the horizon scanning team whose job it is to look out for potential new applications of research," Flower said. "We relied on what was publicly available, and while the US is extremely open about what its military does, the UK is not so open, and countries like Russia and China are a complete cipher." A key advance in neuroscience has been improvements in real-time neuro-imaging, which can indicate in great detail which parts of the brain "light up" when undertaking certain activities. One of its applications could be to screen potential recruits for a specific role, for example to see if they are temperamentally suited to be a commander, pilot or diver. "At the moment it’s very much a case of taking people on and subjecting them to high-stress exercises and choosing the ones who make it," says Flower. "If they could be subjected to imaging during assessment you could identify who has good risk-taking behaviour, strategy and planning ability, or 3D analytical skills."

Brain scanning for target identification

Brain scanning could also speed up and improve target recognition or identify changes in surveillance satellite images by recognising subconscious objective identification rather than an operator having to process and actively react. "It has been discovered that when you show the brain different images, it spots the differences between them even though they may not reach conscious awareness," says Flower. "Wearing a helmet like a hairnet can pick up a spike in brain activity which you can correlate to differences identified between two images, even if they were flashed up too quickly to process consciously." That potentially has the ability not only to speed up the process of target selection but also improve accuracy. It could also reduce problems associated with fatigue, which is a big issue facing people whose job involves scanning images for a long time, especially in the dark, such as surveillance UAV operators. (A toy sketch of this trial-averaging idea appears at the end of the article.)

Mind controlled weapons and aircraft
Technologies such as the BrainGate implant have already shown that machinery can be controlled with the mind alone, and games manufacturers have already brought out low-cost helmet controllers that enable wearers to play by mind power alone. The obvious application for the military is mind-controlled weaponry and remotely-piloted aircraft, which could make operation and reactions far faster. "If you couple that with your subconscious mind being much faster at dealing with information you can see a situation sometime in the future where you’re not thinking about flying the aircraft, but your subconscious is doing it without interfering in any way," says Flower. "You would probably have a much better appreciation of an incoming threat and fire off a couple of missiles without having to consciously think."

Drugs to stimulate troops and disable enemies

The report also examines evidence that certain drugs can improve the performance of personnel performing certain military tasks. Among these, drugs developed to relieve the symptoms of Attention Deficit Hyperactivity Disorder (ADHD) in children, such as Ritalin, have shown great promise on unaffected adults who want to focus their attention on a specific task. "It could help when flying a long mission where you may become fatigued and your attention begins to drift off," says Flower. "It could also help you focus when you have a lot of information to process, like being a fighter pilot in a particularly tense situation when you’re trying to get a missile lock on a target while the aircraft and radio are bombarding you with information and you have to communicate back." Another approach that could improve the way the brain works is known as trans-cranial electrical stimulation, where electrodes attached to a 9V battery are clamped to the head. Controlled studies showed it can improve the rate at which things are learnt, and possibly result in better memory formation. One controversial subject the report touches on is that of neuropsychology-inspired chemical weapons, discussing the fact that although the international Chemical Weapons Convention (CWC) bans the use of chemical weapons on the battlefield, they are allowed for civil law-enforcement purposes. "One of the problems is, as far as anyone in our field can find, it’s not possible to find a totally safe drug that you could use," says Flower, citing the example of the Moscow theatre siege in which 150 civilians died alongside their Chechen rebel captors. "It’s partly because everyone’s unique and responds in different ways. If you start spraying it around you may affect children, women, men, pregnant women, old men, people taking other drugs, and people with heart disease. It won’t just be the 70kg healthy young men on which these drugs are tested." Flower is also keen to bust some myths about some chemicals that were reportedly tested for their effects on enemy troops. "Oxytocin is a hormone that’s produced in pregnancy that produces a feeling of emotional closeness and trust," says Flower. "There was a lot of talk that you may be able to use this as an interrogation tool to make your captive trust you and tell you all his secrets. But as far as we can tell that’s all nonsense."

The ethics of neural war

Like automated weaponry and battlefield robotics, however, these new techniques could require an overhaul of ethical guidelines, especially with regards to civilian casualties. Currently the last person who gave the order to fire is responsible, but if it came from the operator’s subconscious, the line becomes blurred. With advances in neuroscience holding such great potential for military applications, Flower would like to see the MoD work more closely with academia. One approach would be to have a two-way intern exchange between the MoD and academia. "It’s not rocket science, the research is all out there, and most of it gets published," says Flower. "It’s just a question of them being aware of it and able to pick up the ideas and exploit them before they read about it in Nature."
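The image-screening technique Flower describes rests on averaging many single EEG trials and looking for a characteristic deflection a few hundred milliseconds after a "target" image flashes past. The sketch below only illustrates that averaging-and-thresholding idea on synthetic data; the sampling rate, latency window, threshold and signal shape are all assumptions, and this is not a description of BrainGate, Dstl equipment or any real classifier.

```python
# Toy illustration of spotting an event-related "spike" in averaged EEG epochs,
# in the spirit of the rapid image-screening idea described in the article.
# Synthetic data only; sampling rate, window and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # samples per second (assumed)
t = np.arange(0, 0.8, 1 / fs)  # 0-800 ms after image onset

def make_epochs(n_trials: int, target: bool) -> np.ndarray:
    """Simulate single-trial EEG: noise, plus a ~300 ms bump for target images."""
    epochs = rng.normal(0.0, 5.0, size=(n_trials, t.size))
    if target:
        epochs += 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return epochs

def shows_response(epochs: np.ndarray, threshold: float = 2.0) -> bool:
    """Average over trials and test for a peak in the 250-450 ms window."""
    avg = epochs.mean(axis=0)
    window = (t >= 0.25) & (t <= 0.45)
    return bool(avg[window].max() > threshold)

print("target images   ->", shows_response(make_epochs(100, target=True)))   # expected True
print("ordinary images ->", shows_response(make_epochs(100, target=False)))  # expected False
```

The single-trial signal is buried in noise; only after averaging dozens of repetitions does the target-related bump rise clearly above threshold, which is why such systems need many rapid image presentations.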
https://www.army-technology.com/features/featureweapons-of-perception-neuroscience-mind-controlled-weapons-and-the-military/
Welcome back to my Throwback Thursday series, where I republish old reviews, review books I have read before or review older books I have only just had a chance to read. In my latest Throwback Thursday I look at one of the more interesting novels from the Star Wars Legends universe, Darth Plagueis by James Luceno. With Star Wars day on the horizon, I have decided to go back and check out some of the key books in the now defunct Star Wars Legends universe. While no longer canon, there are still some amazing books in the Legends range, including some that will no doubt serve as an inspiration for some future shows or movies. I have already enjoyed several Legends books, such as Maul: Lockdown, Scoundrels and Death Troopers, but there are still more epic reads that I really want to check out. Probably the one I was most interested in reading was the epic Darth Plagueis by James Luceno. Luceno, who also wrote the fantastic novel Tarkin in the current Disney canon, is a very talented author, and I was very excited to check out his take on the elusive and mysterious Darth Plagueis. “Did you ever hear the tragedy of Darth Plagueis the Wise?” Throughout the long and bloody history of the Republic, many Sith lords have risen to threaten the peace and order maintained by their hated rivals, the Jedi. While some have put complex and deadly plans into effect, few have reached the pinnacle of power, influence or mastery of the Dark Side of the Force attained by the mysterious Darth Plagueis, whose malign guidance shaped the galaxy in terrible ways and introduced a great darkness. Upon killing his master and obtaining all the power he ever desired, Darth Plagueis set out to continue his order’s greatest goal: destroying the Jedi and claiming the Republic as his own. Using his position as a powerful member of the Banking Clan, Darth Plagueis worked to manipulate the Republic into chaos and slowly lead the Jedi to a war they had no hope of winning. However, even a Sith as powerful as Darth Plagueis is unable to do everything on his own, and he soon seeks out a powerful Force user to take on as his apprentice, a talented politician from Naboo known only as Palpatine. Renaming Palpatine Darth Sidious, Plagueis begins manipulating events to ensure that his apprentice becomes a major power in the Senate, planning to elevate him to the role of Supreme Chancellor while also destroying those opponents who threaten their plans. However, despite the importance of their plan, Plagueis’s main desire is not the defeat of the Jedi but of a far older enemy, death itself. Diving into the mysteries of the Force, Plagueis will explore avenues of power not seen for millennia as he attempts to become the immortal master of the galaxy. But his obsession with endless life could yet be his greatest undoing. Wow, Luceno did not disappoint with this fantastic Star Wars novel. Darth Plagueis is an impressive and captivating read that perfectly tells the story of a particularly elusive figure. Bringing in some heavy Star Wars elements from the extended lore, Luceno has crafted a brilliant character-driven story that I had an extraordinary time listening to. Luceno has come up with an interesting story for the Darth Plagueis novel that achieves several goals at once. Not only does it tell the complete story of this legendary Sith Lord, but it provides some interesting context for other pieces of Star Wars fiction, while also containing a powerful story of intrigue, betrayal and darkness.
Set over a period of roughly 35 years and told from the perspectives of Darth Plagueis and Darth Sidious (with a few scenes seen from other characters, like Darth Maul), this brilliant novel does an excellent job of exploring the primary characters while also showing their malicious actions across various theatres of the Star Wars universe. While the novel starts off a little slow, you soon become engrossed in the story as you encounter multiple layers of manipulation and politics as Plagueis attempts to control the galaxy and make his major plans. The story is broken into three distinct periods, the first showing some of Plagueis’s early movements as a Sith Master and his initial meeting and recruitment of Sidious. The second part of the book, set 20 years before the events of The Phantom Menace, showcases Sidious as he becomes established as a Senator as Plagueis contends with some dangerous opponents and plots as he sets up the earliest stages of his master plan. The final third of the novel is set in the lead-up/during the events of The Phantom Menace, where you see many of the storylines come together, as well as the final chapters of the relationship between Plagueis and Sidious. I had a really great time with this compelling story, and it is one that I feel will appeal to a lot of Star Wars fans. While I was a little surprised at the suddenness of some of the time skips, I felt that all three major parts of the novel were really good, and I loved how well they flowed together to create one coherent and fantastic read. The three separate time periods allow for a massive story, while also featuring some of the key moments of the main character’s lives. Featuring a ton of intriguing and heavy bits of Star Wars lore, parts of the story do drag a little in places, especially as there is a little less action than your typical Star Wars novel. However, I found all the politics, machinations and expansions of the Star Wars lore to be extremely fascinating, and there is a brilliant story hidden in there. The story is also not completely bereft of action, and there are some pretty cool fight sequences scattered throughout the book, including some that show off Plagueis’s full, terrifying abilities. This story had an excellent tone and pace to it, and I feel that everything came together extremely well and I was pretty enraptured by every damn moment of it. This was a really good Star Wars novel, and it is one that will appeal to a wide range of fans, especially those who enjoyed the Legends range. While Darth Plagueis is technically no longer canon, Luceno really went out of his way to connect it to the wider Star Wars canon, which is something I really appreciated about this book. In many ways, Darth Plagueis serves as the ultimate companion to the prequel films as Luceno attempted to fill in some plot holes and unexplained bits of the movies, by exploring the entirety of the Sith’s rise to power. Bringing in a ton of obscure lore, you get an unparalleled view of how Plagueis and Sidious manipulated events in the Legends canon to lead to the events of the films, and this really helps to fill in some gaps. Luceno also includes multiple moments from The Phantom Menace film throughout the story, and it was pretty fascinating to see why parts of the antagonist’s plot came together like they did, as well as some excellent alternate views of certain key scenes. I also deeply enjoyed how Darth Plagueis tied into a ton of other pieces of Star Wars Legends fiction, including books, comics and games. 
Multiple prior novels are mentioned or connected to this novel in some way, and I felt that Luceno did a really good job of inserting elements from the already massive extended universe into his book and connecting the stories together and giving all of them more context and interest. All these connections helped to create a novel that is particularly compelling and intriguing to dedicated Star Wars fans, who will love seeing the events of this book unfold. While those fans who have only seen the movies will probably be able to enjoy this book easily enough (with only some minor confusion about some of the more obscure parts of the lore), this is a novel best enjoyed by readers who have checked out some other Star Wars Legends books and will appreciate how it fits into that wider version of the canon. I did like a lot of the universe-building that Luceno did in this novel, as the author explored some fascinating parts of the Legends universe. Not only does the reader get to experience a lot of obscure elements of Star Wars lore, including aliens, technology, locations and other cool things, but this also serves as one of the most impressive looks at the Sith and the Dark Side of the Force. Due to the deep examinations of the Sith and its history by Plagueis, as well as other elements contained in the training of Palpatine, the reader is flooded with knowledge about these Dark Side users and their ways, which proves to be quite intriguing. I had a brilliant time learning more about these deep elements of lore, especially as the characters talk about practicalities as well as history. The differences between various forms of the Dark Side are very cool, as you see some comparisons between Plagueis’s more scientifically based usage of the Force and the Dark Side sorcery preferred by Sidious. I also found the characters’ own description and assessment of the Sith and the Force to be surprisingly deep, as the characters see themselves as more of a necessary force there to save the galaxy and the Republic from the Jedi. Darth Plagueis also contains some fantastic detail about the history of the planet Naboo, which I also found really fascinating. Darth Plagueis goes out of its way to explore the history of the planet and the reasons why it became a political and economic factor in the Republic in the lead-up to The Phantom Menace, and I loved seeing the political strife and manipulation that led to this initial war, as well as the rise of characters like Palpatine and Amidala. These brilliant pieces of lore are so much fun to learn about, and I had an incredible time finding out more about the Sith in this canon. Of course, one of the best bits of the lore that Luceno examines in this novel is the role that Darth Plagueis had in the Star Wars universe. First mentioned in that iconic monologue in Revenge of the Sith, Plagueis remained a mostly shadowy and unknown figure until the release of this book, which serves as the ultimate guide to the character and his history. Luceno, who at this point had been planning a Darth Plagueis story for years, does a brilliant job of telling the full story of this great character, and you get an outstanding focus on his entire life, especially his time as a Sith Master. Plagueis, a Muun also known as Hego Damask, is portrayed as a thoughtful, powerful and manipulative being with a surprising nobility and dignity to him. 
Fitted with an intriguing backstory and motivations, you see him grow into an extremely powerful Sith Lord throughout the course of the book, and it was fascinating to see all his plans and machinations. The most significant part of the character’s motivations is his hunt for immortality through the Force. As such, you get a fantastic look at his obsessive experiments and research, as he tries to uncover this ultimate secret. I felt that Luceno did an incredible job of working this mysterious character into the wider Star Wars canon. There are some great moments throughout this book that show this shadowy figure manipulating key events from the shadows to bring about the events of the prequel films. I particularly loved how Luceno fit Plagueis into some scenes from The Phantom Menace, and it is very fun to imagine him watching these moments from just outside camera shot. This really was an incredible examination and exploration of this character, and I had so much fun finally finding out who Darth Plagueis was and how he was connected to the wider story. Despite this story no longer being canon, this novel is really the only guide to Darth Plagueis, and it wouldn’t surprise me if it is used as the primary source material for anyone wanting to introduce him in a future film or television series. While this book does tell the story of Darth Plagueis, in many ways it is just as much about Palpatine as it provides readers with an outstanding look at his early history. Essentially set during the time he was Darth Plagueis’s apprentice, you get some amazing insights into who Palpatine is and how he turned to the Dark Side of the Force. Portrayed as manipulative and insidious since birth, you get to see Palpatine at his most evil and dangerous as he learns about the Force and the Sith. I loved how you get to see various stages of Palpatine’s early life, from his teenage years where he first learns about his powers, to his middle age where he becomes a young ambitious senator and apprentice, to his time as an experienced manipulator and Force user just before becoming Supreme Chancellor. I had a brilliant time seeing Palpatine grow as both a Sith and a politician throughout this book, and you get some fantastic views of his early interactions with key players in the Star Wars canon. I also deeply enjoyed seeing his intriguing dynamic with Darth Plagueis. In pretty much all his other appearances, Star Wars fans only ever see the confident and controlling Palpatine who has no-one above him. However, in Darth Plagueis, you see a somewhat more subservient Palpatine who is forced to bow to the will of one more powerful. Watching him work under another is an interesting change of pace, although some reveals towards the end of the book (and in some other novels, such as Maul: Lockdown) show that he is never as loyal as Plagueis believes. This truly was an outstanding depiction of Palpatine and it was so awesome to see more about our favourite soon-to-be emperor. Aside from Plagueis and Palpatine, the Darth Plagueis novel is loaded with a ton of interesting supporting characters, many of whom had roles in the films, animated series or other pieces of Legends fiction. These intriguing characters help to create the novel’s rich tapestry of politics, intrigue and betrayals, and all of them served some fantastic roles in the book. 
I particularly enjoyed seeing the inclusion of other Sith characters like Count Dooku and Darth Maul, especially as this novel serves as a bit of an origin story for both, as you see Palpatine obtaining and training Maul as well as Plagueis and Palpatine manipulating Dooku to leave the Jedi. I also enjoyed the intriguing look at Plagueis’s own master, Darth Tenebrous, whose brief role showed a whole other aspect to the Sith as he had his own distinctive style. I did think that the crowd of supporting figures with their own story elements slowed the pace of the novel down a little in the middle of the book, but I ended up having a brilliant time enjoying the story set around the awesome main characters. Unsurprisingly, I chose to listen to Darth Plagueis on audiobook rather than seeking out a physical copy of this excellent novel. I naturally had a very fun time listening to this version of the book, which not only featured a brilliant narrator but also made excellent use of the typical Star Wars audiobook production elements. Darth Plagueis is loaded with cool sound effects and awesome Star Wars music, all of which add to the ambiance of the story in various ways. I particularly liked the use of John Williams’s iconic scores throughout this audiobook, which did a great job of enhancing several scenes and increasing their emotional impact. This was particularly true for some of the darker moments in the book, as some of the music associated with the Sith, the Dark Side and death/destruction is blasted at full volume during some key moments, such as Palpatine discovering his destructive abilities for the first time, or during a couple of massacres. This awesome music was so cool to hear during these scenes, and you really got an increased sense of the powerful emotions and dark deeds that were going on. I also deeply enjoyed the epic narration, as this fantastic audiobook features the vocal talents of actor Daniel Davis (whom audiences of taste will recognise as Niles from The Nanny). Davis gives a powerful and commanding performance here, bringing some major gravitas to the role and the characters. His voice work for the titular character, Darth Plagueis, is really good, and you get a fantastic sense of the character’s power and wisdom as the novel continues. Davis also does a brilliant job of voicing multiple characters and species from the Star Wars films, sounding quite close to their original actors. I loved the voice work for Palpatine, capturing much of the villain’s iconic voice, while also giving it a youthful tilt for the earlier parts of the book. Other characters, such as Count Dooku and Darth Maul, are also expertly portrayed here, and I particularly liked Davis’s take on Christopher Lee’s amazing voice. This outstanding voice work, combined with the sound effects and music, helped to turn this into an exceptional listen that I deeply enjoyed. With a run time just under 15 hours, this is a decent-sized Star Wars audiobook, but listeners can power through it in no time at all. This format comes highly recommended and you will have an outstanding time listening to the Darth Plagueis audiobook. Overall, Darth Plagueis is an impressive and addictive Star Wars Legends novel that I had an incredible time reading. James Luceno really excels at telling complex narratives that examine character origins, and Darth Plagueis did a wonderful and comprehensive job of expanding on a mostly unknown figure. 
I loved learning everything about this awesome Star Wars figure, and Luceno wove an outstanding tale of intrigue and power around him and his apprentice. An absolute must read for all fans of the Star Wars extended universe, I cannot wait until they finally introduce this complex figure into the current canon.
https://unseenlibrary.com/tag/sith/
The following resources contain terminology used when referring to treatments, education, surgery and medically related information pertaining to the feet. The arch of the foot is the area of the bottom of the foot between the heel pad and toe pad. A hereditary dermatosis characterized by the presence of numerous flat wart-like papules on the dorsal aspect of the hand, foot, elbow, and knee. The part of the leg just above the foot; the joint between the leg and the foot. It is a hinge joint formed by the junction of the tibia and fibula with the talus, or ankle bone. The bones are cushioned by cartilage and connected by a number of ligaments, tendons, and muscles that strengthen the joint and enable it to be moved. Because it is in almost constant use, the ankle is particularly susceptible to injuries, such as sprain and fracture. It is also often one of the first joints to be affected by arthritis or gout. A series of abnormal reflex movements of the foot, induced by sudden dorsiflexion, causing alternate contraction and relaxation of the triceps surae muscle. A fungal infection of the skin of the foot; called also tinea pedis. It causes itching and often blisters and cracks, usually between the toes. Causative agents are Candida albicans, Epidermophyton floccosum, and species of Trichophyton, which thrive on warmth and dampness. If not arrested, it can cause a rash and itching in other parts of the body as well. It is likely to be recurrent, since the fungus survives under the toenails and reappears when conditions are favorable. Although Athlete's foot is usually little more than an uncomfortable nuisance, its open sores provide excellent sites for more serious infections. Early treatment and health care supervision ensure correct diagnosis and prevention of complications. Specific diagnosis is made by microscopic examination or culture of skin scrapings for the fungus. A reflex action of the toes, normal during infancy but abnormal after 12 to 18 months of age, or when locomotion begins; after this, it is indicative of abnormalities in the motor control pathways leading from the cerebral cortex and is widely used as a diagnostic aid in disorders of the central nervous system. It is elicited by a firm stimulus (usually scraping) on the sole of the foot, which results in dorsiflexion of the great toe and fanning of the smaller toes. Normally such a stimulus causes all the toes to bend downward. Called also Babinski's sign. A prosthetic foot that allows ankle motion. An abnormal prominence on the inner aspect of the first metatarsal head, with bursal formation, and resulting in lateral or valgus displacement of the great toe. Bunions can be caused by congenital malformation of the bony structure of the foot or by joint disease such as rheumatoid arthritis and are aggravated by wearing high-heeled shoes with pointed toes. When the shoes do not fit properly they force the great toe toward the outer side of the foot. The result is continued pressure on the joint where the great toe articulates with the first metatarsal head. Chronic irritation causes a build-up of soft tissue and underlying bone in the area. Symptoms are swelling, redness, and pain. Mild cases can be relieved by changing to properly fitting shoes. If there is severe pain making ambulation difficult or impossible, anti-inflammatory agents may be effective. Surgical correction (bunionectomy) is indicated when all other measures fail. The hindmost part of the foot; the heel. Affecting the wrist and foot. 
A burning pain often associated with trophic skin changes in the hand or foot, caused by peripheral nerve injury. The syndrome may be aggravated by the slightest stimuli or it may be intensified by the emotions. Causalgia usually begins several weeks after the initial injury and the pain is described as intense, with patients sometimes taking elaborate precautions to avoid any stimulus they know to be capable of causing a flare-up of symptoms. They often will go to great extremes to protect the affected limb and become preoccupied with such protection. Any one of a variety of injuries to the hand, foot, arm, or leg can lead to causalgia, but in most cases there has been some injury to the median or the sciatic nerve. Injections of a local anesthetic at the painful site may bring relief. Sympathectomy may be necessary to eliminate the severe pain, and in the majority of cases it is quite successful. Psychotherapy may be necessary when emotional instability is suspected. Emotional problems may have been present before the initial injury, or they may result from the intense suffering characteristic of severe causalgia. Amputation of the foot, with the calcaneus, talus, and other parts of the tarsus being retained. A high-arched foot with the toes hyperextended at the metatarsophalangeal joint and flexed at the distal joints. Deformity in which the foot is twisted out of normal position; called also talipes. The foot may have an abnormally high longitudinal arch (talipes cavus) or it may be in dorsiflexion (talipes calcaneus) or plantar flexion (talipes equinus), abducted, everted (talipes valgus), adducted, inverted (talipes varus), or various combinations of these (talipes calcaneovalgus, talipes calcaneovarus, talipes equinovalgus, or talipes equinovarus). There are several theories as to the cause of clubfoot. A familial tendency or arrested growth during fetal life may contribute to its development, or it may be caused by a defect in the ovum. It sometimes accompanies meningomyelocele as a result of paralysis. In mild clubfoot there are slight changes in the structure of the foot; more severe cases involve orthopedic deformities of both the foot and leg. Although clubfoot is usually congenital, an occasional case in an older child may be caused by injury or poliomyelitis. Ridges of the skin produced by the projecting papillae of the corium on the palm of the hand and sole of the foot, producing a fingerprint and footprint characteristic of the individual; called also dermal ridges. Swelling of the lower leg and foot, covering all but the sole of the foot. Wedge-shaped bone; applied particularly to three of the bones in the tarsus of the foot. The preferential use, in voluntary motor acts, of the right member of the major paired organs of the body, as the right eye, hand, or foot. The presence of only two digits on a hand or foot. A digit of the hand or foot and corresponding metacarpal or metatarsal bone, regarded as a continuous unit. The pulse felt on the top of the foot, between the first and second metatarsal bones. In 8 to 10 per cent of the population this pulse cannot be detected. A genus of fungi. E. floccosum attacks both skin and nails but not hair, and is one of the causative organisms of tinea cruris, tinea pedis (Athlete's foot), and onychomycosis. A condition in which one or more arches of the foot have flattened out. The front part of the foot. 
A general term for a group of eukaryotic organisms (mushrooms, yeasts, molds, etc.) marked by the absence of chlorophyll, the presence of a rigid cell wall in some stage of the life cycle, and reproduction by means of spores. Fungi are present in the soil, air, and water, but only a few species can cause disease. Among the fungal diseases (mycoses) are histoplasmosis, coccidioidomycosis, ringworm, Athlete's foot, and thrush. Although the fungal diseases develop slowly, are difficult to diagnose, and are resistant to treatment, they are rarely fatal except for systemic mycotic infections, which can be life-threatening, especially for immunocompromised patients. Evaluation of the manner or style of walking, usually done by observing the individual as he walks naturally in a straight line. An adhesive tape support used in treatment of sprains and other painful conditions of the ankle, the tape being applied in a basket-weave fashion with strips placed alternately under the sole of the foot and around the back of the leg. Flexion or clenching of the fingers or toes on stimulation of the palm of the hand or sole of the foot. Angulation of the great toe toward the other toes of the foot. Angulation of the great toe away from the other toes of the foot. A mild, highly infectious viral disease of children, with vesicular lesions in the mouth and on the hands and feet. The hindmost part of the foot; called also calx. By extension, a part comparable to the heel of the foot, or the hindmost portion of an elongate structure. Amputation of the foot between the tarsus and metatarsus. The posterior portion of the foot, comprising the region of the talus and calcaneus. The presence of supernumerary digits on the hand or foot. An autosomal dominant skin disorder, usually occurring in the third or fourth decade of life, characterized by pink or reddish or yellowish brown hyperkeratotic scaly papules on the lower leg and dorsum of the foot, sometimes involving the trunk, thigh, arm, back and palm of the hand, and sole of the foot. Less than the usual number of digits on the hand or foot. A condition resembling trench foot occurring in persons who have spent long periods in water. The dorsal part of the arch of the foot. A hemiplegic gait in which there is a pause after the short step of the normal foot, or in some cases after the step of the affected foot. Amputation of the foot between the metatarsus and tarsus. Mycetoma of the foot. Painful swelling of the foot, usually with fracture of a metatarsal bone, after excessive foot strain. Dorsal flexion of the second to fifth toes on percussion of the dorsum of the foot; in certain organic nervous disorders, plantar flexion occurs. The part of the foot between the ankle and the toes, its skeleton being the five bones (metatarsals) extending from the tarsus to the phalanges. A topical antifungal agent available in two formulations: one for dermatophytic infections such as Athlete's foot, and the other for vulvovaginal candidiasis. The presence of only one finger or toe on a hand or foot. Pain in the metatarsus of the foot. Tenderness or pain in the metatarsal area of the foot and in the third and fourth toes caused by pressure on a neuroma of the branch of the medial plantar nerve supplying these toes. The neuroma is produced by chronic compression of the nerve between the metatarsal heads. Called also plantar neuroma and Morton's disease, foot, or neuralgia. 
A chronic, slowly progressing bacterial or fungal infection usually of the foot or leg, characterized by nodules that discharge an oily pus. Pertaining to the foot or feet. The design, manufacture, fitting, and modification of shoes and related foot appliances as prescribed for the amelioration of painful or disabling conditions of the foot and leg. A deformity in which the anterior part of the foot is displaced and lies laterally to the vertical axis of the leg. A foot with an abnormally high longitudinal arch, either congenital or caused by contractures or disturbed muscle balance. The bottom of the foot; called also sole. Pain in the sole of the foot. Plantar flexion of the foot when the ankle is grasped firmly and the lateral border of the sole is stroked or scratched from the heel toward the toes. A fascial space on the sole of the foot, divided by septa into the lateral, middle, and median plantar spaces. A viral epidermal tumor on the sole of the foot, sometimes the result of going barefoot; unlike other warts, this type is usually sensitive to pressure and may be painful during walking. Called also verruca plantaris. Walking or running flat on the full sole of the foot; characteristic of humans and of such quadrupeds as the bear. The specialized field dealing with the study and care of the foot, including its anatomy, pathology, medical and surgical treatment, etc. Neuralgic pain of the heel and sole; burning pain without redness in the sole of the foot. A supernumerary bone of the foot growing from the medial border of the scaphoid. The act of assuming the prone position, or the state of being prone. Applied to the hand, turning the palm backward (posteriorly) or downward, performed by medial rotation of the forearm. Applied to the foot, a combination of eversion and abduction movements taking place in the tarsal and metatarsal joints and resulting in lowering of the medial margin of the foot, hence of the longitudinal arch. A prosthetic foot with a solid ankle and a cushion heel. A prosthetic foot with a stationary attachment and a flexible endoskeleton. Using the left foot in preference to the right. Flatfoot; talipes valgus. The act of turning the palm forward or upward, or of raising the medial margin of the foot. Disarticulation of the foot with removal of both malleoli. Tarsal bones of the foot. A complex of symptoms resulting from compression of the posterior tibial nerve or of the plantar nerves in the tarsal tunnel, with pain, numbness, and tingling paresthesia of the sole of the foot. Surgical fracture of the tarsus of the foot. The seven bones composing the articulation between the foot and leg, including the talus, calcaneus, navicular bone, cuboid bone, and medial, intermediate, and lateral cuneiform bones. The presence of four digits on the hand or foot. A shoe correction consisting of a heel one half inch longer and an eighth to a sixth of an inch higher on the inside; used to bring the heel of the foot into varus and to prevent depression in the region of the head of the talus. A condition of the feet resembling frostbite, due to the prolonged action of water on the skin combined with circulatory disturbance due to cold and inaction. The presence of only three digits on the hand or foot. Amputation of the foot through the calcaneus. That in which the right foot and left crutch or cane are advanced together, and then the left foot and right crutch. 
Contraction of the fingers and sometimes of the wrist, or of analogous parts of the foot, with loss of power, after severe injury or improper use of a tourniquet or cast in the region of the elbow. Congenital deformity of the foot due to tibiotarsal dislocation.
http://footpainmanagement.org/foot-terminology/
It sounds counter-intuitive, right? How can NOT thinking like a pianist make me a better pianist? I know that's what I thought. Then I saw this lesson from Jay Oliver, and it all made sense. As piano players, we're often guilty of getting stuck in comfortable ruts and patterns. I know I am. Think about it — chords and melody in the right hand. Bass in the left. Sometimes we'll play fifths, sometimes octaves. If we're CRAZY we might even play arpeggios 🙂 But it's usually the same. Which leads me to a controversial point… It's hard for me to admit, but it's true. In Jay's lesson, he shows how using guitar-style chord voicings can radically change the sound and feel of your playing — for the better. You do this by switching things up — and playing the rhythm in the right hand with an open shape. In this lesson we're in the key of G and Jay just plays the 1-5-1 (G-D-G) in his right hand, rocking back and forth. It's a beautiful, sparse, open sound that leaves room for the left hand. And this is where we use the guitar voicings. A lot of guitar chords don't have the root note as their bass note. They often use inversions to create unique sounds. We have a lot of lessons on inversions for your right hand, but what's different about this lesson is that you'll be using inversions in your left to create the feel and movement of the chord progression. That way you can take common chord progressions and completely change the way they sound. In the lesson, Jay uses the 1-4-1-5 chords (G-C-G-D). Simple chords, but it sounds amazing and fresh. Because he's not thinking like a pianist. The possibilities are endless. In the second half of the lesson, Jay introduces some chords that are OUTSIDE the key of G major. These are called non-diatonic chords. We have a lesson on diatonic chords here. In this example, Jay introduces an F major chord, and then an Eb Maj 7. There are very good theoretical reasons why these chords work, but that's not the point of this lesson. The point here is to show you how new and different the STYLE sounds when you start borrowing techniques from other instruments. The chord voicings are common guitar voicings, and we can even add some drum technique with rhythm and syncopation. The result is that you've created something on the piano that most piano players won't ever try. You've stopped thinking like a pianist, and by doing so … You've become a better one.
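To make the "use inversions in the left hand" idea concrete, here is a small sketch of my own (it is not taken from Jay's lesson): it spells the G-C-G-D progression as triads and, for each chord, picks the inversion whose bass note moves least from the previous one, which is roughly how guitar-style voicings avoid leaping to a new root on every chord. The note spellings are standard triads; the "minimal movement" rule is a simplification.

```python
# Sketch: choosing left-hand inversions for a 1-4-1-5 progression in G
# so that each chord moves as little as possible from the previous one.
# Chord spellings are standard major triads; the voice-leading rule is
# a simplified stand-in for guitar-style voicings.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def triad(root_midi: int) -> list[int]:
    """Root-position major triad (root, major third, perfect fifth)."""
    return [root_midi, root_midi + 4, root_midi + 7]

def inversions(chord: list[int]) -> list[list[int]]:
    """Root position plus first and second inversions."""
    result, notes = [], chord[:]
    for _ in range(3):
        result.append(sorted(notes))
        notes = notes[1:] + [notes[0] + 12]  # move lowest note up an octave
    return result

def closest_voicing(prev: list[int], chord: list[int]) -> list[int]:
    """Pick the inversion whose bass note is nearest the previous bass."""
    return min(inversions(chord), key=lambda v: abs(v[0] - prev[0]))

def name(midi: int) -> str:
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

G3, C3, D3 = 55, 48, 50
progression = [triad(G3), triad(C3), triad(G3), triad(D3)]  # G - C - G - D

voicing = progression[0]
for chord in progression:
    voicing = closest_voicing(voicing, chord)
    print(" ".join(name(n) for n in voicing))
```

Running it, the C chord comes out with G in the bass and the D chord with F# in the bass, so the left hand barely moves while the harmony changes, which is the kind of feel and movement the lesson describes.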
https://www.pianote.com/blog/dont-think-like-a-pianist/
Plug-in vehicles can behave either as loads or as a distributed energy and power resource in a concept known as vehicle-to-grid (V2G) connection. This paper reviews the current status and implementation impact of V2G/grid-to-vehicle (G2V) technologies on distributed systems, requirements, benefits, challenges, and strategies for V2G interfaces of both individual vehicles and fleets. The V2G concept can improve the performance of the electricity grid in areas such as efficiency, stability, and reliability. A V2G-capable vehicle offers reactive power support, active power regulation, tracking of variable renewable energy sources, load balancing, and current harmonic filtering. These technologies can enable ancillary services, such as voltage and frequency control and spinning reserve. Costs of V2G include battery degradation, the need for intensive communication between the vehicles and the grid, effects on grid distribution equipment, infrastructure changes, and social, political, cultural, and technical obstacles. Although V2G operation can reduce the lifetime of vehicle batteries, it is projected to become economical for vehicle owners and grid operators. Components and unidirectional/bidirectional power flow technologies of V2G systems, individual and aggregated structures, and charging/recharging frequency and strategies (uncoordinated/coordinated smart) are addressed. Three elements are required for successful V2G operation: power connection to the grid, control and communication between vehicles and the grid operator, and on-board/off-board intelligent metering. Success of the V2G concept depends on standardization of requirements and infrastructure decisions, battery technology, and efficient and smart scheduling of limited fast-charge infrastructure. A charging/discharging infrastructure must be deployed. Economic benefits of V2G technologies depend on vehicle aggregation and charging/recharging frequency and strategies. The benefits will receive increased attention from grid operators and vehicle owners in the future.
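As a toy illustration of the coordinated ("smart") charging strategy the abstract contrasts with uncoordinated charging, the sketch below valley-fills a fleet's energy requirement into the lowest-load hours of the day. The load curve, fleet energy need and charger limit are invented for the example and are not taken from the paper.

```python
# Toy valley-filling scheduler: charge a small EV fleet in the hours where
# baseline grid load is lowest. Illustrative only; the load curve, fleet
# energy need, and charger rating are assumptions, not data from the paper.

baseline_mw = [620, 580, 550, 530, 540, 600, 720, 850, 900, 880, 860, 850,
               840, 830, 840, 870, 920, 980, 1000, 960, 900, 820, 740, 670]

fleet_energy_mwh = 300.0   # total charging energy required over the day
max_charge_mw = 60.0       # aggregate charger limit per hour

def valley_fill(baseline, energy, limit):
    """Allocate charging to the lowest-load hours first."""
    schedule = [0.0] * len(baseline)
    for hour in sorted(range(len(baseline)), key=lambda h: baseline[h]):
        if energy <= 0:
            break
        schedule[hour] = min(limit, energy)
        energy -= schedule[hour]
    return schedule

schedule = valley_fill(baseline_mw, fleet_energy_mwh, max_charge_mw)
for hour, (base, ev) in enumerate(zip(baseline_mw, schedule)):
    if ev > 0:
        print(f"{hour:02d}:00  baseline {base:4.0f} MW  + EV charging {ev:4.0f} MW")
```

Uncoordinated charging would instead add the same energy on top of the evening peak; a V2G-capable, bidirectional fleet could additionally discharge into those peak hours, which is the ancillary-service side of the concept.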
https://experts.illinois.edu/en/publications/review-of-the-impact-of-vehicle-to-grid-technologies-on-distribut
PhD - Karsten Hedegaard: Wind power integration with heat pumps, heat storages, and electric vehicles – Energy systems analysis and modelling, PhD thesis, DTU, Technical University of Denmark, 2013. Abstract Wind power is in many countries considered a key renewable energy technology in achieving the goals of reducing greenhouse gas emissions and relieving the dependency on fossil fuels. However, the fluctuating and only partly predictable nature of wind challenges an effective integration of large wind penetrations. This PhD thesis investigates to what extent heat pumps, heat storages, and electric vehicles can support the integration of wind power. Considering the gaps in existing research, the main focus is put on individual heat pumps in the residential sector (one-family houses) and the possibilities for flexible operation, using the heat storage options available. Several energy systems analyses are performed using the energy system models Balmorel, developed at the former TSO Elkraft System, and EnergyPLAN, developed at Aalborg University. The Danish energy system towards 2030, with wind power penetrations of up to 60 %, is used as a case study in most of the analyses. Both models have been developed further, resulting in an improved representation of individual heat pumps and heat storages. An extensive model add-on for Balmorel renders it possible to optimise investment and operation of individual heat pumps and different types of heat storages, in integration with the energy system. Total costs of the energy system are minimised in the optimisation. The add-on incorporates thermal building dynamics and covers various heat storage options: intelligent heat storage in the building structure for houses with radiator heating and floor heating, respectively, heat accumulation tanks on the space heating circuit, as well as hot water tanks. In EnergyPLAN, some of the heat storage options have been modelled in a technical optimisation that minimises fuel consumption of the energy system and utilises as much wind power as possible. The energy systems analyses reveal that in terms of supporting wind power integration, the installation of individual heat pumps is an important step, while adding heat storages to the heat pumps is less influential. As such, the installation of individual heat pumps can contribute significantly to facilitating larger wind power investments and reducing system costs, fuel consumption, and CO2 emissions. This is first due to the high energy-efficiency and economic competitiveness of the heat pumps. Moreover, their electricity demand profile is well suited for integrating wind power, even when not operated intelligently. The political phase-out of coal in Denmark by 2030 furthermore creates particularly good conditions for utilising wind power in meeting the electricity demand for the heat pumps. When equipping the heat pumps with heat storages, only moderate system benefits can be gained. The main system benefit is that the need for peak/reserve capacity investments can be reduced through peak load shaving; in Denmark by about 300-600 MW, corresponding to the size of a large power plant. This can be achieved when investing in socio-economically feasible heat storages complementing the heat pumps. The potential for reducing the required investments in peak/reserve capacities is crucial for the feasibility of the heat storages.
Intelligent heat storage in the building structure is identified as socio-economically feasible in 20-75 % of the houses with heat pump installations, depending on the cost of control equipment in particular. Investment in control equipment, enabling utilisation of existing hot water tanks for flexible heat pump operation, is found socio-economically feasible in about 20-70 % of the houses. In contrast, heat accumulation tanks are not competitive, due to their higher investment costs. Further analyses investigate the system effects of a gradual large-scale implementation of battery electric vehicles (BEVs) and plug-in hybrid electric vehicles (PHEVs) in Denmark, Finland, Norway, Sweden, and Germany towards 2030. When charged/discharged intelligently, the electric vehicles can, in the long term, facilitate larger wind power investments, while in the short term they are in many cases likely to result in increased coal-based electricity generation. The electric vehicles can contribute significantly to reducing CO2 emissions, while system costs are generally increased, due to assumed investments in the costly BEVs. The need for peak/reserve capacities can be reduced through the use of vehicle-to-grid capability. Flexible operation will be more important for electric vehicles than for individual heat pumps. The reason is that without flexible operation, the electricity demand for charging of electric vehicles will typically be concentrated in the hours where conventional electricity demand peaks, while individual heat pumps will have a more distributed load profile. Competing flexibility measures, such as large heat pumps, electric boilers, and thermal storages in the district heating system, have also been included in the energy systems analyses. These technologies can together facilitate increased wind power investments and reduce CO2 emissions in the same order of magnitude as a large-scale implementation of electric vehicles. The connection between large heat pumps/electric boilers and the large district heating storages allows for storing electricity as heat during longer periods when needed. This is an advantage compared to individual heat pumps and electric vehicles, which will mainly be able to provide power balancing intra-day and intra-hour, due to smaller storage capacities. Overall, it is concluded that individual heat pumps, flexibility measures in the district heating system, and PHEVs can provide significant contributions to a cost-effective integration of wind power towards 2030. Heat storages complementing individual heat pumps can contribute only moderately in this regard.
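The peak-shaving mechanism described above can be illustrated with a deliberately simplified sketch. The thesis itself uses cost-minimising optimisation (the Balmorel add-on) and a fuel-minimising technical optimisation (EnergyPLAN); the hour-by-hour rule, COP, storage size, and toy load profile below are purely illustrative assumptions, not results or methods from the thesis.

    # Illustrative only: a simple rule for one house with a heat pump and a small
    # heat storage. Run the heat pump harder in low-load hours to fill the storage,
    # and let the storage cover part of the demand in high-load hours.
    # Standing heat losses and forecasting are ignored for brevity.
    import statistics

    def shave_peaks(heat_demand_kwh, system_load_gw, storage_kwh,
                    cop=3.0, hp_max_kw_th=8.0):
        threshold = statistics.median(system_load_gw)
        stored, electricity = 0.0, []
        for q, load in zip(heat_demand_kwh, system_load_gw):
            if load <= threshold:
                # Low-load period: cover demand and top up the storage if possible.
                charge = min(storage_kwh - stored, max(hp_max_kw_th - q, 0.0))
                stored += charge
                hp_heat = q + charge
            else:
                # High-load period: draw as much as possible from the storage.
                discharge = min(stored, q)
                stored -= discharge
                hp_heat = q - discharge
            electricity.append(hp_heat / cop)   # electricity = heat / COP
        return electricity

    # Toy day in 3-hour blocks: heat demand and system load both peak in the evening.
    demand = [1.0, 1.0, 1.0, 1.5, 2.0, 3.0, 4.0, 3.5]   # kWh heat per block
    load   = [20, 18, 17, 22, 28, 35, 40, 33]           # system load, GW
    print([round(e, 2) for e in shave_peaks(demand, load, storage_kwh=5.0)])

Even this crude rule moves heat-pump electricity use out of the highest-load blocks, which is the effect the thesis quantifies as a 300-600 MW reduction in required peak/reserve capacity for Denmark.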
http://balmorel.com/index.php/activities/theses/51-wind-power-integration-with-heat-pumps-heat-storages-and-electric-vehicles
I often talk and teach my clients about boundaries. Setting limits on what you will and won’t do, what you can and can’t give. Healthy emotional boundaries are essential to mental well being. If you want to learn more about setting boundaries read this article or watch this video. When we start to set emotional boundaries, people around us often get upset because we are “rocking the boat”. But if we want other people to respect our boundaries, we also must learn to respect theirs. As much as we might want to, we just cannot control other people. Often, we get angry when other people don’t do what we want them to do. In reality, we are not actually angry, we are disappointed. We need to learn to look under the anger and verbalize our real hurt and then deal with it appropriately. We are responsible for our own emotions and how we respond to them. When we don’t respect other people’s boundaries, we live a very frustrated life. You ask someone for help, she says no and you assume she hates you. Your brother’s family says they can’t travel this year for Christmas, and you get angry because they aren’t doing what you want, what you think they SHOULD do. You don’t like your sister’s new boyfriend, and you keep telling her that, but she just doesn’t listen! Your spouse is exhausted and doesn’t want to talk about your kid’s grades right now, but you won’t let it go. Unfortunately, when we try to control other people, which is the same thing as not respecting their boundaries, all we get is super frustrated and relationships that are filled with fighting. Eventually you wind up in a place where either no one wants to spend time with you, or you are just always arguing with people. Your relationships are miserable, you are lonely and frustrated. The reality is, all of us have to learn this lesson at some point in life. It’s a tough one. But the sooner we learn to not attempt to control others, the sooner we will feel more peace with ourselves and our relationships. Learning to stop trying to control others is difficult. However, if we put in the effort, and learn to take responsibility for our own emotions and reactions, we can find more peace overall, and more closeness in our important relationships. Keep reading for 3 benefits of giving up on controlling others. 1. You will be less disappointed. Notice I didn’t say you will never be disappointed. People will let us down, that’s just part of life. But when you give others the freedom to make their own choices, you won’t stress out about every decision they make. Trying to force people, or manipulate them, into making choices you want them to make is totally pointless, and just plain not fair. Do you want people dictating your choices to you? Of course not. Love people enough to let them make their own decisions. This often happens with extended families. Let’s say the Smith family is going on vacation. Mr. & Mrs. Smith have 3 grown children, Charles, Evan and Sarah. All the grown children are married and have small children. They all decide to go to Disney World for 5 days. On the third day, Sarah’s 1 year old son is exhausted. She and her husband decide that their family will skip dinner with the whole group and go back to the hotel for some down time. Mr. Smith is livid. He tells Sarah that he and Mrs. Smith paid for this trip, and she is clearly ungrateful since she and her family are skipping dinner. Sarah, while hurt from her father’s words, simply excuses herself and her family and they take their son back to the hotel. Mr. 
Smith wasn’t respecting his adult daughter’s boundaries. He was using the fact that he and his wife paid for the trip to manipulate his daughter into doing what he wanted her to do. Sarah set a boundary, she made her own choice to put her son’s needs first and go back to the hotel for down time. Mr. Smith put his own desires first, ignored his adult daughter’s right to make her own choices, and caused tension for the whole family. Plus, for himself, Mr. Smith will spend the rest of his night angry, not enjoying his vacation. If Mr. Smith could say to himself, “I’m disappointed Sarah and her family will miss dinner, but I understand they need to choose what is best for them,” he would be respecting her boundaries and save himself a lot of disappointment and anger. 2. You will be an emotionally safe person for the people you care about. When we ignore other people’s limits we are really ignoring their emotional needs. Some people will tell you emotions are bad, ignore them. Really though, we all know that’s not true. Emotions point out things that are really going on for us. We can’t let emotions rule us, but we absolutely need to pay attention to them. Plus, sharing our feelings with others is how we bond with them, and we all need to be connected to others. Hannah and Lydia are sisters, both in their early twenties. Lydia has a pattern of dating guys that don’t treat her very well. Hannah hates seeing her sister in these situations. Recently Lydia started dating Tim, who is also not very nice to Lydia. Hannah called her sister and said to her, “Why are you doing this again? Why don’t you get it? Tim is just like all the others! He drinks too much, and he only cares about himself. You need to break up with him now! I’ll be so mad at you if you stay with him.” Hannah’s heart is in a good place, she doesn’t want to see Lydia get hurt. However, Lydia is an adult, and can make her own choices. It’s not Hannah’s job to decide who Lydia can or cannot date. Could Hannah express her concern in a way that respects Lydia’s right to make her own choices? Absolutely. Imagine if Hannah had said, “Lydia, I really want you to be happy. I know that the last few guys you dated really hurt you. I don’t want to see that happen again, and sometimes I worry Tim is like the others. Do you think that could be true?” Hannah is expressing her hesitation about Tim, but still allowing Lydia to make her own choices. When we allow others to make decisions in their own lives, we are an emotionally safe person for them. Which Hannah do you think Lydia is more likely to talk to? Of course, the second Hannah. No one will respond well when demands are made on them. It’s not emotionally safe. The second Hannah made it okay for Lydia to share how she really feels and created more connection between the sisters. 3. You will have less stress in your relationships. Controlling others is downright exhausting, and honestly, it never, ever works out well. When we stress out over other people’s decisions, we are only causing angst for ourselves. Again, we might feel disappointment in people’s choices, but we have to allow them the space to make their own decisions. Laura is married and has three kids. Her brother Shawn is also married with four kids and lives about 30 minutes away from Laura’s family. Shawn and his wife have their four kids signed up for soccer, swimming, karate, baseball and piano lessons. They don’t have a lot of free time with all four kids’ activities. 
Laura and her husband only allow their kids to do one activity at a time. They value having down time and time with family more than activities. Laura frequently asks Shawn and his family to spend time with her family, but most often they can’t because of their schedule. The constant turning down of invitations makes Laura angry. She often tells her husband, “Why do they have to do everything? Don’t they value family? They are just too busy! It’s awful! I guess they just don’t care about seeing us.” Laura decides to call Shawn and tell him she is angry that they never see his family. Shawn doesn’t take too well to his sister’s call, gets really defensive and angry himself, and the call doesn’t end well. Laura is really disappointed that she doesn’t see more of her brother and his family, which is, of course, understandable. However, she cannot tell Shawn and his wife how to set up their children’s schedule, because that’s not Laura’s job. She is trying to control Shawn and his wife’s choices to get what she wants. Her phone call, which was her ignoring Shawn’s boundaries, only made her relationship with her brother worse. You will not always agree with the choices your loved ones make. And maybe you are right some of the time. No matter what, you can’t push people into making decisions you want them to make. It absolutely can be hard to watch people make choices you don’t agree with, but you have to respect them as adults and give them space to decide for themselves what they will and will not do. Emotional boundaries are essential for mental wellness. We need to know what is our responsibility and what is not. We need to own our own feelings and reactions and deal with them appropriately. We also must allow others in our lives to do the same. We must manage our own disappointment and not try to manipulate and control others. When we keep trying to hold on to control, we not only hurt our relationships, we hurt ourselves. We cause extra stress, fighting and anger. Very often I see this in my clients’ marriages. One spouse tries to control the other, and they can’t use words to describe their disappointment, so they yell and fight with one another. Maybe you’ve had a traumatic experience, perhaps a sexual abuse history, and you manage your anxiety by trying to control those around you. All you are really doing is creating more strife in your marriage or other relationships. Do you recognize any aspect of yourself in these stories? Are you struggling to respect other people’s boundaries? Chances are if you don’t respect others’ limits, you have no limits of your own. Do you want to find more peace in your relationships and for yourself? Here at Lime Tree Counseling, LLC, I love helping clients figure out how to take responsibility for what issues are their own to work on and what things they need to let go for others to work on. You are only responsible for you. Can I help you draw the emotional map of what’s yours and what isn’t? Contact us today to schedule a free phone consult or to schedule an appointment.
https://limetreecounseling.com/the-flip-side-of-boundaries-3-reasons-to-respect-others-limits/
This note argues that in a situation of inelastic natural gas supply, a restrictive monetary policy in the euro zone could reduce the energy bill and therefore has additional merits. A more hawkish monetary policy may be able to indirectly use monopsony power on the gas market. The welfare benefits of such a policy are diluted to the extent that some of the supply (approximately 10 percent) comes from within the euro zone, which may give rise to distributional concerns. 359 We collect data on the size distribution of all U.S. corporate businesses for 100 years. We document that corporate concentration (e.g., asset share or sales share of the top 1%) has increased persistently over the past century. Rising concentration was stronger in manufacturing and mining before the 1970s, and stronger in services, retail, and wholesale after the 1970s. Furthermore, rising concentration in an industry aligns closely with investment intensity in research and development and information technology. Industries with higher increases in concentration also exhibit higher output growth. The long-run trends of rising corporate concentration indicate increasingly stronger economies of scale. 358 Liquidity derivatives (2022) It is well established that investors price market liquidity risk. Yet, there exists no financial claim contingent on liquidity. We propose a contract to hedge uncertainty over future transaction costs, detailing potential buyers and sellers. Introducing liquidity derivatives in Brunnermeier and Pedersen (2009) improves financial stability by mitigating liquidity spirals. We simulate liquidity option prices for a panel of NYSE stocks spanning 2000 to 2020 by fitting a stochastic process to their bid-ask spreads. These contracts reduce the exposure to liquidity factors. Their prices provide a novel illiquidity measure reflecting cross-sectional commonalities. Finally, stock returns significantly spread along simulated prices. 357 Spillovers of PE investments (2022) In this paper, we investigate a primary potential impact of leveraged buyout (LBO) transactions: the effects of LBOs on the peers of the LBO target in the same industry. Using a data sample based on US LBO transactions between 1985 and 2016, we investigate the impact on the peer firms in the aftermath of the transaction, relative to non-peer firms. To account for potential endogeneity concerns, we employ a network-based instrumental variable approach. Based on this analysis, we find support for the proposition that LBOs do indeed matter for peer firms’ performance and corporate strategy relative to non-peer firms. Our study supports a learning factor hypothesis: peers gain by learning from the LBO target to improve their operational performance. Conversely, we find no evidence to support the conjecture that peers lose due to the increased competitiveness of the LBO target firm. 356 Identifying the cause of discrimination is crucial to design effective policies and to understand discrimination dynamics. Building on traditional models, this paper introduces a new explanation for discrimination: discrimination based on motivated reasoning. By systematically acquiring and processing information, individuals form motivated beliefs and consequentially discriminate based on these beliefs. Through a series of experiments, I show the existence of discrimination based on motivated reasoning and demonstrate important differences to statistical discrimination and taste-based discrimination.
Finally, I demonstrate how this form of discrimination can be alleviated by limiting individuals’ scope to interpret information. 355 The reuse of collateral can support the efficient allocation of safe assets in the financial system. Exploiting a novel dataset, we show that banks substantially increase their reuse of sovereign bonds in response to scarcity induced by Eurosystem asset purchases. While repo rates react little to purchase-induced scarcity when reuse is low, they become increasingly sensitive at high levels of reuse. An elevated reuse rate is also associated with more failures to deliver and a higher volatility of repo rates in the cross-section of bonds. Our results highlight the trade-off between shock absorption and shock amplification effects of collateral reuse. 354 Common ownership and the (non-)transparency of institutional shareholdings: an EU-US comparison (2022) This paper compares the extent of common ownership in the US and the EU stock markets, with a particular focus on differences in the applicable ownership transparency requirements. Most empirical research on common ownership to date has focused on US issuers, largely relying on ownership data obtained from institutional investors’ 13F filings. This type of data is generally not available for EU issuers. Absent 13F filings, researchers have to use ownership records sourced from mutual funds’ periodic reports and blockholder disclosures. Constructing a “reduced dataset” that seeks to capture only ownership information available for both EU and US issuers, I demonstrate that the “extra” ownership information introduced by 13F filings is substantial. However, even when taking differences in the transparency situation into due account, common ownership among listed EU firms is much less pronounced than among listed US firms by any measure. This is true even if the analysis is limited to non-controlled firms. 353 Peer effects can lead to better financial outcomes or help propagate financial mistakes across social networks. Using unique data on peer relationships and portfolio composition, we show considerable overlap in investment portfolios when an investor recommends their brokerage to a peer. We argue that this is strong evidence of peer effects and show that peer effects lead to better portfolio quality. Peers become more likely to invest in funds when their recommenders also invest, improving portfolio diversification compared to the average investor and various placebo counterfactuals. Our evidence suggests that social networks can provide good advice in settings where individuals are personally connected. 352 Energy efficiency represents one of the key planned actions aiming at reducing greenhouse emissions and the consumption of fossil fuel to mitigate the impact of climate change. In this paper, we investigate the relationship between energy efficiency and the borrower’s solvency risk in the Italian market. Specifically, we analyze a residential mortgage portfolio of four financial institutions which includes about 70,000 loans matched with the energy performance certificate of the associated buildings. Our findings show that there is a negative relationship between a building’s energy efficiency and the owner’s probability of default. Findings survive after we account for dwelling, household, mortgage, market control variables, and regional and year fixed effects.
Additionally, a ROC analysis shows that there is an improvement in the estimation of the mortgage default probability when the energy efficiency characteristic is included as a risk predictor in the model. 351 We investigate what statistical properties drive risk-taking in a large set of observational panel data on online poker games (n=4,450,585). Each observation refers to a choice between a safe 'insurance' option and a binary lottery of winning or losing the game. Our setting offers a real-world choice situation with substantial incentives where probability distributions are simple, transparent, and known to the individuals. We find that individuals reveal a strong and robust preference for skewness. The effect of skewness is most pronounced among experienced and losing players but remains highly significant for winning players, in contrast to the variance effect.
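The "preference for skewness" in the poker abstract can be made concrete with a small calculation. The numbers below are invented for illustration and are not taken from the paper: they simply show how a binary win/lose lottery with the same expected value as a sure "insurance" payout can carry large variance and strongly positive skewness, which is the property the study finds players drawn to.

    # Illustrative moments of a binary lottery vs. a sure payout with the same mean.
    # Values are hypothetical, not from the paper.
    def binary_lottery_moments(win, lose, p_win):
        mean = p_win * win + (1 - p_win) * lose
        var = p_win * (win - mean) ** 2 + (1 - p_win) * (lose - mean) ** 2
        third = p_win * (win - mean) ** 3 + (1 - p_win) * (lose - mean) ** 3
        skew = third / var ** 1.5 if var > 0 else 0.0
        return mean, var, skew

    # Long shot: 10% chance of winning 1000, else 0 -> mean 100 (same as a sure 100),
    # but variance 90,000 and skewness about +2.67.
    print(binary_lottery_moments(win=1000, lose=0, p_win=0.10))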
https://publikationen.ub.uni-frankfurt.de/opus4/solrsearch/index/search/searchtype/series/id/16172
- IPCC: Summary of projected changes in crop yields due to climate change. Note: changes in crop yields are relative to late-20th-century levels. Source: IPCC WGII AR5 Summary for Policymakers, 31 March 2014. Climate adaptation and agriculture:
- Growing demand and pressure on agriculture: US $70 billion to $100 billion a year is needed between 2010 and 2050 to adapt to a 2°C warmer world (World Bank, 2011); US $83-90 billion annual investment gap in the agricultural sector of developing countries, to meet food security needs up to 2050 (FAO 2011, Global Harvest Initiative 2011)
- How NAPs evolved: Established in 2010 (Cancún) by the UNFCCC. Purpose: facilitate effective medium- and long-term adaptation planning and implementation in developing countries, in particular LDCs (FCCC/CP/2011/9/Add.1). Adaptation Committee established under the Cancun Adaptation Framework. Purpose: implement enhanced action on adaptation and facilitation of NAPs by non-LDC developing country Parties. Contributes to (does not duplicate the work of) the Least Developed Countries Expert Group (LEG) in supporting LDC national adaptation plan processes and the SBI work programme on loss and damage. LEG Technical Guidelines for NAPs: released 2012.
- NAPAs and NAPs: COP 17 in Durban defined the NAP process objectives (FCCC/CP/2011/9/Add.1): (a) reduce vulnerability to the impacts of climate change, by building adaptive capacity and resilience, and (b) facilitate integration of climate change adaptation, in a coherent manner, into relevant new and existing policies, programmes and activities, in particular development planning processes and strategies, within all relevant sectors and at different levels, as appropriate. It also recognised that adaptation planning will be continuous, progressive and iterative.
- NAPAs and NAPs: 2013. NAPAs and NAPs in Least Developed Countries. IIED LDC Paper Series. Climate adaptation and agriculture: Solutions to successful national adaptation plans. SBSTA, June 2014
- Countries reviewed
- 10-country workshop: 37 policy makers from 10 different countries; two-day workshop on November 13-14, 2013 at COP in Warsaw, Poland
- NAP Dashboard. Source: Planning climate adaptation in agriculture. CCAFS Report No. 10
- Areas of concern: Many countries lack consistent, comprehensive and coordinated approaches in their vulnerability and risk assessments (affects ranking). Most of the countries conducted impact assessments (the foundation of the planning process) on a purely sectoral basis. Cross- or multisectoral analyses to prioritize adaptation actions can be useful, but many countries have difficulties performing such strategic studies. Most did not assess the economic implications of climate risks, which compromises the design of adaptation strategies and measures: Kenya: annual cost of climate change impacts of USD 1 to 3 billion/yr by 2030. Institutional frameworks and governance structures are lacking to effectively coordinate and implement adaptation activities, particularly cross-sectoral ones. Many adaptation and food security programs currently being implemented are not well integrated into a broader national strategy, and are often driven by bilateral and/or multilateral funding sources.
- The adaptation finance gap
- Identification of barriers and conflicts, a basis for assessing future research and capacity needs. Workshop results (barrier, frequency):
  Lack of organization in access to finance, 5
  Lack of dedicated finance instruments for CC at national level, 5
  Insufficient consideration of climate issues in national policies and programmes, 4
  Unclear funding for implementation, 3
  Inadequate appreciation of investments in adaptation, 3
  Lack of long-series climate data, 2
  Lack of baseline data/information, 2
  High cost of international expertise, infrastructure and tools for climate research, 2
  Need for financial planning, 2
- Success in cross- or multisectoral planning: Nepal (NAPA): thematic working groups on Agriculture and Food Security, Forest and Biodiversity, Water Resources and Energy, Climate Induced Disasters, Public Health, and Urban Settlements and Infrastructure; the groups comprised stakeholders, identified priority activities and combined project profiles. Ghana: the Akropong Approach results in a cross-sectoral project plan; logical framework analysis and multi-criteria analysis to rank the importance of activities. Tanzania's 2012 Guidelines for Integrating Climate Change Adaptation into National Sectoral Policies, Plans and Programmes of Tanzania, issued by the Vice President's Office.
- Design of institutional structures should consider who can have the greatest influence in the adaptation and NAP policy environment. OECD experience: the success of adaptation plans and measures may be attributed more to their prominence in national-level priorities and commitment than to where such plans sit in the organizational structure of government (Mullan et al. 2013). Ethiopia: coordination of climate change activities was moved from the National Meteorological Agency to the Office of the Prime Minister. Kenya: the National Climate Change Framework Policy and a draft Bill being deliberated in Parliament envision the National Climate Change Council (NCCC) being anchored in the Presidency, with the NCCC chaired by the Deputy President. Necessary influence and leverage for NAPs:
https://fdocuments.us/document/planning-climate-adaptation-in-agriculture-advances-in-research-policy-and-finance.html
The Coronavirus pandemic has had a devastating effect on the world economy, and it is nothing like what any country had prepared for. So far, we have witnessed how business owners had to re-invent themselves and how others closed down due to the economic crisis. And this is only one of the many sectors shaken up by the pandemic. The human race is still adjusting to the new norm of life and work, and while it seems very challenging for those in developing countries, especially in Africa, many are exploring options such as immigrating from South Africa to Canada. The Covid-19 pandemic, which originated in the city of Wuhan, China, is caused by a severe acute respiratory syndrome coronavirus, and it is one from which the world is still recovering. Coronavirus effects in South Africa: African countries like South Africa have had a difficult time adjusting to the challenges posed by the Coronavirus, despite recording their first case of Covid-19 only in the first week of March 2020. Hence, it is no surprise that some of the citizens are migrating to a country like Canada, where the standard of living is far more bearable. The virtual lines at the Canadian visa office in Pretoria have never been longer. Despite the progress in the field of industry and manufacturing, South Africa is a developing country and still has a high rate of poverty. Since Covid-19 meant that people had to abide by the stay-at-home policy, it led to the loss of jobs and income-earning opportunities in the country. The unhygienic environment in some of the communities, coupled with the scarce supply of water, ended up adding to the spread of other diseases alongside the Coronavirus. Every sector felt the cold grip of the pandemic. Similarly, unclean water and poor sanitation facilities in the country further exposed the need to strengthen the health sector. Since the people of South Africa are not new to migrating to other countries, the depreciating effects of the pandemic only helped to strengthen the resolve of those who already saw reasons to migrate. Effects of the Coronavirus in Canada: Canada has the 11th-largest economy in the world, and as a developed country it could manage the effects of the pandemic better than many developing countries. In Canada, for instance, the health care system is a lot more favourable, and they stand a better chance of preserving more lives. Because of the low birth rate in the country, immigrants are welcome, as they contribute to the growth of the Canadian economy. During the heat of the pandemic, international trips were on hold, and Canada had its fair share of job losses, Covid-19 cases, and so on. However, in response, the federal and provincial governments extended their support to individuals and organizations in a bid to alleviate their financial burdens. Here Are the Top 5 Reasons Why South Africans Migrate to Canada 1. Access to Quality Education Canada has highly ranked universities and a notable public school system. Hence, one of the reasons why South Africans migrate to Canada is so that they can have access to quality education beyond what is available in their country. Canada is one of the best places in the world to study, and South African immigrants know this very well. 2. An Advancing Economy The rate of unemployment in South Africa is alarming. And with the further loss of jobs caused by the Coronavirus pandemic, it seemed like the perfect time to move to a country that has a stable and thriving economy.
Also, because of the nature of the economy in Canada, there are lots of opportunities available for immigrants, and it’s far more supportive than the economy in South Africa. 3. Better Health Care In Canada, the health care system is structured to provide prompt emergency services and proper health care to the residents. Because South Africa is a developing country, its healthcare system is no match for the Canadian health care system. Hence, when you want reliable medical care for yourself and your loved ones, you can better understand why more South Africans are migrating. 4. You Do Not Need To Be Left Behind Due to the low birth rate in Canada, the country is open to immigrants and has a goal of welcoming 1.3 million immigrants by 2021. Hence, the process of applying for permanent residency in Canada is very straightforward. If you have had a hard time getting into other countries, maybe it’s time to try out a country that needs you. Canada is open to citizens from South Africa, and if you are eager to leave your country for a more supportive economy in Canada, then you can take the known steps to find your way in. 5. Join the Community of South Africans in Canada There are lots of South Africans in Canada, and you can always join the thriving community of your countrymen. It’s another way to feel at home, away from home. Conclusion As long as more people desire a better living condition, they will always see the need to migrate. South Africa is a great but developing country; hence it is easy to understand why more of its citizens will relocate to a country like Canada that has better living conditions and better infrastructure.
https://southafricatoday.net/economy/why-more-south-africans-are-looking-to-immigrate-to-canada-during-coronavirus/
Microservices architecture is a popular distributed-system style because each service in the architecture is independent. Each microservice is built to do a single job, runs in its own process, and communicates with other services through a lightweight mechanism called an application programming interface (API). These services therefore need to communicate with one another over the network. A microservice is likely to become unreachable to its consumers at times because, in any distributed setup, communication will occasionally fail given the number of messages passing between services. Failures can occur when networks are unreliable, and the resulting latency can lead to failures or slow responses. This is a particular problem for synchronous remote calls that actively wait for a response: if they do not use a proper timeout mechanism, they may end up waiting for an extended amount of time. Applications usually set a timeout for all remote calls to avoid hanging the whole application on a network or component failure. However, this timeout needs to be set carefully for the system or microservice application to work as required: if a remote call waits too long for a reply, it can slow down the system in its entirety, and if a connection timeout is extremely short, it may discard a response that arrives after the timeout. This thesis proposes a dynamic fault tolerance model (DFTM) to improve the stability and resilience of the microservices architecture. The model is designed around a two-state Circuit Breaker, called the Switch Circuit Breaker, combined with a Markov chain. In addition, a modification of the conventional Circuit Breaker (three states: open, closed, and half-open) into the Switch Circuit Breaker (two states: open and closed) is presented here. A conventional Circuit Breaker uses timeouts to detect faults, but the use of timeouts hinges on assumptions about the real-time behavior of the system, since a waiting process can only deduce from the occurrence of a timeout that a failure has occurred. Therefore, the DFTM model adopts a Markov chain based model to detect faults without a timeout, and then sends the fault directly to the Switch Circuit Breaker, which uses its two states to handle the faults. An important finding is that the DFTM model presents a solution to the problem of transient failures or faults in the inter-service communication of a microservices architecture. It also improves the performance and reliability of the microservices architecture. Library of Congress Subject Headings: Fault tolerance (Engineering); Application program interfaces (Computer software); Application software -- Development; Software architecture. Format: application/pdf. Number of Pages: 58. Publisher: South Dakota State University. Recommended Citation: Addeen, Hajar Hameed, "A Dynamic Fault Tolerance Model for Microservices Architecture" (2019). Electronic Theses and Dissertations. 3410.
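For readers unfamiliar with the circuit breaker pattern referenced above, here is a simplified, hypothetical sketch of a two-state (open/closed) breaker. It is not the thesis's DFTM model: where the thesis uses a Markov chain to detect faults without timeouts, this sketch substitutes a plain sliding-window failure-rate check, and the class name, thresholds, and cool-down period are assumptions chosen for illustration.

    # Simplified illustration (not the DFTM model): a two-state circuit breaker that
    # opens when the recent failure rate crosses a threshold, instead of relying on
    # per-call timeouts, and lets requests through again after a cool-down period.
    import time
    from collections import deque

    class SwitchCircuitBreaker:
        def __init__(self, window=20, failure_threshold=0.5, cooldown_s=5.0):
            self.results = deque(maxlen=window)   # sliding window of True/False outcomes
            self.failure_threshold = failure_threshold
            self.cooldown_s = cooldown_s
            self.open_until = 0.0                 # breaker is closed whenever now >= open_until

        def call(self, remote, *args, **kwargs):
            now = time.monotonic()
            if now < self.open_until:
                raise RuntimeError("circuit open: request rejected without calling the service")
            try:
                result = remote(*args, **kwargs)
            except Exception:
                self.results.append(False)        # record a failed call
                self._maybe_open(now)
                raise
            self.results.append(True)             # record a successful call
            return result

        def _maybe_open(self, now):
            failures = self.results.count(False)
            if self.results and failures / len(self.results) >= self.failure_threshold:
                self.open_until = now + self.cooldown_s   # trip: shed load for a while
                self.results.clear()

    # Usage sketch (hypothetical service call):
    # breaker = SwitchCircuitBreaker()
    # breaker.call(lambda: fetch_inventory("http://inventory/api/items"))

The point of the pattern, in either the two-state or three-state form, is that when the downstream service is misbehaving, callers fail fast instead of stacking up blocked synchronous requests.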
https://openprairie.sdstate.edu/etd/3410/
The UK Government has published a Technical Note on International agreements during the implementation period, which considers the UK’s status under the large number of international agreements between the EU and non-EU third countries during any implementation period following the UK’s departure from the EU. The UK is currently party to these agreements by virtue of its membership of the EU, but that will end on Brexit day. Talks between the EU and the UK have moved on to the subject of a transitional or ‘implementation’ period, which is expected to last for a period of around two years immediately following the UK’s departure from the EU. There have been suggestions that, as far as possible, the status quo would be preserved during this time: for example, the UK would be bound by the EU acquis and all four freedoms would continue to apply. However, there has been uncertainty as to whether or how the UK could continue to benefit from the EU’s international agreements during such a period. According to the Note, which covers only bilateral and not multilateral agreements, the UK proposes that these third country agreements … should continue to apply to the UK in the same way for the duration of the implementation period. The precise legal basis for this aspiration is, however, unclear. As the report acknowledges, the UK will no longer be a member of the EU from the end of March 2019. However, according to the Note, the best approach would be for the parties to confirm that, for the duration of the implementation period, these agreements continue to apply to the UK and that the UK is to be treated in the same way as EU Member States for the purposes of these agreements. This would be achieved by agreement of the parties to interpret relevant terms in these international agreements, such as “European Union” or “EU Member State”, to include the UK. The Note also cites in aid Article 31 of the Vienna Convention on the Law of Treaties, which provides that a treaty is to be interpreted in its context, which can include a subsequent agreement between the parties regarding its interpretation or application. What is only briefly touched on, however, is that the agreement of ‘the parties’ would require the agreement of the third countries rather than simply the EU. While the objective of the Government’s proposal would clearly help to preserve continuity during any implementation period, whether all necessary agreements can be achieved in the 13 months left until Brexit remains to be seen. Moreover, there is a noticeable disconnect between the UK seeking agreement from third countries that existing agreements should be construed as if the UK were still part of the EU, while at the same time seeking to promote an independent trade policy, separate from the EU.
https://www.insidebrexitlaw.com/blog/international-agreements-during-the-implementation-period
Q: Generating correlated binomial random variables

I was wondering if it might be possible to generate correlated random binomial variables following a linear transformation approach? Below, I tried something simple in R and it produces some correlation. But I was wondering if there is a principled way to do this?

    X1 = rbinom(1e4, 6, .5)
    X2 = rbinom(1e4, 6, .5)
    X3 = rbinom(1e4, 6, .5)
    a = .5
    Y1 = X1 + (a*X2)
    Y2 = X2 + (a*X3)   ## Y1 and Y2 are supposed to be correlated
    cor(Y1, Y2)

A: Binomial variables are usually created by summing independent Bernoulli variables. Let's see whether we can start with a pair of correlated Bernoulli variables $(X,Y)$ and do the same thing.

Suppose $X$ is a Bernoulli$(p)$ variable (that is, $\Pr(X=1)=p$ and $\Pr(X=0)=1-p$) and $Y$ is a Bernoulli$(q)$ variable. To pin down their joint distribution we need to specify all four combinations of outcomes. Writing $$\Pr((X,Y)=(0,0))=a,$$ we can readily figure out the rest from the axioms of probability: $$\Pr((X,Y)=(1,0))=1-q-a, \quad \Pr((X,Y)=(0,1))=1-p-a, \quad \Pr((X,Y)=(1,1))=a+p+q-1.$$ Plugging this into the formula for the correlation coefficient $\rho$ and solving gives $$a = (1-p)(1-q) + \rho\sqrt{pq(1-p)(1-q)}.\tag{1}$$ Provided all four probabilities are non-negative, this will give a valid joint distribution--and this solution parameterizes all bivariate Bernoulli distributions. (When $p=q$, there is a solution for all mathematically meaningful correlations between $-1$ and $1$.) When we sum $n$ of these variables, the correlation remains the same--but now the marginal distributions are Binomial$(n,p)$ and Binomial$(n,q)$, as desired.

Example

Let $n=10$, $p=1/3$, $q=3/4$, and we would like the correlation to be $\rho=-4/5$. The solution to $(1)$ is $a=0.00336735$ (and the other probabilities are around $0.247$, $0.663$, and $0.087$). Here is a plot of $1000$ realizations from the joint distribution:

The red lines indicate the means of the sample and the dotted line is the regression line. They are all close to their intended values. The points have been randomly jittered in this image to resolve the overlaps: after all, Binomial distributions only produce integral values, so there will be a great amount of overplotting.

One way to generate these variables is to sample $n$ times from $\{1,2,3,4\}$ with the chosen probabilities and then convert each $1$ into $(0,0)$, each $2$ into $(1,0)$, each $3$ into $(0,1)$, and each $4$ into $(1,1)$. Sum the results (as vectors) to obtain one realization of $(X,Y)$.

Code

Here is an R implementation.

    #
    # Compute Pr(0,0) from rho, p=Pr(X=1), and q=Pr(Y=1).
    #
    a <- function(rho, p, q) {
      rho * sqrt(p*q*(1-p)*(1-q)) + (1-p)*(1-q)
    }
    #
    # Specify the parameters.
    #
    n <- 10
    p <- 1/3
    q <- 3/4
    rho <- -4/5
    #
    # Compute the four probabilities for the joint distribution.
    #
    a.0 <- a(rho, p, q)
    prob <- c(`(0,0)`=a.0, `(1,0)`=1-q-a.0, `(0,1)`=1-p-a.0, `(1,1)`=a.0+p+q-1)
    if (min(prob) < 0) {
      print(prob)
      stop("Error: a probability is negative.")
    }
    #
    # Illustrate generation of correlated Binomial variables.
    #
    set.seed(17)
    n.sim <- 1000
    u <- sample.int(4, n.sim * n, replace=TRUE, prob=prob)
    y <- floor((u-1)/2)
    x <- 1 - u %% 2
    x <- colSums(matrix(x, nrow=n)) # Sum in groups of `n`
    y <- colSums(matrix(y, nrow=n)) # Sum in groups of `n`
    #
    # Plot the empirical bivariate distribution.
    #
    plot(x+rnorm(length(x), sd=1/8), y+rnorm(length(y), sd=1/8),
         pch=19, cex=1/2, col="#00000010",
         xlab="X", ylab="Y",
         main=paste("Correlation is", signif(cor(x,y), 3)))
    abline(v=mean(x), h=mean(y), col="Red")
    abline(lm(y ~ x), lwd=2, lty=3)
Jesuit business education has promoted the concept of “the 3 P’s: People, Planet, and Profit” to blend corporate economic goals with corporate social responsibility. One sector that embodies the concept of “doing good” for people while “doing well” financially is the healthcare industry. Often the impact on the health and well-being of society is discussed as an industry benefit, yet we hear little of the impact of job creation and local economic growth that healthcare institutions provide. Hospital systems are the largest employer in a number of American cities. Medical facilities or institutions of higher education are found among the top ten private employers in each of the 20 largest cities in the US. The nation’s hospitals employ more than 5.5 million people, making hospitals the second biggest private employer in the U.S. behind restaurants. According to the American Medical Association, physicians contribute $1.6 trillion to the national economy and support almost 10 million jobs. Creating well-paying jobs can help address the vast disparities of wealth in the U.S. and around the world. In his 2013 Evangelii Gaudium (“The Joy of the Gospel”), Pope Francis calls for an examination of the purpose of business and prioritizing the common good. This paper explores the role of the healthcare industry in providing access to care and living wages to a greater number of individuals. Growth of the healthcare industry can allow society to reap the benefits of greater health and provide economic growth and prosperity to the local community.
https://www.ignited.global/conferences/cjbe-20th-annual-meeting-creighton-university/sessions/beyond-business-medicine-and-healing-social-responsibility-economic-growth-and-job-creation
You could say there are two kinds of virtues in the world, the résumé virtues and the eulogy virtues. The résumé virtues are the ones you list on your CV, the skills that contribute to external success. The eulogy virtues are deeper. They're what get talked about at your funeral and they are usually the virtues that exist at the core of your being - whether you are kind, brave, honest or faithful, what kind of relationships you formed over your lifetime. In this urgent and soul-searching book, David Brooks explores the road to character. We live in a culture that encourages us to think about how to be wealthy and successful, but which leaves many of us inarticulate about how to cultivate the deepest inner life. We know that this deeper life matters, but it becomes subsumed by the day-to-day, and the deepest parts of who we are go unexplored and unstructured. The Road to Character connects us once again to an ancient moral tradition, a tradition that asks us to confront our own weaknesses and grow in response, rather than shallowly focus on our good points. It is a focus David Brooks believes all of us - including himself - need to reconnect with now. Telling the stories of people through history who have exemplified the different activities that contribute to a deeper existence, Brooks uses the diverse lives of individuals such as George Eliot, Dwight Eisenhower and Augustine to explore traits such as self-mastery, dignity, vocation and love. He hopes that through considering their lives it will fire the longing we all have to be better, to find the path to character. David Brooks is a columnist for The New York Times and frequent broadcaster. His previous books include the bestsellers The Social Animal and Bobos in Paradise. His New York Times columns reach over 800,000 readers across the globe.
https://www.amazon.com.mx/Road-Character-David-Brooks-ebook/dp/B00R3C1U52
LYONPO TSHERING WANGCHUK, Chief Adviser of the Interim Government of Bhutan, said that the reform of the United Nations, including the Security Council, must focus on accommodating the interests and concerns of all Member States, particularly the unrepresented and underrepresented. Bhutan is well on track to implementing the 2030 Agenda. As a tiny landlocked country in the Himalayas, Bhutan started its development in 1961. While it has achieved two of the three thresholds to be eligible to graduate from the group of least developed countries, it was not able to achieve the Economic Vulnerability Index threshold. Bhutan continues to face serious economic challenges and vulnerability to natural disasters. Graduation from the group of least developed countries must be done in a sustainable manner by building productive capacity and economic resilience, he said, stressing the need to ensure that there is no back slippage on hard‑earned development gains. Graduation must be sustainable without disrupting ongoing development plans. The ever‑increasing environmental pressures from climate change, biodiversity loss, water scarcity, soil degradation, and air and water pollution have far‑reaching economic and social consequences, which contribute to poverty and social inequality. “Earth cannot and must not be the monopoly of homo sapiens,” he said, recalling a Buddhist tenet associated with conservation and protection of the five elements of the planet. He called on leaders to tackle climate change and expressed support for the Paris Agreement. As a small developing country with a population of half a million people, Bhutan remains fully committed to the cause of international peace and security. It has also been working towards gradually broadening and deepening engagement with United Nations peacekeeping operations. Since Bhutan’s 2008 transition from an absolute benevolent monarchy to a democratic constitutional monarchy, it has had two elected Governments, he continued. The term of the second elected Government concluded last month and the third parliamentary elections are under way. The primary round of elections just concluded on 15 September; the general round of elections are slated for 18 October, and the new Government is planned to be sworn in by early November. Over the past decade, the people of Bhutan have focused on establishing a strong, functional and intelligent democracy, he added.
https://gadebate.un.org/en/73/bhutan
Forces that shape our perception of self: The degree of intimacy the individual has with different groups of people affects how strongly those groups' judgments shape our self-concept (Delaney & Madigan, 2009). Primary groups refer to small groups of people that we have daily interactions with. They have a strong influence on the individual’s self. Some examples of these people are our friends and family. Secondary groups refer to larger and less intimate groups of people such … Cooley came up with the theory of the “looking-glass self”, while Mead proposed a theory of role-play and imitation (Andersen & Taylor, 2007). Cooley explained how a person’s perception of self is driven by their relationships with others. The looking-glass self develops from (1) how we perceive ourselves to appear to others; (2) how we perceive others judge us; and (3) the feelings that result from these thoughts. Cooley’s looking-glass self theory involves perception and its effects: the perception of how others view us and the effects of their judgment on us. This means that the “self” is formed through the interactions people have with each other, which lead to self-reflection (Andersen & Taylor, 2007). For example, if a person perceives that others think they are dumb, the person will form this self-image and become reluctant to participate in a … When people take up roles, they are expected to fulfil the expectations associated with the roles that they play (Andersen & Taylor, 2007). According to Mead, role-taking is essential for the development of a sense of self. Role-taking is a source of self-awareness: when people take on new roles, their awareness of self changes and they develop a set of identities based on the roles that they play. Mead explained this process by examining childhood socialization in three stages: the imitation stage, the play stage and the game stage (Mead,
https://www.cram.com/essay/Glass-Self-Theory/FJJDWT7ZVU
So here is my 'official' joining post. This is what I intend to make in November for this challenge: Butterick 5923 This is the dress I wanted to make in October, I had it cut out (mostly) but didn't get to it at all. I need to recut the collar which I messed up the first time by not paying attention to the orientation of the pattern piece, and then get this one sewn! I really love the fabric and hope it will turn out properly. Sneak peek -- Kwik Sew 4026 This looks like a nice, simple dress with the kind of clean lines that make a really fun print stand out. I've had my eye on this fabric for a while, and then it went on sale last week -- a sign! Because my first choice is already cut out, this will be my official do-it-all-in-November selection. I'm planning on making View B, with the sleeve. Hoping to get these both done before the rush begins to finish all the Christmas sewing!
https://www.followingthethread.ca/2013/11/mgm-garment-for-november.html?showComment=1384152537545
Article by Materahub – Image above: LonelyPlanet Rural depopulation is not a new phenomenon limited to a handful of national governments. It is, in fact, a global trend. According to the United Nations, only 44% of the world population lives in rural areas in 2020, and this number is forecast to decline to 32% by 2050. The countries experiencing the largest projected decline in the rural population are mainly from Asia, where the economy is booming and the speed of urbanization is faster than ever before, namely China, India, Indonesia, etc. Europe is not an exception to this worldwide tendency, which has been under way since the second half of the 20th century. In 2018, only 28% of Europeans lived in rural regions. In Italy, Spain and Serbia, around 5,500, 3,500 and 4,700 small villages, respectively, are declining. And in thirty years’ time, the population of European rural regions is estimated to decrease by 8 million inhabitants. Some common reasons proposed to explain this problem are population aging (with a rise in life expectancy and a drop in fertility), rural-to-urban migration of young people for better economic opportunities, and free movement of goods and individuals among European Union countries. This phenomenon can lead to catastrophic consequences for these rural territories, for instance the brain drain, whereby people leave to achieve higher education or career goals, the depletion of human capital resources and even the creation of a vicious cycle of socio-economic losses. These losses include fewer job opportunities, a lack of infrastructure (especially modern transportation networks), degradation of basic public services and entertainment options, etc. However, rural areas are the backbone of the territorial makeup of the world, and almost half of the population still resides in them. It is consequently essential to promote balanced development in order to relieve urban overpopulation and revitalize rural regions, especially backcountry zones. In fact, both global and local initiatives have been put forward and carried out to cope with the above-mentioned circumstances in rural territories. For example, many campaigns are being run to revitalize rural areas through digital transformation – Smart Villages. IEEE Smart Village: Empowering off-grid communities, which aims at enabling access to energy in rural contexts, is active worldwide to meet the 2030 SDGs, especially Goal 7 – Affordable and Clean Energy. Currently, there are more than a thousand stakeholders from 70 countries joining this network, mainly in six large regions, namely East Africa, West Africa, South Asia, South-East Asia, South America, and Central America, Caribbean, Mexico – the so-called developing world with limited possibilities to access infrastructure. This is supplemented by the Indian Smart Village Initiative, which focuses on harnessing the benefits of ICT for people living in rural areas. It is expected to provide a basic framework for local people to enhance their participation on a local level and to improve their economic, social and living conditions, thus making their community stronger. Furthermore, in China, rural revitalization is the critical component of the government’s work plan in the period of 2020-2025, under the five-year Rural Revitalization Strategic Plans (RRS). China shifted its focus to overall rural revitalization to benefit both the environment and the people.
These plans include, for instance, promoting the socioeconomic development of the countryside and encouraging the growth of secondary and tertiary industries in rural areas, with agriculture, rural areas and farmers as the top priorities. At the same time, the rural revitalization strategy aims to promote the sustainable development of the countryside, including creating a robust rural economy.

IEEE Smart Village: Empowering off-grid communities project

In Europe, numerous campaigns are also being carried out. At the level of private organizations and institutions, the ReWI Visions project is run by JAPA (Jyväskylä Sustainable Development Association) and the LEADER-funded youth association Jyvässeudun 4H to inspire young people in rural Finland to explore circular-economy entrepreneurship and start businesses. The activities aim to offer a comprehensive toolbox for coaching circular-economy talent. Youth exchanges, international communication, the promotion of youth circular-economy entrepreneurship with business courses, and youth club activities are supplemented by manuals and videos. Impressively, 40 companies with circular-economy themes have already been established by young people through the project.

Another notable project is a municipal electric car-sharing service set up with LEADER funding to improve mobility and access to public services in Villerouge-Termenès, a small village in southern France with 140 inhabitants. The village has no major public services such as medical centers or large shops in the area, and the closest bakery is 10 km away. Much of the population had difficulty getting around, especially the elderly and young people with modest financial resources. Thanks to this cost-efficient and green transport solution, villagers can make trips of up to 80 km, gaining access to supermarkets, DIY stores, specialized care and hospitals, as well as visiting family and friends.

Even more projects are being implemented under Horizon 2020 (#H2020), the EU's research and innovation funding programme for 2014-2020 with a budget of nearly €80 billion. For instance, FARMWELL is a Thematic Network project aimed at improving farmers' wellbeing through social innovation. RURALIZATION is another campaign promoting rural development through the involvement of new generations and new models for land use and economic activities. And SHERPA (rural science-society-policy interfaces) is a sustainable hub engaging actors in rural policies.

Municipal electric cars-sharing service project, Narbonne, France

At the policy level, the European Commission has entered this global race with its Priorities 2019-2024 strategy, in which "A long-term vision for the EU's rural areas" is listed under the "New push for European democracy" priority. A Rural Pact, launched in December 2021, and an EU Rural Action Plan, with tangible flagship projects and new tools, will help achieve the goals of this vision. The Rural Pact in particular is a framework for cooperation among authorities and stakeholders at European, national, regional and local levels to act on the needs and aspirations of rural residents.
At a more micro level, the EU Rural Action Plan testifies to the Commission's commitment to rural communities through flagship initiatives under different EU policy areas: creating an innovation ecosystem, boosting sustainable transport links and digitalisation, increasing environmental, climatic and social resilience, and supporting economic diversification. Flagship initiatives include a rural revitalization platform, rural digital futures, and entrepreneurship and the social economy in rural areas. To give a clear example, the Commission launched the Start-up Village Forum, intended as an open space connecting rural innovation actors. Its broader objectives are to support the further development of rural innovation ecosystems, identify and analyze triggers for innovation in rural areas, connect rural innovation actors across the EU with a focus on start-ups, provide a common understanding of the "start-up village" concept, and highlight its link with the "smart village" concept.

In fact, for any project to succeed, human resources are always among the fundamental elements, determined by the experience, skills and capabilities of the people involved. Recognizing the need for this solid basis, the European Commission launched the European Skills Agenda, a five-year plan to help individuals and businesses develop more and better skills and put them to use. The first flagship action under this plan is the Pact for Skills, a shared engagement model for skills development in Europe. It calls on private and public organizations to join forces and take concrete action to upskill and reskill people in Europe. The Pact for Skills covers 14 roundtables, including tourism, the cultural and creative industries and agri-food. Its key principles are promoting a culture of lifelong learning, building strong skills partnerships, monitoring skills supply and demand, anticipating skills needs, and working against discrimination in terms of both gender and opportunity.

As for the cultural and creative industries (CCIs), a substantial body of research demonstrates the sector's importance in various respects. The KEA European Affairs 2009 report examined the contribution of culture and the creative industries to the wider economy. It also featured a series of recommendations for the better integration of culture and creativity into EU-wide strategy and policy. Another study, published in 2012 and based on a case study of Jämtland County, Sweden, also illustrated the potential of the cultural and creative industries in remote areas. A mapping of the CCI sector in the county found that it accounted for nearly 1,977 businesses in total, more than 10% of all businesses in the county, and that these businesses employed 3.1 percent of the total population. The sector also contained more businesses than many of the county's traditionally strong industrial sectors, such as energy production, the workshop industry, construction and transport. Further relevant aspects include a tendency towards self-employment, frequent ownership by women, and a high proportion of multi-sectoral businesses. Hence, it is a sector characterized by micro-businesses, often involved in several CCI sub-categories and often owned by women.
Out of these aspirations to create a competency framework applicable to the cultural and creative industries (CCIs) and able to contribute to the development of rural areas, the Open School for Village Hosts project, co-funded by the Erasmus+ Programme of the European Union, was born. The project will identify the range of competencies that Village Hosts may need and provide a pilot training programme to build and level up their skills. Village Hosts are then expected to be capable of discovering existing but overlooked assets in the community and turning them into new livelihoods and work opportunities in emerging urban-rural markets: positive-impact tourism, nature reconnection, adventure sports, farm-shares, learning journeys, wellness retreats, work-vacations, heritage trails, and more. With the collaboration of experienced partners from Italy, Spain, Hungary, Serbia, Poland and Latvia, we envision a rural revitalization that can bring new social, economic and ecological life to small villages and their local communities all around Europe.
https://www.villagehosts.eu/rurality-worth-protecting-and-how-open-school-for-village-hosts-can-contribute/
Luthans asserts that motivation is the process that arouses, energizes, directs, and sustains behaviour and performance. One cannot feel very good about oneself if one is not motivated.

Ability to Self-Organize

With college-level writing there are many papers, notes, books, scraps of paper, drafts of papers, and photocopies of journal articles that will get lost in the shuffle without the ability to self-organize. Studies on work motivation seem to confirm that it improves workers' performance and satisfaction. Other factors include the availability of power and status, pay satisfaction, promotion opportunities, and task clarity (Bolarin; Gemenxhenandez, Max, Kosier, Paradiso and Robinson), as well as compensation and benefits, advancement opportunities, and technological challenges. Motivating is the management process of influencing behaviour based on knowledge of what makes people tick (Luthans). You have strengths and weaknesses that you have to accept. A good student tries to prepare for future challenges by developing a close understanding of the boundaries and limitations of his subject.
https://selorixytigisow.dellrichards.com/essay-speech-qualities-of-a-good-student193601596yp.html
A study on the difference between two types of mountaineering outerwear in a cold, wet and windy environment
Extreme Physiology & Medicine, volume 4, Article number: A91 (2015)

Introduction

For mountaineering, the outer or protective layer of a clothing system is especially important in extreme environments. Generally, a breathable barrier embedded in the layered fabric is used to keep rain or snow out while letting body moisture diffuse into the air. There are two types of membrane: one is hydrophobic (HPO) and the other is hydrophilic (HPI). A considerable number of studies have claimed that water vapor transport through HPI polymers is highly influenced by the test conditions; in non-isothermal tests, clothing systems incorporating HPI polymers improve to a greater extent than those incorporating microporous polymers; some experimental results further point out that the water vapor transfer rate of porous polyurethane laminated fabric was greater under isothermal conditions whilst the water vapor transfer rate of HPO laminated fabrics was greater under non-isothermal conditions, especially when a fabric contains more condensation; and, using a hot plate and sweating arm system, an EMPA study showed that hydrophilicity and condensation have little effect on the effective water vapor resistance of a multilayer textile combination at 20 °C, but the effect becomes larger with decreasing outside temperature. In our previous study of how the difference between the microstructure of PTFE and the hydrophilicity of PU affects the comfort properties of leisure wear, especially at mild and cool temperatures, water vapor resistance testing (Ret), an EMPA sweating torso wear-trial simulation, and subjective wearing trials were conducted. The HPO was better in the Ret test, though the HPI was slightly better in the non-isothermal state, while the subjective wear trial showed no significant difference. This extended study therefore aimed to determine the comfort properties by a wear trial in an extreme environment.

Methods

Two males (age: 21 yrs; height: 170 and 175 cm; mass: 60~65 kg) wore either an HPO or HPI jacket (the same as in the previous study, with beanie, scarf, and gloves) and performed the same protocol, involving 15 minutes of sitting without rain and 20 minutes of walking (1.5 km.h-1 at 5% gradient) on a treadmill with rain. The environmental conditions were maintained at 5 ± 1 °C, 50 ± 3 % RH, 150 ± 10 mm.h-1 rainfall, and 3 ± 0.5 m.s-1 wind speed. Skin temperature (TS), microclimate humidity (HM) and temperature (TM), IR skin temperature (TIR), sweat condensation (SC) and perceived comfort were recorded for each participant.

Results

TS and TM showed that HPI was lower at the beginning and up to 0.7 and 1.3 °C higher at the follow-up stage of the experiment. Thermal perceptions revealed a similar trend. HM was similar for HPO and HPI, while subjective dampness was 0.5 grade drier for HPI during the walking period. The SC weight was 21.2 g and 17.7 g for HPO and HPI respectively. The declines of TIR were very similar.

Discussion

For both TS and TM, the HPI and HPO curves crossed each other during walking, and HPI revealed a higher warmth-keeping ability. SC weight was 3.5 g lower and the dampness perception was slightly drier for HPI.
Conclusion

HPI was slightly colder at the starting point, but had less sweat condensation in the clothing system and became warmer after walking for about 10 minutes in a cold, wet and windy environment, suggesting that better comfort was achieved. However, subjective perceptions of perspiration did not differ significantly between the HPO and HPI outerwear in terms of comfort.

References

Mukhopadhyay A, Midha VK: A Review on Designing the Waterproof Breathable Fabrics Part I: Fundamental Principles and Designing Aspects of Breathable Fabrics. J of Industrial Textiles. 2008, 37 (3): 225-262. 10.1177/1528083707082164.
Gretton JC, et al: Moisture Vapor Transport Through Waterproof Breathable Fabrics and Clothing Systems Under a Temperature Gradient. Textile Res J. 1998, 68 (12): 936-941. 10.1177/004051759806801209.
Ren YJ, Ruckman JE: Journal of Industrial Textiles. 2004, 32 (3): 165-175.
Rossi RM, et al: Textile Res J. 2004, 74 (1): 1-6. 10.1177/004051750407400101.
Lin YW, et al: Proceedings of 13th ICEE. 2009, 70-74.

Cite this article: Wang, LC., Chou, HC. & Jou, GT. A study on the difference between two types of mountaineering outerwear in a cold, wet and windy environment. Extrem Physiol Med 4, A91 (2015). https://doi.org/10.1186/2046-7648-4-S1-A91
https://extremephysiolmed.biomedcentral.com/articles/10.1186/2046-7648-4-S1-A91
In August 2020, Dr. Kremser received the Crime Prevention Through Environmental Design (CPTED) Professional Designation through the National Institute of Crime Prevention. CPTED is a proactive technique in which the proper design of the built environment can lead to a reduction in fear and incidents of crime. CPTED-designated experts are qualified to develop strategies that influence human behavior and work closely with law enforcement agencies on initiatives to improve safety in the community, including the safe design of schools, parks, healthcare facilities, and houses of worship. CPTED-designated experts are also involved in writing city ordinances, planning and zoning, architectural site plan reviews, and performing field assessments.

In June 2019, Dr. Jon Kremser attended a seminar on Digital Threat Assessment at the National Association of School Resource Officers conference in Tennessee. This provided the most up-to-date information on social media investigations relating to school safety. Dr. Kremser notes that an overall assessment of risk to school safety "must also include an investigation of social media and students' online activity."

At the most recent annual meeting of the American Society of Criminology in Atlanta, Dr. Kremser presented a paper addressing "The Rise in Risk of Victimization in Juvenile Facilities." His presentation focused on specific facility design, minimum staffing ratios, policies, and training as ways to reduce the risk of victimization of youth.

The approaching holiday season means stress! During this time of year your organization is at heightened risk of workplace violence. OSHA defines workplace violence as "violent acts (including physical assaults and threats of assaults) directed toward persons at work or on duty." Forensic Security & Protection, LLC can help your company prevent a tragedy and minimize liability. We offer an effective violence prevention program for your workplace, with an emphasis on prevention training for you and your employees. We can also audit any of your current preventive measures. Contact us today for a free, no-obligation consultation. -Jon

On August 21, Dr. Kremser presented "Breaking Barriers: Coordinating Healthcare and Police Response to the Heroin and Opioid Epidemic in the United States" at the United Nations Office on Drugs and Crime in Vienna, Austria. His presentation was part of the 29th Annual Meeting of the International Police Executive Symposium. Dr. Kremser's paper focused on the governor's Proclamation of Disaster Emergency within Pennsylvania, introduced earlier this year by Gov. Wolf to address the devastating problem of opioid addiction across the Commonwealth. The results of the Proclamation include better collaboration among state agencies and additional resources for law enforcement and other first responders. Dr. Kremser is a member of the Pennsylvania Commission on Crime and Delinquency Diversion Subcommittee in Harrisburg.

Forensic Security & Protection, LLC is committed to the exchange of experience and techniques with other professionals in the field. In August 2017, Dr. Kremser attended the Association of Threat Assessment Professionals (ATAP) annual Threat Management Conference in Anaheim, California. The objective of ATAP is to learn more about how best to protect victims of stalking, harassment and threat situations and become better equipped to manage threatening situations.
ATAP comprises professionals including law enforcement officers, prosecutors, mental health professionals, and corporate security experts. Dr. Kremser also conducted an innovative workshop, "Making Sense of Cyberbullying: Bridging Research and Intervention," for the Pennsylvania Principals Association's 2017 Educational Leadership Summit in Altoona, Pennsylvania.
https://forensicsp.com/events/
Cyber Security – How well is your business protected?

The threat of cyber attack is an increasing risk for businesses of all shapes and sizes. Cyber criminals, hackers and online scammers have become extremely sophisticated in their methods. The consequences of a cyber attack on your business could be significant, including data protection breaches, monetary loss, business interruption, reputational damage and potential litigation from affected customers or clients. While most businesses are aware of such risks, the majority are not prepared for such an eventuality. While you may not be able to entirely insulate your business from the threat of attack, being prepared means that you can minimise the potential damage it would cause.

- IT risk review
The first step is to carry out a review of your current IT security procedures to assess, from an IT point of view, where there are weaknesses and, from a data point of view, what needs to be protected. Your IT provider will be able to make recommendations, and there may be a risk/cost analysis to be carried out in terms of implementing those recommendations. In carrying out such an assessment it would be dangerous to take an 'it won't happen to us' approach. As the risks from cyber attack are constantly changing and becoming more sophisticated, regular reviews should be carried out.

- Internal Policies
You should have an IT security policy, communicated to all staff, dealing with matters such as password protection, use of remote devices, sharing of sensitive data such as customer bank account details, and the opening of suspected spam mail. Ensuring that all of your team are aware of the risks and take responsibility for implementing a firm security policy will greatly reduce the risk of attack.

- Insurance
It is advisable to consult your insurers to ascertain whether your policies of insurance cover you in the event of a cyber attack. If not, you may want to consider amending current policies or taking out a specific policy for that purpose.

- Crisis strategy
Many businesses that have internal security policies in place have not considered a strategy or policy to apply in the event of a cyber attack. Having considered this in advance will help minimise potential risk and loss. You might consider appointing a designated person to take responsibility for co-ordinating the response to such an attack. The extent of the attack and data loss will need to be assessed, and it will have to be considered whether the Gardaí need to be informed. Matters such as your reporting requirements to the Office of the Data Protection Commissioner and/or other regulatory bodies, and informing banks and insurers as well as customers or clients who may be affected, also need to be considered. Where clients are affected, there will certainly be some reputational damage and negative publicity. While your business will have been the victim of the attack, criticism is often levelled at the victim for not having adequate security in place or for the manner in which they deal with the event. Therefore, your customer service team will need to be briefed on how to respond to queries.

Just as there are many types of cyber attack, there are, of course, many approaches to dealing with an attack, and no one approach is correct. However, being proactive in planning for such an eventuality, rather than reactive, will undoubtedly minimise the risk of attack and, in the event of an attack, minimise its effects.
https://reddycharlton.ie/insights/cyber-security-how-well-is-your-business-protected/
The first woman to win a Nobel Prize, physicist and chemist Marie Curie is the 19th hero in the New York Times bestselling picture book biography series about heroes. This friendly, fun biography series focuses on the traits that made our heroes great--the traits that kids can aspire to in order to live heroically themselves. Each book tells the story of one of America's icons in a lively, conversational way that works well for the youngest nonfiction readers and that always includes the hero's childhood influences. At the back are an excellent timeline and photos. Being a woman scientist in the 19th century meant Marie Curie faced plenty of obstacles, but she never let them dull her love of science and passion for learning. About the Author Brad Meltzer is the New York Times bestselling author of Heroes for My Son, Heroes for My Daughter, and a number of suspense novels like The Escape Artist. He is also the host of the History Channel television shows Brad Meltzer's Decoded and Brad Meltzer's Lost History (in which he helped find the missing 9/11 flag). He lives in Florida with his wife and their three children. Christopher Eliopoulos began his illustration career at Marvel Comics, and has worked on thousands of comics, including Cosmic Commandos, Monster Mayhem, Franklin Richards: Son of a Genius, Pet Avengers, and Cow Boy, all of which he wrote and illustrated. He lives in New Jersey with his wife and their identical twin sons (when the boys are home from college).
https://www.writersblockbookstore.com/book/9780525555858
New research is shedding additional historical light on the Cold War's iconic nuclear standoff between the U.S. and U.S.S.R., with the tiny nation of Cuba in the middle. For the next two weeks, CONTEXT will look back on what we're learning with an eye toward the lessons that apply today. In part 2 of our "On The Brink" series, Philip Brenner describes how and why the missiles were brought to Cuba and what might have happened if they'd stayed. Philip Brenner is Professor of International Relations and Affiliate Professor of History at American University. He has served as the Senior Associate Dean for Academic Affairs and Director of the U.S. Foreign Policy Program in the School of International Service, and as Co-Director of American University's Inter-Disciplinary Council on Latin America. His most recent book is A Contemporary Cuba Reader (Rowman and Littlefield, 2007). He is also the co-author of Sad and Luminous Days: Cuba's Struggle with the Superpowers after the Missile Crisis (Rowman and Littlefield, 2002).
https://www.wilsoncenter.org/article/the-brink-part-2-the-cuban-missile-crisis-50-years-later
Plagiarism – Ask the Rabbi Q. I read an article which sounded familiar. I checked and found that the author had lifted and passed off as his own work whole sections from something I had written. This is against Jewish law, isn’t it? A. Plagiarism, the theft of another person’s words, transgresses the law of “Do not steal”. There is an associated but more complicated issue – the adoption of someone else’s ideas. Jewish law requires that one’s source should be acknowledged: “He who quotes a thing in the name of the one who said it brings redemption to the world”. This applies even though ideas disseminated by their author are meant to become part of mankind’s cultural heritage. The question is whether the author has a legal as well as a moral claim. If the author or their associates expend time, money and material on publishing a work, others must not adversely affect the publication by causing a loss, e.g. by producing a pirated edition. In the 16th century when the Maharam of Padua issued an edition of Rambam’s “Mishneh Torah”, Rabbi Moses Isserles forbade the purchase of a rival edition. Opponents of the ban said that it would only hold validity if it were clearly stated to have geographical or personal limitations. There is an issue in regard to photocopying. If you buy a book it is yours and in most cases you can copy or even destroy it (though there is a problem if the text contains the Divine Name), but if the book belongs to a library, there are restrictions on copying. When I was a Jews’ College student in London, I found that the file copies of the “Jewish Chronicle” had been defaced; the weekly sermon had been cut out, presumably by an aspiring preacher (not me) who had no ideas of his own. I guess the College could have considered legal action against the malefactor if they had caught him.
https://www.oztorah.com/2015/01/plagiarism-ask-the-rabbi/
CLAIM OF PRIORITY

This application claims the benefit of priority of U.S. Provisional Patent Application Ser. No. 62/274,362, filed on Jan. 3, 2016, which is herein incorporated by reference in its entirety.

BACKGROUND

Media content provided by entertainment, news, and other communications sources is ubiquitous in today's modern society. Even through no effort of their own, individuals are subject to constant advertising in public spaces, or to ambient audio or video entertainment in spaces like restaurants and waiting areas. In some cases, individuals receive their own media content from various sources, such as by tuning in to a particular radio or television broadcast, or by receiving media streamed via the internet from social media or other sources. Portable or fixed electronic devices, including cellular telephones, televisions, or tablet computers, can directly or indirectly receive media content. For example, a television can receive streaming audio and/or video content via the Internet in response to a user request for media. In an example, a smart phone can be configured to receive information about ambient media using an on-board microphone or camera. In some examples, a smart phone can be directed by a user to identify a source or other characteristic of a sample of ambient media, such as by using an app to communicate with a third party that provides identification services.

DETAILED DESCRIPTION

In an example embodiment, a machine can be configured to interact with one or more users or user devices to identify audio or video content. One approach is to configure the machine to perform audio fingerprinting, or classification, based on an audio sample, for example, by comparing a query fingerprint of an audio signal sample against various reference audio sample fingerprints in a database in an attempt to identify a match. When a query is performed using large databases, such audio fingerprinting can be computationally expensive and time consuming. Various systems and techniques described herein can be used to reduce the cost and expense of performing classification services, such as by using classification models that are tuned for particular environments, media types, or other scenarios. In an example embodiment, context information about a media query can be used to help select a classification model from among multiple different available classification models. In an example embodiment, some classification models applied by a classifier system, such as a convolutional neural network-based media classifier, can yield media classification results more efficiently than other models.

FIG. 1 illustrates generally an example embodiment of a system configured to receive and analyze media information. FIG. 1 includes a network diagram illustrating a network environment 100 suitable for media classification, according to some example embodiments. The network environment 100 includes a media processor circuit 110, a database 115, and first and second mobile devices 130 and 150, such as can be communicatively coupled to each other via a network 190. The media processor circuit 110, with or without the database 115, can form part of a cloud 118 (e.g., an audio processing cloud, such as including multiple geographically distributed machines configured to function independently or collectively as a server to provide media processing services).
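As an illustrative aside (not part of the patent's own disclosure), the fingerprint-comparison approach described above can be sketched in Python roughly as follows; the data layout, cosine-similarity measure, and threshold are assumptions chosen only for illustration.

import numpy as np

def cosine_similarity(a, b):
    # Similarity between two fingerprint vectors, in [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_fingerprint(query_fp, reference_db, threshold=0.85):
    # reference_db maps a media identifier to its reference fingerprint vector.
    # Every reference is scanned, which is what makes large databases expensive.
    best_id, best_score = None, -1.0
    for media_id, ref_fp in reference_db.items():
        score = cosine_similarity(query_fp, ref_fp)
        if score > best_score:
            best_id, best_score = media_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy usage: random vectors stand in for real fingerprints.
rng = np.random.default_rng(0)
db = {"song_a": rng.normal(size=64), "song_b": rng.normal(size=64)}
query = db["song_a"] + 0.05 * rng.normal(size=64)   # noisy copy of song_a
print(match_fingerprint(query, db))                 # expected best match: "song_a"

The exhaustive scan over the reference database in this sketch is what motivates the context-based narrowing of candidate models discussed in the passages that follow.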
The cloud 118 can form all or part of a system that includes a network-based media classifier 105, for example, as a cloud-based server system configured to provide network-based audio processing services to the first and second mobile devices 130 and 150. The media processor circuit 110 and the first and second mobile devices 130 and 150 can each be implemented in a special-purpose (e.g., specialized) computer system, in whole or in part, as further described herein.

In an example embodiment, the network environment 100 is configured to perform classification services on audio or video data. That is, in response to some input media, such as a sample of audio or video information from one of the first and second mobile devices 130 and 150, the various components of the cloud 118 can be used to provide an indication of a class or classification of the input media. A classification can include some characteristic associated with the media. In an example embodiment where input media includes an audio signal, a classification can include an indication of an author or artist, an album name, a source, a tempo, a mood, or other quantitative or qualitative feature corresponding to the audio signal. In an example embodiment, the input media is at least a portion of a song, and the network-based media classifier 105 can provide an indication of the song's title, artist, names of various recordings and/or albums associated with the song, or other information about or associated with the song.

In the example embodiment of FIG. 1, the network-based media classifier 105 and/or one of the first and second mobile devices 130 and 150 are configured to access input media queries from other sources, such as via an intranet or via the Internet. In an example embodiment, the network-based media classifier 105 receives unclassified media data from a transceiver or receiver that is configured to receive broadcast signals, such as television, radio, or other signals.

A processor circuit can be configured to analyze an input media query to access or identify a context parameter that corresponds to the query. The processor circuit can include the media processor circuit 110 at the cloud 118, or the processor circuit can be implemented at one of the first and second mobile devices 130 and 150. In an example embodiment, the processor circuit can be provided at another portion of the network-based media classifier 105, or at another location that is communicatively coupled with the network-based media classifier 105 and/or the first and second mobile devices 130 and 150. In an example embodiment, the processor circuit configured to identify a context parameter can be the same or different processor circuit than is used to perform audio classification services.

A context parameter can be additionally or alternatively accessed from somewhere other than an input media query. For example, a context parameter can include information about an input media query, such as sensory or environmental information that corresponds (e.g., temporally) with the query. In an example embodiment, a context parameter includes location information that corresponds to a location where a query was sampled (e.g., by the first mobile device 130). In an example embodiment, a context parameter includes device identification information or user identification information that corresponds to a device or user that acquires or prepares a query. Other context parameters can additionally or alternatively be used, as further discussed herein.
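A minimal sketch, using hypothetical names rather than anything from the disclosure, of how a media query might carry the kinds of context parameters just mentioned (location, device identification, user identification, motion state) alongside the sampled media:

from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class MediaQuery:
    samples: Sequence[float]                          # raw audio samples (e.g., PCM floats)
    sample_rate_hz: int
    location: Optional[Tuple[float, float]] = None    # where the query was sampled
    device_id: Optional[str] = None                   # device that acquired the query
    user_id: Optional[str] = None                     # user that prepared the query
    in_motion: Optional[bool] = None                  # e.g., derived from an accelerometer

query = MediaQuery(samples=[0.0] * 16000, sample_rate_hz=16000,
                   location=(48.21, 16.37), device_id="phone-mic-01",
                   user_id="user-132", in_motion=False)
print(query.device_id, query.in_motion)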
In an example embodiment, a processor circuit can be configured to determine spectral features of a media query. The spectral features can form all or a part of a fingerprint associated with the query. Based on or using a context parameter, the processor circuit can be configured to select a classification model for use in processing the spectral features to provide an indication of a media type that corresponds to the query. In an example embodiment, the processor circuit (e.g., the media processor circuit 110) determines a media type probability index for the query using the selected classification model and the determined spectral features. In an example embodiment, the probability index includes an indication of a likelihood that the media data corresponds to a particular known media, such as a particular song, artist, or album. In an example embodiment, if the probability index exceeds some specified threshold likelihood, then the processor circuit can return a media type indication without the probability index.

Returning to the example embodiment of FIG. 1, first and second users 132 and 152 are illustrated. One or both of the first and second users 132 and 152 can be a human user (e.g., a human being), a machine user (e.g., a computer configured by a software program to interact with the first or second mobile device 130 or 150), or any suitable combination thereof (e.g., a human assisted by a machine or a machine supervised by a human). The first user 132 is associated with the first mobile device 130 and can be a user of the first mobile device 130. For example, the first mobile device 130 can be a computer, a vehicle computer, a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to or having user profile data associated with the first user 132. Likewise, the second user 152 is associated with the second mobile device 150 and can be a user of the second mobile device 150. In an example embodiment, each of the first and second users 132 and 152 and/or the first and second mobile devices 130 and 150 can be associated with various different classification models for use by the network-based media classifier 105.

In an example embodiment, one or both of the first and second mobile devices 130 and 150 can be configured to receive unclassified media using a microphone or other media sensor input. In an example embodiment, unclassified media can be sampled from a device's native source (e.g., a sound card or other native component of the first mobile device 130 or second mobile device 150) or from a device output terminal (e.g., a video out, a line out, a headphone jack, etc.) of the first or second mobile devices 130 or 150, or from some other device. The unclassified media can be sampled (e.g., by circuitry on-board the first or second mobile device 130 or 150) and communicated to the network-based media classifier 105 with a request for media identification services.
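As a rough, non-normative illustration of the media type probability index and threshold behaviour described at the start of this passage, the sketch below turns raw per-class scores from some model into probabilities and returns only a media type indication when the top probability exceeds the threshold; the softmax choice and threshold value are assumptions.

import numpy as np

def media_type_probability_index(scores):
    # Softmax over raw per-class scores gives a probability per media type.
    z = np.exp(scores - np.max(scores))
    return z / z.sum()

def classify(scores, labels, threshold=0.8):
    probs = media_type_probability_index(np.asarray(scores, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return labels[best]                       # confident: return the media type only
    return dict(zip(labels, probs.round(3)))      # otherwise expose the full index

print(classify([4.0, 1.0, 0.5], ["music", "speech", "noise"]))   # -> 'music'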
Any of the systems, machines, devices, or associated databases shown or described in connection with FIG. 1 can be, include, or can otherwise be implemented using a special-purpose (e.g., specialized or otherwise non-generic) computer that has been modified (e.g., configured or programmed by software, such as one or more software modules of an application, operating system, firmware, middleware, or other program) to perform one or more of the functions described herein for that system or machine. An example embodiment of a special-purpose computer system configured to implement any one or more of the methodologies described herein is discussed below at FIG. 12. Within the technical field of such special-purpose computers, a special-purpose computer that has been modified by the structures discussed herein to perform the functions discussed herein is technically improved compared to other special-purpose computers that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein. Accordingly, a special-purpose machine configured according to the systems and methods discussed herein provides an improvement to the technology of similar special-purpose machines.

The database 115, which is further discussed below at least at FIG. 6, can include a data storage resource that can be configured to store data in various ways, for example in a text file, a table, a spreadsheet, a relational database (e.g., an object-relational database), a triple store, a hierarchical data store, or any suitable combination thereof. The network 190 can be any network that enables communication between or among systems, machines, databases, and devices (e.g., between the media processor circuit 110 and the first mobile device 130). The network 190 can be a wired network, a wireless network (e.g., a mobile or cellular network), or a combination thereof. The network 190 can include one or more portions that constitute a private network, a public network (e.g., the Internet), or a combination thereof.

The network 190 can include one or more portions that incorporate a local area network (LAN), a wide area network (WAN), the Internet, a mobile telephone network (e.g., a cellular network), a wired telephone network (e.g., a plain old telephone system (POTS) network), a wireless data network (e.g., a WiFi network or WiMax network), or a combination thereof. Any one or more portions of the network 190 can communicate information via a transmission medium. As used herein, "transmission medium" refers to any intangible (e.g., transitory) medium that is capable of communicating (e.g., transmitting) instructions for execution by a machine (e.g., by one or more processors of such a machine), and includes digital or analog communication signals or other intangible media to facilitate communication of such software.

FIG. 2 illustrates generally a flow diagram of a method 200 that can include providing a media classification. The example embodiment in FIG. 2 illustrates generally a flow diagram of a method for determining a media classification for media data 201, for example, using the network environment 100. The media data 201 can be a media query received from one of the first and second mobile devices 130 and 150, or from another source. The method 200 includes, at operation 202, acquiring media training data and, at operation 204, analyzing the acquired media training data.
The method 200 further includes, at operation 206, classifying media data 201 (e.g., a media query that includes audio and/or video information) using various training data. At operation 210, the method 200 includes providing an indication of a media classification, or providing an indication of a likelihood that the media data 201 corresponds to some specified media.

In an example embodiment, the method 200 includes, at operation 206, using a context parameter 203. At operation 206, the media data 201 and the context parameter 203 can be used together to classify the media data using various training data.

In the method 200, features can be extracted from media data (e.g., training data or query data) using various digital signal processing (DSP) methods. For example, mel-frequency cepstral coefficients (MFCC), spectral flatness, mean value, zero crossings, spectral centroid, chord class, or percussiveness, among other extractable features, can be used. Various digital signal processing feature extraction methods and standards (e.g. MPEG-7) known in the art can be used. The extracted features can be used to perform comparative analyses of the media data, for example, using a trained classifier at step 206. In an example embodiment, training data is used to establish reference standards or models relative to identified criteria, and those models can then be used for evaluating the media data 201.

In the example embodiment of FIG. 2, a training set of media data acquired at step 202 includes one or more audio signal samples (e.g. a recording of a musical composition) and one or more editorially formed characteristics corresponding to each of the one or more audio signal samples. In an example embodiment, several characteristics can be considered together to be a media profile or fingerprint corresponding to the training set of media data. In an example embodiment, characteristics for the training data are defined automatically by associating media data with other existing media information (e.g., using the Compact Disc Database, CDDB, or other similar resource). In an example embodiment, characteristics for the training data are defined by one or more persons or aggregated from many persons who evaluate media for inclusion in the database 115. In an example embodiment, characteristics for the training data are generated using various analyses of aggregated end-user and/or editorially-generated tags or prose associated with the content, or other data mining methods can be applied to datasets whose origins can be independent of the training process.

At operation 204, analyzing media from the training data can include extracting various spectral features from the training set of media data. In an example embodiment, extracted features are evaluated against and correlated with the editorially assigned categories to create 'n' number of classification models, such as including at least one model for each of the 'n' categories. Accordingly, in this example, each classification model can be a correlation set of a category with extracted features typical of that category. Such classification models can be shared with a classifier (e.g., implemented by the media processor circuit 110) and applied at operation 206, where one or more classification models can serve as a basis for a comparison or reference standard for classifying the media data 201.
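A sketch of the kind of DSP feature extraction listed above (MFCC, spectral flatness, zero crossings, spectral centroid), here using the librosa library on a synthetic test tone; the particular feature set, parameters, and time-averaging are illustrative assumptions rather than the patent's specification.

import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440 * t)                       # 1 s, 440 Hz test tone

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # timbral envelope
flatness = librosa.feature.spectral_flatness(y=y)           # tone-like vs. noise-like
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)    # spectral "brightness"
zcr = librosa.feature.zero_crossing_rate(y)                  # coarse noisiness measure

# Averaging each feature over time yields one compact, fingerprint-like vector.
features = np.concatenate([mfcc.mean(axis=1), flatness.mean(axis=1),
                           centroid.mean(axis=1), zcr.mean(axis=1)])
print(features.shape)   # (16,)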
At operation 206, classifying the media data 201 using the training data can include extracting various spectral features from the query media data 201. The extracted features can then be compared with or processed using the classification models from operation 204. Classifying media at operation 206 can include, at operation 216, transforming the media data 201, or at operation 226, applying one or more classification models. In an example embodiment, classifying the media data 201 can include using a convolutional neural network trained on the previously analyzed training data. The convolutional neural network can be a processor-implemented classifier that uses media signal features from media data 201 that is sampled over one or more time windows (e.g., several milliseconds to several seconds). The classifier can be configured to detect whether a signal event or spectral feature is present in the media data 201 or not. The convolutional neural network generally contains thousands to millions of parameters that can be tuned for specific media events (e.g. whistles, dog barks, presence of music), and situations (e.g. specific recording device/microphone, environmental noises).

At operation 216, various functions can be applied to the media data 201. For example, a constant Q transform can be applied to convert a data series from the media data 201 to a frequency domain. Various spectral features can be identified in the media data 201 to discern different media types. Local contrast normalization can be performed, and then the resulting data can be applied to the convolutional neural network. When the media data 201 includes an audio signal, the local contrast normalization process can help make classifications invariant to overall volume changes in the audio signal and, to some degree, invariant to masking by extraneous sounds or loud sounds.

At operation 226, a classification model can be applied using the transformed media data 201 as an input to the convolutional neural network. The convolutional neural network can analyze various features of the transformed media data (e.g., over a time window of several seconds) to attempt to identify whether some specified feature or event is present or not. Such a convolutional neural network can include thousands to millions of parameters that can be tuned (e.g., weighted) for specific media events (e.g., whistling noises, dog barks, presence of music, etc.) or can be tuned for specific situations (e.g., media acquired by way of a mobile device, subject to environmental noise, etc.). Different classification models can be associated with unique combinations of tuned parameters.

In an example embodiment, applying a classification model at operation 226 includes detecting various events or characteristics in a query. For example, the media processor circuit 110 can be configured to populate, for a given query, a profile across 'n' categories including assigning a value for each feature or category based on similarities between the extracted features of the media data 201 and the features of 'n' classification models. In an example embodiment, applying a classification model at operation 226 includes detecting whether a query includes (or is likely to include) a specified event, such as an auditory event (e.g. whistle, dog-bark, presence of music, etc.).
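The constant Q transform and local contrast normalization mentioned above can be sketched as follows; the window size, epsilon, and the use of librosa and SciPy are illustrative assumptions, not the disclosed implementation.

import numpy as np
import librosa
from scipy.ndimage import uniform_filter

sr = 22050
y = 0.5 * np.sin(2 * np.pi * 440 * np.linspace(0, 2.0, 2 * sr, endpoint=False))

cqt = np.abs(librosa.cqt(y=y, sr=sr))      # constant-Q magnitude spectrogram
log_cqt = np.log1p(cqt)                    # compress dynamic range

def local_contrast_normalize(x, size=9, eps=1e-6):
    # Subtract a local mean and divide by a local standard deviation, which makes
    # the patch largely invariant to overall volume changes in the audio signal.
    local_mean = uniform_filter(x, size=size)
    local_var = uniform_filter((x - local_mean) ** 2, size=size)
    return (x - local_mean) / np.sqrt(local_var + eps)

normalized = local_contrast_normalize(log_cqt)
print(normalized.shape)    # (n_bins, n_frames), ready to feed a convolutional network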
The classifier can optionally be used to classify discrete audio samples, or it can be used substantially continuously to classify a stream of received audio, for example, by processing a series of samples in sequence or at a specified interval.

In an example embodiment, a context parameter 203 can be used to trigger a classification process. Additionally or alternatively, a classification model can be selected for use based on the context parameter 203. In an example embodiment, the context parameter 203 can be derived from the media data 201 itself, such as by identifying changes in frequency content, amplitude, or in some other characteristic of the media data 201 signal. In an example embodiment, the context parameter 203 can be based on a signal from another source or sensor that is associated with the media data 201. For example, the context parameter 203 can be based on a signal from an accelerometer, a user input, a location sensor, a clock or timer circuit, or other context-providing indication that corresponds with the media data 201.

In an example embodiment, the media processor circuit 110 (or other processor circuit, such as associated with an input device to the network-based media classifier 105) can be configured to monitor the media data 201 for a specified event. The media data 201 can represent a media stream, such as an audio stream that is continuously or intermittently sampled. In this example, the media processor circuit 110 can be configured to operate in an "always on" mode where media classifications can be performed, e.g., at operation 206, at various intervals, or can be performed in response to some triggering event, such as a triggering event that exceeds a specified threshold change in the media data 201 or a specified threshold change in the context parameter 203.

In an example embodiment, the media processor circuit 110 (or other processor circuit, such as associated with an input device to the network-based media classifier 105) can be configured to scan a fixed audio segment or sample to identify one or more sample characteristics, such as to determine whether the sample includes music, speech, or some other media type. If a specified event is detected, then a media classification request can be initiated at operation 206. For example, the media processor circuit 110 can be configured to monitor an audio channel of a baby monitor. When the classifier identifies a change in the frequency content of a signal in the audio channel, the signal can be classified at operation 206 to determine whether it includes, or is likely to include, a baby's cry (e.g., as distinguished from other noises like talking or rustling). If a baby's cry is positively indicated, e.g., at operation 210, then a caregiver can be notified.

At operation 206, classifying the media data 201 can include providing a media type probability index for the media data 201. The media type probability index includes an indication or likelihood that the media data 201 corresponds to a specified media type. For example, the network-based media classifier 105 can be configured to receive the media data 201 or query and then transform the received query into a specified frequency or spectral space. The media processor circuit 110 can then apply a convolutional neural network to generate an indication of a probability or likelihood that the query corresponds to a particular media type.
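A minimal sketch of the "always on" triggering idea above: keep sampling a stream and only request a full classification when the short-term signal energy jumps by more than a threshold (a stand-in for a threshold change in the media data or in a context parameter); the energy measure and threshold value are assumptions for illustration.

import numpy as np

def frame_energy(frame):
    return float(np.mean(np.square(frame)))

def monitor(stream_frames, energy_jump=4.0):
    # stream_frames: iterable of 1-D numpy arrays (successive audio frames).
    previous = None
    for i, frame in enumerate(stream_frames):
        e = frame_energy(frame)
        if previous is not None and previous > 0 and e / previous > energy_jump:
            print(f"frame {i}: energy jumped {e / previous:.1f}x, trigger classification")
        previous = e

rng = np.random.default_rng(1)
quiet = [0.01 * rng.normal(size=1024) for _ in range(5)]
loud = [0.5 * rng.normal(size=1024) for _ in range(2)]
monitor(quiet + loud)    # prints a trigger at the quiet-to-loud transition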
In an example embodiment, the result includes an indication of a relative likelihood that an audio sample corresponds to speech, music, or some combination thereof. Based on other results from the neural network, the results can indicate a likelihood that the audio sample corresponds to some specified audio characteristic or classification, such as a specified song. An output of the neural network can include an audio event probability, or an indication of how likely it is that a particular query corresponds to a specified or designated audio type.

In an example embodiment, at operation 206, classifying media includes using a classification model to narrow a universe of available or potential classifications for a particular sample or user. For example, a classification model can include various weights or other means to constrain available paths in a convolutional neural network. In an example embodiment, the first and second users 132 or 152, and/or their respective devices 130 or 150, can be associated with different classification models, and the models can be applied by the media processor circuit 110 to more efficiently classify media. In an example embodiment, a classification model can be selected based on some a priori knowledge of a source device or user history. For example, a particular classification model, from among multiple different available classification models, can be selected for use when the query includes an audio signal sampled from a mobile telephone's microphone.

In an example embodiment, a first query includes media data 201 that includes classical music, and the first query corresponds to a first parameter that indicates that the query includes classical music. The first query can be processed using a neural network configured by a first classification model, and the first classification model can "tune" the network to more efficiently identify classical music due to various weights, filters, or other components of the network that are selected or structured to discern among different characteristics of classical music. A second classification model, such as tuned for recognizing pop music, may not successfully or efficiently identify the first query. Similarly, a second query that includes spoken word content may not be successfully or efficiently classified using either of the first classification model tuned for classical music or the second classification model tuned for pop music.

Various classification models can be applied to provide media classification at operation 210, such as by loading a model stored locally on a device, or by receiving a model via the network-based media classifier 105. Multiple models can optionally be used or applied in series. In an example embodiment, a model can be particular to a specified class of input devices (e.g., devices used to receive or provide the query or media data 201). For example, a first model can be configured to classify speech and music when received from a mobile telephone microphone. A second model can be configured to classify speech and music when received from a mobile telephone's native sound board. A third model can be configured to identify whether query data includes one or both of speech and music. A fourth model can be configured to classify audio from a television broadcast source. Depending on a detected situation or context, such as based on one or more context parameters, one or more classification models can be selected for use in the method 200.
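As a non-normative sketch of selecting among several classification models based on a context parameter (here, the capture source), a simple registry keyed by context can stand in for the model-selection logic just described; the registry contents and placeholder classifiers are assumptions rather than the disclosed models.

def classify_microphone_audio(features):
    return "speech"      # placeholder model tuned for noisy phone-microphone audio

def classify_broadcast_audio(features):
    return "music"       # placeholder model tuned for clean broadcast audio

MODEL_REGISTRY = {
    "phone_microphone": classify_microphone_audio,
    "tv_broadcast": classify_broadcast_audio,
}

def classify_with_context(features, context, default="phone_microphone"):
    # The context parameter picks which tuned model processes the query.
    model = MODEL_REGISTRY.get(context.get("source"), MODEL_REGISTRY[default])
    return model(features)

print(classify_with_context([0.1, 0.2], {"source": "tv_broadcast"}))   # -> 'music'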
In some example embodiments, a model is chosen automatically by detecting the source device, or by comparing the quality of competing models, or by some analysis of the query to be analyzed, among other criteria.

FIG. 3 illustrates generally a block diagram of some components of a mobile device. In an example embodiment, FIG. 3 shows a block diagram corresponding to various components of the first mobile device 130. The second mobile device 150 can optionally include the same or similar components. In the example embodiment of FIG. 3, the first mobile device 130 includes a first processor circuit 310, a transceiver circuit 320, an accelerometer 330, and an audio playback circuit 340. The example embodiment includes a microphone 350, a location sensor 360, a signal feature generator 370, and a class archive 380. The various components of the first mobile device 130 can be configured to communicate with each other (e.g., via a bus, shared memory, or a switch).

The transceiver circuit 320 can be configured to enable communication between the first mobile device 130 and the network 190, for example, using a wired or wireless communication protocol. In an example embodiment, the transceiver circuit 320 is configured for WiFi or cellular network communications with the network 190. In an example embodiment, the transceiver circuit 320 is configured to receive a broadcast signal, such as a television signal, a radio signal, or other signal that includes digital or analog media signal information.

In an example embodiment, the audio playback circuit 340, microphone 350, location sensor 360, signal feature generator 370, and class archive 380 can form all or part of an app 390, such as can include a software application configured to be executed or instantiated on a mobile hardware device or one or more other devices. In an example, the app 390 is stored (e.g., installed) on the first mobile device 130 and is executable by the first mobile device 130 by one or more processor circuits, such as the first processor circuit 310. In an example embodiment, the app 390 is stored, at least in part, at the cloud 118, and the app 390 interfaces with one or more of the components of the first mobile device 130. In an example embodiment, the one or more processor circuits in the first mobile device 130 can be included (e.g., temporarily or permanently) in the app 390, transceiver circuit 320, accelerometer 330, or any suitable combination thereof. In an example embodiment, the first mobile device 130 is configured to interact with the network-based media classifier 105 via the app 390 to classify media present at (e.g., media played by) or near (e.g., media able to be received or sensed by) the first mobile device 130.

In the example embodiment of FIG. 3, the first mobile device 130 includes various inputs that are configured to sample or receive media information. For example, audio signal information can be sampled from the audio playback circuit 340, or audio signal information can be received using the microphone 350. In an example embodiment, the first mobile device 130 further includes one or more sensors or inputs that can be used to identify or provide corresponding one or more context parameters. For example, the location sensor 360 can include a GPS sensor, a WiFi transceiver, or other sensor that can be configured to determine or provide information about a location of the first mobile device 130.
In an example embodiment, the first mobile device is associated with one or more context parameters by programming or by a user-input. 370 130 105 130 105 370 130 370 340 350 370 370 The signal feature generator can be included at the first mobile device , at the network-based media classifier , or at some other location or device that is communicatively coupled with the first mobile device or the network-based media classifier . The signal feature generator can be configured to identify or extract various media data features, such as spectral, temporal, or other features, from a media signal provided by the first mobile device . For example, the signal feature generator can receive an audio sample from the audio playback circuit or from the microphone and then extract, using digital signal processing (DSP) methods, various features from the audio sample. In an example embodiment, mel-frequency cepstral coefficients (MFCC), spectral flatness, mean value, zero crossings, spectral centroid, chord class, or percussiveness, among other features, can be determined or provided by the signal feature generator based on a media sample. The various features extracted or identified using the signal feature generator can be considered to be a fingerprint that uniquely corresponds to the signal to be identified or classified. 380 390 105 132 130 380 105 115 132 130 380 390 105 130 The class archive can include one or more indications of classifications, classification models, or other information that can be applied by the app (e.g., in coordination with other components of the network-based media classifier ) to classify query media. The classifications, classification models, or other information can be associated with the first user and/or with the first mobile device , and can be applied to narrow a search in a convolution neural network to identify or classify a media sample. In an example embodiment, the class archive includes a user identification code that is applied by the network-based media classifier to retrieve, from the database , one or more classification models associated with the first user and/or with the first mobile device . In an example embodiment, the class archive includes various classification models that can be applied by the app and/or by the network-based media classifier to classify media provided by the first mobile device . FIG. 4 201 410 340 350 130 illustrates generally an example that includes a query sound waveform and a context sensor signal. The query sound waveform can comprise all or a portion of media data that is indicated to be identified or classified. In an example embodiment, the query sound waveform can be received using the audio playback circuit or the microphone of the first mobile device . FIG. 4 420 130 130 420 360 130 412 410 412 201 200 includes the context sensor signal , such as can be received from the first mobile device , or can be determined using information about the first mobile device and/or a user. For example, the context sensor signal can include information from the location sensor that indicates a location of the first mobile device . In an example embodiment, an audio sample can include a segment of the query sound waveform . The audio sample can be used as the media data for classification, for example, according to the method . 420 410 420 410 420 330 130 420 130 420 420 421 420 422 423 420 FIG. 4 The context sensor signal can represent context information that corresponds to the query sound waveform . 
For example, the context sensor signal can be received concurrently with the query sound waveform . In the example embodiment of , the context sensor signal includes position information, such as can be determined using the accelerometer of the first mobile device . The context sensor signal includes, for example, a binary indication of whether the first mobile device is at rest (signal low) or in motion (signal high). Context information can be sampled from the context sensor signal at various intervals, or context information can be sampled when a change is identified in the context sensor signal . For example, first context data can correspond to a first state change in the context sensor signal . Second and third context data and can correspond to periodic samples of the context sensor signal , for example, after some threshold context event is identified. Other context information can additionally or alternatively be used, as further discussed herein. FIG. 5 FIG. 3 FIG. 5 110 110 130 110 510 520 530 110 570 illustrates generally a block diagram of a media processor circuit, such as the media processor circuit . The media processor circuit can include various components that can work together with, or independently from, the components of the first mobile device (see, e.g., ). In the example embodiment of , the media processor circuit includes a query receiver , a classification model selector , and a media type probability index generator . In an example embodiment, the media processor circuit further includes a signal feature generator . 510 520 530 590 105 110 590 118 590 130 150 In an example embodiment, the query receiver , the classification model selector , and the media type probability index generator can form all or part of an app that is stored (e.g., installed) at the network-based media classifier , and is executable by the media processor circuit , or by one or more other processor circuits, to perform media classification services. In an example embodiment, the app is stored, at least in part, at the cloud , and the app interfaces with one or more of the components of the first and second mobile devices and . 510 201 510 201 130 150 510 110 201 110 510 201 The query receiver can be configured to receive query media, such as the media data , that is indicated for classification. In an example embodiment, the query receiver receives the media data from one of the first and second mobile devices and . The query receiver optionally acts as a gate to the other components of the media processor circuit , such as by verifying that the media data includes a query format or query signal content quality that can be processed by the media processor circuit . In an example embodiment, the query receiver is configured to receive one or more context parameters associated with the media data . 110 570 570 370 130 570 570 201 510 570 510 370 The media processor circuit can include the signal feature generator . The signal feature generator can be independent of, or configured for operation in coordination with, the signal feature generator of the first mobile device . The signal feature generator can include code or a processing module that is configured to generate a spectrogram or other spectral content information corresponding to an audio signal sample or video signal sample (e.g., a query media sample). The signal feature generator can be configured to identify or extract various features from the media data received by way of the query receiver . 
For example, the signal feature generator can receive an audio signal sample from the query receiver and then extract, using digital signal processing (DSP) methods, various features from the audio sample. Mel-frequency cepstral coefficients (MFCC), spectral flatness, mean value, zero crossings, spectral centroid, chord class, or percussiveness, among other features, can be determined by the signal feature generator . 520 510 520 The classification model selector can include software code or a processing module or circuit that is configured to use information about at least one context parameter (e.g., a context parameter received using the query receiver ) to select a classification model for use in classifying a media sample. The at least one context parameter can optionally be derived by the classification model selector or by another module, such as by analyzing the query media to be identified, for example, to determine whether the query media includes music, speech, or a combination of music and speech. 520 115 115 115 520 115 370 570 FIG. 6 In an example embodiment, the classification model selector communicates with the database to retrieve one or more classification models from the database . An example of the database is discussed below at . The classification model selector and/or the database can provide recommendations for various classification models to use, for example, based on the context parameter(s) and/or spectral features (e.g., identified by the signal feature generator or ) associated with the query to be identified. 530 370 570 530 530 The media type probability index generator is configured to apply the spectral feature information from the signal feature generators and/or to a classifier network to generate a probability index, or likelihood, that a query corresponds to a specified media type. In an example embodiment, a media type can include a media category (e.g., spoken word, music, classical music, newscast, etc.) or a media type can include an indication of a specified source or specified work, such as a song, TV program (e.g., including an indication of a specific episode), or performer. That is, in an example embodiment, the media type probability index generator can provide an indication of a likelihood that a query corresponds to a specified recording of a specified song by a specified artist. In another example embodiment, the media type probability index generator can provide an indication of a likelihood that a query corresponds to a specified artist. FIG. 6 FIG. 6 115 115 110 190 621 621 illustrates generally a block diagram of a database for storing classification models. The block diagram of can correspond to the database . The database can be communicatively coupled with the media processor circuit and/or the network to provide, among other things, various classification models and neural network data for classifying media. The neural network data can include various structural, filter, or weighting information for use in a convolutional neural network, or in some other similar tool that can provide model-based classification services. 115 610 601 602 603 601 1 2 610 110 602 610 FIG. 6 In an example embodiment, the database in includes a classifier that includes multiple classification models , , and . A first classification model is a function of multiple parameters (PARAM[], PARAM[], . . . , PARAM[n]). 
That is, given a set of parameters, such as including but not limited to context parameters, the classifier can return a corresponding classification model for use by the media processor circuit . The parameters can correspond to context parameters, spectral features, or other information associated with one or more of a query, device, user, or other input. A second classification model can be a function of the same or different multiple parameters. The classifier can include n classification models that correspond to different combinations of parameters. In an example embodiment, a specified combination of parameters can return multiple different classification models. In an example embodiment, a specified combination of parameters can indicate that several different classification models are selected and can be serially applied. 115 110 130 110 115 110 610 110 In an example embodiment, the database can be queried by the media processor circuit whenever at least one parameter changes. For example, in response to a change in a context parameter received from the first mobile device , the media processor circuit can query the database to determine whether an updated classification model is indicated by the changed context parameter. If an updated classification model is not indicated, then the media processor circuit can perform its media classification service using a previously received model. If an updated classification model is indicated, then the classifier can transmit information about the new classification model to the media processor circuit for use in present or subsequent classification operations. FIG. 7 FIG. 7 FIG. 7 700 132 130 701 702 130 711 711 701 illustrates generally an example of using a classification network to identify media content. includes an example of using a classification network, such as a convolutional neural network tuned for a particular media type, to classify media. In the example embodiment, the first user and the first mobile device are illustrated as traveling between a first location (e.g., a music venue) and a second location (e.g., the street outside of the music venue). In the example embodiment of , the first mobile device can be configured to sample first audio data . In an example embodiment, the first audio data is representative of live music being played at the first location . 130 712 712 701 132 701 132 The first mobile device can be further configured to identify first context data . In an example embodiment, the first context data includes one or more of GPS location information identifying the first location , user information about the first user (e.g., calendar entry information, owned music library information, playlist or listening history, etc.), or other information that can be used to identify one or more of the first location and the first user . 130 701 130 701 132 130 In an example embodiment, a media identification query can be generated by the first mobile device to identify audio content at the first location . The media identification query can be generated automatically by the first mobile device , such as based on a specified schedule or on a detected change in ambient sound content of the first location , or the query can be generated in response to a user input (e.g., received from the first user ) at the first mobile device . 
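A minimal sketch of the two automatic query triggers mentioned above, a fixed schedule and a detected change in ambient sound level, is shown below in Python; the interval, the level threshold, and the class name are assumptions made for illustration and are not defined by the specification.

```python
# Illustrative trigger logic: issue a media identification query when a
# re-query interval has elapsed or the ambient sound level changes markedly.
import time
import numpy as np


class QueryTrigger:
    def __init__(self, interval_s: float = 60.0, level_delta_db: float = 6.0):
        self.interval_s = interval_s          # scheduled re-query interval
        self.level_delta_db = level_delta_db  # ambient-change threshold
        self.last_query_time = -np.inf
        self.last_level_db = None

    @staticmethod
    def rms_db(samples: np.ndarray) -> float:
        rms = np.sqrt(np.mean(np.square(samples)) + 1e-12)
        return 20.0 * np.log10(rms + 1e-12)

    def should_query(self, samples: np.ndarray, now: float) -> bool:
        level = self.rms_db(samples)
        elapsed = now - self.last_query_time
        changed = (self.last_level_db is not None
                   and abs(level - self.last_level_db) >= self.level_delta_db)
        self.last_level_db = level
        if elapsed >= self.interval_s or changed:
            self.last_query_time = now
            return True
        return False


if __name__ == "__main__":
    trig = QueryTrigger()
    quiet = 0.01 * np.random.randn(16000)
    loud = 0.2 * np.random.randn(16000)
    print(trig.should_query(quiet, time.time()))  # True: first query, interval treated as elapsed
    print(trig.should_query(loud, time.time()))   # True: ambient level jumped above the threshold
```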
130 390 711 105 130 712 105 105 715 115 110 711 712 711 110 715 711 718 701 715 110 711 The first mobile device , such as using the app , can be configured to provide the first audio data to the network-based media classifier . In an example embodiment, the first mobile device additionally provides the first context data to the network-based media classifier . At the network-based media classifier , a first classification model can be retrieved from the database , and then applied using the media processor circuit to provide a media type probability index that corresponds to the first audio data . The first context data can influence which one or more of n available classification models is selected for use with the first audio data . In an example embodiment, the media processor circuit applies the first classification model in a convolutional neural network to identify, in the first audio data , a first indication of media content , such as corresponding to a song being played at the first location . In an example embodiment, applying the first classification model includes applying various weights or other parameters in a convolutional neural network, such as using the media processor circuit , to process and identify the first audio data . FIG. 7 132 701 702 712 722 130 721 702 132 130 132 130 In the example embodiment of , the first user transitions or relocates from the first location to the second location . In response to the detected change in location, such as indicated by a change from the first context data to the second context data , the first mobile device can sample second audio data that is representative of the new environment, that is, representative of the second location . The sampling and re-sampling of the audio data can be triggered in various ways. In an example embodiment, the sampling can be performed automatically in response to a detected change in the location or context of the first user . In an example embodiment, the sampling can be performed in response to a user input at the first mobile device . In an example embodiment, the sampling can be performed periodically or at some specified interval. The sampling can be triggered in response to other detected changes associated with the first user and/or with the first mobile device . 721 722 105 105 725 721 110 725 730 721 728 702 FIG. 7 The second audio data , such as together with the second context data , can be provided to the network-based media classifier . The network-based media classifier can optionally identify a different classification model, such as a second classification model , to apply using the convolutional neural network to generate an indication of a media type for the second audio data . In an example embodiment, the media processor circuit applies the second classification model at the classification network to identify, in the second audio data , a second indication of media content , such as corresponding to a different second song being played at the second location . In the example embodiment of , the different second song could be played by an outdoor speaker system of a nearby restaurant. FIGS. 8-11 FIGS. 12 and 13 illustrate generally examples of determining media type probabilities, selecting different classification models for use in a classification network (e.g., a convolutional neural network trained to identify various media), and using context parameters, among other things. 
illustrate generally examples of training a neural network, and determining when or whether to expend computational resources to provide media classification services, such as using a convolutional neural network, based on one or more characteristics of a media query to be identified. Each of the methods can be implemented using code that can be executed using the systems, methods, or devices described herein. FIG. 8 800 810 340 350 130 illustrates generally an example embodiment that includes determining an audio type probability index. The example embodiment can include a first process comprising multiple different steps or operations. At operation , the example embodiment includes accessing audio data or an audio signal that represents a query sound to be identified. Accessing audio data can include, among other things, receiving or identifying an audio sample using various portable or fixed electronic devices, such as including cellular telephones, televisions, or tablet computers, among other devices. In an example embodiment, accessing audio data includes monitoring the audio playback circuit or sampling ambient audio information using the microphone of the first mobile device . 820 810 810 At operation , the example embodiment includes accessing a first context parameter. The first context parameter can correspond to the query sound from , and/or the context parameter can correspond to a device, a user, or can otherwise be associated with the query sound accessed at operation . A context parameter that corresponds to a query sound can include time-varying context signal information that is received substantially concurrently with, or recently before or after, the query sound. In this manner, the context parameter can provide information about what is or what may be occurring at or near the location or time at which the query sound is accessed. In an example embodiment, a context parameter that corresponds to a query sound can be substantially time-invariant, such as can include information about a user identification, or a device type, among other things. 820 360 130 360 In an example embodiment, accessing the first context parameter at operation includes receiving location information from the location sensor of the first mobile device . For example, the first context parameter can include location information (e.g., GPS or other coordinates) that is determined using the location sensor . The location information can be associated with a particular query sound that is recorded or sampled when the location was determined. 830 810 830 310 130 110 105 At operation , the example embodiment includes determining spectral features of audio data, for example, of the audio data accessed at operation . Determining spectral features can include, among other things, using digital signal processing (DSP) methods to identify mel-frequency cepstral coefficients (MFCC), spectral flatness, mean value, zero crossings, spectral centroid, or other extractable features. Various digital signal processing feature extraction methods and standards (e.g. MPEG-7) can be used. Determining spectral features at operation can include using one or more of the first processor circuit of the first mobile device , the media processor circuit of the network-based media classifier , or some other processor or circuit to receive and analyze a digital audio sample that is to be classified. 
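For a concrete, non-authoritative example of operation 830, the following Python sketch computes several of the features named above using the open-source librosa library (the specification does not prescribe any particular library) and stacks their per-sample means into a single feature vector.

```python
# Minimal feature-extraction sketch: MFCCs, spectral flatness, spectral
# centroid, and zero-crossing rate, summarized into one vector per sample.
import numpy as np
import librosa


def extract_spectral_features(y: np.ndarray, sr: int) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbral envelope
    flatness = librosa.feature.spectral_flatness(y=y)         # noise-like vs. tone-like
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral "brightness"
    zcr = librosa.feature.zero_crossing_rate(y)                # coarse pitch/noisiness cue
    # Summarize each time-varying feature by its mean over the sample.
    return np.concatenate([
        mfcc.mean(axis=1),
        flatness.mean(axis=1),
        centroid.mean(axis=1),
        zcr.mean(axis=1),
    ])


if __name__ == "__main__":
    sr = 22050
    y = 0.1 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)  # 1 s test tone
    features = extract_spectral_features(y, sr)
    print(features.shape)  # (16,): 13 MFCCs + flatness + centroid + ZCR
```

Summarizing each feature by its mean is only one choice; a practical fingerprint might retain the full time series or higher-order statistics before the vector is passed to a classifier network.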
835 310 130 110 105 835 830 At operation , the example embodiment can include determining a likelihood that the query sound can be positively classified based on a query sound characteristic or based on a context parameter characteristic. The query sound and/or context parameter can optionally be analyzed (e.g., using one or more of the first processor circuit of the first mobile device , the media processor circuit of the network-based media classifier , or some other processor or circuit to receive and analyze a digital audio sample that is to be classified) to identify a characteristic or feature that can be used or compared to a specified threshold characteristic or threshold feature to provide an indication of a likelihood that the query can be classified. In an example embodiment, operation can include using the spectral feature(s), determined at operation , to determine the likelihood that the query sound can be positively identified. 835 820 830 835 810 820 835 800 840 835 800 840 835 800 FIG. 13 Although operation is illustrated as following operations and , the determination at operation can be performed substantially immediately following acquisition of the query sound (e.g., following operation ), or can be performed for the context parameter substantially immediately following acquisition of the context parameter (e.g., following operation ). Based on the result of operation , the process can continue at operation . For example, if the likelihood determined at operation is above a specified threshold likelihood that the query sound can be identified, then the process can continue at operation with selecting a classification model for use in classifying the query sound. If the likelihood determined at operation is below a specified threshold likelihood, then the process can terminate, or can re-sample one or more of the query sound and/or the context parameter, such as to attempt to acquire a better quality signal for use in a media classification process. An example embodiment that includes determining whether to proceed with a classification, such as based on a characteristic of a media query, is included herein in the discussion of . 840 840 601 603 110 115 105 FIG. 6 At operation , the example embodiment includes selecting a classification model for use with the audio data, for example, to be used to identify or classify the audio data. In an example embodiment, selecting the classification model at operation includes using at least one context parameter. The at least one context parameter can be one of multiple parameters used to select or identify a classification model (see, e.g., classification models - in the example embodiment of ). In an example embodiment, the media processor circuit is configured to retrieve a classification model from the database in the network-based media classifier . The retrieved classification model can include parameters for use with a neural network that can be tuned (e.g., by the retrieved model) to identify or classify the query sound based on the audio data. 850 830 At operation , the example embodiment includes determining an audio type probability index for the query sound using the first classification model. 
In an example embodiment, determining the audio type probability index includes performing comparative analyses of the various spectral features (e.g., determined at operation ) at one or multiple nodes in a neural network to identify an audio type or a probability index that indicates that the audio data is likely to correspond to a specified audio type or audio content. In an example embodiment, determining the audio type probability index includes using the audio data, the first classification model, and a trained classifier network. In an example embodiment, determining the audio type probability index includes classifying or identifying the query sound by title, artist, source, or otherwise positively identifying a correlation between the query sound and a particular characteristic. 860 850 860 FIG. 12 At operation , the example embodiment can include storing information about whether the audio type probability index, such as determined at operation , indicates that the query sound was successfully or unsuccessfully classified, that is, whether the query was properly identified, misidentified, or not identified. Operation can optionally be performed during normal system use, or during a training process. An example of a training process is illustrated below at . 860 At operation , whether a classification operation is successful or not can be determined automatically, such as by identifying that one or more characteristics of the query sound, the context parameter, and/or the audio type probability index, exceeds a specified threshold characteristic (i.e., by using an objective measure). In an example embodiment, whether a classification operation is successful or not can be indicated by a user responding to an inquiry about whether the user accepts or rejects a result of the classifier. 840 A query sound can be unsuccessfully classified or misidentified for numerous reasons. For example, a poor quality or noisy media signal can be difficult to analyze, or can yield spectral features or a signal fingerprint that does not accurately correspond to underlying media signal content, such as due to masking by noise or other signal anomalies. Noise can be introduced in a media signal when the media signal is sensed using an environment sensor like a microphone, or can be introduced by electrical or electromagnetic noise, or via other means. A poor quality or noisy context parameter can be similarly difficult to analyze if it is noisy or if it includes anomalies that lead to a misinterpretation. If one or more of the query sound and/or the context parameter are noisy or inaccurate, then the classification model selected at operation can be improper and, as a result, the determined audio type probability index can be poor. 860 In an example embodiment, the information stored at operation includes relative or absolute information about the success (or lack of success) in classifying or identifying the query sound, and optionally includes information about a correspondence between the success and one or more characteristics of the query sound, the context parameter, or the classification model(s) used. The stored information can be later retrieved and compared to a new query sound, context parameter, or model, such as for use in identifying a likelihood that a classification of the new query sound will be successful. If the likelihood of success is too low, then a classification process can be suspended in order to preserve computational resources or power for other tasks. FIG. 
9 900 illustrates generally an example embodiment that includes selecting a classification model based on context parameters. The example embodiment can include selecting among multiple different classification models that are available for use in a classification network. 910 110 310 910 At operation , the example embodiment includes accessing first and second context parameters corresponding to a query sound. The first and second context parameters can be of the same type or format (e.g., both can include location information or coordinates, time stamp information, user profile information, etc.), or the context parameters can be different. One or both of the first and second context parameters can be determined using, or can be derived from, an audio sample or query sound to be identified. One or both of the context parameters can be determined based on sensor information, user information, or other information that can indicate a characteristic of an audio sample, or can indicate a characteristic of a user or device that is associated with an audio sample. In an example embodiment, the media processor circuit and/or the first processor circuit can access the first and second context parameters at operation . 920 130 701 701 701 701 At operation , the example embodiment includes determining a search scope characteristic that is associated with each of the first and second context parameters. For example, a first search scope can be associated with the first context parameter, and a different second search scope can be associated with the second context parameter. In an example embodiment, the first context parameter includes GPS coordinates for the first mobile device . A first search scope associated with the GPS coordinates can be relatively narrow, for example, if it can be determined that the GPS coordinates correspond to the first location . In this case, a search scope characteristic can optionally be narrowed to include only media context that is known to be associated with the first location (e.g., as can be determined by a calendar of events associated with the first location , or other prior or programmed information about the first location ). 130 In an example embodiment, the second context parameter includes an indication of a device type that was used to acquire query data. In this example, the second context parameter can include information indicating that audio data or a query sound of interest was acquired using the first mobile device . Using information about the source of the query sound, for example, a search scope characteristic can be narrowed or tuned to identify media that is most likely to be received by a mobile device. Additionally or alternatively, one or more pre-processing filters can be tuned to optimize spectral information sampled from a mobile device. 930 940 840 940 FIG. 8 At operation , the example embodiment includes selecting one of the first and second context parameters. The selected one of the context parameters can be used to select a classification model at operation (see, e.g., at operation ). That is, the selection of a particular classification model at operation can be based on, or informed at least in part by, the selected one of the context parameters. In an example embodiment, a context parameter that is selected for use can be a context parameter that corresponds to a relatively narrower search scope. When the search scope is narrower, performing media classification can be more focused and less processing resources are thus consumed. 
Accordingly, in the example embodiment above, the first context parameter can be selected when the GPS coordinates correspond to a smaller or more narrow set of potential media classifications, such as compared to the second context parameter that can correspond to any or all media that can or is likely to be sampled by a mobile device. 940 610 940 110 FIG. 6 Selecting the classification model at operation can include using the selected one of the first and second context parameters. In some examples, different weights can be associated with different context parameters, and multiple context parameters can be used to select a classification model (see, e.g., at classifier that includes multiple classification models, and each different model can be a function of multiple different parameters). Selecting the classification model at operation can include using the media processor circuit to retrieve the selected classification model and to apply the selected classification model in the context of a neural network that is configured to provide an indication of a media type. FIG. 10 FIG. 10 FIG. 7 1000 illustrates generally an example embodiment that includes selecting an updated classification model. The example embodiment can include receiving and parsing updated context parameter information. Some portions of correspond generally to the example embodiment illustrated at . 1010 421 420 130 150 110 105 FIG. 4 At operation , the example embodiment can include identifying a classification model update trigger. Identifying an update trigger can include, among other things, determining that a query sound characteristic has changed (e.g., determining that one or more of a tone, amplitude, timbre, percussiveness, or other quality is different than in an earlier query sound or sample), determining that an update interval has elapsed, or determining that a context parameter associated with a query sound has changed by more than a specified threshold amount. Referring again to the example at , an update trigger can be provided, for example, based on the first context data , such as can be associated with a change in the state of the context sensor signal . In an example embodiment, the classification model update trigger can be identified by a source device (e.g., by the first or second mobile device or ), by the media processor circuit , by a user input to a source device and/or to the network-based media classifier , or by other means. 1020 110 1020 130 712 722 FIG. 10 FIG. 7 At operation of , the example embodiment includes receiving at least one updated context parameter. For example, in response to identifying the classification model update trigger, the media processor circuit or other component can request an updated context parameter, such as to evaluate whether a classification model update is required or desired. At operation , the example embodiment can include receiving an updated context parameter based on a change in a continuous or intermittent query stream, or receiving the updated context parameter based on a detected change in device status (e.g., a change in location or change in user). In the context of , for example, receiving an updated context parameter can include using the first mobile device (or other device or process) to receive and recognize the first context data and the different second context data . 
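The following Python sketch, offered only as an illustration, combines two of the ideas discussed above: preferring the context parameter associated with the narrowest search scope (operation 930), and triggering model reselection only when a context parameter changes by more than a specified threshold amount (operation 1010). The scope estimates, threshold values, and function names are invented for the example.

```python
# Illustrative helpers for context-parameter selection and update triggering.

def narrowest_context(candidates: dict) -> str:
    """candidates maps a context-parameter name to an estimated search-scope
    size, e.g. the number of candidate classifications it leaves in play."""
    return min(candidates, key=candidates.get)


def model_update_needed(old_value: float, new_value: float, threshold: float) -> bool:
    """Trigger classification-model reselection only when a context parameter
    changes by more than the specified threshold amount."""
    return abs(new_value - old_value) > threshold


if __name__ == "__main__":
    scopes = {
        "gps_at_known_venue": 200,      # only media associated with this venue
        "device_type_mobile": 500_000,  # anything a mobile microphone might hear
    }
    print(narrowest_context(scopes))              # gps_at_known_venue
    print(model_update_needed(0.0, 120.0, 50.0))  # True: e.g., moved ~120 m against a 50 m threshold
```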
1030 1030 105 1040 If more than one context parameter update is identified, then at operation the example embodiment includes selecting at least one of the updated context parameters for use in selecting a classification model. In an example embodiment, a context parameter corresponding to a narrower classification of media is selected at operation , such as to narrow the scope of a search performed by the network-based media classifier . At operation , the example embodiment includes selecting a classification model based on the updated context parameter(s). The selected classification model can be applied as described above for classifying query media associated with the updated context parameter(s). FIG. 11 1100 illustrates generally an example embodiment that includes determining when to apply an updated classification model. For example, various considerations can be evaluated to determine whether or when to expend resources in updating a context parameter, or retrieving or applying an updated classification model. 1110 110 1105 At operation , a processor circuit such as the media processor circuit can determine whether a specified duration has elapsed since a prior query was classified. The specified duration can represent a minimum “retry” or “reclassify” interval between queries. At operation , if the specified duration has not elapsed, then the system can pause for a specified update interval. Upon expiration of the update interval, the query can proceed. 1120 110 310 1105 At operation , the media processor circuit , the first processor circuit , or other processor can be used to analyze a query for quality. If the query includes audio signal information that is deemed to be too noisy or too quiet (i.e., the query has insufficient signal content) for use in a classification network, then the method can return to . After pausing for a specified interval, a query can be re-sampled such as to see if noise or amplitude levels have changed. In some examples, some networks or classification models can be more tolerant of noise. In other examples, a priority indication or user-override can be provided to force the network to process a noisy signal. 1130 1105 If the query is determined to have a sufficient minimum signal quality, then at operation the example embodiment can include determining whether the query itself has changed by more than some specified threshold amount. If the query itself is reasonably unchanged from a prior query (e.g., in frequency content, tone, percussiveness, or other quantitative or qualitative measure), then the system can assume that a prior query still represents present query content to be identified, and a reclassification can be avoided or delayed. The example embodiment can include a return to to pause, and a new query can be generated after a specified interval or after identification of a trigger event. 1140 1145 1150 1145 1160 If the query is sufficiently different from an earlier query, then at operation , the example embodiment can include determining whether a context parameter associated with the query has changed by more than a specified threshold amount. If the context parameter has not changed, or if a new context parameter is not available, then the example embodiment can continue at operation by selecting or applying a prior classification model to the query. 
If the context parameter is determined to have changed by more than the specified threshold amount, or if a new context parameter is available, then the example embodiment can continue at operation by determining whether the changed or new context parameter itself meets some specified minimum signal quality. For example, if a context parameter includes noise, or represents an unknown context, then a prior classification model can be used (e.g., at operation ). If, however, the context parameter meets the quality threshold, then the example embodiment can continue at operation by selecting and applying a new classification model, based on the new context parameter, to classify the query. FIG. 12 1200 illustrates generally an example embodiment that includes training a media query identification system. The example embodiment can include a process that identifies a tolerance for changes in one or more input signal characteristics. For example, the process can be used to determine how much noise can be tolerated in a media query signal. An amount of noise or variation from a pure or expected media query and/or context parameter can be referred to as a tolerance. If too much noise is present in an input signal, then the noise level exceeds a specified tolerance threshold, and it can be unlikely that a media type classification based on the noisy input signal will be accurate. If a noisy signal can be identified early, such as before an attempt is made to identify or classify a query, then a system can preserve computing resources and/or power by inhibiting media classification services. If less than a specified threshold amount of noise (or other signal characteristic anomaly) is present in a media query, then a system can carry out the media classification process with a reasonable likelihood of success. That is, if an input signal has a noise (or other signal anomaly characteristic) that is less than a specified noise tolerance threshold, then the system can proceed with media classification of the input signal. 1210 1220 1212 1212 1214 At operation , the example embodiment includes accessing a media query for training. At operation , the example embodiment includes accessing a context parameter for training. In an example embodiment, the accessed media query and/or context parameter can be pre-selected, such as by an individual or machine tasked with training a media classification system. At operation , a media training parameter can be introduced. Introducing the media training parameter can include, among other things, adding or changing a noise characteristic, a frequency profile, a frequency filter, or other characteristic of the original media query. That is, introducing the media training parameter at operation can include intentionally modifying the content or one or more characteristics of the original media query to provide an updated media query at operation . In an example embodiment, the media training parameter corresponds to a specified tolerance threshold, or the media training parameter can be selected to exceed a prior tolerance threshold, such as to test or train one or more new conditions with the classification system. 1222 1220 1224 1214 1210 1224 1220 At operation , a context training parameter can be similarly introduced for the context parameter accessed at operation . 
Introducing the context training parameter can include intentionally modifying the content or one or more characteristics of the original context parameter to provide an updated context parameter at operation . In some example embodiments, at least one of the media training parameter and the context training parameter is not used, and the updated media query provided at operation is the same as the original media query accessed at operation , or the updated context parameter provided at is the same as the original context parameter accessed at operation . 1230 1240 1240 At operation , a first classification model can be selected. The first classification model can be selected based on one or more of the media query, the updated media query, the context parameter, or the updated context parameter. In an example embodiment, the classification model can be selected by a user tasked with training a system, or can be selected automatically by a machine. At operation , the example embodiment can use the first classification model to attempt to classify the updated media query. Operation can include one or more of providing an indication of a media type (e.g., including an indication of a media source, origin, or other identifying characteristic) corresponding to the media query, or the operation can include providing a likelihood that the media query corresponds to a specified media type. 1250 1210 1260 1260 1212 1212 1250 At operation , the example embodiment can include determining whether the media query was successfully identified, such as based on a priori information about the original media query that was accessed at . If the media query is successfully identified, then the example embodiment can continue at operation with updating a tolerance threshold. For example, at operation , a tolerance threshold, such as for use in later classification tasks for other media queries, can be updated to correspond to the media training parameter introduced at operation . In an example embodiment, the media training parameter introduced at operation includes a first noise profile and, when a media query having the first noise profile is successfully classified at operation , then a tolerance threshold corresponding to the first noise profile can be selected for use in later classification tasks for the same or other media queries. 1250 1270 1270 1270 1212 1250 At operation , if the media query is not successfully identified, then the example embodiment can continue at operation with updating one or both of the media training parameter and the context training parameter. In an example embodiment, updating the training parameters at operation includes selecting for further analysis a training parameter that corresponds to a different tolerance threshold than was previously used. In an example embodiment, at operation , a tolerance threshold, such as for use in later classification tasks for other media queries, can be updated to correspond to a media training parameter that is previously known to yield successful classifications. In an example embodiment, the media training parameter introduced at operation includes a first noise profile and, when a media query having the first noise profile is unsuccessfully classified at operation , then a tolerance threshold corresponding to a different second noise profile can be selected for use in later classification tasks for the same or other media queries. FIG. 
13 1300 1310 1320 110 105 310 130 130 1320 illustrates generally an example embodiment that includes determining whether to classify a media query. In an example embodiment, a media query can be previously acquired or accessed, or a media query can be received periodically or intermittently. At operation , the example can include pausing for a specified minimum duration or update interval, such as a specified time from a previous classification event. In an example, after the update interval is elapsed, or in response to a user request for media classification, a noise characteristic of a media query can be identified at operation . In an example embodiment, the noise characteristic can be identified using the media processor circuit , or using some other processor circuit, such as can be associated with an input device to the network-based media classifier . For example, the first processor circuit of the first mobile device can be used to identify a noise characteristic of a media query received by or prepared by the first mobile device . Based on the noise characteristic identified at operation , a media classification system can be caused to initiate or inhibit a media classification process. It can be advantageous to inhibit or prevent a media classification process from proceeding, for example, to conserve processing capacity of one or more processor circuits for other tasks. 1330 1320 1332 110 310 100 1310 1332 1350 840 FIG. 12 FIG. 8 At operation , the example embodiment includes comparing the noise characteristic identified at operation with a specified noise tolerance threshold. The noise tolerance threshold can be specified by a user, by a programmer, or the noise tolerance threshold can be learned by a system employing various network training operations (see, e.g., the example embodiment of ). At operation , if the identified noise characteristic is determined (e.g., by the media processor circuit , by the first processor circuit , or by another processor circuit associated with a device in the network environment ) to exceed a specified noise tolerance threshold, then the query can be deemed to be too noisy to classify and the example can return to operation to pause for an update interval. After the update interval elapses, the example could continue, for example, by sampling or acquiring a subsequent query with a different noise characteristic, and the example could include analyzing the subsequent query to determine whether its different noise characteristic is more suitable for classification. If, at operation , the identified noise characteristic does not exceed the specified noise tolerance threshold, then the example can continue at operation with classifying the query. For example, the embodiment can continue at operation in the example of by selecting a classification model for use with the query. 1340 1320 1310 1342 840 FIG. 8 At operation , the example embodiment includes comparing the noise characteristic identified at operation with a result of a prior classification attempt where the prior classification attempt is for a query having the same or similar noise characteristic. If a prior classification attempt was not successful at identifying or classifying the query having the same or similar noise characteristic, then query classification can be avoided and the example embodiment can return to operation . 
If, at operation , a prior classification attempt was successful at identifying or classifying the query having the same or similar noise characteristic, then a query classification process can commence or continue, such as at operation in the example of . FIG. 13 Although the example embodiment of refers to a noise characteristic of a query, other characteristics can be similarly analyzed. For example, instead of a noise characteristic, a frequency content characteristic can be identified and used to determine whether to classify a query. In an example embodiment, a visual characteristic such as a color content characteristic can be identified and used to determine whether to classify a query that includes visual media or video information. Other media characteristics can be similarly analyzed and used as a gate to initiating or inhibiting media classification processes. FIG. 14 FIG. 14 FIG. 14 1400 1424 1422 1400 1424 1400 is a block diagram illustrating components of a machine, according to some examples, that can read instructions from a machine-readable medium and perform any one or more of the methods discussed herein. In the example embodiment of , a machine , according to some example embodiments, is able to read instructions from a machine-readable medium (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and to perform any one or more of the methods discussed herein, in whole or in part. shows the machine in the example form of a computer system (e.g., a computer) within which the instructions (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine to perform any one or more of the methodologies discussed herein can be executed, in whole or in part. 1400 1400 1400 1424 1424 1400 105 1400 100 In an example embodiment, the machine operates as a standalone device or can be communicatively coupled (e.g., networked) to other machines. In a networked deployment, the machine can operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment. The machine can be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smart phone, a set-top box (STB), a personal digital assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or other machine capable of executing the instructions , sequentially or otherwise, that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall be understood to include any collection of machines that individually or jointly execute the instructions to perform all or part of any one or more of the methods discussed herein. In an example embodiment, the machine includes a device that can be used as an input to the network-based media classifier , or the machine can include all or a portion of the network . FIG. 
14 1400 1402 1404 1406 1408 1402 1424 1402 In the example embodiment of , the machine includes a processor circuit (e.g., one or more central processing units (CPUs), one or more GPUs, one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any suitable combination thereof), a main memory , and a static memory , which are configured to communicate with each other via a bus . The processor circuit contains solid-state digital microcircuits (e.g., electronic, optical, or both) that are configurable, temporarily or permanently, by some or all of the instructions such that the processor circuit is configurable to perform any one or more of the methodologies described herein, in whole or in part. 1400 1410 1400 1412 1414 1416 1418 1420 The machine can include a graphics display capable of displaying graphics or video, and the machine can include an alphanumeric input device (e.g., a keyboard or keypad), a pointer input device , a data storage , an audio generation device (e.g., a sound card, an amplifier, a speaker, a headphone jack, or any suitable combination thereof), or a network interface device . 1416 1422 1424 1424 1404 1406 1402 1400 1404 1406 1402 1424 190 1420 1420 1424 In an example embodiment, the data storage (e.g., a data storage device) includes the machine-readable medium (e.g., a tangible and non-transitory machine-readable storage medium) on which are stored the instructions embodying any one or more of the methodologies or functions described herein. The instructions can also reside, completely or at least partially, within the main memory , within the static memory , within the processor circuit (e.g., within the processor's cache memory), or any suitable combination thereof, before or during execution thereof by the machine . Accordingly, the main memory , the static memory , and the processor circuit can be considered machine-readable media (e.g., tangible and non-transitory machine-readable media). The instructions can be transmitted or received over the network via the network interface device . For example, the network interface device can communicate the instructions using any one or more transfer protocols (e.g., hypertext transfer protocol (HTTP)). 1400 1430 1430 In an example embodiment, the machine is a portable computing device (e.g., a smart phone, a tablet computer, or a wearable device), and can have one or more additional input components (e.g., sensors or gauges), such as discussed herein. Examples of such input components include an image input component (e.g., one or more cameras), an audio input component (e.g., one or more microphones), a direction input component (e.g., a compass), a location input component (e.g., a global positioning system (GPS) receiver), an orientation component (e.g., a gyroscope), a motion detection component (e.g., one or more accelerometers), an altitude detection component (e.g., an altimeter), a biometric input component (e.g., a heart rate detector or a blood pressure detector), and a gas detection component (e.g., a gas sensor). Input data gathered by any one or more of these input components can be accessible and available for use by any of the components described herein. 
1422 1424 1400 1424 1400 1402 1400 1424 1400 1424 As used herein, the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and can be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing the instructions for execution by the machine , such that the instructions , when executed by one or more processors of the machine (e.g., processor circuit ), cause the machine to perform any one or more of the methodologies described herein, in whole or in part. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more tangible and non-transitory data repositories (e.g., data volumes) in the example embodiment form of a solid-state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof. A “non-transitory” machine-readable medium, as used herein, specifically does not include propagating signals per se. In some example embodiments, the instructions for execution by the machine can be communicated by a carrier medium. Examples of such a carrier medium include a storage medium (e.g., a non-transitory machine-readable storage medium, such as a solid-state memory, being physically moved from one place to another place) and a transient medium (e.g., a propagating signal that communicates the instructions ). Some examples are described herein as including modules. Modules can constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof. A “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and can be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems or one or more hardware modules thereof can be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module. 100 FIG. 1 One or more of the processor circuits discussed herein can perform operations in a “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). For example, at least some operations within any one or more of the methods discussed herein can be performed by a group of computers (e.g., as examples of machines that include processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)). 
The performance of certain operations can be distributed among the one or more processors, whether residing only within a single machine or deployed across a number of machines, for example, in the network 100 of FIG. 1. In some example embodiments, the one or more processors or hardware modules (e.g., processor-implemented modules) can be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or hardware modules can be distributed across a number of geographic locations. Several additional examples follow. Example 1 can include or use subject matter (such as an apparatus, a method, a means for performing operations, or a machine readable memory including instructions that, when performed by the machine, can configure the machine to perform acts), such as can include or use a method for classifying media, the method comprising accessing, using one or more processor circuits, digital media data that represents a media query to be identified, the digital media data provided by a first remote device, accessing, using the one or more processor circuits, a first context parameter that corresponds to the media query to be identified, the first context parameter provided by the same first remote device, determining, using the one or more processor circuits, spectral features corresponding to the digital media data, and selecting, using the one or more processor circuits, a first classification model stored in a database, the first classification model being one of a plurality of different classification models stored in the database, the selecting based on the first context parameter. Example 1 can further include determining, using the one or more processor circuits, a media type probability index for the media query using the first classification model and the determined spectral features corresponding to the digital media data, wherein the determined media type probability index indicates a likelihood that the media query corresponds to at least one media characteristic of a plurality of different media characteristics. Example 1 can further include receiving, at the first remote device, one or both of the media type probability index and the at least one media characteristic. Example 2 can include or use, or can optionally be combined with the subject matter of Example 1, to further include identifying, using the one or more processor circuits, a change in the digital media data or a change in the first context parameter and, in response, selecting a different second classification model from among the plurality of different classification models, and determining, using the one or more processor circuits, an updated media type probability index using the different second classification model. Example 3 can include or use, or can optionally be combined with the subject matter of Examples 1 or 2, to further include wherein the selecting the first classification model from among a plurality of different classification models includes selecting one or more of the determined spectral features and using information about the selected one or more features with a first portion of a neural network, and wherein the determining the media type probability index includes using an output of the neural network.
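Purely as an illustration of the flow recited in Example 1 (and not as the implementation described in these examples), the sketch below selects a stored classification model based on a context parameter, derives crude spectral features, and returns a per-class media type probability index. The model database, context strings, class labels, and feature extractor are all hypothetical stand-ins.

```python
# Illustrative sketch only. The model database, context strings, class labels,
# and feature extractor below are hypothetical; they are not the implementation
# recited in the examples above.
import numpy as np

CLASSES = ["speech", "music", "other"]          # hypothetical media characteristics
rng = np.random.default_rng(0)

# Hypothetical database of classification models, keyed by a context parameter.
# Each "model" is just a random linear classifier for demonstration purposes.
MODEL_DB = {
    "mobile_microphone": rng.normal(size=(64, len(CLASSES))),
    "tv_broadcast":      rng.normal(size=(64, len(CLASSES))),
}

def spectral_features(audio: np.ndarray, n_bands: int = 64) -> np.ndarray:
    """Very rough spectral summary: mean magnitude in n_bands FFT bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

def classify(audio: np.ndarray, context: str) -> dict:
    feats = spectral_features(audio)
    model = MODEL_DB[context]                   # selection based on the context parameter
    logits = feats @ model
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                        # media type probability index per class
    return dict(zip(CLASSES, probs))

# Usage with a fake query "captured" by a phone microphone.
print(classify(rng.normal(size=16000), context="mobile_microphone"))
```

A real system would replace the random linear models with trained classifiers and the band-averaged FFT with proper audio features; the sketch only shows how a context parameter can drive model selection before scoring.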
Example 4 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-3, to further include wherein the determining the media type probability index includes using a neural network with the selected first classification model to provide an indication of a likelihood that the digital media data corresponds to a specified audio event or specified visual event, wherein the neural network is previously trained using a priori information about the specified audio event or the specified visual event. Example 5 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-4, to further include wherein the accessing the first context parameter includes accessing a context parameter that indicates that the digital media data includes audio data received by a microphone of a mobile device, wherein the selecting the first classification model includes selecting a speech/music classification model for mobile devices, and wherein the determining the media type probability index includes using the selected speech/music classification model for mobile devices and using the determined spectral features corresponding to the digital media data that includes the audio data received by the microphone of the mobile device. Example 6 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-5, to further include wherein the accessing the first context parameter includes accessing a context parameter that indicates that the digital media data includes audio data received from a television broadcast, wherein the selecting the first classification model includes selecting a speech/music classification model for television broadcast, and wherein the determining the audio type probability index includes using the selected speech/music classification model for television broadcast and using the determined spectral features corresponding to the digital media data that includes the audio data received from the television broadcast. Example 7 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-6, to further include wherein the accessing the first context parameter includes accessing an indication of a source type of the digital media data, and wherein the source type includes one or more of a mobile device, a broadcast video or broadcast audio stream, a local signal source, or a microphone signal source. Example 8 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-7, to further include accessing, using the one or more processor circuits, a second context parameter that corresponds to the media query to be identified, wherein the second context parameter is provided by the same first remote device or a different device, and determining, using the one or more processor circuits, search scope characteristics that are respectively associated with each of the first and second context parameters, and selecting, from the database and using the one or more processor circuits, the one of the first and second context parameters associated with a narrower search scope, wherein the selecting the first classification model includes using the selected one of the first and second context parameters associated with the narrower search scope. 
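Examples 5 through 7 describe choosing a speech/music model according to the source type of the media query. A minimal, hypothetical dispatch table (the source-type strings and model identifiers are invented for illustration) might look like this:

```python
# Sketch of the source-type dispatch suggested by Examples 5-7. The source-type
# strings and model identifiers are hypothetical placeholders.
SOURCE_TYPE_TO_MODEL = {
    "mobile_microphone": "speech_music_mobile",      # cf. Example 5
    "tv_broadcast":      "speech_music_broadcast",   # cf. Example 6
    "local_signal":      "speech_music_generic",
}

def select_model_id(source_type: str, default: str = "speech_music_generic") -> str:
    """Return the identifier of the stored classification model for a source type."""
    return SOURCE_TYPE_TO_MODEL.get(source_type, default)

assert select_model_id("tv_broadcast") == "speech_music_broadcast"
```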
Example 9 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-8, to further include accessing, using the one or more processor circuits, a second context parameter that corresponds to the media query to be identified, determining, using the one or more processor circuits, signal quality characteristics that are respectively associated with each of the first and second context parameters, and selecting, using the one or more processor circuits, one of the first and second context parameters based on the determined respective signal quality characteristics, wherein the selecting the first classification model includes using the selected one of the first and second context parameters. Example 10 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-9, to further include wherein the accessing the first context parameter that corresponds to the media query includes accessing context information that temporally coincides with the media query to be identified. Example 11 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-10, to further include wherein the accessing the first context parameter includes determining the first context parameter using a determined characteristic of a sampled portion of the digital media data itself. Example 12 can include or use, or can optionally be combined with the subject matter of Example 11, to include or use wherein the determining the first context parameter using the media data itself includes determining whether the media data includes one or more of previously-recorded music, live music, speech, television audio, movie audio, game audio, or other audio. Example 13 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-12, to further include wherein the accessing the first context parameter includes receiving context information from a sensor device associated with the first remote device, the sensor device including one or more of a GPS or location sensor, an accelerometer, a microphone, a clock or timer circuit, or a user input. Example 14 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-13, to further include analyzing the determined spectral features corresponding to the digital media data to determine whether a threshold change has occurred in the media query since earlier digital media data was accessed, and if the threshold change has not occurred, then inhibiting the determining the media type probability index. Example 15 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-14, to further include wherein the accessing the digital media data includes periodically or intermittently sampling audio data from a continuous query sound source, and wherein the determining the media type probability index includes determining an audio type probability index for each of the respective periodically or intermittently sampled audio data. Example 16 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-15, to further include determining a search depth parameter, and wherein the determining the media type probability index includes using the search depth parameter to determine a processor circuit resource amount to expend in determining the media type probability index. 
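Examples 8 and 9 describe picking between two available context parameters, preferring the one associated with a narrower search scope or a better signal quality. A toy sketch of the scope-based choice, with made-up scope scores, could be:

```python
# Sketch of Examples 8-9: when two context parameters are available, prefer the
# one associated with the narrower search scope. The scope scores are invented.
SEARCH_SCOPE = {            # smaller value = narrower search scope
    "gps_location":      10,
    "mobile_microphone": 50,
    "time_of_day":      200,
}

def pick_context(first: str, second: str) -> str:
    """Return whichever context parameter maps to the narrower search scope."""
    return min((first, second), key=lambda c: SEARCH_SCOPE.get(c, float("inf")))

assert pick_context("mobile_microphone", "gps_location") == "gps_location"
```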
Example 17 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-16, to further include wherein the accessing the digital media data includes analyzing a signal quality of the media data and, if the signal quality is insufficient, then re-sampling the digital media data corresponding to a different portion of the media query. Example 18 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-17, to further include wherein the selecting the first classification model includes selecting the model from among a plurality of classification models that are previously associated with a specified user. Example 19 can include or use, or can optionally be combined with the subject matter of at least one of Examples 1-18, to further include wherein the determining the media type probability index associated with the media query includes using the spectral features as an input to a convolutional neural network that provides the media type probability index and/or the at least one media characteristic. Example 20 can include or use subject matter (such as an apparatus, a method, a means for performing operations, or a machine readable memory including instructions that, when performed by the machine, can configure the machine to perform acts), such as can include or use a tangible, non-transitory computer-readable storage medium comprising instructions that, when executed by at least one processor of a machine, cause the machine to perform operations comprising: receiving, from a remote device, digital media data that represents a media query to be identified; receiving, from the remote device, a first context parameter that corresponds to the media query to be identified; selecting a first classification model from a database, the first classification model being one of a plurality of different classification models stored in the database, the selecting based on the received first context parameter; determining a media type probability index for the media query using a neural network to apply the selected first classification model; determining at least one media characteristic for the media query using the determined media type probability index; and providing, to the remote device, the at least one media characteristic for the media query. Example 21 can include or use, or can optionally be combined with the subject matter of Example 20, to include wherein the operations further comprise: receiving, from the same or a different remote device, a subsequent second context parameter that corresponds to the same or different media query to be identified; selecting a second classification model from the database, the selecting based on the second context parameter; and determining an updated media type probability index for the same or different media query to be identified using the neural network to apply the selected second classification model. 
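Example 19 mentions feeding the spectral features to a convolutional neural network that yields the media type probability index. The untrained PyTorch sketch below only shows the general shape of such a model; the layer sizes and the three output classes are assumptions, not the architecture of the examples above.

```python
# Untrained sketch of Example 19: spectral features in, media type probability
# index out. Layer sizes and the three output classes are assumptions.
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.head = nn.Linear(8 * 16, n_classes)

    def forward(self, spectral_features: torch.Tensor) -> torch.Tensor:
        # spectral_features: (batch, n_bands) -> per-class probabilities
        x = self.conv(spectral_features.unsqueeze(1))
        return torch.softmax(self.head(x.flatten(1)), dim=-1)

probs = SpectralCNN()(torch.randn(1, 64))   # shape (1, 3); each row sums to 1
```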
Example 22 can include or use, or can optionally be combined with the subject matter of at least one of Examples 20 and 21, to further include wherein the operations further comprise: receiving an indication of a change in a status of the remote device, and in response to receiving the indication: accessing a subsequent second context parameter that corresponds to the same or different media query to be identified; selecting a second classification model from the database, the selecting based on the second context parameter; and determining an updated media type probability index for the same or different media query to be identified, using the neural network to apply the selected second classification model. Example 23 can include or use, or can optionally be combined with the subject matter of Example 22, to include or use wherein the operations for receiving the indication of the change in status of the remote device include receiving sensed information about a change in an environment characteristic corresponding to the device. Example 24 can include or use, or can optionally be combined with the subject matter of at least one of Examples 20-23, to further include wherein the operations further comprise determining a noise characteristic associated with the media query and, when the determined noise characteristic indicates a noise level associated with the media query that exceeds a specified threshold noise level, then inhibiting one or more of the operations for accessing the first context parameter, for selecting the first classification model from the database, or for determining the media type probability index. Example 25 can include or use, or can optionally be combined with the subject matter of at least one of Examples 20-24, to further include wherein the operations further comprise: determining a spectral characteristic associated with the media query and, based on the determined spectral characteristic, selectively inhibiting one or more of the operations for accessing the first context parameter, for selecting the first classification model from the database, or for determining the media type probability index. Example 26 can include or use subject matter (such as an apparatus, a method, a means for performing operations, or a machine readable memory including instructions that, when performed by the machine, can configure the machine to perform acts), such as can include or use a system comprising a mobile device that includes a first processor circuit, a first sensor configured to receive digital media data that represents a media query to be identified, and a second sensor configured to sense first context information about the mobile device, wherein the first processor circuit is configured to transmit the digital media data and the first context information to a remote media classifier. 
In Example 26, the system can further comprise a second processor circuit associated with the remote media classifier, the second processor circuit configured to receive the digital media data and the first context information from the mobile device, and the second processor circuit configured to execute instructions that cause the second processor circuit to perform operations comprising: selecting a first classification model from a database comprising a plurality of different classification models, the selecting based on the received digital media data and the first context information, determining a media characteristic for the media to be identified using the selected first classification model and the digital media data, and providing, to the mobile device, the determined media characteristic. Example 27 can include or use, or can optionally be combined with the subject matter of Example 26, to include or use wherein the first processor circuit is configured to coordinate a series of data transmissions to the remote media classifier at specified intervals, and wherein the data transmissions include respective samples of digital media data sampled using the first sensor and corresponding samples of context information sensed using the second sensor, and wherein the second processor circuit is configured to execute instructions that cause the second processor circuit to perform operations further comprising: selecting a different second classification model from the database, the selecting based on a corresponding pair of the sampled digital media data and corresponding sampled context information; and determining the media characteristic using the selected different second classification model. Example 28 can include or use, or can optionally be combined with the subject matter of at least one of Examples 26 and 27, to further include wherein the mobile device further comprises a third sensor configured to sense different second context information about the mobile device, and wherein one of the first and second processor circuits is configured to select one of the first and different second context information for use in selecting the first classification model from the database. Example 29 can include or use, or can optionally be combined with the subject matter of at least one of Examples 26-28, to further include wherein the mobile device further comprises a spectral analysis circuit that is configured to identify a noise characteristic for one or both of the digital media data and the first context information, and wherein the first processor circuit is configured to transmit the digital media data and the first context information to the remote media classifier when the identified noise characteristic indicates a noise level that is less than a specified threshold noise level. Example 30 can include or use, or can optionally be combined with the subject matter of at least one of Examples 26-29, to further include wherein the mobile device further comprises a spectral analysis circuit that is configured to identify a frequency characteristic of the digital media data, and wherein the first processor circuit is configured to transmit the digital media data and the first context information to the remote media classifier when the identified frequency characteristic satisfies a specified frequency characteristic condition. 
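Examples 26 through 29 describe a mobile device that samples media data and context, checks signal quality before transmitting, and sends both to a remote media classifier (which can also inhibit classification on its side, per Examples 24 and 25). The client-side sketch below is purely hypothetical: the endpoint URL, payload fields, noise threshold, and transmission interval are invented for illustration, and no real service API is implied.

```python
# Hypothetical client-side sketch of Examples 26-29: a mobile device samples
# audio (first sensor) and context (second sensor), gates transmission on a
# crude noise estimate (cf. Example 29), and ships the query to a remote
# media classifier. URL, payload fields, threshold, and interval are invented.
import json, time, urllib.request
import numpy as np

CLASSIFIER_URL = "https://classifier.example.invalid/classify"  # placeholder
NOISE_THRESHOLD_DB = -20.0                                      # placeholder

def sample_audio() -> np.ndarray:          # stand-in for the microphone sensor
    return np.zeros(16000)

def sample_context() -> dict:              # stand-in for, e.g., a location sensor
    return {"source_type": "mobile_microphone", "timestamp": time.time()}

def noise_floor_db(audio: np.ndarray, frame: int = 512) -> float:
    """Rough noise-floor estimate: power of the quietest frame, in dB."""
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame + 1, frame)]
    quietest = min(float(np.mean(f ** 2)) for f in frames)
    return 10.0 * np.log10(quietest + 1e-12)

def send_query():
    audio = sample_audio()
    if noise_floor_db(audio) > NOISE_THRESHOLD_DB:
        return None                        # too noisy: do not transmit
    payload = json.dumps({"media": audio.tolist(),
                          "context": sample_context()}).encode("utf-8")
    request = urllib.request.Request(CLASSIFIER_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)         # e.g. the determined media characteristic

# Transmissions at specified intervals (left commented out: the URL is a placeholder).
# while True:
#     print(send_query()); time.sleep(30)
```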
Example 31 can include or use, or can optionally be combined with the subject matter of at least one of Examples 26-30, to further include wherein the second sensor includes a microphone configured to sense an ambient noise characteristic associated with the digital media data, and wherein the second processor circuit is configured to perform the selecting the first classification model from the database using the sensed ambient noise characteristic associated with the digital media data. Example 32 can include or use, or can optionally be combined with the subject matter of at least one of Examples 26-31, to further include wherein the second sensor includes a location sensor that is configured to sense information about a location of the mobile device, and wherein the second processor circuit is configured to perform the selecting the first classification model from the database using the sensed information about the location of the mobile device. This detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. The present inventors contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Although various general and specific embodiments are described herein, it will be evident that various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part of this application show, by way of illustration, and not of limitation, specific embodiments in which the subject matter can be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments can be used or derived there from, such that structural and logical substitutions and changes can be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. 
Specific embodiments or examples are illustrated and described herein; however, it should be appreciated that any arrangement calculated to achieve the same purpose can be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.

BRIEF DESCRIPTION OF THE DRAWINGS

Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
FIG. 1 illustrates generally an example embodiment of a system configured to receive and analyze media information.
FIG. 2 illustrates generally a flow diagram of a method that can include providing a media classification.
FIG. 3 illustrates generally a block diagram of some components of a mobile device.
FIG. 4 illustrates generally an example embodiment that includes a query sound waveform and a context sensor signal.
FIG. 5 illustrates generally a block diagram of a media processor circuit.
FIG. 6 illustrates generally a block diagram of a database for storing classification models.
FIG. 7 illustrates generally an example embodiment of using a classification network to identify media content.
FIG. 8 illustrates generally an example embodiment that includes determining an audio type probability index.
FIG. 9 illustrates generally an example embodiment that includes selecting a classification model based on context parameters.
FIG. 10 illustrates generally an example embodiment that includes selecting an updated classification model.
FIG. 11 illustrates generally an example embodiment that includes determining when to apply an updated classification model.
FIG. 12 illustrates generally an example embodiment that includes training a media query identification system.
FIG. 13 illustrates generally an example embodiment that includes determining whether to classify a media query.
FIG. 14 is a block diagram illustrating components of a machine, according to some examples, that can read instructions from a machine-readable medium and perform any one or more of the methods discussed herein.
Keariam Abera's victory is what you can refer to as miraculous. If you recall, Keariam was evicted in the first round of the show and was voted back in by the Ethiopian team of ten. Out of the five Ethiopians that were evicted in the first round, he was chosen to rejoin the queue of those hoping to win the 20,000 dollars. He came back into the competition more determined, stronger and better. What a good selection the team made, as it eventually paid off; Keariam was announced the winner of the first edition of the Maltavator Challenge. In this interview, he tells us how he was able to conquer the others and relate with contestants from other countries despite the differences in language, culture, belief and character, and reveals his favourite country in the first edition of the Malta Guinness "Maltavator Challenge" TV game show. Excerpts below.

PROFILE
Born 27 years ago in the month of July, Ethiopian Keariam Abera Michael is a business administration and management graduate of Unity University, Ethiopia. His friends love to call him Karizu. He is handsome, athletic, swift, gentle, well behaved, outspoken and charming. His good looks can be attributed to his amicable and sociable character. He loves playing basketball, taking photographs, hiking and watching movies. Beyond his looks and physique, Keariam is clear-sighted and bright as a button. His childhood was as fun as that of any other kid lucky enough to have everything.

PARTICIPATION IN THE MALTA GUINNESS MALTAVATOR CHALLENGE TV GAME SHOW
My participation started out as a simple one. A friend of mine told me about the competition. He said I should give it a try and that I would make the cut. He also said it was an avenue for me to visit Nigeria for the first time and see how the country is. I have always looked forward to participating in competitions like this but never did, until my friend encouraged me to take part in this one. And since it's abroad, I felt it would be a great experience to travel. I eventually did. I guess I got both.

ABOUT THE MALTAVATOR CHALLENGE
The games in the Maltavator Challenge were great. The whole place was beautiful, the people were great, and the crew was outstanding at their job. I met a lot of people and I had a lot of fun.

MOST MEMORABLE EXPERIENCE
The night when our neighbours, the Ghanaians, made us (the Ethiopians) dinner and we ate together. We ate from the pot with spoons and forks. While feasting, Toyosi from Nigeria came in and we all started arguing about whose jollof was sweeter: Ghanaian jollof or Nigerian jollof. The next day, team Nigeria made us dinner, in cooperation with our Ethiopian ladies. We stayed up a while that night and had some sing-along music; it was indeed fun. Another memorable moment would be when Rajen, also from Ethiopia, and I sneaked off from the competition to get some A/C in the bathroom - lol.

DID YOU MAKE FRIENDS
Yes, of course I made friends and learnt from all of them. I learnt from the Ethiopians that I didn't know before the competition. Also from the Ghanaians like Daniel, Prince, Marshall, Simon and Oliver. Basically, I learnt from most of the guys in the Nigeria and Ghana teams. Also, some of the girls from the Nigeria team like Chizy and Love. Also, from the crew of the Malta Guinness Maltavator Challenge TV game show - Emmanuel, Adebare, the entire filming crew of AVF. Basically, most of the people in the competition.

MOST SOCIABLE COUNTRY
I think the Ghanaians were everyone's favourite, because they were talking and chatting with everyone. They kept pushing each other as a team, and even the other countries' teams.
They were just the coolest and very good at fixing back problems, because Daniel fixed mine. Again, we clicked with the Ghanaians first; they were our neighbours and truthfully they were very cool. They were also big and fit, so that was impressive to see. Then the Nigerians, and lastly the Ivorians. There was a language barrier, but that didn't keep us from being friends. Also, we all had fun at the swimming pool.

LIVING TOGETHER WITH OTHER ETHIOPIANS YOU JUST MET
Of course, we were living and eating together. Our room actually had 3 girls in addition to me and Dagnachew, but all the other guys usually came to our place, so we were all together most of the time.

CHALLENGES FACED
Truthfully, there were times I doubted myself. I mean, I didn't make the top 5, but my team mates voted me in, God bless them all. And even moving on, I had my drawbacks, but God gave me strength and pushed me through. I'd say it's an honour and a pleasure to have participated in the 'Maltavator Challenge', let alone win. Also, the weather was an issue; we're not used to temperatures rising to 33 degrees and above. Another challenge was in the rope game "Malta Moves", where we had to jump from one rope to another. The ropes were slippery and my fingers got caught in the middle.

HOW DO YOU FEEL AS A WINNER
It feels great to win. From all those people I managed to become the victor; it's something only God can do.

WHAT MADE YOU WIN
I would say God. I asked for strength and drive and he gave me that.

YOU WON, WHAT DO YOU HAVE THAT OTHERS DON'T HAVE
I honestly don't think I have something they don't have. Physically, I'm smaller than most of them; they are more fit than I am. But some of the competitions came out to be to my advantage, I guess. At the end of the day, I believe God did this for me because it was part of his plan.

YOU WERE VOTED BACK BY THE TEAM. WOULD YOU CALL IT FATE OR LUCK. WHY DID THEY VOTE YOU
I would call it both luck and fate, and if I had to pick one I'd call it fate. When I knew I was in the bottom, I kept praying to God to save me, because he knows how badly I wanted to remain in the competition and also win it. Even if I didn't know how to, I just knew God would take care of that part. All I had to do was just ask for it. Other people got votes too, but I guess they had more faith in me, maybe because they knew that I had more to give or maybe because of how friendly I was with everyone. I am grateful to them for keeping me in the game, especially those who gave up their own votes to keep me going in the competition, as some of them were in the same bottom spot as I was.

WHAT WAS RUNNING THROUGH YOUR MIND AT THE LAST STAGE? WERE YOU SCARED?
Yes, I actually was, because on the last lap my go-kart got hit and the engine stopped running. In that moment, what was going through my mind was: what if the others catch up with me, and will they take into consideration that the engine stopped running? And after that, I didn't know if it was the last lap or if I had made my final lap, since I wasn't counting laps but concentrating on the corners. Despite being unsure, I made my final stop at the pit, which would have cost me dearly if I hadn't finished my laps.

WAS THE MALTA GUINNESS MALTAVATOR CHALLENGE TV GAME SHOW A FAIR GAME
Yes, I think it was a fair game, because the competition was balanced. If it was all about strength, Ghana would have taken it. But it wasn't just about one trait.
FIRST TIME IN NIGERIA AND THE EXPERIENCE
Yes, it was. Truthfully, we didn't see as much of Nigeria as we wanted to, since the time frame was short. But I would say it was fun, especially the time we went to the Femi Kuti Shrine.

ROLE MODELS
Starting from my family, there are a lot of people whose dedication and hard work I admire. My mom is the most patient person I know. She is dedicated to anything she does. She is kind, caring and has the utmost faith in God, even in difficult times. I believe it's her prayers that have got me here in life. My uncles and aunts are the most hard-working and truly decent people I know. And Kobe Bryant, the Mamba himself. His dedication and perseverance are something to follow.

WHAT MAJOR THING DID YOU LEARN IN THE MALTAVATOR CHALLENGE TV GAME SHOW AND TOOK BACK TO ETHIOPIA
Even though we are people with different cultures and languages, we are basically the same. I have learnt how to be more friendly and how to take another person's culture or traditions into mine. In truth, not everything from another person is easy to accept. For instance, we're quieter than the other 3 countries; they were louder and more active. But that's just how we grew up and how they did, and it doesn't mean we can't find a compromise around it. So I'd say compromise was what I learned.

NEXT STEP
Next step is the future. With God, I have some plans. As he helped me in the Malta Guinness Maltavator Challenge, he'll help me in any challenge of life.

PRIZE MONEY
I promised God I'd give back to him even before I won, so I'll do that and use the rest to invest in my future. Also, I recently got accepted to grad school. I am on a tight deadline to make payment. I haven't got the money yet, but let's hope I'll make it by the time of the deadline.

RELATIONSHIP STATUS
Not in any relationship.

HOW DO YOU LOVE YOUR WOMEN
Now, that's a funny question; it seems like I'm ordering some sort of ice cream. Well, I like them just right. Just a decent, easygoing person, down to earth in the ways that matter; not too much drama.

FINAL WORDS TO MALTA GUINNESS FOR ORGANISING THE SHOW
I would say thank you very much for the opportunity of a lifetime. These moments are unforgettable for me and it's all because of Malta Guinness. I hope they will keep doing this so others can know what I mean. Getting people together is one thing, but getting them involved with one another is another, and they accomplished both. I have drunk enough Malta to last me a few months. Whenever it gets hot and I see a cold bottle of Malta Guinness, it takes me back to those moments at the competition. Long live the Malta brand; I hope you have even better years ahead of you. On a final note, it would be great if they produced the Herb edition in Ethiopia. It just has that right amount of sweetness.

PEOPLE SKEPTICAL ABOUT PARTICIPATING IN COMPETITIONS, THINKING IT IS A SCAM
On the issue of scams, I'd say know what you're getting into first, do some research. Don't enter into anything just because it's there, and that's not only for a competition, it's for life in general. But for the competition, just try it. At the end of the day, we don't regret the chances we took but those we didn't (assuming it's the right type of chance; the chance to rob a bank doesn't qualify here). Go take a risk, get out of your comfort zone. I did, and it paid off with God's help.

WHAT PEOPLE DON'T KNOW ABOUT KEARIAM
I sometimes watch football/soccer. It's a rare occasion. I secretly admire LeBron James but won't let my friends know it, lol.
And I have a hard time with time conversion.
https://ogefash.reviews/ogefash-blog-interview-with-keariam-the-winner-of-the-first-edition-of-maltavator-challengemaltaguinness/
What Does it Mean to Be Healthy? Lesson 1 Chapter 1

Introductory Note About Patient Education: At Equilibrium Natural Health we see our patients as much more than people we prescribe treatments to. We recognize you, the patient, as the most influential person in your health. We partner with you to empower you with the knowledge needed to make educated health decisions in your life. Our experience shows that if a patient has a basic understanding of the 'why' and 'how' behind their health, they will be much better prepared to effectively implement healthy habits into their lives. Please take time to review the five brief lessons in this course, 'Introduction to Natural Medicine'.

The World Health Organization defined health as follows: "Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." In 1986, the W.H.O. further clarified that health is: "A resource for everyday life, not the objective of living. Health is a positive concept emphasizing social and personal resources, as well as physical capacities." As illustrated, healthy living is not defined by an absence of symptoms; symptoms are only half the battle. Wellness is defined by the individual's awareness, education and growth in their health. Wellness is being proactive about health, not simply going to the doctor every time something hurts. Remember, it's not as important where you may be currently standing as it is which direction you are trying to go.

Four Corners Health
Most people go to the doctor because they have symptoms that affect their health. These symptoms are likely uncomfortable and most people want them to stop. Traditional healthcare does a pretty good job of reducing symptoms. Addressing and curing symptoms and getting to the neutral point on the continuum is a fine place for some people. However, others want better health than just not having to take medications any more. Some people want more energy, better sleep or more healthy relationships. The healthcare we provide at Equilibrium Natural Health meets that need. While we work with patients on curing symptoms naturally, our fundamental practices also help to move our patients towards high-level wellness at the same time.

The Insurance View On Health And Wellness
Most people think they have healthcare to keep themselves well, but that is not what health care is set up for. Traditional healthcare would more appropriately be labeled 'disease care' instead of 'wellness care'. Medicare guidelines, section 2251.3 (guidelines for all insurances): "A treatment plan that seeks to prevent disease, promote health, and prolong and enhance the quality of life; or therapy is performed to maintain or prevent deterioration of a chronic condition is deemed not medically necessary." In other words, traditional medicine does not cover actually getting well, that is, moving past merely getting rid of symptoms, if that is what you are interested in doing. That does not mean it won't be able to help you with your symptoms; however, it is important for you to understand that traditional medicine will only get you so far. All in all, seeing your health as a proactive practice in your life will help you to receive the very greatest benefits you can while working with us at Equilibrium Natural Health. Addressing your symptoms while moving you towards high-level wellness requires looking at all components of your health, a method we call 'the triangle of health', which we cover in the next lesson.
https://kitsapnaturopathic.com/course/what-does-it-mean-to-be-healthy
WATCH: Wreckage Of Costa Concordia Begins Its Final Voyage More than two years after the luxury liner Costa Concordia wrecked off the Italian island of Giglio, killing 32 people, its wreckage has finally begun its voyage to a salvage yard in the port of Genoa. As NPR's Sylvia Poggioli reports, the operation is complex, involving a 14-boat convoy. She filed this report for our Newscast unit: "Boat sirens and fog horns sounded through the tiny Italian port of the island of Giglio when the moorings of the Costa Concordia were finally loosened. "The 15,000-ton vessel — the size of three football fields — had been flipped upright last September in a complex operation called parbuckling. "Then, 10 days ago, salvage workers and engineers began to refloat and stabilize the ship. "This morning, encased in a straitjacket of 30 flotation tanks, and escorted by a 14-vessel convoy, the ship began its journey. At a steady speed of 2 knots, she's expected to arrive in Genoa on Sunday. "The convoy includes a marine-mammal spotting boat as it sails through the Tuscan archipelago, Europe's largest marine sanctuary." We'll leave you with time-lapse video of the first hours of the operation: Back in September, we posted a time-lapse video of the Costa Concordia being righted. Copyright 2021 NPR. To see more, visit https://www.npr.org.
https://www.wmky.org/2014-07-23/watch-wreckage-of-costa-concordia-begins-its-final-voyage
Simon-Kucher has helped numerous chemicals, materials and metals companies develop and institute superior pricing and sales strategies that have increased their profitability. In these sectors, many products are commodities whose price is largely determined by market demand and supply. However, we have helped market leaders develop successful pricing strategies based on supply curves, competitive dynamics and a value-based price logic. We help clients optimise contract elements and structures as well as surcharges and sundries. We also help them manage large account negotiations and develop sales strategies, structures, processes and governance to make their sales teams more effective. Materials and metals companies are profitable only if they can balance volume and price, and then manage their costs to deal with highly volatile pricing pressures. Professional price management and effective sales management are what give these companies the edge.
https://www.simon-kucher.com/en-sg/industries/chemicals-and-materials/base-materials-and-metals
Located in Dieskau, this hotel is within 6 mi (10 km) of Halloren Chocolate Factory, Maya Mare, and Halle Opera House. Franckesche Foundation and New Theater are also within 6 mi (10 km). Hotel Features A restaurant, a bar/lounge, and meeting rooms are available at this hotel. Free buffet breakfast, free WiFi in public areas, and free self parking are also provided. Additionally, express check-in, tour/ticket assistance, and a front-desk safe are onsite. Room Amenities All 100 rooms offer free WiFi, satellite TV, and safes. Guests will also find showers and desks. Hotel Amenities The hotel offers a restaurant. A bar/lounge is on site where guests can unwind with a drink. A complimentary breakfast is offered each morning. Public areas are equipped with complimentary wireless Internet access. Event facilities measuring 1302 square feet (121 square meters) include meeting rooms. This business-friendly hotel also offers a terrace, tour/ticket assistance, and express check-in. Complimentary self parking is available on site. - Safe-deposit box at front desk - Conference space size (meters) - 121 - Express check-in - Terrace - Bar/lounge - Elevator/lift - Conference space size (feet) - 1302 - Front desk (limited hours) - Free breakfast - Tours/ticket assistance - Meeting rooms 2 - Restaurant - Total number of rooms - 100 - Free self parking - Number of floors - 3 - Free WiFi - Number of buildings/towers - 1 Family Friendly Amenities - Family Hotel - Free Wi-Fi - Cribs/infant beds (complimentary) Internet Available in all rooms: Free WiFi Available in some public areas: Free WiFi Parking Free self parking Room Amenities - Air conditioning - Daily housekeeping - Private bathroom - Shower only - Satellite TV service - Iron/ironing board (on request) - Desk - Free WiFi - In-room safe - In-room climate control (air conditioning) - Free cribs/infant beds - Rollaway/extra beds (surcharge) Where to Eat Guests are offered a complimentary buffet breakfast each morning. Onsite venue - This restaurant serves breakfast and dinner. Nearby Things to Do The recreational activities listed below are available either on site or nearby; fees may apply. - Hiking/biking trails nearby Hotel Policies Check-in Check-in time ends at midnight Check-in time starts at 2 PM Check-out Check-out time is 11 AM Payment types Children and extra beds - Children are welcome. - Kids stay free! Children 12 years old and younger stay free when using existing bedding. - Rollaway/extra beds are available for EUR 15 per night. - Free cribs (infant beds)! Pets - Pets allowed for an extra charge of EUR 10 per pet, per night You need to know Extra-person charges may apply and vary depending on property policy. Fees Optional extras - Pet fee: EUR 10 per pet, per night - Rollaway bed fee: EUR 15 per night Hotel Name - ARC Hotel Kabelsketal - ARC Kabelsketal We should mention Children 12 years old and younger stay free when occupying the parent or guardian's room, using existing bedding. National Rating Hotelstars Union assigns an official star rating for properties in Germany. This property is rated 3 stars.
https://www.expedia.com/Halle-Saale-Unstrut-Hotels-ARC-Hotel.h1510985.Hotel-Information?chkin=11%2F21%2F2012&chkout=11%2F24%2F2012&hashTag=roomsAndRates
The Fallacy Detective has been my favorite book this term. It's very simple, and very, very helpful, while putting wit into every chapter. It gives you insight into a lot of fallacies that people commit, and most of the time you don't even notice them! Well, not anymore! This one isn't one of my favorite chapters, but it's still good, and I just read it, so I'll give a summary of it. Either-Or is the fallacy someone commits if they only see two options to a problem, when there really are other options besides those two. An example the book gives is the line "Give me Liberty, or give me Death", quoted from Patrick Henry. There are other options besides liberty or death, and in this case, Henry probably knew that (so… not really a good example), but with this sentence he's limiting himself to two options. This is what Dad talks to us about a lot: not limiting yourself, keeping yourself open to other options and ideas. It's a very simple concept this chapter, so that will pretty much do it for this summary. I'll probably start giving summaries of previous chapters if I have nothing else to write about, as I really ENJOY reading this book and spotting fallacies.
https://gabedalrymple.com/2015/03/26/the-fallacy-detective-either-or/
Role description: Chair of Accumulate Board of Trustees (voluntary).
Established in 2014, Accumulate uses creativity to improve the wellbeing of young people who are affected by homelessness. We work with hostels and housing organisations to deliver learning experiences and showcasing opportunities in art, photography, creative writing, audio-visual production, fashion and other creative disciplines. Our activities support people on the margins of society, and enable them to increase their confidence, skills and self-value and, in the longer term, facilitate them onto their journey into training, education and employment. We are now looking for a dynamic and pro-active chair of trustees.

Key responsibilities
Provide leadership to the board and ensure that trustees fulfil their duties and responsibilities for the proper governance of the charity.

Duties and tasks to fulfil the key responsibilities
To guard the long-term future of the charity by ensuring that:
- the board sets the mission, vision, strategy and high-level policies for the charity within the powers and restrictions in its charitable objects and governing instruments
- the board takes steps to monitor the performance of the charity and to ensure that the charity satisfies all regulatory and legal compliance requirements
- the charity's financial dealings are systematically accounted for, audited and publicly available
- the board and the charity are fair and open to all sections of the community in all the charity's activities
- the board and the charity hear the voices and views of key stakeholders, especially beneficiaries.
To ensure the highest possible standards of governance by ensuring that:
- the charity has a governance structure that is appropriate to Accumulate and its charitable objects, and that these structures and the governing instruments are reviewed regularly
- the board delegates sufficient authority to its chair, the director and others to enable the business of the charity to be carried on effectively between meetings of the board
- the board has on it the skills it requires to govern the charity well and these skills are utilised, and that the board has access to relevant external professional advice and expertise.
To ensure the proper and efficient conduct of board meetings by:
- chairing trustee meetings effectively, seeking consensus, balancing the need for full debate on key questions with the expeditious despatch of business so as to reach clear and agreed decisions as swiftly as possible
- taking an active role in ensuring that board agendas are meaningful and reflect the key responsibilities of trustees
- ensuring that board decisions are made in the best, long-term interests of the charity and that the board takes collective ownership of these decisions
- ensuring that decisions taken at meetings of the board are implemented
- ensuring that there is an annual programme of board meetings, carefully structured agendas and high quality briefing papers providing timely information and concentrating on governance.
To support the director by:
- ensuring that the board focuses on its governance role and does not slip incrementally, or otherwise, into the management role (unless essential to the good governance of the charity)
- arranging regular, but not over-frequent, meetings with the director, and developing a very professional relationship with the director within which each can speak openly about concerns, worries and challenges
- providing leadership to the director to ensure that the charity is run in accordance with the decisions of the board and the charity's governing documents, and that there is clarity about the charity's objectives at all levels
- in partnership with the director, agreeing respective roles in representing the charity and acting as spokesperson.

Personal Qualities
- Demonstrate a strong and visible passion and commitment to the charity, its strategic objectives and cause
- Exhibit strong inter-personal and relationship-building abilities and be comfortable in an ambassadorial role
- Strong networking capabilities that can be utilised for the benefit of the charity
- Ability to foster and promote a collaborative team environment
- Ability to commit time to conduct the role well, including travel and attending events out of office hours

Experience
- Experience of operating at a senior strategic leadership level within an organisation
- Successful track record of achievement through their career
- Experience of charity governance and working with or as part of a Board of Trustees
- Experience of external representation, delivering presentations and managing stakeholders
- Significant experience of chairing meetings and events

Knowledge and skills
- Broad knowledge and understanding of the homelessness, creative industries or creative education sector and current issues affecting it
- Strong leadership skills, ability to motivate staff and volunteers and bring people together
- Financial management expertise and a broad understanding of charity finance issues
- Good understanding of charity governance issues

Terms
The charity's Chair (and board members) will serve a three-year term and be eligible for reappointment for one additional term.

Board Meetings
London based, 6 – 9 times per year.

Apply
Please email [email protected] with a cover letter and CV by Thursday 17th March. Interviews will be from 28th March onwards.
https://accumulate.org.uk/the-accumulate-story/team/trustees/chair-of-accumulate-board-of-trustees/
Q: What are the medical approaches and consequences of the refeeding process? A: Refeeding refers to providing adequate calories and nourishment to severely underweight anorexia nervosa patients. For those who have moderately-severe anorexia nervosa, staged oral refeeding—sitting at a table and eating a prescribed amount of calories—remains the mainstay of this process. Reintroducing food to a malnourished person, however, has risks and complications if pursued without expertise. Anorexic patients who begin eating normal foods may encounter adverse side effects such as constipation, bloating, and stomach pain. Findings that indicate the refeeding process is proceeding too quickly are the development of significant swelling in the legs or feet (edema), a steady increase in the pulse rate into the 70–80 per minute range, and the new presence of shortness of breath. In more severe cases, refeeding syndrome occurs. This refers to the cardiac complications induced by giving too many calories too early in the refeeding process. The resulting problems may include cardiac difficulties ranging from decreased exercise tolerance to major heart abnormalities that can cause fainting and death. Therefore, it is important to involve a dietician, and for the most acute anorectics who are more than 25 percent below their ideal body weight (IBW), an experienced primary care physician. In general, weight gain should be limited to 1.5–2 pounds per week in an outpatient setting, or 2–3 pounds per week in a closely-monitored inpatient environment. The initial caloric intake is generally in the 800 calories per day range, with increases of 300–400 calories every 3–4 days to achieve the targeted weight gain. Blood tests should check levels of sodium, potassium, bicarbonate, magnesium, and phosphorus every few days in the first weeks. Because the body shifts to a protein-building mode with the reintroduction of adequate calories, phosphorus can be quickly depleted within 1–2 days. As long as the refeeding process is well planned, problems can be averted. Patients will benefit by establishing a target weight, following a dietary plan that begets a weight gain that does not exceed the recommended ranges, and having frequent blood tests. The patient should pay particular and honest attention to changes in physical status and promptly report any concerns to his or her treatment team. By Philip S. Mehler, MD. Reprinted with permission from Eating Disorders Recovery Today, Summer 2005, Volume 3, Number 3. ©2005 Gürze Books.

About the Author
Philip S. Mehler, MD, practices at the Denver Health Medical Center, where he was formerly the Chief of Internal Medicine and is now the Associate Medical Director. He is also a Professor of Medicine at the University of Colorado Health Sciences Center.
Dr. Gnap is the website editor of eatingdisordersrecoverytoday.com. Dr. Gnap is a family practice physician and behavioral medicine specialist in suburban Chicago. Dr. Gnap developed the Inner Control™ Program in 1970 and has worked with thousands of people to improve and correct medical, emotional, behavioral and learning problems, including performance.
https://www.eatingdisordersrecoverytoday.com/learn-about/qa/refeeding-process/
There are two major ways in which crimes, and their according penalties, are classified: those that are classified as misdemeanors, and those that are classified as felonies, with a felony crime being more serious than a misdemeanor. There are also degrees of seriousness of both misdemeanors and felonies, which are classified by letter or number. For instance, Class A felonies are the most serious in nature (murder is one example). Class A felonies also have the highest penalties. Class B felonies are less serious in nature, and so the penalties are less than they would be for a Class A felony. The classification system progresses down to C, D, E and so forth. Classification systems vary from state to state. Some states use letter classifications, and some use numbers (with the highest level being 1). A Class D felony is, in most states where letter classifications are used, one of the lower degrees of felonies. However, each state has its own laws for crimes committed under state law. Accordingly, each state has its own classification system for penalties associated with the seriousness of the crime committed. Many Class D felonies, throughout the states, are of a non-violent nature, and do not have victims. Some states also identify certain felonies as "unclassified." Crimes committed under federal law are penalized according to the federal classification system. In that system, there is one level of felony crime, Level E, which is less serious in nature than a Class D felony.

How Serious are Class D Felonies?
Any felony charge is serious. According to the classification systems used, Class D felonies are less serious than Class A, B and C felonies. In many jurisdictions, Class D is the lowest level of penalty used. However, some states, such as Alabama, Alaska and Maine, have categories for only A, B and C level felonies. Other states, such as Michigan, South Carolina and Wisconsin, have systems that go several letters past D. Still other states, such as Virginia, Colorado and Arizona, classify their felonies by number, starting with 1 as the most serious level and progressing downward in seriousness as the numbers rise. Finally, some states refer to felony penalties as "unclassified," which means that for each crime that is a felony, the penalty for that felony is written into the law defining the felony crime committed.

Could I Receive More than Seven Years in Prison?
Yes. Although Class D penalties are less serious according to many classification systems, the systems vary as much from state to state in their prison sentences as they do in their classification systems. So, although many states might give a prison sentence of less than seven years for a Class D felony conviction, others, such as Wisconsin, can give up to 40 years. If you are facing felony charges for a crime committed under state law, an experienced criminal attorney in your state can help you understand the levels of felony classification (and the according penalties) in your state. Under the federal classification system, Class D felony convictions result in less than ten, but more than five, years of jail time. This would be for federal crimes. Some examples of federal crimes are tax evasion, mail fraud and identity theft. Additionally, it's important to note that, regardless of the classification system, other factors can influence the number of years of the prison sentence given.
Prior criminal history of a defendant can influence a judge to give a stiffer sentence, even if the defendant is currently only facing a Class D felony charge.

What are Some Common Examples of Class D Felony Crimes? Once again, crimes that are classified as Class D felonies can vary by state, but common crimes include:
- Weapons violations;
- Child enticement;
- Possession of marijuana in a certain amount;
- Involuntary manslaughter;
- Lesser arson offenses;
- Writing a bad check over a certain amount;
- Lesser burglary offenses;
- Fraud;
- Resisting arrest;
- Domestic assault;
- Human trafficking;
- Embezzlement;
- Aggravated assault; and
- Promoting prostitution.

What Other Rights Can I Lose if I am Convicted of a Felony? This depends on the state in which the defendant lives, as well. However, some rights that are commonly lost following a felony conviction include:
- Right to vote;
- Ability to serve on a jury; and/or
- Right to own and carry a gun.

A felony conviction can also impact your ability to get a job, get school loans and secure housing.

Do I Need to Contact a Criminal Attorney if I Have Been Charged with a Class D Felony? If you have been charged with a Class D felony, you should get in touch with a criminal attorney right away. Once again, any felony charge is serious. Conviction of a felony can result not only in jail time and fines, but in the loss of your rights as a citizen. It can also make it very difficult to secure employment and a place to live going forward. A criminal attorney can advise you of your rights, and explain the criminal charge and possible sentence. The attorney can also represent and defend you in court.
https://www.legalmatch.com/law-library/article/class-d-felony-lawyers.html
Panko J, Hitchcock K, Fung M, Spencer PJ, Kingsbury T, Mason AM. 2017. A comparative evaluation of five hazard screening tools. Integr Environ Assess Manag 13(1):139–154. See also: Panko J, Hitchcock K, Fung M, Spencer PJ, Kingsbury T, Mason AM. 2017. Letter to the Editor: Authors’ response to Turner et al. Integr Environ Assess Manag 13(1):15–16. Abstract An increasing number of hazard assessment tools and approaches are being used in the marketplace as a means to differentiate products and ingredients with lower versus higher hazards or to certify what some call greener chemical ingredients in consumer products. Some leading retailers have established policies for product manufacturers and their suppliers to disclose chemical ingredients and their related hazard characteristics often specifying what tools to use. To date, no data exists that show a tool’s reliability to provide consistent, credible screening‐level hazard scores that can inform greener product selection. We conducted a small pilot study to understand and compare the hazard scoring of several hazard screening tools to determine if hazard and toxicity profiles for chemicals differ. Seven chemicals were selected that represent both natural and man‐made chemistries as well as a range of toxicological activity. We conducted the assessments according to each tool provider’s guidelines, which included factors such as endpoints, weighting preferences, sources of information, and treatment of data gaps. The results indicate the tools varied in the level of discrimination seen in the scores for these 7 chemicals and that tool classifications of the same chemical varied widely between the tools, ranging from little or no hazard or toxicity to very high hazard or toxicity. The results also highlight the need for transparency in describing the basis for the tool’s hazard scores and suggest possible enhancements. Based on this pilot study, tools should not be generalized to fit all situations because their evaluations are context‐specific. Before choosing a tool or approach, it is critical that the assessment rationale be clearly defined and matches the selected tool or approach.
https://toxstrategies.com/publications/a-comparative-evaluation-of-five-hazard-screening-tools/
I noticed something this week that got me thinking about how valuable a different perspective can be. We had a new employee join our web development team, and along with that came a noticeable increase in lively discussions, whiteboard sessions and overall office buzz! We already have a team of highly skilled experts, but it is amazing how new ideas and alternative methods can lead to greater collaboration and push the boundaries. It strikes me that it is the same with test automation. Perhaps it is time to try something new, to move away from traditional, development-led tools and the challenges they bring, and try the innovative, proven, code-free solution from Original Software. Often it is only when we question ourselves and our long-held beliefs that great things happen. This can be hard, but worth it….
https://www.origsoft.com/2019/09/04/take-a-different-perspective-and-try-something-new/
I, Mindset, comply with my obligations under the General Data Protection Regulation (GDPR) by keeping personal data up to date; by storing (and destroying) it securely; by not collecting or retaining excessive amounts of data; by protecting personal data from loss, misuse, unauthorised access and disclosure; and by ensuring that appropriate technical measures are in place to protect personal data.

I use your personal data for the following purposes:
- To deliver the services and information that clients have requested;
- To contact clients as necessary in accordance with the services they have requested;
- To maintain my own accounts and records.

Client data will never be passed to a third party without the consent of the respective client, and always assuming that such confidentiality is not in contravention of any legal action or legal requirement. In accordance with my need to maintain access to client and ex-client data in the event that a client needs advice or further help, I will retain individual data for a minimum period of 3 years.

Our Lawful Basis for processing your personal data: The client has given clear consent for me to process their personal data for a specific purpose. Further, the processing is necessary for both their and my own legitimate interests.

Your rights and your personal data: Unless subject to an exemption under the GDPR, you have the following rights with respect to your personal data:
- The right to request a copy of the personal data which I hold about you;
- The right to request that we correct any personal data if it is found to be inaccurate or out of date;
- The right to request your personal data is erased where it is no longer necessary for me to retain such data;
- The right to withdraw your consent to the processing at any time;
- The right to request that I provide you with your personal data and, where possible, to transmit that data directly to another data controller (known as the right to data portability), where applicable [N.B. this only applies where the processing is based on consent or is necessary for the performance of a contract with the data subject, and in either case the data controller processes the data by automated means];
- The right, where there is a dispute in relation to the accuracy or processing of your personal data, to request a restriction is placed on further processing;
- The right to object to the processing of personal data, where applicable [N.B. this only applies where processing is based on legitimate interests (or the performance of a task in the public interest/exercise of official authority), direct marketing, and processing for the purposes of scientific/historical research and statistics];
- The right to lodge a complaint with the Information Commissioner’s Office (see below).

Complaints Notice: The client has the right to complain to the Information Commissioner’s Office (ICO) if they think there is a problem with the way I am handling their data (see https://ico.org.uk/concerns/handling/).
https://the-mindset-coach.com/privacy-notice/
What is the ship's current position? Where is the ship located? SYBARIS II's last received position was 50.76192 N / 1.29747 W on Jan 16, 2018 at 16:15 UTC. The vessel SYBARIS II (IMO: N/A, MMSI: 235014206) is a pleasure craft currently sailing under the flag of the United Kingdom. SYBARIS II has a length overall of 12 m and a beam of 4 m. Below you can find more technical information, photos, AIS data and the last 5 port calls of SYBARIS II detected by AIS. SYBARIS II map position. SYBARIS II AIS Data. SYBARIS II Master Data. Last 5 port calls as detected by AIS. Web resources (SYBARIS II). Disclaimer: SYBARIS II's current position and history of port calls are received by AIS. Technical specifications, tonnages and management details are derived from the VesselFinder database. The data is for informational purposes only and VesselFinder is not responsible for the accuracy and reliability of SYBARIS II data.
https://www.vesselfinder.com/vessels/SYBARIS-II-IMO-0-MMSI-235014206
The investment philosophy practiced by Warren Buffett calls for investors to take a long-term horizon when making an investment, such as a decade-long holding period (or even longer), and reconsider making the investment in the first place if unable to envision holding the stock for at least five years. Today, we look at how such a long-term strategy would have done for investors in Consolidated Edison Inc (NYSE: ED) back in 2010, holding through to today.

| Metric | Value |
| --- | --- |
| Start date | 12/29/2010 |
| End date | 12/28/2020 |
| Start price/share | $49.53 |
| End price/share | $71.26 |
| Starting shares | 201.90 |
| Ending shares | 295.80 |
| Dividends reinvested/share | $26.09 |
| Total return | 110.78% |
| Average annual return | 7.74% |
| Starting investment | $10,000.00 |
| Ending investment | $21,083.71 |

As we can see, the decade-long investment result worked out well, with an annualized rate of return of 7.74%. This would have turned a $10K investment made 10 years ago into $21,083.71 today (as of 12/28/2020). On a total return basis, that’s a result of 110.78% (something to think about: how might ED shares perform over the next 10 years?). [These numbers were computed with the Dividend Channel DRIP Returns Calculator.] Notice that Consolidated Edison Inc paid investors a total of $26.09/share in dividends over the 10-year holding period, marking a second component of the total return beyond share price change alone. Much like watering a tree, reinvesting dividends can help an investment to grow over time — for the above calculations we assume dividend reinvestment (and for this exercise the closing price on the ex-date is used for the reinvestment of a given dividend). Based upon the most recent annualized dividend rate of $3.06/share, we calculate that ED has a current yield of approximately 4.29%. Another interesting datapoint we can examine is ‘yield on cost’ — in other words, we can express the current annualized dividend of $3.06 against the original $49.53/share purchase price. This works out to a yield on cost of approximately 6.2%. One more piece of investment wisdom to leave you with:
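As a quick sanity check on the arithmetic, here is a minimal sketch (in Rust, purely illustrative, using only the figures from the table above) showing how the total return, annualized return, current yield and yield on cost follow from the quoted numbers:

```rust
fn main() {
    // Figures taken from the table above.
    let start_value = 10_000.00_f64; // starting investment
    let end_value = 21_083.71_f64;   // ending investment, dividends reinvested
    let years = 10.0_f64;
    let start_price = 49.53_f64;     // start price per share
    let end_price = 71.26_f64;       // end price per share
    let dividend = 3.06_f64;         // most recent annualized dividend per share

    // Total return over the holding period and its annualized equivalent.
    let total_return = end_value / start_value - 1.0;                      // ~1.108 -> 110.78%
    let annual_return = (end_value / start_value).powf(1.0 / years) - 1.0; // ~0.0774 -> 7.74%

    // Current yield uses today's price; yield on cost uses the original purchase price.
    let current_yield = dividend / end_price;   // ~4.3%
    let yield_on_cost = dividend / start_price; // ~6.2%

    println!("total return: {:.2}%", 100.0 * total_return);
    println!("average annual return: {:.2}%", 100.0 * annual_return);
    println!("current yield: {:.2}%", 100.0 * current_yield);
    println!("yield on cost: {:.2}%", 100.0 * yield_on_cost);
}
```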
https://mmjstockwatch.com/2020/12/if-you-bought-consolidated-edison-in-2010-are-you-smiling/
Complex is a multi-precision complex number with correct rounding. Rug is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. See the full text of the GNU LGPL and GNU GPL for details. The new method SmallRational::assign_canonical was added. Details on other releases can be found in RELEASES.md.

Integer::new() creates a new Integer initialized to zero. To assign values to Rug types, we use the Assign trait and its method assign. We do not use the assignment operator = as that would drop the left-hand-side operand and replace it with a right-hand-side operand of the same type, which is not what we want here. Arbitrary precision numbers can hold numbers that are too large to fit in a primitive type. To assign such a number to the large types, we use strings rather than primitives; in the example this is done using Integer::parse and Integer::parse_radix. We can compare Rug types to primitive types or to other Rug types using the normal comparison operators, for example int > 100_000_000. Most arithmetic operations are supported with Rug types and primitive types on either side of the operator, for example int >> 128.

1. Where they make sense, all arithmetic operators are overloaded to work with Rug types and the primitives i32, u32, f32 and f64.
2. Where they make sense, conversions using the From trait and assignments using the Assign trait are supported for all the primitives in 1 above as well as the other primitives i8, i16, i64, isize, u8, u16, u64 and usize. This also applies to i128 and u128 if they are supported by the compiler.
3. Comparisons between Rug types and all the primitives listed in 1 and 2 above are supported.
4. For Rational numbers, conversions and comparisons are also supported for tuples containing two integer primitives: the first is the numerator and the second is the denominator, which must not be zero. The two primitives do not need to be of the same type.
5. For Complex numbers, conversions and comparisons are also supported for tuples containing two primitives: the first is the real part and the second is the imaginary part. The two primitives do not need to be of the same type.

Here a is consumed by the subtraction, and b is an owned Integer. Here a and b are not consumed, and incomplete is not the final value. It still needs to be converted or assigned into an Integer. This is covered in more detail in the Incomplete-computation values section. The left shift << and right shift >> operators support shifting by negative values, for example a << 5 is equivalent to a >> -5. The shifting operators are also supported for the Float and Complex number types, where they are equivalent to multiplication or division by a power of two. Only the exponent of the value is affected; the mantissa is unchanged.

For the Float and Complex number types, we need to know the precision when we create a value, and the operation itself does not convey information about what precision is desired for the result. Incomplete-computation values can be used in two ways:
- Assign them to an existing object without unnecessary allocations. This is usually achieved using the Assign trait or a similar method, for example int.assign(incomplete) and float.assign_round(incomplete, Round::Up).
- Convert them to the final value using the From trait or a similar method, for example Integer::from(incomplete) and Float::with_val(53, incomplete).

Let us consider a couple of examples.
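For instance, a minimal sketch along these lines (assuming the crate's default integer and float features are enabled; the variable names buffer and z are illustrative and match the discussion that follows):

```rust
use rug::{Assign, Float, Integer};

fn main() {
    // Borrowing both operands yields an incomplete-computation value,
    // which can then be assigned into an existing Integer.
    let a = Integer::from(1000);
    let b = Integer::from(42);
    let incomplete = &a - &b;  // a and b are not consumed
    let mut buffer = Integer::new();
    buffer.assign(incomplete); // reuses buffer's existing storage where possible
    assert_eq!(buffer, 958);

    // For Float, the precision of the result is stated when the value is created.
    let x = Float::with_val(53, 6.5);
    let y = Float::with_val(53, 2.25);
    let z = Float::with_val(45, &x * &y); // z is created with a precision of 45
    assert_eq!(z, 14.625);
}
```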
Here the assignment from incomplete into buffer does not require an allocation unless the result does not fit in the current capacity of buffer. If &a - &b returned an Integer instead, then an allocation would take place even if it is not necessary. The precision to use for the result depends on the requirements of the algorithm being implemented. Here z is created with a precision of 45. These operations return objects that can be stored in temporary variables like incomplete in the last few examples. However, the names of the types are not public, and consequently the incomplete-computation values cannot, for example, be stored in a struct. If you need to store the value in a struct, convert it to its final type and value.

Rug requires rustc version 1.18.0 or later. Rug also depends on the GMP, MPFR and MPC libraries through the low-level FFI bindings in the gmp-mpfr-sys crate, which needs some setup to build; the gmp-mpfr-sys documentation has some details on usage under GNU/Linux, macOS and Windows. The crate has the following optional features:
- integer, enabled by default. Required for the Integer type and its supporting features.
- rational, enabled by default. Required for the Rational number type and its supporting features. This feature requires the integer feature.
- float, enabled by default. Required for the Float type and its supporting features.
- complex, enabled by default. Required for the Complex number type and its supporting features. This feature requires the float feature.
- rand, enabled by default. Required for the RandState type and its supporting features. This feature requires the integer feature.
- serde, disabled by default. This provides serialization support for the Integer, Rational, Float and Complex number types, provided that they are enabled. This feature requires the serde crate.

Here only the integer, float and rand features are enabled. If none of the features are selected, the gmp-mpfr-sys crate is not required and thus not enabled. In that case, only the Assign trait and the traits that are in the ops module are provided by the crate.
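For the feature selection just described (only the integer, float and rand features enabled), the dependency declaration in a downstream crate's Cargo.toml would look roughly like the following sketch; the version number is illustrative only:

```toml
[dependencies.rug]
version = "1.1"
default-features = false
features = ["integer", "float", "rand"]
```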
https://lib.rs/crates/rug
Linton, Roseilee Sharmaine (2019). Personal capacity: an exploration into the extent to which the personal capacity of school leaders influences their response to challenges that occur in the changing educational landscape. University of Birmingham. Ed.D. Linton2019EdD.pdf (Accepted Version, all rights reserved).

Abstract: The educational arena has been undergoing unprecedented changes regarding government policy and reforms. These changes have posed significant challenges for school leaders, chiefly, intensification of accountability structures and the decision-making process in schools, as well as increased pressure to work at a relentless pace. The resulting attrition of headteachers and the reluctance of senior leaders to fill the vacancies burgeoned into a recruitment crisis, raising questions concerning the capacity of school leaders to cope. This study attempts to arrive at a definition of personal capacity as it relates to educational leaders particularly, to determine its impact on their ability to cope and to ascertain its role in their career trajectory. The research, which was phenomenological in nature, used a self-completed questionnaire and semi-structured interviews to obtain data from a sample of nineteen school leaders. An analysis of the findings yielded a definition of personal capacity, essentially, the ability of individuals to adapt and respond to challenges. Such ability was multifaceted, comprising a dynamic combination of non-discrete components underpinned by key qualities and characteristics, specifically, moral imperative, resilience, self-efficacy, confidence, emotional intelligence, and empowerment, altogether driven by a growth mindset. Personal capacity featured significantly at every stage of the trajectory of school leaders and therefore warrants considerable attention in talent identification and development.
https://etheses.bham.ac.uk/id/eprint/8804/
by Nola Markey, Archaeologist and Brian Finlay, Skwlax Aboriginal Interest Department

Little Shuswap Lake Indian Band would like to express their appreciation to the Boyd Family, who recently submitted an artifact collection to the community. The Boyds have lived along Little Shuswap Lake for just over 50 years and during this time found numerous stone tool artifacts near their property. Some of the tools included arrowheads and spear points, scrapers, utilized flakes, fishnet sinkers, and gaming pieces, made from a variety of materials such as chert, chalcedony and basalt. To Indigenous groups, such artifacts are a significant part of their cultural identity and connection to their ancestors. These types of artifacts, uncovered at an archaeology site, are also important to archaeologists as they provide clues in explaining how people lived in the ancient and recent past. Many artifacts are found accidentally: construction crews find artifacts when they are building roadways or digging up an area to lay foundations; farmers find sites when they plow their fields; homeowners turn them up while clearing a patch of ground for a vegetable garden. Archaeologists will also get telephone calls from people who stumble across an artifact or site while out hiking. Of course, there is a darker history of purposely looting a site and selling artifacts, which is not permitted as artifacts are protected under provincial heritage legislation. Preserving artifacts and protecting sites is paramount to archaeologists and Indigenous communities. Today, many Indigenous communities work closely with archaeologists to develop comprehensive heritage management processes. Some of these processes include developing their own cultural heritage policies, building their own artifact and research repositories, museums and cultural centres, and repatriating artifacts lost to museums in other countries. To Indigenous communities, these artifacts are not merely “things”; they provide a sense of pride and are a testament to the knowledge and achievements of their ancestors. There are stories linked to the places where artifacts are found, offering a sense of place called home or re-affirming their spiritual practices from the past to the present.
http://republicofarchaeology.ca/digit/2017/6/14/cultural-pride-through-artifacts
Cell-to-Cell Communication and Information Transfer

Definition: Crucial events are generated by criticality, namely by the processes of phase transition from disorder to correlated disorder, affecting key organismal network functions. There is, as has been suggested, “a subtle connection between informational exchange within and between networks and the complexity (non-simplicity) of those networks”. West and Grigolini replaced the term complexity with non-simplicity and explain their reasoning by stating that in physics it is easier to understand how phenomena function by the properties or characteristics that are missing, rather than those that are present.

1. Introduction
In physics, a system is considered complex if it meets the anti-reductionist criterion of “the whole being greater than its parts”, but as Melanie Mitchell states in Complexity, there is really no quantitative definition of complexity, and certainly not one upon which physicists, computer-, or biological scientists agree. There does seem to be agreement that living systems exhibit complex, changing behavior at the whole organism level (macrodynamics). This complex organismal behavior emerges from the collective actions and interactions at lower levels of organization, such as among cells (microdynamics). These actions can be understood using dynamical systems theory, which can be described simply as dynamic, i.e., systems that change over time (e.g., heart beats, firing of neurons in the brain, economic markets, or global climate). Developing systems also change in space as well as time and exhibit dynamic patterns of complexity, which adapt in response to learning or external forces. For example, increasing temperature will alter rates of development and growth in fish embryos, expressed as changes in morphological and physiological complexity. Complex adaptive systems, therefore, are complex systems exhibiting nontrivial emergent and self-organizing behaviors, many of which result from cell-to-cell communication at many developmental levels.

2. Exploring Criticality in Developing Organisms
It is our assertion that the stable yet dynamic society of a developing organism (i.e., plant seedling or animal embryo) results from, or may be initiated by, the dynamic interplay between two patterns of information exchange or complexity. These two patterns are (1) self-organization or non-crucial events (Fractional Brownian Motion, FBM) and (2) crucial events, defined as events that determine the efficiency of information exchange. The type of response depends on the interplay of a network of intracellular (within a single cell) and/or intercellular (between cells) communication and their emergent relationships with their surroundings. Much of what we know about patterns of complexity has been learned from long time series generated from electroencephalographs (EEGs) of firing rates in the human brain, heart rate variability, and swarming birds, among others. In all these examples there are elements of nonlinear interactions, which result in measurable patterns and modes known as phase transitions, where there is a dynamic balance between order and randomness, and crucial events. Crucial events in the above situation are also referred to as renewal events, which are events that reset the memory of the system, erasing the memory of, and independent of, earlier events.
Many complex processes can be characterized by crucial or renewal events, and all are independent of the underlying microdynamic emergent behaviors that can be localized in time. The authors clearly laid out that the probability distribution density (pdd) of the time distance τ between two consecutive renewal events is given by the waiting time distribution ψ(τ), written as an inverse power law, ψ(τ) ∝ 1/τ^μ, where μ is the complexity index, which can range from 1 to ∞, with complexity occurring when 1 < μ < 3. The breakdown in the ergodicity of a complex system for μ < 2 is a direct result of the occurrence of crucial events. However, if an event occurs at a specific time, after which subsequent events are produced, it is time dependent and ψ(τ) has a hyperbolic form (see the original work for a more detailed discussion). As stated there, μ < 3 suggests a departure from the condition of ordinary statistical physics to nonlinear statistical physics. Further, crucial events can arise spontaneously in complex systems, in keeping with the theory of self-organized temporal criticality (SOTC), which posits that a system of interacting units may spontaneously generate temporal complexity, that is, self-organized criticality (SOC) characterized by crucial events in time, in which μ is not limited to merely the non-ergodic regime of μ < 2 but extends to the whole complexity range of 1 < μ < 3. In a developing organism, formation of an orderly multicellular network from relatively homogenous material in a single cell is the result of transactions among nonlinear, self-similar and self-organized components. Those transactions are generally known as cell-to-cell communication and operate based on an inducer (e.g., a cell that produces a signal) and a responder (a cell that responds to the signal by changing some behavior). Successful communication takes place when competence occurs, i.e., when a signal results in a response. We will now consider the processes that are understood to regulate cell-to-cell communication and highlight some of the gaps. These gaps in knowledge about cell-to-cell communication may be where information transfer and crucial events could play a part in directing development of coordinated causal multicellularity.

3. Cell-to-Cell Communication, Complexity and Self-Organization
While development is a process that occurs in all organisms, it is concerned with more than just cellular differentiation because different cell types of an organism do not exist in random arrangements. In the mid-twentieth century two biologists, Townes and Holtfreter, predicted that embryonic cells could have differences in the components of their cell membranes which allowed them to form organs. Now we understand that formation of organs is a result of cell-to-cell communication achieved through biochemical molecules that are secreted or located in the cell’s membrane. These ‘informational’ molecules can bind to receptors on neighboring cells and stimulate a signaling cascade of intracellular reactions, which results in changes in gene expression, enzymatic activity, and cytoskeletal organization, affecting cell shape and cell behavior. However, it should be mentioned that even at the biochemical level, cell-to-cell communication is much more complicated than suggested above. In addition to signaling cascades within the cell, there are also important intercellular secretory products that trigger cellular responses.
These responses are typically ligand–receptor based and range from long-distance (endocrine) hormones that travel through the blood stream to short-distance paracrine factors (e.g., FGFs) that diffuse between cells across the extracellular matrix (ECM) to target receptors on the cell membrane. Also important to cell-to-cell communication are adhesion molecules that mediate the interactions between cells and the ECM, are critical for maintaining cell structure and function, and are key to organization of cells into tissues and organs. Following fertilization, the process of cleavage transforms a single cell into a multicellular organism containing hundreds of cells (e.g., the nematode worm C. elegans contains 946 cells) or trillions of cells in an adult human being. These different cell types then work together to form a biologically complex, coherent, functional organism that can respond to change and exhibit a degree of resilience. Physical (free diffusion, osmosis, viscosity, elasticity, and viscoelasticity) and cellular processes (mentioned above) act on single cells and take part in acts of aggregation and adhesion to form multicellular systems (tissues), and in the process they ‘re-enact’ the development of cell-to-cell communication systems that emerged 1.5 billion years ago. This means biochemical and energetic processes involved in cell-to-cell communication that first evolved 1.5 billion years ago are conserved, remaining essentially unchanged over evolutionary time. This then begs the question that if there are, as we suggest in this paper, informational exchange mechanisms also critical for development, might they too have been conserved? During later stages of development, when organization of tissues and organs is taking place, the embryo stays in a relatively stationary state of reduced information (entropy) exchange that is maintained for extended periods of time, making self-organization, or autopoiesis, of living matter possible. Autopoiesis is understood to be any increase in the order within the system (i.e., production of negentropy) and is possible only if high internal biological organization through cell-to-cell communication exists. Kauffman, in his landmark paper, stated that a fundamental task of biology is to account for the origin and nature of metabolic stability in living systems in terms of the mechanisms that control biosynthesis. Kauffman stated that biosynthesis includes the renewal or new production of cells resulting from a state of disorder through mitosis (which includes both DNA replication and cell division). He goes on to contrast order and chaos as interpreted in physics and then in biology. In physics, when considering the thermodynamics of gases, the mathematical laws of statistics bridge the gap between the randomness of colliding molecules and the simplicity of the gas laws. Whereas in biology, a gene can specify a protein and that protein can, in turn, control the expression of, or repress, another gene. In living organisms, mathematical laws also engage large networks (referred to as gene regulatory networks (GRNs)) of interacting genes to bring biosynthetic or biological self-organization from disorder. Waddington used the stability landscape to describe cellular development, where the cell was represented as the ball at the top of the landscape (Figure 1).
The paths available to the ball (or features of the landscape) are determined by the genotype, interactions among cells, tissues, organs, and the environment, forming what is referred to as the epigenotype (epi- or above-the-genotype). Epigenetics can include all effects and modifications that are dependent on genetic factors, such as DNA sequences, but may increase or decrease phenotypic (observable traits) variation expressed by target sequences in response to environmental cues, or emergent interactions during development. The epigenotype can integrate information from external sources and influence development to produce a cohesive (coherent or stable) organism that will adapt to its environment by responding to change.

4. Measuring Complexity Using Ultraweak Photonic Emission (UPEs)/Biophotons

| | Without Stripes | | With Stripes | |
| --- | --- | --- | --- | --- |
| | η | µ | η | µ |
| Dark Counts | 0.575 | 2.739 | 0.508 | 2.968 |
| 1 | 0.777 | 2.293 | 0.596 | 2.677 |
| 2 | 0.796 | 2.254 | 0.558 | 2.792 |
| 3 | 0.736 | 2.358 | 0.526 | 2.901 |
| 4 | 0.737 | 2.355 | 0.496 | 3.016 |
| 5 | 0.694 | 2.440 | 0.509 | 2.964 |
| 6 | 0.725 | 2.377 | 0.504 | 2.984 |

This entry is adapted from 10.3390/e23091141
References - Mitchell, M. Complexity, a Guided Tour; Oxford University Press: New York, NY, USA, 2009. - Schmidt, K.; Starck, J.M. Developmental plasticity, modularity, and heterochrony during the phylotypic stage of the zebrafish, Danio rerio. J. Exp. Zool. Part B 2010, 314, 166–178. - Hunt von Herbing, I.; Boutilier, R.G.; Miyake, T.; Hall, B.K. Effects of temperature on morphological landmarks critical to growth and survival in larval Atlantic cod (Gadus morhua). Mar. Biol. 1996, 124, 593–606. - Pan, T.-C.F.; Hunt von Herbing, I. Metabolic plasticity in development: Synergistic responses to high temperature and hypoxia in zebrafish, Danio rerio. J. Exp. Zool. Part A 2017, 327, 189–199. - Mandelbrot, B.B.; Van Ness, J.W. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 1968, 10, 422. - Culbreth, G.; West, B.J.; Grigolini, P. Caputo Fractional Deritative and Quantum-Like Coherence. Entropy 2021, 21, 178. - West, B.J.; Grigolini, P. Crucial Events: Why Are Catastrophes Never Expected? Studies of Nonlinear Phenomena in Life Science; World Scientific: Singapore, 2021; Volume 17. - Grigolini, P. Emergence of Biological Complexity: Criticality, Renewal and Memory. Chaos Solit. Fractals 2015, 81, 575–588. - Cocchi, L.; Gollo, L.L.; Zalesky, A.; Breakspear, M. Criticality in the brain: A synthesis of neurobiology, models and cognition. Prog. Neurobiol. 2017, 158, 132–152. - Bohara, G.; West, B.J.; Grigolini, P. Bridging Waves and Crucial Events in the Dynamic of the Brain. Front. Physiol. 2018, 9, 1174. - Tuladhar, R.; Jelinek, H.F.; Culbreth, G.; Bohara, G.; Cornforth, D.; West, B.J.; Grigolini, P. Diffusion Entropy versus Multiscale and Rényi Entropy to detect progression of Autonomic Neuropathy. Front. Physiol. 2020, 11, 1759. - Tonello, L.; Grigolini, P. Approaching Bounded Rationality: From Quantum Probability to Criticality. Entropy 2021, 23, 745. - Attanasi, A.; Cavagna, A.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Pohl, O.; Rossaro, B.; Shen, E.; Silvestri, E.; et al. Finite-Size Scaling as a way to Probe Near-Criticality in Natural Swarms. Phys. Rev. Lett. 2014, 113, 238102. - Vanni, F.; Lukovi’c, M.; Grigolini, P. Criticality and Transmission of Information in a Swarm of Cooperative Units. Phys. Rev. Lett. 2011, 107, 078103.
- Benfatto, M.; Pace, E.; Curceanu, C.; Scordo, A.; Clozza, A.; Avoli, I.; Lucci, M.; Francini, R.; de Matteis, F.; Grandi, M.; et al. Biophotons and Emergence of Cognition—A Diffusion Entropy Analysis. Entropy 2021, 23, 554. - Culbreth, G.; Bologna, M.; West, B.J.; Grigolini, P. Entropic Approach to the Detection of Crucial Events. Entropy 2019, 21, 211. - Mahmoodi, K.; West, B.J.; Grigolini, P. Self-Organizing Complex Networks: Individual versus global rules. Front. Physiol. 2017, 8, 478. - Mahmoodi, K.; West, B.J.; Grigolini, P. Self-Organized Temporal Criticality: Bottom-up resilience versus top-down vulnerability. Complexity 2018, 2018, 8139058. - Mahmoodi, K.; Grigolini, P.; West, B.J. On social sensitivity to either zealot or independent minorities. Chaos Solit. Fractals 2018, 110, 185–190. - Townes, P.L.; Holtfreter, J. Directed movements and selective adhesion of embryonic amphibian cells. J. Exp. Zool. 1955, 128, 53–120. - Kirschner, M.W.; Gerhart, J.C. The Plausibility of Life; Yale University Press: New Haven, CT, USA; London, UK, 2005. - Rossi, C.; Madl, P.; Foletti, A.; Mocenni, C. Equilibrium and far-from equilibrium states. In Fields of the Cell; Fels, D., Cifra, M., Scholkmann, F., Eds.; Research Signpost: Kerala, India, 2015; pp. 95–112. - Koutroufinis, S.A. Organismus als Prozess; Habilitation Thesis Handed in at the Institute for Philosophy and Science History (in German). Technical University Berlin: Berlin, Germany, 2008. - Kaufman, S.A. Metabolic stability and epigenesis in randomly constructed genetic nets. J. Theoret. Biol. 1969, 22, 437–467. - Rogers, K.W.; Schier, A.F. Morphogen Gradients: From generation to interpretation. Ann. Rev. Cell. Dev. Biol. 2011, 27, 377–407. - West-Eberhard, M.J. Developmental Plasticity and Evolution; Oxford University Press: Oxford, UK; New York, NY, USA, 2003. - Johannes, F.; Colot, V.; Jansen, R.C. Epigenome dynamics: A quantitative genetics perspective. Nat. Rev. Gen. 2008, 9, 883–890. - O’Dea, R.E.; Noble, D.W.; Johnson, S.L.; Hesselson, D.; Nakagawa, S. The role of non-genetic inheritance in evolutionary rescue: Epigenetic buffering, heritable bet hedging and epigenetic traps. Environ. Epigen. 2016, 2, dvv014. - Duclos, K.K.; Hendrikse, J.L.; Jamniczky, H.A. Investigating the evolution and development of biological complexity under the framework of epigenetics. Evol. Dev. 2019, 21, 276–293. - Fröhlich, H. (Ed.) Biological Coherence and Response to External Stimuli; Springer: Berlin/Heidelberg, Germany, 1988. - Ho, M.W. The Rainbow and the Worm; World Scientific: Singapore, 1993. - Tessaro, L.W.E.; Dotta, B.T.; Persinger, M.A. Bacterial biophotons as non-local information carriers: Species-specific spectral characteristics of a stress response. Microbiol. Open 2019, 8, e761. - Popp, F.A. Cancer growth and its inhibition in terms of Coherence. Electromag. Biol. Med. 2009, 28, 53–60. - Gurwitsch, A.G. Die Natur des spezifischen Erregers der Zellteilung. Arch. Entw. Mech. Org. 1923, 100, 11–40. - Volodyaev, I.; Beloussov, L.V. Revisiting the mitogenetic effect of ultra-weak photon emission. Front. Physiol. 2015, 6, 241. - Baressi, M.J.F.; Gilbert, S.F. Developmental Biology, 12th ed.; Oxford University Press: New York, NY, USA, 2020. - Van Wijk, R. Light in Shaping Life: Biophotons in Biology and Medicine; Boekenservice: Almere, The Netherlands, 2014. - Konev, S.V.; Lyskova, T.L.; Nisenbaum, G.D. Very weak luminescence of cells in the ultraviolet region of the spectrum and its biological role. Biophysics 1966, 11, 410–413. 
- Gallep, C.M.; Dos Santos, S.R. Photon-count during germination of wheat (Triticum aestivum) in wastewater sediment solution correlated with seedling growth. Seed Sci. Technol. 2007, 35, 607–614.
https://encyclopedia.pub/15475
What is Special Education Counseling? Special education counseling is a specialization of school counseling that is concerned with the success of special needs students. The essential purpose of special education counseling is to ensure that special needs students and their families have access to the appropriate supports and interventions in order to facilitate improved achievement in a school environment. In addition to facilitating academic and intellectual growth, special education counseling is concerned with improving a student’s ability to function in social, emotional, and behavioral capacities. Special education counseling is a multi-modal discipline in that special education counselors must be well versed in counseling theories and techniques, as well as educational theory, educational law, and special education regulations. What Does a Special Education Counselor Do? Special education counselors work with special needs students in elementary schools, middle schools, and high schools, to ensure they have the support services they need in order to achieve their highest potential in the areas of academics, personal and social growth, and career development. This is accomplished by working with students with social, emotional, behavioral, and physical disabilities in a variety of settings, including in one-on-one counseling, group counseling, in special education classrooms, as well as in regular education classrooms. Another essential duty of special education counselors is to work with special education and general education teachers and staff members to ensure that the needs of each special needs student are being met. Special education counselors do this by advocating on the child’s behalf, collaborating with teachers to develop more effective support services for the child, or working with teachers and staff members to develop a better understanding of the interventions in place to help the child succeed. Much collaboration and consultation is also undertaken with other special services workers, such as school psychologists, occupational and physical therapists, speech-language pathologists, and the like. Special education counselors work extensively with families as well. In this context, they might serve as a go-between, helping families understand school policies and procedures, or helping inform families about their rights concerning special education services. Special education counselors also often work with families to improve their ability to meet their child’s needs at home. This might involve offering parents and guardians insight into effective strategies for improving their child’s functioning. It may also involve referring families to outside resources, such as child and family counselors, vocational training, or non-profit organizations that specialize in working with families of special needs children. What is the Role of a School Counselor in Special Education? The role of a school counselor is to help all students, including those with special needs, to achieve their full potential. Introducing an appropriate individualized education program (IEP) at an early age can make all the difference, not only in these children’s academic learning, but in their emotional health and social adjustment and ultimately in their ability to become productive, contributing members of society. However, the population of children who are referred to special education classes is very diverse. So simply pigeonholing them into broad categories won’t work. 
It is important for each child’s particular strengths, weaknesses, and special needs to be adequately identified and addressed. The school counselor is one person who can help by actively advocating for these children. The following are some of the other important services that school counselors provide.
- Counseling sessions with special education students.
- Encouraging family involvement in the IEP. It helps immensely if the parents or other responsible adults understand and are on board with the services that have been recommended, including possible referrals to outside organizations. The special education counselor can also address specific questions or concerns and inform parents and guardians of their rights.
- Consulting with and working with other school staff to better understand the child’s special needs and what support systems, adaptations and modifications may be necessary.
- Collaborating with other school and community professionals, including but not limited to teachers, school psychologists, physical therapists, occupational therapists, and speech and language pathologists, in the delivery of services.
- Identifying other students who should be assessed to determine eligibility for special education.

What is the Job Outlook for Special Education Counselors? The job outlook for special education counselors is good. The Bureau of Labor Statistics (BLS) estimates 10 percent growth in this field of work through the early part of the next decade. While this represents only modest growth, there has been a rapid increase in the number of special education students over the last 30 years. In fact, the special education population in U.S. public schools increased by 37 percent between 1980 and 2005, as compared to 20 percent for the general education population. This growth is not expected to slow, meaning the job outlook for special education counselors could very well be far more robust than experts estimate. Local spending on special education impacts opportunities for jobs as well. New Hampshire spends, by far, the most on special education services. Hawaii, Connecticut, and New York likewise have large estimated special education budgets, making jobs for special education counselors more likely to be found. Conversely, Mississippi, Washington, D.C., Florida, and Oklahoma spend the least amount of money on special education services; therefore, jobs in those states may be difficult to come by.

What are the Education Requirements to Become a Special Education Counselor? The educational requirements for special education counselors are quite different from those for counselors that work outside the school system. It is highly recommended that students begin their studies not in counseling, but instead with a bachelor’s degree in special education and gain experience as a special education teacher. To achieve this goal, students must complete a four-year special education program, including a student teaching placement in a special education setting. Upon graduation, students will need to pass a written examination and fulfill the requirements of obtaining teaching licensure in the state in which they live. After a few years in the classroom as a special education teacher, prospective special education counselors should then pursue a master’s degree in school counseling, which is most often the terminal degree in this field of work.
Courses in master’s degree programs revolve around developing a deep understanding of human behavior, educational policies and principles, and topics related to working with special needs populations. Human development, research methods, evaluation and assessment procedures, and statistics are common areas of study as well. Master’s degree programs in school counseling also include practicum and internship experiences, which are much like student teaching experiences. Counselors go into school settings for an established period of time to get real-world experience in applying their knowledge to a special education setting under the watchful eye of a supervising counselor. Upon graduation with a master’s degree, students must obtain certification as a school counselor, which is most often handled by the department of education in the state in which an individual will be employed. Some states may require further certifications to work with special needs populations. Additionally, licensure may or may not be required. It is recommended that prospective special education counselors consult with the governing bodies in their state of residence to determine exactly what the certification and licensure requirements are.

How Much Does a Special Education Counselor Make? Specific wage information for special education counselors is not available. However, using information from closely related occupations, a relatively clear picture of potential earnings can be formed. Special education teachers earn, on average, $53,244 per year according to ZipRecruiter, as of August 2022. However, more experienced professionals earn well over $90,000 per year. Meanwhile, the BLS estimates that school counselors earn an average of $62,090 each year, as of May 2021. Because these occupations are highly similar to special education counseling, one can reasonably assume a broadly similar median wage for special education counselors. Because they are employed in a school setting, the wages a special education counselor earns depend on their level of education and years of service. Movement up the pay scale both horizontally and vertically will result in an increasingly large annual wage.

Related Reading
- 31 Jobs You Can Do With a Psychology Degree
- How to Become an Educational Counselor
- 13 Pros and Cons of a Special Education Teacher
- What Can You Do With an Educational Psychology Degree?
https://www.psychologyschoolguide.net/psychology-careers/special-education-counselor/
tectonics essay
From the beginning of time, humans thought the ground was solid and immobile. But this is not true at all. The entire world is ever-changing and continuously in motion. The stability of the Earth is not what we believe it is. Thinking about the revolving axis of the Earth, ...

Balagtasan essay
The Dynamic Crust, Earthquakes and the Earth’s Interior. Exactly why is the Earth’s crust identified as being dynamic? Crust: the solid rock outer zone of the Earth. The crust is part of the lithosphere. The Earth’s crust is dynamic, which means constantly changing. Earthquakes Volcanoes Crustal movements ...

How to Illustrate the Plate Tectonic Theory Essay
The Plate Tectonic Theory largely revolves around the idea that the Earth’s crust is made up of rigid, thin plates that are in motion relative to each other. The theory was first suggested during the 1960s and was later developed by other scientists and geologists to ...
https://newszou.com/tag/crust-area/
Author: Ran Kuttner, Ph.D.
Development Team: Yaniv Sagee, Riad Kabha, Yael Ben-Zvi, Orit Meoded, Gazal Abu-Raya

Partnership between Arab and Jewish Communities for the Construction of Shared Society
Givat Haviva Model

Members of the Shared Communities team at Givat Haviva:
Yaniv Sagee – Givat Haviva Executive Director, Shared Communities Initiator and Head
Riad Kabha – Director of the Jewish-Arab Center for Peace, Shared Communities Originator and Director
Ran Kuttner – Academic Advisor, Knowledge Developer and Partnership Facilitator
Yael Ben-Zvi – Shared Communities Integrator and Partnership Facilitator
Mohammad Darawshe – Givat Haviva Board Member
Gazal Abu Raya – Partnership Facilitator
Orit Meoded – Community Program Coordinator at Shared Communities
Samer Atamneh – Director, Education Department
Mara List – Information Management and Youth Delegation Coordinator
Ella Eisner – Liaison Coordinator
Anhar Masarwa – Director of Gender Equality Center
Anat Lidror – Director of the Collaborative Art Center and Peace Gallery
Marwa Badran Mjadla – Partnership Facilitator
Anat Avrahami Marom – Shared Communities program evaluator

The Author: Ran Kuttner is a Givat Haviva Academic Advisor and a Senior Lecturer at the International Graduate Program in Peace and Conflict Management Studies at the Haifa University. Until 2013 he was an Associate Professor of Negotiation and Dispute Resolution at Creighton University in Omaha, Nebraska, and prior to that a Visiting Scholar at the Program on Negotiation at Harvard Law School for three years. In recent years, as a member of the Givat Haviva team, Ran has facilitated two of its partnerships and advised other facilitators, in addition to developing theoretical and practical knowledge for the Shared Communities program. Ran is also a mediator and mediation instructor who has facilitated dozens of mediation, negotiation and conflict resolution courses and workshops at universities and organizations and for the general public.

Editor: Naomi Gluck Uzrad
Arabic translation: Odeh Bisharat
Editor (Arabic): Hussein Sweiti
English translation: Shlomit Cnaan
Graphic design and production: Stephanie and Ruti Design
© Givat Haviva and Ran Kuttner, December 2019
ISBN 978-965-92616-7-3
This book was made possible by the support of the Robert Bosch Foundation and the New Israel Fund. The contents of this book are the sole responsibility of Givat Haviva and do not necessarily reflect the views of the Robert Bosch Foundation or of the New Israel Fund.

Contents
Executive Summary
Foreword: Givat Haviva and Its Activities
The Shared Communities Program: A Short Introduction
Part I: Theoretical Background
Introduction
The Context within which “Shared Communities” Operates
From Co-Existence to Shared Society: A Paradigm Shift
Dialogue: A Relational Practice
Stages in Partnership Development
The Role of Givat Haviva as an Integrator and Backbone Organization
From Representative to Participatory Democracy as an Anchor for Jewish-Arab Partnership
For Further Reading
Part II: Shared Communities Program
Introduction
The Municipal Circle
Case Histories and What They Can Teach Us
The Community Circle
Case Histories and What They Can Teach Us
The Educational Circle
Case Histories and What They Can Teach Us
The Course and the Stages of Partnership-Constructing Work
Stage 1: Contact with Municipalities, Starting the Process and Building The Program’s Foundation
Stage 2: Ongoing Work in Working Teams
Stage 3: Transition to the Program’s Second Year
Stage 4: The Third and Fourth Years
Part III: Regional Work
Regionality: A Theoretical Introduction
Regional Programs and Forums Initiated and Facilitated by Givat Haviva
Conclusion: Lessons Learned and Challenges for the Future

Executive Summary
The purpose of this book is to present the Shared Communities program to the professional community, and contribute to the body of knowledge that informs the endeavor towards a shared society. For the past decade Shared Communities has been Givat Haviva’s flagship program, developed and implemented to promote partnership and equality at a time of dissatisfaction with the idea of “coexistence”, and a growing preference for tackling the challenges of building and sustaining a shared society in Israel. The book presents a theoretical and practical model in the form of a multilayered four-year program that facilitates sustainable partnerships between pairs of neighboring Jewish and Arab communities, headed by local authorities, as a basis for developing a concept of shared region.
The vision that drives the program is that by using daily experience to exemplify the mutual benefits of cooperation, and by advancing both differing and shared interests, it will lay the foundations for a shared future and a shared society. This book presents the accumulated challenges, successes, experiences and insights gained through Givat Haviva’s work for the past decade. The book examines Givat Haviva’s activity in the Wadi A’ra region. It aims to make our model accessible to policy makers and to the public, in the hope that its message will resonate and serve as a basis for a shared society throughout the country. We invite the various national and local government agencies to make use of this model, expanding it or adapting it also to other regions. The first section of this book is devoted to the theoretical principles of Shared Communities. It lays out the challenges inherent in building a Jewish-Arab partnership, including the transition from coexistence to shared society. It demonstrates that developing a partnership-oriented consciousness is congruent with cultivating a profoundly dialogic mindset. It describes the progression from participation through collaboration to partnership. It discusses Givat Haviva’s role as an integrating organization, which draws many organizations and players – from the public, the business and the non-profit sectors – into municipal-led partnership-building processes. It also draws attention to the challenge of balancing bottom-up and top-down processes. The second section presents Givat Haviva’s working model for partnership facilitation. The first three chapters of this section describe the three circles that constitute the Shared Communities program – municipal, community, and educational – illustrated by practical examples and actual dilemmas from partnerships facilitated by Givat Haviva. The fourth chapter sets out a structured working format, a detailed multi-stage plan for facilitating a partnership through a four-year process. The third section discusses regionally-oriented action as an anchor for creating Jewish-Arab partnership and a basis for a cohesive, equal, shared society on the regional level. It describes the concept of regionality and the range of regional programs and initiatives undertaken by Givat Haviva in order to construct regional consciousness and cohesion.
As the book demonstrates, the process requires a great deal of patience – for carrying out substantial infrastructural projects, for constructing a deep partnership consciousness (as opposed to ad hoc collaborations), and for building the capabilities of local and regional officials and other participants to advance their partnerships with increasing independence from the facilitating body, through the development of their inner motivation and leadership. It may also be concluded that the vision of holistic advancement of partnership in all three circles (municipal, community and educational) and with the participation of all sectors (government, business, and civil society and citizens) requires significant support and commitment from government ministries, infrastructure providers, and professional bodies where specific expertise is needed. Appropriate knowledge and skills must be acquired where needed, and where professional development is required in order to reduce gaps and strengthen the partnership, it is important that a program have the mandate to carry it out. The process of building a partnership, in the deepest sense, is an ambitious one, especially at this time. However, this book shows that this comprehensive vision must not be abandoned in the face of our challenging social climate and the considerable time required for significant transformation from collaboration to partnership, and that the Shared Communities program should be viewed as a building of foundations and a planting of seeds, many of which will bear fruit only in years to come. The success of the program should therefore be evaluated in terms not only of performance (the tangible output of its projects) but also of the process – that is, the progress made along the axis of transformation from collaboration to partnership, as detailed in this book. We believe that the Shared Communities program can provide a feasible model for other regions, being adaptable to the particular circumstances of each partnership and each location; and that it can significantly advance the development of a shared, equal society in Israel, as regards both mindset and the changes that create a concrete reality. 7 Foreword: Givat Haviva and Its Activities Seventy years ago the Givat Haviva center, named after Haviva Reik, a Jewish paratrooper in pre-state Israel and emissary of the Hagana, was founded by Hashomer Hatza’ir movement as the national education center of the Kibbutz Federation in Israel. Since then Givat Haviva has always endeavored to promote the values of peace, equality and solidarity in society. It strives to safeguard the democratic, egalitarian character of the state of Israel, a quality it views as essential to citizens’ welfare and as a basis for true partnership between the two nations that share life in Israel. Givat Haviva promotes the existence of a shared, equal society for all citizens of Israel. From its origins in the kibbutz movement it draws its fundamental values of partnership and equality. These form the basis for its comprehensive plan of action, which centers on addressing the chasm between the Jewish and Arab national sectors, a chasm that jeopardizes Israel’s future as a democratic state. Givat Haviva develops sustainable models for building a shared society, in the format of five “circles”: local/inner-communal, inter-communal, regional, national, and global. 
The Givat Haviva programs strengthen communities, create a shared regional space, collaborate with government offices and serve as a nationwide model, and serve as a model intended for global emulation and integration. Since its birth, Israel has known many changes, which have all been reflected in the history of the relationship between its Jewish majority and Arab minority. For many years we espoused the idea of coexistence, believing that if we merely enabled positive, constructive dialogue between Jews and Arabs in Israel, we would be able to cultivate a healthy relationship between them. This belief dissipated in October of 2000. Those who believed that good relations were possible in the absence of equality were mistaken. Those who believed it was possible to live side by side in an unequal partnership and maintain healthy relations were forced to face a reality of profound discrimination and marginalization of a large Arab sector, which was not prepared to concede and to hold a “dialogue” of good relations with the discriminators. Givat Haviva has undergone a profound change over recent years. As this book explains in depth, we have redefined our mission, and instead of discussing coexistence, are now building an infrastructure for a shared society. Seven years ago we redefined our mission as follows: Givat Haviva is dedicated to building a shared, egalitarian society in Israel. A shared society is one in which all citizens, whatever their nationality, religion or gender, feel that Israel is their home. They all belong to it; they all own it. From this distinct mission we derived a clearly defined vision: Givat Haviva aims to build an inclusive, socially cohesive society in Israel by engaging divided communities in collective action towards the advancement of a sustainable, thriving Israeli democracy based on mutual responsibility, civic equality and a shared vision of the future. 8 Our theory of change is based on the assumption that a shared society cannot be built merely through limited intervention in specific fields. If we are to produce significant, sustainable change, we must employ a comprehensive, holistic intervention that covers a range of systems, issues and age groups. This is especially true at a time when such action goes against the grain of prevailing attitudes. In the face of current powerful, government-led social trends that discredit shared society, education and dialogue are not enough. We must build infrastructures for the future among the younger generations by means of education for shared living, and at the same time conduct comprehensive, in-depth work with the wider population. Alongside cultivating acquaintance and relationships, as we have done in the past, we must take tangible action for the equality and empowerment of Arab society. We must aim to build local, regional and national partnerships that can address the shared interests of Jews and Arabs from neighboring communities, who, without proactive intervention based on the model we propose, will remain estranged from each other. The model that has developed from our process can be incorporated and help guide the local and national government in continuous implementation of it. We must facilitate a connection between cultures and persons through mutual learning of the “other’s” language. This requires the teaching of spoken Hebrew to the young generation of Arab society, and of spoken Arabic to the young generation of Jewish society. 
We must create an encounter experience, based on shared creativity, art and culture, which will enable familiarization and socialization between the groups. Finally, we were required to add a new element – taking significant action towards a “top to bottom” policy change: we are no longer only an educational organization concerned with social infrastructure, but a center that combines this with the endeavor to change national and local government policy. To implement our new theory of change, we have designed five channels of action. The first concerns the development of human infrastructure for the future through education for shared living. In this sphere Givat Haviva currently reaches 8,000 young people (which constitute more than half the participants in this type of educational activity in Israel; the funds dedicated by Givat Haviva’s education department to education for shared Jewish-Arab living are more than those allocated to it by the Ministry of Education!). We run eleven education programs that draw on three different pedagogies: encounter and acquaintance programs, programs for shared learning in the school’s curriculum, and leadership programs. We have developed programs for children and young people of various ages, from preschool to high school, and have established a unique program, unlike any in the world, for joint regional education. On the Givat Haviva campus we have founded the International School for leadership, shared living and peace, and beside it the first Arab youth village in Israel, Sindyana, run in partnership with the Society for the Advancement of Education, which leads this singular educational project, and with the Menashe regional council, a council that sets an example in educating for shared living. Alongside these programs we also train teachers, facilitators and principals. 9 The second channel for building a shared society consists of working towards the equality and empowerment of Arab society: to this end we have developed a program for teaching spoken Hebrew to Arab school students to facilitate social, academic and vocational integration. This year, its fifth, the program is implemented among more than 30,000 middle school students. Within a few years we will reach all middle schools in Israel, thanks to the excellent cooperation and tremendous investment in the program by the Ministry of Education. We also run programs in the Arab sector that aim to prepare young people for a career in high-tech, promote informal education, and empower women. In total, we run more than ten programs with over 40,000 participants. The third channel creates connections through language, culture and art skills. Two centers on the Givat Haviva campus are dedicated to this purpose: the Arabic Studies Institute, which accommodates hundreds of students each year; and the shared Arts Center, which offers a wide variety of programs, including courses, exhibitions , an annual festival (“Under the Light”), a pottery school, and a Peace Gallery. Hundreds of people participate in the programs, and thousands more have the opportunity to encounter and relate to and through art and culture. The fourth channel is concerned with influence on government policy and public engagement: for the past six years, Givat Haviva has been hosting Israel’s largest and most influential annual conferences on the topic of shared society. 
From these conferences evolved an extremely significant program, aimed at creating “a road map for a shared society.” To create this roadmap we brought together 70 opinion leaders from all three sectors – civic, public and private – and together we formulated a comprehensive set of recommendations to be submitted to the government, concerning policy changes and legislation for shared living in Israel, in five areas: economy, governance, education, land, and cultural representation. The document contains approximately 130 recommendations, which we intend to motivate government agencies to implement over the next few years.

Finally, the fifth channel concerns partnership between neighboring communities. This has in fact been our flagship program in recent years, as our goal has shifted from coexistence to a shared society. In the course of these years we have worked with communities totaling some 200,000 members and fostered deep connections, based on shared interests, between neighboring Jewish and Arab communities. We believe that social change begins with the socio-political unit that people feel closest to, and therefore we concentrated our activity at the community level, with the aim of building shared communities. We employed all five avenues in our work within and between communities that share a regional space. We addressed key aspects of building a shared society, using a wide range of tools designed to address those aspects and to reach wide audiences. Our aim was to create a model of a Shared Society in a distinct area – Wadi A’ra – so that the program we formulate might eventually be run by the state, by means of the regional cluster of municipalities created by the Ministry of the Interior, and by local government, among others. Over the past eight years we have worked to create a coherent, complete model that can be reproduced countrywide as well as worldwide, in countries that aim for a shared society embracing diverse cultures. At present we wish to make our model accessible to policy-makers and to the public, so that it may resonate, expand, and serve as a mainstay of shared society throughout Israel. Accordingly, we realize that our role is changing: the extensive funds needed for widespread implementation of the model cannot be acquired from philanthropic organizations in place of state funding. We, as a civic organization, have developed a model, and now it is for the state, the ministries (especially the Ministry of the Interior and its community clusters), local government and neighboring pairs of municipalities to adopt it, expand it, adapt it to other localities and conditions, and implement it.

We are writing this book to pass on our knowledge and present the lessons we have learned from our successes and, no less importantly, from the failures we have experienced in the process. We remain committed to advancing the model and providing guidance to those who choose to use it to construct shared living, but we will do so as assistants. We will leave the driver’s seat and the responsibility for cultivating the partnerships, and take on the role of facilitators, supporting and advising the institutions that now take the lead in advancing the model. We believe that the only future possible for Israel is to become a state constructed as a shared, equal society of all its citizens. This program is intended as our contribution to this great vision for our shared state of Israel.
Yaniv Sagee – Givat Haviva Executive Director
Riad Kabha – Director of the Jewish-Arab Center for Peace at Givat Haviva

The Shared Communities Program: A Short Introduction

Since 2011 Givat Haviva has been developing and leading the Shared Communities program, in which pairs of neighboring communities, Jewish and Arab, undergo a process of building a sustainable partnership, alongside collaborative regional programs for representatives of all municipalities in the region. The program has enjoyed the devoted support of several foundations that share its worldview and have made it possible to put it into practice – first and foremost the Friedrich Ebert Stiftung, the Robert Bosch Stiftung, and the New Israel Fund. Givat Haviva thanks all the organizations and funds who share and support its work.

The program aims to create sustainable partnerships between neighboring communities, as a basis for evolving a common conception of regionality and as part of creating the required conditions for a shared society in Israel as a whole. Citizens and leaders from diverse social backgrounds take part in the program, creating inter-community collaborations, structures and activity mechanisms that will inform collaborations to advance shared values, aims and projects. The vision that guides the program is that by exemplifying the mutual advantages of collaboration through day-to-day experience, among diverse sectors in socially divided communities, it will lay and strengthen the foundations for a shared future and a shared society. This vision gives rise to four chief aims:
• Creating sustainable structures: developing integrative, inclusive, sustainable mechanisms for collaborations between and within divided communities.
• Deep mutual acquaintance and removal of barriers between communities, creating openness to a future of shared living in which the multicultural complexity of Israeli society can be managed and contained constructively.
• Putting plans into practice: training partner communities to plan and implement internal and inter-communal municipal plans that address shared needs and further mutual interests.
• Running joint projects: recruiting participants who can motivate the desired change and encouraging them to work together towards the realization of shared community projects, despite differences.

Shared Communities is a four-year program in which Givat Haviva facilitates a partnership between two communities and lays the foundation for the continuance of this partnership at the end of this four-year facilitation period (which may be extended if necessary).

Basic Premises

Givat Haviva’s approach to facilitating municipal partnership is guided by nine main principles:
• Through joint identification and advancement of interests, concrete achievements can be attained that help lessen inequality in areas such as infrastructure and economic and environmental development.
• Work on shared interests will enable participants to become more familiar with both societies and acquire intercultural skills and the capacity to live together as partners in a multicultural, pluralistic society (as well as to engage constructively in “tough issues” that cannot be well contained otherwise).
• Significant dialogue: in addition to advancing shared interests it is important to enable interpersonal encounters, deepening acquaintance and gradually building significant dialogue between the groups as key agents of change.
• Accountability (responsibility, commitment and answerability): greater authority and autonomy entail developing and strengthening accountability for processes, outputs and products. It is essential to integrate accountability into the municipalities’ organizational operation. Therefore the partnership program must boost the municipalities’ ability to construct the responsibility required to consistently lead and maintain the partnership, and to regularly initiate projects under its auspices.
• Sustainable changes rely on institutions that can continue shared activities – under municipal auspices, within the education system, and through community activities – once the facilitated four-year program has ended. The purpose of this short-to-medium-term period of facilitation is to build an infrastructure for stable, ongoing work after it has ended.
• Interlinked circles: cultural change and integration of partnership-mindedness will develop out of complementary endeavors in several spheres: municipal, community and educational. Throughout the process, participants are encouraged to develop an integrative regional mindset.
• Activity spheres: activity is concentrated in three distinct spheres, whose cultivation will lead to significant cultural change: the intra-municipal (which requires profound preparation so that each municipality can develop the skills and capability to work with its partner municipality); the inter-municipal, embracing both municipalities; and the regional, embracing the various municipalities of the Wadi A’ra region.
• Integration: it is Givat Haviva’s task to achieve an integration of the various activities, from the planning stage onwards, producing an effective critical mass of initiatives that inform and enrich each other. In this way it creates a held space that serves as a basis for ongoing, effective shared action.
• Linking community and municipality: a community, in the sense used in this book, is a group of people brought together by a defined territorial space. In this program we chose the municipality as this defining space. A community is a social system that maintains channels of communication and interrelationships in most areas of life, and its purpose is to fulfil the needs of individuals, groups and organizations, and enable them to be participants and partners in the public sphere.
• The above key points for building partnership between communities draw on an inclusive perception of a shared society, and on a work model intended to create the conditions for sustainable partnership, as discussed in the next chapters.

Part I
Theoretical Background

Introduction

The Shared Communities program is evolving at a time when the discourse on Jewish-Arab relations is undergoing a shift from “coexistence” as the desired outcome to “shared society.” This shift necessitates a change of aims and a reexamination of the goals and challenges posed by programs and processes intended to help build a shared society and cultivate shared living. The first chapter in this theoretical section examines the problem of the alienation of Arab citizens in Israel, and the challenges facing Israeli society in general, and Givat Haviva as a facilitating body in particular, in the building of Jewish-Arab partnership. The second chapter illuminates the challenges posed by the shift from coexistence to shared society. Givat Haviva’s working philosophy is pragmatic at its core.
Its premise is that joint action to change existing conditions, through identifying and addressing both shared and differing interests, will help create a reality of greater equality between Jews and Arabs, gradually develop partnership-mindedness, and gradually create the conditions required for a dialogic discourse that will eventually make it possible to address the sensitive, complex issues that lie at the root of the Jewish-Arab conflict in Israel. The third chapter aims to characterize dialogue and its qualities, and highlight the tension that exists between working on shared interests and conducting a dialogue on identities. One of Givat Haviva’s main tasks in the Shared Communities program is to facilitate a dialogic mindset and partnership-consciousness in its deepest sense. However, potential partners’ initial levels of readiness sometimes vary, necessitating support and aid to prepare the ground for establishing this kind of partnership. We have chosen to define these different levels as participation, collaboration and partnership. Part of the role of Givat Haviva's facilitators is to identify participants’ readiness level and help them progress along the continuum towards partnership. These levels and the passage between them are discussed in chapter 4. In order to achieve significant effects and generate a new community culture, Givat Haviva engages a large variety of organizations and players from the public sector, the business sector and the third sector in processes of municipality-led partnership building, integrating their activities to create a collective impact. Givat Haviva’s role as integrator is presented in the fifth chapter. The sixth chapter discusses the transition from representative to participatory democracy, a process which society and the concept of governance are currently undergoing nationally and globally. These processes are also reflected in the approach to partnership work, which aims to find new balances between top-down and bottom-up processes and to engage the general public and stakeholders without official titles or roles in shaping this changing reality by means of partnership. Since these changes also affect the mindset and function of formal and informal leaders, the sixth chapter also discusses shifts in the perception of leadership and the emphases pertaining to leadership in Givat Haviva’s work on Shared Communities.

Chapter 1
The Context Within Which “Shared Communities” Operates

I. The Challenge “Shared Communities” Faces

The Shared Communities program addresses the need that has arisen in recent decades to restructure Jewish-Arab relations, shifting the paradigm that characterized Israel’s first decades. The events of October 2000, in which twelve Arab Israeli citizens and one Palestinian citizen were shot to death by Israeli police, harshly brought to the fore the issues of Arab Israeli citizens’ inequality and sense of exclusion from the Israeli collective. There is a demand among Israel’s Arab citizens to be recognized as belonging to the Palestinian nation, alongside a demand to be fully integrated into Israeli society as equal citizens and accepted as a national collective within Israel. Different groups place varying degrees of emphasis on each of these aspects, and some feel that the conflict is irreconcilable under present conditions. Jewish Israeli society, too, finds it extremely challenging.
Many Jewish citizens feel threatened by what they see as the inherent tension between Palestinian national loyalty and Israeli citizenship, and are hard put to face this complexity or to engage in dialogue concerning it. The Jewish majority in Israel is itself experiencing tension between two tendencies, based on two aspects of the complex Zionist identity, which both societies sometimes find contradictory and impossible to resolve or even to manage constructively: Judaism, which views Jewishness as supremely important, and requires that Israel’s approach to nationality and identity emphasize this aspect and prioritize Israel’s Jewish population; and democracy, whose values demand full civil equality among all groups. These unresolved tensions manifest in the discrepancy between the Jewish majority’s vision for Israel as “a Jewish state” and the Arab minority’s vision of it as “a state of all its citizens”. Givat Haviva’s working premise is that currently it is not possible to resolve the dispute over the future vision for Israel, but it is possible to cultivate a fabric of just and fair shared living, through constructive engagement and finding better, more nurturing ways to achieve balance for all. Constructive management of, or – engagement with, the natural tensions between the two tendencies is vital, since perceiving them as mutually exclusive leads to conflict and to destructive handling of this tension. A "solution", in this case, would consist in acknowledging the complexity of the situation, facing it, engaging in ongoing dialogue and finding mutually nurturing balances and a new harmony between the differing tendencies. It seems that in the first decades of Israel’s existence, the approach to this complexity was far from comprehensive 17 and involved a great deal of denial. At the same time it promoted those competing and separationist elements that discourage any complex perspective inclusive of the Other. This complexity was seen as threatening, a perception that caused both groups to withdraw from one another. Today’s Jewish majority still finds it difficult to see Israeli Arabs as partners for the construction of an Israeli identity and environment, or to accord them a significant role in the national and civic space. For some dominant groups within Arab society, reinforcing the Palestinian national identity of Israeli Arabs is important even if it detracts from their Israeli civil identity. These two tendencies can also be discerned in the decisions of Israel’s government, a government that has passed the Israel Nation-State Law, which legally entrenches the overprivilege of Jews in Israel and the inequality between them and the Arab minority, while at the same time increasing the resources allocated to strengthening Arab society and reducing gaps. Palestinian Israelis cannot easily view Israeli-ness as an identity and a space in which the Palestinian minority has its place, partaking in the building of the country. Politically, it is possible to discern in Arab society a growing sense of alienation from the State and a mindset that does not advocate integration, side by side with processes of de facto integration in diverse spheres and a wish to shape a consciousness of “Israeliness” of which Arabs are a part. 
All this takes place in a general context of growing marginalization of Arab Israeli citizens in areas such as education, employment, industrial development, allocation of land for development and building, health, culture, transport, and participation in government and decision-making. Following the clashes of October 2000, the State-appointed Or Commission investigated the root causes of these events. The commission’s report acknowledges the tension, if not the substantial contradiction, between the fundamental principles of a Nation-State and those of a liberal democracy, and the need to invest special effort if one is to achieve reasonable harmony in majority-minority relations. The report describes the discriminatory policy towards Israeli Arabs, and calls for a recognition of the Arab minority’s identity as an indigenous minority that identifies with the Palestinian people and the Arab nation, as well as acknowledgment of the discrimination against it – discrimination that must be amended and inequalities that must be eliminated, in accordance with Israel’s legal obligations. The Commission stated that the issue has been neglected, and that attaining genuine equality must be a key goal of the state's actions. “The state,” it declared, “must initiate, develop, and operate programs emphasizing budgets that will close gaps in education, housing, industrial development, employment, and services.” This statement concerns both material aspects and the need to foster, recognize and embrace Arab society as part of Israel’s developing identity, and the report stated emphatically that “all government agencies must find the means to allow Arab citizens to express their culture and identity in public life in a respectable manner.” Several of the commission’s recommendations were adopted: the Authority for the Economic Development of the Arab, Druze and Circassian Sectors was established in 2007, initially as part of the Prime Minister’s Office, and multiannual 18 | Part I Theoretical Background programs were formulated and implemented with the goal of bringing the development of Arab towns and villages up to national standards. Nevertheless, it seems that all governments since have lacked vision and a broad perspective regarding Arab society in Israel. Several tensions exist in this regard: if an organization wishes to promote a healthier society with regard to Jewish-Arab relations, should it endeavor to eliminate material gaps or to achieve recognition and improved relationships? Will the status of Israel’s Arab citizens be improved by means of rights discourse, in which lawful rights are demanded and the law and justice systems are required to produce the change, or will discourse on identity and creating mutual recognition lead to righting the wrongs? In preparing settings for Jewish-Arab encounter, should the emphasis be on creating closeness and good relations between the groups, or on making the weakened voice heard and demanding to eliminate inequality? II. Common Approaches to Encounters between Groups in Conflict There are various models for intergroup encounter between groups in conflict – strategies of intervention based on different premises and aiming to achieve different goals. A model that was popular in the 1980s was founded on the principles of Contact Theory, which claimed that it is possible to reduce intergroup tension by setting the terms for positive contact between the groups’ members. 
By creating an equal-status environment that nurtured relationships that involve closeness and intimacy, participants come together and reduce their animosity. The equal-status contact in itself is expected to transform stereotypical negative conceptions and manufacture suitable conditions for collaboration among participants. In the 1990s such coexistence efforts based on contact theory started to be criticized by thinkers who claimed that they entrench inequality and are structured in such a way as to accommodate only the Jewish partners’ need for contact with the other culture. Critics alleged that this model avoided tough political issues and feared the weakening of separate groups’ identities, whereas reality demanded that they raise and confront meaningful political questions of structural inequality in society. Contact theory-based encounters were perceived by these critics as a form of manipulation, whose aim was to maintain the existing power structure with no critical reflection or willingness to reevaluate majority–minority power relations and inequality. In the late 1990s and early 2000s, the confrontation model became popular. According to this model, the intergroup dynamics in the encounters represent the relations between two identities that are entangled in power relations with unequal status and unequal access to resources. Participants are empowered to express their in-group identities, and the tensions or confrontation that arise in the encounters between Jews and Arabs are used to help participants understand the social dynamics in order to develop a critical stance toward them. Structural 19 inequalities are brought to participants’ attention, and the political sphere remains in the focus of the conversations, rather than putting effort into setting the terms for the intimate, interpersonal level exchange, as in Contact Theory. In this model, participants’ national identities are reinforced, and the conflict between these well-defined identities is the center of attention. This empowers Arabs to voice their identity and encourages Jews to reflect on the social inequality or violence in which they take part. Critics of this approach claim that it does not generate the conditions for dialogue (indeed, it grew out of the criticism on Contact Theory’s advocacy of dialogue for its own sake) but rather creates confrontation that highlights separateness, reinforces conflict, and produces polarization and an increased tendency in partners to barricade themselves within their national identities. The minority’s voice is indeed heard and made present, and the conflict is discussed openly, but this does not necessarily result in a constructive dynamic for its management. Inequality and lack of partnership find expression but do not necessarily undergo transformation. Another model that has become popular is based on the narrative approach. This has its origins in therapeutic practice, and in recent decades has also become prevalent in literature and practice as an approach to mediation and inter-group work in conflict situation. This approach holds that the manner in which an individual or group relates a narrative should not be assessed in relation to objective, factual reality. Instead, the focus must be on the way in which the narrator constructs the perception of reality or of the conflict, and the manner in which it is told and shaped as a narrative. 
The construction of a group ethos is not intended to describe the truth but to build identity, cultivate national pride, protect certain interests etc. It must be understood as such. In conflict situations the collective narrative is constructed and told unilaterally, and thus threatens to delegitimize the narrative of the other. It abounds with a sense of injustice and deprivation, of rightness and self-justification. The purpose of narrative work is to enable each group to understand the narrative of the other and to allow the differing narratives to exist side by side without judgment or decision, thus destabilizing (in a positive sense) and gradually deconstructing each party’s grasp of its own narrative as representing the only, unequivocal truth. A later stage of the process is the co-construction of a new, third narrative which is acceptable to all parties. Each of these models embraces important aspects that should be highlighted in Jewish-Arab encounters. The Shared Communities model developed by Givat Haviva assumes that building a shared society requires a strategy with a starting point that differs from the abovementioned, which gradually builds the preconditions for dialogic discourse between partners. III. Shared Communities As part of the criticism on contact theory-based engagements, and rooted in a critique of both the contact based and the confrontation models, a new synthesis has become more popular 20 | Part I Theoretical Background in recent years, advocating for the notions of “shared citizenship” and “shared living.” These concepts hold a response to processes of exclusion and alienation of Arabs from the public sphere, to which confrontation-model-based interactions gave some attention, and a wish to engage in a dialogue based on collective efforts to come together and find an agreed modus vivendi in Israeli society, based on partnership and equality, and highlighting not only Arab civil status but the mutual dependence of Jews and Arabs in Israel, and their common as well as their separate interests. President Rivlin’s “Four Tribes” speech and the subsequent presidential flagship project, Israeli Hope, emphasize the importance of establishing partnership and equality among the four tribes he identified as making up Israeli society (secular, orthodox, Haredi [ultra-orthodox], Arabs). In our present-day reality, he observes, there is no longer a clear majority-minority relationship regarding fundamental ideological questions. Therefore, the traditional majority-minority mindset must be replaced by a new mindset of partnership among the sectors of Israeli society. In Arab society, too, those who call for integration advocate doing so from a position of equality. Givat Haviva’s philosophy is that building partnership between Jewish and Arab communities, simultaneously attending to their common interests and mutually advancing their separate interests, will promote both partnership between Jews and Arabs (thus serving one of the chief interests of Israel’s Arab citizens) and a sense of security in Jewish-Arab collaboration, which the Jewish majority both needs and fears. Constructing such a dynamic will also, as explained in the next chapter, lay the foundations that will later enable the partnership process to enter a more complex discourse regarding issues that in its early stages it is not yet prepared to handle in a dialogic, constructive fashion. 
Shared Society is a concept that has also been extensively developed internationally in recent years in political contexts. The Club de Madrid, for example, the world’s largest independent group of political leaders (www.clubmadrid.org), has chosen the vision and goal of building shared societies worldwide, “based on the promotion and protection of all human rights, as well as on nondiscrimination, tolerance, respect for diversity, equality of opportunity, solidarity, security and participation of all people including disadvantaged and vulnerable groups and persons”. Britain’s former Prime Minister Theresa May, similarly, spoke in 2017 of a vision of a shared society in terms of its “working for everyone” and “tackling some of the burning injustices that undermine the solidarity of our society”. The building of a shared society, then, is also concerned with changing the conditions of existence in a way that contact-theory-based encounters do not emphasize. It is also concerned with the changing and shaping of policy on a national, regional and local level, so as to create an equal, cohesive society in every sphere of life. Partnership must also be integral to the structure of government. Social and economic inequality gives way to equal access to resources and knowledge, on the premise that increased inclusion and engagement of all populations in advancing and developing the country’s society and economy, out of a sense of shared responsibility, will help build a flourishing shared society.

Mari Fitzduff, in her book on building a shared society, identifies the spheres in which equality and partnership must be developed – namely governance, economy, police services, health, education, environment, culture, and land. Clem McCartney, researcher and content developer at the Club de Madrid, contributes more categories: poverty, access to services, environmental issues and sustainability, public participation, human security, development, etc. In one Givat Haviva project, Roadmap for a Shared Society, working teams have formulated policy recommendations regarding governance, economic development, education, land use, and restorative processes and cultural representation. The term shared, McCartney asserts, implies that individuals play a comprehensive role in society and feel a sense of belonging and a share in the responsibility for advancing the various social issues, as well as for the outcomes. They take part in shaping reality rather than merely responding to it as citizens excluded from decision-making or denied their fair share of influence and resources. Cultural diversity is perceived as a valuable resource rather than a threat to be diminished: the differences encountered in the partnership are not perceived as distancing barriers but as beneficial components of a whole that is greater than the sum of its parts. Thus, a shared society represents a vision of equality and sets the goal of creating partnership between equals.

Chapter 2
From Co-Existence to Shared Society: A Paradigm Shift

This chapter aims to present the gradual shift from coexistence to shared society and the potential inherent in this transition. This change is paradigmatic, not merely semantic, and presents new challenges to professionals who facilitate Jewish-Arab partnership building.
Coexistence was a term used mainly by Jews who were striving to engage in a dialogue with the other society, to familiarize communities with the culture and traditions of neighboring communities, and to reduce stereotypes and animosity between the two groups. Contact Theory and the Jewish-Arab encounters based on it served these motivations well. The transition from coexistence to shared citizenship and shared society reflects a transition from a separatist to an integrative perception of citizenship, based on a vision of partnership between equals. As noted earlier, it brings into sharp relief the interdependence of Jews and Arabs on each other. It also necessitates a shift from an individualistic approach to a relational-systemic approach, which the Shared Communities program seeks to put into practice. I. A Relational Approach to Conflict Management As background, it is important to understand the ongoing debate in the literature and practice of conflict resolution, concerning the integrative approach versus the approaches that aim to transform constructed alienation – an issue which their proponents claim is neglected by the integrative approach. The integrative approach helps the parties in a conflict engage in dialogue on needs and interests rather than on positions. The parties identify shared interests and think creatively together – not against each other but against the problem – developing ways to utilize their shared and their differing interests and attain at mutual gains. This approach is based on the premise that different interests – unlike positions – are not opposite but complementary, and that a collaborative engagement must be generated around these multiple interests to find an integrative solution. The theories that offer an alternative to the integrative approach – such as the transformative approach or narrative approach to mediation and conflict management – present a working model that extends beyond aiming to satisfy the groups’ interests. It aims to build their capacity for transforming destructive conflict engagement patterns and their perceptions of the conflictual reality, and to establish different communication and a different attitude towards the Other. 23 All these alternative approaches advocate developing a different conception of self, of identity, of what leads us time and again to focus attention on the separate, distinct, self-substantive, independently defined self-definition of the individual and of his/her group. Those advocating for a relational approach argue that the interest-based approach is founded on the conservative premise that we must unearth the true nature and identity of each group or individual, illuminate it, and articulate what is important for each group or individual separately, regardless of the other or of circumstances. This enables a kind of barter in which each party agrees to some of the other’s demands. The relational approach, in its various forms, has a different basic premise: that any attempt to discuss the self in isolation from the context and the fabric or relationship of which it is part is a fundamental error. Instead, we must understand how the conceptions of self are shaped in relation to, and contingent on, surrounding elements. The narrative approach, for example, assists individuals and groups in unearthing the contexts or discourse that gave rise to the narrative – the way in which an individual or group had built the story of their reality and the story of their identity. 
This “identity”, according to the relational approach, is not without context and cannot be comprehended separately from the processes by which it was constructed. The relational approach examines the partners’ ways of relating, or of “being in relation to” – that is, how they construct who they are in relation to the other and to the situation; at the same time, it builds their capacity for coordinating a new “shared movement” which is more constructive than a destructive, firm clinging to one's own sense of separate self. The relational approach challenges the partners to approach the situation with more openness, aware that the images they have constructed regarding the other as well as themselves are now to be reexamined and reconstructed. They are prepared to be present, attentive to what is being constructed in the “here and now” connection of the encounter.

II. The Relational Conflict Specialist’s Role in Constructing a Shared Society

The shift from co-existence, from standing apart, to a shared society may also bring about a shift to a different mindset – a relational mindset, which challenges the concept of a shared society and sheds new light on it: the transition is from “coexistence” – in which “I” and “thou” create ad hoc collaborations, maintaining separate lives with occasional encounters – to “shared society”, the gestalt of a shared space that precedes the existence of separate individuals within it.
• The relational mindset and dynamic include a recognition of interdependence – of the ways in which the “I” or “we” that we become in situations is a product of interactive processes with those we recognize and define as “the other”;
• A relational mindset implies ongoing learning processes of the ways in which the individual defines and situates himself, in relation to the ways in which he defines and situates “the other”;
• A relational dynamic implies cultivating a new way of listening, exercising humility and suspending judgment and the patterns that an individual has created within himself, out of real interest in reexamining his definitions in situ, that is, out of and within the contexts in which he constructs them;
• A relational mindset implies being mindful of our thought processes and of the ways in which we impose assumptions on contexts that in fact require us to adapt and to observe them with fresh, new eyes, without bias or presuppositions – a requirement we are not always able to fulfil.

Concepts and qualities such as co-construction, co-evolution, joint action, joint meaning-making, coordinated meaning and adaptive change are not foreign to global conflict-transformation discourse, but they are certainly foreign to the Middle-Eastern experience at this time. The approach proposed here aims at integrating these concepts and approaches into public discourse as part of building a dynamic of partnership and of shared society. To construct a shared-society mindset it is necessary to depart from a starting point of individualistic premises and gradually adopt a relational approach to events in the shared space (the fourth chapter expands on the developmental theme and presents the axis along which the conception of partnership develops). Professionals wishing to encourage constructive conflict management must offer the possibility of constructing what is known here as a relational mindset and a shared relational-dialogic space, thereby creating the basis for construction of partnership-mindedness – within us and between us.
A relational conception of the world is not merely an enhanced version of contact theory. Rather, it is a mindset that aims to identify what is required to create the conditions for a shared society: cultivating practices conducive to shared construction of reality – of a mindset and dynamic whereby the partners mutually construct the reality of their lives – also cultivates partnership- mindedness. Adopting relational practices and a relational mindset means developing an awareness of the shared space, of a more inclusive, systems-based perspective and of a shared life, in which the sum of the group’s reality (the shared region, state or city) is involved in a dialogic process of co-constructing its characteristics. While coexistence presents a narrow, unfulfilling vision of social cohesiveness and legitimizes the option of withdrawal from the efforts to co-construct a shared reality, cultivating a relational mindset involves developing the capability to establish a more complex sense of interdependence and of shared space, in which the parties take equal responsibility for their physical, mental and social welfare. Instead of assuming partial responsibility for the quality of life in the defined territory of that particular group, the state of shared living involves shared construction of and responsibility for that space, with an understanding that the individual does not have the luxury to improve his quality of life separately and without considering the quality of life of so-called “others” – that is, the group he perceives as being outside his responsibility. 25 III. Between Interests and Relationality As we have stated, the conflict-resolution literature distinguishes between approaches to mediation based on identifying and addressing interests, and approaches that define themselves as relational and question the interests-based approach. Givat Haviva’s approach aims to integrate the two: the mapping of interests and shared social action that aims to express and address them, alongside an attempt to implement a complex, systemic version of the interests-based approach, which views the dialogic interaction among the various interests as part of a greater whole, a larger context or space within which these interests are shaped and reshaped/ In this shared space they are mutually and relationally formed, alongside the construction of an evolving set of priorities that develops and changes as the continual discourse takes place. Givat Haviva’s approach may be seen as a developmental one, which posits that the shared dialogic, relational space is built up gradually out of preliminary interests-based discourse. Focusing on interests leans on established, widely accepted psychological theories (Realistic Group Conflict Theory), which view inter-group hostility as a product of conflictual, competing aims and goals that can be diminished if there are mutually desirable superordinate goals, attainable only by means of inter-group collaboration. In practice, we translate these goals and aims into interests. According to this theory, relations between groups have characteristics that differ from the distinct characteristics of either group. They result from the history of the groups’ interactions, among other factors. Therefore, inter-group relationships cannot be evaluated outside the realm of inter-group characteristics according to intra-group characteristics. In practice, these characteristics must be translated to the relational space shared by the groups. 
Indeed, according to this theory, neither the shaping of tendencies nor the understanding of content that arises in interactions can be the product of studying individuals in isolation from the environment in which they are shaped. The first two basic premises of Shared Communities, as described in the introduction to this book, are that shared identification and advancement of interests can bring about concrete substantial achievements, which can help lessen inequality in areas such as infrastructure and economic as well as environmental development; and that work on shared interests will enable participants to become more familiar with both societies, to acquire intercultural skills, and to build the capacity to live together as partners in a multicultural, pluralistic society. Our underlying assumption is that working on shared interests will advance partnership and equality between Jews and Arabs, as well as creating the preconditions, opportunities and ripeness for a relational dynamic. A shared space, if we can develop relational capability and partnership, can bring about the ripeness to engage constructively with “difficult” issues that otherwise cannot be contained and effectively managed. Under current conditions, there is as yet no ripeness for identity discourse, neither at the dynamic/process nor at the content level. Each group must be empowered, as commonly and non-relationally understood, to express 26 | Part I Theoretical Background its interests clearly and be able to collaborate with a cultural group whose characteristics and working culture are significantly different. It must be noted, however, that in keeping with the relational premises, the empowerment occurs from and within the encounter itself, when significant issues are brought to the shared space during the work process; it does not manifest in withdrawal and retreating into abandoning the collaboration. It is important that conditions and ripeness for complex identity discourse be created. In some cases, as discussed later, we have avoided engagement with sensitive issues that began to surface, when we judged that this could produce a crisis that the team was unequipped to manage constructively at that stage. Building a dialogic space between conflicted groups may take years, and therefore Shared Communities sets more modest goals as a working framework for the first years. It acknowledges that the four- year time framework to which the partner municipalities have committed may well be sufficient only for laying the foundations. It may be only at a more advanced stage that the seeds planted ripen into full-fledged relational dialogue or discourse on complex identity issues. In one of the partnerships, for example, even when conditions seemed right and it appeared appropriate, after three years’ work in the partnership, to invite a joint leadership team from both municipalities to a complex identity discourse, the feedback we received indicated that this would not be right at that stage. We realize that identity discourse on complex core issues is essential to building partnership, in its deepest sense, and a shared society in Israel; however, we take the view that conditions and ripeness must be created by developing a shared dialogic space and a relational dynamic by which these can be maintained constructively. Therefore, at each stage discourse topics must match the type of discourse possible and the maturity of the partnership. 
Chapter 3
Dialogue: A Relational Practice

A shared society is a dialogic society. From a relational perspective, shared and dialogic are almost synonymous, due to the mental characteristics and the practices they share. This section will show that construction of dialogic practices and mindset is tantamount to construction of shared-society practices and mindset. In other words, a dialogic mindset (not to be confused with “dialogue encounters” in the spirit of coexistence encounters) and its gradual development through the Jewish-Arab encounter process will make it possible to construct and establish a shared society. In keeping with the underpinnings presented in the previous chapter, prominent dialogue scholars across a number of intellectual disciplines present the concept of dialogue as a relational practice, identifying the difference between dialogue and other types of interaction (discussion, conversation) through questioning the conception of the individualistic self that characterizes exchanges between two subjects distinguished by a clear boundary and frame. While the individual, according to this approach, strives to define the independence of the self by means of clarifying his independent, unchanging inner core, the dialogic approach sees the self as an entity located and defined within, rather than separately from, the assemblage of relations and context in which it functions. Martin Buber, for example, offers an alternative to the more consensual individualistic worldview, and in fact turns it on its head: his starting point is the idea that only the relation grants the things their true, independent existence. Buber makes the radical claim that in dialogue the relation is echoed as a primary and foundational experience – one in which the nature of humans and of the world is understood. He draws a distinction between two modes of conversation – indeed, between two different qualities of human interaction: “I-Thou” and “I-It,” the former manifesting dialogic relations. While the I-It relation is characterized by seeing the “it” as bounded by others, as an object perceived with cold indifference, the I-Thou is a dialogic relation, acknowledging that only in the presence of the I-Thou primary relation can the self be wholly apprehended. In dialogue, people understand that only within the scope of that relation does the “I” become a person in its full sense, thus fulfilling his humanness. The realm of the “Thou” overcomes the disconnection embedded in the relation with an object and includes a different focus, the living person that is gradually constructed before him within the context. Similar to Buber’s view, David Bohm explains that the etymology of “dialogue” is “a stream of meaning flowing among and through us and between us” (Bohm, 1996). Bohm draws a distinction between the dialogic state and the state of trading information among human beings, a “discussion”, where each person guards the foundational assumptions with which they have arrived, and conducts a trade-off or negotiation, without being open to questioning the fundamental assumptions of all participants, himself included. While this kind of interaction suffices for coexistence efforts, the efforts to build a shared society and shared living are directed at changing it.
In contrast to the trading information or negotiation state, the dialogic state, according to Bohm, calls for a re-examination of the assumptions and perceptions with which the participants arrive, including the perception of the parties as two different systems. We will argue that in advanced phases of shared society efforts, such re-examination ought to take place, and will emphasize relational processes in the space, in which meaning arises from the dialogic conversation process. Charles Taylor (1999) draws a distinction between “monologic acts” (single-agent acts) and “dialogic acts”, in which it can be conceived that the self neither preexists all conversation, as in the old monological view, nor does it arise from an introjection of the interlocutor, but it arises within conversation, because this kind of dialogical action by its very nature marks a place for the new locator who is being inducted into it. Once again, the dialogic state is emphasized as questioning the more common sense of self and as cultivating the ability to focus on the way in which the self is established within and out of the interaction. This questioning is necessary not only on the interpersonal level but on the societal level and among social groups, in order to cultivate a shared society. Taylor articulates what can be seen as a relational vision for transforming adversity and social fragmentation into dialogue: A vision of shifting from an introjected “I,” which has to find its own voice, to gradual awareness of the process of how it arises within conversation, a process of gradually finding one’s own voice as a participant of dialogue. This–by definition–cannot be a process undergone by individuals as it is a shared social process that will provide new meaning to the notion of shared society, and to the strategies and skills needed for its gradual cultivation. Dialogue, in the relational view of the self that is presented here as an alternative to the individualistic worldview, shifts the focus from the individual to the domain of relation and relatedness; in dialogue, the self is constructed within interactions and from them in an ongoing process of mutual construction, in the flow of interactions between the individual and his world, in which he not only gives but also receives expression to his values and vision, in harmony and in keeping with the context. In the dialogic process, meaning is transmitted from the individual’s mind out to the space in which people operate together. Social construction is not a matter of penetrating the privacy of the other’s subjectivity, which seems an obvious unit of analysis for those wanting to understand the social world, but rather a relational achievement that depends on coordinating action. A dialogic mindset centers on coordination of meaning, based on the understanding that meaning emerges only through interaction, within and from relationality. This approach emphasizes patterns of interactions and relational processes out of which we construct our individuality, on the premise that meaning is not the possession of individuals, of separate persons, but rather an emergent property of coordinated action. We must be being aware of meaning as it unfolds, and affects the flow of processes from within our living involvement with them. 
29 It is important to stress the complexity that this perception brings to the concept of empowerment and to the importance of assisting each person and group articulate their voice, their needs and wishes, their history and particular identity, in a clear and brave manner, while cultivating the person's or group's agency. This approach does not aim at deny the importance of all of these, but to assist people and groups develop their capacity to see the ways in which identities and agencies take shape, change and are continuously constructed within new contexts and realities. This is not to dismiss the properties that agents bring (as some repressive approaches and social practices have forced them to do), but to create the conditions for dialogic modus vivendi, joint construction of the concrete reality of all partners partaking in the social space. In the "dialogic moment" as described by Maya Kahanoff (2010), interests not only work against or alongside each other, and interpretations exist not only as competing alternatives, but are brought into confrontation and collaboration at the same time. The concept of empowerment is transformed: while the individualistic ethos defines empowerment in a one-dimensional manner as the ability of the individual or the group to articulate its voice in a clear and distinct manner, independently of external pressures and influences, and to reconstruct their ability to manage their lives under their own conditions (while confusing negative influences and dialogic interactions), empowerment from a relational point of view presents a more complex perspective, in which the individual or the group are able to question the stability of the narrative that a person or a group construct with regard to themselves, clinging to a firm, well-defined and definite sense of self, with inner coherence, which is usually fixed and entrenched, guarding itself from "outer" influence and change. The empowerment and dialogic modus vivendi described here are required in both inter-group and intra-group relations, so that interaction of a different character – dialogic - can take place between the various identity components that exist in tension with one another as described in the first chapter (Palestinian and Israeli identities among the Arab minority in Israel and among the Jewish majority that finds it challenging to accept their residing together, or Jewish and democratic aspects in the perception of the state, among the Jewish majority and among the Arab minority that struggles to see them residing together). Qualities of interaction that characterize dialogic interaction, as derived from the dialogic approach described here: • A shift from focusing on the parties to focusing on the space between them • A shift from firm opinions to suspension of judgment and an interest in shared exploration and illumination. • A shift from a dichotomy of right/wrong, good/bad to a suspension of judgment regarding good and bad, and an effort to understand the speaker’s patterns of thinking and inner logic. • A shift from binary, either/or thinking, to paradoxical thought that embraces apparent contradictions. • A shift from dialectic thinking (bringing about a synthesis between opposing, polarized 30 | Part I Theoretical Background viewpoints) to dialogic thinking (flow, generative movement and taking shape in a non- polarized way). 
• A shift from debate or confrontation between two parties, defending arguments and attempting to persuade while refuting the other’s arguments, to shared deliberation, exploration and examination, with demonstration of tolerance toward diverse thinking patterns. • A shift from linear thinking that venerates consistency, clarity, and justifications based on pure logic, to phasic logic (allowing spontaneous emergence of ideas, impressions and experiences), paradoxes, and vagueness. • A shift from the desire to attain a definite aim, to the desire to learn and to focus on improving interaction and synchronization. • A shift from seeking the familiar, clinging to it and returning to it, to recognizing the singularity of context and constructing new territories. • A shift from preoccupation with thoughts and patterns from the past to being present in the total experience (thoughts, emotions, feelings). • A shift from thinking inside personal space and according to constructed schemes, to joint thinking and shared construction of knowledge. • A shift from attempting to direct matters in a predetermined direction and obstructing development of thought, to constructing knowledge and generation of ideas from within the context. • A shift from emphasizing difference and otherness, to making dilemmas and uncertainties present so that they can be examined together. • A shift from emphasizing independence and separateness to making mutual dependence present: a shared effort, not individual efforts. • A shift from personal interpretation of concepts to jointly unfolding interpretation and building a shared meaning, while acknowledging the importance of context. • A shift from clear agendas and firm opinions to recognition of continuous change and willingness to embrace change. • A shift from commitment to defined viewpoints, standpoints and ideas, to questioning and a willingness to clarify the premises, values and experience that underlie each viewpoint. • A shift from experiencing doubt and uncertainty as a threat, to viewing them as a mental resource (a different interpretation of insecurity, and a different approach to change). • A shift from a conception of unknowing as a lack that must be filled with knowledge, to viewing it as a space in which to stay and examine questions. • A shift from listening in order to agree or disagree, to listening in order to learn, understand, develop, build shared meaning, and leverage ideas. 31 • A shift from predictable statements and hackneyed ideas to joint thinking and construction of mutually forming ideas. • A shift from having the topics and problems for discussion defined by normative boundaries of discourse, to encouraging participants to challenge and question these boundaries, and express basic needs and wishes that are beyond the normative range. • A shift from reacting to reality, to focusing on the shared process as creating reality. • A shift from reactive, immediate responses, to reflection and thoughtful responses generated by a mindset of mutual creating and building. • A shift from regulation of discourse by means of power relations, control and inequality, to equality, partnership, and shared responsibility. • Comprehensive mutual recognition of the other’s situation in the here-and-now of the encounter as well as in the effect of the encounter. The dialogic approach presented here is consistent with the developmental approach as described by one of the central current thinkers in developmental psychology, Robert Kegan of Harvard University. 
Kegan's five-stage developmental model describes the fourth stage as a stage in which people transform the unhealthy dependency of the third stage, which he calls the Interpersonal Stage. In that stage humans lack autonomy and are therefore conditioned and highly influenced by others' opinions. In the fourth stage – the Institutional Stage – a person develops a significant sense of agency, identity, a stable and determinant perception of self that includes a clear ideology with regard to who they are, and through which they interpret their social realm. Much effort and determination are invested at this stage in preserving the institute, the stable identity. Kegan claims that this is the most advanced stage that most people reach, and that it confirms the social ethos of the individualistic convention with regard to self- fulfillment, personal autonomy, and self-expression. Humans in this stage identify with their sense of self, but lack the capacity to reflect upon their selves, to observe from the outside. This is something that takes place in the fifth, more advanced developmental stage, which Kegan names the Interindividual stage. At this stage the notion that one has a single, stable perspective or unitary collection of interests is surrendered. The unified and coherent institution and its ideology is replaced by a perception of continuously-constructed system with a capacity to critically examine and reexamine one's perception of consistent self and the perception of ideology altogether. Mutual dependency is perceived as fashioning a bigger context, a wider consciousness than the self 's consciousness in which these separate identities interpenetrate. The separate identities are co-regulated in reciprocity. While at the institutional stage humans return to their private space to examine the situation and the means by which one affects and is affected by the other, in the interindividual stage the examination takes place in a dialogic rather than dichotomous understanding of self-and-other in a co-constructed process. As a matter of fact, the fifth stage exemplifies the dialogic mindset, the I-Thou relations, and the realization of the shared-society ethos presented in this book. It is evident, as the next chapter describes, 32 | Part I Theoretical Background that a process of managing conflict or tension between groups, where one side is fearful of negative effects, as a result of intolerance and repression of one's voice in the shared space, is a situation in which there are barriers that need to be taken into account. The challenges inherent in realizing a partnership mindset are many, and deserve a patient, process-oriented approach, in order to set the conditions that will enable the construction of a mindset and practices that enable partnership in its deepest sense. 33 Chapter 4 Stages in Partnership Development In the literature, both in Israel and worldwide, we find several typologies defining different qualities of partnership and different levels of “shared living”. This chapter examines the characteristics of these levels as described in the literature, and presents Givat Haviva’s definitions for them. Identifying the attributes of each level enables our professionals to define the challenges of the transitions between levels in the partnerships they facilitate, and to diagnose the practical challenges to be addressed in each partnership in order to cultivate its dialogic character and enhance its quality. 
Our assertion is that if we wish to bring about the most advanced level, that of partnership, rather than repeating past mistakes and investing in “more of the same” efforts towards “coexistence”, we must understand that the components of the change align with the paradigm shift proposed in the first chapter, which includes the construction of a relational mindset. I. Types of Partnerships The distinction between long-term, sustainable change and changes with less impact appears repeatedly in various forms in the literature. Innes and Booher (1999), for example, define first- order, second-order and third-order impacts in the planning of collaborations: 1. First-order impact is a direct, immediate result of the collaboration process. This category includes social, political and intellectual capital, shared strategic plans and agreements. 2. Second-order impact exists when partnerships manifest outside the formal boundaries of the efforts. This includes new partnerships, coordination of shared action, new learning, change in practices and change in perspectives. 3. Third-order impact, which takes place only at a later stage, includes new partnerships, continued shared evolution, reduced conflict between the partner groups, adaptation of services, of resources and of the partners themselves, new institutions, new norms, new ways of coping with social issues, and new channels of discourse. Grey (2000) adds: generating social capital, creating shared meaning, increased interaction, and a change in the distribution of power. Collaboration, the writers claim, creates public value if it succeeds in bringing about change of first, second and third order. In their book Networks That Work (2011), Paul Vandeventer and Myrna Mandell distinguish among cooperation, coordination and collaboration. While it is possible to be satisfied with first- level qualities, in order to produce sustainable change it is necessary to go beyond it. 34 | Part I Theoretical Background The first level, cooperation, is characterized by high levels of urgency in gathering to convene problem-solving; building a field or gathering momentum for movement; reaching agreements by commonly understood forms of dialogue, negotiation and consensus; sharing information; testing ideas and learning about different approaches to one another's work; and creating social environments that lead to better personal and professional relationships. The second level, coordination, adds to these the following characteristics: identifying and pursuing intentional policy advocacy priorities for all parties; negotiating network members' commitments and allocation of time and energy resources; pushing established organizational boundaries and creating a more robust sense of interdependence; and strengthening relationships by engaging in activities that require greater mutual reliance. The third level, collaboration, involves greater risk-taking. At this level, participants join together to pursue a long-term system creation or reform. 
All partners take part in changing the system; change old conceptions regarding how it works; agree upon, work within and self-enforce the tasks to be carried out to maintain the new or reformed system; ; build robust methods for identifying, engaging, and resolving conflicts; participate authentically in transferring resources from certain services or projects to others, in ways that would not have been possible under the old system; reach agreements about the ways in which old working patterns can be radically and permanently changed; and redefine the way in which the partners play their roles in the new, larger system (ibid., pp. 23-24). Sikkuy, in a publication summing up a decade of activity and entitled Equality Zones: Promoting Partnerships between Jewish and Arab Municipal Authorities, distinguish between cooperation and partnership, stating that a partnership “is a continuing and structured link between two or more independent organizations that choose to work together to achieve a common goal. Thus a partnership is based on mutual desire and on continued and regular activity, with the idea of leveraging the participants’ influence while maintaining flexibility, and creating a joint space and common language that increase their acquaintance with each other. This differs from cooperation, which is generally a random, one-time, or limited episode.” (pg. 15). Tami Rubel Lifschitz, in her paper “Practices of Collaboration” (Shatil), distinguishes among four practices of collaboration: sharing, participation, collaboration (ad-hoc or continuous) and strategic or core partnership. The distinction that is most relevant to our purposes is between the third and fourth levels. Collaboration implies action – a project or tactic led by several protagonists who make decisions together on the basis of mutual discussion. A good collaboration conjoins the unique added value of each party’s particular qualities, together with common values, aims or interests that bond them together. It enables both to achieve things they could not have achieved on their own. Collaboration with a partner also enables a deeper encounter with the “other” and the other’s culture, with mutual exchange of ideas and opportunities for learning, renewal and creativity. Strategic or core partnership is a long-term relationship, which usually combines core aims, vision and values with pragmatic interests. In contrast to the case of collaboration, here 35 the shared project is not the main aim, but is regarded as a means for attaining broader shared aims. This kind of partnership often involves a personal affinity between its leaders – a deep, meaningful link that makes it possible to dream, create, and initiate projects together. In the absence of such strong interpersonal connection, working together tends to concentrate more on the pragmatic facet. In cases where work centers on the concrete task, rather than on the strategic vision or core values, the relationship between the parties may become a long-term collaboration, but not a partnership. II. Givat Haviva's Partnership Model The intervention model in the Shared Communities program developed by Givat Haviva is based on a three-level typology that draws a 4-year path in which Givat Haviva serves as the integrator and facilitator of various joint initiatives among the communities, aiming at cultivating a spirit of partnership amongst them, based on the relational model, as described below. 
The Givat Haviva typology comprises the following stages: participation collaboration partnership The three levels—participation, collaboration, and partnership—demonstrate three developmental stages from a relational point of view. It is important to note that the transition toward partnership expresses the overcoming of a mindset that is nonrelational or characterized by individualistic manifestations. Givat Haviva’s definition of participation is as follows: “participation in a partnership that another is leading, with the joint work being random, periodical, and limited in time for the resolution of ad hoc problems. Participants arrive at agreements on a predetermined set of goals with the help of agreed-upon forums designed for partial sharing of knowledge, information, and resources, collaboration in order to gain better perspective on a problem and its possible resolution, and construction of a social sphere that leads to better relationships and which entails regathering when needed.” From a relational point of view, this stage demonstrates a separately formulated sense of each of the groups, bringing to the shared space and maintaining a well secured sense of what the problems to tackle are and of their interests and priorities. Satisfaction of well-defined interests is the focus and joint exploration or co-evolution is not the emphasis. Collaboration is defined as follows: Participants, or rather collaborators, lead together a joint project or a number of initiatives simultaneously, while identifying and addressing separate and shared interests and significant issues, for which the groups’ priorities are compatible. The decisions on the work plan and the projects’ goals and characteristics are taken together, while pushing the constructed limits of their respective organizations and creating a more 36 | Part I Theoretical Background meaningful sense of dependency for the satisfaction of interests. The joint dependency leads to the strengthening of ties and the formation of a platform for ongoing collaboration, with partial interest in and capacity for developing new joint initiatives, new learning, and a change in practices and perspectives.” From a relational point of view, there is progression toward joint decision making and reflecting on one’s boundaries: Thus giving room for dependency, yet still for the satisfaction of their separately defined interests. There is also progression toward joint learning, a step toward co-evolution and joint meaning-making. For example, an economic-development committee of a partnership between a Jewish and an Arab municipality may find its drive and will to participate in the program in its early evolutionary stage through the calculation of the self-interests of each of the municipalities, that is – “what do we (my municipality) gain from it financially,” or – “how are our economic interests served through any designated program.” In a later, more advanced collaborative stage, the question may be framed differently. For example – “how is the well-being of all citizens in the region served through the project at hand?” This adopts a regional view of the common good, regardless of the designation into in-group interests. 
Givat Haviva’s definition of partnership is as follows: “profound connection in vision, core goals, and values; joint construction of a system, or long-term reform, in which all participants contribute to the change of the system and arrive at agreements regarding the means by which it is possible to change – radically and for the long term – old working patterns; in conjunction with adapting services, resources, institutes, and norms; and while developing new ways for coping with social problems. The partners develop and enforce upon themselves the tasks in need of addressing to maintain and continuously develop the partnership while creating joint meaning and joint social capital, and while constructing vital systems for identifying, surfacing, managing, and resolving conflicts.” The profound connection helps reexamine foundational premises and make present the sense of shared existence. This sense assists participants in this later stage to refrain from withdrawing to a separate, fortified sense of distinct identity. Indeed, there is room for voicing differences and empowering group narratives, and the experience of singular identity is likewise empowered, but these arise from and within the interaction, and are addressed compassionately in the realm of the shared space. In this space, the sharing of vision, core goals, and values have a key role in constructing the sense of partnership, and they serve as the holding space for the dialogue that deals with differences, among other things. This is no longer merely the meeting space of separately identified needs and interests. The interests are no longer defined ad hoc, and the agendas of each party in the encounter are no longer intended to create optimal value for its own community, but are calculated and addressed with a holistic perspective that takes into account the complexity of the regional considerations at a more systemic level. The design of a system and long-term reform reinforce the co-construction of a sustainable system that stands on its own and in which all the parties partake, relationally emerging through 37 and within their interactions. The changing of established patterns of action that stem from the individualistic tendency is a derivative of the ability to deconstruct mental constructs and habitual ways of being that derive from a fixed sense of self that ought to be transformed. These patterns are replaced by co-constructing in mutuality a new sense of going-on together (for example, seeing an environmental challenge as a regional challenge rather than a challenge a municipality needs to face with its known, well-defined capacity). The ability to adapt to new realities requires the transformation of one’s habitual grasping of self and other. It also requires the transformation of seemingly objective constructs of objective reality, forced on the individual as part of a social construction process. The sense of internal-reinforcement and responsibility is a result of motivation building, a motivation that recognizes and sees as a priority the maintaining and sustaining of the partnership in its profound, dialogic stance. This includes making meaning together and refraining from the withdrawal to separate sense of self that stands in opposition to the other and which characterizes poor conflict management. 
Developing the capacity and skills to manage conflicts from a relational standpoint requires the reflective ability to identify and raise this common, almost naturally occurring withdrawal (from the dialogic space to the entrenchment in in-group identities, situated in opposition to one another). Good, highly capable conflict management systems are aware of the tendency to polarize, dichotomize, and exclude, and are capable of transforming this human tendency into relational awareness and inclusive, interdependent dialogue. In this more advanced stage, the maintenance and improvement of the partnership become the center of attention, participants discovering its core through it. For example, in the economic- development committee of a partnership between a Jewish and an Arab municipality described above, partners in this stage may find the drive – when gradually cultivating the mindset of partnership and a growing sense of shared ownership and responsibility for the joint region – to enter a dialogue where the committee may be focused on ideas such as “Who are we as a partnership?” “What is our shared vision for this region?” and even more – “How is our sense of who we are revisited through and within the interaction?,” or “How can our partnership define itself anew through the shared activity that pushes our boundaries, our sense of who we are, in a way that helps us thrive?” When asking this more advanced set of questions, participants are no longer avoiding structural imbalance; they are restructuring power relations, and revisiting inequality. Moreover, while gradually experiencing profound changes in Jewish-Arab dynamics and cultivating a relational sense of co-construction of their joint reality, they are now able to raise and manage constructively these tough issues that usually are highly challenging to address in Jewish–Arab encounters. The characteristics of partnership include significant markers of relational and dialogic qualities that must be reinforced: co-evolution, adaptation of services and of the partners themselves, new ways of conducting discourse, creation of shared meaning, coming together to build a long-term, shared system or reform, transformation of mindsets regarding the way in which the system 38 | Part I Theoretical Background should operate, redefining the way in which all parties fulfil their roles in the new, expanded system, preserving flexibility, and creating a shared space and a shared language, among other things. All these features necessitate a quality of encounter that requires a rethinking of the ways in which individuals and groups define themselves. Professionals who wish to construct a shared- society mindset can refer to ideas proposed by the relational approaches, and understand the concept of partnership accordingly: the individual’s existence and quality of life are measured not in isolation but within the relationships and from the social context in which he exists. They are what they are depending on the context, on the space and on the relationships with in it. The relational approach rests on these premises, and this is the paradigm of shared society, in contrast to a society in which groups live alongside each other and seek their well-being independently and separately, rather than in the wider context and in relation to the many variables that shape the ever-changing space in the “here and now” of the interaction. 
If we wish to make present a profound, dialogic partnership-mindedness, we must analyze challenges and barriers in relational terms and implement suitable conflict-transforming skills, so that we might construct a mindset that enables such living practices. III. From Participation to Partnership One of Givat Haviva’s main tasks in the Shared Communities program is to support the participants in their progress towards partnership in the deep sense of the word. Initially, however, potential partners’ degrees of readiness may vary, and support and help will be needed to prepare the ground for establishing a partnership of this kind. When we wish to advance a partnership between pairs of communities and assist them in building a work plan, we must examine several points, some of which were highlighted earlier, and focusing on which will help us build an optimal approach and work plan: 1. What goals do we wish to achieve in this working plan, and what partnership standard should we set for the participants? 2. How shall we help the partners decide on the partnership’s goals and undergo the necessary process of change? 3. If our aim is to produce a core partnership, do we have – or if not, how can we acquire – the requisite resources and capabilities? 4. How can we ensure throughout the process that joint efforts are focused on the strategic vision or core values, rather than on the concrete task at hand? Regarding the participants’ level of engagement and proactivity, at the participation stage they mainly take part in the process led by Givat Haviva and at the collaboration stage they collaborate with each other and with the facilitating body, while at the partnership stage they are jointly 39 responsible for advancing, developing and integrating the process, with increasing independence from the external body. The following table was formulated to help professional bodies identify at what stage of readiness communities are for partnership, guide them productively through that stage, and identify the goal of the next stage in the partnership’s development, and the challenges it may pose. Note that not all criteria are relevant to each program or project. In addition, the teams in one partnership program may have varying levels of maturity, and some of them may have reached a more advanced stage than others. A partnership or project may also have reached several different stages according to different criteria. The dynamics may be more advanced where a certain criterion (e.g. “Collaboration”) is concerned, while in another field the participants may require more work, since they are still in a stage of “Participation”. Team members may also have differing conceptions of partnership, which must be considered. Criterion Participation Collaboration Partnership Shared vision and agenda: Participants’ Convergence of Shared general aim, not coordinating the core unity of purpose interests around a necessarily identical aim, vision, values and agendas; collaboration interests, while continually shared project depends on similarity of reexamining the agenda goals. together. Decision- Each group Decisions are made Joint construction of the making makes decisions together, through collective consciousness, separately, while mutual discussion and and generation of decisions the facilitating examination; good from the shared space and body has a central collaboration takes into core aims, with a systemic, role in generating account the unique added- integrative approach ideas. 
value each side brings, and aiming to act for the while showing recognition common good of each party’s particular values alongside those common values, aims or concerns that bring them together Sharing of Partial sharing Full sharing in order Full and consistent sharing information of information, to discuss and resolve of information in order to among depending on the problems, in the course of create new knowledge and participants extent of shared joint projects meaning together, as well as agendas collective intelligence 40 | Part I Theoretical Background Criterion Participation Collaboration Partnership Organizational Short-term and Middle- to long-term (over Long term and not commitment time-limited, ten years), continuing and time-limited, consistent issue-specific and issue-specific (though may and systemic, responds pragmatic, for concern several issues), for to new challenges and solving concrete solving concrete problems, initiates expansion of the problems both pragmatic and partnership, ideology and ideological values take precedence over pragmatism Personal Temporary Continual but unofficial Personal accountability commitment of engagement of participation of officials and commitment of the participants persons committed who cooperate with the participants who lead, to the cause, facilitating body and maintain and develop dependent on take part in shaping the the partnership, while facilitation partnership continually restructuring relevant needs Learning and Case-specific Case-specific evaluation of Systemic evaluation using innovation evaluation of successful and unsuccessful consistent criteria, learning successful and actions in order to improve by comparison to other unsuccessful future projects. Use of contexts and constant actions. Relevance formative-evaluation tools examination in order to of conclusions is develop new knowledge limited and initiatives. 
Use of developmental-evaluation tools Source of External, dictated External, stemming from Inner; deep conviction motivation by the system or by the realization that this of the partnership’s proper norms; or activity serves the group's importance, inner conviction of interests and helps solve acknowledgment of the importance of a problem; or inner mutual dependence, addressing specific conviction, recognizing deep commitment and concerns or issues the importance of the considering the partnership a high priority, while action regardless of specific allotting considerable problems, and partial, resources and going beyond limited dedication of official duty resources Initiating Focusing on Considering follow-up Joint generation of ideas further action the defined task activities, seeking shared with the constant aim without exceeding areas to highlight and new of better, more efficient or extending it topics prioritized by all action, and corresponding participants redefinition of roles 41 Criterion Participation Collaboration Partnership Institunalization Ad hoc issue- Establishing a joint team/ Influencing policies, and regulation specific work, with format that convenes as establishing new, needed to address problems permanent social the possibility of in a specific field institutions that explore reconvening if new ways of improving necessary in some residents’ welfare cases Receptive- ness Joint work on Identifying problems Joining forces to bring to change specific issues and acting jointly to about fundamental changes and/or problems broaden the scope of the or long-term systemic defined by each partnership, prioritize reforms participant issues to be addressed, separately, each define them and address contributing their each specifically, while relative advantage perspective and practices change to a certain extent Preserving the Producing focused, “Pushing the organizational Changing the ways in identity of the relatively “safe” limits”, creating a stronger which each organization organization or attempts in sense of mutual dependence fulfils its roles in the group being mutually among the partners, while extended system committed also reinforcing individual ties by means of activities that require increasing mutual trust Extent of Sporadic; program Acceptance of the Deep conviction among embeddedness depends on the collaboration process and a critical mass of officials of the program involvement partial recognition of its and representatives from and of of officials and importance by many of the the three sectors, large- participation individual officials and representatives scale involvement of circles in stakeholders of the different sectors. citizens and containment everyday life Selective engagement of of diverse voices, including citizens and objectors to the objecting voices, in order to process. 
build a public willingness and participation in consolidating the partnership 42 | Part I Theoretical Background Criterion Participation Collaboration Partnership Funding Guaranteed Guaranteed external An official policy by external funding funding for broad support which public funds and policy and foundations ascribe to the commitment for for limited support of program activities, partnership’s aims; internal of various activities alongside internal funding funding for supporting self-funding of specific projects by processes, limited external funding participants Allocation of Existing Augmenting existing Extensive allocation of resources and employees, manpower, appointing resources for institutions personnel volunteers. a part-time regional and structures that are Restricted budgets partnership director for the products of the partnership. for special events regional authority. Partial Expanding personnel only funding of defined activities pool as needed (full-time and projects partnership director, mediation center director, art center staff etc.) Discourse Negotiation Discussion Dialogue format Involvement of Facilitating-leading Facilitating-supporting, Background consultation external body Transfer of Each activity Reporting and Alignment of various information stands alone communication between activities and transmitting between and aims to different activities without continuous information activities achieve its own monitoring consistency of between activities to aims and goals; goals or cross-fertilization create unity of purpose, no significant between initiatives cross-fertilization, communication and understanding of or reporting take commonalities of projects place and bodies Attitude Indifference No dependence on other Recognition of the towards other regarding similar initiatives. Exchange of importance of integration initiatives and initiatives and information and occasional and collaboration among organizations organizations that attempts to form ad hoc all relevant players to create address the same partnerships with other collective influence issue bodies active in the field Conflict Conflict is Conflict is perceived as Conflict is perceived management perceived as a a problem that can be as a source of change barrier and is overcome by redistributing and rectification, and avoided wherever value, without necessarily is managed using an possible generating a crisis integrative approach in order to create value 43 Chapter 5 The Role of Givat Haviva as an Integrator and Backbone Organization This chapter presents an approach informed by the collective impact working model, and delineates Givat Haviva’s role as a backbone organization in leading partnerships between communities. The central precept of this approach, which is consistent with the motivation of building a shared society between communities, is that to achieve a significant effect and to create a new community-based culture, it is necessary to generate collective impact; that is, to cause multiple organizations and players – from the public sector, the business sector and the third sector – who take part in social processes, to work together and create a force of significant influence. In an analysis of initiatives that have successfully generated collective impact and activated processes of social change, it was found that all these successful efforts involved an organization active behind the scenes. In the literature, such organizations are known as backbone organizations. 
Creating and managing a collective-impact initiative requires a detached organization with specialized management and coordination capabilities, to support the initiative and serve as its “backbone”. In this way it saves the participating organizations time, a precious resource that is always in short supply. Many projects fail due to the mistaken belief that coordination among all participants can take place without a supportive infrastructure. It is sometimes difficult to argue the importance of a backbone organization, but studies show that the participants in every successful collective-impact project were adamant that without the significant involvement of such an organization, the project would have been doomed to fail and “do more of the same” – assigning decisions to a small team at the top of a hierarchy and thus failing to strengthen the community in its processes of change. I. Responsibilities of a Backbone Organization A backbone organization supplies a team that deals exclusively with the partnership but is separate from the participating organizations’ team. The backbone organization is charged with all activities required to simplify the participants’ task and ensure that the work is carried out smoothly. Under the right circumstances, “backbone organizations embody the principles of adaptive leadership: the ability to focus people’s attention and create a sense of urgency; the skill to apply pressure to stakeholders without overwhelming them; the competence to frame issues in a way that presents opportunities as well as difficulties, and the strength to mediate conflict among stakeholders” (Kania & Kramer, 2013). The role of backbone organizations comprises six principal activities by means of which such organizations achieve their goals (Turner et al. 2011): 44 | Part I Theoretical Background 1. Guiding vision and strategy: this manifests in bringing together multiple organizations and players, and building a collective direction. 2. Supporting aligned activities: synchronizing various activities and ensuring ongoing transmission of information among them, in order to create unity of purpose and understand the commonalities between the different projects and bodies. 3. Establishing shared measurement practices. 4. Building public will: shaping public consciousness while emphasizing the importance of the project. 5. Advancing policy. 6. Mobilizing funding. Other common responsibilities of backbone organizations include: • Planning and managing the project • Supporting data-gathering and -reporting activities • Supporting communication and technology • Management of logistical and administrative details. II. Integration of Partnerships between Communities Shared Communities may seem to involve two organizations – a Jewish municipality and an Arab one; but in practice, the facilitation work can be seen as a task of integrating teams, sub-teams, diverse projects in the partnership, pairs of municipalities and various regional projects etc. The integration required falls into three main categories: 1. Realizing that the various projects approved by the municipality are not projects that stand on their own or operate independently and hence do not fit the model of collective impact even though they share the auspices of the municipality and steering committee. Therefore, the integrator must bring them together, align them, and aspire to ensure that concrete projects are seen as part of a bigger picture and vision. 2. 
Engaging leaders and officials from diverse sectors and ranks of society (additional officials from the public sector, key figures from the business sector, NGOs and activists from the third sector, committee leaders, subject coordinators, relevant professionals, unofficial public leaders etc.) and interweaving them into a collective system with substantial social influence. 3. Bringing about fruitful integration of all activities that are part of the partnership within Givat Haviva. A backbone organization is thus crucial to the success of the partnership, and the organization that facilitates the partnership between communities should be seen to a certain extent as such a backbone organization. 45 III. Conditions for Successful Collective Impact In another article, Kania & Kramer (2011) list five conditions for successful collective impact initiative, which together clearly define the tasks and goals of integration: 1. A shared social agenda regarding the vision for the required change: this includes a shared understanding of the nature of the problem, the right approach to solving it, and the actions to be taken. It is important to acknowledge disagreements among participants and give them due attention, in order to choose the framework and priorities. 2. Continuous communication among leaders: Participants need several years of regular meetings to recognize and appreciate the common motivation behind their different efforts, to ensure that their own interests will not be ignored, and that decisions will be made on the basis of objective evidence and for the benefit of the project, not to favor the priorities of one participant over another. They must also develop a shared vocabulary, which is essential to developing shared measurement systems. These meetings are extremely important and should be held monthly or even twice a month, and attended by the organizations’ CEO-level leaders. Meetings should be scheduled according to a structured agenda. 3. Mutually reinforcing activities: it would be ineffective for all participants to focus on the same activities. Each should undertake the specific set of activities at which he excels, and encourage others to do the same. The power of collective action comes not from the sheer number of participants or the uniformity of their efforts, but from the coordination of their differentiated activities through an overarching plan of action. Activities should be coordinated, with each participant acting in his own field and held to shared standards. 4. Backbone support organization: a separate set of professionals who are charged with planning, managing and supporting the action framework’s activities at all organizational, technological and logistical levels. Experience gleaned from facilitation of partnerships between communities shows a need for at least three roles: a project manager for the action framework, a data manager, and a facilitator. The importance of these functions lies in their ability to focus participants’ attention and create an understanding of the project’s significance with no unnecessary pressure, alongside the competence to frame issues in a way that presents opportunities as well as difficulties. Most important of all, of course, is their ability to mediate conflict and attain compromise among all stakeholders. 5. Developing a shared measurement system: a shared agenda is worthless unless all participants agree on the ways in which success will be measured and reported. 
Measurement and evaluation are particularly important, not only because they indicate what is effective and what is not, but because they increase each participant’s accountability and commitment, preventing any shirking of responsibility. This kind of project can only be evaluated once the discourse has reached a certain stage of maturity and the first practical actions have been taken to advance the chosen agenda. Measurement and evaluation may be carried out using 46 | Part I Theoretical Background web-based technologies, which can increase efficiency, reduce cost, improve data quality, and document the progress of the initiative as a whole. IV. Challenges in Public Network Management Another relevant study, which discusses management of public networks, lists five main challenges to be taken into account to create an effective system (Milward & Provan, 2006). These are pertinent to Givat Haviva’s role in facilitating partnership between communities, and apply both to the integrator and to the partner organizations: 1. Management of accountability – how to create accountability and ensure that participants carry out the tasks to which they have committed; 2. Management of legitimacy – how to promote the activity and continually negotiate the legitimacy of the partnership and the importance of its various activities; 3. Management of conflict – what kind of conflict-resolution system should be established to address conflicts that arise in the partnership process; 4. Management of design – what kind of management model best serves the program. It may be a consensus-building model in which decisions emerge organically from the network’s activity, with no specific organization in charge; there may be a specific responsible body within the system (for example, some members of the network may be given the authority to act as integrators); or there may be an external backbone organization exclusively charged with managing the system. It is critical that we define to ourselves which management model we intend to build in the program, and examine whether it is appropriate that in the early stages we serve as a backbone organization, the aspiration being that by the program’s fourth year this role will be taken over by members of the partnership and that by that stage our role will consist of coordinating information rather than initiating and motivating. 5. Management of commitment – how to sustain the commitment of the various players to the ongoing activities of the partnership in particular and of the network in general, and ensure that they are committed to the overall working system rather than only to specific activities under their responsibility. It must be noted that the approach to management and leadership proposed here supports types of leadership that enable and give room for growth. Such approaches and understandings of leadership will be discussed further in the next chapter. It must be remembered, however, that while ideally management would be the task of the project’s steering committee/leadership team, which would act as an integrator, in practice it does not have the resources (time, staff, expertise etc.) to fulfil this, unless a specific post is created for the ongoing task of maintaining the integration. Therefore, for the first few years at least, Givat Haviva must commit to a dominant, active role in the integration process. 
47 Chapter 6 From Representative to Participatory Democracy as an Anchor for Jewish-Arab Partnership Over the past decades it has become increasingly prevalent to view democracy as a deliberative practice, by means of which stakeholders as well as the general public partake in the processes of policy design and implementation. Stakeholders – from civic society, from the business sector, and where relevant from various public authorities – now take a more significant part in introducing changes, this practice gradually replacing the representative-democracy approach whereby elected representatives delineate policy in a more hierarchical fashion. These developments have come about since public and social problems are becoming increasingly complex, so that in order to address them it is necessary to share information, responsibility and capacity. An additional factor is the increasing accessibility of information, which contributes to people’s wish to participate in decision-making, be less reliant on “experts”, and have less policies imposed upon them by “the powers that be”. Governments in democratic societies are therefore increasingly acknowledging the right, ability and duty of citizens to shoulder responsibility for their social situation and to take action to change it, rather than leave decisions exclusively to authorities and policy-makers. It is important to note that deliberative practices are not a new phenomenon. There are testimonies to the existence of such practices centuries ago, as in the classic example of democracy in Athens. Recent decades have seen the development of a range of practices and procedures for deliberation suited to the needs and nature of particular contexts and issues. One of these developing norms is the inclusion of diverse populations in the policy-design processes, to some degree, even if ultimately the decisions themselves are made by policy-makers. Civic society, including social activists and organizations, is becoming an increasingly important, influential player in the development and implementation of public policy; there is a growing awareness that organizations and other players may have knowledge and experience that the government lacks, as well as an ability to move forward processes and support efficient and effective implementation of their products. The topic of public participation is pertinent to many issues and is studied by several academic disciplines, such as public policy, public administration, planning, conflict resolution, leadership, and communications. 48 | Part I Theoretical Background I. From Government to Collaborative Governance The approach that equated rulership with government, and according to which the elected leadership is the only dominant body in the design and implementation of policy, is being replaced by a paradigm of governance, which emphasizes the inclusiveness of processes involved in ruling, decision-making, power relations and policy leadership – by the government, the market, public networks, social organizations, informal groups, local and international corporations – and the interrelationships among them all. Governance has been defined as “the art of governing communities in participatory, deliberative, and collaborative ways” (Leighninger, 2006). This new paradigm views governance as the process of creating and implementing policies informed by the shared goals of citizens and organizations that do not necessarily have official authority. 
Practices of such new governance may, for example, build networks that go beyond government circles – as ongoing or as time-limited processes – in which public, private and non-profit organizations, as well as the general public, partake. Collaborative governance also addresses public conflicts by initiating complex deliberation processes, which bring together stakeholders and representatives of various publics. This is based on the understanding that conducting a discourse process and building agreements can prevent escalation and polarization among groups with differing interests, and help reach creative, win- win solutions, which are satisfying or at least acceptable to all concerned. Processes like this are very valuable since they allow participants to go beyond a narrow prism of interests and to develop a complex systemic viewpoint that takes into consideration the multilayered needs and interests of all stakeholders, as well as enabling the development of capacity for collaboration and dialogue. Professional literature distinguishes between processes aimed at solving a specific problem and processes in which stakeholders and partners interact as a complex system with the intention of developing capabilities for better, longer-lasting dialogue and collaboration, either in order to produce collaboration/participation that increases benefits for all concerned (by optimizing benefits from existing resources) or as preemptive action to prevent potential future escalation of conflict. Another model (NCDD 2013) distinguishes among four principal types of dialogue and discourse processes: exploration, conflict transformation, decision making and collaborative action. The facilitator engaged in process-design should help identify the required process and design it accordingly. Collaborative governance seeks to include stakeholders in the decision-making process, encourages citizens' autonomy and independence, and presents a means for promoting the common good through inclusion of citizens and stakeholders. The rich variety of methods and practices that have developed in recent years for the management of dialogue and deliberation processes is beyond the scope of this book. However, it is important to discuss public participation in decision-making processes. 49 II. Public Participation The concept of public participation is much discussed in literature, and has generated diverse practices for bringing together diverse populations for diverse deliberation processes. These perceptions and processes play a key role in building a shared society, since they encourage the engagement and participation of publics from all walks of life in social processes – including marginalized populations that are unused to activism of this type – and helps them develop the sense that they are capable of influencing the shaping of policies and their living conditions. Public participation processes recognize the public’s right to information that influences decision- making, its right to argue its case directly before decision-makers, and of course, its right to have access to the decision-making process. Citizens are empowered to acquire relevant knowledge, to process information that may seem “too complicated”, and to apply their own thinking to complex decision-making processes. 
It is important that processes of public participation and stakeholder participation uphold the principles of equity and of inclusiveness; all stakeholders who have relevant interests or are involved in the issues under consideration should participate in the process. Participatory processes and policy-making by consensus are directly linked to the sustainability principle, since a policy formulated with the participation of stakeholders and approved by them is likely to be sustainable, and such stakeholders will probably take part in the efforts for its effective implementation, out of a sense of accountability and ownership. Such processes also reinforce stakeholders’ and citizens’ trust in the establishment and lessen the alienation sometimes felt on encountering it, since it may be perceived as oppositional and intimidating, and as serving narrow interests, not always with impartiality. There are still some – members of the public as well as those in positions of authority – who reject this new approach, whether because they wish to preserve power concentrations or because they have internalized an oppressive mindset and struggle with the transition from the widespread fundamentally hierarchic conceptions to those based on decentralization of power and flattening hierarchies. It is still a challenge to accept deliberation and inclusion processes as an engine of growth and welfare, which advances social cohesion, greater involvement, and active participation by a public that claims control and responsibility for its living situation. These processes at times encounter ambivalence, where collaborative approaches are embraced on a declarative level, as lip service, while in practice collaborative processes are far from being implemented.
https://anyflip.com/aylq/wbbq/basic
We will ensure that our learning environment is respectful and engaging. Where possible we will ensure that it is both culturally responsive and appropriate. Students will feel safe and supported in the classroom. When involved in education outside the classroom we will adhere to school policy ensuring ākonga are safe on school trips. We will adapt practice and pedagogy in response to the needs of ākonga in our classes.

- Elementary Language to Advanced and Tertiary Level.
- Roopu Manu Huia Regional Secondary Schools Kapa Haka Festival, NZ Schools' Festival.
- Cultural Performance Group for all official school functions / Community Function.
- Covered in all Te Reo Maori and Maori Performing Arts Programmes.
http://www.hastingsboys.school.nz/curriculum/arts_faculty/maori/index.htm
Major establishment press outlets ignored Friday's news that "Teresa Fryer, the chief information security officer for the Centers for Medicare and Medicaid Services (CMS) ... explicitly recommended denial of the website’s Authority to Operate (ATO), but was overruled by her superiors." Fryer also "refused to put her name on a letter recommending a temporary ATO be granted for six months" In other words, HealthCare.gov should not have launched. Brian Fung at the Washington Post's "The Switch" blog didn't consider the idea that HC.gov shouldn't even have gone live the most important story element. While failing to disclose Fryer's no-go recommendation and refusal to go along, he and his post's headline instead obsessed over whether Republican Congressman and House Oversight Committee chair Darrell Issa might "release files" that "could aid hackers." It wouldn't be a surprise to learn that hackers already have them, or at least have figured out how to work with or around them. Excerpts follow the jump (bolds are mine): These HealthCare.gov files could aid hackers. And Darrell Issa may release them. Significant security vulnerabilities are still being uncovered in the Obama administration's health-insurance Web site, nearly three months after the launch of HealthCare.gov. Officials discovered two such vulnerabilities, known as "high findings," within the last month, including one this week, Teresa Fryer, the chief information security officer for the Center for Medicare and Medicaid Services, told the House Oversight Committee this week in an interview. Fryer said that both issues were being addressed. The debate over the security of HealthCare.gov has raised questions about whether similar vulnerabilities exist in systems across the federal government. Because the Internal Revenue Service, the Social Security Administration and other agencies communicate with HealthCare.gov, security gaps in those agencies could, if discovered, allow hackers to penetrate their systems and indirectly compromise the functioning of the new health-care law, according to outside security experts. ... While software vulnerabilities in Healthcare.gov have been documented, the potential risk stemming from the site’s interconnection with other federal systems has not. Officials from the White House, the Health and Human Services Department and others did not answer questions posed by The Washington Post about whether serious vulnerabilities exist in other federal IT systems linked to HealthCare.gov. ... Separately, Mitre, an independent contractor hired to test the security of HealthCare.gov, identified 28 security vulnerabilities in one of several tests it conducted in mid-October, according to the company. ... Last month, Mitre agreed to send redacted copies of its test results to Issa in response to a subpoena. On Dec. 9, Issa requested the same documents in an unredacted format. In a series of four letters to Issa, executives from Mitre, the contractor behind the studies, warned that the unredacted documents could pose a risk to national security. "In the wrong hands, this information could cause irreparable harm to the basic security architecture of HealthCare.gov," wrote Mitre chief executive Alfred Grasso in a letter that accompanied the unredacted documents, "and potentially to the security of other CMS data networks that share attributes of this architecture." 
The Obama administration chimed in, with the White House counsel's office urging Issa not to leak the documents for fear of endangering "other, similarly constructed federal IT system controls." The obviously unexplored "potential risk stemming from the site’s interconnection with other federal systems" represents yet another reason why Fryer's no-go recommendation should have been heeded. Mitre likely had to write the letters it did as a matter of due diligence, so the fact that the firm wrote them is not as extraordinary as Fung would have us believe. As for Issa, contrary to the Obama administration's assumptions, he wasn't quoted about releasing or "leaking" the documents, only about getting them in unredacted form. Is the Obama administration so distrusting of Congress, to which it gives national security briefings all the time, or is it more concerned that there's damning information in what has been redacted to this point? If Team Obama is worried about "leaks," maybe it should look at the Democrats on Issa's committee, who would appear to have a lot to gain from anonymously "leaking" the documents to embarrass the chairman. The larger point is that Fung blew off Fryer's courageous refusal to go along and get along in favor of fretting about something (i.e., releasing security-sensitive documents) that no one has ever said they intend to do. Maybe the paper should change its name to "The Water-Carrying Post." Cross-posted at BizzyBlog.com.
https://www.newsbusters.org/blogs/nb/tom-blumer/2013/12/26/wapo-headline-frets-about-possible-issa-file-releases-not-insecure
Issui Enomoto presents a dreamy take on Tokyo and Yokohama. The 43-year-old taxi driver has taken photos of the two cities for more than a decade from his car during his night shift. To create the dreamlike quality of his photographs, he overlays multiple shots, as CNN details: Enomoto takes photographs throughout his night shift, then overlays multiple shots to create a dreamlike effect with varying exposures. The resulting pictures offer ethereal glimpses of individuals before they fade into the night. The photographer’s work has been exhibited as part of a group show at the Tokyo Photographic Art Museum. He has also had solo shows at smaller, independent spaces such as Tokyo’s Gallery Kan. Enomoto’s work captures these transient, unspoken encounters with passengers — or even just passersby.
https://aworkstation.com/taxi-driver-photographs-dreamy-japanese-scenes/
Research shows that sensory play builds nerve connections in the brain’s pathways, which lead to the child’s ability to complete more complex learning tasks. Sensory play supports language development, cognitive growth, fine and gross motor skills, problem-solving skills, and social interaction. Sensory play includes any activity that stimulates your young child’s senses: touch, smell, taste, movement, balance, sight and hearing. Sensory activities facilitate exploration and naturally encourage children to use scientific processes while they play, create, investigate and explore. Kids love sensory play; it generally involves lots of mess and a great deal of laughter! Here are all our examples, ideas and inspiration for you to use to create some sensory play at home. We also have sensory play sessions at our centres for you to join in with.
https://www.kidzplay.co.uk/play-category/sensory-play/
Myostatin (MSTN) is a secreted protein that plays an important role in regulating muscle mass. We originally identified myostatin in a screen for new members of the transforming growth factor-β (TGF-β) super family in mammals. We showed that Mstn is expressed specifically in the skeletal muscle lineage both during embryonic development and in adult mice and that targeted deletion of the Mstn gene in mice leads to a dramatic and widespread increase in skeletal muscle mass. Subsequent genetic studies in cattle, sheep, dogs, and humans have all shown that the function of myostatin as a negative regulator of muscle mass has been highly conserved across species. The demonstration that myostatin normally acts to limit muscle mass has suggested the possibility that targeting the myostatin pathway may have utility for enhancing muscle growth and regeneration in disease states characterized by debilitating muscle loss, including muscle degenerative diseases, neuromuscular diseases, cachexia, and age-related sarcopenia. Indeed, a number of studies have demonstrated beneficial effects of targeting the myostatin pathway in many of these disease settings. In order to develop strategies and methods for exploiting this signaling pathway for human therapeutic applications, we have focused much of our work on understanding the mechanisms by which myostatin signals to target cells and by which myostatin activity is regulated. The overall goal of this project is to continue our efforts to elucidate the molecular and cellular mechanisms underlying myostatin action. A major goal of this project will be to identify the cell types in muscle that are the direct targets for myostatin signaling in vivo. There is considerable debate as to whether myostatin normally exerts its effect by signaling to satellite cells, which are the stem cells resident in muscle, or directly to myofibers. In the first part of this project, we will attempt to determine the role of satellite cells in mediating myostatin signaling and the effects of myostatin inhibition. These studies will be important not only for understanding the basic biology of skeletal muscle growth but also for pursuing clinical applications based on targeting this pathway, as a critical question has been whether therapies based on myostatin inhibition will have beneficial effects in disease settings where the satellite cell population has already been depleted. In the second part of this project, we will continue our efforts to understand the roles of key components of this regulatory system. In particular, we will use genetic approaches in mice to characterize further the role of activin type II receptors in mediating myostatin signaling and the role of follistatin in regulating myostatin activity. Taken together, we believe that the results of these studies will provide important insights into the mechanism of action of myostatin and its regulatory components and that these findings could have important implications both for assessing which disease states might be most responsive to therapeutic agents targeting this pathway and for identifying the most effective strategies for therapeutic intervention. The overall aim of this proposal is to investigate the mechanisms underlying the regulation and activity of myostatin, which is a signaling molecule that plays a critical role in regulating skeletal muscle growth. 
These studies could have important implications for the prevention and treatment of a wide range of muscle wasting diseases, like muscular dystrophy, sarcopenia, and cachexia, as well as metabolic diseases, like obesity and type II diabetes.
http://grantome.com/grant/NIH/R01-AR059685-04
Photo Quiz An Elderly Woman with a Non-healing Ulcer Am Fam Physician. 2005 Jun 1;71(11):2163-2165. An 83-year-old woman presented with a non-healing ulcer on her left lower extremity. According to the woman’s caregiver, she had had the ulcer for more than 20 years. She reported no history of diabetes, coronary artery disease, or peripheral vascular disease, and she was taking no medications. On physical examination, there was a 3 × 3 cm ulcer, mildly tender to palpation, with a central necrotic area and irregular borders (see accompanying figure). The patient had no lower extremity edema and arterial pulses were palpable on both legs. A biopsy was performed. Question Based on the patient’s history and physical examination, which one of the following is the correct diagnosis? A. Venous stasis ulcer. B. Arterial ulcer. C. Marjolin’s ulcer. D. Pyoderma gangrenosum. E. Diabetic ulcer. Discussion The answer is C: Marjolin’s ulcer. This entity was first described in 1828 by Marjolin, who diagnosed non-healing ulcers developing in burn scars. Later, Dupuytren found these ulcers to be malignant in nature.1 Today, the term Marjolin’s ulcer is used to describe a cancer arising from any site of chronic inflammation. Ninety percent of Marjolin’s ulcers develop from burn scars. Other sites include stasis ulcers, decubitus ulcers, granuloma inguinale, syphilitic lesions, and smallpox vaccination scars. They also may occur in sinus tracts from hidradenitis suppurativa, acne conglobata, or osteomyelitis.1 Marjolin’s ulcers occur most often on the extremities and in wounds that have been present for 30 years or more.2 Clinically, they exhibit exuberant granulation tissue that spills over their well-defined margins onto the surrounding tissue. Histologic examination is consistent with squamous cell carcinoma in 95 percent of cases, although basal cell carcinoma, melanoma, and sarcomas also may be present.3 Marjolin’s ulcers have a particularly aggressive course with higher rates of recurrence and overall metastasis compared with other forms of squamous cell carcinoma. The five-year survival rate is 30 percent.4 There is limited evidence on what percentage of treatment-resistant, non-healing ulcers reveal malignancy. Thus, experts recommend biopsy (i.e., wedge-shaped biopsy of the borders or multiple punch biopsies) of any non-healing ulcer resistant to standard intervention to rule out malignancy.5–7 Once diagnosed, wide excision followed by skin grafting is recommended. Although autologous skin grafting remains the gold standard, bioengineered skin substitutes such as cultured autologous and allogeneic keratinocyte grafts, composites, cellular and acellular matrices, and living skin substitutes also have been shown to stimulate healing by acting as an occlusive dressing while releasing tissue growth factors and cytokines.8 Elective lymph node dissection has been suggested because of the high rate of metastasis.9 Finally, amputation may be recommended for recurrent disease, or when a Marjolin’s ulcer is associated with underlying osteomyelitis. The etiology for the malignant transformation is unclear. 
One suggestion is that previously traumatized skin may be more susceptible to mutation, and that fibrotic and avascular scar tissue may interfere with immune surveillance, resulting in the inability of circulating lymphocytes to reach and destroy neoplastic cells.1–9 The differential diagnosis is broad and may include any non-healing ulcer such as venous stasis ulcers, diabetic ulcers, arterial ulcers, or pyoderma gangrenosum. Venous stasis ulcers account for 70 to 80 percent of all lower extremity ulcers. They usually are located in the gaiter area between the gastrocnemius muscle and a point 2.5 cm below the ankle. Venous stasis ulcers often are shallow with irregular borders and are surrounded by indurated, hyperpigmented skin. As the etiology suggests, venous stasis ulcers are found invariably on edematous legs, which may be leaking serous fluid. Because diabetic ulcers usually are painless, they often go undetected. Usually developing on the feet, diabetic ulcers result from several factors associated with longstanding diabetes. First, peripheral neuropathy impairs the normal sensory protective mechanism of the feet, leaving them vulnerable to repeated trauma and subsequent ulcer formation. Secondly, these ulcers worsen and heal poorly as a result of microvascular and atherosclerotic derangements. Finally, diabetic ulcers have a high incidence of infection, which further slows the healing process. Arterial ulcers are the result of atherosclerosis and peripheral vascular disease. Narrowed vessels limit blood supply to the distal extremity, which results in ischemic necrosis. Arterial ulcers develop on the distal extremities and are sharply demarcated and painful with little granulation tissue. They may be accompanied by other signs of poor perfusion such as weak pulses and cool, hairless skin. Pyoderma gangrenosum also may develop in surgical scars, fistulas, and ostomy sites. However, pyoderma gangrenosum is an immune-mediated, inflammatory condition commonly associated with inflammatory bowel disease and immunodeficient states. These ulcers are painful with well-demarcated, undermined edges. They have a purulent, dusky purple base, surrounding erythema, and satellite pustules.10 The editors of AFP welcome submissions for Photo Quiz. Guidelines for preparing and submitting a Photo Quiz manuscript can be found in the Authors' Guide at http://www.aafp.org/afp/photoquizinfo. To be considered for publication, submissions must meet these guidelines. E-mail submissions to [email protected].
"Human Flourishing Program". What is it with academics giving things such Orwellian titles? Parenting How Parental Love Impacts Flourishing Later in Life Parenting practices have a profound influence on the lives of children. Posted Jun 28, 2019 Over the past several years, the Human Flourishing Program at Harvard University has pursued research on parenting practices and how these practices affect the flourishing of children as they grow and develop into adulthood. A common division of parental styles classifies them along two axes: parental warmth, on the one hand, and parental discipline, on the other. Based on whether the parenting approach is high or low on each of these two dimensions, parenting styles are then divided into four types. The parenting approach with high warmth and high discipline is sometimes called the authoritative style. In contrast, the approach with low warmth and high discipline is referred to as the authoritarian style. The approach with high warmth but low discipline is referred to as the permissive style. And, finally, the approach with low warmth and low discipline is referred to as the neglectful style. Research has fairly consistently shown that the approach of high warmth and high discipline (the authoritative style) tends to be associated with the best childhood outcomes. However, much of the research on the topic of parenting styles has examined single outcomes, one at a time. When this is done with different samples, and in different settings, it can be difficult to get a broad picture of the relative strengths and weaknesses of the different approaches. Moreover, much of the existing research is also cross-sectional in nature, meaning that all of the data is collected at once, rather than collected repeatedly over time. This can make it difficult to infer causality. For example, if parental warmth is associated with better child outcomes, it is difficult to know whether parental warmth really causes those outcomes or whether, for example, those children who are going to have good outcomes anyway are simply easier to love. One needs data collected over time, ideally on numerous well-being outcomes, to resolve these questions. This year we published two empirical studies, one in January in Social Science and Medicine, and one just last month in Nature Human Behavior. Both studies used data collected over many years. We looked at the effects of different parenting styles on numerous health and well-being outcomes, and the results, using more rigorous methodology, were largely consistent with previous findings. Children who had parents with the authoritative approach to parenting (high warmth, high discipline) fared best later in life. One of the interesting results of our two studies was that parental warmth, or love, seemed to be the dominant factor. While those who had high warmth and high discipline (authoritative style) did the best, the next best category was high warmth, low discipline (the permissive style) and they did considerably better than the group that, in contrast, had low warmth and high discipline (authoritarian style). Not surprisingly, the group that had low warmth, low discipline (neglectful style) fared poorest. However, overall, parental warmth or love seemed the most important. When parental warmth was considered on its own, it was the most important aspect of parenting we were able to identify. This conclusion was especially manifest in our Nature Human Behavior study, in which we examined multiple aspects of positive parenting. 
In that study, parental warmth in childhood (measured by satisfaction with the parent-child relationship, generally concerning love and attachment) was associated, several years later, with a 46 percent reduction in depression, a 39 percent reduction in anxiety, a 68 percent reduction in eating disorders, as well as higher levels of emotional processing and expression, and lower levels of cigarette and marijuana smoking. Other parenting practices, like family dinners, were also important, but simply not as important as parental love and affection. Likewise, in our Social Science and Medicine study, parental warmth was associated with a wide range of positive flourishing outcomes later in life. The links to some of these (such as happiness/emotional well-being, positive relations later in life, and self-acceptance) were larger than the associations with others (such as social coherence), but they were positive for almost all outcomes. The research points to the important role of love in human flourishing. While the term “love” means different things to different people, in much theological writing, it is understood as a desire for and/or commitment to the good of the other. That seeking of the other’s good, in childhood (and arguably throughout life), is very powerful. In addition to the actions that promote the good of the beloved, that experience of love affirms the person’s intrinsic value and worth. It establishes a bond. It fulfills one of the deepest human yearnings for connection with others. It is no wonder, then, that the experience of love appears to affect so many health and well-being outcomes. Love is not often considered in our medical and public health discussions as to what shapes health. But perhaps that should change as more and more research points to its profound importance. Our study is not without its limitations. While the experience of parental warmth, or love, emerged as being of central importance, the measurement of constructs like love or discipline is challenging. Measures will always be imperfect. Our measures of parental discipline do not take into account what form that discipline takes. The range of outcomes we were able to examine with the data are also limited. They do not include assessments of character. Indeed, the measurement of character is still in its infancy, and difficult to study. But we might expect parenting practices oriented towards discipline to profoundly shape character, and to shape also the child’s subsequent contribution to the lives of others in adulthood. Nevertheless, despite these limitations, what the research does very clearly establish is the importance of a child’s experience of love. Love can be experienced in different ways. Our last posting pointed to the importance of forgiveness. Forgiveness—being understood as the replacing of ill-will with good-will towards the offender—is itself a form of love. Perhaps love thus ought to be given greater attention, in the empirical sciences, in life, and in our public discourse, as to what brings about true human flourishing. The Human Flourishing Program at Harvard University aims to contribute to, bring together, and disseminate knowledge from various academic fields on topics fundamental to human flourishing. Sign up here for our monthly research e-mail, or follow us on Twitter Facebook image: LightField Studios/Shutterstock References Chen, Y. Kubzansky, L., and VanderWeele, T.J. (2019). Parental warmth and flourishing in mid-life. Social Science and Medicine, 220:65-72. 
Chen, Y., Haines, J., Charlton, B., and VanderWeele, T.J. (2019). Positive parenting improves multiple aspects of health and well-being in young adulthood. Nature Human Behavior, in press: https://doi.org/10.1038/s41562-019-0602-x.

"Human Flourishing Program". Is 1984 one of your favorite books?!

Orwell vs money&status
I have read about 800 books (counted when I studied literature), and judging someone because he/she mentioned a book is a big signal of judgemental obsession. I think the first comment was spot on. The terminology sometimes does sound weird. Does it make your life easier to jump to conclusions and labels so quickly? Orwell wrote excellent essays. Everything he wrote shows high intelligence. He would probably score high both in analytical and emotional intelligence. Sometimes I envy those who see the world through a more materialistic mind which enables simplifications that make their life easier - they just do not bother to go beyond the first impression labels.. Maybe, I shouldn't have written "materialistic mind" - maybe there is a different reason behind this... Something like: "Skripal was a scumbag." Something like that. I do not know. To me Skripal was not a scumbag. Neither was the other victim of Novichok, who actually was unemployed. They were not scumbags either.

Sorry, I didn’t mean to judge you for your book selection. I was asking because I love 1984, and it’s been one of my favorites since I read it at 12. Books often begin to lose their relevance as they age, but the themes in 1984 tend to do the opposite. How did a 1949 Orwell predict that Big Brother would be so present in my current 2019 life? That screens would influence us in such a dramatic fashion? You can get even more meta about it... Did he know it was coming, or did he create the idea, an idea that was then co-opted by big business/government? What I love the most about the book is the doublespeak and wordplay. I love the misuse of speech and how that directly correlates with so much of our present day life. “We have always been at war with Eurasia.” “We have always been at war with Eastasia.” If we say something one day, does saying the opposite tomorrow erase yesterday’s reality? It’s hard to tell sometimes. Anyway, sorry for my poorly worded first comment. I’ll try to use my words more thoughtfully in the future.

apology
So it is me who owes you an apology... :) Actually, I am from Eurasia :) I am not against the idea of EU but there is a lot of hidden agenda about the subsidies to various European states, which I think look more like investments into a greater scale of the so called business with poverty. Aldous Huxley Brave New World. Sorry, I am not a native speaker, so I always say things in a clumsy way. I like Orwell's essays more than "1984". I was not even able to read Animal Farm... Anyway, I badly interpreted that "?!" in yr first comment.

Yes, I will make a note about using ?! in the future. Question: as a European dweller, could you explain what countries people might be talking about when they refer to “western Europe”? I had never heard the term before, but it seems to be used on PT quite frequently.

books
The first two books I was able to read in English were: 1984 and The Graduate (like the film with Dustin Hoffmann). Both of those two books are written in a language that a student of English can understand. 
Anyway, there is a lovely sentence in The Graduate that is extremely relevant today: "I will say just one word to you: PLASTICS." A business partner of his father's says this sentence to Dustin Hoffmann to suggest that he should start a career in the production of plastics because that is the future source of high financial income... It is an even more sarcastic prophecy than "1984".

western Europe
Britain, France, Germany, Spain, Portugal...

Thank you! And I’ve never read or watched The Graduate, I’ll have to look into that.

thanks for a chat :)
That is a romantic film, nothing as complex as Orwell. Music: Simon and Garfunkel (not my favourite - but good music for that kind of film.)

Anytime. And sign me up for anything Simon and Garfunkel. I’ve always found “I am a Rock” to be one of the most unintentionally funny songs of all time.
https://www.psychologytoday.com/us/blog/human-flourishing/201906/how-parental-love-impacts-flourishing-later-in-life
In the first article in the series on Korean vocabulary we discussed how important vocabulary is for learning Korean (or any other language) or for getting success in Test of Proficiency in Korean (TOPIK). In the last article I listed 8 common myths about foreign language vocabulary that most of us believe. In this article, I am going to make a list of some methods that can help you improve your Korean Vocabulary. I have been using some of the vocabulary learning methods listed here myself and other methods I have either researched or got to know from other Korean learners. But before going to the list I would like to make it clear that this list is not exhaustive and these are not necessarily the best ways to learn vocabulary for everyone. All of us are different and have our own ways of learning things. The method that worked for me will not necessarily work for you. Analyze all of them and see what suits you best. Word lists and flash cards are the most widely used Vocabulary learning methods. Either make a two column table with Korean words in one column and their meanings in the other (you can also add more columns like example sentence etc.) or make flash cards with Korean words on one side and their meanings on the other. Review these words regularly (at least once a week) and mark the words that you still haven’t learned completely. While reviewing give special attention to these marked (difficult) words. With time these difficult words will also go to your permanent memory and then you can unmark them. Even if you think you have completely learned a list, it’s better to review it once in a while. Make a separate list of difficult words you often forget and review it more frequently. You can also consider making your own personal dictionary – a dictionary of words, important for you. Always keep a dictionary handy and look up for words you don’t know. Try to get rid of the habit of finding an alternative way of expressing yourself when you don’t know (or can’t recall) the appropriate word. Looking up dictionary may be a boring task but it really helps in long term. I prefer online dictionaries of Naver and Daum over paper and electronic dictionaries as these online dictionaries give you much more information. Looking up dictionary is even better than asking a teacher or a Korean because when you see dictionary you don’t learn only one word but several related words and phrases as well. Words are rarely used alone. They are used in a context with other words, phrases and grammar patterns. Just knowing the meaning of a word is not enough; you need to know ‘who’, ‘where’, ‘when’ and ‘how’ of the word. Otherwise there are good chances that you will make a blunder some time. There are words that are used with particular people, at particular time and situations and in a particular way. Using any of these aspects of the word in a wrong way may create an embarrassing situation for you. So when you learn a word try to get as much information as you can about the context in which it’s used. When you learn a word, try to write/learn other words/phrases that are frequently used with it. For example, If you look up dictionary for the word ‘훨씬’ you will find the meaning as ‘(very) much, far, a lot’. But you can’t use ‘훨씬’ for every ‘far’, ‘much’, or ‘a lot’. if you read example entries you will find that this word is most frequently used with ‘보다’, ‘더’ and adjectives to show comparisons. Also learn various possible forms of a word. For example – 예쁘다 – 예쁩니다, 예뻐요, 예쁘죠 etc. 
This type of additional information will help you learn the appropriate use of the words.

[Image: a knuckle mnemonic for the number of days in each month of the Gregorian calendar; each projecting knuckle represents a 31-day month.]

Make associations and connections between the new words you learn and words/things you already know. For example, some word may sound like something different/funny/interesting in your mother tongue. Or, some words might remind you of something. Funny and weird associations are OK; in fact, they are better. You can also use mnemonics (memory tricks used to remember things) to improve your vocabulary retention. Connect words with images, people or events around you, convert them into rhymes/poems, etc.

We need frequent exposure to a word, either by listening, speaking, reading or writing, to learn it permanently. Research says words need 20-30 repetitions/reviews to move to our permanent memory. Use the new words you learn as much as you can, even if you are doubtful about their correct use. Just play with the words; use them in different combinations. Take risks. Mistakes and blunders are okay; they make us learn things. And in fact the funny mistakes are the most interesting thing about learning a foreign language. Aren’t they? Using foreign language words in sentences in one’s mother tongue can also be a good technique to learn and practice words; and it’s fun. For example, say you learned the word ‘예쁘다 (Beautiful)’. You can practice it with sentences like ‘These flowers are really 예쁘다’. It might sound weird, but believe me, it can help you memorize some really difficult words.

Reading widely is the BEST way to enhance vocabulary in any language. Reading exposes us to a wide range of vocabulary in various contexts. The important thing here is to select reading material that interests you and suits your level; otherwise you will give up very soon. No one likes to look up a dictionary 10 times to understand a single sentence. There is plenty of reading material – newspapers, comics, stories, blogs – available in Korean on the internet. Read what you like to read the most and note down the new and interesting words and phrases you come across. It’s also a good method to read the English (or your mother tongue) version of a big news story first and then read the Korean version. Since you already know the news story, you just have to focus on the language used to convey it.

This is the most important one. According to research, most successful learners develop their own methods to learn vocabulary. Generally this method is a combination of several methods. By using all the methods for some time, you will be able to figure out which ones are working for you. Then you can take the good points from all the working methods and develop your own custom method that is the best for you. What method do you use? Share in the comments.

I like the article. However, poor spelling may undermine people’s confidence in what they read. “embarrassing” should, I think, be “embarassing”.

I have corrected the second word but i think ’embarrassing’ is correct.. isn’t it?

it’s embarrassing you are correct TOPIK…some people should take their own advice before they try to correct someone.

Thanks very much for this post. I’m sure the method goes beyond Korean.

tnk u so much for this excellent artical!

Links seem to be reversed: “Reasons to improve…” links to “how to” article, and v.v. Thanks for great articles!
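To make the word-list and flash-card review method described earlier in this article concrete, here is a minimal Python sketch of the idea (the sample Korean entries and the review scheme are illustrative assumptions of mine, not part of the original article): it keeps a small word list, marks the words you get wrong as "difficult", and draws those words more often on the next review, much as the article suggests.

import random

# Tiny sample word list; in practice you would load your own Korean-English
# pairs (these example entries are illustrative, not taken from the article).
WORDS = {
    "예쁘다": "to be pretty / beautiful",
    "훨씬": "(by) far, much more - usually used with 보다/더 in comparisons",
    "학교": "school",
}

difficult = set()  # words you keep forgetting; these get reviewed more often

def review(rounds=5):
    # Difficult words appear in the pool twice, so they come up more often.
    pool = list(WORDS) + list(difficult)
    for _ in range(rounds):
        word = random.choice(pool)
        input(f"What does '{word}' mean? (press Enter to see the answer) ")
        print("->", WORDS[word])
        if input("Did you know it? [y/n] ").strip().lower() == "n":
            difficult.add(word)       # mark for more frequent review
        else:
            difficult.discard(word)   # unmark once it has sunk in

if __name__ == "__main__":
    review()

Running something like this a few times a week, and adding words as you meet them, reproduces the "mark the difficult words and review them more frequently" loop described above in a few lines of code.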
https://www.topikguide.com/how-to-improve-your-korean-vocabulary/
imports of natural gas and exports of crude oil, natural gas liquids (NGLs), natural gas, refined petroleum products and electricity. For oil and gas exports, the Board’s role is to evaluate whether the oil and natural gas proposed to be exported is surplus to reasonably foreseeable Canadian requirements, having regard to the trends in the discovery of oil or gas in Canada. If a party wishes to rely on material from this report in any regulatory proceeding before the Board, it may submit the material, just as it may submit any public document. Under these circumstances, the submitting party in effect adopts the material and could be required to answer questions pertaining to its content. While preparing this report, in addition to conducting its own quantitative analysis, the NEB held a series of informal meetings and discussions with various industry and government stakeholders. The NEB appreciates the information and comments provided and would like to thank all participants for their time and expertise. This report does not provide an indication about whether any application will be approved or not. The Board will decide on specific applications based on the material in evidence before it at that time. This report provides an outlook of Canadian natural gas deliverabilityFootnote 1 from the beginning of 2016 to the end of 2018. The outlook presents three distinct cases, a Higher Price Case, Mid-Range Price Case, and a Lower Price Case, each of which are based on a set of assumptions. Since mid-2014, lower commodity prices have effected Canadian producers via reduced revenues, constrained cash flows and significantly reduced gas-targeted drilling. Major cuts to capital expenditures were made in 2015. Producers are wrestling with spending within cash flows and having to remain within bank-imposed debt limits, while continuing drilling operations to help minimize declines in reserves and production. Canadian natural gas deliverability is expected to decline in the near-term as reduced drilling activity and continued U.S competition further challenge Canadian output. The lower Canadian dollar has resulted in additional complications, although providing a modest boost to revenues because exports to U.S. markets are paid in U.S. currency, it also creates challenges because some equipment and required supplies are purchased from the U.S., and paid for in U.S. dollars. Despite this challenging environment North American producers may continue to find deliverability gains on a per-well basis through high-gradingFootnote 2. It is expected that gas prices and Canadian drilling activity in 2016 will remain suppressed because the warmer-than-average winter softened demand and left ample storage volumes that require less production to refill. Multiple pipeline projects flowing gas out of the U.S. Appalachian Basin are scheduled to be operational by 2017-2018 and are expected to further challenge western Canadian gas in key markets. The Canadian liquefied natural gas (LNG) picture remains ambiguous. A 2016-2017 final investment decision (FID) for one or more Canadian LNG export projects could accelerate pre-positioning by producers and result in additional Canadian deliverability over the projection period. 
In the Mid-Range Price Case, the Henry Hub price of natural gas would initially fall from $2.70/MMBtu [Footnote 3] in 2015 to $2.50/MMBtu in 2016, climbing thereafter to $3.00/MMBtu by 2018, while Canadian natural gas deliverability declines slightly from 427 10⁶ m³/d (15.1 Bcf/d) in 2015 to 412 10⁶ m³/d (14.5 Bcf/d) in 2018. The Higher Price Case would see natural gas prices at $4.00/MMBtu by 2018, resulting in more drilling and Canadian deliverability increasing to 434 10⁶ m³/d (15.3 Bcf/d) by 2018. In the Lower Price Case, prices would remain at or below $2.50/MMBtu, and deliverability would decline to 393 10⁶ m³/d (13.9 Bcf/d) by 2018. A comparison of the price assumptions for each case can be found in Figure 1.1. The Analysis and Outlook section of this report contains key assumptions for each price case. The Appendices contain a detailed description of the input assumptions used in projecting deliverability.

North American producers continue to struggle with lower commodity prices. This is resulting in reduced revenues, constrained cash flows, less gas-targeted drilling, and a reduction of oil-derived natural gas production. Reserve write-downs [Footnote 4] and reduced credit ratings have made it more difficult for some producers to access capital. Consequently, producers are decreasing drilling activity, reducing staff, seeking price concessions from suppliers and pursuing efficiency improvements in order to reduce costs. While the devaluation of the Canadian dollar relative to the U.S. dollar benefits Canadian producers when export sales are paid in U.S. currency, it also disadvantages Canadian producers when purchasing equipment and supplies in U.S. dollars.

Canada produced an average of 427 10⁶ m³/d (15.1 Bcf/d) of marketable [Footnote 5] natural gas in 2015, up 2.6 per cent from 2014, remaining well below the 482 10⁶ m³/d (17 Bcf/d) peak in 2005. Western Canada is the primary natural gas producing region, contributing 99 per cent of total Canadian natural gas production in 2015. The remainder of Canadian natural gas production is supplied by Nova Scotia, Ontario, and New Brunswick. Overall natural gas demand in Canada was up slightly in 2015 at about 269 10⁶ m³/d (9.5 Bcf/d) and is expected to see ongoing modest growth as lower gas prices encourage industrial consumption. Rising oil sands production is fueled by natural gas; including gas consumed for cogeneration, oil sands gas use is now over 88 10⁶ m³/d (3.1 Bcf/d). Gradual growth in Canadian electricity demand is being met by a combination of increases in renewable generating capacity (wind, solar and hydro) and natural gas. Imports of U.S. Marcellus and Utica gas will continue to challenge Canadian gas for markets in central Canada.

Canada’s natural gas exports to the U.S. remained flat in 2015 at about 211 10⁶ m³/d (7.4 Bcf/d). Imports of U.S. gas declined moderately in 2015 due to an increase in firm service contracting on Canadian pipelines, resulting in Canadian net exports of 158 10⁶ m³/d (5.6 Bcf/d) in 2015. This represented about a five per cent increase in net exports in 2015, but remained well below the 2007 peak in Canadian exports of 294 10⁶ m³/d (10.4 Bcf/d). Canadian natural gas exports to the U.S. Midwest continued to decline in 2015 as pipeline reversals and expansions flow more U.S. Marcellus and Utica gas into that market. Part of this decline was made up by increased Canadian gas exports to the western U.S. as higher temperatures increased demand for gas-fired power generation to meet air conditioning demand. U.S. 
natural gas production has increased steadily since 2005 and averaged 2 103 106m³/d (74.2 Bcf/d)Footnote 6 in 2015. This represents an increase of 5.3 per cent year-over-year and a 50 per cent increase over production of 1 401 106m³/d (49.5 Bcf/d) in 2005. Natural gas produced in the U.S. is increasingly derived from tight and shale formations and is serving a growing share of U.S. demand, in turn reducing the need for Canadian natural gas imports. The U.S. expects only modest growth in natural gas demand. Only the power generation sector has shown robust demand as lower gas prices have allowed gas-fired power plants to maintain higher utilization rates beyond the typical summer period of air-conditioning loads. In addition, a number of coal plant retirements and increases in requirements for gas-fired generation as a backup for intermittent wind and solar capacity are adding to gas demand. In 2015, U.S. natural gas demand was 2 129 106m³/d (75.2 Bcf/d), an increase of three per cent over the prior year. The U.S. shipped its first LNG cargo export in February 2016. By the end of 2018, the U.S. is expected to have operationalFootnote 7 liquefaction capacity of 241 106m³/d (8.5 Bcf/d) which is equivalent to about 11 per cent of 2015 U.S. natural gas production. Mexico is becoming an increasingly important outlet for excess U.S. natural gas supply. The amount of gas moving southward to meet growing Mexican demand represents volumes not available to compete with Canadian gas in other regions of the U.S. and Canadian market. Mexican natural gas production decreased slightly between 2006 and 2015 to about 114 106m³/d (4 Bcf/d)Footnote 8. Although Mexico may turn out to have sizeable resources of shale gas, its development lags behind the U.S. and Canada. Shale gas production is unlikely to expand rapidly in the short-term. Mexican demand for natural gas is expected to increase significantly in the mid to long term due to the planned construction of dozens of natural gas-fired power plants. In 2015 Mexico imported 82 106m³/d (2.9 Bcf/d) of natural gas from the U.S.Footnote 9. It is expected that Mexico will continue to rely on imports to meet incremental demand for natural gas. As additional pipeline infrastructure is added, imports from the U.S. are expected to satisfy an increasing portion of Mexican demand and potentially displace some imports of higher cost LNG from other countries. Mexican natural gas imports from the U.S. are expected to increase to 142 106m³/d (5 Bcf/d) by 2020Footnote 10. The North American natural gas market continues to be oversupplied. Storage inventories in the U.S. began 2016 above historical averages due to a warmer than usual winter. Strong U.S. gas production, ample inventories and reduced heating demand are expected to keep the market amply supplied and could keep price soft for most of 2016. Cyclical imbalances of supply and demand are typical of the North American natural gas market. Demand often varies because of weather, changes to economic growth, and infrastructure constraints. A typical cycle occurs as follows: during periods of increased demand, prices increase to ration supply and direct it toward the markets that value it most. Higher prices also provide incentives to develop and produce the next most costly natural gas resources which can cause deliverability to exceed demand, subsequently depressing prices. Lower prices discourage production of high cost supply sources but at the same time also foster demand. 
As demand grows, prices begin to rise again and the cycle repeats itself. Natural gas prices have been on a downward trend since early 2014 and oil prices dropped sharply in mid-2014 which subsequently reduced demand for drilling rigs and well-servicing equipment across the oil and gas sector. Producers and service companies have since lowered costs, improved operational efficiencies, and achieved higher levels of production per-well by high-grading. Producers significantly reduced costs in 2015, some reporting cost reductions between 25 and 50 per cent. Although producers will continue to look for further savings in 2016, it is unlikely future cost reductions will be of the same magnitude as those achieved in prior years as the majority of cost savings have likely already been obtained. Modern drilling technologies, such as multi-stage hydraulic fracturing and multi-well pads, are now used extensively, improving the size and economics of the Canadian and U.S. natural gas resource base while boosting deliverability. It may take years for new major markets to develop for natural gas. Natural gas has largely displaced competing fuels in traditional space-heating markets in Canada and the U.S. already. Proposed LNG export facilities represent a large potential increase in gas demand. Long lead times to obtain approvals, establishing overseas markets, and the construction of facilities are factors that slow down the development of these projects. Currently, none of the proposed Canadian LNG projects with approved export licenses have announced a FID, although one project has issued a conditional FID. Other potential sources of major demand growth could require years or decades to develop to meaningful scale. Examples include growth of the North American petrochemical industry, additional upgrading of bitumen in Alberta, and widespread use of compressed natural gas or LNG for transportation. The U.S. has a large inventory of wells that have been drilled but not completed. This allows producers to avoid selling into the market at lower prices, while taking advantage of lower drilling and service costs available because of reduced activity. These wells can be completed later when prices rise, which could rapidly increase supply, stifling large price increases. Many small and mid-sized Canadian oil and gas producers could have difficulty accessing capital, which not only challenges drilling operations, but also increases the chance of bankruptcy or acquisition of smaller producers by larger, more financially stable companies. In lieu of debt financing, producers and service companies in the U.S. and Canada are utilizing private equityFootnote 11 investment. This may provide the capital required by smaller and mid-sized producers to continue operations. Currently, there is more private equity investment activity in the U.S. than in Canada. As commodity prices remain depressed, an increase in merger and acquisitions (M&A) is expected. In order to obtain the best deal possible, investors typically wait to see evidence of prices bottoming out before investing. Anticipation of even lower prices partially explains why Canada has yet to see an increase in M&A activity. In 2015, year-over-year M&A activity in the Canadian oil and gas sector fell by almost half, from $41 billion in 2014 to $21 billionFootnote 12. The extraction of NGLsFootnote 13 (which are priced in relation to crude oil) from natural gas production represents an additional source of producer revenue. 
As natural gas prices declined after 2008 and crude oil prices continued to rise, the increasing value of the NGLs from some natural gas wells could exceed the value of the natural gas produced. This promoted NGL-targeted drilling and resulted in additional natural gas deliverability based on the value of the NGLs rather than the natural gas. Eventually rising NGL deliverability began creating excess supplies of ethane, propane, and butane in Canada and the U.S. Excess NGL volumes coupled with declining crude oil prices since 2014 have decreased the supplemental revenues generated from NGL-targeted activity and slowed the development of this source of natural gas deliverability. Heavier NGLs such as condensate have higher value in western Canada because they are used to dilute bitumen for pipeline transport. It is possible that condensate-rich gas plays could see sustained drilling activity in western Canada. Shale gas resources such as the Marcellus and Utica are close to markets in central Canada, the U.S. northeast, and the U.S. Midwest. Gas from this area has significantly displaced Canadian exports to the North East U.S. market because proximity presents a cost advantage relative to shipping in Western Canada Sedimentary Basin (WCSB) gas. By 2018, newly constructed pipelines in the Marcellus and Utica region could add additional 88 106m³/d (3.1 Bcf/d) import capacity into Canadian markets and 156 106m³/d (5.5 Bcf/d) into the U.S. Midwest. This additional capacity could displace some of the supply provided by the WCSB in these markets. Since July 2015 production from the Marcellus shale has been slowly declining as companies wait for higher prices and new pipeline infrastructure. Production from the Utica however, is increasing and has largely offset Marcellus production declines, continuing to challenge Canadian market share. Producers are testing gas resources in western Canada that could support proposed LNG exports, potentially increasing drilling in the area. A FID to proceed with a Canadian LNG export project in 2016-2018 could accelerate this activity within the time period assessed in this report. The Nova Scotia Deep Panuke project was expected to offset declining output from the Sable Offshore Energy Project. Deep Panuke is now operating seasonally, producing in winter when demand is greater. Increasing amounts of water are being produced with natural gas at Deep Panuke, and this could shorten the project’s lifetime. The Alberta Government recently reviewed and updated its oil and gas royalties program. The new royalty framework, which comes into effect for wells drilled in 2017, favors efficiency and may create benefits for some producers. The new royalty framework recommends that existing royalties remain in effect for 10 years on investments already made, and royalty changes should only be implemented on new wellsFootnote 14. The lower price of natural gas alongside a change in environmental regulations is encouraging switch from coal to gas for power generation in the U.S. Coal-to-gas switching for power generation would create additional demand for natural gas. To date, the majority of coal plant retirements have been aging units, not heavily utilized. In the U.S., the extent to which the displacement of modern efficient coal plants equipped with emissions controls by gas plants would depend on the price competitiveness of gas compared to coal. 
The timing of further displacement in power generation will depend on mandated timelines in government legislation, demand for power generation, and relative prices of gas and coal. Canadian natural gas drilling activity decreased significantly in 2015 (Table 3.1) due to lower prices, major slashes to capital expenditures, and a difficult economic environment. Drilling costs are expected to continue declining slightly throughout 2016, as producers find remaining efficiencies. Increased deliverability from the U.S. continues to depress gas prices, rendering some western Canadian natural gas prospects uneconomic to pursue. Three price cases for Canadian natural gas deliverability are examined in this report. These cases differ primarily in terms of Canadian and U.S. natural gas prices and the rate at which Canadian gas is backed out of key markets by lower cost U.S. supply. The Appendices contain a detailed description of the assumptions used for projecting deliverability. A summary of the key assumptions used in the cases and their respective deliverability results are shown in Table 3.1. Onshore production from New Brunswick and Ontario, which is declining as minimal future drilling activity is expected over the projection period. Nova Scotia production from the offshore Sable Island project and Deep Panuke. Shale gas potential exists in Quebec, New Brunswick, and Nova Scotia, however, provincial policies currently prohibit hydraulic fracturing which is required for shale gas development. It is assumed these policies do not change over the projection period. Natural gas production from the Mackenzie Delta and elsewhere along the Mackenzie Corridor in the Northwest Territories ceased in 2015 on account of lower prices rendering production uneconomic. The three price cases cover a range of market conditions: In the Mid-Range Price Case, Canadian gas struggles to maintain market share as low cost U.S. natural gas sources back Canadian supply out of central Canada and the U.S. Midwest market. Deliverability remains relatively flat in 2016 and declines through 2018. By the end of 2018 deliverability declines as newly drilled wells are unable to replace declining production from older wells. In the Higher Price Case, U.S. production from the Marcellus and Utica region is needed to support increasing Mexican exports, increasing U.S. LNG exports, additional gas-fired power generation and petrochemical industry requirements, and to offset declines in U.S. natural gas produced from oil wells. These factors increase the opportunity for Canadian gas to flow into key markets. Strong economic growth and U.S. LNG projects finishing ahead of schedule contribute to increased demand over the period. As a result, Canadian deliverability rises throughout the projection period. In the Lower Price Case, lower cost Marcellus and Utica shale gas resources further increase their market share in central Canada and the U.S. Midwest, facilitated by new pipeline capacity. Displaced U.S. Rockies supply creates challenges for Canadian gas to access markets on the U.S. West Coast. U.S. LNG exports increase more gradually resulting in increased U.S. gas surplus. Consequently, western Canadian natural gas is further challenged and squeezed out of key markets. Lower prices and reduced market opportunities result in steadily decreasing deliverability over the projection period. A comparison of the three Canadian natural gas deliverability outlooks to 2018 is shown in Figure 3.1. 
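A brief note on the units used throughout this outlook: deliverability is quoted both in millions of cubic metres per day (10⁶ m³/d) and billions of cubic feet per day (Bcf/d). The two are related by the standard conversion 1 m³ ≈ 35.315 ft³, so 1 Bcf ≈ 28.3 × 10⁶ m³. The short sketch below (the conversion factor is a general fact; the figures are taken from the text of this report) checks a few of the paired values quoted above.

# Check the 10^6 m3/d <-> Bcf/d pairs quoted in this outlook.
M3_PER_BCF = 1e9 / 35.3147  # cubic metres per billion cubic feet (~28.3 million)

pairs = [            # (10^6 m3/d, Bcf/d) as quoted in the report text
    (427, 15.1),     # 2015 Canadian marketable gas production
    (412, 14.5),     # Mid-Range Price Case, 2018
    (434, 15.3),     # Higher Price Case, 2018
    (393, 13.9),     # Lower Price Case, 2018
]

for million_m3, quoted_bcf in pairs:
    computed_bcf = million_m3 * 1e6 / M3_PER_BCF
    print(f"{million_m3} x 10^6 m3/d = {computed_bcf:.1f} Bcf/d (report quotes {quoted_bcf})")

Each computed value rounds to the Bcf/d figure given in the text, confirming that the two sets of units describe the same volumes.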
The levels of drilling activity that provide these deliverability outcomes result from capital investment assumptions and estimated drilling costs. Comparisons of natural gas drilling activity in the three cases, in terms of drill days and gas-intent wells drilled, are shown in Figure 3.2 and Figure 3.3, respectively. Throughout the projection period the Mid-Range Price Case assumes moderate economic growth, weather conditions in line with seasonal averages, continued modest growth in Canadian and U.S. natural gas demand, and on-time completion of U.S. LNG facilities and pipeline infrastructure. Following the warmer-than-average winter of 2015-2016, the North American market remains oversupplied in 2016, both in terms of NGLs and marketable natural gas, resulting in slimmer margins for producers. Following major cuts in 2015, capital expenditures increase slightly over the projection period as pricing conditions improve; however, U.S. natural gas continues to back Canadian supplies out of markets in central Canada and the U.S. Midwest. The outlook for Canadian LNG remains ambiguous, with no FIDs made before 2017. In spite of additional drilling, Canadian deliverability declines over the projection, as newly drilled wells are unable to fully replace declining production from older wells. In the Mid-Range Price Case, Canadian natural gas deliverability continues to be well above Canadian demand (Footnote 15). Canadian deliverability remains relatively flat in 2016 and falls throughout 2017 and 2018 as declines from older wells outpace drilling and production from new wells. Tight gas activity increases over the projection, with 700 tight gas wells drilled in western Canada in 2018, including 411 in the Montney tight gas play. The Duvernay Shale play continues to see the most Canadian shale gas activity, with 35 wells drilled in 2018. A summary of the Mid-Range Price Case is available in Table 3.2 (values are annual averages of NEB-reported provincial production, where available). Canadian and U.S. gas markets have been well supplied at historically moderate prices for the past few years. A warmer-than-average winter and elevated storage levels going into 2016 keep prices depressed in the short term. Markets could tighten because of reduced capital expenditures, drilling reductions, rising natural gas demand from improved integration of the Mexican market, or U.S. LNG projects coming online ahead of schedule. The Higher Price Case assumes a larger recovery for Canadian natural gas deliverability because of higher gas demand from a combination of factors, including: stronger economic growth in the U.S. and Canada; weather that is colder in winter and warmer in summer than average, increasing space heating and cooling demand; increased Mexican demand that draws more U.S. gas southward; and U.S. LNG facilities being completed ahead of schedule and heavily utilized. Increased demand boosts prices and results in less displacement of Canadian gas by U.S. supplies. Despite rising natural gas prices, it is assumed that power generators prefer natural gas over coal in specific markets, potentially to meet stricter environmental regulations or to better match variations in the electricity demand profile. It is also assumed that the U.S. petrochemical industry completes a major expansion and increases its use of natural gas and NGLs. Accordingly, Canadian producers are able to obtain capital more easily while continuing to focus drilling efforts on highly productive prospects.
A FID in 2016-2017 to proceed with a Canadian LNG export project would accelerate pre-positioning by producers and result in additional Canadian deliverability over the projection period. Canadian natural gas deliverability grows continuously over the projection in the Higher Price Case, increasing from 427 10⁶ m³/d (15.1 Bcf/d) in 2015 to 434 10⁶ m³/d (15.3 Bcf/d) by 2018. Tight gas production is still the primary source of new production, growing from 221 10⁶ m³/d (7.8 Bcf/d) in 2015 to 253 10⁶ m³/d (8.9 Bcf/d) in 2018. A summary of the Higher Price Case is available in Table 3.3. Higher prices, increased demand, and improved competitiveness of Canadian gas relative to the U.S. keep deliverability increasing over the projection period. Capital expenditures increase steadily and the supply overhang experienced in the North American market over the past few years diminishes slightly, as harsh weather and U.S. LNG facilities finishing ahead of schedule increase demand and drilling takes place to meet it. In the Lower Price Case, demand for Canadian and U.S. natural gas is assumed to decrease because of warmer-than-average winters and cooler-than-average summers, which reduce space heating and cooling demand, coupled with more modest economic growth. Other factors include less growth in U.S. exports to Mexico due to slower Mexican demand growth and higher Mexican LNG imports, U.S. LNG facilities not being utilized to maximum capacity, and strong production growth out of the Marcellus and Utica, which further displaces Canadian supply. Lower prices reduce revenues, resulting in less capital dedicated to drilling. Canadian producers continue to experience difficulty obtaining debt financing, while private equity investment occurs almost exclusively in the U.S. Canadian natural gas deliverability nevertheless remains more than adequate to meet domestic demand. The Lower Price Case also assumes that no FIDs for Canadian LNG projects are made during the 2016-2018 period. Canadian natural gas deliverability declines in 2016 to 420 10⁶ m³/d (14.8 Bcf/d) and falls significantly thereafter, reaching 393 10⁶ m³/d (13.9 Bcf/d) by 2018. Lower natural gas prices further reduce investment in the sector. A summary of the Lower Price Case is available in Table 3.4. Canadian natural gas consumers would benefit from lower natural gas prices in the short term. This case shows the greatest decline in natural gas deliverability, reflecting intensified competition from U.S. sources of natural gas as well as a significant reduction in drilling and other gas-related service activity. The key difference from the previous deliverability projection, Short-term Canadian Natural Gas Deliverability Outlook 2015-2017 (Footnote 16), has been the announcement of major cuts to capital expenditures because of sustained lower commodity prices. Drilling activity in 2015 was significantly lower than the previous year as a result. The warmer-than-average winter of 2015-2016 also reduced natural gas heating demand, keeping storage levels above average going into spring and keeping prices soft. Commodity prices have been lower for longer than was assumed in the 2015-2017 projection. Producers are adjusting by slashing capital expenditures and operating within available cash flows. In addition to tighter capital constraints, reduced producer creditworthiness has increased the difficulty of obtaining capital.
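The deliverability figures in this section are quoted in millions of cubic metres per day (10⁶ m³/d) with Bcf/d equivalents in parentheses. A quick check of the conversion, using the standard factor of roughly 28.3 10⁶ m³ per Bcf, reproduces the bracketed values:

```python
# Convert the deliverability figures quoted above from 10^6 m3/d to Bcf/d.
# 1 Bcf is approximately 28.32 x 10^6 m3 (standard conversion factor).

MCM_PER_BCF = 28.32  # 10^6 m3 per Bcf

def to_bcf_per_day(mcm_per_day):
    return mcm_per_day / MCM_PER_BCF

for mcm in (427, 434, 420, 393):
    print(f"{mcm} 10^6 m3/d ≈ {to_bcf_per_day(mcm):.1f} Bcf/d")
# -> 15.1, 15.3, 14.8, 13.9 Bcf/d, matching the bracketed values in the text
```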
Lower NGL prices due to oversupply are expected to reduce the amount of drilling for liquids-rich natural gas, while low oil prices are expected to reduce oil drilling. Altogether, this has significantly reduced demand in the service sector. Throughout 2015 producers worked with service companies to lower service costs and improve capital efficiency on a per-well basis. Although further improvements are possible, it is unlikely they will be of the same magnitude as in 2014-2015. The Alberta Reference Price in 2015 was $2.57/GJ, below the $2.85/GJ projected in the 2015-2017 Mid-Range Price Case and well below the $3.00/GJ projected in the 2015-2017 Higher Price Case. However, actual production averaged 427 10⁶ m³/d (15.1 Bcf/d) in 2015, which was near the Higher Price Case projection of 429 10⁶ m³/d (15.1 Bcf/d). Actual production was higher than anticipated in the 2015-2017 Mid-Range Price Case projection due to improved drilling efficiency and improved initial production from some newly drilled wells. Several factors could influence Canadian deliverability over the projection period:
- The development of a Canadian LNG export market. Canada's LNG future remains uncertain. A FID could increase Canadian natural gas deliverability, though the construction of facilities would occur only beyond the projection period. It is likely that a significant portion of natural gas exported as LNG will be produced from corporate reserves devoted to the project. Prior to LNG export project completion, these gas resources will need to be proven by additional drilling and testing, and the resulting production would be sold into the North American market.
- The price spreads between natural gas, oil, and NGLs. The developing NGL glut and subsequent decrease in NGL prices, along with lower oil prices, may result in reduced NGL- and oil-targeted drilling, which produces natural gas as a byproduct. It is possible that reduced gas production from these sources would help to balance markets.
- Coordinated production efforts as a result of acquisitions and the consolidation of smaller North American producers by major companies. Moreover, economies of scale could be achieved by integrating the supply chains of major companies, further reducing costs.
- The rate at which Canadian natural gas is displaced from markets in central Canada and the U.S. Additional pipeline capacity from the Marcellus and Utica to the U.S. Midwest will be a key factor affecting markets that have historically taken Canadian natural gas.
- The potential for increased future deliverability from the Montney despite lower gas prices. NGL-rich gas from the Montney is some of the lowest cost gas in North America and can be competitive with Marcellus gas in certain markets, depending on relative transportation costs and foreign exchange rates.
- Improved economics of North American natural gas production. Technological advancements, efficiency gains, and improved data analytics in drilling and hydraulic fracturing operations have improved the production capacity of North American natural gas. Inputs including labour and materials have seen cost rollbacks in response to lower activity levels. Depending on the individual producer, improvements in these economic factors may contribute to increased deliverability.
- The development of oil sands. Natural gas is used as a major fuel source to provide energy for Canadian oil sands projects. Oil sands projects under construction and scheduled to begin production between 2016 and 2018 are generally considered sufficiently advanced to be completed despite lower oil prices. Projects in early stages of planning or development may be postponed until global oil markets become more supportive.
- The pace of coal-to-gas switching for electricity generation in key markets of Canada and the U.S. This has the potential to increase demand for WCSB natural gas and subsequently increase Canadian deliverability.
Notes:
- Unless otherwise specified, North American natural gas prices are quoted at Henry Hub, given in $US/MMBtu and rounded to the nearest $0.05. Canadian natural gas prices are quoted as the Alberta Gas Reference Price and are listed in $C/GJ.
- Deliverability is the estimated amount of gas supply available from a given area based on historical production and individual well declines, as well as projected activity. Gas production may be less than deliverability due to a number of factors, such as weather-related supply interruptions and shut-in production due to economic or strategic considerations or insufficient demand.
- When the amount of investment capital available to industry tightens, producers and service companies attempt to reduce costs while focusing their drilling efforts on the most economic prospects, commonly referred to as "high-grading".
- A write-down is a reduction in the estimated or nominal value of an asset.
- Marketable (sales) gas is gas that has been processed to remove impurities and NGLs and meets specifications for use as an industrial, commercial, or domestic fuel.
- EIA estimate of U.S. Lower 48 dry natural gas production. EIA Natural Gas Weekly Update, April 15, 2015.
- Private equity investment generally refers to capital invested by individuals or funds into private (non-publicly traded) companies, or into publicly traded companies with the intention of taking them private.
- NGLs are liquid hydrocarbons including ethane, propane, butanes, and pentanes plus. Natural gas containing commercial amounts of NGLs is known as NGL-rich, liquids-rich, or wet gas. Dry natural gas contains little or no NGLs.
- Projections of Canadian demand for natural gas are available in Appendix E.
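The note on "high-grading" above describes producers concentrating a limited budget on the most economic prospects. A minimal sketch of that idea, using entirely made-up well costs and breakeven prices:

```python
# Illustrative "high-grading": with a tight capital budget, drill the prospects
# with the lowest breakeven price first. All numbers are invented for illustration.

prospects = [
    {"name": "A", "well_cost": 5.0, "breakeven": 1.9},   # cost in $ millions, breakeven in $/GJ
    {"name": "B", "well_cost": 4.0, "breakeven": 2.4},
    {"name": "C", "well_cost": 6.5, "breakeven": 2.1},
    {"name": "D", "well_cost": 3.5, "breakeven": 3.0},
]

def high_grade(prospects, budget):
    """Select prospects in order of lowest breakeven until the budget is exhausted."""
    chosen = []
    for p in sorted(prospects, key=lambda p: p["breakeven"]):
        if p["well_cost"] <= budget:
            chosen.append(p["name"])
            budget -= p["well_cost"]
    return chosen

print(high_grade(prospects, budget=12.0))  # ['A', 'C']: lowest-breakeven wells fill the budget
```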
https://www.neb-one.gc.ca/nrg/sttstc/ntrlgs/rprt/2016-2018ntrlgsdlvrblty/index-eng.html
Politics of Installation The field of art is today frequently equated with the art market, and the artwork is primarily identified as a commodity. That art functions in the context of the art market, and every work of art is a commodity, is beyond doubt; yet art is also made and exhibited for those who do not want to be art collectors, and it is in fact these people who constitute the majority of the art public. The typical exhibition visitor rarely views the work on display as a commodity. At the same time, the number of large-scale exhibitions—biennales, triennales, documentas, manifestas—is constantly growing. In spite of the vast amounts of money and energy invested in these exhibitions, they do not exist primarily for art buyers, but for the public—for an anonymous visitor who will perhaps never buy an artwork. Likewise, art fairs, while ostensibly existing to serve art buyers, are now increasingly transformed into public events, attracting a population with little interest in buying art, or without the financial ability to do so. The art system is thus on its way to becoming part of the very mass culture that it has for so long sought to observe and analyze from a distance. Art is becoming a part of mass culture, not as a source of individual works to be traded on the art market, but as an exhibition practice, combined with architecture, design, and fashion—just as it was envisaged by the pioneering minds of the avant-garde, by the artists of the Bauhaus, the Vkhutemas, and others as early as the 1920s. Thus, contemporary art can be understood primarily as an exhibition practice. This means, among other things, that it is becoming increasingly difficult today to differentiate between two main figures of the contemporary art world: the artist and the curator. The traditional division of labor within the art system was clear. Artworks were to be produced by artists and then selected and exhibited by curators. But, at least since Duchamp, this division of labor has collapsed. Today, there is no longer any “ontological” difference between making art and displaying art. In the context of contemporary art, to make art is to show things as art. So the question arises: is it possible, and, if so, how is it possible to differentiate between the role of the artist and that of the curator when there is no difference between art’s production and exhibition? Now, I would argue that this distinction is still possible. And I would like to do so by analyzing the difference between the standard exhibition and the artistic installation. A conventional exhibition is conceived as an accumulation of art objects placed next to one another in an exhibition space to be viewed in succession. In this case, the exhibition space works as an extension of neutral, public urban space—as something like a side alley into which the passerby may turn upon payment of an admission fee. The movement of a visitor through the exhibition space remains similar to that of someone walking down a street and observing the architecture of the houses left and right. It is by no means accidental that Walter Benjamin constructed his “Arcades Project” around this analogy between an urban stroller and an exhibition visitor. The body of the viewer in this setting remains outside of the art: art takes place in front of the viewer’s eyes—as an art object, a performance, or a film. Accordingly, the exhibition space is understood here to be an empty, neutral, public space—a symbolic property of the public. 
The only function of such a space is to make the art objects that are placed within it easily accessible to the gaze of the visitors. The curator administers this exhibition space in the name of the public—as a representative of the public. Accordingly, the curator’s role is to safeguard its public character, while bringing the individual artworks into this public space, making them accessible to the public, publicizing them. It is obvious that an individual artwork cannot assert its presence by itself, forcing the viewer to take a look at it. It lacks the vitality, energy, and health to do so. In its origin, it seems, the work of art is sick, helpless; in order to see it, viewers must be brought to it as visitors are brought to a bed-ridden patient by hospital staff. It is no coincidence that the word “curator” is etymologically related to “cure”: to curate is to cure. Curating cures the powerlessness of the image, its inability to show itself by itself. Exhibition practice is thus the cure that heals the originally ailing image, that gives it presence, visibility; it brings it to the public view and turns it into the object of the public’s judgment. However, one can say that curating functions as a supplement, like a pharmakon in the Derridean sense: it both cures the image and further contributes to its illness.1 The iconoclastic potential of curation was initially applied to the sacral objects of the past, presenting them as mere art objects in the neutral, empty exhibition spaces of the modern museum or Kunsthalle. It is curators, in fact, including museum curators, who originally produced art in the modern sense of the word. The first art museums—founded in the late 18th and early 19th centuries and expanded in the course of the 19th century due to imperial conquests and the pillaging of non-European cultures—collected all sorts of “beautiful” functional objects previously used for religious rites, interior decoration, or manifestations of personal wealth, and exhibited them as works of art, that is, as defunctionalized autonomous objects set up for the mere purpose of being viewed. All art originates as design, be it religious design or the design of power. In the modern period as well, design precedes art. Looking for modern art in today’s museums, one must realize that what is to be seen there as art is, above all, defunctionalized design fragments, be it mass-cultural design, from Duchamp’s urinal to Warhol’s Brillo Boxes, or utopian design that—from Jugendstil to Bauhaus, from the Russian avant-garde to Donald Judd—sought to give shape to the “new life” of the future. Art is design that has become dysfunctional because the society that provided the basis for it suffered a historical collapse, like the Inca Empire or Soviet Russia. In the course of the Modern era, however, artists began to assert the autonomy of their art—understood as autonomy from public opinion and public taste. Artists have required the right to make sovereign decisions regarding the content and the form of their work beyond any explanation or justification vis-à-vis the public. And they were given this right—but only to a certain degree. The freedom to create art according to one’s own sovereign will does not guarantee that an artist’s work will also be exhibited in the public space. The inclusion of any artwork in a public exhibition must be—at least potentially—publicly explained and justified. 
Though artist, curator, and art critic are free to argue for or against the inclusion of some artworks, every such explanation and justification undermines the autonomous, sovereign character of artistic freedom that Modernist art aspired to win; every discourse legitimizing an artwork, its inclusion in a public exhibition as only one among many in the same public space, can be seen as an insult to that artwork. This is why the curator is considered to be someone who keeps coming between the artwork and the viewer, disempowering the artist and the viewer alike. Hence the art market appears to be more favorable than the museum or Kunsthalle to Modern, autonomous art. In the art market, works of art circulate singularized, decontextualized, uncurated, which apparently offers them the opportunity to demonstrate their sovereign origin without mediation. The art market functions according to the rules of the Potlatch as they were described by Marcel Mauss and by Georges Bataille. The sovereign decision of the artist to make an artwork beyond any justification is trumped by the sovereign decision of a private buyer to pay for this artwork an amount of money beyond any comprehension. Now, the artistic installation does not circulate. Rather, it installs everything that usually circulates in our civilization: objects, texts, films, etc. At the same time, it changes in a very radical way the role and the function of the exhibition space. The installation operates by means of a symbolic privatization of the public space of an exhibition. It may appear to be a standard, curated exhibition, but its space is designed according to the sovereign will of an individual artist who is not supposed to publicly justify the selection of the included objects, or the organization of the installation space as a whole. The installation is frequently denied the status of a specific art form, because it is not obvious what the medium of an installation actually is. Traditional art media are all defined by a specific material support: canvas, stone, or film. The material support of the installation medium is the space itself. That does not mean, however, that the installation is somehow “immaterial.” On the contrary, the installation is material par excellence, since it is spatial—and being in the space is the most general definition of being material. The installation transforms the empty, neutral, public space into an individual artwork—and it invites the visitor to experience this space as the holistic, totalizing space of an artwork. Anything included in such a space becomes a part of the artwork simply because it is placed inside this space. The distinction between art object and simple object becomes insignificant here. Instead, what becomes crucial is the distinction between a marked, installation space and unmarked, public space. When Marcel Broodthaers presented his installation Musée d’Art Moderne, Département des Aigles at the Düsseldorf Kunsthalle in 1970, he put up a sign next to each exhibit saying: “This is not a work of art.” As a whole, however, his installation has been considered to be a work of art, and not without reason. The installation demonstrates a certain selection, a certain chain of choices, a logic of inclusions and exclusions. Here, one can see an analogy to a curated exhibition. But that is precisely the point: here, the selection and the mode of representation is the sovereign prerogative of the artist alone. 
It is based exclusively on personal sovereign decisions that are not in need of any further explanation or justification. The artistic installation is a way to expand the domain of the sovereign rights of the artist from the individual art object to that of the exhibition space itself. This means that the artistic installation is a space in which the difference between the sovereign freedom of the artist and the institutional freedom of the curator becomes immediately visible. The regime under which art operates in our contemporary Western culture is generally understood to be one that grants freedom to art. But art’s freedom means different things to a curator and to an artist. As I have mentioned, the curator—including the so-called independent curator—ultimately chooses in the name of the democratic public. Actually, in order to be responsible toward the public, a curator does not need to be part of any fixed institution: he or she is already an institution by definition. Accordingly, the curator has an obligation to publicly justify his or her choices—and it can happen that the curator fails to do so. Of course, the curator is supposed to have the freedom to present his or her argument to the public—but this freedom of the public discussion has nothing to do with the freedom of art, understood as the freedom to make private, individual, subjective, sovereign artistic decisions beyond any argumentation, explanation, or justification. Under the regime of artistic freedom, every artist has a sovereign right to make art exclusively according to private imagination. The sovereign decision to make art in this or that way is generally accepted by Western liberal society as a sufficient reason for assuming an artist’s practice to be legitimate. Of course, an artwork can also be criticized and rejected—but it can only be rejected as a whole. It makes no sense to criticize any particular choices, inclusions, or exclusions made by an artist. In this sense, the total space of an artistic installation can also only be rejected as a whole. To return to the example of Broodthaers: nobody would criticize the artist for having overlooked this or that particular image of this or that particular eagle in his installation. One can say that in Western society the notion of freedom is deeply ambiguous—not only in the field of art, but also in the political field. Freedom in the West is understood as allowing private, sovereign decisions to be made in many domains of social practice, such as private consumption, investment of one’s own capital, or choice of one’s own religion. But in some other domains, especially in the political field, freedom is understood primarily as the freedom of public discussion guaranteed by law—as non-sovereign, conditional, institutional freedom. Of course, the private, sovereign decisions in our societies are controlled to a certain degree by public opinion and political institutions (we all know the famous slogan “the private is political”). Yet, on the other hand, open political discussion is time and again interrupted by the private, sovereign decisions of political actors and manipulated by private interests (which then serve to privatize the political). The artist and the curator embody, in a very conspicuous manner, these two different kinds of freedom: the sovereign, unconditional, publicly irresponsible freedom of art-making, and the institutional, conditional, publicly responsible freedom of curatorship. 
Further, this means that the artistic installation—in which the act of art production coincides with the act of its presentation—becomes the perfect experimental terrain for revealing and exploring the ambiguity that lies at the core of the Western notion of freedom. Accordingly, in the last decades we have seen the emergence of innovative curatorial projects that seem to empower the curator to act in an authorial, sovereign way. And we have also seen the emergence of artistic practices seeking to be collaborative, democratic, decentralized, de-authorized. Indeed, the artistic installation is often viewed today as a form that allows the artist to democratize his or her art, to take public responsibility, to begin to act in the name of a certain community or even of society as a whole. In this sense, the emergence of the artistic installation seems to mark the end of the Modernist claim of autonomy and sovereignty. The artist’s decision to allow the multitude of visitors to enter the space of the artwork is interpreted as an opening of the closed space of an artwork to democracy. This enclosed space seems to be transformed into a platform for public discussion, democratic practice, communication, networking, education, and so forth. But this analysis of installation art practice tends to overlook the symbolic act of privatizing the public space of the exhibition, which precedes the act of opening the installation space to a community of visitors. As I have mentioned, the space of the traditional exhibition is a symbolic public property, and the curator who manages this space acts in the name of public opinion. The visitor of a typical exhibition remains on his or her own territory, as a symbolic owner of the space where the artworks are delivered to his or her gaze and judgment. On the contrary, the space of an artistic installation is the symbolic private property of the artist. By entering this space, the visitor leaves the public territory of democratic legitimacy and enters the space of sovereign, authoritarian control. The visitor is here, so to speak, on foreign ground, in exile. The visitor becomes an expatriate who must submit to a foreign law—one given to him or her by the artist. Here the artist acts as legislator, as a sovereign of the installation space—even, and maybe especially so, if the law given by the artist to a community of visitors is a democratic one. One might then say that installation practice reveals the act of unconditional, sovereign violence that initially installs any democratic order. We know that democratic order is never brought about in a democratic fashion—democratic order always emerges as a result of a violent revolution. To install a law is to break one. The first legislator can never act in a legitimate manner—he installs the political order, but does not belong to it. He remains external to the order even if he decides later to submit himself to it. The author of an artistic installation is also such a legislator, who gives to the community of visitors the space to constitute itself and defines the rules to which this community must submit, but does so without belonging to this community, remaining outside it. And this remains true even if the artist decides to join the community that he or she has created. This second step should not lead us to overlook the first one—the sovereign one. 
And one should also not forget: after initiating a certain order—a certain politeia, a certain community of visitors—the installation artist must rely on the art institutions to maintain this order, to police the fluid politeia of the installation’s visitors. With regard to the role of police in a state, Jacques Derrida suggests in one of his books (La force des lois) that, though the police are expected to supervise the functioning of certain laws, they are de facto also involved in creating the very laws that they should merely supervise. To maintain a law always also means to permanently reinvent that law. Derrida tries to show that the violent, revolutionary, sovereign act of installing law and order can never be fully erased afterwards—this initial act of violence can and will always be mobilized again. This is especially obvious now, in our time of violent export, installing, and securing of democracy. One should not forget: the installation space is a movable one. The art installation is not site-specific, and it can be installed in any place and for any time. And we should be under no illusions that there can be anything like a completely chaotic, Dadaistic, Fluxus-like installation space free of any control. In his famous treatise Français, encore un effort si vous voulez être républicains, the Marquis de Sade presents a vision of a perfectly free society that has abolished all existing law, installing only one: everyone must do what he or she likes, including committing crimes of any kind.2 What is especially interesting is how, at the same time, Sade remarks upon the necessity of law enforcement to prevent the reactionary attempts of some traditionally-minded citizens to return to the old repressive state in which family is secured and crimes forbidden. So we also need the police to defend the crimes against the reactionary nostalgia of the old moral order. And yet, the violent act of constituting a democratically organized community should not be interpreted as contradicting its democratic nature. Sovereign freedom is obviously non-democratic, so it also seems to be anti-democratic. However, even if it appears paradoxical at first glance, sovereign freedom is a necessary precondition for the emergence of any democratic order. Again, the practice of art installation is a good example of this rule. The standard art exhibition leaves an individual visitor alone, allowing him or her to individually confront and contemplate the exhibited art objects. Moving from one object to another, such an individual visitor necessarily overlooks the totality of the exhibition’s space, including his or her own position within it. An artistic installation, on the contrary, builds a community of spectators precisely because of the holistic, unifying character of the installation space. The true visitor to the art installation is not an isolated individual, but a collective of visitors. The art space as such can only be perceived by a mass of visitors—a multitude, if you like—with this multitude becoming part of the exhibition for each individual visitor, and vice versa. There is a dimension of mass culture which is often overlooked, that becomes particularly manifest in the context of art. A pop concert or a film screening creates communities among its attendees. 
The members of these transitory communities do not know each other—their structure is accidental; it remains unclear where they have come from and where they are going; they have little to say to one another; they lack a joint identity or previous history that could provide them with common memories to share; nevertheless, they are communities. These communities resemble those of travelers on a train or airplane. To put it differently: these are radically contemporary communities—much more so than religious, political, or working communities. All traditional communities are based on the premise that their members, from the very beginning, are linked by something that stems from the past: a common language, common faith, common political history, common upbringing. Such communities tend to establish boundaries between themselves and strangers with whom they share no common past. Mass culture, by contrast, creates communities beyond any common past—unconditional communities of a new kind. This is what reveals its vast potential for modernization, which is frequently overlooked. However, mass culture itself cannot fully reflect and unfold this potential, because the communities it creates are not sufficiently aware of themselves as such. The same can be said of the masses moving through the standard exhibition spaces of contemporary museums and Kunsthalles. It is often said that the museum is elitist. I have always been astounded by this opinion, so counter to my own personal experience of becoming part of a mass of visitors continuously flowing through the exhibition and museum rooms. Anyone who has ever looked for a parking lot near a museum, or tried to leave a coat at the museum checkroom, or needed to find the museum lavatory, will have reason to doubt the elitist character of this institution—particularly in the case of museums that are considered particularly elitist, such as the Metropolitan Museum or the MoMA in New York. Today, global tourist streams make any elitist claim a museum might have seem like a ridiculous presumption. And if these streams avoid one specific exhibition, its curator will not be at all happy, will not feel elitist but disappointed for having failed to reach the masses. But these masses do not reflect themselves as such—they do not constitute any politeia. The perspective of pop-concert fans or moviegoers is too forward-directed—at stage or screen—to allow them to adequately perceive and reflect the space in which they find themselves or the communities of which they have become part. This is the kind of reflection that advanced present-day art provokes, whether as installation art, or as experimental curatorial projects. The relative spatial separation provided by the installation space does not mean a turn away from the world, but rather a de-localization and de-territorialization of mass-cultural transitory communities—in a way that assists them in reflecting upon their own condition, offering them an opportunity to exhibit themselves to themselves. The contemporary art space is a space in which multitudes can view themselves and celebrate themselves, as God or kings were in former times viewed and celebrated in churches and palaces (Thomas Struth’s Museum Photographs capture this dimension of the museum very well—this emergence and dissolution of transitional communities). More than anything else, what the installation offers to the fluid, circulating multitudes is an aura of the here and now. 
The installation is, above all, a mass-cultural version of individual flânerie, as described by Benjamin, and therefore a place for the emergence of aura, for “profane illumination.” In general, the installation operates as a reversal of reproduction. The installation takes a copy out of an unmarked, open space of anonymous circulation and places it—if only temporarily—within a fixed, stable, closed context of the topologically well-defined “here and now.” Our contemporary condition cannot be reduced to being a “loss of the aura” to the circulation of the copy beyond “here and now,” as described in Benjamin’s famous essay on “The Work of Art in the Age of Mechanical Reproduction.”3 Rather, the contemporary age organizes a complex interplay of dislocations and relocations, of deterritorializations and reterritorializations, of de-auratizations and re-auratizations. So one can say that installation practice demonstrates the dependency of any democratic space (in which masses or multitudes demonstrate themselves to themselves) on the private, sovereign decisions of an artist as its legislator. This was something that was very well known to the ancient Greek thinkers, as it was to the initiators of the earlier democratic revolutions. But recently, this knowledge somehow became suppressed by the dominant political discourse. Especially after Foucault, we tend to detect the source of power in impersonal agencies, structures, rules, and protocols. However, this fixation on the impersonal mechanisms of power leads us to overlook the importance of individual, sovereign decisions and actions taking place in private, heterotopic spaces (to use another term introduced by Foucault). Likewise, the modern, democratic powers have meta-social, meta-public, heterotopic origins. As has been mentioned, the artist who designs a certain installation space is an outsider to this space. He or she is heterotopic to this space. But the outsider is not necessarily somebody who has to be included in order to be empowered. There is also empowerment by exclusion, and especially by self-exclusion. The outsider can be powerful precisely because he or she is not controlled by society, and is not limited in his or her sovereign actions by any public discussion or by any need for public self-justification. And it would be wrong to think that this kind of powerful outsidership can be completely eliminated through Modern progress and democratic revolutions. Progress is rational. But not accidentally, an artist is supposed by our culture to be mad—at least to be obsessed. Foucault thought that medicine men, witches, and prophets have no prominent place in our society any more—that they became outcasts, confined to psychiatric clinics. But our culture is primarily a celebrity culture, and you cannot become a celebrity without being mad (or at least pretending to be). Obviously, Foucault read too many scientific books and only a few society and gossip magazines, because otherwise he would have known where mad people today have their true social place. It is also well known that the contemporary political elite is a part of global celebrity culture, which is to say that it is external to the society it rules. Global, extra-democratic, trans-state, external to any democratically organized community, paradigmatically private, this elite is, in fact, structurally mad—insane. Now, these reflections should not be misunderstood as a critique of installation as an art form by demonstrating its sovereign character.
The goal of art, after all, is not to change things—things are changing by themselves all the time anyway. Art’s function is rather to show, to make visible the realities that are generally overlooked. By taking aesthetic responsibility in a very explicit way for the design of the installation space, the artist reveals the hidden sovereign dimension of the contemporary democratic order that politics, for the most part, tries to conceal. The installation space is where we are immediately confronted with the ambiguous character of the contemporary notion of freedom that functions in our democracies as a tension between sovereign and institutional freedom. The artistic installation is thus a space of unconcealment (in the Heideggerian sense) of the heterotopic, sovereign power that is concealed behind the obscure transparency of the democratic order. Boris Groys (1947, East Berlin) is Professor of Aesthetics, Art History, and Media Theory at the Center for Art and Media Karlsruhe and Global Distinguished Professor at New York University. He is the author of many books, including The Total Art of Stalinism, Ilya Kabakov: The Man Who Flew into Space from His Apartment, Art Power, The Communist Postscript, and, most recently, Going Public.
Perhaps the most intriguing sign of Cuban art’s rising visibility at this year’s Art Basel Miami Beach was the Salon conversation “The New Role for Art in Cuba,” held Thursday evening, December 3. A capacity crowd of more than 200 listened attentively as Carlos Garaicoa; Glenda León; Stéphane Aquin, chief curator of the Hirshhorn Museum and Sculpture Garden in Washington, DC; and the moderator, curator and art historian Iliana Cepero Amador, gave thoughtful consideration to a range of topics, from potential roles for private galleries in Havana to income inequality to the current focus on painting by young Cuban artists. The first part of the conversation was a focused exchange between Cepero Amador and Garaicoa, which took as its starting point the recent opening of Galleria Continua’s Havana space, and the presentation of the gallery’s 25th-anniversary retrospective show at the Wifredo Lam Center. As moderator, Cepero Amador began by framing the discussion with a quote from British curator and art critic Julian Stallabrass: “The art world is bound to the economy.” With that, she asked Garaicoa: What are the implications of inviting a foreign commercial gallery into a public space like the Lam Center, which was formed on the premise of gathering and promoting art of developing countries? León saw this, in part, as a need by the artists of her generation to differentiate themselves in the marketplace. She saw the current turn toward painting on the part of younger artists as shaped by the market as well. Garaicoa pointed out that the current international interest in Cuban art is not without precedent. “I lived this in the 1990s already,” he said, “this explosion of Cuban art.” Back then, he recalled, museums and collectors offered opportunities for artists to go abroad, giving them the capacity to gain international experience and a basic understanding of how the art market functioned. Before the discussion, Carlos Garaicoa, left, visits with Cuban Art News publisher Howard Farber.
https://cubanartnews.org/2015/12/22/at-abmb-the-new-role-of-art-in-cuba/
Welcome! PRS is committed to positive relationships among parents, schools and communities to support the development of the whole child. The PRS assists families with questions and planning as well as provides resources and training. KEEP IN TOUCH: Our newsletter is changing, and you must sign up again if you wish to continue receiving it. Use the "Follow Leanne Kidwell" link at the bottom of the newsletter to sign up.
Smore newsletter links:
- Kindergarten Connection
- Mandt Overview for Families
- Middle School Connection
- Overview of the Virginia IEP System
- Seven Steps to Earning Instructional Control with Your Child
- While the Bus is Still Coming - Exploring Self-Advocacy and Work Place Readiness Skills in Elementary School
- While the Bus is Still Coming - Transition in Middle School and Early High School
- Parent Project - A New LCPS Initiative
- Social-Emotional Learning: Using Skills and Strategies to Support Academic Learning
- Sources of Strength
- Substance Use Trends and Student Assistance
Monthly meetings:
- December Meeting - Transition Planning and the IEP
- November Meeting - Business Meeting, No Program
- October Meeting - Meeting the LCPS Special Education Office & Understanding Their Roles and Responsibilities
- September Meeting - Overview of Virginia IEP System
How-to resources:
- How to: Self Management. Rationale: Self-management strategies are shown to be effective ways to teach or grow independence skills in monitoring and managing one's own behavior.
- How to: Setting Expectations. Rationale: Expectations are the behaviors we expect from another person. Oftentimes our expectations are not communicated clearly to those around us, which can lead to miscommunication and frustration. Setting expectations means that you have decided on the expectations and have communicated them clearly to the other person. Setting clear expectations, then teaching and reinforcing them, will increase the likelihood of better outcomes for your child while learning in the home environment.
- How to: Setting Up The Environment. Rationale: Teachers plan their classrooms to meet the needs of their students. Each child is taken into consideration when arranging the classroom environment to support student engagement. Most home environments are not set up like classrooms. By making a few adjustments, and using what you have available, you can set up a space at home that promotes engagement in learning.
- How to: Choosing Reinforcers. Rationale: Deciding on what to use for reinforcement can be very challenging. Some children have many likes, while others have very few. A child can like something but not be willing to complete a task in order to receive it. Choosing the right objects, activities, or social interactions will provide you with options for reinforcing desired behaviors.
- How to: Using Reinforcers. Rationale: Providing reinforcers immediately following the desired behavior will increase the likelihood that a desired behavior will occur again. Withholding a reinforcer while the child is engaging in an undesired behavior will decrease the likelihood that the behavior will occur again. When seeking behavior change it is important to consistently deliver reinforcers at the proper time.
- How to: Schedules And Routines. Rationale: A schedule is a laid-out structure or plan on how and when tasks and activities will be completed. A routine is established by repeatedly following a plan. Having a set schedule will offer you and your child consistency, allowing for the establishment of routines.
- How to: Creating A Token Board. Rationale: Token boards are a visual representation of expectations for the learner. Token boards come in various designs, from fancy laminated internet templates to glass pickle jars with coins. Token boards should be individualized and accessible for the child. Creating a token board can be simple and made with materials found in your home.
- How to: Implementing A Token Board. Rationale: Token economies are individualized systems that allow an adult to teach and maintain behavior by providing praise and tokens to reinforce expectations while an individual is working towards a goal. The learner earns a predetermined number of "tokens" and then exchanges them for something that is desired (social interaction, tangibles, or activities). It may be helpful to watch the "Creating a token board" module first.
- How to: Regulating Emotions. Rationale: Emotions impact attention, memory, creativity, performance, and learning. Emotional regulation strategies help to prevent, reduce, or maintain emotions in order to promote well-being and positive learning experiences.
- How to: Building A Growth Mindset. Rationale: Growth Mindset is believing that your abilities are cultivated through effort. With a growth mindset, students continually work to improve their skills and view mistakes as opportunities to learn. They are resilient learners, continuing to try, even when facing great challenges. Through the use of encouraging affirmations and positive quotations, negative thought patterns can evolve into positive ones.
https://www.lcps.org/ParentResourceServices
NAO publishes data-sharing guide for government leaders
The National Audit Office (NAO) has published a guide to help senior leaders in government make better use of data and data-sharing across public services. The 27-page document identifies seven key areas where government decision-makers face data challenges, including data-sharing, quality, standards, resourcing, APIs, analytics and cross-government data sets. It also offers a "way forward", which reviews why initiatives have failed in the past and offers advice on how to tackle those challenges. By encouraging government executives to examine the issues holding back effective data-sharing, the guide aims to help them realise what good use of data can bring to the operational delivery of public services. Inadequate use of data is a leading cause of inefficiencies in government, and organisations must acknowledge the importance of fixing underlying data issues, such as addressing legacy systems and allowing for better cross-departmental data-sharing, the guide explains. Government leaders effectively addressing the challenges highlighted in the report "could bring substantial benefits by enabling reform and transformation", says the document.
Persisting challenges of data-sharing in government
The report identifies three "substantive issues" preventing government executives from making better use of data. Despite continuous talk about the importance of data, there is still a lack of understanding of what quality data is, with new initiatives lacking a structured approach and evidence of good data quality. Failure to treat data as a priority is another challenge, weakening the case for long-term investments to improve data quality and sharing. The guide also identifies a data culture that is "not fit-for-purpose" as a barrier standing in the way of making better use of data, with government lacking "the necessary capability, leadership and culture to introduce and support sustained improvements."
Departments must share data for the benefit of citizens
Although the 2017 Digital Economy Act enabled public authorities to share personal data among public bodies to improve service delivery, a 2019 Office for National Statistics (ONS) report found that the legislation did not offer departments the reassurance they need to be confident about sharing data legally. The ONS report concluded that "despite years of effort and many well-documented failures, government has lacked clear and sustained strategic leadership on data," which in turn led departments to under-prioritise their data management and improvement efforts. A risk-averse culture among public authorities, which are often reluctant to share data for fear of the potential consequences of doing it incorrectly, has long been an important obstacle to departments in government collaborating with data. This, however, improved substantially during the pandemic, when the urgency of the situation pushed departments to work together with data to manage the national emergency. The pandemic showed that government can do much more with data for the benefit of citizens if it works more closely together; however, there is still a risk of slipping back into the old ways of working.
Alison Pritchard, Deputy National Statistician and Director General for Data Capability at the ONS, said that although she has seen progress in data-sharing across government, friction remains and there is a need to continue creating use cases as seen during the Covid-19 outbreak, when government departments made leaps and bounds with data sharing. “The pandemic really generated that moment where the management of risk was clearly overtaken by the benefits in sharing that data,” Pritchard said at the Government Data Show. “So the moment you have a really clear purpose, you're able to generate progress around data sharing”.
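One of the guide's recurring themes is that poor data quality undermines confident sharing between departments. As a purely illustrative sketch (the field names, freshness rule, and sample records below are assumptions for the sake of example, not requirements from the NAO guide), a department could run an automated check like this before publishing a dataset for reuse:

```python
# Illustrative pre-sharing data-quality check. Field names and rules are
# assumptions for the sake of example, not taken from the NAO guide.

from datetime import datetime, timedelta

REQUIRED_FIELDS = {"record_id", "service_area", "updated_at"}
MAX_AGE = timedelta(days=365)   # assumed freshness rule

def quality_report(records):
    """Count records with missing required fields or stale timestamps."""
    report = {"total": len(records), "missing_fields": 0, "stale": 0}
    now = datetime.now()
    for rec in records:
        if not REQUIRED_FIELDS <= rec.keys():
            report["missing_fields"] += 1
        elif now - rec["updated_at"] > MAX_AGE:
            report["stale"] += 1
    return report

sample = [
    {"record_id": 1, "service_area": "housing", "updated_at": datetime(2021, 5, 1)},
    {"record_id": 2, "service_area": "health"},   # missing update timestamp
]
print(quality_report(sample))   # e.g. {'total': 2, 'missing_fields': 1, 'stale': 1}
```

Surfacing this kind of summary alongside a shared dataset is one small way to give receiving departments the evidence of quality the report says is currently lacking.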
https://www.govx.digital/data/nao-publishes-data-sharing-guide-for-government-leaders
The Department of Soil and Water Conservation Punjab takes care of Soil and Water in the State. The natural resources of Soil and Water are most important for sustaining agricultural production. The preservation and up-gradation of soil and water resources is being given top priority at the national level. Therefore, the department has an important role in the conservation of these resources. The department carries out the following functions:
- The department carries out surveys of soils and prepares a soil inventory of the different types of soils in the State. The soils are classified as per their capability. The data bank on the soils is utilized for formulating strategy and schemes for best utilization of the soils and also for preparing schemes for conservation and up-gradation of soil and water resources.
- The department executes schemes/projects for conservation of soil and water resources in the State. These projects are being implemented under the State Plan Schemes and Centrally Sponsored Schemes.
- The nucleus cell of the State Land Use Board functions under the department and is responsible for maintaining updated data on soil & water resources, keeping liaison with other departments and organizations connected with land use, and generating awareness for optimum and judicious use of soil and water resources in the State.
ACTIVITIES OF DEPARTMENT
The Activities/Works executed by the department under various schemes of Soil Conservation and Water Management:
Soil Conservation Activities on Watershed Basis
- Land leveling/land grading
- Bench terracing
- Gully reclamation & soil erosion control works on watershed basis
- Construction of water harvesting structures
- Tapping of perennial flow (Makowal type)
- Contour bunding
- Drainage line treatment
- Chiseling of soils
- Scrapping of sand dunes
- Ravine reclamation
- Reclamation of alkaline/saline & waterlogged areas
- Contour vegetative hedges
- Horticulture development
- Earthen loose boulders with vegetative support in upper, middle and lower reaches
- Loose boulders with gabion structures
- Spillways
- Percolation tanks
- Land use incentive for moisture conservation
- Silvipastural development
- Afforestation
- Agro forestry
- Vegetative filter strips in place of diversion drains
- Crop demonstrations
- Dryland horticulture
- Composite horticulture
- Household production systems such as poultry, piggery, bee keeping, sericulture, mushroom cultivation, production of bio fertilizers
- Bio mass/agro processing such as basket making, rope making, leaf plates and saucers, wood carving and toy making, extraction, poultry, Charkha
- Bank stabilization with vegetative measures
- Small dugout/sunken ponds
- Runoff management dugout ponds with vegetative inlets and outlets
- Livestock management
Soil Survey Works
- Collection of field information on soils
- Laboratory analysis
- Correlation and classification
- Mapping of soils
- Establishing & drawing boundaries of different kinds of soils on the standard base map and prediction about the potential of soils for alternate use
- Different types of soil survey are conducted in the field, i.e. reconnaissance survey, semi-detailed survey, detailed soil survey, wasteland mapping etc.
- The products of soil survey are soil maps and soil survey reports.
Water Management Works
- Laying of underground pipelines for irrigation from tubewells, canal outlets, etc.
- Natural and artificial groundwater recharging
- Construction and maintenance of common watercourses
- Utilization of sullage water for irrigation
- Drip irrigation
- Micro-sprinkler irrigation
- Lift irrigation projects
Machinery Division: The department has a fleet of 5 bulldozers, 1 excavator-cum-loader and 3 tractors for undertaking various soil and water conservation works in the State. A separate division for the running and maintenance of this machinery works under a Divisional Soil Conservation Officer with its headquarters at Mohali.
Training Institute: There is a Training Institute at Mohali to impart training to farmers and technical persons up to the level of Soil Conservation Officer. The courses cover all practical aspects of soil and water conservation works, such as land leveling, bench terracing, contour bunding, gully reclamation, water harvesting dams and drainage, and also give practice in the preparation of plans and estimates.
State Land Use Board: A nucleus cell, the Land Use Board, has been created to assess land resources such as soil, water and vegetation. It is responsible for maintaining a soil inventory, liaising with other departments and organizations connected with land use, and generating awareness of the optimum and judicious use of soil and water resources in the State.
MAJOR ACTIVITIES OF DEPARTMENT
Underground Pipeline System (UGPS)
Replacement of open irrigation channels with an underground pipeline system for conveyance. Farmers can opt for RCC, PVC or HDPE pipes. Laying of underground pipelines saves up to 30% of water, a figure corroborated by evaluation studies carried out by ICAR and NABARD; saving irrigation water to this extent means that this amount of groundwater withdrawal is reduced. Besides water conservation, additional advantages for the farming community are reduced labour and maintenance costs. The intervention also saves the roughly 2-3% of land otherwise wasted on the construction of open channels, which can be put to agricultural use.
Type of Assistance Provided:
- Capital assistance at 90% on community UGPS projects, mainly in canal command areas, under RKVY, PIDB, RIDF-17, RIDF-21 and the Sangrur-Barnala schemes
- Subsidy at 50% on individual UGPS projects in tubewell command areas under the Crop Diversification Programme
Micro Irrigation (MI) - Drip and Sprinkler Irrigation Systems
A scientific method of irrigation carrying the desired water and nutrients directly to the root zone of the plant, drop by drop. Water flows through a filter into special drip pipes with emitters located at different spacings and is distributed through the emitters directly into the soil near the roots via a slow-release device. If the system is properly designed, installed and managed, drip irrigation can help achieve water conservation by reducing evaporation and deep drainage. Compared with other types of irrigation, such as flood or overhead sprinklers, water can be applied more precisely to the plant roots, and drip irrigation can eliminate many diseases that are spread through irrigation water. It is adaptable to any farmable slope and is suitable for most soils.
Advantages
- High water application efficiency and lower labour costs
- Minimised fertiliser/nutrient loss due to localised application and reduced leaching
- Ability to irrigate irregularly shaped fields; levelling of the field is not necessary
- Allows safe use of recycled (waste) water
- Moisture within the root zone can be maintained at field capacity, and soil erosion is minimised
- Soil type plays a less important role in the frequency of irrigation
- Highly uniform distribution of water, controlled by the output of each nozzle
- Usually operated at lower pressure than other types of pressurised irrigation, reducing energy costs
Assistance Provided:
- Subsidy calculated as per the PMKSY Micro Irrigation Guidelines
- Subsidy subject to fund availability with the department
- Priority tubewell connection for farmers opting for a micro irrigation system
Laying of Underground Pipelines for Irrigation Using Treated Water from Sewerage Treatment Plants
The project aims to provide treated water from sewerage treatment plants for irrigation through a network of underground pipelines. Irrigation water from this source reduces stress on other water resources, especially groundwater, which is declining at a tremendous rate in the State. Punjab has taken a major initiative under the programme "Cleaning of Rivers" for the treatment of municipal waste water by installing Sewerage Treatment Plants (STPs) in all major cities and towns, as well as for clusters of villages where underground sewerage is available. The programme is a flagship programme of the State Government and is personally monitored by the Hon'ble Chief Minister, Punjab. The Punjab Water Supply and Sewerage Board, along with the Punjab Urban Development Agency, has already installed STPs for the treatment of municipal waste water in all major cities and towns of the State, and the construction of STPs for other townships and clusters of villages is currently underway. To date approximately 45 STPs are functioning, almost 60 are nearing completion and 70 are at the planning and execution stage, which means that an abundant amount of treated water is available for use in agriculture.
Major Advantages
- Reduced stress on groundwater aquifers, which in turn are replenished, arresting the declining trend in water levels
- More potable water available, as the use of treated water for irrigation reduces the use of potable water in agriculture
- Environmental pollution is mitigated, as untreated water is not released into the environment
- Farmers' dependence on other sources of irrigation is reduced, as treated water is available throughout the year
- Less use of inorganic or chemical fertilizers, as treated sewerage water is rich in nutrients required for crop growth
- The programme helps create awareness among people of the optimum utilization of water resources
Assistance Provided
- The projects are implemented on a 100% grant basis by the department.
- Upon completion, the projects are handed over to a water user society of farmers in the command area for operation.
- Projects are implemented with funding from various sources such as NABARD-RIDF-18, PPCB, GLADA, GMADA, Local Bodies, etc.
Rainwater Harvesting in the Kandi Area
The Kandi area comprises 538,100 hectares, almost 10% of the total area of the State. It stretches from district Mohali to district Pathankot along the border with Himachal Pradesh and is a sub-mountainous zone with an average rainfall of 1,000 mm.
Agriculture in this area depends mainly on rain, as it is a predominantly rain-fed area, and the socio-economic conditions of the people are poor because land holdings are small and scattered. The average agricultural yield has been only 700 kg/hectare against the State average of 4,500 kg/hectare. Further, the Kandi area, being very undulating and sub-mountainous, is traversed by numerous small and large seasonal streams called choes and is prone to flash floods. About 40% of the rainwater is wasted in floods, which cause considerable damage to crops and village abadis and make the land infertile. Yet these streams/choes criss-crossing the area also hold the potential to transform it through water harvesting. The department has initiated this programme for the development of the area and the recharging of groundwater.
Activities under the Project
- Construction of rainwater harvesting and recharging structures for storage, irrigation and natural recharge
- Renovation of village ponds to increase their water-holding capacity and recharge groundwater
Advantages
These water harvesting structures are multipurpose: besides providing irrigation to the command area and augmenting groundwater recharge, they have helped check erosion and flash floods and rejuvenate the forest cover in the catchment area. Water harvesting in perennial flows also provides life-saving irrigation to farmers in the adjoining area.
Assistance
The projects are implemented on a 100% grant basis with funding under NABARD-RIDF-17.
Programme on a Watershed Basis - Integrated Watershed Management Programme
The growth of agriculture in Punjab from the sixties onwards has few parallels in the world. The Integrated Watershed Management Programme (IWMP) is implemented by the Ministry of Rural Development, DoLR, GOI, with the objective of restoring the ecological balance by harnessing, conserving and developing degraded natural resources such as soil, forest and water, along with the overall development of the community. The programme is implemented on a 60:40 Centre:State sharing basis. IWMP was first introduced in the State in 2009-10 with the sanctioning of six projects in the districts of Ropar, Hoshiarpur and Gurdaspur, which are currently underway. Earlier, according to the Common Guidelines of Watershed Management, only rain-fed areas could be taken up under the programme, but considering the falling water table in the northern states, especially Punjab, the guidelines were amended so that areas with over-exploited groundwater resources can also be taken up. Till date a total of 65 projects are being implemented in the State, in the districts of Hoshiarpur, Ropar, Mohali, Gurdaspur, Pathankot, SBS Nagar, Jalandhar, Kapurthala, Taran Taran, Ludhiana, Patiala and Fazilka. Groundwater exploitation in the non-Kandi areas taken up under the programme is very high: the average level of groundwater development in the project areas ranges from 130% to 200% and falls in the over-exploited category.
Major Works
Rainwater harvesting and recharging, erosion control, land levelling, renovation of ponds, drainage line treatment, plantation, retaining walls, farm production, livelihood generation, self-help groups.
Assistance
- Projects are implemented on a 100% grant basis.
- 67 projects are under implementation in 14 districts.
- Upon completion they will benefit more than 3 lakh hectares.
Other Programmes
- Conservation and management of the Harike, Rupnagar and Nangal wetlands with funding from the Ministry of Environment, Forest and Climate Change (MoEF&CC).
- Project for Restoration and Preservation of the Holy Bein in Kapurthala district.
- Pilot Project for Reclamation of Ravinous and Gullied Lands in Gurdaspur district.
http://dswcpunjab.gov.in/contents/activities.html
A later phase of Cubism, in which paintings and drawings were constructed from objects and shapes cut from paper or other materials to represent parts of a subject, in order to engage the viewer with pictorial issues such as figuration, realism, and abstraction.
- Pablo Picasso: a Spanish artist, founder of Cubism, which focused on geometric shapes and overlapping planes.
- Georges Braque: 20th-century artist who collaborated with Picasso on their experimental artistic style (Cubism). Up until Braque's unfortunate wounding in WWI, he and Picasso continued to influence each other greatly, and their unique brand of artistic expression greatly influenced, and continues to exercise, the minds of the modern world.
- Futurism: the most important Italian avant-garde art movement of the 20th century; it celebrated advanced technology and urban modernity. Committed to the new, its members wished to destroy older forms of culture and to demonstrate the beauty of modern life.
- Giacomo Balla: Italian painter who adopted the Futurist style, creating pictorial depictions of light, movement and speed. A signatory of the Futurist Manifesto in 1910, he went on to design and paint Futurist furniture and to create Futurist "antineutral" clothing; "Boccioni's Fist".
- F.T. Marinetti: Italian poet and editor, founder of the Futurist movement.
- Umberto Boccioni: Italian painter and sculptor. Like other Futurists, his work centered on the portrayal of movement (dynamism), speed, and technology. Examples: "Visioni simultanee" (1912) and "The Street Enters the House" (1911).
- Fauves: a French term meaning "wild beasts," descriptive of an artistic style characterized by bright, intense, expressionistic color schemes.
- Biskra: Henri Matisse's Blue Nude (Souvenir de Biskra), which may be classified as Fauvist, was controversial.
- Henri Matisse: Fauvism; The Blue Nude.
- The Blue Rider: a Kandinsky painting of a man on horseback galloping across a green hill.
- The Bridge: painted by Kirchner.
- Synaesthesia: a perceptual experience in which stimulation of one sense produces additional unusual experiences in another sense.
- Theosophy: a set of teachings about God and the world based on mystical insight, especially teachings founded on a blend of Buddhist and Hindu beliefs.
- Ernst Ludwig Kirchner: German expressionist painter and printmaker and one of the founders of the artists' group Die Brücke ("The Bridge"), a key group leading to the foundation of Expressionism in 20th-century art.
- Vasily (Wassily) Kandinsky: an influential Russian painter and art theorist, credited with painting one of the first purely abstract works.
- Otto Dix: German painter and printmaker noted for his ruthless and harshly realistic depictions of Weimar society and the brutality of war; he went to war in order to see it through the eyes of a soldier and later showed its terrible conditions in his art.
- Paula Modersohn-Becker: a German painter and one of the most important representatives of early Expressionism. In a brief career, cut short by an embolism at the age of 31, she created a number of groundbreaking images of great intensity.
- The Degenerate Art Exhibition: an art exhibition organized by Adolf Ziegler and the NSDAP in Munich from 19 July to 30 November 1937. It presented 650 works of art confiscated from German museums and was staged in counterpoint to the concurrent Great German Art Exhibition.
- Marcel Duchamp: French-born, naturalized American painter, sculptor, chess player, and writer whose work is associated with Dadaism and conceptual art, although he was careful about his use of the term Dada and about direct association with Dada groups.
- Readymade: first used in relation to Dada works; a term applied to a "found object" that is then transformed into a work of art.
- Dada: an early 20th-century artistic movement that attacked traditional cultural styles and stressed the absence of purpose in life.
- Aaron Douglas: a Harlem Renaissance painter whose work celebrates African American versatility and adaptability, depicting people in a variety of settings.
- Precisionism: an American art movement of the 1920s and 1930s. The Precisionists concentrated on portraying man-made environments in a clear and concise manner to express the beauty of perfect and precise machine forms.
- Alfred Stieglitz: best known for The Steerage, a work showing immigrants returning to Europe in the second-class lower decks of a luxury liner.
- Georgia O'Keeffe: a 20th-century American painter best known for her flower canvases and southwestern landscapes.
- The Bauhaus: a school of art and architecture in Germany from 1919 to 1933 whose influence was felt across the 20th century. It was believed artists could enhance society by bringing the principles of good design to industrial mass production.
- Cabaret Voltaire: a nightclub founded by Hugo Ball and his wife, a cabaret singer; a bohemian, anti-war gathering place where Ball performed his sound poems; a pivotal point for the Dada art movement.
- Hugo Ball: German author, poet and one of the leading Dada artists.
- George Grosz: a German artist known especially for his caricatural drawings of Berlin life in the 1920s.
- John Heartfield: Berlin Dadaist and a pioneer in the use of art as a political weapon; some of his photomontages were anti-Nazi and anti-fascist statements.
- Hannah Höch (1889-1978): a German Dada artist, best known for her work of the Weimar period, when she was one of the originators of photomontage.
- Primitivism: a belief in the value of what is simple and unsophisticated, expressed as a philosophy of life or through art or literature.
- The Harlem Renaissance: an African-American cultural movement of the 1920s-30s, centered in Harlem, that celebrated black traditions, the black voice, and black ways of life.
https://quizlet.com/79492076/art-quiz-3-flash-cards/
Understand how new technologies can make supply chains more sustainable and learn how to deal with today's trends. In this course you will learn why managing supply chains is important and challenging. It is a key activity for large multinational companies as well as for small businesses. We'll find out how to design supply chains that meet future economic, environmental and social requirements, asking questions such as:
- How did global supply chains become what they are today?
- How can current information and communication technology (ICT) make supply chains more efficient?
- Which trends do we need to recognize and cope with?
- How can you implement sustainable supply chain innovations?
- And what impact will sensing technologies, such as the use of RFID, have on the supply chain?
We will look at examples from the food industry. The building and construction supply chain will also be discussed, since it has a big impact: one out of three trucks is construction-related, so innovations applied to this chain can have a huge effect. You'll be encouraged to apply the knowledge you gain to your own product or industry (or a company and supply chain you're interested in), and you are invited to share your ideas for optimizing your supply chain with other learners. Together with academics from the University of Twente and experts from industry, you'll learn how to innovate supply chains. These instructors have vast experience in integrating ICT and extensive research in supply chain process optimization, innovation and management.
Programme
Week 1: Introduction to supply chain innovation
Week 2: Innovations and sustainability in supply chains
Week 3: The role of information technology in supply chains
Week 4: Business process management and enterprise architecture
Week 5: Sensing in the supply chain
Week 6: Supply chains in the construction industry
Instructors
- Jos Van Hillegersberg
- Hans Voordijk
- Ruth Noemi Francia Sloot
- Yvar Bosdriesz
Provider
The grand challenges the world will face during the coming decades will undoubtedly change the global university landscape too. Today's students are tomorrow's global citizens, professionals who will contribute to finding solutions for major social issues in fields like healthcare, governance, energy and the environment. As a young and entrepreneurial university in the Netherlands, the University of Twente prepares young people for this future. It does so by offering cutting-edge, appealing and future-proof education, by being a global player in technology and social research, and by creating a thriving campus hotspot as an ecosystem for hundreds of spin-off companies. 'High tech, human touch' implies that the university is continuously on the lookout for unexpected combinations of research and education, always with a view to the needs of society. Twente therefore cooperates closely with large companies, SMEs, hospitals and government organisations, throughout its own region and elsewhere. Its unique campus in the Twente region currently welcomes 9,500 students and employs over 3,000 scientists and professional support staff.
Platform
FutureLearn is a learning platform offering massive open online courses (MOOCs). Founded in December 2012, the company is wholly owned by the Open University in Milton Keynes, England. It is the leading MOOC platform in the United Kingdom, with more than fifty partner universities from the UK as well as the rest of the world.
FutureLearn also distinguishes itself through partnerships with non-university institutions such as the British Museum, the British Council, the British Library and the National Film and Television School.
https://www.my-mooc.com/ru/mooc/supply-chain-innovation/
Sept. 15, 2011 -- Eating lots of white-fleshed fruit such as apples and pears may significantly reduce the risk of stroke. In a new study, Dutch researchers set out to determine a possible link between stroke risk and eating fruits and vegetables of various colors. They examined self-reported information from 20,069 people between the ages of 20 and 65 on what they ate over a one-year period. None of the participants had diagnosed heart disease or stroke at the start of the study. During the 10 years of follow-up, 233 people had strokes. The researchers say the risk of stroke was 52% lower for people who ate a lot of white-fleshed fruits and vegetables, compared to those who didn't.
Every Little Bit Helps
The researchers found that each 25-gram daily increase in white fruits and vegetables was associated with a 9% lower risk of stroke. To put that in context, a single apple is about 120 grams. "To prevent stroke, it may be useful to consume considerable amounts of white fruits and vegetables," Linda M. Oude Griep, MSc, of Wageningen University in the Netherlands, says in a news release. She says an apple a day "is an easy way to increase white fruits and vegetable intake," but because fruits and vegetables in other color groups also protect against chronic diseases, it's important to eat a wide variety of them. Foods in the white category also include bananas, cauliflower, chicory, and cucumbers. Potatoes were classified as a starch.
Color Reflects Presence of Beneficial 'Phytochemicals'
The color of the edible portion of fruits and vegetables reflects the presence of beneficial phytochemicals (plant compounds), such as carotenoids and flavonoids. In the study, researchers divided fruits and vegetables into four color groups: green (dark leafy vegetables, cabbages, and lettuces), orange-yellow (mostly citrus fruits), red-purple (mostly red vegetables), and white, of which 55% were pears and apples. Previous studies on the protective effects of fruits and vegetables have focused on their nutritional value and characteristics, such as the edible part of the plant, the color, the botanical family, and the ability to provide antioxidants. The researchers write that they believe their study is the first to examine fruit and vegetable color groups in relation to stroke. Still, they say that more study is needed to confirm their findings. "It may be too early for physicians to advise patients to change their dietary habits based on these initial findings," Oude Griep says.
Current Guidelines for Fruits and Vegetables
Currently, the U.S. Preventive Health Services recommends that daily diets include vegetables from five subgroups: dark green, red-orange, legume, starchy, and other vegetables. Heike Wersching, MD, MSc, of the University of Münster in Germany, writes in an accompanying editorial that even though the Dutch researchers' study group was "remarkably large," their results should be interpreted with caution. First, Wersching says, participants filled out questionnaires about what they ate, meaning the study relied on their memory, which is not always a reliable method for gathering and interpreting data. Wersching also writes that it is possible the Dutch scientists' findings could be due to "a generally healthy lifestyle" among people whose diets are rich in fruits and vegetables. Wersching concludes, however, that if the findings of the Oude Griep group are replicated, "the time for an 'apple a day' clinical trial has come."
The study and editorial are published in Stroke: Journal of the American Heart Association.
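As a rough back-of-the-envelope check on those figures (an illustration only, not an analysis from the study): if the reported 9% lower risk per 25-gram daily increase is assumed to compound multiplicatively, in the style of a hazard ratio, then a 120-gram apple corresponds to about

\[
(1 - 0.09)^{120/25} = 0.91^{4.8} \approx 0.64,
\]

i.e. roughly a one-third lower risk, whereas a naive linear extrapolation ($4.8 \times 9\% \approx 43\%$) would overstate the effect. The study itself does not say which extrapolation is appropriate.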
Uptake of Mineral Nutrients in Plants: Plants take up their minerals from the soil with the help of roots. This process of uptake of mineral nutrients is known as the absorption of minerals. Mineral absorption takes place by two methods:
- Passive absorption
- Active absorption
Passive Absorption of Minerals: This is a spontaneous process in which no ATP is used for energy. During this absorption, the minerals themselves move from a region of higher concentration to a region of lower concentration, i.e. from the soil into the root, by diffusion. This type of passive absorption is known as simple diffusion. The passive absorption of solute ions or molecules mediated by carriers is called facilitated diffusion; certain carriers (protein molecules) help in the transport of molecules across the membranes. This type of diffusion does not use energy and takes place along the concentration gradient only, until the equilibrium stage is reached.
Ion Exchange Mechanism: Mineral elements are absorbed by the plant either as molecules or, mostly, as ions. Because ions are charged either positively or negatively, when a positively charged ion is absorbed by the root cells from the soil, the root cells must pass out the same number of positively charged ions, or every positively charged ion must be accompanied by a negatively charged ion into the cells. Thus, positively charged ions (cations) such as Ca2+, K+, Mg2+ and Fe2+ in the external soil solution exchange with H+ ions adsorbed on the surface of the cell wall or membranes of the tissue. Similarly, anions such as NO3-, Cl- and SO42- exchange with free OH- (hydroxyl) ions.
Donnan Equilibrium Theory: According to this theory, some non-diffusible anions are present along the inner surface of the outer cell membrane. They remain fixed there because the outer membrane is impermeable to them, but the membrane is permeable to the cations and anions present in the outer solution. Normally, anions and cations diffuse into the cell in equal numbers until equilibrium between the cell sap and the external medium is reached. But to achieve true electrical balance, more cations from the external solution are needed to balance the fixed anions present in the cells, so the cation concentration becomes greater in the internal solution than in the external solution. This type of electrical balance or equilibrium is known as the Donnan equilibrium. The Donnan equilibrium theory and the ion-exchange mechanism explain the accumulation of ions against the concentration gradient without the use of metabolic energy.
Mass Flow: It is also believed that ions move through the roots along with the mass flow of water under the influence of transpiration.
Active Absorption of Minerals: Plant cells normally contain larger quantities of minerals than the outside medium, which means minerals are accumulated inside the roots from the soil against the concentration gradient. Absorption against the concentration gradient requires energy (ATP) and is known as active absorption. During this process, ions/minerals are taken from outside (the soil), where their concentration is lower, to the inside of the cell, where their concentration is higher, by the use of energy.
Carrier Concept: According to this concept, certain specific carriers (compounds) are present on the plasma membrane.
There are specific carriers for different ions. On the outer surface, their carriers combine with the ions to be absorbed. The plasma membrane is permeable to these ion-carrier complexes. These complexes move into the cell and on the inner face of the membrane the complexes break down to release the ions. Later the carrier compound returns to the outer face of the membrane to repeat the process.
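For reference, the Donnan condition described above can be written compactly. For a monovalent salt such as KCl distributed across a membrane with fixed internal anions, the standard textbook relation (added here for clarity, not taken from the source) is that the products of the diffusible ion concentrations on the two sides are equal at equilibrium:

\[
[\mathrm{K}^+]_{\mathrm{in}}\,[\mathrm{Cl}^-]_{\mathrm{in}} = [\mathrm{K}^+]_{\mathrm{out}}\,[\mathrm{Cl}^-]_{\mathrm{out}}
\quad\Longrightarrow\quad
\frac{[\mathrm{K}^+]_{\mathrm{in}}}{[\mathrm{K}^+]_{\mathrm{out}}} = \frac{[\mathrm{Cl}^-]_{\mathrm{out}}}{[\mathrm{Cl}^-]_{\mathrm{in}}}.
\]

Because the fixed anions inside must also be electrically balanced, the cation ratio exceeds one, which is why cations accumulate inside the cell without any expenditure of metabolic energy.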
https://gkscientist.com/uptake-of-mineral-nutrients-in-plants/
Analyze the CCOT of the Middle Ages in Western Europe during 600 C.E.-1450 C.E. In 476, the Western Roman Empire fell, leaving no central military authority. Many Germanic barbarians invaded the area so they could rule. Amid all the corruption, feudalism was created, under which someone would give up their land in exchange for military protection. The fall of Rome brought many revisions to the political structure, because Western Europe went from a lack of central authority to a society where feudalism and manorialism were strongly used. The economic basis went from the Roman coin system to the barter system and back to the coin system. The social structure was revised because it went from no middle class to a society with a middle class made up of merchants. Due to many technological innovations, the knight became the most important symbol of society because he was the basis of warfare. However, some aspects of Western Europe remained constant, such as the struggle between kings, nobles, and the church, and the church remained the center of education, medicine, and news during the period 600 C.E.-1450 C.E. After the fall of Rome, politically, there was nothing but chaos. Since there was no central government, feudalism and manorialism developed, and there was no middle class. Then there were six Crusades, military campaigns that tried to get all of Western Europe to convert to Christianity. As a result, kings became stronger because they levied taxes and got land from lords who died in combat without heirs. Later, there was a need for a middle class, which became the merchants who helped with trading in Europe. In the early stage of the Middle Ages, the coin system changed to the barter system because of all the chaos. Manorialism, under which serfs and lords lived on a manor, developed. The serfs were farmers on the manor; they were basically servants of the lord. After the Crusades, a merchant class rose because many items needed to be made and traded, and many serfs became merchants because it was easy for them to do so, since they already knew how to make many of the items needed. Since there was a lot of trade, it was much easier for the merchants to use the coin system, so the barter system wasn't used anymore. Also, the king was tied to the merchants because he could use them to get taxes from other areas. The plague caused a huge population decrease, so there was less trade and less farming on the land. In the beginning of the Middle Ages, education disappeared and war became supreme. There were two classes, serfs and lords, because the king was weak and manorialism prevailed. The Code of Chivalry, a moral code for taking care of people, became important. Knights followed it, and it took about 21 years for a person to become a knight: at age 7 he became a page, at age 14 a squire, and at age 21 a knight. After the Crusades, the middle class of merchants developed. The merchants also progressed through stages: apprentice, journeyman, and then master. To become a master, one had to complete a major project that the craft guilds, assemblies of craftsmen, had to accept. In the later part of the Middle Ages, the merchants became wealthy, and the plague killed about one-third of Europe. In the early stage of the Middle Ages, many technological advances were developed. For agriculture, the mouldboard plow, the horse collar and three-field crop rotation were developed. The horse collar let the horse pull the plow with its shoulders, so the horse could pull heavier plows.
In the three-field crop rotation, two fields would be used while the third regained nutrients, and then the fields would be switched. For warfare, there was the use of the stirrup and mounted knights, and the development of plate armor. During the Crusades, many ideas in medicine, science and art were developed. In the later stage of the Middle Ages, the crossbow, the longbow and firearms such as muskets were developed. The king used bowmen instead of knights because the bowmen were cheaper and were very loyal to the king. Also, hired soldiers replaced the feudal system of exchanging land for military protection and service. Although there were many changes during the Middle Ages, certain aspects remained the same. There was always a struggle between the kings, the nobles, and the church, because each wanted a great deal of power. The church remained the center of education, medicine, rest for travelers, food and clothing for the poor, agricultural research, news, and the moral code. The economy was still agriculture-based, the Code of Chivalry was still used, and the castle remained the primary defense, with the keep also very important for defensive purposes. The fall of Rome brought many revisions to Western Europe even though some aspects stayed the same. The fall of the Byzantine Empire in 1453 brought an end to the Middle Ages. The longbow and firearms brought the end of feudalism because soldiers were hired instead of being forced to serve. When feudalism came to an end, there was a rebirth in Western Europe; this rebirth was called the Renaissance, and it brought many new ideas to science, math, and art.
http://vermontcorporatecollege.com/4420/
At the Willows School, we believe that our pupils should have a good understanding of different faiths and cultures in order to be better members of a multicultural society. We aim to develop respect for, and understanding of, different religions and cultures as part of everything we do, not just within R.E. lessons. We strive to develop pupils' knowledge and understanding of the major world faiths, and predominantly their shared key values, by following the Rotherham Agreed Syllabus for Religious Education. This syllabus enables us to offer pupils the key values that have been treasured by all religious traditions and that are shared by many who have no religious convictions. Our R.E. curriculum is based on asking and answering questions, sharing experiences and developing pupils' own experiences, as well as gaining knowledge and understanding of the different religions, beliefs, practices, languages, cultures and traditions of our local community. We also understand that parents have the right to withdraw their child from all or part of our R.E. lessons under the Education Act 1996 and the School Standards and Framework Act 1998. Please notify school if you wish to withdraw your child from R.E. lessons. In our school, we focus on celebrations and festivals that take place within the year, based on the demographic of Rotherham, to make learning more relevant to our pupils. Our R.E. curriculum is bespoke to meet the needs of pupils and is engaging and real for them. We plan for pupils to visit places of worship, invite members of the community into the school, hold whole-school multicultural days, and make celebrations and festivals come alive within the school. Multicultural days allow pupils to take part in a range of cross-curricular activities, for example dance, art, music and cooking workshops. These events give the whole school an opportunity to learn as a community and to learn tolerance and respect for others, thus gaining the skills, knowledge and understanding pupils need to function in society as responsible citizens in modern Britain.
Date: 1st September 2021
Mrs Angela Duncan, Religious Education subject lead
Intent, Implementation and Impact
https://thewillowsschool.co.uk/subjects/religious-education/
Welcoming the Stranger views mentoring as an informal cross-cultural friendship and partnership. Mentors partner with a mentee or mentee family to help them navigate their new community and learn about available resources. The relationship may be as fluid and open-ended as works for everyone involved and may encompass a broad array of interactions. Mentors might help with such things as:
- locating resources for food, clothing and household goods
- providing occasional transportation to appointments
- facilitating connections with schools
- practicing English
- writing resumes and applying for jobs
- prepping for a driver's license
- locating and participating in fun outings
One newly arrived individual or family might quickly settle into an apartment and make their own connections, asking only for an occasional ride or cooking tip; another might seek hands-on help with negotiating the Metro system or bureaucratic red tape, while others may just want to practice English. Mentors may also need to be proactive in figuring out how best to assist their mentees. Most fundamentally, a mentor's job is to be a friend and support at a time of need.
Mentor Expectations
The mentor/mentee relationship is shaped by both parties. Starting out, we ask mentors to meet with their mentees at least once a week to help establish a relationship; after that, you can decide how much time everyone wants to spend together. In the beginning it is important to respect the privacy of your mentee(s), who often come from traumatic experiences, and not to press them for personal information. As the relationship develops and grows more comfortable, it will be up to the mentees to decide whether they want to share more. Mentors are not expected to be legal advocates, social workers, case managers or experts at anything; their primary task is simply to be a dependable, trusted friend for newcomers who would otherwise feel alone in a strange new world. If questions or issues arise, mentors have access to support from our Mentor Google group and this website's Resources section. If being a mentor doesn't work out for any reason, we hope you will work with the WTS coordinator to figure out a smooth transition to end the relationship.
https://wtsmaine.com/mentoring/mentor-job-description/
Past global changes provide a useful test for evaluating climate models. With modeling efforts increasingly focused on decadal predictions and climate services, there is a growing need to evaluate simulated climate variability. Past climates offer some opportunity for this, but require a slightly different approach than those currently adopted to look at mean changes. University College London's Geography department hosted a three-day workshop to explore the issues associated with paleoclimate variability. In some respects, this workshop emerged in response to the problems discovered during a previous PAGES-supported workshop focused on the El Niño-Southern Oscillation (ENSO; Braconnot et al. 2012). Coming out of this workshop, two requirements were identified as pivotal to model evaluation using PaleoENSO: (i) a more integrated approach across regions and disciplines, and (ii) the development of statistical and analytical tools to enable that intercomparison. A special issue of PAGES news dedicated to ENSO highlighted recent advances across the ENSO regions and disciplines with the aim of kick-starting this more integrated approach (Braconnot et al. 2013). It was recognized that a dedicated activity is required to effectively tackle the two requirements – which are relevant for all modes of climate variability, rather than just ENSO. In response, a working group on variability was established under the auspices of the Paleoclimate Modelling Intercomparison Project (PMIP3); this workshop was its first meeting. The focus of this meeting was on variability on inter-annual to multi-decadal timescales, in part because on longer timescales models appear to underestimate variability (Laepple and Huybers 2014). It was established that variability comparisons suffer additional difficulties compared with conventional analyses for the mean state. Indeed, intermediate processing is often required between the models and data, which is its own field of expertise. Some of this intermediate processing relates to forward proxy modeling, represented at the workshop by efforts to model lake isotopic systems (Jones and Imbers 2010). The rest relates to how proxy properties impact a record's statistics – for example looking at the impact of sampling individual forams (Thirumalai et al. 2013) or seasonal biases (Laepple and Huybers 2013). One conclusion was an appreciation that additional meta-data is often required to use a proxy-climate record for variability data-model comparisons. This additional information is needed to understand how the individual measurements relate to each other. This becomes much more important for studies of paleoclimate variability than when looking at mean changes, as illustrated with the example of London annual temperatures (Fig. 1). We urge people to consider the following three questions when publishing a proxy record:
- To what extent are your stated errors random?
- Does each sample represent a time-average or a snapshot?
- Is the effective temporal resolution lower than the sampling resolution (e.g. through bioturbation or residence times prior to sedimentation)?
Practically speaking, for large proxy syntheses it is important that age modeling and calibration are replicable. This requires access to the proxy dataset (including depths, proxy measurements and sample size), the age modeling information (i.e. age estimates and errors with their depths), and the calibration information, as well as knowledge of the sampling strategy and proxy processes.
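To make the sampling question concrete, here is a minimal, purely illustrative sketch (not a method from the workshop, and with arbitrary numbers) of how averaging a finite number of individual forams per sample inflates the apparent variability of a record, one reason why sample-size metadata matters for variability comparisons:

```python
import numpy as np

rng = np.random.default_rng(42)

n_years = 2000
climate = rng.normal(0.0, 0.5, n_years)   # "true" interannual signal, sd = 0.5 degC
within_year_spread = 2.0                   # spread sampled by individual forams (seasons, habitat depth)

for n_forams in (5, 30, 300):
    # each archived sample is the mean of n_forams individuals drawn from that year's distribution
    noise = rng.normal(0.0, within_year_spread / np.sqrt(n_forams), n_years)
    record = climate + noise
    print(f"{n_forams:4d} forams per sample -> apparent sd = {record.std():.2f} degC")
```

With few individuals per sample, the apparent standard deviation is dominated by sampling noise rather than climate, so comparing such a record directly against model output would overstate past variability unless the sample size is known and corrected for.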
In summary, to obtain reliable and comparable estimates of past climatic variability, we must correct for the additional processes affecting proxy variability. This is not an easy task: it requires input from several disciplines, but it has the potential to be much more relevant for the coming decades than studies of mean climate alone.
Affiliations: 1. University College London, UK; 2. Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Potsdam, Germany
http://www.pages-igbp.org/products/pages-magazine/7232
Clinical research is a crucial way of getting new medical interventions into the field, whether those are drugs, treatments, or other types of interventions. From Phase 1 clinical trials to Phase 3 clinical trial drugs, every step of the process is essential to ensure that safe, effective treatments can continue to be developed.
Step 1: Design
In the first step, researchers design a trial in order to answer specific questions they have about a drug or treatment. Some of the things the researchers must plan in advance include:
- How many people will take part
- What assessments will be conducted
- How data will be collected
- How data will be analyzed
- How long the study will last
- Who can participate
- How the drug will be administered and in what amounts
Once the clinical research has been designed, the trials move from Phase 1 drug trials to a Phase 2 clinical trial, and finally to Phase 3 clinical trial drugs.
Step 2: Phase 1 Trials
These early trials involve only a few volunteers and are usually open only to those with a particular need or disease that the drug might be useful in treating. Alternatively, some Phase 1 trials will instead involve people who are healthy. Usually, there are no more than 20 to 100 volunteers at this stage, and the primary purpose of the study is to determine whether the drug is as safe as researchers have theorized and what dosages are appropriate. About 70% of all drugs that reach this stage will move on to Phase 2 trials.
Step 3: Phase 2 Trials
In this stage, several hundred people are usually involved, all of them with the particular disease or condition that the new treatment is believed to help with. This phase is longer, typically taking between several months and two years to complete. The purpose of this stage is to check that the treatment is as efficacious as hoped and to further evaluate side effects. About 33% of the drugs that make it to Phase 2 trials will then go on to be tested as Phase 3 clinical trial drugs.
Step 4: Phase 3 Trials
Phase 3 clinical trial drugs are used on hundreds or even thousands of volunteers. These volunteers will be those who have a disease or condition that the Phase 3 clinical trial drugs are designed to treat, and the trials will typically last between one and four years. The purpose of this stage is to further evaluate how useful the drugs are and to check a wider range of people for possible side effects that might not have manifested in earlier stages, when fewer people were involved in the study. Only about 25% to 30% of Phase 3 clinical trial drugs will pass this phase and move on.
Step 5: Phase 4 Trials
Although this stage is technically known as Phase 4, the drug will be released for general use once it has passed through Phase 3 trials. Even though the drug is now being used more widely by the public, the FDA and other research organizations will keep a close eye on it for a few years to make sure there are no unexpected reactions or events surrounding its use. Clinical research trials are some of the most important parts of modern medicine. They help us make sure our drugs are safe, our treatments are effective, and innovation in the field of modern medicine continues to advance.
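Taken together, the per-phase pass rates quoted above imply a small overall success rate. Here is a minimal sketch that simply multiplies those quoted figures, treated here as assumptions (actual attrition rates vary by therapeutic area and data source):

```python
# Hypothetical illustration using the pass rates quoted in the article;
# real attrition rates differ by therapeutic area and data source.
phase_pass_rates = {
    "Phase 1": 0.70,    # "about 70% ... will move on to Phase 2"
    "Phase 2": 0.33,    # "about 33% ... go on to be tested as Phase 3"
    "Phase 3": 0.275,   # midpoint of the quoted 25% to 30%
}

cumulative = 1.0
for phase, rate in phase_pass_rates.items():
    cumulative *= rate
    print(f"Chance that a drug entering Phase 1 has cleared {phase}: {cumulative:.1%}")

# Under these assumed rates, roughly 6-7% of drugs that enter Phase 1 clear Phase 3.
```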
https://mossbauer.org/understanding-the-phases-of-drug-trials/
Germany hosted the annual summit of the Group of Seven (G7) from June 26-28 at Elmau in the Bavarian Alps. The need to respond to the armed conflict in Ukraine and China's continued global rise dominated the summit. The G7 leaders' communique—and pre-summit minister meetings—did not ignore global health. The communique and other G7 statements identified many global health challenges. However, the group's focus on the threats that war in Europe and shifts in the global balance of power create for its members and a liberal international order pushed global health down the G7's priority list and toward the margins of world politics.
The Times They Are A-Changin'
COVID-19 and the need to be better prepared for future pandemics were priorities at the G7 summit in 2021 at Carbis Bay in the United Kingdom. Squabbles among G7 members in 2020 prevented the group from reaching agreement on how to handle the pandemic. Given the disaster that COVID-19 had become, and with controversies raging about G7 members hoarding vaccine supplies, the G7's credibility was on the line at Carbis Bay. Although the G7's promises on vaccine sharing did not mollify its critics, the group prioritized the pandemic. One year later, COVID-19 and future pandemic governance had no such prominence at Elmau. One must scroll well down the communique to reach the results of the summit's deliberations on COVID-19 and pandemic governance. The difference between the 2021 and 2022 summits is easy to explain. COVID-19 is no longer the searing crisis that G7 countries and the world faced in June 2021. G7 governments sustained national vaccine campaigns that helped control COVID-19. Vaccines used by G7 countries continued to work well enough against variants (for example, the delta and omicron variants) that emerged after Carbis Bay. These strains proved highly transmissible but generally caused less serious illness and fewer deaths than the original virus. As vaccine production increased, G7 countries shared more vaccines globally. By the 2022 summit, vaccine scarcity was no longer a global problem. These developments do not mean G7 members covered themselves in glory concerning the pandemic (far from it), but they highlight why COVID-19 was not one of the most pressing problems the G7 confronted at Elmau. Instead, the G7 focused on new and metastasizing geopolitical threats to its members, the broader community of democracies, and the global balance of power. The new threat came from Russia's invasion of Ukraine, an act of authoritarian aggression that endangers security and democracy in Europe and has caused collateral economic, energy, and other damage around the world. G7 countries have been consumed with helping Ukraine fight Russia and managing the war's aftershocks. This armed conflict and its far-reaching consequences understandably commandeered the G7's attention at Elmau. The mounting threat comes from China. This threat was prominent at the G7 summit in 2021, where G7 members launched a global infrastructure program—the Build Back Better World (B3W) Partnership—to counter China's strategy to increase its global power and influence through its Belt and Road Initiative (BRI). The B3W plan flopped, so, at Elmau, the G7 promised $600 billion for a Partnership for Global Infrastructure and Investment (PGII).
Along with the G7's actions on the Ukraine war, the PGII is a signature outcome from the 2022 summit.
This Wheel's on Fire
Behind the communique's rhetoric about democratic unity, shared values, and commitment to an equitable world, the G7's responses to Russian aggression and Chinese power demonstrate that its members are on the defensive geopolitically in Europe and beyond. This context is not good for global health. The COVID-19 pandemic already saw the G7 fearful that China and Russia could gain balance-of-power and ideological advantages during this crisis, and G7 countries reacted with geopolitical intentions in sharing vaccines. The Ukraine war and the lack of a counterweight to China's BRI again find the G7 reacting to moves by rival powers, a context that will draw global health deeper into geopolitical machinations. For example, PGII funds will be allocated to build "vaccine and other essential medical product manufacturing" capacity in low-income countries and meet the need for more geographical distribution of such capacity highlighted during COVID-19. However, this need pre-dated the pandemic. In all the years of being engaged on global health, the G7 never addressed this need before the geopolitical threat from China's BRI demanded a response. Geopolitical considerations prodded the G7 to take other global health actions, such as addressing global food security problems created by Russia's invasion of Ukraine. The Elmau commitments on pandemic governance (for example, the G7 Pact for Pandemic Readiness) are also motivated by the ideological desire to demonstrate that, in the words of the communique, "when the world is threatened by division," democracies can "find solutions to pressing global challenges," such as pandemics and climate change. This motivation begs the question why, when the world was not geopolitically divided after the Cold War, G7 democracies did not find solutions to pandemics—despite being leaders in global health—or to climate change, a problem they disproportionately contributed to causing. The communique devoted much attention to climate change and supported actions (for example, establishing a global Climate Club) on mitigation of greenhouse gas emissions and adaptation to the damage that climate change causes, especially in low-income countries. Climate change is a threat to global health, so the G7's treatment of it is important. However, geopolitical problems also plague this aspect of the G7's global health agenda. The energy security crisis that the Ukraine war has caused puts the G7 on the defensive concerning climate change. G7 countries are compromising their mitigation ambitions in scrambling to secure fossil fuel supplies to meet their energy needs. The G7's attempts to punish Russia in the energy sector are undercut by other countries, such as China and India (a democracy invited to Elmau as a G7 partner), buying Russian oil. G7 countries are unlikely to generate ideological benefits by providing climate adaptation assistance because they bear great responsibility for causing climate change. However, the lack of geopolitical incentives for adaptation will create problems for G7 policymakers, who are struggling with stretched resources to respond to climate adaptation problems at home and authoritarian aggression, power, and influence abroad.
Further, most global health issues identified by the G7 at Elmau—ranging from noncommunicable diseases to antimicrobial resistance—will not receive more attention because balance-of-power and ideological competition are intensifying. The lack of any connection between these problems and geopolitics means that the long-standing lack of foreign policy interest in them will continue in a world that is more dangerous for G7 countries.
Chimes of Freedom?
At Carbis Bay and Elmau, the G7 attempted to rally its members to meet a historical moment when the power, influence, and credibility of democracies are under threat. Global health is trapped in this moment. China and Russia turned the geopolitical tables on democracies despite decades of global health leadership by G7 countries. That reality sends a warning that such leadership does not produce balance-of-power or ideological benefits for democracies. Nor did those decades of global health leadership make the G7 democracies reliable partners concerning the two greatest transnational threats to global health—pandemics and climate change. At the same time, global health does not escape the distorting lens of geopolitics that countries, including G7 members, now apply in foreign policy. This dynamic tempts rivals to exploit some global health issues for power and influence rather than to support a rules-based multilateral order, shared values, or an equitable world. As the Elmau summit demonstrates, democracies in geopolitical trouble can conflate this temptation with acting as defenders of the principles and values of democracy.
https://www.thinkglobalhealth.org/article/g7-summits-geopolitical-pivot-signals-difficult-future-global-health
TOPIC — This HE focuses on a specific group of historians writing in a specific period of history (the 20th century). As you read the chapters on 20th century historiography in Breisach and Bentley, please consider the variety of 20th century historians and historical schools of thought. (1) Is there a particular historian or group of historians in the 20th century who have made a difference in historiographical thought? You are writing about ONE historian or ONE group/school of historians. (2) Do you have some sort of affinity with any of the historians or schools discussed? In other words, who do you like, and why? The first question is more about importance and significance. The majority of the essay (say 75-80%) should be taken up with your discussion of a major historian or group of historians. The essays must target 1250 words, and should begin with a clear and easily identifiable thesis statement. All essays must have a suitable title. Footnotes are required. All sources, as well as the bibliography, must conform to Turabian's A Manual for Writers, 9th ed.
The lack of uniformity in historical writing and thought observed from the beginning of the 20th century until today is reflected in the many different criteria by which the main trends or historiographical schools usually take their names. These names can be derived from a philosophical approach (positivism), a journal (the French journal Annales), an author (Marx and the various forms of Marxism) or a university with which the trend is associated (like the Bielefeld school in Germany). Likewise, an interpretive article that marks a milestone can give rise to a name (such as Lawrence Stone on "the return to narrative"). Other names have arisen from problems or challenges of the present (environmental history and global history). In some cases, it has been claimed that a previously neglected analytical or thematic category has defined trends, such as Alltagsgeschichte (the history of daily life), the Italian microstoria and the history of women and/or gender. In some cases, a happy expression has come to designate a change of orientation, such as the "linguistic turn". Below, we present a brief overview of the most significant objectives, socio-cultural contexts, authors and works of the different historiographical trends distinguishable in the 20th and early 21st centuries, especially in the West. Our tour starts with the dominant historiographical model of the early 20th century and concludes with commentary on the controversial role played by history in today's media environment. Until the renewal of the writing of history advocated by the French journal Annales d'Histoire Économique et Sociale in 1929, the predominant model for historians could be characterised as "history that does not neglect the story, attentive to the great figures, to exemplary destinies, to the fate of nations and empires". A history of political predominance made through careful criticism of sources (especially texts), seen from above, punctuated by great (and not so great) events. A history that the militant Lucien Febvre, the co-founder of Annales with M. Bloch, and many others would later describe as "historicising history" and "histoire événementielle" ("factual history").
From the point of view of the theory of history (of historiography as metahistory), this dominant historiographical model was a roughly harmonious combination of the classical German historicism embodied by Ranke, Hegelian idealism and Comtian scientific positivism, in an atmosphere prone to exalting the nation. It has been said that the history written in France between 1870 (the Franco-Prussian War) and 1914 (the start of the First World War) served as a prelude to widespread patriotic mobilisation. The interest this historiographical model showed in the publication and systematic criticism of historical sources remains quite valid. The centenary of the foundation of the French journal Annales d'Histoire Économique et Sociale by Marc Bloch and Lucien Febvre will be celebrated in a few years. The journal was founded in 1929, at a socio-cultural crossroads: the university of Strasbourg, a city returned to France in 1918. It arose almost without a programmatic manifesto. After the upheavals of World War II (in which Marc Bloch died fighting in the resistance against Nazism), the journal resumed publication under its most lasting title: Annales. ESC (Économies, Sociétés, Civilisations). Fernand Braudel took over running it in 1957. By then, Braudel had already published his masterly study of The Mediterranean and the Mediterranean World in the Age of Philip II, a great thesis that marks a historiographical milestone for its articulation of different historical tempos — the longue durée and the événement — the value it gives to geographical conditioning, and the quality of its prose. Among the many authors who could be classified as belonging to this "school" or trend grouped around the Annales, we might highlight Emmanuel Le Roy Ladurie, the author of Les Paysans de Languedoc (1966), a work of the kind so frequent among those historians. (After shifting from a quantitative socio-structural history based on serial sources to an anthropological and narrative history, Le Roy Ladurie wrote Montaillou, village occitan in 1975.) What were the aspirations of the Annales school? L. Febvre and M. Bloch wanted to broaden the historian's field and make historians aware of false objectivism, showing them that historical knowledge is obtained from the historian's hypotheses and from problems arising in the present, in close association with economics, geography and sociology. In addition, in the face of a history centred on rulers, they aimed to accommodate the common man in a Europe shaken by the Soviet Revolution of 1917. They also sought to expand the subject of study, in pursuit of a total or comprehensive history, as well as the concept of sources. Fernand Braudel died in 1985, covered with accolades. His works from the 1960s and 1970s on civilisations and their relationship to capitalism had been warmly received in some universities in the United States. The Annales school would be very influential in Latin countries such as Italy and Spain, whose university environments were familiar with the French language and culture. In recent decades, this influence has diminished considerably. As an outsider, Couteau-Begarie has evaluated the strengths and weaknesses of the Annales school. Its clearest strengths include its contributions to economic history, historical demography, the history of material culture, the history of mentalities and social history (although the latter suffers from a fundamental indeterminacy in Annales).
Regarding the limits or weaknesses of the school's praxis, it showed little interest in ancient history or the 20th century, with a clear preference for research on the pre-industrial world of the Ancien Régime. One can speak of an elective affinity between the interest Annales historians showed in the most stable structures and societies and the primacy they gave to mediaeval and modern history. Bloch, moreover, came from mediaeval history and Febvre from early modern history. As for thematic areas, the Annales school hardly cultivated political history, the history of international relations or, with the important exception of Febvre himself, biography. By focusing on the pre-industrial era, the Annales school distinguishes itself from, and contrasts with, the German social history practised by the Bielefeld school, to which it is nonetheless related.
https://assignmenthub.net/is-there-a-particular-historian-or-group-of-historians-in-the-20th-century/
During a 1987 skydive, parachutist Gregory Robertson saw that a fellow skydiver, Debbie Williams, had collided in mid-air with a third parachutist. Williams, rendered unconscious by the blow, was hurtling towards the ground at speeds in excess of 100 mph. At 13,500 ft, Robertson was well above Williams when he started to dive towards her. He reached a speed of about 200 mph and, catching up with Williams in mid-air, went into a spread-eagle position to slow down and match her speed. As they both descended rapidly, and with seconds to spare, Robertson opened her chute and then his own, saving her life. He had expertly controlled his speed by changing his body position, and therefore his air resistance and drag. Even during a headfirst dive, Robertson reached an ultimate speed that he could not exceed, called his terminal velocity. Just what is terminal velocity, and in particular, what is a raindrop's terminal velocity? Any mass is attracted to the Earth by the pull of gravity. Gravity accelerates all objects towards the ground at a specific rate (about 9.8 meters per second each second, in the absence of air resistance). Without any other forces present, the speed of an object in free fall will increase the farther or longer it falls. However, air friction, or air resistance, also exerts a force on a falling object such as a raindrop, and this force opposes the weight force of gravity. The air resistance and the weight of the droplet together determine the terminal velocity for a given object. In general, the air resistance on an object depends upon several variables. First, it depends upon the shape of the object. Its shape determines the object's drag coefficient: the more aerodynamic the shape, the less drag. Second, it depends upon the size of the object, specifically the cross-sectional area presented to the airflow (perpendicular to the direction of travel). And lastly, it depends upon the speed of the object. At low speeds the object's resistance is directly proportional to speed, while at higher speeds it is proportional to the speed squared. Most objects falling through the air are in this higher-speed regime, even though their speed may not be great compared with some other velocities. The speed at which an object falls increases until the upward force of air resistance equals the downward force of gravity, at which point the object has reached its terminal velocity. We know raindrops come in different sizes, so we need to consider an average size. Let us consider the average raindrop to have a radius of about 0.2 cm and a mass of about 0.034 grams. Aerodynamic engineers would give the rather round shape of a raindrop a drag coefficient of about 0.5. When all the parameters are considered, the terminal velocity of a typical raindrop is calculated to be about 9 meters per second, or 20 mph. A smaller raindrop of radius 0.15 cm has a terminal velocity of about 7 meters per second, or 16 mph. In general, depending upon their size, raindrops fall at between 15 and 25 miles per hour, no matter how high they are when they begin their descent. Not all falling objects have the low terminal velocity of raindrops. For example, you sometimes see people firing a gun into the air. Rifle bullets can exit the barrel with a speed of 2000 miles per hour, yet a bullet returns to Earth at a terminal velocity of only about 200 mph because of air resistance. Although the projectile returns at only a fraction of its original speed, that is still enough to cause an injury.
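As a rough illustration of the calculation described above, the short Python sketch below (not from the original article) estimates terminal velocity from the standard drag balance v = sqrt(2mg / (rho * Cd * A)). The sea-level air density of about 1.2 kg/m^3 is an assumed value; the raindrop's radius, mass and drag coefficient are the figures quoted above.

import math

def terminal_velocity(mass_kg, radius_m, drag_coeff, air_density=1.2, g=9.8):
    # Terminal velocity is reached when drag (0.5 * rho * Cd * A * v^2) equals weight (m * g).
    area = math.pi * radius_m ** 2  # cross-sectional area presented to the airflow
    return math.sqrt(2 * mass_kg * g / (air_density * drag_coeff * area))

# Typical raindrop from the article: radius 0.2 cm, mass 0.034 g, drag coefficient about 0.5
v = terminal_velocity(mass_kg=0.034e-3, radius_m=0.2e-2, drag_coeff=0.5)
print(f"{v:.1f} m/s (about {v * 2.237:.0f} mph)")  # about 9 m/s, in line with the 20 mph figure above

Running the same function for the smaller 0.15 cm drop, with its proportionally smaller mass, gives a value in the same range as the article's 7 m/s estimate, which is why raindrop size, rather than starting height, determines how fast rain hits the ground.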
https://stelioskallinikou.com/places-between
Pharmacy team members, and those delivering supplies of medicines and clinical equipment, should have priority access to fuel amid a growing crisis in supply and closed petrol station forecourts, the Pharmacists' Defence Association (PDA) has said. In an open letter to Boris Johnson, Mark Koziol, the PDA chairman, explained that 'many' members of pharmacy staff are finding it difficult to source fuel for their vehicles and get into work, which puts patients at risk. He said: 'As you will be aware from the early days of the Covid-19 pandemic, because some of the other healthcare professions chose to see their patients online, pharmacies became the front line in primary care. 'Pharmacies were one of the few essential services able to stay fully open, providing vital access not only to medicines, but also face-to-face healthcare advice for patients. As such, pharmacists were considered key workers. 'Clearly, the logistical and transportation systems that pharmacies are reliant upon to secure their vital supplies of medicines are also a vital component of the pharmacy service. 'We are therefore calling on you as Prime Minister to ensure that the government prioritises fuel provision for healthcare workers to include pharmacists, and for those delivering supplies of medicines and clinical equipment.' Some pharmacists took to Twitter to express their concerns. Similarly, Dr Chaand Nagpaul, BMA council chair, called on the Government to ensure general practice teams and other healthcare workers get 'priority access' to fuel so they can get to work and reach patients. He said: 'Emergency and essential workers rely on fuel both to travel to work and for their work itself – whether this is to get to hospitals, practices and other healthcare settings, or for ambulances to reach people in urgent need of care and GPs to visit very ill patients at home. 'Everyone will have their own reasons for needing to fill up, but as pumps run dry there is a real risk that NHS staff won't be able to do their jobs, and provide vital services and care to people who urgently need it. 'While the Government has said it is putting plans in place to alleviate the shortage of HGV drivers to transport fuel, the results of this won't be immediate. Healthcare and essential workers must therefore be given priority access to fuel so they can continue their crucial work and guarantee care to patients.' Up to 90% of petrol stations are thought to have run dry. Reports that the army may be brought in to help ensure fuel supplies for essential services, such as the NHS, have been denied, as the public have been urged to stop panic buying.
https://www.thepharmacist.co.uk/news/fuel-crisis-pharmacy-team-members-must-get-priority-access/
Administrative Law & Governance Colloquium 2022 (Artificial Administration): Some Reading

This year's Administrative Law & Governance Colloquium kicks off next week with Professor Catherine Sharkey of New York University. The Colloquium will be virtual and, of course, free of charge. You can register here. Professor Sharkey will be discussing the report she co-authored for the Administrative Conference of the United States on artificial intelligence in US federal agencies: Government by Algorithm. Here is the schedule for this year's Colloquium:
- Tuesday, February 8, 2022, 11.30 to 12.50 EST: Catherine Sharkey (NYU), Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies
- Tuesday, February 15, 2022, 20.30 to 22.00 EST: Janina Boughey (UNSW), The Automated State
- Tuesday, March 8, 2022, 11.30 to 12.50 EST: Sunny Kang (Eightfold AI), Algorithmic Accountability in Public Administration: the GDPR Paradox
- Tuesday, March 22, 2022, 11.30 to 12.50 EST: Teresa Scassa (Ottawa), Administrative Law and the Governance of Automated Decision-making
- Tuesday, April 5, 2022, 11.30 to 12.50 EST: Jennifer Cobbe (Cambridge), Reviewable Automated Decision-making
As in previous years, I have collected some materials which might be of interest for attendees at the Colloquium:
- Andrew Le Sueur, "Robot Government: Automated Decision-Making and its Implications for Parliament" in Alexander Horne and Andrew Le Sueur eds., Parliament: Legislation and Accountability (Hart Publishing, Oxford, 2016).
- Cary Coglianese and David Lehr, "Regulating by Robot: Administrative Decision Making in the Machine-Learning Era" (2017) 105 Georgetown Law Journal 1147.
- Cass Sunstein, "Algorithms, Correcting Biases" (2019) 86 Social Research 499.
- Catherine Sharkey, "AI for Retrospective Review" (2021).
- Celine Castets-Renard, "Comparing European and Canadian AI Regulation" (Law Commission of Ontario, Toronto, 2021).
- danah boyd and Kate Crawford, "Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon" (2012) 15 Information, Communication & Society 662.
- Digital Disruption White Paper Series, Responsible Artificial Intelligence in the Government of Canada, version 2.0 (April 10, 2018).
- Frank Pasquale and Danielle Citron, "The Scored Society: Due Process for Automated Predictions" (2014) 89 Washington Law Review 1.
- Jenna Burrell, "How the Machine 'Thinks': Understanding Opacity in Machine Learning Algorithms" (2016) (January to June) Big Data & Society 1.
- John Danaher, "The Threat of Algocracy: Reality, Resistance and Accommodation" (2016) 29 Philosophy & Technology 245.
- Karen Yeung, "Algorithmic Regulation: A Critical Interrogation" (2018) 12 Regulation & Governance 505.
- Law Commission of Ontario, Regulating AI: Critical Issues and Choices (Toronto: April 2021).
- Mireille Hildebrandt, "Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics" (2018) 68 University of Toronto Law Journal 12.
- Monika Zalnieriute, Lyria Bennett Moses and George Williams, "The Rule of Law and Automation of Government Decision-Making" (2019) 82 Modern Law Review 425.
- Pedro Domingos, "A Few Useful Things to Know about Machine Learning" (2012) 55(1) Communications of the ACM 78.
- Taylor Owen, "The Violence of Algorithms" in Taylor Owen, Disruptive Power: The Crisis of the State in the Digital Age (Oxford University Press, Oxford, 2015).
- The Citizen Lab, Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System (Toronto, University of Toronto, 2018). Again, register here. Here is the description of this year’s Colloquium: In the era of Big Data, governments and public entities are turning more and more to automation, digitization and machine learning to operate more effectively and efficiently. The extent of technological change in and on public administration is difficult to quantify, but concern has grown about the use of cutting-edge algorithms and forms of artificial intelligence to support governmental operations. There have been high-profile examples of maladministration causing interference with privacy interests or the unlawful withdrawal of benefits. States have responded by developing regulatory frameworks, such as the General Data Protection Regulation in the European Union and the Directive on Automated Decision-Making in Canada. Are these frameworks sufficiently robust to cabin automated decision-making, digitization and use of machine learning? Do we need new accountability mechanisms to deal with rapid technological evolution in the machinery of government? And what role for judicial review, as the rise of a “culture of justification” in administrative law and the expansion of the “duty of fairness” impose strict requirements of justification, intelligibility and transparency which machines might not be able to meet? Speakers from Australia, Canada, the United Kingdom and the United States will reflect on these questions and others during the 2022 Colloquium. This content has been updated on April 13, 2022 at 22:44.
https://www.administrativelawmatters.com/blog/2022/01/31/administrative-law-governance-colloquium-2022-artificial-administration-some-reading/
Competitive teams are normally created to support the 9 v 9 or 11 v 11 system based on age group.
- U9-U10 - 7 v 7 with normally 12 players per team.
- U11-U12 - 9 v 9 with normally 14 players per team.
- U13-U18 - 11 v 11 with normally 16-18 players per team.
Coaches are allowed to pull players from lower age-group teams to play, but never the other way around. For example, if a U12 coach has injured players or wishes to "invite" a player from a U11 team, he/she may do so with the permission of the U11 coach. However, if a U15 coach has players injured or wishes to "invite" a guest player, he/she cannot invite any player registered as U16 or above. View list of Current Teams by age.

Positions & Skills

Offensive Positions
Center-Forward & Strikers
The center-forward is often a tall player, typically known as a target man, whose main function is to score the majority of goals on behalf of the team. The player may also be used to win long balls or receive passes and "hold up" the ball as team-mates advance, or to help teammates score by providing a pass ('through ball' into the box); the latter variation usually requires quicker pace. Most modern center-forwards operate in front of the second strikers or central attacking midfielders, and do the majority of the ball handling outside the box. The present role of center-forward is sometimes interchangeable with that of an attacking midfielder, especially in the 4–3–1–2 or 4–1–2–1–2 formations. A center-forward usually must be strong, to win key headers and 'outmuscle' defenders. The term center-forward is taken from the early football playing formation in which there were five forward players: two outside forwards, two inside forwards, and one center-forward. The striker is not greatly different from the center-forward. Strikers are better known for their ability to peel off defenders, run into space via the blind side of the defender and receive the ball in a good goalscoring position, as typified by Miroslav Klose. They are typically fast players with decent ball control and dribbling abilities. A good striker should be able to shoot confidently with both feet, possess great power and accuracy, and have the ability to pass the ball under pressure in breakaway situations. While many strikers also wear the number 9 shirt, the position is traditionally associated with the number 10, and occasionally numbers 7 and 11, though both of these are more common for wingers.

Midfield Positions
Midfielders are generally positioned on the field between their team's defenders and forwards. Some midfielders play a more defensive role, while others blur the boundaries between midfielders and forwards. The number of midfielders on the team and their assigned roles depends on the team's formation; the collective group of these players on the field is sometimes referred to as the midfield. Most managers assign at least one midfielder to disrupt the opposing team's attacks, while others may be tasked with creating goals, or have equal responsibilities between attack and defence. Midfielders are the players who typically travel the greatest distance during a match. Types of midfielders: center midfielder, wide midfielder, holding midfielder, attacking midfielder, "false 10" or "central winger", and winger. Read more here.

Defensive Positions
Center Back
The task of the centre-back (also known as a centre-half or central defender) is to prevent opposing players, particularly centre-forwards, from scoring.
Centre-backs accomplish this by blocking shots, tackling, intercepting passes, contesting headers and marking forwards to discourage the opposing team from passing to them. The centre-backs of a team will generally be the closest outfield players to the goal when defending, so the position of the centre-backs most often determines the offside line. The manager of the team may instruct the centre-backs to push further up the field, or retreat deeper and closer to their goal, depending on their preferred strategy and the attributes of the opposing team. With the ball, centre-backs are generally expected to make short and simple passes to their teammates, or to kick unaimed long balls down the field. For example, a clearance is a long unaimed kick intended to move the ball as far as possible from the defender's goal. During normal play, centre-backs are unlikely to score goals. However, when their team takes a corner kick or other set piece, centre-backs may move forward to the opponents' penalty area: if the ball is passed in the air towards a crowd of players near the goal, then the heading ability of a centre-back is useful when trying to score. In this case other defenders or midfielders will temporarily move into the centre-back positions. In the modern game, most teams employ two or three centre-backs in front of the goalkeeper. The 4–2–3–1, 4–3–3, and 4–4–2 formations each use two centre-backs. There are two main defensive strategies used by centre-backs: the zonal defence, where each centre-back covers a specific area of the pitch; and man-to-man marking, where each centre-back has the job of covering a particular opposition player.

Sweeper
The sweeper (or libero) is a more versatile centre-back who "sweeps up" the ball if an opponent manages to breach the defensive line. This position is rather more fluid than that of other defenders who man-mark their designated opponents. Because of this, it is sometimes referred to as libero ([ˈlibero]), a term which originated from the Italian name for this position, "battitore libero" (free hitter). Though sweepers may be expected to build counter-attacking moves, and as such require better ball control and passing ability than typical centre-backs, their talents are often confined to the defensive realm. For example, the catenaccio system of play, used in Italian football in the 1960s, employed a purely defensive sweeper who only "roamed" around the back line. The more modern libero possesses the defensive qualities of the typical libero whilst being able to expose the opposition during counterattacks. The Fundell-libero has become more popular in recent times, with the sweeper transitioning into the most advanced forward in an attack. This variation on the position requires great pace and fitness. Whilst rarely seen in professional football, the position has been used extensively in lower leagues. The modern libero sits behind the centre-backs as a sweeper before charging through the team to join in the attack. Some sweepers move forward and distribute the ball up-field, while others intercept passes and get the ball off the opposition without needing to hurl themselves into tackles. If the sweeper does move up the field to distribute the ball, they will need to make a speedy recovery run back to their position. In modern football, its usage has been fairly restricted, with few clubs in the biggest leagues using the position.
The position is most commonly associated with Franz Beckenbauer and Gaetano Scirea, who pioneered it, and later with Franco Baresi and Matthias Sammer in the 1990s, although they were not the first players to play this position; earlier proponents include Alexandru Apolzan, Ivano Blason, Velibor Vasović and Ján Popluhár. Though it is rarely used in modern football, it remains a highly respected and demanding position.

Full-Backs
The full-backs take up the holding wide positions and traditionally stayed in defense at all times, until a set-piece. Modern full-backs take a more attacking role, overlapping with wingers down the flank. There is one full-back on each side of the field except in defenses with fewer than four players, where there may be no full-backs and instead only center-backs. The traditional English full-back was a large, strong man who would make substantial use of "hacking" – deliberately kicking the shins of opponents, a practice that was accepted as legal in Britain but not in other countries, and caused major controversy as the game became increasingly internationalized from the 1950s on. It is now effectively banned everywhere, and it is this in part that has given rise to a different set of defensive roles. Full-backs have become essential in modern formations such as the 4-3-3 or the now commonly used 4-2-3-1. In the modern game, full-backs have taken on a more attacking role than was traditionally the case. Wingerless formations, such as the diamond 4–4–2 formation, demand that the full-back cover considerable ground up and down the flank. Some of the responsibilities of modern full-backs include:
- Providing a physical obstruction to opposition attacking players by shepherding them towards an area where they exert less influence. They may manoeuvre in a fashion that causes the opponent to cut in towards the centre-back or defensive midfielder with his weaker foot, where he is likely to be dispossessed. Otherwise, jockeying and smart positioning may simply pin back a winger in an area where he is less likely to exert influence.
- Making off-the-ball runs into spaces down the channels and supplying crosses into the opposing penalty box.
- Taking throw-ins, which are often assigned to full-backs.
- Marking wingers and other attacking players. Full-backs generally do not commit to challenges in their opponents' half. However, they aim to quickly dispossess attacking players who have already breached the defensive line with a sliding tackle from the side. Markers must, however, avoid keeping too tight on opponents or risk disrupting the defensive organisation.
- Maintaining tactical discipline by ensuring other team-mates do not overrun the defensive line and inadvertently play an opponent onside.
- Providing a passing option down the flank; for instance, by creating opportunities for sequences like one-two passing moves.
- In wingerless formations, covering the roles of both wingers and full-backs, although defensive work may be shared with one of the central midfielders.
- Additionally, attacking full-backs help to pin both opposition full-backs and wingers deeper in their own half with aggressive attacking intent. Their presence in attack also forces the opposition to withdraw players from central midfield, which the team can turn to its advantage.
Due to the physical and technical demands of their playing position, successful full-backs need a wide range of attributes, which make them suited for adaptation to other roles on the pitch.
Many of the game's utility players, who can play in multiple positions on the pitch, are natural full-backs. A rather prominent example is the Real Madrid full-back Sergio Ramos, who has played on the flanks as a full-back and in central defense throughout his career. In the modern game, full-backs often chip in a fair share of assists with their runs down the flank when the team is on a counter-attack. The more common attributes of full-backs, however, include: - Pace and stamina to handle the demands of covering large distances up and down the flank. - A healthy work rate and team responsibility. - Marking and tackling abilities and a sense of anticipation. - Good off-the-ball ability to create attacking opportunities for his team by running into empty channels. - Dribbling ability. Many of the game's eminent attacking full-backs are excellent dribblers in their own right and occasionally deputise as attacking wingers. - Player intelligence. As is common for defenders, full-backs need to decide during the flow of play whether to stick close to a winger or maintain a suitable distance. Full-backs that stay too close to attacking players are vulnerable to being pulled out of position and leaving a gap in the defence. A quick passing movement like a pair of one-two passes will leave the channel behind the defending full-back open. This vulnerability is a reason why wingers considered to be dangerous are double-marked by both the full-back and the winger. This allows the full-back to focus on holding his defensive line. Wing-Backs The wing-back is a modern variation on the full-back with heavier emphasis on attack. Today, this position is considered a half-back. This type of defender focuses more heavily on attack than defense, yet they must have the ability, when needed, to fall back and mark opposing players to lessen the threat of conceding a goal-scoring opportunity. Some formations have half-back players that mainly focus on defending, and some that focus more on attack. In the evolution of the modern game, wing-backs are the combination of wingers and full-backs. As such, it is one of the most physically demanding positions in modern football. Wing-backs are often more adventurous than full-backs and are expected to provide width, especially in teams without wingers. A wing-back needs to be of exceptional stamina, be able to provide crosses upfield and defend effectively against opponents' attacks down the flanks. Some players who exemplify these traits are Shannon Cole of Western Sydney Wanderers, Ahmed Elmohamady of Hull City, and DeAndre Yedlin of Spurs. A defensive midfielder is usually fielded to cover the advances of wing-backs. It can also be occupied by wingers and side midfielders in a 3-centre-back formation. Goalkeeper The goalkeeper is one of the most important positions on the field. The goalkeeper or goalie is a designated player charged with directly preventing the opposing team from scoring by intercepting shots at goal. The position requires quick reflexes and the ability to jump, slide and direct players from a defensive position.
http://strikeforcesoccer.com/teams/
I was introduced to the concept of scenario planning during my participation in ACEC’s Senior Executive Institute. Our instructors instilled the importance of using this concept as an anchor of an engineering company’s overall strategic plan. Any successful company has a strategic plan that guides its course to future growth and sustainability. Whether your company’s strategic plan covers the next two years, the next 10 years, or more, strategic decisions are more likely to bear fruit if they have been truly vetted by imagining how they will play out within a variety of alternative futures. Scenario planning is the tool for imagining those possible futures. When we consider a range of possible futures, we are far better prepared than if we operate under an assumption of one official future. If your company is not agile enough to anticipate alternatives in its strategic plan, then you are destined to fall short within our ever-changing environment. If you don’t believe me, just think of the varied circumstances this past year has thrown at us. How could scenario planning have helped us adapt to a global pandemic? The basic premise of scenario planning rests on the idea that you cannot predict the future, but you can prepare for it…so how do we start preparing? Step 1: Scenario Planning Roadmap First off, we need to have an awareness of our surroundings. Before we can develop scenarios, we must first research the drivers within our Societal, Technological, Economic, Environmental, Political and Industry context (STEEPI). Our main focus here should be on the trends and driving forces that will form a pattern of events that may happen over time. Step 2: Generate Scenarios - Define what decisions you are considering and when is the analysis. - Identify the major stakeholders. - Who has interest in the scenario? - How might they influence the situation? - What is the future outcome? - Identify pre-determined elements—these are things we know for sure. - Consider surprises and critical uncertainties. - What could change our fundamental assumptions? - What are the risks of being wrong? - Construct the scenario. - Consider scenario plausibility. - Clearly differentiate between scenarios. - Challenge organizational wisdom. Step 3: Application We then apply what we’ve learned in the last two steps by analyzing the scenarios and asking the following questions: - What red flags did the scenario identify and how do we avoid and/or mitigate? - Do we need to alter our strategic plan? - How does this impact what we know? - What opportunities have been identified? Now that you know the steps to successful scenario planning, make sure you add this to your strategic planning toolbox. With the significant changes our industry is currently experiencing, preparing for the future will be key to your company’s ability to thrive and survive in the long run.
https://meadhunt.com/scenario-planning/
Integrate literacy across the curriculum. As all middle grades educators know, the need to integrate literacy across the curriculum is a goal of every discipline and is consistently included in the standards for each professional area. Add to that the fact that the Common Core State Standards include an all-encompassing focus on reading and writing, and it is very clear that literacy is no longer left only to the language arts teachers. But how do you do it? Questions from the Classroom has the answers! This book is designed not only to provide educators with a research and theoretical base but, more importantly, to offer practical strategies for integrating literacy across the curriculum. Each chapter includes step-by-step procedures and sample lessons that address all of the subject areas, including mathematics, science, social studies, and language arts. Intended for middle grades teachers in all disciplines, Questions from the Classroom is also a must-read for administrators, curriculum coordinators, assistant principals, literacy specialists, and any other individuals involved in instruction in the middle grades.
https://my.amle.org/Shop/Store/Product-Details?productid=%7B881E8FCB-0CFA-494C-856F-BF11461E319B%7D
The fact that we’re able to hear sounds in our environment and process them almost instantly is nothing short of miraculous. Without even having to think about it, we can hear noises both loud and soft, near and far away. The pathway that these sounds travel in our ears is intricate, and if anyone of its parts isn’t working correctly, we can experience hearing loss. Sound and Your Ear Sound waves begin by entering your outer ear, called the pinna. It funnels these waves down your ear canal and into your middle ear, which is separated by the eardrum. The eardrum is a flexible membrane which begins to move when sound vibrations hit it, and that in turn starts to move three small bones in the middle ear: the hammer, anvil, and stirrup. These bones work together to amplify the sound waves and move them to your inner ear. The middle ear is where hearing loss begins to occur, so it’s important that all these moving parts are free of damage. The inner ear is full of fine hair-like cells, replete with nerve endings, within a spiral-shaped organ called the cochlea. These tiny hair cells collect information from sound vibrations coming in from the middle ear and transmit those vibrations into nerve impulses (via the auditory nerve) to your brain. The brain processes and interprets these signals as sounds, allowing us to hear the noises around us. Sensorineural Hearing Loss The most common type of hearing loss, sensorineural hearing loss occurs due to a problem with the inner ear or auditory nerve. It presents itself when either the auditory nerve or the hair-like cells in the cochlea have sustained damage or are missing. This results in the inability to send complete nerve signals to the brain. This kind of hearing loss can be caused by the following: – Head trauma – Illnesses – Drugs that are toxic to hearing (ototoxicity) – Genetics – Malformation of the inner ear – Exposure to loud noise – Aging Conductive Hearing Loss Although rather uncommon and typically temporary, a conductive hearing loss can occur in some patients. It’s caused when an issue in the outer or middle ear blocks sound from the inner ear. Treatment involves the use of medication or surgery while other individuals opt to use hearing aids to improve their hearing ability. Conductive hearing loss can be caused by: – Ear infections – Benign tumors – Swimmer’s Ear – Foreign object in the ear – Fluid in the middle ear from colds – Absence or malformation of the outer ear, ear canal, or middle ear – Allergies – Perforated eardrum – Impacted cerumen (earwax) Mixed Hearing Loss When multiple parts of the ear’s anatomy are damaged, a mixed hearing loss can occur. In most cases, both the middle or outer ear along with the auditory nerve or inner ear have sustained an injury of some type or have encountered one of the conditions listed above. The conductive hearing loss present may be reversible while the sensorineural hearing loss is often permanent. Auditory Processing Disorders Rather than a hearing impairment which affects the ability to detect sounds, Auditory Processing Disorder (APD) causes individuals to struggle with their ability to organize, analyze, and interpret sounds around them. For those with APD, all parts of the ear are functioning properly, but the auditory processing centers in the brain do not function normally. APD does not always include hearing loss and many times the treatments for this disorder versus a hearing impairment are dramatically different. 
Hearing Loss FAQs An audiologist is a primary licensed hearing healthcare professional who specializes in evaluating, diagnosing, treating and managing hearing and balance disorders. A hearing aid dispenser (hearing aid specialist or dealer) is a person licensed to sell hearing aids and is limited to testing hearing solely for fitting hearing aids. A hearing screening is a limited test to assess whether there is a hearing loss present by listening for soft sounds via headphones or insert earphones. A diagnostic hearing evaluation is THOROUGH and gives more information about not only if a hearing loss is present, but also how much hearing loss is present, what part of the ear the hearing loss is originating from (outer ear, middle ear, inner ear, or nerve), and whether the hearing loss is permanent or correctable.
https://preferredaudiologycare.com/hearing-health/types-of-hearing-loss/
What is Differentiated Instruction?

Beginning Our Journey
Welcome to our Differentiated Instruction journey. As a way to begin, please:
- Reflect on 1-2 of the most powerful learning experiences you've had in your life, from your earliest memories, through your years as a student and teacher.
- Record a short description of each of your experiences on the note card provided.
- Place your note card on the appropriate place on the timeline.
- When you are finished, take a few moments to share your reflections with a partner, and hear their stories. We will meet as a whole group to complete our discussion.

The Ten Principles of Successful Classrooms
Listed below are four of The Ten Principles of Successful Classrooms.
1.) Connected Learning
2.) Individual Learning Path
3.) Student Responsibility for Learning
4.) Focus on Higher-Order Open-Ended Problem-Solving
Discuss what you think each principle means with a small group.

The Ten Principles of Successful Classrooms
- Connected Learning: Students see learning as being connected, both across the disciplines and to their lives.
- High Academic Standards: All students are expected to achieve at high levels, utilizing the teacher, peers, and other resources to meet with success.
- Focus on Higher-Order, Open-Ended Problem-Solving: Problem-solving activities are the focus of the learning environment, setting a context within which to learn lower-order skills.
- Technology Infusion: Technology is used as a tool and a resource to support learning and is not seen as a goal unto itself.
- Global Citizenship: Students understand their role as contributors to a global society and make strides to contribute to the betterment of their world.
- High Social Capital: Students have strong, consistent relationships with adults in school; parents are involved as partners in the learning process.
- Student Responsibility for Learning: Students take responsibility for setting goals, scheduling time, utilizing resources, and making other decisions.
- Individual Learning Path: Teachers differentiate instruction to meet the needs of each individual learner.
- Working Well Collaboratively: Students engage in collaborative problem-solving on open-ended problems with peers, working independently on subtasks.
- Learning from a Felt Need: Students are presented with meaningful, higher-order activities that create the context for learning and build a felt need to learn the lower-order skills.

DIFFERENTIATED INSTRUCTION
What is Differentiated Instruction? It means changing the pace, level, or type of instruction provided in response to an individual learner's needs, learning style or interests.

Key Principles of a Differentiated Classroom
- The teacher is clear about what matters in subject matter.
- The teacher understands, appreciates, and builds upon student differences.
- Assessment and instruction are inseparable.
- The teacher adjusts content, process, and product in response to student readiness, interests, and learning profile.
- All students participate in respectful work.
- Students and teachers are collaborators in learning.
- Goals of a differentiated classroom are maximum growth and individual success.
- Flexibility is the hallmark of a differentiated classroom.
Source: Tomlinson, C. (2000),
Differentiating Instruction for Academic Diversity. San Antonio, TX: ASCD.

How Does Research Support DI?
Differentiated Instruction is the result of a synthesis of a number of educational theories and practices. Brain research indicates that learning occurs when the learner experiences moderate challenge and relaxed alertness (readiness) — that is, when the learner experiences neither boredom nor anxiety and is neither over- nor under-challenged. Psychological research reveals that when interest is tapped, learners are more likely to find learning rewarding and to become more autonomous as learners.

Agenda, June 26, 2009
- Beginning our Journey
- 10 Principles of a Successful Classroom
- Differentiation Overview PowerPoint
- Centers: How-to sheets, Exit Cards, Special Child, Totally Ten, Powerful Facilitation, Instructional Strategies, Differentiation Grid, Student Responsibility
- Gum Drop Houses
- Rubrics
- Assessment

OPTIONS FOR DIFFERENTIATION OF INSTRUCTION
To Differentiate Instruction by Readiness: add or remove scaffolding; vary difficulty level of text and equalizer adjustments (complexity, open-endedness, etc.); supplementary materials; adjust task familiarity; vary direct instruction by small group; adjust proximity of ideas to student experience. Useful instructional strategies: tiered activities, tiered products, compacting, learning contracts, tiered tasks/alternative forms of assessment.
To Differentiate Instruction by Interest: encourage application of broad concepts and principles to student interest areas; give choice of mode of expressing learning; use interest-based mentoring of adults or more expert-like peers; give choice of tasks and products (including student-designed options); give broad access to varied materials and technologies. Useful instructional strategies: interest centers, interest groups, enrichment clusters, group investigation, choice boards, MI options, internet mentors.
To Differentiate Instruction by Learning Profile: create an environment with flexible learning spaces and options; allow working alone or working with peers; use part-to-whole and whole-to-part approaches; vary teacher mode of presentation (visual, auditory, kinesthetic, concrete, abstract); adjust for gender, culture, language differences. Useful instructional strategies: multi-ability cooperative tasks, MI options, Triarchic options, 4-MAT.
(C. A. Tomlinson, UVa, 1997)

Planning Lessons — Readiness
Foundational to Transformational; Concrete to Abstract; Simple to Complex; Single Facet to Multiple Facets; Small Leap to Great Leap; Structured to Open-Ended; Dependent to Independent; Slow to Fast.

Planning Lessons — Student Interest
- Interest Areas: fine arts, athletics, travel, hobbies, etc.
- Modes of Expression: oral, written, designed/built, artistic, service to community

Planning Lessons — Learning Styles
- Intelligence Preferences
- Culture-Influenced Preferences
- Gender-based Preferences

Assessing Your Students
Differentiating in the Classroom by:
- Readiness
- Student Interest
- Learning Styles

Differentiating By Content
- Concept-based Teaching
- Curriculum Compacting
- Using Varied Text and Resource Materials
- Learning Contracts
- Mini-lessons
- Varied Support Systems: Audio/Video/DVD, CDs/CD-ROMs, Note-Taking/Graphic Organizers, Study Guides, Peer and Adult Tutors

Differentiating By Process
- Learning Logs
- Literature Circles
- Journals
- Graphic Organizers
- Role Playing
- Think-Pair-Share
- Learning Contracts
- Jigsaw
- Model Making
- Choice Boards
- Labs
- Centers

Differentiating By Product
- Tiered Assignments
- Totally Ten
- Design a Web Page
- Design a Game
- Present a Mock Trial
- Present a Radio Program
- Make a Video
- Design and Make Costumes
- Write Letters to the Editors
- Develop a Collection
- Create Authentic Recipes

Tiered Instruction
Tiered activities are important when we want to ensure that students with different learning needs work with the same essential ideas and use the same key skills. Tiered instruction is a stairway providing access within the large building of learning. Bottom floor: students with less readiness and fewer skills. We move students UP the stairway to reach the appropriate challenge level. Within each tier there can be multiple small-group activities presenting different ways to learn. On certain floors there can even be multiple stairways or elevators, as our students access higher learning levels differently and at different rates.

Center Tips
Expectations: Make sure students know how they are to move from center to center. Students should know what to do with finished work. Have a system for what students should do when they have a question. Students should know if they can talk quietly or must be silent. Clearly communicate expectations for their center tasks. Always let students know you trust them to be responsible, active learners during centers.

Organizing Centers
We need to be clear about where materials are located. Centers can be in a specific location or just in folders. Noisy centers should be away from where students are working quietly. (Sticky notes, red light, etc.)

A Student who UNDERSTANDS Something can:
- Explain it clearly, giving examples
- Use it
- Compare and contrast it with other concepts
- Relate it to other instances in the subject studied, other subjects and personal life experiences
- Transfer it to unfamiliar settings
- Discover the concept embedded within a novel problem
- Combine it appropriately with other understandings
- Pose new problems that exemplify or embody the concept
- Create analogies, models, metaphors, symbols, or pictures of the concept
- Pose and answer what-if questions that alter variables in a problematic situation
- Generate questions and hypotheses that lead to new knowledge and further inquiries
- Generalize from specifics to form a concept
- Use the knowledge to appropriately assess his or her performance, or that of someone else.
Adapted from Barell, J. (1995), Teaching for Thoughtfulness: Classroom Strategies.

Exit Cards
Exit cards are...
- A quick and efficient way to informally assess whether students understand a concept that has been taught.
- Written student responses to questions posed at the end of a class, learning activity, day or unit.
- Used at any grade level and in every subject area.
- A vehicle for students to express in writing some of their thinking.
Exit cards also:
- Help students condense or summarize.
- Encourage deeper processing of the material.
- Facilitate review of key ideas.

Exit Cards Continued
Exit cards are useful to:
https://vdocuments.mx/beginning-our-journey.html
How has culture influence gender roles? Self-Awareness. Culture influences how men and women think about themselves within their gender role. … If a young boy grows up in a household with an overly masculine attitude that relegates all women to subordinate roles, a child may grow up to reflect those same attitudes in their own relationships and behaviors. Does culture affect gender inequality? Culture and Gender Inequality The most obvious indicator that gender inequality is influenced by the culture is that policies and rules are adopted to counter such inequality. Different countries in Europe have adopted such policies to protect gender equality. How does culture cause gender inequality? Traditional and cultural practices can be harmful to women’s health; they cause material differences; they create the dire situation of women being viewed as being available for the benefit of men; and they create stereotypical masculinity and femininity, which hamper women’s equal rights. What influences gender roles in today’s society? Gender roles are influenced by the media, family, environment, and society. A child’s understanding of gender roles impacts how they socialize with their peers and form relationships. What is the role of culture in society? Culture is the lifeblood of a vibrant society, expressed in the many ways we tell our stories, celebrate, remember the past, entertain ourselves, and imagine the future. … In addition to its intrinsic value, culture provides important social and economic benefits. What are the effects of gender inequality? The effects are expected to be, first, an increase in female wages as under conditions of equality women will no longer be discriminated against; second, a likely decrease in the wages of men as those men who had secured better jobs by virtue of being men will face competition from more productive women; and third, an … What are the reasons for gender inequality? The main factors responsible for gender inequality in India are or have been a) late and incomplete convergence of capabilities where women and men have to be on equal footing as to education, health, and nutrition domains, b) a predisposition to prefer sons, family members engrossed in economic, religious, social and … What are some examples of gender inequality? Far too many girls, especially those from the poorest families, still face gender discrimination in education, child marriage and pregnancy, sexual violence and unrecognized domestic work. These are some types of gender inequality. How does gender and culture affect communication? As a result, men and women often interpret the same conversation differently. Culturally diverse ways of speaking based on gender can cause miscommunication between members of each culture or speech community. These cultural differences are seen in the simple purpose of communication. What are cultural inequalities? Cultural status inequalities are defined as perceived or actual differences in the treatment, public recognition or status of different groups’ cultural norms, practices, symbols and customs. … Cultural status inequalities cover a range of practices and intentionality on the part of the state in question.
https://plug-magazine.com/about-gender/question-how-does-culture-affect-gender.html
Peace, people and planet – Campaigners call for EU research agenda to shift focus from private benefits to global public goods

Brussels, 30 June 2017 – Ahead of the conference "Research – Shaping our Future" on 3 July, which gathers leading EU policymakers, innovators and research institutions, civil society groups call for the next EU research policy to shift its focus from jobs, growth and competitiveness to delivering global public goods for its citizens. "With a needs-driven research agenda, the EU could concretely deliver impact for its citizens, address today's societal and environmental challenges and contribute to a sustainable future for Europe. It could pave the way for the transition to a low carbon economy, sustainable food and farming systems and the development of new affordable health technologies," says Fanny Voitzwinkler, Head of the EU office of Global Health Advocates. Research that will make Europe and the world an environmentally sustainable, healthy and peaceful place to live must now be prioritised over research that delivers profit and economic return. The belief that boosting industry's competitiveness will create jobs and 'trickle down' to the benefit of all people is misleading. "Substantial portions of EU research funds have been turned into subsidies for large corporations at a time when essential public risk assessment and research institutions are facing budget cuts. This is not acceptable: industry subsidisation is not what a public research policy should be about," says Martin Pigeon, Researcher and Campaigner at Corporate Europe Observatory. Private profitability is not a sufficient measure of public benefits – a position also defended by several renowned economists and confirmed by the widening levels of inequality within our societies. "At a time when the EU needs to reconnect with citizens, this next research programme should ensure full public return on public investments. There is a need for real democratic and participatory decision-making to enable greater accountability over the use of public funds and to ensure they have social and environmental impact. Public funds should not be used for military spending," says Leida Rijnhout, Resources Justice and Sustainability Programme Coordinator at Friends of the Earth Europe. Proposals for defence to become a much more prominent EU research area post-2020 represent a fundamental shift of the EU from a civilian, peace-oriented project to a military-led one, with significant implications for the founding principles set out in the EU treaties. "The EU is a peace project. There can be no place for military research. The EU should invest in research projects which contribute to the peaceful prevention and resolution of conflicts rather than subsidise research for arms production," warns Laëtitia Sédou, European Programme Officer at the European Network Against Arms Trade.
http://www.ghadvocates.eu/en/eu-research-policies-for-peace-people-and-planet-civil-society-perspective-on-fp9/
the context is often high-risk, and poor programming decisions can have negative effects on beneficiaries and on CARE’s operations overall. In non-emergency times, a programme management cycle typically follows a sequence of assessment and analysis, design, implementation, monitoring and evaluation, and back again to assessment and analysis. In an emergency context, the reality is that all of these steps are happening at the same time and being repeated many times throughout the emergency response. A needs assessment is an information gathering exercise to determine the most urgent needs of an affected population as well as the resources required to meet those needs. Needs assessment usually starts with a rapid assessment, which is then followed by a more detailed assessment later on in the response. The design stage is where the details of a project are worked out, based on the assessment results, including what to do, where to do it, and the resources and costs required to implement the project. Emergency activities are implemented as soon as possible. Implementation of first response activities should take place at the same time as initial assessments. Because the context changes relatively quickly, monitoring should be continuous and time should be allocated to regularly review results in real time. This allows activities to be adapted or modified to take account of the changing needs of the affected population and changes in the operational conditions (for example, political or security situation). An in-depth, systematic, objective review of actions is undertaken. Results of evaluative activities feed into organisational learning, including the adaptation and modification of programmes, and design of new programmes. The results of an evaluation may also help to better inform exit or transition strategies. Transition is the stage when emergency programmes shift from short-term relief activities into longer-term rehabilitation and development programmes (or conversely, when programmes must transition from regular development mode into emergency mode). Exit is when emergency resources are withdrawn, and emergency response operations close down or are handed over to local partners to continue. Ensure consistency with CARE’s mission, vision and programming principles. Have a design that is based on a clear understanding of the disaster risk, including a holistic analysis of hazards, capacities and vulnerabilities, as well as priority needs. Wherever feasible, protect the rights and needs of the poorest in their communities and engage in appropriate risk reduction-related initiatives. Review CARE’s Humanitarian Accountability Framework to understand what you and your team are accountable for, including adherence to applicable interagency standards and codes (namely Core Humanitarian Standard, Code of Conduct, Sphere Minimum Standards in Disaster Response). Ensure the active participation of project participants in the assessment, design, implementation, and monitoring and evaluation processes. Ensure that emergency assistance is targeted and proportional to the unmet needs of the ‘most vulnerable’ or ‘at risk’ groups. Don’t work in isolation – be informed by relevant governmental, inter-governmental and non-governmental policy context, and be linked into relevant coordination mechanisms. Take into account gender roles, based on a broad contextual understanding of cultural and environmental factors. 
Clearly link a relief and recovery programme to a disaster risk reduction strategy to reduce people’s vulnerabilities to future hazards by strengthening local capacities.
https://www.careemergencytoolkit.org/management/5-strategy/3-the-nature-of-programming-in-emergencies/
A major new report from the United Nations Conference on Trade and Development has identified income inequality as one of the main culprits behind continuing economic stagnation in America, Europe, and throughout the world. The 2012 UNCTAD Trade & Development Report evaluates the role played by income inequality and the shift in income from workers to investors in today's economic malaise. The study concludes that "reducing inequality through fiscal and incomes policies is key for growth and development." Unfortunately, inequality has been on the rise everywhere. The share of workers' wages as a proportion of total national income, the new UNCTAD report finds, has fallen by more than 5 points in most English-speaking countries and by more than 10 points in much of continental Europe. Total income in most major economies has grown dramatically over the past twenty years, but wage income has not. The result? Investors are becoming richer and richer relative to wage-earners in the same countries. Of course, inequality among wage-earners is also rising, especially in the United States, where income inequality hit a new record high in 2011. The UNCTAD report, subtitled Policies for Inclusive and Balanced Growth, finds that policies "that preserve the share of workers in national income and redistribute income through progressive taxation and public spending would improve equality as well as economic efficiency and growth." In 200 pages of dense economic analysis, the report lays out a detailed roadmap for sustained growth based on high wages and low inequality. First, link "the growth rate of average wages and … the minimum wage to the overall performance of the economy as measured by overall productivity growth." In other words, workers' wages should be linked to the overall performance of the economy. In the United States, real economic output per person has more than doubled over the past forty years, yet median workers' wages have been absolutely flat. The real minimum wage has actually declined, from $10.55 per hour (in today's dollars) in 1968 to just $7.25 today. A U.S. minimum wage benchmarked to productivity growth, as the UNCTAD report recommends, would now be over $21 per hour. (A rough check of this arithmetic appears at the end of this piece.) The one developed country that avoided the 2008 global financial crisis — Australia — has minimum wages in this range. The second ingredient for sustained growth: adjust wages for inflation. The UNCTAD report emphasizes that wages must keep pace with expected inflation, not past inflation. The expected inflation rates used to adjust wages should be based on the targets set by central banks. The US Federal Reserve began setting formal inflation targets earlier this year. The Fed's current target inflation rate is 2 percent per year. Ever since the mid-1970s the Fed has argued against incorporating inflation expectations into wage agreements. The UNCTAD report argues that this was a mistake "based on static neoclassical economic reasoning." Based on a detailed examination of the evidence from the past three decades, the report finds that when "wages in an economy rise in line with average productivity growth plus an inflation target … the economy as a whole creates a sufficient amount of demand to fully employ its productive capacities." Third, and most provocatively, the UNCTAD report finds that low inequality promotes economic growth and high inequality hinders growth.
It recommends that “as far as possible … the wage level for similar qualifications [should be] similar throughout the economy, and … not left to the discretion of individual firms.” The reasoning behind this claim: Flexible labor markets encourage firms to compete by reducing wages rather than by investing in new facilities and new technologies. When wages are uniform across firms and sectors, the most profitable firms are those with the best products and services. Unproductive firms are weeded out in the never-ending process of creative destruction. On the other hand, when wages differ dramatically across firms and sectors, the most profitable firms are those that are best at driving down wages. This results in a race to the bottom in which everyone loses except the owners of the most miserly firms. The OECD and other pro-market organizations have long argued that “flexible” labor market arrangements are good for employment. Policies that reduce minimum wages, encourage part-time work, make it easy to fire people, and limit unemployment benefits have been the standard toolkit of the past three decades. According to this theory, policies that keep wages low tend to promote full employment. The UNCTAD report, by contrast, argues that “downward adjustments of average real wages leading to greater inequality between profit and wage incomes is an entirely ineffective remedy for unemployment.” “Greater inequality does not make economies more resilient to shocks that cause rising unemployment. On the contrary, it has made economies more vulnerable.” High inequality is often portrayed by corporations and politicians as the price we pay for sustained economic growth. The 2012 UNCTAD Trade & Development Report exhaustively demonstrates just how vacuous this argument really is. High inequality is not a prerequisite for economic growth. It is not even an unfortunate but tolerable by-product of growth. High inequality inhibits growth and is preventing our recovery from the worst economic downturn of the past seventy years. According to UNCTAD, policies that “redistribute income through progressive taxation and public spending would improve equality as well as economic efficiency and growth.” In other words, to grow the realonomy — the real economy in which ordinary people live, work, and consume — tax and spend. It really is that simple.
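A back-of-the-envelope check on the minimum-wage arithmetic quoted above (an editorial sketch using only the figures given in the article, not UNCTAD's own calculation): taking the 1968 real minimum wage of $10.55 and scaling it by the article's claim that output per person has at least doubled since then gives

\text{benchmarked minimum wage} \;\approx\; \$10.55 \times 2 \;=\; \$21.10 \text{ per hour},

which matches the article's figure of "over $21 per hour."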
https://truthout.org/articles/how-to-end-hard-times-reduce-inequality/
DOI: 10.48683/1926.00084860. Abstract: The transfer of carbon dioxide between the ocean and the atmosphere, and within the ocean interior, can be described by constituent “carbon pumps”. These carbon pumps are driven by biological and physical processes. The biological components can be separated into the “biological carbon pump”, which describes the cycling of carbon in the upper layers driven by photosynthesis in phytoplankton, and the “microbial carbon pump”, which describes the bacterial transformation of dissolved organic carbon into a slowly degradable form in the deep ocean. Understanding these processes requires both sophisticated marine ecosystem models and observations of the ocean carbon cycle. This thesis proposes that the simulation and understanding of the carbon pumps can be improved through data assimilation. Data assimilation is the process of incorporating observations (data) into a dynamic model to improve the accuracy of the simulations. This thesis makes use of ocean colour observations obtained by satellite imaging, assimilated into the marine ecosystem model ERSEM. The first objective of this study is to provide evidence that assimilating ocean colour data into a marine ecosystem model improves the simulation of carbon fluxes in the ocean, which is supported by results from identical twin experiments. The second objective is to improve the understanding of the biological and microbial carbon pumps and their variability across different marine locations. This was achieved by comparing the results of ocean colour data assimilation reanalyses at a nutrient rich coastal site and a nutrient-poor open-ocean site. A major finding of this study is that nutrient concentrations control the strength of the biologically driven carbon pumps, with the microbial carbon pump showing dominance in nutrient poor environments.
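To make the idea of data assimilation concrete, here is a minimal, generic nudging-style update written in Python. It is purely illustrative: the function name, variable names and relaxation factor are assumptions, and it is not the ERSEM/ocean-colour assimilation scheme used in the thesis.

import numpy as np

def nudge_toward_observation(model_state, observation, gain=0.3):
    # Relax the model estimate toward the observed value.
    # gain = 0 ignores the observation; gain = 1 replaces the model value with it.
    return model_state + gain * (observation - model_state)

# Toy example: surface chlorophyll at three grid points, nudged toward satellite estimates.
model_chlorophyll = np.array([0.8, 1.2, 0.5])
satellite_chlorophyll = np.array([1.0, 1.0, 0.7])
print(nudge_toward_observation(model_chlorophyll, satellite_chlorophyll))
# -> [0.86 1.14 0.56], each value pulled partway toward the observation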
https://centaur.reading.ac.uk/84860/
Portland Tyson Plant COVID-19 Tests Show 5 of 96 Positive Maine CDC director Nirav Shah announced that the first batch of testing for COVID-19 has come in for the Tyson poultry processing plant on St. John Street in Portland, formerly Barber Foods. Before this batch of testing there were 12 cases of COVID-19 at the plant. This batch of tests was conducted on 96 Tyson employees, and of those 96, five came back as positive. Those five positive tests are not included in today's report of coronavirus cases as they came in just moments before Dr. Shah began his briefing. They will be added to Tuesday's numbers. Over 400 employees will be tested, so Dr. Shah expects more results to come in and it is likely there will be more positive results. The total number of reported cases of the coronavirus (COVID-19) in Maine is now at 1,205, an increase of 20 new cases since Sunday. 720 people diagnosed with COVID-19 have recovered, an increase of 14. The total number of active cases is 428, an increase of 6. No new deaths were reported, keeping the total at 57. 186 patients have had to be hospitalized at some point. 37 are currently hospitalized, with 18 of those in critical care and 11 patients on ventilators. Here's a breakdown of the numbers by county from the Maine CDC. And here's another handy graphic that was put together by the University of Maine Presque Isle GIS Lab that shows just about every number and stat you might be interested in, presented in an easy-to-understand format. Governor Janet Mills' "Stay Safer at Home" order is in effect through May 31, 2020, and we are currently in stage 1 of her four-phase plan to reopen the state. Phase 1 "continues the prohibition on gatherings of more than 10 people. People who are able to work from home should continue to do so, and people should wear cloth face coverings in public settings where physical distancing measures are hard to maintain. This stage allows for the limited expansion of certain business, religious, and quality of life activities, with appropriate safety precautions. These include: Health care from Maine-licensed providers, with recommendations that they prioritize care for patients with time-sensitive conditions; assure the safety of patients, staff, and communities; manage the use of essential resources such as personal protective equipment and testing supplies; and pace re-opening services to the level of community COVID-19 activity, maintaining capacity in our hospitals for potential outbreaks." If you have symptoms of the coronavirus, including a runny nose, sore throat, dry cough, fever and, in severe cases, difficulty breathing, the Maine CDC says you should call your doctor before going in so that they can prepare for your arrival. The Maine CDC continues to update with new information daily. Keep checking on our mobile app or website to get the latest.
Research needs and requirements vary with each assignment, project or paper. Although there is no single "right" way to conduct research, certain methods and skills can make your research efforts more efficient and effective. If you have questions or can't find what you need, ask a librarian. On this page Choosing and developing a research topic Suggestions for finding a topic - Discuss your ideas with your course instructor. - Discuss your ideas with a reference librarian. - Look over the index and the article titles in a specialized encyclopedia that covers a relevant subject area or discipline. Identifying a topic State your topic idea as a question. For example, if you are interested in finding out about Title IX (Title Nine) and women athletes in college athletic programs, you might pose the question, "How did Title IX impact women athletes in college athletic programs?" Identify the main concepts or keywords in your question. These are potential search terms. In this case they are "title ix," "women," "athletes," and "college athletic programs". Testing the topic Before you commit to a specific topic for your research, do a scan to make sure that your topic isn't completely covered in another paper; at the same time ensure that there is enough information available to complete the project. This can be particularly important if you are planning on using data in your research. If in doubt, ask your professor. If you are finding too much information and too many sources, narrow your topic. For example: "women and athletes and college and athletics". Finding too little information may indicate that you need to broaden your topic by using a more general term or terms in your search. Finding background information Once you have identified the main topic and keywords for your research, find one or more sources of background information to read. These sources will help you understand the broader context of your research and tell you in general terms what is known about your topic. They will give you an idea of how much and what kind of information is available on a given topic. Encyclopedias and dictionaries: You can find subject-specific encyclopedias and dictionaries by using the Library Catalog or by asking a reference librarian. For authoritative information on your topic, you can also consult our list of Dictionaries and Encyclopedias online or our guide to online encyclopedias for the arts and humanities, the social sciences, and science and engineering. Exploit bibliographies: Often there are scholarly articles that give an overview of research in specific fields (a review of the literature). The sources cited in the bibliography are good starting points for further research. Look up these sources in the Library Catalog. Check the subject headings listed in the subject field of the online record for these books and journals. Then do subject searches using those subject headings to locate additional titles. Finding books, articles, and other materials How do I find it? Find books, music, video and audio materials Find articles, databases, images Find dissertations Searching the Library Catalog Find library materials such as books, music, videos, articles and audio recordings via the Library Catalog. For more information on how to search using the Library Catalog see these help pages. What if Cornell doesn't have it? If Cornell does not own the item you need, you can: - Request it from another library to be delivered to you via Borrow Direct or Interlibrary Loan. 
- Request that the Library purchase a copy. What is "Get it!"? The Get it! Cornell link connects to the full-text of articles in places like Google Scholar or databases that only have article abstracts. (If you are off campus be sure to be logged in Kerberos with your NetID and password, or use PassKey.) Sometimes a direct link to full-text is not available on the Get it! Cornell page. In those cases, click the links to search the Library Catalog by ISSN or ISBN (preferred) or by title and determine whether we own or have access to the item, either online through another source or in print (hardcopy). If the Library does not own or have access to the item you need, use the link on the Get it! Cornell page to request it through Interlibrary Loan or Document Delivery. Evaluating resources When using a book, article, report, or Web site for your research, it is important to gauge how reliable the source is. Initial appraisal - Author or creator: What are the author's credentials (educational background, past writing, experience) in this area? Have you seen the author's name cited in other sources or bibliographies? Respected authors are cited frequently by other scholars. For this reason, always note names that appear in many different sources. - Year of publication: Is the source current or out of date for your topic? Topic areas of continuing and rapid development, such as the sciences, demand more current information. Topics in the humanities often require material that was written many years ago. - Edition: Is this a first edition? Later editions indicate a source has been revised and updated. Multiple printings or editions may indicate that the work has become a standard source in the area and is reliable. - Publisher: Is it a university press or a large reputable publisher? Content analysis Intentions: Read the preface (book) or abstract (article) to determine the author's intentions. Scan the table of contents and the index to get a broad overview of the material covered. Note whether bibliographies are included. Intended audience: What type of audience is the author addressing? Is this source too elementary, too technical, too advanced, or just right for your needs? Objective reasoning: - Is the information fact, opinion, or propaganda? It is not always easy to separate fact from opinion. - Does the information appear to be valid and well-researched, or is it questionable and unsupported by evidence? - Is the author's point of view objective and impartial? Is the language free of emotion-rousing words or bias? Coverage: - Does the work update other sources, substantiate other materials you have read, or add new information? You should explore enough sources to obtain a variety of viewpoints. - Is the material primary or secondary in nature? Primary sources are the raw material of the research process; secondary sources are based on primary sources. Writing style: Is the publication organized logically? Are the main points clearly presented? Do you find the text easy to read? Is the author repetitive? Evaluative reviews (books): - Locate critical reviews of books in a reviewing source. See Book Reviews: A Finding Guide for more information. Is the book considered a valuable contribution to the field? Does the reviewer mention other books that might be better? If so, locate these sources. - Do the various reviewers agree on the value or attributes of the book, or has it aroused controversy among the critics? 
Evaluating Web resources Purpose Occasionally, Web sites pretending to be objective have a hidden agenda and may be trying to persuade, promote, or sell something. - What is the purpose or motive for the site? (e.g., educational, commercial, entertainment, promotional) - Is the site trying to sell you something? - How easy is it to differentiate advertisement from content? - Based on your knowledge, is the information factual, opinion, propaganda, et cetera? - Who is the intended audience, and how is this reflected in the organization and presentation of the site? Authority - Is the author identifiable? Look for links that say "Who We Are," "About This Site" or something similar. - Is there contact information for the author? (e.g., e-mail address, mailing address, phone number) - What is the author's background? (e.g., experience, credentials, occupation, whether he or she has written other publications on the topic) - Does the author cite his or her sources? - Is this site linked to often by other sites? - Do links on this site lead to other reputable sites? - Are there spelling errors or incorrect use of grammar? - What domain does the site belong to? (e.g., .edu, .gov, .com, .net, .org)? Reliability The dependability of a Web site is important if it is going to be cited as a source in other works or recommended for use by others. - Do most of the links on the page work? - From your evaluation of currency and authority, do you think the site will be there next time you visit it? Currency - When was the site last updated or revised? - How often is the site updated? - Do the links on the site work? Coverage - What information is included or omitted? - Is the page completed or under construction? - See also: University of California Berkeley's guide. Distinguishing scholarly from non-scholarly periodicals (articles and papers): Journals and magazines are important sources for up-to-date information in all disciplines. In this guide we have divided periodical literature into four categories: - Scholarly - Substantive news or general interest - Popular - Sensational Scholarly - Scholarly journals generally have a sober, serious look. They often contain many graphs and charts but few glossy pages or exciting pictures. - Scholarly journals always cite their sources in the form of footnotes or bibliographies. - Articles are written by a scholar or someone who has done research in the field. - The language of scholarly journals is that of the discipline covered. It assumes some scholarly background on the part of the reader. - The main purpose of a scholarly journal is to report on original research or experimentation to make the information available to the rest of the scholarly world. - Examples of scholarly journals: American Economic Review, Archives of Sexual Behavior, JAMA: The Journal of the American Medical Association, Modern Fiction Studies Substantive news or general interest - These periodicals may be quite attractive in appearance. Some are in newspaper format. Articles are often heavily illustrated and generally contain photographs. - News and general interest periodicals sometimes cite sources; articles may be written by a member of the editorial staff, a scholar, or a freelance writer. - The language of these publications is geared to any educated audience. There is no special training assumed, only interest and a certain level of intelligence. - They are generally published by commercial enterprises or individuals, although some come from professional organizations.
- The main purpose of periodicals in this category is to provide general information to a broad audience of concerned citizens. - Examples of substantive news or general-interest periodicals: The Economist, National Geographic, The New York Times, Scientific American Popular - Popular periodicals come in many formats, although they are often somewhat slick and attractive in appearance and have many graphics. - These publications rarely, if ever, cite sources. Information published in such journals is often second- or third-hand, and the original source is sometimes obscured. - Articles are usually very short, written in simple language, and designed to meet a minimal education level. There is generally little depth to the content of these articles. - Articles are written by staff members or freelance writers. - The main purpose of popular periodicals is to entertain the reader, sell products (their own or their advertisers'), and/or promote a viewpoint. - Examples of popular periodicals: Ebony, Parents, People, Reader's Digest, Sports Illustrated, Time, Vogue Sensational - Sensational periodicals come in a variety of styles but often use a newspaper format. - The language is elementary and occasionally inflammatory or sensational. They assume a certain gullibility in their audience. - The main purpose of sensational magazines seems to be to arouse curiosity and cater to popular superstitions. They often do so with flashy headlines designed to astonish (e.g., "Half-man Half-woman Makes Self Pregnant"). - Examples of sensational periodicals: The Globe, The National Enquirer, The Star, Weekly World News Citing sources When writing a research paper, it is important to cite the sources you used in a way such that a reader could find them. These are the most common formats for citing sources. If you are unsure what style to use, ask your professor. - APA: American Psychological Association - MLA: Modern Language Association - Chicago Manual of Style Other resources - Annotated bibliography tutorial - Citation management software: This software will format your bibliography and footnotes for you!
https://www.library.cornell.edu/research/introduction
The i3B network uses different sensing techniques to measure the brain, body, behavior and (home) environment. The behaviors measured can be of different types, such as eating, buying, driving, walking, social interaction and stress. Design of small, low-cost, wireless sensors and body area networks for physiological measurements. Following of eye movements in stationary and ambulatory situations. Analysis of human speech and vocalizations of animals for the purpose of classifying content and detecting emotions. Pattern recognition in EEG, ECG, EMG, GSR, and other physiological signals. Analogue and digital video recording, storage and disclosure, decompression and compression. Digital image processing with monocular cameras, stereo cameras and multi-camera set ups. Following the movements of one or more persons or animals against static or variable backgrounds. Recognition of specific head poses and body postures. Recognition of gestures (head, arms, hands). Non-invasive measurement of facial expressions. Capture and interpretation of real-time keyboard and mouse events, detection of usage patterns and mental states. Integration of the signals from several different types of sensors. Integration of data from different sources (sensors, video, system events, databases, etcetera). Recognition of high-level (complex) events based on combinations of low-level (simple) events. Design of virtual reality and augmented reality environments for research and training purposes. Design of systems capable of processing very large numbers of low-level events in real time, detecting high-level events and on the basis of this giving feedback to a person or animal. Actuators based on light, sound, scent, haptics, and digital displays. Pattern recognition in brain signals for the purpose of controlling actuators. Design of robots capable of recognizing human behavior and responding adequately to this.
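As a purely illustrative sketch of two of the ideas above (sensor integration and recognizing high-level events from combinations of low-level events), the Python fragment below fuses three low-level signals into a single high-level "stress" event. The function name, thresholds and decision rule are assumptions made for illustration, not i3B's actual software.

def detect_stress(heart_rate_bpm, skin_conductance_us, facial_expression):
    # Low-level events derived from individual sensors.
    elevated_heart_rate = heart_rate_bpm > 100                      # e.g. from ECG
    elevated_conductance = skin_conductance_us > 8.0                # e.g. from GSR
    negative_expression = facial_expression in {"fear", "anger"}    # from facial-expression analysis
    # High-level event: at least two low-level indicators must agree.
    return sum([elevated_heart_rate, elevated_conductance, negative_expression]) >= 2

print(detect_stress(112, 9.5, "neutral"))  # True: two physiological indicators agree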
https://www.i3b.org/technology-areas
The resulting water can be highly acidic and is called acid mine drainage (AMD) or acid rock drainage (ARD). After several refinements, this method, called the lead chamber process or "chamber process", remained the standard for sulfuric acid production for almost two centuries. Like sulfuric acid, selenic acid is a strong acid that is hygroscopic and extremely soluble in water. Alternatively, dissolving sulfur dioxide in an aqueous solution of an oxidizing metal salt such as copper (II) or iron (III) chloride: Two less well-known laboratory methods of producing sulfuric acid, albeit in dilute form and requiring some extra effort in purification. Sulfuric acid was called "oil of vitriol" by medieval European alchemists because it was prepared by roasting "green vitriol" (iron(II) sulfate) in an iron retort. The effect of this can be seen when concentrated sulfuric acid is spilled on paper which is composed of cellulose; the cellulose reacts to give a burnt appearance, the carbon appears much as soot would in a fire. Pyrite (iron disulfide, FeS2) was heated in air to yield iron(II) sulfate, FeSO4, which was oxidized by further heating in air to form iron(III) sulfate, Fe2(SO4)3, which, when heated to 480 °C, decomposed to iron(III) oxide and sulfur trioxide, which could be passed through water to yield sulfuric acid in any concentration. This type of reaction, where protonation occurs on an oxygen atom, is important in many organic chemistry reactions, such as Fischer esterification and dehydration of alcohols. The major use for sulfuric acid is in the "wet method" for the production of phosphoric acid, used for manufacture of phosphate fertilizers. In 1831, British vinegar merchant Peregrine Phillips patented the contact process, which was a far more economical process for producing sulfur trioxide and concentrated sulfuric acid. In addition, it exhibits a strong dehydrating property on carbohydrates, liberating extra heat and causing secondary thermal burns. This results in the stratospheric aerosol layer. Reaction rates double for about every 10-degree Celsius increase in temperature. The spinal cord is most often affected in such cases, but the optic nerves may show demyelination, loss of axons and gliosis. This substance can cause skin blisters and burns. International commerce of sulfuric acid is controlled under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances, 1988, which lists sulfuric acid under Table II of the convention as a chemical frequently used in the illicit manufacture of narcotic drugs or psychotropic substances.
Some of the earliest discussions on the origin and properties of vitriol are in the works of the Greek physician Dioscorides (first century AD) and the Roman naturalist Pliny the Elder (23–79 AD). The common name for H2SO4 is sulfuric acid. This process is endothermic and must occur at high temperatures, so energy in the form of heat has to be supplied. Concentrated sulfuric acid has a very powerful dehydrating property, removing water (H2O) from other chemical compounds including sugar and other carbohydrates and producing carbon, heat, and steam. It is an alternative to electrolysis, and does not require hydrocarbons like current methods of steam reforming. Directly dissolving SO3 in water is not practiced. This combustion process produces gaseous sulfur dioxide (SO2) and sulfur trioxide (SO3) which are then used to manufacture "new" sulfuric acid. Aluminium sulfate is made by reacting bauxite with sulfuric acid: Sulfuric acid is also important in the manufacture of dyestuffs solutions. Heat is generated at the interface between acid and water, which is at the bottom of the vessel. The sulfur–iodine cycle is a series of thermo-chemical processes possibly usable to produce hydrogen from water. Because the reaction is in an equilibrium that favors the rapid protonation of water, addition of acid to the water ensures that the acid is the limiting reagent. Lead and tungsten, however, are resistant to sulfuric acid. These plants combust spent acid with natural gas, refinery gas, fuel oil or other fuel sources. When sulfur-containing fuels such as coal or oil are burned, sulfur dioxide is the main byproduct (besides the chief products carbon oxides and water). This can react with small amounts of soap on paper pulp fibers to give gelatinous aluminium carboxylates, which help to coagulate the pulp fibers into a hard paper surface. The sulfur is pushed to the bottom of the container under the acid solution, then the copper cathode and platinum/graphite anode are used with the cathode near the surface and the anode positioned at the bottom of the electrolyte to apply the current. A rigid column of black, porous carbon will emerge as well. In the stratosphere, the atmosphere's second layer that is generally between 10 and 50 km above Earth's surface, sulfuric acid is formed by the oxidation of volcanic sulfur dioxide by the hydroxyl radical. Because sulfuric acid reaches supersaturation in the stratosphere, it can nucleate aerosol particles and provide a surface for aerosol growth via condensation and coagulation with other water-sulfuric acid aerosols. pH values below zero have been measured in ARD produced by this process. In principle, sulfuric acid can be produced in the laboratory by burning sulfur in air followed by dissolving the resulting sulfur dioxide in a hydrogen peroxide solution. But note that all of the available energy in the hydrogen so produced is supplied by the heat used to make it. This process allowed the effective industrialization of sulfuric acid production. Although less dramatic, the action of the acid on cotton, even in diluted form, will destroy the fabric. There are references to it in the works of Vincent of Beauvais and in the Compositum de Compositis ascribed to Saint Albertus Magnus. Preparation of the diluted acid can be dangerous due to the heat released in the dilution process. Throughout the 18th century, this could only be made by dry distilling minerals in a technique similar to the original alchemical processes.
This method does not produce an inseparable mist, which is quite convenient. This chemical compound is also sometimes referred to as vitriol oil or battery acid. Similarly, mixing starch into concentrated sulfuric acid will give elemental carbon and water as absorbed by the sulfuric acid (which becomes slightly diluted). This may take longer and emits toxic bromine/sulfur bromide vapors, but the reactant acid is recyclable, overall only the sulfur and water are converted to sulfuric acid (omitting losses of acid as vapors): Prior to 1900, most sulfuric acid was manufactured by the lead chamber process.
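The production chemistry described in this piece can be summarised in a few equations (an editorial sketch; the stoichiometric coefficients and the V2O5 catalyst are standard textbook details, not stated in the text). The historical vitriol route roasts pyrite and decomposes the resulting iron(III) sulfate, while the contact process oxidises SO2 over a catalyst and absorbs the SO3 into sulfuric acid rather than water:

\mathrm{FeS_2 + 3\,O_2 \rightarrow FeSO_4 + SO_2}
\mathrm{Fe_2(SO_4)_3 \xrightarrow{480\,^{\circ}\mathrm{C}} Fe_2O_3 + 3\,SO_3}
\mathrm{2\,SO_2 + O_2 \xrightarrow{V_2O_5} 2\,SO_3}
\mathrm{SO_3 + H_2SO_4 \rightarrow H_2S_2O_7 \qquad H_2S_2O_7 + H_2O \rightarrow 2\,H_2SO_4}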
https://kirchengemeinde-breidenstein.de/slug/030786-h2seo4-acid-name
We begin implementation by importing the libraries we will be utilizing: - TensorFlow: An open-source platform for the implementation, training, and deployment of machine learning models. - Keras: An open-source library used for the implementation of neural network architectures that run on both CPUs and GPUs. - Numpy: A library for numerical computation with n-dimensional arrays.

import tensorflow as tf
from tensorflow import keras
import numpy as np

Next, we load the MNIST dataset using the Keras library. The Keras library has a suite of datasets readily available for use with easy accessibility. We are also required to partition the dataset into testing, validation and training. Here are some quick descriptions of each partition category. - Training Dataset: This is the group of our dataset used to train the neural network directly. Training data refers to the dataset partition exposed to the neural network during training. - Validation Dataset: This group of the dataset is utilized during training to assess the performance of the network at various iterations. - Test Dataset: This partition of the dataset evaluates the performance of our network after the completion of the training phase. It is also required that the pixel intensity of the images within the dataset is normalized from the value range 0–255 to 0–1.

(train_x, train_y), (test_x, test_y) = keras.datasets.mnist.load_data()
train_x = train_x / 255.0
test_x = test_x / 255.0
train_x = tf.expand_dims(train_x, 3)
test_x = tf.expand_dims(test_x, 3)
val_x = train_x[:5000]
val_y = train_y[:5000]

In the code snippet above, we expand the dimensions of the training and test datasets. The reason we do this is that the convolutional layers expect each image to carry an explicit channel dimension; expanding along axis 3 turns each 28×28 grayscale image into a 28×28×1 array. The last two lines set aside the first 5,000 training images and labels for validation. The code below is the main part where we implement the actual LeNet-5 based neural network. Keras provides tools required to implement the classification model. Keras presents a Sequential API for stacking layers of the neural network on top of each other.

lenet_5_model = keras.models.Sequential([
    keras.layers.Conv2D(6, kernel_size=5, strides=1, activation='tanh', input_shape=train_x.shape[1:], padding='same'), #C1
    keras.layers.AveragePooling2D(), #S2
    keras.layers.Conv2D(16, kernel_size=5, strides=1, activation='tanh', padding='valid'), #C3
    keras.layers.AveragePooling2D(), #S4
    keras.layers.Flatten(), #Flatten
    keras.layers.Dense(120, activation='tanh'), #C5
    keras.layers.Dense(84, activation='tanh'), #F6
    keras.layers.Dense(10, activation='softmax') #Output layer
])

We first assign the variable 'lenet_5_model' to an instance of the tf.keras.Sequential class constructor. Within the class constructor, we then proceed to define the layers within our model. The C1 layer is defined by the line keras.layers.Conv2D(6, kernel_size=5, strides=1, activation='tanh', input_shape=train_x.shape[1:], padding='same'). We are using the tf.keras.layers.Conv2D class to construct the convolutional layers within the network. We pass a couple of arguments which are described here. - Activation Function: A mathematical operation that transforms the result or signals of neurons into a normalized output. An activation function is a component of a neural network that introduces non-linearity within the network. The inclusion of the activation function enables the neural network to have greater representational power and solve complex functions.
The rest of the convolutional layers follow the same layer definition as C1 with some different values entered for the arguments. In the original paper where the LeNet-5 architecture was introduced, subsampling layers were utilized. Within the subsampling layer the average of the pixel values that fall within the 2×2 pooling window was taken; after that, the value is multiplied with a coefficient value. A bias is added to the final result, and all this is done before the values are passed through the activation function. But in our implemented LeNet-5 neural network, we're utilizing the tf.keras.layers.AveragePooling2D constructor. We don't pass any arguments into the constructor as some default values for the required arguments are initialized when the constructor is called. Remember that the pooling layer's role within the network is to downsample the feature maps as they move through the network. There are two more types of layers within the network, the flatten layer and the dense layers. The flatten layer is created with the class constructor tf.keras.layers.Flatten. The purpose of this layer is to transform its input to a 1-dimensional array that can be fed into the subsequent dense layers. The dense layers have a specified number of units or neurons within each layer: C5 has 120, F6 has 84, while the output layer has ten units. The last dense layer has ten units that correspond to the number of classes that are within the MNIST dataset. The activation function for the output layer is a softmax activation function. - Softmax: An activation function that is utilized to derive the probability distribution of a set of numbers within an input vector. The output of a softmax activation function is a vector in which its set of values represents the probability of an occurrence of a class/event. The values within the vector all add up to 1. Now we can compile and build the model.

lenet_5_model.compile(optimizer='adam', loss=keras.losses.sparse_categorical_crossentropy, metrics=['accuracy'])

Keras provides the 'compile' method through the model object we have instantiated earlier. The compile function enables the actual building of the model we have implemented behind the scene with some additional characteristics such as the loss function, optimizer, and metrics. To train the network, we utilize a loss function that calculates the difference between the predicted values provided by the network and actual values of the training data. The loss values, accompanied by an optimization algorithm (Adam), guide the changes made to the weights within the network. Supporting factors such as momentum and the learning rate schedule provide the ideal environment for the network training to converge, thereby getting the loss values as close to zero as possible. During training, we'll also validate our model after every epoch with the validation dataset partition created earlier.

lenet_5_model.fit(train_x, train_y, epochs=5, validation_data=(val_x, val_y))

After training, you will notice that your model achieves a validation accuracy of over 90%. But for a more explicit verification of the performance of the model on an unseen dataset, we will evaluate the trained model on the test dataset partition created earlier.

lenet_5_model.evaluate(test_x, test_y)
>> [0.04592850968674757, 0.9859]

After training my model, I was able to achieve 98% accuracy on the test dataset, which is quite good for such a simple network. Here's the GitHub link for the code presented in this article:
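As a small usage follow-up (an editorial addition, not part of the original article; it assumes the lenet_5_model, test_x and test_y objects defined above are still in scope), the trained model can classify a single test image like this:

import numpy as np

probabilities = lenet_5_model.predict(test_x[:1])      # shape (1, 10): one probability per digit class
predicted_digit = int(np.argmax(probabilities, axis=1)[0])
print(predicted_digit, int(test_y[0]))                 # predicted class vs. true label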
https://mc.ai/understanding-and-implementing-lenet-5-cnn-architecture-deep-learning/
From the moment a parent learns a baby is on the way, providing the best possible environment to nurture the child becomes the top priority. Parents begin researching topics such as nutrition, mentally stimulating playtime, and a safe home environment to make sure they create the best start for their child. As newborns come home from the hospital, mothers and fathers learn firsthand the importance of sleep—for both themselves and their babies. Adjusting to New Schedules As infants settle from restless nights into more regular sleep schedules, parents and caregivers temporarily breathe a sigh of relief at a more predictable routine. However, as these toddlers grow into pre-school aged children at three and four years old, the influence of pre-school prompts moms and dads to ask a new array of questions. How will school hours influence the naps my child already takes? How much sleep is needed to maximize my child’s mental and emotional development? Is my child getting enough sleep to take on the increasing activities in a day? Luckily for parents, a large amount of research-based evidence provides guidance on the ideal amount of sleep for the average four-year-old child. Recommended Amount of Sleep Most adults know (and likely struggle) to aim for six to eight hours of sleep per night. But many adults most likely are not as knowledgeable about the sleep needs of their preschooler. According to the American Academy of Sleep Medicine, children aged three to five should sleep anywhere between ten and thirteen hours in a twenty-four-hour period. For those of us with restless children who fail to stay in bed for more than an hour, this number can appear rather daunting. However, the ten to thirteen hours recommended include nap time. Benefits of Sleep A well-rested child will function at peak social and cognitive abilities. Other benefits include better concentration, decreased risk of obesity, a stronger immune system, and better mental health. 5 Tips for Improved Sleep Even for the most organized and routine-oriented parent, ensuring ten to thirteen hours of sleep can sound like an impossible obstacle—especially if a child is particularly opposed to a regular and sufficient sleep schedule. Parents themselves often struggle to get their own recommended shut-eye and may be wondering: how can I establish this healthy behavior for my child? Luckily, there are several actions to take to ensure your four-year-old achieves the ideal amount of sleep to grow up healthy and happy. 1. Model Healthy Sleep Habits People with young children know that children frequently mimic their parents’ behavior—both good and bad. It is vital to avoid staying up late and prioritize quality sleep every night. It is especially important to show your child healthy sleep habits such as: sleeping in a dark and quiet room, going to bed and waking up at the same time every day, and allowing time to wind down before bed, among other things. 2. Schedule “Downtime” For Relaxation Some children cannot stand naps and will not sleep during the day no matter how tired they are. It is important not to stress too much about naps, especially if your child puts up a strong fight against sleeping in the middle of the day. A great way to overcome nap wariness is to incorporate other activities that allow your child to wind down and relax. These can include reading, puzzles, or coloring. Another alternative is to remove any stimulation from their environment and create a restful atmosphere.
As long as your child is in a natural state of relaxation every day, positive sleep habits are being established long before bedtime. Finally, if you are lucky enough to have a child who naps throughout the day, naps should not exceed an hour and forty-five minutes. By age four, these naps should only occur once a day. An important tip for both napping and downtime is to schedule it at the same time every day. Children around that age thrive on a schedule. 3. Establish A Bedtime Routine For many four-year-olds, predictability is crucial in creating a sense of comfort and normalcy. As other factors around them are changing, like beginning pre-school or day-care, it is important to create a predictable routine. Keeping meals, relaxation times, sleep, and wake times the same allows children to mentally and physically prepare themselves for sleep; no matter the external environment. As we travel for vacations and family get-togethers, creating a sense of importance and predictability around sleep is key. It also makes your bedtime routine a part of the quality time you have with your child. 4. Minimize Screen Time Near Naps or Bedtime It is easy to give a child a smartphone, computer, or tv screen to soothe them. However, this is detrimental and inhibiting to their ability to sleep. The light that these devices emit signals the brain to remain in an alert state that is not conducive to bedtime. As mentioned before, it is important to model this behavior and stay away from screens at least forty-five minutes before bedtime. 5. Include Playtime Throughout the Day Perhaps the most fun and easy suggestion to foster healthy sleeping habits is ensuring your child receives adequate playtime throughout the day. Running around and exploring the world around them offers a multitude of benefits. Playing at the park or playground improves heart health and establishes physical activity as a priority from a young age. Furthermore, engaging in imaginative play promotes language development and social skills for your child. Most importantly, play allows you to bond with your child and spend quality time together. Time To Give It A Try As a parent, it’s natural to want the best for your child’s mental and physical health. And sleep serves as the foundation. As always, make sure to consult with your child’s pediatrician if you notice any major changes in their sleep habits or behaviors. If anything, to help give you some peace of mind! Try incorporating some of the suggestions listed above into your daily routine. This will allow you to bond and enjoy the benefits of a well-rested child as well. Before you know it, your child will be reaching ten to thirteen hours of sleep from nap-time and bed-time in no time at all!
https://health4littles.com/how-long-should-a-four-year-old-nap/
[56] References Cited UNITED STATES PATENTS 1,254,579 1/1918 Collet 95/65 2,246,561 6/1941 Wheelan 355/125X 2,972,931 2/1961 Roob 95/65X Primary Examiner-John M. Horan Attorney-Fulwider, Patton, Rieber, Lee and Utecht OPTICAL DIFFUSER BACKGROUND OF THE INVENTION 1. Field of the Invention This invention relates to the field of photography, and more particularly to the field of portrait and special effect photography. 2. Description of the Prior Art In photography, and particularly in portraiture, it is often desirable to focus the attention of the observer on a particular area, such as the face of the subject, and to reduce or eliminate extraneous distracting areas. Many different means have been used to achieve these results, but one of the simplest and most enduring is to form the portrait as a vignette, in which the head and upper portion of the neck are relatively sharply defined, and the image then, as the observer's eye moves downward, diffuses to the point where there is no detail, but only an overall color or shade. Heretofore, the vignette has usually been formed by locating a matte or baffle in front of the camera between the lens and the subject, and photographing the subject through or over the top of the matte. Generally the matte has an upper edge formed as the arc of a circle, and notched to provide a series of relatively long tapered fingers pointed at their upper ends and broadening to wide bases that joined the corresponding bases of the adjacent fingers. This generally serrated edge was located so as to be out of focus when the subject was focused on the film, and the result of the serrated edge was to provide a gradual elimination of the image of the subject. By lighting the surface of the matte so that it approached the lighting of the background, the effect was that of a portrait including a head with the lower portion of the neck, etc., fading away to a uniform color or shade approaching that of the background. The location of the matte and the balancing of the lighting on it have long presented problems to photographers. Furthermore, the mounting of the matte, whose serrated or feathered edge provides the diffusion, has also presented problems. Since the matte must act to block out or remove some of the image, it must be spaced from the lens, lest its only effect be that of reducing the amount of exposure of the film. Additionally, since it must be out of focus, it must be spaced a considerable distance from the plane on which the lens is focused. These requirements have meant that the usual vignette matte has been mounted on a separate stand between the camera and the subject, which was always subject to being knocked over, wrongly positioned, etc., or mounted upon an arm extending out from the bed of the camera. This latter form of mounting obviously had disadvantages, including limitations on the positioning of the matte, and the upsetting of the mechanical balance of the camera on its support. While the present invention is here described as it pertains to studio portraiture, it will be apparent that the invention may be used in many other fields including, without limitation, motion picture and television photography, and non-portrait work.
SUMMARY OF THE INVENTION The present invention overcomes the difficulties of the prior art by providing an optical filter composed of a diffusing section and a non-diffusing section, the filter conveniently being mounted upon the lens barrel or housing of the camera lens, and spaced from the front element of the lens a relatively short distance, e.g., a distance of three inches or less for lenses of conventional focal length. The light transmissions of the diffusing and non-diffusing portions are balanced to meet the particular requirements, and two or more filters may be used simultaneously and, if desired, rotated with respect to each other to provide special effects. DESCRIPTION OF THE DRAWINGS In the drawings, FIG. 1 illustrates a camera of any suitable type equipped with the vignette filter of the present invention. FIG. 2 illustrates the type of portrait provided by the vignette filter that is a subject of the present invention. FIG. 3 is an elevational view of the vignette filter that constitutes this invention. FIG. 4 is an elevational view of an enlarged fragment of the filter shown in FIG. 3, the section being taken at the junction of the diffusing and non-diffusing portions of the filter, and illustrating the appearance of a dark line across a light background immediately behind the filter. FIG. 5 is an enlarged cross-sectional view of a fragment of the filter taken on the line 5-5 of FIG. 3 and showing the passage of light rays through both the diffusing and the non-diffusing portions of the filter. FIG. 6 is a perspective view indicating how a pair of filters may be used and rotated with respect to each other to produce different results. Referring to the drawings and particularly FIG. 1 thereof, the filter of the present invention comprises a filter assembly 10, used in a special filter holder 11 that in turn is mounted upon the lens barrel 12 of a camera 13. As will become apparent, the camera 13 may be almost any type of camera, but is generally one having a ground glass or focusing screen, such as a studio camera or a single lens reflex camera such as, but not limited to, the general type illustrated. The filter holder 11 mounts on the lens barrel 12 of the camera 13 primarily for convenience, though it could be otherwise supported, should this be desirable, in some other manner that maintains it stationary with respect to the front element of the lens. The conventional filter holder, in which the filter is held as close to the front element of the lens as practical, usually does not secure the maximum effectiveness of the filter 10, and hence the special filter holder 11 is intended to hold the filter 10 a moderate distance, for example, a distance up to approximately 75 millimeters from the front surface of the lens. This value will be affected by the focal length of the lens, the lens stop, etc. As shown in FIG. 3, the vignette filter 10, preferably of circular form, includes a diffusing portion 15 and a non-diffusing portion 16. The filter 10 is preferably of a laminated construction as hereinafter described, and the edges preferably are protected by a mounting ring 17 usually formed of metal. Circular filters of laminated construction enclosed within a metal ring have been known and used for many years, but the use of both a diffusing portion and a non-diffusing portion in a single filter assembly as here proposed is new. The boundary between the diffusing portion 15 and the non-diffusing portion 16 is preferably an arc of a circle, as indicated in FIG.
3, and the construction of the filter assembly 10 is indicated in some detail in FIG. 5. As seen in FIG. 5, the diffusing portion 15 comprises a piece of optical glass having a smooth outer surface and a roughened inner surface. The roughening of the inner surface may be accomplished by any suitable means, such as etching, grinding, or sandblasting, and should be to a degree sufficient to prevent the formation of a recognizable image by a ray of light passing through this diffusing section. The non-diffusing portion 16 comprises a piece of optical quality glass abutting the diffusing portion 15, and both the front and rear surfaces of the non-diffusing portion 16 should be without blemishes or aberrations that would tend to distort the image. The diffusing portion 15 and the non-diffusing portion 16 are cemented to a supporting sheet 18 that strengthens the diffusing and non-diffusing portions, as well as protecting the roughened surface of the diffusing section. It will be appreciated, of course, that while the roughened surface of the diffusing portion 15 is sufficient to prevent the formation of an image on the image plane, the roughness is not so great as to prevent the viewing of an object immediately adjacent the diffusing section. Thus, FIG. 4 represents the effect that will be seen when a filter assembly 10 is placed upon a surface, such as a piece of white paper, having a black line extending across it. The line 20 is sharply defined and clearly seen through the non-diffusing portion 16, but is much less sharply seen, and is diffused, when viewed through the diffusing portion 15. A translucent section, such as might be provided by a piece of opal or flashed opal glass, would provide too much diffusion, and the line 20 would generally not be visible through the opal or flashed opal glass. This excess of diffusion is undesirable. As seen in FIG. 5, when a light ray 21, illustrated as going from left to right, enters the non-diffusing portion 16 of the filter assembly 10, it passes through that portion and the supporting member 18 without substantial diffusion. It will be understood that the light ray then enters the lens of the camera 13 and is focused upon the film plane within the camera, in the conventional manner. A second ray 22, likewise shown passing from left to right, passes through the diffusing portion 15 and is diffused by the roughened rear surface of the diffusing portion and thus emerges from the supporting plate 18 as a number of rays 23 of scattered light. The scattered rays 23 pass on toward the lens of the camera 13, but cannot be focused upon the film plane to form an image as the non-diffused ray 21 can. Since the filter assembly is relatively close to the lens of the camera 13, and is thus out of focus, the dividing line between the diffusing portion 15 and the non-diffusing portion 16 will not be seen on the image formed on the film, but instead, the dividing line will be softened or spread over an area extending from the sharply focused part of the image formed by the rays 21, to the completely diffused area formed by the diffused rays 23. An example of a picture formed by such a filter is shown in FIG. 2, where it will be seen that the eyes, nose, and mouth of the subject are sharply in focus, these representing the critical portions of the portrait, while the lower portion of the portrait is gradually diffused into a complete absence of image.
No sharp line dividing the diffused and the non-diffused portions is seen, and it will be appreciated that by reason of the optics of the arrangement, certain of the diffused rays 23 reach the same area of the film plane that the non-diffused rays 21 reach. The proportion of diffused and non-diffused rays varies, and where the non-diffused rays are a maximum and the diffused rays are a minimum, as in the case of the facial features, the image is clear and sharp. Where the diffused rays 23 predominate, the diffusion is much greater, and a maximum is reached where only diffused rays reach the film plane, and as a result the image is eliminated. When a picture is taken of a person in front of a white background, it is desirable to have the lower portion of the portrait fade into a white area matching the background behind the head of the subject. When the subject is wearing grey or medium dark clothing, the diffused light rays 23 must be of a sufficiently strong intensity to overcome the lack of light reflected from the grey areas. In effect, this means that more light must be provided for the outer edge of the picture, or conversely, less light must be permitted to expose the face of the subject. Since it is a matter of proportion, the simple way to solve the problem is to decrease the amount of light passing through the non-diffusing section 16 while keeping the transmission of the diffusing section 15 near the maximum possible. It has been found that to secure best results where a white background is used, the non-diffusing section 16 should transmit approximately 50 percent of the light falling upon it, while the diffusing section 15 should transmit approximately 70 percent. It is to be clearly understood that this transmission of light is not measured immediately behind the filter assembly 10, but is instead measured at the focal plane, where the film is located. Because of the scattering of the light rays 23, measurement immediately behind the diffusing portion 15 will not give a reading having value for the particular purpose for which the filter assembly 10 is intended. Because the transmission of light through the non-diffusing section 16 is reduced, an increased exposure must be given to the photographic film, and since the transmission in the example given is 50 percent, the photographer should open the lens one stop, or double the exposure time. Since the exposure time is often determined by other factors, such as by the use of electronic flash, the customary method of compensating for the filter assembly 10 is to open the diaphragm of the lens one stop. In most circumstances where the background is darker, and particularly in the case of color photographs where it is desirable to have the color of the background predominate over the color of the clothing of the subject, the diffusion will normally be somewhat greater. Furthermore, in order to prevent the diffused area from being lighter than the non-diffused area, the transmission of the diffusing section 15 and the non-diffusing section 16 must be substantially balanced. Under such conditions, it has been found best to provide both of these sections with a transmission of substantially 50 percent. It will be appreciated, of course, that the use of the filter designed for medium backgrounds will likewise require that the lens be opened up one stop, or that the exposure be doubled, to compensate for the decreased transmission of the vignette filter 10. 
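The one-stop compensation recommended above follows directly from the 50 percent figure, since each stop corresponds to a factor of two in light. The short calculation below is a back-of-the-envelope check of my own, not text from the patent.

import math

# Exposure compensation in stops implied by a given overall filter
# transmission: stops = log2(1 / T).
def stops_of_compensation(transmission):
    return math.log2(1.0 / transmission)

print(stops_of_compensation(0.50))  # 1.0 stop, matching the "open one stop" advice
print(stops_of_compensation(0.70))  # about 0.5 stop for a 70 percent section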
There are times when it is desirable to have the vignetting effect present on the sides as well as across the bottom of the picture, as may occur when a portrait is taken outdoors, against a background of trees, flowers, etc. While the filter assembly 10 is very effective, the particular shape illustrated in FIG. 3 may not be most suitable for some particular purposes. Under such conditions, it may very well be that the diffusing areas should extend up along each side, as well as across the bottom, and this can be accomplished by using a pair of filters. Such an arrangement is indicated in FIG. 6, where the front filter 30 has been rotated counterclockwise, in the direction of the arrow, while the rear filter 31 has been rotated clockwise, in the direction of the arrow adjacent it. By rotating one or both filters, a wide variety of results are obtainable. The use of a pair of filters, as indicated in FIG. 6, should not require a further increase in exposure, since this might be too restrictive in the fields in which they can be used. Preferably, the second filter 31 has a greater light transmission than the previously described filters, and generally it has been found desirable to have the non-diffusing section 16 transmit approximately percent of the light falling upon it, while the diffusing section 15 transmits about 75 percent of the light. As is customary in good filter design, the diffusing section 15 and the non-diffusing section 16 are both cemented to the supporting plate 18. This means that if desired, the reduction in light transmission, or filter action, can be achieved by coloring the cement used to secure the non-diffusing section 16 to the supporting plate 18. Alternatively, the glass plate forming the non-diffusing section 16 may be colored to provide the desired light transmission. Obviously, any desired combination of colored glass and colored cement may be used. Normally, it is not necessary to decrease the light transmission through the diffusing section 15 of the filter assembly 10 other than by that decrease caused by the roughening of the surface. In some cases, however, the transmission through the diffusing section 15 may need to be reduced, in which case either a colored cement or a colored glass forming the diffusing section 15 may be used. While the term "colored" has been used in referring to the decrease in light transmission of the diffusing section 15 and the non-diffusing section 16, it should be remembered that the filter assembly 10 may very well be used with color films, and consequently the filter itself should not be selective in its transmission of colors, but should be a neutral density filter. Thus, the filter 10 presents generally a rather grey appearance, suitable for exposing both color and black and white films through it. From the foregoing, it will be seen that there has been shown and described a vignetting filter fully capable of achieving the results and securing the advantages heretofore set forth. While a preferred form of the invention has been shown and described, it will be appreciated that changes may be made without departing from the broad concept of the invention as herein set forth. Consequently, while an efficient and effective form of the device has been shown, the claims are not to be limited to the particular form or arrangement of parts herein described and shown, except as limited by the claims. I claim: 1. 
A photographic filter for attachment to a lens, which includes: a non-diffusing section adapted to transmit an undistorted image to a lens; a diffusing section adjacent said non-diffusing section, adapted to diffuse light passing through said diffusing section so that said light cannot be focused to form an image of an object materially spaced from said diffusing section; means holding said diffusing section and said non-diffusing section adjacent each other and spaced from said lens, said object, and said image; and the light transmission of said non-diffusing section having a preselected ratio to the light transmission of said diffusing section. 2. A filter as defined in claim 1 in which said means holding said sections adjacent each other is a sheet cemented to both said sections. 3. A filter as defined in claim 1 in which the light transmission through said diffusing section is about 50 percent, and the light transmission through said non-diffusing section is about 50 percent. 4. A filter as defined in claim 1 in which the light transmission through said diffusing section is about percent and the light transmission through said non-diffusing section is about 50 percent. 5. A filter as defined in claim 1 in which the light transmission through said diffusing section is about percent and the light transmission through said non-diffusing section is about percent.
Introduction: Shooting for Photogrammetry Being able to create something 3 dimensional from 2 dimensional photographs is definitely magical, but it can also take quite a bit of finesse to get the right kind of photos. In this sense, a large part of learning how to scan with photogrammetry is about learning basic photography and lighting technique, so I highly suggest also checking out audreyobscura's Photography Class for some great foundational information about camera operations and lighting. Photogrammetry requires a very specific kind of photography, the aim of which is much more practical than aesthetic. Ideally every area of the object will be visible in at least two photos in order to give the reconstruction software enough points of reference to work with. There are a few different strategies you can use for your photo set-up depending on what kind of object you are shooting and your budget. In this lesson, we'll talk about everything you need to know to shoot your photos in different environments. Step 1: The Two Shooting Strategies As I've mentioned, there are two basic strategies when you are shooting for photogrammetry: you can either shoot by moving your camera around an object, or rotate the object while keeping your camera still. I'm going to show you both. A lot of sources will tell you that shooting around an object is the most effective, because reference points in the background help the software properly orient your photos in space. In my experience, however, the other method produces much better results when done properly. It requires somewhat more careful preparation and more equipment, but it allows you far more control over your lighting and shooting environment. When you shoot a rotating object, you need your background to be perfectly blank or featureless. This means you will need a black or white backdrop that is big enough to allow you to shoot your object from both high and low angles. You will also need a tripod, a turntable and usually some kind of artificial lighting set-up to light your object and create a perfectly blank background with no shadows. Sometimes using this method will be impossible (say, if you want to capture a statue in an outdoor park), so depending on the object you are shooting and the equipment you have available, you will have to choose for yourself which shooting method makes the most sense. Getting either method right may require some trial and error, so try not to get discouraged. When I first started experimenting with photogrammetry, I was using the "shoot around an object" method, and I was having a lot of trouble getting good results. Eventually I switched to the "rotate the object" method, which worked a lot better for me, but some people have gotten great results from shooting around an object even with only a camera phone. Step 2: Preparing Your Object for Scanning Depending on what kind of object you've chosen to scan, you may need to make a few preparations before shooting. As I mentioned in the last lesson, the easiest objects to scan with photogrammetry are stationary, medium-sized objects with a matte finish, a good amount of surface detail and not many thin, delicate bits. So, if you've decided that you really want to scan the large shiny tuna fish you saw swimming at the aquarium, or a tiny delicate flower with 27 petals... you might just want to re-think that decision. 
If you have chosen something a bit more rational (such as, say, the mind-blowingly natural fractal geometry of this romanesco) you will still need to make sure you have your object set up correctly so you get the best possible photographs. It's often helpful to fixture your object in a certain position before photographing it. For example, if you have an object that you want to scan all the way around, put it on a mount that holds it above the surface you are scanning on. An easy way to do this for an object like my romanesco is to simply take a board or piece of plywood and hammer a long nail or piece of stiff wire through the center. Then you can stick your object onto the nail like this. (Just be careful if you leave this spiky booby trap lying around. I drilled a hole in another piece of wood to cover my nail when I wasn't using it.) Depending on what sort of object you are scanning, you might need to construct a different kind of stand. Another good option is to use a small tripod with a clamp so you can clamp your object, or clamp a nail that you stick into your object. Sometimes you might not even need to capture the underside of your object, in which case a stand might be unnecessary. If you are trying to shoot an object that has some undesirable characteristics for scanning, there are a few things you can do to make getting a good scan more likely.
If your object is shiny, reflective or transparent: if you need to be able to return the object to its original state, try spraying it with a removable chalk spray paint. If you don't mind altering the object irreversibly, you can use a regular spray paint or brush-on paint, just make sure you use something that is matte, not glossy. Even small shiny areas on an object can sometimes cause problems in reconstruction, but sometimes just covering these areas up with carefully placed pieces of masking tape can help a lot.
If your object doesn't have a lot of surface detail: objects that are all one uniform color sometimes don't have enough points of reference for photogrammetry to use in reconstruction. So, if you are scanning an object like this mannequin head and you don't care about capturing the color layer, you can add your own details by drawing on the surface, or sticking stickers or pieces of tape on the object.
If your object is very small: if you are determined to try scanning a very small object, you will need a camera with a very good macro lens and a tripod so your images come out extremely sharp. In fact, for extremely small objects, you may need to automate the process like Shapespeare did in this great Instructable.
If your object is large: if you really want to tackle scanning a large object, there are a few specific strategies that will help make this easier, and I will address some of these later in this lesson. A lot of people also use camera-equipped drones to capture landscapes or buildings. To learn more about capturing landforms with photogrammetry from the air, check out this awesome instructable by moon_goose.
If your object is moving: it is impossible to scan objects that are actually in motion, but if you want to scan something like a human who has the potential to stand still, I will talk about strategies for this in the section on Scanning the Human Body.
Step 3: Lighting a Rotating Object No matter how you are going to shoot your object, lighting is key. Low light or intense contrast will confuse the software, so strong diffused light works best. 
When you are shooting by keeping your camera still and rotating your object, you only need to light the part of the object that is facing your camera in any given shot. But you also need to make sure you are not casting light on your backdrop in a way that creates odd shadows or highlights. The background in these types of shots needs to be perfectly featureless or it will confuse the photogrammetry software. To get really good results every time with this method, you will usually need to create your own lighting set-up. It can take anywhere from 1-3 diffused lighting sources to effectively light your object. There are a lot of options for this, but you can create an easy and fairly cheap one by following the instructions in audreyobscura's Photography Class. Here's what you'll need: 2 or 3 clip lights with white shower caps for diffusion. Shine one of these lights on either side of your object from the front, adjusting them so they cast a diffused light with a perfectly even background. You may need an additional light shining down from above to catch the top of your object. You can also use a ring light that fits around your camera lens and shines on your object from the front. Just be sure you aren't casting any weird shadows as your object rotates. If you are using a white sweep, you usually want to shine a lot of light on the background; if you are using a black sweep, keep light on the background to a minimum. Step 4: Shooting a Rotating Object Once you have your object prepared and fixtured, place it on your turntable in front of your sweep and set up your lights so you are lighting the front of your object and creating a featureless background. For either shooting method, what you want to do is capture a series of overlapping photos all the way around your object at several angles. In order for photogrammetry software to reconstruct any point on your object, that point needs to be visible in at least 2 photos, so you need to shoot a lot of photos to capture an object well, anywhere from 20-250 depending on the size and complexity of your object. If you are shooting a very complex object you might need to shoot three or four rows of photos all around the object and some vertical rows from top to bottom. For a simpler object you might only need two rows total. Set up your camera on a tripod in front of your object at an angle that you think will capture some important details. Make sure your object is filling up most of your frame, but not being cropped anywhere. Remember you are going to be spinning your object, so if it's an odd shape the framing could change as it rotates. If you are shooting with a camera that allows you to adjust the settings, and you know how to use them, there are a few things you can do to optimize your images. You want to get as much of your object as possible in focus in every shot, therefore you want to shoot with a deep depth of field. To do this, you need to set your F-stop to a high number; F8 is usually a good choice. A high F-stop means the aperture of your camera is smaller, letting in less light (confusing, I know). The less light you are letting in, the darker your photos will be, so you may need to proportionally slow your shutter speed to increase the brightness of your photos. As you can see, it's a bit of a balancing act to get the right exposure. If your camera has an 'aperture priority' setting, that can be a good one to use, instead of full manual. This setting lets you set the desired F-stop, and then adjusts the rest of the camera settings accordingly. 
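Two of the numbers in this step can be sanity-checked with a little arithmetic: how many photos a full capture will produce, and how much the shutter time has to lengthen when you stop down to F8 for more depth of field. The snippet below is my own illustration rather than part of the lesson, and the metered values in it are hypothetical.

import math

# Photos per capture: one shot every 5-15 degrees per row, times the
# number of rows, plus any close-up detail shots.
def photos_needed(step_degrees, rows, detail_shots=0):
    per_row = math.ceil(360 / step_degrees)
    return per_row * rows + detail_shots

print(photos_needed(15, 2))                  # 48 shots for a simple object
print(photos_needed(5, 3, detail_shots=10))  # 226 shots for a complex one

# Shutter/aperture trade-off at constant exposure: raising the f-number
# must be paid back by lengthening the shutter time by (new/old)^2.
def equivalent_shutter(shutter_seconds, old_f_number, new_f_number):
    return shutter_seconds * (new_f_number / old_f_number) ** 2

print(equivalent_shutter(1 / 125, 4, 8))     # 0.032 s, i.e. roughly 1/30 s

Numbers like these also explain why a tripod helps so much with the rotating-object method: once you stop down for depth of field, the compensating shutter speeds quickly drop below what most people can comfortably hold steady by hand.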
Once you have everything set up, it's time to capture your object. Shoot one photo, then rotate your object very slightly on the turntable, about 5-15 degrees, and shoot another photo. Keep doing this all the way around until you are back to where you started. Make sure you are moving your hand fully out of the way for each photo so you don't create shadows, and watch for changes in lighting as you shoot. Next, move your camera on your tripod to a higher or lower angle and repeat the same process. You need to capture every area of the object, especially any crevices, overhangs or details. Sometimes you also need to shoot a few vertical rows to get the bottom and top of the object. If your object has a lot of really fine textural details that you want to capture, you can also shoot some very close-up detail shots to capture those. Step 5: Lighting for Shooting Around an Object If you are shooting by moving around an object, a lot of the same lighting rules apply. You always want diffuse, even light, but this time you need the light to be even all the way around your object, which can be trickier to achieve. With this type of shooting, working outside on an overcast day is particularly ideal, and more feasible since you don't need a blank backdrop. If you need to shoot indoors, it is sometimes possible to find a good existing lighting situation. Look for places with bright even light and avoid rooms with windows. Having a bright window appear in the background of some shots can change the exposure of your object and confuse the software. Places like grocery stores and some well-lit office buildings will sometimes have just the right kind of light, but a lot of indoor areas will be too dim, or have light that is too directional. If you decide to use additional lights for shooting indoors, you will need at least 3, and sometimes 4, diffused lighting sources that you can use to strategically light all sides of your object evenly at once. The diffused clip lights from audreyobscura's Photography Class are again very useful here. Your lights should be set up around your object so they cast an even light with minimal shadows. Try to position them close enough to shine as much light as possible on the object, but far away enough that they won't appear in your shots. This is hard to achieve, which is part of why I don't prefer this shooting method. It's almost impossible for the lights to not end up in some of your shots. It's not the end of the world if they do, but it is not ideal. Step 6: Shooting Around an Object If you can, it's usually helpful to put what you are shooting up on a stool or pedestal in the center of an open space to elevate it to a comfortable level for photos, which also makes it easy to walk around. Make sure you are shooting in a location where there is nothing moving in the background, as this will confuse the software. Also avoid places with a lot of shiny or reflective objects in the background. It helps to have a distinct visual pattern around your subject. This provides more reference points for the software. It also means you'll end up with some extra bits under your 3D model, but these can be erased in the editing process. If you can find an object outdoors with a pattern around the base, it will be a huge help. The pattern of bricks under this fire hydrant really made it turn out well. Or if you are indoors, just laying a piece of newspaper on your base or placing some small objects around it is enough, but make sure whatever you're using isn't shiny or reflective... 
I'm sure you're getting the point by now that shiny and reflective is the enemy of photogrammetry :) You can try shooting your object either with or without a tripod. A tripod will ensure that all the photos in each rotation around your object are taken at the same angle. It will also help make your images sharp, especially if the light is low, but shooting with a tripod is also more time consuming, so if you have bright enough light you might want to try shooting freehand first. Using a monopod is a good compromise here. The same camera setting suggestions apply here as for the other shooting method, and it can be especially useful here to have your camera on a manual setting so your exposure doesn't change as you move around your object. If you are shooting without a tripod, however, be careful: a slow shutter speed, anything below about 1/60 of a second, can make your photos blurry, which is why lights and a tripod can help. Choose an angle and start shooting by moving around your object, moving about 5-15 degrees for each new photo. Try to hold your camera as steady as possible and maintain the same angle all the way around until you are back to where you started. Then choose a higher or lower angle and repeat the same process. Shoot as many rows as you think you need to capture your object, and make sure you get all the way over the very top. I find that it's easy to miss this part of the object, which can result in scans like this: Step 7: Tips for Scanning a Human One thing that a lot of people want to do when they start out with 3D scanning is scan themselves or someone else. This is definitely a challenge, but it's not impossible. For one thing, humans are not static, so it's difficult to take all the photos you need for photogrammetry without your subject moving. Also, people are big, so you can't just put them on your little rotating turntable and call it a day. The absolute best way to capture a person with photogrammetry is to create a multi-camera rig, so that images of the person are captured from all angles at the same moment, eliminating the problem of movement. Mparker07 has a great Instructable that shows how to create your own rig like this; the only problem is, it costs quite a bit of money to set up. The cheapest way is to try shooting around your subject in even, bright light like you did with the smaller objects. I've seen scans like this turn out quite well, you just need a really patient subject who is good at standing still. We were joking the other day that someday people will look back on 3D scanning today the same way we think of photo portraits from the 1800s: "Can you believe you had to stand still for so long just to be 3D scanned??!" :) If you want to try the 'rotate the object' method of scanning a human, you will need an automated turntable, a large white or black sweep and some very good lights. You can put your camera on burst shooting mode with a very fast shutter speed and just shoot continually as your subject rotates. In my experience however, this method is more trouble than it's worth when scanning people, and doesn't get better results than the 'shoot around' method. You can also check out this Instructable by Amy Karl for more info about human scanning. Step 8: In the Next Lesson... Once you have a good set of photos of the object you want to scan, the hard part is over. 
In the next lesson I'll show you when and how to edit the photos you've shot, and then walk you through the process of uploading them to ReCap Photo where they will magically be transformed into 3D models!
https://www.instructables.com/Shooting-for-Photogrammetry/
The British Empire is known for being the biggest empire the world has ever seen. In 1922, 458 million people lived under the rule of the British crown, more than 20% of the world's population.1 The British were not originally among the first European nations to explore the wider world. Portugal and Spain acted as pioneers in the 15th and 16th century, discovering the world and building great empires that boosted their economies.2 The British, along with other European countries such as France and the Netherlands, started to follow this example. During the 17th century the British Empire established colonies in North and Central America and Asia.3 Driven by commercial interests, the British also wanted to dominate the African continent, which was widely recognized as being extremely rich in resources. The first regions were mainly controlled by companies and businessmen who were largely independent of the crown. The crown soon took over controlling the economic side of things. At first those colonies were self-managing; the British designated African tribal leaders who controlled the procedures.4 But the exploitation of resources was not the only economic factor. European nations captured many indigenous people and sold them as slaves on the American continent. During the 19th century, the British started to expand inwards and discovered many opportunities for agricultural projects that could benefit the European market. Competition arose as many nations fought for land and resources. In 1884, the German chancellor, Otto von Bismarck, arranged a conference in Berlin to stop European conflicts on the African continent. The Berlin Conference - also known as the Congo Conference - is widely known to be the historic event where the "Scramble for Africa" was finally organized.5 Within a few years, European empires had colonized nearly the entire continent; only Liberia and Abyssinia remained independent. The British goal was to form a trade route from Cairo (Egypt) to Cape Town (South Africa).6 As a result they conquered numerous regions, one of which was Kenya.
Arabic and European settlement
Kenya's geographical location is one of the main reasons for its colonial history. Prior to the invasion of European settlers, it was involved in trade partnerships with Arabic and Persian businessmen who worked with the local people at the Port of Mombasa, a city located on the eastern coast of the country.7 The Arabs have had a massive influence on Kenya's culture when it comes to food, language and music. In 1498, the Portuguese took control of the Port of Mombasa as they were interested in developing trade relationships with the country. They went on to leave in the 1600s; Kenya was then under the Islamic control of the Imam of Oman up until the British came along after the Berlin Conference in 1884. The Imperial British East Africa Company took control of the region, but the company ran into financial difficulties, whereupon the British government took matters into its own hands in 1895, founding the East Africa Protectorate and opening Kenya for British settlers in 1902 (cf. footnote no.4). Kenya was officially declared a British colony in 1920 and gained independence in 1963 following numerous rebellions and uprisings by the local population.
Oppression by the colonial government
Due to its history, Kenya was and still is a multicultural country made up of Asians (mainly Indians), Arabs and Bantu peoples. It is as rich in diversity as it is in culture. 
Even the indigenous population is very diverse; there are more than forty ethnic groups, the major ones being the Kikuyu (22%), Luhya (14%), Luo (13%) and the Kalenjin (13%).8 This heterogeneous society has led to a wide variety of cultures which the country is known for today. Nevertheless, British rule caused great damage to the people, as individuals and as a society. Although it is widely known that British interests in Kenya were based on commercial and financial interests, the authorities often operated under the cloak of bringing emancipation and civilization to the indigenous people. One of the first political actions taken by the colonial government was to open doors for British settlers in 1902.9 The fertile Highlands, traditionally the homeland of thousands of indigenous people, were exclusively "reserved for settlers of European origin"10 who renamed them "The White Highlands". All those people that lost their homes and their land were stripped of everything they had ever worked for and left behind with absolutely nothing. Segregation between the white minority and the local population was instantaneously normalized. Before this, in 1895, the colonial government had started a colossal railway project, "The Uganda Railway", which was supposed to help transport goods from Kisumu, a western city, to the Port of Mombasa. Kisumu and Uganda are close to Lake Victoria, where the colonial government hoped to profit from the fertile land.11 At this point, the British had already pushed many people into forced labor; they even brought in 30,000 workers from British India to help with the construction of the railway.12 The end of this project was a starting point for the westward expansion by the British settlers. As the settlers entered the western regions, even more locals were stripped of their land. All over the country Kenyans worked for European settlers, especially in urban areas around cities and towns like Kisumu, Nairobi and Mombasa. Those areas are still particularly diverse and highly populated. Although these settlers were extremely rich in comparison to the locals, workers did not earn much money considering how hard their work must have been. In certain regions, where locals still had their land, they could not grow certain goods as a result of government regulations. As one would assume, the Kenyans also had no say in colonial politics. They were systematically oppressed and therefore powerless against the colonial superpower and its government. As a result of this, they had to fully focus on building a new existence and a new life, involuntarily leaving hundreds of years of traditions behind. Whole ethnic groups and communities were separated and had to find a "new homeland". But it did not stop there. The government regularly tightened the regulations, so more Kenyans had to become low-paid workers. Settlers justified these actions by saying that the indigenous people "were [like] children and should be treated as such"13. To make things even worse, they had to pay taxes to their landlords and the government. After World War I, which ended with the Treaty of Versailles in 1919, many British soldiers were rewarded with valuable land; this led to even more expropriations.14 The continuous exploitation displeased thousands of locals, some of whom were brave enough to oppose the regime.
Resistance by the locals
In the early 1920s, right after Kenya was officially declared a colony, locals started to demand a say in colonial politics. 
The East African Association (Young Kikuyu Association), established in 1921, was one of the first organized attempts by the locals.15 The group fought for higher wages and against the agricultural restrictions for Kenyan farmers. Despite the fact that it had very progressive ideas, it was not very successful because other ethnic groups were opposed to the idea of having a Kikuyu leader. Up until the outbreak of World War II, the situation did not change for the better. One of the main causes was that people were deeply divided on how they wanted to achieve social change and an African majority rule. On the one hand, the Kenya African Union (KAU), led by Jomo Kenyatta (the man on the left in the picture above), advocated a moderate approach that was merely political; the Mau Mau, on the other hand, led by Field Marshal Mwariama (the man on the right in the picture above), pressed for a violent revolution.16 The Mau Mau had supporters within the ethnic groups that had had to deal with a lot of suffering; many of them claimed the KAU and Kenyatta could never achieve proper results. The violent group went on to plan and execute attacks on white settlers in the early 1950s (cf. footnote no.16). Subsequently the colonial government declared a state of emergency from 1952 to 1960 and sent over 50,000 soldiers to suppress the emerging rebellion (cf. footnote no.16). The result of those actions was catastrophic for the Kenyan population, especially for the Kikuyu. British soldiers detained vast numbers of people for years, even though many of them had no relation to the Mau Mau. Over the course of those years over "90,000 people were executed, [imprisoned and] tortured or maimed"17 according to the Kenya Human Rights Commission. Imprisonment and torture took place in concentration camps. The Mau Mau were soon defeated by the far superior British military (cf. footnote no. 16). In addition to that, even activists from the KAU were imprisoned although they had openly spoken out against the Mau Mau rebellion and had no connection to the group whatsoever. Nonetheless the rebellion carried on. Following a number of rebellions in Africa, the UK decided to guide Kenya into independence. With the "Kenya Independence Act", Kenya became independent in 1963 under Jomo Kenyatta, who became the nation's first president the following year.18
Individuals coping with the aftermath of imperialism
Even decades after the years of settlement and imperialism, there are still people who were direct victims of the oppression by the British. People still have the scars, still remember the family members they lost and are still traumatized; many have lost their land without receiving any reparations and some were raped by British soldiers. Hundreds of people have tried to contact British courts to discuss this matter, with some of them actually being successful and receiving reparation payments. (cf. footnote no.17) A story that shows us the struggles the people had to go through is Ngũgĩ wa Thiong'o's novel "Dreams in a Time of War". It is autobiographical and tells us how his life was influenced by the British Empire. Thiong'o explains how his brother got shot by a British police officer, how his family lost their land and how schools were turned into prison camps. This is just one of thousands of examples one could be talking about, but it was so remarkably successful because it is one of very few novels/books that provides a new narrative on the topic. It shows the cruelty and the horror Africans had to face during these difficult times. (cf. footnote no. 
14)
Cultural heritage and society
Kenya's rich culture still lives on today, but colonial rule has undoubtedly shaped the country in various ways. Many ethnic groups were forced to leave their homeland and therefore had to leave behind their traditions. New multicultural hotspots emerged. Nairobi, the current capital, is a by-product of the railway project initiated by the British. The influence of the British and the Arabs can also clearly be seen in today's society, as 86% of Kenyans are Christian and 11% are Muslim (cf. footnote no.8). English, alongside Kiswahili, remains the national language. The influence of the settlers can also be seen in the country's food culture; Kenyan cuisine ranges from Indian and British to Arabic influences. It is the society, however, that has taken the bigger hit from the colonial superpower. The division of ethnic groups started in the 1950s, when the separate views of the Mau Mau and the KAU made people choose; they had to pick a side. Different ethnic groups could not solve their conflicts, even after unity and the "Kenya Independence Act". The ethnic violence that spread prior to as well as after the elections in 2007 is a good example of the instability of Kenya's society.19 The country that exists today is clearly the outcome of the events during the settlement of British and Arab settlers.
https://www.grin.com/document/1027029
Blind mole rats (Nannospalax xanthodon Palmer, 1903) are subterranean mammals that are well known for their high tolerance to hypoxia and resistance to cancer. Due to their unusual habitat, these animals have developed several adaptations during their evolution. Therefore, this study aimed to identify possible structural differences between different regions of the skin in Nannospalax as well as to characterize the histological organization of a specialized skin patch, namely the scrub sensory organ, that has possibly arisen as a result of adaptation to underground life. Skin from the neck, the nasal pad and the scrub sensory organ was harvested from wild-type blind mole rats and fixed in 10% formaldehyde. Tissues were embedded in paraffin and blocked via routine histological procedures. 5-micron sections were taken and stained with Hematoxylin & Eosin and Cresyl Violet. Histopathological analysis of the skin revealed that the nasal pad and the scrub sensory organ were significantly thicker compared to the neck skin. The skin epithelium was structurally normal, although it was thicker in the neck skin in comparison to the other regions. We observed structures that are similar to pyramidal cells in the scrub sensory organ. In conclusion, we defined some histological properties of the scrub sensory organ in Nannospalax xanthodon for the first time in the literature. Specifically, the detection of structures that are similar to pyramidal cells is significant. These results indicate structural differences between different skin regions and suggest a role for the scrub sensory organ in somatosensation.
Keywords: Blind mole rat, skin histopathology, somatosensation, pyramidal cell
https://www.nesciences.com/abstract_info.php?page=info&paperID=587
Greenstone Koru pendant, 65mm x 48mm. This carving is inspired by the uncurling fern frond which is commonly found in New Zealand. It represents peace, tranquillity, personal growth, positive change, awakening, new life, new beginnings and harmony. A wonderful gift idea for a loved one. New Zealand greenstone has been highly prized for hundreds of years. The stone is characterised by its strong deep colour, which symbolises strength of character, good fortune and deep strong relationships. Wear this pendant with pride and affection. We believe a greenstone pendant is a gift for generations. Have you passed a treasured piece of jewellery on to a loved one?
https://thejadestudionz.com/all-products/pounamu-greenstone-koru-pendant-jade/
This book provides a step-by-step approach to the fundamentals of writing in English for beginning- to low-intermediate-level ESL students from secondary school through adult education. The book is for students with limited knowledge of English grammar and sentence structure, and it teaches them systematically and methodically to write clear, coherent, grammatically correct paragraphs. It takes students through the basics, differentiating between a word, a sentence, and a paragraph, and it teaches the difference between a paragraph and a list and how to distinguish complete and incomplete sentences. The following aspects of writing are covered:
- Conventions: punctuation and capitalization
- Fundamentals of a paragraph: topic sentence, supporting information
- Sentence structure: simple, compound, and complex
- Grammar: simple present, present continuous, past, and future tense/aspect constructions and usage; negative sentences; stative verbs; nonreferential there is/there are; frequency adverbs; and adjectives
- Parts of speech
About the Author Elizabeth Weal, author of Writing in English: Step by Step, is an ESL teacher and a committed activist in the English learning community in California. Elizabeth was a founding member and a 9-year board member of Upward Scholars, an organization providing financial, vocational, and academic support to the immigrant community. She has also authored seven ESL books for Pro Lingua Learning.
https://www.oblongbooks.com/book/9780979612824
Determinism is the philosophical doctrine that claims that all behavior results from preceding events or natural causes. It is a result of the two preceding ideas: Materialism and Causality. For example, a proponent of determinism would say that the relentless laws of physics (as applied to neurons in the brain of the reader) caused the reader to read this article today. Proponents of determinism sometimes claim that free will is an illusion, and that beings are no more able to control matter with their minds than any other soulless matter (such as a robot) can. One objection to determinism is that a universe in which people do not really make their own choices has no morality. Determinism and free will are often, but not always, seen as mutually exclusive. The idea that they might be compatible (or that free will even requires determinism) is called Compatibilism. It should be noted that even belief in a soul is not a solution, unless the soul itself behaves non-deterministically. Much of the belief in determinism was inspired by Newtonian physics, in which the universe was seen as a collection of billiard balls interacting according to the laws of physics. In this view, once the initial conditions of the universe were known, the behavior of the universe for all time would be determined. However, some (but not all) interpretations of quantum mechanics assert that the universe is non-deterministic, meaning that some events (such as when a radioactive atom decays) do not have immediate causes. However, even non-deterministic interpretations of quantum mechanics do not necessarily allow for free will. (After all, a robot could be made to use quantum uncertainties to "decide what to do", but this would clearly not show that the robot had free will.) Although non-deterministic interpretations of quantum mechanics allow for uncaused events, these events are statistically random and unaffected by human cognition. See also block time for an example of how the universe can be seen to be deterministic regardless of how the laws of physics operate within it.
http://www.fact-index.com/d/de/determinism.html