Snake is probably the most common name for an arcade game concept in which the player steers a line that grows in length as it moves, snaking across the screen along its own path. The concept originated in arcades in the 1970s and has remained popular ever since. Many versions have been released for various consoles over the years, including the Nintendo GameCube, Xbox, PlayStation and others. The original arcade version was quite difficult, requiring good reflexes and careful timing.
The Snake game has a number of different levels, each requiring a different kind of strategy to defeat. This game is a bit different from other arcade games in that there is no 'enemy' to fight against in these versions. Instead, the object is to navigate a series of rows and columns, dodging obstacles, shooting down snakes, and so on.
In the original version of Snake, the player could choose between three different levels, and the goal was to eliminate all snakes on each level. But in many versions, the levels are combined to create multiple levels in one, increasing the fun factor of the game.
To increase the adventure, many versions of the game’s story revolve around its hero. For example, the level in the original game where Snake first finds out that the snake he is trying to kill is harmless is made up of a series of different levels. Each level brings with it new obstacles, new enemies, and so on. Most of the level involves Snake chasing after the snake, dodging its attacks, avoiding its tail, trying to get to higher ground, etc. In a few cases, there are other characters introduced into the game, such as a princess, who helps Snake solve his problems.
When playing the Snake game, it is very important to use the right tactics to overcome the challenges. If the game is too easy, players can become complacent and lose interest; if a level is too difficult, they can become frustrated and quit.
Another thing to remember when playing Snake is to make sure you are playing the game on the best possible platform. Different platforms have different speed limitations and may be prone to crashing and freezing. The best option is to always play on a high-quality computer or gaming console; that way, glitches and errors are minimized and the game runs faster.
The graphics and sounds of the game are also very important and the Snake game has two soundtracks: the primary soundtrack is composed of the main snake’s screeching and breathing, and the secondary track is composed of the player’s own breathing. In some versions, you are not required to stop and listen to the snake’s breathing at any time, but in some, the sound track does play.
The controls for the game are generally simple. If your joystick doesn't respond quickly enough, or the game is too challenging, try adjusting the sensitivity settings to suit your skills. When playing on consoles, the player can hold the joystick in a direction and release it when the snake reaches the right position on the screen.
Another important part of playing the Snake game is learning how to keep Snake hidden. This is most important when trying to sneak up on a snake and take it down quickly, especially if the snake is being guarded by other creatures.
The snake is a very smart creature, and once it senses a threat, it immediately springs back into action. Therefore, it is important to make sure you do not scare the snake by approaching it directly, or standing directly in front of it, because it will likely run away and hide.
Although the game is a lot of fun, it is important to note that Snake is not a realistic simulation, since it only allows Snake to run around and jump over obstacles. If you have difficulty controlling it, this is not a game you would want to play for hours, especially if you cannot handle sudden changes in the environment. | https://chatterdc.com/the-history-of-the-snake-game/ |
›Apocalypse Now‹ meets ›Bambi‹: this animated mêlée of blood, guts and dayglow colours pits teddy bears and unicorns, of all things, against each other and really lets it rip.
Whoever drinks the blood of the last unicorn will attain eternal youth and beauty – that’s what the teddy bears believe, and it’s what has kept them in a centuries-long war with the unicorns. The irony of these circumstances plays out when the combat unit of ursine brothers Bluey and Tubby is sent on a special mission into the enchanted forest. In all seriousness, a violent battle with the unicorns ensues. In this situation, Bluey is constantly filled with an insatiable desire to prove to everyone that he is destined for greatness. Thus, a showdown between the opposing factions becomes as inevitable as a struggle between the brothers themselves. Yes, ›Unicorn Wars‹ is a story of fascism, the dark side of religion and the senselessness of violence in contrast to the poetic beauty of life and nature – but one that’s told in bright dayglow colours, somewhere between the cuteness of ›Hello Kitty‹ and the brutality of ›South Park‹. Absurdly funny. | https://www.iffmh.de/festival/programme/films/unicorn-wars/index_eng.html |
The "Amazing Man" is a non-playable character in the first generation of Pokémon games that jumps to places that normally he would not be able to reach. There are multiple variations recorded.
This glitch only works in some language versions of Red and Blue. It does not work in any version of Yellow or the Spanish versions (and possibly others) of Red or Blue.
Celadon City's Amazing Man
- Fly to Celadon City and, if you have gotten one, deposit your Bicycle into the PC. Walk west toward Cycling Road.
- When you go inside the building, walk straight across towards the alternate exit.
- When the guard talks to you, forcing you to walk up to him, look behind him to see this:
Cinnabar Island's Amazing Man
- Fly to Cinnabar Island. Make sure you have not unlocked the door to the gym, nor the key from the Pokémon Mansion.
- Surf a bit off the eastern beach, and get out directly below the gym.
- Continue walking to the left until a message appears reading "The door is locked...". Look on the roof to find this:
His location will change depending on how far away you are from the gym when you exit the water. If you land directly in front of the gym, he will appear where he is in the picture. For each space farther below the gym that you land before walking along the coast and turning to face the door, he will appear one space lower than in the picture: land one space below and he appears one space lower, two spaces below and he appears two spaces lower, three spaces below and he appears three spaces lower.
The Amazing Man of Cinnabar can also appear directly below the player if the player goes to the front of the Cinnabar Gym after exiting the Pokémon Lab in Cinnabar. The steps to activate the glitch follow:
- Exit Pokémon Lab.
- Walk right until you cannot go right anymore without surfing.
- Walk up until you're stopped by the gym.
- Walk left one space (you should now be in front of the gym). The glitch should activate; this will appear:
Cerulean City's Amazing Men
Nugget Bridge Amazing Man
- Go to Cerulean City and walk north towards the Nugget Bridge.
- Make sure that the first trainer is one space above the visible screen. (He is not visible, but if you walk one space north he will appear.)
- Walk north and press start at the same time to see:
If this glitch is performed while surfing, he may appear in the grass.
Unknown Dungeon Amazing Man
Note: This can only be done before you beat the Elite 4.
- Go to the entrance to the Unknown Dungeon.
- Walk to the left of the man standing in front of it until he disappears.
- Step toward him while pressing START; this has the same effect as the one shown above.
Amazing Bike Seller
- Enter the Bike Shop in Cerulean City
- Step left once
- Step up, and while walking, press A to interact with the bike
The Bike seller should be floating in the void on the left side of the screen.
Route 6 Amazing Man
- Go to the northern exit of Vermilion City, towards Route 6.
- Stand next to the roofs, walk up once and whilst walking, press and hold START.
Route 15 Amazing Man
- Go to the upper part of Route 15 (it is blocked by a cuttable bush; cut it down and walk to where it was).
- Walk four steps left; the Biker who was in the upper-right corner should be offscreen.
- Step right and immediately press START.
Cycling Road Amazing Man
- Go to the upper part of Cycling Road, just before the slope begins. Press and hold A to stay in place, and cycle right until you hit a wall.
- Walk up, and immediately press START.
- You should see a Biker in the top-left corner of the screen until you close the menu.
Pewter City Amazing Man
If one performed the Skip Pewter Gym glitch, one can walk east until the Youngster disappears from the screen. Then walk one step left and immediately press START; the Youngster will be standing on a rock until the menu is closed.
Pallet Town Amazing Man
If one surfs on the channel south of Pallet Town and, upon passing the "barriers", presses and holds START to bring up the menu, the large man will appear in the top-left corner of the screen.
Victory Road's Amazing Boulder
Setting up an instant encounter script on Victory Road 1F (south entrance) can open the Start menu as soon as Red enters the floor, bringing up a boulder in the top-left corner of the screen.
Explanation
This glitch happens when an NPC becomes visible for the first time since the player entered the map. It happens either naturally, when a text box pops up on the same step (the Celadon City and Cinnabar Island Amazing Men), or can be provoked manually by opening the Start menu on that step; in the latter case it only works if the menu is opened on the precise step the player enters a new map.
The Amazing Bike Seller is considered a special case of the first case, because for some reason the bike uses the "hidden objects" code path, which takes precedence over updating sprites. Reading signs or talking to NPCs won't work this way.
In both cases, the NPC appears at the top left corner of the screen, at the time the player enters the map. That means that, for example, if the player has moved three steps left and one step up since entering the map, the NPC will appear three steps right and one step down from the top left corner (this is the case for the Celadon City Amazing Man).
Every 16 steps "wrap around"; for example, after exiting the Pokémon Lab, the player is considered to have entered the map at the position of the Pokémon Lab door. From there to the tile in front of the Cinnabar gym door, the net displacement of the player is 12 steps right and 5 steps up. Therefore the Amazing Man appears 4 steps right, 5 steps down from the top left corner.
In particular, in the "artificial" case, the NPC always appears at the top-left corner of the screen, because the trick only works on the same step the player enters the map. A similar case may happen when performing the Ditto glitch, when the Start menu pops up upon entering a new map.
This glitch was partially patched in European releases of Red and Blue, where NPCs have a correct position, but keep looking down while the text box is up.
This glitch was fully patched in all localizations of Pokémon Yellow. | https://glitchcity.wiki/Amazing_Man |
This is an easy chocolate quick bread that you can still make even when you are out of eggs, butter or oil. This bread is soft, moist and chocolatey. It takes only about ten minutes to prepare and everything can be mixed in one bowl, no mixer needed.
This is one of my favorite quick breads to make when I’m short on time and low on ingredients like eggs and butter.
Ingredients
- Chocolate Ice Cream
- Self-Rising Flour
This chocolate bread needs only chocolate ice cream and self-rising flour. If you like adding chocolate chips to your chocolate bread, you can do that as well. I like adding chocolate chips on top for appearance.
Chocolate Ice Cream: Make sure to use full fat regular chocolate ice cream. I used Breyer’s chocolate ice cream which does not contain any eggs. I don’t recommend using low fat ice cream because it will change the texture of your bread.
Self-Rising Flour: Self-rising flour is all-purpose flour that already has baking powder and salt mixed in. If you don’t have self-rising flour, you can also make your own by mixing together all-purpose flour, baking powder and salt.
How to Make Chocolate Bread
The ice cream is first melted down. This will help provide a more accurate ice cream measurement. The self-rising flour is then whisked in until smooth. The bread batter is then ready to be baked. It’s that easy.
More Easy Bread Recipes
2 Ingredient Chocolate Bread
Ingredients
- 3 cups (710 ml) melted full fat chocolate ice cream
- 2 cups (250 g) self-rising flour
Instructions
- Preheat oven to 350°F (177°C). Line an 8.5 x 4.5 inch loaf pan with parchment paper. (You can also use a 9 x 5 inch pan but your bread will not rise as high and cooking time will need to be reduced).
- Add melted ice cream to a large mixing bowl. Sift in the flour using a mesh strainer or flour sifter. The flour needs to be sifted in to make it easier to mix. Mix with a whisk (or a mixer if you prefer) until the batter is smooth and no flour clumps remain. If you want to add chocolate chips to your batter, stir them in now.
- Pour batter into prepared loaf pan. If you wish to add chocolate chips on top, sprinkle them on now. Bake the bread for about 30 minutes or until a toothpick inserted comes out clean (baking time can vary by 10-15 minutes depending on the brand of ice cream you use).
- Let bread cool fully before cutting and serving.
Notes
- I used this 8.5 x 4.5 loaf pan.*
- I used Lily's self-rising flour*
- *This product link is an affiliate link. This means I earn a commission from qualifying purchases.
- This bread is lightly sweetened. I wanted to make a chocolate bread that wasn't super sweet so that it can be enjoyed for breakfast. To add more sweetness, add 1 cup of chocolate chips to the batter. You can also add more chocolate on top.
- To make your own self-rising flour, combine 2 cups all purpose flour, 3 tsp baking powder and 1/2 tsp salt. Whisk until evenly combined.
- Make sure you melt the ice cream before measuring out the 3 cups. Ice cream brands can vary greatly on how much air they whip into their ice cream so if you measure it while it is still solid, you won't have an accurate measurement.
- The 2 cups of flour is measured before sifting. To sift the flour in, you can either use a flour sifter or use a fine mesh strainer.
- It is very difficult to mix the batter and completely eliminate flour lumps if you do not sift it in.
- The estimated nutrition provided is calculated using Breyers Classic chocolate ice cream and does not include the optional chocolate chips.
Nutrition
The nutrition information provided are only estimates based on an online nutritional calculator. I am not a certified nutritionist. Please consult a professional nutritionist or doctor for accurate information and any dietary restrictions and concerns you may have. | https://kirbiecravings.com/2-ingredient-chocolate-bread/ |
With the decline of the mining industry, many workers who lost their jobs in the mines turned to the construction industry because of its low barriers to entry and relative employment security compared to mining jobs. Construction is now the first source of employment for male workers in Australia, with 9 out of 10 construction workers being men.
In 2018, there were 1.2 million people working in the construction industry in Australia, just under 10% of the total workforce. Between 2012 and 2018, the number of workers in the industry increased by almost 20%. This number is expected to increase by another 10% between now and 2022, which is very encouraging for the future of the industry.
Full-time employment in construction is better than in most industries
Construction fares better than other industries when it comes to the share of workers employed full time. According to the Australian Bureau of Statistics, the national average across all industries is around 70%. In the building industry, 85% of workers are full time, which makes it the second-best industry for full-time employment after mining.
However, construction is also the industry with the most self-employed workers, with 1 out of 3 construction workers being self-employed. Regarding weekly hours and earnings, full-time construction workers work an average of 41 hours a week and earn on average $1,250 per week before taxes.
Trades and young workers are the most represented
Also according to the Australian Bureau of Statistics, the top 10 occupations in terms of people employed are carpenters, electricians, managers, plumbers, painters, labourers, plasterers, concreters, surveyors and plant operators. Together, they account for half of all the people working in the industry. Trade services represent 65% of the total number of people employed in the industry.
The building industry is also the first source of employment for young workers. 45% of the workers are 15 to 34 years old. 35% of the workers are 45 years old and older. Finally, only 20% of the workers are 35 to 44 years old.
In conclusion, construction is a fairly young industry that is dominated by male workers. It has the second greatest proportion of full time workers of all industries. The number of people employed has been increasing in the last few years and should keep increasing steadily in the next couple of years.
| http://www.fitnologic.com.au/blog/profile-of-the-construction-industry-workers |
On more than one occasion I’ve walked into the gym to have Jason, my coach, say “Ok. We are starting out with snatch from the knee.” My response is always, “block or hang?” Both the blocks and the hang have particular benefits, but how do you decide which to use?
An Olympic weightlifter has one goal: to lift more weight in the snatch and clean & jerk. To reach these goals, lifters train in phases. They'll use a strength cycle to increase leg strength and then work to transfer the new strength into the Olympic lifts. They may also do a phase where they put more emphasis on a weaker lift. During these phases, athletes will often work from the blocks or the hang position to improve technique.
What’s the Difference?
Lifting from the blocks means you are lifting from anything placed higher than ground level, with the starting position resting on a pulling block or some other form of riser. The hang may be done from any of the same positions as blocks, but the athlete is holding the weight in the position rather than having it resting on a block.
Seems easy enough, but they couldn't be more different. Try taking 85% of your snatch max and lifting it off the blocks from the knee, and then take that same percentage and lift it from the hang at the knee position. You'll see what I mean.
Blocks: Why?
While both training techniques can start from identical positions, they accomplish different things, and a coach usually programs them for a variety of different reasons.
Greg Everett explains that blocks are “better for increasing rate of force development relative to lifts from the hang.” Meaning, from the block, you should be performing the lift from a full stop, with no prior momentum gained from pulling off the floor. Some lifters use some form of a dynamic start from the blocks, but the weight still begins at a dead stop so the lifter has to get it moving from a new position.
Jim Schmitz, US Olympic Team Coach ('80, '88, '92), says that lifting from blocks can also help the athlete emphasize the top of the pull. I believe that both exercises can provide this stimulus, but the block allows me to better feel the positions without the fatigue of holding the weight in my hands.
Some lifters will be able to lift more from various block positions than they can from the ground. This could be because the athlete is able to place themselves in a more balanced position while the weight rests on blocks, or because a weakness in the athlete's traditional starting position causes a poor execution of the full lift. According to Schmitz, a big thing to remember is to make sure your work transfers to the full lifts. Block work can become an overused exercise if the athlete doesn't learn to convert the strength to the floor properly.
Blocks: When?
Because blocks can be utilized in a wide range of positions and rep schemes, you can use them mixed in with pulls to emphasize the top of the lift, or you can work from different positions in order to focus on an athlete's weak point. For example, if an athlete has trouble negotiating the pull around the knee, you could have them lift from directly below the knee or directly above the knee to get a feel for the positions. Block work can often be performed relatively heavy, sometimes heavier than the athlete's maximum from the floor, depending on their skill from the chosen position.
Another great use for blocks is to work around an athlete’s injuries. If an athlete has a back or knee injury, it is possible that they could still perform the Olympic movements from a different position, without the same pain. This can help to prevent strength loss through detraining and adds variety to an injury program.
It is important to be cautious while performing lifts from blocks, because missing can become dangerous. A missed lift can hit the edge of a block and bounce in any direction. Make sure the pulling blocks are clear of change plates that the bar could land on.
The Hang: Why?
From the hang, the lifter can better feel and practice proper balance rather than having the bar supported by blocks. There is also something to be said for the strength gains accumulated due to time under tension — a mid-shin hang position causes greater tension during the start of the lift than it does from the floor.
The hang also forces your backside to work more in order to hold the position in place. Men’s Olympic Weightlifting Head Coach (2000), Gayle Hatch, never used lifts from blocks while training his athletes, because he believed strongly in the isometric strength gained from holding the barbell in the correct positions.
The hang is also a great way for the athlete to emphasize the turnover and catch of the lifts. The athlete can feel whether they are balanced during the execution, and because of the shortened range of motion, they have less time to make balance adjustments and must be more precise.
However, Schmitz reported that the hang can produce unnecessary hip, back, and leg motion prior to the execution of the lift. Even if you perform the hang variations with a pause or stop, the lifter can still generate some dynamic motion.
The Hang: When?
Similar to the blocks, the hang position can be used with a wide variety of starting positions. You could argue that the variety from the hang is even greater than that from the blocks, because depending on the height of your blocks, you may not be able to get your desired position. The most common variations are high hang or “hip clean,” power position, from the knee (may also be done slightly above or slightly below), and the hang from mid-shin. More detail about each position can be found here.
Rep schemes for hang lifts are consistent with that of the blocks and most traditional Olympic lifting schemes: 1-5 repetitions. As the starting position moves closer to the floor, the athlete will be able to perform the movements with a higher percentage of their true maximums.
Similarly to the blocks, some athletes will actually be able to lift more weight from the hang variations than they are capable of from the floor. This is generally due to the athlete being unbalanced when performing the full lift, but it could also be caused by weakness or flexibility issues from the floor. Very rarely did I have an athlete capable of lifting more from the hang, but in cases where I did, they were almost always former football players who completed a lot of reps from the hang (generally with some major rocking motion) before converting to weightlifting.
As with all exercises, I believe it is important for the athlete to understand the intentions of the exercise programmed, but this is especially true with the hang. For example, if my coach is using a hang position to strengthen the turnover of the lift in the catch, I won’t use lifting straps because straps would defeat the point. If a coach is trying to strengthen my back and programs pause lifts, I try to truly pause and not use extra motion to get the weight moving.
There you have it, block vs. hang. Remember that all of these exercises are simply tools used to develop the full snatch and clean & jerk. Keep in mind that there are many different opinions about how and why these lifts are used. The Russian system uses a lot of variation in the lifts, starting from many different positions, while the Bulgarian system uses a simpler approach, taking everything from the floor. Generally, the best approach is the one that works for you and your athletes.
Featured Image: Catalyst Athletics (@catalystathletics)
Editors note: This article is an op-ed. The views expressed herein are the authors and don’t necessarily reflect the views of BarBend. Claims, assertions, opinions, and quotes have been sourced exclusively by the author. | https://barbend.com/weightlifting-blocks-vs-hang/ |
Development and Evaluation of a Pharmacist-Driven Screening Tool to Identify Patients Presenting to the Emergency Department Who Are Eligible for Outpatient Treatment of Deep Vein Thrombosis.
Background: Deep vein thrombosis (DVT) is a critical and costly health issue. Treatment in the outpatient setting is preferred over the inpatient setting. However, there is a lack of evidence regarding how best to identify patients who are ideal candidates for outpatient DVT treatment.
Objective: To design and evaluate a pharmacist-driven screening tool for the identification of patients presenting to the emergency department (ED) at a community hospital with DVT who are appropriate for outpatient treatment.
Methods: This study was conducted in sequential phases: compilation and vetting of screening criteria, descriptive evaluation of the criteria through retrospective chart review, and quantification of potential cost savings from avoided admissions. Criteria were collected via literature search and assembled into a screening tool, which was applied retroactively to a cohort of ED patients admitted with a DVT diagnosis.
Results: A screening tool was developed with multidisciplinary input and consisted of 5 categories with individual patient and disease-state criteria. The majority (91%) of patients reviewed would not have qualified for outpatient DVT treatment based on the retrospective application of the screening tool. The most common disqualification criteria category was high risk of bleeding/clotting (n = 81), and the most frequently represented parameter within that category was antithrombotic therapy prior to admission (n = 53).
Conclusion: A screening tool may not be the most efficient method for health-care practitioners such as pharmacists to identify ED patients appropriate for outpatient management of DVT. Other avenues should be explored for improving the cost-effective management of these patients. | https://clinowl.com/development-and-evaluation-of-a-pharmacist-driven-screening-tool-to-identify-patients-presenting-to-the-emergency-department-who-are-eligible-for-outpatient-treatment-of-deep-vein-thrombosis/ |
This article was originally published on the Red Hat Customer Portal. The information may no longer be current.
Application threat modeling can be used as an approach to secure software development; it is a good preventative measure for dealing with security issues, and it mitigates the time and effort required to deal with vulnerabilities that may arise later throughout the application's production life cycle. Unfortunately, security often seems to have no place in the development life cycle, even though CVE bug-tracking databases and hacking incident reports prove that it ought to. Some of the factors that seem to have contributed to this trend of insecure software development are:
a) Iron Triangle Constraint: the relationship between time, resources, and budget. From a management standpoint there's an absolute need for the resources (people) to have appropriate skills to be able to implement the software business problem. Unfortunately, resources are not always available and are an expensive factor to consider. Additionally, the time required to produce quality software that solves the business problem is always an intensive challenge, not to mention that constraints in the budget seem to have always been a rigid requirement for any development team.
b) Security as an Afterthought: taking security for granted has an adverse effect on producing a successful piece of software. Software engineers and managers tend to focus on delivering the actual business requirements and closing the gap between when the business idea is born and when the software has actually hit the market. This creates a mindset that security does not add any business value and it can always be added on rather than built into the software.
c) Security vs Usability: another reason that seems to be a showstopper in a secure software delivery process is the idea that security makes the software more complex and less intuitive to use (e.g. security configuration is often too complicated to manage). It is absolutely true that incorporating security comes with a cost. Psychological acceptability should be recognized as a factor, but not to the extent of ruling out security as part of the software development life cycle.
With (a) and (b) being the main factors for not adopting security into the Software Development Life Cycle (SDLC), development that does not bring in security at the early stages turns out to have disastrous consequences. Many vulnerabilities go undetected, allowing hackers to penetrate the applications and cause damage and, in the end, harm the reputations of the companies using the software as well as those developing it.
What is Threat Modeling?
Threat modeling is a systematic approach for developing resilient software. It identifies the security objective of the software, threats to it, and vulnerabilities in the application being developed. It will also provide insight into an attacker's perspective by looking into some of the entry and exit points that attackers are looking for in order to exploit the software.
Challenges
Although threat modeling has proven useful for eliminating security vulnerabilities, it adds a challenge to the overall process due to the gap between security engineers and software developers. Because security engineers are usually not involved in the design and development of the software, it often becomes a time-consuming effort to embark on brainstorming sessions with other engineers to understand the specific behavior of the software and define all of its system components, especially as the application gets complex.
Legacy Systems
While it is important to model threats to a software application in the project life cycle, it is particularly important to threat model legacy software because there's a high chance that the software was originally developed without threat models and security in mind. This is a real challenge as legacy software tends to lack detailed documentation. This, specifically, is the case with open source projects where a lot of people contribute, adding notes and documents, but they may not be organized; consequently making threat modeling a difficult task.
Threat Modeling Crash Course
Threat modeling can be drilled down to three steps: characterizing the software, identifying assets and access points, and identifying threats.
Characterizing the Software
At the start of the process the system in question needs to be thoroughly understood. This includes reviewing the correlation of every single component as well as defining the usage scenarios and dependencies. This is a critical step to understanding the underlying architecture and implementation details of the system. The information from this process is used to produce a data flow diagram (DFD), which provides the best representation for identifying different security zones where data will be in transit or stored. Depending on the type and complexity of the system, this phase may also be drilled down into more detailed diagrams that could be used to help understand the system better, and ultimately address a broader range of potential threats.
Identifying Assets and Access Points
The next phase of the threat modeling exercise is where assets and access points of the system need to be clearly identified. System assets are the components that need to be protected against misuse by an attacker. Assets could be tangible, such as configuration files, sensitive information, and processes, or could potentially be an abstract concept like data consistency. Access points, also known as attack surfaces, are the paths adversaries use to access the targeted endpoint. These could be an open port or protocol, file system read and write privileges, or an authentication mechanism. Once the assets and access points are identified, a data access control matrix can be generated and the access level privilege for each entity can be defined, as in the sketch below.
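As a toy illustration (the actor and asset names here are hypothetical, not from any particular methodology document), such a matrix can be as simple as a mapping from (actor, asset) to the privilege level granted:

```python
# A minimal, hypothetical data access control matrix: for each actor and
# asset, record the access level granted. Anything not listed is implicitly
# denied, which makes missing or overly broad grants easy to spot.
ACCESS_MATRIX = {
    ("anonymous_user", "public_docs"):   "read",
    ("authenticated_user", "user_data"): "read/write",
    ("admin", "config_files"):           "read/write",
    ("backup_service", "user_data"):     "read",
}

def access_level(actor, asset):
    return ACCESS_MATRIX.get((actor, asset), "none")

assert access_level("anonymous_user", "config_files") == "none"
```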
Identifying Threats
Given the first two phases are complete, specific threats to the system can be identified. Using one of the systematic approaches towards the threat identification process can help organize the effort. The primary approaches are: attack tree based approach, stochastic model based approaches, and categorized threat lists.
Attack trees have been used widely to identify threats, but categorized lists seem to be more comprehensive and easier to use. Some implementations are Microsoft's STRIDE, OWASP's Top 10 Vulnerabilities, and CWE/SANS' Top 25 Most Dangerous Software Errors. Although the stochastic based approach is outside the scope of this writing, additional information is available for download.
The key to generating successful and comprehensive threat lists against external attacks relies heavily on the accuracy of the system architecture model and the corresponding DFD that's been created. These are the means to identify the behavior of the system for each component, and to determine whether a vulnerability exists as a result.
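One common way to make this walk systematic is the STRIDE-per-element idea: associate each DFD element type with the threat categories that typically apply to it, then enumerate candidates by walking the diagram. The sketch below is illustrative; the mapping is abbreviated from the commonly published STRIDE-per-element chart.

```python
# Sketch of STRIDE-per-element: map each DFD element type to the STRIDE
# categories usually considered for it, then enumerate candidate threats
# by walking the diagram. The mapping below is abbreviated/illustrative.
STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data_flow": ["Tampering", "Information disclosure", "Denial of service"],
    "data_store": ["Tampering", "Information disclosure", "Denial of service"],
}

def candidate_threats(dfd_elements):
    """dfd_elements: list of (name, element_type) pairs taken from the DFD."""
    return [(name, threat)
            for name, etype in dfd_elements
            for threat in STRIDE_BY_ELEMENT.get(etype, [])]

threats = candidate_threats([("login API", "process"), ("user DB", "data_store")])
```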
Risk Ranking
Calculating the risk of each relevant threat associated with the software is the next step in the process. There are a number of different ways to calculate this risk; however, OWASP has already documented a methodology that can be used for threat prioritization. The crux of this method is to determine the severity of the risk associated with each threat and come up with a weighting factor to address each identified threat, depending on the significance of the issue to the business. It is also important to understand that the threat model has to be revisited occasionally to ensure it has not become outdated.
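As a rough sketch of that OWASP-style calculation (the factor scores and bucket boundaries follow the published OWASP Risk Rating Methodology, but treat the details here as illustrative): likelihood and impact factors are each scored 0-9, averaged, bucketed into low/medium/high, and combined into an overall severity.

```python
# Illustrative sketch of the OWASP Risk Rating approach: average the
# likelihood factors and the impact factors (each scored 0-9), bucket
# the averages, and look up the overall severity.
def bucket(score):
    return "LOW" if score < 3 else "MEDIUM" if score < 6 else "HIGH"

SEVERITY = {  # (likelihood bucket, impact bucket) -> overall severity
    ("LOW", "LOW"): "Note",    ("LOW", "MEDIUM"): "Low",      ("LOW", "HIGH"): "Medium",
    ("MEDIUM", "LOW"): "Low",  ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
    ("HIGH", "LOW"): "Medium", ("HIGH", "MEDIUM"): "High",    ("HIGH", "HIGH"): "Critical",
}

def overall_severity(likelihood_factors, impact_factors):
    lik = sum(likelihood_factors) / len(likelihood_factors)
    imp = sum(impact_factors) / len(impact_factors)
    return SEVERITY[(bucket(lik), bucket(imp))]

# e.g. moderately likely, high impact -> "High"
print(overall_severity([5, 4, 6, 5], [7, 8, 6, 7]))
```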
Mitigation and Control
Threats selected from previous steps now need to be mitigated. Security engineers should provide a series of countermeasures to ensure that all security aspects of the issues are addressed by developers during the development process. A critical point at this stage is to ensure that the security implementation cost does not exceed the expected risk. The mitigation scope has to be clearly defined to ensure that meaningful security efforts align with the organization's security vision.
Threat Analysis/Verification
Threat analysis and verification focuses on the security delivery, after the code development and testing has started. This is a key step towards hardening the software against attacks and threats that were identified earlier. Usually a threat model owner is involved during the process to ensure relevant discussions are had on each remedy implementation and whether the priority of a specific threat can be re-evaluated.
How Threat Modeling Integrates into SDLC
The threats identified up to this point allow the engineering team to make better-informed decisions during the software development life cycle. With continuous integration and delivery being the key to agile development practices, any extra paperwork is a drag on the development flow. This also resurrects the blocking issues mentioned earlier (the Iron Triangle constraint), which might delay the release as a result. Therefore, it is essential to automate the overall threat modeling process into the continuous delivery pipeline, to ensure security is enforced during the early stages of product development.
Although there is no one-size-fits-all approach to automating threat modeling into SDLC, and any threat modeling automation technique has to address specific security integration problems, there are various automation implementations available, including Threat Modeling with Architectural Risk Patterns - AppSec USA 2016, Developer-Driven Threat Modeling, and Automated Threat Modeling through the Software Development Life-Cycle that could help integrating threat modeling into the software delivery process.
Should you use Threat Modeling anyway?
Threat modeling not only adds value to a company's product but also encourages a security posture for its products by providing a holistic security view of the system components. This is used as a security baseline where any development effort can have a clear vision of what needs to be done to ensure security requirements are met, with the company benefiting in the long term.
About the author
Hooman Broujerdi is a former member of the Product Security team at Red Hat focused on JBoss Fuse and ActiveMQ. | https://www.redhat.com/en/blog/how-threat-modeling-helps-discover-security-vulnerabilities |
Kris Roberts is back to cover the second GDC 2014 Oculus VR Session: Developing Virtual Reality Games and Experiences.
Virtual reality is significantly different from monitor-based games in many ways. Many choices that are merely stylistic for traditional games become incredibly important in VR, while others become irrelevant. Working on the Team Fortress 2 VR port taught me a lot of surprising things. Some of these lessons are obvious when looking at the shipped product, but there are many paths we explored that ended in failure, and minor subtleties that were highly complex and absolutely crucial to get right. In this talk, I'll focus on a handful of the most important challenges that designers and developers should be considering when exploring virtual reality for the first time.
Tom started his presentation with a quick history of Oculus, an overview of the specs for DK2, and some interesting statistics about just how fast their developer community has grown. In March 2013 they shipped the first 10K Kickstarter and initial-order devkits. Over the course of the rest of the year, 55K more devkits shipped. Interestingly, there are 70K developers registered on the Oculus dev portal, which means that five thousand registered developers don't have a devkit!
Before getting into the meat of the content of his talk, Tom asked the audience to allow him to do a little "preaching" and the message was loud and clear: be kind to your players. His feeling is that as developers we tend to get used to the VR we are working on and build up a tolerance to aspects or issues which can be jarring and uncomfortable for our users. It's important to keep in mind that everyone responds to VR differently and that care needs to be taken to keep the intensity down so that the experience is enjoyable for the majority of players. He suggests having options that allow eager players to turn up effects and movement if that's what they want, but to have the default be low and make it easy for players to change and find the level that is best for them.
The vestibulo-ocular reflex (VOR) is the adaptation that helps keep our eyes fixed on an object even while our head moves. It's a smooth motion of the eye muscles driven by our ears' sensitivity to rotation – it's involuntary, happens whether we are seeing anything or not (eyes closed or in the dark), and usually gives a 1:1 compensation between head rotation and eye motion. The tuning of the system is also extremely slow – on the order of weeks – and is most commonly experienced in the real world when people get a new eyeglass prescription. VOR gain can be thought of as the ratio between ear motion and eye response. Like a new glasses prescription, VR can change this proportion and mess with the way our brain responds to the difference in VOR gain, and it's almost always unpleasant. To preserve VOR gain, it's imperative that the simulation render images that match the HMD and user characteristics. Unlike a desktop game, FOV is not an arbitrary choice but rather needs to be calculated with regard to the physical pitch of the display and the user's IPD. The SDK helps you match this precisely with data from the user configuration tool, and we are discouraged from changing the settings no matter how tempting that may be.
Moving on to the IPD, Tom explained that it's more complex than most people think. Instead of just being the distance between the eyes, it's actually two components per eye: nose-to-pupil distance and eye relief (distance from the lens surface to the pupil), and neither of these is related to the dimensions of the HMD. It was interesting to note that these are seldom symmetrical. Taken together, the components form a center-to-eye vector which is set during user configuration and stored in the user profile. This center eye position is roughly where players "feel" they are and is a good place to use for positional things like audio, line-of-sight checks and the origin for reticule/crosshair ray-casts. Within the application, there should be an easy way for users to reset their position when they are in a neutral forward pose, set by calling sensor->Recenter().
Although Tom was emphatic about not messing with the user's settings, scaling them uniformly is a way of effectively changing the world scale – and something he suggests we do experiment with. In general most users find reducing the world scale helps reduce the overall intensity, as it scales down all motions and accelerations – but don't go too far or convergence can get tricky.
One question that every VR application needs to answer is: how tall is the player? The SDK provides a value for eye height off the ground calculated from the user's real-life height. Sometimes that makes sense to use, and other times it doesn't. If your game is about being a character of a particular stature, the player's real-life size may not be a good value to use. In other applications, using the player's real size may help them feel comfortable and ease them into presence. Another interesting observation is the phenomenon of "floor dragging", the sense your brain gives you of how far away the floor is. The same VR experience can feel very different with the player seated as opposed to standing up!
Animating the player character presents a set of problems that almost every game will have to consider. There are often unavoidable transition animations when you enter/exit vehicles, get up after being knocked down, interact with elements in the world, and the like. There is the temptation to animate the camera as you would in a desktop game, but in Tom's experience from TF2 this almost never works well for the player. In practice his advice is to almost always do snap cuts, or fade out and fade back in, while never taking camera control away from the player.
Animating the player's avatar can have a strong positive impact, especially with first-person actions like high fives or calling for a medic in TF2. But these animations need to play without moving the camera position – the virtual camera should always move with the player's real head, and the position of the avatar's head should coincide with the camera position. To accomplish this, Tom suggests an approach he calls "Meathook Avatars". The idea is pretty simple: find the avatar's animated head position, eliminate (scale to zero) the avatar's head geometry, and then move the body so it lines up with the player's virtual camera position. Visualize it as hanging the animated body of the avatar from a meathook located at the camera position.
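The core of the trick is tiny; here is a minimal sketch (hypothetical helper names, not Oculus SDK or engine code): find where the animation placed the head this frame, then shift the whole body by the difference so the head lands exactly on the tracked camera.

```python
# Sketch of the "meathook avatar" alignment described above -- hypothetical
# names, not engine or Oculus SDK code. Each frame: hide the head geometry,
# find where the animation placed the head, and translate the avatar root
# so the animated head coincides with the player's tracked camera position.
def meathook_root_offset(camera_pos, animated_head_pos):
    """Per-frame translation to apply to the avatar's root."""
    return tuple(c - h for c, h in zip(camera_pos, animated_head_pos))

# e.g. camera tracked at (0.0, 1.70, 0.0); animation put the head at
# (0.10, 1.62, 0.05) -> shift the whole body by (-0.10, 0.08, -0.05)
offset = meathook_root_offset((0.0, 1.70, 0.0), (0.10, 1.62, 0.05))
# apply `offset` to the avatar root; the head geometry itself is scaled to zero
```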
The last couple topics Tom talked about had to do with maintaining framerate. For a normal game, a fluctuating framerate can be annoying but in VR it will almost certainly break the player's sense of presence. Rendering a stereo scene at the higher resolution required by the DK2 at 75FPS is challenging for even the strongest PCs and GPUs today and the main costs are draw calls and fillrate.
This is not news to developers who have worked on stereoscopic projects in the past, but for many people working in VR, doing 3D is new as well. For good VR, the trick of doing 2D plus depth doesn't work very well, and it is strongly recommended to do two renders – which in general results in twice as many draw calls, though a number of things can be done once: culling, animation, shadows, some distant reflections/effects and certain deferred lighting techniques. Fill rate on the DK2 is set by the 1080x1920 frame buffer (don't change this!), but the camera-eye render is typically 1150x1450 per eye and is determined by the user's face and eye position (set by the profile & SDK). The advice is that it's okay to change the size of the virtual camera renders, but not the framebuffer size; the distortion correction pass will resample and filter it anyway. It's also okay to dynamically scale it every frame – if you have lots of particles or explosion effects that frame, drop the size. The SDK supports this use case explicitly. | https://www.mtbs3d.com/articles/editorial/13812-oculus-vr-at-gdc-part-ii |
Defining a megagame too rigorously can be a tricky business, because the genre covers such a wide range of potential subjects and game structures. There are games that involve a lot of players but are not megagames, and games with only a few players that are megagames. Over the years I have often described a megagame as like a boardgame, but not a boardgame; like a role playing game, but not a role playing game; and like a wargame, but not a wargame. Most megagames combine aspects of all of these, but also involve lots of people, usually in the same location, interacting in a structured way and following a common emerging narrative around a theme.
There are some key features that appear in most megagames, although I would suggest that none of these features alone defines the megagame. Rather, if a game has many of these features then it is probably a megagame; if it has none of these features then it definitely isn't a megagame.
Open Possibilities. The game is open-ended and allows a wide range of possibilities through emerging gameplay and player-determined narrative. We often say that the game should accommodate anything that could be done in real life.
Meaningfulness. There are relevant and meaningful interactions both within teams and between teams.
Urgency. There is time pressure and a sense of urgency. Players cannot have unlimited time to make decisions and the game moves at a pace that is not determined by the players.
These large structural features are distinct from mechanisms and game procedures.
Superficially many megagames might resemble a board game or a kriegspiel¹ expanded to accommodate many more players. In terms of mechanism and game systems this might be true (though we will discuss later the essential differences in the requirements of megagame mechanisms and how they differ, fundamentally, from board games).
Something happens when a game concept is expanded beyond the familiar 2-8 players you might typically find in a role playing game, wargame or board game.
What changes is how the experience of participating in a megagame is determined by players' interaction and communication with other players.
Face to face social interaction is at the core of the megagame experience – a megagame cannot be satisfactorily played in an on-line virtual world (at least not with technologies currently available) or using on line tools because the social interaction in these environments is currently too limited and cannot replicate the actual experience of talking to real people, or groups of people, face to face.
If we take a real-world analogy: when world leaders want to discuss or negotiate something important, they travel somewhere and meet face to face, because it is worth the time and effort, even for presidents and prime ministers, to do this. Skype or Google Hangouts is not the place to have any sort of in-depth or subtle negotiation in the real world.
Megagames are usually trying to simulate the real world and this is why getting everyone in the room for a megagame is an essential part of the dynamic and is one of the reasons megagames are popular and very engaging for the participants.
¹ A kriegspiel, or literally ‘wargame’, is a term borrowed from 19th century military wargames, characterised by armies represented by blocks moved around on maps and the results of the campaigns and battles being determined by written rules. There would typically be a map per side, and a master map, so that the opposing sides would be unable to see enemy movements that they would not be able to see in real life. This term is used in the modern sense to distinguish games like this from the more mainstream ‘open’ wargames using miniatures and no hidden movement or fog of war. | https://megagamemaker.com/2016/10/ |
Today’s video from David Michael Cantor, a Phoenix DUI Defense Lawyer, is about the infamous Immigration Bill known as SB 1070. This bill was set to go into effect this month, April of 2011, but has been held up in Federal Court on Appeal after U.S. District Judge Susan Bolton halted key provisions of the bill in Nov 2010. At this time it is not clear when the Appeal will be heard and decided on leaving the controversial bill in limbo.
In April of 2010 Governor Jan Brewer signed SB 1070 into law and became an immediate lightning rod for the debate concerning illegal immigration. Many people felt that this law was unconstitutional, as it required the police to ask for identification from anyone they suspected of being an illegal immigrant. Among those who felt this way was President Barack Obama, whose administration filed a suit against the bill.
A new wrinkle in the debate is Arizona Senator Russell Pearce's request to be added to the lawsuit so that he can make his voice heard. Pearce is the author of SB 1070 and feels he can best describe the bill's intent. He cites a new Arizona law passed this year that would allow him to be added to this type of lawsuit. Others point out that this action would be unprecedented in legal history.
What do you think about SB 1070? Is it a good law? Should Russell Pearce be added to the lawsuit simply because the Arizona legislature passed a law saying it's okay?
Here is more from the Arizona Republic:
The future of Arizona’s controversial Senate Bill 1070 will remain in limbo until the 9th U.S. Circuit Court of Appeals issues a ruling, a federal judge decided Friday.
And there’s no telling when that will be.
The court heard arguments on Nov. 1 regarding U.S. District Judge Susan Bolton’s decision to halt most of the key provisions of SB 1070 from going into effect. The immigration bill was signed into law last April.
Bolton said Friday that she had been waiting to move forward with a lawsuit filed by the U.S. Department of Justice, which challenges the law’s constitutionality, until there was a ruling on her injunction. It now has been five months, she said.
“I had anticipated that we would have had a decision. I was betting for February, and now, March has come and gone,” Bolton said, adding that she gets no advance notice on when a ruling might come.
Bolton said that she was reluctant to continue to wait but that she and attorneys representing the federal government on one side and the state and Gov. Jan Brewer on the other agreed that they couldn’t move forward with the underlying case until they had an appeals-court decision.
Bolton said once a ruling does come down, the two sides will have 30 days to let her know whether they will appeal again, which could take the case as high as the U.S. Supreme Court.
But there are some things that will happen while they continue to wait.
Bolton said they will move forward with a countersuit that Brewer filed in February alleging that the federal government has failed to secure the border.
Varu Chilakamarri of the Justice Department said it will file a motion to dismiss the countersuit within the next couple of weeks.
Bolton also will make a decision on a request by Arizona Senate President Russell Pearce, author of SB 1070, to join the lawsuit as a defendant. She had initially denied Pearce’s request, but the Legislature passed a measure earlier this year authorizing Pearce and Speaker of the House Kirk Adams to intervene on behalf of the Legislature.
Brewer supports Pearce’s request; the Justice Department opposes it. On Friday, Bolton heard arguments from all the involved parties.
Pearce said he needs a seat at the table because he can best explain the intent of the Legislature in writing and passing SB 1070.
“I think we would enhance the debate,” he said. “This impacts the entire nation, and we need to be at the table.”
Attorney Paul Orfanedes of Judicial Watch, a conservative non-profit, is representing Pearce at no cost to the state.
Orfanedes admitted to Bolton that it is unprecedented for a legislature to ask to join a lawsuit in which state or gubernatorial attorneys are already defending a state law. But, he said, passing a law saying that the state wants the legislature to be represented in such a case also has never happened.
“This is the Legislature’s baby,” he said. “It knows this very controversial legislation better than anybody else.”
Chilakamarri said it would be unwarranted and unprecedented for Bolton to allow Pearce to intervene just because the state passed a law saying he could.
Bolton said she will rule on the matter later. She said she is concerned that adding another party to the case would mean more lawyers, more documents filed and more time.
“You are really not offering me anything that says the interests of the state Legislature are not being adequately represented,” she told Orfanedes.
Fill out the form below to receive a free and confidential initial consultation. | https://blog.dmcantor.com/arizona-immigration-law-sb-1070-still-in-legal-limbo
Translational Research takes scientific discoveries made in the laboratory, in the clinic or out in the field and transforms them into new treatments and approaches to medical care that improve the health of the population.*
NC TraCS is part of a national consortium, funded through the NIH Clinical and Translational Science Awards (CTSA), that shares a common vision to expedite the translational research process and the time it takes for laboratory discoveries to become treatments for patients.
How does clinical research relate? An essential part of translating laboratory discoveries to treatments is clinical research, a type of research that involves a particular person or group of people or uses materials from humans. Clinical research is essential to bringing research discoveries to communities.
Translational research includes two areas of translation. One is the process of applying discoveries generated during research in the laboratory, and in preclinical studies, to the development of trials and studies in humans.
The second area of translation concerns research aimed at enhancing the adoption of best practices in the community. Cost-effectiveness of prevention and treatment strategies is also an important part of translational science. **
The NIH has published an article explaining how research works, to help put the process of science into perspective. It is a useful resource for explaining why science changes and why it's important to expect that change. View the article at www.nih.gov and the accompanying graphic: How Research Works: Understanding the Process of Science (pdf) | en español (pdf)
NC TraCS aims to enhance wellness and reduce the burdens of disease by supporting basic, clinical and population research; dissemination and implementation science; comparative effectiveness and health services research; development of new methods and best practices; approaches aimed at individual, family, community, institutional and universal application; and efforts aimed at prevention, treatment and cure of disease.
We partner with a number of community groups throughout North Carolina, particularly Healthy Carolinians, to identify health priorities in the state. Though we support efforts to enhance wellness and reduce the burden of all diseases, the following priorities have been identified for the 2018-2023 funding period:
Interested in helping advance promising treatments and medical care? Participate in a clinical trial. Research for Me @UNC and ResearchMatch.org can help pair you with studies you may find interesting.
On average, it takes the participation of 5,300 study volunteers to obtain the results needed for a new drug application. If you are looking for a clinical research study at UNC-Chapel Hill, search our local databases for appropriate opportunities as well as link with specific study coordinators in the area(s) of research for which you are most interested. Visit the Center for Information and Study on Clinical Research Participation (CISCRP) for more information about clinical research and questions you should ask before volunteering.
In addition to participating in a clinical research study, we offer opportunities for you to guide and support health research on the UNC-Chapel Hill campus and in your community. Connect with our Community and Stakeholder Engagement Program to learn more about the ways you can contribute to and inform the design and conduct of research at UNC. | https://tracs.unc.edu/index.php/clinical-translational-research |
Gender expression refers to the external communication of one’s gender to other people. It is the way in which an individual chooses to perform their gender role.
Gender Expression
More About Gender Expression
Gender expression should not be confused with gender identity. Gender identity simply refers to the gender an individual identifies with, whether male, female, or neither. Gender expression, on the other hand, refers to the way in which individuals choose to express their femininity, masculinity, or androgyny. Gender expression can be defined as one’s mannerisms, behavior, clothing, haircut, voice, etc. | https://loveohyes.com/gender-expression/ |
Events
Origins of Japanese Epigraphy: Inscriptions of the Ancient Capitals (Kokyō ibun; 1818/1912)
In 1818, the antiquarian and philologist Kariya Ekisai (1775-1835) edited an annotated compendium of pre-Heian epigraphs in the pioneering work Inscriptions of the Ancient Capitals (Kokyō ibun). Preceded by a two-century-long renaissance in the study of Japanese antiquity, this work joined a number of proto-archaeological treatises by collectors and aficionados who had begun to investigate the physical artifacts of the distant past. As a philologist, Ekisai focused on the texts of the inscriptions in his commentaries, but he also considered the provenance and physical conditions of the objects that bore them: stelae, cinerary urns, Buddhist images, and so on. Nearly a century later, in 1912, the linguist Yamada Yoshio (1875-1958) and the metalworker Katori Hotsuma (1874-1954) augmented the earlier collection with their Inscriptions of the Ancient Capitals Continued (Zoku kokyō ibun), printed alongside an edition of Ekisai’s earlier work. Together these two books remain, even today, fundamental sources for pre-Heian Japanese epigraphy, but they showcase radically different conceptions of scholarship and its audiences, and also contrasting technologies of reproduction (manuscript and rubbing versus moveable type and lithography). Inscriptions on stone and metal, along with paper-and-ink rubbings of the same, may seem to be ancient textual modes, but in this context they prove to be unexpectedly ephemeral, mutable, and up-to-date.
Dear NGO Colleagues,
As we end the year 2012, let me take this opportunity to express my very best wishes and gratitude to all of the NGOs around the world for their monumental cooperation, and I look forward to our continuing fruitful, productive partnership in the New Year.
We have a challenging year ahead!
At the beginning of the New Year, the Secretary-General will outline his priorities for 2013, which we will disseminate to all of you. The challenges faced by the international community can only be successfully addressed through solid cooperation and partnership, and I call on NGOs to make their views and voices heard about the United Nations’ wide-ranging agenda through proper channels. I would like to remind you of civil society’s valued contributions to bringing a number of priorities in the past to the forefront of global debates.
Although there was disappointment that, for the first time in its 64-year history, no Annual UN DPI/NGO Conference was held this year, the pause allowed time for reflection on the Conference, its periodicity, its core funding, and the question of rotating the annual meeting between UN headquarters and various regional capitals depending on the interest and commitment of Member States. A record number of Member States, including Brazil, Canada, Cuba, Hungary, Ireland, Qatar, Republic of Korea, Trinidad and Tobago, Tunisia, Turkey and Ukraine, expressed interest in hosting the next Conference, but there has not been a firm commitment from any Government. DPI looks forward to a commitment by a Member State.
As we begin 2013, I will touch on just two of our latest proud achievements: a.) the redesign of our website (http://outreach.un.org/ngorelations/). The section continues expanding its social media strategy and providing skills enhancement through its Communications Workshops, allowing NGOs to familiarize themselves with our website as a new communication tool as well as with our social media platforms: Facebook, Twitter and a blog titled NGO VOICES, all aimed at encouraging intergenerational dialogue among members of the NGO community; and b.) our intensifying efforts to attract younger NGO representatives to the work of the DPI/NGO community as drivers of the UN’s social and political change. Currently, about 50 NGO youth representatives are actively participating in person or via Skype in monthly meetings facilitated by NGO Relations.
We will continue to facilitate greater dialogue between the Secretary-General and the NGO community by hosting another interactive meeting with the Secretary-General during 2013 to discuss priorities and issues of mutual concern.
We are also hoping to increase efforts to mobilize NGOs from developing countries and reach out to civil society partners around the world, through local UNIC/UNIS offices, in order to enhance their interaction with and understanding of the work of the United Nations. Reaching out to new communities has indeed created greater diversity within our community and opportunities for reaching out to new grassroots organizations especially in developing countries.
NGO Relations looks forward to maintaining its solid partnership with the NGO/DPI Executive Committee and invites all NGOs to be in touch with this body.
We are extremely grateful to all of you for your hard work and contributions on the challenges ahead and on how the United Nations can strengthen its partnership with this key stakeholder group. The dialogue between NGO Relations and the NGO community is essential to ensuring that your voices are heard within the United Nations.
I would like to conclude my message by expressing my deep gratitude to the “older/seasoned generation” for giving, advising and guiding the “new generation” by your example of lifelong dedication to the UN Charter and its principles, which is needed as we face the new challenges ahead.
Your contribution to the UN has been guidance for the new generation to continue the commitment to the UN!
Best wishes from all of us in NGO Relations to you and your families for a Prosperous and Healthy 2013!
I hope that 2013 will be another year of strong and productive collaboration in support of the United Nations’ work for a better world.
Working Together: Making a difference! | https://zoroastrians.net/2013/01/01/message-from-un-dpi-ngo/ |
Degree measurement is an astronomical-geodetic technique that was used from the 16th to the 20th century to determine the earth's shape. The name comes from the precise determination of the distance (111-112 km) between two latitudes that differ by 1°.
Methodology and first measurements
The method is based on measuring the curvature of the earth between distant points by comparing their distance (arc length B) with the angle β between their astronomically determined plumb-line (vertical) directions. The quotient B/β gives the mean radius of curvature of the earth between these points. It is best to choose the two sites of plumb-line measurement in a north-south direction, so that β corresponds to the difference in their latitudes.
The principle of degree measurement goes back to Eratosthenes, the Alexandrian mathematician and library director; around 240 BC he estimated the earth's circumference from the 7.2° difference in the sun's elevation between Alexandria and Syene (now Aswan). His result of 250,000 stadia matched the true value to within about 10 percent, depending on the exact length of the stadium used.
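The arithmetic behind the B/β quotient is simple enough to check directly. The following short Python sketch reproduces Eratosthenes' estimate; the stadium length of roughly 157.5 m is an assumption (the exact unit is disputed), so the figures are illustrative only.

    import math

    # Degree measurement: arc length B divided by the angle beta
    # (in radians) gives the mean radius of curvature between two sites.
    def mean_radius_km(arc_length_km, angle_deg):
        return arc_length_km / math.radians(angle_deg)

    # Eratosthenes, ca. 240 BC: Alexandria-Syene arc of 5,000 stadia,
    # solar elevation differing by 7.2 degrees (1/50 of a full circle).
    stadium_km = 0.1575                 # assumed stadium length
    arc_km = 5000 * stadium_km          # ~787.5 km
    radius_km = mean_radius_km(arc_km, 7.2)
    circumference_km = 2 * math.pi * radius_km

    print(f"radius ~ {radius_km:.0f} km, circumference ~ {circumference_km:.0f} km")
    # With these assumptions: radius ~ 6267 km, circumference ~ 39375 km,
    # within a few percent of the modern mean radius of ~6371 km.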
The method was refined by the Arabs under al-Ma'mun to 1-2% accuracy in the early Middle Ages. In France, Jean François Fernel (1497-1558) in 1525 derived the local mean radius of the earth (about 6370 km) to within a few kilometers from a 100-km-long arc of the meridian from Paris to Amiens, the meridian degrees being measured with a measuring wheel.
Later, the method was combined with triangulation, which allows precise distances to be measured over large triangles. These measurements revealed a locally varying curvature of the earth, i.e. deviations from a spherical shape. Multiple profiles in the north and in the south of France in 1669 were meant to clarify whether the curvature of the earth increases or decreases towards the poles, and thus whether the earth is flattened or ovoid.
In the 20th century, work progressed from profiles to area networks and to regional determinations of the earth's curvature through various geoid studies and transnational projects. Since Navstar GPS became practical, however, many surveys no longer refer to the true shape of the earth (the geoid) but to a mean earth ellipsoid, which of course causes problems for height measurement.
French geodesy: Lapland and Peru
Because of conflicting results, the Paris Academy of Sciences fitted out two major expeditions, to Peru (La Condamine) and Lapland (Maupertuis).
The results of these measurements (1735-1740) were intended to define not only the earth ellipsoid but also a new international measure of length, with exactly 10,000,000 meters from the equator to the pole.
Various problems with rust and the calibration of the measuring rods used (see Toise), however, led to ellipsoid radii that were about 1 km too short (present data put the meridian quadrant at 10,002,249 meters).
The earth's flattening¹ came out as f = 0.0046 (instead of 0.00335); thus the shortening of the earth's radius towards the poles (6378 ⇒ 6357 km), or equivalently the increasing radius of curvature (6335 ⇒ 6400 km), was identified for the first time:
¹) Cassini's final measurement in 1740 showed flattening f = 0.00329
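For orientation (a reconstruction from the figures above, not part of the original text): the flattening f is the relative difference between the equatorial radius a and the polar radius b. Plugging in the radii just quoted gives the modern value, showing how far the expeditions' f = 0.0046 overshot:

    f = (a - b) / a = (6378 - 6357) / 6378 ≈ 0.0033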
Other major meridian arcs in the 18th to 20th centuries
Longitude measurements and subsequent cross-linking
Degree measurement along meridians is easier to carry out because the astronomical work requires only latitude measurements. For accurate continental projects, however, east-west profiles and longitude determinations are also needed; these became possible on a large scale only with radio time signals and precision chronometers:
International degree measurement and geodesy
For the international coordination of these major projects, the Central European Degree Measurement Commission was founded in 1862 on a German-Austrian initiative. Its longtime head was the Prussian general Johann Jacob Baeyer. In 1867 it was extended to the European degree measurement and became (1919) the precursor of the International Association of Geodesy (IAG), as well as of today's geoscience union, the IUGG.
Since around 1910 and 1940, respectively, profiles were no longer observed only in north-south and east-west directions or evaluated separately, but were increasingly connected into large surveying networks. Although the computational effort of such large-scale area networks and their adjustment grows enormously (with the 2nd to 3rd power of the number of points), it is well worth it for the higher accuracy and homogeneity. The first of these major projects covered the U.S. and Western Europe; the initial cross-linking of Eastern and Western European national surveys goes back to the "Third Reich".
Since the 1970s and the development of computer networks, these area networks have also been combined with 3D measurements from satellite geodesy. Thus the classical concept of "degree measurement" has merged into "earth measurement".
Reference and earth ellipsoids
For their national surveys, the individual states up to about 1850 each defined their own "geodetic datum" (reference system). With the international extension and cross-linking of the measurement profiles mentioned above, both the ability and the desire developed to base the individual regions on data valid over large areas. The result was a series of so-called reference ellipsoids which, with increasing extent, approximated the "mean earth ellipsoid".
Of the approximately 200 national surveying networks worldwide, some 90% are today based on a dozen widely used ellipsoids, which increases their quality and facilitates international cooperation. The older of these ellipsoids are based on the great meridian arcs of the second section; the newer ones emerged from intercontinental and satellite networks. The most important of these ellipsoids are:
For many Central European countries the Bessel ellipsoid is important, as are the ellipsoids of John Fillmore Hayford and Krasovsky and, for GPS surveying, WGS 84.
The pioneering work of Jean-Baptiste Joseph Delambre was based only on local measurements. The large difference between Everest (Asia) and Hayford (America), on the other hand, arises from the geologically related geoid curvature of the different continents. | https://memim.com/grade-measurement.html
A family have been left fearing for their lives after thugs doused their front door with petrol in a case of mistaken identity.
The homeowner said he and his wife and children could have been killed by the arson attack, at about 1.30am on Saturday, and are terrified the thugs could strike again.
He said: "The only reason it wasn’t a massive fire is because my son happened to be awake and heard the explosion as the petrol was lit. He looked out the window and saw light he clearly recognised as coming from flames and two people wearing hoods. One shouted ‘lets’ go, let’s go’ and they ran [away]."
When the man got downstairs less than two minutes later, as the smoke alarms started going off, the flames had already burned through the front door.
He and his wife doused the fire with water but now they and their two sons live in fear the thugs might strike again.
The homeowner said: "The children are terrified, they’re not sleeping, they’re up half the night seeing what activity is going on outside, and my wife is traumatised."
The family moved into the dilapidated house in the Mill Hill Road area of Acton in April, transforming it into their ideal home.
Their first reaction to the attack was to move out but they have so far decided against it because of the support of their neighbours.
"Everyone is shocked and has been extremely sympathetic," the homeowner said.
"Some people have lived here for up to 40 years and nobody has ever had this sort of experience before."
The family believe their home used to be the house of a known criminal who was the real target of the attack and who is thought to be still claiming to live there.
Police would not confirm this but did say they believe the thugs targeted the wrong address.
The homeowner said police installed a rapid response alarm but have failed to keep other promises such as CCTV and visible patrols and have done a poor job keeping in contact.
He added: "I’m fearful not enough is being done to stop this happening again. I don’t know if I’m going to walk out of my front door and face someone with a gun, or even something worse."
Police said the South Acton Safer Neighbourhoods Team has increased patrols in the area and CCTV is being installed this week. | https://www.mylondon.news/news/local-news/arson-attack-acton-home-mistaken-5964212 |
In 2014, Planning and Development Services (PDS) initiated a code development project to review the regulations for the county’s commercial and industrial zoning classifications. This review is considering the allowed uses, parking, building height, setbacks, landscaping, design standards and review processes within the county’s diverse commercial and industrial zoning districts. The project’s goal is to use commercial and industrial land more efficiently and expand economic development, while ensuring compatibility with the vision for the future.
The Commercial & Industrial Standards project is being conducted in two parts. Part 1 was adopted on March 8, 2017 and focused on allowed uses, minimum parking rates, definitions and performance standards for woodwaste and non-woodwaste uses. Part 2 is currently on hold in order to take a comprehensive review of the project scope and recognize the establishment of the Maltby Area Advisory Board (MAAB).
PDS is also working on a separate but related code project, the Multiple Family Residential Development project, which is reviewing development regulations as they apply to an area of unincorporated Snohomish County along and within 2,000 feet of State Route 99, north of State Route 525 to the city limits of Everett. The project considers increased density, building height and reduced setbacks. For more information on this project, email Steve Skorney. | https://www.snohomishcountywa.gov/3908/Commercial-Industrial-Standards
In recent summers, New Zealanders have experienced a series of marine heat waves. Temperatures in some regions have exceeded 6°C above average. While some of us enjoy the warmer water temperatures for water sports and holiday activities, most marine organisms experience heat stress, and warmer water drives further changes in our weather and climate.
New Zealand is surrounded by ocean, and both our climate and our climate extremes (such as droughts, floods and tropical storms) are highly impacted by ocean temperatures. The state of our ocean is controlled by the interplay of heat and moisture between the ocean and the atmosphere over the Tasman Sea. Here, oceanic heat travels from the subtropics through the East Australian Current (EAC), raising the heat in the Tasman Sea. Recent research has shown that when the heat content in the Tasman Sea is elevated, or increases rapidly, then the ocean ‘catches a fever’, which can result in heatwaves and climate extremes over New Zealand.
Marine heatwaves also affect the fishing and aquaculture industries (e.g. mussel and salmon farms). Exceptional southward migrations of some fish species looking for cooler waters have been reported during these heat waves. These species rely on a constant supply of cold water year-round to thrive. But many marine organisms can’t swim, which puts them under severe pressure and at risk of death.
The current version of the New Zealand Earth System Model (NZESM) does not simulate this heat transport as precisely as it could, and modelled ocean currents in the Tasman Sea (e.g. the EAC, EAC-Extension and Tasman Front) require significant refinements. Improved modelling of these ocean currents will lead to a better representation of ocean heat content and heat transport in the NZESM, enabling our models to more accurately predict future changes, including to temperatures in the ocean and on land and to related climate extremes.
Better knowledge of future climate extremes is vital for effective decision-making on how to respond, adapt and thrive in a changing climate.
This project in the news:
- NIWA gives dire warning about ‘severe’ marine heatwaves, 1News, TVNZ
- Marine heatwaves forecast to devastate, Otago Daily Times
- Marine heatwaves expected to get longer, hotter and more severe, Stuff
- Why scientists are expecting another ‘crappy’ year for glacier melt, NZ Herald. | https://deepsouthchallenge.co.nz/research-project/marine-heatwaves-and-the-link-with-climate-extremes/ |
On 19 February, Agnes Förster from the Munich University of Technology (Technische Universität München) delivered a lecture on “Regional design as a communicative planning practice – Approaches to its performance” as part of the Spatial Planning Seminar Series.
Here you will find the slides and a video from the lecture.
Abstract
In numerous European regions, planning professionals and politicians are experimenting with regional design approaches to overcome limitations that statutory planning systems pose to planning. Practices in these regions vary highly; they come in many facets. Despite the broad interest in these practices, few lessons have been learnt from the experiments. The question presents itself, then, whether regional design delivers on what it promises. Since design initiatives are often taken spontaneously, expectations remain implicit and the performance of regional design nebulous. To uncover benefits, a concept of performance is needed that focuses on the communicative and interpretative aspects of regional design and that incorporates the multiple expectations on knowledge, arrangements and organisational learning. | http://spatialplanningtudelft.eu/?p=3387
BACKGROUND/RATIONALE:
The Veterans Health Administration (VHA) was an early adopter of telehealth care starting in 2003. As a result of a number of telehealth initiatives, VHA conducted over a million telehealth visits in 2018. More than half of these visits provided care to Veterans located in rural areas, and 10% of these were conducted using VA Video Connect (VVC) which allows providers to see Veterans on their mobile devices or personal computers at Veterans' location of choice. In 2018, as part of the MISSION Act, the VA set the "Anywhere to Anywhere" telehealth initiative, seeking to ensure that by 2021, 100% of providers in outpatient Mental Health and Primary Care service lines nationwide would be both capable and experienced with telehealth service delivery into the home.
SARS-CoV-2, the virus that causes coronavirus disease (COVID-19), has potentially left individuals with opioid use disorder at risk of not receiving evidence-based treatment. Access to healthcare for all Veterans has been significantly decreased due to social distancing guidelines which has left some of our most high-risk Veterans, those with opioid use disorder (OUD), vulnerable to poorer health outcomes. Individuals with an OUD are at a significantly high risk of overdose, unintentional death, and a wide range of negative health related consequences. The U.S. saw a 4.1-fold increase in opioid-related deaths between 2002 and 2017, and Veterans experience opioid overdose at twice the rate of non-Veterans. Fortunately, evidence-based medications for OUD exist including buprenorphine. However, due to the potential for misuse, there are additional training requirements for providers to be certified to prescribe these types of medications, resulting in inadequate numbers of providers in some areas, particularly rural ones. Telehealth is a potentially effective method of service delivery to mitigate this access to care issue, but the Ryan Haight Act of 2008 mandates that the first visit with a prescriber of schedule II-IV controlled substances must be done in person. This is particularly challenging for rural Veterans who live in areas that already have a limited number of eligible prescribers and face significant time/travel constraints. Due to the public health emergency caused by COVID-19, the Diversion Control Division of the U.S. Drug Enforcement Agency has temporarily waived (as of March 16, 2020) the in-person requirement for OUD prescriptions issued for a legitimate medical purpose and which are in accordance with state and federal law.
Waiver of Ryan Haight Act due to COVID-19 creates potential for treatment retention for high-risk Veterans with OUD. The Ryan Haight Act waiver presents a unique opportunity to understand the impact of the VHA's preexisting telehealth structure for the treatment of OUD, and about the costs/benefits of this 12-year-old policy. Will telehealth allow for prescription maintenance, or will prescriptions drop? Relatedly, will relaxing this requirement lead to a dramatic increase in prescriptions for controlled substances? And, finally, what are the barriers and facilitators associated with this recent change in policy for substance use providers, and how can this information inform the VHA's response to future natural and/or public health disasters, particularly for high-risk Veterans?
OBJECTIVE(S):
Objective 1 - Develop methods required to conduct robust analyses assessing the impact of COVID-19 and related changes in policy and service design on access to care and medication management for Veterans with OUD.
Aim 1a: Conduct qualitative interviews with providers and key local stakeholders in the Substance Treatment and Recovery (STAR) and telehealth clinics to understand a) modes of patient interaction (i.e., in-person, telephone, VVC, or other modes of video conferencing) used, b) documentation patterns for these visits, and c) perceived facilitators and barriers to the rapid expansion of telehealth for OUD.
Aim 1b: Incorporate key stakeholder findings to accurately record and measure evolving telehealth visit modalities in order to construct structured Corporate Data Warehouse (CDW) queries to assess visits as well as prescription rates for schedule II-IV opioid medications.
Objective 2 - Conduct preliminary analyses of temporal trends in schedule II-IV narcotic prescription rates for Veterans who would normally fall under the parameters of the Ryan Haight Act using an interrupted time series design during the COVID-19 window (estimated to be from 3/16/20 - 6/30/20 at time of grant submission).
Aim 2a. Examine and compare the rate of prescriptions for buprenorphine (Suboxone®).
Aim 2b. Examine and compare the rate of prescriptions for other schedule II-IV narcotics (hydromorphone (Dilaudid®), methadone (Dolophine®), meperidine (Demerol®), oxycodone (OxyContin®, Percocet®), fentanyl (Sublimaze®, Duragesic®), morphine, opium, codeine, and hydrocodone, products containing not more than 90 milligrams of codeine per dosage unit (Tylenol with Codeine®).
METHODS:
Aim 1 Methods: Individual qualitative interviews with providers and key stakeholders.
Aim 2 Methods: We will extract data from the CDW and pharmacy databases, and will use segmented regression interrupted time series (SR-ITS) to assess changes in prescribing behavior potentially attributable to the waiver of the Ryan Haight Act to access narcotic prescriptions via telehealth. SR-ITS allows for the assessment of long-term effects on an outcome attributable to a specific event (policy intervention) in time, i.e., the implementation of legislative mandates. We will see if there are differences in the effect of the intervention by rurality, age, gender, and race/ethnicity.
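To make the design concrete, here is a minimal sketch of how such a segmented regression might be set up in Python with pandas and statsmodels. It illustrates the general SR-ITS structure, not the study's actual code; the file name, column names and the March 16, 2020 breakpoint are assumptions drawn from the text.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical input: one row per month with the count of
    # schedule II-IV prescriptions filled ('month', 'rx_count').
    df = pd.read_csv("monthly_prescriptions.csv", parse_dates=["month"])
    df = df.sort_values("month").reset_index(drop=True)

    waiver = pd.Timestamp("2020-03-16")            # Ryan Haight Act waiver
    df["time"] = range(len(df))                    # months since series start
    df["post"] = (df["month"] >= waiver).astype(int)           # level shift
    first_post = df.loc[df["post"] == 1, "time"].min()
    df["time_post"] = df["post"] * (df["time"] - first_post)   # slope shift

    # rx_count = b0 + b1*time + b2*post + b3*time_post + error
    fit = smf.ols("rx_count ~ time + post + time_post", data=df).fit()
    print(fit.summary())
    # b2 estimates the immediate level change at the waiver date;
    # b3 estimates the change in the monthly trend after it.

For count outcomes such as these, a Poisson or negative binomial GLM with autocorrelation-robust errors would be a natural refinement; the ordinary least squares form above simply shows the segmented structure.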
FINDINGS/RESULTS:
We conducted qualitative interviews with providers delivering video telehealth to high-risk Veterans with OUD. Preliminary analysis of the interviews with providers new to video telehealth has shown that providers on the whole are receptive to providing mental healthcare via video telehealth. However, the perceived burden of having to deal with the extra logistical steps required to provide care via telehealth deters its use in favor of telephone contacts. Lead prescribers, in particular, appear to be more likely to prefer a phone appointment versus 'struggling' with video telehealth. Providers reported comments such as "[I] only have 30 minutes, I'm going to waste it troubleshooting. If there are issues, I just call the Veteran." When it comes to high-risk Veterans, such as those with OUD, providers also report more anxiety about ensuring the safety of their patients and/or about handling clinical emergencies over video telehealth at a time when they believe their patient loads include a higher number of severe/acute patients in need of welfare checks.
Despite the MISSION Act, and although demand for virtual mental healthcare service delivery is at an all-time high due to the COVID-19 pandemic, the largest share of virtual care within VHA (46%) was delivered by telephone during the study window, with only an average of 28% of visits provided via VVC. As the Ryan Haight Act waiver does not acknowledge a telephone contact as adequate or sufficient for prescribing a new patient BUP, new Veterans with OUD, particularly those residing in rural areas with few available providers, were at risk for delays in care and/or receiving suboptimal care.
With the Ryan Haight Act's in-person requirements waived, if providers are delivering care via video telehealth, prescription rates for MOUD should, in theory, remain steady despite the pandemic. We examined prescription information from the VA Corporate Data Warehouse for 42,579 Veterans diagnosed with OUD (91.6% male, 71% white, 16.8% black, 27% rural dwelling). During this 12-month window, 56.6% of the sample were prescribed suboxone, 53.6% were prescribed sedatives, and 13.8% were prescribed anxiolytics. Monthly an average of 33,323 (SD = 3,190) prescriptions were filled, with an average of 1.45 (SD = .08) medications prescribed per visit. As expected, the largest dip was seen in April 2020, with only 28,376 prescriptions filled, with 1.33 prescriptions written per visit. As of August 2020, the rates for prescriptions for controlled substances had not returned to pre-COVID levels. These data suggest that while telehealth is a legal option to appropriately prescribe controlled substances, it was not utilized in a way that replicated in person care. We are currently examining potential differences by group, such as age, gender, and race/ethnicity. Pending the outcome of these analyses, we will integrate the qualitative results from Aim 1 with the quantitative results of Aim 2 to fully describe how telemental healthcare was or was not delivered for Veterans with OUD, and potential provider-level factors underlying these results.
IMPACT:
Findings from this rapid pilot will be of immediate relevance and impact for VA operations partners who include VA Analytics and Business Intelligence (RAPID), VA Office of Mental Health and Suicide Prevention, and VA Office of Connected Care. Further, the results of this pilot study will go to support a Merit award in the future examining the individual impact of telemental health treatment for high risk Veterans during COVID-19, with the goal of developing a toolkit enabling VHA to better respond to future natural disasters and other healthcare system disruptions.
PUBLICATIONS:
Journal Articles
DRA: Substance Use Disorders, Health Systems
DRE: None at this time.
Keywords: None at this time.
MeSH Terms: None at this time. | https://www.hsrd.research.va.gov/research/abstracts.cfm?Project_ID=2141707416 |
Welcome to the Marple Newtown School District. We want to do everything possible to make the registration process efficient and productive. There is certain information needed for new students, and we must ask you to complete several forms. Some of this information may have been collected elsewhere for your child, but unless you brought the school records with you to the Registration Office, we will need the information in order to make the registration complete. Although the school will be requesting your child’s records from the previous school, we do not always get all the information in a timely manner. We appreciate your cooperation.
When you have completed or obtained all necessary documentation, PLEASE CALL 610-359-4260 TO SCHEDULE AN APPOINTMENT in the Registration Office to register your child. Appointments are only scheduled on Tuesday, Wednesday or Thursday from 9 a.m. until noon (12:00 p.m.). The appointment should take 5-10 minutes per child. Your child should be able to start school two (2) school days after all the necessary documents have been finalized.
Residency must be verified by presentation of required documents.
New proof of residency is required (a form will be supplied).
submission of appropriate evidence of residency and upon compliance with all other enrollment procedures, the Board may either charge tuition for some or all of the period of attendance or dismiss the student.
Students permitted to attend in accordance with this subsection are not entitled as a matter of right to such attendance and are dismissible at will by the Board.
Attendance in accordance with this policy shall be conditioned upon full compliance with all enrollment procedures and policies applicable to district residents and upon submission of satisfactory evidence of immunization or establishment of lawful cause for exemption from immunization requirements.
When a resident of the school district keeps in his/her home a child of school age not his/her own and requests registration of such child, the resident will be required to provide proof of legal guardianship at the time of registration: a copy of a completed IRS form transferring the child's tax exemption to the resident, a copy of a Federal or State tax form which lists the child as a dependent of the resident, or a copy of a completed County form transferring child support payments to the resident.
When a non-resident child is placed in the home of a resident of the school district by order of the court or by arrangement with an association, agency, or institution having care of neglected or dependent children, the resident is required to present a placement letter from the agency, association, or institution. The resident must meet all the residency requirements (see above) and prove immunization and proof of age documents.
In accordance with the Pennsylvania Department of Health regulations, your child will not be admitted to school until immunization is in compliance with the stated law. The Health Services website contains more detailed information about immunizations and other health services.
The Parent or legal guardian must present ALL of the following documents at the time of registration. The student’s registration will be considered incomplete until all the documents are presented. The parent/guardian must personally complete the registration process at the Registration Office.
- The parent/guardian must present proof of ownership in the district. Acceptable documents: Deed, current real estate tax bill, mortgage payment statement.
- If the parent/guardian leases a property, the parent/guardian’s and all occupants’ names must be designated on the lease. (The landlord’s name, address and telephone number shall be on the lease or made available to the district.) When no current lease is available, a notarized statement from the homeowner, leasing agent or apartment manager stating that the parent/guardian and student(s) reside at the stated address is required. The landlord, real estate company or apartment manager may be contacted by the district for verification.
- Multiple Occupancy Affidavits are required when a parent/guardian is residing with relatives or other families. THE PARENT AND THE HOMEOWNER/LESSEE EACH MUST COMPLETE a Multiple Occupancy Affidavit (a sworn statement attesting to the parent's and student's residency).
- These forms must be notarized. The affidavits indicate that the parent and the homeowner/lessee are both responsible for the tuition should the student be disenrolled for non-residency.
- All Multiple Occupancy Affidavits are reviewed by the Marple Newtown School District's Home and School Visitor.
- Township code enforcement officers may be contacted to verify multiple living arrangements. The homeowner/lessee will be required to verify ownership and provide the necessary forms of documentation and identification.
- Multiple Occupancy Affidavits must be updated before the start of every school year.
- Nonresident: A district resident who houses and continuously fully supports a child not his/her own (except an orphan receiving Social Security) may, upon presentation of a dependency affidavit, enroll the child in the district.
- Identification: The parent/guardian must present FOUR forms of identification: a valid Pennsylvania driver’s license, DOT identification card or other photo identification; a deed, lease or current property tax bill; plus a phone bill and a cable bill.
- Proof of the child’s age. Any one of the following constitutes acceptable documentation: original birth certificate; notarized copy of birth certificate; baptismal certificate; a valid passport; a prior school record indicating the date of birth.
- A written statement from a physician's office or a medical office that the required immunizations have been completed, with records to follow.
- Parental Registration Statement (must be notarized).
Parents/guardians and students must physically reside in the Marple Newtown School District.
There is certain information needed for all students, and we must ask you to complete several forms for your change of address. Some of this information may have been collected elsewhere for your child, however, unless you brought the records with you to the Registration Office, we will need the information again in order to make the process complete. We appreciate your cooperation.
When you have completed or obtained all necessary documentation, please call (610) 359-4260 to schedule an appointment in the Registration Office. Appointments are only scheduled on Tuesday, Wednesday or Thursday from 9 a.m. until noon (12:00 p.m.). The appointment should only take 5-10 minutes. Your change will be entered into the computer within two (2) school days after all the necessary documents have been submitted. | https://www.mnsd.net/calmonthview.aspx?schoolid=0&t=0&schoollistids=&y=2019&m=4 |
Mechanical Engineering - Essay Example
Since mechanical engineering isn’t taught in high school, young students don’t know much about it. Can you describe mechanical engineering? Mechanical engineering is by far the most broad-based branch of engineering. Most high school students associate the field with auto mechanics, but that’s an enormous misconception. Mechanical engineers today are concerned with the design, development and manufacture of a variety of energy conversion and machine systems.
Those systems include aerospace, automotive, marine, manufacturing, biomechanics, power generation, heating, ventilation, air conditioning and robotics. They also work in emerging industries such as nanotechnology and particle technology. Mechanical engineers work with conventional fuel sources but they are increasingly developing alternative energy sources such as geothermal, wind, tide, solar and hydroelectric energy. What kind of high school students should major in mechanical engineering?
Any high school student with an aptitude for mathematics and physics has the basic foundations to be a successful mechanical engineering major. If the student is creative, with a natural curiosity about how things work, coupled with a desire to build tangible devices, mechanical engineering could be in his or her future. Do mechanical engineering majors at NIST work on hands-on projects? The Mechanical Engineering department here prides itself on the various hands-on projects that our students work on.
Such projects force students to use the principles they learn in the classroom, but they must take that theory and develop a tangible product. Using devices or products that they fabricate in their capstone design courses, our students enter a wide range of national design contests sponsored by the American Society of Mechanical Engineers and the Society of Automotive Engineers. Those contests include a Mini-Baja (an all-terrain vehicle) contest; an Indy Car (a Formula speed car) contest; and an Aero-Design (a remote-controlled aircraft) contest.
How has mechanical engineering changed over the years? Though still a very hands-on field, mechanical engineering has also evolved into a computer-intensive field. Sophisticated analysis software is routinely used by mechanical engineers to aid in the development of products, devices and systems. In addition, the implementation of computer controls and electro-mechanical systems in machines and robotics has made the job of the mechanical engineer even more versatile and broad-based.
Does NIST teach students how to use those computer tools? In our classrooms, the use of computing has become an indispensable design and research tool. Mechanical engineering students have the sophisticated analysis software to create computer simulations, and those simulations prepare our students for real-world engineering work. That kind of computing is also used in analysis and testing in the aerospace and automotive fields, as well as in robotics, manufacturing and energy conversion. Our students are well prepared to work in those industries. How do you teach mechanical engineering at NIST?
What classes do mechanical engineering majors take? Of all the engineering fields, mechanical engineering students take the widest range of courses. And those courses prepare students for the broadest range of careers. In the area of solid mechanics, mechanical engineering majors take courses in statics, dynamics, kinematics of machines, vibrations, strength of materials, manufacturing processes and control systems. In the area of so-called soft mechanics, students take courses in fluid dynamics, thermodynamics and heat transfer.
A number of courses are offered that are related to computer-aided design and manufacturing, as well as a number of hands-on laboratories. An extensive array of elective courses is also offered in areas such as biomechanics, computer-aided engineering, aerodynamics, principles of space flight, plastics and polymers, and particle technology. Design projects are infused throughout the curriculum, and students do a major design project in their senior year. In addition, we help our students get co-op and summer internships, so that they graduate with real work experience.
That helps jumpstart their careers. Do mechanical engineers work in teams? The team approach is implemented throughout the educational experience of our mechanical engineering students. Mechanical engineers work extensively in large interdisciplinary teams, so our students must be trained to work together. Their broad-based education and training make them valuable assets in any engineering project. In fact, mechanical engineers often take leadership roles in projects due to their broad-based approach to engineering design and development.
What are the main industries in which mechanical engineers work? Mechanical engineers work in the widest range of industries, more so than any other kind of engineer. They work in the aircraft and aerospace industries, the automotive industry, as well as in the fields of marine systems, heating, ventilation and air conditioning, robotics, biomechanics, power generation and general manufacturing. New emerging industries such as nanotechnology and particle technology are major employers of mechanical engineers.
What are some other career paths that mechanical engineers take? Many mechanical engineers opt to continue their educations and earn master's degrees and PhDs in engineering. Those degrees prepare them for research careers in industry or at a university. However, with the general educational background offered by the mechanical engineering curriculum and the rigorous analytical skills acquired during that training, many of our students find careers in a wide range of professional disciplines.
Those areas include intellectual property law, medicine, financial markets and consulting. As I said earlier, by virtue of our students’ broad-based education, they naturally make effective leaders. They understand the overall view of projects and have sufficient mastery of engineering principles from all disciplines to supervise such projects. Historically, mechanical engineers have been industry leaders in the widest range of businesses. Are mechanical engineers well paid? They are. Starting salaries average about $55,000.
But salaries are considerably higher for graduates who have co-op or internship experience. Many of our students rise quickly in their companies and their salaries rise accordingly. And as I mentioned earlier, mechanical engineering is the most broad-based and also the most marketable branch of engineering. It gives students the widest range of employment opportunities and is hence less affected by economic downturns. So if you have the aptitude and the motivation, come join us at NIST and start a fulfilling career in mechanical engineering. | https://wiretrip.net/mechanical-engineering-2/
Officials announced that a treasure containing sacks of diamonds and gold coins as well as golden idols, jewelry and other riches has been discovered in the secret subterranean vaults of Sree Padmanabhaswamy temple, in the southwestern state of Kerala, India. Estimates of its worth have been rising and it is now thought to be worth US$20 billion.
The Hindu temple was built in the 16th century by the kings of the then Kingdom of Travancore to serve as a royal chapel for the rulers of Travancore. The six vaults containing the treasure have been undisturbed for over a century. Assessment of the treasure began on June 27 after a lawyer concerned about the security of the treasure petitioned India’s Supreme Court, which then appointed a seven-member panel of experts to inventory the treasure. The panel does not have the power to determine to whom the treasure will belong. Estimates of the treasure’s worth are rising, provoking a heated debate as to how the treasure will be used in a country that has 450 million poverty-stricken people.
The chief minister of Kerala, Oommen Chandy, announced on Sunday the treasure would remain with the temple, and security matters would be decided in consultation with the Travancore Royal Family, the temple management, and the temple priest.
The gold was offered to the lord. It is the property of the temple.
“The gold was offered to the lord. It is the property of the temple. The government will protect the wealth at the temple,” Oommen Chandy said. Meanwhile, hundreds of armed police have been deployed around the temple to protect the treasure.
Five of the six vaults of the Sree Padmanabhaswamy Temple have been inventoried.
God’s wealth belongs to the people, not to the king. It’s meaningless to say that it belongs to Hindus or any particular religious community.
On Saturday, reports leaked to the press revealed that the treasure found in one of the vaults included a golden idol of Mahavishnu and a golden ‘anki’ estimated to weigh 30 kilograms, along with precious stones, silver, two coconut shells of pure gold and another golden idol, as well as other jewels and valuable coins. The panel hopes to find more treasure when the sixth and final vault is opened, but the attempt was suspended on Monday because the iron door inside presented “technical problems” requiring further consultation before opening. This vault is thought to contain the bulk of the wealth.
Keralan officials in a preliminary estimate said that the treasure was worth over US$11.2 billion; those estimates have now risen to US$20 billion. Historians say that the temple’s location on a site through which lucrative trade routes passed supports the higher valuations.
Some suggest that the profit from the sale of the treasure would be enough to wipe out the entire public debt of Kerala and fund future Kerala projects such as seaports, airports and highways. | http://www.magyarazurben.net/hidden-treasure-worth-billions-of-dollars-discovered-in-indian-temple/ |
Newswise — After natural disasters, communities need fast access to damage assessment maps to aid relief, recovery and rebuilding efforts. On 28 April at the European Geosciences Union General Assembly (EGU), IIASA researchers will present a new way to link volunteers around the world with a way to help communities after major disasters. They invite the general public to test out the app for the next few weeks, using data from Hurricane Matthew, which devastated Haiti last year.
“Our goal is to provide a simple way for people to help disaster response efforts. At the same time, the app could help raise public awareness of natural disasters,” says IIASA researcher Olha Danylo, who presented the work at EGU.
The new campaign takes advantage of Picture Pile, an app for mobile phone, tablet, and computer, which has previously been used to crowdsource the mapping of deforestation in Tanzania. IIASA researchers have now redesigned the app to use satellite imagery provided by DigitalGlobe in partnership with the Humanitarian OpenStreetMap Team under the Crowd4Sat project with Imperative Space and the European Space Agency (ESA).
The app provides a simple platform for volunteers to classify differences between two satellite images. Volunteers are asked to look at before-disaster and post-disaster satellite images, noting in this case where they see damage to buildings. This sort of image processing is simple for a human being, but difficult to program into computer algorithms.
Because time is of the essence in such disaster response efforts, this initial public test will provide the researchers a chance to rapidly analyze app functionality, evaluate the data quality provided by volunteers, and make improvements before applying it in a real situation.
“This is kind of like a fire drill. When a disaster occurs, it’s important that we are able to respond quickly, that the app and data collection run smoothly, and that we know what quality of data to expect,” says IIASA researcher Steffen Fritz, who leads the institute’s citizen science team. “Hurricane Matthew, for example, damaged an estimated 200,000 homes in Haiti. The idea of this project is to help provide a quick initial damage assessment, so that help can also quickly reach those who need it.”
Led by Imperative Space in collaboration with IIASA and the Humanitarian OpenStreetMap Team, the project is a part of the ESA initiative Crowd4Sat, which combines citizen science with satellite observations in the fields of biodiversity & wildlife, environment and disaster relief. | https://www.newswise.com/articles/citizen-science-campaign-to-aid-disaster-response |
Half the population of the UK have had at least one adverse childhood experience by the time they enter adulthood.
These experiences - from living in a household where there is regular domestic abuse or substance misuse, to dealing with bereavement or being taken into care - are likely to be traumatic for children, affecting their psychological development and wellbeing.
Professionals' understanding of the impact that trauma has on children has grown significantly in recent years and is changing the shape of practice and services.
For example, children's centres are putting more emphasis on working earlier with families to identify and address issues that may cause trauma, while some schools are developing trauma-informed approaches to improve behaviour in the classroom.
For the most vulnerable children, such as those with care experience and offenders, social care and youth justice teams are developing therapeutic interventions geared to building resilience in young people to help them overcome trauma.
Meanwhile, there is increasing recognition of the effects that working with traumatised children can have on practitioners, with more emphasis on training and supervision.
CYP Now's special report on trauma-informed practice assesses recent research on the impact of trauma on children, hears from experts on how policymakers and services are responding to traumatised young people and highlights the work of four projects that have developed innovative interventions.
Click on the links for more:
Trauma-Informed Practice: Policy context
Research evidence by Deanne Mitchell, information specialist, the Social Care Institute for Excellence (SCIE):
- Systematic Review of Organisation-Wide, Trauma-Informed Care Models in Out-Of-Home Care Settings
- Evidence Review: Developing Trauma Informed Practice in Northern Ireland
- Healing Environments For Children Who Have Experienced Trauma
Practice examples: | https://www.cypnow.co.uk/features/article/trauma-informed-practice-special-report |
BUSI 6905 [0.5 credit] Advanced Statistical Methods for Business Research
A practical introduction to advanced statistical methods used in business research, with particular focus on discrete categorical data. Topics include the analysis of two-way and three-way tables; loglinear modeling; logistic regression; generalized linear models. Students will analyze real data using appropriate software packages. | https://calendar.carleton.ca/search/?P=BUSI%206905 |
Andrew Small is a freelance writer in Washington, D.C., and author of the CityLab Daily newsletter (subscribe here). He was previously an editorial fellow at CityLab.
Keep up with the most pressing, interesting, and important city stories of the day. Sign up for the CityLab Daily newsletter here.
***
Won’t you be my neighbor? A lot happens when a neighborhood gentrifies. Existing residents may see some positive effects—affluent neighbors tending to bring safer streets or improved schools—and newcomers might even pick a place based on the potential for the kind of community they seek. But the sense of community in these neighborhoods can suffer as a result of these changes. That’s a key finding from a new paper on Philadelphia’s gentrifying neighborhoods, where residents reported a lessened sense of trust and belonging compared to people in neighborhoods that weren’t gentrifying.
While gentrification may not cause direct displacement, it foreshadows a slower demographic turnover that can cause fear, alienation, and other tensions that erode community ties. “These neighborhoods may be, in a demographic sense, integrating, but socially they’re not integrating,” one researcher tells CityLab’s Tanvi Misra. Read her story: What Happens to Community Bonds When a Neighborhood Gentrifies
The streets were never free. Congestion pricing makes that plain. (New York Times)
1 in 3 high-speed chases at the border ended in a crash (ProPublica)
Will Amazon HQ2’s effect on Northern Virginia’s housing be as feared—or hoped for? (Washington Post)
Stickering is an increasingly popular art form for D.C. artists, particularly women (Washington Post)
Tell your friends about the CityLab Daily! Forward this newsletter to someone who loves cities and encourage them to subscribe. Send your own comments, feedback, and tips to [email protected].
The short-term rental market is reeling from the coronavirus-driven tourism collapse. Can the industry’s dominant player stage a comeback after lockdowns lift?
Renters in apartments and houses share more than just germs with their roommates: Life under coronavirus lockdown means negotiating new social rules.
Will COVID-19 change how cities are designed? Michele Acuto of the Connected Cities Lab talks about density, urbanization and pandemic preparation.
Because of coronavirus, millions of tenants won’t be able to write rent checks. But calls for a rent holiday often ignore the longer-term economic effects.
What do we know so far about the types of places that are more susceptible to the spread of Covid-19? In the U.S., density is just the beginning of the story. | https://www.citylab.com/newsletter-editions/2019/04/citylab-daily-how-gentrification-changes-sense-community/586507/ |
This zucchini strata from Yotam Ottolenghi is made with ciabatta bread soaked in a rich custard-like concoction of milk, cream, and egg, then cooked in a single skillet with zucchini and basil and a crisp golden Parmesan crust.
This magnificent zucchini strata bears all the typical understated elegance found in every creation from Yotam Ottolenghi. The custardy texture is ethereally light, fluffy, and comforting in that way that only happens when you soak bread in milk, and it finds a lovely contrast in a crisp, golden brown Parmesan crust. Everything else—eggs, cheese, zucchini, basil, bread—is in perfect proportion to one another. Thank you yet again, Ottolenghi.–Angie Zoobkoff
☞ Looking for more zucchini recipes? Try these:
- Zucchini Gratin with Fresh Herbs and Goat Cheese
- Zucchini, Peach, and Burrata Pizza
- Spaghetti with Zucchini, Lemon, and Basil
☞ Table of Contents
Zucchini Strata
Ingredients
- 1 pound store-bought or homemade ciabatta, crusts removed and reserved for bread crumbs, bread torn into small chunks (6 cups)
- 3/4 cup plus 2 tablespoons whole milk
- 3/4 cup plus 2 tablespoons heavy cream
- 2 large garlic cloves minced
- 6 large eggs lightly beaten
- 3/4 teaspoon ground cumin
- 3/4 cup finely grated Parmesan
- Kosher salt and freshly ground black pepper
- 2 medium zucchini, coarsely grated (3 to 4 cups)
- 1 1/4 cups basil leaves torn into pieces
- 2 tablespoons olive oil
Directions
- Preheat the oven to 400°F (200°C).
- In a medium bowl, combine the ciabatta, milk, and cream and mix well. Cover and wait for the bread to absorb most of the liquid, about 30 minutes.
- In a large bowl, combine the garlic, eggs, cumin, 1/4 cup Parmesan, 3/4 teaspoon salt, and 1/4 teaspoon pepper. Mix well and then add the bread and its liquid followed by the zucchini and basil. Stir gently.
- Place an 8-by-10-inch (20-by-25-cm) baking dish in the oven until hot, about 5 minutes. Remove from the oven, brush with the oil, and pour in the zucchini mixture, smoothing the surface. Bake for 20 minutes. Sprinkle the last of the Parmesan evenly on top of the strata and then bake until the strata is golden brown and cooked through (a knife inserted in the center should come out clean), 20 to 25 minutes more. Let rest for 5 minutes before slicing and serving.
Show Nutrition
If you make this recipe, snap a photo and hashtag it #LeitesCulinaria. We’d love to see your creations on Instagram, Facebook, and Twitter.
Recipe Testers’ Reviews
This zucchini strata was delicious! It was extremely easy to assemble and it came together quite fast. My bread soaked up the milk and cream in 10 minutes, so I didn't wait the full 30 minutes as instructed in the recipe. All other instructions were perfect, so follow them exactly and you can't fail.
I made this a second time and used sourdough instead of ciabatta and half-and-half in place of heavy cream and it was equally delicious! My one recommendation: don't bother removing the crusts from the bread. It's unnecessary, the bread soaks up the milk and egg mixture just the same, and the crusts add a nice variation in texture.
I would describe this more as a strata than a frittata (I'm Italian, and frittatas never contain bread or that much milk). That being said, this made a lovely vegetarian dinner but could also be served for brunch.
I baked mine in a square stone baker—9 1/2 by 9 1/2 inches—and it was the perfect size.
This zucchini strata recipe came together quite easily and was very light and fluffy. The flavor was very good, even my husband liked it, and he’s not a zucchini fan. The zucchini was not overpowering.
I used a standard 7-by-11-inch baking dish as I did not have an 8-by-10-inch dish and it worked out fine.
The flavor combination of the zucchini with basil, Parmesan, and garlic was both mild and exciting. Each bite both comforted and excited the palate.
Although the flavors were good, my mouth could not get past the texture of the custardy, soggy bread. For those who don’t mind that texture, this zucchini strata is great! | https://leitesculinaria.com/225876/recipes-zucchini-strata.html |
Articular cartilage subpopulations respond differently to cyclic compression in vitro.
The inferior biomechanical properties of in vitro-formed tissue remain a significant obstacle in bioengineering articular cartilage tissue. We have previously shown that cyclic compression (30 minutes, 1 kPa, 1 Hz) of chondrocytes isolated from full-thickness cartilage can induce greater matrix synthesis; however, articular cartilage is composed of different subpopulations of chondrocytes, and their individual contribution to enhanced tissue formation has not been fully characterized. This study examines the contribution of chondrocyte subpopulations to this response. Bovine articular chondrocytes were isolated from superficial to mid zones (SMZs) or deep zones (DZs), placed in three-dimensional culture, and subjected to cyclic compression. DZ chondrocytes on calcium polyphosphate substrates formed thicker tissue than those from SMZs. Compression increased matrix accumulation in SMZ chondrocytes while decreasing accumulation in DZ chondrocytes. The SMZ and DZ chondrocytes also differed in their membrane type 1 matrix metalloproteinase (MT1-MMP) and MMP-13 expression, enzymes that play a crucial role in mediating the response to mechanical stimulation. In addition, the duration of the culture period was important in determining the DZ response, raising the possibility that matrix accumulation plays a role in the response to stimulation. Understanding the cellular response to mechanical stimulation during tissue formation will facilitate our understanding of tissue growth and allow for further optimization of cartilage tissue formation in vitro.
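For readers who want a numerical feel for the loading protocol quoted above (30 minutes, 1 kPa, 1 Hz), the sketch below renders it as a compression-only waveform. The sampling rate and the haversine-style waveform shape are our assumptions, not details reported by the study.

```python
# Illustrative sketch only: the cyclic compression regime from the abstract
# (30 min, 1 kPa, 1 Hz) as a compression-only sinusoid.
import numpy as np

duration_s = 30 * 60      # 30 minutes
freq_hz = 1.0             # 1 Hz loading frequency
amplitude_kpa = 1.0       # peak compressive stress
fs = 100                  # sampling rate in samples/s (assumed)

t = np.arange(0, duration_s, 1 / fs)
# Waveform oscillates between 0 and 1 kPa, i.e., never tensile.
stress = 0.5 * amplitude_kpa * (1 - np.cos(2 * np.pi * freq_hz * t))

print(f"cycles applied: {duration_s * freq_hz:.0f}")
print(f"peak stress: {stress.max():.2f} kPa")
```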
Garhmukteshwar, Hapur
Garhmukteshwar is a holy place in Hapur district, situated on the banks of the river Ganges, 32 km east of Hapur. It is the closest point to Delhi where the holy river Ganges flows. It is an ancient town and is mentioned in the Bhagvata Purana and the Mahabharata. Garhmukteshwar is believed to be a part of Hastinapur, which was the capital of the Kauravas.
Garhmukteshwar derived its name from the ancient temple of Mukteshwar Mahadev. The temple is dedicated to the goddess Ganga, who is worshipped at this place in four temples; two of them are situated on the cliff and two below it. The water of the Ganges River is considered very pious and sacred, and it is believed that the sins of those who take a dip in this holy water are washed away. There are eighty pillars at the banks of the Ganges River which are known as sati pillars. They mark the spots where Hindu widows became sati.
Garhmukteshwar Fair
Every year a bathing fair is organized on the full-moon day in the month of kartika at Garhmukteshwar. The Garhmukteshwar bathing festival is a symbol of tradition, faith and trust and people from nearby and far off places come here to have a dip in the holy Ganga. Apart from this bathing fair, another bathing fair is organized in the month of Jyaistha on the occasion of Dusherra. During this fair also pilgrims come from all over India and take a dip in the holy water of Ganges River.
Garhmukteshwar became a tehsil on 26th January, 1977 under the Ghaziabad district. In 1979, Ghaziabad Zila Parishad constituted Garhmukteshwar and since then the historical Ganga Fair in Garhmukteshwar is sponsored by the Zila Parishad. Proper arrangements are made by the government and Municipal Corporation of the town during the Garh Ganga Fair. The entire region becomes like a city of tents during the bathing festivals.
Another fair organized in Garhmukteshwar is Gadho-ka-Mela, a kind of animal market where donkeys and horses are bought and sold. This fair is attended by residents of Hapur district and nearby places. Businessmen from all parts of the country, and even from neighbouring countries such as Bangladesh and Pakistan, also take part in this animal fair.
Special arrangements are made during the Garh Ganga Fair time for the pilgrims so that they do not face any problem. | https://www.hapuronline.in/city-guide/garhmukteshwar-hapur |
Birmingham’s two oldest cemeteries
Key Hill and Warstone Lane are Birmingham's two oldest cemeteries, opened in 1836 and 1848 respectively, and they have provided a final resting place for many notable Birmingham figures. The landscapes of both cemeteries are listed on the Historic England Register of Parks and Gardens in recognition of their great historic importance. This project aims to restore them to their former glory and protect them for future generations.
Alongside the restoration work, there will also be an extensive programme of events celebrating the heritage, natural environment and community value of the cemeteries.
The key aims of the project are:
- To change perception of the cemeteries and raise awareness of their historical importance
- To get more people to visit the cemeteries and become actively engaged with the project
- To add to the ‘sense of place’ in the JQ, boosting pride and confidence in the area
- To build strong partnerships with local residents, business and communities
- To provide new resources for education and life-long learning about nature and heritage.
Cemeteries
Key Hill Cemetery
Originally known as the Birmingham General Cemetery and opened in 1836, Key Hill was Birmingham’s first garden cemetery. It was founded by a group of non-conformist businessmen to solve the shortage of burial space in the city and was open to all creeds and denominations.
Warstone Lane Cemetery
Founded by the Church of England Cemetery Company and consecrated by the Bishop of Worcester in 1848, this was the second garden cemetery in Birmingham. It was also located in the Jewellery Quarter, probably inspired by the success of the neighbouring Key Hill cemetery.
Explore more about the project
Restoration project
The restoration work began in July 2019 and is due to be completed during Summer 2020.
Phase 1 – July 2019 to January 2020: Boundary walls, railings and catacombs
Phase 2 – January 2020 to Summer 2020: Landscaping, drainage and pathways
Volunteering
Our amazing Jewellery Quarter Heritage Squad volunteers support the project in loads of different ways, and we couldn’t do this without them! Get involved with Outdoor Conservation, Events, Research, Blogging, Social Media, Promotions and more.
Places We Love
Love the cemeteries as much as we do?
Check out these other Jewellery Quarter attractions and sites of interest. You can spend a whole day exploring the area. | https://cemeteries.jewelleryquarter.net/about/ |
A crew is made up of a minimum of five members consisting of a pilot, copilot, navigator, aerial reconnaissance weather officer, and loadmaster who is also the dropsonde operator.
While the pilots handle the controls, there is a third person positioned behind them, known as the navigator or ‘nav’ for short.
Navigators are responsible for preparing flight plans, which include routes, headings, checkpoints, and times. During flight, they operate from their station using equipment such as GPS, radio, radar, and communication systems that assist in guiding the aircraft through weather.
Maj. Mark Withee, 53rd WRS navigator, said, “As a nav, we have to be the middle man between the weather officer and pilots, and we have to be able to compromise on a route to get to an area of interest, which is crucial in a storm.”
Withee explained that while weather officers gather weather data and request flyovers of an area of interest, it may not be safe for the aircraft to take a direct route. Thus, the navigator plots the safest course to satisfy the request and accomplish the mission.
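The route arithmetic behind such flight planning rests on great-circle geometry. Below is an illustrative sketch of the standard haversine distance and initial-bearing formulas; the waypoint coordinates are hypothetical and not taken from any actual mission.

```python
# Toy example of great-circle navigation math (not actual mission planning code).
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two waypoints."""
    r_nm = 3440.065  # Earth's mean radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle heading in degrees true."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# Hypothetical leg from the Gulf Coast toward a storm fix over open water
print(f"distance: {haversine_nm(30.41, -88.92, 25.0, -90.0):.0f} nm")
print(f"heading:  {initial_bearing_deg(30.41, -88.92, 25.0, -90.0):.0f} deg true")
```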
You can track the hurricane reconnaissance missions on Tropical Tidbits, which plots out active recon data. And you can track the recon team’s movements on the National Hurricane Center website. | https://ifatceg.com/the-hurricane-hunter-nav/ |
History Center
HOME TO THE EGG HARBOR HISTORICAL SOCIETY (EHHS)
About The EHHS
Started in 2009, the Egg Harbor Historical Society’s mission is to make Egg Harbor History come alive for future generations through the collection, preservation and sharing of the heritage of Egg Harbor.
Prior to 2009 there was no organization dedicated to historical preservation of the community of Egg Harbor, but there were several independent efforts with preserved photographs and documents, written historical narratives, art, preserved structures and even video presentations.
Their mission is to bring these efforts out into the open by collaborating with those who have an interest in Egg Harbor history and initiating their own efforts where there is a need for financial resources and collaborative ventures.
For more information about the Egg Harbor Historical Society, visit: EHHS
Event Calendar
Check back for Events!
Renting The History Center
With a 50” flat screen TV featuring current tele-conferencing capabilities and a board room meeting table, this room has a comfortable seating capacity for 8 people. This room is available for rent Monday-Sunday 8 am – 10 pm outside of regularly scheduled library programs. | https://kresspavilion.org/history-center/ |
EPFM will accept applications through ERAS for first year residents. Upon receipt of a file, a preliminary review will be conducted to evaluate the applicant’s academic fitness.
Minimum elements for an application include:
Interviews will be scheduled based on the needs of the applicant and the program. The interview day includes tours of the clinic and hospital, meeting with the program director and other faculty, an introduction to Puyallup, and a discussion about the curriculum, minimum job requirements, and pay and benefits.
Applicants are not expected to bring any additional documentation or materials to the interview. All relevant information should be submitted through ERAS.
The residency office can be reached at (253) 697-5757. Melissa Yeager is our Residency Coordinator and is responsible for recruiting, coordinating applicants, and scheduling interviews. She can also answer questions related to the application process.
International Graduates Please Note: Our parent hospital does not sponsor visas. Your ERAS application must be complete, including all of the above documentation as well as verification of passing the USMLE Step II-CS examination and verification of current certification by the ECFMG in order to be pre-screened by faculty.
Applicants who have not received ECFMG certification by the deadline for submission of program ranking list will not be listed for our program. | https://www.multicare.org/epfm-application-process/ |
Twenty (or even ten) years ago, mental health was a taboo subject. While it was fine to talk about your deviated septum or a root canal, mental health was something to be endured quietly, behind closed doors. A mixture of shame, stigma and stoicism kept everyone suffering from mental health problems from sharing their troubles with the world.
However, studies have shown that even while we weren’t talking about mental health problems, we were having them. In fact, in a study conducted between 1980 and 2013, between 17.6% and 29.2% of adults had some kind of mental health problem.
Things have changed a bit since then, but let’s take a closer look at how mental health workplace policies and standards have changed, and why, and what we might expect next.
The Changing Face of Mental Health
One of the biggest barriers to helping people cope with mental health problems has always been the stigma. It was never acceptable to talk about depression or anxiety and being on medication for a mental health condition was something you had to hide.
These days, we have big name celebrities stepping up, sharing their mental health struggles with the world. We haven’t eliminated mental health problems (and we probably never will) but we’ve made it okay to admit that we’re not superhuman. It’s okay to be struggling. It’s okay to see a therapist. It’s even okay to be open about it.
We’re still learning how to make mental wellness part of the health narrative, but we’ve come a long way in a relatively short while.
Mental Health Workplace Trends
Of course, as we’ve all become more comfortable with the idea that mental health is health, and that we need to take as much care of our minds as we do our bodies, there have been societal changes to match.
Since most of us spend around a third of our adult lives at work, and the rest asleep or enjoying down time, it only makes sense that mental health workplace trends and regulations would have to keep up with that shift.
In some cases, like “right to disconnect” laws in various parts of the world, workplaces themselves, and their ability to contribute to stress related conditions have been recognised and somewhat neutralised. In others, laws protecting people from discrimination based on their mental health status have emerged.
Mental health is now seen as much a part of employee health and wellness as a safe workplace, and if you aren’t already taking steps to address it, it’s time to start.
Mental Health Trends in the Business World
As more and more companies embrace the idea that you can’t have happy, healthy, productive workplaces without mentally healthy employees, there are some clear trends emerging. These include:
- Allowing employees to take time off for mental health reasons when they need to, without delving into their reasons for being away from the office
- Tailoring employee benefit packages to include coverage for mental health and psychology treatments
- Providing enough support and training for employees, to ensure that their workload does not compound mental health problems
- Implementing zero tolerance policies for office bullying
- Providing information about mental health services
- Incorporating mental health resources and information in employee manuals
- Recognising the connection between substance abuse and mental health, and assisting employees in getting treatment for both
- Building stronger, more supportive teams where collaboration instead of competition is rewarded
- Investing more time and effort into team building
- Setting specific times during the day where no meetings are allowed to be scheduled
- Having emails sent after a certain time to be held until the next workday starts
Company culture used to be about the office softball league and annual picnics. However, as mental health workplace connections come to the fore, more companies are recognising that there’s more to it than that. Human resources departments need to build strategies for dealing with mental health into their operating procedures, from recruiting, throughout the employee relationship and beyond.
We now know that you can’t compartmentalise people into work and outside work. When people have problems, they will bring them to the office, and vice versa. So, we must make sure that we’re equipped to help them cope.
Benefits for Companies That Prioritise Mental Health
Research has shown that companies that have proactive and positive mental health policies and programs in place retain more staff. There’s a growing link between people leaving companies and being unable to cope in their roles. In other words, people are more likely than ever to quit because their job is bad for their mental health.
Even if the job is not the cause of the mental health problem, if it’s not helping (or making it worse) you are far more likely to lose good people.
There’s also a link between well managed mental health and productivity. When people feel happier and have greater levels of positive emotion, this translates into more energy, greater interest and focus, higher motivation, and thus productivity. In fact, productivity is boosted by approximately 12% when wellbeing is addressed at work, according to the Mental Health Foundation in the UK.
So, while there are certainly moral, ethical and societal reasons to take a closer look at mental health workplace policies and processes, there’s a strong business case too. If you make mental health part of your human resources strategy, you will attract better people, retain more of them, and have people who do a better job.
That, as they say, is a no-brainer.
Whether it’s making it easier to get a mental health day or gifting your employees Return of the Panda’s Coping Cards, every step you can take to better mental health in your office is a step in the right direction. | https://returnofthepanda.org/2022/04/26/has-your-company-led-or-followed-mental-health-in-the-workplace-trends/ |
The Richmond Tennis Club invites all Migrant Communities to their Club for an introduction to the game of tennis "A Game for Life"
What to bring - We ask that soft soled shoes are worn on the courts. The Club will supply rackets, balls, instructions and lots and lots of fun, music and laughter.
Cost $5 pp for BBQ lunch and drink.
If raining, the event will be postponed to 27th July (11am-2pm). | http://events.stuff.co.nz/2019/multicultural-day-of-tennis/waimea?utm_medium=rss |
‘The Bold and the Beautiful’: Where Is Reign Edwards Now?
The Bold and the Beautiful has had many talented actors, including Reign Edwards. The three-time Daytime Emmy nominee is best known for her role as Nicole Avant on the CBS soap opera. Edwards was on the show for three years before departing in 2018. Let’s look at what she’s been up to since then.
Former ‘The Bold and the Beautiful’ actor Reign Edwards is starring in ‘The Wilds’
After three years on The Bold and the Beautiful, Edwards left to go on to star in other primetime TV shows. From 2016 to 2020, she was in the recurring role of Leanna Martin on the CBS series MacGyver. She then was cast as Melody in the FX drama Snowfall.
RELATED: ‘The Wilds’ Cast Bonded After 1 Intense Stunt Scene
Currently, the actor can be seen on the Amazon Prime series The Wilds, about a group of teen girls stranded on an island after a plane crash. Aside from TV, Edwards also has film roles under her belt, including the 2018 flick Hell Fest. According to IMDb, she has wrapped the upcoming movies Old Dads and Love You Anyway.
Soap fans best know Reign Edwards as Nicole Avant
In January 2015, Edwards made her The Bold and the Beautiful debut as Nicole. Nicole, the younger sister of Maya Avant (Karla Mosley), arrives with a secret about her sibling. While many fans thought Nicole was Maya’s daughter, they were stunned when Nicole revealed that Maya had been born as her brother, Myron.
Although they had a strained relationship, Nicole and Maya reconciled, and Nicole became her sister’s biggest supporter when Maya’s secret was revealed. Nicole defended Maya when their disapproving parents, Julius and Vivienne Avant (Obba Babatundé and Anna Maria Horsford), arrived in town.
RELATED: ‘The Bold and the Beautiful’: Jacob Young Disagreed Over Ending Rick and Maya
Nicole found love with Zende Dominguez (Delon de Metz), yet their relationship was filled with complications. One of the couple’s issues was Nicole being a surrogate for Maya and Rick Forrester (Jacob Young). Although Zende was against the idea, he changed his mind when he saw Maya and Rick’s joy with their daughter.
Nicole and Zende marry and plan to start a family. However, their dreams are crushed when Nicole is informed she can’t have more children. Nicole contemplates suing Maya and Rick for custody of Lizzie with her father’s encouragement, but Zende talks her out of it.
Shortly afterward, Nicole and Zende leave town. However, he returns in 2020, a single man after he and Nicole divorced.
Will Nicole Avant ever return to ‘The Bold and the Beautiful’?
Edwards is busy with her other projects, so a return to The Bold and the Beautiful seems out of the question. However, that hasn’t stopped fans from wishing Nicole would return. Nicole coming back would provide a good storyline for Zende.
First, fans would get answers on why the couple divorced. Did Zende stray again from Nicole? Or was Nicole the one who broke his heart? Those are the questions on everyone’s mind.
Nicole’s comeback could also open up a reunion with Zende. The young fashion designer was left heartbroken by Paris Buckingham (Diamond White), who dumped him for Carter Walton (Lawrence Saint-Victor). Nicole could provide comfort to her ex-husband while igniting a rivalry with Paris. | https://www.cheatsheet.com/entertainment/the-bold-and-the-beautiful-reign-edwards-now.html/ |
POTW labs to start training for study examining antibiotic-resistant bacteria, genes in effluent
SCCWRP and its POTW member agencies in May will begin practicing collection and analysis techniques for a year-long study examining whether viable antibiotic-resistant bacteria – and the genetic material that codes for antibiotic resistance – are being discharged into the environment following the wastewater treatment process.
The study, scheduled to begin in June, will measure the prevalence of antibiotic-resistant bacteria entering nine wastewater treatment plants across Southern California, including an international plant at the U.S.-Mexico border. Researchers will track which bacteria and genetic material survive treatment and are discharged into receiving waters.
Researchers are particularly concerned about antibiotic resistance genes in wastewater effluent because these genes may survive the treatment processes that destroy most bacterial cells, and then may travel via treated effluent into aquatic systems. Once in the environment, potentially pathogenic bacteria can take up the antibiotic resistance genes, which could confer antibiotic resistance on other bacteria, including pathogenic strains that make humans sick.
Previous studies have documented a broad array of antibiotic resistance genes in wastewater effluent, as well as how commonly bacterial cells swap their antibiotic resistance genes with one another.
In preparation for the study’s kickoff, standard operating procedures are being circulated to all participating labs, so they can practice the techniques and ensure they can generate high-quality, comparable results. | http://www.sccwrp.org/news/potw-labs-start-training-study-examining-antibiotic-resistant-bacteria-genes-effluent-2/ |
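One basic quantity the participating labs could report from such sampling is the log reduction of a gene target across treatment. The sketch below shows that calculation on made-up qPCR-style concentrations; the numbers are purely illustrative and are not study data.

```python
# Illustrative sketch (synthetic numbers, not study data): log10 removal of an
# antibiotic resistance gene across treatment, from influent vs. effluent
# concentrations in gene copies per mL.
import math

influent_copies_per_ml = 1.2e6   # assumed influent concentration
effluent_copies_per_ml = 3.5e2   # assumed effluent concentration

log_removal = math.log10(influent_copies_per_ml / effluent_copies_per_ml)
percent_removed = (1 - effluent_copies_per_ml / influent_copies_per_ml) * 100
print(f"log10 removal: {log_removal:.2f} ({percent_removed:.3f}% removed)")
```

Reporting removal on a log scale is the convention here because even a 99.9% reduction (3-log) can leave environmentally meaningful gene concentrations in high-volume discharges.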
Since better outcomes for mesial temporal lobe epilepsy (MTLE) were reported in 2001, epilepsy surgeries have been established as a treatment for medically intractable epilepsy. Among them, five types of epilepsy surgery (amygdalohippocampectomy for MTLE, lesionectomy for focal epilepsy with or without apparent MRI lesions, hemispherotomy for hemispheric epilepsy, and callosotomy for drop attacks) are known as surgically remediable epileptic syndromes. Furthermore, in 2010, vagal nerve stimulation was approved as a palliative surgery for epilepsy in Japan.
When the epileptogenic focus is undetermined in non-invasive evaluations such as scalp electroencephalography (EEG), MRI, and positron emission tomography, invasive evaluation using intracranial electrodes might be performed subsequently. Conventionally, subdural grid EEG was mainly used for detecting the epileptogenic focus; however, the use of stereotactically implanted depth electrodes (stereo-EEG, or SEEG) has been increasing recently. SEEG is considered less suited to confirming cortical seizure propagation and to performing functional mapping with electrical stimulation, but it has been widely adopted because of its low invasiveness. When planning the location of the electrodes, a hypothesis of the seizure focus and its propagation needs to be set up based on the anatomo-clinico-electrical correlation.
When the epileptogenic focus is in a limited area of the brain, less invasive approaches should be selected for the removal of the lesion. On the other hand, when the epileptic network affects a wide part of the brain, surgeries based on the disconnection concept, such as corpus callosotomy, multi-lobe disconnection, and hemispherotomy, should be considered.
Although unapproved in Japan, new concepts of surgery, such as focus coagulation, deep brain stimulation, and responsive neurostimulation, have become popular instead of focus resection, especially in cases of epilepsy originating from eloquent areas. Here, we describe the concept of epilepsy surgery and the current topics in this field.
Deep brain stimulation (DBS) is a well-established surgical treatment for patients with advanced Parkinson's disease (PD) who show on/off motor fluctuations, dyskinesia, and/or tremor. DBS therapy has been widely administered over the past two decades, and the advantages and disadvantages of this therapeutic approach have been revealed. Although various studies have proved the efficacy of DBS, advances in this therapeutic modality and a multidisciplinary clinical approach are essential for further improvements in the clinical outcomes of this treatment. Lately, various DBS devices including directional leads, as well as a pulse generator that senses real-time neurophysiological activities through the intracranial lead have become available in Japan. These novel technologies have refined the treatment paradigm ; however, an increase in stimulation parameters and greater complexity of programming necessitate further development of clinical knowledge. While adjustability and reversibility are acknowledged as advantages of DBS therapy, studies have reported that DBS surgery has failed to show the expected efficacy secondary to complications including hardware infection and intracranial lead misplacement in some cases. Notably, the advantage of conventional radiofrequency lesioning has been reconsidered as an effective treatment option, following the advent of thermoablation therapy using magnetic resonance imaging-guided focused ultrasound system. These treatment modalities have led to complexity in the treatment of PD. Among the various surgical treatment options currently available, standardized surgical procedures performed by well-trained neurosurgeons are necessary to select DBS as the first-line procedure for patients with PD.
Surgical treatment, including deep brain stimulation (DBS) and ablative surgery targeting the basal ganglia-thalamo-cortical circuit, can provide substantial improvement even in refractory conditions. Ablative procedures include radiofrequency, gamma knife, and focused ultrasound. In particular, focused ultrasound ablation has attracted the most attention, allowing intracranial focal lesioning without incision.
Tremor is the most common movement disorder and the best candidate for DBS or ablative surgery of the thalamic ventral intermediate nucleus (Vim). Bilateral thalamotomy was abandoned because of severe complications, such as dysarthria, dysphonia, and dysphagia, and DBS plays a significant role in those who require bilateral intervention. However, recent studies have emphasized the safety and efficacy of bilateral Vim thalamotomy. Tremor is also the most thoroughly investigated indication for minimally invasive procedures such as gamma knife and focused ultrasound ablation.
Dystonia can develop from focal to generalized, and the available treatment targets differ according to their distribution. The globus pallidus internus (GPi) is the current mainstay target for cervical, segmental, or generalized dystonia. Distal limb dystonia (hand and foot dystonia) requires the intervention of the ventro-oral (Vo) nucleus of the thalamus. Vo-thalamotomy using radiofrequency, gamma knife, and focused ultrasound has been reported to have long-term effects on focal hand dystonia.
Transcranial MR-guided focused ultrasound (MRgFUS) can be used for tissue coagulation by focusing ultrasound on a single target at high density. MRgFUS has been developed and clinically applied as a device that can create coagulation foci non-invasively, as the coagulation focus and temperature can be observed in real-time by MRI.
The MRgFUS coagulation procedure is as follows: preoperative MR images are acquired to plan the target, and CT scans are used to mark areas, including air and calcification, that ultrasound cannot penetrate. A stereotactic surgical frame is fixed to the patient's head. After setup, MRI is performed again to confirm the target. Treatment is delivered while confirming that the temperature rises at the planned target. Once the tissue temperature reaches approximately 45℃, improvement of the patient's symptoms is examined. The ultrasound power and sonication duration are then adjusted to raise the temperature further and create a sufficient coagulation focus at the target.
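The real-time temperature monitoring described above is commonly summarized as a thermal dose. As a hedged illustration, the sketch below implements the standard Sapareto-Dewey cumulative equivalent minutes at 43°C (CEM43) formula on a synthetic temperature trace; the trace and sampling interval are assumptions, not clinical data.

```python
# Hedged sketch: the Sapareto-Dewey thermal dose (CEM43) often used alongside
# MR thermometry. The temperature trace below is synthetic, not patient data.
def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 C for a temperature time series."""
    dose_min = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25  # standard piecewise R value
        dose_min += (dt_s / 60.0) * r ** (43.0 - t)
    return dose_min

# Synthetic 20 s sonication ramping from 37 C to ~56 C, sampled once per second
temps = [37 + 19 * min(1.0, s / 10.0) for s in range(20)]
print(f"thermal dose: {cem43(temps, dt_s=1.0):.0f} CEM43")
```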
Thalamotomy by MRgFUS for essential tremor has been covered by public insurance in Japan since June 2019. To date, MRgFUS has been used in more than 250 cases, and its efficacy has been recognized. MRgFUS has also been used to treat Parkinson's disease since 2020. In the future, MRgFUS is expected to expand its indications to include neurological and psychiatric disorders other than neurodegenerative and movement disorders.
Methotrexate-associated lymphoproliferative disorder (MTX-LPD) is an iatrogenic disorder that develops during low-dose methotrexate (MTX) therapy. Two patients who had been treated with MTX for rheumatoid arthritis showed multiple lesions with ring contrast enhancement in the cerebrum on head magnetic resonance imaging. We performed an open biopsy, and histopathological examination showed the presence of Epstein-Barr virus-positive diffuse large B-cell lymphoma. The patients were diagnosed with MTX-LPD induced by low-dose MTX therapy. The lesions regressed after MTX discontinuation. Although MTX-LPD is mainly composed of extra-nodal lesions, MTX-LPD in the central nervous system is rare. Since this disease is likely to be improved simply by discontinuing MTX, it is important to recognize this disease and to make a diagnosis by biopsy as promptly as possible.
Craniometaphyseal dysplasia is a hereditary osteosclerotic disease characterized by hyperostosis of the skull and enlargement of the metaphyses of the long bones. By 2017, 105 cases had been documented, but cases associated with Chiari malformation type Ⅰ are extremely rare. We report a case of Chiari malformation type Ⅰ complicated with craniometaphyseal dysplasia.
A 15-year-old boy with craniometaphyseal dysplasia and progressive scoliosis was referred to our department because of tonsillar herniation and syringomyelia. Thermal hypoalgesia in the extremities was the only neurological finding of this lesion. Foramen magnum decompression was performed via suboccipital craniotomy and C1 laminectomy without duroplasty. The postoperative course was complicated with epidural hematoma, which required reoperation within 3 days. Follow-up MRI showed good decompression at the craniocervical junction and a significant decrease in the size of the syrinx.
Foramen magnum decompression is an effective treatment for Chiari malformation type Ⅰ complicated with craniometaphyseal dysplasia. However, such patients require strict perioperative management, because time-consuming procedures, a large postoperative dead space, and craniofacial deformities make the perioperative course prone to a higher incidence of complications.
Saudi Arabia’s energy minister, Eng Khalid Al-Falih, said the kingdom will seek $425bn (SAR1.6tn) in infrastructure investment through a Vision 2030 programme driven by Crown Prince HRH Mohammed Bin Salman.
According to a Bloomberg report citing Al-Falih, who is Saudi Arabia’s Minister of Energy, Industry, and Mineral Resources, the Crown Prince will deliver details of the plan on 28 January. Designed to cut the kingdom’s reliance on petrodollars, the new Crown Prince-led programme will seek investments in energy, mining, and industrial projects, Bloomberg reported.
Rail projects spanning thousands of kilometres, and extensive airport refurbishments, are also a part of the over-arching drive that comes as part of Vision 2030, Saudi Arabia’s economic diversification mandate.
Around 70 contracts worth more than $53.6bn (SAR200bn) are expected to be signed as the Crown Prince presents details of the plan.
Al-Falih reportedly said that Saudi Arabia would create an independent power purchasing agency to reorganise its power generation, transmission, and distribution activities.
Bloomberg reported that a new airport for Riyadh is “under consideration”, according to Saudi Arabia’s transport minister, Nabil Al-Amoudi.
“The kingdom also plans to refurbish and expand five airports, and build 2,000km of railways,” the report continued.
Saudi Arabia's Crown Prince leads Public Investment Fund, the organisation playing a crucial role in driving the achievement of the goals of Vision 2030.
The long-term mandate seeks to diversify the kingdom's economy away from oil revenue and ensure the long-term, sustainable growth of the country. | https://www.constructionweekonline.com/168935-al-falih-says-saudi-arabias-crown-prince-to-present-425bn-infra-plan |
A Balanced Approach
If you've spent the last five months following my Health Foundation columns, you've acquired an overview of five lifestyle choices that can have an overwhelmingly positive effect on your health.
The most important message I'd like people to take away from these blogs is the importance of a multi-faceted approach to health. Health is not just about organic food, or exercise or sleep. It's not about a super food or supplement that can change your life. Health is a series of small, consistent choices working in tandem with each other. It's important to put health information into a contextual framework and constantly reassess how relevant, correct, or important new information is.
Often, people are willing to revamp their diet in an extreme way: no gluten, no dairy, only organic, perhaps even vegan or vegetarian. It takes money, time, and tenacity to stick to this type of diet. I actively endorse this type of diet in many cases, but not all. For some people, this type of diet is easy and that is great. But others sacrifice optimal mental health as they stress about their food choices. Sometimes they limit food choices to a few "green light" foods, which severely restricts their nutrient intake, or their diet becomes overly dependent on a certain food. For example, brown rice is great, but if you consume too much, you can have increased arsenic levels in your system. (Arsenic is a byproduct of rice production, not a component of the rice itself.) Or what about legumes? These are the healthy people's mascot, right? Beans are great, but for people suffering from autoimmune disorders, they could cause an inflammatory effect and should be avoided.
Health information needs context to qualify it. Often, health isn't about the best choice, it's about the better choice. If I decide I will only eat what is best for me, I probably won't eat a lot or get a variety of foods in my diet. How do you define best? If it is organic, is it enough to eat organic fruits and veggies, or do you need a place that also has confirmation of soil testing? What standards are you using for the soil testing? Is it better to eat a certified organic granola bar with organic sugar or a non-organic apple? Keeping up to date with the most relevant scientific knowledge can be stressful in and of itself. Fake science (shaky science based on poorly designed studies, rumors, or isolated observations and generalizations) can be the most stressful. The effects of stress from worrying about a healthy diet can undermine every organic choice one makes.
Being healthy shouldn't be stressful, cause shalom bayit problems, make your children resentful, or overstrain your budget. By following the five health foundations outlined in the previous five months, you can set up a healthy lifestyle. Do it at a pace that works for you and your family, do it with happiness, and trust Hashem that if you make the effort, those efforts will lead you in the right direction. After you have set up a solid foundation, you can build on it by adding in things like organic fruits and vegetables and doing the homework I outline here. Good luck and help spread the healthy message by sharing your favorite blogs from my site with your friends and family. | https://www.nomiknows.com/post/2017/06/30/a-balanced-approach |
“In an attitude of contemplation and gratitude, they recognize Creation as a gift from God. Faced with its global deterioration, the Daughter of the Heart of Mary is called to live an ecological conversion to safeguard it and not transform it into an object of consumption and domination. In solidarity with present and future generations, each one, according to her possibilities, will contribute to promoting environmental justice and taking care of the Earth, Our Common Home.” – DHM Constitutions
The Daughters of the Heart of Mary are conscious of the impact we have on God’s creations, including water, air, land, and every living being. We are called to be mindful of the footprints we leave on Mother Earth. The Daughters of the Heart of Mary are aware that we must be good stewards of the natural gifts God has given us. | https://www.dhm.org/social-justice/environment.html |
Theoretical studies on gas-phase reactions of sulfuric acid catalyzed hydrolysis of formaldehyde and formaldehyde with sulfuric acid and H2SO4···H2O complex.
The gas-phase reactions of sulfuric acid catalyzed hydrolysis of formaldehyde, and of formaldehyde with sulfuric acid and the H2SO4···H2O complex, are investigated employing high-level quantum chemical calculations with the M06-2X and CCSD(T) theoretical methods and conventional transition state theory (CTST) with Eckart tunneling correction. The calculated results show that the energy barrier for the gas-phase hydrolysis of formaldehyde is lowered from 38.04 kcal/mol to 6.09 kcal/mol when sulfuric acid acts as a catalyst, at the CCSD(T)/aug-cc-pv(T+d)z//M06-2X/6-311++G(3df,3pd) level of theory. Furthermore, the rate constant of the sulfuric acid catalyzed hydrolysis of formaldehyde, combined with the atmospheric concentrations of the species involved, demonstrates that this gas-phase hydrolysis is feasible and could be of great importance as a sink for formaldehyde, in a reaction previously considered forbidden. However, it is shown that the gas-phase reactions of formaldehyde with sulfuric acid and the H2SO4···H2O complex lead to the formation of H2C(OH)OSO3H, which is of minor importance in the atmosphere.
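As a rough companion to the abstract, the sketch below evaluates the conventional transition state theory expression k(T) = (kB*T/h)*exp(-dG/RT) using the quoted 6.09 kcal/mol barrier as if it were a free-energy barrier at 298 K. This ignores partition functions and the Eckart tunneling correction actually used in the study, so the value is illustrative only.

```python
# Back-of-the-envelope TST estimate; NOT a reproduction of the study's CTST/
# Eckart calculation. Treats the 6.09 kcal/mol electronic barrier as a
# free-energy barrier purely for illustration.
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)
T = 298.15           # temperature, K

barrier_j_mol = 6.09 * 4184  # kcal/mol -> J/mol
k = (kB * T / h) * math.exp(-barrier_j_mol / (R * T))
print(f"k(298 K) ~ {k:.3e} s^-1 (unimolecular convention)")
```

The prefactor kB*T/h is about 6.2e12 s^-1 at room temperature, so even this crude estimate shows why dropping the barrier from 38 to 6 kcal/mol turns a forbidden reaction into a fast one.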
Apple Oatmeal Muffins are a cinch to whip together. One bowl and one muffin pan and you are on your way to a warm muffin for breakfast.
Prep Time: 10 mins | Cook Time: 20 mins | Total Time: 30 mins
Course: breakfast | Cuisine: american | Servings: 12 muffins | Calories: 121 kcal | Author: jodiemo

Ingredients
- 1 1/2 cups All Purpose Flour
- 1/2 cup quick cook oatmeal (not instant)
- 2 t. baking powder
- 1/2 t. baking soda
- 1 t. cinnamon
- 1/2 cup sugar
- 1 egg
- 1 cup buttermilk
- 1 cup diced apple
- 1 T. brown sugar

Instructions
1. Preheat the oven to 400 degrees. Mix the flour, oatmeal, baking powder, baking soda, cinnamon, and sugar together in a large bowl. Mix in the egg and buttermilk and stir until the mixture just comes together. Stir in the apple.
2. Grease a muffin pan with cooking spray and fill each cup 2/3 full with the muffin mixture. Sprinkle the top of each muffin with the brown sugar. Bake for 15-20 minutes until golden brown.

Nutrition
Calories: 121 kcal | https://twoluckyspoons.com/wprm_print/recipe/2913 |
For a number of years, we have endeavoured to comply with high standards of ethical trading.
We would like to take this opportunity to explain the steps that we as a company take to ensure that we maintain the highest possible standards as we continue to source more product overseas and further East.
We are a member of SEDEX (Supplier Ethical Data Exchange) and we encourage our suppliers also to become members. We also endeavour to ensure that all our suppliers are compliant with the local standards and laws of their country.
At Banner, we refuse to have any dealings with any company or individual that requires any form of inducement to be made or received in order to secure a commercial transaction, and we will terminate any ongoing relationship if evidence is found to demonstrate this.
We audit our ethical trading standards firstly through our local agents, who provide quality control facilities and are entrusted to ensure that our high expectations of factory facilities are maintained. Secondly, directors and senior management of our business make regular visits to our suppliers worldwide during the year. Banner has entered into Supply Agreements with all its suppliers, which include their agreement to be bound by the Banner Code of Practice. This clearly defines the expectations and standards we require.
The areas covered are:
- Employment conditions for workers involved in the production of our goods.
- Health & Safety standards.
- Wages and working hours, including adherence to the young person’s working regulations.
- Human Rights.
- Disciplinary and employment records.
- Risk Assessments.
- Legal requirements.
- Sewing and fabric quality standards.
- Safety requirements, especially with regard to babywear garments.
- Manufacturing requirements including CMT procedures.
A copy of our Code of Practice is available on request.
We firmly believe that the guidelines that we have in place ensure that we can produce a good quality garment compliant to all aspects of ethical trading. | https://www.monkhouse.com/ethical-trading |
UX/UI expert with 10+ years of experience in designing and developing for mobile and web. An energetic leader with a passion for problem-solving and creating amazing products from concept to market delivery.
A Product Designer looking to craft the next generation of tools and experiences for more effective learning to make an impact to a global audience of educators and learners.
FUNCTIONAL EXPERTISE
Creative Problem Solver / Strategic Thinking
Visual and Interaction Design
Cross-functional Communications
Product Execution from Concept to Delivery
Excellent Interpersonal Skills
User-Centered Design / Customer Research
SKILL SET
EDUCATION
Berry College, Mt. Berry, GA
B.S., Business & Marketing
3.5 GPA
SFSU, San Francisco, CA
M.A., Education / Instructional Technology (Multimedia & Design)
4.0 GPA
CONTINUING EDUCATION
Stanford University, Stanford, CA
Technology Entrepreneurship,
Intro Computer Science
(Extended Learning/Non-Credit)
And a TON of online Courses! | http://michaelwinningham.com/profile/ |
Los Angeles has become the second California city, after San Francisco, to adopt a so-called “ban the box” ordinance. The new Los Angeles Fair Chance Initiative for Hiring (the “Ordinance”) takes effect on January 22, 2017, and will prohibit private-sector employers operating in the city from inquiring into applicants’ criminal history until after making a conditional offer of employment. Here’s an overview of the new Ordinance.
Who’s Covered
The Ordinance will apply to private employers located or doing business in the City of Los Angeles that have 10 or more “employees.” An “employee” is any person performing two or more hours of work each week within the city and who is entitled to minimum wage, and includes owners, management and supervisorial employees.
Strict Limits on Criminal History Inquiries
The Ordinance prohibits covered employers from including on an application for employment any question that seeks disclosure of an applicant’s criminal history. The Ordinance also prohibits employers from making any other inquiry into criminal history unless and until a conditional offer of employment has been made. Exceptions exist for employers that are required by law to obtain information regarding convictions; applicants who would be required to possess or use a firearm in the course of employment; applicants who have been convicted of a crime that excludes them from holding the position sought; and employers that are prohibited by law from hiring an applicant who has been convicted of a crime.
Once a conditional offer of employment has been made, an employer may not withdraw it or refuse to hire an applicant based on the individual’s criminal history unless the employer first prepares a written assessment that “effectively links” the specific aspects of the applicant’s criminal history with risks inherent in the duties of the position for which the applicant is being considered. In performing this assessment, employers must, at a minimum, consider all of the following factors:
- the facts or circumstances surrounding the offense or conduct;
- the number of offenses for which the individual was convicted;
- the individual’s age at the time of conviction or release from prison;
- evidence that the individual performed the same type of work, post-conviction, with the same or a different employer, with no known incidents of criminal conduct;
- the length and consistency of employment history before and after the offense or conduct;
- rehabilitation efforts, e.g., education and training;
- employment or character references and any other information regarding fitness for the particular position; and
- whether the individual is bonded under a federal, state, or local bonding program.
If, after conducting the written analysis, the employer decides that the applicant’s criminal record merits revoking the employment offer, the employer must then afford the applicant the opportunity to participate in a “Fair Chance Process.” The Fair Chance Process requires the employer to do all of the following: (1) give the applicant written notification of the proposed adverse action, a copy of the written assessment, and any other information or documentation supporting the proposed adverse action; (2) hold off on taking adverse action or filling the position for at least five business days, to give the applicant an opportunity to provide information or documentation; (3) consider the applicant’s additional information or documentation and perform a written reassessment of the proposed adverse action; and (4) assuming the decision on the reassessment is still not to hire, notify the applicant of the decision and provide a copy of the reassessment. Documents related to this Fair Chance Process must be retained for three years.
Note that employers that do post-offer criminal background checks must also continue to comply with the federal Fair Credit Reporting Act (“FCRA”) and the California Investigative Consumer Reporting Agencies Act. Employers also must continue to follow the limitations set forth under California Labor Code 432.7, which forbids certain inquiries relating to an applicant’s criminal history, including questions about arrests not leading to a conviction, certain marijuana convictions more than three years old, and (as of January 1, 2017) juvenile criminal history.
Notice and Posting Requirements
The new Ordinance also saddles employers with a number of rigorous notice and posting requirements. Specifically, employers must specify in all job solicitations and advertisements that the employer will consider for employment qualified applicants with criminal histories in a manner consistent with the law. Additionally, employers must post, in a “conspicuous place” in the workplace that is visited by applicants, a notice informing applicants of the provisions of the Ordinance. Union employers must also send a copy of that notice to each labor union or representative of workers with which they have a collective bargaining agreement.
No Retaliation; Enforcement
The Los Angeles Ordinance prohibits retaliation against individuals who assert their rights under the law. Also, applicants may file civil lawsuits for Ordinance violations, but only after completing an administrative enforcement process through the city’s Department of Public Works. The city will only issue written warnings for violations of the Ordinance until July 1, 2017; thereafter, employers may be fined up to $500 for a first offense, $1,000 for a second offense, and $2,000 for a third offense.
Getting Ready
Employers with 10 or more employees who work in Los Angeles should act now to ensure that their hiring practices are compliant with the new Ordinance by January 22, 2017. Here’s a quick to-do list:
- Remove criminal history inquiries from all application forms and ensure such questions are removed from the interview process (unless an exemption applies).
- Implement procedures for conducting “written assessments” and following the Fair Chance Process, as required by the Ordinance prior to withdrawing a job offer based on criminal history.
- Include the required language in job solicitations and advertisements.
- Post the new workplace notice. We expect that it will be available on the City of Los Angeles' website sometime in the next few weeks.
- Train all managers and HR personnel involved in the hiring process on the new requirements and procedures.
Click here to read the new ordinance.
Miller Law Group exclusively represents business in all aspects of California employment law, specializing in litigation, wage and hour class actions, trials, appeals, compliance advice and counseling. If you have questions about these developments or other workplace obligations, please contact us at (415) 464-4300.
This Alert is published by Miller Law Group to review recent developments in employment law. This material is designed to provide informative and current information as of the date of the Alert, and should not be considered legal advice. | https://www.millerlawgroup.com/alerts/l.a.-adopts-ban-the-box-ordinance-what-employers-need-to-know |
Lowry never gives a detailed description of the Giver's appearance, but there are minor descriptions of what he looks like in the story. When Jonas initially arrives at the Annex for his first training session, he is greeted by an attendant, who allows him through a locked door leading to the Giver's unique dwelling. Inside the Annex, Jonas comes face-to-face with the Giver. Jonas notices that the Giver is wearing a special long robe, which distinguishes the Elders from the other members of the community. He then notices that the Giver has pale eyes like his own, which makes it seem as though Jonas is staring into a mirror of himself. The Giver also has "sagging flesh" on his face, and there are dark circles around his piercing, intense eyes. Jonas comments on the Giver's age by saying that he can tell he is very old. However, the Giver responds that he is not as old as he seems: his job is extremely difficult, and the massive amount of stress, pain, and trauma has aged him significantly. Overall, the Giver looks like an old man with pale eyes, who has seen a lot and is very wise.
In The Giver, we are not provided with a great deal of physical description of the Giver. He wears the same "special" (Lowry 75) clothes as the Elders in the community. We can infer that his eyes are not brown, since Jonas sees "pale eyes that mirrored his own" (75). Jonas cannot yet see color, with the exception of occasional shimmers of red, but paleness suggests eyes that are blue, green, gray, or hazel. Jonas tells the Giver he can see the Giver is "very old" (75), with wrinkles, "sagging flesh" (75), and dark circles under his eyes. The Giver tells Jonas that he is not as old as he looks, but that being the Giver has aged him. And of course, Jonas is twelve, and when we are young, we often think that adults are much older than they actually are. | https://www.enotes.com/homework-help/what-givers-appearance-book-giver-707914
South Africa is a country of breathtaking natural beauty, and Cape Town is one of its highlights. Over the past few years Cape Town has become an extremely popular holiday destination; it is the most popular international tourist destination not only in South Africa but in Africa as a whole. It is a city that offers tourists a remarkable natural setting, a good climate and a wealth of attractions.
In 2014 Cape Town was voted the best holiday destination by the New York Times and the British Daily Telegraph.
The city is inextricably linked to Table Mountain, which forms a large part of the Table Mountain National Park. The mountain is a haven for tourists – it can be reached by cable car or by hiking. Cape Point is another hugely popular place to visit: this dramatic headland at the end of the Cape Peninsula provides a scenic day trip to the south-western tip of Africa. There are many other small towns and routes to explore around the peninsula, most notably the route linking Hout Bay, Noordhoek, Simons Town and Kalk Bay.
At the foot of Table Mountain lies the splendid shopping complex, the Victoria & Alfred Waterfront (V&A). This maze of shopping malls and restaurants is another fantastic tourist attraction. There is also a wonderful sea life aquarium, as well as smaller venues for music sung and played by people from Cape Town. Seals can be seen from the walkways, sunning themselves alongside glamorous yachts, and dolphins play close by in the Atlantic Ocean. Helicopter rides offer a bird's-eye view of the city and the ocean.
The V&A also hosts the Nelson Mandela Gateway, through which ferries depart for Robben Island. The island is most famous as the site of Nelson Mandela's 18-year imprisonment. There is also a museum dedicated to Nelson Mandela and the struggle for equality in South Africa.
Cape Town has a temperate climate with a Mediterranean feel. Unlike much of South Africa, which is tropical, Cape Town has warm, dry summers. Summer lasts from early December to March, with an average maximum temperature of 26°C and a minimum of 16°C. The winter months, which last from June to August, are fairly mild, with an average high of 18°C. They see the occasional cold front from the Atlantic Ocean, which brings significant rainfall. Both spring and autumn are notable for their moderate temperatures, with the biggest distinction being the wind: spring generally features a strong wind from the south-east, whereas autumn tends to be wind free.
Our recommended areas offer exceptional holiday rentals in the best places to stay when visiting Cape Town and its surroundings. View these recommended areas and discover the finest luxury properties to stay in.
Camps Bay is considered to be Cape Town’s premier holiday destination. It offers clean, sandy beaches located at the foot of the Twelve Apostles mountain, and many excellent luxury holiday rentals.
Clifton is an exclusive residential suburb along Cape Town’s Atlantic Seaboard. Famous for having some of the most expensive real estate in South Africa and home to some of the best luxury villa rentals.
Bantry Bay is home to some of Cape Town’s most exclusive villa rentals. Its properties line the base of the Lion’s Head mountain side, climbing up from the rocks overlooking the Atlantic Ocean.
Set on the slopes of Lion’s Head, between Sea Point and Bantry Bay, Fresnaye is one of Cape Town’s most fashionable residential areas.
Ideally situated between the Victoria & Alfred Waterfront and the historical Mouille Point lighthouse in Cape Town, and within walking distance of the Cape Town Stadium and Green Point Park.
Sea Point is characterized by high-rise, luxury apartments and offers urban entertainment in a host of pubs, music clubs, coffee shops and restaurants.
The Southern Suburbs is a leafy green residential area to the south-east of Table Mountain. It is famous for its breathtaking natural scenery, long-standing cultural heritage and world-class wine-producing estates, not to mention its superb array of grand residential properties.
The west coast is a long, dreamy coastline that stretches north towards Namibia. There are plenty of sightseeing opportunities and excellent places to stay.
The Cape Winelands is a truly magical landscape made up of small wine-producing towns, rural farms and interconnected valleys. The breathtaking natural scenery is beautifully complemented by classical Dutch architecture and refined holiday rentals. | https://timeandaway.co.za/where-to-stay-in-cape-town/
Need to teach or review short story elements, but you're running out of time? These songs feature the characters, plot, conflict and themes inherent in short stories, and they can be studied in one class session. You can often find figurative language in the songs as well. Check out the videos and lyrics below:
Red Headed Stranger by Willie Nelson
This story line may be a little mature for middle school students, but I think it would work for older teens. I recommend using this song to teach about characterization, figurative language and making inferences.
You can download the lyrics here.
A Boy Named Sue by Johnny Cash
This song may not be appropriate for middle schoolers, but I think high school students will appreciate the story of an outsider with a really bad name. Use this video to teach about conflict and initiate discussions about having a growth mindset.
Coward of the County by Kenny Rogers
I recommend using this song to teach about dynamic versus static characters and conflict. This song could also lead into a discussion on bullying and gender roles.
Get the lyrics here.
Harper Valley PTA by Jeanne C. Riley
Again, this one is more suited to high school students, but would be a great lead-in to discussion on plot structure, man vs. society conflict and bullying.
Get the lyrics here.
Cat's in the Cradle by Harry Chapin
This is an excellent lead-in to a discussion on foreshadowing, irony and dynamic characters. You may ask students to discuss their expectations of their parents, and how they expect to parent their own children.
Get the lyrics here.
The Devil Went Down To Georgia by The Charlie Daniels Band
This song sets itself up with exposition, features rising action in the form of a competition, followed by a climax and resolution. You can help your students identify each part.
Ocean Front Property by George Strait
Short stories often feature irony, and this song is no exception. Strait clearly doesn't mean what he's saying, and any student who knows their geography will know that there is no oceanfront property in Arizona.
Download the lyrics here.
Every Rose Has Its Thorn by Poison
While much of the story behind this song is implied, the lyrics contain metaphor and symbolism students will identify with.
Download the lyrics here.
Comment below if you have any other recommendations for songs that tell a story.
I am a secondary English Language Arts teacher, a University of Oklahoma student working on my Master's of Education in Instructional Leadership and Academic Curriculum with a concentration in English Education, and an NBPTS candidate. I am constantly seeking ways to amplify my students' voices and choices. | https://www.rethinkela.com/2016/01/teaching-short-story-elements-with-music/
Combating youth incarceration through theater
Maine Inside Out premiered its latest original play "Exposed," which shares personal stories of Mainers impacted by youth incarceration
Author: Sean Stackhouse
Published: 11:00 PM EST November 6, 2019 | Updated: 11:51 PM EST November 6, 2019
PORTLAND, Maine — Emotions ran high on Wednesday, as more than 40 Mainers impacted by youth incarceration shared their experiences with the audience in Hannaford Hall at the University of Southern Maine.
The non-profit Maine Inside Out premiered its newest original play "Exposed" in Portland on Wednesday. The nonprofit hopes to spearhead change in Maine's criminal justice system and end youth incarceration, and its method is sharing personal experiences on stage through theater.
"That audience is hopefully going to be moved to go make change and be active around these issues," said the organization's co-founder and co-director Chiara Liberatore.
Maine Inside Out was founded in 2007 and has since worked with youth incarcerated at Long Creek Youth Development Center to heal and share stories.
"It provides where we can be happy and excited about something even when we're locked behind a metal door," said Matthew Fortin, who was incarcerated for two years as a teenager.
Fitting with the name "Inside Out," the organization works with incarcerated youth as well as those who have been released. Others who have been impacted by the criminal justice system but have not spent time behind bars are also part of the organization.
Maine Inside Out has community groups that meet regularly in Portland, Waterville, Lewiston and Biddeford. In Wednesday's performance, more than 20 members from all four groups performed.
A young construction worker fell to his death while doing work on One Pierrepont Street in Brooklyn Heights.
The 23-year-old worker had been on the job for only a week, NBC 4 New York reported. He fell from the roof of the 13-story luxury building.
He had died by the time authorities arrived, and his identity has not been released, the report said. The investigation is ongoing. According to the Department of Buildings, two workers were laying bricks under the water tower when the 23-year-old fell. | https://therealdeal.com/2019/04/10/construction-worker-falls-to-his-death-working-on-brooklyn-rooftop/
Last modified: 22 May 2019 17:07
This highly regarded course will take your understanding of statistics to the next level and give you the skills and confidence to analyse your complex biological data. Through a combination of lectures, computer-based practicals and group work, you will gain an understanding of how to deal with pervasive issues in the analysis of real-world biological data, such as heterogeneity of variance and spatial and temporal non-independence. Hands-on computer tutorials will allow you to apply statistical models, using modern statistical software (R), to real data collected by researchers to answer real biological questions.
Study Type: Postgraduate | Level: 5
Session: First Sub Session | Credit Points: 15 credits (7.5 ECTS credits)
Campus: Old Aberdeen | Sustained Study: No
Co-ordinators:
This course will be divided into themed weeks during which you will gain experience in understanding complex sampling methodologies and dealing with pervasive issues in the analysis of real world biological data. You will be taught using a combination of lectures, computer practicals and directed group work and emphasis will be placed on the practical implementation of various modelling strategies using the statistical programming environment R.
Weeks 1 and 2: Following a recap of linear models, you will be introduced to some of the limitations of using standard linear models for analysing biological data and gain experience in identifying common issues arising from model misspecification. During these weeks you will focus on dealing with the common issue of heterogeneity of variance using a generalised least squares (GLS) approach.
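The course teaches these methods in R, but the underlying idea can be sketched in Python with the statsmodels library (our substitution, not part of the course materials). The example below simulates data in which two groups share the same mean structure but differ in residual variance, then compares ordinary least squares with a GLS fit whose variance weights are estimated naively from the OLS residuals.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulated data: two groups with the same mean structure but unequal variance
n = 100
group = np.repeat([0, 1], n // 2)
x = rng.uniform(0, 10, n)
sigma = np.where(group == 0, 1.0, 4.0)   # group 1 is four times noisier
y = 2.0 + 0.5 * x + rng.normal(0, sigma)

X = sm.add_constant(x)

# Ordinary least squares assumes constant residual variance
ols = sm.OLS(y, X).fit()

# Estimate a per-group variance from the OLS residuals, then refit
# with GLS using a diagonal covariance matrix built from those variances
resid_var = np.array([ols.resid[group == g].var(ddof=1) for g in (0, 1)])
Sigma = np.diag(resid_var[group])
gls = sm.GLS(y, X, sigma=Sigma).fit()

print("OLS slope: %.3f (SE %.3f)" % (ols.params[1], ols.bse[1]))
print("GLS slope: %.3f (SE %.3f)" % (gls.params[1], gls.bse[1]))
```

In practice the variance structure would be modelled more carefully (for example, re-estimated iteratively); in R's nlme package the same idea appears as gls() with a variance function such as varIdent.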
Weeks 3 and 4: During these weeks you will learn how to fit models that can account for correlated data arising from repeated measurements on the same sampling unit or from sampling units that are not spatially independent. You will learn to extend the GLS approach introduced in weeks 1 and 2 to model this non-independence.
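Continuing the Python stand-in for the course's R material: one simple model for serially correlated residuals from repeated measurements is an AR(1) error structure, which statsmodels fits via GLSAR by alternating between estimating the autocorrelation and refitting the regression. The data below are simulated for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated time series: a linear trend plus AR(1) noise (true rho = 0.7)
n = 200
t = np.arange(n, dtype=float)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.7 * eps[i - 1] + rng.normal(0, 1)
y = 1.0 + 0.05 * t + eps

X = sm.add_constant(t)

# GLSAR iteratively estimates the AR coefficient and refits the model
model = sm.GLSAR(y, X, rho=1)            # rho=1 -> AR(1) error structure
results = model.iterative_fit(maxiter=10)

print("Estimated AR(1) coefficient:", model.rho)
print("Trend estimate: %.4f (SE %.4f)" % (results.params[1], results.bse[1]))
```

The equivalent in the course's own toolkit would be gls() with a corAR1 correlation structure in R's nlme package; spatial non-independence is handled analogously with spatial correlation structures.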
Weeks 5 and 6: The final weeks will bring together concepts introduced during the preceding weeks and introduce you to analysing data from complex experimental or survey designs using the linear mixed effects modelling framework.
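As a final illustrative sketch (again in Python rather than the R used on the course), the example below fits a random-intercept linear mixed model with statsmodels' MixedLM to simulated measurements nested within sampling sites; all variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated design: 12 sites, 10 measurements per site
n_sites, n_per_site = 12, 10
site = np.repeat(np.arange(n_sites), n_per_site)
site_effect = rng.normal(0, 2.0, n_sites)        # random intercept per site
x = rng.uniform(0, 5, n_sites * n_per_site)
y = 3.0 + 1.2 * x + site_effect[site] + rng.normal(0, 1.0, n_sites * n_per_site)

df = pd.DataFrame({"y": y, "x": x, "site": site})

# Random-intercept mixed model: y ~ x with a random effect for site
model = smf.mixedlm("y ~ x", data=df, groups=df["site"])
result = model.fit()
print(result.summary())
```

This is broadly analogous to lme() in R's nlme package or lmer() in lme4, which are the usual tools for such designs.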
Information on contact teaching time is available from the course guide.
Assessment: Individual Report (100%)
Resit: resubmission of failed individual elements of continuous assessment
Formative assessment: there are no formative assessments for this course
Feedback: written, individualised feedback; formative feedback throughout the practical sessions of the course | https://www.abdn.ac.uk/registry/courses/postgraduate/2018-2019/biology/bi5302
The Primitive Painter
Johnson had inscribed the verses "To Edward Hicks on…"
Laugier also noted that deviation from or misuse of these principles leads to inherent faults in typical buildings and in architectural practice. In particular, he recognised logical faults and issues such as poor proportion and unintelligent design, advocating instead that "by approaching the simplicity of the model, fundamental mistakes are avoided and true perfection achieved". The idea also claims that Ancient Greek temples owed their form to the earliest habitations erected by man.
In the primitive hut, the horizontal beam was supported by tree trunks planted upright in the ground and the roof was sloped to shed rainwater. This was an extension of the primitive hut concept and the inspiration behind the basic Doric order.
The essay advocates that architecture approach perfection through the search for absolute beauty, specifically by returning to the hypothetical original hut as a model for building. The Primitive Hut made an important contribution to the theory of architecture. It marked the beginning of a significant analysis and debate within architectural theory, particularly between rationalist and utilitarian schools of thought.
While previously the field of architecture had concerned the search for the ideal building type through truth in building, the primitive hut questioned the universal in architecture. By questioning the fundamental and universal requirements of architecture, the reading of the Laugier essay marked a new field of inquiry that changed the understanding of, and approach to, architecture. In particular, there were the beginnings of an attempt to understand the various individual elements of architecture.
The Primitive Hut is an ahistorical point of reference, not necessarily a historical object to be investigated through speculation or archaeological investigation. The Primitive Hut was instead a self-evident realisation that created a new perspective for architectural inquiry.
Architectural inquiry would be engaged to justify the validity of the primitive hut model.
The utopian end toward which primitivists aspire usually lies in a notional "state of nature" in which their ancestors existed (chronological primitivism) or in the supposed natural condition of the peoples that live beyond "civilization" (cultural primitivism).
The desire of the "civilized" to be restored to a "state of nature" is as longstanding as civilization itself. Primitivist idealism gained new impetus with the onset of industrialization and the European encounter with hitherto unknown peoples after the colonization of the Americas, the Pacific and other parts of what would become the modern imperial system. During the Enlightenment, the idealization of indigenous peoples was chiefly used as a rhetorical device to criticize aspects of European society.
Vico was writing in the context of the celebrated contemporary debate, known as the great Quarrel of the Ancients and the Moderns. This included debates over the merits of the poetry of Homer and the Bible as against modern vernacular literature. In the 18th century, the German scholar Friedrich August Wolf identified the distinctive character of oral literature and located Homer and the Bible as examples of folk or oral tradition (Prolegomena to Homer). Vico's and Wolf's ideas were developed further at the beginning of the 19th century by Herder.
The 19th century saw for the first time the emergence of historicism, or the ability to judge different eras by their own context and criteria. As a result, new schools of visual art arose that aspired to hitherto unprecedented levels of historical fidelity in setting and costumes.
Neoclassicism in visual art and architecture was one result. Another such "historicist" movement in art was the Nazarene movement in Germany, which took inspiration from the so-called Italian "primitive" school of devotional paintings (i.e., those painted before Raphael).
Where conventional academic painting after Raphael used dark glazes, highly selective, idealized forms, and rigorous suppression of details, the Nazarenes used clear outlines, bright colors, and paid meticulous attention to detail. This German school had its English counterpart in the Pre-Raphaelites, who were primarily inspired by the critical writings of John Ruskin, who admired the painters before Raphael (such as Botticelli) and who also recommended painting outdoors, hitherto unheard of. Two developments shook the world of visual art in the mid-19th century.
The first was the invention of the photographic camera, which arguably spurred the development of Realism in art. The second was a discovery in the world of mathematics: non-Euclidean geometry, which overthrew the 2,000-year-old seeming absolutes of Euclidean geometry and threw into question conventional Renaissance perspective by suggesting the possible existence of multiple dimensional worlds and perspectives in which things might look very different.
The discovery of possible new dimensions had the opposite effect of photography and worked to counteract realism. Artists, mathematicians, and intellectuals now realized that there were other ways of seeing things beyond what they had been taught in Beaux Arts schools of academic painting, which prescribed a rigid curriculum based on the copying of idealized classical forms and held up Renaissance perspective painting as the culmination of civilization and knowledge. In rebellion against this dogmatic approach, Western artists began to try to depict realities that might exist in a world beyond the limitations of the three-dimensional world of conventional representation mediated by classical sculpture.
They looked to Japanese and Chinese art, which they regarded as learned and sophisticated, and which did not employ Renaissance one-point perspective. | http://ghs-aichstetten.de/29/the-primitive-painter-the-primitive-painter%20flac%20mp3%20vinyl%20rip%20full%20album.php
PROJECT SUMMARY/ABSTRACT Photochemical reactions constitute an underdeveloped set of enabling technologies for biomedical research. These reactions would be of great value to the discovery of new drugs and biological probes because the absorption of light results in the formation of high-energy, electronically excited intermediates that can produce strained and unusual molecular architectures that can be synthesized in no other way. However, control over the precise three-dimensional shapes of the products arising from these high-energy intermediates has been a long-standing challenge with no general solutions. This proposal is based upon the discovery of a previously unknown effect that Lewis acid coordination exerts upon the excited states of organic molecules. We will study how this effect can be exploited to control the three-dimensional shape of compounds produced using photochemical reactions. In particular, the three Specific Aims of this research center on an exploration of the generality of this effect.
Aim 1. We are exploring the generality of products that are accessible using this strategy.
Aim 2. We are exploring the generality of organic substrates that can be activated using this strategy.
Aim 3. We are exploring the generality of catalytic triplet sensitization to various other platforms for catalytic activation.
These methods address an important, century-old problem in organic synthesis. Thus, we expect that the results of our research will have significant impacts both in fundamental academic chemical research and on the ability of biomedical scientists to synthesize and discover the next generation of life-saving drugs.
Heed the saying "measure twice and cut once" when designing or renovating your tie stall. If stalls are sized properly to meet their needs, your cows will thank you.
by Emily Morabito and Jeffrey Bewley
The authors are a master's student at the University of Calgary and an associate extension professor at the University of Kentucky, respectively.
Understanding the spatial requirements of a dairy cow is crucial when designing or upgrading a tie stall facility. Dairy cows devote a majority of their day to lying and resting. Disrupting this natural behavior can have negative effects on production and welfare. Stall design is crucial to encourage cows to lie down correctly and ensure that they are able to stand up in accordance with their natural behaviors. Due to size variations, breed is an important factor when calculating proper dimensions. It is recommended that farmers take measurements of their cows for more accuracy.
Unlike freestall facilities, tie stall barns offer the ability to create sections with different sized stalls for cows in various stages of life. There should be stalls designed specifically for first-lactation heifers, milking cows and dry cows. This is beneficial for cow welfare, cleanliness, labor and general management practices. The diagram provided by the Ontario Ministry of Agriculture and Food depicts recommended stall dimensions for all three different stall sizes.
Size stalls to your cows
Stall length refers to the bed in the tie stall. Length is determined by the space a cow occupies when she is lying down, including the space from her knee to her tail. This is often described as imprint length.
To find the proper length for the stall, or imprint length, rump height should be measured in inches. Once this height is obtained, it is multiplied by 1.2. For example, if a cow has a rump height of 60 inches, the bed length should be 72 inches (60 x 1.2 = 72). Stall length should be considered for all three stall types. There should be no difference in length for dry cows and milking cows, but on average the stalls are 2 inches shorter for first-lactation heifers due to differences in size.
Traditionally, stall width was determined by imprint width. This was calculated by multiplying the hook bone width by 2. For a mature Holstein, this was typically 52 inches. Studies have shown that this width prevents cows from resting in other positions, reducing cow comfort and compromising animal welfare. It is now recommended that, for a mature Holstein, the minimum width should be 54 inches.
Studies have shown that, when cows were placed in wider stalls, they took full advantage of the extra space. Width should also be considered for all three different stall types. First-lactation heifers and lactating cows should have the same minimum stall width of 54 inches. Dry cows, or special needs' cows, should have a stall width that is 6 inches wider than the other stalls.
Stall dividers, or loops, should be installed as boundaries. Loops will help dictate how the cow stands, where the cow defecates, and how the cow enters and exits the stall. Farmers must keep in mind milking procedures when installing the loops. Milkers should have enough space to work efficiently in-between the cows.
Tie rail keeps cows aligned
The tie rail, also known as the head rail, is located on the front end of the stall. This is where the chain on the cow is attached, and it also serves as the water line to all of the stalls on one side. The rail should be placed over the manger where the cow eats. This means that the tie rail should extend past the bed of the stall.
Proper tie rail placement is imperative for functionality of the stall. If this rail is not correct, cows will not stand in the stall properly. It may also prevent a cow from being able to lie down and rise naturally. This can lead to cleanliness issues, as well as potential injuries.
To calculate tie rail height, rump height is measured and multiplied by 0.8. For example, if a cow has a rump height of 60 inches, the tie rail should be installed 48 inches above the bed. Again, this should be adjusted for all three of the stall sizes. The tie rail height should be the same for milking cows and dry cows but is typically 2 inches shorter for first-lactation heifers.
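Because every recommendation above is a fixed multiplier or offset, the arithmetic is easy to script. The following sketch (ours, not from the article's authors) encodes the 1.2 bed-length and 0.8 tie-rail multipliers together with the minimum widths and the approximate heifer and dry-cow adjustments described in the text; measured rump heights from your own herd remain the essential input.

```python
def stall_dimensions(rump_height_in: float, stall_type: str = "milking") -> dict:
    """Rule-of-thumb tie stall dimensions (inches) from rump height.

    stall_type: "milking", "heifer" (first lactation), or "dry".
    Per the article: bed length = 1.2 x rump height, tie rail height =
    0.8 x rump height, minimum width 54 in; dry/special-needs stalls are
    6 in wider, and heifer stalls run about 2 in shorter.
    """
    length = rump_height_in * 1.2
    rail = rump_height_in * 0.8
    width = 54.0
    if stall_type == "heifer":
        length -= 2.0   # heifer beds average 2 in shorter
        rail -= 2.0     # heifer tie rails are typically 2 in lower
    elif stall_type == "dry":
        width += 6.0    # dry/special-needs stalls are 6 in wider
    elif stall_type != "milking":
        raise ValueError("stall_type must be 'milking', 'heifer' or 'dry'")
    return {"bed_length_in": length, "tie_rail_height_in": rail, "min_width_in": width}

# Example from the article: a 60-inch rump height gives a 72-inch bed
print(stall_dimensions(60.0))           # milking cow
print(stall_dimensions(60.0, "dry"))    # dry/special-needs stall
```

The script only standardizes the arithmetic; the article's advice to measure your own cows, rather than rely on breed averages, still applies.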
The chain attaches the cow to the stall and also helps keep the cow in a proper location. The chain can have an effect on cow cleanliness. If the chain is too long, the cow may be able to lie in the gutter. A long chain may also result in an injury if it becomes wrapped around the cow's leg. If the chain is too short, natural lying behaviors can be disturbed, potentially causing neck lesions. Short chains may also reduce cleanliness, possibly causing the cow to defecate inside of the stall.
Account for lunge space
Again, it is essential to recognize a cow's natural behavior and encourage lying time. If the cow is prevented from lying or standing comfortably, the cow will be less likely to rest. In a study by Ceballos and colleagues, it was determined that a Holstein utilizes 10 feet of space, measured from the rump, to lunge forward when standing up. It is important to have an unobstructed, open area in front of the stall.
Proper barn design and stall dimensions are crucial for a functional and efficient tie stall barn. Cow comfort is very important in tie stall facilities for production, cow welfare and public perception. It is very important that the design of the facility is done carefully and that measurements of the cows are taken for accuracy. | http://hoards.wehaaserver.com/article-15874-three-stalls-every-tie-stall-barn-needs.html |
Golden Krishna, Senior Designer at Samsung Innovation Labs, wants to upend the way we think about user interfaces. "Our love for the digital interface is out of control," he says. "It has become our answer to everything." If he has his way, the future of Samsung consumer electronics might work more like the Nest thermostat, which learns about your favorite temperature, or a Mercedes-Benz automobile, which automatically unlocks when it detects the keys in your pocket.
It's been more than a decade since Minority Report hit theaters, but its influence on product design doesn't seem to have waned — much to the dismay of designers like Christian Brown. In a recent piece for the Awl, Brown bemoans Steven Spielberg's disproportionate influence on interface design, arguing that Minority Report's futuristic vision has fueled misguided dreams of gesture-based and touchscreen interfaces that don't really add much to a product's function — "interfaces that look good, rather than... work well."
"Human hands and fingers are good at feeling texture and detail, and good at gripping things—neither of which touch interfaces take advantage of," Brown writes. "The real future of interfaces will take advantage of... | http://recorder.sayforward.com/category/calais-tags/interface |
This week, the Orange County Board of Supervisors moved closer to finalizing a ballot measure that would create an independent ethics oversight commission and strengthen enforcement of regulations on campaign finance, lobbyists, and other governmental ethics.
The newly formed ethics commission would oversee conduct relating to “Time is Now, Clean Up Politics” (TINCUP), the measure passed in 1978 that created the Orange County Campaign Reform Ordinance; the County gift ban; the County Lobbyist Registration and Reporting Ordinance; and the County Code of Ethics.
“Creating an Orange County Campaign Finance and Ethics Commission will increase transparency and ethics reform in Orange County,” said Supervisor Todd Spitzer. “My view on an ethics commission has remained constant. I was looking to see that key issues were addressed including the budget, subpoena power, appointments and training. I’m confident that this Ordinance has the teeth to make a difference and to serve as a useful resource and tool.”
Supervisor Spitzer had been working with TINCUP author Shirley Grindle and Supervisor Shawn Nelson since 2013 to improve ethics oversight. Since returning to the Board of Supervisors in 2013, Spitzer had been pushing for a County ethics commission as proposed by Grindle and her colleagues, former Common Cause Chairman Bill Mitchell and Chapman University Professor Mario Mainero. When the County ethics commission effort was unsuccessful in 2013, Supervisors Nelson and Spitzer created Measure E, which the voters approved in November 2014 to authorize the County to contract with the California Fair Political Practices Commission.
Implementation of Measure E required a change in state law, but after legislation by Senator Lou Correa (D-Santa Ana) stalled in 2014 and legislation by Assemblyman Matt Harper (R-Huntington Beach) stalled in 2015, Supervisor Nelson led a working group with Supervisor Spitzer, Ms. Grindle, Mr. Mitchell, and Professor Mainero to develop the Orange County Campaign Finance and Ethics Commission.
Supervisor Spitzer sponsored a Roundtable Discussion on the Future of Ethics in Orange County in April 2015. In July 2015, he formed the Orange County Ethics Committee, composed of representatives from each Board of Supervisors office to study models of ethics oversight, with Supervisor Spitzer’s appointee serving as chairman and Supervisor Andrew Do’s appointee serving as vice chairman. The committee took testimony from a wide variety of experts in the field of ethics and returned to the Board of Supervisors with a 249-page report in September.
The commission would have the authority to subpoena bank statements of campaign committees, create an independent campaign contribution tracking system, develop an annual ethics training program for county officials and staff, and investigate and enforce TINCUP.
After incorporating a number of key amendments proposed by Supervisor Do to improve the commission proposal, County Counsel will return to the board on October 20, 2015 with final measure language and a proposed directive to place the Orange County Campaign Finance and Ethics Commission on the ballot for the June 2016 election. | http://ocpoliticsblog.com/2015/10/09/board-of-supervisors-advance-campaign-finance-and-ethics-commission/?replytocom=25178 |
I am currently interested in researching questions of agency in networks, with a specific focus on cultural institutions and their daily practices of engaging with and maintaining Danish cultural life. Through anthropological inquiries into cultural institutions (such as DR, the Danish Broadcasting Corporation), I do cultural analysis, asking questions about how and why certain decisions are made and certain actions are taken.
In my PhD project I am specifically concerned with the processes and practices of selecting and planning music for the public service radio station P3. Here I am investigating the networks of humans and things that govern the daily practices of maintaining P3's music profile. I have a specific interest in the recently introduced music scheduling software, Selector, and its significance for music planning in terms of 'algorithmic power' (Beer 2017).
I practice a post-ANT/STS-inspired anthropology but am at the same time, while engaging in the field, questioning the value of those same methodological standpoints. | https://artsandculturalstudies.ku.dk/staff/?pure=en/persons/5070
[Fragmentary search-result snippets; only the table captions are recoverable: "Descriptive Statistics for Dependent Variables by Use of Self-Directed Materials"; "Themes related to e-learning in integrated plans"; "E-Learning Maturity Model Levels"; "Institutional approaches to documenting e-learning strategies"; "Research Instruments adapted and used in the learner journal (Finch 2001a)"; "Activities supported to provide a university learning environment preparing students for self-directed learning in an information rich, distributed work environment"; "Mismatches between Different Teacher and Learner Populations" (Grow, 1999).] | https://citeseerx.ist.psu.edu/search?q=Integration%20of%20Learning%20Activity%20and%20Process-Oriented%20Assessment%20to%20Promote%20the%20Self-Directed%20e-Learning&t=table&sort=rlv
Attention: All students currently undertaking a writing degree, arts degree, or indeed any course of uncertain outcome. Do you often ponder your future as a struggling creative, living on crumbs and failure? I am here to tell you that there is hope for us yet! Christine Piper, a UTS Graduate, has recently been awarded The Australian/Vogel Literary Award for her debut novel, After Darkness.
Originally written as part of her Doctorate of Creative Arts at UTS, the story follows the experiences of Tomakazu Ibaraki, a Japanese doctor who finds himself detained at the Loveday internment camp in Australia during World War II. Located in a remote corner of South Australia, the camp becomes home to a diverse and somewhat divided group of men, who together come to form a sort of community, connected by mutual isolation. Ibaraki himself is a compelling character, an honourable and unfailingly gentle man. As Piper expertly leads us deeper into his story, we are able to collect details of his past, and come to understand the way that trauma and regret continue to inform his future.
The tale is written clearly and is easy to follow despite its non-linear structure, and in any case it is an important story to tell. Piper provides an interesting human perspective on the civilian impact of World War II for anyone interested in learning more about this time in our national history. Dealing with themes of discretion and loyalty, of patriotism and personal honour, After Darkness explores the potentially devastating consequences of such ideals on the life of one well-meaning man. | https://utsvertigo.com.au/webexclusives/book-review-after-darkness/ |
The vast underwater wilderness of the deep sea may be largely unexplored by humans, but it's still incredibly polluted, a new study finds.
Researchers made the finding by using baited traps to capture tiny crustaceans in the Mariana Trench in the western Pacific Ocean — the deepest known spot on Earth — and the Kermadec Trench, which sits off the northeastern coast of New Zealand.
Surprisingly, pollution concentrations in the crustaceans plucked from the Mariana Trench were 50 times higher than those in crabs found in paddy fields fed by the Liaohe River, one of the most polluted rivers in China, the researchers wrote in the study. [In Photos: World's Most Polluted Places]
"The only Northwest Pacific [Ocean] location with values comparable to the Mariana Trench is Suruga Bay (Japan), a highly industrialized area," the researchers wrote in the study.
Humans know more about the surface of the moon than they do about the ocean floor. To learn more, the scientific team studied the hadal zone, "the last major marine ecological frontier," which encompasses the area 3.7 miles to 6.8 miles (6 kilometers to 11 km) under the water's surface, the researchers said.
The hadal zone includes deep-sea trenches. People usually assume that deep-sea trenches are pristine, but in reality, these trenches are the dustbins of the ocean, collecting debris as it slowly sinks to the ocean floor, the researchers said.
To get a better idea of the pollutants there, the researchers set baited traps for teeny crustaceans, called amphipods, that live and scavenge in deep-sea trenches. The scientists analyzed the amphipods' fatty tissues for levels of persistent organic pollutants (POPs), which can disrupt hormones in living beings.
POPs can enter the environment through industrial accidents and discharges, leakage from landfills or incomplete incineration, the researchers said. Two POPs of great concern are polychlorinated biphenyls (PCBs, used as dielectric fluid) and polybrominated diphenyl ethers (PBDEs, used as flame retardants), according to the scientists.
"The salient finding was that PCBs and PBDEs were present in all samples across all species at all depths in both trenches," the researchers wrote in the study.
The amphipods in the Mariana Trench had higher PCB levels than did the amphipods in the Kermadec Trench, but it's unclear why. One idea is that the Mariana PCBs come from the nearby North Pacific Subtropical Gyre — more commonly known as the Great Pacific Garbage Patch — the researchers said. The patch is about the size of Texas, and formed when millions upon millions of plastic and garbage fragments got trapped in a vortex between ocean currents, Live Science reported previously.
The results show that human-caused contamination can be found at the far reaches of the Earth, even in the Mariana Trench, which is deeper than Mount Everest is tall, the researchers said.
The findings are "disturbing," said Katherine Dafforn, a senior research associate in the School of Biological, Earth and Environmental Sciences at the University of New South Wales in Australia. Dafforn was not involved in the new study but wrote an accompanying editorial about it.
"This is significant since the hadal trenches are many miles away from any industrial source," Dafforn wrote in the opinion piece. "[It] suggests that the delivery of these pollutants occurs over long distances despite regulation since the 1970s."
Both the study and the editorial were published online Monday (Feb. 13) in the journal Nature Ecology & Evolution.
Original article on Live Science. | https://www.livescience.com/57888-pollution-found-at-mariana-trench.html |
Carson went on to do postgraduate work at Johns Hopkins University, obtaining a master's degree in 1932. She joined the zoology staff at the University of Maryland in 1931. Carson developed a particular interest in the life of the sea, which led her into further postgraduate research at the Woods Hole Marine Biological Laboratory in Massachusetts. In 1936, she accepted a position as an aquatic biologist at the Bureau of Fisheries in Washington, D.C. She went on to be editor in chief at the U.S. Fish and Wildlife Service, the successor to the Bureau of Fisheries. Here she prepared leaflets and informational brochures on conservation and the protection of natural resources.
Rachel Carson's first book, Under the Sea Wind, appeared in 1941 with the subtitle "a naturalist's picture of ocean life." The book, which grew from Carson's fascination with the seashore and the ocean as a result of vacations on the Atlantic coast, was well received. The narrative told the story of the seashore, the open sea, and the sea bottom.
Carson's important second book, The Sea Around Us, was published in 1951. Even more than her previous book, it was acclaimed for its approachable writing style. The Sea Around Us provides a layperson's geological guide through time and tide. In this book, Carson explores the mystery and treasures of the hidden world of the oceans, revealing its history and environment to the nonscientists. Carson maps the evolution of planet Earth—the formation of mountains, islands, and oceans—then moves into a more detailed description of the sea, starting with the sea surface and the creatures that live near the surface, descending through the depths to the sea bottom.
The Sea Around Us went to the top of the nonfiction best-seller list in the United States, won the National Book Award for Non-Fiction, was selected for the Book of the Month Club, and was condensed for Reader's Digest. It went into nine printings and was translated into thirty-three languages.
Such was the success of The Sea Around Us that it enabled Carson to accept a Guggenheim Fellowship and take a leave of absence from her job to start work on a third book, The Edge of the Sea, published in 1955. Written as a popular guide to the seashore, this book is a study of the ecological relationship between the Atlantic seashore and the animals that inhabit the coastline. While complementing her previous two books, this work evidences the growth of Carson's interest in the interrelationship of Earth's systems.
Rachel Carson's lasting reputation as a force in the environmental movement was made with her fourth and final book, Silent Spring, published in 1962. The title of the book was inspired by a phrase from a John Keats poem—"And no birds sing." Pesticides being sprayed indiscriminately were killing songbirds and thus bringing about the absence of birdsong: a silent spring.
In this book, Carson moves away from her focus on the sea and the land-sea interface to describe the interrelationship between communities and modern agricultural and industrial techniques. The book chronicles the disastrous results evident from the widespread use of pesticides, chemical fertilizers, and chemical treatments designed to increase agricultural production or simplify the production process.
As an example, Carson describes streams that became chemical soups, laden with the outpourings of chemical treatment plants. She describes runoff from fields treated with pesticides and chemical fertilizers, killing algae , plant life, fish, and animals. With this book, Carson educated the general public about the hazards of environmental contamination and made the case for careful consideration of both short-and long-term impacts of human-generated chemical contamination of our waterways.
The arguments contained in Silent Spring were not new. These concerns had been discussed in scientific journals, but Carson's approachable style brought the discussion of environmental management before a much wider general audience. On publication, Silent Spring attracted a great deal of adverse criticism, generated mostly by the chemical industry. More balanced reactions were found in the scientific press.
In 1963, the President's Science Advisory Committee concurred with Carson's assessment of the damage wrought by the widespread use of chemicals and the spiral of contamination that resulted from the development of ever more toxic treatments as insects developed resistance to pesticides. Her writing alerted the country to the dangers of chemical pollution to waters and helped transform water resources management.
SEE ALSO Environmental Movement, Role of Water in the.
Pat Dasch
Bonta, Marcia Myers. Women in the Field: America's Pioneering Women Naturalists. College Station: Texas A & M University Press, 1991.
Brooks, Paul. The House of Life: Rachel Carson at Work. Boston, MA: Houghton Mifflin, 1972.
Carson, Rachel. The Edge of the Sea. Boston, MA: Houghton Mifflin, 1955.
——. The Sea Around Us. New York: Oxford University Press, 1951.
——. Silent Spring. Boston, MA: Houghton Mifflin, 1962.
——. Under the Sea Wind. New York: Viking Penguin, 1941.
Lear, Linda J. Rachel Carson: Witness for Nature. New York: Henry Holt, 1997.
The chemical dichlorodiphenyltrichloroethane, or DDT, is a synthetic organic compound introduced in the 1940s and used as an insecticide. Its continual build-up in the food chain caused concern for human and animal health. As a result, DDT was banned in the U.S. in 1972, 10 years after the publication of Silent Spring. DDT remains in use in many countries of the world. | http://www.waterencyclopedia.com/Bi-Ca/Carson-Rachel.html |
CEESP NEWS - by IPBES Secretariat and submitted by one of the lead authors, Riccardo Simoncini
The world’s biodiversity is being lost and nature’s contributions to people are being degraded, which undermines human wellbeing.
The success of humanity’s efforts to reverse the current unsustainable use of our irreplaceable natural assets and heritage requires the best-available evidence, comprehensive relevant policy options and committed, well-informed decision makers.
Parts of Europe and Central Asia – an enormous region stretching from Iceland to Russia's far east – are so developed and densely populated that much of their native biodiversity has been lost. Yet some of these States lead the world in policies that promote conservation and restoration, recognizing the fundamental links between biodiversity, nature's contributions to people and human well-being.
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) assessment reports provide the credible peer-reviewed information needed for informed decision-making.
Growing human-induced challenges and opportunities for people across the region are the focus of a major new scientific assessment report, one of five being prepared by inclusive teams of leading international experts working with the IPBES.
IPBES is the global science-policy platform tasked with providing the best-available evidence to inform better decisions affecting nature - by Governments, businesses and even individual households. IPBES is often described as ‘the IPCC for biodiversity’.
Three years in the making, the IPBES assessment reports evaluate the status of biodiversity and nature’s contributions to good quality of life in each region and their respective subregions, describing current status and trends, as well as their links to drivers of change and threats, identifying policy-relevant issues affecting them. The analyses will start by looking back several decades and then project likely interactions between people and nature for decades into the future.
‘Hot Topics’ in the IPBES Assessment Report for Europe and Central Asia Include:
- Valuation of nature’s contributions to people and wellbeing, including the role of biodiversity
- Transboundary ecological footprint
- Biodiversity trends across ecosystem types and taxa
- Direct and indirect drivers underlying biodiversity change
- Integrated future scenarios and pathways
- Progress towards Aichi Targets and implication for the SDGs
- Options for decision makers
The assessment report covers three subregions: Central and Western Europe, Eastern Europe, and Central Asia, which span a wide range of economic, social and political development, as well as very different levels of political and financial commitment to policies on biodiversity and nature's contributions to people. Pressures on ecosystems vary, too, with some subregions growing, both economically and in terms of their population, significantly faster than others.
The large number of States within the region also creates many transboundary issues such as water quality and quantity, fisheries, climate change, air pollution and migratory species.
The assessment report will also examine Europe’s long experience with policies such as green certification, environmental labelling, offsetting, green infrastructure and payments for environmental services, experience which puts the region in an excellent position to learn lessons and assess trade-offs and costs. | https://www.iucn.org/news/commission-environmental-economic-and-social-policy/201803/ipbes-regional-assessment-europe-and-central-asia-a-primer |
The World of Warcraft development team is looking for a full-time Lead Producer for our Design team. We want someone that can collaborate with the best game developers in the world to turn our “wouldn’t it be cool if…” ideas into reality.
We have a very talented group of people that value respect, collaboration, and a passion for World of Warcraft. We strive to maintain a friendly workplace where creativity, teamwork and greatness bring the world of Azeroth to life.
Responsibilities
- Oversee design production activities for game development, content updates, and new features.
- Drive the completion of goals and facilitate communication, organization, and accountability across all design disciplines in collaboration with other disciplines and project priorities.
- Manage the team of Design producers (including annual reviews) and provide guidance/support to ensure project goals, objectives, milestones, and deliverables are achieved.
- Standardize process and status reporting methodologies for design production in order to communicate design activities and progress updates to senior leadership and the development team
- Partner with design and production leadership to make informed decisions on work methods, development pipeline, and priorities
- Participate in production leadership planning/strategizing for Team 2
- Represent Team 2’s Design initiatives and opportunities to support the company at the studio-level.
- Work closely with Loc/Regions to “think globally” about our content and make sure our creative leads/directors have the most up-to-date information regarding regulations/ratings. | https://jobs.jobvite.com/blizzard/job/o3O0bfwM |
In recent decades the Nordic countries have been implementing quota markets and similar instruments mainly to manage the economic performance of their fisheries. Coming from a historical situation dominated by owner-operated fishing units closely connected to their supporting communities, market-based fisheries management plays a role in promoting company-organised fishing units, non-fisher ownership and new social relations. Introducing market mechanisms to distribute limited marine resources is therefore not just a change in technical regulation; it is an active engagement in social change. The publication reviews the Nordic experiences with market-based fisheries management and discusses the implications for managers and future recruitment.
Introduction
This report explores the role of market-based fisheries management in the current transformation of Nordic fisheries and societies. The main focus of the report is on the social dynamics and social aspects of fisheries management. Based on a review of the Nordic experiences with quota markets and quota transferability, the report examines and discusses some of the changing relations between fishers, management, the fishing sector and society. Throughout their history, Nordic fisheries have been dynamic, developing through the use of changing technologies and through internal competition and cooperation. Similarly, the fishing sector’s relation to state and society has been changing – shifting over the past century from a state focus on economic expansion, through a focus on scientific resource conservation, to the current focus on economic performance.
| https://www.nordic-ilibrary.org/environment/nordic-fisheries-in-transition_224227a6-en
The purpose of a conceptual review is to build a broader understanding of the problem. Depending on the author’s goal and the context in which the literature review will appear, either a selective or a comprehensive approach may be taken. An up-to-date literature review is well within your capability, and it isn’t as complex or time-consuming as you might think.

Think of your essay’s thesis as a promise to your audience about the kind of analysis you have made of the books, and make it specific. If there is a key article or book of significant value to the development of your own research ideas, it is crucial to give added space to describing and critiquing that piece of literature in more depth. Spending plenty of time revising is a sensible idea, because your principal goal is to present the material, not the argument.

If you prefer to follow a chronological system of organization, you have to list your sources in chronological order, for example by the date when each source was published. You should synthesize several of your reviewed readings into each paragraph, so that there is a clear connection between the various sources. Perhaps you will be refuting an existing theory, or substantiating and extending a current one.

A literature review in APA style is never simple to write, so you should always approach it with the proper mindset, observing the correct formatting and overall structure to make it look good enough. In some instances the literature does not quite fit the framework you have chosen. A literature review is really a summary of the whole body of previous work, which is why it is important to keep it concise, consistent and focused.
The Basics of Review of Literature
You should always assess the quality of the research studies you are including in a review. Depending on the level of your course, a research paper may ask you to report on topics in the field or to conduct your own original research. It is essential that your research fits logically within the existing research in your area, and you may have found an ideal study to link with and to extend in some way.

As a consequence, you may not be able to see other research areas that may be related to your paper, even if they don’t link directly. Also, in accessible locations, the abundance of other resources such as labor and capital is vital to increasing deforestation. You must read a great many sources to locate the most relevant ones, and you will most likely end up discarding over half of what you read.

Indicate how the approach fits the overall research design. Your search terms are among the most important elements of locating the right sources for your research project, and developing them is an ongoing process. To understand the problem well, knowledge must be combined from several places.
Top Review of Literature Secrets
Literature reviews provide you with a handy guide to a particular topic. While some literature reviews can be presented in chronological order, this is best avoided. Writing a good literature review is not an easy job.

When you’re learning how to write a literature review, it can be complicated to work out what types of sources to include. You need some basic information first.

After you’ve settled on the best way to organize your literature review, you’re ready to write each section. A literature review describes academic papers that are relevant to a particular field or topic. It is a form of analysis of the articles and journals related to an area of study, or to a particular theory, that conducts a critical evaluation of the works in question.

If you can locate a few really useful sources, it is sometimes a great idea to check through their reference lists to see the range of sources they referred to. You can then spend time deliberating over the particulars of each article without having an exhaustive list of references to work through. The narrower your topic, the easier it will be to limit the number of sources you need to read in order to get a good survey of the material.

In order to write a literature review, you have to obtain the literature already written on your chosen topic. For example, terminology that is used frequently in psychological literature may not be as effective in searching a human resources management database. Therefore, if you know how to do a literature review, you should also understand how to prepare the annotated referencing style.

There may be cases where you don’t cite a noted author or researcher. You may also want to take a look at the books used in related classes in sociology. The Professional Development Collection is likewise useful for educators looking for appropriate literature.

While abstracts are extremely helpful in identifying the right kinds of materials, they are no substitute for the actual items themselves. Your dissertation advisor or mentor may be unwilling, or unable, to give you the help you need. Texts are no longer precious and expensive to produce; they can be cheaply and rapidly put on the market. | http://www.grupoemporium.com.mx/2019/02/the-demise-of-review-of-literature/
Something unique happens in Arnaia, and it is related to the holy Metropolitan church of St. Stephen. It is the only church in Greece that operates normally, serving the needs of pilgrims, while at the same time being built over important antiquities of priceless historical value, most of which are visible. In other words, besides being a place of worship, it is a place of historical and archaeological interest. But let’s take things from the start.

This church, according to an inscribed marble plaque embedded in the wall of its facade, was built in 1812 and honors the memory of St. Stephen, since it was a dependency of the Konstamonitou monastery of Mount Athos, whose Catholicon is also dedicated to St. Stephen. The church is a three-aisled basilica measuring 41 x 19.5 m. It was burnt to the ground during the 1821 Revolution, as was the whole village. The residents scattered in the nearby area and returned later to rebuild the village and the church, in which they placed a wooden chancel and one of the area’s few ornate wood-carved despotic (bishop’s) thrones. The one-of-a-kind chancel was a donation from the Konstamonitou monastery and included 70 smaller wooden icons and 14 larger, silver-coated ones.

On the night of September 5, 2005, a great fire broke out inside the church and almost totally destroyed it. The cause of the fire remains unknown. The fire left only the stone walls standing. The roof collapsed, and everything inside the church, including icons, books, several artifacts and objects of priceless historical and artistic value, the one-of-a-kind silver- and gold-coated wooden chancel, and the ornate wood-carved despotic throne, turned into ashes. At once, the Ministry of Culture, through the 10th Ephorate of Byzantine Antiquities, under whose jurisdiction the church falls, began the huge restoration, in cooperation with the religious and local authorities and with the full support of the residents of Arnaia and the wider area.

Apart from these, fifteen tombs were discovered, some of which date from the Christian era, while others date back to the 16th century B.C. Thus, an important part of the area’s history came to light because of the reconstruction of the burnt building and thanks to the systematic excavations of the 10th Ephorate of Byzantine Antiquities: a history that shows the uninterrupted human presence and activity in Arnaia, and specifically on the spot where the church of St. Stephen stands today.

After the completion of the renovation came the protection and maintenance of the findings. At the same time, the archaeological site was arranged so as to highlight the buried history of the place. The floor of the renovated church was made of transparent panels, on which visitors can stand or walk while observing the illuminated archaeological site and the findings beneath it.

In 2009, when the work of the 10th Ephorate of Byzantine Antiquities was complete, the uniqueness and prominence of the findings began drawing large numbers of visitors of every age and nationality on a daily basis. Visitors arrive alone or in groups, contributing to the tourist development of Arnaia and its wider area.
The church of St. Stephen in Arnaia is open to the public every day, from morning until afternoon. | http://dimosaristoteli.gr/en/sights/saint-stefen-temple
Thursday, 26th March until Saturday, 28th March 2020
Daily from 6 p.m.
Saffron is one of the most precious spices in the world. Its threadlike red stigmas are quite literally the stuff of legend. But what is saffron, exactly? No matter how many tales have been told about this particular spice, most of us still don’t know what to do with it in the kitchen, nor can we tell whether it’s worth the price we paid for it.
The spice originates in a flower called Crocus sativus—commonly known as the “saffron crocus.” Saffron must be harvested entirely by hand (!) in the mid-morning, when its flowers are still closed, in order to protect the delicate stigmas inside. Saffron is extremely subtle and fragrant. Its slightly sweet, luxurious taste is enigmatic; it’s tricky to describe, but instantly recognizable when used in a dish. It may sound like a cliché, but it’s true: you’ll know saffron when you taste it.
Together with Sari Safran, an organic saffron producer, we have created a 4-course dinner with guest chef Susanna Ghukasyan, to celebrate the unforgettable flavors of the world’s most mysterious spice.
Come be a part of The Saffron Tales.
Menu SAFFRON TALES
Salted salmon slice in tequila, beetroot bread & saffron butter
***
Salad with slices of veal liver or giant shrimp,
fresh herbs, tahini, lemon, garlic & saffron croutons
***
Veal entrecôte with saffron-potato mash & colorful cauliflower
or
grilled eggplant with chickpea, herbs, cherry tomatoes
& mascarpone-saffron sauce
***
Halwa with almonds, caramelized rose leaves, | https://www.parkhotel.ch/en/events/saffron-tales-postboned/
Q:
Proving that $\{f \in End(A): \forall a \in A:|a|<\infty \implies f(a)=0\}$ is an ideal
Let $A$ be an abelian group. I need to prove that
$I = \{f \in \operatorname{End}(A): f(a)= 0 \ \text{for all $a$ of finite order}\}$
is an ideal of $\operatorname{End}(A)$. It isn't hard to prove that $I$ is a subgroup of $\operatorname{End}(A)$, but it is quite hard to prove that
$g \cdot f$ and $f \cdot g$ are in $I$ whenever $f \in I$ and $g \in \operatorname{End}(A)$.
I thought that if you multiply $g$ and $f$ and take an element $a$ of finite order, then $g(a) \cdot f(a) = g(a) \cdot 0 = 0$, so $(gf)(a) = 0$ for all $a$ of finite order, and hence $gf \in I$.
Is this proof correct?
A:
If $A$ is an abelian group, the multiplication on $\operatorname{End}(A)$ is the composition. It's customary to write $A$ additively and I'll use this convention.
It's clear that $0\in I$. If $f,g\in I$, then $(f+g)(a)=f(a)+g(a)=0+0=0$ and $(-f)(a)=-f(a)=0$ for all $a$ of finite order, so $I$ is an additive subgroup of $\operatorname{End}(A)$.
Now, let $f\in I$ and $g$ be an arbitrary endomorphism. Saying $a$ has finite order means that $na=0$ for some integer $n>0$.
Since $ng(a)=g(na)$, we see that if $a$ has finite order, then also $g(a)$ has finite order. Therefore
$$
(fg)(a)=f(g(a))=0
$$
by hypothesis. Proving that $gf\in I$ is even easier: if $a$ has finite order, then
$$
(gf)(a)=g(f(a))=g(0)=0.
$$
What are enzymes made of?
An enzyme is a kind of big, complicated protein molecule, made mainly of hydrogen and carbon atoms, but with some other atoms as well.
When did enzymes evolve?
Both prokaryote and eukaryote cells use enzymes (ENN-zimes), so the first enzymes probably evolved around four billion years ago, together with the first living cells.
How do cells make enzymes?
Prokaryote cells assemble enzymes in their cytoplasm. The cell’s DNA molecule uses enzymes to assemble RNA molecules, and these RNA molecules then in turn assemble more kinds of enzymes.
What do enzymes do?
The cell pushes some of these enzymes outside the cell to digest food by breaking it apart into smaller pieces that can get through the cell membrane. Other enzymes can digest smaller molecules floating in the cytoplasm inside the cell. Some enzymes grab two small molecules and attach them together to make a larger molecule, like a lipid to fix the cell membrane.
When the enzyme is done, it lets the new molecule go, and it’s ready to grab two more molecules and do it again. Each kind of enzyme has its own shape, and it will only work with molecules that fit into it exactly, like having the right puzzle piece in a puzzle, or the right key for a lock. That way each kind of enzyme can do its job and not interfere with anything else in the cell.
Eukaryote cells and enzymes
Eukaryote cells make enzymes in the endoplasmic reticulum, following instructions from the cell’s RNA. Then those enzymes, in turn, build more enzymes. The enzymes float out of the endoplasmic reticulum into the Golgi bodies and then into the lysosomes, where they break down large molecules of food or garbage.
Or the enzymes float into vacuoles, or into the cell’s nucleus where the enzymes help repair the cell’s DNA or build more RNA molecules. Some enzymes float around in the cytoplasm where they can break up viruses or germs that attack the cell.
How do new enzymes evolve?
New enzymes appear accidentally, when an old kind of enzyme gets broken or forms incorrectly. Usually these broken enzymes are no good to the cell, and sooner or later the enzymes in a lysosome break them down and recycle their parts.
But sometimes it turns out that the broken enzyme happens to be the right shape to match up with, for instance, a new virus that is trying to invade the cell and kill it. Maybe the new enzyme can break up that virus and kill it instead, saving the cell.
If the new enzyme turns out to be useful, then cells that make that enzyme will survive better, and the cells that don’t make that enzyme will die of the disease that virus causes. Measles is an example of a disease that we’re now better at surviving than we used to be. Your own cells each make more than a thousand different kinds of enzymes.
Enzymes that work outside of cells
Even in eukaryotes, some enzymes still leave the cell to digest food outside it. In your own body, you’ll find some of those enzymes in your stomach or in your saliva (spit).
Those enzymes break down your food so it can get into your cells through the cell membrane.
| https://quatr.us/biology/enzymes-cell-biology.htm
The Sort of Realism I Defend
I recently (Dec. 2014) described my philosophy of mathematics in three posts on my blog (putnamphil.blogspot.com). In brief, the main points were:
(1) An interpretation of mathematics must be compatible with scientific realism. It is not enough that the theorems of pure mathematics used in physics come out true under one’s interpretation of mathematics—even some antirealist interpretations arguably meet that constraint—the content of the “mixed statements” of science (empirical statements that contain some mathematical terms and some empirical terms) also needs to be interpretable in a realist way. For example, if a theory talks about electrons, according to me it is talking about things we cannot see with the naked eye, and not simply about what measuring instruments would do under certain circumstances, as operationalists and logical positivists maintained. I believe many proposed interpretations fail that test.
(2) Both objectualist interpretations (interpretations under which mathematics presupposes the mind-independent existence of sets as “intangible objects”) and potentialist/structuralist interpretations (interpretations under which mathematics only presupposes the possible existence of structures that exemplify the structural relations ascribed to sets) may meet the foregoing constraint. For example, under both Gödel’s (or Quine’s) Platonist interpretation and Hellman’s and my modal-logical interpretation, the logical connectives are interpreted classically. In contrast to this, under Brouwer’s interpretation, the logical connectives (including “or” and “not”) are interpreted in terms of (Brouwer’s version of) provability. For example, in Intuitionism, “P or Q” means “There is a proof that either there is a proof of P or there is a proof of Q”. But according to scientific realists, the statement that a physical system either has a property P or has a property Q does not entail that either disjunct can be proved, or even empirically verified. A statement can be true without being verifiable at all. But if statements of pure mathematics are interpreted intuitionistically, mustn’t statements of physics also be interpreted in terms of the same non-classical understanding of the logical connectives?
(3) But, while positing the actual existence of sets as “intangible objects” may justify the use of classical logic, it suffers not only from familiar epistemological problems (not to mention conflicting with naturalism, which is the reason Davis gives for rejecting it), but from a generalization of a problem first pointed out by Paul Benacerraf, a generalization I call “Benacerraf’s Paradox”: too many identities (or proposed identities) between different categories of mathematical “objects” seem undefined on the objectualist picture—e.g., are sets a kind of function, or are functions a sort of set? Are the natural numbers sets, and if so, which sets are they? Etc. For me, the objectualist’s lack of an answer that isn’t completely arbitrary tips the scales decisively in favor of potentialism/structuralism.
(4) Rejecting objectualism (as Martin and I both do) does not require one to say that sets, functions, numbers, etc., are fictions. (I hope Martin agrees.)
In “Mathematics without Foundations”, where I first proposed the modal-logical interpretation, I claimed that objectualism and potentialism are “equivalent descriptions”, which was a mistake. I now defend the view that potentialism is a rational reconstruction of our talk of “existence” in mathematics, rather than an “equivalent” way of talking. Rational reconstruction does not “deny the existence” of sets or (to change the example) of “a square root of minus one”; it provides a construal of such talk that avoids the paradoxes. In Davis’s language, the mathematician is talking about, for example, entities that play the role of a square root of minus one in certain hypothetical worlds, but unlike Gödel she does not suppose that such entities exist in some Platonic realm. (Gödel claimed we can perceive them with the aid of a special mental faculty.)
- The relevant publications, in addition to the already mentioned “What is Mathematical Truth” and “Mathematics without Foundations”, are “Set Theory, Replacement, and Modality”, collected in Philosophy in an Age of Science (Cambridge, MA: Harvard University Press, 2012), and “Reply to Steven Wagner”, forthcoming in The Philosophy of Hilary Putnam (Chicago: Open Court, 2015).
- Brouwer’s Intuitionism was my example of an interpretation that is incompatible with scientific realism in “What is Mathematical Truth”, 75.
- Gödel’s Platonism is a prototypical “objectualist” interpretation, but the term “intangible objects” was used by Quine in Theories and Things (Cambridge, MA: Harvard University Press, 1981), 149.
- For a fine defense of the claim that a statement can be true but unverifiable, see Tim Maudlin, “Confessions of a Hard-Core, Unsophisticated Metaphysical Realist”, forthcoming in The Philosophy of Hilary Putnam. Maudlin rightly criticizes me for giving it up in my “internal realist” period (1976-1990); after I returned to realism sans phrase in 1990 I defended the same claim in a number of places, e.g. “When ‘Evidence Transcendence’ Is Not Malign: A Reply to Crispin Wright,” Journal of Philosophy 98.11 (November 2001), 594-600.
- Paul Benacerraf (1965), “What Numbers Could Not Be”, Philosophical Review, Vol. 74, pp. 47-73. | https://ebrary.net/48213/mathematics/sort_realism_defend
Graded activity is a way of learning to manage chronic pain or illness by finding your baseline (the amount of activity you can do without exacerbating symptoms) and then gradually extending the difficulty, very slowly, over time. The aim is to be able to do the things that matter to you without causing flare-ups. Pacing is a strategy for avoiding a boom-and-bust cycle in which you do a lot on good days and then very little on bad days. The idea is to even out your energy use and reserves so you can have a little more predictability. I’ve been practising both strategies for a little while, and here is the latest update on how that’s going.
No problems last week so increasing to 14 minutes from today.
Using wrist braces and fingerless gloves. Still learning touch typing to reduce neck pain from looking down at the keyboard.
I haven’t been pacing with walking up to now. Just walking until I get tired. I would like to increase my fitness so I’m going to start timing how long I walk and taking rest breaks. I’m starting at 7 minutes which is how long it takes to get to my allotment plot and I can usually do that comfortably.
I’m still in the phase where my muscles are getting used to the activity and if I don’t pay attention to what I’m doing my posture slumps forward.
I’m hoeing the ground outside now in a bed that has already been dug over and isn’t needed for planting any time soon. This reduces the temptation to rush or overwork.
I’m going to continue making tiny 5 second increases every few days so that I build up stamina slowly without provoking a flare up in pain.
I’m increasing by a minute to 5 minutes and I’m following the movements provided by my audiologist (Qi Gong style movements).
My energy level has been much more consistent since changing to 15 minute breaks between each activity. This has slowed my day down but hopefully I’ll be able to build up over time. And if not then I am still happier if I don’t wind up resting in bed as often – and fingers crossed it seems to be okay as far as that’s concerned. | http://www.everydayheroics.com/2018/11/22/graded-activity-progress-update/ |
BACKGROUND
The sense of hearing in human beings involves the use of hair cells in the cochlea that convert or transduce acoustic signals into auditory nerve impulses. Hearing loss, which may be due to many different causes, is generally of two types: conductive and sensorineural. Conductive hearing loss occurs when the normal mechanical pathways for sound to reach the hair cells in the cochlea are impeded. These sound pathways may be impeded, for example, by damage to the auditory ossicles. Conductive hearing loss may often be helped by the use of conventional hearing aids that amplify sound so that acoustic signals reach the cochlea and the hair cells. Some types of conductive hearing loss may also be treated by surgical procedures.
Sensorineural hearing loss, on the other hand, is due to the absence or the destruction of the hair cells in the cochlea which are needed to transduce acoustic signals into auditory nerve impulses. Thus, people who suffer from sensorineural hearing loss are unable to derive any benefit from conventional hearing aid systems.
To overcome sensorineural hearing loss, numerous cochlear implant systems—or cochlear prostheses—have been developed. These devices seek to bypass the hair cells in the cochlea by presenting electrical stimulation directly to the auditory nerve fibers. This leads to the perception of sound in the brain and at least partial restoration of hearing function. To facilitate direct stimulation of the auditory nerve fibers, an array of electrodes may be implanted in the cochlea. A sound processor processes an incoming sound and translates it into electrical stimulation pulses applied by these electrodes, which directly stimulate the auditory nerve.
Many cochlear implant systems, as well as other types of neural stimulators, are configured to measure the effectiveness of an electrical stimulation current applied to neural tissue (e.g., the auditory nerve) by using a process known as neural response imaging (NRI). In NRI, the neural stimulator delivers an electrical stimulus to the neural tissue with a stimulating electrode and then records the resulting electrical activity of the neural tissue with a recording electrode. This resulting electrical activity is often referred to as an evoked neural response and occurs when the neural tissue depolarizes in response to the applied stimulus.
An evoked neural response may serve as a diagnostic measure to determine whether the neural stimulator is functioning correctly. NRI may also be used to determine optimal stimulation parameters for each electrode or electrode configuration. For example, NRI may be used to determine the lowest level of stimulation current that is required to evoke a neural response in a particular nerve. This information may then be used to optimize the stimulation parameters or settings of the cochlear implant system. NRI may also be used for a number of additional reasons.
In practice, however, the signal recorded by the recording electrode often includes undesirable signals that interfere with detection of the desired neural response. The terms “neural recording” and “neural recording signal” will be used herein and in the appended claims, unless otherwise specifically denoted, to refer to any signal recorded by the recording electrode. As will be explained in more detail below, a neural recording signal may include any combination of a neural response signal, noise, and/or stimulus artifact. Neural recording signals are sometimes referred to as evoked potential recordings.
As mentioned, a neural recording signal may include noise. Noise refers to any signal that is not correlated with the stimulus that is applied to the neural tissue by the neural stimulator. Noise is generally unpredictable.
Furthermore, a neural recording signal may also include stimulus artifact. Stimulus artifact includes signals, other than the neural response, that are correlated with the stimulus that is used to evoke the neural response. For example, the stimulus artifact may include the voltage potential of the stimulus pulse itself. Another source of stimulus artifact is cross-talk between the recording circuit and the stimulation circuit.
The presence of noise and stimulus artifact often makes it difficult to determine whether a neural recording includes a neural response. A number of conventional techniques exist for removing noise and stimulus artifact from a signal. However, these techniques are often ineffective when applied to a neural recording signal.
For example, filtering may be used to remove noise that has a different frequency than the frequency of a particular signal of interest. However, in neural stimulation systems, the frequency of the noise and the frequency of an evoked neural response signal are often similar. Thus, conventional filtering cannot always be used to remove noise from a neural recording.
Signal correlation may also be used to remove noise from a signal of interest. In signal correlation, a measured signal is correlated with a known reference signal to remove uncorrelated noise from the measured signal. However, evoked neural responses are often variable from patient to patient. Hence, a single reference signal cannot be used to correlate evoked neural responses from multiple patients. The signal correlation technique is therefore ineffective in many instances in removing noise from a neural recording.
Likewise, a number of conventional techniques exist for removing stimulus artifact from a neural recording. These techniques include alternating polarity, forward masking, third-phase compensation, and scaled template techniques. For example, in the alternating polarity technique, the neural response within the neural recording is estimated to be the average of the responses to a first stimulation pulse having a first polarity (e.g., cathodic) and a second stimulation pulse having the opposite polarity (e.g., anodic). The neural response keeps the same polarity when the polarity of the stimulus is reversed, whereas the stimulus artifact reverses polarity with the stimulus. Consequently, the average response to the two polarities has a lower artifact component than either of the responses taken by themselves. While the alternating polarity technique is sometimes successful in reducing stimulus artifact in a neural recording, it does not eliminate it in all cases. Furthermore, the alternating polarity technique, as well as the other conventional techniques, often leaves large residual stimulus artifacts in the neural recording.
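To make the averaging step concrete, here is a minimal sketch in Python, assuming the two recordings have already been time-aligned and sampled into NumPy arrays (the function and argument names are ours, not the patent's):

```python
import numpy as np

def alternating_polarity_average(resp_cathodic, resp_anodic):
    """Estimate the neural response by averaging recordings taken with
    opposite stimulus polarities. The neural response keeps the same sign
    in both recordings, while the stimulus artifact flips sign, so the
    artifact component is reduced (though not always eliminated) in the
    average."""
    return (np.asarray(resp_cathodic) + np.asarray(resp_anodic)) / 2.0
```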
As mentioned, it is often desirable to determine the minimum stimulation current level needed to evoke a neural response. This minimum stimulation current level is referred to as a “neural response threshold current level” or simply as a “neural response threshold.” The neural stimulator may then be configured to apply effective, comfortable, and optimal stimulus levels that conserve the power available to the stimulator. However, when a neural recording signal is marred by noise and artifact signals, it is often difficult to visually distinguish between a neural recording signal that includes a neural response signal and a neural recording signal that does not include a neural response signal. Thus, it is often difficult to determine the neural response threshold current level.
SUMMARY

Methods of automatically determining a neural response threshold current level include identifying one or more neural response signals at one or more corresponding stimulation current levels, identifying one or more non-response signals at one or more corresponding stimulation current levels, and analyzing a trend between the neural response signals and the non-response signals.

Systems for automatically determining a neural response threshold current level include one or more devices configured to identify one or more neural response signals at one or more corresponding stimulation current levels, identify one or more non-response signals at one or more corresponding stimulation current levels, and analyze a trend between the neural response signals and the non-response signals.
DETAILED DESCRIPTION

Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
Methods and systems for automatically determining a neural response threshold current level are described herein. A number of neural recording signals are obtained at different stimulation current levels. A minimum number of these neural recording signals are identified as including a neural response signal and a minimum number are identified as not including a neural response signal. The trend of the stimulation current levels corresponding to the identified signals is then analyzed to determine the value of the neural response threshold current.
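As a rough illustration of this trend analysis, the sketch below brackets the threshold between the highest current level with no identified response and the lowest level with one. This midpoint rule is only one plausible reading of "analyzing a trend"; the names and the rule itself are illustrative assumptions, not the patent's specification:

```python
def estimate_threshold(levels, is_response):
    """Estimate the neural response threshold from (level, flag) data,
    where each flag says whether a neural response was identified at
    that stimulation current level."""
    responders = [lvl for lvl, r in zip(levels, is_response) if r]
    non_responders = [lvl for lvl, r in zip(levels, is_response) if not r]
    if not responders or not non_responders:
        return None  # the data do not bracket a threshold
    return (max(non_responders) + min(responders)) / 2.0

# Example: responses identified at 0.8 mA and above, none below.
print(estimate_threshold([0.2, 0.4, 0.6, 0.8, 1.0],
                         [False, False, False, True, True]))  # -> 0.7
```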
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present systems and methods. It will be apparent, however, to one skilled in the art that the present systems and methods may be practiced without these specific details. Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearance of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
FIG. 1 shows a lead (101) supporting an electrode array with electrodes E1 through E8. The lead (101) may be attached to a neural stimulator (not shown). The stimulator is configured to provide an electrical current via the electrode array to stimulate target tissue, e.g., neural tissue (102). The stimulation current output at each of the electrodes (E1-E8) may be independently controlled by the stimulator. The lead (101) of FIG. 1 includes eight electrodes for illustrative purposes only. It will be recognized that the lead (101) may include any number of electrodes. Furthermore, the electrodes may be arranged in any of a number of configurations. For example, the electrodes may be arranged as an array having at least two or at least four collinear electrodes. In some embodiments, the electrodes are inductively coupled to the stimulator. The lead (101) may be thin (e.g., less than 3 millimeters in diameter) and flexible such that the lead (101) may be readily positioned near target neural tissue (102). Alternatively, the electrodes may be coupled directly to a leadless stimulator.
In some embodiments, each electrode (E1-E8) may be selectively configured to function as a stimulating electrode or a recording electrode as best serves a particular application. For example, E1 may be used as a stimulating electrode and E2 may be used as a recording electrode. A stimulus, e.g., an electrical stimulation current, may then be applied to the neural tissue (102) via the stimulating electrode E1. The resulting electrical activity of the nerve (102) when the nerve (102) depolarizes in response to the applied stimulus is recorded with the recording electrode E2. As mentioned previously, this electrical activity is referred to as an evoked neural response or simply a neural response.
FIG. 2 illustrates an exemplary stimulus (120), e.g., an electrical stimulation current pulse, that may be delivered to neural tissue via a stimulating electrode. The stimulus (120) of FIG. 2 is biphasic. In other words, the stimulus (120) includes two parts—a negative first phase having an area A1 and a positive second phase having an area A2. It is usually the negative phase that causes neural tissue to depolarize (fire). The biphasic stimulus (120) shown in FIG. 2 has an amplitude of 1 milliamp (mA) and a pulse width of 20 microseconds (μsec) for illustrative purposes only. It will be recognized that any of the characteristics of the stimulus (120), including, but not limited to, the pulse shape, amplitude, pulse width, frequency, burst pattern (e.g., burst on time and burst off time), duty cycle or burst repeat interval, ramp on time, and ramp off time may vary as best serves a particular application.
The biphasic stimulus (120) shown in FIG. 2 is "charge balanced" because the negative area A1 is equal to the positive area A2. A charge-balanced biphasic pulse is often employed as the stimulus to minimize electrode corrosion and charge build-up which can harm surrounding tissue. However, it will be recognized that the biphasic stimulus (120) may alternatively be charge-imbalanced as best serves a particular application.
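For intuition, the following sketch builds such a charge-balanced biphasic waveform as a sampled array; the sampling rate, default values, and function name are arbitrary choices for illustration, not values from the patent:

```python
import numpy as np

def biphasic_pulse(amplitude_ma=1.0, phase_width_us=20.0, fs_hz=1_000_000):
    """Return a charge-balanced biphasic pulse: a negative first phase
    followed by a positive second phase of equal area (A1 == A2)."""
    n = int(phase_width_us * 1e-6 * fs_hz)  # samples per phase
    return np.concatenate([-amplitude_ma * np.ones(n),
                           amplitude_ma * np.ones(n)])
```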
In some embodiments, when the amplitude and pulse width of the stimulus (120) of FIG. 2 reach a supra-threshold (i.e., a threshold stimulus large enough to depolarize a target nerve), the voltage gradient at some surface point on the nerve (102; FIG. 1) will be sufficiently negative as to cause the nerve (102; FIG. 1) to depolarize from its resting state and propagate an electrical signal along the length of the nerve (102). The voltage gradient of this electrical signal propagation can be captured with a recording electrode as the evoked neural response of the target nerve.
Before discussing the present methods and systems of automatically determining a neural response threshold current, it is helpful to understand the components of a number of exemplary neural stimulators in which the present methods and systems may be employed.
FIG. 3 shows an exemplary cochlear implant system (20) that may be used as a neural stimulator in accordance with the present methods and systems. Exemplary cochlear implant systems suitable for use as described herein include, but are not limited to, those disclosed in U.S. Pat. Nos. 6,219,580; 6,272,382; and 6,308,101, all of which are incorporated herein by reference in their respective entireties. The cochlear implant system (20) includes a speech processor portion (10) and a cochlear stimulation portion (12). The speech processor portion (10) may include a speech processor (SP) (16), a microphone (18), and/or additional circuitry as best serves a particular application. The cochlear stimulation portion (12) may include an implantable cochlear stimulator (ICS) (21), a number of electrodes (50) arranged in an electrode array (48), and/or additional circuitry as best serves a particular application. The components within the speech processor portion (10) and the cochlear stimulation portion (12) will be described in more detail below.
The microphone (18) of FIG. 3 is configured to sense acoustic signals and convert such sensed signals to corresponding electrical signals. The electrical signals are sent to the SP (16) over an electrical or other suitable link (24). Alternatively, the microphone (18) may be connected directly to, or integrated with, the SP (16). The SP (16) processes these converted acoustic signals in accordance with a selected speech processing strategy to generate appropriate control signals for controlling the ICS (21). These control signals may specify or define the polarity, magnitude, location (i.e., which electrode pair or electrode group receive the stimulation current), and timing (i.e., when the stimulation current is to be applied to a particular electrode pair) of the stimulation current that is generated by the ICS (21).
The electrode array (48) of FIG. 3 is adapted to be inserted within a duct of the cochlea. As shown in FIG. 3, the array (48) includes a multiplicity of electrodes (50), e.g., sixteen electrodes, spaced along its length. Each of the electrodes (50) is individually connected to the ICS (21). The electrode array (48) may be substantially as shown and described in U.S. Pat. Nos. 4,819,647 or 6,129,753, each of which is incorporated herein by reference in its respective entirety. Electronic circuitry within the ICS (21) is configured to apply stimulation current to selected pairs or groups of the individual electrodes (50) included within the electrode array (48) in accordance with a specified stimulation pattern defined by the SP (16).
The ICS (21) and the SP (16) may be electronically connected via a suitable data or communications link (14). In some embodiments, the SP (16) and the microphone (18) comprise an external portion of the cochlear implant system (20) and the ICS (21) and the electrode array (48) comprise an implantable portion of the system (20). In alternative embodiments, one or more portions of the SP (16) are included within the implantable portion of the cochlear implant system (20). The implantable portion of the cochlear implant system (20) is implanted within the patient's body. Thus, the data link (14) is a transcutaneous (through the skin) data link that allows power and control signals to be sent from the SP (16) to the ICS (21). In some embodiments, data and status signals may also be sent from the ICS (21) to the SP (16).
The external and implantable portions of the cochlear implant system (20) may each include one or more coils configured to transmit and receive power and/or control signals via the data link (14). For example, the external portion of the cochlear implant system (20) may include an external coil (not shown) and the implantable portion of the cochlear implant system (20) may include an implantable coil (not shown). The external coil and the implantable coil may be inductively coupled to each other, thereby allowing data to be transmitted between the external portion and the implantable portion. The data may include, for example, the magnitude and polarity of a sensed acoustic signal. The external coil may also transmit power from the external portion to the implantable portion of the cochlear implant system (20). It will be noted that, in some embodiments, both the SP (16) and the ICS (21) may be implanted within the patient, either in the same housing or in separate housings. If the SP (16) and the ICS (21) are in the same housing, the link (14) may be realized with a direct wire connection within such housing. If the SP (16) and the ICS (21) are in separate housings, the link (14) may be an inductive link, for example.
FIG. 4 is a functional block diagram of an exemplary SP (16) and ICS (21). The functions shown in FIG. 4 are merely representative of the many different functions that may be performed by the SP (16) and/or the ICS (21). A more complete description of the functional block diagram of the SP (16) and the ICS (21) is found in U.S. Pat. No. 6,219,580, which is incorporated herein by reference in its entirety.
As shown in FIG. 4, the microphone (18) senses acoustic information, such as speech and music, and converts the acoustic information into one or more electrical signals. These signals are then amplified in audio front-end (AFE) circuitry (22). The amplified audio signal is then converted to a digital signal by an analog-to-digital (A/D) converter (28). The resulting digital signal is then subjected to automatic gain control using a suitable automatic gain control (AGC) function (29).
After appropriate automatic gain control, the digital signal is then processed in one of a number of digital signal processing or analysis channels (44). For example, the SP (16) may include, but is not limited to, eight analysis channels (44). Each analysis channel (44) may respond to a different frequency content of the sensed acoustical signal. In other words, each analysis channel (44) includes a band-pass filter (BP1-BPFn) or other type of filter such that the digital signal is divided into n frequency channels. The lowest frequency filter may be a low-pass filter, and the highest frequency filter may be a high-pass filter.
As shown in FIG. 4, each analysis channel (44) may also include a detection stage (D1-Dn). Each detection stage (D1-Dn) may include an energy detection circuit (not shown), which may be realized, e.g., through a rectification circuit followed by an integrator circuit. As shown in FIG. 4, each of the detection stages (D1-Dn) may alternatively be bypassed depending on the particular signal processing strategy being used.
After energy detection, or bypassing of such, the signal from each of the n analysis channels (44) is forwarded to a mapping stage (41). The mapping stage (41) may be configured to map the signals in each of the analysis channels (44) to one or more of the m stimulus channels (46). The mapping stage (41) may be further configured to perform additional processing of the signal, such as signal compression. The signals output by each analysis channel (44) may then be serialized by a multiplexer (45) into one serial data channel. The multiplexed signal may then be further processed according to information included in a pulse table (42) connected to an arithmetic logic unit (ALU) (43). After the signal is appropriately processed, compressed, and mapped, the signal may be input into the ICS (21) to control the actual stimulus patterns that are applied to the patient via the electrode array (48; FIG. 3).
As mentioned, each of the n analysis channels (44) may be mapped to one or more stimulus channels (46). In other words, the information contained in the n analysis channels (44) controls the stimulus patterns that are applied to the patient by the ICS (21) and its associated electrode array (48; FIG. 3). Stimulus current may be applied to any number of stimulation sites within the patient's cochlea via the m stimulus channels (46). As used herein and in the appended claims, the term "stimulation site" will be used to refer to a target area or location at which the stimulus current is applied. For example, a stimulation site may refer to a particular location in the neural tissue of a cochlear implant patient. Through appropriate weighting and sharing of currents between the electrodes (50; FIG. 3), stimulus current may be applied to any stimulation site along the length of the electrode array (48; FIG. 3).
FIGS. 5A and 5B show a spinal cord stimulator (SCS) system (110) that may be used as a neural stimulator in accordance with the present methods and systems. The SCS (110) may be used to treat a number of different medical conditions such as, but not limited to, chronic pain.
As shown in FIG. 5A, the SCS (110) may include an implantable pulse generator (IPG) (112), a lead extension (114), and an electrode lead (116) having an electrode array (118) thereon. The electrode array (118) includes a plurality of electrodes (117). The electrodes (117) may be arranged, as shown in FIG. 5A, in an in-line array near the distal end of the lead (116). Other electrode array configurations may also be used. The lead extension (114) need not always be used with the SCS (110), but may be used depending on the physical distance between the IPG (112) and the stimulation site within the patient. The IPG (112) is configured to generate stimulation current pulses that are applied to a stimulation site via one or more of the electrodes (117). Exemplary spinal cord stimulators suitable for use as described herein include, but are not limited to, those disclosed in U.S. Pat. Nos. 5,501,703; 6,487,446; and 6,516,227, all of which are incorporated herein by reference in their respective entireties.
FIG. 5B shows that the electrode array (118) of the SCS (110) may be implanted in the epidural space (120) of a patient in close proximity to the spinal cord (119). Because of the lack of space near the lead exit point (115) where the electrode lead (116) exits the spinal column, the IPG (112) is generally implanted in the abdomen or above the buttocks. However, it will be recognized that the IPG (112) may be implanted in any suitable implantation site. The lead extension (114) facilitates implanting the IPG (112) at a location that is relatively distant from the lead exit point (115).
The cochlear implant system (20; FIG. 3) and the SCS (110; FIG. 5A) are merely illustrative of many types of neural stimulators that may be used to perform NRI. For example, the neural stimulator may additionally or alternatively include an implantable pulse generator (IPG) coupled to one or more leads having a number of electrodes, a deep brain stimulator, an implantable microstimulator, an external stimulator, or any other type of stimulator configured to perform NRI. Exemplary IPGs suitable for use as described herein include, but are not limited to, those disclosed in U.S. Pat. Nos. 6,381,496, 6,553,263; and 6,760,626. Exemplary deep brain stimulators suitable for use as described herein include, but are not limited to, those disclosed in U.S. Pat. Nos. 5,938,688; 6,016,449; and 6,539,263. Exemplary implantable microstimulators, such as the BION® microstimulator (Advanced Bionics® Corporation, Valencia, Calif.), suitable for use as described herein include, but are not limited to, those disclosed in U.S. Pat. Nos. 5,193,539; 5,193,540; 5,312,439; 6,185,452; 6,164,284; 6,208,894; and 6,051,017. All of these listed patents are incorporated herein by reference in their respective entireties.
As mentioned, it is often desirable to deliver a stimulus to neural tissue with a stimulating electrode and then record the resulting electrical activity of the neural tissue with a recording electrode. This resulting electrical activity is referred to as an evoked neural response or simply, a neural response, and occurs when the neural tissue depolarizes in response to the applied stimulus.
For example, in a normal ear, a single auditory nerve fiber or cell generates an action potential when the cell's membrane is depolarized to a threshold value, after which a spike occurs. Sodium ions entering the cell make the inside of the cell more positive, that is, depolarized. In some embodiments, an electrical stimulation current may be used to depolarize the nerve cell. This depolarization effect can be likened to taking a photograph by pressing the shutter button on a camera. Pressing on the button has no effect until it crosses a threshold pressure, and then “click”—the shutter opens and the film is exposed. In the same way, depolarizing a neuron has no effect until the depolarization reaches a threshold, and then, all at once, an action potential is generated.
The evoked neural response as recorded by the recording electrode includes a sum of action potentials of a number of nerve cells. FIG. 6A is a graph depicting an exemplary evoked neural response signal (160). As shown in FIG. 6A, the horizontal axis represents time in samples and the vertical axis represents the amplitude of the response in microvolts (μV). As shown in FIG. 6A, the evoked neural response signal (160) is typically characterized by a first negative peak (N1) followed by a first positive peak (P1). It will be recognized that evoked neural responses differ in timing and amplitude from patient to patient.
Unfortunately, the recording electrode may additionally or alternatively record noise and/or stimulus artifact. In general, a neural recording may include any combination of a neural response signal, noise, and/or stimulus artifact. In some instances, the neural recording obtained by the recording electrode only includes stimulus artifact and noise. For example, if the stimulus pulse is too low to trigger depolarization of the nerve (102), the nerve (102) will not produce a neural response and the recording electrode will only record the stimulus artifact and any noise that is present.
FIG. 6B is a graph depicting an exemplary artifact signal (161). The artifact signal (161) is typically characterized as a sum of two decaying exponentials, one with a fast time constant and one with a slow time constant.
FIG. 6C is a graph depicting a neural recording signal (162) that includes both the evoked neural response signal (160) of FIG. 6A and the artifact signal (161) of FIG. 6B. As shown in FIG. 6C, the neural recording signal (162) is a sum of the evoked neural response signal (160; FIG. 6A) and the artifact signal (161; FIG. 6B).
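The additive model of FIGS. 6A-6C can be mimicked with a toy synthetic signal, which is often handy when testing recording-analysis code. Every constant below is invented for illustration; only the overall shape (an N1-P1 response plus a two-exponential artifact) follows the description above:

```python
import numpy as np

t = np.arange(0.0, 2e-3, 1e-5)  # a 2 ms window sampled at 100 kHz

# Toy neural response: a negative peak (N1) followed by a positive peak (P1).
response = (-80e-6 * np.exp(-((t - 0.4e-3) ** 2) / (2 * (0.10e-3) ** 2))
            + 50e-6 * np.exp(-((t - 0.8e-3) ** 2) / (2 * (0.15e-3) ** 2)))

# Toy stimulus artifact: sum of fast- and slow-decaying exponentials.
artifact = 200e-6 * np.exp(-t / 0.05e-3) + 40e-6 * np.exp(-t / 0.5e-3)

recording = response + artifact  # the neural recording is their sum
```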
As mentioned previously, the neural recording signal obtained by a recording electrode may also include noise. Noise refers to any signal that is not correlated with the stimulus pulse and is generally unpredictable. FIG. 7A is a graph depicting an exemplary noise signal (170) that may be recorded by the recording electrode. Because the noise signal (170) is unpredictable, the noise signal (170) may have any frequency or amplitude.
FIGS. 7B-7D are graphs depicting the effect (171) of the noise signal (170; FIG. 7A) on the evoked neural response signal (160) of FIG. 6A, the effect (172) of the noise signal (170; FIG. 7A) on the artifact signal (161) of FIG. 6B, and the effect (173) of the noise signal (170; FIG. 7A) on the neural recording signal (162) of FIG. 6C, respectively.
It is often desirable to determine whether a neural recording signal includes a neural response signal or whether the neural recording signal only includes noise and/or artifact signals. Currently, medical practitioners typically need to be trained to identify signals as containing valid neural responses from a visual display of the neural recording signal. For example, a typical neural response signal of the auditory nerve to a stimulus pulse includes a negative peak followed by a positive peak, such as the signal (160) shown in FIG. 6A. Waveforms that do not fall into this pattern are assumed to be recordings that contain only noise and/or stimulus artifact. However, because medical practitioners may have various degrees of training, the results of identifying signals containing valid neural responses may vary greatly from one practitioner to the next. In addition, some valid neural responses do not follow typical neural response patterns. Whether these responses will be correctly identified as valid neural responses depends on the judgment of the practitioner.
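A crude automated version of that visual check might look like the sketch below: it requires a negative extremum followed by a positive extremum, both standing well above the noise floor. The SNR threshold and function name are invented for illustration and are not the criterion claimed in the related application:

```python
import numpy as np

def looks_like_response(recording, noise_std, snr_min=3.0):
    """Return True if the recording shows a negative peak (N1) followed
    by a positive peak (P1), each at least snr_min times the noise level."""
    x = np.asarray(recording, dtype=float)
    i_n1, i_p1 = int(np.argmin(x)), int(np.argmax(x))
    big_enough = (-x[i_n1] > snr_min * noise_std) and (x[i_p1] > snr_min * noise_std)
    return big_enough and i_n1 < i_p1
```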
To overcome the inaccuracies of practitioner identification of valid neural responses and to improve NRI performance, the identification of a neural recording signal that includes a neural response signal may be automated. FIG. 8 is a flow chart illustrating an exemplary method of automatically identifying a neural recording signal that includes a neural response signal. The method described in connection with FIG. 8 is more fully described in a related application entitled "Methods and Systems for Automatically Identifying a Neural Recording Signal as Including a Neural Response Signal" to Litvak et al., client docket number AB-613U, which application was filed simultaneously with the present application on Jun. 1, 2005. The AB-613U application is incorporated herein by reference in its entirety.
The method described in connection with FIG. 8 may be used in connection with any type of neural stimulator. Furthermore, the steps shown in FIG. 8 and described below may be modified, reordered, removed, and/or added to as best serves a particular application. It will be recognized that a computer, digital signal processor (DSP), mathematical application, or any other suitable device, signal processor, software, firmware, or application may be used to implement one or more of the steps described in connection with FIG. 8.
As shown in FIG. 8, a neural recording signal is first obtained (step 210). As described above, the neural recording signal may be obtained by stimulating neural tissue with a stimulating electrode and then recording the electrical response of the neural tissue with a recording electrode. It will be recognized that the neural recording signal may be evoked in response to stimulus applied to any neural tissue by any neural stimulator. For example, the neural recording signal may capture a neural response evoked by a stimulus applied to the auditory nerve with a cochlear implant system.
Once the neural recording signal has been obtained (step 210), the neural recording signal is conditioned (step 211). In some embodiments, the neural recording signal is conditioned by removing the mean of the data within the neural recording, removing a trend from the data, and/or removing an overall DC voltage level from the data. The neural recording signal may additionally or alternatively be conditioned using any other suitable conditioning technique.
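As an illustration, the conditioning pass described above can be reduced to a few lines. The sketch below is a minimal Python/NumPy rendering and is only one of many suitable implementations; the patent does not prescribe a language or library.

    import numpy as np

    def condition(recording):
        """Condition a neural recording: remove mean, linear trend, and DC level."""
        t = np.arange(recording.size)
        # Fitting and subtracting a first-order polynomial removes the linear
        # trend along with the mean and any overall DC offset in one step.
        slope, intercept = np.polyfit(t, recording, 1)
        return recording - (slope * t + intercept)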
The noise that is present in the neural recording signal is then estimated (step 212). A number of different techniques may be used to estimate the noise in the neural recording signal. For example, the noise may be estimated by computing the standard deviation of the data near the tail of the neural recording signal. Alternatively, the noise may be directly estimated by analyzing variability between a number of different neural recording signals that are obtained.
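A sketch of the first technique, estimating the noise from the tail of the recording where the evoked response has largely died out, might look as follows; the 20-sample tail length is an assumption for illustration, not a value from the text.

    import numpy as np

    def estimate_noise_std(recording, tail_samples=20):
        """Estimate noise as the standard deviation of the data near the tail."""
        return np.std(recording[-tail_samples:], ddof=1)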
The neural recording signal is then denoised (step 213). The neural recording signal may be denoised using any of a number of different techniques. For example, the neural recording signal may be denoised by applying principal component analysis, as is more fully described in a related application entitled "Methods and Systems for Denoising a Neural Recording Signal" to Litvak et al., client docket number AB-611U, which application was filed simultaneously with the present application on Jun. 1, 2005. The AB-611U application is incorporated herein by reference in its entirety.
An exemplary method of denoising a neural recording signal by applying principal component analysis will now be described in connection with the flow chart shown in FIG. 9. The term "denoising" will be used herein and in the appended claims, unless otherwise specifically denoted, to refer to decreasing or removing noise from a neural recording signal or any other signal as best serves a particular application. The method may be used in connection with any type of neural stimulator. The steps shown in FIG. 9 and described below may be modified, reordered, removed, and/or added to as best serves a particular application.
As shown in FIG. 9, a number of basis functions are first derived using principal component analysis to describe a set of previously collected neural recording signals (step 180). Principal component analysis is a statistical technique used to derive a number of functions that, when summed together, describe a given set of data. These functions are often referred to as basis functions or principal components, both of which terms will be used interchangeably herein and in the appended claims unless otherwise specifically denoted.
An example of deriving a number of basis functions that describe a set of neural recording signals corresponding to the auditory nerve will now be given. It will be recognized that the following example is merely illustrative and that the neural recording signal may be evoked in response to stimulus applied to any neural tissue by any neural stimulator.
A large number of neural recording signals were evoked and recorded by audiologists over a period of time. Each measured waveform was computed by averaging the response to a cathodic-anodic and anodic-cathodic stimulus pulse. A two-point averaging filter was then applied to the data. In addition, synchronized noise was measured by recording the response to stimulation with zero current. The synchronized noise was then subtracted from the response to the cathodic-anodic and anodic-cathodic stimulus pulse.
The evoked neural recording signals were then collected into a measurement matrix M = [m_1 . . . m_8000]. As used herein and in the appended claims, unless otherwise specifically denoted, bold capital letters will be used to refer to matrices and bold lower-case letters will be used to refer to vectors. Hence, M is a matrix containing 8,000 measured neural recording signals m_1 through m_8000. Although M contains 8,000 measured neural recording signals in the present example, it will be recognized that M may contain any number of measured neural recording signals as best serves a particular application.
Eigenvalue decomposition was then used to compute the principal components of M. MATLAB™ or any other mathematical tool may be used to perform the eigenvalue decomposition. First, the covariance matrix C_M = COV(M′) was computed. A vector of eigenvalues (λ) and a matrix of eigenvectors arranged in columns (V_full) were then computed. The matrix V_full contains the full components that account entirely for the measurement matrix M.
Because the covariance matrix C_M is symmetric, the eigenvectors within the matrix V_full are orthogonal. The eigenvectors within the matrix V_full may be normalized to have a norm of one.
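The derivation of the basis functions can be sketched as follows, with NumPy standing in for MATLAB and a random matrix standing in for the 8,000 measured recordings; the waveform length of 64 samples is likewise an assumption for illustration.

    import numpy as np

    # Stand-in for the measurement matrix M: each column is one measured
    # neural recording signal (here 64 samples per waveform, 8000 recordings).
    rng = np.random.default_rng(0)
    M = rng.standard_normal((64, 8000))

    C_M = np.cov(M)                        # covariance of the waveform samples
    eigvals, V_full = np.linalg.eigh(C_M)  # eigh suits symmetric matrices and
                                           # returns orthogonal, unit-norm eigenvectors
    order = np.argsort(eigvals)[::-1]      # sort components by decreasing variance
    eigvals, V_full = eigvals[order], V_full[:, order]

    # Keep the smallest number of leading components accounting for ~98.6%
    # of the variance (seven components in the example discussed below).
    explained = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(explained, 0.986)) + 1
    V = V_full[:, :k]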
Although V_full contains the full components that account entirely for the data contained in measurement matrix M, it can be shown that a lesser number of these components may sufficiently account for the data in M. FIG. 10 is a graph showing the percent of unaccounted variance in M as a function of the number of components. As shown in FIG. 10, the percent of unaccounted variance decreases as more components are included. However, as shown in FIG. 10, a small number of components (e.g., 5 to 10 components) may account for approximately 98 to 99 percent of the variance.
FIG. 11 is a graph illustrating the difference of standard deviations of the errors in the beginning versus in the end of the waveforms in M as a function of the number of components included. The error bars (e.g., 190) are approximately 99 percent confidence intervals around the mean estimate of the error. As shown in FIG. 11, the difference becomes zero for eight components. For higher numbers of components, some noise is captured in the measurements. Hence, the error in the beginning portion of the stimulus is less than the standard deviation.
The results shown in FIGS. 10 and 11 may be used to determine an optimal number of basis functions or components for a given application. For example, seven components capture approximately 98.6 percent of the variance in the data and have a 2 μV mean difference. Thus, seven components are sufficient for many different applications. The examples given herein will use seven components or basis functions. However, it will be recognized that any number of basis functions may be chosen to represent the set of evoked neural recording signals in M.
FIG. 12 is a graph showing seven basis functions or components. As shown in FIG. 12, the top basis function (basis function number 7) looks like a neural response signal. The remaining basis functions account for differences in the evoked neural recording signals in M. For purposes of the present example, the seven basis functions or components will be represented by the component matrix V = [v_1 . . . v_7], where v_1 through v_7 are vectors representing the seven basis functions. As will be described in more detail below, the component matrix V may be used to denoise an incoming neural recording signal.
Returning to the flow chart of FIG. 9, once the component matrix V has been determined, the next step is to determine relative weights for the basis functions v_1 through v_7 corresponding to an incoming neural recording signal (step 181). In other words, the amount of each basis function v_1 through v_7 that is present in the incoming neural recording signal is determined. A computer, digital signal processor (DSP), or any other suitable device or application may be used to determine the relative weights for the basis functions. As will be described in more detail below, the incoming neural recording signal is denoised by multiplying the weights with the basis functions v_1 through v_7.
For example, assume that the incoming neural recording signal is represented by m. The relative weights for the basis functions v_1 through v_7 are determined by correlating the incoming neural recording signal m with the basis functions in the component matrix V. Hence, the weights are equal to V′ m.
As shown in FIG. 9, the weights are then multiplied with the basis functions to denoise the incoming neural recording signal (step 182). Thus, the denoised neural recording signal, m_denoised, is equal to V V′ m. For ease of explanation, m_denoised = T m, where T is the denoising matrix equal to V V′. A computer, digital signal processor (DSP), or any other suitable device or application may be used to resynthesize the weights.
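A minimal sketch of this correlate-and-resynthesize step, assuming V holds the retained basis functions in its columns:

    import numpy as np

    def denoise(m, V):
        """Denoise a recording by projecting it onto the basis functions.

        weights = V' m        (correlation with each basis function)
        m_denoised = V weights = (V V') m = T m
        """
        weights = V.T @ m         # relative weight of each basis function
        return V @ weights        # resynthesized, denoised recording

Equivalently, the denoising matrix T = V V′ can be formed once and applied to every incoming recording.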
Mathematically, the denoising effect of multiplying the weights with the basis functions can be shown by the following equations. Suppose that the incoming neural recording signal is m = s + n, where s represents the evoked neural response signal and/or artifact signal and n represents the uncorrelated noise. Without loss of generality, it can be assumed that n has a zero mean. The denoised waveform is then m_denoised = T m = T s + T n = s + T n. Therefore, the uncorrelated noise in the denoised waveform is n_denoised = m_denoised − s = T n.
Conceptually, the denoising effect of multiplying the weights with the basis functions can be illustrated by the following example. Suppose that there is only one basis function and the incoming neural recording only contains noise. When this incoming noise is correlated with the basis function, the resulting weight value is low, indicating that the noise does not correlate with the basis function. When the low weight number is multiplied with the basis function, the resulting signal is characterized by a smaller magnitude than the incoming noise signal.
On the other hand, suppose that the incoming neural recording is noiseless. Therefore, when the incoming neural recording signal is correlated with the single basis function, the resulting weight number is high, indicating that the incoming neural recording signal correlates with the basis function. When the high weight number is multiplied with the basis function, the resulting signal is characterized by a magnitude that is relatively close to the magnitude of the incoming neural recording signal.
The noise can be described by the covariance matrix C_Dn = E[n_denoised n_denoised′] = T E[n n′] T′ = T C_n T′. The diagonal of the matrix C_Dn is the variance at any point. Therefore, the square root of the diagonal is equal to the standard deviation at any given point. Assuming that the incoming noise is white, with unity variance, the decrease in the noise standard deviation is shown in FIG. 13. FIG. 13 shows, as a function of sample number, the amount by which noise is reduced for each point of the waveform representing the neural recording. The horizontal line (193) represents the noise level of the incoming neural recording before denoising. The line (192) represents the noise level of the incoming neural recording signal after denoising. The shaded area represents the range of time where most of the response energy is maximal. In this area, as shown in FIG. 13, an average reduction in noise of nearly 50 percent is achieved by the denoising technique described herein.
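Under the stated white-noise assumption (C_n equal to the identity matrix), the per-sample noise reduction plotted in FIG. 13 can be outlined as below; this is a sketch of the computation, not the figure's actual data.

    import numpy as np

    def denoised_noise_std(V):
        """Per-sample noise standard deviation after denoising, assuming white
        input noise with unity variance (C_n = I)."""
        T = V @ V.T                     # denoising matrix
        C_Dn = T @ T.T                  # = T C_n T' with C_n = I
        return np.sqrt(np.diag(C_Dn))   # square root of the variance at each point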
In some embodiments, greater noise reductions may be achieved by including fewer components. However, the cost of including fewer components may be loss of some energy in the denoised signal.
Returning to the flow chart of FIG. 8, after the neural recording signal has been denoised (step 213), confidence intervals corresponding to the neural recording signal may be determined (step 214). The confidence intervals take into account the uncertainty in the denoised neural response signal. The confidence intervals may be derived from any combination of a number of contributing factors including, but not limited to, estimates of noise levels, relative noise levels before and after multiplying the weights with the basis functions, and other factors.
As shown in FIG. 8, the method also includes fitting an artifact model to the obtained neural recording signal (step 216). The artifact model describes a typical or model stimulus artifact signal, and, as will be described in more detail below, may be used to determine whether a neural recording signal includes a neural response signal.
As used herein and in the appended claims, unless otherwise specifically denoted, the variable a_m(t) will be used to represent an artifact model. As mentioned, a stimulus artifact signal can be characterized as a sum of two decaying exponentials. Hence, the artifact model may be described by the following equation: a_m(t) = A_1·e^(−α·t) + B·e^(−β·t), where α and β are time constants. Since the time constant β is large compared to the time scale of interest, the second exponential in this equation can be estimated by a linear trend. Hence, a_m(t) = A_1·e^(−α·t) + A_2·t + A_3. All of the parameters in this model are linear, except for the coefficient α. As will be described in more detail below, the values of the parameters [α, A_1, A_2, A_3] may be adjusted to fit the artifact model to a neural recording signal.
The variable m(t) will be used herein and in the appended claims, unless otherwise specifically denoted, to represent a neural recording signal. Hence, m(t) = a(t) + s(t) + n(t), where a(t) represents the stimulus artifact signal, s(t) represents the neural response signal, and n(t) represents the noise signal. To fit the artifact model a_m(t) to the neural recording signal m(t), the model parameters [α, A_1, A_2, A_3] are determined for which the error between the artifact model and the data within the neural recording signal is minimized. Heuristic optimizations may be applied to limit the artifact model. For example, the parameter A_1 may be required to have a positive value. A computer, digital signal processor (DSP), mathematical application, or any other suitable device or application may be used to fit the artifact model a_m(t) to the neural recording signal m(t).
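Because the model is linear in [A_1, A_2, A_3] once α is fixed, one natural fitting strategy is to scan candidate values of α and solve a linear least-squares problem for each; the grid search below is an assumed approach for illustration, not an optimizer named by the text.

    import numpy as np

    def fit_artifact_model(m, t, alphas):
        """Fit a_m(t) = A1*exp(-alpha*t) + A2*t + A3 to a recording m(t).

        Scans the nonlinear parameter alpha over a candidate grid; the linear
        parameters are found by least squares for each alpha. The heuristic
        constraint that A1 be positive is enforced by discarding violating fits.
        """
        best_err, best_fit = np.inf, None
        for alpha in alphas:
            X = np.column_stack([np.exp(-alpha * t), t, np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(X, m, rcond=None)
            if coef[0] < 0:
                continue
            err = np.sum((X @ coef - m) ** 2)
            if err < best_err:
                best_err, best_fit = err, (alpha, *coef)
        return best_fit  # (alpha, A1, A2, A3), or None if no admissible fit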
Once the artifact model has been fitted to the neural recording signal (step 216), the fitted artifact model signal is denoised (step 217). The fitted artifact model is denoised to eliminate or reduce distortions or uncertainties in the model due to the noise that is present in the neural recording signal. The fitted artifact model signal may be denoised using principal component analysis, as described above, or by using any other suitable denoising technique.
After the fitted artifact model signal has been denoised (step 217), confidence intervals for the fitted artifact model signal are determined (step 218). These confidence intervals are determined by a number of uncertainties in the artifact parameters given the noise level in the neural recording signal. For example, there may be uncertainty in the stimulus, uncertainty in the model, and uncertainty in the noise. FIG. 14 is a graph illustrating the relative contribution of the noise (230) and the artifact model (231) to the overall uncertainty of the fitted artifact model signal. The results in FIG. 14 and/or additional or alternative factors may be used in determining the confidence intervals for the fitted artifact model signal.
Returning to FIG. 8, once the confidence intervals have been determined for the neural recording signal (step 214) and for the fitted artifact model signal (step 218), net or total confidence intervals are computed by summing the neural recording signal confidence intervals and fitted artifact model signal confidence intervals (step 219). FIGS. 15A and 15B show exemplary net confidence intervals (240). As will be described in more detail below, these net confidence intervals (240) are used to determine whether a neural recording signal includes a neural response signal.
Returning to FIG. 8, after the net confidence intervals have been computed (step 219), a strength-of-response (SOR) metric corresponding to the observed neural recording signal is computed (step 220). The SOR metric describes the distance of the fitted artifact model signal to the observed neural recording signal relative to the net confidence intervals (240). A neural recording signal may be identified as including a neural response signal if the SOR metric exceeds a pre-determined threshold.
The SOR metric may be any metric that describes the distance of the fitted artifact model signal to the observed neural recording signal relative to the net confidence intervals (240). A number of different SOR metrics may be used. One exemplary SOR metric is
SOR = \left( \frac{1}{35} \sum_{t \in [22,27]} \left( \frac{\overline{m}(t) - \overline{a}_m(t)}{c(t)} \right)^{6} \right)^{1/6},
where c(t) is the net confidence interval size. This equation may be modified as best serves a particular application.
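Computed literally from the equation above, an implementation might read as follows; the sample window t in [22, 27] and the 1/35 factor are taken directly from the equation as printed.

    import numpy as np

    def sor_metric(m_bar, a_m_bar, c):
        """Strength-of-response metric per the equation above.

        m_bar:   the (denoised) neural recording signal, sampled
        a_m_bar: the (denoised) fitted artifact model signal, sampled
        c:       the net confidence interval size at each sample
        """
        t = np.arange(22, 28)                    # t in [22, 27], inclusive
        ratio = (m_bar[t] - a_m_bar[t]) / c[t]
        return (np.sum(ratio ** 6) / 35.0) ** (1.0 / 6.0)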
The size of the SOR metric is then evaluated (step 221) to determine whether the neural recording signal includes a neural response signal or whether the neural recording signal only includes noise and artifact signals. The SOR metric evaluation may be performed automatically with a computer, DSP, mathematical application, or any other suitable device or application. If the SOR metric exceeds a pre-determined SOR threshold value, the neural recording signal is identified as including a neural response signal (step 222). Conversely, if the SOR metric is below the pre-determined SOR threshold, the neural recording signal is identified as not including a neural response signal (step 224).
Additionally or alternatively, further neural recording signals may be obtained and averaged (step 223) if the SOR metric is too close to the SOR threshold to accurately determine whether the neural recording signal includes a neural response signal. A new SOR metric may be computed and evaluated for these additional neural recording signals.
An example of determining whether a neural recording signal includes a neural response signal by evaluating the SOR metric will be described in connection with FIGS. 15A and 15B. FIG. 15A shows a first exemplary neural recording signal (242) and a corresponding denoised neural recording signal (241) that has been fitted by the artifact model. As shown in FIG. 15A, the denoised recording signal (241) is relatively close to the confidence interval (240) and may therefore be difficult to visually identify as including a neural response signal. However, suppose that the pre-determined SOR threshold is 35. Using the SOR metric equation shown above, the SOR metric for the denoised recording signal (241) is equal to 39.059, well above the SOR threshold value of 35. Therefore, the neural recording signal (242) may be identified as including a neural response signal.
FIG. 15B shows a second exemplary neural recording signal (244) and its corresponding denoised neural recording signal (243) that has been fitted by the artifact model. The SOR metric for this neural recording signal (244) is equal to 20.2673, well below the SOR threshold value of 35. Therefore, the neural recording signal (244) may be identified as not including a neural response signal.
An optimal threshold value may be determined using a number of different techniques. In some embodiments, the optimal threshold value is determined by comparing the results of the automatic neural response identification method of FIG. 8 to the results of visual identification of the same neural response signals by expert medical practitioners. For example, FIG. 16 is a graph that shows error rates of the automatic neural response identification method when compared to visual identification of neural response signals by expert medical practitioners for a number of different threshold values. Curve (250) shows the percentage of "false positives" (i.e., the percentage of neural recording signals falsely identified as including a neural response signal) per threshold value, curve (251) shows the percentage of "false negatives" (i.e., the percentage of neural recording signals falsely identified as not including a neural response signal) per threshold value, and curve (252) shows the net error rate per threshold value. The optimal threshold value is determined by choosing the threshold value that corresponds to the minimum value of the net error rate curve (252). Hence, the optimal threshold value for the curves shown in FIG. 16 is approximately equal to 35.
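Given sampled false-positive and false-negative curves, the selection reduces to a one-line minimization. The sketch below assumes the net error rate is the simple sum of the two curves, which the text does not state explicitly.

    import numpy as np

    def optimal_threshold(thresholds, false_pos_rate, false_neg_rate):
        """Pick the threshold that minimizes the net error rate curve."""
        net_error = false_pos_rate + false_neg_rate  # assumed definition of "net"
        return thresholds[np.argmin(net_error)]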
As mentioned, it is often desirable to determine the minimum stimulation current level needed to evoke a neural response, i.e., the neural response threshold current level. However, noise and artifact signals contained in a neural recording signal often make it difficult to determine the neural response threshold current level.
Hence, an exemplary method of automatically determining a neural response threshold current level will now be described in connection with the flow chart of FIG. 17. The steps shown in FIG. 17 and described below may be modified, reordered, removed, and/or added to as best serves a particular application. Furthermore, it will be recognized that a computer, digital signal processor (DSP), mathematical application, or any other suitable device, signal processor, software, firmware, or application may be used to implement one or more of the steps described in connection with FIG. 17.
The exemplary method shown in FIG. 17 includes identifying a number of neural recording signals at different stimulation current levels that most likely include neural response signals and a number of neural recording signals at different stimulation current levels that most likely do not include neural response signals. The neural response threshold current level may then be determined by analyzing the amplitudes of the neural recording signals and their corresponding stimulation current levels.
As shown in FIG. 17, a neural recording signal is first obtained with A_step averages (step 300). In other words, the neural recording signal is the average of A_step neural recordings at a particular stimulation current level. The value of A_step may vary as best serves a particular application. An exemplary, but not exclusive, value for A_step is 32.
An SOR metric is then computed for the obtained neural recording signal (step 301). The SOR metric may be computed using the method already described in connection with FIG. 8.
The computed SOR metric is then compared against an SOR threshold value to determine whether the neural recording signal includes a neural response signal. As used herein and in the appended claims, unless otherwise specifically denoted, the variable SOR_crit will be used to represent the SOR threshold value. As described previously, if the SOR metric is too close to the SOR threshold, the probability of falsely identifying the neural recording signal as including a neural response signal or not is greatly increased. Hence, in step 302, the SOR metric is compared against SOR_min and SOR_max, lower and upper uncertainty limits, respectively, that surround the SOR threshold value. The values for SOR_crit, SOR_min, and SOR_max may vary as best serves a particular application. Exemplary values obtained using the equation
SOR = \left( \frac{1}{35} \sum_{t \in [22,27]} \left( \frac{\overline{m}(t) - \overline{a}_m(t)}{c(t)} \right)^{6} \right)^{1/6},
as described above, may be SOR_crit = 32, SOR_min = 27, and SOR_max = 35.
If the SOR metric value falls within these uncertainty limits (Yes; step 302), the SOR metric is too close to the SOR threshold to accurately identify the neural recording signal as having captured an actual neural response. Averaging in additional neural recording signals may reduce the effect of the noise and artifact signals enough to make a more accurate identification of the averaged neural recording signal. Hence, the method next determines whether additional neural recording signals may be obtained at the same stimulation current level to include in the average (step 303). If additional neural recording signals may be obtained at the same stimulation current level (Yes; step 303), the method returns to step 300 wherein additional neural recording signals are obtained and averaged.
In some instances, however, additional neural recording signals may not be obtained for inclusion in the average being calculated (No; step 303). For example, a medical practitioner may determine that additional time at the present stimulation current level may be harmful to the patient. Alternatively, the practitioner may have only a limited time for making the measurement. Hence, a maximum number of averages, A_max, at the present stimulation current level may be specified by the medical practitioner. The total number of signals recorded and averaged together at a specific current level may not exceed A_max. If it is determined that no additional signals may be obtained (No; step 303), the method proceeds to step 304, which will be described in more detail below.
Returning to step 302, if the SOR metric value does not fall within SOR_min and SOR_max (No; step 302), an accurate identification of the neural recording signal may be made. If the SOR metric is greater than SOR_crit, the neural recording signal is identified as including a neural response signal and the total number of neural responses R that have been identified is incremented by one (step 304). However, if the SOR metric is less than SOR_crit, the neural recording signal is identified as not including a neural response signal and the total number of "non-responses" NR that have been identified is incremented by one (step 305). As used herein and in the appended claims, unless otherwise specifically denoted, the terms "non-response" and "non-response signal" will be used interchangeably to refer to a neural recording signal that does not include a neural response signal.
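The three-way decision of steps 302 through 305 can be summarized in a small helper; the exemplary values SOR_crit = 32, SOR_min = 27, and SOR_max = 35 quoted above are used here.

    SOR_CRIT, SOR_MIN, SOR_MAX = 32.0, 27.0, 35.0   # exemplary values from the text

    def classify_recording(sor):
        """Classify a recording from its SOR metric (steps 302-305)."""
        if SOR_MIN <= sor <= SOR_MAX:
            return "uncertain"       # too close to threshold: average more signals
        if sor > SOR_CRIT:
            return "response"        # increment R (step 304)
        return "non-response"        # increment NR (step 305)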
If there has not yet been a neural response R identified (No; step 306), it is next determined whether the stimulation current level can be increased (step 309). If the stimulation current level can be increased (Yes; step 309), the stimulation current level is increased from the highest previously-tried current level (step 311). New neural recordings may then be obtained at this higher current level (step 300). This process may be repeated until a neural response R is identified (Yes; step 306).
However, in some instances, the stimulation current level cannot be increased (No; step 309). For example, a medical practitioner may stipulate that the stimulation current level cannot go above a particular maximum level. In these instances, more neural recording signals may be obtained and averaged at the current stimulation current level if the maximum number of allowable signals at that stimulation current level has not been exceeded (Yes; step 310). As mentioned above, collecting and averaging additional signals may enable a more accurate identification of an actual neural response. If more signals for the average cannot be obtained at the maximum current level (No; step 310), the method may end without identifying any neural responses R. In such a case, an option may be presented to the medical practitioner to obtain more neural recording signals at a higher stimulation current level and/or with a different recording electrode.
As shown in FIG. 17, once a neural response R has been identified (step 306), the method determines whether a minimum number of non-responses NR_min has been obtained (step 307). NR_min may vary as best serves a particular application. For example, in some embodiments, NR_min is two. If the minimum number of non-responses NR_min has not been obtained (No; step 307), the stimulation current level is gradually decreased (step 312) until the minimum number of non-responses NR_min are recorded (Yes; step 307).
Next, the stimulation current level may be increased (step 311) until a minimum number of responses are measured (Yes; step 308). R_min may vary as best serves a particular application. For example, in some embodiments, R_min is four. The neural response threshold current level may more accurately be determined with higher values of R_min.
It will be recognized that the order in which the neural responses R and the non-responses NR are obtained may vary as best serves a particular application. For example, the non-responses NR may be obtained first. The stimulation current may then be increased to obtain the neural responses R. In any case, the identification of a minimum number of responses and non-responses increases the accuracy and confidence in the identified neural response threshold.
In some instances, a stray measurement may be obtained. For example, a neural response R may be obtained at a stimulation current level that is in between stimulation current levels corresponding to two non-responses NR or vice versa. These stray measurements may be ignored or otherwise dealt with as best serves a particular application.
The method of obtaining a number of neural responses R_min and non-responses NR_min will be illustrated in connection with FIGS. 18A and 18B. For illustrative purposes only, both R_min and NR_min are equal to four. FIG. 18A shows a number of neural recording signals (320-328) obtained at different current levels. Confidence intervals (e.g., 330) corresponding to each neural recording signal (320-328) are also shown.
FIG. 18B shows the measurement sequence of the neural recording signals (320-328) of FIG. 18A. As shown in FIG. 18B, a first neural recording signal (320) is obtained at a stimulation current level of 350 μA. This neural recording signal (320) is identified as a non-response. The stimulation current level is then increased to 400 μA and a second neural recording signal (321) is obtained. This neural recording signal (321) is also identified as a non-response. This process is repeated until at least two non-responses (NR_min) are obtained. As shown in FIG. 18A, five non-responses (320-324) are obtained before a neural response (325) is identified. If the neural response (325) is obtained prior to obtaining the minimum number of non-responses NR_min, the current may be decreased to a level below 350 μA to obtain the desired number of non-responses.
As shown in FIG. 18A, the first neural response (325) is identified at a current level of 600 μA. The amplitude of the first neural response (325) detected is often relatively small and additional neural response signals may have to be obtained at this current level and averaged to determine whether a neural response is actually present. Hence, as shown in FIG. 18B, after reaching the maximum allowable stimulation current level of 750 μA, the stimulation current may be decreased to 600 μA where additional neural recording signals are taken and averaged until a neural response is detected.
FIG. 18A shows that neural responses are identified at current levels of 600, 650, 700, and 750 μA. Hence, the neural recording signals (325-328) have been identified as including neural response signals.
Once the minimum number of neural responses R_min and minimum number of non-responses NR_min have been obtained, as illustrated in connection with FIGS. 18A and 18B, the neural response threshold current may be determined by analyzing the identified neural responses and non-responses (step 313; FIG. 17). FIG. 19 illustrates an exemplary analysis of the neural responses and the non-responses that may be used to determine the neural response threshold current. The peak-to-peak amplitudes of the neural recording signals (320-328; FIG. 18A) obtained at each stimulus current level in the example described in connection with FIGS. 18A and 18B are plotted in the graph of FIG. 19. The unfilled points in the graph correspond to the non-responses (320-324; FIG. 18A) and the filled points correspond to the neural responses (325-328; FIG. 18A).
As shown in FIG. 19, a closest-fit line (340) may be fit to a number of the points corresponding to the neural responses (325-328; FIG. 18A) and non-responses (320-324; FIG. 18A) to analyze a trend in the data represented by the points. This closest-fit line (340) is referred to as a growth curve or contour.
The region (341) of the growth curve (340) between the highest stimulation current (342) corresponding to a non-response and the lowest stimulation current (343) corresponding to a neural response may be analyzed to determine the neural response threshold current level. In some embodiments, the neural response threshold current level is equal to a value that falls within this region (341). The accuracy of the neural response threshold current level may be maximized by increasing the number of signals averaged to obtain the responses and non-responses, increasing the number of neural responses and non-responses obtained, and/or decreasing the incremental step value of the stimulation currents from 50 μA to a smaller value.
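The growth-curve analysis can be sketched as below. The current levels follow the example of FIGS. 18A and 18B, the amplitudes are hypothetical placeholders, and taking the midpoint of the region is one assumed choice among the values that fall within it.

    import numpy as np

    currents = np.array([350, 400, 450, 500, 550, 600, 650, 700, 750])  # stimulation levels, uA
    amplitudes = np.array([2.0, 2.1, 2.3, 2.6, 3.0, 9.0, 14.0, 18.0, 23.0])  # hypothetical peak-to-peak, uV
    is_response = currents >= 600        # classifications from the example above

    # Growth curve (340): closest-fit line through the plotted points.
    slope, intercept = np.polyfit(currents, amplitudes, 1)

    # Region (341) between the highest non-response current (342) and the
    # lowest response current (343); the threshold lies within this region.
    highest_nr = currents[~is_response].max()    # 550 uA in this example
    lowest_r = currents[is_response].min()       # 600 uA in this example
    threshold_estimate = (highest_nr + lowest_r) / 2.0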
The method illustrated in FIG. 17 of automatically determining a neural response threshold may be performed by an application, processor-readable instructions, or the like that may be stored in a processor-readable medium. The processor-readable medium may be a hard drive, optical disc, or any other storage medium.
The preceding description has been presented only to illustrate and describe embodiments of the invention. It is not intended to be exhaustive or to limit the invention to any precise form disclosed. Many modifications and variations are possible in light of the above teaching.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings illustrate various embodiments of the present invention and are a part of the specification. The illustrated embodiments are merely examples of the present invention and do not limit the scope of the invention.
FIG. 1 shows a lead having an electrode array with electrodes E_1 through E_8 according to principles described herein.
FIG. 2 illustrates an exemplary stimulus that may be delivered to neural tissue via a stimulating electrode according to principles described herein.
FIG. 3 shows an exemplary cochlear implant system that may be used as a neural stimulator according to principles described herein.
FIG. 4 is a functional block diagram of an exemplary speech processor and an implantable cochlear stimulator according to principles described herein.
FIGS. 5A and 5B show a spinal cord stimulator (SCS) system that may be used as a neural stimulator according to principles described herein.
FIG. 6A is a graph depicting an exemplary evoked neural response signal according to principles described herein.
FIG. 6B is a graph depicting an exemplary artifact signal according to principles described herein.
FIG. 6C is a graph depicting an exemplary neural recording signal according to principles described herein.
FIG. 7A is a graph depicting an exemplary noise signal according to principles described herein.
FIG. 7B is a graph depicting the effect of the noise signal of FIG. 7A on the evoked neural response signal of FIG. 6A according to principles described herein.
FIG. 7C is a graph depicting the effect of the noise signal of FIG. 7A on the artifact signal of FIG. 6B according to principles described herein.
FIG. 7D is a graph depicting the effect of the noise signal of FIG. 7A on the neural recording signal of FIG. 6C according to principles described herein.
FIG. 8 is a flow chart illustrating an exemplary method of automatically identifying a neural recording signal that includes a neural response signal according to principles described herein.
FIG. 9 is a flow chart illustrating an exemplary method of denoising a neural recording signal according to principles described herein.
FIG. 10 is a graph showing the percent of unaccounted variance in a matrix of evoked neural recording signals as a function of number of components according to principles described herein.
FIG. 11 is a graph illustrating the difference of standard deviations of the errors in the beginning versus in the end of the waveforms in the matrix of evoked neural recording signals as a function of the number of components included according to principles described herein.
FIG. 12 is a graph showing seven basis functions or components according to principles described herein.
FIG. 13 is a graph showing the amount by which noise is reduced for each point of the waveform representing the incoming neural recording signal according to principles described herein.
FIG. 14 is a graph illustrating the relative contribution of the noise and the artifact model to the overall uncertainty of the artifact model according to principles described herein.
FIGS. 15A and 15B are graphs illustrating net confidence intervals that are used to determine whether a neural recording signal includes a neural response signal according to principles described herein.
FIG. 16 is a graph that shows error rates of the automatic neural response identification method when compared to visual identification of neural response signals by expert medical practitioners for a number of different threshold values according to principles described herein.
FIG. 17 is a flow chart illustrating an exemplary method of automatically determining a neural response threshold current level according to principles described herein.
FIG. 18A shows a number of neural recording signals obtained at different current levels according to principles described herein.
FIG. 18B shows the measurement sequence of the neural recording signals of FIG. 18A according to principles described herein.
FIG. 19 is a graph illustrating an exemplary analysis of the neural responses and the non-responses that may be used to determine the neural response threshold current according to principles described herein. | 
The Ivy Tech Community College Libraries are full partners in the educational and workforce development mission of the College. The libraries teach research strategies, support the curriculum, and encourage independent and lifelong learning by providing the space, information resources, instruction, and related services essential for academic success. The Libraries advance information literacy, critical thinking, and collaborative learning in a welcoming environment that promotes, and is enriched by, the diverse cultural and intellectual interests of students, faculty, and community.
Purpose and Use
The Library Staff invite you to visit the libraries to enjoy quiet study areas, pursue research projects, or just relax with a good book or magazine. Our libraries are quiet study areas. Using them is a privilege that is shared by all. Therefore,
• Everyone is expected to work quietly and keep conversation to a minimum.
• Cell phones are to be used courteously - turn ringers off or set to vibrate and step outside of the libraries to talk and keep conversations private.
• Open food and drink containers are not allowed in the Library; however, drinks with secure screw-caps are allowed if they are kept closed. (Keep the computer area clear of all drinks, however lidded.)
Information Resources Collection Policies
The primary mission of the library at Ivy Tech is to provide access to materials, information, and services that support and supplement the educational mission of the College.
To further develop its collection, the Library's top priority is to purchase information resources and online access that directly support the needs of the students and faculty. The Library also recognizes its responsibility to respond to the needs of the College, faculty, administration, and staff, and to provide general information and some recreational reading for all its users. | https://library.ivytech.edu/sellersburg3/policies |
NEW YORK CITY (February 18, 2010): The National Federation of the Blind, which has led the fight for the equality of blind Americans for nearly seventy years, expressed support for the proposed settlement between Google and authors and publishers today in a hearing before Judge Denny Chin in the Federal District Court for the Southern District of New York. Dr. Marc Maurer, President of the National Federation of the Blind, told the court that the proposed settlement should be approved because it will provide access to millions of books for blind Americans.
Dr. Maurer said in part: “Digital books are quickly becoming the norm. This should be good news for the blind. Digital information can easily be presented in auditory, large print, or refreshable Braille formats. However, despite the simplicity of building accessibility provisions into digital management products, many of the manufacturers of the technology have refused to consider doing so. On the other hand, Google will give us access to 10 million books. In the process of doing this, Google will help to make the point that access to information for all is achievable and desirable. . . . We believe that access to the storehouse of ideas, books, is essential for participation in a free society. The ability to think, to write, to invent, and to create opportunity expands in the presence of the writings of others. If our talents are to be used, we must be able to read.”
The terms of the settlement among Google, the Authors Guild, the American Association of Publishers, and five individual publishing companies expressly allow Google to provide the material it offers users “in a manner that accommodates users with print disabilities so that such users have a substantially similar user experience as users without print disabilities.” A user with a print disability under the agreement is one who is “unable to read or use standard printed material due to blindness, visual disability, physical limitations, organic dysfunction, or dyslexia.” Blind people, like other members of the public, will be able to search the texts of books in the Google Books database online; purchase some books in an accessible format; borrow accessible digital copies of books through participating libraries; or access accessible books at libraries and other entities that have an institutional subscription to the Google Books database. | https://nfb.org/about-us/press-room/national-federation-blind-supports-google-settlement-court |
In This Issue:
Generating Power in Oakland to Take Back Public Land ● CDFIs Stepped Up During the Shutdown ● Who Will Benefit From Opportunity Zones? ● Loans *and* Policy for More Bay Area Affordable Housing ● Also: Jobs ● Shelter Shorts ● Events +
Loans *and* Policy for More Bay Area Affordable Housing (Miriam Axel-Lute, Shelterforce): A pair of funds backed by philanthropic heavy hitters—convened by the Chan-Zuckerberg Initiative—tries to take advantage of a moment when all eyes are on housing to both bridge gaps in the capital market and influence policy makers...
Shelter Shorts—the Week in Community Development (Shelterforce Staff): Coffee Shops Into Community Assets | Sen. Crapo's Plan to Eliminate Affordable Housing Goals | Zoning Out Families | Zombie Properties in Florida | Social Determinants of Health in the News | Climate Resilience | And More Quick Takes From Our Editors
Who Will Benefit From Opportunity Zones? (Tanner Howard, Organizer): The competition to receive funding for Opportunity Zones from potential investors was a major focus of a recent event led by the Urban Institute, but the question of who would benefit most from these investments marked a clear division among participants.
CDFIs Stepped Up During the Shutdown (Miriam Axel-Lute, Shelterforce): The recent government shutdown showed how crucial community development organizations, especially community development lenders, can be to keeping families and small businesses afloat.
Generating Power in Oakland to Take Back Public Land (Vanessa Riles, East Bay Housing Organizations): When a vacant lot was close to becoming the home of a 24-story, market-rate development, local activists prevented it from happening by...
Events
Thursday, Feb. 7, 1 p.m. CST | Creating Accountability With City Scrapers | Through this webinar, participants can learn how to replicate the open source City Scrapers project, which allows stakeholders to track all those important but often overlooked municipal meetings and decision-making processes.
Monday, Feb. 11, 3 p.m. EST | What Really Works in Homelessness Prevention: Lessons from Literature and the Field | Presented by Abt Associates’ Center on Evidence-based Solutions to Homelessness, this webinar will feature practitioners from three communities who will discuss their experience planning or implementing prevention activities, and how the evidence aligns with their work on the ground.
Resources
The National Low Income Housing Coalition just released Opportunities to End Homelessness and Housing Poverty in the 116th Congress, a memo to incoming senators and representatives on concrete steps they can take to help address one of the most critical issues facing extremely low-income families today: the lack of decent, accessible, and affordable housing.
You Said It!
On Hospital System Helps Housing Partners Unlock Capital: “When hospitals make investments in conditions outside their walls—like affordable housing and access to healthy foods—everyone wins. This is the story of how Dignity Health has been able to work with other partners in San Bernardino to expand affordable housing in the community, guided by community priorities and bringing in new resources and...” — Rich Besser
On Mental Health and Community Development: “Love this article because it brings out the larger issue that every industry needs to do what it can to cultivate positive mental health conditions, including community dev. All us who have been touched by the tragedy of suicide understand the urgency. Thanks for writing!” — John Lieber, via Twitter
On The Struggle for Housing in Los Angeles: A Review of City of Segregation: “Very thorough, comprehensive, fair review of this important book. Good job.” — David Willingham
“This is humbling and leaves me without words — identifying so much of what I hoped I had succeeded in getting across...” — Andrea Gibbons, via Twitter
Sign up for Shelterforce Weekly!
In Case You Missed It
Mental Health and Community Development
We Need State Law That Allows Multifamily Housing
Greening Vacant Lots: Low Cost, Big Effect in Philly
The Struggle for Housing in Los Angeles: A Review of City of Segregation
Help support Shelterforce, the original voice of community development! Subscribe to Shelterforce in print!
Jobs
External Relations Manager ● The Center for NYC Neighborhoods seeks an individual who is driven to work for a mission-oriented organization and has the skills to expand our fundraising and income generation through the strategic use of impact storytelling, events, and other cultivation strategies. This is an exempt position that reports to…
Senior Project Manager ● TNDC seeks a Senior Project Manager to perform a wide variety of tasks related to planning and developing affordable housing for Tenderloin Neighborhood Development Corporation (TNDC). The Senior Project Manager coordinates and implements all activities relating to project development from...
Racial Equity & Inclusion Program Manager ● NPH is looking for an experienced individual dedicated to racial equity to move our work and programs forward in support of improving internal and member/sector-wide diversity, equity, and inclusion. The Racial Equity & Inclusion Program Manager is responsible for developing, growing and...
Policy Director ● NPH seeks a dynamic, innovative, collaborative, and team-oriented Policy Director to provide strategy, leadership and expertise in the Bay Area and statewide on affordable housing policy. The position provides key staff support to the organization in the areas of public policy development, strategic planning, legislative and…
Multifamily Housing Lender ● Seattle's Office of Housing works to build strong, healthy communities and increase opportunities for people of all income levels to live in our city. We are looking for a Multifamily Housing Lender to help the City decide which projects should or should not be funded. You will review and evaluate multifamily and bridge loan applications for policy and…
Grant Support Technician ● The City of Medford seeks a Grant Support Technician who performs a wide variety of professional and administrative services related to grant administration and departmental programs and activities. This position reports to the Housing and Community Development Principal Planner and...
More Jobs
Real Estate Asset Manager/Affordable Housing Real Estate Development ● DHIC
Closing Specialist ● Houston Community Land Trust
Homebuyer Support Coordinator ● Houston Community Land Trust
Editor, Miriam Axel-Lute ● Managing Editor, Lillian M. Ortiz ● Senior Editor/Development Manager, Keli A. Tianga ● Assistant Editor, Elizabeth Oguss ● Publisher, Harold Simon ● Assistant Publisher, Ana Sanchez Bachman
Made possible by the generous support of Robert Wood Johnson Foundation ● Kresge Foundation ● NeighborWorks America ● Kaiser Permanente ● Bank of America Foundation ● JPMorgan Chase Foundation ● John D. and Catherine T. MacArthur Foundation ● Citi Community Development ● PNC Bank Foundation ● Hyde and Watson Foundation ● Ocwen Financial Corporation ● Valley National Bank ...and with the support of readers like you.
| https://www.cocnews.org/2019/02/chan-zuckerberg-funds-loans-and-policy.html
The guide provides considerations and strategies for interdisciplinary teams, peer specialists, clinicians, registered nurses, behavioral health organizations, and policy makers in understanding, selecting, and implementing evidence-based interventions that support older adults with serious mental illness.
Prevention and Treatment of HIV Among People Living with Substance Use and/or Mental Disorders
Screening and Assessment of Co-Occurring Disorders in the Justice System
Published: June 2019
This report provides evidence-based practices for screening and assessment of adults in the justice system with mental illness, substance use disorders, or both. It discusses the importance of instrument selection for screening and assessment and provides detailed descriptions of recommended instruments. | https://store.samhsa.gov/?v=substances&f%5B0%5D=audience%3A4999&f%5B1%5D=audience%3A5000&f%5B2%5D=format%3A5029&f%5B3%5D=format%3A5030&f%5B4%5D=population_group%3A5319&f%5B5%5D=population_group%3A5329&f%5B6%5D=publication_category%3A6039&f%5B7%5D=publication_category%3A6040&f%5B8%5D=publication_target_audience%3A6037&f%5B9%5D=treatment_prevention_and_recovery%3A5483&f%5B10%5D=treatment_prevention_and_recovery%3A5510&f%5B11%5D=treatment_prevention_and_recovery%3A5515 |
When I was fifteen years old, I started my first business – making and selling swimsuits. Marketing intrigued me; it combined my creativity and intellectual curiosity and allowed me to curate a vision – creating something from nothing. To me, marketing is all about innovation and creation; it is both an art and a science, and that is why I want to pursue marketing in my future career. I want to envision, strategize, and create, and do so using technology.
My internship for summer 2018 with Stewart’s Automotive Group in Kingston, Jamaica allowed me to work with and market various brands including: Mercedes- Benz, Jaguar, Land Rover and Suzuki. In Jamaica, most private sector companies do not offer internships to students, so I reached out to Stewart’s Automotive Group and the company agreed to create an unpaid internship for me. At times, it was extremely stressful, particularly because I was doing the exact same work as the full-time employees. I promoted brand engagement through digital marketing, creating photo compositions and video content to encourage consumer interaction. I also employed my love of writing and anthropology by creating interesting written content to increase audience participation and used data science skills to analyze trends in consumer behaviour and interaction from social media platforms. Furthermore, I helped to curate public company projects and events.
I learned a lot about myself, and about the professional world. In fact, I learned a lot more from the hurdles of this experience than I have from my previous experiences that were all smooth sailing – and for that, I am truly grateful. Whether it’s as a future employee or employer, I’m better equipped with the tools to navigate difficult situations without compromising the task at hand. This internship has also piqued my interest in my major – anthropology, as I better understand how local political, socioeconomic and cultural factors create a unique professional realm. Jamaica is a very interesting place to live and work, and I want to do ethnographic research at some point during my Bowdoin career to uncover the foundations behind this environment. | https://careerplanning.bowdoin.edu/amanda-rickman20/ |
Received: 22 May 2003
Revised: 25 November 2003
Accepted: 18 December 2003
Published online: 4 March 2004
Deposition of pure and Ge-doped silica as well as silicon oxynitride films has been studied in a recently developed matrix distributed electron cyclotron resonance (MDECR) reactor. Process parameters were optimized in order to obtain optical quality thin films at low substrate temperatures and high deposition rates without post-deposition treatment. The choice of injection system is shown to be of crucial importance for the deposition of high quality materials in low pressure PECVD. It has been found that injecting silane near the surface makes it possible to obtain films with a low OH absorption independently of silane flow (i.e., growth rate) within a certain range of process parameters. On the contrary, in the case of uniform distribution of silane in the reactor volume, the hydrogen content increases with silane flow, which affects the quality of films deposited at higher rates. With the optimized injection system, stress-free silica films with a low absorption have been deposited at rates up to 70 nm/min at temperatures lower than 150 °C. Non-absorbing oxynitride films with a controllable refractive index ranging from 1.46 to 1.86 have been obtained from SiH4/O2/N2 mixtures. Ge-doped silica films with a Ge content of up to 4% have been deposited using a mixture of GeH4 in H2 as a dopant. The properties of the deposited films have been studied as a function of process parameters. The results show that the MDECR concept, which permits, in principle, unlimited scaling of substrate size, can be the technology of choice for the deposition of optical thin films and functional coatings.
The Canary Islands are situated in the blue Atlantic Ocean, offering tropical temperatures and varied landscapes: volcanoes, forests, and beautiful beaches of fine sand.
The archipelago, which is made up of the islands of Tenerife, Gran Canaria, Fuerteventura, La Palma, Lanzarote, La Gomera and El Hierro, is an ideal place for those who love the sea, fresh and beautiful nature, and villages rich in folklore.
It is also the perfect place for those who love nature, for those who prefer excursions to natural parks, and for those who want to taste typical Canarian gastronomy while taking part in folkloric events of rare intensity.
The cuisine of the Canary Islands is particularly interesting because it is very much influenced both by the European tradition and by the African and tropical cultures. Most of the dishes are based on a combination of local flavors and continental tastes.
Tenerife is the largest island of the Canary archipelago; it has a distinctive cultural and artistic heritage and a high tourist flow thanks to the beauty of its landscapes and its many opportunities for fun.
Fuerteventura is the island closest to the African coast and has many desert landscapes; with its beating sun and impetuous winds, it is the favorite destination for lovers of windsurfing. Together with Gran Canaria – home to many historical and religious buildings and a colonial charm – it is also one of the most interesting islands from a cultural point of view.
El Hierro is particularly wild, with uncontaminated nature and nightlife linked to Canarian folklore; on the other islands, beach parties and beautiful venues meet all tourists' needs.
E.C. | http://blog.icastelli.net/en/2013/08/01/from-island-to-island-with-freedom-the-canary-islands/ |
Firstly, a very Happy New Year! I hope you enjoyed the Festive Period in the company of loved ones and great friends.
The following is an article I read this morning by Craig L. Israelsen, Ph.D., a Financial Planning contributing writer in Springville, Utah; he is an executive in residence in the personal financial planning program at the Woodbury School of Business at Utah Valley University.
The article is aimed at US Financial Planners but during this uncertain period, I thought you might find the central message reassuring and interesting. I have highlighted the most interesting statistics, just in case you don’t have time to read the entire piece.
“Ask your client this question: "What was the last movie you watched?"
They probably didn’t have to think too hard to remember. Then try this one: "How about a movie you watched in 1985?"
No dice — right?
Clients recall the performance of their investments similarly; that is, they remember recent performance with greater clarity. This trait, called recency bias, leads them to extrapolate into the future the good or bad they are experiencing in the moment. That skews their expectations — for better or worse — and distorts their view.
But there’s one notable exception to recency bias: the period in which your client’s portfolio suffered a significant loss. Referred to as loss aversion, this sentiment is also quite real. Investors simply don’t like big losses. Case in point: Have your clients forgotten about 2008?
So a recent loss in portfolio value can trigger both recency bias and loss aversion, and that can lead to “sell everything” phone calls. In the worst case, this type of fear cycle can wreak havoc if long-term plans are abandoned abruptly.
A recent loss in portfolio value can trigger "sell everything” phone calls.
In the chart called “Big Picture” we see a summary of the annual returns of seven core asset classes (indexes) over the past 49 years — as well as two portfolios. The first portfolio included all seven indexes in equal allocations; the second was a 60/40 portfolio consisting of 60% U.S. large stock and 40% U.S. bonds. Both portfolios were rebalanced annually over the 49-year period of analysis from 1970 to 2018. The calendar year losses of each individual index and both portfolios are shaded in pink. It’s these pink boxes that test the resolve of investors. But, as can be seen, the losses are relatively infrequent.
For example, over the 49 years from 1970-2018, large cap U.S. stock has produced positive nominal calendar year returns 80% of the time and generated an average annualized return of 10.21%. If we consider the impact of inflation, large cap U.S. stock had positive real returns 71% of the time and an after-inflation (or real) average annualized return of 6.00%.
By comparison, U.S. cash (as measured by the 90-day Treasury bill) had a 49-year average annualized return of 4.80% and positive nominal annual returns 100% of the time. But, after factoring out the impact of inflation (as measured by the CPI) the average real return was 0.80% and real annual returns that were positive only 57% of the time.
More importantly, let’s consider the performance of the two portfolios. First, the seven-asset portfolio had positive nominal returns 86% of the time and a 49-year average annualized return of 9.48%. After inflation is factored out, the average annualized real return has been 5.30% with positive real returns 73% of the time. The 60/40 portfolio had positive nominal calendar year returns 80% of the time and a 49-year return that was 5 bps lower at 9.43%. After inflation, the 60/40 portfolio had positive returns 71% of the time and a real return of 5.25%. This information puts performance over nearly five decades into perspective.
The 49-year historical performance of large-cap U.S. equities was represented by the S&P 500 Index, while the performance of small-cap U.S. equities was captured by using the Ibbotson Small Companies Index from 1970-1978 and the Russell 2000 Index from 1979-2018. The performance of non-U.S. equities was represented by the Morgan Stanley Capital International EAFE Index (Europe, Australasia, Far East) Index. U.S. bonds were represented by the Ibbotson Intermediate Term Bond Index from 1970-75 and the Barclays Capital Aggregate Bond Index from 1976-2018. As of late 2008, Lehman Brothers indexes were renamed Barclays Capital indexes.
The historical performance of cash was represented by three-month Treasury bills. The performance of real estate was measured by using the annual returns of the NAREIT Index from 1972-1977 (annual returns for 1970 and 1971 were based on research in the book “Real Estate Investment Trusts: Structure, Performance, and Investment Opportunities,” Table 2.2). From 1978-2018 the annual returns of the Dow Jones U.S. Select REIT Index were used (prior to April 2009 it was the Dow Jones Wilshire REIT Index). Finally, the historical performance of commodities was measured by the Goldman Sachs Commodities Index. As of Feb. 6, 2007, the GSCI became known as the S&P GSCI.
There is a key observation that should not be obscured by so much data: Each index (i.e., asset class) that we are evaluating had positive calendar year returns more than 68% of the time (based on nominal returns) and at least 57% of the time if using “real” inflation-adjusted returns. More importantly, the two portfolios we are evaluating had positive calendar year real returns at least 71% of the time.
Having a clear understanding of long-term asset class performance (as demonstrated in “Big Picture”) can minimize the potentially negative impact of recency bias during and after periods of market volatility — particularly when the volatility results in portfolio losses. The reality is that a broadly diversified portfolio will generate positive nominal returns nearly 90% of the time over time measured in decades, not months. Of course, a person who only invests in a diversified portfolio for two years should not expect positive returns in 90% of the 24 months. Even a diversified portfolio can experience two consecutive negative calendar year returns, such as in 2001 and 2002.
In summary, the impressive performances of the asset classes and portfolios in this study are over a 49-year period. Said differently, long-term results take a long time to replicate. The key to achieving long-term results is to stay in the saddle for a long time. The challenge is our natural instinct to avoid losses (loss aversion) and our tendency to over-emphasize what we have experienced most recently (recency bias). (For more discussion about portfolio losses see “You Can’t Win if You’re Afraid to Lose” in the October 2018 issue of Financial Planning).
The solution to countering recency bias is accurate information and proper perspective. This article has provided you with nearly five decades of information. With that information, work to help clients develop a proper perspective about the impressive performance demonstrated by a diversified investment portfolio over the past 49 years.”
Although this study concentrates on US data, results for diversified portfolios with a UK equity bias are similar.
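To make the arithmetic in Israelsen's piece concrete – the annual rebalancing and the nominal-to-real adjustment – here is a minimal Python sketch. The 4.80% nominal T-bill return comes from the article; the ~4% average inflation figure is an assumption chosen so the result lands near the quoted 0.80% real return, and the yearly asset returns are invented placeholders, not the chart's index data:

```python
def real_return(nominal: float, inflation: float) -> float:
    """Inflation-adjusted ("real") return via the Fisher relation."""
    return (1.0 + nominal) / (1.0 + inflation) - 1.0

# T-bills: 4.80% nominal is from the article; ~4.0% average CPI
# inflation is a placeholder assumption, not a figure the article gives.
print(f"T-bill real return: {real_return(0.048, 0.040):.2%}")  # ~0.77%

# Annually rebalanced equal-weight portfolio: with weights restored at
# each year end, the portfolio's return for a year is the average of
# the asset returns. Rows below are invented data for 7 asset classes.
yearly_returns = [
    [0.10, 0.12, 0.08, 0.04, 0.02, 0.09, 0.15],
    [-0.05, -0.08, 0.01, 0.06, 0.03, -0.02, 0.11],
    [0.07, 0.05, 0.10, 0.03, 0.01, 0.12, -0.04],
]

value = 1.0
for year in yearly_returns:
    value *= 1.0 + sum(year) / len(year)  # rebalancing resets weights

annualized = value ** (1.0 / len(yearly_returns)) - 1.0
print(f"Growth of $1: {value:.3f}; annualized: {annualized:.2%}")
```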
As always, if you have any questions about the contents of this e-mail or any aspect of your financial planning, please do not hesitate to get in touch. | https://www.clearwaterwealth.co.uk/blog/2019/1/23/how-to-avoid-the-urge-to-sell-everything-when-the-going-gets-tough |
Can These Bones Live? This is the question posed in Chapters 10 and 11 of Alister McGrath’s book A Fine-Tuned Universe: The Quest for God in Science and Theology where he gives a brief overview of the chemical requirements for the origin of life.
There are two facets to this discussion.
The first is really a continuation of the general observation of fine-tuning in the universe. Life as we know it requires (1) the intrinsically flexible chemistry of Carbon, with Oxygen, Nitrogen, and Phosphorus also thrown into the mix, and (2) the unique properties of liquid water (H2O). The presence of these elements and the presence of a water layer on earth arise from the fine-tuning of the primitive universe to produce the right chemical elements and the right environments.
The second is the origin of life itself – a far more complex puzzle, and McGrath only sketches the tip of the iceberg.
What is life? When can an ensemble of molecules, comprised of atoms, themselves composed of elementary particles, be said to be “alive”?
On the most elementary level life consists of an enclosed system capable of metabolism and reproduction – although the latter needs some nuance. So a discussion of the origin of life must consider these elements.
Evolution provides no mechanism for the formation of the first cells. There are several complex questions in the formation of life – more than I can list in a simple blog post. But we can consider three as examples of the kinds of questions faced.
1. The synthesis of the fundamental organic building blocks, the molecules of life. Cosmic organic chemistry is relatively common – spectroscopic measurements have identified many organic molecules in space, and meteorites and comets have provided more evidence. The evidence includes the formation of many simple compounds, but also of more complex and fundamental molecules, including amino acids – the building blocks of proteins.
The Wikipedia article appears to give a balanced account of this meteorite. There is some evidence for terrestrial contamination, but there also seems to be clear evidence for extraterrestrial formation of amino acids. The famous Miller-Urey experiment also demonstrated the formation of complex organic molecules under conditions potentially similar to those of the early earth. What all these experiments and discoveries demonstrate is that organic chemistry is robust and that amino acids are stable and kinetically favored under a range of conditions.
2. The synthesis of an information-carrying molecule. It is widely believed that RNA (ribonucleic acid) filled this function initially. RNA is capable of a multitude of functions – it can carry information, it can self-assemble and self-replicate, it can act as a catalyst (a ribozyme), it can synthesize proteins (which are even better catalysts – enzymes), and eventually it can be modified to produce DNA (deoxyribonucleic acid). But there is no consensus on how the synthesis of nucleotides from prebiotic precursors came about – no known reactions appear capable of this synthesis. Research is ongoing.
3. Development of contained organisms. The transition from "chemical soup" to life requires more than nucleic acids and amino acids, however. It requires the formation of a protoorganism. "This "protoorganism" can be thought of as a single cooperative aggregate consisting of a protocontainer, a protometabolism, and protogenes." (p. 137) Here we really get into the importance of the unique properties of water as the biological solvent. Water enables complex acid-base equilibria; it dissolves polar molecules and excludes non-polar ones (think of the separation of oil and water). Lipids combine nonpolar tails and polar headgroups in one molecule. In water these spontaneously form vesicles – rudimentary "cell membranes." Water supports complex chemistry and the formation of complex structures. It is speculated that a protoorganism could form from lipid vesicles in water encapsulating RNA and other simple molecules.
Can we see the hand of God in this process?
It must be admitted that we have no firm ground for speculating on the mechanism for the initial formation of life at the present time. Science continues to progress however, and perhaps we will have a viable explanation, sooner rather than later. On the other hand, a natural explanation doesn’t negate the presence of fine-tuning in the universe, or eliminate the hand of God in the process. And here we (and McGrath) return to Augustine.
It will also be clear that Augustine's notion of rationes seminales plays an important heuristic role in engaging with the complex chemical phenomena that have briefly been described in this chapter. The emergence of chemical complexity precedes that of biological complexity and is generally ignored in accounts of biological evolution. Yet the importance of this point is clear: without an inherent capacity for chemical complexification, the foundations for biological development would not have been in place. These chemical properties must be regarded as emergent. Augustine's image of the dormant seed, awaiting the right conditions for germination, is a helpful analogue for understanding how certain chemical properties emerge under appropriate circumstances.
What do you think – What constitutes life? What role do you think that God played in the formation of life? Did he embed a seed in the big bang or did he play a more personal role guiding and directing the process? | https://www.beliefnet.com/columnists/jesuscreed/2009/07/a-fine-tuned-universe-5-rjs.html |
Using data from the NASA/ESA Cassini mission, we have now discovered molecules on Saturn’s largest moon Titan which we think drive the production of complex organic compounds. These are molecules that have never been seen in our solar system before. The discovery not only makes Titan a great contender for hosting some sort of primitive life, it also makes it the ideal place to study how life may have arisen from chemical reactions on our own planet.
The molecular building blocks of life are organic compounds including amino acids that can be assembled into proteins, RNA and DNA in living cells. To date, scientists have found these compounds in meteorites, comets and interstellar dust. But the problem is that these materials formed millions of years ago, which means we have no way of knowing how they were created.
Excitingly, it seems these compounds are being created on Titan today. Sunlight and energetic particles from Saturn’s magnetosphere drive reactions in the moon’s upper atmosphere, which is dominated by nitrogen, methane and hydrogen. These lead to larger organic compounds which drift downwards to form the moon’s characteristic “haze” and the extensive dunes – eventually reaching the surface.
| |
As a Software Development Manager, you will build, lead and inspire a cross-functional team to deliver product features and services while developing a team culture of ownership that reinforces the core values of KOHO. You will help prioritize work for your team, making difficult tradeoffs on design based on business constraints, while ensuring on-time deliverables and high-quality software. You will provide technical guidance and mentor developers to help them reach their potential and ship a world-class product.
Please Note: This is a remote position based in Canada that is available to those who are legally entitled to work in Canada.
What You’ll Do
– Manage the growth and development of a high-performing technical team
– Lead and inspire engineers, product managers, and designers to build great products while managing deadlines and priorities
– Provide technical guidance including system design, and code review
– Continuously improve software engineering practices
– Own the craftsmanship, reliability, and scalability of your solutions
– Participate in interviewing, hiring decisions, onboarding, and mentoring new engineers
Who You Are
– You are collaborative and understand the importance of working as part of a team
– You have experience with web or mobile applications
– You are an owner and are invested in the success of the product and the team
– You know how to reach consensus with your peers
– You are focused on the customer and the details that make their experience amazing
– You are practical, making the best use of time and resources to find the simplest solution that works
Desired Skills & Experience
– Bachelor's Degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent work experience
– Previous experience as a team lead or managing a team is a plus
– Strong analytical skills and a strong sense of ownership and urgency
– Fluent English speaker with excellent written and verbal communication skills, sufficient for a remote-first environment
– Ability to concisely communicate about complex technical issues
– Experience with cross-functional engineering teams is preferred
– Familiarity with Software Development Best Practices
| https://greatergoodjobs.com/job/software-development-manager-koho/ |
The Senior Professional in the Enterprise Business Continuity program within the Enterprise Risk Management ('ERM') function is responsible for assisting in the development, implementation, maintenance, and governance of the Enterprise Business Resiliency Program framework. This role helps to drive and deliver effective business continuity strategies to support and, in time of crisis, recover the company's critical business functions.
Responsibilities
The Senior Professional works with assigned parts of the Enterprise to influence and mentor business continuity best practices, ensuring alignment with general regulatory requirements. The Senior Professional oversees the BC lifecycle, including annual BC plan maintenance, testing and BIA coordination, by building collaborative relationships with business stakeholders, bridging contingency gaps and understanding potential risks.
Additional Job Description
As a subject matter expert, this role is responsible for guiding the assigned business areas through the business continuity planning lifecycle to ensure effective and efficient plans that identify, prevent, detect and correct risk and noncompliance with applicable rules and regulations.
This individual contributor role drives the implementation of Humana's Business Resilience Program and provides assistance, expertise, and a common framework to the organization to follow for assessing business impacts, developing business continuity plans and recovery strategies.
This position involves partnership with many internal teams, as well as ongoing collaboration with Humana's Crisis Management, IT Disaster Recovery, Corporate Security, Legal and Compliance teams, to facilitate quality and innovative risk management solutions that are calibrated across departments.
We stand behind our values and encourage our leaders to rethink routine, thrive together, pioneer simplicity, cultivate uniqueness and inspire health.
Other Job Requirements
Foster Humana values in all interactions with colleagues and business partners, including:
Cultivate Uniqueness. Appreciate individual uniqueness, creating an environment where everyone can fully be themselves, reflecting all of us and the communities we serve.
Rethink Routine. Work and learn together, transforming the norm to strengthen operational excellence and outcomes.
Thrive Together. Collaborate openly, building positive relationships to achieve strong, sustainable results for us and the people we serve.
Pioneer Simplicity. Take personal accountability, working together to create simple, personalized, quality experiences.
Help to ensure that plans and strategies are appropriate, cohesive and viable, and could be used to recover key functions within required time frame
Help to conduct and facilitate BIAs
Develop training programs for stakeholders in the correct implementation of BC & DR processes and standards, and impart training to ensure recoverability of business processes and supporting services across departments
Support review and maintenance of business continuity policy, standards and processes.
Support internal reporting and tracking of business continuity related issues and remediation activities
Support the identification of Business Continuity related risks (internal / external), the assessment of their likelihood, as well as potential impacts and risk mitigation plans
Proactively identify and implement BCP program and process improvements
Provide ongoing SME guidance and assistance to departments on business continuity matters
Design, coordinate and execute BCP/DR annual test exercises for critical business processes, and produce test reports including lessons learned. Coordinate follow up on lessons as required
Review existing and proposed plans for recoverability effectiveness and identify opportunities for improvement
Provide guidance to management in self-assessing their control environment
Assist Crisis Management / Incident Management teams during service disruption events, and contribute to process improvement initiatives
Required Qualifications
Bachelor's Degree or equivalent experience
Ability to articulate business continuity concepts and methodology
Disaster Recovery Institute International (DRII), Business Continuity Institute (BCI), or other business continuity professional certification in place or must be achieved within 2 years
Experience facilitating tabletop or practical exercises
Excellent written and verbal communication skills that take complex ideas and simplify them for the audience
Ability to work effectively as a member of a cross-functional team
Ability to work in a fast paced, dynamic and changing environment while managing multiple projects simultaneously
Ability to work independently, effectively manage competing demands and priorities
Strong analytical and problem solving skills
Must be passionate about contributing to an organization focused on continuously improving stakeholder experiences
Familiarity with Governance, Risk and Compliance as it applies to BCDR planning
Experience with Business Resumption Planning, Disaster Recovery Planning, conducting BIAs, etc.
Demonstrate the ability to think strategically and drive tactical execution
Experience with data analytics and dashboarding and BCM tools a plus (e.g. Power BI, Archer, etc.)
Data-driven, able to architect and analytically develop robust management monitoring dashboards to report on key risk metrics.
In this article, we'll discuss the question "what is program development?" and the important details involved in it.
Today, there are challenges facing aerospace development programs on almost every level imaginable. Teams in the aerospace and defense (A&D) industries still have trouble with upfront planning when they begin a new program.
What tasks are required? How much time will each task take? How many resources are needed? Poor program execution that results from a lack of understanding at the beginning of a program makes it nearly impossible to meet budgets and deadlines later in the process.
Resources frequently fall short of requirements, which makes programs appear to lack discipline, especially when a new requirement or change in scope arises. The complexity skyrockets when software and electronic systems are included in the mix.
Although there are many internal and external challenges for A&D teams, it is also an exciting and innovative time. Get in touch with our team at CSMI for more information on how program development can help your business thrive.
The Challenges of Digitalization in the Aerospace Industry
Modern aerospace programs can rely on digitalization as their main advantage in the face of such complexity. Simply put, digitalization increases productivity by revealing how certain requirements affect later engineering and manufacturing processes.
This can be done by using a comprehensive digital twin and digital thread, which provide a thorough understanding of A&D products and procedures. Digital thread-based processes enable multidisciplinary processes and weave relevant data together to present a complete picture of the product, production, and process in a logical and useful manner.
Instead of avoiding complexity, aerospace teams and their tier-one suppliers should embrace it and use it as a distinctive competitive advantage.
Companies must move more quickly. They must reduce development costs in order to reduce production and operating costs. Successful businesses, in my experience, are those that can quickly evolve their business models and out-innovate the competition.
With digitization and digital threads, customers can face complexity head-on, allowing for greater productivity and innovation. A digital thread is a grouping of digitalized integrated solutions, software, and best practices that provide visibility, collaboration, automation, and traceability within a specific domain while also connecting to other digitized domains.
Based on industry requirements, there are seven areas where digital threads can assist in addressing this complexity and encompassing the entire product lifecycle:
Program Management
Problems with program planning and execution lead to significant schedule delays and cost overruns, which have an effect on profits and a company’s capacity to make investments and win new contracts.
Long-term business success depends on consistently excellent program planning and execution, which enables organizations to stay within budget and meet deadlines.
Teams can plan projects based on systems using a digital thread for program management. To create a fully planned, resourced, and budgeted program management solution, it integrates cost, schedule, risk, and technical requirements.
It offers a unified approach and an integrated perspective for the entire company’s pursuit or program.
Model-Based Systems Engineering (MBSE)
There is significantly more information that needs to be managed as complexity rises.
The document-based and disjointed systems and processes of the past are no longer able to handle this complexity, which results in a number of negative effects, including schedule delays, cost increases, and missed opportunities.
To automate processes and make the management of product and program data simpler, businesses are looking for model-based processes.
Even in the face of rising complexity, an MBSE digital thread can orchestrate the technical program and scope across the entire enterprise and lifecycle, providing the foundation for more rapid and effective product development.
Companies can lessen the impact and consequence of “issues” that have historically surfaced during system integration and evaluation by facilitating early analyses and simulations linked to requirements and functions.
By assisting businesses in the transition from early system modeling to creating the digital thread for the entire program lifecycle, MBSE improves the development process and positions A&D teams for future success.
Product Engineering and Design
Due to the need to design and develop new products more quickly and affordably, A&D companies are searching for new procedures for designing and building aircraft. Agile process development for product development is frequently sought after by businesses.
Additionally, in order to innovate, collaborate more quickly, and target niche markets, businesses must now rethink how they approach product design. Businesses must find ways to quickly implement new technologies in order to stay competitive – in weeks, not years – as new materials and technologies become available.
Agile engineering can be implemented within a company using a digital thread for product design and engineering. By tackling the most challenging design and engineering issues with the best electrical, mechanical, and performance engineering solutions, this digital thread encapsulates an integrated and open design ecosystem to speed up product development.
Furthermore, this digital thread has the potential to transform traditional engineering approaches by incorporating new materials (composites, additive manufacturing, and electronics) and advanced user experiences (virtual reality, augmented reality, mixed reality).
The comprehensive digital twin provides multi-disciplinary design and optimization for electrical and electronics design, analysis and simulation, software management, and manufacturing simulation, in addition to 3D computer-aided design.
Certification and Verification
Product complexity is causing new regulatory requirements, which puts more of a burden on businesses to verify new products, raising costs and raising the possibility of schedule overruns.
Verification and certification expenses have risen steadily and now account for up to 75% of product development expenses, making it difficult to find the money for impulsive, discretionary purchases.
A digital thread for verification makes it possible to integrate certification into the overall process of developing a product. It gives A&D businesses a solid execution plan for certification and incorporates all necessary certification activities into the overall program plan.
Thanks to this digital thread, businesses can develop a cooperative relationship with the regulatory body. Businesses can more easily involve the authorities in the planning, execution, and auditing activities by providing them with dedicated access to the product lifecycle management system.
Supply Chain Cooperation
Globalization forces continue to have an impact on the supply chain, partners, and workforce. Supply chains are dispersed across the globe, making collaboration and effective supply chain management more difficult. Internal product team collaboration is also becoming more difficult as the workforce becomes more dispersed and more employees work remotely.
Document-based or siloed processes stymie collaboration and innovation, and product certifications become unaffordable due to inefficient and out-of-date processes.
Suppliers can be automated for better collaboration, and a digital thread for supplier collaboration can serve as a link between functional domains. It develops the thorough digital twin to link requirements to source selection and all contract deliverables as a model-based process throughout the product lifecycle.
For exchanging data throughout the supply chain, it enables data rights management and intellectual property protection from anywhere in the world.
Production and Manufacturing
Production environments must have the ability to share data insights from existing processes and transform this information into actionable information with a digital twin as businesses strive to optimize their production processes to increase quality and decrease cost.
Companies are looking for a better way to manage capital-intensive manufacturing processes and resources, ramp up production more quickly, and adopt new manufacturing concepts or technologies more quickly.
Production processes can be coordinated by a manufacturing digital thread, which can also provide pertinent production data to all areas of program development.
By including manufacturing feasibility analyses through simulation early in the design process using the production digital twin, it establishes the viability of product concepts.
It uses thorough manufacturing planning and virtual commissioning to confirm production readiness, and it quickly incorporates design changes to the factory floor to minimize rework.
Support and Maintenance
Unplanned maintenance is a significant source of operating expenses. Operational costs are also increased by planned maintenance. The final consumer is looking for predictive health monitoring and condition-based maintenance techniques that can minimize the financial and time impacts of planned and unforeseen maintenance.
Many businesses are using the “pay by usage” of products to provide predictability of operational costs. Due to this, more accurate and reliable methods of estimating operational costs are required in order for providers to turn a profit.
Manufacturers, owners, and service organizations may be able to support complex products in a service management environment with the help of a maintenance digital thread. The entire support system, including spare provisioning and service plans connected to the model-based configuration, can be planned using this digital twin.
Program Development to Gain a Competitive Advantage
Due to the complete traceability built into a digital thread, aerospace manufacturers and their supply chain partners can make more effective, better-informed decisions through digitalization. In addition to addressing increased program complexity, digital threads also address higher levels of integration.
As the cornerstone of a digitalization strategy, they support organic learning across the organization and/or across multiple programs and enable Program Execution Excellence for A&D companies in a closed-loop process.
It is up to aerospace and defense companies to embrace digital transformation. Contact our CSMI team for expert advice on Program Development for your company. | https://csmi.com/what-is-program-development/ |
NAIROBI, Kenya (Landscape News) – Burkina Faso is a small landlocked country located in the dry Sahelian region and one of the most populated countries in West Africa.
Mathurin Zida from Burkina Faso is a veteran environmentalist and scientist working for the Center for International Forestry Research (CIFOR), whose focus in the past few years has been on smallholder farmers in relation to forests and climate change adaptation.
The project, which began in 2011, led to the publication of a report called "The Context of REDD+ and adaptation to climate change in Burkina Faso: Drivers, agents and institutions," highlighting issues facing the 85 percent of the population that "is rural and dependent on agriculture and livestock," as stated in the report.
Burkina Faso has a substantially small dry forest base that is being rapidly degraded because of a fast growing population and development. Landscape News spoke to Zida to gain insight into how his country is trying to adapt to all these changes, including unpredictable climate.
Q: What were some of the highlights of your findings during your research project on smallholder farmers?
A: We managed to show interesting results on the linkages between forest ecosystem services and food security. When people managed to restore forest ecosystem services, we showed that they are more resilient to climate change and climate variability because there is more biodiversity in restored land. They also have opportunities to rely on more products from the landscapes both for food security and for generating income based on the produce they harvest from the landscape.
Q: What else did you find interesting in your research?
A: People can adapt based on different strategies they are developing, either on their own or with support from NGOs (non-governmental organizations) and state development programs. We can see, for instance, that a lot of effort is being made to recover degraded land – regenerating ecosystem services – and several good practices and technologies have been disseminated all over the country, particularly in the northern part. It is amazing to see the difference in food production on restored land compared to other farmland where people don't use restoration technology. The difference in cereal yields, for instance, is amazing. So one can see that with simple technology, smallholder farmers are able to produce more food under very dry conditions with highly variable rain patterns.
Q: What are some of the challenges that Burkina Faso faces with land restoration?
A: There is a lot of waste of resources from outsiders trying to join in restoration work. There are a lot of NGOs and even state-led projects without any coordination. These need to be synergized to achieve more impact. This is a challenge the state is trying to address, particularly with the support of the Global Mechanism of the UNCCD (United Nations Convention to Combat Desertification), as part of the government's goal of achieving land degradation neutrality by 2030.
Another is that the same trends which led to land degradation in the northern part of the country are now appearing in the southern part, where people think this is not an issue for them because the area still gets rain and the tree cover is more substantial, so they don't take the manifestations of land degradation seriously. People in this area think that using restoration practices and technologies is only for the northern part of the country, where land is degraded. There is not yet awareness, nor relevant action taken, to prevent this degradation in the southern part of the country. So people need to be proactive in addressing the issue before it passes the tipping point.
Q: Why is it important to pay attention to smallholder farms with relation to forestry?
A: When you observe where smallholders draw their livelihood, you can see that they rely mostly on natural resources; livestock, the country's second or third source of wealth, grazes mostly on forested land. For example, during the rainy season the only place livestock can graze is within forested land.
This is just an example to show the importance of these forested lands. Moreover, people harvest timber for several uses – for medicine, for construction material – so it is an important source of people's livelihood.
In terms of biodiversity, these forested lands are very important in sustaining animals, from which people draw much of their livelihood.
Q: What visible change have you seen in the country in terms of restoration that you can comment on?
A: In the northern part where land degradation is very widespread, you never see wild fires, for instance, because people are very serious with eradicating practices that will worsen the situation. There is a lot of awareness and people are using widely good practices in terms of land restoration.
There are a lot of technologies; one widely used technique is what we call dry farming. People dig small holes during the dry season, right before the rainy season starts. They fill these pits with manure, and with the first rain the runoff water is collected; this is usually combined with stone banks that harvest water and keep the run-off from leaving the area they want to restore.
A lot of these technologies help to harvest water so that plants, trees and grass can use it, and so biodiversity can return to these degraded lands.
Q: What more needs to be done to help smallholder farmers in facing degraded land issues?
A: Thinking of the challenges in land restoration, I think there is this issue of means … particularly financial support to smallholders involved in land restoration, because it is costly. Sometimes smallholders can only undertake restoration work on a limited scale because they don't have the necessary means in terms of finance and material.
There is a need to find a way to provide them with some form of support so that the up scaling of restoration can really happen. Otherwise, the challenge is too big for them only with their own means to overcome the degraded land issue.
Q: Why do you think the GLF is important?
A: The GLF is important because it provides this kind of opportunity and platform for all stakeholders to share and reflect on success and failures and learn from each other. For me the most important thing the GLF will be achieving is this opportunity for several stakeholders working on land restoration at several levels to share their perspectives, experiences, successes and failures and also to learn what the next steps are on the land restoration issue. | https://news.globallandscapesforum.org/29428/smallholder-farmers-need-greater-support-for-land-restoration-cifor-scientist-says/ |
The reason for that aspect of water is hidden in the different structures of the Oxygen and Hydrogen atoms.
A Hydrogen atom is made up of one positively charged proton whose charge is balanced by one negatively charged electron.
The Oxygen atom has eight protons (positive particles) and eight neutrons (neutral particles) in its nucleus, surrounded by eight electrons. Because of the structure of Oxygen, there are two open spots in its electron cloud. These spots are filled when Hydrogen atoms are in proximity to the Oxygen.
When Hydrogen bonds with Oxygen, the ten electrons arrange themselves into five pairs:
- one pair forms the Oxygen atom's inner electron cloud;
- two pairs form the Oxygen atom's outer electron cloud;
- two remaining pairs form the O-H bonds, which create the polar molecule H2O.
Because of Hydrogen bonds, water has many of its known features.
Partial charges result in water molecules being strongly attracted to one another in the liquid phase. In other terms, for a group of water molecules, the positively charged H-atoms are electrically attracted to the negatively charged O-atoms.
One of the many features due to Hydrogen bonds is the familiar shape of a drop of water. It derives from the H-bond extending to a macromolecular system with many participating molecules.
We can think of this attraction as water molecules tending to stick together, which is clearly visible in the video. This strong electrostatic attraction between the Hydrogen atoms of one water molecule and the Oxygen atom of another is called Hydrogen Bonding, or H-bonds. | https://www.y8.com/animation/why_water_looks_that_way_ |
Argument for tagging
This case provoked an impassioned debate within the women’s history community, much of which took place publicly in the press and in academic journals. In writing about the case, many historians and journalists conducted rigorous research of the court documents. In fact, the three collections that comprise this exhibit are the result of individuals researching the trial, and their research is characterized by the collecting of court documents and related articles and correspondence.
I think this idea, that this case inspired the amassing of documents that were literal records of the trial itself, speaks to the greater philosophical questions that the testimonies of Rosenberg and Kessler-Harris projected about the nature of history. Is history a record or an interpretation of the past? What is the role of the historian – to simply recount history, or to use history as a means to a social end, such as employment equality? Should women’s history support the broader women’s movement, and if so, who is to say how it should go about supporting feminist goals? Is practicing the analysis of women’s history alone enough to support feminist aims, or do women historians need to do more? These collections of the "historical proof" of these arguments suggest that there is value in analyzing these documents. The individuals who compiled these documents were looking for some truth in them, and I think making them searchable online can allow for new analyses of this case, and perhaps new answers to these questions.
A case involving such philosophical questions is ripe for unpacking, and promises a meaningful interrogation of the theory of history. It is appropriate for many ages and levels of education, and could be used in an introduction to historical theory, an analysis of women's history or labor history, or even legal analysis. I would suggest that the site would be most appropriate for undergraduate to graduate level use. Either way, detailed encoding will allow for a dynamic method of searching the material, encouraging innovative analysis.
Tagging specifics
All text, both transcribed documents and narrative created for the site, will be encoded with XML using TEI P5 guidelines. Encoding material with XML gives the creators of the site stylistic control over the material, and allows users to search the documents in a more effective way than simple HTML allows.1 XML tags stylistic elements of the textual document, such as title, speaker/author, type of document, bibliographic information, date, etc., as well as divisions in the document like paragraph, page, and project-specific divisions, which are discussed below. In tagging these elements and divisions, the creators then have the ability to manage the appearance/layout and the searchability of the material. For example, if a user wanted to search the material by title, the documents would need to have an XML tag of <title> to identify a particular string of characters as a title.
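For illustration, here is a minimal Python sketch of why element-level tagging matters for search. The abbreviated TEI-style record is hypothetical – not the project's actual schema – and the example uses only the standard library:

```python
import xml.etree.ElementTree as ET

# Abbreviated, hypothetical TEI-style record (not the project's schema).
DOC = """
<TEI>
  <teiHeader>
    <title>Written Testimony of Dr. Rosalind Rosenberg</title>
    <author>Rosenberg, Rosalind</author>
  </teiHeader>
  <text><p>Historical patterns of employment ...</p></text>
</TEI>
"""

root = ET.fromstring(DOC)
# Because <title> is explicitly tagged, a search can target that field
# directly instead of scanning the full text for a matching string.
title = root.findtext(".//title")
if title and "rosenberg" in title.lower():
    print("Title match:", title)
```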
Because many of the documents included in this exhibit are up to 100 pages long, the creators will implement a project-specific "chunking" of each document. Chunking breaks text into manageable sections when a user conducts a search. For example, a user searches for the keyword "overtime," and finds that it appears 22 times in Rosenberg's testimony. The search results will show the reader a list of "chunks" of text (the size of which will be determined by the user, who will have the option to choose 10, 5, or 1 page chunks), and the number of times "overtime" appears in each chunk. Hence, the user can easily locate the needed information without searching through the entire document.
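A rough sketch of that chunked search, assuming the document arrives as a list of page texts; the page count, sample text, and keyword are all illustrative:

```python
def chunked_search(pages, keyword, pages_per_chunk=10):
    """Return (chunk_index, hit_count) pairs for chunks containing hits."""
    results = []
    for start in range(0, len(pages), pages_per_chunk):
        chunk = " ".join(pages[start:start + pages_per_chunk])
        hits = chunk.lower().count(keyword.lower())
        if hits:
            results.append((start // pages_per_chunk, hits))
    return results

# Toy 25-"page" document with the keyword scattered unevenly.
pages = ["no relevant text"] * 25
pages[3] = "overtime pay and overtime scheduling"
pages[17] = "questions about overtime work"
print(chunked_search(pages, "overtime"))  # -> [(0, 2), (1, 1)]
```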
Names, places, publications, etc. will be regularized such that when a user searches for "alice harris," he/she will be directed to results for "kessler-harris, alice."
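Regularization can be as simple as a lookup from known variants to a canonical form. The mapping below is a hypothetical sketch of such an authority file, not the project's actual data:

```python
# Hypothetical variant-to-canonical name mapping.
CANONICAL_NAMES = {
    "alice harris": "kessler-harris, alice",
    "alice kessler-harris": "kessler-harris, alice",
    "rosalind rosenberg": "rosenberg, rosalind",
}

def regularize(query: str) -> str:
    """Map a user's query onto the canonical form used in the markup."""
    return CANONICAL_NAMES.get(query.strip().lower(), query)

print(regularize("Alice Harris"))  # -> "kessler-harris, alice"
```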
see added value, encoding for more information on encoding specifics.
Why use TEI?
TEI is more appropriate for this text-based project than other encoding standards/practices, such as Dublin Core, which does not allow for detailed enough metadata, or EAD, which is generally used for archival finding aids. "For projects creating full-text resources, the TEI Guidelines are the predominant choice."2
Sample metadata to be included in header of written testimony of Rosalind Rosenberg
- document type: court document, written testimony
- title: written testimony of Dr. Rosalind Rosenberg
- author: Rosenberg, Rosalind
- date: date testimony was given
- collection/repository: Research files of Jon Weiner relating to Equal Employment Opportunity Commission v. Sears, Roebuck and Company, NYU Bobst Tamiment/Wagner Archives
- link to finding aid
- copyright: who holds rights
- LC subject headings: Rosenberg, Rosalind, 1946; Kessler-Harris, Alice; United States— Equal Employment Opportunity Commission — Trials, litigation, etc.; Sears, Roebuck and Company — Trials, litigation, etc.; Sex discrimination in employment — United States; Sex discrimination against women — United States; Women — Employment — United States; Sex discrimination in employment — Law and legislation — United States; Sex discrimination against women — Law and legislation — United States; Equal pay for equal work — United States; Pay equity — United States; Legal documents
- project specific subjects: Rosenberg, Rosalind, testimony, written testimony, Barnard College, historical interests of women, historical interests of men, professionalism, pay equality, workplace values, family life, commission sales jobs, family responsibilities, traditional socialization, and any publications, organizations, individuals, places, dates, or ideas mentioned in the transcript.
- record/item ID: | http://historynewmedia.wikidot.com/historians-on-trial-tagging |
Thank you for your interest in Community Alliance for Global Justice! We are a grassroots, membership-based organization in Seattle. CAGJ’s dedicated volunteers contribute their skills, time and money to work for a just local and global economy. CAGJ has three programs: Food Justice Project, AGRA Watch and Trade Justice – please see each program’s page for more information. CAGJ is led by a Steering Committee that meets every month, and we have two part-time staff, our Executive Director, Heather Day and our Organizing Director, Simone Adler.
CAGJ History
CAGJ was founded by Seattle-area activists who helped to organize the historic shutdown of the World Trade Organization meeting in 1999. We strive to carry on the protests’ legacy of effective and creative collective action for global justice. We aim to work in solidarity with the powerful social movements of the Global South who continue to inspire us with their growing resistance to the corporate-driven economic model – a model pushed by the US, Europe, and a transnational corporate elite. Organizing across the hemisphere, together we defeated the FTAA (Free Trade Area of the Americas) thanks largely to the mobilization of huge numbers of Latin Americans who answered with an emphatic No! to an extension of the North American Free Trade Agreement. CAGJ played an important role in nearly defeating the Central American Free Trade Agreement (CAFTA) when we helped to build the coalition in WA State that succeeded in getting all of WA’s Democratic Representatives in Congress to vote against the deal.
While we continue to monitor the institutions promoting corporate globalization, today CAGJ is focusing on building positive alternatives to corporate control by supporting the movements for healthy local food economies here and everywhere. In 2007 we began this work by organizing the Strengthening Local Economies Everywhere (SLEE!) Dinner, a successful event repeated annually. Our next major project was to co-convene a Food Politics Teach-in at Seattle Central Community College in December of 2008. Many of CAGJ’s volunteers and projects today stem from that amazing event!
CAGJ Mission
Community Alliance for Global Justice educates and mobilizes with individuals and organizations to strengthen local economies everywhere. CAGJ is grassroots, community-based and committed to anti-oppressive organizing as we build solidarity across diverse movements. CAGJ seeks to transform unjust trade and agricultural policies and practices imposed by corporations, governments and other institutions while creating and supporting alternatives that embody social justice, sustainability, diversity and grassroots democracy.
What We Do
Community Education
Organizing workshops, guest speakers, film screenings, and study groups, we offer the community information about corporate globalization, its local impacts (including on the food we eat!), and the economic and agricultural alternatives we have as resources for resisting it. We seek to connect folks in the Puget Sound area with their local farms and food producers by organizing farm tours and our annual community gathering, the Strengthening Local Economies Everywhere Fair and Dinner.
Grassroots Organizing
We build solidarity with allied organizations, mobilize locally through citizen outreach and consultation, and train new leaders to enact social change in their communities. We seek to empower individuals and communities to create and participate in alternative economic and agricultural models.
Research and Analysis
We are involved with ongoing research in the areas of corporate globalization, local economies, sustainable agriculture, food justice and access, immigration and labor rights, and trade policy and fair-trade alternatives.
Media Outreach
By providing a thoughtful analysis to journalists and writing op-eds and letters-to-the editor, we give voice to an alternative vision of local and global development. | https://cagj.org/about/ |
Does your child have dyslexia or any other learning and reading difficulties? If this is the case, school can turn into an overwhelming experience, negatively impacting your child’s education. Many children will resign themselves to thinking that this is what a normal reading experience is supposed to be like and conclude that reading isn’t for them. This ultimately causes them to fall behind their classmates and lose confidence at school, resorting to disruptive behaviour as a coping mechanism.
Although it might feel like it sometimes, you can rest assured that your child isn’t alone in their struggle. In fact, a large proportion of the UK population lives with learning difficulties caused by a visual dysfunction. According to research carried out by Professor Wilkins at the University of Essex, up to 20% of pupils in UK schools suffer from visual difficulties associated with reading.
Although dyslexic people do tend to suffer more from visual difficulties when reading, a condition called Meares-Irlen Syndrome, or Visual Stress, whose symptoms can be similar to the signs of dyslexia, can also be at the root of reading difficulties for some children.
Symptoms of Meares-Irlen Syndrome
Meares-Irlen Syndrome can occur in individuals who have always had normal eyesight but experience discomfort when viewing a printed page. Although it is particularly prevalent in people with dyslexia, it isn’t exclusive to them. Here is a list of symptoms associated with Meares-Irlen Syndrome:
- Glare from a white page
- Blurry words/letters
- Moving words/letters
- Skipping or missing words
- Having to use a finger to keep place on the page
- Re-reading the same line more than once
- Reading slowly or hesitantly
- Poor concentration
- Eye strain and fatigue
- Frustration when reading
- Headaches and migraines
- Low self esteem
Children’s written work can also be affected, as they find it difficult to read their own writing, which is often misspelt and written untidily.
Visual Stress Test
So, if you do suspect that your child might have experienced some of the symptoms we mentioned, what happens next?
If you’ve already taken your child for their NHS-funded sight test to rule out other potential ocular causes of reading difficulties, the next step we recommend is a Visual Stress test, in the form of a clinical coloured overlay assessment. This involves measuring their reading speed and comfort whilst using precision coloured tints on a computer screen.
Research has shown that introducing an overlay of a specific colour in front of printed text can alleviate symptoms of Visual Stress and increase reading speeds. Coloured overlays can be used for dyslexia, if the individual shows indicators of Meares-Irlen Syndrome.
They are simply sheets of precision tinted translucent or transparent plastic that can be placed over a page of writing. Each child will have a different colour preference, so what works for one might not necessarily work for someone else. This is why it is so crucial to find your child’s preferred tint through the overlay assessment, instead of opting for a random coloured overlay that schools sometimes provide.
Increased reading speeds, fluency in reading, less frequent headaches and more concentration will slowly bring back your child’s confidence and motivation, allowing them to reach their full potential at school.
Visual Stress Test in Southampton and Portsmouth
We are proud to provide a clinical computer based coloured overlay assessment in the towns of Southampton and Portsmouth for children suffering with dyslexia or any other learning and reading difficulties. No child should have to suffer through school and go through life feeling dejected and alone in their struggle, so we endeavour to do all that we can to help them flourish and grow positively. We have already managed to help several students successfully and with the new school year starting, we’re always ready to help some more!
Here’s a testimonial from a happy parent who came in to get a visual dyslexia test for her child:
“My son had his eyes tested and a dyslexia colour test done by Bish at Fair Oak. Bish was so patient and put my son at ease, did not rush him through the process and kept me informed of what he was checking for.
Previous glasses provided by another company turned out to be the wrong prescription! The office staff were amazing and couldn’t do enough to help, wonderful service. Would definitely recommend.” – https://ashleighsightcare.co.uk/testimonials/
Contact us online to speak to one of our friendly and experienced experts for advice on Meares-Irlen Syndrome or Visual Stress tests for dyslexia and reading difficulties at any of our branches. | https://ashleighsightcare.co.uk/coloured-overlay-assessment-how-it-could-help-your-child-at-school/ |
For years, an annual, must-pass federal spending bill has served as a vehicle for minor or contentious provisions that might otherwise falter in standalone legislation, such as the prohibition of new service member uniforms, or the indefinite detainment of individuals without trial.
In 2019, that federal spending bill, called the National Defense Authorization Act (NDAA), once again included provisions separate from the predictable allocation of Department of Defense funds. This time, the NDAA included language on deepfakes, the machine-learning technology that, with some human effort, has created fraudulent videos of UK political opponents Boris Johnson and Jeremy Corbyn endorsing one another for Prime Minister.
Matthew F. Ferraro, a senior associate at the law firm WilmerHale who advises clients on national security, cyber security, and crisis management, called the deepfakes provisions a “first.”
“This is the first federal legislation on deepfakes in the history of the world,” Ferraro said about the NDAA, which was signed by the President into law on December 20, 2019.
But rather than creating new policies or crimes regarding deepfakes—like making it illegal to develop or distribute them—the NDAA asks for a better understanding of the burgeoning technology. It asks for reports and notifications to Congress.
Per the NDAA’s new rules, the US Director of National Intelligence must, within 180 days, submit a report to Congress that provides information on the potential national security threat that deepfakes pose, along with the capabilities of foreign governments to use deepfakes in US-targeted disinformation campaigns, and what countermeasures the US currently has or plans to develop.
Further, the Director of National Intelligence must notify Congress each time a foreign government either has, is currently, or plans to launch a disinformation campaign using deepfakes or "machine-generated text," like that produced by online bots that impersonate humans.
Lee Tien, senior staff attorney for Electronic Frontier Foundation, said that, with any luck, the DNI report could help craft future, informed policy. Whether Congress will actually write any legislation based on the DNI report’s information, however, is a separate matter.
“You can lead a horse to water,” Tien said, “but you can’t necessarily make them drink.”
With the NDAA’s passage, Malwarebytes is starting a two-part blog on deepfake legislation in the United States. Next week we will explore several Congressional and stateside bills in further depth.
The National Defense Authorization Act
The National Defense Authorization Act of 2020 is a sprawling, 1,000-plus page bill that includes just two sections on deepfakes. The sections set up reports, notifications, and a deepfakes “prize” for research in the field.
According to the first section, the country’s Director of National Intelligence must submit an unclassified report to Congress within 180 days that covers the “potential national security impacts of machine manipulated media (commonly known as “deepfakes”); and the actual or potential use of machine-manipulated media by foreign governments to spread disinformation or engage in other malign activities.”
The report must include the following seven items:
- An assessment of the technology capabilities of foreign governments concerning deepfakes and machine-generated text
- An assessment of how foreign governments could use or are using deepfakes and machine-generated text to "harm the national security interests of the United States"
- An updated identification of countermeasure technologies that are available, or could be made available, to the US
- An updated identification of the offices inside the US government’s intelligence community that have, or should have, responsibility on deepfakes
- A description of any research and development efforts carried out by the intelligence community
- Recommendations about whether the intelligence community needs tools, including legal authorities and budget, to combat deepfakes and machine-generated text
- Any additional info that the DNI finds appropriate
The report must be submitted in an unclassified format. However, an annex to the report that specifically addresses the technological capabilities of the People’s Republic of China and the Russian Federation may be classified.
The NDAA also requires that the DNI notify the Congressional intelligence committees each time there is “credible information” that an identifiable, foreign entity has used, will use, or is currently using deepfakes or machine-generated text to influence a US election or domestic political processes.
Finally, the NDAA also requires that the DNI set up what it calls a “deepfakes prize competition,” in which a program will be established “to award prizes competitively to stimulate the research, development, or commercialization of technologies to automatically detect machine-manipulated media.” The prize amount cannot exceed $5 million per year.
As the first, approved federal language on deepfakes, the NDAA is rather non-controversial, Tien said.
“Politically, there’s nothing particularly significant about the fact that this is the first thing that we’ve seen the government enact in any sort of way about [deepfakes and machine-generated text],” Tien said, emphasizing that the NDAA has been used as a vehicle for other report-making provisions for years. “It’s also not surprising that it’s just reports.”
But while the NDAA focuses only on research, other pieces of legislation—including some that have become laws in a couple of states—directly confront the assumed threat of deepfakes to both privacy and trust.
Pushing back against pornographic and political deception
Though today feared as a democracy destabilizer, deepfakes began not with political subterfuge or international espionage, but with porn.
In 2017, a Reddit user named “deepfakes” began posting short clips of nonconsensual pornography that mapped the digital likenesses of famous actresses and celebrities onto the bodies of pornographic performers. This proved wildly popular.
In little time, a dedicated “subreddit”—a smaller, devoted forum—was created, and increasingly more deepfake pornography was developed and posted online. Two offshoot subreddits were created, too—one for deepfake “requests,” and another for fulfilling those requests. (Ugh.)
While the majority of deepfake videos feature famous actresses and musicians, it is easy to imagine an abusive individual making and sharing a deepfake of an ex-partner to harm and embarrass them.
In 2018, Reddit banned the deepfake subreddits, but the creation of deepfake material surged, and in the same year, a new potential threat emerged.
Working with producers at Buzzfeed, comedian and writer Jordan Peele helped showcase the potential danger of deepfake technology when he lent his voice to a manipulated video of President Barack Obama.
“We’re entering an era in which our enemies can make anyone say anything at any point in time, even if they would never say those things,” Peele said, posing as President Obama.
This year, that warning gained some legitimacy, when a video of Speaker of the House of Representatives Nancy Pelosi was slowed down to fool viewers into thinking that the California policymaker was either drunk or impaired. Though the video was not a deepfake because it did not rely on machine-learning technology, its impact was clear: It was viewed by more than 2 million people on Facebook and shared on Twitter by the US President’s personal lawyer, Rudy Giuliani.
These threats spurred lawmakers in several states to introduce legislation to prohibit anyone from developing or sharing deepfakes with the intent to harm or deceive.
On July 1, Virginia passed a law that makes the distribution of nonconsensual pornographic videos a Class 1 misdemeanor. On September 1, Texas passed a law to prohibit the making and sharing of deepfake videos with the intent to harm a political candidate running for office. In October, California Governor Gavin Newsom signed Assembly Bills 602 and 730, which, respectively, make it illegal to create and share nonconsensual deepfake pornography and to try to influence a political candidate’s run for office with a deepfake released within 60 days of an election.
Along the way, Congressional lawmakers in Washington, DC, have matched the efforts of their stateside counterparts, with one deepfake bill clearing the House of Representatives and another deepfake bill clearing the Senate.
The newfound interest from lawmakers is a good thing, Ferraro said.
“People talk a lot about how legislatures are slow, and how Congress is captured by interests, or it's suffering ossification, but I look at what's going on with manipulated media, and I'm filled with some sense of hope and satisfaction,” Ferraro said. “Both houses have reacted quickly, and I think that should be a moment of pride.”
But the new legislative proposals are not universally approved. Upon the initial passage of California’s AB 730, the American Civil Liberties Union urged Gov. Newsom to veto the bill.
“Despite the author’s good intentions, this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation, and repression of free speech,” said Kevin Baker, ACLU legislative director.
Another organization that opposes dramatic, quick regulation of deepfakes is EFF, which wrote earlier in the summer that "Congress should not rush to regulate deepfakes."
Why then, does EFF’s Tien welcome the NDAA?
Because, he said, the NDAA does not introduce substantial policy changes, but rather proposes a first step in creating informed policy in the future.
“From an EFF standpoint, we do want to encourage folks to actually synthesize the existing knowledge and to get to some sort of common ground on which people can then make policy choices,” Tien said. “We hope the [DNI report] will be mostly available to the public, because, if the DNI actually does what they say they’re going to do, we will learn more about what folks outside the US are doing [on deepfakes], and both inside the US, like efforts funded by the Department of Defense or by the intelligence community.”
Tien continued: “To me, that’s all good.”
Wait and see
The Director of National Intelligence has until June to submit their report on deepfakes and machine-generated text. But until then, more states, such as New York and Massachusetts, may forward deepfake bills that were already introduced last year.
Further, as deepfakes continue to be shared online, more companies may have to grapple with how to treat them. Just last week, Facebook announced a new political deepfake policy that many argue does little to stop the wide array of disinformation posted on the platform. | https://blog.malwarebytes.com/artificial-intelligence/2020/01/deepfake-rules-take-hold-in-the-us/ |
Undergraduate Course: Dynamics 4 (MECE10002)
Course Outline
School
School of Engineering
College
College of Science and Engineering
Credit level (Normal year taken)
SCQF Level 10 (Year 4 Undergraduate)
Availability
Available to all students
SCQF Credits
10
ECTS Credits
5
Summary
The Dynamics 4 course provides an understanding of core aspects of advanced dynamic analysis, dealing with system modelling, dynamic response and vibration analysis, structural dynamics both in the linear and non-linear regimes, wave propagation and the dynamics of continuous and multi-degree of freedom systems. The main objective is to obtain an understanding and appreciation of the potential and limitations of analytical approaches and solutions, and the value of these in underpinning modern computer methods for simulating dynamic structural response.
Course description
The Dynamics 4 course covers the following three main subject areas:
1. The Lagrange method of analytical dynamics. This is a formal approach for setting up equations of motion (EoM) for complex dynamic systems with dynamic constraints (e.g. constrained motions). Free Body Diagrams (FBDs) become quite difficult to apply when dealing with complex systems that operate under dynamic constraints. Lagrange's method, however, allows the derivation of correct equations of motion through formal calculation from the energy functions of the system (see the worked sketch after this list). Covered applications include the analysis of the conditions for dynamic system stability.
2. Wave propagation in continuous systems. Systematic approaches for deriving the parameters of lumped-parameter descriptions. Properties of wave propagation, including sound propagation, and the standing waves which characterise the fundamental vibration modes of continuous systems with boundaries. Longitudinal and transverse waves and solutions to the corresponding differential equations (e.g. standing and travelling wave solutions)
3. Vibration of multi-degree-of-freedom systems, using the more formal approach of principal coordinate analysis to describe vibration behaviour, and to analyse vibration hazards in engineering structures.
[AHEP outcomes: SM2m, EA1m, EA3m]
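As a minimal worked sketch of the Lagrangian procedure described in point 1 (a plane pendulum; this example is illustrative and not drawn from the course materials):

```latex
% Plane pendulum: mass m, rod length l, generalised coordinate \theta.
% Energy functions:
T = \tfrac{1}{2} m l^{2} \dot{\theta}^{2}, \qquad V = -\, m g l \cos\theta
% Lagrangian and Lagrange's equation for \theta:
\mathcal{L} = T - V, \qquad
\frac{d}{dt}\!\left(\frac{\partial \mathcal{L}}{\partial \dot{\theta}}\right)
  - \frac{\partial \mathcal{L}}{\partial \theta} = 0
% Resulting equation of motion:
m l^{2} \ddot{\theta} + m g l \sin\theta = 0
```

For small oscillations this linearises to \ddot{\theta} + (g/l)\theta = 0, which connects the EoM directly to the natural-frequency and mode-shape analysis in topics 2 and 3.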
Entry Requirements (not applicable to Visiting Students)
Pre-requisites
Co-requisites
Prohibited Combinations
Other requirements
None
Information for Visiting Students
Pre-requisites
None
High Demand Course?
Yes
Course Delivery Information
Academic year 2021/22, Available to all students (SV1)
Quota: None
Course Start
Semester 1
Timetable
Learning and Teaching activities
(Further Info)
Total Hours: 100 ( Lecture Hours 20, Seminar/Tutorial Hours 10, Formative Assessment Hours 1, Summative Assessment Hours 2, Programme Level Learning and Teaching Hours 2, Directed Learning and Independent Learning Hours 65 )
Assessment
(Further Info)
Written Exam 100 %, Coursework 0 %, Practical Exam 0 %
Additional Information (Assessment)
Final Examination 100%
Feedback
Not entered
Exam Information
Exam Diet
Paper Name
Hours & Minutes
Main Exam Diet S1 (December)
2:00
Resit Exam Diet (August)
2:00
Learning Outcomes
On completion of this course, the student will be able to:
Apply virtual work-based methods to dynamical systems, relating between Lagrangian and Newtonian Mechanics.
Derive system differential equations of motion for dynamical systems from energy-based approaches (e.g. Lagrange's method).
Recognise forms of advanced dynamical behaviour, such as system instability and non-linearity, and appreciate their effects on the dynamical response and methods used to analyse them.
Identify structural dynamic instability causes and propose solutions.
Know the common wave equations for basic structural elements and be able to use these to find natural frequencies and mode shapes.
Reading List
S.S. Rao. Mechanical Vibrations (5th Edition in SI units), Prentice Hall, ISBN 978-981-06-8712-0, 2011.
Additional Information
Graduate Attributes and Skills
Not entered
Keywords
Dynamics,Vibrations,Wave Propagation,System Response,Continuous Systems,Discrete Systems
Contacts
Course organiser
Dr Filipe Teixeira-Dias
Tel:
(0131 6)50 6768
Email:
[email protected]
Course secretary
Mr James Foster
Tel:
(0131 6)51 3562
Email: | http://www.drps.ed.ac.uk/21-22/dpt/cxmece10002.htm |
Héctor Daniel “Tito” Villalba is a Paraguayan professional football player. He plays for Club Libertad and the Paraguay National Team as a winger. Hector has some pretty interesting tattoos on his body. Let us take a look at them and the meanings behind them.
1. ‘Jesus’ Tattoo
Tattoo: The right shoulder of the player has the face of Jesus Christ tattooed on it. Christ is seen looking downwards and is wearing a crown made of thorns.
Meaning: The tattoo on his shoulder represents his religious views. The tattoo of Christ represents his Christian faith and how Jesus sacrificed his life to save humanity.
2. ‘Heart’ Tattoo
Tattoo: Below the tattoo of the face of Jesus, his right arm has a tattoo of a heart on it. The heart has a chain of thorns wrapped around it and is dripping blood. There is also a cross on top of it surrounded by flames.
Meaning: The design is a popular tattoo in the Christian faith and is called the Sacred Heart of Jesus Christ. The heart represents "God's boundless and passionate love for mankind".
3. ‘Cherubs’ Tattoo
Tattoo: The upper part of his right forearm contains a few cherubs tattooed on it. There is also a dove bird tattooed above the cherubs with light behind it.
Meaning: The cherubs which are baby angels on his forearm represent guardian angels. They are a symbol of innocence and protect the bearer and his family from harm. The dove on his arm is also a symbol of his faith and represents the Holy Spirit.
4. ‘Medal’ Tattoo
Tattoo: The back of Tito’s right arm, just above his elbow contains a medal tattooed on it. The medal contains the cross in the middle with the letters, “C S S M L – N D S M D” which stands for, “Crux sacra sit mihi lux! Numquam Draco sit mihi dux!” The letters, “C S P B” are also present on the side of the cross. The letters stand for, “Crux Sancti Patris Benedicti”
Meaning: The medal on Villalba's arm is the reverse side of the Medal of Saint Benedict. The letters and the writing mean, "May the holy cross be my light! May the dragon never be my overlord!" The letters "C S P B" mean "The Cross of our Holy Father Benedict". The tattoo of Saint Benedict's medal represents that Villalba prays to him for protection. St. Benedict's motto is "Ora et Labora", which means "Pray and Work", and this is what the player believes in. Hector got the tattoo in 2013 when his San Lorenzo team won the Championship of the Argentine Primera División.
The back of the cross can be seen in the image below.
5. ‘Roses’ Tattoo
Tattoo: The lower side of his right forearm has two rose flowers tattooed on it.
6. Name Tattoo
Tattoo: The side of his forearm has the name, “Juana” tattooed on it.
Meaning: The name on his forearm represents the name of his beloved mother, Juana Villalba.
7. ‘Numerals’ Tattoo
Tattoo: The roman numerals, "XIX V MCM.." are tattooed around his right wrist. The whole date has not been revealed in his pictures.
Meaning: The numerals represent 19 May in the 1900s. Though the whole date is not visible, rumors suggest that it is the date of birth of his mother, Juana Villalba.
8. ‘Lion’ Tattoo
Tattoo: The side of Hector’s right thigh has the face of a lion tattooed on it.
Meaning: Hector showcased the tattoo on one of his Instagram stories. The tattoo of the lion on his thigh represents his sun sign, Leo. Hector was born on 26 July 1994 and the tattoo represents a part of him and his personality. | https://bodyartguru.com/hector-villalba-tattoos/ |
In 1972, Brindley and Craggs measured the electric potential from the surface of the baboon brain using a 1-mm-diameter electrode. They found that the power in the 80–250 Hz frequency range of the electric potential time series was dynamically increased in motor areas during movement. Sites 2 mm apart from one another were specific for different movements of the same limb. This observation, that power in the high-frequency portion of the brain surface electric potential was specific for local cortical activity, was again demonstrated in electrocorticography (ECoG) by Crone et al. (1998) for different functions in distant regions of the human brain. Both groups proposed that this high-frequency power was a correlate of specific cortical activity, but it was unclear what this power increase meant at the neuronal level. More recently, Miller et al. (2009a,b) proposed and demonstrated that these observed high-frequency power changes actually reflected “broadband” power spectral change, across all frequencies. The low-frequency portion of these broadband changes was often obscured at lower frequencies by coincident changes in rhythmic phenomena (e.g., α and β), so that only the high-frequency portion of the broadband change was observed. These broadband changes have a particular form (a power law in the frequency domain) (Fig. 1) and capture functionally specific cortical activity with a temporal precision of tens of milliseconds (Miller et al., 2009a,b). A recent Journal of Neuroscience article by Manning and colleagues (2009) directly shed light on the neurophysiologic nature of these broadband changes by measuring what aspects of the power spectral density (PSD) of the local field potential (LFP) correspond with single-neuron firing rates measured at the same cortical site.
Manning and colleagues (2009) performed the following experiment: in the course of treatment for epilepsy, penetrating microwires were transiently implanted in 20 human patients during the clinical identification of seizure foci. Each patient participated in a spatial navigation task while single-neuron action potential (AP) firing rates and the surrounding LFPs were measured from an array of microwires throughout different brain sites. The firing rate of each neuron and the corresponding normalized PSD of the LFP were calculated in half-second epochs. For each epoch, the power in the PSD was extracted in five discrete frequency ranges: delta (2–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–150 Hz). In addition, an estimate of the broadband power in the PSD, across all frequencies, was obtained from each epoch. The firing rate was then compared with each power spectral feature using a regression approach, and an associated significance level was estimated by resampling (randomly time-shifting the LFP and AP event times with respect to one another to obtain a surrogate distribution). The best predictor of firing rate was the broadband feature of the PSD. There is a clear relation between increased firing rate and increased broadband power in the LFP [Manning et al. (2009), their Figs. 1 and 2]. This relation was robust, significant, and reproduced across a large number of individuals and brain sites. Manning and colleagues (2009) experimentally demonstrated, for the first time, that broadband spectral change in the electric potential is correlated with neuronal AP firing rate.
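A minimal sketch of this style of analysis, to make the pipeline concrete (illustrative only: the sampling rate, the broadband estimator, and the use of a simple correlation in place of the authors' regression are assumptions, not details taken from the paper):

```python
import numpy as np
from scipy.signal import welch

FS = 1000          # assumed LFP sampling rate (Hz); not specified above
EPOCH = 0.5        # half-second epochs, as in the study
BANDS = {"delta": (2, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 150)}

def spectral_features(epoch):
    """Power in each discrete band, plus a crude broadband estimate."""
    f, psd = welch(epoch, fs=FS, nperseg=len(epoch))
    feats = {b: psd[(f >= lo) & (f < hi)].mean() for b, (lo, hi) in BANDS.items()}
    feats["broadband"] = np.log(psd[f > 0]).mean()  # stand-in for all-frequency power
    return feats

def rates_per_epoch(spike_times, n_epochs):
    """Firing rate (spikes/s) of one unit in each half-second epoch."""
    edges = np.arange(n_epochs + 1) * EPOCH
    return np.histogram(spike_times, bins=edges)[0] / EPOCH

def feature_rate_correlation(lfp, spike_times, n_surrogates=1000, seed=0):
    """Correlate per-epoch firing rate with each spectral feature; estimate
    significance by circularly time-shifting spikes relative to the LFP."""
    rng = np.random.default_rng(seed)
    samples = int(FS * EPOCH)
    n_epochs = len(lfp) // samples
    feats = [spectral_features(lfp[i*samples:(i+1)*samples]) for i in range(n_epochs)]
    rates = rates_per_epoch(spike_times, n_epochs)
    duration = n_epochs * EPOCH
    results = {}
    for name in feats[0]:
        x = np.array([fe[name] for fe in feats])
        r = np.corrcoef(x, rates)[0, 1]
        null = np.empty(n_surrogates)
        for k in range(n_surrogates):
            shifted = (spike_times + rng.uniform(0, duration)) % duration
            null[k] = np.corrcoef(x, rates_per_epoch(shifted, n_epochs))[0, 1]
        results[name] = (r, float((np.abs(null) >= abs(r)).mean()))  # (r, p-value)
    return results
```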
In the same week that the article by Manning et al. (2009) was published, Whittingstall and Logothetis (2009) published an article showing that 30–100 Hz aspects of the LFP are significant predictors of multineuron firing rate; it is likely that this high-frequency change reflects a broadband change and represents a secondary confirmation of the finding by Manning et al. (2009). The electrical potential from both studies was measured at the spatial scale of the LFP, which has recently been demonstrated to reflect neuronal activity within ∼250 μm of the recording electrode (Katzner et al., 2009). Because this broadband spectral change is correlated with the action potential rate at the LFP scale, broadband electric potential spectral changes may generically represent mean firing rate at larger scales as well. If true at larger scales, then the spatial scale that the recording electrode reflects would then dictate the size of the neuronal population that the firing rate is being averaged over. Seen in this light, the article by Manning et al. (2009) provides empirical evidence that broadband (or associated high-frequency) changes observed at larger spatial scales, in ECoG, are a correlate of the mean firing rate of the neuronal population beneath each recording electrode.
How might the reader gain intuition for the measured correlation in terms of neurophysiology? From a modeling perspective, heuristics for the relationship between changes in action potential rate and broadband, power-law, changes can be constructed relatively simply. Properties of the physiology underlying the current source density (CSD) in different cortical lamina were established experimentally in the late 1970s and early 1980s (Mitzdorf, 1985). Propagating action potentials in axons and axon terminals does not contribute strongly to the CSD at spatial scales of ∼50 to ≥300 μm, e.g., the scales where CSD varies, LFPs pool from, or macroscale ECoG potentials average over. Instead, dendritic synaptic current influx and efflux modulate the CSD and, by extension, the LFP and the ECoG-scale potential. Emerging in vivo, simultaneous recordings of intracellular potential and LFP by Okun et al. (2009) show that the LFP and single-neuron transmembrane potential are tightly coupled temporally, independent of the spiking pattern of the neuron. This implies that the correlation observed by Manning et al. (2009) likely reflects the postsynaptic influence on many neighboring neurons by the neuron whose action potential times are being measured, and this may very well be augmented by redundant firing patterns across neighbors.
A very simple model for producing broadband spectral changes from changes in firing rate can be constructed to provide intuition for this correlation, and also to show why the experimental findings of Manning et al. (2009) provide evidence for a macroscale correlate of population firing rate. A model based on original research by Bédard et al. (2006) (later extended in Miller et al., 2009b) shows how the time course of the intracellular dendritic charge concentration might result from spatiotemporal summation of postsynaptic current influxes from each arriving AP (Fig. 1). The broadband component of the PSD results from the noise-like distribution of AP arrival times, and its 1/f falloff with frequency results from the shape of the synaptic current decay and the effect of temporal integration in the dendritic arbor. The Manning et al. (2009) finding supports models of this type, where basic phenomena, firing rate changes, produce spatially larger scale field potential changes. Furthermore, the strong correlation between firing rate and broadband spectral change in the electrical potential demonstrated empirically by Manning et al. (2009) provides powerful evidence that broadband power spectral changes observed at larger spatial scales may be a generic correlate of mean population firing rate.
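For intuition, that heuristic can be simulated in a few lines: a noise-like (Poisson) train of synaptic events, each injecting an exponentially decaying current, produces a PSD whose level rises at all frequencies as the event rate increases, while the high-frequency falloff keeps a power-law shape (roughly 1/f² for this toy exponential kernel). All parameter values here are assumed for illustration and are not taken from the cited models:

```python
import numpy as np
from scipy.signal import welch

FS = 1000.0   # sampling rate (Hz), assumed
T = 60.0      # simulated duration (s)
TAU = 0.010   # assumed synaptic current decay constant (10 ms)

def dendritic_signal(rate_hz, rng):
    """Poisson synaptic arrivals convolved with an exponential current decay:
    a toy stand-in for the dendritic charge time course."""
    n = int(T * FS)
    arrivals = rng.poisson(rate_hz / FS, size=n).astype(float)  # events per sample
    t = np.arange(0.0, 10 * TAU, 1.0 / FS)
    kernel = np.exp(-t / TAU)
    return np.convolve(arrivals, kernel)[:n]

rng = np.random.default_rng(0)
for rate in (10, 20, 40):  # mean synaptic event rates (events/s)
    f, psd = welch(dendritic_signal(rate, rng), fs=FS, nperseg=4096)
    hf = psd[(f > 80) & (f < 250)].mean()
    # The high-frequency power grows with the event rate: a broadband,
    # power-law-shaped spectrum whose overall level tracks the input rate.
    print(f"rate={rate:3d} Hz  mean 80-250 Hz power={hf:.3e}")
```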
Footnotes
Editor's Note: These short, critical reviews of recent papers in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to summarize the important findings of the paper and provide additional insight and commentary. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.
K.J.M. is supported by the National Aeronautics and Space Administration Graduate Student Research Program and the National Institutes of Health-National Institute of General Medical Sciences Medical Scientist Training Program. I thank Dora Hermes and Teresa Esch for reading of this manuscript. | https://www.jneurosci.org/content/30/19/6477 |
Paul Forman’s article “Weimar Culture, Causality, and Quantum Theory, 1918-1927” (Forman 1971) permanently changed the disciplinary landscape of the history and philosophy of science. Commonly called the Forman thesis, it profoundly affected the work of a generation of historians and philosophers of physics. As a classic essay in the “externalist” history of science, it contributed just as significantly to the appeal of the new sociology of scientific knowledge. It helped define the cultural history of science that spread through the field in the 1980s and 1990s, and it has been a touchstone for general historians of Germany and continental Europe seeking contact with science.
Although one of the most frequently cited works in the field, the Forman thesis nevertheless cannot be considered universally accepted. On the contrary, it remains as controversial as it is famous, the subject of polarized opinions and scholarly positions. Indeed, its contested nature is responsible for its ongoing influence, as it continues to spark methodological discussion and inspire new empirical studies. The current state of that debate, including some exciting recent contributions of younger scholars, leads us to propose our conference as a venue where different lines of research and reflection can be brought into productive exchange. Its tangible products will be two cohesive collections of conference papers, in both English and German, to serve the discipline at large.
Speakers in the conference included UC Berkeley’s John L. Heilbron and Cathryn Carson. | https://cstms.berkeley.edu/current-events/the-cultural-alchemy-of-the-exact-sciences-revisiting-the-forman-thesis-at-the-university-of-british-columbia/ |
1/- Depression:
1- Definition:
Depression is a decrease in neuropsychological tone; it is a mood disorder accompanied by a state of psychic suffering. It is expressed by:
- Sad mood
- Moral pain (loss of self-esteem)
- Psychomotor inhibition (asthenia, difficulty concentrating)
- Anxiety
- Physical slowing (sleep disturbances, digestive disorders)
- Suicidal thoughts…
A distinction is made between primary depression and secondary depression (of drug or organic origin).
2- Pathophysiology of depression:
A- The neurochemical theory:
This is the most widely accepted theory; it involves the biogenic amines. Depression is accompanied by a deficiency of:
- Norepinephrine: accounting for the vegetative and anxiety signs and the psychomotor inhibition.
- Serotonin: involved in the regulation of mood.
- Dopamine: the precursor of noradrenaline.
B- The neurohormonal theory:
(Hypothalamic-pituitary axis) The basal cortisol level is high.
C- New theories:
Decreased brain levels of interleukins (IL-2 and IL-6) and of prostaglandins (PGE2).
2/- Antidepressants:
1- Definition:
These are psychotropic substances that improve depressed mood in its most serious manifestations, acting on the whole of a depressive syndrome that has been present for at least 2 weeks.
A distinction is made between:
- Antidepressants proper: stimulants of depressed mood (covered in this course);
- Thymoregulators: mood stabilizers (lithium salts).
2- Classification:
a- Chemical classification: it is complex.
b- Biochemical classification: it classifies antidepressants according to their mechanism of action on the monoaminergic systems:
- Tricyclic antidepressants
- Selective serotonin reuptake inhibitors (SSRIs)
- Serotonin and noradrenaline reuptake inhibitors (SNRIs)
- Monoamine oxidase inhibitors (MAOIs)
- Drugs acting on presynaptic α2 autoreceptors (mianserin)
3- The different classes of antidepressants:
a- Tricyclic antidepressants: so called because of their chemical structure. A distinction is made between:
—> Imipramine-type tricyclic antidepressants, of which Imipramine (Tofranil®) is the prototype:
| INN | Trade name |
| Imipramine | Tofranil® |
| Clomipramine | Anafranil® |
| Amitriptyline | Elavil®, Laroxyl® |
| Desipramine | Pertofran® |
| Dosulepin | Prothiaden® |
| Doxepin | Quitaxon®, Sinéquan® |
| Trimipramine | Surmontil® |
—> The related (tetracyclic) antidepressants:
Maprotiline (Ludiomil®): this molecule has the same pharmacological properties as the tricyclics.
1- Pharmacological actions of the tricyclic ATDs:
Mechanism of action: they act by inhibiting the reuptake of the biogenic amines (noradrenaline, serotonin) by the presynaptic terminals.
Depending on the molecule, the antidepressant effect is dominated either by a sedative effect or by a psychotonic effect; the choice is made according to the patient's symptomatology.
On the central nervous system:
The tricyclic antidepressants have a sedative effect that can be beneficial at the beginning of treatment in a depressed patient with sleep disturbances.
On the autonomic nervous system:
Tricyclic antidepressants have an anticholinergic effect, an adrenergic-blocking effect at high doses, and an antihistamine action.
On the cardiovascular system:
The most common effect is postural hypotension.
Sinus tachycardia is due to inhibition of norepinephrine reuptake and to the anticholinergic effect.
2- Pharmacokinetics of the tricyclic ATDs:
The pharmacokinetics of the imipramine-type drugs are complex and highly variable from one molecule to another. In general, the following points can be retained:
- Good absorption
- Very high plasma protein binding (80-95%)
- High volume of distribution (10-50 L/kg)
- Metabolism by N-demethylation, with formation of active metabolites
- Extended half-lives (often allowing once-daily dosing; see the sketch after this list)
- Placental transfer and passage into breast milk
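As a brief illustration of why an extended half-life permits once-daily dosing (generic first-order kinetics; the numbers are illustrative and are not parameters of any particular tricyclic), the standard accumulation-ratio formula shows how plasma levels plateau when the dosing interval is comparable to the half-life:

```python
import math

def accumulation_ratio(half_life_h, interval_h):
    """Steady-state accumulation R = 1 / (1 - e^(-k*tau)) for first-order
    elimination, where k = ln(2) / half-life and tau is the dosing interval."""
    k = math.log(2) / half_life_h
    return 1.0 / (1.0 - math.exp(-k * interval_h))

# With an assumed 24 h half-life dosed every 24 h, levels accumulate only
# about 2-fold and fluctuate modestly within the interval, so once-daily
# dosing is practical; a short half-life would require several doses per day.
print(accumulation_ratio(24, 24))  # -> 2.0
```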
3- Indications: depression of all kinds.
Additional information:
- Refractory pain: Amitriptyline, Imipramine
- Prevention of panic attacks: Clomipramine
- Pediatrics (behavioral disorders): Maprotiline, Imipramine
A gradual increase in dosage is a rule to follow.
4- Adverse effects:
Adverse effects of the imipramine-type drugs are common, but they are often benign.
They include:
- Neuropsychiatric effects: anxious or delusional reactivations are possible, and a too-rapid inversion of mood may lead to a manic state.
- Anticholinergic effects: dry mouth, epigastric pain, constipation, tachycardia, blurred vision, urinary retention.
- Cardiovascular effects: orthostatic hypotension, arrhythmias, and conduction disorders with a risk of sudden death (related to overdose).
- Endocrine effects: decreased libido or erectile dysfunction during treatment, dysmenorrhea, hyperprolactinemia, and weight gain are described.
- Hematological disorders are exceptional. Allergic skin rashes and cholestatic hepatitis are possible.
5- Drug interactions:
- Interactions with other highly protein-bound drugs (aspirin, phenylbutazone, phenytoin).
- Potentiation of the effects of the tricyclics by some oral contraceptives.
- Barbiturates and many sedatives accelerate the hepatic metabolism of the tricyclics.
- Tricyclics potentiate the effects of alcohol and other sedatives.
- Combination with non-selective MAOIs causes serotonin syndrome, so it is contraindicated; the rule is to respect an interval of 15 days after stopping the MAOI (conversely, an interval of 5 days is enough when switching from a tricyclic antidepressant to an MAOI).
- Tricyclics and fluoxetine (an SSRI) compete for the same metabolism, which raises the plasma concentration of the tricyclic, potentially to the toxic threshold.
b- The selective serotonin reuptake inhibitor (SSRI) ATDs:
They specifically and potently inhibit the reuptake of serotonin.
| INN | Trade name |
| Sertraline | Zoloft® |
| Fluoxetine | Prozac® |
| Fluvoxamine | Floxyfral® |
| Paroxetine | Deroxat® |
| Citalopram | Séropram® |
They are much more widely used because their efficacy is similar to that of the tricyclics while offering more clinical advantages:
- They have no anticholinergic activity
- They are devoid of cardiac toxicity
- Onset of action
1- Indications:
Depression of all kinds. Their first-line use is easily justified in elderly, poly-medicated patients, in patients with cardiovascular disease, and in patients with suicidal thoughts.
2- Side effects:
- Digestive disorders (nausea, vomiting, anorexia)
- Insomnia is described, as well as headaches and decreased libido.
- Withdrawal syndromes have been described.
- Serotonin syndrome, often unrecognized, justifies immediate discontinuation of treatment. It is linked to certain overdoses or interactions and can lead to hospitalization, or even be life-threatening. It combines digestive (diarrhea), vegetative (sweating, thermal dysregulation, hypo- or hypertension), motor (tremors), and neuropsychiatric (confusion, agitation or coma) symptoms.
3- Drug interactions:
- Combination with MAOIs (even selective ones) or with clomipramine: serotonin syndrome.
- Certain SSRIs (e.g. fluvoxamine, paroxetine) are strong enzyme inhibitors; hence a risk of toxicity when SSRIs and tricyclic antidepressants are combined.
- Very prolonged elimination half-lives of some SSRIs (up to 15 days for some of them), with persistence in the body five to six weeks after discontinuation: introducing an MAOI remains dangerous during this period.
- + Lithium: risk of confusion
- + Neuroleptics: extrapyramidal disorders, heart rhythm disorders
- + Triptans: serotonin syndrome
- + Carbamazepine: overdose signs (nausea, vomiting, visual disturbances, tremors, dizziness)
- + Phenytoin: tremors, headaches, cognitive disorders
- + Oral anticoagulants: occasional bleeding (increased monitoring needed)
- + Diuretics: numerous reports of hyponatremia, potentially serious (particular caution in the elderly)
c- Noradrenaline and serotonin reuptake inhibitors: "dual-action antidepressants"
1- Venlafaxine (Effexor®): inhibits the reuptake of both serotonin and NA; its pharmacological effects are similar to those of the tricyclics, but it carries fewer side effects.
Contraindications: combination with non-selective MAOIs or with MAO-B inhibitors.
An interval of 15 days must be respected when switching from an MAOI to venlafaxine, and 7 days in the reverse direction.
2- Milnacipran (Ixel®): a reuptake inhibitor of serotonin and NA, indicated in major depressive disorder in adults.
Adverse effects: dizziness, hot flushes and sweating, digestive signs (nausea and vomiting). Exceptionally, serotonin syndrome may occur, the risk being increased by combination with MAOIs.
Contraindications: combination with MAOIs.
d- Monoamine oxidase inhibitors (MAOIs):
There are two forms of MAO:
MAO-A (responsible for the degradation of the monoaminergic amines: noradrenaline, serotonin, and many dietary amines such as tyramine).
MAO-B (responsible for the conversion of certain potentially toxic amines involved in neurodegenerative processes).
Most MAOIs are non-specific or poorly specific and inhibit both MAOs, but it is known that the antidepressant effect is related to inhibition of MAO-A.
Two types of MAOIs are used as antidepressants: non-selective MAOIs and selective MAO-A inhibitors.
1- The irreversible, non-selective MAOIs:
They bind covalently to MAO; the result is an irreversible, long-lasting inhibition (of the order of several weeks) leading to destruction of the enzyme, hence the term "suicide inhibition".
Restoration of MAO activity after a single dose of an irreversible MAOI requires 8 to 15 days.
They include:
| INN | Trade name |
| Iproniazid | Marsilid® |
| Nialamide | Niamide® |
Their effectiveness is comparable to that of the reference tricyclic antidepressants, but they are never offered first-line because they are difficult to handle. These products are reserved for depression resistant to well-conducted tricyclic treatment; in practice they are rarely prescribed.
a- Side effects:
- Antimuscarinic effects
- Orthostatic hypotension
- Insomnia
- Sudden hypertensive crises after dietary intake of tyramine
- Severe headache
- Cytolytic hepatitis (rare)
- Suicide attempt, inversion of mood, delirium
b- Drug interactions:
- Tricyclic antidepressants, SSRIs, and NA/5-HT reuptake inhibitors = serotonin syndrome
- L-DOPA = potentiation of pharmacological effects
- Reserpine = psychomotor agitation
- Alpha sympathomimetics, indirect sympathomimetics = hypertensive crises
- Do not combine with foods rich in tyramine and tryptophan (cheese)
2- The reversible, selective MAO-A inhibitors (MAOI-A):
MAO is not destroyed, so the blockade is reversible and limited in time. They include:
| INN | Trade name |
| Moclobemide | Moclamine® |
| Toloxatone | Humoryl® |
a- Side effects:
- CNS: insomnia, irritability, agitation (moclobemide)
- Digestive disorders
- Dizziness, headaches
- Severe hepatitis (exceptional)
- Lifting of psychomotor inhibition
b- Drug interactions:
- + Tricyclic antidepressants: hyperthermia, convulsions, and coma.
- The combination of levodopa and MAOIs causes agitation and hypertension.
- MAOIs may interfere with the metabolism of other drugs. They prolong and increase the effects of central depressants such as general anesthetics, sedatives, alcohol, and antihistamines.
e- Other antidepressants:
These antidepressants are more recent and have features specific to each specialty. They do not show the toxicity of the tricyclics (they often lack parasympatholytic effects).
| Drug | Mechanism of action | Pharmacological effects |
| Mianserin (Athymil®) | Presynaptic α2-adrenergic receptor antagonist | No parasympatholytic effects or cardiotoxicity; potentiates the effects of alcohol and barbiturates |
| Tianeptine (Stablon®) | Dopaminergic pathway | Intermediate antidepressant effect |
| Viloxazine (Vivalan®) | Noradrenergic pathway | No sedative effect |
1- Side effects:
Rare: gastric (gastralgia, nausea, dry mouth) or neuropsychiatric. Some specific but serious reactions are known:
- Mianserin: agranulocytosis (requires immediate and definitive withdrawal of the treatment)
- Tianeptine: hepatitis (requires immediate discontinuation)
2- Drug interactions:
Tianeptine, mianserin: combination with non-selective MAOIs = risk of serotonin syndrome. | https://www.medicinus.net/antidepresseurs/?lang=en
Documents relating to:
Bullies: Helping Your Child Cope
Related Items
News
1 in 5 Colorado Teens Has Easy Access to a Gun: Study
329 Americans Are Injured by Guns Every Day: Study
5 Million More Americans Became Gun Owners During Pandemic
Being Bullied Often Leads Teens to Thoughts of Violence
Buying Gun During Pandemic Might Raise Suicide Risk
Childhood Trauma Linked With Higher Odds for Adult Neurological Ills
Could You Save a Life After Mass Violence? Most Americans Say No
Counseling on Gun Safety Could Cut Suicide Rate in Military: Study
Death Threats, Trolling Common for Scientists Who Speak to Media About COVID
Does Hostility Predispose You to a Second Heart Attack?
Effects of Gun Laws Cross State Borders, New Study Suggests
Far More American Teens Carrying Guns These Days, Study Finds
Firearms Now the Leading Cause of Death Among U.S. Kids, Teens
Gun Deaths Continue to Rise in America's Cities
Gun Licensing Laws Help Keep Murders, Suicides Down
Gun Sales in Homes With Teens Rose During Pandemic
Gun Suicides Are Rising Steeply Among American Youth
Gun Violence Costs U.S. Health Care System $170 Billion Annually
Gun Violence Wreaks Havoc on Lives of Survivors, Their Families
Gun-Related Spinal Cord Injury in Childhood Brings Hardship Later
Handgun Ownership Raises Odds for Gun Suicide
High-Profile Police Brutality Cases Harm Black Americans' Mental Health: Study
Injuries From Bikes, Guns Rose During Lockdowns
Knowing What to Expect May Help After Sexual Assault
Many U.S. Mass Shooters Had Untreated Mental Illness: Study
Mental Illness Not a Factor in Most Mass Shootings
More Evidence Spanking Kids Doesn't Work, Can Cause Harm
More Guns on TV, More Gun Violence in Real Life: Study
Murders Surged in U.S. in 2020
Narcissist's 'Thin Skin' Can Easily Lead to Aggression
Neighborhood Gun Violence Means Worse Mental Health for Kids
One Type of Injury Should Raise Red Flag for Domestic Violence
Over Half of Police Killings Aren't Reported, Blacks Most Likely Victims
Pandemic Silver Lining: Global Decline in Urban Crime
Pandemic Stresses Enough to Trigger Political, Social Unrest: Analysis
Pandemic-Linked Rise in Crime Hit America's Poor Neighborhoods Hardest
Philly Study Finds Lockdowns Linked to Spike in Gun Violence
Poll Finds Americans Highly Stressed by Politics, Pandemic
Practice Gun Safety for Your Kids' Sake, Especially During Pandemic
Should There Be 'Gun Retirement' for the Elderly?
State Spending on Poverty Really Pays Off for Kids: Study
Suicides Involving Guns Have Key Differences, Study Shows
Tough State Gun Laws Help Save Lives: Study
Tougher Gun Laws, Fewer Gun Deaths: Study
Tougher State Gun Laws, Less Gun Violence Among Teens: Study
'Trigger Warnings' May Do More Harm Than Good, Study Finds
U.S. Gun Violence Rates Jumped 30% During Pandemic
U.S. Murder Rate Up 30% During Pandemic, Highest One-Year Rise Ever
U.S. Teachers Often Faced Harassment, Violence During Pandemic: Poll
Want Less Violent Prisons? Plant More Trees
When a Handgun Is in the Home, Suicide Risk Quickly Rises
When Black Americans Encounter Police Violence, High Anxiety Often Follows
White House Announces Plan to Reduce Gun Suicides
Who's Most Likely to Get Bullied at School?
Why Losing Someone to Violence Can Be Especially Tough to Get Over
Will the Pandemic Surge in Gun Purchases Raise Suicide Rates?
With Tighter Handgun Laws, U.S. Would See Fewer Suicides by Young People | https://healthlibrary.brighamandwomens.org/RelatedItems/RelatedDocuments.pg?d=&TypeId=1&ContentId=4488&Category=News