How I became Nicest Bassist. Part 2.

Getting ready for my first gig with Emm seemed easy enough. Four tunes to learn. A rehearsal or two to run through the tunes. How hard could it be? I remember back in my days playing in bands in high school where you'd think nothing of learning 30+ tunes for a show. I never wrote anything down back then. No charts, no road maps for where the tune goes. I just did it and went to a show and probably never even thought about whether I'd remember the tunes or not. Perhaps a 15+ year break from music has something to do with it, but as I mentioned in Part 1, I had become comfortable with using charts and fakebooks with a music stand on many gigs.

There are people out there who know every jazz standard in every key . . . backwards. There are people with perfect pitch who know that when a car tire screeches, it's a C#. For the most part those musicians don't need a book. I remember doing a gig in the last couple of years with Chris Norley and Charlie Rallo, two of London's great jazz players. Charlie teased me, sort of, by asking Chris, "What does he have a book for?" I'm sure Charlie was further horrified by the fact that I brought an electric 'popsicle stick' double bass to the gig. I digress, but I am not one of those players with either a great memory or great ears.

I played Emm's tunes over and over to get them in my head and ears. My first rehearsal with Emm, I suspect, was for her to see who this bass player was and whether he could measure up. I really didn't want to use a music stand or notes on her show, so after getting together a couple of times with Emm as well as Ashley and Jenna, it was show time.

The London Music Club show in May was sold out. I've played a few times at the LMC when there have been a lot of people there. I've also played with the Karen Scheussler Singers to a packed house, but this was different. Emm was essentially playing solo, with Ashley, Jenna and myself being called up during the course of the evening. Nowhere to hide on this gig. The pressure really felt like it was on! I had to sit and wait at my table as I prepared to fly without a net. Suddenly I was more nervous for a gig than I had been in months, if not years. Would I remember the tunes? What key is Lose My Head in? C? Wait. Bb? No, that's Get Brave. Speaking of keys, Stray Bullet is in F#. During rehearsals I asked Emm what possessed a keyboard player to write a tune in F#. Lots of black keys on the piano. Because it sounds right in that key. It certainly does.

I got brave and got up there. No turning back. In the end the night went well. I got through my four tunes without any major dramas, and as soon as the night was over I wanted to play it all over again. It was at this gig that Emm said I was one of the nicest musicians she had met. At this point I could only hope that there would be more shows down the road, but either way it was a great moment for me. My return to music started at the LMC, and here I was playing bass for Emm Gryner at the LMC. What a great journey.
http://steveclarkonbass.com/2010/12/20/how-i-became-nicest-bassist-part-2/
You're sitting in a concert hall. The crowd tenses in anticipation as the vocalist approaches the song's high note. Abruptly, the excitement changes to wrinkled noses and disgruntlement when the singer misses it. What happened? Why can most of us hear when someone sings off key? We went to the Department of Musicology at the University of Oslo (UiO) to find out.

Most of us can hear when music is out of tune. So why can't all musicians hear it?

"We're used to hearing music that isn't live. The music is recorded in the studio and edited with advanced applications. Studio recordings have been going on for a long time, but in the past we didn't have tools that could adjust the music perfectly. Because we're so used to hearing studio-recorded and edited music, I think we're getting pretty strict reference points for what is and isn't pure," says Associate Professor Åshild Watne at UiO's Department of Musicology.

"It may be easier to hear off-key notes in others than in oneself. Singing is more difficult than listening and requires more motor control. The vocal cords have to create tones that leave your mouth. These sounds come back in through your ears, get analysed and then corrected if necessary. That can be challenging," says Professor Hallgjerd Aksnes.

Both Aksnes and Watne believe that most people can get better at singing and hearing tones. "Very few people are unable to perceive and distinguish tones from each other at all. You could compare that condition with being colour-blind. You have the sense of sight, but the brain can't understand the colours it's seeing," says Watne.

But what exactly is a pure tone? "The standard answer is that a completely pure tone is a sine wave tone," says Aksnes, "that is, an even Hertz frequency that forms an s-shaped sine wave." However, she says, a sine tone can only be produced using technology. And even pure sine tones will sound "off" – with a vibration or harshness in the sound – if they have a small difference in frequency. In the same way, you can hear "beats" when two piano strings or two singers aren't perfectly in tune.

Natural overtones always occur when someone sings or plays an instrument. That is why pure sine waves, which don't have any overtones, can only be produced technologically. Overtones are an infinite number of tones above the fundamental tone that we don't necessarily notice. An overtone series is a series of natural overtones that occur above a fundamental. Musicians on instruments like the jaw harp and flute consciously use overtones by changing the shape of their mouth or varying the airflow to accentuate different ones. "Since overtones occur naturally, you could call them completely pure tones," says Watne.

In this video you can hear the first overtones in the series. First they are played separately, as sine wave tones. Then the tones sound together, as when someone plays an instrument or sings.

Sometimes notes may be sung accurately but still be perceived as being out of tune – and vice versa. "Genre has a lot to do with it. Ole Paus and Bob Dylan can sing tones as pure as a classical singer's, but the genre requirements for how to sing the tone are completely different. Likewise, we can easily tolerate an 'out of tune' note in a blues tune, because it's part of the genre's expression. But the last note you play or sing in a song should be pure. Otherwise, I think we perceive it as sour, regardless of genre," says Watne.

Often we think music sounds out of tune if musicians fail to hit exactly the same note.
If the vocalist doesn't stay in the same key as the band, or the flute plays a note slightly higher or lower than the rest of the orchestra, our ears notice that something is amiss. A note's relationship to its neighbouring tones is an important element for our ears in determining whether the note is in tune or not.

In Western music we are actually used to hearing impure music, according to the researchers. A piano has seven octaves. An octave extends from one note, C for example, to the next C, and contains twelve half-steps, or semitones. On modern pianos that use equal temperament tuning, the distance between each semitone is 100 cents. This is not completely pure according to the natural overtones, or what is known as just intonation (pure tuning). In just intonation, the distance between each semitone is not exactly 100 cents. Pianos are tempered so that notes played together will sound pure, regardless of the key they're played in. That's why we don't perceive tempered tuning as impure, although technically it is.

You can press the keys to hear the notes. The keys at the top are from a tempered scale, while the keys at the bottom are from a just scale. As individual notes sound, the just and tempered tones are very similar, but when multiple notes are played simultaneously as chords in a piece of music, the differences in sound increase.

In the video below, Watne and first-year musicology students attempt to find and sing natural overtones. They sing a triad with a fundamental, a fifth and a major third. "Third" and "fifth" describe the note's step on the scale. In the piano illustration above, the fundamental would be C, the major third E and the fifth G. The singers' fundamental corresponds with the piano's. The fifth also correlates fairly well, but the singers' third doesn't exactly match the piano's. This happens because the distance from the fundamental to the major third is 400 cents on a tempered piano, but only 386 cents in the natural overtone series.

Watne believes the student chorus sings most purely a cappella, that is, without piano accompaniment. "You can choose to play or sing in the direction of just intonation, like the notes in the overtone series, rather than in tempered tuning. Doing this will make the notes and chords technically purer than a piano's. But you always have to compromise so that it sounds good," says Watne.

Tempered tuning has been the standard for pianos for many centuries. But in other musical cultures, different scales are more common. So are the piano's twelve semitones the only tones available? The answer is no. Between each semitone there are infinitely many other tones, which might be perceived as impure by current Western music ideals. Norwegian folk music and Arabic music, for example, use these microtones in exactly the same way as the twelve "pure" tones. You can hear an example of the use of microtones in Norwegian folk music below.

The researchers emphasize that most individuals can learn to sing in tune with practice. "Many children sing with a relatively pure tone, but they tend to be a little flat when they come to the song's high notes. But you can practice raising your pitch at the high notes," says Aksnes.

"Concentration is sometimes also a factor with children. Their tone is pure, but then it goes a bit flat, and suddenly they've changed key. Their singing is pure in the new key, but off in the original one."
"In choral work I also notice that the fifth is often too low, and this causes the intonation of the whole chorus to drop. If you go a little light on this note, it's easier for the chorus to maintain its pitch," says Watne.
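The cent figures quoted above are easy to check. Here is a minimal Python sketch (my illustration, not from the article) that reproduces the 400-versus-386-cent gap for the major third and the "beats" the researchers describe; the 440 Hz and 442 Hz example frequencies are assumptions chosen purely for illustration.

```python
import math

def cents(ratio):
    # Interval size in cents for a frequency ratio; 1200 cents = one octave,
    # and one equal-tempered semitone is exactly 100 cents.
    return 1200 * math.log2(ratio)

just_third = 5 / 4              # major third from the natural overtone series
tempered_third = 2 ** (4 / 12)  # four equal-tempered semitones

print(f"just major third:     {cents(just_third):.1f} cents")      # ~386.3
print(f"tempered major third: {cents(tempered_third):.1f} cents")  # 400.0

# "Beats": two nearly identical pitches sounding together pulse in loudness
# at the difference of their frequencies, which we hear as being out of tune.
f1, f2 = 440.0, 442.0           # e.g. two piano strings tuned 2 Hz apart
print(f"beat rate: {abs(f1 - f2):.1f} beats per second")
```

Running the sketch shows why the sung third differs from the piano's: the two thirds disagree by about 14 cents, a gap wide enough for attentive ears to notice when the notes sound together.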
http://sciencenordic.com/what-pure-tone
Understanding What Kind Of Guitar Player You Are

Over the years working with people as a guitar and music teacher, I have found 4 main types of personalities that people have when it comes to how they react to learning and playing music. It's important for you to figure out what type of personality you have so that you can maximize the time you spend playing and practicing guitar. It's also important to note that these are just the most common traits that I see and work with. There are certainly more, and you don't necessarily need to be just one. In no specific order:

#1 - The Emotional Player: These are the types of people who can let go of their thinking and just play the instrument with lots of emotion and expression. I often find that these types of people physically use a lot of their body to feel the rhythm of what they're playing and don't mind letting their emotions take control of them.

Things to focus on: I usually find that these types of players are not that interested in reading music and would rather learn songs by having other people show them how to play or by figuring them out by ear. The best thing to do if you're a new player is to get proficient at playing open chords, movable bar chords and power chords. You have a lot of feeling and energy that you are dying to let out, so the quicker you can play the essential chords that are used in guitar, the quicker you will be having fun playing what you want to play.

#2 - The Songwriter: This is the type of person who loves to write their own music and create songs from scratch. You usually have a lot to say and like to share your thoughts with other people through songs. I have worked with some people who don't care much to learn other people's songs, and I have worked with other people who don't even realize that they are capable of writing their own music. In either case, I firmly believe that writing music is an extremely important skill for an aspiring musician to learn. It gives you a lot of insight as to what type of player you are and what skills you need to work on.

Things to focus on: You don't have to be a musical genius to start writing your own songs. Some people are scared to even try it; others NEED to do it to get out all of their ideas. A lot of the most famous songs in modern music are extremely simple and easy to play, with only a few chords that make up the whole song. Along with knowing how to play the basic open chords and having a good understanding of melody, I always encourage songwriters to study a bit of music theory so that they understand harmonic structure and how songs work to sound pleasing to the human ear. Even if writers can crank out songs like lightning, they usually, at some point, get frustrated because they only have a limited knowledge of how music works and get stuck writing the same old song time and time again. Understanding theory will give you more options and ways to be expressive in songwriting.

#3 - The Technical Player: These are the types of players who love music theory, playing physically difficult songs, and usually enjoy reading music. The ability to read music is not a must; however, it does enable a player to understand much more about a song, both emotionally and in how to perform it with more expression. If you like a good challenge and don't mind spending a lot of time perfecting a certain song or technique, then you will probably find yourself in this category a lot of the time.
Also, the theory side of music, which is very math-oriented, is usually very interesting to you.

Things to focus on: Technique, technique, and more technique! If you like playing technical music you are going to want to focus a lot on perfecting your technique. This includes every aspect of how you pick (whether it be with a pick or with your fingers), strum, slide, sit, stand, etc.! Having a big vocabulary of scales, arpeggios, and patterns to link them together is also a must. Playing fast is also something that you are going to want to spend some time developing. The ability to rip through passages at very high tempos is incredibly FUN and EXCITING! If you focus on the right learning methods, it's not as hard as you might think, although it does take a lot of continued practice to get your muscles up to snuff to keep up with the fast pace.

#4 - High Interest, Low Attention Span: I couldn't think of a specific name for this description, so sorry, but that's the best I could do. A lot of people with ADD can fall into this category, but that's certainly NOT a required trait. Now you might already be thinking that falling somewhere near this category puts you at a disadvantage, but I will tell you from experience, nothing could be further from the truth. I have worked with a number of people who obviously have a really hard time keeping their thoughts in one place and focusing on a certain topic. While that might be the case, these people also had incredible pitch perception (sometimes even what's known as perfect pitch) and the very valuable ability to learn music by ear. I couldn't be more envious of these types of people because I can assure you that I am not one of them. Learning music by ear and developing pitch perception has never come easy for me, and I have had to work my tail off to be able to do it well. I have seen some students who are able to master this ability with ridiculous ease.

Things to focus on: Understanding theory and reading music is usually difficult for these types of people. When I work with people like this, I always focus on getting their technique as good as possible because they already have the tools inside of them to be an amazing player; it's just a matter of giving their fingers the ability to move the way they need to so that they can play what's inside of them. It's good to do a lot of ear training exercises and to learn songs by listening to them. After they get to a certain point, they usually form an interest in theory and want to understand more about how music works. This is because they become somewhat discouraged at what they don't know and understand about music. I'm a firm believer in teaching what people want to learn and not pushing too hard for them to learn what they don't. There are certain things that are crucial for every guitarist/musician to understand, so I have always found a way to teach those things in a manner that best fits their personalities.

Again, these are just the main personality characteristics of the people that I work with. There are plenty more, and of course you can be some of one and some of another. The reason I think it's important to understand where you might fit into all of this is because it will enable you to have a lot more fun learning guitar and not get so frustrated when things become difficult. Always focus on learning the things that you want to be able to do, and the rest will come with time if you are looking to become a well-rounded guitar player.
If someone says that knowing how to read music is essential for a beginning guitar student and you do not take to reading easily, then I'm pretty sure you're not going to be having much fun when you start out. One very important thing that you need to ask yourself is what style of music you are looking to play. If you say Classical or Jazz, then I say that hands down, without a doubt, you need to learn to read music. However, most other styles of music can be played, written, and performed well without knowing how to read notes on the staff.

7 Comments

- December 25, 2010 at 4:52 pm
You know, I never thought of it like that and am glad I took the time to read this. Thank you so much for writing about this aspect of playing! I probably fall into a split of 60% v 40% 1&4, as I'm now trying to improve and identify areas where I would gain the most or best ground. I've been playing for about 40 years, give or take some periods of abstinence entirely, and was impressed with your web offering, and now after reading this, your thought process and character. Peace and thanks! I don't chime in like this ever really.

- January 19, 2011 at 6:05 pm
i tend to get frustrated easily, also im still searching for that perfect amp and perfect tone that still drives me nuts, and perfect guitar too. but im working on keeping my temper in check because it obviously does no good to lose it over needing more practice or something, or just plain being too tired and needing to rest first then play.
https://www.rockguitarpower.com/understanding-what-kind-of-guitar-player-you-are/
What do artists like Ed Sheeran, Paul McCartney, Prince and Miles Davis all have in common? They are all well-rounded musicians. When people discuss their favourite musicians, the phrase 'well rounded' often gets thrown around, so let's dive in and unpack what it really means to be a well-rounded musician and why it's so important for your growth. Successful musicians possess more than just a high degree of proficiency on their instruments. They are, more often than not, extremely dedicated, passionate and hard-working individuals who have committed their lives to their craft.

Improvisation

Improvisation is a great way to inspire creativity. Guitar master Yngwie Malmsteen had this to say about his songwriting process in an interview: "Improvisation is the genesis of composition. In order to come up with something cool, you have to jam. Here's what I do: I have a little Marshall amp in my living room and it's always connected with a guitar. I just sit on my couch and play around while watching TV. If I come up with a really cool melody or chord progression or riff, it escalates from there."

Practicing on your own isn't the only time to work on improvising. Getting together with friends to jam is one of the most fun things to do as a musician and will be quite beneficial to your development as a player. With the Greater Toronto Music School band program, you can be matched with like-minded students who want to play music with other people, and jam sessions give you the opportunity to take what you've learned in your lessons and put it into practice. Improvising is a skill in its own right as well, and our teachers can work with you to develop your rhythmic, melodic and harmonic vocabulary to equip you with ideas for when you are improvising.

Certain genres of music, like jazz, are heavily focused on improvisation. These songs usually start out with a melody, called 'the head', over a set of chord changes known as 'the form'. After the band plays the head, each musician has a chance to improvise over the form of the song. Each soloist may take several 'choruses', or cycles through the form. Jazz musicians also do something called trading, where each musician takes turns soloing for a set length, often four bars, though this varies. While trading, the band is still aware of the form and the chord changes, and this is how they can seemingly come back in together out of nowhere. Improvising is integral to taking these musical compositions to the next level and often leads to the most exciting and spontaneous moments in a concert. Contact us today to learn more about how our in-home or online music lessons in Toronto can help you.

Composition

Yngwie Malmsteen's approach of jamming and improvising to compose is not the only way to write a song. Each musician and composer will have a different approach, as different things work for different people. Often, musicians will begin with a clear idea -- this could be a mood that they are trying to convey, a simple progression, rhythm or melody that came to them, or maybe some lyrics that need music to accompany them. Sometimes, people will sit down with no clear ideas, just to explore harmonies, melodies, or motifs in an attempt to create a new and inspiring idea.
Saxophone player Kamasi Washington describes his EP Harmony of Difference as an exercise in counterpoint, which he defines as "the art of balancing similarity and difference to create harmony between separate melodies." Other people enjoy writing songs more than performing them themselves, like Max Martin, who has written hit songs for everybody from Britney Spears to Taylor Swift to The Weeknd. The online and in-person music lessons in Toronto that we offer can teach you the theories and techniques that you need to write your own songs and help you gain the skills that you need to break down and analyze the songs that you love. Composition and songwriting are big parts of what makes a well-rounded musician, and the theory that our teachers can introduce you to can take your songwriting further.

Reading

When the band Snarky Puppy was getting ready to record their 2014 album, We Like It Here, their drummer Robert 'Sput' Searight wasn't able to make it to the recording. With only a few days until the album was recorded, Toronto-based drummer Larnell Lewis was called for the gig. The album was being recorded in the Netherlands, and Larnell learned how to play most of the songs on the album by reading and studying the sheet music on his flight. This is an impressive feat in any event, but considering the complexity and difficulty of the music being recorded, it is absolutely mind-blowing. This incredible performance catapulted Lewis into the international spotlight, and he is often called the greatest drummer of our time by listeners all around the world.

Although reading is not absolutely vital to your success as a working musician, having strong reading skills will make you a top call in the event that a band needs a last-minute sub for a gig. Certain big band gigs or commercial recording studio gigs will also require that the musicians are able to read, and reading music is also a requirement for college music auditions and conservatory prep exams. The instructors at the Greater Toronto Music School can give you all the tools that you will need to work on your reading with our online or in-person music classes in Toronto.

Ear Training

Strong listening skills and the ability to decipher the chords and intervals that you are hearing are extremely valuable for any musician. Some people are born with the gift of 'perfect pitch' and are able to identify any note that they hear on command. The rest of us have to work at this, and while we can gain 'relative pitch', we will never have true perfect pitch. The ability to listen to a song and understand what is happening is a skill that any well-rounded musician needs to have, as it will help you to learn and transcribe what you are hearing easily and efficiently. Music lessons in Toronto can teach you the tips and tricks that you need to know so that you can improve your ear's ability to identify notes and chords. These skills go beyond just learning songs: if you are called on stage at an open mic night or show up to a jam session, when you can easily identify the chord changes that the other musicians are playing, you can seamlessly join in and add to the music.

Production

The music industry is rapidly changing, and with it, microphones, interfaces, studio monitors and digital audio workstations are becoming increasingly accessible. Post Malone released his first single, White Iverson, on Soundcloud in 2015, and this self-produced track catapulted him to stardom.
The Australian guitarist Plini is another person who got his start producing his own music at home and had amassed a huge fan base before ever going on tour. Another well-known example of this is Billie Eilish, whose Grammy Award-winning album was produced in her home. Well-rounded musicians know that learning how to use these platforms to record professional-sounding music on their own is an excellent way of getting their music out to as many people as possible. It also gives you the ability to be hired for home recording sessions, which can be a great source of income. Greater Toronto Music School offers classes on how to use digital audio workstations. Whether you want to use Ableton, Logic, Pro Tools, Garage Band, or anything else, our teachers have you covered, and you'll be able to record and produce your own music in no time.

Become a Multi-Instrumentalist

The most well-rounded musicians often play many instruments at a high level. Earlier in this post we mentioned a prime example of this in Paul McCartney, who just put out a brand new album on which he plays every single instrument. There are many advantages to learning multiple instruments, and doing so will almost certainly help you progress on your main instrument in ways that you did not imagine. A pianist who decides to learn how to play the guitar may be inspired to write a piece using new ideas that would never have come to them while playing the piano. A drummer who starts playing piano may become more aware of melodies and harmony, making sure to complement them more when he or she plays. Any rhythm section player who learns to sing can provide backup vocals, adding rich layers to the music. These are not the only ways that learning other instruments will help you become more well rounded. Multi-instrumentalists have become more common in recent years. We have seen the emergence of prodigies like Jacob Collier and Louis Cato, and it's not uncommon to see the same person playing different instruments on several high-profile gigs. Contact us today if you are looking for online music lessons or in-home music lessons to start learning your second instrument!

Playing your instrument well is just one component of being a musician. In the current landscape of the music industry, musicians need to wear multiple hats to get ahead. The teachers from Greater Toronto Music School can help you get there. We are focused on inspiring your passions through quality, fun, and engaging music lessons.
https://www.greatertorontomusic.ca/post/becoming-a-well-rounded-musician
This post has been modified to reflect new information since its original publication.

The age at which children start using computers has been trending downward due to the increasing availability of educational and entertainment software products aimed at preschoolers and toddlers. However, just because your kid has mastered a child-friendly platform like ABCmouse doesn't mean he or she is ready to navigate the risks (like cyberbullying, stalking, and self-esteem issues) associated with having a social media account. So, how can you tell if your child is ready to start using social media, and what precautions should you take to protect his or her safety and online reputation? To help you make the right decision, ask yourself the following questions.

Does your child meet the minimum age requirements?

Social networking websites (and online games that enable users to interact) have minimum age requirements that users must meet before they can sign up. For example, many social networks, including Facebook, Pinterest, TikTok, Instagram, and Snapchat, require that a child be at least 13 years old to use their websites. If your child meets this requirement, then he or she has a better chance of being mature enough to handle the responsibility of a social account. If your child isn't old enough, according to a particular site's rules, then you'll have to weigh your child's level of emotional and intellectual maturity against the unique risks that the site poses.

Of course, it's easy for kids to simply enter a fake birthdate and sign up for one of these sites. The good news is that there are ways to lower the chances of your child using social media before he or she is ready. One of the most effective strategies is to educate yourself about which sites are popular with young people and then use this information to establish regular, ongoing discussions with your child from an early age about the risks and benefits of these platforms. During these talks, you can also share your expectations regarding when and how your kid will eventually use social media. For example, he or she will only be able to use social media on the family computer after completing their homework, or he or she can only sign up for one social platform, which you also have access to. If your child is responsible enough to follow your rules, then he or she is probably ready to handle the responsibility of a social media account.

Does your child know the risks of being on social media?

It's difficult to defend yourself against an unknown threat. Therefore, you need to let your kids know what they're up against before they ever go online. Some potential dangers of using social media include:

- Cyberbullying—Children can be cruel, and the effects of bullying are often amplified in social media. Teach your children the importance of not spreading rumors or mean comments about other kids. Likewise, teach your children that they should come to you if someone is bullying them online.
- Compromised privacy—Make sure that your children know to never post their address, phone number, email, or other personal information on any social network. Sharing this kind of information can put them at serious risk of identity theft or stalking.
- Predators—Teach your kids not to accept friend requests or respond to messages from people they don't know in real life. Unfortunately, there are many predators out there looking to trick young people into doing things that make them unsafe.
Your kids should be aware of these risks and know how to mitigate them. Only after your child thoroughly understands the risks inherent in online interactions will he or she be ready to open a social media account.

Can you successfully monitor your child's online activity?

Research shows that children can't fully understand the consequences of their actions until they are in their mid-20s. As such, even the most responsible children can benefit from parental supervision of their social media interactions. So, before you decide to let your kids on social media, you need to determine if you have the bandwidth and knowledge necessary to protect them from posting something they may later regret. If you can't commit to frequent audits of your child's posts and messages, whether this is due to a lack of time or to not understanding how a platform works, then you might want to hold off letting your child set up social accounts until you have the time to supervise them properly.

Does your child understand the internet is forever?

Although it's easy to think of online communications as quick and fleeting, those awkward or inappropriate things your kid posts today will still be findable for years to come. Even if your child quickly sees his or her error and removes an ill-conceived post, someone might have already taken a screenshot of it in the few minutes it was visible. This means they could still repost it, thereby damaging your child's online reputation (what people find when they search for your kid online) and limiting his or her educational and career options. Consequently, you should only permit your kid to have an account once you're confident they understand the impact social media activity can have on their life. For more information about protecting your child's online reputation, see Tips for teenage online reputation management.

Does your child know proper online etiquette?

Before your child starts engaging on social media, it's important to teach him or her how to successfully navigate the online landscape without compromising his or her safety or privacy or that of other people. A good way to do this is to have them stop and T.H.I.N.K. before posting or sharing anything:

- T: Is it True?
- H: Is it Helpful?
- I: Is it Inspiring?
- N: Is it Necessary?
- K: Is it Kind?

If your child answers "no" to any of these questions, then he or she should not share that content publicly. In fact, knowing what not to post on social media is a key indicator that your kid might be ready to open his or her own account.

*****

Now that you know what factors are involved in your child being ready for a social media account, you can make a decision that works for your family. You can find more information about how to keep your child safe online in the following articles:
https://www.reputationdefender.com/blog/protect-your-kids/should-your-kids-use-social-media
1. Be the one to introduce your child to the Internet
Try to find web sites that are exciting and fun so that together you achieve a positive attitude to Internet exploration. This could make it easier to share both positive and negative experiences in the future.

2. Agree with your child on rules for Internet use in your home
· Discuss when and for how long it is acceptable for your child to use the Internet
· Agree how to treat personal information (name, address, telephone, e-mail)
· Discuss how to behave towards others when gaming, chatting, e-mailing or messaging
· Agree what type of sites and activities are OK or not OK in your family (remember that it's OK to decide on your own rules for your family, whatever others choose to do)

3. Encourage your child to be careful when disclosing personal information
A simple rule for younger children could be that the child should not give out their name, phone number or photo without your approval. Older children should be selective about what personal information and photos they post to online spaces. Once material is online you can no longer control who sees it or how it is used. Look for the privacy and security settings with your child and set them together.

4. Talk about the risks associated with meeting online "friends" in person
Adults should understand that the Internet can be a positive meeting place for children, where they can get to know other young people and make new friends. However, for safety and to avoid unpleasant experiences, it is important that children do not meet strangers they have met online.

5. Teach your child about evaluating information and being critically aware of information found online
Most children use the Internet to improve and develop knowledge in relation to schoolwork and personal interests. Children should be aware that not all information found online is correct, accurate or relevant. Educate children on how to verify information they find by comparing it to alternative sources on the same topic. Show them trusted sites they can use to compare information.

6. Don't be too critical towards your child's exploration of the Internet
Children may come across inappropriate material by accident on the Web. It is important that they feel they can tell you if this happens, so avoid scolding. A child may also intentionally search for such web sites; remember that it is natural for children to be curious about off-limits material. Use this as an opening to discuss the content with them and remind them that there are lots of things online which are not meant for children.

7. Report online material and contact that you consider illegal to the appropriate authorities
It is vital that we all take responsibility for the Web and report matters which we believe could be illegal. By doing this we can help to prevent illegal activities online, such as child exploitation or grooming. Visit https://www.ceop.police.uk/ceop-reporting/. There is an NSPCC helpline you can ring for advice on 0808 800 5000. If your child is in immediate danger, call 999.

8. Encourage respect for others; stamp out cyberbullying
There is an informal code of conduct for the Internet: be polite, use correct language, and do not yell at (write in capital letters) or harass others. In school we teach children to check what they are posting online using THINK (is it True, Helpful, Inspiring, Necessary, Kind?). Children as well as grown-ups should not read others' e-mail or copy protected material.
9. Let your children show you what they like to do online
To be able to guide your child with regard to Internet use, it is important to understand how children use the Internet and to know what they like to do online. Let your child show you which websites they like visiting and what they do there. Acquiring technical knowledge could also make it easier to make the right decisions regarding your child's Internet use.

10. Remember that the positive aspects of the Internet outweigh the negatives.
https://www.coleyprimary.reading.sch.uk/advice-and-support-for-parents/
This tip page is a synopsis of the "Be Involved" section of the Parent Tool Kit and was created by the PIC Planning Team for School Councils and PIC school reps to share with their parent community. This section of the tool kit is a great resource for parents on how to be involved and provide guidance to children in the online digital world.

'Social media' refers to the wide range of Internet-based and mobile services that allow users to participate in online exchanges, contribute user-created content, or join online communities.

Tip #1 - Share Digital Skills and Online Tools
- Teach netiquette (i.e. parental guidance on online behaviour, such as digital citizenship)
- Ensure device-free family time for face-to-face communication, such as supper time

Tip #2 - Know the Basics of Social Media Safety
- Find a balance between respecting a child's privacy and keeping them safe
- Talk to your child about why it's important as a parent to know their passwords or see the profiles and posts of their friends
- Know the rules of your child's school regarding internet and social media use and help support their enforcement
- If you feel your child is not yet ready to participate safely online, be prepared to say no
- Encourage your child to talk to you if they come across disturbing material on social media

"I don't want my parents spying on me, but at the same time, it's good to know they're there." - Student

Tip #3 - Help Your Children Deal with Online Relationships
- Encourage mindfulness of online interactions on social media (e.g. Would you say that to someone in person?)
- Advise your children never to post in anger, and to always review messages prior to posting
- Help them understand the differences between friends and acquaintances
- Emphasize that people they meet online may not always be who they claim to be

Tip #4 - Assist Your Children to Manage Their Online Identities
- Learn about the use of privacy settings on the social networking sites your children are using
- Discuss the risks involved and the impact of a digital footprint (e.g. posted content can be accessed by future employers)
- Caution children not to post identifying information such as phone number, address or where they go to school
- Advise them to share passwords only with you

Tip #5 - Reach Out for Help If Your Child Is Impacted by Cyberbullying or Sexting
- Electronic bullying, or cyberbullying, is electronic communication used to upset, threaten, or embarrass another person. It can be done via email, cell phones, text messages, and social media sites
- Encourage your children to let you know about incidents of cyberbullying right away
- Contact your child's school and work together with staff to bring about the best resolution
- Teens need to know that sexting and cyberbullying are serious activities that could lead to criminal charges

PIC Planning Team's Top Links
- Cyberbullying and its impact. "The Internet has no delete button. Bullying online lasts forever!"
https://www.eftsc.org/social-media-tips
At Arnold Hill Academy we take safeguarding very seriously and work hard to ensure that all children are safe at the academy. We have a strong pastoral support system in school, and pupils are taught how to keep themselves safe through lessons, Bright Days, assemblies and guidance sessions. Please look at the links below for additional guidance for parents/carers, particularly for E-safety.

We have four fully trained safeguarding officers in school: Simon Ward, Mel Loyeau, Pauline McLeod and Julia Adamson. If you have any concerns, please contact them via the main switchboard on 0115 9554804. Should you have any serious concerns about the safety of a child, in school or in the community, please report them immediately to Children's Social Care and/or the Police.

At Arnold Hill Academy we often use photographs and videos to capture our students' learning experiences and memories. These images may be used in the academy's prospectus and other printed material, as well as on our website and social media sites such as YouTube, Twitter and Facebook. From time to time, we may be visited by the media, who will take photographs or film footage of events. Students will sometimes appear in these images, which may then be published in local or national newspapers, in televised news programmes and on social media sites. We follow the guidance provided by the Information Commissioner's Office (ICO) on taking photographs in schools and the Data Protection Act 1998. Consent to use images and videos lasts throughout your child's time at Arnold Hill Academy and continues to apply to images already in circulation once they leave. You can write at any time and ask the academy to stop using your child's images; at that point they would not be used in future publications but may continue to appear in publications already in circulation. Changes to the information we hold about your child can be made at any time by logging on to Parent View on the academy's website. Further information can be viewed on the Parent View section of our website.

We R Here is a service dedicated to supporting people of all ages who have experienced traumatic life events such as grief, loss, bereavement and domestic abuse.

Children and young people spend a lot of time online. It can be a great way for them to socialise, explore and have fun, but children do also face risks like cyberbullying or seeing content that's inappropriate. More information about staying safe on the main social networks can be found using the links below.

- Where to report concerns about websites and social media sites. Also provides other advice for parents on keeping children safe when they are online.
- Run by Childnet International. Outlining the potential risks of interactive services online and advice on how to use these services safely.
- Watch this video! Useful advice on how parents and carers can "stay switched on" to online dangers when their children are playing Fortnite.
- Nottinghamshire Police recommend this site, which contains expert information and advice for parents on keeping their children safe online.
- Child-friendly activities and advice to raise awareness of internet safety.

The Nottinghamshire Safeguarding Children Partnership provides the safeguarding arrangements under which the safeguarding partners and relevant agencies work together to coordinate their safeguarding services.
Their web site https://www.nottinghamshire.gov.uk/nscp includes useful information retained from the previous local safeguarding children board, and updated content will be added as it becomes available.

Please take the time to read this letter and leaflet, which contain useful advice about the potential risks around access to multimedia devices. Additional information about keeping your child safe online is available within this section of our website.

- Find out about keeping children safe from abuse and other dangers.
- Keep them safe: an interactive tool for parents.
- Understand the issue of child sexual exploitation, know the signs and be equipped to act.
- Lots of advice about keeping your child safe, as well as information on supporting them with other issues such as anxiety.

Often referred to as the 'new flirting for children', sexting is the act of sending sexually explicit messages, images or videos over a digital device. It's important that parents understand the risks and legalities around sexting to ensure that their children know how to protect themselves from harm. This free guide for parents & carers covers what they need to know to help keep their children safe, including the law, information about what to do if an indecent image/video has been shared, and more.

- Videos and information for parents and children on what's new in technology and how to stay safe.
- Useful resources for promoting the safe and responsible use of technology.

Arnold Hill Academy cannot be held responsible for the content of external links.
http://www.arnoldhillacademy.co.uk/index.php/staying-safe
Concerns that under-fives are becoming addicted to social media, and the impact of cyberbullying, are highlighted in a new report published today (Jun 11) by Barnardo's.

The report, entitled Left to Their Own Devices, assesses the impact of social media on children's mental health, while also recognising the many positives the online world can offer. The children's charity surveyed some of its children's services practitioners to build a picture of how the vulnerable children and young people it supports are affected by social media. Their insights indicate some children start looking at social media as early as two years old.

Half of the service practitioners responding said they had worked with children aged five to 10 who had been exposed to unsuitable or harmful material online, and more than one third said children in that age group had been victims of cyberbullying. When it comes to 11-15 year olds, almost 8 in 10 (79%) practitioners said children they work with have experienced cyberbullying, as did 58% for the 16+ age group, and this has led to self-harm and suicide attempts. Almost four-fifths of practitioners surveyed (78%) also said they had worked with children in this age group who had been groomed online, and 78% also said they'd worked with children in this age group who had accessed unsuitable or harmful content.

One 11-year-old was supported by Barnardo's after being driven to try to take her own life after being cyberbullied by children who discovered her dad had been jailed and was on the sex offenders' register. She said: "I got horrible messages from children saying 'Your dad's a pervert Grace, you might as well just kill yourself now'. I couldn't tell my mum because some of them said horrifying things about her too and I didn't want her to be upset and crying all the time again. Due to the comments, I began to hate myself and felt 'outside' of everything, so then I tried to kill myself."

Barnardo's Chief Executive Javed Khan said: "Although the internet offers incredible opportunities to learn and play, it also carries serious new risks, from cyberbullying to online grooming. And, as our new report shows, these risks can have a devastating impact on the lives of the UK's most vulnerable children. Recently, the Government has proposed welcome changes that would help regulate the internet and make it safer for children. It's vital that the next Prime Minister keeps up the momentum and focuses specifically on protecting the most vulnerable. Our new report also calls for more research to help us understand the impact of social media on children's mental health; high quality education for children, parents and professionals; and a focus on wellbeing in every school. Children today see the internet as a natural part of their world. Our job as a society is to make sure children are protected online just as they are offline."

However, Barnardo's practitioners agreed social media holds many positives for vulnerable children and young people, including reducing isolation and loneliness, the ability to experiment and establish their own identity, discuss their social and political beliefs, and connect with people dealing with similar experiences.

Further steps Government, parents, schools and services can take to help keep children safe and healthy online will be one of the topics up for discussion at the Barnardo's and Bright Blue conference tonight (Jun 11).
https://www.voice-online.co.uk/article/concerns-under-5s-are-becoming-addicted-social-media
How to Prevent Cyberbullying

Proactive measures for preventing cyberbullying are often overlooked. However, parents and carers can do more to educate the younger ones, who are more susceptible to being bullied online. According to research published in the Journal of Medical Internet Research (JMIR), victims of cyberbullying under the age of 25 are more than twice as likely to engage in self-harm and suicidal behaviour. Further findings by the cyberbullying inquiry report assert that children who are experiencing a mental health problem are over three times more likely to have been bullied online. As a parent or guardian, this should be a cause for concern, as the developmental years are the most crucial period in a person's lifetime. In this article, we have put together five important tips to protect your kids from cyberbullying and ensure internet safety.

1. Build the Right Awareness and Teach Them to Communicate
The first thing you should do as a parent is educate your child on cyberbullying and encourage them to report any form of abuse. Let them understand what it means and what constitutes cyberbullying. In doing so, they can spot a bully before becoming a victim. Also, they can respond appropriately by reporting to you if they experience cyberbullying. A child who is aware and knowledgeable on the subject is less likely to suffer from the impacts of cyberbullying.

2. Monitor Your Child's Online Activity
As we all know, the internet is the new world, only that it is a virtual world. By implication, almost every activity that takes place in the real world happens in the virtual world, from shopping to stealing and even bullying. So the same duty of care you owe your child offline should be extended online. A recent study by the University of British Columbia shows that cyberbullying is more devastating and much more frequent than offline/traditional bullying. Always check their computers, know what they do online and with whom they communicate. Do not get too busy to monitor your child's online activities; the implications could be devastating. It is strongly advised that you make use of monitoring tools and parental control software to ensure their safety. Do not let them on social media if they are not yet thirteen.

3. Set Limits and Boundaries
Limit how often your child is on social media and the internet. A study by the Rochester Institute of Technology shows that the more time a child spends online, the more likely he or she is to get cyberbullied.

4. Ensure That They Don't Compromise Their Privacy
Teach your kids not to share sensitive information and passwords with friends or strangers online. Let them know the implications of sharing personal and private material online, which include blackmail and extortion. Also, teach them to set strong and unique passwords for their social media accounts to prevent hacking.

5. Be Sensitive to Behavioural Change
Always watch your kids for any behavioural change and take appropriate action immediately. Kids may sometimes not see the need to report a case of cyberbullying because of fear of backlash, shame or threats. Research suggests that only 1 in 10 teenage victims is willing to inform a parent or trusted adult of their abuse. However, there could be other telling signs, which may include depression, reclusion or unusual silence.

Conclusion
In a bid to protect our kids against cyberbullying, let us also help protect other kids by educating our children to use the internet responsibly and practice correct netiquette.
https://esafetytips.com/2020/05/26/how-to-prevent-cyberbullying/
Social networking sites (like Facebook, Twitter and Instagram) are online communities of internet users. Members of the communities create online profiles which provide other users with varying amounts of personal information, as a means of breaking the ice and getting to know one another. Once users have joined the network, they can communicate with each other and share personal interests, in the hope of connecting with other like-minded individuals.

While these networking platforms serve the purpose of keeping family and friends connected to one another, they have also been abused by cybercriminals as platforms to prey on unsuspecting users, particularly children. Children are more likely to unwittingly expose their families to online risks, for example by downloading malware that could give cybercriminals access to their parents' personal information. As adults, we too fall prey to online scams by cybercriminals, which is what makes it imperative to protect our children on the internet. While cybersecurity software can help protect against some threats, awareness and education are key to ultimately safe usage. Here are some areas to consider for your children when they begin using social media:

The importance of personal information
Before your children start using social media, it's important that they understand who they should be connecting with and contacting, and who they shouldn't. Personal information such as location, email address, phone numbers and date of birth shouldn't be divulged to anybody online, whether they think they know that person or not. Thankfully, according to ESET's APAC 2018 Consumer Behaviour Survey, only 13% of Singaporean respondents approve anyone who adds them on social media.

Beware the wolves in sheep's clothing
Child predators can stalk children on the internet, taking advantage of their innocence, abusing their trust, and, perhaps, ultimately luring them into very dangerous personal encounters. Educate your children that people they speak to online may not necessarily be honest about who they are and what they are saying. Trust shouldn't be given easily. 19% of Singaporean respondents in ESET's APAC Consumer Behaviour Survey 2018 stated that they do not monitor their children's activity when they are on their smart devices.

Cyberbullying is still bullying
Social media and online gaming platforms are the virtual playgrounds of today's up-and-coming generation. In certain circumstances, the antics of the playground continue online, whereby children are mocked in social media exchanges or via online games, where their characters can be subjected to incessant attacks, turning the game into a humiliating ordeal.

Overall, social media can be a fun platform for children to explore and enjoy. The best foundation a parent can provide is not to be overly strict, as a total ban may push your children to hide their online profiles and activity from you. It's important that they include you in their endeavours so that you know who they are speaking to and are able to monitor their activity.
https://www.eset.com/sg/about/newsroom/press-releases1/eset-blog/the-dangers-that-lurk-amongst-us-1/
Today, digital technologies are everywhere. Technology has created some beautiful things, like communications assistive technology for children who don't use speech as their primary communication method. At the same time, there are also some problematic aspects to navigate, like cyberbullying. This article will explore where children are currently sitting with the effects of digital technology, predominantly within four groups: babies and toddlers, school-age children, teens, and young adults. We'll share tools you may find helpful to support your child in interacting with technology in a way that is developmentally appropriate and supports their overall health and well-being.

Living in today's digital age

Every generation experiences technological advances that have a significant effect on their era as they become popular. The first computers were large enough to fill entire rooms. But processing technology advanced exponentially, doubling in capability while compacting the size of circuits roughly every two years.[1] Today, devices and their seamless integration with the Internet are awe-inspiring. However, if you think about how we have become acclimatized to today's technology, we haven't necessarily followed our past experiences. There is a pattern, though:

• Innovators are the first to figure out different ways to use the technology.
• Early adopters are fast followers who crave to be at the front of the line.
• The early and late majority is where most people tend to operate.
• Laggards tend to join in after much observation and consideration. They are almost conceding that there could be a small application of the technology in their lives that would be useful to them, maybe.

We've also seen a tendency to apply expectations about competencies relating to ability and seamless usage stereotypically between different levels of adaptation. These theories originated within the education and business sectors and have always incited debate as to whether younger people naturally adopt technology more readily than older people. It might partly show a bit of bias from older people to assume that all young people know exactly how to accomplish tasks using technology.

How is technology affecting children?

Babies and toddlers

Early on, parents and caregivers can see the possibilities that introducing their children to technology at a young age can have. On one side, there's the aspect of wanting your child to learn and grow. On the other hand, there are times when you may be looking for a chance to have your kids use technology as a distraction, perhaps to amuse themselves. Regardless, children tend to develop skills from a very young age, and are very good at observing and mimicking.

Babies and toddlers are also entertaining and appealing, so parents may try to share funny situations involving their children through their own social media accounts. Consider the longer-term repercussions of doing so. What's cute now may become a source of embarrassment for them when they are older. It may be best to respect their privacy and their choice as to when, or if, they enter digital social spaces in the future. It can be challenging for extended family members to understand why parents want to protect their children's privacy, especially when geography keeps them apart. There are always options to share these photos directly with family members via email or messaging. Remind them that your child's privacy should be protected.
School-aged children

We see that children respond well to digitized educational games and activities that help them learn language arts, mathematics, and science. Children this age begin to explore online gaming through popular titles like Roblox, Minecraft, Fortnite and Pokémon. Parents should show interest and observe their habits, friends, and interactions online to ensure they understand how to play safely. It is also necessary with this age group to have conversations that help develop an awareness of abnormal or concerning behaviours kids could encounter while online. Tech familiarity will help school-age children get together with friends who share similar interests in school and in different physical and virtual social settings.

Establishing screen-time limits is very important for school-aged children. Parents and caregivers can establish loving limits on when and how technology is used (e.g. what time of day and for how long). It can be an excellent way to introduce the concept of balance and boundaries.

Older children

Teens and young adults tend to be quite comfortable with the technology in their lives. Still, parents may be concerned about how connected they are to devices such as smartphones. Engaging with older children who seem wholly absorbed in digital spaces can be pretty challenging. It can be quite difficult to reinforce screen-time boundaries and limits.

Some educators observe that teens and young adults struggle with critical thinking and problem-solving. It may be because they have ready access to all the information they could ever need. Problems can show up in basic written and interpersonal communication skills too. Secondary and post-secondary students struggle with confidence in spelling, vocabulary, and grammar. They may also be uncomfortable with certain conversation techniques, such as maintaining eye contact and interpreting nonverbal cues. Many prefer using instant messaging and social media and are reluctant to engage with parents in these spaces. A more significant issue, perhaps, is the amount of screen time this group engages in. Some teens and young adults will even proudly declare that they are addicted to their devices. It's a problem that parents need to approach cautiously.

Mental health concerns

School-age children, teens, and young adults are particularly vulnerable to stressors. The Internet can offer exposure to inappropriate and upsetting content that could contribute to mental health concerns like mood disorders, depression, and anxiety. For example, exposure to the Internet may contribute to:

- Body image issues - caused by comparing oneself to others.
- Eating disorders - influenced by misinformation in advertising and observing celebrity endorsements of diet and weight loss products.
- Dissatisfaction and disillusion with current lifestyles - arising when observing social media influencers and expressing a desire to find fame.
- Shortened attention spans - from consuming tremendous volumes of content through constant scrolling.

To help, parents and caregivers should first examine their own habits and consider modelling appropriate levels of interaction and engagement as they interact online and through their devices.

How has COVID-19 influenced online behaviour around education?

Students experienced shifts to online learning at different times during the pandemic. Their experiences have been strained by a lack of social interaction and the difficulty of replicating physical classrooms through virtual formats.
School-aged children required a lot of parental involvement to manage virtual learning and the complex schedules that alternated between independent work and convening in video classrooms. Some teens and young adults expressed worries about this learning format and missed important milestones and socializing opportunities. However, some students have determined that they prefer online learning. It has been an excellent way for students with anxiety to engage without feeling the pressures they have felt in the past at school. Young adults pursuing post-secondary studies have been able to find a better balance between school and work. It's allowed them to save money on transportation and other living expenses.

Problematic aspects of technology

Non-traditional sources of income (work)

It can be difficult for parents and caregivers to understand how their older children can be drawn to technology. Like the Metaverse, the interactions and ideas can be a bit abstract. Consider the number of teens and young adults seeking online ways to generate income. A decade ago, monetizing your online presence as a career wasn't mainstream. Now, there are many examples of people who have taken interests and hobbies online with the hope of making it big and having something they upload go viral on the Internet. As a result, parents and caregivers may find that older children are chasing the dream of becoming Internet famous, or at least getting the subscriber and engagement volumes needed to be paid by companies to post. They hope it will attract coveted attention to help them reach an "influencer" level of fame.

Cyberbullying

Any form of harassment that occurs while using technology to connect with people in online spaces is considered cyberbullying. It has disastrous effects on people's mental health and, in some cases, can lead people to cope through self-harm, develop depression, and experience anxiety. There have been cases of cyberbullying that have affected some so severely that they have taken their own life. Cyberbullying isn't limited to older children and teens; school-age kids can experience it too. Parents and caregivers need to recognize and investigate changes in their children's behaviours. Some signs of online harassment could include:

- Suddenly losing interest in online activities they once enjoyed
- Being easily distracted
- Not wanting to attend school
- Being very sad, withdrawn or emotional

Cyberbullying can take many forms, but here are just a few examples:

- Making prank calls on a mobile phone
- Sending mean messages through texts, instant messaging, or social media applications
- Editing photos to create memes and publishing them to intentionally harm or embarrass someone
- Posting someone's private photos online or sending them to other people
- Spreading misinformation and rumours about someone in an online space
- Relentlessly attacking someone in an online game
- Stealing or hacking into another person's online account to impersonate them
- Initiating or participating in a ranking or rating of someone's appearance or popularity
- Creating fake social media accounts for the purpose of sharing malicious information
- Tricking someone into providing personal information and then threatening to release the details

Parents and caregivers should know that there are legal consequences to cyberbullying. The person who is bullying can be charged and held liable for their actions, resulting in payment of damages or prison time.
Some cases of harassment can receive charges under the Criminal Code.[2]

LGBTQ2+ youth experience a disproportionate amount of online harassment and cyberbullying that is often quite severe and can be present in addition to physical bullying. These victims of cyberbullying are also less likely to report it. They fear reprisal and rejection from their families or peers, because reporting could "out" them and create additional complications within their day-to-day relationships. Maintaining connections to online spaces can be a way to build self-esteem, allow for self-expression, and provide a sense of community when they are feeling isolated. It's a lifeline that offers support, understanding, and compassion. When it comes to setting limits on technology use, parents and caregivers should take into consideration the beneficial role that technology can play in giving LGBTQ2+ youth connection and a safe space.

Cyberstalking

Cyberstalking is a persistent form of predatory online behaviour that can involve threats of violence, sexual harassment, extortion, and even physical stalking. Cyberstalking is often more unrelenting, more deliberate, systematic, and escalating in its threat levels, designed to incite fear and make the target comply with the stalker's demands. Cyberstalkers are also usually quite technically savvy and use this to their advantage to avoid detection. They will initially explore their target's online and offline behaviours. A few examples include:

- Following their target online, performing similar activities and suggesting common interests and activities to join.
- Messaging and tagging their target excessively.
- Asking their target to send photos and/or videos.
- Asking their target to share valuable information, such as their full name and address and ID cards (driver's licences, health cards, financial information, and online account passwords). Such details are valuable and can be sold to help people assume someone's identity or hack into accounts.
- Sending unwanted gifts.
- Hacking into the target's laptop camera or phone to watch them.
- Suggesting that they meet in person.
- Tracking their target's movement.

Like cyberbullying, this kind of harassment is illegal. If you believe that a cyberstalker may have targeted your child, contact the authorities, and let other family members and friends know so they can offer support. While it may seem disturbing, never delete any of the evidence of the harassment, as you may need to show the police proof.

How can I find out what my kids are doing online?

The ethics of monitoring Internet use are tricky. Many parents and caregivers fear what their children encounter while using online technology and demand to review their actions and accounts. If parents or caregivers are the ones who have funded the purchase of devices and cover the ongoing costs, they may feel entitled to regular usage reviews. But they need to weigh their oversight against trust and privacy, and help educate children to become responsible online digital citizens.

For younger children using a shared device in the household, talking about what they are looking at online and who they are speaking with (if they engage in any online communities or gaming) can be an excellent first step to encouraging dialogue. The goals of these conversations are to educate and instill self-discipline, so modelling good online behaviour is essential.
Work on establishing boundaries and demonstrating trustworthiness so that kids feel more comfortable sharing information when they are unsure or possibly in trouble. This approach can also help respect your child's privacy.

Teens often have a reputation for being rebellious and more impulsive in taking high-risk actions. Parents or caregivers may become more worried and insistent on finding out what their teenager is up to. Some resort to desperate measures, demanding that passwords for devices and accounts be shared, and even going so far as to install tracking applications to locate their whereabouts. Most teens are aware of these attempts to raid their personal space and will deactivate or mask them. Remember that while it may be uncomfortable, teens and young adults are entitled to privacy. Their independence should be met with the support and connection of their parents or caregivers. It's important to make an effort to connect with your teen and work on nurturing the relationship.

What about parental controls? Do they work?

Some parents or caregivers may choose to invoke parental controls on devices or Internet access. It can make sense for younger children, to keep them from accidentally stumbling upon inappropriate content. It can also be used with older children to enforce boundaries and screen-time limits or to restrict Internet access. You might want to consider:

- Setting timed access to household Wi-Fi through settings in the modem, by device IP address.
- Using family management apps to associate completing household chores with earned screen time.
- Setting screen-time monitoring on devices and reviewing activity with your child weekly.

Most experts agree that school-age children need to know how to be responsible for their online actions. Of course, this also extends to teens and young adults, especially for social media accounts. For parents and caregivers to provide guidance, they should become aware of:

- The additional data, including device IP addresses and geotags, in photos posted online.
- How to activate privacy settings on accounts and devices to restrict access.
- Spam/alias accounts that teens and young adults create and keep secret to have unfettered privacy on social media, away from parental views of their main named accounts.
- A new form of relationship trust where people trade devices and divulge their passcodes and passwords so that they can each review the other's text and social media messages as "proof" that there is no cheating happening.

There's a lot of value in keeping information private

Unfortunately, friendships end and relationships sour. It may not be until something happens that children realize the value of keeping information private. It could help if you encourage great discretion in providing contact information such as phone numbers or email addresses. One parent spoke about a breakup where their child was harassed every two minutes by their ex because they opted to stop replying to messages. This "O-bombing" (opening a message without responding immediately) and "ghosting" encouraged the ex to continue the behaviour. In these kinds of incidents, parents and caregivers need to encourage their children to:

- Take screenshots of conversations, because attempts to contact them directly or through friends may need to be collected as evidence.
- Instruct friends to discontinue contact and keep all personal information about their friend confidential.
- Block the person on apps and devices.
- In extreme cases, contact a service provider and change phone numbers.

Raising healthy and fully aware digital citizens

One of the best things you can do is speak openly with your children about technology and online safety and share your concerns. Staying informed and showing your children that you are interested in and aware of the challenges and experiences they face may help them view you as an ally rather than a threat. Talk openly, using age-appropriate language, about interactions with friends that you may not have heard them mention before. Explain how you are looking out for them and helping them learn how to be safe, kind, and respectful online. Talk about how things uploaded to the Internet are there forever, even if they decide to delete them. While you don't want to make them fearful, talking about normal and appropriate behaviours for adults and kids is important. It's a way to broach the topic of cyberstalking, cyberbullying and other predatory behaviours they may encounter online.

It's also incredibly healthy to have designated "no-tech" times. It could be when everyone gathers to eat dinner, play a game, watch a movie or program, or visit family members without devices in their hands. To get used to this, it may help to have a basket that devices go into for the duration of the activity.

Finally, there is one more realization that parents and caregivers can embrace. They are preparing their babies, toddlers, school-aged children, teens and young adults for roles that may not exist yet in an ever-evolving world.

References:

1. Dorrier, J. (2016, March 8). Will the End of Moore's Law Halt Computing's Exponential Rise? SingularityHub. Retrieved February 4, 2022, from https://singularityhub.com/2016/03/08/will-the-end-of-moores-law-haltcomputings-exponential-rise/
2. PrevNet. (n.d.). Legal Consequences of Cyberbullying: It's not just bullying – it's criminal. Prevnet.ca. Retrieved February 4, 2022, from https://www.prevnet.ca/bullying/cyber-bullying/legal-consequences
https://www.ualberta.ca/human-resources-health-safety-environment/news/2022/04-april/may-2022-life-lines.html
Bullying is not a normal part of childhood and should never happen. The long-term impact of cyberbullying on a young person's physical and mental wellbeing can be profound. Cyberbullying, as with all bullying, can contribute to mental health disorders, substance misuse and, in extreme cases, suicidal ideation. In this blog, we offer key advice to reduce bullying and mitigate its impact on victims. Always remember that every child has the right to live in a safe and healthy environment, free from bullying, harassment and intimidation in all forms.

What is Cyberbullying?

The National Bullying Helpline defines cyberbullying as bullying and harassment using technology. This includes trolling, mobbing, stalking, grooming or any form of abuse online. Cyberbullying can be more difficult to escape than offline bullying, as it doesn't stop after school. Unfortunately, cyberbullying is getting worse. In 2011, 11% of parents in the UK reported that their child was the victim of cyberbullying. In 2018, this figure rose to 18%. This number is expected to continue to rise and has increased worldwide during the lockdown.

What You Need to Know

- Ditch the Label, a youth charity, estimates that 5.43 million young people in the UK have been the victims of cyberbullying, with 1.26 million suffering extreme cyberbullying on a daily basis
- Childline has reported an 87% increase in calls concerning cyberbullying in the last three years
- The Cyberbullying Research Centre has found that girls are more likely to be cyberbullied than boys
- There are signs to look out for which may signal that your child is being bullied. Read about them in our recent blog here

What to do if a Child or Young Person in Your Care is Being Bullied Online

Children and young people in your care may not use the word bullying to describe what is happening to them, so it's important to listen if they mention things which are upsetting or worrying them online. Try using the following advice if a child or young person describes an experience which sounds like, or is, online bullying:

- Take the time to listen to them and try not to interrupt. It is important not to get angry or upset at the situation
- Don't stop them from accessing social media platforms or online games. This will likely feel like punishment, and will stop them telling you in the future
- Reassure the child or young person that things will change, and that they have done the right thing by telling you. This can really help reduce any anxiety they might be feeling
- Make sure the child or young person knows that it is not their fault and that they have done nothing wrong
- As a parent or carer, it is important not to get involved or retaliate in cases of online bullying. This will likely make the situation worse for the child or young person
- Talk to your child about what they would like to see happen. Involving them in how the bullying is resolved will help them feel in control of the situation

Online bullying has the power to have serious negative effects on the lives of children and young people, but by remaining vigilant and following our key advice, it is possible to mitigate the impact on victims and stop the bullying.

Further Support

Make sure to teach children and young people how to block and report users. You can find instructions on how to do this for all major social media platforms at Our Safety Centre. For more information on cyberbullying, you can check out the NSPCC.
The UK Government's Department for Education has also published useful advice on cyberbullying, which you can access here.
https://ineqe.com/2020/08/07/cyberbullying/
Safe internet use during Covid-19

The Safer Internet Centre has warned of an increased risk of children being coerced and groomed while schools are shut and children spend more time online. These resources provide information and advice specific to the crisis. There are resources from Think U Know and the BBC to help children and young people understand how they can stay safe online. You will also find advice on specific issues such as cyberbullying, safe online gaming and the use of apps like Houseparty. The following organisations provide information on how to keep children and young people safe while online during the Covid-19 crisis.

Cyberbullying

During this national lockdown, children and young people are increasingly reliant on social media and online networks to keep in contact with friends. This can leave children and young people vulnerable to experiencing bullying online, known as cyberbullying. The first three sites provide advice and support on how to recognise if a child or young person is experiencing cyberbullying and what you can do to support them:

- Lucy Faithfull Foundation 'Parents Protect'
- NSPCC - bullying and cyberbullying
- Internet Matters - cyberbullying

The following sites are for children and young people who need support:

Tips for safe online gaming

With more children at home at the moment, online gaming is an entertaining way to pass the time, and children and young people can access gaming in many different ways, from game consoles to mobile phones. Online gaming can offer children and young people an entertaining and fun environment, but there are potential risks. The resources below are from nationally recognised organisations. They provide information on how to protect children and young people when they are gaming online and advice on what to do if you have concerns.

Resources for parents and carers:

- Welsh Government - a parent and carer's guide to the benefits and risks of online gaming
- Lucy Faithfull Foundation 'Parents Protect' - gaming
- NSPCC - online games
- Internet Matters - gaming consoles and platforms

Resources for children and young people:

Preventing online abuse of children and young people

Learning to use the internet is a way of life for children and young people and important to help them function in a technology-dependent world. It can be worrying for parents, carers and professionals, and difficult to know how best to keep children and young people safe. These resources have been gathered from leading organisations to help identify online abuse and provide advice on what you can do to keep your child safe.

What is online abuse?

- Lucy Faithfull Foundation 'Parents Protect' - learn
- NSPCC - online abuse
- Internet Matters - inappropriate content

What is online grooming and what can you do to protect children and young people?

What is online radicalisation and what can you do to protect children and young people?

General advice on safe internet and social media use

How to report abuse

If you suspect your child or young person is experiencing online abuse, please contact the police. You can report concerns about inappropriate online contact or content through the Child Exploitation and Online Protection (CEOP) command, operated by the National Crime Agency, via their CEOP safety centre.

Safe use of online apps

The world of apps is fast-paced, exciting and constantly changing. It is hard for parents and carers to keep track of the apps being used by their children, which ones are popular and how safe they are.
These resources provide guides to apps, advice on age restrictions and top tips to help support children and young people to use apps safely. There is also a specific detailed guide to the popular Houseparty app.

Screen time and wellbeing

There has been much debate about screen time and how it can affect children's health and wellbeing, with particular links to physical wellbeing and mental health. With many children now also being schooled online and encouraged to utilise digital learning platforms, these useful links support parents and carers to understand the pros and cons of screen time and how to manage it with children and young people.

Screen time, technology, mental health and well-being (parents)

This useful guide developed by the Welsh Government looks at the impact of screen time and technology on children and young people's physical, emotional and social wellbeing. It provides links to more detailed research and tips for developing healthy technology habits. There is also a section on identifying problematic or unhealthy technology use and where to go to seek help if you have concerns.

Guidelines on physical activity, sedentary behaviour and sleep for children under 5 years of age

The World Health Organisation has developed guidelines to support parents to understand how much screen time is appropriate by age range. It also looks at the physical activity and sleep required to support development. The guidelines are quite detailed, but there are some infographics to support parents and carers at a quick glance.

The health impacts of screen time: a guide for clinicians and parents

This guide produced by the Royal College of Paediatrics and Child Health (RCPCH) looks at research into screen time for children and young people and the association between screen time and negative outcomes. It also provides messages and suggestions for parents and carers on how they can set boundaries as a family. There are also useful links to other research and tools.

Protecting your information and use of parental controls

When we use the internet we store a lot of personal information online. It's important this information is kept safe to prevent crimes such as identity and financial fraud. The use of parental controls can be confusing but is an important part of keeping children safe online. These resources, developed by Childnet, the UK Safer Internet Centre, Think U Know and Internet Matters, provide tips on how to keep your information safe online and how best to use parental controls.
https://socialcare.wales/service-improvement/online-safety-and-well-being
The London School of Economics and Political Science is launching the first ever EU-wide survey which asks young people directly about their experiences of internet safety. To coincide with European Safer Internet Day on 9th February, the EU Kids Online II project, coordinated by LSE, has announced it will survey 25,000 young people across Europe about their experiences and perceptions of risks online.

Researchers have, for some time, tried to answer some key questions about the safety of young people online, including: which children are most likely to encounter online risks; how do they cope with these risks; and how can they be kept safe effectively? Up until now, surveys about online safety have drawn predominantly on the experiences, perceptions and worries of parents. This EU-wide study will be the first which also questions young people, aged 9 to 16, and will address issues such as contact with strangers, exposure to inappropriate material, sexual messages and cyberbullying.

Professor Sonia Livingstone, Head of the LSE Department of Media and Communications and coordinator of the EU Kids Online project, commented on the launch of the new research:

'Existing studies have found significant gaps between how parents view online risks and how children themselves view such risks. Likewise, we know very little about how children try to cope when they do find some experiences online to be difficult.

'By asking young people directly about their experiences and perceptions this research will help us to answer some of the most pressing questions about online safety. It will identify which type of young people are more likely to have difficulties in the online world, how serious these difficulties are, and how what children encounter on the internet relates to what they encounter in the "real world" – as in the case of cyberbullying versus bullying.

'Hopefully our survey will become an effective tool to inform public policy, counter media panics and lead to improved support for young people as they take advantage of the range of positive opportunities and experiences which are available online.'

As part of Safer Internet Day, Professor Livingstone will be chairing an all-day seminar on social networking, children and young people in the European Parliament in Strasbourg.

/end.

For interviews or more information please contact Leslie Haddon on 020 7955 6651 or the LSE Press Office on 020 7955 7060 or at [email protected]

Notes:

1. The survey will involve 1,000 children aged 9 to 16 and their parents in each of the 25 participating countries: Austria, Belgium, Bulgaria, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Lithuania, Netherlands, Norway, Poland, Portugal, Romania, Slovenia, Spain, Sweden, Turkey and the United Kingdom. Interviews will take place between March and April and the first results will be published in October. EU Kids Online's new project (2009-2011) is financed by the European Commission's Safer Internet Programme.

2. Safer Internet Day 2010 is on 9th February, under the theme "Think before you post". This will focus on how we manage images online and, consequently, how we deal with privacy in digital environments. More information on Safer Internet Day is available from www.saferinternet.org.
http://www.lse.ac.uk/website-archive/newsAndMedia/newsArchives/2010/02/eukidsonline.aspx
In an increasingly digitised world, kids are now spending more and more time online. Cyberbullying is one of the harsh realities that both children and parents are now having to face. To maintain a safe space on the internet for your child, keep on reading to find out exactly what cyberbullying is and how to avoid it.

It's a term that's often thrown around, but what exactly is cyberbullying? 💻

Cyberbullying is becoming more and more common. In fact, a 2019 study showed that around 15.7% of secondary school students were electronically bullied in the preceding 12 months. But how exactly do we define cyberbullying? 🤷

Cyberbullying is defined as bullying that takes place on digital devices, such as phones, computers, and tablets. 📱 Cyberbullying doesn't only happen online: it can also occur via text message and phone call. Cyberbullying can take place on social media platforms such as Facebook, Instagram, Snapchat, and TikTok. 🖥️ There are many different types of cyberbullying, but some prime examples include harassment, posting negative information about someone, and sharing private information without consent. 🔒

As a parent, cyberbullying can be difficult to spot. Phones and computers give children constant and instant access to communication. Cyberbullying transcends the classroom; as the NSPCC explains, "online bullying can follow the child wherever they go." It's difficult to be aware of everyone your child is communicating with online, which makes it harder to know whether they've ever encountered cyberbullying.

So, what can you do to help your child avoid cyberbullying? 🤔

While cyberbullying can be tricky to spot, it is avoidable. We've compiled a list of tips to help you and your child stay safe online and steer clear of cyberbullying.

👉 Online education 101

It's important for kids to understand the rules and consequences of digital communication. Remind them that once they post something online, it stays there forever. Make sure they're also aware of the fact that what they post can be seen by people from all over the world, not just their friends. And finally, teach them how to communicate online. Texts and messages can often be misinterpreted due to a lack of context.

👉 Encourage open communication with your child

We know that kids don't tend to be willing to share what they're up to online. They might be more willing to do so, however, if you encourage open and respectful communication from the get-go. Show an interest in what apps they like to use on their phone. Ask them how they keep in touch with friends online. This way, you'll get a better picture of how they spend their time online and will know where to look if problems arise.

👉 Know what to do if they encounter a problem

Make sure that both you and your child are aware of the resources that are available in the event of cyberbullying. Social media platforms always have options to block and/or report users. Ensure that you're both aware of this function and encourage your child to use it if someone is making them feel uncomfortable online.

👉 Be able to spot the problem

As we said, it's hard to be completely aware of what your kids are doing online. There are, however, some warning signs to look out for that suggest your child could be dealing with cyberbullying:

⚠️ Your child actively tries to hide what they're doing online and conceals their screen or phone when you walk into the room.

⚠️ A change in your child's emotional state. Maybe they're more cagey than usual or they've become quiet, depressed, or withdrawn.
⚠️ A noticeable increase or decrease in device use.

⚠️ Your child is not willing to discuss what they're doing online and avoids answering questions about it.

⚠️ Your child isolates themselves and starts to avoid social events.

The digital world has so many benefits when it comes to our children's education, but it also comes with its fair share of problems, cyberbullying being one of them. Although it can be difficult to spot, cyberbullying is both common and extremely serious. Work together with your child and maintain open communication so that cyberbullying can be avoided. Make sure that both of you are in the know about what to do if your child encounters negative behaviour online or over the phone. 🧠

If you realise your child is dealing with cyberbullying, you can always reach out for support. Your child's school will most likely have someone you can discuss the problem with, and you can also contact organisations like the NSPCC, Bullying UK, and SupportLine.

At GoStudent, e-safety is our priority in our virtual classrooms. We also encourage students and parents to address any concerns they might have regarding online safety. You can book a trial lesson with one of our tutors here.
https://insights.gostudent.org/en/how-to-avoid-cyberbullying
The story isn't new. No doubt you've heard it before. But a recent installment in my community illustrates the point that we need more kindness in real life and online. Here are the details.

A local college football rivalry leads to a poor decision. One fan wears an unfortunate shirt depicting a former player from the opposing team with a less-than-tactful caption. Then he posts a pic of himself wearing the ill-advised shirt online. The photo gets shared, people get hurt, and the man at the center of the controversy loses his job, receives threats against himself and his family, and ends up paying a large sum of money in restitution. Unfortunately, even though he owned his mistake, expressed regret, and tried to make amends, his careless act initiated a series of unintended consequences that cost him dearly. All because he chose to be unkind in person and online.

You might be surprised to learn that, even though he doesn't root for my team, I feel somewhat bad for this man. Yes, he was unkind. And yes, like all of us, he must account for his unkindness to another. But the response he got online was brutal too. Fans who should have been gracious responded to his unkindness in kind. Rather than learning from his lack of judgment, many turned around and hurled unkindness right back. We can and must do better. We must teach our kids to find healthy ways to use technology and how to be kind online.

Why Kindness Matters

We need to help our children understand that kindness matters. Rather than believing that Darwin's theory of survival of the fittest promotes selfishness and brutality, we need to embrace the idea that we thrive best when we are kind to one another. Karyn Hall, Ph.D., explains: "Darwin, who studied human evolution, actually didn't see mankind as being biologically competitive and self-interested. Darwin believed that we are a profoundly social and caring species. He argued that sympathy and caring for others is instinctual (DiSalvo, Scientific American, 2017)."

In other words, we are naturally inclined to be kind. In fact, researchers have found that kindness in relationships leads to security, happiness, and overall well-being. Those who receive and witness kindness are more likely to treat others with kindness. Further, kindness is contagious. The more someone practices kindness, the more kindness they receive. The effect spirals upward, building and strengthening relationships between individuals.

Kindness can also counter the negative impacts of loneliness and social isolation. If your child notices someone who lacks the ability to connect deeply with others or feels overwhelmed, isolated, or insecure, encourage them to show that person extra kindness. Conversely, if your children struggle with these feelings themselves, encourage them to extend kindness to someone else. They will reap the benefits of that kindness personally. Especially in our digital world, there has never been a greater need to be nice online.

How to Be Kind Online

If you're looking for ways to teach your children how to be kind online, here are some ideas to get you started.

Send Kind Texts

A little goes a long way when it comes to being kind online. Extend a text challenge to your children. If they have cell phones (check out Troomi Wireless for some safe, kid-friendly options), encourage them to reach out to friends, family members, or new acquaintances with friendly gestures of kindness. A simple "Thinking about you" or "Have a great day" can go a long way toward brightening someone's day.
Leave Positive Comments

Teach your kids to resist wading into the quagmire of negativity that brews in the comment sections of sites online. Rather, encourage them to leave only kind, positive responses to the stories they read, the social media posts they encounter, and the message boards they frequent online. Give them some practice statements like, "I really liked what you said about ..." or "I agree with your idea that...". Even if your kids find it necessary to disagree online, help them learn how to respond kindly. Teach the value of civility in disagreement.

Share Goodness Online

Help your kids consider social media platforms and other online venues as opportunities to share kindness through uplifting content. For all the unkind things that are voiced on social media, encourage them to post positive messages that lift others. Each time someone takes a minute to watch an uplifting message, they will feel the kind intent with which it was shared.

Be Willing to Say I'm Sorry

Your kids are more aware than ever that communicating with tech is nuanced at best and a minefield at worst. Sometimes, digital words don't come across as intended. In those instances, coach your kid to be willing to say "I'm sorry" quickly and sincerely when they unintentionally hurt someone's feelings online. While much of our interaction, even with close friends and family members, is done electronically, sometimes face-to-face apologies say it best. Help your kids know that kindness is paramount when it comes to mending fences they have damaged or broken down.

Stand Up to Cyberbullying

Bullying of any kind should be summarily stamped out. But cyberbullying is especially dangerous and hurtful because it's often done in private. Help your children understand that if they, at any time, witness cyberbullying—whether through texts, social media posts, comments, or other means—they should tell a trusted adult so it can be stopped. If you wonder whether your children are engaging in cyberbullying, check their phones and have open communication with them. You are their parents and should teach them to use their phones responsibly and kindly. And if you think your children might be bullied online, ask them and follow up.

Find Ways to Be Kind in Real Life

Even though it's important to be kind online, you can also teach your kids to use technology to find ways to be kind in the real world. Serving others is a great way to show kindness. With easy and convenient websites like Volunteer.gov, JustServe.org, and others, you can help your children discover ways to be kind to others in your communities or across the globe.

Avoid the Drama—Be Kind Online

The story of one football fan's bad choice in game attire has broader application. A jersey supporting his own team might not have received much attention. But in hindsight, that may have been the preferred outcome. It certainly would have been less dramatic. For us and for our kids, the lesson is clear: it's always a good idea to be kind in real life and be kind online.
https://troomi.com/teaching-your-kids-how-to-be-kind-online-in-a-digital-age/
To encourage students to safely and responsibly make use of social media platforms.

Objectives

- Heighten students' knowledge of positive uses of social media
- Review the risks of misusing social media platforms
- Encourage students to apply tips learned in the workshop for a safe social media experience
- Reinforce the importance of responsible behavior online, to limit students' participation in cyberbullying others and to offer support to peers who are experiencing cyberbullying

Approach

The Real Impact of Social Media workshop is intended to bring awareness to students, caregivers, and school personnel on how to use social media platforms safely and responsibly. The workshop has been created with age-appropriate language for 3rd, 4th, and 5th grade students. We deliver in-class workshops to help them understand the ways social media can help them cope with stress, how it can add stress, and the overall risks of using social media irresponsibly. The Real Impact of Social Media also addresses the impacts of cyberbullying and the proper steps to take when a student is being cyberbullied or witnesses cyberbullying.
https://www.nyp.org/acn/community-programs/turn-2-us/student-services/social-media
(17-May-2022) With increasing amounts of time spent online, young people are at greater risk in the digital world. Save the Children Hong Kong commissioned a research team at the Department of Social Work and Social Administration of the University of Hong Kong to conduct the study "Hong Kong Kids Online" in primary and secondary schools from 2020 to 2021, to better identify the risks that young people are exposed to online and to better understand the factors that influence their vulnerability and protection. The Hong Kong Kids Online study includes survey responses from over 1,300 children and teenagers from different socio-economic backgrounds, ranging in age from 8 to 17, as well as in-depth group interviews with secondary school students. The findings indicate that teenagers face significant online safety risks, and that there is an immediate need for parents, schools and the government to do more to help children keep themselves safe online and to limit the online safety risks faced by the young generation.

Online sexual abuse and harassment

Sexual abuse of teenagers is happening twice as often in the virtual world as in the physical world. The study results show that 4 out of every 10 secondary school students have had at least one unwanted online sexual content exposure, solicitation, or experience in the last 12 months, which equates to over 130,000 Hong Kong secondary school students getting virtually "flashed" or sexually harassed at least once in the last year. 1 in 20 teenagers in Hong Kong experienced sexual harassment in the form of unwanted requests for sexual photos of themselves, and 1 in 10 experienced it as other unwanted requests for sexual information about themselves. 1 in 20 teenagers in Hong Kong faced the worst kinds of online sexual abuse, being pressured into some kind of sexual act over the Internet – either by peers or adults. 28% of students taking our survey reported that something that happened online in the past year had made them uncomfortable, scared, or feeling like they shouldn't have seen it – about half of those children said they felt this way once or twice in the past year, but some said they felt this distress every day. Teenagers who experience abuse or neglect in real life are at much greater risk of being re-victimised online than their peers and are, on average, 4 times more likely to face unwanted online sexual experiences. Lonely teenagers and those more dependent on the internet for socializing were also at significantly greater risk.

Cyberbullying

According to the findings, 1 in 5 teenagers in Hong Kong have experienced cyberbullying in the last year. There are various sorts of online bullying, including posting offensive photos and messages online, excluding individuals from online communities, and misusing another person's photos or identity without consent. Online bullying is particularly hurtful because of its publicity and the ease with which it can be perpetuated for long periods of time. This is in part because information and images used for bullying that are put on the web are often impossible to remove. Young people in Hong Kong are equally likely to experience cyberbullying as they are to experience bullying in real life.

Ms. Carol Szeto, CEO of Save the Children Hong Kong, said: "It is important to recognise that online life is a part of real life, and we must not underestimate the risks and impacts of online abuse and harassment on children.
Both online sexual harassment and bullying can have long-term negative impacts on children's emotional and social development as well as their mental well-being, including feelings of intimidation, shame, blame, or guilt. We must work together to protect teenagers and children from all forms of harm."

Guardians in the digital world

Parents play an important role in child protection in the digital world. The study shows that 27% of teenagers said their parents never talk to them about things that happen online that upset them. The findings also reflect that teenagers whose parents more often encourage them to explore the internet and suggest ways to stay safe on the internet face fewer unwanted online sexual experiences on average. Support from teachers and schools is also essential in protecting young people online. Students who report that their school or teachers regularly guide them in internet education appear to face unwanted online sexual experiences, the worst forms of online child sexual abuse, and cyberbullying less commonly. Students from the focus groups also made suggestions for the government to create a safe online environment for young people while allowing them to learn and express themselves.

Dr. Clifton Emery, lead researcher of the study and associate professor of social work and social administration at the University of Hong Kong, said: "This is the first study of online victimization and offline maltreatment involving a random sample of Hong Kong secondary students. It tells us that online victimization of teenagers in Hong Kong is a serious problem and that this problem is strongly related to child neglect. But it also tells us that even in the face of these problems, there are things schools, parents, and policymakers can do to effectively protect teenagers and promote their healthy use of the internet in Hong Kong."

Recommendations

Save the Children urges all stakeholders to come together to create a safe online environment for youngsters. It is crucial to ensure that the law, policies, and practice create an environment that empowers and protects young people so they can realise the benefits of the digital environment. The organisation encourages the relevant authorities to establish digital inclusion policies that ensure all youngsters have equal and effective access to the digital environment in ways that are meaningful for them. Save the Children also proposes that a new role of child online safety commissioner and an independent body on eSafety be created for Hong Kong. These offices should coordinate a child-friendly help-seeking and complaint mechanism for cyberbullying, online child sexual exploitation, and abuse, and should inform and create regulations, guidelines and public resources for safety in the digital world. The organisation also recommends that the government provide teachers and social workers with adequate training in identifying, intervening in, and handling suspicious cases, and support education institutions in providing children and youngsters with techniques and rules to keep themselves safe online.

Hong Kong Kids Online – Full Report: Click here
Hong Kong Kids Online – Youth Friendly Version: Click here
For the presentation slides at the press conference: Click here
https://savethechildren.org.hk/en/survey-findings-on-hong-kong-kids-online/
This is a guest post by Pamela M. Anderson, Ph.D., Senior Research Associate at ETR.

"When young people are cyberbullied, why don't they reach out to trusted adults for help?" This is a question a lot of youth health providers are asking. Think about it: here we are, a nationwide community of caring, concerned parents/guardians, and professionals. We're teachers, health providers, counselors, outreach workers, researchers, and more. We want to support young people and empower them to live healthy, positive and productive lives. We see disturbing news accounts about adolescents literally bullied to death. We hear young people we work with minimize cyberbullying and harassment among their own peer network—"It's not THAT big of a deal."

In my own research on electronic dating violence (EDV), I've found that affected youth tend not to report the harassment or abuse. They often don't see it as a serious problem, they don't want their time with the abusive partner restricted, and they certainly don't want parents or other adults taking away their digital devices.

Technology & Connection: Vital to Youth

If you're not a member of the digitally native adolescent and young adult cohort, it's difficult to fully comprehend the importance that technology and connection hold for them. Mobile devices are more deeply integrated into the social fabric of young people's lives than ever before. For example, in their 2015 overview of Teens, Social Media & Technology, the Pew Research Center reported that 1 in 4 teens goes online "almost constantly," 92% are online daily, and nearly 3 in 4 have access to a smartphone. I'd venture to say that those figures are even higher today.

The internet is the space where young people are building friendships and romances, testing out different identities, asserting their autonomy, and seeking out the kinds of experiences that build maturity and allow them to feel more adult. In the 2017 YTH report TECHsex: Youth Sexuality and Health Online, which surveyed a national sample of 13-24 year olds, a third of respondents reported using online dating sites and a similar proportion reported flirting on social media.

Cyberbullying and harassment are a part of these experiences as well. About 42% of the YTH respondents reported experiencing cyberbullying and harassment online. Fifty-seven percent reported cyberbullying while playing online games. Almost 6 in 10 have witnessed someone else being harassed or bullied online. YTH's 2016 report Blocking Cyberbullying provides even more detailed figures, including the fact that about a third of youth who've experienced cyberbullying (35%) knew their bully in real life. These acts of intimidation and violence are not only the acts of "trolls" or strangers, as stereotypes often suggest, but of people that they know intimately, offline.

Taken together, these are astounding statistics. Thus, when young people have the sense that bullying and harassment are simply a part of online interactions, they're not wrong. If they want to be online (and they do), they understand they are likely to witness or experience harassment.

So Why Don't They Ask Us for Help?

Young people don't seek help for cyberbullying for many of the same reasons they don't reach out for support after real-world bullying. They feel embarrassed or ashamed to be a target. They don't want to be seen as a snitch and lose even more social status. They fear retaliation. They feel like it's their responsibility to deal with it.
They don't recognize it as bullying or as something serious. In short, like adults, they don't see it as that big of a deal. But another very important reason is that they do not want to lose access to their technology—this lifeline to their social world. It is common for parents to "digitally ground" children and teens for misbehavior. The Pew Research Center reports that 65% of parents have taken away a teen's phone or internet privileges as punishment. Young people realize that if they discuss problems with bullying and harassment, parents may close down their social media accounts, take away their phones or otherwise restrict their access to the online social world.

What Do We Do?

I'm a big fan of integrating learning and skills about these risks into existing education, and also of taking advantage of teachable moments. Teachers can integrate conversations about cyberbullying into lessons about violence, self-esteem, seeking help, standing up for friends or preventing bullying. Health care providers can ask youth patients about their online lives and experiences. For parents, while digital grounding may be appropriate in some circumstances, I encourage them to first and foremost listen to what is being said, and then think of other responses when a child or teen comes to them reporting cyberbullying or other online harassment. For more information, parents can check out resources like StopBullying.gov for suggestions.

Finally, and perhaps most importantly, we need to create avenues for young people to speak up themselves. Let's get youth involved in sharing the message that cyberbullying is wrong, it is harmful and it needs to stop. I'm looking forward to seeing YTH's SafeZone app, which is designed as a peer-to-peer support system to prevent cyberbullying. This is a promising approach to begin shaping positive peer norms for prevention.

The technological sophistication many young people have is one of our greatest strengths in addressing these problems. Youth have innovative ideas and a lot of experience. Our role is to support them in using these attributes to create a stronger, safer, more positive online world.

Pamela M. Anderson, PhD, is a Senior Research Associate at ETR. Her research focuses on adolescent romantic relationship development and sexual risk outcomes.
https://yth.org/teens-dont-report-cyberbullying/
Skin And Threads An understanding of organic and natural fabrics has led Skin and Threads to create edgy designer basics that allow a woman not only to look but to feel beautiful in her clothes. Skin and Threads concentrates on clean, soft silhouettes that can be worn as separates or layered for a more unconventional aesthetic. Evolving with you, our basics become your wardrobe staples: the pieces you throw on over and over again because they work.
https://www.weconnectfashion.com/businesses/skin-and-threads
In my previous article, I explored how I went about recreating vintage designs in a contemporary context. You can read that here. This project came with a complex brief. The replicas needed to fulfil a number of roles and appeal to a wide audience visiting the museum, from children to professional tailors. We especially wanted the garments to expand the experience for visually impaired visitors. One of the best ways to understand clothing is to touch it. This isn't something the general public can do with museum pieces, so a recreation is the next best thing. Had I been asked to make other 1960s fashions, I could have sourced original vintage materials, used an original 1960s sewing machine and original patterns. However, when the request is to remake something to look the same as one in a museum, this is not an option. Sourcing the right fabrics was the biggest challenge. It's always a struggle to find fabric suitable for a particular era, and making it look and feel like the original is nearly impossible unless you have it specially made, or the original mill is still working and producing the same material. Even though the 1960s does not seem that long ago, textile production has changed. Fabrics popular and in fashion then are not necessarily the same now, and as a result aren't produced. For example, the wool jersey mini dress was a real problem. Extensively made in the 1960s, wool jersey has since been replaced by a preference for cotton or synthetic jersey and scuba. Finding a wool jersey that has been bonded, as the originals were, was difficult. After a lot of searching, I found a remnant of a piece of wool jersey from Italy, which I snapped up, as it was the closest I was going to get. The original garments, though not as fragile as some I have recreated, have been collected by V&A South Kensington to protect them for the future. This includes protecting them for (and from!) passionate observers like me. I got to view each of the garments, ask questions and request alternate views. But I was not able to handle the garments themselves, to prevent them from being handled too frequently and to preserve them for future generations. The museum has a wonderful catalogue and the curators were helpful in supplying me with photos once I returned to my studio. However, the reason a photo was taken is an important consideration when recreating pieces. Taking a photo for a catalogue is one thing, and detailing the overall style, back and front view, any damage, etc. is fantastic. But rarely does that include images of the inside, the hem, the stitching, whether it's a French seam or a bound seam, etc. As a dressmaker, I have to fill in the blanks. Because I know the era of construction, I have a number of original vintage garments I can look at, plenty of sewing guides and patterns to consult, and an aunt who lived through the era! These were all essential when it came to getting the finer points right. Though I own a rather jazzy 1960s sewing machine, it is no longer up to the task of sewing a number of garments without having a huff. I also used modern threads, as they are indistinguishable from the artificial threads used in the 1960s. Vintage hooks, eyes and buttons were sourced to give a lovely realism to the overall garments. Despite all this, the PVC bag was a real challenge to make. PVC is renowned for being a difficult fabric to sew as it 'sticks' when you are sewing and doesn't feed through the machine easily. You have to have a Teflon foot on the machine to make it work.
You can’t use pins as the fabric doesn’t mend afterwards and there is a risk of tearing. Instead, clips can be used but even this doesn’t solve all problems. To do the binding, I resorted to gluing it down before top stitching it. Even knowing how many problems Quant faced in her ‘wet look’ collection didn’t make me feel any better by the end of this! Her first range caused numerous problems and she eventually teamed up with Alligator, a specialist rainwear producer. They had the equipment and expertise to create her designs in PVC. An extra challenge none of us could anticipate for this project was trying to complete it during a worldwide pandemic. It delayed the exhibition and my visit to V&A Dundee. It also closed fabric warehouses and haberdashers which hindered the progress of the project. Despite this, I think these pieces clearly demonstrate the timeless charm of Quant’s original designs. I want to wear them, and I don’t think anyone would look at me oddly if I did. They’re as fresh as a daisy, and as vibrant now as they were when they were first designed over 50 years ago. Meridith Towne is a costume historian and historic dressmaker who works with museums to complement exhibitions, enhance visitor experience and facilitate learning with dress-up boxes. Our Mary Quant exhibition is now closed. You can still enjoy videos and stories relating to the exhibition here.
https://www.vam.ac.uk/dundee/articles/reproducing-quant-challenges
These neoprene sheets are made with SBR or CR rubber foam, lamination glue, and fabrics that have been infused with a specially formulated treatment to prevent the growth of fungi, mildew, and bacteria. These neoprene sheets can alleviate some of the difficulties of cleaning in medical, industrial and other applications. Finishes: double-lined (both sides laminated with fabrics), single-lined (one side laminated with fabric; the other side can be smooth skin, mesh skin or skin cell). Anti-microbial fabrics can be laminated onto regular neoprene sheets. These fabrics have been treated with special formulations to prevent the surface growth of bacteria, mildew and various microorganisms.
http://perfectex.com/closecellantimicrobial.html
The capsule collection In this final assignment we are asked to reflect on the samples we have produced and consider the supporting work from the first two projects. My capsule collection is based on the Nature's Larder theme first explored in the Introductory project at the very start of ATV; however, this time I have focussed on the marine environment, looking at seaweeds and mussels. During my research I was very drawn to textiles produced using unusual materials or those manipulated into unusual shapes. These fitted my initial ideas for producing a collection with texture and colour. I wanted the pieces that reflected seaweeds to also convey movement, as if you were viewing them underwater. During my sampling experiments I was excited by the shapes that could be produced by manipulating the fabrics with heat, either through use of a heat gun or steam. I also wanted to see if I could produce a piece of seaweed-inspired lace through free embroidery, which is another new skill I'm experimenting with. The collection 'Mussels' fabric. I wrapped mussel shells into the fabric and steamed it. I then hand painted it to produce the detailing. I was quite pleased with this piece. (Insert photo here) Seaweed embroidered piece with scrap threads and free machine embroidery. Seaweed free-embroidered piece. I lino printed a design and then drew it on soluble stabiliser, which I then free embroidered. I didn't dry this totally flat as I did with the test piece; this allowed it to dry in a curly and crinkly way. Before the stabiliser was rinsed away. And after. I was pleased with the way this turned out, but if I was doing this again I would maybe make the design larger, as it shrinks when you wash out the stabiliser, especially if you don't dry it flat. I would also consider doing this again on chiffon or a similar fabric. Seaweeds Fabrics manipulated with a heat gun and sewn together. I wanted to give the impression of seaweeds under the water. I'm not as happy with this piece. I was most excited by the grey chiffon, as I loved the shapes and shadows created by the heat manipulation. I feel I may have tried to do too much by adding several types of fabric. Seaweed – Sugar kelp This was made by manipulating clear PVC with a heat gun. I folded and sculpted the fabric as I used the gun. I coloured it with spray PVC dye after a failed attempt at painting with acrylics. I sewed the strips together to create the impression of fronds hanging together. Seaweeds – Bladderwrack A mixture of fabric that was wrapped around beads and steamed, and crocheted chains, which were stitched to a stretchy mesh that made me think of fishing net. This worked well; I think the mixture of fabrics worked together. The interesting textures complemented one another.
https://myoca.blog/2018/12/07/atv-part-5-assignment-5/
Stephen Foodi is a London-based visual artist who works with mixed media, digital design, painting and sketching, drawing from contemporary art, modern fashion, and popular culture. Foodi's process focuses on using intuition to create thought-provoking and edgy artistic designs that speak to the inner workings of our minds, with the mission to make you think differently. Work Foodi presents limited edition original garments that include T-shirts, sweatshirts, unique cut & sew pieces and bespoke hand-painted items. With a strong focus on sustainability and the environment, all products and fabrics are sourced and produced ethically. "I CAN'T TEACH YOU ANYTHING, I CAN ONLY MAKE YOU THINK"
https://www.stephenfoodi.com/
The bush watering bag is essentially a water bag. First of all, the bush bag must be waterproof so that it can hold water. Secondly, a bag used for irrigation must be strong: bushes are grown outdoors, so the bag must be firm and durable under all kinds of natural, unprotected conditions, withstanding wind, rain and sunshine. Ultraviolet light is self-evidently destructive, so the bag must also be UV resistant. Generally speaking, then, when choosing a bush watering bag we consider the above characteristics and select one or several suitable materials as the final material for the bush filling bag. Here are two materials that we carefully selected. For tree bag material we usually offer PVC mesh cloth and PE mesh. The PVC cloth comes in 300D, 500D and 1000D densities and thicknesses, while the polyethylene mesh comes in different weights: 200 gsm, 240 gsm and 300 gsm. Purchasers can choose carefully according to their needs.
http://m.lohasoutdoors.com/info/how-to-choose-tree-watering-bag-materail-40115895.html
Nonwoven Textile Colorants – Dyes & Pigments Chromatech Incorporated offers a broad selection of nonwoven textile colorants in both liquid dyes and resinless water-based pigment dispersions. Nonwoven textiles and nonwoven fabrics are engineered fabrics made by bonding fibers or filaments mechanically, thermally, or chemically. Typical resins include PVA latex, phenolic resin, and urea-formaldehyde resins. Nonwoven textile fibers include fiberglass, polyester-cotton blends and other fibers. They are typically flat, porous sheets that are not made by weaving or knitting (hence the term nonwoven). Nonwoven materials are used in the production of filter media, insulation, bedding, furniture, carpet components, vehicles, luggage, medical garments, apparel, blankets, absorbent pads, wipes, construction materials, landscaping, agriculture, and more.
http://www.chromatechcolors.com/industries/nonwoven-textile-colorants/
Facing is generally a non-decorative fabric (though it can be decorative, too) attached to an edge of a garment for support, cleanliness, or both. Facing provides structure by adding thickness to your garment's edges, adding strength to seams, and preventing garments from stretching out of shape over time. When the facing goes around a neckline, it is usually called a collar. It can also be applied to a sleeve hem or skirt waistband if no lining or seam allowance is already present in these areas. If you are making a lined jacket and want your lining to show when you wear it, the facing must be cut from a fabric that looks good from the outside of the coat when it is worn with its lining exposed. We have put together a detailed guide for choosing your beginner sewing machine. Be sure to check it!
What does facing do in sewing? The purpose of facing, especially in garment construction, is to finish the edge of the fabric. You may see this commonly on necklines or armholes where you want a clean and finished look for your garment. Facing also helps reinforce high-stress areas, such as necklines and armholes. Facing can be made from the same fabric (self-facing), a different fabric, or even interfaced with something like fusible fleece or batting (fusible and non-woven interfacings work very well). Once applied, stitches are sewn to fix the facing in place. If sewing by machine, I recommend setting your stitch length at 2mm if possible. It helps immensely when you're trying to sew as close as possible to the edge. Facing can be applied with either right or wrong sides together and easily turned right side out for a finished look. If using interfacing, I recommend matching it up with the facing so that you're stitching through both pieces at once.
What are the different types of facing and interfacing? There are many types of facings to choose from, depending on the fashion style of the garment you are creating and its function. Many garments require more than one type of facing to create a finished look. Below you will find descriptions of each facing and a brief list of the types of garments they should be used on. Types of facings Fusible – Fusible web is an adhesive material that you can fuse to the wrong side of your fabric for added stability and body. You must use fusible interfacing on necklines, armholes, and curved areas such as rounded and princess seams. Stitched – These facings are stitched to the right side of your garment and then turned over onto the wrong side for stitching in place, just like a regular seam. You can stitch these facings into different styles such as surplice (shawl), rounded, square, and others. Self-faced – A self-facing is a facing in the same fabric as your garment, which you fold over on itself and stitch down to create a finished look with no raw edges showing. It also provides stability at necklines and armholes, where knit fabrics tend to stretch out, and does not require fusible interfacing along the edges if used on curves. Lapped – Lapped facings are used around waistlines, necklines, and armholes, so they stretch with the garment when you move.
They are easy to stitch in place with your machine or serger. Types of interfacing Interfacing is a material used between two layers, one of them the fashion fabric and the other the facing. It comes in many different types and forms. There are so many variations of interfacing available today, but this article focuses on the basic ones. Basic Interfacings - Canvas - Fusible Fleece - Hip Hugger™ - Lightweight Fusible Interfacings (such as Vilene™) - Weft Insertion Fusible (Wel-Flex™) - Woven Interfacings (such as Twill-Weave). Types of Woven Interfacings: - Gentle - Heavy - Permanent Press - Terry Cloth.
What are the different types of fusible interfacings? Fusible interfacing is a special material manufactured with heat-sensitive glue on one side. When that side comes into contact with an iron, the heat and pressure activate the glue, making the layers stick together. Fusible interfacing is available in several forms: woven, non-woven, lightweight, cutaway, and sew-in. The most commonly used is the non-woven, but woven interfacing is also becoming popular. When and why should you use them? Non-woven interfacings add body and stability to garments; they can be sewn into the seam allowance or applied with heat. This is a good option for sheer fabrics because it will not leave any visible stitches. They are also good for draping, lingerie, and tucks in blouses. Woven interfacings are stronger than non-woven ones but less flexible, so they are more suitable for waistbands, necklines, shoulder seams, cuffs, and sleeve hems. Woven interfacings are machine washable, which proves useful when working with knits on children's clothing. If you want to add support to a shirt or dress, keep skirts and pants in place, and give body to fabric without stiffness, you should use fusible interfacing because it is user-friendly.
What does stitch facing mean? Sewing in a facing gives the garment a more finished look. When you sew in a facing, it's like putting a hand-sewn hem on the inside of your clothes. Our bodies are not perfectly flat, especially when we sit or stand. This means that our clothes fit differently in different areas. It is challenging to keep seams and edges from looking puckered or pulled at odd angles when they come into contact with our bodies. However, when you finish them by stitching down (or "joining," as some people call it), the seam line has less chance of showing through clothing and makes for a very professional look. It also flattens any puckers resulting from construction, so your seams will lie nice and flat against your body.
What is self-facing in sewing? Self-facing is a sewing method in which facing fabric is applied to the "wrong" side of the garment. This facing fabric is applied to the edge and neckline before they are put together. The self-facing can then be folded and stitched into place, creating a neat finish at the front neckline and armholes. Self-facing allows you to avoid adding separate facings under your arms or onto your neckline; this saves time and creates a smoother finish. On very tight-fitting clothes, self-facings are especially useful as they don't bulge out as regular facings would. This method is usually applied to tight-fitting garments like bodycon dresses, swimwear, and dance costumes.
What is shaped facing? Shaped facing is the opposite of flat facing. It is easier to sew shaped facing because you can match the corners at the seams.
The shaped facing prevents the edges from fraying, so your garment will have a much cleaner edge without any threads or fabric poking out. It evens out the bulk on the outside, so there are no bumps to be seen through your outer layer.
What is decorative facing? The decorative facing is a type of visible stitching that appears on the outside of your garment. It can be used to hide seams and raw edges or to highlight decorative stitches. Decorative facings can be either machine-sewn or hand-sewn in place. How is decorative facing typically used in sewing? Decorative facings are usually attached with decorative topstitching, cover stitching, decorative machine embroidery, or by hemming around the facing. In some cases, decorative facings are exposed when you add a lining to your garment. In this case, you need to finish the edge so that it doesn't unravel before you're ready to attach your lining. Decorative facing and decorative stitches Decorative facing is a great way to highlight decorative stitching. For example, you can run decorative stitching along the edge of your garment and then sew decorative topstitching along the edge of the decorative stitch. It creates a cool contrasting design effect. What do you need to do before adding decorative facing in sewing? Before attaching decorative facings, you must decide which method to use: decorative topstitching, cover stitching, decoration with machine embroidery, or hemming. Decorative topstitching usually requires that both the under and upper pieces of fabric be finished (for example, by clipping or serging) so that they won't unravel when exposed. With decorative topstitching, you want the decorative stitching to look like it's actually holding both sides together. When using decorative machine embroidery or hemming, only one side of the garment will need to be finished. What are decorative facing materials? You can use decorative fusible interfacing for your decorative facing, but it's also possible to use regular fabric or lightweight ribbing if that's what you have on hand. To attach the facing without sewing, try iron-on adhesive tape, which is easy to remove and won't leave any residue. If you're running decorative stitches along the edge of your garment, you might consider using clear elastic as a decorative facing. It stretches over seams nicely and isn't visible outside your garment. Some decorative facing names Some common decorative facings available on sewing patterns include neckline facings, neckbands, wristbands, and sleeve heads. For shirt-style garments, decorative neckline facings are a great way to finish the top edge of the piece of clothing without adding bulk underneath. This is especially true if you're putting on a collar or necktie over the finished edge. Decorative faced waistbands can break up all those vertical seams at the hem of a skirt or dress, making them more attractive. Decorative wristbands give sweaters and cardigans an even more casual look by making otherwise plain cuffs stand out as decorative details when working with knits. Related: Find out how to transfer a sewing pattern
How to sew facing to a bodice? In making a garment, the first step is to make the bodice. When you sew a garment without facings, it is not easy to give the hemline and curved edges a smooth finish. In this case, you need to sew facing. Afterward, stitching on both sides of your fabric from the shoulder line down to the hem edge of the garment creates a clean finish for the hemline and curved areas.
Facing can help ease the sewing process and add finishing touches that make garments look more elegant and polished. There are two types of facing, full or partial, depending on how deep the garment's edge goes or how far up from the waist level it starts. For example, if it starts from the neckline area going down to the lower waist, it is called full facing, while partial facing starts only from the waistline going up to the lower neckline. The facing type determines whether you can skip machine stitching on the hemline or curved areas. However, the same steps for cutting out the fabric are followed when making either full or partial facing.
How are facings applied to different edges of garments? When working on a sewing project, specific steps must be taken for the garment's look and feel to turn out perfect. One of these essential procedures is applying facing, or facings if they are applied in multiples. Facing creates a smooth edge along the raw, unfinished seams of your garment. It also stabilizes seam allowances at necklines, armholes, waistlines, and other places where you don't want them to stretch out when worn or during laundering. Garments with attached facings are more polished than those without facings because they have an attractive border around the edges instead of serged-off edges that can look messy, depending on how well you serge. Many different types of facings can be used when creating a garment. The two most common facing materials are knits and wovens, and both perform the same function: to make the raw edges of your garments look finished. It is important to note these differences to select the right facing for your project.
What's the difference between knit and woven facings? Knits and weaves (wovens) each behave differently from one another. Knit fabric was created specifically for making garments. Knitted fabrics have stretch, which enables them to fit more contoured areas around the body, including curves such as bustlines, hips, shoulders, and arms. In addition, because they contain elastic fibers, they also provide a soft, comfortable fit for the wearer. Knits are used as facings on armholes and necklines because these areas stretch along with the body when you move. In addition, since they have elastic fibers woven into them, they will stretch over curves and rounded areas without puckering or pulling and give an attractive border around each edge. It is important to use a quality knit facing material that holds its shape after laundering and stretches as needed to follow the contours of your body. Knit facings can be purchased as knit tricots, single knits (very fine), and double knits (medium weight). You can also purchase small amounts of knit fabric at your local fabric store to use as facings if you wish. Woven fabrics do not stretch and only come in two weights: very light, and medium to heavyweight. They are pressed into shape, adding crispness and stability at the seams of your garment, which provides a tailored look. Wovens can be used on necklines and armholes, but since they will add bulk to the seams, they are not recommended for use on body-shaping areas such as bustlines and hips. Wovens come in a wide range of fabrics, from chiffons, georgettes, and silks to cotton prints, linen blends, and polyester satins. In addition to these popular choices, you can also purchase novelty weaves that will make your garment stand out. Conclusion One of the most important parts of any garment or project is assembling it correctly.
It means you have to make sure that your seams are straight and either pressed open or back, depending on the material you add to your piece. Facing and interfacing are two different elements in sewing that allow for a clean look, whether exposed or hidden.
https://craftbuds.com/what-is-facing-in-sewing/
Five years on from the acclaimed Iraqi-British architect’s death, ‘Zaha Hadid: Abstracting the Landscape,’ is a celebration of her genius. Hadid came to be known as ‘The Queen of the Curve’ due to her unconventional approach of designing imposing, sweeping buildings with a futuristic look and feel. She experimented with edgy and angular or dream-like, floating shapes that seemed to defy gravity. “The world is not a rectangle,” she famously said. Hadid created multifunctional projects around the world, from Abu Dhabi’s Sheikh Zayed Bridge to Baku’s Heydar Aliyev Center, the spacious MAXXI Museum in Rome, and the opulent Guangzhou Opera House in China. To commemorate the fifth anniversary of Hadid’s passing, Galerie Gmurzynska is hosting an exhibition entitled “Zaha Hadid: Abstracting the Landscape,” which runs until July 31. It explores a versatile and rarely seen selection of her works, going back to the dawn of her career in the 1980s.
https://www.modernarabesque.com/en/news/3086
17.4 Summary by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license. periodic motion — motion that repeats itself over and over period — the time required for one cycle of a periodic motion frequency — the number of cycles per second, the inverse of the period amplitude — the amount of vibration, often measured from the center to one side; may have different units depending on the nature of the vibration simple harmonic motion — motion whose `x-t` graph is a sine wave `T` — period `f` — frequency `A` — amplitude `k` — the slope of the graph of `F` versus `x`, where `F` is the total force acting on an object and `x` is the object's position; for a spring, this is known as the spring constant. `nu` — the Greek letter nu, used in many books for frequency `omega` — the Greek letter omega, often used as an abbreviation for `2pif`. Periodic motion is common in the world around us because of conservation laws. An important example is one-dimensional motion in which the only two forms of energy involved are potential and kinetic; in such a situation, conservation of energy requires that an object repeat its motion, because otherwise when it came back to the same point, it would have to have a different kinetic energy and therefore a different total energy. Not only are periodic vibrations very common, but small-amplitude vibrations are always sinusoidal as well. That is, the `x-t` graph is a sine wave. This is because the graph of force versus position will always look like a straight line on a sufficiently small scale. This type of vibration is called simple harmonic motion. In simple harmonic motion, the period is independent of the amplitude, and is given by `T=2pisqrt(m/k)`. Key `sqrt` A computerized answer check is available online. `int` A problem that requires calculus. `***` A difficult problem. 1. Find an equation for the frequency of simple harmonic motion in terms of `k` and `m`. `sqrt` 2. Many single-celled organisms propel themselves through water with long tails, which they wiggle back and forth. (The most obvious example is the sperm cell.) The frequency of the tail's vibration is typically about 10-15 Hz. To what range of periods does this range of frequencies correspond? 3. (a) Pendulum 2 has a string twice as long as pendulum 1. If we define `x` as the distance traveled by the bob along a circle away from the bottom, how does the `k` of pendulum 2 compare with the `k` of pendulum 1? Give a numerical ratio. [Hint: the total force on the bob is the same if the angles away from the bottom are the same, but equal angles do not correspond to equal values of `x`.] (b) Based on your answer from part (a), how does the period of pendulum 2 compare with the period of pendulum 1? Give a numerical ratio. 4. A pneumatic spring consists of a piston riding on top of the air in a cylinder. The upward force of the air on the piston is given by `F_(air)=ax^-1.4`, where `a` is a constant with funny units of `N*m^1.4`. For simplicity, assume the air only supports the weight, `F_W`, of the piston itself, although in practice this device is used to support some other object. The equilibrium position, `x_0`, is where `F_W` equals `-F_(air)`. (Note that in the main text I have assumed the equilibrium position to be at `x=0`, but that is not the natural choice here.) Assume friction is negligible, and consider a case where the amplitude of the vibrations is very small. Let `a=1.0 N*m^1.4`, `x_0=1.00 m`, and `F_W=-1.00 N`.
The piston is released from `x=1.01 m`. Draw a neat, accurate graph of the total force, `F`, as a function of `x`, on graph paper, covering the range from `x=0.98 m` to `1.02 m`. Over this small range, you will find that the force is very nearly proportional to `x-x_0`. Approximate the curve with a straight line, find its slope, and derive the approximate period of oscillation. `sqrt` 5. Consider the same pneumatic piston described in problem 4, but now imagine that the oscillations are not small. Sketch a graph of the total force on the piston as it would appear over this wider range of motion. For a wider range of motion, explain why the vibration of the piston about equilibrium is not simple harmonic motion, and sketch a graph of `x` vs `t`, showing roughly how the curve is different from a sine wave. [Hint: Acceleration corresponds to the curvature of the `x-t` graph, so if the force is greater, the graph should curve around more quickly.] 6. Archimedes' principle states that an object partly or wholly immersed in fluid experiences a buoyant force equal to the weight of the fluid it displaces. For instance, if a boat is floating in water, the upward pressure of the water (vector sum of all the forces of the water pressing inward and upward on every square inch of its hull) must be equal to the weight of the water displaced, because if the boat was instantly removed and the hole in the water filled back in, the force of the surrounding water would be just the right amount to hold up this new “chunk” of water. (a) Show that a cube of mass `m` with edges of length `b` floating upright (not tilted) in a fluid of density `rho` will have a draft (depth to which it sinks below the waterline) `h` given at equilibrium by `h_0=m/(b^2rho)`. (b) Find the total force on the cube when its draft is `h`, and verify that plugging in `h-h_0` gives a total force of zero. (c) Find the cube's period of oscillation as it bobs up and down in the water, and show that it can be expressed in terms of `h_0` and `g` only. `sqrt` 7. The figure shows a see-saw with two springs at Codornices Park in Berkeley, California. Each spring has spring constant `k`, and a kid of mass `m` sits on each seat. (a) Find the period of vibration in terms of the variables `k`, `m`, `a`, and `b`. (b) Discuss the special case where `a=b`, rather than `a>b` as in the real see-saw. (c) Show that your answer to part a also makes sense in the case of `b=0`. `sqrt` `***` 8. Show that the equation `T=2pisqrt(m/k)` has units that make sense. 9. A hot scientific question of the 18th century was the shape of the earth: whether its radius was greater at the equator than at the poles, or the other way around. One method used to attack this question was to measure gravity accurately in different locations on the earth using pendula. If the highest and lowest latitudes accessible to explorers were `0` and `70` degrees, then the strength of gravity would in reality be observed to vary over a range from about `9.780` to `9.826 m/s^2`. This change, amounting to `0.046 m/s^2`, is greater than the `0.022 m/s^2` effect to be expected if the earth had been spherical. The greater effect occurs because the equator feels a reduction due not just to the acceleration of the spinning earth out from under it, but also to the greater radius of the earth at the equator. What is the accuracy with which the period of a one-second pendulum would have to be measured in order to prove that the earth was not a sphere, and that it bulged at the equator?
Equipment: Place the cart on the air track and attach springs so that it can vibrate. 1. Test whether the period of vibration depends on amplitude. Try at least one moderate amplitude, for which the springs do not go slack, at least one amplitude that is large enough so that they do go slack, and one amplitude that's the very smallest you can possibly observe. 2. Try a cart with a different mass. Does the period change by the expected factor, based on the equation `T=2pisqrt(m/k)`? 3. Use a spring scale to pull the cart away from equilibrium, and make a graph of force versus position. Is it linear? If so, what is its slope? 4. Test the equation `T=2pisqrt(m/k)` numerically.
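Lab exercise 4 asks for a numerical test of `T=2pisqrt(m/k)`. For readers without an air track at hand, a short simulation can stand in for the apparatus. The Python sketch below is my own illustration, not part of Crowell's text; the mass, spring constant and time step are arbitrary choices. It integrates the force law `F=-kx` step by step and times one full cycle:

```python
import math

# Numerical test of T = 2*pi*sqrt(m/k), in the spirit of lab exercise 4.
# m, k and dt are arbitrary illustrative choices, not values from the text.
m = 0.5      # mass (kg)
k = 20.0     # spring constant (N/m)
dt = 1e-5    # time step (s)

x, v, t = 0.01, 0.0, 0.0   # release from rest at 1 cm amplitude
prev_v = -1e-30            # the cart starts by moving in the -x direction
sign_changes = 0
# Euler-Cromer integration of F = -kx; the velocity changes sign at each
# turning point, so two sign changes after release make one full period.
while sign_changes < 2:
    v += (-k * x / m) * dt
    x += v * dt
    t += dt
    if v * prev_v < 0:
        sign_changes += 1
    prev_v = v

print(f"simulated period:        T = {t:.5f} s")
print(f"2*pi*sqrt(m/k) predicts: T = {2 * math.pi * math.sqrt(m / k):.5f} s")
```

With these values the two printed periods agree to within the integration error, and shrinking `dt` tightens the agreement; changing the release amplitude leaves the simulated period essentially unchanged, as the summary's claim of amplitude independence predicts.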
https://www.vcalc.com/collection/?uuid=1e689403-f145-11e9-8682-bc764e2038f2
Vibration compaction is the most effective way of compacting coarse-grained materials. The effects of vibration frequency and amplitude on the compaction density of different backfill materials (No. 4 natural sand, No. 24 stone sand and No. 5, No. 8 and No. 43 aggregates) were studied in this research. The test materials were characterized based on particle size and morphology parameters using a digital image analysis technique. Small-scale laboratory compaction tests were carried out with variable frequency and amplitude of vibration using a vibratory hammer and a vibratory table. The results show an increase in density with the increase in amplitude and frequency of vibration. However, the increase in density with the increase in amplitude of vibration is more pronounced for the coarse aggregates than for the sands. A comparison of the maximum dry densities of the different test materials shows that the dry densities obtained after compaction using the vibratory hammer are greater than those obtained after compaction using the vibratory table at the highest amplitude and frequency of vibration available in both pieces of equipment. Large-scale vibratory roller compaction tests were performed in the field on No. 30 backfill soil to observe the effect of vibration frequency and number of passes on the compaction density. Accelerometer sensors were attached to the roller drum (Caterpillar model CS56B) to measure the frequency of vibration for the two vibration settings available on the roller. For this roller and the soil tested, the results show that the higher vibration setting is more effective. Direct shear tests and direct interface shear tests were performed to study the impact of particle characteristics of the coarse-grained backfill materials on interface shear resistance. A unique relationship was found between the normalized surface roughness and the ratio of the critical-state interface friction angle between a sand-gravel mixture and steel to the internal critical-state friction angle of the sand-gravel mixture.
https://hammer.purdue.edu/articles/thesis/Improvement_of_Stiffness_and_Strength_of_Backfill_Soils_Through_Optimization_of_Compaction_Procedures_and_Specifications/11312357/1
# Helmholtz equation

In mathematics, the eigenvalue problem for the Laplace operator is known as the Helmholtz equation. It corresponds to the linear partial differential equation $\nabla^2 f = -k^2 f$, where $\nabla^2$ is the Laplace operator, $k^2$ is the eigenvalue, and $f$ is the (eigen)function.

## Motivation and uses

The Helmholtz equation often arises in the study of physical problems involving partial differential equations (PDEs) in both space and time. The Helmholtz equation, which represents a time-independent form of the wave equation, results from applying the technique of separation of variables to reduce the complexity of the analysis. For example, consider the wave equation

$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)u(\mathbf{r},t) = 0.$$

Separation of variables begins by assuming that the wave function u(r, t) is in fact separable:

$$u(\mathbf{r},t) = A(\mathbf{r})\,T(t).$$

Substituting this form into the wave equation and then simplifying, we obtain the following equation:

$$\frac{\nabla^2 A}{A} = \frac{1}{c^2 T}\frac{\mathrm{d}^2 T}{\mathrm{d}t^2}.$$

Notice that the expression on the left side depends only on r, whereas the right expression depends only on t. As a result, this equation is valid in the general case if and only if both sides of the equation are equal to the same constant value. This argument is key in the technique of solving linear partial differential equations by separation of variables. From this observation, we obtain two equations, one for A(r), the other for T(t):

$$\frac{\nabla^2 A}{A} = -k^2, \qquad \frac{1}{c^2 T}\frac{\mathrm{d}^2 T}{\mathrm{d}t^2} = -k^2,$$

where we have chosen, without loss of generality, the expression −k² for the value of the constant. (It is equally valid to use any constant k as the separation constant; −k² is chosen only for convenience in the resulting solutions.) Rearranging the first equation, we obtain the Helmholtz equation:

$$\nabla^2 A + k^2 A = 0.$$

Likewise, after making the substitution ω = kc, where k is the wave number, and ω is the angular frequency (assuming a monochromatic field), the second equation becomes

$$\frac{\mathrm{d}^2 T}{\mathrm{d}t^2} + \omega^2 T = 0.$$

We now have Helmholtz's equation for the spatial variable r and a second-order ordinary differential equation in time. The solution in time will be a linear combination of sine and cosine functions, whose exact form is determined by initial conditions, while the form of the solution in space will depend on the boundary conditions. Alternatively, integral transforms, such as the Laplace or Fourier transform, are often used to transform a hyperbolic PDE into a form of the Helmholtz equation. Because of its relationship to the wave equation, the Helmholtz equation arises in problems in such areas of physics as the study of electromagnetic radiation, seismology, and acoustics.

## Solving the Helmholtz equation using separation of variables

The solution to the spatial Helmholtz equation $\nabla^2 A + k^2 A = 0$ can be obtained for simple geometries using separation of variables.

### Vibrating membrane

The two-dimensional analogue of the vibrating string is the vibrating membrane, with the edges clamped to be motionless. The Helmholtz equation was solved for many basic shapes in the 19th century: the rectangular membrane by Siméon Denis Poisson in 1829, the equilateral triangle by Gabriel Lamé in 1852, and the circular membrane by Alfred Clebsch in 1862. The elliptical drumhead was studied by Émile Mathieu, leading to Mathieu's differential equation. If the edges of a shape are straight line segments, then a solution is integrable or knowable in closed-form only if it is expressible as a finite linear combination of plane waves that satisfy the boundary conditions (zero at the boundary, i.e., membrane clamped). If the domain is a circle of radius a, then it is appropriate to introduce polar coordinates r and θ.
The Helmholtz equation takes the form

$$A_{rr} + \frac{1}{r}A_r + \frac{1}{r^2}A_{\theta\theta} + k^2 A = 0.$$

We may impose the boundary condition that A vanishes if r = a; thus

$$A(a,\theta) = 0.$$

The method of separation of variables leads to trial solutions of the form

$$A(r,\theta) = R(r)\,\Theta(\theta).$$

It follows from the periodicity condition that

$$\Theta(\theta) = \alpha\cos n\theta + \beta\sin n\theta,$$

with n an integer. The general solution A then takes the form of a generalized Fourier series of terms involving products of Jn(km,n r) and the sine (or cosine) of nθ. These solutions are the modes of vibration of a circular drumhead.

### Three-dimensional solutions

In spherical coordinates, the solution is:

$$A(r,\theta,\varphi) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\left(a_{\ell m}\, j_\ell(kr) + b_{\ell m}\, y_\ell(kr)\right) Y_\ell^m(\theta,\varphi).$$

This solution arises from the spatial solution of the wave equation and diffusion equation. Here jℓ(kr) and yℓ(kr) are the spherical Bessel functions, and Ymℓ(θ, φ) are the spherical harmonics (Abramowitz and Stegun, 1964). Note that these forms are general solutions, and require boundary conditions to be specified to be used in any specific case. For infinite exterior domains, a radiation condition may also be required (Sommerfeld, 1949). Writing $\mathbf{r}_0 = (x, y, z)$, the function $A(\mathbf{r}_0)$ has the asymptotics

$$A(\mathbf{r}_0) = \frac{e^{ikr_0}}{r_0} f\!\left(\frac{\mathbf{r}_0}{r_0}, k, u_0\right) + o\!\left(\frac{1}{r_0}\right) \quad \text{as } r_0 \to \infty,$$

where the function f is called the scattering amplitude and $u_0(\mathbf{r}_0)$ is the value of A at each boundary point $\mathbf{r}_0$.

## Paraxial approximation

In the paraxial approximation of the Helmholtz equation, the complex amplitude A is expressed as

$$A(\mathbf{r}) = u(\mathbf{r})\, e^{ikz},$$

where u is a complex-valued amplitude modulating the plane-wave factor. This equation has important applications in the science of optics, where it provides solutions that describe the propagation of electromagnetic waves (light) in the form of either paraboloidal waves or Gaussian beams. Most lasers emit beams that take this form. The assumption under which the paraxial approximation is valid is that the z derivative of the amplitude function u is a slowly varying function of z:

$$\left|\frac{\partial^2 u}{\partial z^2}\right| \ll \left|k\,\frac{\partial u}{\partial z}\right|.$$

This condition is equivalent to saying that the angle θ between the wave vector k and the optical axis z is small: θ ≪ 1. The paraxial form of the Helmholtz equation is found by substituting the above-stated expression for the complex amplitude into the general form of the Helmholtz equation as follows:

$$\nabla^2\!\left(u\, e^{ikz}\right) + k^2\, u\, e^{ikz} = 0.$$

Expansion and cancellation yields the following:

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} + 2ik\frac{\partial u}{\partial z} = 0.$$

Because of the paraxial inequality stated above, the ∂²u/∂z² term is neglected in comparison with the k·∂u/∂z term. This yields the paraxial Helmholtz equation:

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + 2ik\frac{\partial u}{\partial z} = 0.$$

Substituting u(r) = A(r) e−ikz then gives the paraxial equation for the original complex amplitude A:

$$\frac{\partial^2 A}{\partial x^2} + \frac{\partial^2 A}{\partial y^2} + 2ik\frac{\partial A}{\partial z} + 2k^2 A = 0.$$

The Fresnel diffraction integral is an exact solution to the paraxial Helmholtz equation.

## Inhomogeneous Helmholtz equation

The inhomogeneous Helmholtz equation is the equation

$$\nabla^2 A + k^2 A = -f \quad \text{in } \mathbb{R}^n.$$

In order to solve this equation uniquely, one needs to specify a boundary condition at infinity, which is typically the Sommerfeld radiation condition

$$\lim_{r\to\infty} r^{\frac{n-1}{2}}\left(\frac{\partial}{\partial r} - ik\right) A(r\hat{x}) = 0$$

uniformly in $\hat{x}$ with $|\hat{x}| = 1$, where the vertical bars denote the Euclidean norm. With this condition, the solution to the inhomogeneous Helmholtz equation is the convolution

$$A(x) = (G * f)(x) = \int_{\mathbb{R}^n} G(x-y)\, f(y)\, \mathrm{d}y$$

(notice this integral is actually over a finite region, since f has compact support). Here, G is the Green's function of this equation, that is, the solution to the inhomogeneous Helmholtz equation with f equaling the Dirac delta function, so G satisfies

$$\nabla^2 G + k^2 G = -\delta.$$

The expression for the Green's function depends on the dimension n of the space. One has
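Returning to the vibrating-membrane example above: the mode wavenumbers of a clamped circular drumhead follow directly from the zeros of the Bessel functions. The short Python sketch below is my own illustration, not part of the article; it assumes SciPy is installed, and the radius and wave speed are invented example values.

```python
import numpy as np
from scipy.special import jn_zeros

# Vibration modes of a clamped circular membrane of radius a.
# The separated radial solution is J_n(k r); the clamped-edge condition
# A = 0 at r = a forces k*a to be a zero j_{n,m} of the Bessel function J_n,
# so k_{n,m} = j_{n,m}/a and the mode frequency is f = c*k/(2*pi).
a = 0.25     # membrane radius (m) -- illustrative value
c = 100.0    # transverse wave speed on the membrane (m/s) -- illustrative

for n in range(3):                                    # n = nodal diameters
    for m, j in enumerate(jn_zeros(n, 3), start=1):   # first 3 zeros of J_n
        k = j / a
        print(f"mode (n={n}, m={m}): k = {k:6.2f} rad/m, "
              f"f = {c * k / (2 * np.pi):6.1f} Hz")
```

Because the Bessel zeros are not evenly spaced, the mode frequencies are not integer multiples of a fundamental, which is one reason a drum sounds less "pitched" than a vibrating string.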
https://en.wikipedia.org/wiki/Helmholtz_equation
The following are the basic concepts of vibration measurement and vibration measurement equipment. An ideal machine would not produce vibrations: in the absence of imbalance, misalignment, clearances, etc., there would be no causes for vibrations to occur. In practice this does not happen, and vibrations appear. Vibrations matter in many ways: they may cause noise, discomfort, breakdowns, etc. A well-designed machine will usually have quite low levels of vibration and noise. However, throughout the machine's life, clamping bolts loosen, components deform and clearances increase, not to mention misalignments, imbalances, etc. All these factors contribute to an increase in vibrations and resonances, which can increase the load on the bearings. In turn, vibrations accelerate the degradation processes of machine components, leading eventually to a malfunction. Vibrations are indicative of the operating condition of machines. While the forces generated in the operating machine remain more or less constant, vibration levels will also remain substantially constant. Furthermore, on most machines the vibration level has a normal value, and when the machine is in good condition its frequency spectrum has a characteristic shape. The frequency spectrum obtained when the machine is in good working condition is therefore often called the "signature" of the machine, and it is obtained through frequency analysis of the vibrations. When defects begin to develop, the vibrations begin to rise and the amplitude of certain spectral components increases. Thus, vibration measurement is widely used for maintenance; but measurement and analysis of vibration are also often used in the development of a machine, in its manufacture and in quality control.
2 Measurement of vibration – characterization of a periodic vibration
2.1 The vibrations that exist A body is said to vibrate when it describes an oscillating motion about a point. There are basically three types of vibrations: - random - transient - periodic Random vibrations rarely occur in machines. The phenomena in machinery that can give rise to almost random vibrations are, for example, cavitation in pumps or aerodynamic phenomena in fans. Transient vibrations only occur during starting and stopping, or when a process operating condition changes. They are therefore not very important for characterizing machine condition. It is periodic vibrations that really matter for characterizing the condition of machines. In each rotation cycle the phenomena occurring in the machine repeat themselves, thus creating periodic vibrations during operation.
2.2 Periodic vibrations The number of times a complete cycle takes place in a given time interval is called the frequency. Normally, in the case of machines, one speaks of the number of cycles per minute, RPM. When speaking of the number of cycles per second, the unit (1 cycle per second) is called the "hertz". Figure – Periodic vibrations The body may move at a single frequency, as for example in the case of a fan with a large imbalance, or at several frequencies at the same time, as for example in the case of a gear. In machines, the more common situation is that vibrations occur at many frequencies at the same time, so that looking at an oscilloscope one cannot distinguish the amplitude of the vibration at each frequency. This can be known with an apparatus that presents the amplitude of the vibrations at the various frequencies.
This separation into components is called frequency analysis, and it is a tool for the diagnosis of faults in machines. When making a frequency analysis, certain peaks are normally predominant, and these are directly related to the movements of the various parts of the machine. In this way, this type of analysis can determine which parts of the machine give rise to the vibrations.
2.3 The quantification of the level of vibration The vibration amplitude, which is the characteristic that describes its severity, can be measured in several ways. In the following figure, one can see the relationship between the peak-to-peak amplitude, the peak amplitude, the average value and the effective level (RMS). The peak-to-peak value is important in indicating the maximum vibration amplitude, which is an important parameter when it comes to knowing, for example, maximum displacements in machine tools, or for measurements made with displacement transducers.
2.4 The use of the RMS value and the peak value The effective value (RMS) is most often used because it takes into account a certain measurement time interval and gives a value that is directly related to the energy of the vibration, that is, to its destructive capacity; it is, therefore, an average value. In the following figure one can see three waveforms with the same peak amplitude and different effective values: one is a sine wave with a pulse superimposed on it, another is a pure sine wave, and another is a truncated sine wave. The pulses that can be observed in the first waveform, if they occurred in a rotating machine, would correspond to shocks. Measuring the peak amplitude is better for detecting shocks, or any other kind of impulsive phenomenon, because the RMS amplitude is by definition an average value over a given interval, while the peak value is the maximum of the signal in time. To measure vibrations without pulses, the effective (RMS) amplitude is the most suitable, as it provides an average value. To measure sinusoidal vibrations, either can be used, because there is a fixed relationship between the peak amplitude and the RMS value. A sinusoidal vibration has the following relationships between the various ways of measuring amplitude: peak-to-peak amplitude = 2 × peak amplitude; effective (RMS) amplitude = 0.707 × peak amplitude. These relationships have a direct practical application: if, with an overall vibration level meter, the ratio between the effective and peak amplitudes is found to be 0.707, you are in the presence of a sinusoidal vibration at a single frequency, which will be the machine's rotation speed. This deduction can then be used for diagnosis.
2.5 The displacement, velocity and acceleration of a vibration The relationships between the amplitudes of displacement, velocity and acceleration of a sinusoidal vibration are: d = displacement; velocity v = 2πf·d; acceleration a = 2πf·v = (2πf)²·d, where f is the frequency in Hz. From these formulas we see that the velocity is proportional to the displacement times the frequency, and the acceleration to the displacement times the square of the frequency. Thus it is at the higher frequencies that vibrations are expected to appear with the greatest acceleration. Figure – Spectra of displacement, velocity and acceleration. In the spectra of the figure, obtained at the same measurement point, one can see that at high frequencies vibrations effectively manifest themselves especially in acceleration.
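As a quick check of these relations, the short Python sketch below (my own example, not from the original text; the frequency and displacement values are arbitrary) converts a peak displacement into the corresponding velocity and acceleration amplitudes, together with the peak-to-peak and RMS figures:

```python
import math

# Amplitude relations for a sinusoidal vibration of frequency f (section 2.5):
#   v = 2*pi*f * d   (velocity from displacement)
#   a = 2*pi*f * v   (acceleration from velocity)
# and, from section 2.4, peak-to-peak = 2 x peak and RMS = 0.707 x peak.
f = 50.0        # vibration frequency (Hz) -- arbitrary example value
d = 10e-6       # displacement amplitude, peak (m) -- arbitrary example value

omega = 2 * math.pi * f
v = omega * d           # velocity amplitude, peak (m/s)
a = omega * v           # acceleration amplitude, peak (m/s^2)

print(f"displacement: {d * 1e6:6.1f} um peak, {2 * d * 1e6:6.1f} um peak-to-peak")
print(f"velocity:     {v * 1e3:6.2f} mm/s peak, {0.707 * v * 1e3:6.2f} mm/s RMS")
print(f"acceleration: {a:6.2f} m/s^2 peak")
```

Doubling the frequency doubles the velocity amplitude but quadruples the acceleration amplitude, which is why, as the spectra above show, high-frequency phenomena stand out most clearly in acceleration.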
In practice this means that if you want to monitor, with a vibration meter, phenomena that manifest themselves at high frequencies, such as the first signs of faults in bearings, the measurement parameter to use is acceleration.
2.6 What is frequency analysis The vibration meter gives us only a level measured over a wide range of frequencies. In order to know the individual value of each component, it is necessary to make a frequency analysis. Frequency analyzers used in vibration measurement employ an algorithm known as the Fast Fourier Transform (FFT). The various components of an operating machine each give rise, by themselves, to a vibration at a particular frequency. All these vibrations add together, giving a total that is the vibration over time. That is the phenomenon that can be seen on an oscilloscope, or what you might feel when you put your hand on a bearing. In the picture we can see the relationship between the signal in time and the frequency spectrum. The frequency spectrum breaks that total down into the individual portions that give rise to it. Thus it can be said that while the signal in time is a total, the frequency spectrum shows the portions that give rise to it. Figure – Through the frequency spectrum one can know which machine component causes the vibrations.
2.7 Parameters measured - displacement - velocity - acceleration With meters based on the use of an accelerometer, the user usually has the freedom to choose the measurement parameter: displacement, velocity or acceleration. There are several possible choices, and it is normal to find very different measurement procedures among users, according to the purpose of the measurement. It may be noted that there are numerous standards for evaluating the severity of vibrations in machines using velocity as the measurement parameter (e.g. ISO 10816). Experience shows that velocity is indeed the most appropriate parameter for monitoring defects on most machines (imbalance, misalignment, clearances, looseness, etc.). The only exception is bearing damage, which generates pulses that are easier to detect in acceleration. Displacement is used when displacement sensors are employed or, for example, on machine tools where what is at stake is the machining tolerance.
3 Measurement of vibration – vibration sensors Typically, three types of transducer are used in industry: - displacement - velocity - acceleration
3.1 Displacement sensors for measuring vibrations Displacement sensors (also known as proximity sensors or proximitors), as frequently used in industry, measure variations in a magnetic field and act as contactless comparators. Their advantages and limitations arise from this fact. Displacement sensor pair Advantages – They measure the vibrations directly on the shaft. On machines with oil-film bearings, a great deal of vibration damping takes place, so shaft vibration levels are often much larger than the vibrations measured on the bearings. On this kind of machine, phenomena sometimes occur that can only be detected by measuring the vibrations directly on the shaft. – They measure vibrations down to DC (0 RPM). Because they function as contactless comparators, they measure vibrations almost down to 0 RPM. – When installed in pairs on a bearing, they determine the position of the center of the shaft. Disadvantages – The measurements are influenced by the surface finish of the shafts. Irregularities and out-of-roundness are measured as if they were vibrations. – They only measure vibrations up to 1 kHz.
At frequencies above 1 kHz, the displacement amplitudes caused by physical phenomena in the materials are so small that they blend with the irregularities of the shaft surfaces. – The sensors are installed permanently. For this reason they represent a more significant investment, which is justified only on larger machines.
3.2 Velocity sensors for measuring vibrations Velocity sensors consist of a coil and a magnet. The voltage generated in the coil is proportional to the relative velocity of the two. Advantages – They are self-generating. – They need no signal conditioning system. Disadvantages – High lower frequency limit (10 Hz). The natural frequency of these sensors normally lies around 10 Hz. This means that vibrations measured around this frequency are amplified; meters working with these sensors therefore normally include devices for filtering out vibrations at these frequencies. – Low upper frequency limit (1000 Hz). The oil damping inside them attenuates vibrations at frequencies above 1 kHz, so they are not well suited to detecting faults in bearings. – They have moving parts, and are thus subject to wear, malfunction, etc. – Lateral sensitivity. This means that in addition to measuring vibrations along their main axis, they also measure vibrations in lateral directions. Today they are falling out of use, because accelerometers replace them with multiple advantages.
3.3 Acceleration sensors for measuring vibrations The most common type is the piezoelectric accelerometer. In these, the electrical charge generated is proportional to the acceleration to which they are subjected. Advantages – They measure high frequencies. Typically the upper limit of the measured frequency range is imposed by the mounting of the accelerometer and can go up to a few tens of kHz. – They measure low frequencies. The lower frequency limit is imposed by the amplifier to which the accelerometer is connected, and can go down to hundredths of a hertz. – They measure both large and small levels of vibration. – They are very robust. – They are insensitive to lateral vibrations. Disadvantages – They need signal conditioning.
3.4 The mounting of accelerometers for measuring vibrations The manner in which the probe is brought into contact with the measurement point significantly affects the results of the measurements. As a general rule, the more rigid the attachment of the probe to the machine, the more accurate the measurements will be. So the ideal situation is for the accelerometer to be attached with a threaded stud. Of course this is often impractical, and in day-to-day use the most common methods are attachment with a magnet and the use of a hand-held probe tip, which allows easy access to the measurement points. This issue becomes critical when making vibration measurements at high frequencies. Figure – Frequency response of different attachments of an accelerometer. When using the hand-held tip with vibrations in the range of 0.5 to 1 kHz, it is easy to incur measurement errors greater than 100%. In the table below you can see a comparison of the various techniques.
3.5 Choosing the measurement point for measuring vibration The reason for measuring a machine's vibrations determines the position of the measurement point. When placing an accelerometer, one should choose the shortest path between the source of the vibrations (usually the rotor) and a point where the measurements can be made. Usually this means measuring on the bearing housings or on some rigid structure attached to them. Another issue that often arises is the direction in which one should measure.
3.5. Choosing the measurement point
The reason a machine's vibrations are being measured determines the position of the measurement point. When holding an accelerometer, choose the shortest path between the source of the vibrations (usually the rotor) and a point where the measurement can be made. In practice this usually means measuring on the bearing housings or on a rigid structure attached to them. Another question that often arises is the direction in which to measure. It is impossible to give a general rule, but vibration is usually measured in three directions: vertical, horizontal and axial. The vibratory behavior of machines, especially at high frequencies, is quite complex, so it is to be expected that vibration levels differ even between points very close together.
Figure – Measuring points on a motor–pump group.

4. Vibration measurement techniques
4.1. Measurement of the overall vibration level in accordance with ISO 10816-3
This type of measurement gives a simple reading of effective (RMS) velocity. The measured values can be compared directly with the severity criteria of the standards. This is how such equipment is used in quality control and in condition monitoring of simple machines, which are by far the most common: electric motors, pumps, fans. The defects controlled with this measurement are usually imbalance, misalignment, clearances and looseness. Bearing failures are the most common fault that this technique does not handle satisfactorily.
Advantages of this technique:
- simple to use
- low investment
Disadvantages:
- limited sensitivity
- only detects bearing faults in the final stages of degradation

4.2. Bearing condition monitoring (measuring the acceleration of vibrations at frequencies above 1 kHz)
The vibration produced by a bearing in early deterioration is beyond the perception of the human senses. Not only is its amplitude small, but the vibrations it generates are also submerged in the other vibrations generated by the machine. The general problem of detecting a bearing fault is how to separate the minute vibrations produced by the rolling elements, as they roll over a well-lubricated surface and strike the edges of a microscopic crack invisible to the naked eye, from the other vibrations of the machine. The fact that the overall vibration level measurement (10 Hz – 1000 Hz) often fails to detect this type of damage is what motivated the study of this subject. To understand the solutions that were developed for detecting bearing faults, it is necessary to know how the vibrations manifest themselves as the degradation of the bearing evolves.

4.2.1. Vibratory symptoms of a degrading bearing
Consider the case of a bearing developing a fault on its outer race, on a machine running at, say, 3000 RPM.
Stage 1: Fatigue produces micro-cracks under the raceway surface. Bursts of vibration occur at very high frequencies (hundreds of kilohertz), known as acoustic emission. Typically these vibrations are lost in the machine's background noise.
Stage 2: The micro-cracks reach the surface of the race. The edges of the crack are sharp, and the impacts on them produce very abrupt shock waves. These generate shock vibrations extending up to 300 kHz. The vibrations produced are very small and remain below the machine's background vibrations up to about a few kHz.
Stage 3: The crack grows and the successive impacts of the rolling elements round off its edges. The vibrations produced now extend only to about 100 kHz, and the amplitude of the vibrations at low frequencies increases. When vibrations appear at frequencies below 500 Hz, the defects are clearly visible.
Stage 4: The raceway surface degradation becomes significant and easily visible. The loss of material has the effect of completely rounding the edges of the crack. The vibratory effect can be detected in the mid-range and, at the end, at low frequencies.
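A minimal numerical sketch of the idea behind this technique, with an invented signal (all numbers are assumptions, not field data): a strong 50 Hz machine component completely masks a weak train of bearing-like impacts in an overall 10 Hz – 1 kHz reading, while a band above 1 kHz makes the defect obvious. SciPy provides the filters:

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 50_000                                    # sample rate, Hz (assumed)
    t = np.arange(0, 1.0, 1.0 / fs)
    machine = 5.0 * np.sin(2 * np.pi * 50 * t)     # dominant imbalance component

    # Assumed defect: ~90 impacts per second, each ringing a structural
    # resonance at 8 kHz that decays quickly.
    impulses = np.zeros_like(t)
    impulses[:: fs // 90] = 1.0
    ring = np.exp(-4000.0 * t[:200]) * np.sin(2 * np.pi * 8000 * t[:200])
    defect = 0.5 * np.convolve(impulses, ring)[: len(t)]

    sos_overall = butter(4, [10, 1000], btype="bandpass", fs=fs, output="sos")
    sos_high = butter(4, 1000, btype="highpass", fs=fs, output="sos")

    for label, sig in [("healthy", machine), ("worn bearing", machine + defect)]:
        for band, sos in [("overall 10-1000 Hz", sos_overall), ("band > 1 kHz", sos_high)]:
            rms = np.sqrt(np.mean(sosfilt(sos, sig) ** 2))
            print(f"{label:12s} | {band:18s} | RMS = {rms:.3f}")

The overall reading is essentially identical for the healthy and worn cases, while the band above 1 kHz changes from almost zero to a clearly measurable level; this is why bearing meters look above 1 kHz.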
4.2.2. Bearing condition monitoring – limitations of this technique
All methods that claim to detect bearing faults at an early stage do so by measuring vibrations at high frequencies. As can easily be inferred from the above, bearing fault detection methods based on high-frequency measurements start from two principles:
- When a bearing degrades, vibrations are produced at high frequencies.
- The only vibrations existing at high frequencies are those produced by a degraded bearing.
In everyday life there are many situations where this is not true; hence the limitations of such measurements.
The first limitation is that this technique loses sensitivity for bearings rotating at speeds below 1000 RPM and is of questionable effectiveness below 600 RPM. In this speed range, the high-frequency shock vibrations described above are simply no longer produced.
The second limitation is that high-frequency vibrations are rapidly damped in materials, and their amplitude is drastically reduced at the interfaces between machine components. Thus, if the measurement point is not located close to the bearing, the technique loses sensitivity or even fails altogether.
The third limitation comes from the existence of other sources of high-frequency vibration. These limitations must therefore be kept in mind when using this technique.
Possible causes of high-amplitude vibrations at high frequencies (shocks): from the diagram in the figure it is apparent that there is a long way between measuring high-amplitude vibration at high frequencies and concluding that a bearing is degraded. Phenomena external to the bearings that can generate high-frequency vibrations include:
- cavitation
- aerodynamic phenomena
- shocks from gears in poor condition
- shocks from loose parts
- etc.
Even if the vibration originates in the bearing, the bearing may still not be in poor condition. If lubrication is not being performed under suitable conditions, the lubricant film that should separate the rolling elements from the races breaks down, giving rise to shocks just as would occur if the bearing were worn out. Experienced technicians, when they first measure high levels of high-frequency vibration on a bearing, make it a rule to lubricate it. The levels fall immediately. If after some time (for example, three days) the level has not risen again, the problem was poor lubrication. If, on the contrary, it returns to the previous value, one is indeed dealing with a degraded bearing. How, then, to overcome these limitations?

4.2.3. Bearing condition monitoring – overcoming the limitations of this technique
If these limitations are not dealt with effectively, the usefulness of the technique can often be jeopardized. Experience shows that few conclusions can be drawn from an isolated measurement, because of the limitations described. However, if a sequence of measurements is carried out instead of a single one, most of the limitations can be overcome. In virtually all facilities where this technique is applied successfully, the vibration levels of the machines are measured regularly. The condition of the machines is evaluated not on the basis of a single measurement but on a set of measurements. Regular measurements establish a normal level, and the results of new measurements are then compared against this reference level.
4.3. Measurement of vibration – frequency spectrum analysis with a vibration analyzer
Simple vibration meters, such as those described above, measure the overall vibration level in a wide frequency band. The measured level reflects the amplitude of the principal components of the spectrum, which, evidently, it is important to control. But when the vibration is analyzed in frequency and the spectrum is presented graphically, the levels of many more components, possibly important ones, are revealed. This technique is called vibration analysis. Not only does the growth of component amplitudes in the frequency spectrum give an early indication of faults; the frequencies at which they occur also indicate which parts of the machine are deteriorating. For each measurement point there are characteristic frequencies for imbalance, misalignment, clearances, gear problems, etc., which can therefore be diagnosed with the help of frequency analysis. Frequency spectrum analysis thus allows the diagnosis of faults.
Advantages of use:
- diagnosis of faults
- no rotational speed limits
Limitations:
- cost
- the qualification required of the operator
The picture above shows several vibration analyzers.

5. Evaluation of the results of the measurements
5.1. Introduction
When, after stating that a given machine needs to be taken out of service for maintenance work, it turns out that the machine was in good condition after all, this is an unhappy situation that can occur from time to time. If the alert was given by a condition monitoring system, it is one of the worst things that can happen to that system's credibility. The correct evaluation of measurement results is therefore one of the key success factors of a machine inspection program. There are multiple criteria that can serve as a basis for evaluating the results of the measurements.

5.2. Rating criteria
- standards
- values provided by the manufacturers of measuring equipment
- values provided by the manufacturers of the machines to be monitored
- comparison with values measured on identical machines
- experience
- trend tracking

5.2.1. The ISO 10816-3 standard
Standards on acceptable vibration levels are often used as a first guide in assessing the operating condition of machines. Some standards, such as ISO 10816, specify limits that depend on several factors. ISO 10816 – Evaluation of machine vibration by measurements on non-rotating parts – recommends that the measuring range cover all the relevant frequencies of the machine, which of course vary from machine to machine.

5.2.1.1. ISO 10816-3 – the classification of the machines
Part 3 of this standard, which is specifically dedicated to in-situ measurements on industrial machines with rated power above 15 kW and nominal speeds between 120 r/min and 15 000 r/min, first classifies machines according to their type, shaft power or shaft height, and the stiffness of the support structure:
- Group 1: machines with power above 300 kW; electrical machines with shaft height H ≥ 315 mm
- Group 2: machines with power between 15 kW and 300 kW; electrical machines with shaft height 160 mm ≤ H < 315 mm
- Group 3: multivane-impeller pumps with separate drive and power above 15 kW
- Group 4: multivane-impeller pumps with integrated drive and power above 15 kW
As for the supports, they are classified as rigid or flexible. A support is considered rigid in one direction when the lowest natural frequency of the combined machine and support, in the direction of measurement, is at least 25% higher than the rotational speed of the machine.
Two vibration assessment criteria are considered:
- one based on the amplitude of the vibrations
- one based on variations of that amplitude

5.2.1.2. ISO 10816-3 – the evaluation zones
To evaluate machine vibration on the basis of its magnitude, four zones are considered:
- Zone A: the vibrations of a new machine generally fall in this zone.
- Zone B: machines with vibration levels in this zone are normally considered fit for unrestricted long-term operation.
- Zone C: machines with vibration levels in this zone are normally considered unfit for continuous long-term operation. Usually the machine may be operated for a limited period until an opportunity arises to take corrective action.
- Zone D: vibration levels of this magnitude are normally considered severe enough to cause damage to the machine.
The limits given in this standard are valid for measurements taken in radial directions and, on thrust bearings, in the axial direction. The limits are expressed in terms of effective (RMS) velocity and effective displacement, the latter applying to machines at low rotational speeds.
The other criterion specified by this standard concerns variations of the vibration level. In particular, when the vibration level increases or decreases by more than 25% of the upper value of zone B, the variation must be considered significant, especially if it is sudden. For the definition of alarm values, the standard recommends the reference value plus 25% of the upper value of zone B. For machine shutdown, it recommends values not greater than 1.25 times the upper value of zone C. These values may not be applicable where the vibrations are produced by gears or bearings.
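The evaluation logic just described is easy to capture in a few lines. In this sketch the zone boundary values are placeholders only; the real limits depend on the machine group and support class and must be taken from the standard itself:

    # Placeholder boundaries in mm/s RMS (illustrative only; look up the
    # real values for the machine group and support class in ISO 10816-3).
    ZONE_A_B = 2.8
    ZONE_B_C = 4.5
    ZONE_C_D = 7.1

    def zone(v_rms: float) -> str:
        """Classify an effective (RMS) velocity reading into an evaluation zone."""
        if v_rms <= ZONE_A_B:
            return "A: typical of a new machine"
        if v_rms <= ZONE_B_C:
            return "B: unrestricted long-term operation"
        if v_rms <= ZONE_C_D:
            return "C: limited operation, plan corrective action"
        return "D: risk of machine damage"

    def alarm_level(reference: float) -> float:
        """Alarm: reference level plus 25% of the upper value of zone B."""
        return reference + 0.25 * ZONE_B_C

    def trip_level() -> float:
        """Shutdown: not greater than 1.25 times the upper value of zone C."""
        return 1.25 * ZONE_C_D

    reading = 3.9   # assumed measurement, mm/s RMS
    print(zone(reading))                       # falls in zone B
    print(f"alarm at {alarm_level(2.0):.2f}")  # reference of 2.0 assumed
    print(f"trip at {trip_level():.2f}")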
5.2.2. Values provided by the manufacturers of measuring equipment
Most manufacturers of vibration measurement equipment provide tables with criteria for evaluating the results of measurements made with their instruments.
– Criteria based on standards: the values provided for evaluating overall-level measurements are usually based on the values in the standards.
– The meters' own criteria: the values provided for evaluating bearing condition are generally specific to each type of meter.
The former have the limitations mentioned above. The latter have the limitations inherent to this type of bearing monitoring method. These devices work by measuring a particular band of high-frequency vibration and assuming that it originates exclusively in the bearings. When a bearing degrades, the amplitude of the vibrations in this band increases, and the fault is thereby detected. A bearing in good condition thus produces, under normal conditions, vibrations of a given amplitude, which makes it possible to build a table for assessing the condition of bearings. This would work fine if there were no other sources of high-frequency vibration. In reality there are, which limits the immediate applicability of the tables provided by the manufacturers of this type of meter. Quite often the meters indicate high values according to the table, and in the end the bearing turns out to be in good condition. There are even situations in which the measured values are always extremely high, making these tables impossible to apply. This limitation is easily overcome once there is previous experience with the machine in question because, in those circumstances, one already knows whether the reading means "defective" or "normal". So, the first time high values are obtained on a particular machine, we cannot be sure that the bearing is in poor condition.

5.2.3. Values provided by the manufacturers of the machines to be monitored
These values, when they exist, are always a good basis for assessing the state of a machine. Unfortunately, they are not provided very often. When data is provided, it is usually because the machines are already of a certain size, which is not the case for the overwhelming majority.

5.2.4. Comparison with values measured on identical machines
It is a rare machine that is the only one of its kind; most exist in more than one copy, even within the same plant. Comparing the measurement results of one of them with the others is therefore one of the most obvious ways to build a safe evaluation criterion.

5.2.5. Experience
Good experience is, as in everything, an excellent basis for evaluating measurement results. In practice it works very much like the previous criterion.

5.2.6. Trend tracking
On each machine there are numerous factors that can influence the absolute values of the readings. The safest method of evaluating measurement results therefore consists of taking a series of measurements during a period when the machine is known to be in good condition, using the value thus classified as normal as a reference, and defining acceptability criteria and limits from it. By assuming that the evolution of the trend (constant or growing) is more important than the absolute values, one obtains a criterion that eliminates constant errors and takes the specific characteristics of each machine into account. It is, for example, the only way to build a reasonable judgment on machines that are not new. (A numerical sketch of this trend criterion is given at the end of this article.)
Figure – Trend tracking.

5.3. The definition of alert and alarm levels
5.3.1. For those who are starting
The first day finally arrives when measurements will be made! You take the measurements, note the results, compare them with those given in a table, and they seem very high. What to do? This is indeed a critical phase for a beginner. At this point the reliability of any opinion is very low: besides there being no history of measurement results on the machine in question, whoever is measuring probably also has little experience with the technique. So, what to do? In these circumstances the recommendation is to DO NOTHING. It must be assumed that the reliability of an opinion given under these conditions is greatly reduced, and that the risk of error is excessively large. The credibility of future opinions can be jeopardized by failing miserably on the first one. If the values really do seem very high, return to the machine in question in the following days and try to ascertain whether the trend of the measurement results is, effectively, growing. If this is confirmed beyond doubt, you can then suggest an intervention. When implementing a condition monitoring system, other investments are made in addition to the measuring equipment: one must also invest in acquiring a minimum history of the machines and in the experience of whoever measures and interprets the results. The results of the first measurements should be considered part of the initial investment; they are not meant to provide immediate conclusions but rather to allow the construction of safe evaluation criteria and the acquisition of experience.
5.3.2. When there is experience
When there is experience, one tries to optimize. Indeed, warning and alarm thresholds are dynamic; they are not static values that, once established, remain so forever. Experience plays a key role in their definition, and it grows with time. One should avoid lists of absurd alarms that no one believes in because they were established at the time of implementation and never touched again.
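To close, here is a small sketch of the trend-tracking criterion from section 5.2.6 combined with the dynamic-threshold advice above. The baseline readings, the 25% alert margin, and the two-readings-in-a-row rule are all assumptions chosen for illustration:

    import numpy as np

    # Assumed history taken while the machine was known to be healthy (mm/s RMS).
    baseline = np.array([2.1, 2.3, 2.2, 2.0, 2.2])
    reference = baseline.mean()

    alert = 1.25 * reference    # assumed margin: a sustained 25% rise matters
    alarm = 2.00 * reference    # assumed margin: a doubling calls for action

    def evaluate(readings: list[float]) -> str:
        """Judge the trend, not one number: require two consecutive high readings."""
        if readings[-1] > alarm:
            return "alarm: well above reference, plan an intervention"
        if len(readings) >= 2 and all(v > alert for v in readings[-2:]):
            return "alert: rising trend confirmed, re-measure and investigate"
        return "normal: within the machine's own reference band"

    print(f"reference = {reference:.2f} mm/s")
    print(evaluate([2.4, 2.9, 3.0]))   # two consecutive readings above alert

Because the thresholds are derived from the machine's own baseline rather than a generic table, they can be re-derived whenever the reference period is updated, which is exactly the dynamic behavior recommended above.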
https://www.dmc.pt/en/medicao-de-vibracoes/
An optical photorefractive frequency-domain method is described for measuring displacement amplitude and phase of vibrating surfaces. The method is applicable to diffusely scattering surfaces and usable in either a point-detection or imaging configuration. The method utilizes an optical lock-in approach to measure phase modulation of light scattered from continuously vibrating surfaces. Picometer displacement sensitivities have been demonstrated over a frequency range of 100 Hz to greater than 100 kHz. The response of the spectral method is independent of the vibration frequency above the photorefractive cutoff frequency. Two methods are described that produce a readout beam intensity that is a direct function of the vibration amplitude suitable for imaging.
http://proxy.osapublishing.org/ao/abstract.cfm?uri=ao-36-31-8248
[Problem] How can a centrifugal fan be dynamically balanced on site? [Answer] On a new dry-process production line, the installed capacity of the various centrifugal fans accounts for 30% to 40% of the line's total power, so strengthening the on-line maintenance of centrifugal fans is very important. In particular, abrasion of the fan impeller unbalances the rotor, which increases the fan's vibration amplitude and seriously affects normal production. How, then, to dynamically balance the fan on site and eliminate the imbalance? Over years of fan maintenance and management work, the author has developed an effective and simple method: identify the correction position on the impeller by a drawing (graphical) method, then add a weight at that position to balance the fan.
1. Introduction
To bring the rotor into dynamic balance, the key is to find the correction position on the impeller and to determine the mass of the balance weight. Using the drawing method, the specific steps are as follows (a numerical sketch of the same construction is given after the worked example below):
(1) Run the fan until operation is stable. At the point M that best reflects the fan's vibration (such as a bearing seat), measure the amplitude A0 with a vibration meter, record it, and shut down.
(2) Divide the circumference of the impeller front (or rear) disc into three equal parts, marking points 1, 2 and 3.
(3) Clamp a pre-made trial block P at point 1 (its mass mp is chosen according to the size of the impeller, generally mp = 150 g to 300 g), repeat step 1, and measure the amplitude A1 at point M.
(4) Move the clamping block P to points 2 and 3 in turn, repeating step 3, and measure the amplitudes A2 and A3 at point M.
(5) Mapping: draw a circle of radius A0 with center O and divide it into three equal parts, marking the points O1, O2, O3. With O1 as center, draw an arc of radius A1; with O2 as center, an arc of radius A2; with O3 as center, an arc of radius A3. The three arcs intersect pairwise at the points B, C and D.
(6) Find the center O4 of triangle BCD; connect OO4 and extend it to meet the circle at point O5. O5 is the counterweight position. If the length of OO4 is L, the counterweight mass is m = mp × A0 / (2L).
(7) On the circumference of the impeller front (or rear) disc, locate the actual position O5 and weld an iron block of mass m there. The fan balancing is then finished.
2. Example
Shandong building materials Co. Ltd.
The grate cooler of the 1000 t/d cement clinker production line is equipped with one air blower (its technical data and basic structure are shown in the accompanying table and figure). The fan bearings are double-row spherical roller bearings, the foundation is a rigid concrete foundation, and the rotor is rigid. Because the fan was installed in a hurry, only a rough dynamic balance test was done before installation, and the working medium carries a large amount of dust, causing serious impeller wear. As a result, in May 2002 the fan's vibration amplitude increased greatly, so the four measuring points 1, 2, 3 and 4 were tested on site during operation; the test results are shown in Table 2. After reinforcing the fan base and excluding factors such as mechanical looseness and bearing failure, the main cause of the vibration was identified as rotor unbalance.
(1) Selection of the measuring point: measuring point #4 is close to the impeller, and changes in its vibration value directly reflect the size of the impeller unbalance, so point #4 was selected as measuring point M. The amplitude measured there was A0 = 210 µm.
(2) According to the size and vibration of the fan, and to operating and maintenance experience, a trial weight of mp = 180 g was chosen.
(3) The circumference of the impeller front disc was divided into three equal parts, points 1, 2 and 3, and the amplitudes at M were measured in turn: A1 = 226 µm, A2 = 208 µm, A3 = 256 µm.
(4) Mapping as described above gave the counterweight position O5. The measured length OO4 was L = 25 (in the drawing's amplitude units), so the actual counterweight mass is m = mp × A0 / (2L) = 180 × 210 / (2 × 25) = 756 g.
(5) The 756 g weight was welded to the front disc at O5, and after restarting, the vibration at point M was measured again; the results of the measuring-point tests after dynamic balancing are shown in Table 2.
3. Conclusions
(1) Using the drawing method for dynamic balancing of centrifugal fans is simple, and the instrument is cheap: the vibration meter mentioned in this paper is a GZ-4B pocket vibration meter, costing only about 900 yuan.
(2) The data measured by this method are taken with the fan in normal operation, closest to its actual working conditions, so the accuracy is higher than that of a general balancing machine (whose speed, generally 300–500 r/min, is far below the fan's normal speed); the method therefore has great value for general industrial enterprises. The author has compared the results of this mapping method with those of a dynamic balancing instrument, and the error is less than 2%.
(3) The method does not require removing the impeller; balancing can be done at the fan's working site, saving a great deal of manpower and downtime. Once mastered, a dynamic balance takes only about 1 h. It is especially suitable for checking impeller balance after on-site repairs and for balancing the rotor after a new impeller is fitted.
(4) The method is only suitable for centrifugal fans; it is not suitable for axial fans or positive-displacement fans.
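For readers who prefer to compute rather than draw, here is a sketch of the analytic counterpart of the construction above (the classic "four-run" balancing calculation, solved with vectors instead of compasses). Everything here is our own illustration: the function name is hypothetical, and note that this vector solution resolves the trial-weight effect magnitude T (the counterpart of the drawn length OO4) directly and scales the trial mass by A0/T rather than by the article's A0/(2L) drawing rule, so its mass estimate need not match the worked example above:

    import numpy as np

    def four_run_balance(A0, A1, A2, A3, m_trial):
        """Analytic version of the three-point drawing method.

        A0          -- amplitude with no trial weight
        A1, A2, A3  -- amplitudes with the trial weight at 0, 120 and 240 degrees
        m_trial     -- trial weight mass (grams)
        Returns (mass, angle_deg): correction mass and its angular position,
        measured from trial position 1 toward positions 2 and 3.
        """
        thetas = np.radians([0.0, 120.0, 240.0])
        A = np.array([A1, A2, A3], dtype=float)

        # |V0 + T e^{i theta_k}|^2 = A0^2 + T^2 + 2 A0 T cos(delta - theta_k);
        # summing over three equally spaced runs cancels the cosine terms.
        T2 = np.mean(A**2) - A0**2
        if T2 <= 0:
            raise ValueError("trial weight effect too small to resolve")
        T = np.sqrt(T2)

        # cos(delta - theta_k) for each run, then solve for the phase delta.
        c = (A**2 - A0**2 - T2) / (2.0 * A0 * T)
        x = np.sum(c * np.cos(thetas)) * 2.0 / 3.0   # ~ cos(delta)
        y = np.sum(c * np.sin(thetas)) * 2.0 / 3.0   # ~ sin(delta)
        delta = np.arctan2(y, x)  # noisy field data: (x, y) need not be unit length

        # Cancel the unbalance: scale the trial mass and place it opposite.
        mass = m_trial * A0 / T
        angle = (np.degrees(delta) + 180.0) % 360.0
        return mass, angle

    # Amplitudes from the worked example above (micrometres), 180 g trial weight:
    print(four_run_balance(210.0, 226.0, 208.0, 256.0, 180.0))

Like the drawing, this needs no phase measurement, only the four amplitude readings, which is what makes the approach practical with a simple pocket vibration meter.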
http://www.luxuryairtravels.com/en/display.asp?id=63
The University of Liverpool has been awarded £4.8 million funding from the Engineering and Physical Sciences Research Council (EPSRC) for a state-of-the-art Scanning Transmission Electron Microscope (STEM) that will allow researchers to characterise new materials and processes on the atomic scale. The 300kV aberration corrected STEM is one of the most advanced microscopes in the world and will support the University’s research activities in advanced materials, personalised health and energy, with a focus on advanced energy technologies and biomaterials. A key feature of the new instrument is its use of Artificial Intelligence (AI) to both improve the resolution and precision of images at very low signal levels. The AI function will also allow the microscope to optimise experiments itself, creating unique opportunities to characterise materials and processes which can lead to new technological innovations. The new STEM will allow observations and quantification of diffusion processes at surfaces, interfaces and defects in many new classes of potentially transformative materials, providing new insights into their properties and behaviour. This is important as the development of new classes of materials is dependent on atoms diffusing, reacting and attaching to specific locations. It will support the work of researchers across the University including the Materials Innovation Factory, the Digital Innovation Facility, the Stephenson Institute for Renewable Energy and the Albert Crewe Centre for Electron Microscopy, where it will be hosted. The new STEM is available for training, access and services to the wider UK scientific community and to industrial partners. Professor Nigel Browning, Director of the Albert Crewe Centre for Electron Microscopy, said: “This new flagship facility will significantly enhance our capabilities in imaging and help us to develop a new paradigm in imaging materials and processes. “It will have a significant impact on research in many areas across the University and beyond. Our aim is to open up new areas of study in dynamic materials processes and stimulate new ideas, Intellectual Property and scientific collaborations.” Further information on the Albert Crewe Centre for Electron Microscopy can be found here.
https://news.liverpool.ac.uk/2021/06/29/new-4-8m-state-of-the-art-microscope-to-study-materials-on-the-atomic-scale/
A collaboration of researchers in Japan reports on four years of extensive research into superconductivity, covering both the materials that were found not to have superconducting properties and those that did, and their potential for wires and devices. Materials that superconduct at more practical temperatures than liquid helium are in high demand, both for the insights in fundamental physics they may reveal and their device potential. A review published in Sci. Technol. Adv. Mater. Vol. 16 (2015) p. 033503 by a collaboration of researchers in Japan now provides a detailed overview of the past four years' extensive research on around 1000 materials, with detailed insights learnt from the newly discovered superconducting materials. A unique feature of this review is the incorporation of a list of the roughly 700 studied materials that did not show superconductivity. "This is probably the first such paper with opening of a list of experiments that failed," says Hideo Hosono, a researcher at Tokyo Institute of Technology and first author of the review. "It should be invaluable data for researchers in the field." Existing theory offers little that helps identify high-temperature superconductors, leaving a vast array of material possibilities. "We decided not to waste the time and effort of other researchers, and wrote this paper with the results of samples that did not go superconducting," adds Hosono, who also led the research behind the first discovery of iron-based superconductors in 2006. The research team members were also unique for a project in superconductivity since their research expertise emphasised solid-state chemistry over condensed-matter physics. They included researchers from Tokyo Institute of Technology, the International Superconductivity Research Center, the National Institute of Materials Science, Kyoto University, Hiroshima University, and Okayama University. In the past, similar research projects have been strictly focused on finding superconducting properties, resulting in an all-or-nothing outcome, and at the time the work in the review was carried out, funding for superconductivity research was in decline. Instead Hosono and his colleagues employed a flexible approach that led to valuable insights into other material properties, such as ammonia catalysis for fertilizer production, ambipolar oxide thin film transistors and metallic ferroelectricity.
https://www.powerelectronics.com/industry/reported-successes-and-failures-aid-hot-pursuit-superconductivity
...has so far forged alliances with more than 170 laboratories on six continents in a bid to enhance the ability of researchers to collect data at multiple sites on a massive scale...to enable researchers to expand their reach and collect “large-scale confirmatory data” at many sites. A selection committee has evaluated eight proposals and selected one based on experiments already replicated in the US and the UK. It aims to discover whether the research findings of Alexander Todorov, a psychologist at Princeton University, can be replicated on a global scale. Todorov has reported that people rank human faces on two components: valence and dominance. Valence is a measure of trustworthiness, whereas dominance is a measure of physical strength... More than 50 of PSA’s collaborating labs have already committed to collect data as part of the study. PSA isn’t the only effort aiming to change how researchers conduct psychological studies, which have received extensive criticism for a lack of reproducibility. Others include the Many Labs Replication Project and the Pipeline Project. Earlier this year, Chartier also launched StudySwap, an online platform designed to help researchers find collaborators for replication studies and exchange resources.
https://mindblog.dericbownds.net/2017/12/bringing-big-science-to-psychology.html
The iLENS project proposes to upgrade and extend the active measurement infrastructure Archipelago (Ark), to provide academic researchers an unprecedented laboratory in which to quickly design, implement, and easily coordinate the execution of experiments across a widely distributed set of dedicated monitors. Funding source: NSF CNS-0958547. Period of performance: March 1, 2010 - Feb 28, 2014. Effective Internet measurement raises daunting issues for the research community and funding agencies. Improved understanding of the structure and dynamics of Internet topology, routing, workload, performance, and vulnerabilities remains a disturbingly elusive priority, in part for lack of large-scale distributed network measurement infrastructure available to scientific researchers. The dearth is understandable; measurement of operational Internet infrastructure involves navigating more complex and interconnected dimensions than measurement in most scientific disciplines: logistical, financial, methodological, technical, legal, and ethical. CAIDA has been navigating these challenges with modest success for fifteen years, collecting, coordinating, curating, and sharing data sets for the Internet research and operational community in support of Internet science. With previous NSF (CRI) and other funding, we have been able to design, implement, deploy, and operate a relatively small but secure platform capable of performing various types of Internet infrastructure measurements and assessments. We propose to upgrade and extend -- in geographic scope as well as function -- this active measurement instrument (Ark) to provide academic researchers an unprecedented laboratory in which to quickly design, implement, and easily coordinate the execution of experiments across a widely distributed set of dedicated monitors. In September 2007 Ark began to support ongoing global Internet topology measurement and mapping, and Ark now gathers the largest set of IP topology data for use by academic researchers. We are using the best available, but still rudimentary, techniques for IP topology mapping, and we also make several processed data sets (AS-links, AS relationships) available as "soft infrastructure" to researchers. We propose to deploy new techniques, as well as supporting software for analysis, annotation, topology generation, and interactive visualization of resulting annotated Internet graphs. More importantly, we have demonstrated, and now wish to operationalize, the ability for this infrastructure to serve other researchers undertaking macroscopic studies of the Internet. Our first two experiments with external use of the infrastructure resulted in publications in the Internet Measurement Conference in 2008 and 2009. We look forward to a broad cross-section of research communities making substantial use of our Internet measurement infrastructure. Our top infrastructure development priorities are: (1) add monitors in geographic and topological areas where we lack coverage; (2) improve tools for processing raw topology data, to enable an unprecedented range of Internet mapping research while reducing the burden on individual researchers and students to achieve results; (3) enhance and develop new software modules to support new types of experiments and validation. We propose to conduct annual workshops to collect, synthesize, and plan implementation of feedback on infrastructure operation.
Sustainable funding for large-scale measurement instrumentation past the span of a given funded research project has eluded the Internet research community, which has inhibited the creation of an underlying discipline that formalizes our observations and understanding of this complex networked system. By lowering the cost in time and effort needed to implement a measurement idea, Ark allows researchers to test and evaluate more experimental, sophisticated, and risky ideas, and facilitates integration of measurements and data into course curricula. The data currently provided by our infrastructure has strengthened the intellectual merit of a wide range of network modeling, simulation, analysis, and theoretical research activities. The broader impacts of the proposed work are reflected in the new types of research and data enabled, including historical Internet studies, evaluation of future Internet architectures, and empirical grounding for the emerging discipline of network science. Throughout the project we will emphasize support for external researchers wishing to run experiments on Ark. We will provide data storage, analysis tools, Internet measurement expertise and advice, and a system for continuous feedback to improve the operation of our experimental infrastructure and to increase user satisfaction. The labor effort includes: 1) maintenance and support of the central server and remote monitors, 2) integration and deployment of new monitors and coordination with remote sites, 3) integration of data and compute servers, and network switch, 4) software development including bulk DNS queries, interactive visualization, and integration of real time routing data, 5) curation, archival and distribution of the data, 6) development of supporting documentation, web pages, surveys, and educational materials, and 7) organization of annual workshops and publication of resulting reports. CAIDA personnel will be responsible for accomplishing all proposed tasks. The detailed project timeline follows. Note that the submitted budget will support a full-time effort for only one system administrator, and only part-time effort for the other five researchers involved, so we spread some of the proposed tasks, particularly software development, over longer intervals than they would otherwise require.
https://www.caida.org/funding/ilens/summary.xml
This page outlines the facilities and resources available within the Institute of Mathematics and Physics. These are a mixture of teaching and research resources. Teaching Resources Library The Mathematics and Physics library is housed on the fourth floor of the Physical Sciences Building, allowing easy access for students to browse texts and find a quiet place to work between lectures. Laboratories Two dedicated teaching laboratories are located within the building, along with a specialised optics room. All physics lab modules take place in these laboratories, and they contain a range of equipment and computers needed for practical physics. Workshops The laboratories are supplemented by the electrical and mechanical workshops which custom build pieces of equipment for experiments and projects. Lecture Theatres A range of lecture theatres are also contained within the building, meaning that Mathematics and Physics students can find all the facilities and resources required for their studies located under one roof. Lobby The newly refurbished lobby provides a relaxation space for students between lectures. Vending machines provide drinks and snacks. Research Resources Synthetic Environment Laboratory The SEL houses the virtual reality equipment to allow researchers to visualise their data in 3D. Materials Laboratory The Materials Laboratory contains equipment to study the properties of different substances. Planetary Analogue Terrain Laboratory The PATLAB, or "Mars Yard" as it is affectionately known, is used to develop autonomous capabilities for planetary rovers. Robotic Telescope The robotic telescope located on the roof of the building allows study of both the seas and the sky. Supercomputing Facilities A 128 node, quad core supercomputer housed in the department allows researchers to run state-of-the-art simulations of e.g., atomic interactions, astrophysical phenomena, and foams.
https://www.aber.ac.uk/en/phys/supporting-you/useful-links/resources/
Several core strategies are important across all NINDS scientific goals and will be essential to implementation of the Strategic Plan.
Table of Contents:
- Rigor and Transparency
- Investigator-initiated Research
- Diversity and Inclusion
- Team Science
- Data Sharing and Data Science
- Neuroethics
- Patient Engagement
- Technology Access
- Models for Neuroscience Research
- Collaboration and Partnership
- NINDS Intramural Research Program

Rigor and Transparency
Promote scientific rigor and transparency throughout all NINDS programs and policies
All scientific progress requires rigorous, creative, and high-quality studies that build upon validated prior discoveries. Many scientific reports, however, do not transparently describe the design, methods, or analysis of experiments so that others may adequately assess their quality. Potential flaws in the practice of science that cannot be evaluated based on published reports undermine future research efforts. To maximize the value of the taxpayers’ investment in our research, NINDS programs and policies must ensure that studies are conducted rigorously and reported transparently. NINDS has been a leader within NIH and the research community in promoting rigor and transparency. In 2012, a major Institute workshop convened stakeholders from academia, industry, academic publishing, and government toward this end. That workshop, subsequent meetings, establishment of the NINDS Office of Research Quality (ORQ) and NINDS Rigor Working Group (NRWG), and several activities of this group have improved attention to rigor and transparency within NINDS, across the NIH, and in the research and publishing community. Most recently, NINDS surveyed current training practices and convened a workshop that brought together subject matter experts capable of evaluating current educational practices to discuss how best to impart knowledge about the fundamental principles of rigorous research. Informed by this discussion, NINDS is developing a framework for advancing rigorous research that will include the formation of an educational platform on the principles of rigorous research as well as the establishment of networks of rigor champions in the research community who will contribute to the development of the educational platform and work together to change the culture of science to favor high quality research over novel but unsubstantiated findings1. NINDS is establishing communities of Rigor Champions to share resources and best practices for ensuring scientific research meets high standards of quality. Learn more about Rigor Champions and find rigor resources here.

Investigator-initiated Research
Maintain an emphasis on investigator-initiated research, balancing short- and long-term investments, small and large-scale efforts, and revolutionary (high risk/high reward) and evolutionary (high quality, more incremental) approaches
NINDS will continue to rely primarily on investigator-initiated research to advance fundamental understanding of the brain, spinal cord, nerves, and neuromuscular system. Curiosity-driven, investigator-initiated research is especially well suited to supporting discovery research, which engages investigators’ insights about how the brain works and pursues new avenues revealed by unanticipated findings.
There are myriad unsolved questions about the nervous system that research is unravelling at multiple levels of analysis, from molecules to the neural network dynamics underlying behavior of whole organisms, bringing to bear knowledge and methods from a wide spectrum of scientific, engineering, and medical disciplines. In this rapidly evolving landscape of opportunity, engaging the diverse perspectives and insights of thousands of scientists, engineers, and physicians to seek out the best opportunities to advance our understanding has been and continues to be the most effective path forward. NINDS provides a variety of funding opportunities for investigator-initiated research with differing review criteria, funding levels, durations, component structures, and other characteristics designed to support small to large-scale research efforts, collaborations, and team science, including potentially “revolutionary” (high risk/high reward) research and “evolutionary” (high quality, more incremental) research that is also essential for progress. Similarly, the Institute supports both short-term, exploratory studies and investigations which, by their nature, must be long-term investments. For example, the Research Program Award (R35) grants can extend for eight years and support an investigator’s overall research program rather than a discrete set of specific aims. Thus, the Research Program Award is especially suitable for innovative and long-term basic research. Because individual investigators have historically driven progress, especially discovery research, in neuroscience, NINDS policies will maintain the vigor of the neuroscience research community, ensuring that as many laboratories as possible can be adequately supported. Investigators are especially vulnerable early in their careers, and NINDS will continue its aggressive policies (see, for example, funding strategies) to ensure that they have a fair chance. Similarly, the challenges of neuroscience dictate that NINDS must draw its workforce from the full breadth of the nation’s talent pool, as discussed in the training and diversity sections of this plan. To support the development of biomarkers, preclinical development of therapies, and large clinical studies, NINDS often relies upon targeted funding announcements with review criteria and grant characteristics designed to meet the special needs of more applied research and development, including milestone-based funding. Although these programs rely upon solicitations, the Institute also designs many of these targeted funding opportunities with a similar spirit to traditional investigator-initiated discovery-oriented research programs by focusing not on specific disorders or approaches, but rather providing broad flexibility for investigators and teams to address needs within the NINDS mission and pursue the most promising opportunities for progress. NINDS will continue to examine which funding mechanisms are effective for all types of research, including team science, and to modify these programs as warranted.

Diversity and Inclusion
Enhance the diversity and inclusiveness of our workplace and the neuroscience research workforce
NINDS has long recognized that achieving diversity in the neuroscience and biomedical research workforce is critical to realizing our research goals. Enhancing the diversity and inclusiveness of our workplace and the broader neuroscience and biomedical research workforce will enhance our overall creativity and ability to adapt.
All of neuroscience benefits if we can engage all segments of society in our efforts to reduce the burden of illness due to neurological disorders and stroke. As the U.S. population becomes increasingly diverse, reflecting that diversity in the biomedical research workforce is vital to the scientific enterprise and the NIH research mission. Diversity affects performance, creativity, and other organizational drivers of success (see Science of Diversity articles), and there are compelling reasons for NINDS to promote a diverse workforce and increase participation by underrepresented groups such as those identified in the NIH’s Interest in Diversity Notice. Advancing diversity is expected to produce several tangible and overlapping benefits, including the recruitment of the most talented researchers and staff from all groups; higher quality research and training environments; broader perspectives in setting research priorities; more people from diverse backgrounds participating in clinical research studies; and a greater capacity to address health disparities. At NINDS, we view diversity, inclusion, and equity as cross-cutting issues that are an essential part of the way we work to fund, conduct, and support research. NINDS has a comprehensive strategy to enhance diversity at all stages of the biomedical research career trajectory which includes targeted training programs, an assessment of diverse perspectives in our select pay process, and an integrated approach to increasing workforce diversity across the Institute. Moreover, NINDS encourages activities to support diversity by all staff, throughout all corners of the Institute. To foster internal input and involvement, the Diversity Working Group (DWG), composed of program directors representing every scientific portfolio at NINDS, meets monthly to discuss issues related to diversity and to implement strategies for enhancing diversity in the neuroscience workforce. NINDS is also a committed partner in the NIH UNITE Initiative, an agency-wide effort to identify and address structural racism within the NIH and across the extramural scientific community.

Team Science
Support innovative team science approaches for emerging research opportunities of broad scope and complexity
Progress has brought an ever-increasing knowledge base and armamentarium of technological capabilities to neuroscience. With this has come a trend toward increasing collaboration among scientists. To a large extent, researchers, as always, form temporary alliances to take on specific experimental challenges, and NINDS grants to individual investigators provide the flexibility to do so. For some types of research, such as clinical trials and drug development, team science has long been the norm, and NINDS has specific programs that address those needs. Team science has been less common in basic neuroscience research, although there are notable exceptions, including programs currently underway within the BRAIN Initiative® that bring together scientists and engineers from across several disciplines. NINDS is currently exploring programs to support team science that are underway across the NIH and beyond. The Institute is learning from these programs and assessing whether current NINDS grant mechanisms are optimal to support emerging neuroscience research opportunities of broad scope and complexity that may require a sustained team science approach.
Beyond grant mechanisms, changes in the culture and reward systems of research may be necessary to fully realize the potential of team science to advance the NINDS mission, and the Institute will work with the research community toward that end.

Data Sharing and Data Science
Develop and implement policies, infrastructure, and resources to take advantage of data science and foster sharing of high value data among the research community
The NIH policy on data sharing notes that sharing scientific data helps validate research results, enables researchers to combine data types to strengthen analyses, facilitates reuse of hard-to-generate data or data from limited sources, and accelerates ideas for future research inquiries. Data science is increasingly important across the full spectrum of studies, from basic research, in which new technologies are rapidly generating valuable data, through clinical research, in which tools and centralized databases facilitate team clinical science across institutions and investigators. As NIH implements this policy, which requires all NIH-funded research to include data management and sharing plans, the amount, emphasis, complexity, and cost of creating, curating, harmonizing, storing, accessing, and reusing neuroscience data will grow substantially in the next 5-10 years. As data from emerging technologies grows in scale and more powerful analysis tools emerge, issues facing the research community more broadly will have a major impact on the effectiveness of NINDS. Among these, for example, are ensuring appropriate rewards and credit for creating and sharing data; determining how best to take advantage of burgeoning progress in the related areas of artificial intelligence and machine learning, which Institute investigators are rapidly applying across many areas of basic and applied research; defining appropriate policies to preserve data privacy; and fostering a neuroscience workforce trained in cutting-edge data practices. NINDS is currently developing an NINDS Data Science Plan, which will be aligned with the NIH Strategic Plan for Data Science and guide the Institute in developing data sharing principles, policies, infrastructure, and resources to maximize the opportunities and cost effectiveness of its research investments.

Neuroethics
Identify and navigate ethical challenges and implications arising from neuroscience by supporting neuroethics resources for the neuroscience community and fostering research and training in neuroethics
Advances in science can present ethical challenges. Existing ethical frameworks may require interpretation in new contexts as science moves forward. For neuroscience, this can be especially trenchant because of the brain’s centrality to fundamental aspects of ourselves. As a specialization of bioethics that focuses on neuroscience, neuroethics can partner with neuroscience to scan the horizon for ethical challenges, identify and explore the underlying values and assumptions of diverse stakeholders, and assist in mitigating potential ethical concerns. Thus, neuroethics can empower neuroscience research and inform the design, conduct, interpretation, and application of research.
The BRAIN Initiative® has a robust neuroethics component that includes a neuroethics research portfolio and an NIH-external Neuroethics Working Group that serve to provide BRAIN with input relating to neuroethics. Building on this exemplar, NINDS has established a new NINDS Neuroethics Program that will work with NIH staff and stakeholders to identify and navigate ethical challenges and implications of neuroscience research programs and discoveries, and to facilitate neuroscience progress.

Patient Engagement
Increase patient engagement in all appropriate aspects of NINDS research to better address the priorities of patients and their families and to improve the efficiency and effectiveness of research
NINDS will engage people with neurological conditions and their families in setting priorities, planning, and conducting research. The priorities of individuals living with neurological conditions and their families may not always be apparent to those not experiencing the problems that a disease presents. For example, surveys of people with spinal cord injury have revealed that walking may not be the highest priority; individuals with Parkinson’s disease have stressed the importance of non-motor symptoms on their quality of life; and the epilepsy community has noted the impact of comorbidities, the side effects of current drugs, and the concern about Sudden Unexpected Death in Epilepsy (SUDEP). Patient advocacy organizations also provide insight that can greatly improve the efficiency and effectiveness of research, not only in recruiting for clinical studies, but also in many other aspects of studies involving human participants, including reducing barriers to participation. NINDS has some important activities in place to support engagement, as discussed in the Communications section of this plan. Notably, the annual Nonprofit Forum is planned by a rotating Executive Committee that includes numerous patient advocacy groups. In 2020, the NIH HEAL℠ Initiative virtual workshop, “Engaging Patients in the Research Process,” explored the benefits for research of engaging people with neurological conditions in the planning and oversight of clinical research and patient recruitment for pain research, considering successful examples from other areas of medical research that are relevant across many areas of NINDS research. NINDS is committed to increasing patient engagement in all appropriate aspects of clinical research across all areas of the Institute’s mission.

Technology Access
Ensure that researchers throughout the scientific community can exploit emerging technologies, resources, and knowledge, including those emerging from the BRAIN Initiative®
New technological research capabilities are emerging from NIH investments, most notably the BRAIN Initiative®. Among these, for example, are advanced microscopy methods, automated behavioral analysis tools, and large scale “-omics” analyses that can potentially be applied to many research questions but require substantial resources. To maximize the effectiveness of all NINDS research, the Institute must ensure that these capabilities are widely available. Furthermore, a growing proportion of the NINDS budget is directed to special programs including the BRAIN Initiative®, the NIH HEAL Initiative, and AD/ADRD research.
NINDS will coordinate management of these investments to maximize impact and efficiency. NINDS also will continue to provide scientific resources that enhance the capabilities of investigators to carry out high quality research, while minimizing unnecessary duplication of efforts and taking advantage of economies of scale. Current NINDS resources, for example, provide access to genetically modified mice, human post-mortem brain tissue and related biospecimens, genetic samples and cell lines, and validated monoclonal antibody reagents. As the scientific and technological landscape changes, the Institute must continually assess what resources are best provided centrally and which are more appropriately focused on individual laboratories or institutions.

Models for Neuroscience Research
Maintain support for the full spectrum of neuroscience research models
Looking back at the paths of discovery that led to successful therapies for neurological disorders reveals that a wide variety of experimental models were essential in their research and development. Humans and simpler organisms share many fundamental aspects of biology. Simpler organisms provide extraordinary opportunities for scientific investigations. For example, studies in the fruit fly, Drosophila melanogaster, revealed fundamental principles of genetics, and the worm C. elegans helps scientists understand neuronal development because it has only 302 nerve cells, which have been completely mapped, compared to the 80 billion nerve cells in the human brain. Similarly, genetic engineering has enabled scientists to study basic capabilities like memory and to model the mechanisms of key steps in disease by creating mice with the mutations that cause human disorders. For some types of studies, nonhuman primates are critical model organisms because of their anatomical, physiological, and behavioral similarity to humans. NINDS will continue to support research across the full spectrum of models, as appropriate to the scientific questions, with careful attention to ethical conduct of research and oversight frameworks. The Institute also will continue to support novel research approaches that may reduce the necessity for using animal models and improve the efficiency and effectiveness of research. This includes, for example, research on cells and organoids derived from human induced pluripotent stem cells, computer modeling, and an array of technologies, including advanced imaging techniques, that can noninvasively study the structure, function, and biochemistry of the human brain.

Collaboration and Partnership
Fostering productive collaborations and partnerships in their many forms is a major strategic priority across all scientific goals, both with respect to NINDS programs and for research projects themselves
Collaborations and partnerships of many kinds are essential for advancing the NINDS mission and are becoming ever more crucial as science advances and reveals intersections between the interests of NINDS and other organizations, and among researchers with different areas of expertise. Integration among NINDS extramural programs is essential to ensure the seamless flow of insights among basic, preclinical, and clinical research. The Intramural Research Program provides an environment for collaborations across Institute, disciplinary, and basic/clinical boundaries that have historically been a strength of the program.
The NIH Blueprint for Neuroscience Research provides a framework for collaboration and coordination among the many parts of the NIH whose missions intersect the brain and nervous system. Among the many reasons for increasing collaboration of NINDS with all parts of NIH is a growing recognition of the importance of studying how the brain (and nervous system generally) influences and is influenced by other regulatory and organ systems in the body. NINDS has several long-standing collaborations with the Food and Drug Administration, the Centers for Disease Control and Prevention, the Department of Defense, the Department of Veterans Affairs, and other federal agencies. Beyond these ongoing relationships, the BRAIN Initiative®, Helping to End Addiction Long-term℠ (HEAL) Initiative, Accelerating Medicines Partnership Parkinson’s Disease (AMP-PD), and other recent activities are paving the way for more productive interactions with industry, non-governmental organizations, agencies, and the broader medical community with which NINDS has less frequently collaborated.

The NINDS Intramural Research Program’s unique funding structure provides investigators with the stability, flexibility, synergistic environment, and resources to conduct distinctive long-term and high-risk, high-reward science. For more information, visit the NINDS Intramural Research Program.

NINDS Intramural Research Program
Exploit the NINDS Intramural Research Program’s unique capabilities to advance the NINDS mission

Because it is not tethered to extramural grant review cycles, the NINDS Intramural Research Program (IRP) is in a unique position to capitalize on both long-term and high-risk/high-reward science that is more difficult for the extramural community to undertake. Additionally, the flexibility of the intramural funding structure allows the NINDS IRP to rapidly respond during public health emergencies. To fully realize its potential, the NINDS IRP is engaged in a detailed planning process to identify areas of science and scientific resources that should be augmented within the NINDS IRP; enhance clinical care within NINDS; increase collaboration across the basic-to-clinical spectrum, and with other Institutes and extramural researchers; and ensure that evaluations of faculty, staff, and trainees reward high-quality, innovative research and excellence in training and mentoring.
https://www.ninds.nih.gov/about-ninds/strategic-plans-evaluations/strategic-plans/ninds-strategic-plan-and-priorities/cross-cutting-strategies
This group includes metallurgists, soil scientists and physical scientists and researchers, not elsewhere classified, involved in the conduct of theoretical and applied research in fields of physical science. They are employed by governments, educational institutions and a wide range of industrial establishments.

Job Duties for Other professional occupations in physical sciences
- Metallurgists: Conduct studies into the properties and production of metals and alloys.
- Conduct research into the composition, distribution and evolution of soils.
- Conduct research into the properties, composition and production of materials such as ceramics and composite materials.
- Command, pilot or serve as crew members of a spacecraft to conduct scientific experiments and research, and participate in space missions as mission specialists to maintain spacecraft, install and repair space station equipment and launch and recapture satellites.

Working Conditions for Other professional occupations in physical sciences
Work may be performed indoors or outdoors, depending on the task or project.
https://www.ntab.on.ca/noc/noc2/noc21/noc211/noc2115/
Researchers make temporary, fundamental change to a material's properties

Aided by short laser flashes, researchers at the Paul Scherrer Institute have managed to temporarily change a material's properties to such a degree that they have – to a certain extent – created a new material. To monitor these changes, they used very short flashes of x-rays generated by the x-ray laser LCLS in California. With these x-ray flashes they scanned the material and gained insight that could help in the development of materials for more high-performance electronic devices. Experiments like the ones with LCLS will be possible at PSI with the entry into service of the x-ray laser SwissFEL at the end of 2016. For researchers at PSI, the studies in the USA are an opportunity to gain experience that can then be used in setting up the experimental workstations at SwissFEL. At present, only two such x-ray lasers exist in the world. To conduct experiments there, researchers must apply for time to do so, and successfully compete with many others from around the globe. PSI researchers have already carried out diverse experiments in both facilities. They also benefit from the experience gained at the Swiss Light Source (SLS) at PSI. While the SLS is not an x-ray laser, it can generate very short, albeit much weaker, flashes of x-ray light. Some simpler experiments, similar to those with an x-ray laser, can be conducted at the SLS.

New material generated for just a short time

Materials can have very different properties. For instance, some are good electrical conductors, whilst others are good insulators; some are magnetic, others aren't. These properties are determined by the behaviour of the particles from which the material is made, and in some cases, by how electrons are arranged inside the material, and whether or not they can move. If you change the electrons' freedom of movement, you can also change the material's properties. In co-operation with scientists at ETH Zurich, the University of Tokyo and the research lab SLAC in Stanford (California), PSI researchers conducted an experiment on the material Pr0.5Ca0.5MnO3, where additional energy was injected into electrons with the help of a very short laser pulse. Electrons, which were almost all firmly bound to specific atoms, were then able to skip from atom to atom. The material transformed, to a certain extent, from an insulator into a metal. "We practically created a new material which doesn't occur in this form in nature," explains Urs Staub, a physicist at PSI. "This material, i.e. this new state, only exists for a very, very short period of time. But there is still enough time to investigate its properties."

Short exposure time on the x-ray laser

The new state was investigated using the x-ray laser LCLS, which is operated by SLAC in California. LCLS generates very short and very intense flashes of x-ray light which reveal the processes inside a material. As this state changes fast, it is important that the flashes are short, to ensure the images don't "wobble." The researchers repeated the experiment several times, varying the time interval between the laser and the x-ray pulse. In this way they can determine how the inner state of the material is modified on an ultrafast time scale.

Understanding the materials

The material investigated has a similar structure to materials which could be of importance for electronic devices, because they manifest what is known as colossal magnetoresistance.
This effect leads to major changes in the material's electrical resistance when it is near a magnet. This could be important, for instance, during readout of magnetic memories. "The results help us gain a fundamental understanding of how materials of this kind behave," explains Paul Beaud, physicist at PSI, who conducted the experiment together with Staub. "In this way a material's properties can be modified in a specific manner to develop new materials."

PSI researchers prepare for the large-scale research project SwissFEL

PSI researchers have already conducted numerous experiments with the x-ray lasers LCLS in the USA and SACLA in Japan. Like the large-scale PSI facilities, these facilities are open to external researchers who can apply for experiment time there. The competition is tough – only the most interesting and promising projects make the cut. "They have confidence in us because we have already been conducting successful experiments of this kind at the SLS for some time now," stresses Staub. PSI's Swiss Light Source is not an x-ray laser, but it does produce intense x-ray light from which very short flashes can be generated using special techniques. This means that experiments can be carried out which, in principle, are very similar to those using the x-ray laser. Nonetheless, an x-ray laser opens up completely new opportunities for specific experiments. It permits the investigation of processes which were not accessible to previous methods, mainly because the light of the x-ray laser is far more intense and comes in shorter pulses. That's why PSI is building its own x-ray laser, SwissFEL, which is scheduled to enter service at the end of 2016. Beaud and his colleague Gerhard Ingold, who were involved in constructing and operating the ultra-short x-ray source at the SLS, are now developing an experimental station for SwissFEL to investigate fast processes in solid-state physics. "For this project the experience gained in the USA and in Japan is of immeasurable value," comments Beaud.
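The stroboscopic pump-probe logic described above (excite with the laser, probe with an x-ray flash, repeat at many delays) is simple enough to sketch. The Python sketch below simulates such a delay scan under stated assumptions: the pump switches the transient state on at time zero, and all time constants, noise levels, and shot counts are illustrative placeholders rather than values from the PSI/LCLS experiment.

```python
import numpy as np

def probe_sample(delay_ps, rise_ps=0.3, decay_ps=20.0):
    """One idealized x-ray measurement: before the pump arrives
    (negative delay) the signal is zero; afterwards the transient
    metallic state switches on quickly and relaxes back slowly.
    Time constants are illustrative, not from the experiment."""
    if delay_ps <= 0:
        return 0.0
    return (1.0 - np.exp(-delay_ps / rise_ps)) * np.exp(-delay_ps / decay_ps)

def delay_scan(delays_ps, shots_per_point=50, noise=0.02, seed=0):
    """Repeat the measurement at each pump-to-probe delay and average
    over many shots, as in a stroboscopic pump-probe scan."""
    rng = np.random.default_rng(seed)
    trace = []
    for d in delays_ps:
        shots = probe_sample(d) + noise * rng.standard_normal(shots_per_point)
        trace.append(shots.mean())
    return np.array(trace)

delays = np.linspace(-1.0, 10.0, 45)  # picoseconds; the pump arrives at t = 0
trace = delay_scan(delays)            # trace[i] ~ state of the material at delays[i]
```

Plotting `trace` against `delays` would map out how fast the transient state appears and decays, which is exactly the information the repeated scans with varied laser-to-x-ray intervals provide.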
https://phys.org/news/2014-09-temporary-fundamental-material-properties.html
Can companies rely on the results of one or two scientific studies to design a new industrial process or launch a new product? In at least one area of materials chemistry, the answer may be yes – but only 80% of the time. The replicability of results from scientific studies has become a major source of concern for the research community, particularly in the social sciences and biomedical sciences. But many researchers in the fields of engineering and the hard sciences haven't felt the same level of concern about independent validation of their results. A new study that compared results reported in thousands of papers published about the properties of metal organic framework (MOF) materials, prominent candidates for carbon dioxide adsorption and other separations, suggests the replicability problem should be a concern for materials researchers, too. One in five studies of MOF materials examined by researchers at the Georgia Institute of Technology were judged to be ‘outliers’, with results far beyond the error bars normally used to evaluate study results. Over the thousands of papers, there were just nine MOF compounds for which four or more independent studies allowed an appropriate comparison of results. "At a fundamental level, I think people in materials chemistry feel that things are reproducible and that they can count on the results of a single study," said David Sholl, a professor in the Georgia Tech School of Chemical and Biomolecular Engineering. "But what we found is that if you pull out any experiment at random, there's a one in five chance that the results are completely wrong – not just slightly off, but not even close." Whether the results can be more broadly applied to other areas of materials science awaits additional studies, Sholl said. The results of this study, which was supported by the US Department of Energy, are reported in a paper in Chemistry of Materials. Sholl chose MOFs because they're an area of interest to his lab – he develops models for the materials – and because the US National Institute of Standards and Technology (NIST) and the Advanced Research Projects Agency-Energy (ARPA-E) had already assembled a database summarizing the properties of MOFs. Co-authors Jongwoo Park and Joshua Howe used meta-analysis techniques to compare the results of single-component adsorption isotherm testing – how much CO2 can be removed at room temperature – for the MOFs in this database. This measurement is straightforward and there are commercial instruments available for doing the tests. "People in the community would consider this to be an almost foolproof experiment," said Sholl. The researchers considered the results definitive when they had four or more studies of a given MOF at comparable conditions. The implications for errors in materials science may be less than in other research fields. But companies could still use the results of just one or two studies to choose a material that appears to be more efficient. In other cases, researchers unable to replicate an experiment may simply move on to another material. "The net result is non-optimal use of resources at the very least," Sholl said. "And any report using one experiment to conclude a material is 15% or 20% better than another material should be viewed with great skepticism, as we cannot be very precise on these measurements in most cases." Why the variability in results? Some MOFs can be finicky, quickly absorbing moisture that affects adsorption, for instance.
The one-in-five ‘outliers’ may be a result of materials contamination. "One of the materials we studied is relatively simple to make, but it's unstable in an ambient atmosphere," Sholl explained. "Exactly what you do between making it in the lab and testing it will affect the properties you measure. That could account for some of what we saw, and if a material is that sensitive, we know it's going to be a problem in practical use." Other factors that may prevent replication include details that were inadvertently left out of a method's description – or that the original scientists didn't realize were relevant. That could be as simple as the precise atmosphere in which the material is maintained, or the materials used in the apparatus producing the MOFs. Sholl hopes the paper will lead to more replication of experiments so scientists and engineers can know if their results really are significant. "As a result of this, I think my group will look at all reported data in a more nuanced way, not necessarily suspecting it is wrong, but thinking about how reliable that data might be," he said. "Instead of thinking about data as a number, we need to always think about it as a number plus a range." Sholl suggests that more reporting of second, third or fourth efforts to replicate an experiment would help raise the confidence of data on MOF materials properties. The scientific publishing system doesn't currently provide much incentive for reporting validation, though Sholl hopes that will change. He also feels the issue needs to be discussed within all parts of the scientific community, though he admits that can lead to “uncomfortable” conversations. "We have presented this study a few times at conferences, and people can get pretty defensive about it," Sholl said. "Everybody in the field knows everybody else, so it's always easier to just not bring up this issue." And, of course, Sholl would like to see others replicate the work he and his research team did. "It will be interesting to see if this one-in-five number holds up for other types of experiments and materials," he added. "There are certainly other areas of materials chemistry where this kind of comparison could be done." This story is adapted from material from the Georgia Institute of Technology, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
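The kind of comparison the Georgia Tech team describes, pooling four or more independent measurements of the same material and flagging studies that fall far from the consensus, can be illustrated with a short Python sketch. The data values and the 50% deviation threshold below are invented for illustration and are not the paper's actual criterion.

```python
import numpy as np

def flag_outliers(uptakes, rel_threshold=0.5):
    """Take the median of independent CO2-uptake measurements of one
    material as the consensus value and flag any study whose result
    deviates from it by more than `rel_threshold` (fractional).
    The 50% threshold is an illustrative choice, not the paper's."""
    uptakes = np.asarray(uptakes, dtype=float)
    consensus = np.median(uptakes)
    rel_dev = np.abs(uptakes - consensus) / consensus
    return consensus, rel_dev > rel_threshold

# Hypothetical replicate measurements (mmol CO2 per gram) for one MOF,
# from four or more studies at comparable conditions:
measurements = [2.1, 2.3, 2.2, 0.7, 2.0]
consensus, outliers = flag_outliers(measurements)
print(consensus, outliers)  # 2.1 [False False False  True False]
```

The sketch also illustrates Sholl's "number plus a range" point: a single value means little without the spread across replicates that `rel_dev` captures.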
https://www.materialstoday.com/materials-chemistry/news/mofs-have-an-issue-with-replicability/
The NCSU Libraries and the American Society for the Prevention of Cruelty to Animals (ASPCA) are honored to receive a major grant award from the Council on Library and Information Resources (CLIR). The Libraries will partner with the ASPCA on the three-year project "'The Animal Turn': Digitizing Animal Protection and Human-Animal Studies Collections." A $360,384 Digitizing Hidden Special Collections and Archives award from the CLIR will fund the digitization of some 239,000 pages of archival materials from the Libraries' nationally significant animal rights and welfare collections, and approximately 150,000 pages from the ASPCA's records documenting its history as a national leader in animal protection since its founding in 1866. “The animal turn” describes the shift in scholarly interest in the growing field of human-animal studies which incorporates diverse and multidisciplinary components of animal advocacy discourse housed in the Libraries’ collections and the ASPCA’s records. Together, these important historical materials will become available to users and researchers through a single online access point, forming an unprecedented resource on human-animal studies that emphasizes the intersectionality between humans and non-humans. The ASPCA records, along with the Libraries’ holdings, will help scholars piece together a more complete historical narrative about animal protection and human-animal studies, and will allow researchers to identify new historical connections within the field. “We are honored and excited to have been awarded this grant from CLIR,” says Gwynn Thayer, Acting Department Head of Special Collections at NCSU Libraries. “We are especially thrilled to be partnering with the ASPCA. We believe that the archival materials that will be digitized during the next three years will allow scholars from NC State as well as around the world to have better access to important primary source materials that will help further scholarship in the field of human-animal studies.” “This collaborative effort with the NCSU Libraries is an important step in documenting and safeguarding the ASPCA’s rich history and the legacy of our founder, Henry Bergh,” noted Elizabeth Estroff, Senior Vice President of Communications for the ASPCA. “We are thankful for this generous grant from the CLIR to ensure future generations have access to the ASPCA’s extensive history of progress and innovation in the fight against animal cruelty.” Leading scholars in the field of human-animal studies have expressed their support and enthusiasm for the project. “This is a project of monumental scholarly and public significance,” University of Texas American Studies professor Janet M. Davis notes, adding that the partnership between the NCSU Libraries and the ASPCA is a “truly outstanding collaboration.” “Until now, no archives or libraries have been designed to document non-human life in the way this project will,” writes University of Guelph professor Susan Nance. “The potential impact of this digitization project simply cannot be understated.” Nance observed that the project would “revolutionize” her field of research. 
The former Chief of the History of Medicine Division of the National Library of Medicine, John Parascandola, also endorsed the project: “This project will support research in such diverse fields as history, philosophy, science and technology studies, sociology and animal studies.” This project will digitize many materials in the Libraries’ key collecting area of Animal Rights and Welfare, housed in our Special Collections Research Center (SCRC). The ASPCA collection to be digitized includes annual reports, awards, manuscripts, photographs, publications and visual resources that document and provide insight into both the ASPCA’s history, development and growth as an institution, and its position as a leader in the field of animal advocacy and protection. The archives document the growth of the animal welfare movement through the lens and experience of the ASPCA, which was the first animal welfare organization in the country. The Council on Library and Information Resources is an independent, nonprofit organization that forges strategies to enhance research, teaching, and learning environments in collaboration with libraries, cultural institutions, and communities of higher learning. Its Digitizing Hidden Special Collections and Archives awards program, which is generously supported by funding from The Andrew W. Mellon Foundation, supports the creation of digital representations of unique content of high scholarly significance that will be discoverable and usable as elements of a coherent national collection.
https://www.lib.ncsu.edu/news/libraries-partners-with-aspca-on-largescale-digitization-of-animal-welfare-materials
The 1990s have begun a lengthy period of unparalleled economic development throughout the world. Many international communities are on the verge of great industrial and economic strides. Nowhere is this more true than in Asia and the Pacific, where international industrial and economic connections are being completely refashioned. Yet despite the promise, the accelerated rate of economic and social change has created an information vacuum as change itself has outpaced the supply of information on which to base decisions that would assure prudent and successful development. In the resulting economic, industrial, and political reformation, implementing a system of rapid information exchange is of the utmost importance. An immediate supply of the appropriate information could place emerging multinational economies very quickly in positions of stability and health. Of course, informed decision making for development requires awareness of the most current relevant information. Libraries can play a crucial role in this process, and not simply as automated "middlemen" passing along information from publishers and vendors as they always have. Some innovative vendors are already providing those services better than libraries can, with such services as CompuServe, America Online, and others. Instead, libraries, especially research libraries, can produce new information that is valuable and relevant to their users from the bodies of research they have at hand that is unique, valuable, and not available from any vendor in any form, virtual or otherwise. This information can become the content for original databases and other online resources that can promote national development, and give libraries a new and important role as points of information provenance. For example, paper is still very convenient and relatively economical. But it is not easy to search, making it unacceptable for data extraction. For libraries of the future, paper will be best suited for texts meant for breadth of coverage. This includes background readings, abstract treatments that require extensive reflection, aesthetic readings, and other literary works. As a result, in the modern research library, CD-ROM is most appropriate for less dynamic information, and for information of insufficient user need to justify the expense of accessing it from an online source. CD-ROM will also continue to be used for preliminary searches of off-line backfiles of expensive databases before going online to retrieve the most current information. Nevertheless, most of the online resources available today themselves have drawbacks that impede research and the discovery of new knowledge, as explained below. Research libraries of the next decade, therefore, need to exploit the strengths of online technology to minimize these drawbacks and make online an even more effective research tool. This model applies to information published in every format because even most CD-ROM publishers and online database vendors are either themselves publishers of print resources, or else they get the information for their electronic resources from providers of print publications such as journals and proceedings. So even the online databases we prize so highly are not as dynamic as they appear, nor as current as they could be. Because they depend on traditional publishing methods, they are not taking full advantage of online capabilities, nor of the community of researchers who daily make new and important discoveries.
In this new model, libraries establish their own local online databases, and deal directly with researchers, beginning with those in their own institutions, and with researchers at partner institutions as well. In effect, this new model by-passes the commercial publisher and all of the enormous expense and lost time that traditional publishing entails, whether the format be print, CD-ROM, or conventional commercial online services. Following this model, research libraries will create their own online databases specifically pertinent to their institutions. These specialized full-text, full-image resources will contain the latest original research, submitted online and updated regularly by the researchers themselves. These intensive institutes are attended by faculty from universities and colleges across the United States. As part of their activities, institute participants contribute instructional materials to the Center's rapidly growing collection. The Center then shares these materials with participants in subsequent institutes. The Center's purpose in collecting and sharing these materials is to help participants incorporate Asian studies into the curricula at their own institutions. By converting the materials to electronic format and creating an online resource for them, access will be greatly extended well beyond the institutes. In addition, the database will attract online submissions from other Asian studies faculty and experts around the world, causing the database to grow even faster. The East-West Center staff will provide the editorial services for the database, which will be managed by the librarians at the Kapiolani Library. The Library received a research grant for the project, and creation of the database and conversion of the materials is underway. We expect the database to be launched this spring. The second audience for this document delivery service is the entire Pacific area. The head of the EMS department at Kapiolani College has consulted with officials in Beijing, Hainan, Taiwan, Hong Kong, and Singapore. These government officials have conferred with our EMS department seeking assistance in designing quality EMS services, and in obtaining information about EMS medical procedures, EMS administration, and curricula for EMS training programs. Hawaii's excellent EMS department, which is composed almost exclusively of Kapiolani graduates, is of particular interest to developing nations in Asia because foreign tourists are more likely to visit a nation that can provide excellent emergency medical services. Accordingly, an online resource to support EMS practice and curricula in Asia and the Pacific is of extreme interest to EMS professionals in these same nations. Our plans are to establish a cooperative relationship with the U.S. Department of Transportation in Washington, D.C., which has a vast collection of EMS curricular materials; with the EMS Clearinghouse in Florida, which also has a rich store of EMS information; and with the U.S. Federal Emergency Management Agency (FEMA), also in Washington, D.C., which also deals with EMS matters. Our intention with the EMSD is to acquire information from these partner agencies that is especially relevant to Asia and the Pacific, and add it to our database in the Kapiolani Library for delivery when requested in this hemisphere.
The database will also develop a distinctly Asian-Pacific orientation as a result of original contributions we will solicit directly from EMS professionals, instructors, and physicians in this hemisphere. We intend to gather records, statistics, and other pertinent raw data from the foremost of these establishments to input into the database where it will be available for study and analysis throughout Hawaii and across the Asian-Pacific hemisphere. We expect to cooperate with the Hawaii Visitors Bureau (HVB), which also has accumulated considerable information about Hawaii's tourism industry, and which is looking for an appropriate means to disseminate this information more broadly than the HVB has been able to do in the past. We also hope our cooperation with the Waikiki hotels and the HVB will include funding assistance from them for this project. Development in Asia, as with everywhere else, depends on the availability of information. The creation of local online resources by research libraries in Asia's universities and research centers can be an invaluable step to creating a critical mass of online research that will aid social and economic development and attract contributions from interested researchers internationally. Asia is in a perfect position to launch multiple databases of this type. For instance, universities and research centers in Greater China, which includes the Mainland, Hong Kong, Taiwan, and Macau, produce a wealth of new knowledge that is also of great value to the broader region and beyond. Online resources built on the original research produced here can make major contributions to the development of Asia and the Pacific. First, they would provide international exposure to the research being performed here, and will attract recognition as well as notable contributions. The availability of the current research would also be an incentive to increase research output among the faculties; it would provide an important resource for students to enrich the content of educational programs, and would promote a research culture at higher education institutions. In addition, these same research studies, by being online, would also make the expertise of the research institutions immediately available to the larger professional communities to spur more rapid and prudent development. Partnership in the management of such databases is a very attractive possibility that would promote cooperation between research institutions. For instance, the City University of Hong Kong's Materials Research Centre engages in the fabrication and testing of building materials. Their cooperation with Tsinghua would only enhance the value and the content of an online database on building materials and practices. An online database of the researchers' findings, jointly managed by the combined research teams, constantly updated by new research, and supplemented by reports of court actions and rulings that will transpire over the next several years could smooth the transition, and would be of inestimable value to attorneys, courts, law faculties, and civic administrators. A large part of the value of this database would be its immediate accessibility and currency. It would expedite proceedings by providing the corpus of precedents that will emerge during this unique period in legal history.
Another subject database on China Studies could be created to contain research on cultural development, educational changes, comparative law, financial and commercial trends, infrastructure development, and many other topics relating to Greater China assembled from the research work of several institutions. The point is that local universities and research agencies can decide which subject areas these databases will address. The databases would be more inclusive and representative of the institutions' research pursuits than any existing library collection, and would spread the research findings farther and more quickly than any other form of publication ever could. Perhaps more importantly, librarians are trained public servants. They are experienced in assisting information seekers and mediating some very inhospitable information resources. This training will serve them very well in creating and managing new online resources. With the interactivity that is possible on the Internet, management of online resources now has a much larger public relations responsibility, much like the person-to-person encounters librarians routinely conduct at reference desks. Librarians who are webmasters, for instance, are finding that they must respond to large numbers of online requests and suggestions. The type of database I am proposing will attract inquiries, contributions solicited and unsolicited alike, and volumes of chat, which must be answered or re-directed. Nevertheless, librarians will need to make certain professional adjustments. For one, they must become better team players to interact well with researchers and computer experts who will share the duties of creating and managing these online resources. Similarly, librarians must also become more aware of and active in the research process; they must become more expert in subject areas; and they must learn cross-disciplinary conventions and methodologies to assure the greatest efficiency of the databases they will design and mediate. Researchers, too, must adjust to the idea that exposure of their studies is no longer limited to print publications, professional meetings, and conference proceedings. They must also realize that local online databases will present a new wealth of information that must be consulted. But I have no worries about researchers making these adjustments. I believe that local area databases of the type I propose will only stimulate more research and discovery. Together, librarians and researchers also must learn to interact more effectively with the professional community, and even make partnerships with practicing professionals, government representatives, and business persons. This type of community involvement is alien to many university librarians and researchers, but it is essential if the online research resources proposed here are to have a rapid effect on social and economic development. Researchers can no longer be content to remain disconnected from the general community and conduct dialogues only among themselves. Also, close cooperation between research institutions and the business and professional communities can result in better support and funding for continued research and for further development of the online resources.
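To make the proposal concrete: the heart of such a local full-text database is an index mapping words to the records that contain them. The minimal Python sketch below illustrates the idea; the records and identifiers are hypothetical, and a production system would add stemming, fielded metadata, and relevance ranking.

```python
from collections import defaultdict

def build_index(records):
    """Minimal inverted index: map each word to the set of record IDs
    containing it. Real library systems add stemming, fielded
    metadata, and relevance ranking; this shows only the core idea."""
    index = defaultdict(set)
    for record_id, text in records.items():
        for word in text.lower().split():
            index[word.strip(".,;:()")].add(record_id)
    return index

def search(index, *terms):
    """Return the record IDs containing all query terms (AND search)."""
    hits = [index.get(term.lower(), set()) for term in terms]
    return set.intersection(*hits) if hits else set()

# Hypothetical records in a local research database:
records = {
    "ems-001": "Curriculum outline for paramedic training programs",
    "ems-002": "Emergency medical services administration survey",
    "mat-001": "Testing of composite building materials",
}
index = build_index(records)
print(search(index, "paramedic", "training"))  # {'ems-001'}
```

This is the searchability that, as argued above, paper cannot offer: once records are indexed, any researcher can extract exactly the items relevant to a query in an instant.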
But a word of warning to librarians and researchers is necessary: if they do not take action to provide this type of information and thereby secure the direction and open accessibility of the new online resources, private information vendors will eventually take charge of the untapped research, and impose all the constraints, restrictions, and marketing mentality that characterize existing online resources. The technology for new online movement is ready, and the need for the information is real. If librarians and researchers fail to pursue the development of these new resources, we will once again be left out of the design and decision-making phases of a vital new field of information delivery just as we were with integrated systems, educational television, CD-ROM technology, and many others. This constitutes a major shift in research practice and in library management and professional librarian duties. But we're not changing our role as librarians; we're adding to it a very necessary dimension in response to the advent of new information technology and an epic effusion of research, inquiry, and discovery more insistent than that of the Reformation or the Renaissance. The library's job has traditionally been to acquire and organize information. In the modern research library, the job will also include producing information and providing access to it electronically. In this new task of librarianship, the lines between librarian, researcher, and publisher will become very flexible in order to capture information the university needs. In a way, the university, through the modern research library, becomes an online research publisher (LaGuardia). So while continuing to acquire information from the traditional sources, research libraries that are intent on promoting a research culture within their universities and institutions must exploit online technologies further. Working with subject specialists and computer scientists, librarians can create and manage these databases, and make them accessible online to researchers, students, and practicing professionals. By cooperating with businesses, professional associations, and other community organizations, libraries will enrich the content of their online resources and build support for their continued development. The Human Genome Database, the Birth Defects Encyclopedia Online, and other "knowledge bases," as they are sometimes called, can serve as models for any forward-looking library that aspires to the highest level of research service. The universities and research centers in Asia produce a unique body of knowledge that, if accessible on well-managed online databases in research libraries, would be of instant worldwide value. They can bring international recognition to their institutions, provide course enrichment to university curricula, disseminate information widely and quickly to practicing professionals, and assure more prudent and expeditious social and economic development for the general population. Soon, leading research libraries, and their universities, will be known for the quality of their local online databases as much as for the excellence of their book collections. Development of such databases is the most important work that research libraries have to undertake in the next decade, certainly more important than retrospective conversion, given the rapid rate at which new knowledge is being discovered.

Jacobson, Robert L. "Desktop Libraries." Chronicle of Higher Education 42.11 (10 Nov. 1995): A23-26.
"Researchers Temper Their Ambitions for Digital Libraries." Chronicle of Higher Education 42:13 (24 Nov. 1995): A19. Kaneshiro, Kellie N. "Birth Defects Encyclopedia Online (BDEO): A Knowledge Base." Medical Reference Services Quarterly 11.1 (Spring 1992): 17-30. LaGuardia, Cheryl. "Virtual Dreams Give Way to Digital Reality." Library Journal 120.16 (1 Oct. 1995): 42-44. Levin, Aaron. "The Log-On Library." Johns Hopkins Magazine 44.1 (Feb. 1992): 11-19. Ray, Ron. "Crucial Critics for the Information Age." Library Journal 118.6 (1 Apr. 1993): 46-49. Watkins, Beverly. "Many Campuses Start Building Tomorrow's Electronic Library." Chronicle of Higher Education 39.2 (2 Sept. 1992): A19-A21.
https://origin-archive.ifla.org/IV/ifla62/62-webt.htm
Close your eyes and acutely listen to the sounds around you, and you'll find you're able not only to accurately place the location of sounds in space, but their motion. Imagine then that, strangely, you suddenly became unable to distinguish the motion of sounds, even while you retained the ability to pinpoint their location. That's exactly the experience of a patient reported by Christine Ducommun and her colleagues, who used studies of the patient to demonstrate conclusively for the first time that the brain has a specialized region for processing sound motion. While it was known that the visual system has a specialized region for perceiving motion, it wasn't known whether the auditory system has such a region – or whether sound location and motion are processed by the same circuitry. Previous studies of the capabilities of brain-damaged patients had found only that both their location and motion processing abilities were impaired, and animal and human neuroimaging studies had not been able to conclusively tease apart the two abilities. Ducommun and her colleagues discovered the region by studying a woman who was to be operated on to alleviate intractable temporal lobe epilepsy. The operation would involve the removal of the affected regions of the right anterior temporal lobe and the right posterior superior temporal gyrus (STG). Heidi Hardman | EurekAlert! Further information: http://www.cell.com
http://www.innovations-report.com/html/reports/life-sciences/report-33616.html
Researchers discover unique material design for brain-like computations

Over the past few decades, computers have seen dramatic progress in processing power; however, even the most advanced computers are relatively rudimentary in comparison with the complexities and capabilities of the human brain. Researchers at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory say this may be changing as they endeavor to design computers inspired by the human brain’s neural structure. As part of a collaboration with Lehigh University, Army researchers have identified a design strategy for the development of neuromorphic materials. “Neuromorphic materials is a name given to the material categories or combination of materials that provide both computing and memory capabilities in devices,” said Dr. Sina Najmaei, a research scientist and electrical engineer with the laboratory. Najmaei and his colleagues published a paper, Dynamically reconfigurable electronic and phononic properties in intercalated Hafnium Disulfide (HfS2), in the May 2020 issue of Materials Today. The neuromorphic computing concept is an in-memory solution that promises orders-of-magnitude reductions in power consumption over conventional transistors, and is suitable for complex data classification and processing. The limited power efficiency in conventional transistors is a fundamental technology shortcoming impeding future progress in computing. Neuromorphic materials research conducted over the past 10 years has focused on understanding the unique properties of 2-D materials and their van der Waals multilayered structures. “The findings show great promise for these materials in electronic applications, but also show the unique interfaces in these materials provide an unprecedented opportunity for design of material properties,” Najmaei said. Over the past four years, the team conducted an effort focused on the design of material properties for high-performance electronic applications. “Our research led to our Materials Today paper, which expands this effort to design of reconfigurable properties in these materials based on van der Waals/organometallic hybrid systems and neuromorphic material design,” Najmaei said. Neuromorphic computing processes information using new models of computing similar to the brain’s cognitive processes. “In order to process and make rational inferences from the input information, a new paradigm of computing is needed,” Najmaei said. “Neuromorphic hardware with in-memory computing capabilities promises to bridge this ever-growing technology gap.” This research is an important stepping stone toward the development of in-memory computing in hybrid devices with unique functional properties for integration in cognitive sensory devices, and it overcomes significant technical challenges that impede a bottom-up approach for streamlining brain-inspired computing hardware, he said. If the researchers can ultimately develop a computer that can behave like the brain, it would be extremely useful to the warfighter, Najmaei said. Neuromorphic computing, like a neural system, would offer computing capability complete with perks, such as robustness to damage, ability to learn, adaptability to change and others. It would have the potential to reduce operational power by a factor of 1,000 to 1 million in comparison to today’s computing paradigms. This level of processing would be highly desirable for image recognition in autonomous systems, and for artificial intelligence in general.
Given the significance of AI and autonomous systems in modern day warfare, neuromorphic computing may very well be a cornerstone for a wide range of future leap-ahead warfighting capabilities, Najmaei said.
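As a rough illustration of the computing model the article describes, in which simple units integrate inputs over time and the "memory" (the synaptic weights) sits next to the computation, here is a minimal leaky integrate-and-fire neuron in Python. This is a textbook abstraction, not a model of the HfS2 devices in the paper, and all parameters are illustrative.

```python
import numpy as np

def lif_neuron(spike_trains, weights, dt=1.0, tau=10.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron, a textbook unit of neuromorphic
    computing: the membrane potential integrates weighted input spikes,
    leaks toward rest, and fires (then resets) on crossing threshold.
    Parameters are illustrative, unrelated to the HfS2 devices."""
    v, out = 0.0, []
    for x in spike_trains:                     # x: 0/1 spikes on each input line
        v += dt * (-v / tau + float(np.dot(weights, x)))
        if v >= v_thresh:
            out.append(1)
            v = 0.0                            # fire and reset
        else:
            out.append(0)
    return out

rng = np.random.default_rng(1)
inputs = rng.integers(0, 2, size=(100, 4))     # 100 time steps, 4 input lines
weights = np.array([0.3, 0.2, 0.4, 0.1])       # synaptic weights held "in memory"
print(sum(lif_neuron(inputs, weights)))        # number of output spikes
```

In neuromorphic hardware the weight array would be stored physically in the device itself rather than fetched from separate memory, which is where the large power savings the article cites are expected to come from.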
https://innovationtoronto.com/2020/06/a-design-strategy-for-the-development-of-neuromorphic-materials/
The way educational researchers share their research is changing. Across the world, interest is growing in encouraging researchers to make their research data openly available for use by other scholars or interested parties.

Julie McLeod, Kate O’Connor and Nicole Davis. This was originally published on AARE’s Education Matters blog on 1 October 2018.

This is linked to seeing the potential significance research data may have beyond their original use. Governments, keen to maximise return on investment, want to support researchers to make their data available for others to use and move away from storing it in inaccessible repositories. One prevailing view is that if research has been publicly funded, it should be freely accessible: anyone who could use the data should have access to it because it is then more likely to generate further benefit or knowledge. So, in the educational research world, funding organisations are beginning to introduce new requirements for data sharing. Institutions are looking at new ways of storing and managing the data produced by their researchers. At the same time, there is much interest in experimenting methodologically with data sharing and in attending to the innovations and challenges associated with digitised data and digital worlds. These experiments in data sharing take different forms. Data sharing might mean completely open access in one context – accessible to all with no restrictions – or it might mean access to data is mediated by the lead researcher or repository staff. Data sharing repositories might also be used by researchers to store data in a way that would leave open the possibility for access to the data to be made available after a certain period has passed. As we see it, there are pressing reasons for the educational research community to engage more fully with these developments and to critically consider both the possibilities and problems they raise, particularly but not only in relation to qualitative research. In qualitative research, the sample size is usually small and the research is typically aimed at understanding motivations or gathering insights into subjective experience, ideas and opinions. In this type of research, context (who is involved and where) is vital. Educational and social sciences researchers more broadly may have concerns arising from opening up access to and sharing their qualitative research data. These include the potential for re-use or application of the data out of context, or the possible identification of research participants who might not have given consent for their data to be accessible by other researchers.

The pathway to data sharing in Australia

Internationally, new policies point to a changing context in how research data is managed and maintained. Since the OECD first published its OECD Principles and Guidelines for Access to Research Data from Public Funding in 2007, OECD nations have moved to promote increased access to data generated through publicly funded research. Although early efforts to promote data sharing following this report were stymied, there are strong signs that data sharing is back on the global research agenda in recent years. In 2016, the Higher Education Funding Council for England, Research Councils UK, Universities UK, and the Wellcome Trust jointly authored a Concordat on Open Research Data, which proposed a series of principles for working with research data.
The first of these principles defined open access to research data as ‘an enabler of high quality research’ and included the direction that ‘researchers will, wherever possible, make their research data open and usable within a short and well-defined period’. That same year, the European Commission Directorate-General for Research and Innovation published Guidelines on FAIR Data Management in Horizon 2020 as part of the H2020 Program. These guidelines emphasise that data should be FAIR, meaning Findable, Accessible, Interoperable and Reusable. They require all projects participating in an extended Open Research Data pilot to develop a Data Management Plan and seek to make open access the default setting for research data generated as part of the Horizon 2020 Program. Similar developments are also evident within Australia. There are signs the Australian Research Council (ARC) may move towards requiring open access of data generated from ARC-funded research. The ARC’s Research Data Management Strategy currently states: The ARC is committed to maximising the benefits from ARC-funded research, including by ensuring greater access to research data. Since 2007, the ARC has encouraged researchers to deposit data arising from research projects in publicly accessible repositories. The ARC’s position reflects an increased focus in Australian and international research policy and practice on open access to data generated through publicly funded research. This policy is also reflected in the ARC’s current funding rules for its Discovery and Linkage Programs, which both include the statement: The ARC strongly encourages the depositing of data arising from a Project in an appropriate publicly accessible subject and/or institutional repository. Participants must outline briefly in their Proposal how they plan to manage research data arising from a Project. The recently updated Australian Code for the Responsible Conduct of Research also includes a new section titled ‘Sharing of data or information’, which states: it is common for researchers to ‘bank’ their data or information for possible use in future research projects or to otherwise share it with other researchers. It is also increasingly common for funding agencies to require the sharing of research data either via open access arrangements or via forms of mediated access controlled by licenses. To this end, data or information may be deposited in an open or mediated access repository or data warehouse, similar to an archive or library, and aggregated over time. Archived data or information can then be made available for later analysis, unless access is constrained by restrictions imposed by the depositor/s, the original data custodian/s or the ethics review body. The new code includes detailed stipulations around the management of data and data sharing and the ethical practices required to do such work well. Additionally, while the ARC advise that the current preference is to encourage compliance of institutions and academics, expectation and scrutiny of this in final reports is becoming more stringent and there are signs that open access to research data could become more strictly enforced in the near future. If such changes to requirements do occur, it is unclear whether researchers, university repositories (infrastructure, resources), and research cultures and practices will be equipped to meet them.
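In practice, making data ‘FAIR’ largely comes down to attaching structured, machine-readable metadata to a deposited dataset. The Python sketch below shows one hypothetical record; the field names and values are illustrative and are not a schema mandated by the FAIR guidelines or the ARC.

```python
# A minimal, machine-readable dataset record in the spirit of FAIR:
# a persistent identifier makes it Findable, an explicit access policy
# makes it Accessible, standard formats and vocabularies make it
# Interoperable, and licence plus provenance make it Reusable.
# All field names and values are illustrative, not a mandated schema.
dataset_record = {
    "identifier": "doi:10.0000/example-qual-dataset",  # hypothetical DOI
    "title": "Interview transcripts, school transitions study",
    "access": "mediated",                              # open | mediated | closed
    "access_conditions": "Approved researchers; consent for re-use on file",
    "format": "text/plain",
    "keywords": ["education", "qualitative", "interviews"],
    "license": "CC-BY-NC-4.0",
    "provenance": "Collected 2017; de-identified before deposit",
}
```

Note that a record like this can mark access as "mediated" rather than fully open, which matters for the qualitative research concerns discussed below.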
Possibilities and challenges of data sharing in qualitative research

Data sharing opens up possibilities for greater transparency in the practices, methods, and outcomes of educational research and has the potential to enhance rigour and impact. However, data sharing and re-use also gives rise to specific ethical and epistemological challenges. Well-documented challenges for sharing qualitative research data include:
- managing the re-use of qualitative research materials without compromising the specificity of the context in which they were produced;
- the creation of appropriate materials to guide re-analysis of archived qualitative datasets;
- transparency and care in obtaining participant consent for archiving and re-use of research materials at the time of data collection as well as subsequently;
- the ethical and practical protocols governing the management of access to archival repositories;
- identifying appropriate ways to mitigate perceived and/or actual risk;
- responding productively and practically to the ethical and methodological dilemmas posed by making more widely available research materials that have been generated by research teams or individuals; and
- distinguishing between data sharing as simply policy compliance and as an opportunity for creative methodological innovation.

It is timely for educational researchers to actively engage with and address these issues. First, to ensure future policy dictates are not insensitive to the needs and nature of both the field and the subjects of its research. Second, to foster ethical, methodologically sound and creative approaches to archiving and data sharing. Such practices could have the potential to develop new avenues for research and allow it to reach wider audiences, thereby enhancing the impact and reach of such work. Policies within Australia aimed at encouraging data sharing typically do not adequately attend to the distinctive issues data sharing raises for qualitative research practices. At the same time, more robust engagement among Australian qualitative researchers with these matters will likely open up new possibilities for the educational research community to more effectively influence the direction of policy and knowledge-making practices in this area.

Our workshop on data sharing in qualitative research

With this in mind, we recently facilitated a two-day workshop on Open Access, Data Sharing and Archiving of Qualitative Research that aimed to promote critical dialogue on these agendas, specifically addressing the affordances and challenges they present for the sociology of education. This workshop, supported by an AARE competitive grant, sought to canvas ideas and dilemmas across a number of intersecting but also distinct strands of work – spanning policy, research and cultural sectors – that are part of the changing context in which we conduct and communicate our research.

Possibilities for a gateway website

Following this workshop, a smaller group met to consider how we might further develop collaborations, demonstrator projects and a community of research practice in this area. Here, we discussed processes, exemplars and practices for digital archiving of qualitative projects along with possibilities for building a research community to support a web-based gateway or portal that could showcase programs and projects in the sociology of youth and education – tentatively titled Studies of Childhood, Education and Youth (SOCEY).
This gateway website could link to individual project websites, including those that might be designed for archiving of their research materials as well as others that might have a web presence with publications available. The aim would be to enhance the profile, reach and communication of individual projects and to show the scale and scope of the field more broadly. Such a website might also facilitate the preparation of research or educational materials for different audiences and communities – schools, participant or advocacy groups. Over the coming months, work will continue on the site, supported by a core working group drawn from researchers Australia-wide to assist in its development. Keeping an eye on the creative, ethical, practical, methodological and regulatory dimensions of the cluster of activity occurring in data archiving and sharing is part of the challenge. We welcome any feedback, ideas or expressions of interest in this program of work. Please get in touch directly or leave a comment.
https://www.socey.net/2019/05/16/data-sharing-and-the-challenges-facing-educational-researchers/
To meet a growing complexity of threats and challenges, the Principal Associate Directorate for Science, Technology and Engineering (PADSTE) maintains broad and deep science, technology and engineering capabilities throughout its science organizations. These capabilities are leveraged to meet the Laboratory’s diverse national security missions. Integrating and applying these capabilities rapidly and effectively to meet new challenges—across the Laboratory and with our strategic partners—is a key advantage for the nation in an increasingly urgent security landscape. The people in PADSTE organizations are the key resource for Los Alamos, bringing together the advanced skills and expertise that make up our multidisciplinary capabilities. By strategically applying these scientific and technological capabilities, the Laboratory continues to resolve complex challenges facing the nation. PADSTE Science Organizations Chemistry, Life & Earth Sciences (ADCLES) Through its Chemistry, Bioscience and Earth & Environmental Sciences divisions, the CLES Directorate is home to a wide diversity of capability-focused groups whose work spans the Laboratory’s major mission areas of stockpile stewardship, global security and energy security. Engineering Sciences (ADE) The Engineering Sciences Directorate is the parent organization for engineering and fabrication services within the Laboratory. ADE maintains, develops and deploys all R&D engineering and fabrication resources to LANL technical programs. Experimental Physical Sciences (ADEPS) The EPS Directorate’s research experiments stimulate solutions for some of the most intractable national security problems, including nuclear weapons stewardship, homeland security, intelligence & information analysis and nuclear & alternative energy. The directorate's research and experimental facilities span a broad range of complex scientific areas. We often combine forces with colleagues in theoretical physics, high-performance computing, and computational science to formulate experiments to validate theories and computational models. Theory, Simulation & Computation (ADTSC) The TSC Directorate applies science-based prediction to nuclear weapon stewardship, the reliable replacement warhead and beyond. The Directorate supports all other current and emerging national security missions, such as threat reduction, intelligence, homeland security, energy, infrastructure, nano science and technology and biological science, as well as most science frontiers from biophysics to astrophysics. In today’s dynamic national security and technical environment, strategic partnerships and collaborations allow PADSTE to cost-effectively leverage our research to accomplish programmatic goals. Partnering with academic institutions, industry and other laboratories, we leverage our science and technology for the nation’s benefit. Applied Energy Programs With energy use increasing across the nation and the world, Los Alamos is using its world-class scientific capabilities to enhance national energy security by developing energy sources with limited environmental impact and by improving the efficiency and reliability of the energy infrastructure. Civilian Nuclear Energy Office The Civilian Nuclear Programs Office is the focal point for nuclear energy research and development and next-generation repository science at Los Alamos National Laboratory.
The Civilian Nuclear Programs Office manages projects funded by the Department of Energy's offices of Nuclear Energy, Environmental Management and the Nuclear Regulatory Commission. Laboratory Directed Research and Development (LDRD) The LDRD Program is a prestigious source of research and development (R&D) funding awarded through a rigorous and highly competitive peer-review process. As the sole source of discretionary R&D funding at the Laboratory, LDRD resources are carefully invested in high-risk, potentially high-payoff activities that build technical capabilities and explore ways to meet future mission needs. National Security Education Center (NSEC) NSEC focuses on collaborations for education and strategic research and student opportunities. Its strategic centers, institutes, and education programs are chartered to foster high-quality, multi-program and multi-division research efforts, specialized recruiting and strategy development. Office of Science Through the DOE Office of Science, Los Alamos conducts long-term, national-security-inspired, fundamental science. These often high-risk and high-payoff efforts enable remarkable discoveries and tools that transform our understanding of energy and matter and advance the national, economic and energy security of the United States. Science Resource Office The Laboratory's Science Resource Office (SRO) programs are the recognized model for driving scientific and technical excellence. SRO builds and maintains a suite of high-quality functions that enable the R&D mission of the Laboratory. SRO's major activities include:
- leading the Lab-wide effort to conduct annual science and technology assessments required by DOE for management of Los Alamos
- managing the Research Library, one of the premier digital libraries in the world, providing state-of-the-art information technology tools to our research community and developing innovative web technologies to further information availability, accessibility and management
- publishing 1663 magazine, which highlights the Lab's forefront research and programmatic activities
- providing support and advocacy for Los Alamos' foreign national community, a critical source of world-class researchers furthering the Lab mission.
In addition, SRO facilitates a broad range of multi-faceted activities that promote the science and technology missions of Los Alamos. The Laboratory has established the Science Pillars under four main themes to bring together the Laboratory's diverse array of scientific capabilities and expertise. The Science Pillar concept is the primary tool the Laboratory uses to manage our multidisciplinary scientific capabilities and activities. Information, Science & Technology for Prediction (IS&T) The IS&T Pillar is developing IS&T capabilities in computational co-design, data science at scale and complex networks. We are leveraging advances in theory, algorithms and the exponential growth of high-performance computing to accelerate the integrative and predictive capability of the scientific method. Science of Signatures (SoS) The SoS Pillar is revolutionizing measurements for threat-specific signatures, discovering signatures that identify and characterize threats, and deploying advanced technology. We are applying science and technology to intransigent problems of system identification and characterization in areas of global security, nuclear defense, energy and health.
Materials for the Future The Materials Pillar will allow us to predict and create new materials functionality in previously inaccessible extremes. In materials science, we are optimizing materials for national security applications by predicting and controlling their performance and functionality through discovery science and engineering. Nuclear and Particle Futures (NPF) In the NPF Pillar, we are integrating nuclear experiments, theory and simulation to understand and engineer complex nuclear phenomena. The NPF Pillar focuses on fundamental advancements in:
- Nuclear and particle science, astrophysics, and cosmology
- Applied nuclear science and engineering
- High energy density plasmas and fluids
- Accelerators and electrodynamics
Our signature facilities are a critical component for maintaining the vitality and leadership of our science, technology and engineering capabilities for national security. These facilities support the transformative solutions realized by our employees’ skills and expertise in research and development.
- Dual Axis Radiographic Hydrodynamic Test Facility (DARHT)
- Electron Microscopy Lab
- Ion Beam Materials Lab
- Isotope Production Facility
- Lujan Center
- Matter-Radiation Interactions in Extremes (MaRIE)
- Proton Radiography
- Trident Laser Facility
- Ultracold Neutrons
- Weapons Neutron Research Facility
Capability reviews Each year, Los Alamos National Laboratory organizes numerous external reviews, called capability reviews, to measure and continuously improve the quality of its science, technology and engineering (STE). To learn more, go to the Capability Review webpage.
http://lanl.gov/org/padste/index.php
OVERVIEW : Wiksate is a group of passionate and committed educationists, technologists and strategists who have come together to make a difference in the way social and informal learning can be captured, analysed, credited and showcased. NPTEL OVERVIEW : The National Programme on Technology Enhanced Learning (NPTEL), a project funded by the Ministry of Human Resource Development (MHRD), provides e-learning through online web and video courses in Engineering, Sciences, Technology, Management and Humanities. It is a joint initiative by seven IITs and IISc Bangalore. Other selected premier institutions also act as Associate Partner Institutions. NPTEL is a curriculum-building exercise directed towards providing learning materials in science and engineering that adhere to the syllabi of the All India Council for Technical Education and the slightly modified curricula of major affiliating universities. It has developed curriculum-based video courses and web-based e-courses targeting students and faculty of institutions offering UG engineering programs. National Digital Library of India OVERVIEW : The National Digital Library of India (NDLI) is a virtual repository of learning resources that offers more than just search and browse capabilities to the learner community. The Ministry of Education, Government of India, sponsors and mentors it through its National Mission on Education through Information and Communication Technology (NMEICT). Filtered and federated searching are used to facilitate focused searching so that learners can find the right resource with the least amount of effort and in the shortest amount of time. NDLI offers services tailored to specific user groups, such as Examination Preparatory for high school and college students and job seekers. Services are also available for researchers and general learners. The NDLI is intended to hold content in any language and to provide interface support for the ten most commonly used Indian languages. It is designed to support all academic levels, including researchers and life-long learners, all disciplines, all popular types of access devices, and differently-abled students. It is intended to allow people to learn and prepare from best practices around the world, and to enable researchers to conduct inter-linked exploration across multiple sources. It is created, run, and maintained by the Indian Institute of Technology Kharagpur. Virtual Laboratory OVERVIEW : The Virtual Labs project provides remote access to labs in various disciplines of science and engineering. These virtual labs cater to students at the undergraduate and postgraduate levels as well as to research scholars. They aim to enthuse students to conduct experiments by arousing their curiosity, helping them learn basic and advanced concepts through remote experimentation, and to share costly equipment and resources that are otherwise available to a limited number of users due to constraints of time and geographical distance.
https://slrtce.in/library/free-online-resources
Abstract: A dozen University of Western Ontario research projects, including ensuring innovative research on Ontario's archaeological heritage and advancing wind research, have received $19-million from the Ontario Research Fund. This latest funding will support more than 250 world-class researchers at Western, helping attract and retain top researchers who will strengthen the province's competitiveness in the global innovation-driven economy. "We are extremely proud of our researchers, and grateful to the Province of Ontario for its continued support of advanced research through the ORF," says Western President Amit Chakma. "Innovative discoveries made by researchers across the disciplines are resulting in new knowledge that improves health, social and economic welfare throughout the province." Funding world-class research is part of Ontario's plan to build an innovation economy that turns new knowledge into new jobs, better healthcare, a cleaner environment and endless possibilities for Ontario families, says London West MPP Chris Bentley. "We are recognizing the work that our researchers do and the wealth and jobs they create in London," says Bentley. "Today's investment will support the work of more than 250 London researchers. New discoveries will continue to be made and we want those people, those ideas and those jobs right here in our community." This investment at Western is part of a broader $268-million province-wide investment that will support 214 projects and more than 3,300 researchers in 14 cities - creating and preserving more than 1,300 construction jobs over the next four years across the province. Projects receiving funding at Western include (but are not limited to): Nanobeam Materials Analyser for Probing Planetary Evolution and Resources (NanoMAPPER) Learning more about the evolution of the planet Lead Researcher: Desmond Moser Provincial Funding: $310,051 Researchers Affected: 17 In early 2008, an international research team led by University of Western Ontario earth science professor Desmond Moser made a startling discovery when they unearthed three-billion-year-old microcrystals in northern Ontario. The researchers found that the crystals are not only resistant to change; they grew incrementally over 200 million years, preserving records of their movements through and around the planet during the formation of early North America. They are providing new information about planetary evolution and the processes that formed Earth's continents, and resources such as gold and diamonds. With a new electron microscope and analyzer, Moser will conduct more advanced micro- and nanomineral research that will improve our knowledge of planetary evolution. It will also have important applications for industry, including mining and advanced manufacturing. Using Advanced Light Sources to Better Understand Nanostructures Advancing nanotechnology Lead Researcher: T.K. Sham Provincial Funding: $1,052,286 Researchers Affected: 30 Nanotechnology holds the promise of transforming virtually every high-tech industry, from advanced manufacturing to life sciences to information technology. Nano-size semiconductors will lead to smaller, faster, less expensive computers. Nanomagnetic materials will increase data storage capabilities. Materials with nanofibres will be lighter and stronger. But realizing the potential of nanotechnology requires an understanding of the scientific properties of the materials, having a means of preparing them and the tools to assemble them.
At The University of Western Ontario, Tsun-Kong Sham is using advanced light sources to examine the chemistry of nanostructures - research that will lead to the creation of innovative new devices. #### About University of Western Ontario Western is committed to its mission of providing the best student experience among Canada's leading research-intensive universities. A vibrant centre of learning, Western is home to approximately 3,500 full-time faculty and staff members and approximately 30,000 undergraduate and graduate students. Through its 12 Faculties, and three affiliated Colleges, the University offers more than 400 different majors, minors and specializations. Research is an integral part of the University's mission and external support for research projects exceeds $200 million per year. Western is located on 155 hectares of land along the banks of the Thames River in London, Ontario - a thriving city of 432,451 people, 200 kilometres west of Toronto.
http://www.nanotech-now.com/news.cgi?story_id=35991
The Energy Department is awarding over $35.5 million through its Nuclear Energy University Programs (NEUP) to support 48 university-led nuclear energy research and development projects to develop innovative technologies and solutions. These projects will be led by 31 U.S. universities in 24 states. A complete list of R&D projects with their associated abstracts is available below. Researchers will develop an interactive energy evaluation tool that puts the general public in the role of an energy manager to engage them in understanding the impact of choices made in different nuclear fuel cycles, as well as several other energy cycles (e.g., coal, solar, wind, hydro, biomass). The tool will utilize the DOE Nuclear Fuel Cycle Options Catalog to emphasize different viable fuel cycle options. Researchers will develop a web-based visualization tool for comparing current and future nuclear fuel cycle options to other low-carbon and conventional energy technologies in the United States. The ALSEP process for group separations of trivalent actinide/lanthanide ions suffers from slow stripping kinetics. Researchers will perform detailed kinetic studies at different temperatures, supported by spectroscopic and computational studies to establish where the kinetic bottlenecks occur in the ALSEP process. They will then systematically alter the chemistry of the extractants and aqueous ligands to understand how to overcome the kinetic hindrances. Researchers will improve understanding of the thermal-mechanical-hydrologic-chemical (TMHC) coupling effect on reconsolidated granular (or crushed) salt-clay mixtures used for seal systems of shafts and drifts in salt repositories. The project will integrate laboratory experiments and multiscale pore-to-continuum coupling simulations to explore the feasibility of using clay additive and moisture content to enhance performance of crushed salt as a seal material. Researchers will separate Am from high-level waste by exploiting the high oxidation states of Am through design and testing of new materials. This will be accomplished by designing and testing new electrodes, derivatized with surface-stabilized ligands and redox mediators to facilitate oxidation of trivalent Am, and by designing porous sorbent materials functionalized with molecules which selectively coordinate Am(VI) to enable its subsequent separation. Researchers will characterize steam attack, hydrothermal corrosion and radiation swelling of SiC/SiC composite-based accident-tolerant fuel (ATF) using a combination of experiments, microstructure evaluation and phase-field simulations using MARMOT. Central to the project is the mapping of the microstructure after steam/hydrothermal/irradiation tests through a unique non-destructive x-ray microscopy technique followed by phase-field simulations of chemical transport in MARMOT. Researchers will develop alloying agents to stabilize lanthanides against fuel cladding chemical interactions (FCCIs). Based on the available binary phase diagrams for the periodic table, Te and Sb are selected as fuel dopants. The research will focus on Te. The mechanisms of using minor additives to stabilize and immobilize the lanthanide fission products will be investigated. Resulting thermodynamic data obtained by the proposed research will be integrated into MARMOT to enhance its ability to model metallic fuel performance.
Researchers will use carefully controlled experiments with modeling and simulation in the MARMOT tool to better understand the behavior of uranium silicide (U-Si) fuel in Light Water Reactors (LWRs). The team will study grain growth, amorphization, and grain subdivision behaviors in U3Si2 fuel in LWR conditions. Researchers will use a science-based approach to capture the connections between U and UZr alloys’ microstructure, thermal properties, and mechanical properties through closely coordinated experiments and modeling efforts from both unirradiated and irradiated fuels. Researchers will develop oxidation/corrosion resistant uranium silicide U3Si2 fuel by chemical doping/fillers to form a continuous borosilicate glass as a protective oxide layer with transformational fuel performance and accident tolerance. The team will use chemical dopants/fillers, including B or C, which have shown effectiveness in improving high-temperature oxidation resistance in transition metal silicides, as well as boron/silicon-containing compounds that can form a protective borosilicate glass layer. This project is focused on understanding the mechanisms of aging processes on silver-exchanged mordenite and silver-functionalized silica aerogel adsorbents under conditions of long term exposure to off gas streams containing air, I2, H2O, and NOx. The chemical and physical structural changes of the adsorbents under these conditions will be accounted for in equilibrium and kinetic models. Transport models will be developed to predict the performance of engineered off-gas treatment systems. Researchers will show that gamma-ray spectroscopy based on emerging microcalorimeter sensors can determine elemental and isotopic fractions with accuracy comparable to much slower mass spectrometry and with far better accuracy than germanium sensors. This advance will prevent the diversion of nuclear material by enabling nearly real time process monitoring in large nuclear facilities without loss of measurement accuracy. The project aims to use a science-based approach to set guidelines for selecting dopants for developing fuel cladding chemical interaction (FCCI)-resistant metallic fuel systems for fast reactors. Lanthanide fission products migrate to the fuel-cladding gap, leading to cladding breaches. The team will perform both theoretical modeling and experiments to arrive at guidelines that are based on sound science. If successful, this can lead to breakthroughs in minimizing the FCCI effect. Researchers will use unique capabilities to modulate the electrodeposition of actinides (depleted Uranium) and fission products (Ln) during nucleation and growth stages. The team aims to preclude formation of electrodeposits with dendritic morphology. Morphology of electrodeposits will be controlled by electrolyte composition. The research team will address two major issues in the pyroprocess-based fuel cycle: low process efficiency and materials accounting for non-proliferation. There are currently no analytical techniques to ascertain the speciation and concentration of the actinides in the electrolyte. This project aims to develop an in-situ analytical method consisting of Raman spectroscopic analysis during electrochemical polarization.
Researchers will develop phase equilibria and thermochemical information to model and simulate advanced fuel behavior, including critical phenomena such as the contribution to fuel swelling of dissolved non-noble fission products and secondary phase formation, composite fuel stability, and fuel-clad chemical interactions. Modeling of the nuclear fuel cycle (NFC) is often posed as a set of demands coupled with available technologies. Demands may be singular or multivariate. Additionally, available technologies may have constraints on when they are deployable. This proposal aims to bring demand and deployment decisions into the NFC simulator itself, thereby simulating a more realistic process by which utilities, governments, and other stakeholders actually make facility deployment decisions. The SSPM Echem tracks elemental and bulk material flows through batch operations in electrochemical facilities. This project will augment the existing SSPM Echem model with models of DA, NDA, and process monitoring measurements, including appropriate accuracy, noise, and uncertainty characteristics. Diverse measurement and assay results will be fused to improve material accountancy and anomaly detection across the electrochemical process. The primary objective of the project is the development of a new glass-bonded waste form for the suitable disposal of electrorefiner waste salt. To achieve this goal, the proposed tasks will focus on 1) dechlorination of the electrorefiner salt and 2) subsequent encapsulation into a sintered waste form. Achieving these goals will result in a compact waste form with higher fission product loading, which is instrumental in economically closing the nuclear fuel cycle. Researchers will address the urgent need for a comprehensive and scalable visual analytics solution to facilitate analysis of simulation results. The team will develop a system that is cross-platform, supports advanced computational workflows for data analysis, and can be tailored to both advanced and novice users. Specifically, the research will include: 1) a distributed web-based visual analytics system, 2) high-dimensional parametric analysis of ensembles of simulation runs, and 3) an extensible fuel cycle metrics framework. The objective of this project is to employ the data archive on the Molten Salt Reactor Experiment (MSRE) and develop a set of high-quality Molten Salt Reactor (MSR) reactor physics benchmarks for inclusion in the IRPhE Handbook, which currently does not contain any benchmark set related to MSR technology. Researchers will develop and implement rigorous methods for generating multi-group transport cross sections and diffusion coefficients for deterministic reactor models using Monte Carlo calculations. The project will eliminate the numerous approximations that currently contribute to significant mis-prediction of core power distributions when using deterministic core models. Researchers will develop efficient transient transport methods for the PROTEUS-SN, PROTEUS-MOC and the first-order SN solvers of the NEAMS neutronics code PROTEUS. The team will also perform transient benchmark analyses. Researchers will conduct a coordinated experimental and computational effort to quantitatively map the full-field 3-D velocity and temperature fields in the interstitial spaces within pebble bed reactor systems.
Key outcomes will include advanced correlations to predict flow and thermal transport within the pebble bed over a much wider range of operating conditions and reactor types than is currently possible. Researchers will develop a methodology that includes the appropriate governing physical fields within the realm of peridynamics to investigate fuel cracking, fragmentation, relocation, ballooning, pellet-cladding interaction, and cladding rupture and dispersion within the MOOSE framework. Researchers will develop a comprehensive, high-resolution two-phase flow database by performing experiments in two air-water two-phase flow test facilities and a new heated rod-bundle test facility that will be built under this project. These data will be used to validate the two-phase flow models implemented in the NEK-2P code. This project addresses the need for research informing plant engineers and control room operators faced with cyber-security threats on appropriate responses. The approach taken uses distinguishing features of safety-related events versus cyber-security threats; attack modeling, prediction, game theory and PRA knowledge to update operators' procedures; and simulation for validation. Researchers will utilize both experimental and computational methods to test and model the transient behavior of a mock-up PCHE, which is a scaled-down representation of the sCO2 cycle high-temperature recuperator (HTR). In addition, the team will develop a methodology for quantification of the PCHEs' production cost. The proposed project will provide valuable experimental data required to develop the Section III, Division 5 and Section VIII evaluation approach for compact heat exchangers. Researchers will use Thermographic Imaging and Ultrasonic Doppler Velocimetry (UDV) techniques to generate high-fidelity thermal stratification and flow field data under various geometric and physical conditions for scaled models of outlet plena in sodium-cooled fast reactors (SFRs). Data from these experimental studies will be compared with those obtained using 1D codes such as SAS4A/SASSYS-1. Researchers will develop structural design methodologies for Type 316H stainless steel and Alloy 617 compact heat exchangers (CHX) using elastic-perfectly plastic (EPP) analysis methodology for the assessment of elevated temperature failure modes under sustained and cyclic thermal and pressure loading. The project will develop a technical basis for a Section III, Division 5 ASME Code Case for CHX in high temperature nuclear service. Researchers seek to advance EMP design efforts for liquid metal reactors by 1.) improving MHD modeling capability, particularly near the MHD stability criteria, 2.) advancing simulation accuracy and fidelity in the typical operating range of large electromagnetic pumps for liquid metal cooled reactors, and 3.) constructing a low-barrier EMP test loop for model validation and instruction that can be easily replicated at low cost at other facilities. Researchers will conduct eight new tests at the HTTF investigating a range of gas reactor events including DCC, PCC, and inlet plenum buoyant plume mixing. The data from these tests will be ideal for use in the validation of safety analysis codes used in gas reactor analysis. The data may also provide valuable insight into the validation of CFD codes for gas-reactor applications.
Researchers will develop a design method for rapid structural assessment of diffusion-bonded Hybrid Compact Heat Exchangers, used as secondary heat exchangers in coupling Sodium Fast Reactors (SFRs) with supercritical CO2 (sCO2) Brayton power cycles. The team envisions that the developed method will (1) be usable in the creation of an ASME code case for structural assessment of hybrid CHX; and (2) create design tools that ensure CHX code compliance under desired load conditions. Researchers will develop a mechanistic understanding of accelerated fretting, wear, and bonding between Alloy 800H and Inconel 617 surfaces. In order to achieve the objectives, a series of tribological experiments will be conducted in helium environments at elevated temperature with controlled concentrations of gaseous species, attendant microstructure characterization (using electron microscopy, spectroscopy and atom-probe tomography), and validated continuum models. Researchers will extend and enhance the experimental tests using existing water-cooled RCCS facilities. The activity will be conducted in close collaboration with the water-cooled NSTF research team and will produce a unique set of high-quality experimental data for support of existing system codes, system codes under development, and computational fluid dynamics (CFD) codes. Researchers will conduct novel 3D concrete creep tests under controlled environmental conditions to build a 3D constitutive model for concrete typical of existing containment structures. Decades-long creep will be extrapolated from shorter tests (years long). The material model will be used in the Grizzly structural FEM code for simulating the response of real, 3D containment structures to changes in loading. Full-scale, long-term structural tests will enable validation. Researchers will systematically evaluate the tribological response of 800H and 617 alloys in simulated HTGR/VHTR helium at relevant reactor operating temperatures (700-950°C). These studies will (1) provide the foundation for understanding the tribological performance of Ni-based alloys in the high temperature gas cooled reactor environment, paving the way for the forthcoming addition of Alloy 617, and (2) suggest solutions to mitigate tribological problems with these materials. Researchers will develop powerful codes for predicting environmentally-assisted localized corrosion in the primary coolant circuits (PCCs) of Generation II BWRs and PWRs. These codes and resulting models will be embedded into GRIZZLY. This will greatly extend the applicability of GRIZZLY to components in PCCs that are susceptible to pitting corrosion, stress corrosion cracking, and corrosion fatigue and will provide the stress intensity factor solutions for the crack growth rate models. Researchers will develop breakthrough efficiencies in segmented radioisotope thermal generator (RTG) power conversion systems through high temperature materials development. The approach will also reduce reliance on single-purpose supply chains. Researchers will aid in the development of RELAP-7 through required experimental and computational efforts. The validation of the two-phase modeling capability of RELAP-7 will be accomplished through a series of tasks which include synthesis of existing forced convective data, acquisition of natural circulation data from an existing well-scaled facility, and uncertainty quantification in constitutive modeling. Researchers will develop new thermal hydraulic models that will be implemented into RELAP-7.
Simulation studies will be performed for postulated ELAP scenarios in the PB Unit 2 reactor. The overarching goal of the proposed work is to perform a simulation study using RELAP-7 to model BWR ELAP scenarios with various mitigation measures and determine the range of time available for transition to portable FLEX equipment. The degradation of medium and low voltage cabling is often associated with aqueous immersion. Electrical and mechanical data will be collected to develop a mechanistic model of the degradation of cabling to dielectric breakdown based on the temperature, oxidation, water conditions, and irradiation dose. The hypothesized model begins with solvent escape paths that expand and advance from the exterior surface toward the current-carrying core at a rate dependent on the environment. Researchers will develop an online health monitoring system that integrates active and passive sensor networks and advanced signal processing algorithms to monitor ASR-induced degradation in concrete structures. The obtained information will support ongoing prognostic modeling research for health diagnosis and prognosis of aging concrete structures, in order to support long-term operational and maintenance decision making. Researchers will work to understand, quantify, and model the elevated-temperature tribological behavior of alloys 617 and 800H in helium gas under HTGR conditions. Impurities in helium can induce a variety of corrosion mechanisms in structural materials that can profoundly affect tribological behavior. This study will investigate tribological behavior in various impurity concentration regimes and surface treatments to mitigate tribological damage in these environments. This project is aimed at the improvement of EM pumps for advanced liquid metal cooled reactors, specifically the SFR. Initial tests will be conducted in the University of Wisconsin sodium loop on an existing moving magnet pump to help in the development of the models and to look at end effects associated with this pump. Additionally, tests will be conducted at ANL on the CMI ALIP. Ultimately, a small ALIP will be developed to test the principles and models developed, with the aim of improving commercial-scale designs. This project will look at very low Prandtl number heat transfer from both an experimental and computational approach. Researchers will then establish key models to implement in plant dynamics codes. The team will also evaluate thermal stratification in large volumes, making use of novel distributed fiber-optic measurements to get CFD-quality data. Lastly, the team will evaluate some of the key remaining safety issues associated with the eventual commercialization of the sodium fast reactor. Researchers will make recommendations for design features of salt-reactor components that would take advantage of the phenomena characteristic of fluoride salts in order to be resilient to overcooling, and to recover gracefully from over-cooling transients. In support of this task a MOOSE-based computational tool will be developed, capable of modeling liquid-solid phase change, backed by experimental studies with the prototypical (flibe) coolant and with simulant fluids. Researchers will generate computational fluid dynamics (CFD) validation data for forced, mixed, and natural convection through a parallel-path geometry relevant to gas reactor bypass flow. In addition, the team will independently build and validate CFD models based on the experimental results.
The team has the experience and existing infrastructure for sharing complete validation datasets. *Actual project funding will be established during the award negotiation phase.
https://neup.inl.gov/SitePages/FY16%20RandD%20Awards.aspx
Postdoc for Planetary Imaging Group
7 Mar 2018, 09:23 UTC
The Planetary Imaging Group of the Space Research and Planetology Division at the University of Bern has two positions, one Postdoc and one PhD student, available for conducting experimental research within the framework of the National Centre for Competence in Research (NCCR) PlanetS (http://nccr-planets.ch). The Planetary Imaging Group operates the Laboratory for Outflow Studies of Sublimating icy materials (LOSSy), developed to experiment with a variety of samples prepared as analogues for the surfaces of icy Solar System objects. Various properties of the samples are measured and then used to interpret remote-sensing datasets, either by direct comparison of measured data or by using lab data as inputs or ground-truth for testing physical models. Extensive work has recently been performed on cometary analogues in the context of the Rosetta mission to comet 67P/Churyumov-Gerasimenko. In the framework of the NCCR PlanetS, we are now looking at applying our knowledge and experience with comets in the direction of protoplanetary disk observations and planetary system formation theory. Regular interactions between the various groups working on these topics within the NCCR are key for that project. The experimental work will be conducted at the University of Bern, but both researchers will frequently interact with the various groups involved in the NCCR at the Universities of Bern, Zurich, ETH Zurich and the Observatory of Geneva. The ability to communicate and work with colleagues from a wide range of functional backgrounds (e.g. engineering, science, management, technical, non-technical, etc.) as part of a diverse international team is essential. The working language is English. A basic knowledge of German is desired but not required. Postdoc position: We are seeking a highly motivated individual with both a good knowledge of the physics of planetary system formation and solid experience with laboratory experiments. The successful candidate is expected to design and conduct experiments related to the formation, evolution and destruction of icy pebbles and planetesimals in early protoplanetary disks. The data measured in the laboratory should serve as inputs to models of planetary formation and as references for comparison with observations of protoplanetary disks over various spectral ranges. The candidate will also participate in the supervision of the PhD student hired on the same project. The ideal candidate will have a recent PhD (obtained less than two years before the starting date) in Physics/Astronomy and knowledge in the following areas and techniques: – physics (thermodynamics, dynamics) of protoplanetary disks and planetary formation scenarios (streaming instabilities, pebble accretion…) – design of laboratory experiments with dust and/or ice particles – various techniques of optical imaging / photometry / spectrometry – computer programming of laboratory equipment (cameras, sensors) – acquisition of laboratory measurements, storage, handling, reduction and analysis of large datasets Starting date: possible 1st of June 2018. Duration: up to three years depending on funding and performance.
To apply, please submit electronically the following documents: – Letter of motivation – Academic curriculum vitae (including a list of publications, a list of courses and a list of talks given) – Description of research interests and possible research agenda – Contact details of people who could provide a letter of reference to: Antoine Pommerol ([email protected]) Complete applications received by April 1st, 2018, will receive full consideration. After this date, applications will be considered depending on availability.
http://portaltotheuniverse.org/news/view/626557/
UC Riverside and University of Manchester researchers combine graphene and copper in hopes of shrinking electronics Researchers have discovered that creating a graphene-copper-graphene “sandwich” strongly enhances the heat conducting properties of copper, a discovery that could further help in the downscaling of electronics. The work was led by Alexander A. Balandin, a professor of electrical engineering at the Bourns College of Engineering at the University of California, Riverside and Konstantin S. Novoselov, a professor of physics at the University of Manchester in the United Kingdom. Balandin and Novoselov are corresponding authors for the paper just published in the journal Nano Letters. In 2010, Novoselov shared the Nobel Prize in Physics with Andre Geim for their discovery of graphene. In the experiments, the researchers found that adding a layer of graphene, a one-atom-thick material with highly desirable electrical, thermal and mechanical properties, on each side of a copper film increased heat conducting properties by up to 24 percent. “This enhancement of copper’s ability to conduct heat could become important in the development of hybrid copper-graphene interconnects for electronic chips that continue to get smaller and smaller,” said Balandin, who in 2013 was awarded the MRS Medal from the Materials Research Society for discovery of unusual heat conduction properties of graphene. Whether the heat conducting properties of copper would improve by layering it with graphene is an important question because copper is the material used for semiconductor interconnects in modern computer chips. Copper replaced aluminum because of its better electrical conductivity. Downscaling the size of transistors and interconnects and increasing the number of transistors on computer chips has put an enormous strain on copper’s interconnect performance, to the point where there is little room for further improvement. For that reason there is a strong motivation to develop hybrid interconnect structures that can better conduct electrical current and heat. In the experiments conducted by Balandin and the other researchers, they were surprised that the improvement in the thermal properties of graphene-coated copper films was significant despite the fact that graphene is only one atom thick. The puzzle was solved when they realized the improvement is the result of changes in copper’s nano- and microstructure, not of graphene acting as an additional heat conducting channel. After examining the grain sizes in copper before and after adding graphene, the researchers found that chemical vapor deposition of graphene conducted at high temperature stimulates grain size growth in copper films. The larger grain sizes in copper coated with graphene result in better heat conduction. Additionally, the researchers found that the heat conduction improvement from adding graphene was more pronounced in thinner copper films. This is significant because the enhancement should further improve as future copper interconnects scale down to the nanometer range, which is 1/1000th of the micrometer range.
https://innovationtoronto.com/2014/03/creating-graphenemetal-sandwich-improve-electronics/
What is the Efficient Market Hypothesis? The Efficient Market Hypothesis (EMH) states that financial markets are informationally efficient, which means that investors and traders will not be able to consistently make greater than market average returns. To put it simply, the EMH states that it is not possible to beat the market over the long run. Supporters of the theory hold that those who do in fact make more than average returns do so because they have access to inside information or alternatively have simply enjoyed a prolonged lucky streak. The modern form of the Efficient Market Hypothesis was developed by Professor Eugene Fama of the University of Chicago during the mid-1960s and was widely accepted within academia until the 1990s, when work in behavioural finance began to bring the hypothesis into question. Despite this, many academics and some in finance hold the efficient market hypothesis to be true to this day. Those who support the EMH typically lay out their claims in one of three main forms, with each form of the claim having slightly different implications.
- Weak Form Efficiency: In this formulation of the EMH, future market prices cannot be predicted by simply analysing past price performance. It is therefore impossible to beat the market in the long run by using investment or trading strategies which rely on historical data. While the use of technical analysis may not allow traders to beat the market in the long run, some forms of fundamental analysis may allow market participants to beat the market. This form of the hypothesis holds that future price movements are determined by information which is not contained in past and current market prices, essentially ruling out the use of technical analysis.
- Semi-Strong Form Efficiency: This formulation of the EMH goes quite a bit further than its Weak Form cousin. Holding that market prices rapidly adjust to any new and publicly available information, it rules out both technical and fundamental analysis. Only those with access to inside information would be able to beat the market in the long run.
- Strong Form Efficiency: Those who believe in the strongest form of the EMH believe that current market prices reflect all public and private information, meaning that no one can beat the market, even those with insider information. It might seem that this version of the efficient market hypothesis can be easily refuted, as there are a considerable number of money managers who have been able to beat the market year after year. Those who support this hard-line version of the EMH often respond by pointing out that, with the sheer number of people who actively trade the financial markets, you would expect some to get lucky and make impressive returns year after year.
Criticism of the EMH and Behavioural finance Investors and increasingly those in academia have been very critical of the Efficient Market Hypothesis, questioning the hypothesis on both theoretical and empirical grounds. Behavioural economists have pointed to numerous market inefficiencies, which can often be attributed to certain cognitive biases and predictable errors in human behaviour. The rise of algorithmic trading and quantitative finance hasn't necessarily rid the financial markets of such cognitive biases, with the 2008 financial crisis demonstrating how cognitive biases can work their way into complicated quantitative models.
In fact, some have gone as far as to suggest that the EMH was partly responsible for the 2007-2012 financial crisis, with the hypothesis causing financial and political leaders to have a “chronic underestimation of the dangers of asset bubbles breaking”. Much of the work of behavioural economists suggests that we have good reason to reject both the Strong and Semi-Strong versions of the efficient market hypothesis. Are the Forex markets an example of an efficient market? The majority of the research into the efficient market hypothesis has focused on stock markets, but a number of researchers have looked into whether the Forex markets are informationally efficient. A study published in 2008 by J.Nyugen of the University of Wollongong, looking at 19 years of data, found that it was possible to create trading rules which could deliver significant returns, indicating that the FX markets may be inefficient. Though the study went on to say that the trading rules only delivered significant returns during the first five-year period, suggesting that either the FX markets became more efficient during the time period or simply that the trading rules created by the study's authors broke down. Another 2008 study found that it was possible to predict movements in price using only statistical data; however, it would not have been profitable to trade the markets using the study's predictive model. These studies suggest that the FX markets are somewhat efficient, but they certainly don't demonstrate that it is impossible to consistently turn a profit trading Spot FX. Conclusion The Efficient Market Hypothesis (EMH) was extremely popular among those in academia during the late 20th century; however, many of those active in finance were never convinced by the EMH. During the 90's, the hypothesis began to lose credibility, with many behavioural economists beginning to seriously undermine the hypothesis. When it comes to the question of whether the Spot FX markets are efficient as defined by one of the forms of the EMH, there is simply not enough research to make any sort of conclusive statement. The data shows that it is possible to create rules and trading strategies which allow one to predict market movements with a significant degree of accuracy. The strategies in these studies struggled in regards to profitability, but this may only hint at the FX markets displaying some weak form efficiency under certain circumstances.
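To give a concrete sense of what testing for weak form efficiency can look like in practice, the sketch below checks whether daily returns are serially correlated, since weak form efficiency implies that past prices carry no information about future prices. This is a minimal illustration under stated assumptions: the price series is synthetic (a pure random walk standing in for real market data), and the 1.96/√N band is only an approximate white-noise benchmark, so a significant autocorrelation would be suggestive rather than proof of an exploitable inefficiency.

```python
import numpy as np

def autocorrelation(x: np.ndarray, lag: int) -> float:
    """Sample autocorrelation of a series at the given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Synthetic stand-in for real data: a random-walk log-price series,
# which is what weak form efficiency implies prices should resemble.
rng = np.random.default_rng(seed=42)
log_prices = np.cumsum(rng.normal(0.0, 0.01, size=2500))  # ~10 years, daily
returns = np.diff(log_prices)

band = 1.96 / np.sqrt(len(returns))  # approx. 95% band for white noise

for lag in (1, 5, 10):
    rho = autocorrelation(returns, lag)
    verdict = "significant" if abs(rho) > band else "not significant"
    print(f"lag {lag:2d}: autocorrelation {rho:+.4f} ({verdict})")
```

Running the same kind of check on real FX return series, as the studies above effectively did with more sophisticated trading rules, is one way researchers look for departures from weak form efficiency.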
http://thefxview.com/2014/03/01/efficient-market-hypothesis-forex/
Table of Contents: Introduction; Efficient Market Hypothesis; Behavioural Finance; Conclusion; References.

The essay discusses market efficiency and behavioural finance using real-world examples, and the management of funds using active and passive methods. The term fund management primarily relates to managing the funds of investors by investing and reinvesting in various financial securities to earn higher returns. In an actively managed fund, the manager takes decisions based on analysis in order to outperform the index, while in a passively managed fund, the manager simply invests in the index. Market efficiency is a hypothesis which was widely accepted in previous decades. It states that market prices incorporate all available, material information, and thus that there is no way to outperform the market. However, this hypothesis does not hold in the current financial scenario, and there are various examples where investors have beaten the market. Such investors, financial analysts and economists strongly believed, contrary to the efficient market thesis, that future stock prices are predictable. These financial thinkers emphasized behavioural and psychological elements in determining stock prices, observing that future stock prices can be predicted by analysing historical patterns in them. In 1970 the economist Eugene Fama developed a hypothesis about market efficiency which says that the investor can never outperform the market by exploiting market anomalies, because if any anomalies are present they are immediately arbitraged away (Hamid et al. 2017).

The Efficient Market Hypothesis states that markets are efficient. Market efficiency means that market prices incorporate all available, material information; in other words, all material information has already been incorporated into the prices of stocks. This is the reason there is no way possible to beat or outperform the market. The concept further adds that there is no such thing as an overvalued or undervalued financial security available in the market. Eugene F. Fama won the Nobel Prize in economics for his services in the field of finance. The efficient market hypothesis underpins the passive method of fund management. In passive fund management, analysts and investors do not try to outperform the market; following the theory, they simply try to replicate index funds or invest in the securities which are components of the index. According to financial analysts, the theory of efficient markets limits investors to earning average returns and does not allow expectations of above-average returns. As per Fama's theory, an efficient market is a place where large numbers of active investors try to maximise their profits and thus compete with other investors and the market. They try to predict the future values of individual securities. But since all material and real-time information is freely available to all market participants, individual stocks in aggregate perform exactly the same as the overall stock market. Fama's theory distinguishes three forms of market efficiency: weak, semi-strong and strong. In the practical (real) world, the efficient market hypothesis does not hold up well, as there are numerous cases and examples where the market prices of stocks have witnessed huge deviations from their fair values.
For example, legendary investors and fund managers like Warren Buffett of the USA and Kerr Neilson of Australia have outperformed the market consistently over long periods of time. Buffett generated returns of 1,826,000% after he took control of Berkshire Hathaway about 50 years ago. This corresponds to compounded annual gross returns (CAGR) of 21.6%: a return of 1,826,000% means the holding grew to roughly 18,261 times its initial value, and 18,261^(1/50) ≈ 1.22. During this period the S&P 500 index of the USA managed to generate returns at a CAGR of only 9.9%. Kerr Neilson of Platinum Asset Management was able to deliver approximately 13% on an annual basis, while the Australian benchmark index generated just 6.3% over the same period. Under the definition of the EMH, the above achievements of Buffett and Neilson should be impossible. Another major event that sits badly with the theory of the EMH is the stock market crash of 1987, when the Dow Jones Industrial Average (DJIA) of America dropped by more than 20 per cent in a single day.

Behavioural finance concerns the influence of psychology on the behaviour of traders, analysts, economists and investors in their decision making, especially in the finance industry and when making investments. It highlights the important fact that investors and traders are not always rational but are influenced by their own biases (Thaler and Ganser 2015). Investors interpret the available market information psychologically and emotionally when making investment decisions; they do not behave in a manner which is predictable, rational and unbiased. Behavioural finance holds that the behaviour of investors feeds into investment decisions in ways that lead to various market anomalies. Richard Thaler, a founding father of behavioural finance who made a substantial effort to establish the field as a legitimate part of classical finance, defines behavioural finance as "simply open-minded finance" (Thaler, 1993); Thaler (2015) tells a thrilling history of behavioural economics and finance from the author's own perspective.

Typically, the most common sources of irrational investor behaviour identified in behavioural finance are as follows. Investors may fail to properly weigh newly available information about a particular market or security: for example, it is a widespread belief that investing or trading in the share market is gambling or speculation, when in fact it is like any other traditional business, requiring market knowledge, understanding and analysis of the available information. Investors sometimes cling to the view that banking stocks are always good, and ignore new information about the involvement of a particular bank in a scam or fraud. Retail investors are particularly affected by herding bias: such investors have access to the most limited sources of information, receive it late, often lack the competence to analyse financial markets such as the stock market, and make their decisions based on others' influence or advice. Finally, investors who do not want to take higher risks or be exposed to a higher possibility of incurring losses usually invest in low-risk securities like bank deposits, fixed deposits, government bonds, etc.
Such securities deliver very low returns and sometimes cannot even beat inflation (Al-Khazali and Mirzaei 2017). Risk-loving investors, or those who want higher returns, can instead invest in securities like shares, and derivatives of stocks, commodities or currencies. Such securities expose investors to high levels of risk and allow them to earn higher rates of return.

As per the Efficient Market Hypothesis (EMH), financial markets are considered informationally efficient. That means information is freely available and everyone has access to it, so exploiting financial news is not possible. The EMH has generated debate around the two important concepts of availability and access, because not all investors have all the information available in the market, and not all investors have access to information in real time. Material information is broadcast through various channels including social media, news websites, analysts' blogs, radio, TV, etc. Such dissemination of information takes time: some people receive it early and others later. Investors also differ in their capacity to interpret the available information, and are not all fully competent to analyse and utilize it. These differences in the availability of material information and in access to it separate winners from losers, creating gains and losses in the financial industry. That is why behavioural finance holds that, in terms of access to and availability of information, financial markets are inefficient. This means there are anomalies present in the markets, and investors can exploit them to outperform the benchmark indexes. Based on this discussion, the fund can switch its strategy towards active portfolio management, analysing the stock market using methods of fundamental and technical analysis in order to forecast the future and exploit market anomalies.

References:
Al-Khazali, O. and Mirzaei, A. 2017. Stock market anomalies, market efficiency and the adaptive market hypothesis: Evidence from Islamic stock indices. Journal of International Financial Markets, Institutions and Money, 51, pp. 190-208.
Hamid, K., Suleman, M. T., Ali Shah, S. Z., Akash, I. and Shahid, R. 2017. Testing the weak form of efficient market hypothesis: Empirical evidence from Asia-Pacific markets. Available at SSRN 2912908.
Han, B. and Hirshleifer, D. A. 2015. Self-enhancing transmission bias and active investing. Available at SSRN 2032697.
Hirshleifer, D. 2015. Behavioral finance. Annual Review of Financial Economics, 7, pp. 133-159.
Ramiah, V., Xu, X. and Moosa, I. A. 2015. Neoclassical finance, behavioral finance and noise traders: A review and assessment of the literature. International Review of Financial Analysis, 41, pp. 89-100.
Rossi, M. and Gunardi, A. 2018. Efficient market hypothesis and stock market anomalies: Empirical evidence in four European countries. Journal of Applied Business Research (JABR), 34(1), pp. 183-192.
Thaler, R. H. and Ganser, L. J. 2015. Misbehaving: The making of behavioral economics. New York: WW Norton.
https://www.myassignmentservices.com/resources/efficient-markets-hypothesis-assignment-sample
You may have heard of 'EMH', the 'efficient market hypothesis'. Anyone who's read the introduction to any textbook on financial theory can tell you all about it. EMH basically says that you can't beat average returns because the markets are at all times "informationally efficient", that is to say all prices reflect all available information at all times. One form of EMH, the "strong EMH", suggests that this efficiency of information extends beyond publicly known information to all information, including insider information, which, it has to be said, in Australia, in some of those small stocks, is probably not far off the mark.
https://marcustoday.com.au/member/webpages/7639_locked.php?guid=5febee425ba37bbae6090475fd1baecf&id=31520
There has been a lot of economic talk throughout the world recently due to the global recession. While most countries were taken by storm, some very prominent individuals, and econometricians for that matter, saw the storm coming. Ever since the advent of this global recession, the debate has drifted beyond mere failures by governments to put in place mechanisms which could reliably predict the future fluctuation of prices as well as the growth of their economies. To fully understand the truth or falsehood of the topic, it is in order to understand the definitions and implications of the methods of investment analysis and the theory which underlies investment analysis generally. First, let us understand what the two methods of investment analysis, fundamental analysis and technical analysis, entail.

Fundamental analysis of a business basically involves the analysis of its financial statements, its management and competitive advantages, its close as well as distant competitors, and its markets. Through this analysis investors make decisions regarding the future profitability of the company. When analysing stocks, futures contracts, or even currencies on the basis of fundamental analysis, two approaches can be used: bottom-up analysis and top-down analysis. Usually fundamental analysis is performed on both historical and present data, with financial forecasting as the ultimate goal. Other possible objectives of fundamental analysis include conducting a stock valuation of a company and predicting the probable evolution of its prices, making projections about the company's business performance, evaluating a company's management and making internal business decisions, and calculating the company's credit risk. In simple terms, this type of analysis is very important when buying shares as a form of long-term investment, because the criteria used for long-term investment usually differ from those used for short-term trading.

Another type of investment analysis is technical analysis. This method involves charting and/or graphing a share's trading history, using tools such as trendlines and support and resistance. Here, the demand and supply of a stock can easily be analysed because all the relevant information needed can be retrieved from the company's stock charts. Technical analysis is thus basically the study of the prices and volumes of stocks. These two variables combine to form patterns that we can identify on the stock chart and that offer signs of possible future movements. Whether this type of investment analysis provides enough information to base trading decisions on depends purely on the investment analysts involved.

The Efficient Market Hypothesis (EMH) is a theory developed by Eugene Fama in the 1960s. The theory asserts that the financial markets are basically 'informationally' efficient; in other words, the prices of traded assets such as stocks, bonds, or even property already reflect all known information, and immediately change to reflect new information. Therefore, based on this theory, some critics such as () assert that it is impossible to outperform the market consistently by simply using information that the market already knows about, the only exception being sheer luck.
Until a few years ago, the EMH was very widely accepted by many financial economists, starting with the founder of the theory, Eugene Fama. It was Fama's very influential article that led other academics to believe in this theory and staunchly apply it in all their endeavours as financial analysts. The article, "Efficient Capital Markets", led most of these elite economists to believe that markets, especially the securities markets, were extremely efficient in reflecting information about individual stocks and about the stock market as a whole. The accepted view was that when information arises, the news spreads fast and is incorporated into securities prices without delay. Therefore neither technical analysis, which studies past stock prices in order to predict future prices, nor fundamental analysis, which analyses financial information to help investors select stocks considered to be undervalued, would enable an investor to achieve higher returns than those obtainable by holding a randomly selected portfolio of individual stocks of comparable risk.

Lo (2007) relates a joke told among academic financial economists. The joke is about an economist who was taking a walk down the street with a companion. As they were walking they came across a USD 100 bill lying idly on the ground, and as the companion stretched out to reach for the bill, the economist quickly said, "Don't bother: if it were a genuine USD 100 bill, somebody would already have picked it up." This humorous example of economic logic gone askew is a fair rendition of the efficient markets hypothesis. The EMH has consequences across the board for other academic theories and for business practice as a whole, and yet, surprisingly enough, it is still resistant to thorough empirical proof and/or refutation. Even after all the years of criticism, economists have not reached a far-reaching consensus about whether markets, especially the financial or stock markets, are indeed inefficient.

The financial economist Samuelson (1965) added his contribution to the EMH through his article titled "Proof that Properly Anticipated Prices Fluctuate Randomly". In this article he states that in an "informationally efficient" market, if prices are forecast in the right manner and using the correct variables, then price changes must be unforecastable, on the condition that they fully contain the information and expectations of all the participants in the market. After developing a series of linear-programming solutions to spatial pricing models without uncertainty, Samuelson eventually arrived at the idea of the EMH through his interest in temporal pricing models of storable commodities that are subject to decay. By contrast, Fama's (1965b, 1970) seminal papers were based on his personal interest in measuring the properties of stock prices, and in resolving the debate between technical analysis and fundamental analysis. He was among the first to use modern digital computer technology to conduct empirical research in finance, and the first to coin and use the term 'efficient markets'.
Fama operationalized the EMH by placing structure on the various information sets available to market participants. It was Fama's attraction to empirical analysis that led him to follow a very different path from Samuelson's. This path yielded significant empirical and methodological contributions, such as the many econometric tests of both single- and multi-factor linear asset pricing models, and the documentation of a number of empirical irregularities and anomalies in stock, bond, currency and commodity markets.

The efficient market hypothesis is commonly associated with the idea of a 'random walk', a term used loosely in the finance literature to characterize a price series in which all subsequent price changes represent random departures from previous prices. The sense of the random walk idea is that if the flow of information is unhindered and information is immediately reflected in stock prices, then the following day's price changes will reflect only the following day's news and will thus be independent of today's price changes. But news is by definition unpredictable, and therefore the resulting price changes must also be unpredictable and random. What this means is that prices fully reflect all known information, and even uninformed investors buying a diversified portfolio at the prices given by the market will obtain a rate of return as generous as that achieved by the experts.

As we have seen, the EMH is the proposition that current stock prices fully reflect all available information about the value of a firm, and that there is no way to earn excess profits over the overall market through the use of this information. The EMH deals with one of the most fundamental and, in fact, most exciting issues in finance: why prices in the securities markets change, and how those changes take place. It has important implications for investors, both experts and non-experts, as well as for financial managers. Many investors try to identify securities that are undervalued and expected to increase in value in the future, especially those likely to increase more than others. Investors, including investment managers, believe they can choose securities that will outperform the market, and use a variety of forecasting and valuation techniques to assist their investment decisions. Obviously, any genuine edge an investor holds translates into substantial profits. The EMH strongly asserts that none of these techniques is effective: any advantage gained does not exceed the transaction and research costs incurred, and therefore no one can predictably outperform the market.

Evidently, no other theory in economics or finance has generated such a heated academic debate between its proponents and its vehement critics. "There is no other proposition in economics," says Michael Jensen (a renowned Harvard economist), "that has more solid empirical evidence supporting it than the Efficient Market Hypothesis." His words go against the grain of Peter Lynch, who said of the hypothesis (in Fortune, April 1995): "Efficient markets?
That is a bunch of crazy stuff."

The efficient markets hypothesis (EMH) claims that efforts to profit from predicting price movements are bound to fail. The driving force behind price behaviour is the arrival of new and relevant information. An efficient market, in this regard, is one whose prices adjust relatively swiftly and, on average, without bias to new information. Consequently, the current prices of securities reflect all the information available at any particular time. Put plainly, there is no data that justifies a belief that prices are too high or, if you like, too low. Security prices tend to adjust before an investor has the opportunity to trade on, and profit from, newly available information.

The key reason for the presence of an efficient market is the rigorous competition among investors to profit from any newly available information. The ability to identify over- and underpriced stocks is very valuable, since it allows investors to buy some stocks for less than their "true" value and sell others for more than they are worth. As a result, many people invest significant time and resources in a bid to detect such mispricing. Inherently, as the competition among analysts to capitalize on over- and undervalued securities intensifies, the possibility of finding and exploiting mispriced securities becomes more and more elusive. In equilibrium, only a relatively small number of analysts will be able to garner significant profit from the detection of mispriced securities, and mostly by fluke at that. For the broad mass of investors, the payoff from information analysis is unlikely to exceed the transaction costs involved.

To conclude, fundamental analysis is more objective and more predictive of market behaviour than technical analysis. Technical analysis is product-oriented and therefore more effective at analysing markets based on the factual information that has been studied and recorded within the market. The short-term nature of technical analysis provides ample ground for its marriage with fundamental analysis, and fundamental analysis has proven to work well alongside the EMH in giving market growth a sober and reflective analysis.
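The random-walk idea discussed in this essay is easy to demonstrate in a few lines. The sketch below uses synthetic data to show the two signatures of unpredictability: near-zero autocorrelation of price changes, and no information in yesterday's direction. It is an illustration of the concept, not a test of any real market.

```python
# A minimal sketch of the random-walk idea, using synthetic price changes;
# illustrative only, not a test of real market data.
import numpy as np

rng = np.random.default_rng(seed=7)
price_changes = rng.normal(0, 1, size=100_000)

# Near-zero autocorrelation: yesterday's change says nothing about today's.
lag1 = np.corrcoef(price_changes[:-1], price_changes[1:])[0, 1]
print(f"lag-1 autocorrelation of price changes: {lag1:.4f}")

# Conditioning on yesterday's direction changes nothing either.
up_yesterday = price_changes[:-1] > 0
print("mean change after an up day:  ", round(price_changes[1:][up_yesterday].mean(), 4))
print("mean change after a down day:", round(price_changes[1:][~up_yesterday].mean(), 4))
```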
https://essays.essaysempire.com/business/investment-analysis.html
The Efficient Market Hypothesis (EMH) is a theory that the price of a security reflects all currently available information about its economic value. A market in which prices fully reflect all available information is said to be efficient. The concept is important for investment management because it serves as a guide to expectations about the potential for profitable trading, the likelihood of finding an investment manager who can beat the market, and the limits of predictability in the capital markets. If the theory is precisely true, it is impossible for a speculator, an investment manager, or the clients of the manager to consistently beat the market. The intuition underlying the EMH is the invisible hand of the marketplace. In a quest for profits, competition among speculators to buy undervalued assets or sell overvalued assets will quickly drive expected gains to trade to zero. The statement that prices “reflect all available information” implies that no trader has any kind of informational advantage in the security markets. If this is so, then the price today reflects the common or “market” expectation of what the security would be worth tomorrow.
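The excerpt's claim that "the price today reflects the common or 'market' expectation of what the security would be worth tomorrow" has a standard textbook formalization, sketched below. The notation (information set I_t, dividend D_{t+1}, required return r) is ours, not the book's.

```latex
% Standard martingale-style statement of informational efficiency
% (our notation, not quoted from the excerpt): with information set
% $\mathcal{I}_t$, dividend $D_{t+1}$ and required return $r$,
\[
P_t \;=\; \frac{\mathbb{E}\left[\,P_{t+1} + D_{t+1} \mid \mathcal{I}_t\,\right]}{1+r},
\]
% so the expected excess gain from trading on $\mathcal{I}_t$ is zero:
\[
\mathbb{E}\left[\,\frac{P_{t+1} + D_{t+1}}{P_t} - (1+r) \,\Big|\, \mathcal{I}_t\,\right] \;=\; 0 .
\]
```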
https://www.oreilly.com/library/view/modern-portfolio-theory/9781118469941/32_chapter17.html
The Inefficient Capital Market and the Implications of the "Incomplete Revelation Hypothesis"
Date: 2004. Author: Meier, Joseph A.
Abstract: The Efficient Markets Hypothesis (EMH) has spurred debate in academia since the theory was published. The seminal work of Fama (1970) on the EMH states that market prices fully reflect all publicly available information. Bloomfield's (2002) "Incomplete Revelation Hypothesis" (IRH) asserts that statistics that are more costly to extract from public data are less completely revealed in market prices. This alternative to the EMH can account for many of the phenomena that are central to financial reporting but inconsistent with the EMH. The IRH also clarifies that informational inefficiency does not necessarily imply irrationality on the part of traders. In addition, the IRH suggests that extraction costs are real and must be recognized. Here, it is hypothesized that Bloomfield's IRH model can (a) link price reactions and extraction costs, (b) elicit models for return predictability (drift), (c) provide tools for financial analysis and investment practice, (d) better characterize managers' financial reporting behavior, and (e) recognize financial reporting regulation and its effects.
https://cache.kzoo.edu/handle/10920/26612
Has Burton Malkiel Abandoned the Efficient Market Hypothesis? Long-term readers of Angrybear know I often appeal to the logic of the Efficient Market Hypothesis (EMH). In comments to a recent post, one reader suggested EMH was so outdated that even Burton Malkiel has abandoned it. That claim surprised me in light of his April 2003 paper The Efficient Market Hypothesis and Its Critics: The intellectual dominance of the efficient-market revolution has more recently been challenged by economists who stress psychological and behavioral elements of stock-price determination and by econometricians who argue that stock returns are, to a considerable extent, predictable. This survey examines the attacks on the efficient-market hypothesis and the relationship between predictability and efficiency. I conclude that our stock markets are more efficient and less predictable than many recent academic papers would have us believe … For me, the most direct and most convincing tests of market efficiency are direct tests of the ability of professional fund managers to outperform the market as a whole. Surely, if market prices were determined by irrational investors and systematically deviated from rational estimates of the present value of corporations, and if it was easy to spot predictable patterns in security returns or anomalous security prices, then professional fund managers should be able to beat the market … A remarkably large body of evidence suggests that professional investment managers are not able to outperform index funds that simply buy and hold the broad stock market portfolio. A couple of years later, Malkiel wrote Reflections on the Efficient Market Hypothesis: 30 Years Later: In recent years financial economists have increasingly questioned the efficient market hypothesis. But surely if market prices were often irrational and if market returns were as predictable as some critics have claimed, then professionally managed investment funds should easily be able to outdistance a passive index fund. This paper shows that professional investment managers, both in the U.S. and abroad, do not outperform their index benchmarks and provides evidence that by and large market prices do seem to reflect all available information. I'm not aware of a subsequent paper where Professor Malkiel has abandoned his long-held view – but I'd appreciate it if someone would provide us with his latest thoughts.
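Malkiel's "most direct test" reduces to a simple comparison: subtract the benchmark's return from each fund's return and ask whether the average difference is reliably positive. A minimal sketch, with made-up placeholder numbers:

```python
# A minimal sketch of the manager-vs-index comparison Malkiel describes.
# The return figures are hypothetical placeholders, not real fund data.
import numpy as np

fund_returns  = np.array([0.081, 0.054, 0.112, -0.031, 0.067])  # hypothetical
index_returns = np.array([0.095, 0.049, 0.127, -0.022, 0.071])  # hypothetical

active = fund_returns - index_returns  # value added (or lost) by management
t_stat = active.mean() / (active.std(ddof=1) / np.sqrt(len(active)))
print(f"mean active return: {active.mean():.4f}, t-statistic: {t_stat:.2f}")
# A t-statistic near zero (or negative once fees are included) is the
# pattern Malkiel reports for professional managers as a group.
```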
https://angrybearblog.com/2006/10/has-burton-malkiel-abandoned-efficient
The Fractal Market Hypothesis (FMH) is a theory that suggests that financial markets behave in the same way as natural phenomena and are subject to the same physical laws found in nature. It suggests that financial markets are composed of similar patterns which repeat over and over again at different scales. These patterns can be used to identify market trends and can help investors make more informed decisions. The Fractal Market Hypothesis is one of the alternatives to the Efficient Market Hypothesis (EMH), which states that all available information is already factored into the price of a security. The other alternative is the Adaptive Market Hypothesis (AMH). The referenced paper examined how the fractal nature of the financial market can be quantified and used in investment analysis. It pointed out: "This paper examined the fractal properties of developed and developing market indices and examined the evolution of these fractal properties over a two-decade period. The FMH, using empirical evidence, posits that financial time series are self-similar, a feature which arises because of the interaction of investors with different investment horizons and liquidity constraints. The FMH presents a quantitative description of the way financial time series change; so after the testing of observed, empirical properties of financial market prices, forecasts may be formalized. Under the FMH paradigm, liquidity and the heterogeneity of investment horizons are key determinants of market stability, so the FMH embraces potential explanations for the dynamic operation of financial markets, their interaction and inherent instability. During 'normal' market conditions, different investor objectives ensure liquidity and orderly price movements, but under stressed conditions, herding behavior dries up liquidity and destabilizes the market through panic selling." We believe that the indicators presented in the paper can be implemented in trading systems. Let us know what you think in the comments below or in the discussion forum.
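One common way to quantify the self-similarity the paper describes is the Hurst exponent H. The sketch below estimates H from how the dispersion of k-step increments scales with k (roughly like k^H); this is one standard estimator chosen for brevity, not necessarily the method used in the paper.

```python
# A minimal sketch of Hurst-exponent estimation via the scaling of lagged
# differences; one standard method, not necessarily the paper's. H ~ 0.5
# suggests a random walk, H > 0.5 persistence, H < 0.5 mean reversion.
import numpy as np

def hurst_exponent(series, lags=range(2, 100)):
    # std of k-lag differences scales roughly like k**H for self-similar series
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(list(lags)), np.log(tau), 1)
    return slope  # the regression slope estimates H

rng = np.random.default_rng(seed=1)
random_walk = np.cumsum(rng.normal(size=10_000))
print(f"estimated H for a random walk: {hurst_exponent(random_walk):.3f}")  # ~0.5
```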
https://harbourfronts.com/fractal-market-hypothesis-quantification-and-usage/
The “Efficient Markets Hypothesis” is a popular target of anger and derision among lay critics of the econ profession. How can financial markets be “efficient” when they just crashed and took our economy down with them? And when sensible people like Bob Shiller, Nouriel Roubini, Bill McBride, et al. were screaming their heads off about a housing bubble years before the pop? Of course I have some sympathy for these complaints. But the more I learn about and teach finance, the more I learn what an important and useful idea the “EMH” in fact is. I don’t want to say that the EMH is unfairly maligned, but I do think that its vast usefulness is usually ignored in the press. First of all, people should realize that the EMH is misnamed—it’s not really a hypothesis, it’s not about “efficiency” in the economic sense of the word, and it’s not unique (so it shouldn’t have a “the” in front of it). Some of this miswording was just semantic clumsiness on the part of the people who came up with the theory. Some was sloppy science. The “efficient” part of “EMH” doesn’t mean that financial markets lead to a Pareto-efficient outcome. You could have externalities—for example, every time you make a financial transaction, God might kill a kitten—and the market could still be “efficient” in the way that financial economists use the term. Similarly, a vastly “inefficient” financial market might be Pareto efficient, since it might only be possible to make profits by taking advantage of someone else’s stupidity. The “efficient” actually just refers to information-processing efficiency. What that basically means is that if there’s some piece of information out there – some fact about a company’s balance sheets, or some pattern in past prices, etc. – the market price should reflect that piece of information. That’s what “efficient” means here. But exactly how should prices reflect information? Here’s the bigger problem with the term “EMH” (the “sloppy science” part)—it’s not really a hypothesis. How prices reflect information will always depend on people’s preferences. In finance, preferences include preferences about risk. So without a measure of risk, it’s impossible to scientifically test whether or not prices incorporate information. To be a real hypothesis, the EMH needs to be paired with a specification of risk (or, more generally, a hypothesis about people’s preferences with respect to uncertainty and time, and a hypothesis about the sources of risk). And since there are many possible such specifications, there isn’t just one “EMH”…there are infinitely many. To complicate things, “the EMH” says nothing about how long it takes for the market to process information. So even if an EMH happens to be true at one frequency (say, daily), it might not be true at the 1-second frequency. OK, so is even one of these EMHs true, at some frequency? We can do statistical tests, but we’ll never really know. First of all, our tests are all pretty weak. But more importantly, conditions may change! An EMH might be true for a while, and we might conclude it’s true, and then things might change and for one or two years it might stop being true, and then we’d do some more statistical tests and say “Oh wait, I guess it’s not true after all!”, and then it might go right back to being true! We generally assume the laws of physics don’t change from year to year, but it’s easy to imagine that the “laws” of finance aren’t as immutable. What if the market is “efficient” 99% of the time, and the rest of the time there’s a catastrophic bubble?
And to top it all off, theory says that the strong form of the EMH can’t even be true. So “the EMH” is very limited as a scientific hypothesis or physics-like law of nature. And I think that ever since many of these points were pointed out (I think by Andrew Lo, though someone else may have preceded him), financial economists have stopped talking about “the EMH” as such, except in a vague hand-wavy way during informal discussions. Sloppiness has been much reduced. But I do seem to recall that the title of this post was “In defense of the EMH”. So I had better get around to defending it! What I want to defend is the idea behind the EMH. Even if the data rejected every single EMH, the idea would still be incredibly useful for the average person. Let’s call this idea the Random Markets Idea, or RMI. The simplest form of the RMI was stated by Paul Samuelson in 1965: “Properly anticipated prices fluctuate randomly.” Basically, if it was pretty easy to see where prices were headed, a lot of people would see it, and try to make free money by trading on it. Since people in the finance industry are doing a lot of work—watching the news like a hawk, doing constant analysis of changing numbers—chances are that the price change will happen so fast that you won’t have time to get in on the action. So from the perspective of any of us who doesn’t have a supercomputer in his head, price movements must be unpredictable and surprising. They must seem random. That’s it. That’s the RMI. Note that this is very different than saying, “On average, people don’t beat the market average.” This is more than that. This is saying that even if you manage to beat the market average for a year or two years or even ten years, you shouldn’t expect to be able to repeat your performance next year. That may seem counterintuitive, or even silly. “Hey,” you think, “I beat the market last year, so I must be one of the smart guys! And that means I should be able to repeat my performance…right?” Well, maybe. But the RMI says that that’s actually very, very unlikely. It’s far more likely that you just got lucky. Now here we get to why the RMI is so useful to you and me and most people (and to the managers of our pension funds and mutual funds). It provides a check on our behavioral biases. Probably the most robust finding in the field of behavioral finance is that individual investors do badly. They are overconfident. They trade too much and take losses on trading costs. They suffer from biases like disposition effect, probability mis-weighting, recency bias, etc. And as a result they lose money, relative to the wise folks who just stick their money in a low-cost diversified portfolio and watch it grow. As for institutional investors – mutual fund managers and pension fund managers – we don’t know as much about what they do, but we do know that very few of them manage to consistently beat the market, year after year (and most don’t beat the market at all). It’s interesting to note that people usually think of behavioral finance as being an alternative to efficient-markets theory. And sometimes it is! But in the case of personal investing – i.e., the single most important way that you will probably participate in financial markets—the two ideas support each other. The RMI says “You can’t beat the market;” behavioral finance says “But you’re probably going to lose your money trying.” Of course, even the RMI isn’t quite true.
There are some people—a very few—who correctly guess price movements, and make money year after year after year (I work with a couple). But you’re very unlikely to be one of those people. And your behavioral biases—your self-attribution bias, overconfidence, and optimism—are constantly trying to trick you into thinking you’re one of the lucky few, even when you’re not. The RMI is an antidote to this! Just remind yourself that market movements should be really, really tough to predict. Then, when you start to think “It’s so so so obvious, why can’t people see AAPL is headed for $900, I’m gonna trade and get rich!”, you’ll realize that no, it can’t be that obvious. And you’ll restrain your itchy trigger finger. And when you start to think “My money manager is awesome, he beat the market the last 5 years running, I’ll pay his hefty fee and he’ll make me rich!”, you’ll stop and realize that no, it was probably luck. And you’ll think about putting your money in index funds, ETFs, and other low-cost products instead. In general, the RMI focuses your brain on assessing risks instead of trying to outguess the market. This is important, because risk is a difficult thing to think about, while making bets and guesses about returns is relatively easy. But in the real world, most of your portfolio’s return will be determined not by how well you make bets and guesses, but by the riskiness of the asset classes in which you choose to invest (stocks, bonds, etc.). Most of the return you get, in the long run, will come from taking risk. But because risk is a cost (imagine if you have an emergency and need to withdraw your money while the market is down!), you need to carefully balance your desire for return with your tolerance of risk. This is what the RMI helps you think about. OK, let’s step back a second. Why do we put our trust in any scientific theory? Well, because it’s useful. We know Newton’s laws aren’t exactly right, but we know they’re very useful for landing a rocket on the moon, so when we land rockets on the moon we don’t worry about the slight wrongness. And as for our most advanced theories—relativity, quantum mechanics, etc.—well, even those might just be approximations of some more general theory that we just haven’t figured out yet. But in the meantime, we use what we’ve got if it’s a good baseline approximation. The RMI—the general idea behind the various EMHs—is a good baseline for the personal investor (and probably for the pension fund manager too). It works pretty darn well. There are plenty of other areas in which market inefficiency/predictability may matter—financial regulation, corporate compensation, etc.—but you won’t typically need to worry about those. You’ll be better off treating the market as if it’s more-or-less unpredictable and random. Addendum: It would be unfair not to point out that the RMI is also an important baseline, or jumping-off point, for most financial research. First of all, it leads to the idea that most of the observable factors that explain stock returns should be things that move many stocks at once (and thus can’t be diversified). Second, it helped motivate the “limits to arbitrage” literature – if predictable market movements are due to “the market staying inefficient longer than you can stay solvent” (as a famous hedge fund manager-turned-economist once put it), that tells us that when we see things like bubbles, we should look for reasons why “smart money” investors like hedge funds can’t stay solvent. 
Third, the RMI focused financial economists themselves on explaining risk. That has led to the observation and investigation of interesting phenomena like fat-tailed returns, clustered volatility, tail dependence, etc. Fourth, the fact that it’s hard to beat the market raises the important question of why so many people try (and why so many people trick themselves into thinking they did, when they didn’t). That investigation has led to much of behavioral finance itself. In other words, in science as in your personal investing life, the RMI serves as the fundamental baseline or jumping-off point. That doesn’t mean it’s the destination or the conclusion of financial economics. It isn’t. But having a good baseline principle is extremely important in any science. You need to know where to start. What I actually think about the RMI (or “the EMH”) is this: At any given moment, there are infinitely many models that describe financial markets better than the RMI. And at any given moment, there are a finite number of available, known models that describe financial markets better than the RMI. But all non-RMI-type models will stop working well shortly after they are discovered. In other words, if you had to pick one model of financial markets and stick to that model for a very long time, the RMI is the best one you could pick. So in some sense, the RMI is the closest you can get in financial markets to an exploitable, stable “law of nature” like the models we use in physics. For people who don’t have time or skill to constantly search for new models, the RMI is best. I plan to make this idea the subject of another post in the future.
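The "it was probably luck" point made above is easy to see with a simulation: give thousands of managers zero skill and count how many beat the market many years running anyway. The parameters below are illustrative assumptions.

```python
# A minimal sketch of the "lucky managers" argument: zero-skill managers,
# each beating the market in any year with probability 0.5. Parameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=3)
n_managers, n_years = 10_000, 10

beats = rng.random((n_managers, n_years)) < 0.5   # True = beat the market that year
ten_year_streaks = beats.all(axis=1).sum()

print(f"zero-skill managers beating the market all {n_years} years: {ten_year_streaks}")
print(f"expected by pure chance: {n_managers * 0.5 ** n_years:.1f}")  # about 9.8
```

With ten thousand managers and a fair coin, roughly ten of them will post a perfect ten-year record with no skill at all, which is exactly why a hot streak is weak evidence of genuine ability.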
https://sandyyadav.com/2013/04/05/why-the-efficient-markets-hypothesis-is-fatally-flawed-but-why-the-idea-underneath-it-is-kinda-useful-but-not-entirely-watertight/?shared=email&msg=fail
The Efficient Market Hypothesis (EMH) is an application of ‘Rational Expectations Theory’ in which people who enter the market use all available and relevant information to make decisions. The only caveat is that information is costly and difficult to get. The Efficient Market Hypothesis implies that stock prices reflect all available and relevant information, so you can’t outguess the market or systematically beat it. This means it is impossible for investors either to purchase undervalued stocks or to sell stocks at inflated prices. There are three forms of market efficiency.
Forms of The Efficient Market Hypothesis
1. Weak-form efficiency
Future prices cannot be predicted by analyzing prices from the past, meaning there are no meaningful patterns to be gained from past performance. Future price movements are determined entirely by information not contained in the price series.
2. Semi-strong-form efficiency
Under semi-strong-form efficiency, share prices adjust to publicly available new information very rapidly and in an unbiased fashion, such that no excess returns can be earned by trading on that information.
3. Strong-form efficiency
Under strong-form market efficiency, share prices reflect all information, public and private, and no one can earn excess returns.
Evidence against the Efficient Market Hypothesis (EMH)
1. Small Firm Effect
Smaller firms tend to earn a higher risk-adjusted return than they should.
2. January Effect
Stocks tend to rally from December going into January, even though there is no corresponding change in fundamentals.
3. Market Overreaction (Excessive Volatility)
The market tends to react to good/bad news more severely than it should.
4. Mean Reversion
Over a long period, a stock that was doing badly for a stretch of time will eventually do better over a comparable stretch.
5. The Incorporation of New Information is Slow
New information may not be represented adequately in prices, or may take time to be incorporated.
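A naive check of the weak form described above is to measure whether yesterday's return predicts today's. The sketch below computes the lag-1 autocorrelation of daily log returns; the file name and column are hypothetical placeholders for any daily price series.

```python
# A minimal weak-form check on a daily price series: lag-1 autocorrelation
# of log returns. File path and column name are hypothetical placeholders.
import numpy as np
import pandas as pd

prices = pd.read_csv("daily_closes.csv")["close"].to_numpy()  # hypothetical file
returns = np.diff(np.log(prices))

lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
band = 2 / np.sqrt(len(returns))  # rough 95% band around zero
print(f"lag-1 autocorrelation: {lag1:.4f} (chance band: +/-{band:.4f})")
# A value far outside the band would hint at predictability inconsistent
# with weak-form efficiency (before transaction costs).
```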
https://www.intelligenteconomist.com/efficient-market-hypothesis/
The Efficient Market Hypothesis, known as EMH in the investment community, is one of the underlying reasons investors may choose a passive investing strategy. There are three forms of the efficient market hypothesis: strong, semi-strong and weak. When people talk of efficient markets, they are describing a situation in which all the decisions of market participants are completely rational and take account of all the information available. EMH holds this to be true and so states that the market price will always be completely accurate, as all new information will be priced in immediately. EMH argues that the only volatile movements occur after unexpected news, and that once the information is digested, the efficient market resumes. The efficient market hypothesis does not imply that no investor can ever outperform the market; it holds that there will always be outliers beating the market averages, together with those who dramatically lose to the market, while the majority lands closer to the median.
https://www1.niftytrader.in/terms/e/efficient-market-hypothesis
Topic Page: Fama, Eugene (1939 - )
Eugene Fama was a tenured professor at the University of Chicago before he was 30, where he taught portfolio theory before modern finance became established. He has spent his career at the Graduate School of Business, University of Chicago, where he revolutionized thinking on the efficient markets hypothesis, and where he is now Chairman of its Center for Research in Security Prices. He is also Director of Research at Dimensional Fund Advisors, and is an advisory editor of the Journal of Financial Economics. He was the first elected Fellow of the American Finance Association, and is also a Fellow of the Econometric Society and the American Academy of Arts and Sciences. He has received numerous honorary degrees, was the co-winner of the Smith Breeden Prize for the best paper in the Journal of Finance in 1992, and received the first Deutsche Bank Prize in Financial Economics in 2005. Eugene Fama is a prolific author and researcher, having written two books and published more than 100 articles in academic journals. He is among the most cited of America’s financial researchers. He is identified with research on markets, particularly developments in the efficient market hypothesis and the random walk theory, as well as with his work on portfolio theory and asset pricing, both theoretical and empirical. He coined the term “efficient markets theory” in a 1970 paper on efficient capital markets, arguing that it is practically impossible for someone to consistently beat the stock market because of the wide availability of information. He was the first of many to study how stock prices respond to an event, using price data from a newly available database. He focuses much of his study on the relation between risk and return, and the implications for portfolio management, and has also made innovations in how we understand the functioning of markets, asset pricing theory, and corporate finance. He helped popularize the efficient market hypothesis and the random walk theory. The efficient market hypothesis evolved from his PhD thesis, and suggested that stock markets are efficient because securities will be appropriately priced and reflect all available information in a market with well-informed investors. The random walk theory was discussed in one of his papers, concluding that stock price movements are unpredictable and follow a random walk. His work on efficient markets proposed two key improvements: classifying three types of efficiency—strong form, semi-strong form, and weak efficiency—and identifying the notion of market efficiency with the model of market equilibrium. His work on the efficiency of markets has helped create many new finance products, and aided the development of new futures contracts for hedging risks. He has written a series of papers with Kenneth French that question the validity of the capital asset pricing model (CAPM) for not taking into account market capitalization and the ratio of book value to market value. In portfolio management, Fama and French also developed a successful three-factor model to describe market behavior. “In an efficient market at any point in time the actual price of a security will be a good estimate of its intrinsic value.” Eugene Fama
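The Fama-French three-factor model mentioned above is, in practice, an ordinary least-squares regression of a portfolio's excess return on three factor returns. A minimal sketch with simulated placeholder data (real factor series would come from a source such as Ken French's data library):

```python
# A minimal sketch of a Fama-French three-factor regression:
# excess_return = alpha + b*MKT + s*SMB + h*HML + noise, fit by OLS.
# All data below are simulated placeholders, not real factor returns.
import numpy as np

rng = np.random.default_rng(seed=0)
T = 120  # hypothetical number of monthly observations

mkt = rng.normal(0.006, 0.040, T)   # market excess return
smb = rng.normal(0.002, 0.030, T)   # small-minus-big (size) factor
hml = rng.normal(0.003, 0.030, T)   # high-minus-low (value) factor
excess_ret = 1.1 * mkt + 0.4 * smb - 0.2 * hml + rng.normal(0, 0.02, T)

X = np.column_stack([np.ones(T), mkt, smb, hml])
alpha, b, s, h = np.linalg.lstsq(X, excess_ret, rcond=None)[0]
print(f"alpha={alpha:.4f}  market beta={b:.2f}  size={s:.2f}  value={h:.2f}")
# Under the model, a manager with no stock-picking skill should show an
# alpha statistically indistinguishable from zero.
```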
https://search.credoreference.com/content/topic/fama_eugene_1939
Finance Essays - Evaluating Behavioural Finance
Evaluating Behavioural Finance. The brief: evaluate the market bubble and crash, using behavioural finance to explain the bubble where efficient market theory cannot, and thereby show that behavioural finance has a solid basis. Outline of research: the project critically evaluates two theories, the EMH and behavioural finance, and investigates how behavioural finance can explain the bubble and market crash where the EMH cannot. Most specifically, it uses behavioural finance to explain the causes of bubbles and crashes, and considers how institutional investors could explain a bubble. Note: this is a literature review that critically evaluates the relevant theory; the project does not involve empirical work.
Literature Review
In this section I will be looking at the relevant literature surrounding the efficient market hypothesis (EMH) and behavioural finance. I will also look at literature that relates to market bubbles. I will start with the literature looking at the EMH.
Efficient Market Hypothesis
The Efficient Market Hypothesis (EMH) is the theory behind efficient capital markets. An efficient capital market is one in which security prices reflect and rapidly adjust to all new information. The derivation of the EMH is mostly credited to the work of Fama. In 1965 the doctoral dissertation written by Fama was republished. In this, Fama looks at the current literature on stock price behaviour and examines the distribution and dependence of stock price returns. He concluded that 'it seems safe to say that this paper has presented strong and voluminous evidence in favour of the random walk hypothesis.' Due to a better understanding of price formation in competitive markets, the random walk model came to be seen as a set of observations consistent with the efficient markets hypothesis. This switch began with observations published in a paper by Samuelson in 1965. Samuelson presented his proof in the general form, which helped in the understanding of the notion of a well-functioning market. His paper had the observation that 'in competitive markets there is a buyer for every seller. If one could be sure that a price would rise, it would have already risen.' Samuelson stated that 'arguments like this are used to deduce that competitive prices must display price changes...that perform a random walk with no predictable bias.' Following on from the work done by Samuelson, a paper was published by Fama in 1970. This paper consisted of a comprehensive review of the theory and evidence of market efficiency. He defined an efficient market as 'one in which trading on available information fails to provide an abnormal profit.' This paper was one of the first to distinguish between the three forms of market efficiency: the weak form, semi-strong form and strong form. He concluded that "the results are strongly in support" of the weak form of market efficiency and that "in short, the evidence in support of the efficient markets model is extensive, and (somewhat uniquely in economics) contradictory evidence is sparse." I will now summarise some papers that have been written on the criticism of the EMH.
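The dependence of returns that Fama examined is often probed with simple statistics; one common example is a variance-ratio check. Under a random walk, the variance of q-period returns is about q times the one-period variance, so the ratio sits near 1. The following is a generic illustration on simulated data, not part of the essay's own analysis:

```python
import numpy as np

def variance_ratio(returns: np.ndarray, q: int) -> float:
    """Variance of non-overlapping q-period returns over q times the
    one-period variance; approximately 1.0 under a random walk."""
    m = len(returns) - len(returns) % q   # trim so the length divides q
    r = returns[:m]
    q_period = r.reshape(-1, q).sum(axis=1)  # q-period (log) returns
    return q_period.var(ddof=1) / (q * r.var(ddof=1))

rng = np.random.default_rng(1)
iid = rng.normal(0, 0.01, 5000)         # serially uncorrelated returns
trending = iid + 0.3 * np.roll(iid, 1)  # positively autocorrelated returns

print("iid:     ", round(variance_ratio(iid, 5), 3))       # close to 1.0
print("trending:", round(variance_ratio(trending, 5), 3))  # well above 1.0
```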
Although there has been a vast amount of literature published on the development and support of the efficient market theory, there have also been various studies published criticising the EMH. This criticism comes about partly because the EMH is difficult to test. A number of studies indicate anomalous behaviour, which appears to be inconsistent with market efficiency. Such anomalies include the small firm effect, discussed in a paper by Banz in 1981. Banz analysed monthly returns over the period 1931-75 on shares listed on the New York Stock Exchange. Over this interval, the fifty smallest stocks outperformed the fifty largest by an average of one percentage point per month, on a risk-adjusted basis. After the publication of this paper, many other authors published their own papers examining the small firm effect. A paper by Ball in 1978 points out that the evidence could equally indicate shortcomings in the models of expected return. A paper by Fama in 1998 concludes that further study should not be devoted to developing behaviourally based theories of stock markets that account for the apparent anomalies, but that the search for better asset pricing models should take precedence. There is also the area of behavioural finance that criticises the EMH. I will look at this in more depth in the next section.
Market Bubble
While the EMH is generally regarded as the best theory to describe the actions of market prices, it is not perfect, and sometimes events occur that contradict it. One of these events is the bubble. A bubble occurs when a specific industry's market prices do so well that prices seem to rise higher than the EMH dictates. Eventually, the bubble bursts and prices return to a level more in line with the EMH. One famous bubble was the dot.com bubble. The EMH does not explain why such a bubble exists in the first place, and this is one of its major criticisms. Many academics have turned to the relatively new theory of behavioural finance to explain the bubble.
Behavioural Finance
One area that has recently undermined the EMH is the work published on behavioural finance. As observed by Shleifer (2000), 'At the most general level, behavioural finance is the study of human fallibility in competitive markets.' Behavioural finance incorporates elements of cognitive psychology into finance in an effort to better understand how individuals and entire markets respond to different circumstances. Behavioural finance is based on the principle that not all investors are rational. Some investors can be over-confident, while other less knowledgeable investors might be prone to herding effects. Shefrin (1999) was one such author to discuss behavioural finance, arguing that 'a few psychological phenomena pervade the entire landscape of finance.' Harrington (2003) agrees with the notion that overconfidence can lead to irrational behaviour, stating that 'investors can become irrational and their irrational behaviour affects their ability to profit from owning stocks and bonds.' Of course, behavioural finance does have its drawbacks, one of which is that using instincts alone can result in a loss, due to human error. The person using their instincts to determine where to invest might not have great financial knowledge in the first place. Also, this person might be having a bad day, be under a great deal of stress, or be distracted in some other way.
This could result in the wrong decision being made. Therefore, it is a good idea to use behavioural finance on top of the traditional theories already in use today. This view is supported by an article by Malkiel (1989), who agrees that behavioural aspects have great importance in stock market valuation. He argues that behavioural factors play an important role in stock valuation alongside traditional valuation theories. This is summed up by the following quote: 'market valuations rest on both logical and psychological factors. The theory of valuation depends on the projection of a long-term stream of dividends whose growth rate is extraordinarily difficult to estimate. Moreover, the appropriate risk premiums for common equities are changeable and far from obvious either to investors or economists. Thus, there is room for the hopes, fears, and favourite fashions of market participants to play a role in the valuation process.' Another article from the Banker (2004) also supports the view that behavioural finance has a role to play alongside the traditional views. In this section I will look at literature that tries to see whether behavioural finance can explain the bubble. Many authors have argued that bubbles can be caused by over-enthusiasm. For example, the new communication technology of the 1990s was exaggerated (causing the dot.com bubble); that is, the innovation was over-hyped by some quarters, such as the media and governments. This can lead to irrational behaviour by investors, who become over-confident in the technology or industry. Another consequence of this over-enthusiasm is that it can attract herding behaviour. The irrational investor will be more likely to invest in something that is being hyped up because they feel that others are doing the same thing: if others are doing it, then it must be a good idea for them to do it as well. A factor that will have contributed to the development of a bubble is speculation. One author who observed the speculation effect in the dot.com boom was Giombetti (2000). Many informed investors would have over-invested in a specific industry, going against market theory, in the hope that their investment would pay off. Even if their investment were initially at a loss, they would have stayed with it; authors of behavioural finance outline this behaviour. The behaviour of these investors would have distorted market conditions for other investors, and the herding effect would have been greater because of it. These factors would have led to the stock prices of a certain industry being vastly over-priced, thereby causing the bubble. The bubble that has been created will, in turn, attract other investors, who invest because they feel they are missing out on a good thing: another example of herding. When the bubble burst, stock prices would have fallen rapidly, causing investors to lose vast sums of money. This would cause them to pull out of the industry, which, in turn, causes the companies themselves to collapse. If it were not for irrational investment, investors might have pulled out earlier, before the collapse; the collapse might not even have happened. Other authors discuss some of the factors that cause investors to become irrational. Such authors are Johnsson, Lindblom and Platan (2002).
In their master's dissertation they discuss the various factors of irrationality. One is the observation that investors hang on to losing shares longer than market theory dictates, because they are waiting for the performance of the share to change for the better. This is referred to as loss aversion, and it is an example of a psychological factor affecting the investment decision. Another psychological factor that causes irrational behaviour is the feeling of regret: authors argue that past bad decisions cause investors to feel regret, and this alters their behaviour in such a way as to become irrational. Irrational behaviour also arises when investors use mental shortcuts in investment decisions. These shortcuts usually lead investors to the right decision but occasionally cause them to make the wrong one; optical illusions are a good example of how shortcuts can cause mistakes. A paper on www.undiscoveredmanagers.com is one such paper that covers this point. Of course, there are many authors who do not believe in the theory of behavioural finance. These authors argue that traditional financial theory can still be used to explain current market conditions. One such author is the person credited with the idea of the efficient market hypothesis, Eugene Fama. Fama (1998) argues that anomalies can be explained by traditional market theory. He argues that 'apparent overreaction of stock prices to information is about as common as under-reaction' and suggests that this finding is consistent 'with the market efficiency hypothesis that the anomalies are chance events.' Other authors have argued that behavioural finance is only a study of individual investor behaviour, and that the theory has not been proven on a market-wide scale, whereas the traditional theories of finance have been.
References
www.UndiscoveredManagers.com (1999) Introduction to Behavioral Finance.
Ball, R. (1978) Anomalies in Relationships Between Securities' Yields and Yield-Surrogates, Journal of Financial Economics, 6, pp. 103-26.
Banz, R. (1981) The Relationship Between Return and Market Value of Common Stocks, Journal of Financial Economics, 9, pp. 3-18.
Fama, E. F. (1965) The Behaviour of Stock Market Prices, Journal of Business, 38 (1), pp. 34-105.
Fama, E. F. (1970) Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, 25 (2), pp. 383-417.
Fama, E. F. (1998) Efficiency Survives the Attack of the Anomalies, GSB Chicago Alumni Magazine, (Winter), pp. 14-16.
Giombetti, R. (2000) The Dot.com Bubble, www.EatTheState.org, Vol. 4, Issue 23.
Harrington, C. (2003) Head Games: Helping Quell Investors' Irrational Antics, Accounting Today, v17 i11 p5(2).
Johnsson, M., Lindblom, H. & Platan, P. (2002) Behavioral Finance - And the Change of Investor Behavior During and After the Speculative Bubble at the End of the 1990s.
Malkiel, B. G. (1989) Is the Stock Market Efficient?, Science, v243 n4896 p1313(6).
Samuelson, P. (1965) Proof That Properly Anticipated Prices Fluctuate Randomly, Industrial Management Review, 6, pp. 41-49.
Scholes, M. (1972) The Market for Securities: Substitution Versus Price Pressure and the Effects of Information on Share Prices, Journal of Business, 45, pp. 179-211.
Shefrin, H. (1999) Beyond Greed and Fear: Understanding Behavioral Finance and the Psychology of Investing, Harvard Business School Press.
https://www.ukessays.com/essays/finance/evaluating-behavioural-finance.php
Rooted in agency theory and in dividend signalling theory are several hypotheses, the most significant and most relevant of which include the market efficiency hypothesis, the substitution hypothesis, and the free cash flow hypothesis. The market efficiency hypothesis, better known as the efficient market hypothesis (EMH), suggests that markets are informationally efficient and that therefore, "at any given time and in a liquid market, security prices fully reflect all available information" (Fama 1970, p. 1575). The implication is that the same efficiency of information will exist in dividend announcement content, and will be wholly reliable. In addition, as Strom elaborates, "If the market is efficient, prices will instantly adjust to and fully reflect new available information without tendency for further increases or decreases" (2013, p. 10). Yet prices are not consistent where market reactions are delayed or overreactive (Strom 2013; Viswanath 1996; Fama 1970). Figure 1 demonstrates the theoretical stock market reactions to new information (Strom 2013): this includes both delayed reactions and overreactions (Viswanath 1996).
[Figure 1: Stock Market Reactions to New Information]
More specifically, Figure 2 illustrates overreaction and delayed reaction in particular, whereby in an efficient market, the market price "instantaneously adjust[s] to and reflect[s] new information" (Viswanath 1996, para. 14), with no trend toward further change (increase or decrease) and therefore with equilibrium (para. 14). With a delayed reaction, however, the market price initially only "partially adjust[s]" to new dividend announcement content, taking time to fully "reflect" the new information (para. 14); and with overreaction, the market price initially "overadjust[s]" to new dividend announcement content (para. 14).
[Figure 2: Overreaction and Delayed Reaction]
In sum, the composite theoretical approach thus far takes into consideration the dynamic of market reactions and the dividend effect to suggest that in an efficient market, the dividend effect is activated when dividend announcements are made, inciting activity and resulting in reactions in the market. An agency theory-based hypothesis, the substitution hypothesis explained by Jensen (1986) points to substitution of dividends by way of a buyback system of share repurchase/debt. This asset substitution activity is acknowledged to reduce agency problems, to transfer wealth from bondholders to shareholders (Yahyaee 2006), and, where applicable, to provide tax breaks. In the latter instance, for example, Jensen explains that "Interest payments are tax deductible to the corporation, and that part of the repurchase proceeds equal to the seller's tax basis in the stock is not taxed at all" (p. 326). Moreover, Poterba and Summers attribute a valuation effect to taxes on dividends. However, in Oman, with the absence of dividend taxation, combined with ostensibly high leverage and concentrated ownership trends (Al-Yahyaee et al. 2011), the substitution hypothesis is displaced, returning theory to the dividend signalling paradigm, which suggests that taxed dividends and capital gains are necessary for dividend announcement content to be informative.
That is, where Oman does not impose taxes, and where theory holds that taxation is necessary for dividend announcement content to be informative, it would follow that in Oman, dividend announcement content would be expected to be informationally poor (Yahyaee, Pham, & Walter 2011). Also agency theory-based, the free cash flow hypothesis suggests that in the typical manager-shareholder dynamic, the interests and incentives of the two agents conflict (Jensen 1986); and the greater the free cash flows, defined by Jensen as "...more cash than profitable investment opportunities" (1986, p. 323), the greater the potential for conflict: the greater the free cash flow, the greater the insulation of managers from external monitoring and scrutiny. As a result, the free cash flow hypothesis is extended and used to propose that efforts are made to reduce costs, or, to "...motivate managers to disgorge the cash rather than investing it at below the cost of capital or wasting it on organization inefficiencies" (p. 325). From a signalling theory perspective, this is where dividend announcements come in: Bhattacharya asserts that dividends themselves "...function as a signal of expected cash flows of firms in an imperfect-information setting" (1979, p. 259). Thus, the dispensing of dividends serves to reduce the amount of a company's free cash flow (Strom 2011). Given 1) the uniqueness of the high-leverage, tax-free environment of Oman and the high rate of dependency of Omani companies on banking, in addition to 2) the "concentrated ownership structure" of Omani companies, whereby the firms are owned by a limited number of investors with the controlling interests (Al-Yahyaee, Pham, & Walter 2011; Al-Yahyaee 2006), and given 3) extant agency theory and dividend signalling theory and their related hypotheses (as outlined above), it would typically be predicted that a) "...dividend payments may not be necessary to reduce the tendency of managers to overinvest free cash flow...[which] reduce[s] the announcement effects of dividends on stock prices..." (Al-Yahyaee, Pham, & Walter 2011, p. 607); that b) "...concentrated ownership structure should reduce the agency cost between managers and shareholders" (p. 607); and that c) dividend information content would be weak in Oman. However, this study intends to investigate the extent to which these theories, especially dividend signalling theory as it points to dividend announcement content, hold true. This study offers an investigation into how dividends in the tax-free market environment of Oman impact share prices. The findings reflect both announcements of dividend increases and announcements of dividend decreases that are inconsistent with the tax-based signalling paradigm in particular, but are nevertheless aligned to some extent with dividend signalling theory in general. Many studies have been done on dividend content. Many more have been done to propose and/or evidence dividend theory. And several separate studies have been conducted on the region-specific information content of dividends as it impacts market behaviour. Yet only one study to date, by Yahyaee (2006), focusses on dividends in the unique environment that is Oman, and that study spotlights capital structure and dividend policy, exploring determinants including ownership, age, and leverage as each impacts dividend policy, and not vice versa. This dissertation attempts to cover a phenomenon that has been virtually undocumented in any detail at all.
In this respect, the researcher hopes to shed light on the practice of dividend announcements in Oman and at the same time perhaps inform dividend policy, if not influence future research into dividend policy in Oman. The following terms/abbreviations are used throughout this study:
AAR: average abnormal return
EMH: efficient market hypothesis
FCF: free cash flow (hypothesis)
GCC: Gulf Co-Operation Council
MSM: Muscat Securities Market
The remainder of this study is as follows: Chapter 2 is a review of the great body of literature that exists to propose, explain, and reinforce or negate theoretical conceptualisations of the relationship between dividend announcement content and market reactions. This chapter comprises a review of that theoretical and empirical literature in a discussion of Dividend Signalling Theory, the Dividend Effect, and the Efficient Market Hypothesis. Chapter 3 offers a discussion of the approach, statistical sampling, and descriptions of the variables used to test the theory, together with the development of hypotheses relevant to the impact of the information content of dividends on market reactions in Oman and on the Muscat Securities Market (MSM). Chapter 4 discusses the findings based on the SPSS model analysis and the five developed hypotheses. And Chapter 5 summarises the findings as they align with theory.
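The average abnormal return (AAR) defined above is conventionally built from a market-model event study: estimate a stock's normal relation to the market over a pre-event window, then measure deviations around the announcement and average them across events. The sketch below shows the recipe for a single simulated announcement; it is a generic illustration, not the dissertation's SPSS analysis, and every number in it is invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated daily returns: a 250-day estimation window followed by an
# 11-day event window around a dividend announcement.
n_est, n_evt = 250, 11
mkt = rng.normal(0.0004, 0.01, n_est + n_evt)
stock = 0.0002 + 1.1 * mkt + rng.normal(0, 0.012, n_est + n_evt)
stock[n_est + 5] += 0.03  # inject a positive surprise on the event day

# Market model fit on the estimation window: R_i = a + b * R_m + e
X = np.column_stack([np.ones(n_est), mkt[:n_est]])
(a, b), *_ = np.linalg.lstsq(X, stock[:n_est], rcond=None)

# Abnormal return in the event window = actual minus model-expected;
# averaging ARs across many announcements would give the AAR.
ar = stock[n_est:] - (a + b * mkt[n_est:])
car = ar.cumsum()  # cumulative abnormal return
print("event-day AR:", round(ar[5], 4))
print("CAR over the window:", round(car[-1], 4))
```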
https://pro-papers.com/contributing/the-information-content-of-dividend-announcements-in-oman-part-3
Regular reader Jordan mentioned he was interested in my personal opinion on what is known as EMH, or the Efficient Market Hypothesis. From a 30,000-foot perspective, EMH basically means that the market is efficient enough that it is very hard to outsmart. It doesn't preclude the notion that people can beat the market, but it attributes that outperformance to luck as opposed to skill. If you took all investors' returns and plotted them on a graph, you would have something close to a normal distribution curve. There are three forms of EMH:
Weak Form
Believers in the Weak form believe that past prices have no bearing on future prices, therefore technical analysis is useless. More precisely, any information that could be gleaned from past behaviour is fully reflected in the current price.
Semi-Strong Form
Believers in the Semi-Strong form believe that current prices reflect all past prices and all publicly available information. Therefore, not only is technical analysis of no use, neither is fundamental analysis. Any new public information made available to the market results in completely random (overall) movements in the affected stocks.
Strong Form
Believers in the Strong form believe that prices reflect all past prices, all public information, and all insider information. The market is perfectly priced.
My Take
Well, you have to put it into context. The EMH is just a model for trying to explain how prices and markets tend to work – even the person most often credited as the father of the theory will point this out. No form is right enough in my opinion. They are all too rigid. It's like someone asking you if you are a liberal or a conservative. I'm liberal about some things, and I'm conservative about other things. Putting me solely into only one camp wouldn't describe me. Plus, EMH can fall apart at times. I think what can be taken from EMH is the overall lesson that it's quite a task to beat the market consistently, or to identify the next person who will do it.
Equilibrium Markets
I view the market as more dynamic (in a different way than volatility). There is an equilibrium that exists between market efficiency and inefficiency, and that equilibrium point moves. For example, if everyone believed strongly enough in EMH and therefore decided to index in large proportions, then the market would lose its efficiency as investors as a whole become "price-takers". They will just take whatever price is out there because they assume it to always be right. For example, if 95% of invested dollars indexed the market, then it might be quite easy to build a case for fundamental analysis and stock picking and active management, because there are few active players who ultimately dictate prices. In a room of 5 people it is easier to identify the alpha dog. But if 95% of invested dollars were actively invested, the market starts looking pretty efficient, as there are millions of educated analysts and investors looking under all the same rocks you are. Now there are a billion people in the room. I don't know if we'll ever get to the point where most of the invested dollars are indexed, however. Beyond the efficiency-inefficiency spectrum, black swan events (1987 and 2008, for example) exist which in hindsight seem clearly irrational. In the end, I choose to believe the markets are efficient enough most of the time, with periods of irrationality, and just leave it at that. I find EMH to be a good way of thinking about the markets in an academic way, but we invest in the real world.
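The "outperformance attributed to luck" point is easy to demonstrate: give a large crowd of hypothetical managers zero skill and a few of them will still post star-quality track records, with the crowd's results forming the near-normal curve described above. All numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# 10,000 hypothetical managers with zero true skill: each year's return
# relative to the market is pure noise (mean 0, standard deviation 5%).
n_mgrs, n_years = 10_000, 10
excess = rng.normal(0.0, 0.05, size=(n_mgrs, n_years))
avg_outperf = excess.mean(axis=1)

# Even with no skill anywhere, some managers look brilliant.
beat_every_year = int((excess > 0).all(axis=1).sum())
print(f"managers beating the market all {n_years} years: {beat_every_year}")
print(f"best 'track record': {avg_outperf.max():+.2%} per year")
```

With these settings, roughly ten managers beat the market every single year by chance alone, which is exactly why a hot streak is weak evidence of skill.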
https://bondsareforlosers.com/efficient-market-hypothesis/
Many investors have been getting excited about the so-called profitability factor, originally posed by Novy-Marx. It's worth looking at it more closely, as not all is necessarily as it seems. The basic idea is simple: other things being equal, firms with high gross profits (revenue minus costs) have earned higher expected returns than firms with low gross profits. Even market heavyweights Eugene Fama and Ken French have integrated the factor into their new "5-factor model," which consists of a market factor, size factor, value factor, profitability factor and an investment factor. This research was not lost on Dimensional Fund Advisors (DFA), a quantitative asset manager that's essentially an extension of the University of Chicago Finance Department. DFA has added the concept of profitability to its process (we assume it is the profitability factor identified by Fama and French). But how robust and reliable is the so-called profitability factor, and is it possible that it might already be captured in other dimensions? A new paper titled "A Comparison of New Factor Models," by Kewei Hou, Chen Xue and Lu Zhang, shows that the profitability factor is not, in fact, a new "dimension," as has been suggested. The authors find that the profitability factor highlighted by Fama and French is captured in cleaner ways by their simpler and more robust four-factor model, which consists of a market factor, a size factor, an investment factor and a return-on-equity factor. The authors highlight that there are "four concerns with the motivation of the Fama and French model based on valuation theory," suggesting that the factors chosen by Fama and French are merely descriptive and/or data-mined, but not grounded in economic theory. Ouch. But the critique of the five-factor model isn't only on theoretical grounds. It is also based on the evidence. The Hou, Xue and Zhang four-factor model captures all the returns associated with the new factors outlined by Fama and French. This suggests that the "new" profitability factor may not be a new dimension at all, since it can be explained via exposures to the market, size, and Hou, Xue and Zhang's investment and ROE factors. Profitability is also questionable in international markets. In a working paper, "The Five-Factor Fama-French Model: International Evidence," Nusret Cakici looks at the performance of the five-factor model in 23 developed stock markets. There is only marginal evidence that the factor works globally. In some markets the factor is effective, but in other regions, such as Japan and Asia Pacific, the factor simply doesn't explain returns. Our own internal research on the matter is consistent with this result. A lack of unified results often hints at a lack of robustness and/or data mining. Only time will tell if the out-of-sample performance of the so-called profitability factor will hold. There are certainly many smart academics and investment houses leveraging the factor as a way to capture higher returns, so we can't rule anything out. However, our advice is to tread lightly in the factor jungle, being sure to always carry a heavy machete to chop away at noisy data and the overfitting problems that accompany it. The baseline theory for understanding asset prices is the efficient market hypothesis (EMH) pioneered by Eugene Fama. Of particular interest is semi-strong market efficiency, which claims that market prices reflect all publicly available information about securities.
As the story goes, when mispricings occur in markets, these arbitrage opportunities are immediately eliminated by professional investors, who exploit them for a profit. And because of this competitive mechanism, in the EMH view, prices should always reflect fundamental value. The EMH is a great theory, and there is significant evidence to suggest it holds in many cases. However, there is a complementary framework, behavioral finance, that helps solve many of the puzzles in the stock market. Behavioral finance is often considered a "new" thing, but the concepts have been around for a long time. Keynes' quip highlights two elements of real-world markets that the efficient market hypothesis doesn't consider: investors can be irrational, and arbitrage is risky. In academic parlance, "investors can be irrational" boils down to an understanding of psychology, and "arbitrage is risky" boils down to what academics call limits to arbitrage, or market frictions. These two elements, psychology and market frictions, are the building blocks of behavioral finance. First, a discussion of limits to arbitrage. EMH predicts that prices reflect fundamental value. Why? People are greedy, and any mispricings are immediately corrected by arbitragers. But in the real world, true arbitrage (profits earned with zero risk after all possible costs) rarely, if ever, exists. Most arbitrage-like trades involve some form of cost or risk: fundamental or basis risk, transaction costs, or noise-trader risk. Suppose oranges in Florida cost $1 per orange, oranges in California cost $2 per orange, and the fundamental value of an orange is $1 (an assumption for the example). EMH suggests arbitragers will buy oranges in Florida and sell oranges in California until California oranges drop to $1: prices quickly correct and there is no free lunch. But what if it costs $1 to ship oranges from Florida to California? Prices are decidedly not correct, since the fundamental value of an orange is $1, but there is also no free lunch. Next, a discussion of psychology is in order. The literature from psychology makes it fairly clear that humans are not 100 percent rational all the time. Daniel Kahneman tells a story of two modes of thinking: system 1 and system 2. System 1 is an efficient, heuristics-based decision-making component of the human brain. System 2 is the analytic and calculated portion of the brain, close to 100 percent rational. Unfortunately, the efficiency of system 1 comes with drawbacks: we can make instinctive decisions that are irrational. As stand-alone topics, limits to arbitrage and psychology are interesting, but each on its own has limited potential to affect prices. However, crafting a hypothesis that involves elements of silly investors and market frictions simultaneously is a potent combination. For example, consider the concept of noise traders. J. Bradford De Long, Andrei Shleifer, Larry Summers and Robert J. Waldmann wrote an article called "Noise Trader Risk in Financial Markets" in the Journal of Political Economy in 1990. Combining biased investors with an understanding of market frictions and incentives can create powerful investment concepts. This combination also describes what behavioral finance is all about: understanding how behavioral bias, in conjunction with market frictions, creates interesting impacts on market prices. Wesley R. Gray, Ph.D., is the chief investment officer for Alpha Architect (AlphaArchitect.com), a systematic asset manager based near Philadelphia, Pennsylvania.
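To make the orange example above concrete, here is the arithmetic as a tiny script (the prices and shipping cost are the article's own illustrative numbers):

```python
# The article's orange example: a price gap that looks like arbitrage
# but is fully consumed by frictions.
price_florida = 1.00      # $ per orange; also its assumed fundamental value
price_california = 2.00   # $ per orange
shipping_cost = 1.00      # $ per orange, Florida -> California

gross_spread = price_california - price_florida
net_profit = gross_spread - shipping_cost
print(f"gross spread: ${gross_spread:.2f}, net of shipping: ${net_profit:.2f}")
# Net profit is $0.00: prices deviate from fundamental value, yet there is
# no free lunch, so the mispricing can persist (a limit to arbitrage).
```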
https://www.etf.com/sections/features-and-news/lighter-side-profitability-factor?nopaging=1
The Efficient Market Hypothesis (EMH) is an application of 'Rational Expectations Theory' in which people who enter the market use all available and relevant information to make decisions. The only caveat is that information is costly and difficult to get. The Efficient Market Hypothesis implies that stock prices reflect all available and relevant information, so you can't outguess the market or systematically beat it. This means it is impossible for investors to either purchase undervalued stocks or sell stocks at inflated prices. There are three forms of market efficiency. In the weak form, future prices cannot be predicted by analyzing prices from the past, meaning there are no meaningful patterns to be gained from past performance; future price movements are determined entirely by information not contained in the price series. In the semi-strong form, share prices adjust to publicly available new information very rapidly and in an unbiased fashion, such that no excess returns can be earned by trading on that information. In the strong form, share prices reflect all information, public and private, and no one can earn excess returns. Several observed anomalies sit uneasily with the hypothesis. Smaller firms tend to pay a higher risk-adjusted return than they should (the small-firm effect). Stocks rally from December into January (the January effect), with no corresponding change in investor behavior to justify it. The market tends to react to good or bad news more severely than it should. Over a long period, a stock that was doing badly will eventually do better over a comparable period (mean reversion). And new information may not be represented adequately in prices, or may take time to be incorporated.
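A simple way to investigate an anomaly like the January effect is to compare average returns by calendar month. The sketch below runs on simulated data with a deliberately injected January bump, so the script has something to find; a real test would use actual index returns and check whether January's mean is statistically distinguishable from the other months':

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated monthly returns for 50 years, with a small artificial
# premium added to January (purely illustrative data).
years = 50
returns = rng.normal(0.007, 0.04, size=(years, 12))
returns[:, 0] += 0.01  # the injected "January effect"

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
for name, mu in zip(months, returns.mean(axis=0)):
    print(f"{name}: {mu:+.3%}")
```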
https://www.intelligenteconomist.com/efficient-market-hypothesis/
Last month, after 19 years in operation and running dangerously low on fuel, the Cassini spacecraft executed its final assignment: a death plunge deep into Saturn's atmosphere, where it was crushed and vaporized. At the time of its launch, Cassini's mission was unprecedented in its ambitions but also in its risks, among them a treacherous pass through the asteroid belt. Yet the expertly designed probe proved a model of reliability over its nearly two decades of service, allowing scientists to extend its mission a total of three times. In some ways, investors too began a journey into the unknown after the great financial crisis in 2007-2008. And if only we were as fortunate as NASA engineers in being able to apply a rigorous set of scientific principles to timing the economic cycle, we would all be very wealthy indeed. But, alas, the journey of each economic expansion has unique characteristics and unpredictable timetables that are resistant to precise forecasting. The minds at the Federal Reserve are perhaps the world's best. However, not once since it was founded in 1913 has the Fed been able to accurately forecast the next recession a year before it materialized. In fact, Fed officials are notorious for being behind the curve. As investment managers, this is all the evidence we need that the better approach is to assess risks and, when they are high and rising, take appropriate steps to mitigate some of those risks where we can. Despite eight uninterrupted years of growth, and after a 346% rise in total equity returns to new record highs, our analysis indicates that the economic expansion is not yet in peril of ending from natural causes. However, it certainly can end from unnatural causes, like self-inflicted geopolitical problems or unanticipated hikes in interest rates by the Fed. As the nearby chart shows, a lower 2.1% average growth rate over this expansion has meant that economic slack has been eroded much more slowly than in previous business cycles. In fact, the current output gap is not projected to close until sometime in 2018. This suggests that the economy is in no danger of overheating and that modest growth can continue for some time before excesses build that could lead to the next recessionary downturn. This is also a reason why equities are rising strongly this year. Investors are optimistic because they expect that continuing economic growth will support improving corporate profits over the next several quarters. We agree. Measuring risks is a process and is statistically supported through rigorous analysis. In thinking about what stage of the current economic expansion and equity bull market we are in, we like to rely on forward-looking indicators and financial market data that have historically been useful gauges. Our asset allocation committee continually assesses when risks have grown too high to sustain full exposure to an asset class. Below are some of the indicators we monitor and where each stands.
- Leading Economic Index. While the lead time between when the Conference Board's Leading Economic Indicators turn negative and when a recession begins has historically been highly variable, the index has done a good job of signaling when an expansion is in jeopardy of ending. Recently, the LEI has been improving, indicating that modest growth should continue and that near-term recession risk is low.
- Equity Valuations. Valuations of U.S. equities are now at levels that historically have been categorized as high. However, all this really means is that risks related to valuation are elevated, not that the bull market will end soon. Analysis reveals there is not much correlation historically between valuation levels and the equity market's return over the next 12 months. In fact, valuations can stay at high levels for years without stocks correcting.
- Margin Debt. The higher the level of margin debt, the more investors are confident enough to borrow money to buy stocks and bonds. While confidence is high today, we are watching to see if margin debt goes to levels of excessive speculation. So far, it has not.
- High-Yield Debt Interest Rate Spreads. The excess return from investing in lower-quality bonds versus Treasuries totals 5.35% YTD. The present level of HY credit spreads is near a three-year low. However, this relatively low level can persist for years, as was the case from 1994 through 1998 and more recently from early 2004 through mid-2007.
- Leveraged Loans. In the senior secured loan market, the amount of lending currently being made to medium- to low-credit-quality companies without covenants is 75% of new issuance (up from 24% in 2012). This means that borrowers are being granted relatively inexpensive financing without the standard covenant packages, and therefore lenders have weaker controls over borrower operations.
While we are confident this economic expansion will not last the almost two decades that Cassini did, if it continues for 22 months more, it will be the longest in U.S. history. Given the highly uncertain geopolitical environment, we might have to navigate an asteroid belt of our own over the next year to get there, but our best estimate is that modest economic growth will continue through 2018. Still, rather than try to predict what is not predictable, we believe assessing risks and investing in high-quality companies with improving future earnings prospects, adaptive management teams, and strong balance sheets remains the best way of sustaining your investment portfolio and achieving long-term investment goals.
Important Disclosures
The information presented does not involve the rendering of personalized investment, financial, legal, or tax advice. This presentation is not an offer to buy or sell, or a solicitation of any offer to buy or sell, any of the securities mentioned herein. Certain statements contained herein may constitute projections, forecasts, and other forward-looking statements, which do not reflect actual results and are based primarily upon a hypothetical set of assumptions applied to certain historical financial information. Certain information has been provided by third-party sources, and, although believed to be reliable, it has not been independently verified, and its accuracy or completeness cannot be guaranteed. Any opinions, projections, forecasts, and forward-looking statements presented herein are valid as of the date of this document and are subject to change. There are inherent risks with equity investing. These risks include, but are not limited to, stock market, manager, or investment style. Stock markets tend to move in cycles, with periods of rising prices and periods of falling prices. Investing in international markets carries risks such as currency fluctuation, regulatory risks, and economic and political instability. Emerging markets involve heightened risks related to the same factors, as well as increased volatility, lower trading volume, and less liquidity.
Emerging markets can have greater custodial and operational risks and less developed legal and accounting systems than developed markets. Concentrating assets in the real estate sector or REITs may disproportionately subject a portfolio to the risks of that industry, including the loss of value because of adverse developments affecting the real estate industry and real property values. Investments in REITs may be subject to increased price volatility and liquidity risk; concentration risk is high. Investments in Master Limited Partnerships (MLP) are susceptible to concentration risk, illiquidity, exposure to potential volatility, tax reporting complexity, fiscal policy, and market risk. Investors in MLPs are subject to increased tax reporting requirements. MLP investors typically receive a complicated schedule K-1 form rather than Form 1099. MLPs may not be appropriate investments for tax-advantaged accounts because of potential negative tax consequences (Unrelated Business Income Tax). There are inherent risks with fixed-income investing. These risks may include interest rate, call, credit, market, inflation, government policy, liquidity, or junk bond. When interest rates rise, bond prices fall. This risk is heightened with investments in longer-duration fixed-income securities and during periods when prevailing interest rates are low or negative. The yields and market values of municipal securities may be more affected by changes in tax rates and policies than similar income-bearing taxable securities. Certain investors’ incomes may be subject to the Federal Alternative Minimum Tax (AMT), and taxable gains are also possible. Investments in below-investment-grade debt securities, which are usually called “high yield” or “junk bonds,” are typically in weaker financial health and such securities can be harder to value and sell, and their prices can be more volatile than more highly rated securities. While these securities generally have higher rates of interest, they also involve greater risk of default than do securities of a higher-quality rating. Investments in emerging market bonds may be substantially more volatile, and substantially less liquid, than the bonds of governments, government agencies, and government-owned corporations located in more developed foreign markets. Emerging market bonds can have greater custodial and operational risks and less developed legal and accounting systems than developed markets. As with any investment strategy, there is no guarantee that investment objectives will be met, and investors may lose money. Returns include the reinvestment of interest and dividends. Investing involves risk, including the loss of principal. Diversification may not protect against market loss or risk. Past performance is no guarantee of future performance. Index Definitions The Conference Board Leading Economic Index is an American economic leading indicator intended to forecast future economic activity. It is calculated by The Conference Board, a nongovernmental organization, which determines the value of the index from the values of ten key variables. The Goldman Sachs Financial Conditions Index (GSFCI) is a weighted sum of a short-term bond yield, a long-term corporate yield, the exchange rate, and a stock market variable. The Standard & Poor’s (S&P) 500 Index represents 500 large U.S. companies. The comparative market index is not directly investable and is not adjusted to reflect expenses that the SEC requires to be reflected in the fund’s performance. 
Indices are unmanaged, and one cannot invest directly in an index. Index returns do not reflect a deduction for fees or expenses.
https://newsroom.cnb.com/quarterly-update-october-2017
After undergoing a "healthy correction" in December, we believe U.S. equity markets are undergoing a multi-month corrective process driven by trade tensions, the path of Fed hikes, and the outlook for earnings. We have taken a conservative stance toward earnings growth in 2019 and have held a below-consensus, base case forecast of 5%. This earnings season is perhaps the most important one in years. We are watching two things in particular, as described below.
- Any evidence of fundamental deterioration. While economic activity is slowing both domestically and internationally, it is still growing. Revenue growth generally produces positive operating leverage, which should also be helped by lower input prices as many commodity prices have declined. Rising interest rates are also a modest positive, as the balance sheet of the S&P 500 has more cash than debt. Increased labor costs are a potential headwind to margins. Corporate cash flows should remain solid. Net, we believe reported results will be choppy, but overall fine.
- The tone and specifics of forward guidance from companies. The positive stimulus of the tax cuts is beginning to fade, and concerns over the implications of trade tensions, Brexit, and Fed policy are impacting business decisions, as is evident in declining PMIs and declining global trade. Given this backdrop, managements are likely to be cautiously optimistic and conservative in their commentary about the future. As a result, the bottom-up earnings forecasts for many companies, which have been declining rapidly, could be reduced further, and our current base case assumption may have to be trimmed.
We have specific expectations for each of our holdings as they relate to revenues, earnings, and guidance. Should there be meaningful changes to our assumptions or our investment thesis, we will take appropriate action and maintain a solid pipeline of investment ideas. We believe our focus on high-quality companies with solid earnings visibility will serve us well as the Battle Royale continues.
https://www.cnr.com/content/cnrcom/en/insights/quarterly-updates/quarterly-update-jan-2019-2.html
This week I talked with Groupe SEB Turkey's CFO Mustafa Kilic about geopolitical risks in emerging markets and working capital/liquidity challenges (with a special focus on his approach to hybrid cash pooling solutions). As the CFO of Groupe SEB Turkey, Kilic is responsible for accounting, controlling, planning, risk, treasury, legal and management information systems operations at the executive level. He joined Groupe SEB early in 2016 after 20 years of experience in finance, treasury and risk management at Siemens, Nestle, Vodafone, Indesit, and the Candy Hoover Group. A frequent speaker and powerful advocate of better finance and risk management, he has been recognized with several awards, including being named one of the "100 Most Influential People in Finance" by Treasury & Risk Magazine in 2011 and receiving the highly commended Adam Smith Award on Global Liquidity Management from Treasury Today Magazine in 2010. Arturo Pallardó: Cash management is very present in your day-to-day job. What role does it play in the Corporate Financial Value Chain? Mustafa Kilic: As a CFO and former treasurer, one of my main responsibilities is to translate the components of the business cash cycle into solutions that result in optimized cash flow, cost savings and investment options. That's why, in the Corporate Financial Value Chain concept, we usually break down our current financial processes into four main components: purchasing and sales; cash and risk management; financing and investment; and payments and collections. In practice, a number of processes are involved in resolving a specific issue, and the most significant improvement opportunities are identified when reviewing the whole chain. For instance, from a cash and risk management standpoint, it's quite critical to identify under-the-radar risk factors and keep risk-related KRIs and KPIs updated. Especially after the 2008 crisis, many multinationals updated their risk policies and metrics; a tough job that took a considerable amount of time to implement, but it pays off. Having clear structures, sophisticated techniques, efficient reporting and extensive systems integration are the cornerstones of this kind of process. As financial professionals, our goal is to generate value by optimising net positions, working capital and, consequently, cash flows. Best practice often dictates a distinct – and frequently centralised – process. So again, as treasurers and cash managers, our aim is to optimise processes dealing with net positions and cash flows. And what about the other processes you mentioned besides cash and risk management? Regarding purchasing and sales, much of the associated administration is integrated into the cash management concept. Streamlined administration has a direct effect on both cash flows and working capital. Just by examining how invoices are processed, my previous experience demonstrated administrative savings of 60% or more, compared with mainly manual invoicing processes, through a direct collection and payment system. And regarding cash management, a hybrid cash pooling solution would be an alternative for many multinationals to reduce the number of transactions and currency exchanges involved. What are the main risks we face in the current macroeconomic environment? I have a growing concern over emerging economies. Devaluation of many emerging currencies has resulted in sharp fluctuations in global markets.
Also, the intensifying concerns about the slowdown in the Chinese economy could affect global growth adversely and cause global risk sentiment to deteriorate. So, from the regional standpoint, I'm more concerned about currency risk, economic growth risk and credit risk, while globally I'm more worried about the failure of national governance (e.g. corruption, illicit trade, organized crime, impunity, political deadlock), interstate conflict with regional consequences, and state collapse or crisis (e.g. civil conflict, military coup, failed states). Absolutely, and mainly because geopolitical risks can have cascading impacts on other risks. As state structures are challenged by conflict, the risk of the failure of national governance and of state collapse or crisis can increase in areas where current state boundaries do not necessarily reflect popular self-identification. Failure of national governance features strongly this year as one of the most likely risks across the global risks landscape. This risk area captures a number of important elements around the inability to govern efficiently as a result of corruption, illicit trade, organized crime, the presence of impunity and generally weak rule of law. Are companies taking enough action to protect themselves against such political risks? Firms have increasingly focused their attention on financial, market and operational types of risk, especially since the economic crisis of 2008. However, recent large-scale risk management studies have found that most companies neither measure nor manage political risk. They tend either to accept (or ignore) these risks or to avoid situations that seemingly pose large political risks altogether, even when those risks are accompanied by significant opportunity. Unfortunately, many companies miss the fact that effective management of political risk can enable them to enter and navigate new markets and business environments, providing a potential competitive advantage. Underestimating such risk is a serious mistake, especially since these political risks are taking new and different forms. It is common that the instruments used by many organizations are simply too blunt for the changing, complex political environment in which they operate. Political risk may have different characteristics than other types of risk, but it can – and should – be managed. And how should companies effectively manage such risks? They can integrate political risks into existing enterprise risk management (ERM) systems, yielding potential benefits such as lower risk management costs (through more rational hedging and insurance purchasing); new revenue streams from markets that would be too risky to enter without risk management support; better performance of existing businesses in emerging markets; and loss mitigation through improved business continuity planning and crisis management. And what happens once risks have been identified and measured? Then they should put in place an effective system for active political risk management. The first element would be mapping potential risk management methods against the priority risks. Once the organization establishes a course of action, the risk management team can assign responsibilities and establish a schedule for consultation, reporting and review, as with other risk controls. Companies actually have multiple options for addressing identified risks.
A company operating in a country where there are signs of corruption in trade practices, for example, may seek to review its overall code of conduct and step up local training activities to ensure that all rules are thoroughly understood.

And what would you say are the main benefits of this?

Effective management of political risk can enable companies to tap new revenue streams through access to markets and joint ventures that, without careful management, might seem too risky. Besides, clear identification, measurement and management of risk can facilitate organizational buy-in for growth strategies that target emerging markets and “frontier” markets, while improving the performance of existing businesses.

You used hybrid hedging and cash pooling approaches that gave big advantages against the backdrop of the economy and fluctuations in currency flows. Why did you opt for this hybrid technique?

One of my main priorities has been unlocking liquidity, which can become a pretty challenging task in some markets. In complex environments, cross-border movements and the ability to hold offshore and foreign currency (FCY) accounts are highly regulated, and local regulations often restrict inter-company movements or require central bank approval and reporting. That’s why treasurers are continuously seeking a better platform from which to strategically manage the entire cash management process to increase the intrinsic value of the business; and it is not feasible to have one approach for all countries when it comes to cash and treasury management, owing to differing jurisdictions. The common approach taken by many multinational companies has been so-called cash pooling solutions (liquidity management techniques whereby funds are physically concentrated or notionally consolidated into a single cash position), especially ‘notional pooling’ and ‘cash concentration.’ However, some entities may not be permitted to participate in cash concentration or a notional cash pooling agreement. And also, cash concentration and notional cash pooling often give rise to complex legal, regulatory, accounting and tax considerations. That’s why I opted for a hybrid cash pooling technique. “Hybrid cash pooling” is a system that uses two or more distinct cash balancing engines to maintain the credit and debit positions of various accounts on the books of the service provider bank. In short, the solution means that cash in excess of working capital at the local level no longer remains “trapped” or subject to local investment vehicles. Furthermore, we can access short-term funding in the currency of our choice without the traditional reliance on inter-company loan structures. Of course, this strategy has to meet a few criteria to work. It needs freely convertible currencies; legal entities that are allowed to open local and foreign currency bank accounts outside their country; and a financial institution to provide the service.

So, what are its main benefits?

There are multiple ones: It creates efficient ways to consolidate liquidity with better geographic coverage. It eliminates local borrowing needs while reducing corporate guarantees for local credit lines, bringing economies of scale in financing conditions with incremental income statement improvements. Additionally, it enhances control of cash balances and increases their visibility.
Moreover, Treasury can overdraw in the currency of its choice and maintain local bank accounts to serve local needs; eliminate intercompany loans while creating a new funding vehicle; reduce or eliminate both the FX and swap transactions used to offset the debit positions of the accounts and the number of operational transactions and their cost; and eliminate the need for physical movements of funds across entities or accounts at the header account level, as well as any change in the ownership of cash. And all this with no additional hardware or software on the user side.

Are there any drawbacks?

The main one would be that hybrid cash pooling is prohibited or may be subject to restrictions in some countries. Also, some entities may not be allowed to participate in hybrid cash pooling agreements. It may also require the occasional physical movement of funds across accounts, as well as a parent company guarantee.
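Kilic contrasts his hybrid approach with classic notional pooling and cash concentration. To make the offsetting arithmetic concrete, here is a minimal sketch of how notional pooling nets debit and credit balances into a single interest-bearing position. Every entity name, balance, FX rate and interest rate below is a hypothetical assumption, and a real hybrid solution layers multiple balancing engines plus the legal, regulatory and tax constraints he mentions on top of this logic.

```python
# A minimal, illustrative sketch of the interest benefit of notional pooling.
# All rates, FX quotes, balances, and entity names are hypothetical.

EUR_RATES = {"credit": 0.005, "debit": 0.035}       # earned on surpluses / paid on overdrafts
FX_TO_EUR = {"EUR": 1.0, "TRY": 0.30, "PLN": 0.23}  # spot rates (illustrative only)

accounts = [
    ("OpCo Turkey",  "TRY",  9_000_000),   # surplus
    ("OpCo Poland",  "PLN", -4_000_000),   # overdraft
    ("OpCo Germany", "EUR", -1_200_000),   # overdraft
]

def to_eur(ccy, amount):
    return amount * FX_TO_EUR[ccy]

def annual_interest(balance_eur, rates):
    # Surpluses earn the credit rate; overdrafts pay the debit rate.
    rate = rates["credit"] if balance_eur >= 0 else rates["debit"]
    return balance_eur * rate

# 1) Standalone: each entity deposits or borrows locally on its own.
standalone = sum(annual_interest(to_eur(c, b), EUR_RATES) for _, c, b in accounts)

# 2) Pooled: debit and credit positions offset notionally; only the
#    net position earns or pays interest at the header-account level.
net = sum(to_eur(c, b) for _, c, b in accounts)
pooled = annual_interest(net, EUR_RATES)

print(f"Net pooled position: EUR {net:,.0f}")
print(f"Annual interest, standalone: EUR {standalone:,.0f}")
print(f"Annual interest, pooled:     EUR {pooled:,.0f}")
print(f"Benefit of pooling:          EUR {pooled - standalone:,.0f}")
```

Run with these toy numbers, the pool earns credit interest on a net EUR 0.58 million surplus instead of each overdraft being charged separately, which is the "reduce or eliminate FX and swap transactions to offset debit positions" benefit in miniature.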
https://www.getrevue.co/profile/cfobrain/issues/most-companies-neither-measure-nor-manage-political-risk-they-just-tend-to-accept-or-ignore-it-67336
Operational risk represents the potential economic, reputational or regulatory impact of inadequate or failed internal processes, people and systems, or from external events. Operational risks include legal and compliance risks and the risk of a material misstatement in Swiss Re’s financial reporting. Operational risk is inherent within Swiss Re’s business processes. As the company does not receive an explicit financial return for such risks, the approach to managing operational risk differs from the approach applied to other risk classes. The purpose of operational risk management is not to eliminate operational risks but rather to identify and assess them, in order to cost-effectively manage risks that exceed Swiss Re’s tolerance for operational losses. The Group’s framework for mitigating operational risk is based on its three lines of defence, assigning primary responsibility for identifying and managing risks to individual risk takers (first line of defence), with independent oversight and control by the second line of defence (Risk Management and Compliance) and the third line of defence (Group Internal Audit). This approach is designed to achieve a strong, coherent and Group-wide operational risk culture built on the overriding principles of ownership and accountability. Management is responsible for assessing operational risks based on a centrally coordinated methodology. Members of Swiss Re’s Group Executive Committee are required to assess and certify the effectiveness of the internal control system for their respective function or unit on a quarterly basis. All operational losses and incidents are reported and tracked in a central system to ensure that they are resolved, as well as to avoid the recurrence of the same or similar events.

Strategic risk

Strategic risk represents the possibility that poor strategic decision-making, execution, or response to industry changes or competitor actions could harm Swiss Re’s competitive position and thus its franchise value. Overall responsibility for managing strategic risk lies with Swiss Re’s Board of Directors, which establishes the Group’s overall strategy. The Boards of Directors of the holding companies of the respective Business Units are responsible for the strategic risk inherent in their specific strategy development and execution. Strategic risks are addressed by examining multi-year scenarios, considering the related risks, as well as monitoring the implementation of the chosen strategy year by year in terms of the annual business plan. As part of their independent oversight role, Risk Management, Compliance and Group Internal Audit are responsible for controlling the risk-taking arising from the implementation of the strategy.

Regulatory risk

Regulatory risk represents the potential impact of changes in the regulatory and supervisory regimes of the jurisdictions in which Swiss Re operates. Swiss Re is strongly engaged in the regulatory debate, striving to mitigate potentially negative impacts while supporting reforms that could enhance the overall health of the sector, facilitate convergence of regulatory standards or generate business opportunities. Regulatory developments and related risks that may affect Swiss Re and its Business Units are monitored as part of regular oversight activities and reported to the executive management and Board of Directors at Group, Business Unit and legal entity level in regular risk reports. In 2015, the global regulatory agenda continued to accelerate.
Governments and regulators rolled out new policies and conducted numerous consultations and field tests on regulations with direct impact on the insurance sector. Many reform proposals reflect the financial supervision agenda set by the G-20, which includes a focus on internationally active insurance groups (IAIGs) and global systemically important insurers (G-SIIs). Furthermore, regulators are increasing their work on compliance and market conduct issues. Swiss Re is actively engaged in dialogue on these initiatives and supports regulatory convergence as well as increased application of economic and risk-based principles. At the same time, we share the broad concerns of the insurance industry around the cumulative and cross-sectoral impacts of the reforms. Some proposed regulations are more appropriate for the banking industry and do not adequately take into account the nature and benefits of insurance and reinsurance. Regulatory fragmentation is another key concern — particularly in Europe, with the challenges in introducing Solvency II, but also in the context of cross-border business and protectionist measures introduced in several growth and mature markets. After more than ten years of development, Solvency II became effective across the European Economic Area on 1 January 2016. Swiss Re has been actively engaged in the implementation process, particularly in supporting the equivalence of the Swiss insurance supervisory system. The European Commission has recognised the Swiss system, including the SST, as fully equivalent. Switzerland and Bermuda are currently the only jurisdictions worldwide that have obtained this status. As a next step, industry-wide public disclosure of companies’ solvency and financial condition will become mandatory in 2017 for both Solvency II and the SST. Furthermore, in China the main rules of the new China Risk Oriented Solvency System (C-ROSS) were published in February 2015. With this, China takes an important step towards an economic, risk-based system similar to the SST and Solvency II. Under the guidance of the Financial Stability Board, the International Association of Insurance Supervisors (IAIS) continues its work of refining the designation methodology for G-SIIs, and is elaborating corresponding policy measures, especially in the areas of international capital standards, including higher loss absorbency (HLA), recovery and resolution planning and enhanced group-wide supervision. The IAIS decided to adjust its delivery process for ComFrame, the common framework for the supervision of IAIGs. ComFrame includes a global insurance capital standard (ICS), which will be adopted in 2019 — one year later than originally planned — leading to its implementation in 2020. Until then, the IAIS may substantially revise the ICS. Many countries impose restrictions on the transaction of reinsurance business. The Global Reinsurance Forum, which Swiss Re is currently chairing, actively promotes the advantages of open and competitive markets, in particular the greater choice of reinsurers, products and prices, as well as the benefits of diversification through the spreading of risk and increased financial stability.

Political risk

Political risk comprises the consequences of political events or actions that could have an adverse impact on Swiss Re’s business or operations. Political developments can threaten Swiss Re’s operating model but also open up opportunities for developing the business.
The Group adopts a holistic view of political risk and analyses developments in individual markets and jurisdictions, as well as cross-border issues such as war, terrorism, energy-related issues and international trade controls. A dedicated political risk team identifies, assesses and monitors political developments worldwide. Swiss Re’s political risk experts also exercise oversight and control functions for related risks, such as political risk insurance business; this includes monitoring political risk exposures, providing recommendations on particular transaction referrals, and risk reporting. They also provide specific country ratings that cover political, economic and security-related country risks; these ratings complement sovereign credit ratings and are used to support underwriting as well as other decision-making processes throughout the Group. In 2015, key issues addressed by dedicated task forces included the potential impact on Swiss Re of the ongoing Eurozone crisis, the UK referendum on EU membership and the conflict between Russia and Ukraine. Swiss Re seeks to raise awareness of political risk within the insurance industry and the broader public, and actively engages in dialogue with clients, media and other stakeholders. We also build relationships that expand our access to information and intelligence, and allow us to further enhance our methodologies and standards. For example, we participate in specialist events hosted by institutions such as the International Institute for Strategic Studies, the International Studies Association and the Risk Management Association, and maintain relationships with political risk specialists in other industries, think tanks and universities, as well as with governmental and non-governmental organisations.

Sustainability risk

Sustainability risk comprises current and emerging environmental, social and ethical risks that may arise from individual business transactions or the way Swiss Re conducts its operations and manages operational failures. Swiss Re’s continued business success depends on the successful management of such risks, thus helping to maintain the trust of its stakeholders. The Group has a long-standing commitment to sustainable business practices, active corporate citizenship and good, transparent governance. All employees are required to commit to and comply with Swiss Re’s values and sustainability policies. Potential sustainability risks are mitigated through clear corporate values and active dialogue and engagement with affected external stakeholders, as well as robust internal controls. These include a Group-wide Sustainability Risk Framework to identify and address sustainability risks across Swiss Re’s business activities. The framework comprises sustainability-related policies – with pre-defined exclusions, underwriting criteria and quality standards – as well as a central due diligence process for related transactional risks. Sustainability risks are monitored and managed by dedicated experts in Swiss Re’s Group Sustainability Risk team, which is also responsible for maintaining the Sustainability Risk Framework. In addition, this unit supports Swiss Re’s risk management and business strategy through tailored risk assessments and risk portfolio reviews, fosters risk awareness through internal training, and facilitates development of innovative solutions to address sustainability issues. Finally, it represents and advocates Swiss Re’s position on selected sustainability risk topics to external stakeholders.
Swiss Re is a founding signatory to the UN Principles for Sustainable Insurance (UN PSI) and is currently a board member of this initiative. The UN PSI provide a global framework for managing environmental, social and governance challenges. Swiss Re has been actively contributing to the initiative for several years, co-chaired it from 2013 to 2015 and publicly reports progress against the principles in its annual Corporate Responsibility Report; the 2015 report is expected to be published in May 2016. In 2015, Swiss Re was again recognised as “insurance industry sector leader” in the Dow Jones Sustainability Indices. This is the ninth time since 2004 that Swiss Re has led the insurance sector in these rankings. The award highlights Swiss Re’s long-term commitment to sustainable business and our efforts to further embed sustainability into key business processes and operations. For more information on our sustainability practices, see also the Corporate Responsibility section.

Emerging risks

Anticipating possible developments in Swiss Re’s risk landscape is an important element of our integrated approach to Enterprise Risk Management. We encourage pre-emptive thinking on risk in all areas of our business, combining our broad claims experience and risk expertise with a structured horizon-scanning process. The key objectives are to reduce uncertainty and help diminish the volatility of the Group’s results, while also identifying new business opportunities and raising awareness of emerging risks, both within the Group and across the industry. The Group’s risk identification processes are supported by a systematic framework that identifies and assesses emerging risks and opportunities across all risk categories, including potential surprise factors that could affect known loss potentials. This internal SONAR system gives Swiss Re employees a forum to raise ideas on emerging risks and report early signals using an interactive platform. This information is complemented with insights from collaboration with think tanks, academic networks and international organisations and institutions. Findings are shared with senior management and other internal stakeholders, providing them with a prioritised overview of newly identified emerging risks and an estimate of their potential impact on Swiss Re’s business. We also publish an annual emerging risk report to share findings, raise awareness and initiate a risk dialogue with key external stakeholders. To further advance risk awareness across the industry and beyond, Swiss Re continues to participate actively in strategic risk initiatives such as the International Risk Governance Council and the CRO Forum’s Emerging Risk Initiative. Over the past year, we contributed to several publications on emerging risk topics, including the International Risk Governance Council guidelines for emerging risk governance and a CRO Forum position paper, “The Smart Factory — Risk Management Perspectives”.
https://reports.swissre.com/2015/financial-report/risk-management/risk-assessment/other-risks.html
Governments in the sub-Saharan African region are facing enormous pressure to remedy the deficiencies in infrastructure facilities and to improve the efficiency of public services. However, due to budgetary and fiscal constraints, nearly all governments in the region are unable to raise the massive financing needed for large-scale investment in the rehabilitation and expansion of public infrastructure facilities and to achieve improvements in services provision. The rising demand for infrastructure facilities, and public disenchantment as a result of the inefficiencies and poor services provision in the face of severe capital shortages, have compelled many governments in the region to look to the private sector as a means of financing infrastructure development and public services provision to supplement public expenditure. The options for private financing and provision of infrastructure facilities and services are broad, ranging from public sector provision to public/private partnership to fully privately owned and operated infrastructure. Concession contracts under the Build, Operate and Transfer (BOT) option have become the most popular and widely applied method for financing the expansion and rehabilitation, and raising the efficiency, of infrastructure facilities and public services provision in sub-Saharan Africa. The required debt and equity finance for BOT projects in sub-Saharan Africa is raised from abroad due to undeveloped financial markets. Overseas construction organisations have been the prime investors of the equity finance, which is the essential requirement for raising the necessary debt finance for the implementation of BOT infrastructure projects in the region. This research reviews the potential of equity foreign direct investment in BOT infrastructure projects in sub-Saharan Africa and identifies and evaluates the major risks of investing in such projects in the region. The research shows that project-specific risks (including cost overruns, design, technical and construction-related delays and risks, operational risks and the uncertainty of the revenue stream) and country risks (including political, economic, financial payment, legal, corruption and market risks) are the greatest challenge facing overseas construction firms seeking to invest in BOT projects in sub-Saharan Africa. The research develops a framework within which overseas contractors seeking to invest in BOT infrastructure projects can identify, evaluate and manage these risks. The equity foreign direct investment risk management process model has been developed using the fundamentals of the standard risk management process and the flow chart technique. The model captures and evaluates the risks with a view to putting in place suitable and effective countermeasures. An evaluation of the model demonstrates that it can be a useful tool for managing equity foreign direct investment risks from the early stages, when the critical decisions to invest in a project are being made, up to and including the operating stage of the facility. It provides a foundation that could be applied to BOT projects in sub-Saharan Africa in general.
http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553105
OPIC (Overseas Private Investment Corporation), USA. Established as a US governmental agency in 1971, OPIC:
- Helps US businesses invest overseas by, among other things, providing financing, such as taxpayer-backed loans and loan guarantees, for qualifying projects and investments.
- Fosters economic development in new and emerging markets.
- Complements the private sector in managing risks associated with foreign direct investment. OPIC is one of the private sources of political risk insurance. This insurance is available to US investors, contractors, exporters and financial institutions and covers currency inconvertibility, expropriation and political violence.
- Supports US foreign policy.
- Promotes US best practices by requiring projects to adhere to international standards on the environment and worker and human rights. OPIC has adopted the Equator Principles and requires the projects it funds to comply with its environmental and health and safety principles.
Practical Law Dictionary. Glossary of UK, US and international legal terms. www.practicallaw.com. 2010.
https://law.enacademic.com/8780/OPIC
After completing this unit, you’ll be able to:
- Identify recommended board priorities for cyber issues.
- Use the WEF risk assessment framework.
- Identify areas of concern for future cyber resilience.
- Implement practices for boards to prepare for cyber resilience.

The Purpose of the Framework

You asked the right questions and got the information you need. Now it’s time to put a plan in place to deliver the right amount of cyber resilience oversight. The right framework can make risk identification and evaluation easier. With the right framework, boards and leadership can understand and evaluate the following.
- Current risk tolerance/appetite
- Cyber risks that the organization faces
- Suggested risk management actions and costs

Can You Handle the Risk?

Every organization is unique when it comes to risk and how much it’s willing to take on. Risks increase or decrease over time. Are there strategic events on the horizon that can impact risk? Is the current risk appetite sustainable? The key is to find the right balance between benefits and the risk your organization can tolerate.

Do You Know the Risks?

To determine risk tolerance, you must know the risks and understand their implications. These include financial, legal, operational, regulatory, and reputation risks. Some risks are more probable than others, but you should take all risks seriously. You can identify risks through comprehensive reviews in all areas of the organization. The leaders of mission control ensure the right testing and analysis occurs to prepare the organization for what lies ahead.

So, What Frameworks Are Out There?

Even though cybersecurity is a young field, there are several risk-assessment frameworks to choose from. The WEF has reviewed these frameworks and developed one specifically for boards. The WEF framework supports high-level discussions to validate identified cyber risks, while measuring the probability and impact of these risks. The WEF framework helps boards identify the assets at risk, the impact to the organization should the risk occur, the most vulnerable areas, and possible threats. More information on this framework can be found in the Board Cyber Risk Framework section of Advancing Cyber Resilience: Principles and Tools for Boards.

Where the Risks Lie

The board and leadership team can work through four primary steps to achieve the overall risk picture for the organization.
- Step 1: Evaluate Assets to Determine Which Have the Greatest Risk
- Step 2: Predict Losses If Identified Risks Occur
- Step 3: Spot the Threats That Exist or Could Develop
- Step 4: Identify the Vulnerabilities in the Organization—People, Process, and Infrastructure

Outside Influences to Consider

Boards must also look outside the organization. External areas often play significant roles in cyber risk, including political action, merger and acquisition activities, and new methods of cyber attack. Changes in business models and business activities can lead to new threats. It’s important to monitor these areas and remain ready to respond.

Actions to Take

How likely is it that one of the risks you identified will occur? What’s the impact and how will the organization handle it? What actions must you take to manage these risks? Each risk should have an action plan. There are four primary types of actions.
Mitigation actions
- Institute stronger people controls
- Establish consistent procedural controls
- Test and confirm technical controls

Transfer actions
- Send the risk or cost elsewhere

Acceptance actions
- Accept certain risks as part of the cost of doing business

Avoidance actions
- Avoid risks outside the organization’s risk tolerance

Understand that all actions have associated costs. Determine if the actions you take get results and encourage cyber resilience. (A toy scoring sketch at the end of this unit shows one way to combine the four assessment steps with these action types.)

The Future Awaits

We know that the future holds amazing things. We are in the Fourth Industrial Revolution, which brings new opportunities, business shifts, and emerging markets. Using simple guidelines, your board can prepare for what’s to come and protect your organization.

Guidelines for Emerging Technology Oversight
- Stay aware of emerging technology
- Include cyber resilience in all initiatives and in the business lifecycle
- Maintain an acceptable level of security
- Understand and manage cyber risk associated with vendors and partners
- Make data privacy a priority
- Maintain the highest ethical standards
- Look for ways to improve
- Develop the ability to adapt quickly

The leaders of mission control always look ahead, searching for risks and preparing the organization to make the most of opportunities.

It’s a Whole New World

Technology can take us to places never imagined. Our devices can sync with and talk to one another, and they can access and interpret an ongoing feed of information. We’re all affected by the progress occurring every day. With each new discovery, we uncover new markets, business models, and hidden risks. We must cooperate to make the most of innovation and opportunity. Partnership means private enterprise and public entities sharing their expertise. Working together, we can all adapt to face challenges and expose new horizons. This takes leadership—people who are willing to seize the opportunity, and embrace the responsibility, to promote cyber resilience.

Sum It Up

Continuous improvement, foresight, and cooperation are the keys to success in this brave new world. Boards and leadership must roll up their sleeves and get involved to make sure their organization has the right strategy in place. The tools we’ve shared in this module are a starting point. Use these tools. Improve upon them. Share them with others. It is only through partnership and cooperation that we can embrace new opportunities and uncover a world beyond our imagination. Interested in exploring more cybersecurity-related information? Check out the Cybersecurity Learning Hub on Trailhead.
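As promised above, here is a toy sketch of how the four assessment steps and the four action types might be wired together in a minimal risk register. The scales, thresholds, example risks, and the expected-loss heuristic are all illustrative assumptions, not something the WEF framework prescribes.

```python
# Toy risk register: four-step risk picture plus action selection.
# Scales, thresholds, and the risk list are hypothetical.

from dataclasses import dataclass

@dataclass
class Risk:
    asset: str          # Step 1: asset at risk
    loss_if_hit: float  # Step 2: predicted loss (USD)
    threat: str         # Step 3: threat that exists or could develop
    vulnerability: str  # Step 4: people / process / infrastructure gap
    likelihood: float   # probability estimate, 0..1

RISK_TOLERANCE = 250_000  # max expected loss the board will accept (USD)

def expected_loss(r: Risk) -> float:
    return r.likelihood * r.loss_if_hit

def recommend_action(r: Risk) -> str:
    el = expected_loss(r)
    if el <= RISK_TOLERANCE * 0.1:
        return "accept"     # cost of doing business
    if el <= RISK_TOLERANCE:
        return "mitigate"   # strengthen people/procedural/technical controls
    if r.likelihood < 0.2:
        return "transfer"   # e.g. insure low-probability, high-loss events
    return "avoid"          # outside the organization's risk tolerance

register = [
    Risk("customer PII store", 5_000_000, "ransomware", "unpatched servers", 0.10),
    Risk("payment gateway",      800_000, "credential stuffing", "weak MFA rollout", 0.25),
    Risk("marketing site",        40_000, "defacement", "stale CMS plugins", 0.30),
]

for r in sorted(register, key=expected_loss, reverse=True):
    print(f"{r.asset:20s} EL=${expected_loss(r):>12,.0f} -> {recommend_action(r)}")
```

The point of the sketch is the shape of the decision, not the numbers: every risk gets a probability, an impact, and exactly one of the four actions, so nothing in the register is left without an owner or a plan.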
https://trailhead.salesforce.com/en/content/learn/modules/cyber-resilience/institute-cyber-resilience
By Natalia Gurushina, Chief Economist, VanEck

Domestic issues – which include both economic and political considerations – drive policy decisions in EM. DM “tapering” and policy normalization are duly noted, but not key. And “issues” are not restricted to purely macroeconomic stuff, like growth or inflation. In China’s case, the regulatory, political and ideological considerations are just as important – and this can clearly be seen in today’s release of money and credit aggregates. The breakdown shows that tighter regulations in real estate and the tech sector continued to weigh on new credit growth in August (which rose less than expected). The ongoing decline in off-balance sheet (“shadow”) financing points in the same direction (see chart below). A big jump in government bond issuance shows that the Politburo’s call for more fiscal support is already having an effect, and we should see more of it – in tandem with targeted monetary measures – in the coming months. Going back to traditional macroeconomic drivers, surging inflation is Challenge #1 for central banks in EMEA and LATAM. There is still a great deal of uncertainty regarding the nature of the inflation spikes (how much is transitory and how much is not), but monetary authorities increasingly feel compelled to respond with rate hikes in order to avoid the “contamination” (this is a quote from a recent conference call) of inflation expectations. Peru accelerated the pace of tightening (to 50bps) at its rate-setting meeting yesterday. Russia delivered a smaller than expected rate hike (25bps), but it has tightened by a total of 250bps so far this year, and the central bank’s statement left room for more hikes if necessary. A surprising acceleration in Czech headline inflation (from 3.4% year-on-year in July to 4.1% in August) pushed the real policy rate deeper into negative territory, increasing the probability of a larger 50bps rate hike on September 30. Such a move would lead to greater policy divergence between Hungary/Czech Republic and their Central European neighbor, Poland, which remains decisively dovish despite facing the same inflation pressures.

Chart at a Glance: Changes in China Shadow Financing – Firmly Below Zero. Source: Bloomberg LP

Originally published by VanEck, September 10, 2021

PMI – Purchasing Managers’ Index: economic indicators derived from monthly surveys of private sector companies. A reading above 50 indicates expansion, and a reading below 50 indicates contraction; ISM – Institute for Supply Management PMI: ISM releases an index based on more than 400 purchasing and supply managers surveys, both in the manufacturing and non-manufacturing industries; CPI – Consumer Price Index: an index of the variation in prices paid by typical consumers for retail goods and other items; PPI – Producer Price Index: a family of indexes that measures the average change in selling prices received by domestic producers of goods and services over time; PCE inflation – Personal Consumption Expenditures Price Index: one measure of U.S. inflation, tracking the change in prices of goods and services purchased by consumers throughout the economy; MSCI – Morgan Stanley Capital International: an American provider of equity, fixed income, hedge fund stock market indexes, and equity portfolio analysis tools; VIX – CBOE Volatility Index: an index created by the Chicago Board Options Exchange (CBOE), which shows the market’s expectation of 30-day volatility.
It is constructed using the implied volatilities on S&P 500 index options; GBI-EM – JP Morgan’s Government Bond Index – Emerging Markets: comprehensive emerging market debt benchmarks that track local currency bonds issued by emerging market governments; EMBI – JP Morgan’s Emerging Market Bond Index: JP Morgan’s index of dollar-denominated sovereign bonds issued by a selection of emerging market countries; EMBIG – JP Morgan’s Emerging Market Bond Index Global: tracks total returns for traded external debt instruments in emerging markets.

The information presented does not involve the rendering of personalized investment, financial, legal, or tax advice. This is not an offer to buy or sell, or a solicitation of any offer to buy or sell, any of the securities mentioned herein. Certain statements contained herein may constitute projections, forecasts and other forward-looking statements, which do not reflect actual results. Certain information may be provided by third-party sources and, although believed to be reliable, it has not been independently verified and its accuracy or completeness cannot be guaranteed. Any opinions, projections, forecasts, and forward-looking statements presented herein are valid as of the date of this communication and are subject to change. The information herein represents the opinion of the author(s), but not necessarily those of VanEck. Investing in international markets carries risks such as currency fluctuation, regulatory risks, and economic and political instability. Emerging markets involve heightened risks related to the same factors as well as increased volatility, lower trading volume, and less liquidity. Emerging markets can have greater custodial and operational risks, and less developed legal and accounting systems, than developed markets. All investing is subject to risk, including the possible loss of the money you invest. As with any investment strategy, there is no guarantee that investment objectives will be met and investors may lose money. Diversification does not ensure a profit or protect against a loss in a declining market. Past performance is no guarantee of future performance.
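Returning to the Czech example in the commentary above: the ex-post real policy rate is simply the nominal policy rate minus realized inflation, which is why an inflation print alone can push it "deeper into negative territory." A tiny sketch of the arithmetic; the 0.75% nominal rate is an assumption for illustration only, while the two inflation figures come from the text:

```python
# Back-of-the-envelope real policy rate for the Czech example.
# Nominal rate below is hypothetical; inflation prints are from the text.

nominal_policy_rate = 0.75                    # % per year (assumed)
inflation_by_month = {"July": 3.4, "August": 4.1}  # % year-on-year

for month, cpi in inflation_by_month.items():
    real_rate = nominal_policy_rate - cpi     # ex-post approximation
    print(f"{month}: real policy rate ~ {real_rate:+.2f}%")
# July:   ~ -2.65%
# August: ~ -3.35%  -> "deeper into negative territory"
```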
https://www.etftrends.com/2021/09/domestic-issues-drive-em-policy-responses/
Investors are often surprised by the low rate of defaults among emerging market corporates. Historically, the rate of emerging market high-yield defaults has been very similar to the levels among US and European counterparts. In fact, according to credit ratings agency Standard & Poor’s (S&P), emerging market corporate defaults have been lower than US corporate defaults in 15 out of the 17 years to 2017.

All investments involve risks, including possible loss of principal. Special risks are associated with investing in foreign securities, including risks associated with political and economic developments, trading practices, availability of information, limited markets and currency exchange rate fluctuations and policies. Sovereign debt securities are subject to various risks in addition to those relating to debt securities and foreign securities generally, including, but not limited to, the risk that a governmental entity may be unwilling or unable to pay interest and repay principal on its sovereign debt. Higher-yielding, lower-rated corporate bonds are subject to increased risk of default and can potentially result in loss of principal. These securities carry a greater degree of credit risk relative to investment-grade securities. Investments in emerging markets, of which frontier markets are a subset, involve heightened risks related to the same factors, in addition to those associated with these markets’ smaller size, lesser liquidity and lack of established legal, political, business and social frameworks to support securities markets. Because these frameworks are typically even less developed in frontier markets, as well as various factors including the increased potential for extreme price volatility, illiquidity, trade barriers and exchange controls, the risks associated with emerging markets are magnified in frontier markets. Bond prices generally move in the opposite direction of interest rates. Thus, as prices of bonds in an investment portfolio adjust to a rise in interest rates, the value of the portfolio may decline. The views expressed are those of the investment manager, and the comments, opinions and analyses are rendered as of 8 February 2019 and may change without notice. The information provided in this material is not intended as a complete analysis of every material fact regarding any country, region or market. Products, services and information may not be available in all jurisdictions and are offered outside the United States by other FTI affiliates and/or their distributors as local laws and regulations permit. Please consult your own professional adviser for further information on availability of products and services in your jurisdiction.
http://emergingmarkets.blog.franklintempleton.com/2019/02/08/three-reasons-to-embrace-emerging-market-corporate-credit/
Equities under pressure on rising coronavirus fears

The continued spread of the coronavirus that first emerged in Wuhan, China, has further rattled capital markets. Global stocks, particularly emerging market equities, have sold off sharply, while U.S. Treasuries, considered a safe haven in times of crisis, have risen in price and fallen in yield to levels not seen for nearly six months. Investors are increasingly concerned that the transmission of the virus within China and across the globe could have a durable and negative impact on global economic growth and, by extension, hamper corporate profit growth.

Markets under pressure

Due to the economic risks posed by the coronavirus, the uncertainty surrounding its transmission and fatality rate and the near-term economic impact of reduced consumer activity in China, equity prices have been under pressure ever since virus fears first emerged in mid-January. Since January 17, emerging market equities, as represented by the iShares MSCI Emerging Markets ETF, have fallen more than 9 percent. Importantly, China is that index’s largest country constituent. China and Hong Kong-listed securities together represent one-third of the overall index. Global growth concerns have also pressured oil and other commodities, due to the expectation of a decline in global travel and factory activity in China, with crude oil falling 12.7 percent. U.S. large-company stocks, as measured by the S&P 500 Index, while not immune to current events, have proven more resilient, and have fallen around 3 percent from their prior highs.

What we know about the virus

As of January 31, according to the Center for Systems Science and Engineering at Johns Hopkins University, there are more than 9,700 confirmed cases of the coronavirus and more than 200 fatalities. While the virus appears to be spreading within China from its epicenter, Wuhan, at this point the virus has been more contained outside the mainland. Thus far, only 118 coronavirus cases have been confirmed outside China, and governments and global health authorities are attempting to contain the virus to the currently affected areas. That the virus has so far largely failed to spread internationally is an extremely positive development, but if this changes, it would constitute a considerable risk to global growth.

Risk to the current outlook

The virus is already having an impact on economic activity, with people staying home and spending less. Wuhan, a city more populous than New York, is at a virtual standstill. More than 45 million Chinese people are quarantined and 60 million face travel restrictions. Further, many international airlines have announced flight suspensions to China, reducing tourist traffic and disrupting general commerce. China’s economy is much more consumer-focused today than it was in 2003, when a similar coronavirus known as severe acute respiratory syndrome (SARS) broke out. Today’s likely consumer spending slowdown may have more than a transitory impact. China is now the world’s second-largest economy and has tight economic links with the rest of Asia. There is a risk a slowdown could reverberate globally. We observed a multi-speed global recovery as we began 2020, with emerging market economies exhibiting the strongest economic momentum of their global peers. While we continue to see this economic rebound in much of the data and indicators we assess, the data lags current events and doesn’t yet measure the coronavirus outbreak’s full impact.
Because of the Chinese Lunar New Year holiday, key Chinese economic indicators covering the recent time period will not be released until March. We are monitoring the rapidly evolving situation closely and will be certain to update you if it impacts our capital market views. We firmly believe that commitment to a disciplined financial plan can improve investment outcomes and help investors meet their personal goals. As always, we are happy to answer any questions about the situation and its risks to portfolios, and we thank you for giving us your trust to manage your hard-earned capital.

Investment products and services are: NOT A DEPOSIT • NOT FDIC INSURED • MAY LOSE VALUE • NOT BANK GUARANTEED • NOT INSURED BY ANY FEDERAL GOVERNMENT AGENCY

This information represents the opinion of U.S. Bank Wealth Management. The views are subject to change at any time based on market or other conditions and are current as of the date indicated on the materials. This is not intended to be a forecast of future events or a guarantee of future results. It is not intended to provide specific advice or to be construed as an offering of securities or a recommendation to invest. Not for use as a primary basis of investment decisions. Not to be construed to meet the needs of any particular investor. Not a representation or solicitation or an offer to sell/buy any security. Investors should consult with their investment professional for advice concerning their particular situation. The factual information provided has been obtained from sources believed to be reliable, but is not guaranteed as to accuracy or completeness. U.S. Bank is not affiliated or associated with any organizations mentioned. Based on our strategic approach to creating diversified portfolios, guidelines are in place concerning the construction of portfolios and how investments should be allocated to specific asset classes based on client goals, objectives and tolerance for risk. Not all recommended asset classes will be suitable for every portfolio. Diversification and asset allocation do not guarantee returns or protect against losses. Past performance is no guarantee of future performance. All performance data, while obtained from sources deemed to be reliable, are not guaranteed for accuracy. Indexes shown are unmanaged and are not available for direct investment. The S&P 500 Index consists of 500 widely traded stocks that are considered to represent the performance of the U.S. stock market in general. Equity securities are subject to stock market fluctuations that occur in response to economic and business developments. International investing involves special risks, including foreign taxation, currency risks, risks associated with possible differences in financial standards and other risks associated with future political and economic developments. Investing in emerging markets may involve greater risks than investing in more developed countries. In addition, concentration of investments in a single region may result in greater volatility. Investing in fixed income securities is subject to various risks, including changes in interest rates, credit quality, market valuations, liquidity, prepayments, early redemption, corporate events, tax ramifications and other factors. Investments in debt securities typically decrease in value when interest rates rise. This risk is usually greater for longer-term debt securities.
Investments in lower-rated and non-rated securities present a greater risk of loss to principal and interest than higher-rated securities. Investments in high yield bonds offer the potential for high current income and attractive total return, but involve certain risks. Changes in economic conditions or other circumstances may adversely affect a bond issuer’s ability to make principal and interest payments. The municipal bond market is volatile and can be significantly affected by adverse tax, legislative or political changes and the financial condition of the issuers of municipal securities. Interest rate increases can cause the price of a bond to decrease. Income on municipal bonds is free from federal taxes, but may be subject to the federal alternative minimum tax (AMT), state and local taxes. There are special risks associated with investments in real assets such as commodities and real estate securities. For commodities, risks may include market price fluctuations, regulatory changes, interest rate changes, credit risk, economic changes and the impact of adverse political or financial factors. Investments in real estate securities can be subject to fluctuations in the value of the underlying properties, the effect of economic conditions on real estate values, changes in interest rates and risks related to renting properties (such as rental defaults). U.S. Bank and its representatives do not provide tax or legal advice. Your tax and financial situation is unique. You should consult your tax and/or legal advisor for advice and information concerning your particular situation. Member FDIC. ©2020 U.S. Bank.
https://privatewealth.usbank.com/insights/coronavirus-fears-fuel-capital-market-volatility
European Regulators identify vulnerabilities affecting the EU financial system

The Joint Committee of the European Supervisory Authorities (EBA, EIOPA, ESMA – the ESAs) published today its Spring 2016 Report on Risks and Vulnerabilities in the EU Financial System. The Joint Committee highlights three main risks affecting the European financial system and suggests a set of policy actions to tackle those risks:
- Low profitability of financial institutions in a low yield environment. Yields in Europe remain at historical lows, and risks concerning the low profitability of financial entities pose key concerns for the EU financial system. As financial institutions move to reduce costs and adjust their business models, forward-looking supervisory approaches to scrutinize business model sustainability are needed. A proactive stance to address the still-high stocks of non-performing loans at banks in some regions is also needed.
- Increasing interconnectedness of bank and non-bank entities. Over the last five years the role of non-bank and non-insurance financial institutions has increased. The interconnectedness between different entities represents a potential channel for the propagation of shocks. The Joint Committee believes that this risk should be tackled through enhanced supervisory monitoring of concentration risks, cross-border exposures and regulatory arbitrage.
- Potential contagion from China and other emerging markets. After a decade of positive contribution to global economic growth, economic activity in China and other emerging markets has started to recede. The Joint Committee calls on national supervisors to include emerging market risk in sensitivity analyses or stress tests and to scrutinise optimistic assumptions of financial institutions with regard to emerging market exposure and returns from emerging market business.

The statement and the JC Risks and Vulnerabilities report are available here.
https://www.planetcompliance.com/2016/04/07/european-regulators-identify-vulnerabilities-affecting-the-eu-financial-system/
With every passing day, businesses become more entwined in an ecosystem of partners, vendors, and suppliers in global markets. A local natural disaster, for example, can have far-reaching consequences throughout a global supply chain; so controlling, recognizing, and mitigating risks is critical to a company’s business continuity and financial stability. A risk management process involves identifying, controlling, and assessing the harm that risks would cause a company if they came to pass. Examples of potential risks include data loss, cyber-attacks, cybersecurity breaches, system failures, and natural disasters. Effective risk management means trying to control future outcomes as much as possible by acting early to reduce risk rather than reacting after a risk event. It also offers the possibility of reducing both the probability of a risk occurring and the risk’s potential impact. A risk management plan helps a company to understand and control risks, so it can make better decisions and achieve business objectives. A company must identify potential threats and their effect on the business. Identifying vulnerabilities in advance makes it easier for the organization to prevent them from occurring. A risk management plan begins with creating a stakeholder team to review top risks to the organization. This stakeholder team should include senior management, the compliance officer, and department managers. Always consider emerging trends in your efforts to improve your risk management plan. Implement new initiatives as necessary as new risks develop — which, incidentally, underlines the importance of enterprise risk management (ERM) in your organization.

What Are the Current Risk Management Trends?

The complexity of today’s enterprises is driving increased risk exposure. New risks develop and current threats mutate, and businesses can struggle to keep up. Here are some current trends in emerging risk management that need to be on your radar.

Enterprise Risk Management Technologies

Over the years, technology (and the savvy use of it) has enabled new business models. Innovative technologies such as artificial intelligence continue to drive this shift. In the risk management arena, technology has two emerging considerations.
- It plays a crucial role in transforming companies so they can adopt more effective and efficient risk management practices that enhance performance, not just assure regulatory compliance.
- The landscape of rapidly evolving new technologies provides both significant benefits and risks to the organization’s existing business model and long-term survival.

A comprehensive governance, risk, and compliance (GRC) platform can integrate all types of risk management activities. These activities include managing policies, conducting risk assessments, understanding risk posture, identifying gaps in regulatory compliance, responding to incidents, and automating the internal audit process.

Operational Focus of Risk Teams

Simply relying on scheduled assessments leaves risk executives blind to the risks the organization is experiencing in real time. If you focus on the theoretical risks, you miss the day-to-day threats that affect operations. Getting down to the operational level includes conducting assessments with active lower-level employees performing daily operations. By increasing awareness of operational risks, you can undertake root cause analysis to discover where your risk management processes are weak.
You do need infrastructure to collect feedback and implement controls to prevent repeat incidents, but the investment is worth the effort.

Risk Analytics

Another trend is risk data analytics: that is, advanced data mining and analysis techniques to achieve risk management objectives. Organizations are increasingly focusing on using data-driven approaches to unlock the maximum amount of information hidden in their data to discover and manage their risks. The main advantage of risk analytics is that you can expand risk factors to include granular specifications that provide a more holistic and factual basis for risk management. You can also use it with predictive models to evaluate transactions, refining and improving early warning signals.

Managing Emerging Risks

A business can’t focus only on those risks whose likelihood and potential harm are already known. The business must also develop tools and techniques to identify newly emergent risks and then respond to those threats as well. Chief risk officers emphasize risk velocity (the speed at which a risk can go from detection to actually happening) to aid ERM teams in managing and prioritizing risks. The focus for high-velocity emerging risks should be on the early detection of risk occurrences and the development of effective reaction plans. Key risk indicators (KRIs) can provide early warning of a probable risk occurrence; a minimal KRI threshold check is sketched at the end of this piece. Reliable historical data will aid in identifying actual risks and prevent risk teams from focusing on phantom risks, which are hazards that are exaggerated due to bias, political motivation, or information withholding.

Increased Regulatory Compliance Requirements

Organizations around the world understand that regulatory compliance is critical. As regulations change, there is an increased focus on transparency and an increased risk of non-compliance. Compliance with HIPAA, FERPA, COPPA, and GDPR, among others, influences organizations’ decision-making. Compliance with new and changing regulations will continue to be a critical component driving risk oversight within an organization. Successful organizations see risk management as a strategic component of their value chain, providing long-term sustainable growth and innovation.

What Are the Challenges in Risk Management?

Risk assessments are often structured in such a way that company managers only capture the known risks. If you want to identify the “unknown unknowns,” consider calling on outside experts to contribute to or facilitate risk assessments. It also helps to include employees from various levels, functions, and skill sets to provide insights from their work experience. Risk management silos occur when multiple corporate divisions each have their own procedures, spreadsheets, analyses, frameworks, and assumptions. These silos are significant roadblocks to a comprehensive enterprise risk management program. Separate business areas focus on their own view of risk rather than the big picture, unable to recognize substantial and avoidable losses. Such a segregated approach lacks context and information, making it nearly impossible to relate risk management and decision-making to corporate strategy, objectives, and performance. Responsibility for risk is often distributed among different owners across the enterprise. Today it is critical that all these roles work with the same data and that this risk data is clean, reliable, and timely.

Include ZenGRC in Your Risk Management Plans

You can’t leave risk management to chance.
Errors and omissions from manual processes and inexperienced hands can be costly and damaging to your company’s reputation. Instead of using spreadsheets to manage your compliance requirements, adopt ZenGRC‘s compliance, risk management, and governance platform to streamline risk and audit management for all your compliance frameworks. ZenGRC is a single source of truth that ensures your organization is always audit-ready. Policies and procedures are revision-controlled and easy to find in the document repository. Workflow management features offer easy tracking, automated reminders, and audit trails. Insightful reporting and dashboards give visibility to gaps and high-risk areas. Worry-free risk management is the way forward! Contact us for a free demo of ZenGRC.
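As flagged in the KRI discussion above, here is a minimal sketch of a KRI early-warning check with a velocity-weighted priority score. The indicator names, readings, thresholds, and weighting heuristic are all hypothetical assumptions for illustration, not features of any particular GRC product.

```python
# Minimal KRI early-warning check with a velocity-weighted priority score.
# All indicator names, thresholds, readings, and weights are hypothetical.

KRIS = {
    # name: (current reading, warning threshold, velocity 1=slow .. 3=fast)
    "days_sales_outstanding":      (62, 55, 1),
    "critical_patch_backlog":      (34, 20, 3),
    "vendor_sla_breaches_per_qtr": ( 2,  3, 2),
}

def breached(reading, threshold):
    return reading > threshold

alerts = []
for name, (reading, threshold, velocity) in KRIS.items():
    if breached(reading, threshold):
        # Weight by velocity: fast-moving risks leave less time to react.
        severity = (reading / threshold) * velocity
        alerts.append((severity, name, reading, threshold))

for severity, name, reading, threshold in sorted(alerts, reverse=True):
    print(f"ALERT {name}: {reading} vs threshold {threshold} "
          f"(priority {severity:.2f})")
```

The velocity multiplier is one simple way to encode the article's point that high-velocity risks deserve earlier attention than slower ones with the same degree of threshold breach.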
https://reciprocity.com/blog/emerging-risk-management-trends-you-need-to-know/
When we look at the emerging-market companies in which we invest today, they are worlds away from the companies we were analysing a decade or two ago. The landscape of emerging-market corporations in general has undergone a significant transformation from the often plain-vanilla business models of the past, which tended to focus on infrastructure, telecommunications, classic banking models or commodity-related businesses, to a new generation of very innovative companies that are moving into technology and much higher value-added production processes. Furthermore, we’re starting to see the establishment of some very strong globally represented brands which originate from emerging-market countries. Back in the late 1990s, when I was starting out in the emerging-market investing world, technology-oriented companies made up only around 3% of the universe, as represented by the MSCI Emerging Markets (EM) Index.1 Even six years ago, information technology (IT) represented less than 10% of investable companies in the index.2 Much has changed since then. Today, around a quarter of the MSCI EM Index is in the IT sector, which includes hardware, software, components and suppliers. And while much of this activity is originating in Asia, including Taiwan, South Korea and increasingly China, we are also seeing similar developments in Latin America, Central and Eastern Europe and even Africa. The IT sector can be a difficult space to understand and value. Business models are rapidly changing as they adapt to the shifting demands of consumers and respond to new environmental requirements. Thus, one needs to spend more time understanding and evaluating individual companies before investing in the right stocks, based also on the desired risk tolerance. Currently, we have identified opportunities among some larger-sized companies, but tend to generally favour mid-sized companies we think have the potential to outgrow the market as a whole. We look for companies we believe have the ability to adapt more efficiently and are more flexible in adjusting to a fast-changing environment, run by flexible and well-incentivised management teams.

The Value of Active Management in Emerging-Market Investing

While there has been a considerable evolution in the emerging-market investing universe over the last decade, we remain adamant in our belief that emerging markets are an asset class in which active management should play a vital role, for a number of reasons. Emerging markets tend to have their own business rules and regulations which affect companies; corporations differ widely in their attitude towards minority investors; governance standards vary significantly; and local intricacies determine consumer trends and habits. We often need to develop fairly close relationships to gain a better understanding of business prospects and find successful management teams that respect the rules. We think these factors could be an important consideration as attention returns to emerging markets on the back of the generally improving performances we have seen in these markets recently. After more than three years of languishing at depressed levels, earnings in emerging-market companies are showing signs of recovery, and that is reflected in the attitudes of companies and their management as well as in their financial data.
Recently, on a trip to Dubai, my team and I met a range of companies from Africa, the Middle East and other emerging markets, which were far more confident and open in sharing their outlook for the next 12-to-24 months. Even in regions that are still going through a phase of adjustment and rebalancing, we see improving visibility and increasingly evident robust underlying economic conditions such as low debt, stabilizing commodity markets, reduced currency volatility and improving consumer confidence. After a relatively bleak period for emerging markets, it seems that many of the factors that have attracted investors to the asset class, including stronger earnings growth, higher gross domestic product growth levels and far more attractive consumer trends, may be coming back into play.

Carlos Hardenberg's comments, opinions and analyses are for informational purposes only and should not be considered individual investment advice or recommendations to invest in any security or to adopt any investment strategy. Because market and economic conditions are subject to rapid change, comments, opinions and analyses are rendered as of the date of the posting and may change without notice. The material is not intended as a complete analysis of every material fact regarding any country, region, market, industry, investment or strategy.

Important Legal Information

All investments involve risks, including the possible loss of principal. Investments in foreign securities involve special risks including currency fluctuations, economic instability and political developments. Investments in emerging markets, of which frontier markets are a subset, involve heightened risks related to the same factors, in addition to those associated with these markets' smaller size, lesser liquidity and lack of established legal, political, business and social frameworks to support securities markets. Because these frameworks are typically even less developed in frontier markets, as well as various factors including the increased potential for extreme price volatility, illiquidity, trade barriers and exchange controls, the risks associated with emerging markets are magnified in frontier markets. Stock prices fluctuate, sometimes rapidly and dramatically, due to factors affecting individual companies, particular industries or sectors, or general market conditions.

1. Source: MSCI. The MSCI Emerging Markets Index captures large- and mid-cap representation across 23 emerging-market countries. Indexes are unmanaged, and one cannot directly invest in an index. They do not include fees, expenses or sales charges. See www.franklintempletondataservices.com for additional data provider information.
2. Ibid.
https://emergingmarkets.blog.franklintempleton.com/2017/03/23/why-things-arent-what-they-used-to-be-in-emerging-markets/
Risk Management System

The purpose of the Bertelsmann risk management system (RMS) is the early identification and evaluation of, and response to, internal and external risks. The internal control system (ICS), an integral component of the RMS, controls and monitors the risks that have been identified. The aim of the RMS is to identify material risks to the Group at an early stage so that countermeasures can be taken and controls implemented. Risks are possible future developments or events that could result in a negative deviation from Bertelsmann's outlook or objectives. In addition, risks can negatively affect the achievement of the Group's strategic, operational, reporting-related and compliance-related objectives. The risk management process is based on the internationally accepted frameworks of the Committee of Sponsoring Organizations of the Treadway Commission (COSO Enterprise Risk Management – Integrated Framework and Internal Control – Integrated Framework, respectively) and is organized into the sub-processes of identification, assessment, management, control and monitoring. A major element of risk identification is the risk inventory, which lists significant risks year by year, from the profit center level upward, and then aggregates them step by step at the division and Group levels. This ensures that risks are registered where their impact would be felt. There is also a Group-wide reassessment of critical risks every six months, and quarterly reporting takes place even if no risk event occurs. Ad hoc reporting requirements ensure that significant changes in the risk situation during the course of the year are brought to the attention of the Executive Board. The risks are compared against risk response and control measures to determine the so-called net risk. Both one-year and three-year risk assessment horizons are applied to enable the timely implementation of risk management measures. The basis for determining the main Group risks is the three-year period, in line with the medium-term corporate planning. The risk, measured as possible financial loss, is the product of the estimated negative impact on free cash flow should the risk occur and the estimated probability of occurrence. Risk monitoring is conducted by Group management on an ongoing basis. The RMS, along with its component ICS, is constantly undergoing further development and is integrated into ongoing reporting to the Bertelsmann Executive Board and Supervisory Board. Corporate risk management committees and divisional risk meetings are convened at regular intervals to ensure compliance with statutory and internal requirements. Under section 91 (2) of Germany's Stock Corporation Act (AktG), the auditors inspect the risk early warning system for its capacity to identify, at an early stage, developments that could threaten the existence of Bertelsmann SE & Co. KGaA, and report their findings to the Supervisory Board. Corporate Audit conducts ongoing reviews of the adequacy and functional capability of the RMS in the divisions of Penguin Random House, Arvato and Be Printers as well as the Corporate Investments and Corporate Center segments. The risk management systems of RTL Group and Gruner + Jahr are evaluated by the respective internal auditing departments of those divisions and by external auditors. Any issues that are identified are promptly remedied through appropriate measures. The Bertelsmann Executive Board defined the scope and focus of the RMS based on the specific circumstances of the company.
However, even an appropriately designed and functional RMS cannot guarantee with absolute certainty that risks will be identified and controlled.

Accounting-Related Risk Management System and Internal Control System

The objectives of the accounting-related RMS and the ICS are to ensure that external and internal accounting is proper and reliable in accordance with applicable laws and that information is made available without delay. Reporting should also present a true and fair view of Bertelsmann's net assets, financial position and results of operations. The following statements pertain to the consolidated financial statements (including the "Notes" and "Management Report" sections), interim reporting and internal management reporting. The ICS for the accounting process consists of the following areas. The Group's internal rules for accounting and the preparation of financial statements (e.g., IFRS manual, guidelines, circulars) are made available without delay to all employees involved in the accounting process. The consolidated financial statements are prepared in a reporting system that is uniform throughout the Group. Extensive automatic system controls ensure the consistency of the data in the financial statements. The system is subject to ongoing development through a documented change process. Systematized processes for coordinating intercompany transactions serve to prepare the corresponding consolidation steps. Circumstances that could lead to significant misinformation in the consolidated financial statements are monitored centrally by employees of Bertelsmann SE & Co. KGaA and by RTL Group (for the preconsolidated subgroup), then verified by external experts as required. Central contacts from Bertelsmann SE & Co. KGaA and the divisions are also in continuous contact with the local subsidiaries to ensure IFRS-compliant accounting as well as compliance with reporting deadlines and obligations. These preventive measures are supplemented by specific controls in the form of analyses by the Corporate Financial Reporting department of Bertelsmann SE & Co. KGaA and RTL Group (for the preconsolidated subgroup). The purpose of such analyses is to identify any remaining inconsistencies. The Group- and division-level controlling departments are also integrated into the internal management reporting. Internal and external reporting are reconciled during the quarterly segment reconciliation process. The further aim in introducing a globally binding control framework for the decentralized accounting processes is to achieve a standardized ICS format at the level of the local accounting departments of all fully consolidated Group companies. The findings of the external auditors and Corporate Audit are promptly discussed with the affected companies, and solutions are developed. An annual self-assessment is conducted to establish reporting on the quality of the ICS in the key Group companies. The findings are discussed in Audit and Finance Committee meetings at the divisional level. Corporate Audit and the internal auditing departments of RTL Group and Gruner + Jahr evaluate the accounting-related processes as part of their auditing work. As part of the auditing process, the Group auditor also reports to the Bertelsmann SE & Co. KGaA Supervisory Board Audit and Finance Committee about any significant vulnerabilities of the accounting-related ICS identified during the audit, as well as the findings regarding the risk early warning system.
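The quantification logic at the heart of the RMS lends itself to a compact illustration. The Python sketch below is a minimal illustration, not Bertelsmann's actual system: it computes each risk's expected loss as the estimated impact on free cash flow multiplied by the estimated probability of occurrence, aggregates step by step from the profit-center level upward, and applies the classification bands defined beneath the "Overview of Major Risks to the Group" table in the next section. All division names and figures are invented for illustration.

```python
# Minimal sketch of the net-risk quantification described above.
# All names and figures are illustrative, not Bertelsmann data.

from dataclasses import dataclass

# Classification bands (potential financial loss over the three-year
# horizon, in EUR millions), per the legend in the table below.
BANDS = [
    (50, "low"),
    (100, "moderate"),
    (250, "significant"),
    (500, "considerable"),
]

@dataclass
class Risk:
    name: str
    division: str
    impact_on_fcf_eur_m: float  # estimated negative impact on free cash flow
    probability: float          # estimated probability of occurrence, 0..1

    @property
    def expected_loss(self) -> float:
        # Risk = impact x probability, as defined in the risk model above.
        return self.impact_on_fcf_eur_m * self.probability

def classify(expected_loss_eur_m: float) -> str:
    for threshold, label in BANDS:
        if expected_loss_eur_m < threshold:
            return label
    return "endangering"

# Aggregate profit-center risks step by step to division and Group level.
inventory = [
    Risk("Customer concentration", "Division A", impact_on_fcf_eur_m=120, probability=0.4),
    Risk("Audience share decline", "Division B", impact_on_fcf_eur_m=200, probability=0.3),
    Risk("Paper price increase", "Division C", impact_on_fcf_eur_m=40, probability=0.6),
]

by_division = {}
for risk in inventory:
    by_division[risk.division] = by_division.get(risk.division, 0.0) + risk.expected_loss

group_total = sum(by_division.values())
for division, loss in sorted(by_division.items()):
    print(f"{division}: EUR {loss:.0f}m ({classify(loss)})")
print(f"Group: EUR {group_total:.0f}m ({classify(group_total)})")
```

Summing expected losses upward mirrors the inventory's bottom-up aggregation; a real system would additionally net each gross risk against its response and control measures before aggregation.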
Major Risks to the Group

Bertelsmann is exposed to a variety of risks. The major risks identified in the risk reporting are listed in order of priority in the table below. In line with the level of possible financial loss, risks are classified as endangering, considerable, significant, moderate or low for the purposes of risk tolerability. The risk inventory did not identify any risks that would be classified as considerable or endangering. Given the diversity of the core business fields in which Bertelsmann is active, and the corresponding diversity of risks to which the various divisions are exposed, the key strategic and operational risks identified below are broken down by business segment. Integration risks from acquisitions and information technology risks were identified as the primary risks and are therefore described separately. This is followed by an outline of legal and regulatory risks and financial market risks, which are largely managed at the corporate level.

Overview of Major Risks to the Group

| Priority | Type of risk |
| --- | --- |
| 1 | Customer risks |
| 2 | Changes in market environment |
| 3 | Supplier risks |
| 4 | Cyclical development of economy |
| 5 | Pricing and discounting |
| 6 | Legal and regulatory risks |
| 7 | Employee-related risks |
| 8 | Integration risks Penguin Random House |
| 9 | Audience and market share |
| 10 | Financial market risks |

Risk classification (potential financial loss in three-year period): low: < €50 million; moderate: €50–100 million; significant: €100–250 million; considerable: €250–500 million; endangering: > €500 million.

Strategic and Operational Risks

The development of the global economy in 2013 reflected the moderate growth level of the previous year. In 2014, the subdued global growth dynamic of recent years is expected to accelerate slightly. Although uncertainty over economic developments has eased somewhat, Bertelsmann's business development is still dogged by certain risks. Assuming the continuing normalization of the overall economic situation, Bertelsmann expects stable development of Group revenues for 2014. In addition to the risk from economic development, other significant Group risks include customer risks, risks from changes in the market environment, supplier relationship risks, and pricing and margin risks. How these risks develop depends to a large extent on changes in customer behavior due to factors such as the digitization of media, on the development and launch of products and services by current and future competitors, on bad debt losses, and on defaults and disruptions along the production chains in individual sectors such as IT. Employee-related risks, the integration risks associated with the Penguin Random House merger, and audience and market share risks are moderate risks for Bertelsmann. The most important risks for RTL Group are a decrease in audience and advertising market shares, as well as risks arising from changes in the market environment and economic downturns. A decrease in audience shares could lead to decreasing revenues. RTL Group therefore actively monitors international market changes and program trends. This is increasingly important in the digital world, where audiences generally have more choice and market entry barriers are lower.
Higher competition in program acquisition, ongoing audience fragmentation and the expansion of platform operators may also impact RTL Group's ability to generate revenues. Furthermore, economic development directly impacts the advertising market and therefore RTL Group's revenue. RTL Group counters this risk by focusing on the development of non-advertising revenue streams. Apart from potential cost increases triggered by content suppliers, the business can be impacted by the risk of losing key suppliers of content and key customers. To address these risks, long-term contracts are concluded with major content providers, and active customer relationship management is established. RTL Group's strategy is also to further diversify its business by establishing complementary families of channels and utilizing the opportunities presented by digitization.

The principal risk for Penguin Random House arises from the merger of the two companies. As with any merger of this size, the process of integrating the two companies, and in particular the process of integrating the companies' IT systems, creates significant risks. Management has established work streams to carry out the integration plan and is closely monitoring its progress. Otherwise, the creation of the larger company has increased the scale of, but has not significantly altered the nature of, the risks that Random House faced prior to the merger. The increase in the digital portion of the business presents opportunities, but also creates challenges related to pricing and customer margin. The overall market trend, especially toward declining physical sales in book stores, could threaten the long-term viability of certain customers and will likely result in continued margin pressure. Higher paper prices and general economic uncertainty also continue to pose risks. The risk minimization strategy includes credit insurance to limit bad debt risks, long-term contracts with suppliers and a flexible cost structure in response to economic downturns. The continuing decline in the store space of physical book retailers will be partially mitigated by e-book and online sales of physical books and by further measures to improve the competitive situation.

The risks from a changing market environment constitute the greatest risk position for Gruner + Jahr. There is also the particular risk that higher agency discounts in the German advertising markets and the growing significance of digital advertising will lead to falling margins. The aim is to reduce these risks through active customer management, including new forms of offers. The risk of a deterioration of the overall market environment, with resulting falls in advertising and circulation revenues, remains. Countermeasures include cost savings and reviews of individual titles. On the supplier side, there is still a risk of increasing commissions being charged by individual distributors. Furthermore, there is the risk of losing key customers, for example if advertising customers switch to other media, coupled with the risk associated with upcoming tenders in the client business. These risks are to be addressed through targeted measures for key account customers as well as marketing measures. Advertising restrictions discussed at the EU level (e.g., on car advertising) could lead to declining advertising revenues.

Arvato sees itself particularly exposed to risks from customer relationships, risks from a changing market environment and risks from supplier relationships.
The reorganization into a matrix structure, with a clear division into Solution Groups that also takes the regional dimension into account, will make it possible to target customers more effectively and will help to reduce these risks. The potential loss of key customers is being counteracted through active key account management, long-term contracts with flexible cost structures, and integrated service elements. Offering key customers a successful bundle of services reduces the risk of losing an entire service relationship. The markets in which Arvato operates that are characterized by overcapacity (primarily replication) show sustained price pressure. In other areas, competitors are following Arvato's strategy by expanding their value chains, which is increasing the level of competition. New competitors entering the market could intensify the competitive pressure and lead to lower margins. By constantly developing the range of services, the aim is to improve the competitive position and increase customer loyalty through integrated solutions, together with a trend toward higher value added. A worsening of the economic environment could result in declining revenues and thus lower margins, which would necessitate cost-cutting measures and capacity downsizing. The broad diversification across customers, sectors and regions helps to reduce this risk. On the procurement side, there is the risk that procured intermediate products could be of inferior quality, leading to corresponding follow-on costs. Increased procurement prices that cannot be passed on to customers constitute further risks. Countermeasures include concluding long-term contracts and monitoring the supplier market. The ongoing trend toward digitization entails further risks for individual customer segments of Arvato, particularly in the manufacturing and distribution of physical media products. These risks are being addressed, for example, by developing business priorities that comprise digital services. Furthermore, business segments that offer no strategic or economic prospects are being deliberately scaled back. Handling IT risks subject to sector-specific requirements (data protection and data security) poses an additional risk for Arvato as an international service provider. This risk is being reduced by introducing an Information Security Management System based on the ISO 27001 standard, which is used to systematically identify and resolve information security risks.

Customer risks, in particular a structurally greater dependence on a few major customers, are the most significant risks for Be Printers. There are also risks from the market environment, which is characterized by shrinking markets and overcapacity. Risks can arise from continuing market concentration leading to tougher price competition and lower margins. Deterioration in the economic environment may lead to declining circulations with a negative impact on earnings. The same applies to the increasing spread of digital end devices, which is resulting in a decline in printed media. There are further risks on the supplier side associated with rising raw material prices – particularly for paper – that cannot be passed on to customers. The risk minimization strategy is based, among other things, on flexible contractual arrangements, particularly for key accounts.
Other key elements of this strategy include agreeing price-adjustment clauses, optimizing cost structures and making them more flexible, and ongoing market monitoring. Corporate Investments essentially comprises the fund investments and BMG as well as the Group's remaining Club and Direct Marketing activities. From a Group perspective, the risks identified there are of minor importance. Finally, it should be noted that, because of demographic change, greater emphasis in the risk reporting is placed on employee-related risks such as a shift in the age distribution of the workforce, challenges in recruiting qualified personnel and the departure of top executives. This risk applies to all divisions. Countermeasures include further training measures and health programs, increased recruiting measures, and interdivisional talent development.

Integration Risks from Acquisitions Carried Out

As well as organic growth, the Group's development strategy includes targeted acquisitions of promising businesses. Acquisitions of this type – such as, in 2013, the merger creating Penguin Random House, the takeover of the remaining shares in BMG and the acquisition of Gothia – present opportunities as well as risks. Integration into the Group entails one-time costs that are usually offset by increased benefits in the long term thanks to synergy effects. In this context, there are risks that the integration costs may be higher than expected or that the predicted level of synergies may not materialize. The integration processes are therefore continuously monitored by management.

Information Technology Risks

For a global media company like Bertelsmann, the reliability and security of information technology is crucial, and the Group faces a wide range of IT risks. The challenges are constantly increasing as the business environment becomes more complex, due to the increasing networking and IT penetration of business processes, the many internal processes that are not yet standardized, and potential external risks. This issue will be actively addressed in the future by the introduction of the Group-wide Information Security Management System. Implementation of the management system includes regular and structured monitoring of compliance with the regulations, as well as the systematic recording of information security risks and the derivation of appropriate measures.

Legal and Regulatory Risks

Bertelsmann, with its worldwide operations, is always exposed to a variety of legal and regulatory risks, ranging from litigation to varying interpretations of tax assessment criteria. These risks are continuously monitored by the relevant departments within the Group. In November 2008, RTL II filed legal actions against IP Deutschland, a wholly owned subsidiary of RTL Group, and Seven One Media ("SOM") as a result of the 2007 proceedings of the German Federal Cartel Office against the discount scheme agreements ("share deals") offered by IP Deutschland and SOM. RTL II's claim is currently limited to access to information, on the basis of which the claimants want to prove that they suffered damages from these discount schemes. The court of first instance in Düsseldorf decided to order an expert report.
At the end of January 2013, Kabel Deutschland (KDG) appealed a decision of the German Federal Cartel Office to settle a case in accordance with section 32b of the German Act Against Restraints of Competition, following commitments of the channels of Mediengruppe RTL Deutschland to broadcast digital channels in standard quality unencrypted and to refrain from certain restrictions on the usage of digital signals in standard quality. The preliminary oral proceeding is scheduled for September 2014. Foreign investments in media companies in the People's Republic of China are subject to restrictions. In order to comply with local legal provisions, some of the Bertelsmann participations in China are held by trustees. Bertelsmann has agreements with these trustees with respect to the securing of Bertelsmann's rights. This type of structure is common for investments in China and has been tolerated by the Chinese authorities for many years. However, a basic risk exists that it will not be possible to safeguard such structures through the Chinese courts if the People's Republic should change its policies toward foreign investment and, for example, no longer recognize offshore investments in general or in the media sector in particular. In addition, it cannot be ruled out that Chinese authorities or courts will in the future interpret existing provisions differently from previous practice. In the event that legal violations can be proven, Bertelsmann could in an extreme case be exposed to considerable fines and the revocation of business licenses, leading to the immediate closure of participations in China. This would affect Arvato and Gruner + Jahr companies as well as Bertelsmann Asia Investments (BAI). In the past, however, such extreme measures by the Chinese authorities have only been reported in exceptional cases. Aside from the matters outlined above, no further significant legal and regulatory risks to Bertelsmann are apparent at this time.

Financial Market Risks

As an international corporation, Bertelsmann is exposed to various forms of financial market risk, especially interest rate and currency risks. These risks are largely controlled centrally on the basis of guidelines established by the Executive Board. Derivative financial instruments are used solely for hedging purposes. Bertelsmann uses currency derivatives mainly to hedge recorded and future transactions involving foreign currency risk. Some firm commitments denominated in foreign currency are partially hedged when they are made, with the hedged amount increasing over time. A number of subsidiaries are based outside the euro zone. The resulting translation risk is managed on the basis of economic debt in relation to operating EBITDA (the leverage factor), with the Group's long-term focus on the maximum leverage factor permitted for the Group. Foreign currency translation risks arising from net investments in foreign entities are not hedged. Interest rate derivatives are used centrally for the balanced management of interest rate risk. The cash flow risk from interest rate changes is centrally monitored and controlled as part of interest rate management. The aim is to achieve a balanced ratio of different fixed interest rates by selecting appropriate maturity periods for the originated financial assets and liabilities affecting liquidity, and through the ongoing use of interest rate derivatives. The liquidity risk is regularly monitored on the basis of the planning calculation.
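To make the leverage-factor concept concrete: it is simply economic debt divided by operating EBITDA, checked against a Group ceiling. The Python sketch below illustrates the calculation only; the ceiling of 2.5 and all figures are invented for illustration and do not represent Bertelsmann's actual limit or financials.

```python
# Illustrative leverage-factor check: economic debt relative to
# operating EBITDA, as described above. Ceiling and figures are
# assumptions for illustration, not Bertelsmann data.

MAX_LEVERAGE_FACTOR = 2.5  # assumed Group ceiling, purely illustrative

def leverage_factor(economic_debt_eur_m: float, operating_ebitda_eur_m: float) -> float:
    """Return economic debt divided by operating EBITDA."""
    if operating_ebitda_eur_m <= 0:
        raise ValueError("operating EBITDA must be positive")
    return economic_debt_eur_m / operating_ebitda_eur_m

factor = leverage_factor(economic_debt_eur_m=5_000, operating_ebitda_eur_m=2_200)
print(f"Leverage factor: {factor:.2f}")
print("within ceiling" if factor <= MAX_LEVERAGE_FACTOR else "above ceiling, response required")
```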
The existing syndicated loan, as well as appropriate liquidity provisions, forms a sufficient risk buffer for unplanned payments. Counterparty risks exist in the Group in relation to invested cash and cash equivalents and to the default of a counterparty in derivatives transactions. Financial transactions and financial instruments are restricted to a rigidly defined group of banks with an excellent credit rating. Existing risks from investing cash and cash equivalents are continuously monitored. Financial investments are generally made on a short-term basis so that the investment volume can be reduced if the credit rating changes (see also the further explanatory remarks on "Financial Risk Management" in section 25 of the notes).

Overall Risk

The overall risk position has increased year on year, primarily due to the increase in business volume through the Penguin Random House merger. The risks arising from the process of integrating the two companies are shown as an individual risk. Risks from technological challenges that were included in the top ten Group risks last year remain, but they have become less significant. The continuing digital transformation of the businesses already largely anticipates technological change, so these risks are increasingly reflected in other operating risks such as pricing and margin risks. As a result of the diversification of Group businesses, there are no concentration risks stemming from dependency on individual business partners or products in either procurement or sales. The Group's financial position is solid, with liquidity needs currently covered by existing liquidity and available credit facilities. No risks endangering Bertelsmann's continued existence were identified in financial year 2013, nor are any substantial risks discernible from the current perspective that could threaten the continued existence of the Group.

Opportunity Management System

An efficient opportunity management system enables Bertelsmann to secure its corporate success in the long term and to exploit potential in an optimal way. Opportunities are possible future developments or events that could result in a positive deviation from Bertelsmann's outlook or objectives. The opportunity management system, like the RMS, is an integral component of business processes and company decisions. During the planning process, the significant opportunities are determined each year, from the profit center level upward, and then aggregated step by step at the division and Group levels. By systematically recording them on several reporting levels, opportunities that arise can be identified and exploited at an early stage. This also creates an interdivisional overview of Bertelsmann's current opportunities. A review of major changes in opportunities is conducted at the divisional level every six months. In addition, the largely decentralized opportunity management system is coordinated by central departments in the Group. The Business Development and New Businesses department continuously pursues strategic opportunity potential and seeks to derive synergies through targeted cooperation among the individual divisions. The interdivisional transfer of experience is reinforced by regular meetings of the Group Management Committee.

Opportunities

While the opportunities associated with positive development may be accompanied by corresponding risks, certain risks are entered into in order to exploit potential opportunities.
This close link to the key Group risks offers strategic, operational, legal, regulatory and financial opportunities for Bertelsmann. Strategic opportunities can be derived primarily from the Group's four strategic priorities: strengthening core businesses, driving forward the digital transformation, developing growth platforms and expanding in growth regions constitute the most important long-term growth opportunities for Bertelsmann (see the "Strategy" section). In particular, there are general opportunities for exploiting synergies as a result of the portfolio expansions. Furthermore, there is potential in the existing divisions for efficiency improvements, the possibility of more favorable economic development, and individual operational opportunities. For RTL Group, the TV advertising markets in some core markets could develop better than expected. The many possible applications of increasingly digital means of distribution will allow RTL Group to target its end customers and advertising customers more effectively. At Penguin Random House, successful debut publications, strong market growth and higher e-book revenues provide further opportunities. Gruner + Jahr has opportunities in international markets through new and digital businesses. In the magazine business, growth may be achieved particularly in Spain, China and India through higher advertising revenues. At Arvato, the ongoing trend toward outsourcing and the successful development of new businesses are creating opportunities. Arvato could benefit in particular from higher growth of SCM activities in the e-commerce, high-tech and health-care segments and from additional new business for the CRM Solution Group. There are also opportunities for growth in the Solution Groups: IT Solutions, Financial Solutions, Digital Marketing and Print Solutions. The Be Printers print businesses, particularly in Southern Europe, may decline less steeply thanks to additional volume and new customers; this would provide opportunities from the targeted servicing of market segments that are still growing. At Corporate Investments, there is potential for growth thanks to lower restructuring costs in the Club and Direct Marketing businesses. In addition, potential artist signings or music catalog takeovers could offer growth opportunities for BMG. The current innovation efforts detailed in the "Innovations" section offer further potential opportunities for the individual divisions. Other opportunities could arise from changes to the legal and regulatory environment. The financial opportunities are largely based on a development of interest and exchange rates that is favorable from Bertelsmann's point of view.
http://ar2013.bertelsmann.com/reports/bertelsmann/annual/2013/gb/English/301080/risks-and-opportunities.html
The Russia/Ukraine war poses additional challenges for European economies, which are now facing higher inflation risks and more growth headwinds.

Russia/Ukraine War, Consequences

Russia continued its military offensive in Ukraine, despite chatter about another round of talks between the two sides. The Russian currency and depositary receipts are doing poorly this morning. Ukraine's sovereign debt was also under pressure – the fact that Ukraine made a scheduled payment on its 2022 sovereign bond yesterday was noted, but made no market impact due to fat negative tail risks. Concerns about the global fallout are multiplying. However, U.S. Federal Reserve Chair Jerome Powell said in his testimony to the House yesterday that the March rate hike is still on the table. As of this morning, Fed Funds futures price in 26–27bps for March, and a total of 5 rate hikes in 2022.

Europe Growth, Inflation Risks

As regards other major central banks, higher inflation pressures (via food and commodity prices) and more growth headwinds pose additional policy challenges – especially in Europe/Central Europe. Sell-side economists have already started to cut their 2022 growth forecasts for the region. Right now, the revisions are modest – about 1% or so – but if military operations last longer and affect more territory, we should expect sharper growth downgrades. The prospect of wider budget deficits and worsening debt metrics – against the backdrop of large-scale refugee inflows – further complicates the picture. Central European currencies have underperformed emerging markets (EM) peers by a wide margin so far this year, as has local currency debt. Persistent currency weakness can increase pressure on regional central banks to hike more – this scenario is now reflected in market rate expectations for the next six months. An alternative is to step up interventions in the FX market – an option used today by Poland's central bank.

Low-Income Countries Debt Relief

The final point we would like to make today is the potential negative impact on lower-income countries, some of which could be hit by the triple whammy of higher commodity prices, trade disruptions (for wheat importers from Ukraine) and lower foreign direct investment/loans from Russia. Would the debt relief initiative for lower-income countries – which was put in place during the pandemic – have to be revisited in the coming weeks? Stay tuned!

Originally published by VanEck on March 2, 2022. For more news, information, and strategy, visit the Beyond Basic Beta Channel.

PMI – Purchasing Managers' Index: economic indicators derived from monthly surveys of private sector companies. A reading above 50 indicates expansion, and a reading below 50 indicates contraction; ISM – Institute for Supply Management PMI: ISM releases an index based on surveys of more than 400 purchasing and supply managers, in both the manufacturing and non-manufacturing industries; CPI – Consumer Price Index: an index of the variation in prices paid by typical consumers for retail goods and other items; PPI – Producer Price Index: a family of indexes that measures the average change in selling prices received by domestic producers of goods and services over time; PCE inflation – Personal Consumption Expenditures Price Index: one measure of U.S.
inflation, tracking the change in prices of goods and services purchased by consumers throughout the economy; MSCI – Morgan Stanley Capital International: an American provider of equity, fixed income and hedge fund stock market indexes, and of equity portfolio analysis tools; VIX – CBOE Volatility Index: an index created by the Chicago Board Options Exchange (CBOE) that shows the market's expectation of 30-day volatility, constructed using the implied volatilities of S&P 500 index options; GBI-EM – JP Morgan's Government Bond Index – Emerging Markets: comprehensive emerging market debt benchmarks that track local currency bonds issued by emerging market governments; EMBI – JP Morgan's Emerging Market Bond Index: JP Morgan's index of dollar-denominated sovereign bonds issued by a selection of emerging market countries; EMBIG – JP Morgan's Emerging Market Bond Index Global: tracks total returns for traded external debt instruments in emerging markets.

The information presented does not involve the rendering of personalized investment, financial, legal, or tax advice. This is not an offer to buy or sell, or a solicitation of any offer to buy or sell, any of the securities mentioned herein. Certain statements contained herein may constitute projections, forecasts and other forward-looking statements, which do not reflect actual results. Certain information may be provided by third-party sources and, although believed to be reliable, it has not been independently verified and its accuracy or completeness cannot be guaranteed. Any opinions, projections, forecasts, and forward-looking statements presented herein are valid as of the date of this communication and are subject to change. The information herein represents the opinion of the author(s), but not necessarily those of VanEck. Investing in international markets carries risks such as currency fluctuation, regulatory risks, and economic and political instability. Emerging markets involve heightened risks related to the same factors as well as increased volatility, lower trading volume, and less liquidity. Emerging markets can have greater custodial and operational risks, and less developed legal and accounting systems, than developed markets. All investing is subject to risk, including the possible loss of the money you invest. As with any investment strategy, there is no guarantee that investment objectives will be met and investors may lose money. Diversification does not ensure a profit or protect against a loss in a declining market. Past performance is no guarantee of future performance.
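The futures arithmetic quoted earlier in the note is easy to reproduce: the number of standard 25bp hikes priced in is simply the priced-in tightening divided by 25bps. A minimal Python sketch using the figures from the note; the calculation is generic back-of-the-envelope arithmetic, not a VanEck methodology:

```python
# Convert futures-implied tightening (in basis points) into an
# approximate number of standard 25bp hikes, as referenced above.

STANDARD_HIKE_BPS = 25

def implied_hikes(priced_in_bps: float) -> float:
    """Approximate number of 25bp hikes implied by priced-in tightening."""
    return priced_in_bps / STANDARD_HIKE_BPS

# 26-27bps priced for March is roughly one full 25bp hike.
print(f"March meeting: ~{implied_hikes(26.5):.2f} hikes priced in")
# Five hikes over 2022 correspond to about 125bps of tightening.
print(f"Full-year 2022: {implied_hikes(5 * STANDARD_HIKE_BPS):.0f} hikes = {5 * STANDARD_HIKE_BPS}bps")
```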
https://www.etftrends.com/tactical-allocation-channel/russiaukraine-war-collateral-damage/
- We expect to see some progress on Mozambique's large pipeline of power projects over the coming years, as the country seeks to take advantage of its hydro, gas, solar and coal resources to expand power capacity, widen access and generate revenue through power exports.
- Hydropower and coal-fired power plants in particular are gaining traction, supported by foreign investment and aid from China and Japan, while the country also plans to build out its transmission grid infrastructure with development financing.
- Many of the power projects will be delayed or struggle to move beyond the planning stages due to the government's ongoing fiscal problems and the high risks associated with operating in the Mozambican market.

We are beginning to see some progress on a range of power plant and transmission grid projects in Mozambique and expect activity in the sector to pick up over the coming years. Mozambique has abundant natural resources in the form of coal, natural gas, solar and hydropower potential, which it is seeking to utilise for power generation. The country already has large established coal mines and proven hydro resources, while it is in the process of developing its natural gas resources, with production slated to begin in 2022. Despite these abundant natural resources, Mozambique's power capacity is currently low, with power consumption per capita at an estimated 364kWh in 2019, below most other Southern African countries, and just 40% of the population connected to the national grid. There is also an incentive to increase power generation in order to export to neighbouring countries, which can be profitable for the government, although this objective is somewhat undermined by the fact that several other Southern African countries also intend to export power regionally, potentially leading to supply gluts.

Access To Electricity Still Lagging Southern Africa & Global Average - Electrification Rate (% of population)

There is consequently a strong demand case for expansion of the power supply, and this is being addressed with a number of new projects in the pipeline. Many of these projects fall under the government's 'Energy For All' programme, launched in March 2019 and supported by the World Bank, which aims to provide access to power for all citizens by 2030 through expansion of the grid as well as construction of new power plants. We are sceptical that this objective will be achieved in full, and none of the projects discussed below are yet factored into our power generation forecasts as we await concrete progress on construction. Nevertheless, we expect to see some progress on associated power projects over the next decade, with some of the recent developments including:

- July 2019 - A consortium comprising Ncondezi Energy, China Machinery Engineering Co and the Swiss unit of General Electric signed a joint development agreement for an integrated 300MW coal-fired power project and coal mine in Tete province. The project, which will link a coal mine to a thermal power plant, is estimated to have an eventual capacity of 1.8GW, developed in phases.
- August 2019 - Construction of the Metoro solar photovoltaic plant in Mozambique is expected to start later in 2019. The 41MW facility will be located in the Ancuabe district in Cabo Delgado province. Estimated to cost USD76mn, the project will be built by Neoen.
- August 2019 - China Energy Engineering Corporation's subsidiary Somagec Mozambique announced plans to build a 200MW coal-fired power plant in Nampula province.
Coal extracted by Vale Moçambique will be used as feedstock for the facility.
- September 2019 - The Mozambican government signed financing agreements with various foreign institutions for the Temane Regional Electricity Project (TREP), a transmission line between Maputo and Temane. Financiers include the African Development Bank (USD33mn), the World Bank (USD300mn), the Islamic Development Bank (USD99.7mn), the government of Norway (USD24mn) and the OPEC Fund for International Development (USD36mn). Electricidade de Moçambique is the project owner, and Globeleq, eleQtra and Sasol are other project partners.
- September 2019 - Construction of a 400kV power transmission line from Caia, in Sofala province, to Nacala-a-Velha in Nampula, in Mozambique, is expected to start before the end of 2019. Phase one, which involves the construction of a nearly 370km transmission line to Alto Molócuè, Zambézia, is backed by a USD200mn loan from the Islamic Development Bank.

Hydropower & Coal Projects Gaining Traction
Mozambique - Power Plant Project Pipeline By Fuel Type, USDmn

The prospects for success of these projects are improved by the backing of foreign governments, development financiers and multinational firms. In particular, financiers and contractors from China and Japan are involved in several power projects – providing concessional loans and grants as well as expertise to see projects through to completion. We would highlight that companies such as China's State Power Investment Corporation and China Energy Engineering Corporation and Japan's Sumitomo Corporation are major multinational firms with experience delivering power projects in emerging markets. The involvement of these companies in Mozambique's power projects is consequently a positive sign. In addition, the country's success in enlisting development financiers such as the World Bank and the AfDB to support power projects will increase the prospects for success – providing cheap financing and facilitating private sector involvement. We have previously highlighted that development financiers would drive transmission and distribution infrastructure development in SSA, and the recent developments show that Mozambique will be part of this trend.
| Project Name | Project Type | Value (USDmn) | Size | Unit | Companies | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Chemba Hydropower Plant, Sofala | Hydropower | 2,552 | 1,000 | MW | Government of Mozambique, Electricidade de Mocambique | At planning stage |
| Mphanda Nkuwa Hydropower Plant, Zambezi River, Tete | Hydropower | 2,000 | 1,500 | MW | IMPACTO, Insitec Investments SA, International Development Association (IDA), COBA, Standard Bank, Electricidade de Mocambique, Camargo Correa | At planning stage |
| Temane Gas-fired Power Plant, Inhambane | Gas | 1,300 | 400 | MW | eleQtra, Globeleq, Sasol, Electricidade de Mocambique | At planning stage |
| Lupata Hydroelectric Dam, Tete | Hydropower | 1,072 | 600 | MW | Cazembe Holding Ltd, Hydroparts Holding, Rutland Holding, Sonipal, Electricidade de Mocambique | At planning stage |
| Cahora Bassa Norte (Cahora Bassa Phase II), Zambezi River, Tete | Hydropower | 700 | 1,245 | MW | Redes Energeticas Nacionais, Government of Mozambique | At planning stage |
| Ncondezi Coal-fired Power Plant Phase I, Songo, Tete | Coal | 600 | 300 | MW | State Power Investment Corporation, Ncondezi Coal Company | At planning stage |
| Temane (Inhambane) - Maputo Transmission Line Project | Power Lines | 551 | 400 | kV | Development Bank of Southern Africa, OPEC Fund for International Development, African Development Bank (AfDB), World Bank, Islamic Development Bank (IDB), Government of Norway, Government of Mozambique, Electricidade de Mocambique | Project finance closure |
| Somagec Mozambique Coal-fired Power Plant, Nacala-a-Velha, Nampula | Coal | 355 | 200 | MW | China Energy Engineering Corporation (CEEC) | At planning stage |
| Caia (Sofala) - Nacala (Nampula) Transmission Line | Power Lines | n/a | 400 | kV | Electricidade de Mocambique, Islamic Development Bank (IDB) | At planning stage |
| Moatize Coal-fired Power Project, Tete | Coal | n/a | 600 | MW | Vale, ACWA Power International, Electricidade de Mocambique, Whatana Investment Group, Mitsui, GS Engineering & Construction Corporation | At planning stage |

Nevertheless, progress on these projects is likely to run into a variety of obstacles present in Mozambique's difficult operating environment. None of these projects has yet entered the construction phase, and we highlight that delays are likely for all of the developments, while some may not make it past the planning stages. One of the major risks to project success is the Mozambican government's weak fiscal position, with the hidden debt crisis in 2016 leaving the country with a high debt/GDP ratio and locked out of international financial markets. Even when gas production begins in 2022, a large proportion of the revenues generated will be directed towards debt servicing rather than capital spending. This will make it difficult for the government to provide financial support to power projects, leaving the sector heavily reliant on foreign investment and aid, which may not be sufficient to get projects off the ground. Any power generation project will have to reach a power purchase agreement with the national electricity company, Electricidade de Moçambique, and there are likely to be issues surrounding tariffs and non-payment which could deter independent power producers (IPPs).

High Risks Deter Investors And Cause Delays
Mozambique - Operational Risk Scores

There is also potential for problems with securing feedstock for power plants.
Even though Mozambique has domestic resources of coal and gas, IPPs will need to reach agreement with mining companies and gas producers, which may prove difficult as selling feedstock domestically is likely to be less profitable than exporting. In addition, we see further potential for delays to power projects from a variety of legal, environmental and security risks. In particular, coal and hydropower projects often run into significant local and environmental opposition due to their adverse impact on the surrounding area. It will also prove difficult to attract development financing for coal-fired power projects, as multilateral agencies such as the World Bank pivot away from supporting high-polluting power developments in favour of renewables projects. The weak rule of law and persistent security threats from separatist groups and Islamist militants in Mozambique add further complications for any multinational firm considering investing in the country, and will likely hinder progress on some planned power projects. This report from Fitch Solutions Country Risk & Industry Research is a product of Fitch Solutions Group Ltd, UK Company registration number 08789939 ('FSG'). FSG is an affiliate of Fitch Ratings Inc. ('Fitch Ratings'). FSG is solely responsible for the content of this report, without any input from Fitch Ratings. Copyright © 2021 Fitch Solutions Group Limited. All rights reserved. 30 North Colonnade, London E14 5GN, UK.
https://www.fitchsolutions.com/infrastructure/mozambique-power-developments-gain-traction-risks-abound-12-09-2019
Exeter Resource Corporation (NYSEMKT:XRA)(TSX:XRC) ("Exeter" or the "Company") announces that its previously scheduled and announced Annual General Meeting of Shareholders to be held on Thursday, June 22, 2017 (the "Meeting") is cancelled in anticipation of the expiration on June 20, 2017, of the offer by Goldcorp Inc. to acquire 100% of the common shares of the Company.

About Exeter Resource Corporation

Exeter is a Canadian mineral exploration company focused on the exploration and development of the Caspiche project in Chile. Caspiche is well located in Chile's Maricunga district, which has good infrastructure and is in close proximity to other large-scale mining operations and projects in development.

On behalf of Exeter Resource Corporation

EXETER RESOURCE CORPORATION
Wendell Zerb, P. Geol
President and CEO

For further information, please contact:
Wendell Zerb, President and CEO, or Rob Grey, VP Corporate Communications
Tel: 604.688.9592 Fax: 604.688.9532 Toll-free: 1.888.688.9592
Suite 1660, 999 West Hastings St.
Vancouver, BC Canada V6C 2W2
[email protected]

Safe Harbour Statement – This news release contains "forward-looking information" and "forward-looking statements" (together, the "forward-looking statements") within the meaning of applicable securities laws and the United States Private Securities Litigation Reform Act of 1995, including in relation to management's assessment of the benefits to shareholders of the proposed transaction with Goldcorp, anticipated mailing and meeting dates, timing for completion of the transaction, the Company's belief as to the potential significance of water discovered and the potential to utilize the desalinated water secured under option, the timing and completion of a new preliminary economic assessment or other studies for the advancement of Caspiche, including a production decision on the oxide project, the potential to establish new opportunities for the advancement of Caspiche, results from the 2014 PEA including estimated annual production rates, capital and production costs or expected changes to such costs, water and power requirements and metallurgical recoveries, expected taxation rates, potential for securing water rights and adequate water and potential approval of water extraction, potential for reduced power costs, potential to acquire new projects and expected cash reserves. These forward-looking statements are made as of the date of this news release. Readers are cautioned not to place undue reliance on forward-looking statements, as there can be no assurance that the future circumstances, outcomes or results anticipated in or implied by such forward-looking statements will occur or that plans, intentions or expectations upon which the forward-looking statements are based will occur. While the Company has based these forward-looking statements on its expectations about future events as at the date that such statements were prepared, the statements are not a guarantee that such future events will occur and are subject to risks, uncertainties, assumptions and other factors which could cause events or outcomes to differ materially from those expressed or implied by such forward-looking statements.
Such factors and assumptions include, among others, the receipt of all shareholder and regulatory approvals, no undue delays with respect to the transaction, effects of general economic conditions, the price of gold, silver and copper, changing foreign exchange rates and actions by government authorities, uncertainties associated with negotiations and misjudgments in the course of preparing forward-looking information. In addition, there are known and unknown risk factors which could cause the Company's actual results, performance or achievements to differ materially from any future results, performance or achievements expressed or implied by the forward-looking statements. Known risk factors include risks associated with failure to complete the transaction; risks associated with project development, including risks associated with the failure to satisfy the requirements of the Company's agreement with Anglo American on its Caspiche project, which could result in loss of title; the need for additional financing; operational risks associated with mining and mineral processing; risks associated with metallurgical recoveries; risks associated with operating in areas subject to drought conditions and scarcity of available water sources, power availability and changes in legislation affecting the use of those resources; fluctuations in metal prices; title matters; uncertainty and risks associated with the legal challenge to the easement secured from the Chilean government; uncertainties and risks related to carrying on business in foreign countries; environmental liability claims and insurance; reliance on key personnel; the potential for conflicts of interest among certain officers, directors or promoters of the Company with certain other projects; the absence of dividends; currency fluctuations; competition; dilution; the volatility of the Company's common share price and volume; tax consequences to U.S. investors; and other risks and uncertainties, including those described herein and in the Company's Annual Information Form for the financial year ended December 31, 2016, dated March 24, 2017, filed with the Canadian Securities Administrators and available at www.sedar.com and filed with the SEC as part of the Company's annual report on Form 40-F, available at www.sec.gov. Although the Company has attempted to identify important factors that could cause actual actions, events or results to differ materially from those described in forward-looking statements, there may be other factors that cause actions, events or results not to be as anticipated, estimated or intended. There can be no assurance that forward-looking statements will prove to be accurate, as actual results and future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward-looking statements. The Company is under no obligation to update or alter any forward-looking statements except as required under applicable securities laws.
Cautionary Note to United States Investors – Exeter is required to describe the mineral resources associated with its properties utilizing Canadian Institute of Mining, Metallurgy and Petroleum ("CIM") definitions. "Measured mineral resources", "indicated mineral resources" and "inferred mineral resources" are defined in, and are required to be disclosed pursuant to, Canadian regulations; however, these terms are not defined terms under the United States Securities and Exchange Commission's Industry Guide 7 and normally are not permitted to be used in reports and other documents filed with the SEC. Investors are cautioned not to assume that any part or all of the mineral deposits in these categories will ever be converted into SEC Industry Guide 7 compliant mineral reserves. "Inferred mineral resources" have a great amount of uncertainty as to their existence and as to their economic and legal feasibility. It cannot be assumed that all or any part of an inferred mineral resource will ever be upgraded to a higher category. Under Canadian rules, estimates of inferred mineral resources may not form the basis of feasibility or pre-feasibility studies, except in rare cases. Disclosure of "contained ounces" in a mineral resource is permitted disclosure under Canadian regulations. However, the SEC normally only permits issuers to report mineralization that does not constitute "mineral reserves" by SEC Industry Guide 7 standards as in-place tonnage and grade, without reference to unit measures. Accordingly, information contained in this press release or referenced herein containing descriptions of mineral deposits may not be comparable to similar information made public by U.S. companies subject to the reporting and disclosure requirements under the United States federal securities laws and the rules and regulations thereunder, including SEC Industry Guide 7.

NEITHER THE TSX NOR ITS REGULATION SERVICES PROVIDER (AS THAT TERM IS DEFINED IN THE POLICIES OF THE TSX) ACCEPTS RESPONSIBILITY FOR THE ADEQUACY OR ACCURACY OF THIS NEWS RELEASE
Source: exeterresource.com
https://investingnews.com/daily/resource-investing/precious-metals-investing/gold-investing/exeter-provides-update-2017-annual-general-meeting-shareholders/
1. About this page

This page is designed to help financial institutions (FIs) quickly familiarise themselves with the topic of climate change mitigation and adaptation and to help identify potential risks and opportunities for transactions they finance. It should be read alongside the ‘Integrating Climate Risks into Governance and Risk Management Frameworks’ page to understand and manage the impact of climate change at both the FI and client level. It is not intended to be a detailed technical guidance document.

Climate change mitigation is a reduction in emissions of greenhouse gases (GHG) into the atmosphere, or the absorption of them from the atmosphere. Climate change adaptation is the process of adjustment to actual or expected climate and its effects, through moderating or avoiding harm; resilience is the capacity of a system (natural or human) to cope with a shock, disturbance or hazardous event.

In terms of mitigation, this page identifies tools to help quantify GHG emissions associated with business activities and highlights the opportunities from adopting resource efficiency improvements. Regarding adaptation, this page aims to help FIs better understand climate change and weather-related risks to a transaction being considered and its supply chain, to help improve the quality of the credit from a holistic risk perspective. Sector-specific climate change risks and opportunities are also addressed in the respective Sector Profiles and the E&S Topics: Resource Efficiency and the Circular Economy. Given the improving understanding of climate change and its impacts, this briefing note will be updated periodically in light of emerging science.

- Additional considerations

Additional technical guidance and resources are provided at the end of this page and in the Resources Section. This page provides an overview and general guidance. FIs should carefully consider each company based on its specific characteristics and circumstances, including scale, location, technology, management capacity and commitment, and track record. Risks, impacts and opportunities relating to a particular company or sector can also change over time for several reasons (e.g. changes in the applicable laws and regulations or in the type of the company’s activities or assets). FIs may need to engage external experts in some situations (see ‘Advice for FIs’ section below).

2. Introduction

Climate change will continue to have increasingly significant environmental, social and economic impacts. Global temperatures are projected to continue rising, with consequential changes in weather patterns, rising sea levels and increased frequency and intensity of extreme weather events. It will also lead to changes in precipitation and freshwater availability. The impact of climate change is exacerbated in emerging markets due to their reduced capacity to adapt to or recover from climate-related shocks.

Climate change is already shaping the competitive context within which all companies operate by affecting the availability of, and demand for, resources, products and services, and the performance of day-to-day business operations, physical assets and supply chains. It also presents opportunities: for new low-carbon / resource-efficient products and services that help mitigate climate change, and, through improved understanding of climate change risks, for building resilience into business operations. Climate change will not only impact a company’s own assets but also the natural resources and infrastructure upon which the company depends.
Energy, water, transport and communications infrastructure may all be impacted by flooding or extreme events, and the ongoing reliability of these services will also be affected, with consequential impacts for company operations. While some companies are starting to understand the full strategic implications of climate change, most are approaching it at a relatively superficial level. Corporate action tends to be driven primarily by a desire to reduce energy and regulatory compliance costs. The resulting emphasis on tackling direct emissions (mitigation) – and thus on reducing the impacts of business – has detracted from a consideration of the impacts climate change will have on business and the need to adapt over the longer term. While a number of companies are now looking to the future and predicting more regulation, increasing (and volatile) energy costs, and carbon pricing – and are looking to future-proof themselves against these trends – these represent only a subset of the risks that climate change poses. Floods, storms, and rising sea levels will physically impact corporate assets. Brands and reputations will be harmed (and some strengthened) by responses to climate change, and high emitters may face stranded-asset and/or legal risk over time.

3. Why financial institutions and their clients should address this topic

- Risks for the business

Failure to understand potential climate change risks can result in a range of direct and indirect negative impacts on businesses, such as:
- Increased regulatory requirements and regulatory costs (potentially as a direct result of a country’s commitment to international climate change negotiations).
- Additional capital expenditure associated with asset damage, decreased asset performance or the upgrade of facilities to be more climate-resilient.
- Decreased efficiency and performance of infrastructure; for example, reduced access to cooling water could require plants to reduce or shut down operating capacity.
- Disruption to supply chains, particularly from extreme events, especially agricultural supply chains.
- Increased insurance costs as the cost of climate impacts is better understood, possibly making some activities uninsurable, or prohibitively expensive.
- Stranded asset risk and portfolio risk if there is a high level of exposure to sectors with high climate-related risks, such as carbon-intensive energy generation.
- Reputational risk as consumer pressure on ‘high-carbon’ products / industries grows.
- Potential legal challenges against carbon-intensive companies that could be seen to have materially contributed to the impacts of climate change.

- Opportunities for the business

In some cases, companies can generate positive revenue streams and other business benefits from active management of climate change risks and from improving adaptive capacity and resilience:
- Low-carbon energy generation will form a significant part of the global approach to mitigating climate change. Climate finance and carbon market funding may enable an organisation to create market advantage from cleaner energy provision.
- Implementing energy and water efficiency measures to reduce consumption and improve operational efficiencies and resilience to any changes in energy and/or water supply.
- Taking climate change effects into account in the specification and design of new infrastructure (e.g. ports) may reduce operating costs (including the cost of repair) and also the costs of retrofit.
- Understanding supply chain exposure and taking early action to manage these risks will enable organisations to better withstand climate shocks and ultimately outperform less prepared competitors.
- Opportunities for new energy-efficient and lower embodied-carbon products will increasingly become a differentiator in the marketplace. Business models built around products whose demand rises with increasing temperatures or changing rainfall patterns (e.g. rainfall harvesting and water storage systems) will also perform better relative to their peers.
- Increased consumer awareness of climate change and of lower embodied-carbon products and services will create positive brand value for companies that are ‘low-carbon’ leaders.
- Access to additional financing, including climate finance and the carbon markets.

4. Advice for financial institutions

See the CDC Environmental and Social Checklist, as it contains questions and tips to help FIs assess the E&S aspects of clients who may be vulnerable to climate change. FIs should ensure that, at a minimum, companies’ management systems are designed to be compliant with local laws and regulations. In many cases, local regulations may not be fully aligned with good international industry practice (GIIP). FIs should assess companies’ alignment with international standards and, where appropriate, develop Action Plans to ensure that any gaps are addressed within a reasonable time frame. Where climate change risks and impacts are evident, companies should be able to demonstrate that they have implemented management plans in accordance with GIIP.

FIs and companies should understand the main climate change risks facing an organisation, which may include supply chain risks (CDC E&S Topics: Supply Chains). In some instances, an assessment of climate change risks should be conducted. The assessment can be relatively simple (reviewing the type and frequency of extreme weather events and considering the impact on business performance), but in sectors and locations exposed to significant climate change risks, a more detailed assessment involving external advisors may be required.

For credit lines to clients whose activities significantly contribute to climate change (e.g. sectors with large GHG emissions) and/or present opportunities for climate change mitigation, potential mitigation measures should be assessed and, where appropriate, implemented. In such cases FIs should be cognisant of a country’s Nationally Determined Contributions (NDCs). Monitoring and reporting of GHG emissions generated and avoided may also be required.
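To make the quantification step concrete, here is a minimal sketch of activity-based GHG accounting in the spirit of the Greenhouse Gas Protocol (listed under ‘Further resources’ below). The emission factors and activity figures are illustrative placeholders, not official values, and a real assessment would cover many more emission sources.

```python
# Illustrative only: activity-based GHG accounting, where
# emissions = activity data x emission factor, summed across sources.
# The factors and figures below are invented placeholders.

EMISSION_FACTORS_KG_CO2E = {
    "diesel_litres": 2.68,         # Scope 1: fuel combusted on site
    "grid_electricity_kwh": 0.45,  # Scope 2: purchased electricity
}

def footprint_tonnes(activity_data: dict) -> float:
    """Sum activity data x emission factors and convert kg to tonnes CO2e."""
    kg = sum(EMISSION_FACTORS_KG_CO2E[k] * v for k, v in activity_data.items())
    return kg / 1000.0

client_activity = {"diesel_litres": 12_000, "grid_electricity_kwh": 350_000}
print(f"Estimated footprint: {footprint_tonnes(client_activity):,.1f} t CO2e")
```

Tracking such an estimate year on year gives the FI and the client a baseline against which emissions reduction measures can be monitored and reported.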
- Climate change mitigation

Sectors and activities that present opportunities for GHG reductions and energy and water efficiencies include:
- Agriculture and Aquaculture (effective water resource management and water efficiency interventions to reduce inputs)
- Healthcare (water and energy efficiencies to reduce operational costs; deployment of renewable energy technologies)
- Infrastructure (opportunities to set the standards for green buildings and low-carbon construction and reduce water and/or energy costs)
- Forestry and Plantations (forestry will form a significant part of the global approach to mitigating climate change; carbon market/REDD funding may be available for avoided deforestation and reforestation)
- Power Generation, Transmission and Distribution (energy efficiency improvements reducing input costs and obviating the need for additional infrastructure)

Sectors and activities that are particularly vulnerable to climate change include:
- Agriculture and Aquaculture (changes in temperature and increased incidence of extreme weather may change the productivity or viability of crops)
- Healthcare (temperature changes leading to a wider distribution of disease vectors)
- Infrastructure (damage and operational interruption, and higher maintenance costs)
- Forestry and Plantations (impacts on the productivity or viability of plantations and increased incidence of plant/tree diseases)
- Power Generation, Transmission and Distribution (reduced operational efficiencies and increased transmission and distribution line losses)

FIs should take into account the following when considering a transaction:

- Climate change policy: FIs are encouraged to implement CDC’s Climate Change Strategy in their portfolio companies in order to proactively assess climate change risks and opportunities and incorporate these factors into their strategies. This is further covered in the Value Add section on integrating climate change into FIs’ Risk Management and Governance structures.

- Assessing climate change risks – geography and sector risks: FIs should identify potential climate change-related risks. As appropriate, FIs shall undertake a systematic review of how climate change risks could potentially affect the geographical location of the business and associated business activities, including the supply chain, and of whether the business sector and the individual business are particularly vulnerable. Climate-related risks should be assessed during due diligence and material risks addressed before providing the facility, wherever possible. In some instances, an action plan to avoid (wherever possible), mitigate and/or manage and monitor climate change-related risks shall be agreed between the FI and its client.

Broadly, FIs should assess whether a business is located or has activities in areas likely to experience and be vulnerable to:
- Extreme weather events (heatwaves, droughts, storms etc.)
- Changes in precipitation and freshwater availability
- Changes in temperature
- Sea level rise

For example, is a potential credit to an agricultural client located in an area expected to experience reduced precipitation that could result in reduced yields? Is an infrastructure project under consideration located in an area likely to experience sea level rise and increased flood risk? Local, regional or national meteorological agencies may be able to provide historical weather information to help identify particular risks. A media review may also help identify past weather events experienced in a particular geography.
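As a hypothetical illustration of this geographic screening step, the sketch below flags loan-book exposures in countries with high climate-vulnerability scores, in the spirit of the country-level indices discussed next. Every client name, score and exposure figure is invented.

```python
# Hypothetical portfolio screen: flag exposures in countries whose
# climate-vulnerability score exceeds a chosen threshold.
# All data below is invented for illustration.

vulnerability = {"Country A": 0.62, "Country B": 0.41, "Country C": 0.55}

loan_book = [
    # (client, country, exposure in USD millions)
    ("AgriCo", "Country A", 25.0),
    ("PortCo", "Country B", 60.0),
    ("PowerCo", "Country C", 40.0),
]

THRESHOLD = 0.50  # scores above this trigger enhanced due diligence

flagged = [(client, country, exposure) for client, country, exposure in loan_book
           if vulnerability[country] > THRESHOLD]
at_risk = sum(exposure for _, _, exposure in flagged)

for client, country, exposure in flagged:
    print(f"Flag {client} ({country}): ${exposure}m exposure")
print(f"Total exposure in higher-vulnerability countries: ${at_risk}m")
```

A screen like this is only a first pass; flagged transactions would then warrant the more detailed, business-specific assessment described below.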
Country-level National Communications and Nationally Determined Contributions (NDCs) provide information on existing and anticipated climate change, including, in some cases, information by region. At a country scale, the University of Notre Dame Global Adaptation Initiative Index (ND-GAIN Index) provides data sets that summarise a country’s vulnerability to climate change and its readiness to improve resilience, and is a useful tool for assessing portfolio risk. Whilst this information is high level, it can be used to help identify whether a particular location where a business may operate is likely to be increasingly exposed to climate change risk. The KfW Development Bank and the Climate Service Center Germany (GERICS) have also developed country-level Climate-Factsheets which present projected climate change in a condensed manner. Sector-specific resources are provided at the end of this Briefing Note (see ‘Further Resources’).

- Business specific risks

In terms of climate change adaptation at an individual business level, it is helpful to identify whether a business could:
- be at risk of flooding, or contribute to flood risk for neighbouring communities?
- compromise the supply of water to external users, or be at risk of water shortages?
- affect the ability of local communities and the supply chains on which they depend to adapt to climate change?
And if so, what are the opportunities to address these? FIs may wish to engage external consultants to support such assessments.

- Exploring climate change mitigation opportunities

Climate change mitigation options to be explored by companies and investors may include, but are not limited to, alternative project locations, adoption of renewable or low-carbon energy sources, climate-smart agricultural, forestry and livestock management practices, the reduction of fugitive emissions and the reduction of gas flaring. Monitoring a client’s GHG emissions may help identify emissions reduction opportunities and, consequently, operational cost savings through reduced inputs. Reporting and verifying monitored data is also becoming increasingly important to international lenders, and in an increasingly carbon-sensitive world, proactively reducing GHG emissions may also facilitate access to climate and carbon finance markets. FIs can commission energy and water audits to identify resource efficiencies and help increase business resilience to potential future changes in water or energy availability. Requiring certifications such as IFC EDGE will help deliver operational energy and water savings and reduced embodied energy (and inputs) in construction. For further information refer to E&S Topics: Resource Efficiency and the Circular Economy.

- Exploring climate change adaptation opportunities

It is impossible to predict what climate change or weather-related events will occur at any particular time or location. However, within the tenor of a credit provided by an FI, it is prudent to assume that the most likely risks will relate to existing vulnerabilities and to business systems that are sensitive to climatic factors. FIs should also recognise that climate change risk will change over time, and periodic re-assessment of the risks and vulnerabilities of a facility should be undertaken with a view to understanding, managing and mitigating risks throughout the lifecycle of the facility.

- Legislation and regulatory change

FIs should ensure that companies actively monitor relevant environmental legislation related to climate change.
While some countries currently have climate change legislation, the Paris Agreement (2015), a universal climate agreement, has provided an increased focus on the need for a more prescriptive regulatory environment. The Paris Agreement requires all parties to the United Nations Framework Convention on Climate Change (UNFCCC) to agree to a long-term goal of keeping the increase in global average temperature to well below 2°C above pre-industrial levels, and to aim to limit the increase to 1.5°C. To achieve this, each Party has prepared Nationally Determined Contributions (NDCs), which set out details of the emission reductions the country will undertake and other action plans covering areas such as adaptation to climate change and how the country intends to transition to a low-carbon economy.

5. Further resources

- Further information and guidance
- Greenhouse Gas Protocol
- Platform Carbon Accounting Financials Carbon Accounting Methodology
- International Finance Corporation (IFC) 2012 Performance Standard 3: Resource Efficiency and Pollution Prevention
- IFC Climate Risk and Financial Institutions: Challenges and Opportunities
- IFC Climate Risk and Business: Practical Methods for Assessing Risk
- ISO 14064 Greenhouse gases
- IFC Enabling Environment for Private Sector Adaptation.
https://fintoolkit.cdcgroup.com/es-topics/climate-change/
The Group is exposed to a number of risks in the markets it operates across. The Group Board considers the risks to the business and the adequacy of internal controls with regard to the risks identified at every Board meeting. It formally reviews and documents the principal risks to the business at least annually.

Risk management structure

1. Identify risk
The Board has overall responsibility for monitoring the Group’s systems of internal control, for identification of risks and for taking appropriate action to prevent, mitigate or manage those risks. The Board continually assesses and reviews the business and operating environment to identify any new risks for consideration.

2. Assess risk
A detailed schedule of risks is considered at each Board meeting under the following categories: macro-economic and political, continuity and disruption, trading and product, operational and supplier, accounting and internal controls, legal and regulatory, and external investment and performance. These risks are graded against criteria of likelihood and potential impact in order to identify the key risks impacting the Group, visualised as a heat map (a minimal illustrative sketch of this grading step appears at the end of this section).

3. Mitigate risk
The Board seeks to ensure that the Group’s activities do not expose it to significant risk. The Group’s aim is to diversify sufficiently to ensure it is not exposed to risk of concentration in product, market or channel.

4. Update risk register
The risk register is updated at each Board meeting. The Board meets formally at least five times each year.

5. Review and evaluate risks
The Board and senior managers are all responsible for reviewing and evaluating risk. The Executive Directors meet at least monthly to review ongoing trading performance, discuss budgets and forecasts and consider new risks associated with ongoing trading. Feedback from these meetings regarding changes to existing risks or the emergence of new risks is then provided to the Board.

Principal risks and uncertainties

Risk: Whilst there is general optimism regarding the world economy, retail conditions remain challenging with uncertainty around Brexit. Any adverse conditions in the retail sector would have a detrimental impact on trading.
Mitigation: The Group monitors and maintains close relationships with its key customers and suppliers to be able to identify signs of financial difficulties early, in order to prevent or limit any potential losses. Customer orders and sales trends in major markets are constantly reviewed to enable early action to be taken in the event of sales declining. The general economic factors affecting the Group during the period are discussed further in the Chief Executive’s Statement on pages 5 to 7 of the Report and Accounts for 2018 and the Financial Review on pages 20 to 21.
Change: Increase

Risk: The Group faces strong competition in most of the major markets in which it operates. This presents a risk of losing market share, revenue and profit.
Mitigation: The risk is managed by ensuring that high-quality and innovative products are brought to market, maintaining strong relationships with key customers and ensuring the Group is aware of local market conditions, trends and industry-specific issues and initiatives. This enables the Group to identify and address any specific matters within the overall business strategy.
Change: No change

Risk: Skilled senior managers and personnel are essential in order to achieve the strategic objectives of the Group.
Failure to recruit and retain key staff would present significant operational difficulties for the Group.
Mitigation: Existing staff are provided with relevant training and career progression to improve motivation. The Group has a clearly defined recruitment policy which ensures that new employees meet the required standard and experience for each position. Management also seeks to ensure that personnel are appropriately remunerated and that good performance is recognised.
Change: No change

Risk: The Group’s purchasing activities could expose it to over-reliance on certain key suppliers or markets and, as a result, inflationary pricing pressure. Production is split between UK factories and outsourced supply, which allows the Group to mitigate some of the risk presented by suppliers.
Mitigation: For the manufacturing process conducted in the UK, the Group ensures that key raw materials are available from more than one source to ensure continuity and competitive pricing of supplies. For the sourcing process, suppliers are carefully selected and the Group seeks to maintain sufficient breadth in its supplier base such that the risk remains manageable. The Group also ensures that all intellectual property rights are retained and easily transferable should an alternative supplier be required.
Change: No change

Risk: Financial risk is wide-ranging and covers capital management, credit risk, currency risk and liquidity risk. The risks presented in these areas include the failure to achieve business goals, potential financial losses caused by default, reduction in profitability due to currency fluctuations, insufficient funds to complete daily business functions and a consequent threat to the going concern basis of the organisation.
Mitigation: Details of the Group’s approach to the management of these risks and the systems in place to mitigate them are covered in the financial risk management objectives in note 32 on pages 86 to 89 of the Report and Accounts for 2018.
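As a minimal illustration of the grading step described above, the sketch below scores each risk as likelihood multiplied by potential impact and buckets the result into heat-map bands. The scales, thresholds and register entries are hypothetical, not the Group’s actual criteria.

```python
# Illustrative only: grading risks by likelihood x impact and bucketing
# the scores into heat-map bands. Scales and thresholds are hypothetical.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"minimal": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def grade(likelihood: str, impact: str) -> str:
    """Combine the two scores and map them to a heat-map band."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 15:
        return "high (red)"
    if score >= 8:
        return "medium (amber)"
    return "low (green)"

register = [
    ("Adverse retail conditions", "likely", "major"),
    ("Loss of key personnel", "possible", "moderate"),
    ("Supplier concentration", "unlikely", "major"),
]
for name, likelihood, impact in register:
    print(f"{name}: {grade(likelihood, impact)}")
```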
https://www.portmeiriongroup.com/our-business/corporate-governance/risk-management
Understanding Finance Risks

When you take out a loan, you take on certain risks, and these are called finance risks. Generally, every loan involves some risk, though not every kind of risk. One example is currency risk, which relates to the currency used in the transaction. It can result from monetary policy, supply and demand, or governmental action. It is crucial for finance providers to understand these risks before offering the loan; if the loan is not repaid on time, the risks could lead to losses for the finance provider.

Another risk is bankruptcy. This happens when an organization does not have the resources to repay its debt obligations. Two of the biggest examples were the collapses of Enron in 2001 and Lehman Brothers in 2008. Although there have been many bankruptcy cases in recent years, the number is down nearly three quarters from the highs of 1987. Defaults are not the only type of finance risk, however. There are many different types of risk, including credit and operational risk.

A business can’t afford to ignore the risk of financial fraud. It must also consider the legal risk of failing to comply with the law. There are many potential risks in finance, so being aware of the risks associated with each can be beneficial to a company. As long as a company monitors its risks in an appropriate manner, it can plan ahead and pivot if needed. Unmanaged finance risks can lead to a high number of issues and should be carefully considered.

The United States Department of the Treasury’s recent National Risk Assessments highlight some of the biggest illicit finance risks facing the U.S. financial system. Whether the risks are due to money laundering, terrorist financing, or proliferation financing, these assessments can provide insight into the risks involved. The document identifies key areas for government policymakers and the ESAAMLG team to focus on. Further, it details the importance of assessing the risks associated with Kenya, a regional financial hub in Africa, and South Sudan.

There is increased risk associated with trade financing when international trade is involved. There are numerous differences in laws and regulations, and foreign currency exchanges must be considered. Moreover, many risks are associated with insolvency, dilution, and fraud. Insolvency risks are not limited to trade finance; they also interact with issues such as political instability and economic crises. A country can be particularly susceptible to such a crisis if it restricts the movement of goods.

Fortunately, the methods for assessing finance risks are as varied as the tools used to measure them. There are two main ways to think about the risks that finance companies face. In the former, risk is a measure of how much uncertainty a financial firm is willing to accept in exchange for a financial gain. In the latter, it is a matter of determining which risk factor is the most important. You can use quantitative methods to determine the probability and severity of a financial crisis or an upcoming event.
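As one concrete example of such a quantitative method, the sketch below computes expected loss, a standard credit-risk measure defined as probability of default times loss given default times exposure at default. The input values are illustrative.

```python
# Expected loss: a standard credit-risk quantity.
# EL = PD x LGD x EAD. Inputs below are illustrative, not market data.

def expected_loss(pd_: float, lgd: float, ead: float) -> float:
    """PD = probability of default, LGD = loss given default (fraction),
    EAD = exposure at default (currency units)."""
    return pd_ * lgd * ead

# e.g. a 2% one-year default probability, 45% loss severity, $1m exposure
el = expected_loss(pd_=0.02, lgd=0.45, ead=1_000_000)
print(f"Expected loss: ${el:,.0f}")  # -> Expected loss: $9,000
```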
https://tradeinbullmarket.com/understanding-finance-risks/
WASHINGTON – The Overseas Private Investment Corporation (OPIC) today announced it committed $3.8 billion in financing and insurance across 111 new projects in fiscal year 2017 to support economic development in emerging markets. OPIC’s global portfolio reached a record $23.2 billion, across 90 developing countries. OPIC, the U.S. Government’s development finance institution, marked the 40th straight year as a self-sustaining U.S. agency that operates at no net cost to taxpayers and generates money for deficit reduction. In 2017 OPIC generated $262 million for the U.S. Treasury. That works out to more than $965,000 per employee, based on OPIC’s 271-member staff. Over OPIC’s history, the agency has generated $8.6 billion for deficit reduction.

“OPIC’s growing portfolio and our financial success underscore the strength of our model of investing in development. The projects we support are having a tangible, positive impact in people’s lives around the world, while supporting American business and U.S. foreign policy objectives, and generating financial returns,” said Ray W. Washburne, OPIC President and Chief Executive Officer.

Highlights from 2017 include:
- A strong focus on American foreign policy priorities. In 2017, OPIC was an active investor in conflict-affected regions from the Middle East to Latin America’s Northern Triangle to Eastern Europe. Shortly after being confirmed as OPIC President, Ray Washburne traveled to Ukraine on his first official delegation, where he reiterated OPIC’s support for the region. OPIC later committed political risk insurance to a project that will bolster energy security in Ukraine through the construction of a nuclear fuel storage facility in the Chernobyl Exclusion Zone. Currently about one-third of OPIC’s global portfolio is in conflict-affected regions. OPIC’s strong focus on these regions underscores its understanding that stable economies promote stable societies.
- A commitment to investing in the world’s women. The global female economy represents a market that is more than two times the size of India and China combined. In 2017, OPIC committed to multiple projects to increase support for women in the developing world and unlock the multi-trillion dollar investment opportunity they represent. New projects committed to empower female entrepreneurs include financing to India’s Yes Bank for lending to women-owned small and medium businesses, and financing to Mongolia’s XacBank to support female entrepreneurs in that country.
- Ongoing support for American businesses in emerging markets. In 2017 OPIC worked to help American businesses of all sizes invest in emerging markets to bring innovative solutions to major world challenges from poverty to insufficient electricity, while also helping these businesses enter some of the world’s fastest-growing markets. OPIC’s American business partners include AES Corporation of Arlington, Virginia, which is helping build a 100 MW solar plant in El Salvador, and Noble Energy of Houston, which is developing the Leviathan oil and gas field that will transform the energy landscape in the Middle East.

###

The Overseas Private Investment Corporation (OPIC) is a self-sustaining U.S. Government agency that helps American businesses invest in emerging markets. Established in 1971, OPIC provides businesses with the tools to manage the risks associated with foreign direct investment, fosters economic development in emerging market countries, and advances U.S. foreign policy and national security priorities.
OPIC helps American businesses gain footholds in new markets, catalyzes new revenues and contributes to jobs and growth opportunities both at home and abroad. OPIC fulfills its mission by providing businesses with financing, political risk insurance, advocacy and by partnering with private equity investment fund managers. OPIC services are available to new and expanding businesses planning to invest in more than 160 countries worldwide. Because OPIC charges market-based fees for its products, it operates on a self-sustaining basis at no net cost to taxpayers. All OPIC projects must adhere to best international practices and cannot cause job loss in the United States.
https://nextbillion.net/news/opic-announces-3-8-billion-new-commitments-fiscal-year-2017/
for their Sales, Rental, Service & Repair division. Based in Luxembourg and reporting to the CEO of the division, the Corporate Development Manager will be in charge of actively supporting the strategic expansion plans of the group, from identifying opportunities in the different markets through to the integration of newly acquired businesses or the development of strategic partnerships.

Major responsibilities:
- Lead the company’s strategy process in close collaboration with the CEO and Vice-President Finance.
- Conduct analysis and report on emerging trends, markets and products.
- Identify gaps in performance and highlight areas of opportunity in the different markets.
- Detect opportunities to further develop the business.
- Evaluate the business risks involved, taking into consideration legal and regulatory requirements.
- Locate or propose business partnerships by contacting potential acquisition partners: discovering and exploring opportunities, examining risks and potential, and estimating partners’ needs and goals.
- Develop strategy and provide overall financial/valuation and transaction support for mergers and acquisitions and other strategic initiatives.
- Support the due diligence review process in collaboration with the Finance team to ensure all merger and acquisition activities meet financial and business objectives.
- Support the integration process of newly acquired companies.
- Update job and industry knowledge by participating in educational opportunities, reading professional publications, maintaining personal networks, and participating in professional organizations or events.
- Cross-country and cross-cultural project management.
- Support the change management process.
- Act as an “internal consultant”, playing the role of a sparring partner towards the organization.
- Lead projects to improve the efficiency of the organization.
- Perform any other duties as required.

Profile requirements:
- Master’s degree or MBA in a business-related subject, with 5 to 10 years of experience in business expansion activities with operational responsibility.
- Experience in an audit or consulting company, as well as in merger and acquisition processes, is mandatory.
- Experience in cross-cultural projects with an operational background would be an asset.
- Capacity to analyze, summarize, structure and present information in a variety of forms and formats.
- Fluency in English; knowledge of Spanish, German and/or Dutch would be considered an asset.
- Excellent analytical, communication and negotiation skills.
- Highly motivated and target-driven.
- Open-minded, creative, people-oriented, with strong management skills.
- Capacity to prioritize and organize work.
- Enthusiastic, with a can-do attitude; a team player.
- Ability to work autonomously and to travel in line with business requirements.
- Strong development potential.

Are you interested in this opportunity? Please click the button below to apply to this offer and send us your resume with a short introduction.
https://lu.hudsonsolutions.com/jobs/46-corporate-development-directo-m-f-ref-ae10-777
Best Way to Enhance Your Problem-Solving Abilities

Mathematical aptitude tests evaluate your ability to draw meaningful conclusions from large amounts of data using mathematical or algebraic reasoning, rather than testing your ability to work out percentages, ratios, counts or similar numerical operations by rote. The cornerstone of such assessments is therefore your problem-solving aptitude rather than deep knowledge of mathematics. Some ways to improve your problem-solving ability for arithmetical reasoning tests are as follows:

Practice reasoning test questions
To enhance your problem-solving skills, it is important to practice answering the sorts of questions that are likely to appear in your competitive exams. When you work through practice materials, do not just guess at the answer; think it through and try to arrive at the right answer yourself. Make sure you analyze the logic and relevant concepts needed to reach the solution. Once you understand a given type of challenge, practice the idea on related sets of questions and see if you can work them out quickly.

Practice multiple-step solutions
To move ahead in your aptitude skills, you need regular exposure to the kinds of questions that cannot be solved by applying a single formula: challenges that require you to work through several levels of calculation or mental computation to reach the final answer (a short worked example appears at the end of this article).

Improve in your weak areas
To get ahead in your preparation, use books aimed at competitive exams to diagnose any deficiency in your knowledge, mental arithmetic or problem-solving. Weakness in any of these areas may limit your ability to solve the questions and ultimately lower your score. To improve your performance, strengthen your mental arithmetic and problem-solving skills and work through the sorts of papers that may appear in your official numerical aptitude exam. If you identify a weak point, try some books on aptitude to address it. Stay calm and prepare for the exam with strong aptitude and problem-solving skills.
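Here is the promised worked example of a multiple-step question. The question itself is a hypothetical illustration; the point is the step-by-step working, not the specific numbers.

```python
# Hypothetical multi-step aptitude question, worked step by step:
# "A price is raised by 20%, then the new price is cut by 10%.
#  What is the net percentage change?"

price = 100.0            # start from a convenient base of 100
step1 = price * 1.20     # step 1: +20%                  -> 120.0
step2 = step1 * 0.90     # step 2: -10% of the NEW price -> 108.0
net_change_pct = (step2 - price) / price * 100

print(f"Net change: {net_change_pct:+.0f}%")  # +8%, not +10%
```

The common trap is applying the second percentage to the original price; working in explicit steps avoids it.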
https://www.magazinesworld.org/best-way-to-enhance-your-problem-solving-abilities/
Konica Minolta is transforming into a digital company with insight into implicit challenges, which is to say that it uses digital technology to identify and solve implicit challenges faced by customers, recognizing this as key to achieving sustainable growth. In so doing, Konica Minolta provides value to the professionals who work for its corporate customers, which it believes will lead to solutions to challenges that people and society face. This is why Konica Minolta is working to strengthen the abilities of its employees, ensuring that individuals thrive. The Group is working to enhance every member's productivity and creativity and to create environments where everyone stays motivated to grow. Konica Minolta recognizes that good physical and mental health is critical to employees’ efforts to maximize their potential. Accordingly, the Group implements a strong health management program and is pursuing work-style reform and human resource management in order to support employees’ efforts to create customer value and accelerate self-directed growth. By rolling out these initiatives globally, the Group seeks to maximize the potential of all of its professionals and enhance their capacity to create value.
https://www.konicaminolta.com/about/csr/social/human-capital/index.html
Today's leaders must be able to recognize and respond to dynamic global challenges and opportunities while tapping into the intrinsic power of their employees to deliver industry-leading solutions and services. These demands require leaders who have a high degree of emotional intelligence, are situationally aware systems thinkers, and have the ability to provide clarity and vision to the people they serve. The ICAgile-accredited Leading with Agility (ICP-LEA) learning experience is a 2-week program specifically designed to prepare leaders to respond to increasingly turbulent economic, technological, sociological and market forces, as well as global paradigm shifts that require new leadership thinking.

Topics Covered
- The path to business agility
- Agile Leader styles, key attributes and mindset
- Relationship Agility for engaging today's workforce
- Agile organizational design and culture
- Developing adaptive strategies and processes to support organizational agility
- Establishing outcome-based measurements that support change and drive business value

Duration
This is a 2-week remote leadership intensive designed to greatly enhance your leadership skills while minimally impacting your work schedule. We meet for five 3-hour sessions. Weekday sessions run from 6:00 to 9:00 pm and weekend sessions from 10:00 am to 1:00 pm. All times listed are Eastern Time (EST).

Schedule
Day 1:
- Why do organizations need agility? We will explore the role of the leader in today's changing world and discover the relationships between leadership and Business Agility
- Organizational capabilities needed in today's exponential world; a deep dive into organizational structures, with case studies to help uncover unconventional companies and how they thrive

Day 2:
- Personal Agility; the skills and characteristics that are important in today's world. You'll learn and practice skills in a small group setting, discovering more about your personal leadership style.
- Organizational Agility; continuing from the prior week, an exploration into organizational design

Day 3:
- Enabling a Learning Organization; what does a Learning Organization look like? Why is this important? We'll explore these topics and create strategies to take back to your company
- Emotional Intelligence; there's a lot of buzz lately about the importance of EQ, and some go so far as to say it's more important than IQ. This course includes a complimentary EQ assessment, and we use it to help determine areas of improvement

Day 4:
- Leadership Development; we explore development models, focusing on brain-based research that shows how adults learn best. The outcome is a better understanding of what makes you tick, and how you can set yourself up for success.
- Team Development; a leader's main role is to build high-performing teams. We discuss strategies that make this happen, and provide you with tools and a blueprint to apply in your context

Day 5:
- Strategic and Systems Thinking; leadership is all about systems thinking, and we explore various tools of the systems thinker, and how they're applied
- Tying it all together; your action plan!

What You Get
The ICP-LEA, Leading with Agility certification from ICAgile, the world's premier agile accreditation consortium. You also get free-and-forever membership in the Agile Meridian private Slack forum dedicated to our alumni. Here, you can collaborate with your class attendees and continue to learn and grow with your peers and your facilitators from Agile Meridian.
To facilitate this event, we will be using Zoom for live, online sessions; Mural for collaboration; and Slack for asynchronous communication of course announcements, homework, etc. Equivalent tools may be substituted at our discretion.
https://www.agilemeridian.com/icagile-leading-with-agility-remote-learning/
The most significant impact on culture and methods in 2021 is the disruption caused by COVID-19. We see what's needed for good remote work and the impact of bad remote work, how management practices are evolving, and the importance of people skills for technologists. Paying attention to ethical issues, diversity and inclusion, tech for good, employee experience and psychological safety is important.

- Sociotechnical Implications of Using Machines as Teammates
AI has become more than just a tool; it now merits consideration as an additional teammate. While this increases a project’s efficiency and technical rigor, AI teammates bring a fresh set of challenges around social integration, team dynamics, trust, and control. This article provides an overview of sociotechnical frameworks and strategies to address concerns with using machines as teammates.

- Who is on the Team?
Ahmad Fahmy and Cesario Ramos take the changes to the new Scrum Guide as an opportunity to explore what it means to be "on a team." They draw on research to create an ACID test to differentiate who is on the team and who isn't. They discuss different mental models around the idea of a team with the hope that you take this opportunity to discuss and elevate the roles within your organization.

- Improving Organizational Agility with Self-Management
This article presents "self-management" as a way to natively support agility, planting seeds that let both institutions and people thrive and benefit. Agility may go hand-in-hand with self-management as a way to shift mindsets and open a conversation to really find new ways of working in organizations.

- Adaptive Frontline Incident Response: Human-Centered Incident Management
The third article in a series on how software companies adapted and continue to adapt to enhance their resilience zeros in on the sources that comprise most of your company’s adaptive resources: your frontline responders. In this article, we draw on our experiences as incident commanders with Twilio to share our reflections on what it means to cultivate resilient people.

- Breaking the Taboo – What I Learned from Talking about Mental Health in the Workplace
Mental illness is a topic that does not get discussed openly very often. Many people concerned hide their own history for fear of being stigmatized, especially in the workplace. This is a story about how speaking openly about mental illness, even with your boss and co-workers, can help yourself and others. The author shares what she has learned from breaking the taboo.

- Q&A on the Book The Rise of the Agile Leader
The book The Rise of the Agile Leader by Chuck Mollor is a blueprint for leaders navigating change in the pursuit of success. Mollor shares his story of self-awareness, self-acceptance, and self-development, while demonstrating a leadership paradigm, a roadmap of what makes a great leader, and what organizations can do to develop great leaders.

- Q&A on the Book Responsive Agile Coaching
Niall McShane has written the book Responsive Agile Coaching, aimed at people who are coaching individuals, teams and organisations in new ways of working to help guide others in adapting to changing circumstances and responding to new demands. He presents a model for coaching based on knowing when to tell clients the answer versus when to guide them to find the answer for themselves.
- Meeting the Challenges of Disrupted Operations: Sustained Adaptability for Organizational Resilience The first article in a series on how software companies adapted and continue to adapt to enhance their resilience starts by laying a foundation for thinking about organizational resilience. It looks at what organizations can do structurally during surprising and disruptive events to establish conditions that help engineering teams adapt in practice and in real time as disruptive events occur. - Q&A on the Book Retrospectives Antipatterns Using the familiar “patterns” approach, the book Retrospectives Antipatterns by Aino Vonge Corry describes unfortunate situations that can sometimes happen in retrospectives. For each situation, described as an antipattern, it also provides solutions for dealing with the situation; this can be a way to solve the problem directly or avoid similar problems in future retrospectives. - Changes in the 2020 Scrum Guide: Q&A with Ken Schwaber and Jeff Sutherland The Scrum Guide has been updated to make it less prescriptive, using simpler language to address a wider audience. These changes have been done to make Scrum a “lightweight framework that helps people, teams and organizations generate value through adaptive solutions for complex problems”. An interview with Ken Schwaber and Jeff Sutherland about the changes to the guide. - Moving from Agile Teams towards an Agile Organization For organizational agility, we need to improve the system for teams and individuals to thrive, instead of expecting them to change and fix the culture. This article explores some elements from a systemic point of view that are essential to create the right conditions for moving from agile teams towards an agile organization.
https://www.infoq.com/teamwork/articles/
In the manga, 基本 (base) is translated as "baseline" by VIZ Media. First mention is by Deneve, in reference to Miria's speed (agility) during the sword-match between Miria and Clare. For the general definition, see baseline.

For Claymore warriors, the six baselines appear on the Yoma War Record datasheets, listed in the "Ability" infobox together with the warrior's type, innate ability and learned technique. For awakened beings, five baselines appear on the datasheets.

Description

Classification
The Organization's rating of each Claymore warrior's fighting potential is broken down into six baselines. What is a baseline? The quantity of a native quality: the measure of an innate trait or ability, unenhanced by Yoma power or modified by other means. The Yoma power baseline itself is a reserved/unused measurement on the datasheet. Abilities and techniques are usually founded on one baseline, though other baselines can be involved.

Awakened
Awakened beings' baselines are also given in the Yoma War Record.

Yoma power
Yoma power release can raise agility and strength performance above the baseline, or lower mental, sensing and leadership performance below the baseline. A high Yoma power reserve, however, may accompany high baselines in other areas. Contrary to popular perception, Claymore warriors do not need Yoma power to enable the other baselines. Usually, Yoma power is used to enhance agility and strength on an emergency basis only. During the 7-year timeskip, the Ghosts display the use of five baselines without resorting to Yoma power.

History
The history and methods used by the Organization to measure the baselines of warriors and Awakened are unknown. Dae displays an ability to sense Yoma energy emanating from Priscilla's arm during an Executive meeting.

References
Tankōbon Claymore volumes cited are VIZ Media (en-us) editions, unless otherwise noted. Manga scenes (chapters) not yet translated cite Shueisha tankōbon (ja) editions. Manga scenes not yet published in tankōbon form cite Jump SQ (ja) editions. Fragments of Silver Omnibus (総集編 銀の断章 Gin no Danshou) 1–3, Shueisha, are only available in Japanese. Anime scenes (episodes) cited are FUNimation (en-us) editions, unless otherwise noted.
https://claymorenew.fandom.com/wiki/Category:Baseline
This comes in the face of ongoing talent challenges, whereby executives globally indicated widespread concerns about shortages of critical skills in the market and the high churn rate among employees. With many changes being brought about in the working world over the past two years, navigating how you adapt may be a challenging feat. Following the previous year's research, which identified companies leading the way in creating a more agile workforce (aptly referred to as 'the Vanguards'), KellyOCG's newest survey, Calibrating to power the life-work shift, took a closer look this time at how the Vanguards are leading on the life-work shift. The new survey of 1,000 senior executives across 12 countries and 10 industries observed how leading businesses are recalibrating how they acquire the talent they need to win in today’s ultra-competitive markets. Setting the context, the study noted that Vanguards were those employers who actually saw employee wellbeing and productivity increase during the pandemic, and reported higher revenue growth – an indication their approach was giving them a competitive edge. In the 2022 research, the Vanguard groups were recreated using the same criteria and now make up 15% of the total sample, compared with 11% in 2021. The study examined what these firms are getting right and what others can learn from their approach, given that these firms are far more likely to say that their ability to recruit talent has increased. To gain a better understanding, the study identified four dynamics where the Vanguards are leading that will shape the future workforce.

Dynamic #1: Strengthening workforce agility
This raises the question: do organisations have a comprehensive strategy for bridging skills gaps, from hiring new talent to bringing in contingent labour? Do they have the capabilities for rapidly meeting the business’s people requirements?

Dynamic #2: Reinventing the employee experience
Do businesses make it a priority to improve the employee experience? Do they benchmark their employee value proposition against those of competitors? Do they engage employees in reinventing work?

Dynamic #3: Taking concrete action on diversity, equity and inclusion (DEI)
Are company-wide DEI strategies in place to attract talent from underserved communities? Are these employees supported and offered training opportunities throughout their careers?

Dynamic #4: Adopting the right technologies to empower today’s workforce
Does the organisation make use of best-in-class tools for managing talent, boosting collaboration, and enhancing employee experience?

The first half of the study looked into understanding the post-pandemic talent challenge. Having noted in the previous year that a quarter of US workers planned on looking for a new job when the threat of the pandemic decreased, that intent has played out over the past year in employers’ experiences of the 'Great Resignation': organisations are still grappling with an acute talent challenge in 2022. When asked to identify the top talent challenges they face, executives indicated widespread concerns about shortages of critical skills in the market and the high churn rate among employees. Yet the number one barrier to companies accessing the talent they need, alongside a lack of skills in the market, is a poor ability to hire contingent talent. Per the report, this indicates that businesses increasingly need to be able to rapidly recruit the skills they need on a non-permanent basis.
Another challenge identified was rising employee expectations. For many employers, there is a growing sense that employee expectations have changed fundamentally since the pandemic; this view is held by 37% of the executives surveyed (and 42% of the Vanguards). This is driving recognition of a need to redesign the employee experience within the next five years. In fact, many organisations have already started: 38% of Vanguards are rethinking leadership skills, in comparison to 23% of non-Vanguard firms. Meanwhile, 51% of Vanguards reported that leaders and employees are already re-inventing work together, to create a future that works for all (vs just 33% of non-Vanguards). Executive sentiment differs too: 61% of Vanguards’ executives say they are happy, compared with 38% among the rest. Further, 37% plan to stay at least two years (vs 27%), and only 5% are job hunting (vs 13%).

Mental wellbeing was another area of concern unearthed in the findings. In non-Vanguard companies, only 26% agree that their employer cares about their mental health, compared with 40% in Vanguard companies.

Exploring the possibility of hybrid working, it was noted that just 41% of executives in Vanguards and 30% in non-Vanguards say their organisation promotes flexible or hybrid options, despite the changes brought about by the pandemic. Fewer than half have definitive plans for returning employees to the physical workplace (35%). About one in five are in favour of mandating a "hard return" (20%), though 28% think the complexity of hybrid will eventually drive a requirement for most employees to come back on-site. While overall more view the impact of hybrid working as positive for organisational culture than negative, there are indications of unintended consequences: a significant number say that in-office employees are perceived as higher performers than remote/hybrid employees, and are more likely to be promoted. That risks creating division and undermining DEI efforts.

The second half of the report looked into the four dynamics and how organisations are calibrating to power the life-work shift.

Dynamic #1: Strengthening workforce agility
As touched on above, employers around the world face intense competition for talent. However, only a minority of firms are harnessing innovative approaches to improving workforce agility. Across the board, Vanguards are substantially more likely to have adopted these tools and tactics to improve their ability to work in new ways.

Dynamic #2: Reinventing the employee experience
Employers are continuing to face deep challenges in attracting and retaining talent, with fierce competition for roles at all levels of the market. While leaders say that a wide range of factors is contributing to the issues, they see the top causes as a lack of progression opportunities, uncompetitive pay and benefits, and poor life-work balance. Vanguards have a particular advantage in using contingent talent. They are more likely to have a clear strategic approach, and they are more likely to forecast an increase in the use of contingent talent in the next five years. KellyOCG also identified a lack of understanding among employees of the wider value of their work as a factor, which means that purpose is a vital tool. For Vanguards, the response is to redefine/redesign the employee experience and experiment with new ways of working (83% for both, compared with 68% and 62% respectively among non-Vanguards).
The data shows that, compared with other firms, Vanguards understand more acutely the importance of the employee experience in improving talent acquisition. This group is more likely to:
- Have started offering higher wages and better benefits (57% vs 38% of the rest)
- Provide better skills and development opportunities (66% vs 51%)
- Have improved the annual leave allowance (62% vs 45%)
- Have introduced policies such as salary transparency (54% vs 42%).

Interestingly, Vanguards are alert to their competition too, with 79% benchmarking compensation and benefits against competitors (vs 58%). Vanguards also seem to benefit from the commitment of their senior leaders to reinventing the employee experience — 57% say that at least one C-suite stakeholder has been very involved to date, almost double the rest (29%).

KellyOCG cautions that companies still face potential blind spots when it comes to understanding employee engagement, though: less than half implement measures to assess it. Vanguards are more likely to use the tools available, from employee surveys and focus groups to assessing company reviews on Glassdoor.

Dynamic #3: Taking concrete action on diversity, equity and inclusion (DEI)
The research also shows that Vanguards continue to be differentiated by their efforts to make DEI a reality in their organisations. Despite the high profile of DEI issues in the past two years in the US and around the world, the survey reveals that only about a third of firms have implemented innovative initiatives to improve DEI, and over half of executives (53%) say that leaders are failing to create an inclusive culture.

At Vanguards, meanwhile, leaders are substantially more likely to regularly engage with employees on DEI. Even so, many still recognise a need to go further, faster: Vanguards are more likely to agree that, in reality, their DEI strategy only pays lip service to the need to support underrepresented groups.

Looking into mental wellbeing support, the survey shows that many employers have taken action to support employees, but concerns remain. Across all firms, 27% say that the number of employees who took time off or left their jobs for mental health reasons has increased over the past year. Further, less than a third (30%) say they have a workplace culture in which it is acceptable to disclose mental health issues, while just 40% feel that their organisation offers adequate resources to support employees' mental health.

Meanwhile, Vanguards are going further to promote mental wellbeing across the workforce: 47% of this group say they quickly address behaviours that would be considered bullying or discriminatory (vs 31% of the rest), while 41% say employees are entitled to paid leave for mental health reasons that have not been formally diagnosed (vs 27% of the rest).

Dynamic #4: Adopting the right technologies to empower today's workforce
With the rapid pace of digitalisation, businesses have more options than ever for finding the right tools and technologies to power their talent management strategies. Even so, uptake remains limited: just 40% of firms overall have adopted data analytics tools that capture key metrics, for instance. There is a huge opportunity to accelerate the digitalisation of HR and talent processes to gain an advantage. The Vanguards understand the importance of technology adoption in improving workforce agility and productivity.
The research shows they are more likely to have adopted all of the relevant technologies, including data analytics, knowledge-sharing and pulse-survey tools, as well as dedicated training platforms.

In the era of hybrid working, another key dimension for technology is its potential for monitoring employee productivity. The survey shows that productivity-tracking tools have generally been positively received by employees – especially in Vanguard firms.