Philippines Case Study: Government Policies on Nutrition Education

The global and national food and nutrition situation indicates that more than 900 million people are hungry worldwide, yet more than 1 billion adults are overweight. In a study carried out by DOST-FNRI and Save the Children in 2013, Php 328 billion, or 2.84% of Gross Domestic Product, is lost due to child undernutrition, while around Php 1.23 billion is lost due to stunting-related grade level repetition. Cognizant of the malnutrition problem, the national multi-sectoral nutrition community formulated an integrated plan of action for nutrition, consistent with the global call to eradicate malnutrition. Commonly known as the Philippine Plan of Action for Nutrition (PPAN) 2017–2022, the plan is an integral part of the Philippine Development Plan 2017–2022. It is consistent with the Administration’s 10-point Economic Agenda, the Philippine Health Agenda, and the development pillars of malasakit (protective concern), pagbabago (change or transformation), and kaunlaran (development).

Major changes in our food system and eating environments over the past decades have been driven by technological advances, food and agricultural policies, and economic, social, and lifestyle changes. More processed and convenience foods are available in larger portion sizes and at relatively low prices. There are fewer family meals, and more meals are eaten away from home. Thus, policies and programs are extremely important to help people make healthful choices. In the Philippines, Republic Act (R.A.) No. 11037, known as the Masustansyang Pagkain para sa Batang Pilipino Act, aims to combat hunger and undernutrition among Filipino children. Under this law, the Department of Social Welfare and Development (DSWD) implements a supplemental feeding program for daycare children, while the Department of Education (DepEd) enforces the school-based feeding program.
Meanwhile, rising obesity rates among Filipino children and adults have motivated policy makers to implement policies that improve access to affordable, healthy foods and increase opportunities for physical activity in schools and communities across the country. One example is DepEd Order 13, s. 2017, on the Policy and Guidelines on Healthy Food and Beverage Choices in Schools and in DepEd Offices, for the promotion and development of healthy eating habits among the youth and DepEd employees. This order led to the subsequent issuance of local ordinances in some cities (Pasig and Quezon City). An excise tax on sweetened beverages (SBs) is one of the new taxes imposed under Republic Act (R.A.) 10963, or the Tax Reform for Acceleration and Inclusion (TRAIN) Law, which took effect on January 1, 2018.

The industry sector has also played its part in various nutrition education campaigns in the country. To name a few: NutritionSchool.ph was launched in support of a common passion for wellness and nutrition education; United for Healthier Kids (U4HK) was launched in 2014 to address the problem of child undernutrition; and other initiatives include the promotion of fortified milk drinking among school children through the Laki sa Tibay School Nutrition Education and Pamilyang Laki sa Tibay Community Nutrition Education programs.

While it is apparent that eliminating hunger and malnutrition is technically feasible, the challenge lies in generating the requisite political will, developing realistic policies, and taking concerted actions nationally and internationally. Action and advocacy by many stakeholders are needed to overcome these barriers. Past successes that can point the way forward include effective public health approaches to complex problems such as tobacco use, motor vehicle crashes, and occupational safety.
These successes provide a template for a healthier food system: address the consumer, the product (agricultural commodities, food), the environment (retailers, restaurants), and the culture (unhealthy eating, marketing). Strong government policy is crucial to achieve a healthy, equitable, and sustainable food system that benefits all.
https://www.nestlenutrition-institute.org/nniw92-brochure/philippines-case-study-government-policies-on-nutrition-education
Positive leverage arises when a business or individual borrows funds and then invests them at an interest rate higher than the rate at which they were borrowed. The use of positive leverage can greatly increase the return on investment beyond what would be possible if one were only to invest internal cash flows. For example, an individual can borrow $1,000,000 at an interest rate of 8% and invest the funds at 10%. The 2% differential is positive leverage that will result in income of $20,000 for the person, prior to the effects of income taxes.

However, leverage can turn negative if the rate of return on invested funds declines, or if the interest rate on borrowed funds increases. Consequently, positive leverage is least risky when both elements - the borrowing rate and the investment rate - are fixed, and most subject to variability when both elements are variable. In the latter case, an investor can find that investment returns swing wildly within a short period of time.

The best time to take advantage of positive leverage is when both of the following factors are present:

- The borrowing rate is much lower than the investment rate; and
- It is relatively easy to borrow funds

When such a "loose money" environment exists, expect speculative investors to borrow large amounts of cash. When the lending environment later tightens, expect increasing numbers of these investors to become insolvent as their positive leverage turns negative and they cannot support their liabilities. In a tighter lending environment, expect investors at a minimum to sell off their investments and use the resulting funds to pay back their highest-interest loans.
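The arithmetic behind the example above is simple enough to sketch in a few lines of Python (the function name and structure here are my own, purely for illustration):

```python
def leveraged_income(principal: float, borrow_rate: float, invest_rate: float) -> float:
    """Pre-tax income from investing borrowed funds.

    Positive when invest_rate exceeds borrow_rate (positive leverage);
    negative when the spread inverts.
    """
    return principal * (invest_rate - borrow_rate)

# The article's example: borrow $1,000,000 at 8% and invest at 10%.
print(round(leveraged_income(1_000_000, 0.08, 0.10), 2))  # 20000.0

# If the investment return slips to 7%, the same position loses money.
print(round(leveraged_income(1_000_000, 0.08, 0.07), 2))  # -10000.0
```

The second call shows how quickly the sign flips when the spread inverts, which is the scenario the article warns about when both rates are variable.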
https://www.accountingtools.com/articles/what-is-positive-leverage.html
A software vendor’s primary objectives are to attract new users, adapt to the expectations of existing users, and prevent user outflow, all crucial to maintaining the agility, competitiveness, and profitability of the business. Every time the user base grows, the developer is put under pressure, tasked with scaling the entire user platform to accommodate increased workloads while keeping the user experience (UX) response time within a fraction of a second. No easy feat, considering the development stacks commonly used today present significant trade-offs between system complexity and scalability. This imbalance can be resolved by an emerging kind of software technology and architecture.

The Trade-off Between System Complexity and Scalability

In the context of today’s conventional development platforms, the performance capacity of a single server runs out fast. Hence scaling often means scaling out: adding more machines, and more layers of complexity, to a multi-tiered network of different kinds of servers, along with performance enhancers like multi-node data redundancy, caches, and data grids. An increase in the user base leads to an uptick in the number of tiers and machines. This scale-out architecture comes with weighty drawbacks, including higher costs for hardware, software, and maintenance; increased complexity of development and system control; higher disaster risks due to more potential points of failure; greater risk of bottlenecks and implementation bugs; difficulty identifying the root of problems and subsequently fixing them; and the need for costly and mistake-prone inter-module integration and configuration. Together, these drawbacks affect time-to-market for new releases, user satisfaction, and, as a result, the vitality of the business. Adopting scale-out architecture can lead to a drop in performance, reliability, and data consistency, issues impossible to solve by just adding more hardware.
Even if a scaled-out solution contains no complex logic or heavy computations, the amount of hardware required to run it with an affordable UX response time can become ridiculously large. Unfortunately, this state of affairs reveals not the immaturity of systems implementers, as one might reasonably suspect, but rather a considerable fault in the approach itself. Multi-tier, scale-out, data-excessive architectures treat an effect, not its cause. They work well in circumstances like social networks, historical data storage, or overnight business intelligence, but fail in applications involving the management of valuable resources, including enterprise resource planning and line-of-business applications.

Collapsing the Stack: Solving the Problem

Luckily, instead of treating the effect, it is possible to address the root cause: the software development platform itself. Implementing the platform on updated fundamentals enables a new breed of applications, globally optimized for simplicity, performance, and modularity, as opposed to optimizing for local gains within multiple separate tiers. This vision is summarized by the concept of collapsing the stack. For software development, this means making the code concise, eliminating the glue code, simplifying the system architecture, and increasing the flexibility and scalability of the resulting solution. In business terms, this translates to improved agility, competitiveness, and reduced total cost of ownership. Implementing “the collapsed stack” means a shift from engineering the tiers and their integration to a focused application platform. All features from the tiers, like network communication, data persistence, or failover, are available as before, while the tiers themselves become either virtualized or replaced by simpler facilities. The shift doesn’t mean a move from the modularity of layers to a monolithic product.
Instead, the key differentiator is facilitating highly modular solutions without sacrificing performance, simplicity, or cost. Looking at the recent microservices movement, a collapsed stack offers a high-performing, open-ended node running a set of uploadable micro applications. Data integration, which is considered challenging for microservices, can be solved for micro applications by efficient in-memory data sharing.

A new breed of in-memory technology is a primary component on this track. Simply put, if the application and database are two disjoint entities, they have to signal a lot while sending data between each other. It takes eight minutes for light to travel from the Sun to the Earth, and 130 ms for a signal to get from Australia to the U.S. by wire. Even if the database and application run on the same machine, they still communicate, and thus signal, on a silicon chip. The laws of physics define a strict upper bound on the performance of multi-tier architectures. The solution is to minimize signaling by placing the parties as close together as possible. First-generation in-memory databases shifted from operating on data on disk to operating in memory while securing a log to disk, which is multiple orders of magnitude faster. Software platforms of a collapsed stack demonstrate a leap ahead by shrinking the database and application tiers into a single layer. Today, stored procedures are often used to put chunks of code closer to the database for performance. By collapsing the stack, it is no longer necessary to split logic between database and application code, since all applications running in the platform operate on physically the same data instances that the database owns. Thus, delivery of data from the database to the application is not needed. The consequence, seen in the real world, is millions of fully ACID transactions per second on a modest server.
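As a toy illustration of this single-layer idea (this is not the API of Starcounter or any particular platform; every name here is invented), application code can hold direct references to the very objects the store owns, so "delivering" data from database to application becomes a no-op:

```python
class InMemoryStore:
    """Owns the canonical data instances; applications get direct references."""

    def __init__(self):
        self._rows = {}

    def put(self, key, obj):
        self._rows[key] = obj

    def get(self, key):
        # Returns the object itself, not a copy serialized over a wire.
        return self._rows[key]


store = InMemoryStore()
store.put("account:1", {"balance": 100})

# "Application tier" logic: mutate the shared instance in place.
account = store.get("account:1")
account["balance"] += 50

# The store sees the update immediately; there is no delivery step.
print(store.get("account:1")["balance"])  # 150
```

Contrast this with a client-server round trip, where the same update would require serializing the row, shipping it across a process or network boundary, mutating the copy, and shipping it back. A production platform would of course add transactions, logging, and failover on top of this shared-instance model.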
The shrinking principle benefits other parts of the collapsed stack as well, by saving on the delivery of messages from web server to application server, on inter-process communication, on data redundancy, and on similar “middlemen.” In addition, the glue code that bound the layers together goes away, leaving pure business logic expressed in concise code.

Dealing with Legacy Code

Legacy code stands out as an obstacle for those looking to adopt the collapsed-stack application approach. Depending on the application platform chosen, the code must either be rewritten against a different API or language, or simplified; both increase clarity and shrink its size. It is an important investment in a company’s future: clean and concise code is easier to change and support. As Ken Thompson, designer of Unix, says, “one of my most productive days was throwing away 1,000 lines of code.” Any data- or performance-critical business is a strong candidate for collapsing the stack and adopting an in-memory platform. Likewise, any business demanding agility and performance is a good match. Such a platform can be used within any vertical, but especially within industries including banking, finance, retail, internet, telecommunications, and gambling/gaming.

Benefits of Collapsing the Stack

Depending on the platform chosen, the gains are a subset of the following: a fast, responsive, multiplatform GUI; improved business agility; a faster deployment cycle and instant module/application integration; a better technology learning curve, thanks to clean and concise code with no stack-related (glue) code; reduced development and maintenance complexity, with significantly lower implementation risks; strong security and data integrity guarantees from in-memory technology and thin clients; less hardware (lower memory-space demands) and more performance; improved reliability by eliminating single points of failure; improved data ownership through shared in-memory data; and rich integration capabilities and freedom from vendor lock-in through multiplatform support.
Getting Ready for the Future

The approaching Internet of Things era makes utilization of in-memory platforms and collapsed stacks unavoidable. To put things in perspective, moving from around seven billion human users to 60 billion users and devices means we can expect transaction loads to increase at least sevenfold. Now is the time to break through complexity and get up to speed with the future.

Dan Skatov is the head of development for Starcounter. Over the past 10 years he has served start-ups in the fields of high-performance computing, natural language processing, and artificial intelligence in the roles of co-founder, R&D head, researcher, and public speaker.
https://www.softwaremag.com/now-is-the-time-to-collapse-the-stack/
An Interview With Martin Wheeler

This article was originally published in UK Kung Fu magazine in 2000.

1. Martin, perhaps you could start off by giving us all an insight into why and when you originally got involved in martial arts?

I started training in the martial arts when I was around nine, in Judo at a local YMCA. Some of my friends had started training and I went with them. I really enjoyed the workouts even though I was far too young to really appreciate an art like Judo. The instructor must have had a great deal of patience to have dealt with us so effectively; we were an unruly bunch of kids.

2. What was it that attracted you to Kenpo Karate?

Again, a friend of mine, Iain Tozer, was training in it. He showed me some of the techniques and ideas and I wanted to know more. I was around sixteen at the time. I ended up going to the club in Paignton, Devon and was instantly hooked. I liked the sophistication of the art and was always attracted to its explosive nature. There were only a few clubs in the country at the time, but the level of instruction under teachers like Sean Cross, Mervin Ormand, Jackie McVicar and Gary Ellis was very high. Ed Parker, the art’s founder, visited England and taught seminars periodically; when I saw him teach and move I knew that was what I wanted to do with my time. I was not a personal student of Mr. Parker, just one of the many students in a seminar, but I studied everything I could on the man by reading books and watching videotapes.

3. Did working as a doorman help your understanding of Kenpo?

Yes, definitely. I started working as a doorman at 17 in the local clubs (and continued on and off for the next ten years, working in London and the USA), which forced me to look at Kenpo in very practical terms. For what appears at first to be a rather eclectic system, the more practical my requirements of it became, the more it seemed to have to offer.
Torbay proved to be surprisingly violent for a beach town, with a mix of high unemployment in the winters and a summer influx of young holidaymakers, football supporters (or pretty well just hooligans) and anyone else who decided to turn up, all thrown together at the night clubs. Many of the lessons I learned on the door showed me that even a “street fighting” designed system like Kenpo is fairly stylized compared to the reality of fighting.

3. Would you agree that Kenpo Karate is a well-balanced martial art?

It depends on what you mean by well balanced. Kenpo is one of the best Boxing-Jujitsu systems devised and is probably one of the most logical ways of looking at a martial art as a mechanical expression. Mr. Parker developed it by taking the most logical ways that you can strike using the body’s natural weapons (i.e., hands, elbows, fingers, knees, feet and so on), along with joint manipulations, and assembled about 150 interrelated base techniques. These techniques are designed to correspond to natural bio-mechanical power, combat and motion principles, which in turn are examples of the rules and principles developed in the nine Kenpo forms. If you look at it from this point of view it is well balanced, because the system completely contains itself. Kenpo is devised to be understood much the same way as you would learn a second language. The basics are the alphabet; the forms could be seen as a dictionary and reference books describing the grammar and structure of the language; and the techniques are specific examples of the language. All these components combine to take the practitioner from a mechanical stage to a level of fluency and spontaneity. This could be seen as a very well balanced method of learning and understanding an art. But from another point of view, Kenpo is really not primarily designed for weapons fighting or grappling. It does contain some of these aspects, but more as peripheral ideas.
The central theme of Kenpo is a hand-to-hand system. I am not saying it would not work in these situations, it definitely would, but it would be up to the practitioner to understand the types of environments they are working in for it to be genuinely effective. As Huk Planas often points out, Kenpo is not magic; you have to make it work. I think this was part of the brilliance of the founder, Ed Parker: he created a conceptually based fighting system rather than a purely technique-based system (even though it appears to be technique based on the surface). This method encourages the student to develop using bio-mechanical and combat rules and principles, then apply them to the situation, rather than relying on a specific technique. So if you can become comfortable with a specific environment, for example grappling, then you only need to adapt the rules to that environment rather than relying on having countless techniques to deal with these situations. The only thing I would say Kenpo really lacks is an internal health system, which is probably required of any art to be truly balanced. Kenpo is designed as an external boxing system.

4. Over the last ten years or so, compared to many other mainstream styles, Kenpo in Great Britain has seemingly received little if any media attention. Why do you feel this is the case?

I didn’t know that it hadn’t. I haven’t lived in Great Britain for over seven years, so it is difficult for me to comment on this. I would guess that it is maybe because Kenpo still requires a very full syllabus of study before a practitioner can reach the level of black belt. This can take a considerable amount of time, and even then the practitioner needs to be around a quality instructor to genuinely understand how the system works. If you are training in Kenpo in the UK then you are probably well established in the area where you earned your black belt. This probably contributes to a slow expansion of the system and a lack of media exposure.

5.
What was it that prompted you to up sticks and move to the USA?

I was training in London with Diane Wheeler when my friend Mark Waldron returned from living and training in the States. We both decided to try training over there. Even though Ed Parker had sadly passed away about two years before we left, Kenpo is basically an American-designed art and the best practitioners of it, teachers such as Frank Trejo, Lee Wedlake and others, live in the U.S. Luckily we both managed to meet and start training under an amazing teacher who is, in my opinion, the leading authority in the art, Richard “Huk” Planas. Of all the great instructors of Kenpo I have had the privilege to be around, I would have to say that Huk’s intellectual and physical grasp of the art is second to none. He is one of the world’s genuine masters of the arts, even though he would cringe to hear me say that. He has had a tremendous influence on my training. Meeting and training under Huk was definitely worth moving for.

6. Already having what many would describe as a fast and dynamic system, why did you feel the need to begin studying other systems such as Judo and Eskrima?

No pure fighting system has everything; I believe a smart practitioner should recognize a system’s weaknesses along with its strengths. As practical as Kenpo is as a combat system, it is also highly conceptual. Kenpo can be weak in its entry and trapping skills, especially against a weapon; the Philippine systems are excellent at these. Huk introduced me to Eskrima and showed me how the Philippine concepts of weapons fighting (and indeed fighting in general) merged perfectly with Kenpo’s sequential movement and combat principles. The Philippine and Indonesian systems are some of the most sophisticated and practical fighting methods that I have come across. A practitioner from any art would do well to develop along the path pioneered by the Filipino fighters.
A Kenpo practitioner’s knowledge of weapons fighting is usually pretty stunted, and there are few (if any) timing- and sensitivity-based Kenpo flow drills; the Philippine systems excel in these types of training methods. Knowledge of these types of approaches to training can greatly enhance a Kenpo practitioner’s practical understanding of their art. Kenpo practitioners in general train as stand-up fighters, and stand-up fighters tend to fare badly against grapplers unless they have some knowledge of grappling, so it seemed logical to study Judo and Jujitsu. The more I got into these arts, the more they influenced my concept of Kenpo and fighting as a whole. Understanding Kenpo conceptually and applying the art in reality is a major leap which, in my opinion, can only be bridged by experiencing actual combat; grappling is an excellent and relatively safe way of achieving this goal. I also studied boxing and Thai boxing to try to understand Kenpo better. Kenpo is basically a boxing “combination” type system in application, with the exception of any rules as to the targets and weapons used. You cannot genuinely fight using Kenpo in a training situation without seriously damaging or killing your opponent, so you need a spontaneous, free-flowing boxing application to understand how to read an opponent’s intention, ride blows and throw combinations in a real fight, however those combinations are structured. Understanding how to fight in these systems and then applying that intrinsic knowledge to your Kenpo is an extremely effective training regimen.

7. Do you feel that cross-training is just another fad?

I hope not. I believe it is the most insightful way forward for students to understand a system like Kenpo. Even Mr. Parker apparently cross-trained in Boxing, Judo and various other systems before he developed the full system. Cross-training is a superb way of understanding combat.
I think your body, mind and spirit have to have a good understanding of combat before they can relate to a system with that level of sophistication. One of the main problems with Kenpo as a whole is not the art (the art is a proven way of combat) but the training methods employed by many schools in understanding the art. When an art is developed which cannot be used in a full-contact training situation, such as Kenpo, then the art ultimately suffers. Obviously you should not have to “prove” yourself as a fighter to practice Kenpo, but you should at least understand within yourself how to fight in a genuine and spontaneous manner; this is very difficult without rigorous cross-training in a “boxing” system, a “weapons” system and a “grappling” system. Personally I would like to see Kenpo schools introduce cross-training into their syllabus of teaching; this would dramatically improve the level at which students and instructors alike understand the art and themselves as martial artists. Learning Kenpo the way it is currently being taught would be the same as learning boxing for, say, five years, using combinations on the bag as “techniques” and understanding all the theory and principles behind the action, but then sparring only using point-fighting semi-contact rules. I think most people would agree that this would be a fairly pointless exercise. The mind and the body are simply not designed to make the kind of leap needed to suddenly have the awareness and fortitude to use the combinations and spontaneously read the language of a fight in a real situation without having trained that way. Cross-training would also encourage a leap of innovation in the art as practitioners develop their own new ways of applying the combat principles in reality. At the end of the day it is all just theory until you put it into practice.

8. You’ve recently begun to get pulled towards the fascinating art of Systema (The System).
How did you discover this amazing art?

I met a master of the art, Vladimir Vasiliev, at a seminar organized by Lee Wedlake and was further encouraged to study it by my friend Al Mcluckie. After what I saw I started training under him in his school in Toronto whenever I can. I would have to say I am blown away by the concept of Systema, which is an internal Russian martial art. Vladimir and Mikhail Ryabko, whom I was lucky enough to meet recently, are incredible teachers of the martial arts. Their concept of the art and their teaching methods are quite simply amazing. I am not even entirely sure how they do what they do; I just know that it works. In a way I was searching for Systema without really knowing what I was looking for: the training regimen I was following was telling me to relax, stay in contact with the opponent, steer away from specific technique, keep in motion, allow my weapons to follow their own paths and let the body’s fluidity work for itself while encouraging the mind to intuitively strategize. But saying all that, I think if I had carried on down the path I was taking for the next 20 years, I still doubt I would have learned as much as I did in only my first week of training in The System under a teacher such as Vladimir.

9. From what I’ve seen of Systema it appears to contain just about every real art I’ve ever had the pleasure to encounter. What, in your view, Martin, makes it so special?

My view of The System is still rather limited, as I have only been training in it for a relatively short period compared to the other arts I have studied, although the training I have had so far has made a profound difference to my martial arts and, more importantly, to my life. I am convinced that any training in The System will change a person’s perception of a martial art. The System is special because it seems to work as much on a person’s consciousness as it does on their body.
There are no techniques, just exercises, concepts and spontaneous work which develop a profound sensitivity and relaxation in a practitioner’s movement and awareness. A student training in it is encouraged to teach the body to think for itself, using natural reactions as a base rather than mechanical blocking, parrying and slipping skills. Out of this total freedom of movement and lack of a defining framework, a student’s consciousness, energy and physical structure learn how to blend with and affect the consciousness, energy and physical structure of the opponent. The System works on all levels of human ability: the psychological, the physiological and the psychic. I think the body can be considered to be constantly out of balance, and only in “balance” for an instant, both mentally and physically, on a moment-by-moment basis. It is dealing with a tremendous amount of internal and external information. It is quite an achievement for an animal to stand on two feet and deal with the information that a human does; this fact, it seems, can as easily work against us as for us. It seems that it is not difficult to control this balance when you work at the body’s subconscious level. The subconscious, after all, is the level at which we naturally move. I think to contrive a movement, such as moving into a pre-determined technique, takes a logical act of will which guides the subconscious into a specific type of movement. This must imply that thought at some level is creating a logical sequence of events. This type of training can be honed to an extremely high level of spontaneity which can develop into a practical sequence of ideas that develop a system of martial arts. But it seems if you work slowly and very softly and with positive intention, you can teach your body to act and react extremely smoothly out of its own natural reactions, allowing “logical” intuition to guide your nervous system into action. Using your senses and “energy”
system to guide the body without a perceivable “lag” time created by a conscious thought process. A practitioner of The System is never searching for a technique to apply; instead they are taught to enhance the body’s natural sensitivity beyond physical contact into more of a state of intuition-driven empathy, to create spontaneous defensive (or offensive) movement. We are energetic beings and our energy fields reach out way beyond what we would consider to be our “physical” body. I think the best way I can describe this is the feeling you get when you enter a room and know that someone is looking at you with an intention of some sort; instinctively you turn and look to see who it is, because you want to rely on the senses that are most familiar to you in society: looking, touching, smelling, hearing. But if you allowed yourself to relax and “feel” the intrusion, then you would possibly start to develop the innate human ability to be intuitive with your senses. The training methods are designed to develop your intuitive nature with, and beyond, what you would consider to be the physical senses. A practitioner of The System is encouraged to see an opponent rather than just look at them, and to allow the body to act and react intuitively. The body is instinctively designed for the best methods of fighting, but we generally train to fight by imposing “techniques” upon it and relying on logic to apply these techniques at the appropriate moment. For this to become “instinctual” takes many years of training, and even then only a few are capable of achieving this level of freedom. Almost every martial artist trains with the goal of freedom of movement and reaction, but even those who manage to gain this are still maintained in some way by the framework of their system, which possibly creates a level of “blockage”.
I hope this is not construed as condescending in any way towards any particular system or martial artist; it is certainly not intended that way, as I am just making a conceptual observation. The System seems to be designed to work out of a level of profound relaxation and total freedom of movement. Working at a level of sensitivity that enhances a practitioner’s natural reactions allows the Systema practitioner to create a state of neuro-muscular blindness in an opponent. This state is achieved when an opponent enacts some form of attack, be it punching, kicking, grabbing, throwing, etc. When attackers attack, they are no longer in control of the attack; the movement is as instinctual as throwing a ball once you release into the toss. I mean you could not stop yourself from throwing the ball halfway through the movement if you have committed to the action; the conscious part of the brain does not work that fast. Nor does it in a fight: when someone genuinely commits to an attack such as a punch, they only really react again once that punch is blocked or lands. The contact tells the body to do something else; if that contact never comes, then the body goes into a temporary state of “neuro-muscular” blindness. A Systema practitioner practices to develop a level of freedom with his reactions that creates this sensation and produces spontaneous techniques based on moment-by-moment information being introduced to the nervous and psychic system. This leads to a very unusual and extremely effective defense system which capitalizes on the opponent’s tension, “blockages” and anatomical structure, and requires no real knowledge of the opponent’s martial arts or combat background, as the practitioner is only reacting out of what he or she feels. In fact, the more unusual the defense the better, as this also affects your opponent on a psychological level.
Then there is the energetic level, where a practitioner develops a sense of the opponent’s energy and learns to affect the opponent at that level. This is also extremely effective, as it seems possible to me that we negotiate the world on a subconscious level using this part of our senses, and when people fight they do so at a subconscious level. Subtle manipulation of the subconscious means controlling the action.

The System works with multiple opponents, on the ground, against or with weapons; the applications seem limitless. Even though it appears to work on all human levels of movement and perception, it works as a whole system, like an intricately woven ball of silk: it would be pointless to try to define one layer from another, its true beauty being in accepting it as a whole. To me, so far, it seems to be like moving using a single concept rather than applying technique to a specific situation. The System allows a practitioner to let his or her physical and energetic movement ride a wave of developed intuition rather than a more logical process.

I am simply amazed at how much information you give away about yourself just by the way you naturally stand. When you work out with someone like Vladimir or Mikhail, it becomes abundantly obvious how much information you are projecting about yourself and how it can be manipulated, mainly when you realize you are on your back when you could have sworn you were just standing a moment ago. Strangely enough, if you apply the concepts of The System to your life, it seems to have the same effect of relaxing you, healing your body and your training partners’ bodies, and fortifying the spirit. They say when they punch you they heal you, and trust me, they can strike with incredible power, but it is done with such a positive and natural energy that they are working to heal you by eliminating the tension from your body. No tension, no blockages of the energy flow through the body.
You always come away from a workout feeling better than when you started. It is a simply amazing art. It also has a very strong psychological, spiritual and physical health component; in fact these factors, I think, are more important than the purely physical components. Without a loss of ego and an understanding of how to develop positive energy in your life, it is hard to see how one could do more than scratch the surface of Systema. I know that these are the things I struggle with the most. But even just “scratching the surface” would still go far beyond most martial arts I have encountered on a physical level. The art itself, as Vladimir explained, is not really a martial art at all but a method of cleansing yourself and allowing the art (and your life) to come out of that. The intuitive method and training regimen present a very fast learning curve, and because you are working out of natural reactions it is extremely hard to forget. The System stands alone as a work of genius in my admittedly limited opinion, but just the concept of it can be used as a great enhancer for any art that you study. As you may have guessed, I highly recommend it.

10. You have some videotapes and a book which you produced last year, could you explain what they are about?

I have produced a fourteen-tape series on Kenpo as a fighting system called the “Kenpo Fighters Videotape Series”, which I produced before I really got into The System. The tapes cover everything from the basics through technique application of the concepts and principles of combat, as well as weapon defense, multiple attacks, locking and controlling, stand-up grappling and Kenpo, and ground fighting and Kenpo. The series covers a lot of ground with full-contact demonstrations. Everyone who has seen it so far has really liked it. I have also written a book called “The Kenpo Fighters’ Handbook”, which is a companion piece; I am still deciding how to publish that.
If anyone is interested in the tapes, they can see a full review of them by many of the top Kenpo practitioners at my web site www.ironmonkeyma.com. I must also give a plug for Vladimir’s tapes on Systema, which are superb and which I highly recommend, at www.russianmartialart.com.

11. Have you any future plans to visit the UK to conduct seminars?

I shall be coming over in April to do some seminars but don’t have anything planned beyond that at this time. I would be happy to share my experiences of the arts with anyone who is interested if they wish to contact me. I would also like to thank you for the opportunity to discuss some of these incredible arts with you, and for the great time I had at your school.
http://wheelersystema.com/2009/03/21/interview-with-martin-wheeler/
Scientific writing is an aspect of professional communication: writing about science.

History

Scientific writing in English started in the 14th century. The Royal Society established good practice for scientific writing. Founder member Thomas Sprat wrote on the importance of plain and accurate description, rather than rhetorical flourishes, in his History of the Royal Society of London. Robert Boyle emphasized the importance of not boring the reader with a dull, flat style.

Because most scientific journals accept manuscripts only in English, an entire industry has developed to help non-native English-speaking authors improve their text before submission, and using such services is now becoming accepted practice. This makes it easier for these scientists to focus on their research and still get published in top journals.

Writing style guides

Different fields have different conventions for writing style, and individual journals within a field usually have their own style guides. Some style guides for scientific writing recommend against use of the passive voice, while some encourage it. Some journals prefer “we” rather than “I” as the personal pronoun; note that “we” sometimes includes the reader, for example in mathematical deductions. Publication of research results is the global measure used by all disciplines to gauge a scientist’s level of success. In the mathematical sciences, it is customary to report in the present tense.
See also
- Academic publishing
- Citation styles
- Disseminating psychological knowledge
- EASE Guidelines for Authors and Translators of Scientific Articles
- GLISC
- Impact factor
- IMRAD structure (Introduction, Methods, Results and Discussion)
- Parenthetical referencing
- Peer review
- Scientific literature
- Scientific method
https://psychology.wikia.org/wiki/Scientific_writing
How many people have 5 rings in the NBA?

Robert Horry won seven championships (with three teams). Four players, Bob Cousy, Kareem Abdul-Jabbar, Michael Jordan and Scottie Pippen, won six championships each. One example from the list of five-time champions:

- Player: Larry Siegfried
- Seasons played: 9
- Championships won: 5
- Percentage: 56%
- Championship teams: Boston Celtics (1964, 1965, 1966, 1968, 1969)

Which NBA team has no rings?

Top 11 NBA Teams Without a Championship
- Brooklyn Nets. Brooklyn Nets is one of the NBA teams with no titles. …
- Charlotte Hornets. The legendary Michael Jordan owns the Hornets. …
- Denver Nuggets. …
- Indiana Pacers. …
- Los Angeles Clippers. …
- Minnesota Timberwolves. …
- New Orleans Pelicans. …
- Phoenix Suns.
https://baloncestoestepona.com/clubs/who-has-the-most-nba-rings-top-5.html
How do I know when I need help?

14 OCT 2020

When we are not feeling our best, there is a thin line separating what we can do for ourselves and what may need outside counsel and support. Some of us are so focused on staying strong and quiet about our mental health that we may not understand, or even notice, when we have crossed that line. Here we explore the ways in which we can self-assess and figure out when it’s time to speak up and ask for help.

Checking in and assessing

According to Lifeline WA DBTeen program coordinator and provisional psychologist Emily Parker, a good place to start is assessing whether your feelings are affecting your daily life. Health Direct has listed nine signs that something needs to be addressed for your mental wellbeing to be at its best: feeling stressed or worried; feeling depressed or unhappy; emotional outbursts; sleep problems; weight or appetite changes; being quiet or withdrawn; substance abuse; feeling guilty or worthless; and changes in behaviour. Remember, this does not mean that you are suffering from a mental illness, but it might mean that you are not functioning as well as you could. We all feel this way from time to time, but if you have been experiencing these feelings for an extended period, or you are regularly feeling anxious, stressed, or sad, then it may be a good time to look into professional help.

Self-care practices

So maybe you are at the point where you have just been feeling a little more stressed or anxious than usual. The term self-care has been thrown around a lot, and its meaning wrapped up in the culture of ‘treat yourself’, when really, at its core, it’s about focusing on what will make you the happiest and healthiest, and dedicating time every day to check in with yourself and your health needs. Self-care isn’t a one-size-fits-all solution, but here are a few practices that everyone can try.
The important thing to remember is not to engage in unhealthy habits, such as sleeping all day or drinking alcohol, that won’t help you feel better in the long run.

Needing more than self-care?

Life can sometimes be overwhelming and stressful, and it’s important to recognise that there is nothing wrong with realising that we need help to cope. According to the Black Dog Institute, “One in five (20%) Australians aged 16-85 experience a mental illness in any year”. Lifeline WA counsellor and provisional psychologist Nynke Vlietstra compares mental health to a physical injury: if our leg was cut, we would look after the wound (self-care), and if we thought it was broken, we would seek medical advice on how to best look after it (seeking professional support). Our mental health is no different, but we understand it can be daunting to make the call and set up an appointment. Here is a short guide to what happens when you make an appointment with your GP and what you can ask. Remember, your GP appointments are confidential, and seeking help is not about admitting a weakness but about being assertive and ensuring you are your happiest, healthiest self.

So many times in our busy, manic schedules we push our mental wellbeing to the back of our priority list, but if we are able to schedule in some time every day to check in with ourselves, and then either take steps to better look after ourselves or to seek further advice and support, we can become a happier and more resilient version of ourselves.

Other self-care ideas here.

Finding support in your area: www.lifeline.org.au/get-help/i-m-having-a-difficult-time

Someone to talk to now: Call 13 11 14 for support, to have someone to talk to and keep you safe.
https://wa.lifeline.org.au/resources/helpful-articles/how-do-i-know-when-i-need-help/
ASF-UK facilitated a training event on participatory design methodologies at the World Urban Forum in Naples, Italy. These methods were developed in the ‘Change-by-Design’ action-research workshops implemented by ASF-UK in Brazil and Kenya. The training attracted over 100 participants from a diverse range of backgrounds: residents, CBOs, NGOs, the private sector and academics, as well as managers of large-scale development projects and local and central government officials. It consisted of a short introduction to the overall concepts and approaches of participatory design, followed by working tables representing the three scales of intervention: city, neighbourhood and housing. At each of the tables, relevant tools were demonstrated, applied, and discussed by participants. A concluding plenary session drew together the key themes and lessons and highlighted next steps and ways forward.

The session demonstrated how participatory design can be of value in urban development, not only to build more responsive products, but also to build stronger and more resilient communities, engage citizens in a process of deepening democracy, and highlight the social construction of space. While there are numerous participatory design tools that can engage communities and marginalised or vulnerable groups in the design and planning of their environments, these have seldom been framed in terms of the wider goal of critiquing dominant modalities of urban development, which fail to challenge the structural conditions that perpetuate urban poverty and exclusion. A central mechanism for improving current urban development modalities, proposed and explored in this training session, is the linking of different scales of urban development: from the scale of the house (dwelling), to the community (neighbourhood), to the city (planning and policies).
Such an approach helps overcome, on the one hand, the limitations of addressing only immediate needs at the household level (improved housing quality), and on the other hand, the macro policies, institutions and structures at the city level that are often seen as divorced from the everyday reality of slum dwellers and the urban poor.

A follow-up step from the workshop is the production of an edited book, ‘Participatory Design for Urban Inclusiveness’, which will include chapters from key organizations and professionals on their tools, approaches and experiences of participatory design in practice, complemented by theoretical and academic contributions placing pioneering ideas of participatory design in wider discourses surrounding the social production of urban space. In addition, another action-research workshop is planned for Ecuador in 2013.

Compiled by Isis Nunez Ferrera

For more information on the Change-by-Design workshops, follow the link to the latest publication: http://www.scribd.com/doc/75033019/Change-by-Design-Building-Communities-Through-Participatory-Design

Architecture Sans Frontieres – UK at the Sixth Session of the World Urban Forum, Naples, Italy

Training Event: Participatory design for slum upgrading and inclusive city building

Training Event Coordinators:
- Alexandre Apsan Frediani, Brazilian, Development Planning Unit, UCL, London
- Isis Nunez Ferrera, Honduran, SCIBE, University of Westminster and ASF-UK, London
- Matthew French, New Zealander, UN-Habitat.
In partnership with:
- Diana Kinya, Kenyan, Pamoja Trust, Kenya
- Chawanad Luansang, Thai, Community Architects Network (ACHR)
- May Domingo-Price, Thai, Community Architects Network (ACHR)
- Sonia, Filipino, Shack/Slum Dwellers International
- Nokukhanya Mchunu, Development Action Group (DAG)
- Moegsien Hendricks, Development Action Group (DAG)
- Paul Chege, Practical Action
- Lucy Stevens, Practical Action
- Mathew Okello, Practical Action

And the invited experts’ reflections:
- Camillo Boano, Italian, Development Planning Unit, UCL, London
- Edgar Pieterse, South African, African Centre for Cities.
http://www.asf-uk.org/portfolio/world-urban-forum-2012/
How might COVID-19 change us as creators and consumers of culture? Pat Kastner Concerts and performances, theaters and museums are where we have gone for centuries to share experiences and hash out emotions — together. But we can’t do that right now. Rachel Skaggs, Lawrence and Isabel Barnett Assistant Professor of Arts Management and a sociologist by training, shared food for thought on how COVID-19 might change us as creators and consumers of culture. Scarcity breeds creativity. “When you have constraints, you must be creative. We’re all having to figure out how to cook a meal out of random things, how to exercise, how to entertain children while also holding a job. All of these things are creativity,” Skaggs says. “We’re in our domiciles, so it makes sense that domesticity is emerging from that. We’re literally going down a list of domestic crafts that are coming back, from capturing the wild yeast in one’s home to making kombucha. We’re asking, ‘What else is there to do, in my house, at this moment?’” Is FOMO over? “People have tried to live a more simple life enhanced by experiences, which is interesting now that we’re all shut up in our homes. FOMO (fear of missing out) has been huge the last few years. No one’s missing out on anything right now,” Skaggs says. We don’t know how COVID-19 ultimately will affect outings to museums, restaurants and music festivals — or how comfortable we will feel once we are able to go back to our favorite places. “People may change their approach to experience.” Togetherness creates magic. “One concept in sociology is collective effervescence, the idea that there is something really special about collective experiences. Think of the amazing, overwhelming feeling of being at a sporting event and having this kind of rush of collective joy or malaise,” Skaggs says. 
“Collective events or experiences can shape the way we relate to each other.” Visit virtually The Wexner Center for the Arts may be closed for now, but you can still get your contemporary art fix at home. Check out (free!) streaming films, artist talks, art tutorials and other ways to fall into a rabbit hole of art.
https://www.osu.edu/alumni/news/ohio-state-alumni-magazine/issues/summer-2020/ohio-state-covid-culture-art.html
New Zealand is full of natural beauty, with its towering mountains, open and lush plains, volcanic plateaus and incredible glaciers.

Places you might like to visit during your study breaks

On the North Island

Bay of Islands
Home to more than 144 islands, the Bay of Islands is a three-hour drive north of Auckland and is the perfect location for sailing and yachting as well as fishing and whale watching. Here you can find penguins, dolphins, whales and marlin in their natural habitat.

Rotorua
Rotorua, located on the tumultuous Pacific Ring of Fire, is one of the most active geothermal regions in the world. Here you can witness a ‘living country’ where boiling mud pools, hissing geysers, volcanic craters, and steaming thermal springs are common. In this region, sky-diving, luging, and mountain biking are some of the activities on offer.

Hobbiton Movie Set, Hamilton – Waikato region
The Hamilton–Waikato region is home to one of the world’s most famous movie sets, used for filming The Lord of the Rings and The Hobbit trilogy. International visitors looking to experience the ‘Home of Middle-earth’ can wander through the heart of the Shire and see the unique hobbit holes, including Bag End (Bilbo’s house).

On the South Island

Queenstown
Between the shores of Lake Wakatipu and the snowy peaks of the Remarkables lies Queenstown, New Zealand’s adventure capital. Home to various adrenaline-fueled sports, Queenstown has a lot to offer, including bungee jumping, jet boating, white-water rafting, paragliding, rock climbing, mountain biking and downhill skiing, as well as stunning alpine scenery and hiking trails.

Fiordland National Park and Milford Sound
Listed as a World Heritage site, Fiordland National Park protects some of the world’s most spectacular scenery, sculpted by glaciers which have carved the magnificent fjords of Milford, Dusky and Doubtful Sounds.
Here you can explore offshore islands, mountain peaks and rainforests, and hike some of the country’s best walks, including the famous Milford Track. Kayaking is another popular way to experience the fjords, or, if you’d like to view them from above, you can enjoy a scenic flight over the park.
https://www.idp.com/iran/study-in-new-zealand/places-to-visit/?lang=en
Blood is carried through the body via blood vessels. An artery is a blood vessel that carries blood away from the heart, where it branches into ever-smaller vessels. Eventually, the smallest arteries, vessels called arterioles, further branch into tiny capillaries, where nutrients and wastes are exchanged, and then combine with other vessels that exit capillaries to form venules, small blood vessels that carry blood to a vein, a larger blood vessel that returns blood to the heart. Arteries and veins transport blood in two distinct circuits: the systemic circuit and the pulmonary circuit ( [link] ). Systemic arteries provide blood rich in oxygen to the body’s tissues. The blood returned to the heart through systemic veins has less oxygen, since much of the oxygen carried by the arteries has been delivered to the cells. In contrast, in the pulmonary circuit, arteries carry blood low in oxygen exclusively to the lungs for gas exchange. Pulmonary veins then return freshly oxygenated blood from the lungs to the heart to be pumped back out into systemic circulation. Although arteries and veins differ structurally and functionally, the different types of blood vessels share the same general features. Arteries and arterioles have thicker walls than veins and venules because they are closer to the heart and receive blood that is surging at a far greater pressure ( [link] ). Each type of vessel has a lumen, a hollow passageway through which blood flows. Arteries have smaller lumens than veins, a characteristic that helps to maintain the pressure of blood moving through the system. Together, their thicker walls and smaller diameters give arterial lumens a more rounded appearance in cross section than the lumens of veins.
https://www.jobilize.com/anatomy/course/20-1-structure-and-function-of-blood-vessels-by-openstax?qcr=www.quizover.com
Al Bithnah Conservation & Habitat Rehabilitation

Rehabilitating the historic site of Al Bithnah in Fujairah and conserving its natural & cultural heritage to help nature and people thrive.

Located in Wadi Ham between Masafi and Fujairah, Al Bithnah Fort contributed to the prosperity of local communities in the late 1800s. Today, the site has the potential to safeguard local communities once more, as a world-class example of sustainable rural development. Emirates Nature-WWF and Etihad Rail, in collaboration with the Crown Prince Court - Fujairah, Fujairah Environmental Authority and Fujairah Adventure, and with the support of the local communities, are joining hands to renovate the ancient falaj irrigation system in Al Bithnah village, construct nature trails to enhance ecotourism potential and rehabilitate the surrounding habitat. By exploring solutions to restore access to fresh water and introducing modern water management techniques, the project aims to promote sustainable farming and preserve the site’s biodiversity by implementing agroecology principles.

Inclusive conservation in action

The local community is at the heart of the initiative. The project aims to improve livelihoods, diversify the local economy and make a positive impact on the life and overall wellbeing of community members, in collaboration with the community. The project will create innovative opportunities for community members to partake in trainings, collaborate with stakeholders, participate as volunteers and engage in nature. We will also explore solutions to help establish a greener local economy through market transformation.
https://www.emiratesnaturewwf.ae/en/conservation-projects/al-bithnah-conservation-habitat-rehabilitation
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Patent Application 61/794,636, filed on Mar. 15, 2013, which is incorporated herein by reference.

FIELD OF THE INVENTION

The present invention relates generally to a stiffness adjustment device, and more particularly to a plurality of stiffness adjustment devices located in equipment to allow tailoring of the product to the user and the conditions.

BACKGROUND OF THE INVENTION

In many areas of mechanical endeavor, means of adjustment are desirable before, during, and after use. Sports equipment must be selected and/or adjusted for the particular conditions that currently exist, or which may be encountered; users select skis, snowboards, tennis rackets, boots, and other equipment accordingly. Likewise, prosthetics must adapt to the various life situations in the course of a day. It has been recognized that a means of adjusting a mechanical device before, during, and after use is more desirable than selecting another similar device with different characteristics.

SUMMARY OF THE INVENTION

In an embodiment of a device with stiffness adjustment, the device has a housing having a plurality of passageways, each defining an axis. The device has a plurality of stiffness devices, each having a plurality of flat springs elongated in the direction of the passageway axis. The plurality of stiffness devices are each rotatable about their respective passageway axis. In an embodiment, the device has a casing interposed between the wall of the passageway of the housing and the stiffness devices. In an embodiment, the casing is a protective covering.
In an embodiment, the casing is a spring. In an embodiment, the spring is an extension spring. In an embodiment, the spring is a compression spring. In an embodiment, the plurality of rotatable stiffness devices are each rotatable about their respective passageway axis between at least two distinct positions. In an embodiment, the plurality of rotatable stiffness devices each have a gearing mechanism for facilitating rotation. In an embodiment, the plurality of rotatable stiffness devices have an engaging mechanism. In an embodiment, the plurality of rotatable stiffness devices have an aperture adapted to receive a tool for adjusting the stiffness of the device. In an embodiment, the plurality of rotatable stiffness devices are rotatable as a group. In an embodiment, the device has a geared strip coupled to at least two of the plurality of rotatable stiffness devices for rotating the rotatable stiffness devices to adjust the stiffness of the device. In an embodiment, the process of rotating the plurality of rotatable stiffness devices is automated. In an embodiment, the automated process is controlled by a user. In an embodiment, the device has a sensor, and the automated process is controlled by the sensor. In an embodiment, the housing is from the winter sport equipment group of ski and snowboard. In an embodiment, the housing is a prosthetic. In an embodiment, the housing is from the shoe group of ski boot and bicycling shoe. In an embodiment, the plurality of stiffness devices each has a rectangular cross section having a ratio between a height dimension and a width dimension of at least 4:1.

These aspects of the invention are not meant to be exclusive, and other features, aspects, and advantages of the present invention will be readily apparent to those of ordinary skill in the art when read in conjunction with the following description, appended claims, and accompanying drawings.
DETAILED DESCRIPTION OF THE INVENTION

It has been recognized that a means of adjusting a mechanical device before, during, and after use is desirable, rather than selecting another similar device with different characteristics. For example, sports equipment must be selected for the particular conditions that currently exist, or which may be encountered. Other mechanical devices could include other types of mechanical devices and equipment; for example, prosthetics must adapt to the various life situations in the course of a day.

Referring to FIG. 1A, a side view of a piece of equipment 20 with stiffness adjustment having a pair of stiffness adjustment devices 28 and 30 is shown. The equipment 20 with stiffness adjustment has a housing with a plurality of openings 34 and 36. Each of the openings 34 and 36 receives a casing 24 and 26. Within each of the casings 24 and 26 is located a stiffness adjustment device 28 and 30.

Each of the stiffness adjustment devices 28 and 30, also referred to as a beam stack, is formed of multiple pieces or elements. For example, the beam stacks 28 and 30 are illustrated with four pieces or elements 28A, 28B, 28C, and 28D. Each of the beam stacks may be formed with one, two, three, or more pieces forming the beam stack. Optionally, each of the pieces is positioned as a layer touching neighboring pieces. While the beam stacks are shown with four pieces, it is recognized that more pieces can form the stack; for example, embodiments have been built with 9 layers and 16 layers.

While the beam stacks 28 and 30 are shown with the elements separated and spaced from the casings 24 and 26, the elements 28A, 28B, 28C, and 28D engage each other. In an embodiment, the corners of the stacks 28 and 30 engage the casing.

The casings 24 and 26 can take several forms. For example, the casing 24 can be a stranded cable, where the cable is highly flexible in the XY and XZ planes, while capable of only tensile loads along the length of the cable, i.e.
the X axis, perpendicular to the YZ plane.

It is recognized that in certain embodiments, the casings 24 and 26 might not be required. The casing is expected to make the rotatability possible, or at least more convenient and consistent; a torque at one end of the beam stack might not propagate as smoothly without the casing. Thus, the casing transmits the force more or less evenly along the entire length of the beam stack.

The stiffness adjustment devices, the beam stacks 28 and 30, also shown in FIG. 1A, have a higher bending stiffness in the XZ plane when compared to the bending stiffness in the XY plane. The beam stack also has a greater ability to exert compressive forces along the X axis than a stranded cable; under compression, the strands of the cable can separate from one another.

Still referring to FIG. 1A, the beam stacks provide different levels of bending stiffness dependent on several factors, including materials and geometry. The geometric (i.e. non-material-dependent) term in the stiffness equation is expressed in the form:

(b*h^3)/12

where b is the width (along the Y axis) of a rectangular cross section and h is the height (along the Z axis). Increasing values of "b" lead to a linear, additive accretion in the stiffness equation, while increasing "h" causes a cubic increase in stiffness. This property can be used to make a device highly stiff in one direction, while relatively flexible in another. By contrast, a wire with a square or circular cross section typically has the same stiffness in both directions. Because of the necessary symmetry of wires (and cables made therefrom), "b" is always equal to "h", and the factor reduces as follows:

(b*b^3)/12, or (b^4)/12

Adding additional wires (e.g., forming a cable) gives the following:

(b_1^4 + b_2^4 + b_3^4 + b_4^4 + . . . + b_n^4)/12

In such a configuration, the stiffness in the XZ plane can only be increased by stiffening the XY plane by the same amount.
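The scaling described above can be checked with a short sketch (not part of the patent; the dimensions and the `second_moment` helper are illustrative assumptions). It computes the geometric stiffness term (b*h^3)/12 for one rectangular laminate in its two orientations, and compares a freely sliding stack of n laminates against a solid bar of the same overall size:

```python
# Sketch only (not from the patent): why an elongated, rotatable laminate
# gives directional stiffness. The geometric term of bending stiffness for
# a rectangle is I = b * h**3 / 12, where b is the width (Y axis) and h is
# the height (Z axis). Rotating the section ninety degrees swaps b and h,
# which changes the stiffness by a factor of (h/b)**2.

def second_moment(b: float, h: float) -> float:
    """Geometric stiffness term (b * h**3) / 12 for a rectangular section."""
    return b * h ** 3 / 12.0

# One laminate, 1 mm wide and 4 mm tall (the 4:1 ratio from the summary).
b, h = 1.0, 4.0

stiff = second_moment(b, h)   # bending resisted across the tall direction
flex = second_moment(h, b)    # the same laminate after a 90-degree rotation

print(round(stiff / flex, 6))   # (h/b)**2 = 16

# A stack of n identical laminates that slide freely against each other
# adds stiffness linearly (n * I). A solid bar of the same overall height
# scales with the cube of the height, so it is n**2 times stiffer than the
# free stack -- which is why the stack stays flexible until it is rotated.
n = 4
stack = n * second_moment(b, h)
solid = second_moment(b, n * h)

print(round(solid / stack, 6))  # n**2 = 16
```

The two ratios illustrate the patent's point: orientation, not material, controls which plane is stiff, and constraining or joining the layers (as with the casing or welding) moves the stack toward solid-bar behavior.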
The elongated cross section of various embodiments is generally analogous to the cross section of a 2×4 piece of lumber. In many implementations, a height-to-width ratio (Z:Y) of 4:1 or 8:1 is used; this ratio may also be 10:1 or greater. Resistance to buckling can be achieved by one or more of the following means: 1) adding a sufficient number of identical (or nearly identical) pieces, e.g., parallel layers; 2) alternating thinner and thicker laminates; and 3) constraining the laminates (whether of the same or different thickness) within a casing, either to prevent buckling or to keep it within acceptable limits. A final, and less desirable, means of limiting buckling is to 4) attach one or more layers together, temporarily with fasteners or permanently with welding or adhesives. Joining techniques such as these can cause the properties in the joined region to approximate those of a solid object. Over small regions, this may be necessary or even desirable, as in the fixation of one end in order that length changes will predictably occur at the other end of the elongate laminate(s). These length changes are due to geometric considerations, in which only the "neutral axis" of a bent member is stress free, with radial segments above and below the neutral axis changing in length due to the magnitude and direction of the stresses acting upon them. In U.S. Pat. No. 7,670,351, a single set of beams is described with respect to a deflection and steering mechanism. In U.S. Patent Published Application 2010/0274088, a beam stack is described with respect to a highly specialized hinge near the tip of an endoscope or steered endoscopic accessory. Both U.S. Pat. No. 7,670,351 and 2010/0274088 are incorporated herein by reference. Referring to FIG. 1B, a side view of a piece of equipment with stiffness adjustment 20 having a pair of stiffness adjustment devices 28 and 30 is shown.
In contrast to FIG. 1A, where both of the stiffness adjustment devices, beam stacks 28 and 30, are aligned to the Z axis, in this view the beam stack 28 has been rotated ninety degrees. This rotation reduces the stiffness in the XY plane and increases the stiffness in the XZ plane. This balances the stiffness in two directions, and it can also be utilized to restrain or limit buckling. The term buckling means the unstable bending of the equipment; for example, the flexing of a ski as discussed below with respect to FIG. 4, and of footwear in FIGS. 5A-5C. In order to rotate the stiffness adjustment devices 28 and 30, the stack needs to be grasped and rotated. In addition, depending on the friction, the stack may need to be retained in the proper orientation. The packaging interface with the housing can be tailored to hold the stiffness adjustment devices in the desired position. Each stiffness adjustment device 28 and 30 is a stack of laminated beams, where stiffness is adjusted by various combinations of mechanical properties and relative motion, typically a rotation. While it is recognized that the stiffness adjustment devices 28 and 30 could in theory be switched out in the field, normally the beams would only be rotated during use, such as during a triathlon or a ski trip. Typically, switching stiffness adjustment devices out would be a repair, performed either at home or at the factory. Referring to FIG. 2, a side view of a pair of stiffness adjustment devices, beam stacks 28 and 30, with a gearing system 40 for rotating the stiffness adjustment devices 28 and 30 is shown. The movement of the gearing system 40 results in both beam stacks 28 and 30 moving together. The four pieces of each beam stack 28 and 30 are parallel to each other. In this situation, the stiffness in one direction decreases as the stiffness in another direction increases. The stiffness created by the stiffness adjustment devices 28 and 30 is greatest in the direction parallel to the stacks of the stiffness adjustment devices 28 and 30.
In certain embodiments, the beam stacks or stiffness adjustment devices are packed so tightly that little or no buckling can occur, due to the space available. As indicated above, the stacks are shown in groups of four elements drawn in a sort of exploded view; in actuality, in most embodiments the elements touch each other, and the corners of the outer elements touch the package, the casing. All this friction tends to limit buckling within the stiffness adjustment device. As indicated above, depending on the friction, the stack may need to be retained in the proper orientation; for example, the forces placed on the stiffness adjustment devices, such as from walking, skiing, or other motions, might simply rotate the stack back to the low-energy state, where the elements of the stack bend and flex easily. Skiing involves a lot of vibration, and things tend to move to low-energy states. The gearing mechanism 40 could involve actual gears, or it could be a simple pin that contacts a series of holes in the belts, or it could simply be friction between the belt and the casings 24 and 26. As will be more apparent below in describing FIGS. 4-9, the stiffness adjustment devices are just one part of the equipment with stiffness adjustment. The housing that contains the stiffness adjustment devices also has structural elements that influence the ability of the equipment to flex. The stiffness adjustment devices allow the equipment to be tailored to the conditions. Referring to FIG. 3A, a perspective view of a plurality of stiffness adjustment devices encased by an external packaging is shown. FIG. 3B shows the top view of the plurality of stiffness adjustment devices encased by the external packaging of FIG. 3A. The equipment with stiffness adjustment 50 has three stiffness adjustment devices, beam stacks 52, 54, and 56. Each stack is encased with a casing 62, 64, and 66, respectively.
The equipment with stiffness adjustment 50 has a gearing system 70 which engages each of the casings 62 and 66. The two stiffness adjustment devices, beam stacks 52 and 56, are connected to their respective casings 62 and 66 such that rotation of the casing causes rotation of the beam stack. The rotation of the two beam stacks 52 and 56 transforms a flat, flexible, beam-like member into the mechanical approximation of a very stiff wire, rod, or tube when the beam stacks are oriented in directions which are 120 degrees from each other. It is also expected that the deflection, hinging, and stiffening mechanisms could be used singly or together. For example, the shape, material, and location of the stacks could vary axially, with a deflecting hinge acting in one region where flexibility is needed, and the stiffening mechanism present only where stiffness, particularly adjustable stiffness, must be present. For example, skis require different properties at the tip as opposed to under the boot. A bending ski or a flexible boot would have different requirements as well, based on where the ankle naturally bends. Referring to FIG. 4, a sectional view of a ski 80 having a plurality of stiffness adjustment devices 52, 54, and 56 extending longitudinally in the ski 80 is shown. Each stiffness adjustment device 52, 54, and 56 is received in an opening 82, 84, and 86. A user would have the capability to rotate the stiffness adjustment devices 52, 54, and 56 dependent on the weather and the condition of the slopes, such as temperature, moguls, skier ability, and powder versus packed snow. The ski could have color codes at the user interface with the stiffness adjustment devices, located at the tail of the ski, to allow the user to select the proper condition. In the embodiment shown in FIG. 4, there is no casing between the housing, the ski 80, and the stiffness adjustment devices 52, 54, and 56. The need for a casing depends in part on the material properties of the housing.
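The claim that stacks oriented 120 degrees apart approximate a uniformly stiff rod can be checked with a simplified planar model. In the sketch below (our own simplification, with illustrative stiffness values), the identity that cos² summed over three equally spaced angles equals 3/2 makes the combined bending stiffness the same in every loading direction:

```python
import math

def bending_stiffness(theta, Iy, Iz):
    """Bending stiffness of one beam stack about an axis rotated by
    theta from its stiff principal axis (planar model, no product term)."""
    return Iz * math.cos(theta) ** 2 + Iy * math.sin(theta) ** 2

# Illustrative values: one stack is 100x stiffer one way than the other.
Iy, Iz = 1.0, 100.0

# Three identical stacks oriented 120 degrees apart.
orientations = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]

for theta in (0.0, 0.3, 1.0, 2.0):  # any loading direction
    total = sum(bending_stiffness(theta - phi, Iy, Iz) for phi in orientations)
    # Sum of cos^2 over three equally spaced angles is 3/2, so the total
    # is the constant 1.5 * (Iy + Iz): isotropic, like a rod.
    assert abs(total - 1.5 * (Iy + Iz)) < 1e-9
```

A single stack, by contrast, varies by the full Iz/Iy ratio as the loading direction changes, which is exactly the adjustability the rotation exploits.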
If the stiffness adjustment devices 52, 54, and 56 can be rotated so that the beam elements rotate as desired from one end to the other without a casing, a casing may not be required. As indicated above, the housing of the equipment with stiffness adjustment, such as the ski 80, can be part of the stiffness device. The inherent stiffness of the housing is factored into the design of the equipment. While a portion of the ski 80 is shown, it is recognized that other equipment, such as a snowboard, can be equipped with stiffness adjustment. In a snowboard, the stiffness adjustment devices would run laterally, in contrast to running longitudinally in the ski 80. Referring to FIG. 5A, a side view of a ski boot 90 is shown with a sole stiffening channel 92 shown in hidden line. The ski boot 90 has a plurality of sole stiffening channels 92 that each receive a stiffness adjustment device. Each of the stiffness adjustment devices can be adjusted, similar to the skis 80, dependent on the condition of the slopes and the user's capability. In addition, the user can adjust the stiffness for situations when not actually skiing, such as on the ski lift or in the lodge. Referring to FIG. 5B, a top view of the ski boot 90 showing a plurality of longitudinal stiffening channels 94 is shown. In addition to the sole of the boot 90, other portions of the boot 90 can be adjusted; for example, the stiffening can be adjusted by rotation of stiffness adjustment devices 28 located in the longitudinal stiffening channels 94. Referring to FIG. 5C, a side view of a shoe 100 having stiffness adjustment on the top and bottom is shown. The shoe 100 for bicycling has both a plurality of sole stiffening channels 104 and a plurality of dorsum pedis (upper sole) stiffening channels 102; the dorsum pedis is the top of the foot.
The user can adjust the stiffness for when the shoe 100 is worn for cycling and select a different stiffness for other conditions, such as walking or running in the shoe, as in a triathlon. Referring to FIG. 6A, a schematic of an adjustable shoe insert 110 is shown; the insert 110 is configurable as an ankle support. FIG. 6B is a perspective view of the adjustable shoe insert 110 of FIG. 6A. The adjustable ankle support 110 has a plurality of channels 114. Each channel 114 has a stiffness adjustment device 112. Without adjustability, it would function analogously to familiar "concrete rebar," where compressive loads are borne by the brittle concrete, and bending and tensile loads by the steel. In a rubber or other elastomeric/polymeric sole, the resilient material would act as a damping mechanism, as a restraint upon buckling, and simply as a package. Oriented parallel to the axis of the digits, the flex of the sole would be controlled distinctly as to the flexion during walking, running, and climbing. Oriented at another angle, such as 30 degrees or 45 degrees, the effect would be more isotropic. In either case, the sole could be "tuned" to the expected use at the time of manufacture by the number and size of the embedded Endo-Lamina beams, and particularly by their orientation in the stiff or less stiff direction. As an adjustable device, the equipment with stiffness adjustment, the adjustable shoe insert of FIGS. 6A and 6B, could be moved from less stiff to more stiff to adjust to terrain. On flat terrain, speed could be enabled by a flexible sole, where the time of contact with the earth is long and the length of individual steps is long. When climbing, or when walking across very rough terrain (rocks or convoluted ice), the stiff sole adjustment might be preferred.
If this device were wrapped around the ankle, and especially if incorporated in a "high top" boot or sneaker, the degree of ankle support could also be adjusted: loose for speed, stiff for safety in climbing and on very rough surfaces. As an insole, or integral with the sole, adjustment would presumably be made by means of a knob or lever, suitably lockable, at the rear or along the sides of the boot. As an ankle support adjustment, or as part of a ski binding, a collar could partly or completely surround the ankle, with the down (towards the sole) position being locked, and the up position being for adjustment (stiffness on/off, for example). Much as a screw is simply an inclined plane wrapped around a circular shaft, the ankle support shown in FIGS. 6A and 6B is a flat sole- or insole-style grid of beams, wrapped partly or wholly around the axis of the tibia and the other vertically oriented leg bones. While a ski boot 90 and a shoe 100 for bicycling are shown, it is recognized that other types of shoes, including those for construction, hiking, mountain climbing, and medical purposes, can incorporate stiffness adjustment devices. Referring to FIG. 7, a top view of a tennis or racquetball racket 120 is shown without strings. After the strings are strung, there still remains an unused space 122, as seen in FIG. 8B. The space 122 is capable of receiving a stiffness adjustment device. In FIG. 8A, the beam stacks 126 of a stiffness adjustment device are disposed to resist forces perpendicular to the plane of the strings. In FIG. 8B, the beam stacks 126 of a stiffness adjustment device are disposed to resist in-plane forces, primarily acting along the axes of individual strings. The adjustment between the states shown in FIG. 8A and FIG. 8B can be done by various methods. For example, a wire, cable, or series of short rods could be disposed in a loop or chain beginning in the handle, running along the outer periphery of the string area, and returning to the handle.
Adjustment could be performed by a mechanism in the handle. In addition to maintaining or adjusting tension around the rim, it would twist all of the stacks together along the chain. In one embodiment, the adjustment could be made continuously (as in the case of a wire or cable) or discretely, if the rods had segments in appropriate locations along the periphery. Particularly in the case of continuous adjustment, states intermediate between those shown in FIG. 8A and FIG. 8B could be obtained. A complex state of combined stress would exist, with some of the enhancement coming from hoop stress around the periphery, which would exist in some magnitude regardless of the state of rotation of the E-L beams. Rotational adjustment can be achieved in a number of ways. The stiffness adjustment device can be surrounded by a coaxial gear, connected to fix rotation between stiffness adjustment devices in a regular manner. The gear might have interrupted teeth, or be only partially toothed, in order to arrest motion at a specified limit. The stiffness adjustment device might also be disposed inside springs or tubes, then connected directly by friction. Cam surfaces might surround the devices, causing them to move in a prescribed manner. In contrast to the embodiments shown in FIGS. 4-6B, the racket 120 is shown with one stiffness adjustment device. The frame of the racket 120, which has some rigidity, is a stiffness device that is not adjustable. The stiffness adjustment device is surrounded by the casing 124. Referring to FIG. 9, a curve device 130 with a plurality of stiffness adjustment devices 132 is shown. The curve device 130 could be a shoe or a board. Each of the stiffness adjustment devices 132 is located in a channel in the housing of the device 130. The curve device 130 has an adjustment mechanism 134 that interacts with each of the stiffness adjustment devices 132.
If the stiffness adjustment devices, the beam stacks, are surrounded by an extension or compression spring, this spring might be connected to the adjacent, fixed member in various ways. For example, the compression spring coils (and also extension spring coils, if elastically separated) could be intertwined for all or part of their length. Instead, or in addition, the overlapping region of the coils, with the axes brought near to one another so as to form a third lumen, could be filled with a wire or cable. In addition to preserving orientation, this wire could incorporate an additional function: as an actuation mechanism, it could turn the moving coil through the angular motion which adjusts stiffness. For example, referring back to FIG. 1A, if the two casings 24 and 26 are compression springs moved into overlapping engagement, similar to a "Venn diagram," the outer two lumens can have stiffness adjustment devices sized to the space and capable of rotating. The center lumen could have either a smaller adjustment device or a larger stationary device. For the greatest range of adjustment, the two or more stiffness adjustment devices, beam stacks, can be rotated. This may be particularly useful in situations where there are other constraints against buckling, such as the surrounding structure of the ski. At the apex of a newer, cross-cambered ski, the space itself is a restriction, with the top of a parabola approximating a circular arc for a substantial region. As the curvature changes, the more linear "legs" of the arch restrict buckling to a certain extent. Referring to FIG. 10, a side view of a stiffness adjustment device 140 having a plurality of spheres 142 and a flex limiter is shown. During operation, the equipment, like a ski or prosthesis, moves, and the assembly may be compressed by a force in the direction of the arrow (left) 150. This is reacted by the stack of unconnected shapes, shown as spheres 142, as seen in the Figure.
Collapse under load is resisted by the thin-walled tube 146, and particularly by the metal or plastic coil 144, shown partly cut away in the Figure. This wrapping coil 144 would maintain its circular shape, even in a case of 90-degree bending, although the tube 146 which the coil 144 surrounds may buckle locally. The tube 146 itself could also be spiral cut for additional flexibility. A cut with the opposite hand to the external wrap may be beneficial. In a flexible beam, a bending moment puts one surface into compression and the other into tension. While the dimensions of the beam contribute in a non-linear manner (specifically, the thickness to the third power in the example above), a simple beam, over a large elastic range, behaves linearly. It may be desirable to introduce a sharp non-linearity in a simple, repeatable way. In a ski, for example, it may be desirable to keep the tip lifter angle from decreasing below a certain value when the combination of terrain and the inertia or weight of the skier produces a downward force beyond a certain value. A longitudinal cavity, preferably a cylinder elongated in the direction of ski length, located on the compression side of the neutral axis (normally the upper surface of a conventionally cambered ski), is suitable for various novel means of achieving this repeatable, protective non-linearity. A very stiff spring could partially fill this cavity, exerting no force until the compression exceeded a certain value. This spring might, however, fill the cavity to such a degree that it unnecessarily augments the very bending stiffness it is intended to control. The cavity could be made larger in diameter, but this may make the ski less damage tolerant in this particular region. Therefore, an alternative is proposed: a stack of rigid or semi-rigid shapes, such as spheres 142, to immediately resist the compressive loads when a certain threshold of compression is reached and the axial space occupied by the lumen is shortened.
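The protective non-linearity described above amounts to a piecewise-linear force response. A minimal sketch, with gap and stiffness values that are illustrative assumptions of our own rather than figures from the patent:

```python
def restoring_force(deflection, gap, k_beam, k_spheres):
    """Piecewise-linear sketch of the protective non-linearity: the beam
    alone resists small deflections; once the axial gap in the cavity
    closes, the sphere stack adds a much stiffer second term."""
    if deflection <= gap:
        return k_beam * deflection
    return k_beam * deflection + k_spheres * (deflection - gap)

# Below the threshold the sphere stack contributes nothing...
assert restoring_force(0.5, gap=1.0, k_beam=2.0, k_spheres=50.0) == 1.0
# ...beyond it, the response stiffens sharply (2*1.5 + 50*0.5 = 28).
assert restoring_force(1.5, gap=1.0, k_beam=2.0, k_spheres=50.0) == 28.0
```

The gap plays the role of the axial free space in the lumen: the knee in the force curve sits exactly where the spheres come into contact.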
(Note that thermal expansion effects may be significant, and the matching of materials may be critical.) Such spheres also have utility in other areas, particularly those not subject to the temperature extremes encountered in winter skiing. As indicated above with respect to FIGS. 2 and 9, if it is desirable to rotate more than one of the stiffness adjustment devices to achieve the maximum stiffness adjustment, a wire, cable, or thick belt could be disposed. The tensile member could be attached at a different point from the S-shaped member 70 in FIGS. 3A and 3B. The connection points could be further un-stiffened by being made from stranded cables. Deployment (the moving of the E-L beams into multiple planes to use their differential stiffness) could be effected in two or more ways: torsional springs could be deployed adjacent to the joints, or around the round cross-sectional members. Control could also be maintained by the use of stops at or near the edges of the members, such as corner bends at the edges, or finger-like slots etched and/or stamped into appropriate regions of the flexible layers. The forming and use of such etched features is well known to those skilled in the art, although their application to an adjustable/expandable, variable-stiffness device as described here is both novel and useful in medicine, sports equipment, aviation, and elsewhere. While sports equipment has been described above, the equipment with stiffness adjustment can be found in other forms. For example, the stiffness adjustment devices, the beam stacks, can function as fingers and toes. The stiffness adjustment devices act as localized or continuous hinges for flexibility, with their width distributing load during grasping or locomotion. When combined with control mechanisms, these could function literally as the digits of a prosthetic hand or foot, or of a robotic manipulator. As a penile implant, a pair of beam stacks would be mostly flexible when the penis is flaccid.
To obtain an erection, one of the stacks would be rotated 90 degrees relative to the other, providing useful stiffness until the stack is manually rotated back. This rotation, effected extracorporeally by palpation alone, would eliminate a failure mode (fluid leakage) associated with current, pump-based penile implants. More generally, the beam stacks could be used to increase the contact area of footwear, such as a hiking boot or crampon. The stiffness adjustment equipment can have a sensor that senses the conditions. The user can also provide input to reflect the user's skill level. The damping and the enhanced range of adjustable stiffnesses will also apply to other types of skis and ski-like objects, such as snowboards and water skis. For example, for both recreational water skis and skis for float planes, wave conditions might vary greatly from day to day, and during the day due to weather variations and changing location. It could prove difficult, inconvenient, costly, or simply impossible to have a different ski for each condition. Hence, a readily adjustable stiffness is desirable. A hand adjustment may suffice for the recreational water skier; for the float plane, the adjustment could be made via a remote cable, or electrically/electronically. The adjustment could even be made in response to real-time sensors, according to a pre-programmed arrangement whereby a force or vibration reading is used to calculate the appropriate adjustment to the stiffness during a landing, as required to stabilize or otherwise assist in the landing process. A different stiffness might move a value, such as a resonant frequency, further away from an undesirable result. The system described above may provide a comfortable, soft landing in ordinary conditions, but a safer, albeit rougher (stiffer and/or less damped), landing during inclement weather.
Another example is a control mechanism, and/or the pre-deployed digits, that could be contained in a space between the crampon's contact area and the sole of the boot. Using the space between the crampon's spikes is less desirable, as this region may be needed on very uneven terrain or ice to allow the spike tips to contact and dig into ice, snow, or frozen earth. In an embodiment, the stiffness of a piece of rectangular steel of 1 inch by ¼ inch is 1.3×10⁻³ in⁴ in the low stiffness state, and 16 times higher (the square of the 4:1 aspect ratio) in the high stiffness state. The same cross-sectional space, filled with a beam element having 25 pieces, each 0.010″ thick and free to slide against one another, gives a low stiffness value of 2.08×10⁻⁶ in⁴, or 1/625th of the value of the solid beam of identical shape (the ratio of thicknesses, 25, raised to the second power). The stiffness of the beam element as a whole is identical to that of the solid beam in the high stiffness state, since the 0.010″ "thickness" is now simply width, and the width term is additive. If a lower ratio is desired, and/or if cost is a consideration, the number of beams can be adjusted downward, with an associated savings in material and labor (i.e., assembly) costs. (It is well known in the arts of mechanical and manufacturing engineering that assembly cost is driven by both the cost of an individual component and the number of components. More numerous components, regardless of cost, require additional expenses during design, test, qualification, purchasing, inspection, stocking, materials handling, and assembly.) It is recognized that the location of the stiffness adjustment device in the housing affects the adjustment. The stiffness adjustment devices could be disposed closer to, or further from, the neutral axis depending on the degree of stiffness and degree of adjustability desired.
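The laminate arithmetic in the embodiment above can be reproduced directly from the I = b·h³/12 relation (a sketch; the `second_moment` helper is ours, not the patent's):

```python
def second_moment(b, h):
    # Area moment of inertia of a rectangle: I = b * h**3 / 12 (in^4)
    return b * h ** 3 / 12.0

solid_low = second_moment(b=1.0, h=0.25)     # 1" x 1/4" solid bar, flat
layer = second_moment(b=1.0, h=0.010)        # one 0.010" laminate, flat
laminated_low = 25 * layer                   # 25 free-sliding laminates

assert abs(solid_low - 1.3021e-3) < 1e-6     # ~1.3e-3 in^4
assert abs(laminated_low - 2.083e-6) < 1e-8  # ~2.08e-6 in^4
assert round(solid_low / laminated_low) == 625   # 25**2

# Rotated 90 degrees, thickness becomes width and the width term adds,
# so the laminated stack matches the solid bar in the stiff state:
solid_high = second_moment(b=0.25, h=1.0)
laminated_high = 25 * second_moment(b=0.010, h=1.0)
assert abs(solid_high - laminated_high) < 1e-12
```

The 1/625 factor is the layer count squared, which is why trimming the number of laminates trades adjustment range directly against material and assembly cost.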
For example, a device intended to be very flexible in the relaxed condition would dispose the elements very near, or directly on, the overall neutral axis of the assembly. Other devices might call for a "stiff and stiffer" adjustment, and so locate the stiffness adjustment device farther from the neutral axis. In either case, the enhanced range due to the use of stiffness adjustment devices increases the adjustability of the device or equipment. Rotational adjustment can be achieved in a number of ways. The items may each be surrounded by a coaxial gear, connected to fix rotation between them in a regular manner. The gear might have interrupted teeth, or be only partially toothed, in order to arrest motion at a specified limit. The items might also be disposed inside springs or tubes, then connected directly by friction. Cam surfaces might surround the devices, causing them to move in a prescribed manner. If the rotatable stiffness devices are surrounded by an extension or compression spring, this spring might be connected to the adjacent, fixed member in various ways. For example, the compression spring coils (and also extension spring coils, if elastically separated) could be intertwined for all or part of their length. Instead, or in addition, the overlapping region of the coils, with the axes brought near to one another so as to form a third lumen, could be filled with a wire or cable. In addition to preserving orientation, this wire could incorporate an additional function: as an actuation mechanism, it could turn the moving coil through the angular motion which adjusts stiffness. For the greatest range of adjustment, the plurality of rotatable stiffness devices, beam elements, could be rotated. This may be particularly useful in situations where there are other constraints against buckling, such as the surrounding structure of the ski.
At the apex of a newer, cross-cambered ski, the space itself is a restriction, with the top of a parabola approximating a circular arc for a substantial region. As the curvature decreases, the more linear "legs" of the arch restrict buckling to a certain extent. While the principles of the invention have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the invention. Other embodiments are contemplated within the scope of the present invention in addition to the exemplary embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention. It is recognized that the plurality of stiffness adjustment devices can be used in other devices, such as seats for vehicles, for example an automobile seat. It is recognized that the numerous metal-on-metal interfaces of the beam stack are also expected to provide significant frictional damping. Should this damping be undesirable for any reason, coatings or interlayers of low-friction materials may be provided. It is recognized that while the stacks of elements are shown as rectangular beams, other shapes, such as an elongated flattened ribbon, are contemplated. BRIEF DESCRIPTION OF THE DRAWINGS The foregoing and other objects, features, and advantages of the invention will be apparent from the following description of particular embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. FIG. 1A is a side view of a piece of equipment with stiffness adjustment with a pair of stiffness adjustment devices;
FIG. 1B is a side view of a piece of equipment with stiffness adjustment with one of the pair of stiffness adjustment devices rotated; FIG. 2 is a side view of a pair of stiffness adjustment devices with a gearing system for rotating the stiffness adjustment device; FIG. 3A is a perspective view of a plurality of stiffness adjustment devices encased by an external packaging; FIG. 3B is a top view of the plurality of stiffness adjustment devices encased by an external packaging of FIG. 3A; FIG. 4 is a sectional view of a ski having a plurality of stiffness adjustment devices extending longitudinally in the ski; FIG. 5A is a side view of a ski boot with a sole stiffening channel shown in hidden line; FIG. 5B is a top view of the ski boot showing a plurality of longitudinal stiffening channels; FIG. 5C is a side view of a shoe having stiffness adjustment devices on the top and bottom; FIG. 6A is a schematic of an adjustable shoe insert; FIG. 6B is a perspective view of the adjustable shoe insert of FIG. 6A; FIG. 7 is a top view of a racquetball racket; FIG. 8A is a sectional view taken along the line 8A-8A in FIG. 7; FIG. 8B is the sectional view of FIG. 8A with the stiffness adjustment in another position; FIG. 9 is a top view of a curve device with a plurality of stiffness adjustment devices; and FIG. 10 is a side view of a stiffness adjustment device having a plurality of spheres and a flex limiter.
I have chosen my ‘top ten’ books about Agatha Christie by reference to the influence which they had on my decision to write my book and on the help they gave me in formulating my ideas. The list is therefore rather subjective but I suspect that it would stand up pretty well to objective scrutiny. The first book which I read about detective fiction was Julian Symons’ Bloody Murder. It was published in 1972 and I probably read it that year. By then I had read all of Agatha Christie’s published detective novels and Symons’ book was of great interest in contextualising her work within the history of detective fiction as a genre. In the years following Christie’s death in 1976, quite a lot of books were written about her before I decided to write my own book in 2005. By then, I had read a few of these and in my ‘top ten’ I would put: - Robert Barnard’s A Talent to Deceive – An Appreciation of Agatha Christie, 1990 - Charles Osborne’s The Life and Crimes of Agatha Christie, 1982, updated 1999 - Bruce Pendergast’s Everyman’s Guide to the Mysteries of Agatha Christie, 2004 Over the next couple of years, I tried to identify and read all the others and I would select the following for my ‘top ten’: - Earl F. Bargainnier’s The Gentle Art of Murder – The Detective Fiction of Agatha Christie, 1980 - Maida & Spornick’s Murder She Wrote – A Study of Agatha Christie’s Detective Fiction, 1982 - Mary S. Wagoner’s Agatha Christie, 1986 I also needed to read more widely about detective fiction – its theory, history, components, technique and the like – and I did so during the same period. Although none of these sources gets into my ‘top ten’ (because, like Bloody Murder, they are not specific to Agatha Christie), the ones that helped me most were Marie Rodell’s Mystery Fiction Theory and Technique and Howard Haycraft’s books, Murder for Pleasure: The Life and Times of the Detective Story and The Art of the Mystery Story. 
Perhaps I should also mention another series of books which do not get into the ‘top ten’, namely those in which Agatha Christie is said to have written about – or rather parodied – herself in her amusing portrayal of the fictional author, Mrs Ariadne Oliver, who appears in seven of her novels, the one Poirot Golden Age novel being Cards on the Table. Since 2005 more books have been published or republished about Agatha Christie. The following would complete my personal ‘top ten’: - Laura Thompson’s Agatha Christie An English Mystery, 2007 - John Curran’s Agatha Christie’s Secret Notebooks, 2009 - John Curran’s Agatha Christie’s Murder in the Making, 2011 - Kathryn Harkup’s A is for Arsenic: The Poisons of Agatha Christie, 2015 Over the 40 years since Agatha Christie’s death, other books have been published about her which I have enjoyed, most recently J.C. Bernthal’s highly original Queering Agatha Christie, 2016 and, although not specifically about her, Martin Edwards’ very impressive The Golden Age of Murder, 2015. No doubt, given the strong continuing interest in her works, we can expect valuable contributions to the literature concerning her to be published for years to come.
https://stylisheyepress.com/further-reading
A model proposed for predicting photodamage and development of plant protection mechanisms

The model developed by Lobachevsky University scientists can provide a tool for quantitative prediction of photodamage and of adaptive changes in the plant photosynthetic apparatus under variations in lighting intensity.

IMAGE: Schematic of a mathematical model of non-photochemical chlorophyll fluorescence quenching in plants under fluctuations in lighting intensity. Credit: Lobachevsky University

Light is the main source of energy for photosynthesis and underlies the production process in plants. At the same time, excessive lighting can lead to photodamage of the photosynthetic apparatus and, indirectly, of other structures of the plant cell. In order to avoid such damage, plants have developed a number of protective mechanisms, including the so-called non-photochemical fluorescence quenching. Non-photochemical fluorescence quenching develops under the action of high-intensity lighting and other stressors, and it leads to a decrease in the light flux absorbed by the photosynthetic apparatus. According to Vladimir Sukhov, Head of the Laboratory of Plant Electrophysiology at the Institute of Biology and Biomedicine of Lobachevsky University, non-photochemical quenching plays a key role in protecting plants from adverse lighting conditions, and research in this field is very important in plant physiology and the agricultural sciences. "The development of mathematical models of non-photochemical quenching is of special importance, as such models allow us to predict photodamage and adaptive changes in plant resistance under certain lighting modes without additional experimental research," Vladimir Sukhov notes.
The forecasts obtained by Lobachevsky University researchers can be used both for solving fundamental scientific problems and for applied purposes (for example, when developing new modes of artificial illumination of plants in greenhouses, or for forecasting plant damage under certain weather conditions). An article by Ekaterina Sukhova, a post-graduate student of the Lobachevsky University Department of Biophysics, published with co-authors in Biochimica et Biophysica Acta - Bioenergetics, one of the leading journals in the field of photosynthesis research, focuses on the development of a mathematical model of non-photochemical quenching of chlorophyll fluorescence in plants and describes the peculiarities of such quenching under fluctuations of lighting intensity. The proposed model describes the transitions between "open" (those that have not received a quantum of light) and "closed" (those that have received a quantum of light) reaction centers of photosystem II, and the subsequent activation of the non-photochemical quenching component by closed reaction centers. "The peculiarity of the model is the description of photosystem II activation in the light and its inactivation in the dark, which was implemented in one of the versions of our model. This description significantly expands the applicability of the proposed model. In particular, it became possible to use the model to predict the effect of rapid light intensity changes on plants," explains Ekaterina Sukhova. The model was verified against experimental data obtained using a modern method of photosynthesis research, PAM fluorometry (pulse-amplitude-modulated fluorometry). The verification showed that the model allows an accurate prediction of the development of non-photochemical quenching and photodamage in plants under fluctuating light intensity, including alternating periods of darkness and periods of plant illumination.
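The open/closed reaction-center scheme described above can be illustrated with a toy simulation. This is only a rough sketch, not the authors' published model: the two state variables, all rate constants, and the light schedule below are hypothetical placeholders chosen for illustration.

```python
# Toy sketch of the qualitative scheme described in the press release:
# C = fraction of "closed" photosystem II reaction centers,
# N = level of the non-photochemical quenching (NPQ) component
#     activated by closed reaction centers.
# All rates and the light schedule are hypothetical, not fitted values.

def simulate(light_schedule, dt=0.01,
             k_close=2.0,   # closing rate per unit light (assumed)
             k_open=1.0,    # re-opening rate in the dark (assumed)
             k_act=0.5,     # NPQ activation by closed centers (assumed)
             k_relax=0.1):  # slow NPQ relaxation (assumed)
    C, N = 0.0, 0.0
    trace = []
    for L in light_schedule:  # L = relative light intensity at each time step
        dC = k_close * L * (1.0 - C) - k_open * C   # centers close under light
        dN = k_act * C * (1.0 - N) - k_relax * N    # closed centers drive NPQ
        C += dC * dt
        N += dN * dt
        trace.append((C, N))
    return trace

# Fluctuating light: a bright period followed by darkness.
trace = simulate([1.0] * 2000 + [0.0] * 2000)
```

In this toy run, quenching builds up during the bright period and slowly relaxes in the dark, which is the qualitative behavior the press release attributes to the real model under alternating periods of light and darkness.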
"From the practical point of view, the proposed model should become a tool enabling quantitative prediction of photodamage and the development of adaptive changes in the plant photosynthetic apparatus under fluctuations in light intensity," concludes Ekaterina Sukhova. Lobachevsky University researchers have identified some of the potential areas of application for this new tool:
- prediction of light damage to plants under specific weather conditions in the field (i.e., at certain intensities and fluctuations of natural lighting);
- theoretical search for lighting modes that provide additional resistance of the photosynthetic apparatus to stressors of different nature.

###

The research was carried out with the financial support of the Russian Foundation for Basic Research, Project 18-34-00644 mol_a (headed by E.M. Sukhova) and Project 18-44-520009 (headed by V.S. Sukhov).

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
While many conversations around workplace diversity tend to focus on top-down strategies, there are creative ways individual employees and job-seekers can also support diversity in their careers. Consider the tips below to improve your job search or gain mobility in your career.

DON'T BE AFRAID TO ASK RECRUITERS ABOUT DIVERSITY.

If you're looking to find a company that will support your identity and differences, take time to ask the right questions to screen employers. Use your interview as an opportunity to discuss topics around diversity that the average applicant may avoid. "If diversity is important to you, ask about it," Morales suggests. "Don't be afraid to ask recruiters or the person interviewing you about diversity at their company, because the question should be welcomed, and if it isn't, then you may have your answer." Research the company, as always, but once you do, feel free to ask honest questions. "Even during a phone screening, it's fine to say: 'Diversity and inclusion are very important to me. Can you tell me some of the ways that shows up at your company?' This question can invite an open dialogue around work culture and values that you'd like to have on the job," says Morales. In addition to preparing personal questions, she encourages jobseekers to also ask interviewers about their own experiences with gender and diversity in the workplace. As the recruiter shares stories, note the details you hear and the follow-up questions you'd like to ask. This can give you a sense of what you may experience while working at the company and whether it offers the right culture for you.

BUILD YOUR INTERNAL AND EXTERNAL NETWORK.

"No matter your identity or where you are in your career, you should make every effort to build your network—and that doesn't just start at your job," Morales says.
There are a number of professional organizations that support women and individuals from underrepresented groups, such as The National Association of African Americans in Human Resources, Prospanica: The National Association of Hispanic Professionals, the Accounting & Financial Women's Alliance and the Society of Hispanic Professional Engineers. These groups are usually organized according to profession, so you should be able to find one that aligns with your career goals and needs. Also, if you're a jobseeker, research companies that already offer established programs that can help build your internal network. For instance, LinkedIn provides an Allyship Academy that focuses on training employees to remove bias from their language and support inclusive partnerships at work. This kind of program provides opportunities for underrepresented employees to build new allies across LinkedIn and learn leadership skills while bolstering their network. Whether you're searching for your first entry-level job or planning your next career move, make researching diversity programs and professional organizations part of your routine job search or career planning. When you speak to a hiring manager for a company, be prepared to ask about networking groups, workplace diversity and programs that support your goals.

SEEK "MENTORING MOMENTS."

If you ask someone to be your mentor, depending on their work schedule and availability, sometimes that request can feel like a long-term commitment which can be difficult to maintain. For this reason, instead of requesting a lengthy mentorship, Morales encourages employees and jobseekers to consider "mentoring moments." "Yes, mentorship is important but it doesn't have to all fall on one person," Morales says. "I focus more on having mentoring moments. That means when I have a question about a career move or an idea that I want to discuss with someone I trust, I call that person in my network and ask them out for coffee.
During that conversation, they may coach or advise me, which is a mentoring moment. Strive for those in the early stages of your career." Awwad also encourages women to pursue similar mentoring moments. For instance, members of EDGE were encouraged to participate in Girls on the Run, an event that allows female mentors to identify a young girl from the Girls on the Run organization to accompany during a 5K race. The girls and mentors run together during the event, where they have a chance to interact and learn more about each other. "It's really fun," says Awwad. "We run with the girls and talk about their goals or anything on their minds." Small, event-based activities can help foster "mentoring moments" that feel authentic and create opportunities for future networking.

DO GREAT WORK AND HIGHLIGHT IT AT YOUR COMPANY.
https://careers.uw.edu/blog/2021/09/08/strategies-for-jobseekers-and-employees-from-underrepresented-groups-shared-from-devry-university-blog/
Various kinds of imaginary animals appear in ancient Asian art and writings, and the majority were invented in China. I think that nobody has ever equaled the ancient Chinese for the way in which they were able to imbue these animals, which nobody had ever seen, with a wealth of characteristics and symbolism, yet of them all, the dragon was probably the greatest. In the monsoon regions of Asia where rice is the main staple, the dragon is symbolic of the water that brings both riches and disaster; in other words, it is worshipped as the god of water. Essential to all forms of life, water cycles endlessly between earth and heaven, and in ancient times it was thought that the water that fell as rain had a conscious desire to return to the skies. The god of water was generally happy to live in lakes or ponds in the guise of the carp, the whiskers on the fish symbolic of its dragon origins. Eventually, there came a time when the carp would swim up rivers, climb waterfalls and then, reaching a sacred mountain, revert to dragon form and fly up to heaven with a clap of thunder. It is due to this belief that it became popular to make ponds with waterfalls in gardens and stock them with carp. In Japan, there is a festival known as Boys' Day that is held on May 5, during which the parents of young boys fly carp streamers from poles in their gardens to symbolize their desire for their children to rise up in the world like a dragon. The Chinese legalist philosopher Han-fei-Tzu (Hanfeizi), who lived during the Warring States Period (5th to 3rd centuries B.C.), wrote that the scales on a dragon's throat grew upside down, making them its weak point, and that if somebody touched them there, it would fly into a fury and eat the offender. He wrote this as if it were actual fact, and it can be seen from this that people believed dragons would sometimes go into a rage and destroy things. Later, the expression "to touch the upside-down scales" came to mean "to incur the wrath of the Emperor".
Recently, the vegetation at a famous beauty spot in Japan has begun to die, creating a lot of concern. The cause is a tunnel, nine meters in diameter, that was dug to divert flood waters from a local river but which inadvertently cut across an underground river. As a result, all the water flows into the tunnel, and seven years after it was built its effect can be seen in a general drying out of the area due to the lack of ground water. This shows us that the power of nature is something that surpasses time and mankind's ability to control it. In our modern rationalism we forget to revere not only water but all the other forces of nature that used to be thought of as gods; we have lost our humility and no longer give thanks for their blessings. The results of this are now beginning to make themselves known through a series of natural disasters, and ultimately it is mankind who will have to pay the price. The priority placed on the economy in China, as shown in the recent creation of dams and the development of farmland, resulted in the terrain becoming incapable of absorbing the rain that fell and led to terrible floods that shook the state to its foundations. Recently we see a lot of cheap imported crockery from China decorated with a design of the five-fingered dragon, which only the Chinese Imperial family used to be allowed to use. Now that their arrogance has reached the point where they have lost their reverence for water, the dragons who originate from China have probably run out of patience with them. The dragon is a theme I have adopted in several of my works, but I have often thought it curious that the ancient Chinese should have created a mythical creature like this and used it so often as a subject in art. Recently, however, I have come to feel that it is not simply an imaginary beast to which they gave a shape and attributed powers for their amusement; rather, it represents the knowledge and demeanor that mankind needs in order to live with nature.
http://gd.ws14.arena.ne.jp/e/museE.html.data/Essay%20Corner/Dragons.html
Minute of Islands by Studio Fizbin

Minute of Islands is a game about sacrificing oneself for the greater good. A horrible plague has ravaged the islands; what were once bustling places are now abandoned and disheveled. People and animals affected by the harmful, lung-infesting spores quickly die.

One for All

Our main protagonist, Mo, has been gifted the ability to help cleanse these islands by activating purifiers placed there by skyscraper-tall giants, so she takes it upon herself to help the world alone. With that burden comes consequences for both Mo and the people close to her. Minute of Islands largely centers around Mo's burden. The game tackles relatable themes like self-doubt and family conflict, but these themes aren't expanded upon as much as I would have liked. That leaves the climax of the game feeling a tad unresolved in certain respects. The writing is good, and the characters all serve a purpose in Mo's quest, but I was left wanting more from its narrative. On the gameplay side of things, Minute of Islands plays a lot like a familiar puzzle platformer: activate switches and push blocks to solve puzzles. It's standard fare and not especially challenging. The game is linear, and it's easy to know where to go and what to do next. The focus is more on the narrative than on its puzzles.

And I Think to Myself, What Wonderful World-Building

The game does mix up the puzzles and light platforming with some optional memories Mo can unlock in each area. These memories provide a good amount of backstory to help flesh out characters and the world. Hearing things about Mo's youth, like how she liked to explore and crawl into dark caves, fleshed out her character nicely. And thankfully the world in which the game takes place is compelling. I was intrigued by the idea of underground giants within a fantastical world. The islands also have a lot of detail, making exploring them more engaging.
Minute of Islands utilizes environmental storytelling here and there to great effect. Going into Mo's uncle's cabin gave me a good sense of what type of person he was by seeing his living space. Seeing what were once bustling theme park attractions turned into nothing more than rusted remains does wonders for world-building. These small details make the game's five-hour journey more enjoyable. The game opts for an effectively bright hand-drawn style. It has a children's storybook look to it. Plenty of meticulously drawn backgrounds really shine. Its style, juxtaposed with the heavy narrative themes, makes some of the more grotesque imagery of the game more shocking, like one of those old fairy tales but modernized. It's a look that fits the game well.

Spooky Songs and Vivid Voice-Acting

What makes the world even more engaging is the fantastic soundtrack. Many tracks feel eerie, giving off an unsettling, oftentimes unnerving vibe. Capturing the hopeless and unknown qualities of the setting, the soundscapes do a fantastic job of highlighting narrative moments even more effectively. Its large mix of echo-infused stringed instruments creates a sound that stuck with me even when I wasn't playing the game. I also must mention the voice actress who narrates the story in this game. Her performance expertly captures Mo's internal strife. It felt like I was listening to an engrossing audiobook. It's a performance with the right amount of inflection to make the narrative and writing really shine. Minute of Islands is a good narrative-driven puzzle game. It creates an intriguing, sad world to engage with. It leans heavily on its story and will be more enjoyable for those who don't mind that, but keep in mind that the way the narrative ends can leave something to be desired. Its overall message is clearly spelled out, but I wish there was a bit more depth to drive that message home more effectively.
Minute of Islands is available via the Nintendo Online Store, Sony PlayStation Store, Microsoft Store, Steam, and GOG.com. Check out the official trailer for Minute of Islands below:
https://indiegamereviewer.com/minute-of-islands-review-explore-a-sad-intriguing-world/
First dates with Jesus, dinosaurs falling out of the sky, and a famous painting that eats art critics are among the quirky stories found in this collaborative collection. Each piece was written by Jack Dann and one or more coauthors, and the joint creations are 18 highly entertaining and cutting-edge genre stories, many of them award-winning or award-nominated. Employees are drafted by corporations in the Nebula Award-nominated story "High Steel", and the first manned landing on Mars is imagined in "The God of Mars", just two examples of the futuristic flavour of the collection. Short, clever essays by the coauthors, among them Susan Casper, Gardner Dozois, and Gregory Frost, introduce each story and provide insight into the friendships, conflicts, and story conferences involved in collaborative writing.

Genres: science fiction, short stories

Jack Dann

Jack Dann (born 1945) is an American writer best known for his science fiction, an editor and a writing teacher, who has lived in Australia since 1994. He has published over seventy books, in the majority of cases as editor or co-editor of story anthologies in the science fiction, fantasy and horror genres. He has published nine novels, numerous shorter works of fiction, essays and poetry, and his books have been translated into thirteen languages. His work, which includes fiction in the science fiction, fantasy, horror, magical realism, historical and alternative history genres, has been compared to Jorge Luis Borges, Roald Dahl, Lewis Carroll, J.G. Ballard and Philip K. Dick.

Links: Jack Dann's Official Website. Jack Dann, Wikipedia. Photo author: Catriona Sparks. Photo source: Wikimedia Commons.
https://www.risingshadow.net/library/book/19292-the-fiction-factory
Embracing Conflict in Relationships If you are like me, you do not like conflict. If you are like me, you are an avoider. As avoiders, we feel uncomfortable when there is tension in our relationships. Communication scholars sometimes call this non-assertion: “the inability or unwillingness to express thoughts or feelings in a conflict” (Adler, Rodman and du Pre, 2014, p. 247). This is exactly how I often feel, what I often do. However, in my desire to be a good friend, husband, father, son and brother, I have had to learn how to embrace some aspects of conflict. It turns out that there are benefits to healthy conflict, tough conversations and engaging in dialogue with those who hold competing or divergent viewpoints from your own. Conflict is difficult because of the often uncomfortable (and consequently undesirable) emotions it arouses. In order to overcome my avoidance style, though, I had to learn that it was not just my emotions or how I handled my emotions that was the problem. The root problem was that I held false beliefs about myself and about the benefits and consequences of conflict. My first false belief guiding my response to conflict was that I did not need anything – I did not need to change. I would have never said this, but looking back on many conflict situations in my family and marriage relationships, I can now see how I thought they were in need of a change, not me. This attitude is destructive to relationships. Jesus calls us to be sober in our estimation of ourselves, to put others first, to be meek and poor in spirit. All of these suggest that we are in need and that living in community can meet those needs if that community is submitted to the Word of God, seeking to know God more and more each day. So, the first step is to look for the ways we may need to change; seeking others who can help see where we need to change is an important part of the first step (this may even be a counselor – they’re really good at it!). 
The second false belief I held is that difficult emotions or tension in relationships are bad. False. I have learned that tension in relationships encourages us to grow and become better people if we learn to effectively manage our communication when we experience emotional arousal or discomfort. Daniel Cohen, in a 2013 TED Talk, makes this point when he says that the "loser" in an argument actually gets more out of the argument. In other words, conflicts refine us. When a conflict makes us more aware of our shortcomings, both relational partners win! So, embrace the tension in your relationships and learn effective ways to communicate with humility when conflicts arise. Communication is key in all aspects of life! Learn more about expressing yourself and the importance of communication in our previous blog posts. Interested in a liberal arts degree? Learn more by visiting our website.

References
Adler, R.B., Rodman, G., & du Pre, A. (2014). Understanding human communication (12th ed.). New York, NY: Oxford University Press.
Cohen, D. (2013, February). For argument's sake [Video file].

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of Grand Canyon University.
https://www.gcu.edu/blog/language-communication/embracing-conflict-relationships
This introduction to organic spectroscopic analysis aims to provide the reader with a basic understanding of how nuclear magnetic resonance (NMR), infrared (IR) and ultraviolet-visible (UV-Vis) spectroscopy, and mass spectrometry (MS) give rise to spectra, and how these spectra can be used to determine the structure of organic molecules. The text aims to lead the reader to an appreciation of the information available from each form of spectroscopy and an ability to use spectroscopic information in the identification of organic compounds. Aimed at undergraduate students, Organic Spectroscopic Analysis is a unique textbook containing large numbers of spectra, problems and marginal notes, specifically chosen to highlight the points being discussed. Ideal for the needs of undergraduate chemistry students, Tutorial Chemistry Texts is a major series consisting of short, single topic or modular texts concentrating on the fundamental areas of chemistry taught in undergraduate science courses. Each book provides a concise account of the basic principles underlying a given subject, embodying an independent-learning philosophy and including worked examples.
https://pubs.rsc.org/en/content/ebook/978-1-84755-156-6
Learning how to make matcha properly is an important part of enjoying this delicious tea. Traditionally, a matcha tea ceremony is a process in which priests follow set procedures laid out many years ago that include prayer and ritual, but you don't have to follow these to make a delicious bowl of tea. Matcha is one of the most refreshing, delicious and invigorating teas you can drink. It doesn't take long to make, and it's worth doing it right to get the best flavour.

What You Need To Make Matcha Tea
1. A large cup or drinking bowl.
2. A bamboo matcha whisk - we prefer to use a traditional bamboo whisk, but you can mix it with a spoon or small kitchen whisk if you don't have one.
3. A matcha scoop or teaspoon.
4. ½ - 1 tsp of DoMatcha conventional or organic matcha green tea powder.
5. 2 to 8 oz / 60-240 ml of pre-boiling water. We recommend using filtered water if possible.

The water temperature is important when making matcha, as overheating the matcha can kill some of the nutrients and bring out bitterness! Pour boiling water back and forth between two cups to bring the temperature down to 175 F/80 C (the ideal temperature).

How To Make Matcha Tea
1. Place 1 - 2 scoops (½ - 1 tsp) of DōMatcha® into your bowl or cup.
2. Add 2 - 3 oz (60 to 90 ml) of pre-boiling water to DōMatcha®. The ideal water temperature for DōMatcha® is 175 F/80 C.
3. Using a bamboo whisk, briskly whisk in a W motion until froth forms on top, or make a paste using a spoon. Slow down and move the whisk to the surface to remove larger bubbles and make a smooth froth. Move the whisk slowly around the bowl and lift carefully from the centre to remove.
4. Add more water to taste after whisking (up to 8 oz/240 ml). Note - only add more water AFTER whisking.
5. Enjoy!

How to Make Matcha Tea Without a Whisk - Simply use a spoon to mix the matcha into a paste, then add more water and enjoy!

DōMatcha® can be used for both 'thin' and 'thick' matcha. Thick matcha, or 'koicha', uses more matcha powder and less water.
Thin matcha tea or ‘usucha’ uses more water and less matcha. Most people drink 1-2 cups per day. More About Making Matcha Tea Legend has it that the ancient Chinese emperor and inventor of Chinese medicine, Shennong, was the first to discover the pleasant flavor and medicinal properties of green tea. A new powdered form of tea emerged during the Song Dynasty (960-1279). Freshly picked tea-leaves were steamed to preserve color and freshness, then dried and ground into a fine powder called ‘tea mud’. Japanese Zen priests began their own tradition of cultivating, processing and preparing powdered green tea – and thus Matcha was born.
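As a quick arithmetic check (not part of the original recipe), the 175 F/80 C figure quoted above matches the standard Fahrenheit-to-Celsius conversion:

```python
# Standard Fahrenheit-to-Celsius conversion, used here only to double-check
# the recommended matcha water temperature quoted in the recipe above.

def f_to_c(fahrenheit):
    return (fahrenheit - 32) * 5 / 9

print(round(f_to_c(175)))  # 79 -- close to the 80 C recommended for matcha
print(round(f_to_c(212)))  # 100 -- fully boiling water, too hot for matcha
```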
https://domatcha.ca/blogs/blog-recipes/how-to-make-matcha-tea
The WISC-V is the most commonly used IQ test for children ages 6 to 16. Arguably the most thorough of the bunch, it not only produces a full-scale IQ score, but also five factor scores that measure different dimensions of functioning.

Stanford-Binet Intelligence Scale - Fifth Edition

After completing this article, readers should be able to:
1. Define intelligence quotient (IQ) and what constitutes the normal range of IQ scores.
2. Describe the predictive validity of intelligence test scores.

One study had limitations: it was a small sample, it is not clear that the tests administered were age-appropriate for all of the participants, and some of the high-IQ children tested at the first time point may have found the test too easy (showing 'ceiling effects'). Future research should look at larger samples, using a number of different tests, on a wider range of participants.

The optimum time for testing a gifted child's IQ is between ages 5 and 8. If you need rather than want to know if your child is gifted, then ages 5 to 8 would be good years to have testing done. Even if you don't think you have a need to have your child tested during that time, it might be a good idea to have it done anyway.

Should IQ Tests Be Administered to Kids With Autism? Typical IQ tests are built around the assumption that test-takers can understand and use spoken language at an age-appropriate level. Children with autism, however, almost never have age-appropriate communication skills. This means that they start at a disadvantage.

IQ classification is the practice by IQ test publishers of labeling IQ score ranges with category names such as 'superior' or 'average'. The current scoring method for all IQ tests is the deviation IQ.
In this method, an IQ score of 100 means that the test-taker's performance on the test is at the median level of performance in the sample of test-takers of about the same age used to norm the test.

According to the Davidson Institute for Talent Development, the best and most predictive ages for IQ testing are ages 6-9. However, I've heard various things from various other organizations.

IQ tests are among the most commonly administered psychological tests. In order to understand what these scores really mean, it is essential to look at exactly how they are calculated. Today, many tests are standardized, and scores are derived by comparing individual performance against the norms for the individual's age group. "Every time we meet to talk about a student and we realize the testing we're looking at is three years old, we kind of, you know, discount it a little bit."

For instance, the IQ test for kids and the IQ test for teens have different requirements. Therefore, here are some of the most popular IQ tests that you and your children can take according to your needs.

1. Stanford-Binet Intelligence Test. This test can be used to identify both gifted children and children with intellectual disabilities. The test is fair across cultures and languages and does not require children to be literate or to be in school already. The appropriate age for this test is 4 to 9. For adults over 16 years old, test IQ with a quick IQ test or a recruitment IQ test. The test consists of 22 questions that must be completed in 15 minutes.

Today, the most commonly used individual IQ test series is the Wechsler Adult Intelligence Scale (WAIS) for adults and the Wechsler Intelligence Scale for Children (WISC) for school-age test-takers.

IQ scores explained: average, superior and low. Even though the IQ test industry is already a century old, IQ scores are still often misunderstood. These tests no longer correlate with an IQ test.
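The "deviation IQ" scoring described above can be reduced to a one-line formula. A minimal sketch, assuming the conventional IQ standard deviation of 15; the norm-group mean and standard deviation in the example are made-up illustration values:

```python
# Deviation IQ: compare a raw score with the distribution of raw scores in a
# same-age norm group, then rescale so the median maps to 100 and one norm
# standard deviation maps to 15 IQ points.

def deviation_iq(raw_score, norm_mean, norm_sd, iq_sd=15):
    z = (raw_score - norm_mean) / norm_sd  # standing relative to age peers
    return 100 + iq_sd * z

# With a hypothetical norm group scoring 50 on average (SD 10):
print(deviation_iq(50, 50, 10))  # 100.0 -- exactly at the age-group median
print(deviation_iq(60, 50, 10))  # 115.0 -- one standard deviation above it
```

This is why a score of 100 means median performance for one's age group: the formula pins the norm-group median to 100 by construction.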
Note that the acceptance date applies to the date you took the test, not the date you join Mensa. You can still join Mensa by using older scores. Many intelligence test scores will qualify you for Mensa, but Mensa's supervisory psychologists will have to individually appraise the documentation.

Fast IQ test for adults - take it for free. You need to answer 12 questions to find out what your IQ level is. This test measures your logic abilities and doesn't require any culture-specific knowledge, so it is recommended for any nationality. Answer based on your own opinion; do not use search engines to get a correct score.

Should a Three-Year-Old Get an IQ Test? I certainly agree that your son sounds advanced for 33 months of age. However, I would not advise testing him at this point unless you have a certain need for testing (such as a specific preschool admission policy). Intelligence tests are not considered to be as valid when they are administered to very young children.

Age test: Would you like to know how old you act? Just answer 24 simple questions honestly and you will find out how old you really are. This test is quite simple and straightforward. The questions are about your personality, and you can give only one answer to each question.

IQ tests have gone through significant changes through the decades to correct for racial, gender, and social biases, as well as cultural norms. Tao started high school in the 1980s at age 7.

The Woodcock-Johnson Tests of Cognitive Abilities is a set of subtests developed in 1977 by Richard Woodcock and Mary E. Bonner Johnson. The test can be administered to children as young as two and to adults into their nineties. The most recent version is the WJ-IV. Unlike many IQ tests, the WJ does require reading and writing.

A child that is tested at age 10 will not likely have the same IQ test score at age 30 or even age 50.
IQ test scores are not a perfect measurement tool, because IQ tests are man-made and testing instruments cannot test all of the various elements that can affect a person's intelligence. In one retesting example, while the average score of 113 was only one point greater than on the previous test, the range of scores was quite different: 87 to 143. Through IQ, researchers sought to assess innate intelligence, meaning that IQ test results would not vary substantially as a person ages, even from childhood to adulthood. In other words, the score you get at age 10 should not be very different from the score you get at age 30.
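The norming arithmetic described above can be sketched in a few lines of Python. Modern deviation-IQ tests such as the Wechsler scales map performance onto a scale with mean 100 and, typically, a standard deviation of 15 within the test-taker's age group; the age-norm numbers below are hypothetical placeholders, not real norms:

```python
def deviation_iq(raw_score, age_norm_mean, age_norm_sd, iq_mean=100.0, iq_sd=15.0):
    """Convert a raw test score to a deviation IQ against age-group norms."""
    z = (raw_score - age_norm_mean) / age_norm_sd  # standard score within the age group
    return iq_mean + iq_sd * z

# Hypothetical age-group norms: mean raw score 42, standard deviation 8.
print(deviation_iq(42, 42, 8))  # performance at the age-group median -> 100.0
print(deviation_iq(50, 42, 8))  # one SD above the age-group median -> 115.0
```

This is why the same raw performance yields different IQ scores at different ages: the score is always relative to the norms for the test-taker's own age group.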
https://dankbarmierda.com/national/archive/2009/07/the-truth-about-iq/22260/ybh738taf9sj
PAPUA New Guinea’s national government says it will improve law and order following the release of a World Bank Group report which warned high rates of crime and violence are slowing business expansion and hampering the country’s economic development. PNG asked the World Bank to assess the social and economic costs of crime and violence as they related to business, citizens, government and civil society. Eight out of 10 businesses surveyed in PNG said they suffered substantial losses and security costs as a result of high rates of crime and violence. More than 80% of the 135 companies surveyed said their business decisions are negatively influenced by the country’s law and order issues, with crime increasing the cost of doing business. The expense of avoiding criminal damage limits firms’ ability to grow, deters start-ups, and imposes long-term social costs on the country, the report states. Prime Minister Peter O’Neill said the report would be reviewed by the government as part of ongoing planning and policy development to continue to improve law and order in PNG. “The World Bank statement makes note that crime has stabilised, while at the same time more work needs to be done,” O’Neill said. “We have made a commitment to the people and businesses of Papua New Guinea to improve law and order and we are meeting this challenge. “There has been a decline in major crime over the past three years due to strong government policy on law and order that is supported through increased funding. “This has seen the number of inmates at several prisons decrease by more than half, such as in Bomana where the inmates have been reduced from over 1,000 to around 450. “The reduction in crime and decreased prison population clearly shows our youth are being given opportunity to engage meaningfully in education and employment opportunities. 
“We will continue to build on this success.” O’Neill said action was underway in line with the report recommendations relating to central police engagement alongside improvements in social and community infrastructure, and the report will aid in the review of progress and to set future direction. “At the policing level, improvements include enhanced training and additional resources used in core law enforcement duties, and better data collation and analysis for strategic and operational planning,” he added. “Through decentralisation initiatives, the government is better using local community knowledge and delivering programs around the nation to confront conditions and situations that can lead to crimes being committed.”
https://www.lcci.org.pg/2014/08/png-vows-to-meet-law-and-order-challenge/
The term intermittency (from Latin intermittere, "to interrupt") describes a feature of a nonlinear dynamical system whose essentially regular behavior is interrupted by rare, brief phases of chaotic behavior. Intermittency is a phenomenon that occurs in the transition region between regular and chaotic behavior. In everyday usage, intermittency is the state of being sporadic or intermittent: a lack of flow or fluency, or periodicity, as in "a malarial or intermittent fever, well known in Africa and India."
A cost analysis will need to be made between long-distance transmission and excess capacity. The sun is always shining somewhere, and the wind is always blowing somewhere on the Earth, and it is predicted to become cost-effective in the coming decades to bring solar power from Australia to Singapore. In locations like British Columbia, with abundant water power resources, hydropower can always make up any shortfall in wind power, and thermal storage may be useful for balancing electricity supply and demand in areas without hydropower. Wind and solar are somewhat complementary: a comparison of the output of the solar panels and the wind turbine at the Massachusetts Maritime Academy shows the effect. There is no solar at night, and there is often more wind at night than during the day, so solar can be used somewhat to fill in the peak demand in the day, and wind can supply much of the demand during the night. There is, however, a substantial need for storage and transmission to fill in the gaps between demand and supply. The variability of sun, wind and so on turns out to be a non-problem if you do several sensible things. One is to diversify your renewables by technology, so that weather conditions bad for one kind are good for another. Second, you diversify by site so they're not all subject to the same weather pattern at the same time. Third, you use standard weather forecasting techniques to forecast wind, sun and rain, as hydro operators already do. The combination of diversifying variable renewables by type and location, forecasting their variation, and integrating them with dispatchable renewables, flexible-fuel generators, and demand response can create a power system that has the potential to meet our needs reliably. 
Integrating ever-higher levels of renewables is being successfully demonstrated in the real world. Mark A. Delucchi and Mark Z. Jacobson identify seven ways to design and operate variable renewable energy systems so that they will reliably satisfy electricity demand. Jacobson and Delucchi say that wind, water and solar power can be scaled up in cost-effective ways to meet our energy demands, freeing us from dependence on both fossil fuels and nuclear power. A more detailed and updated technical analysis has been published as a two-part article in the refereed journal Energy Policy. An article by Kroposki et al. notes that such systems must be properly designed to address grid stability and reliability. Renewable energy is naturally replenished, and renewable power technologies increase energy security because they reduce dependence on foreign sources of fuel. Unlike power stations relying on uranium and recycled plutonium for fuel, they are not subject to the volatility of global fuel markets. An accidental or intentional outage affects a smaller amount of capacity than an outage at a larger power station. The International Energy Agency says that there has been too much attention on the issue of the variability of renewable electricity production. Variability will rarely be a barrier to increased renewable energy deployment when dispatchable generation is also available, but at high levels of market penetration it requires careful analysis and management, and additional costs may be required for back-up or system modification. The Intergovernmental Panel on Climate Change, the world's leading climate researchers selected by the United Nations, has said that "as infrastructure and energy systems develop, in spite of the complexities, there are few, if any, fundamental technological limits to integrating a portfolio of renewable energy technologies to meet a majority share of total energy demand in locations where suitable renewable resources exist or can be supplied". 
This approach could hold greenhouse gas concentrations below the level beyond which climate change becomes catastrophic and irreversible. An intermittent energy source is any source of energy that is not continuously available for conversion into electricity and is outside direct control, because the primary energy used cannot be stored. Intermittent energy sources may be predictable but cannot be dispatched to meet the demand of an electric power system. The use of intermittent sources in an electric power system usually displaces storable primary energy that would otherwise be consumed by other power stations. Another option is to store electricity generated by non-dispatchable energy sources for later use when needed. A third option is sector coupling, e.g. using excess electricity in other sectors such as heating or transport. The use of small amounts of intermittent power has little effect on grid operations; using larger amounts may require upgrades or even a redesign of the grid infrastructure. The penetration of intermittent renewables in most power grids remains a small share of global electricity production. The intermittency and variability of renewable energy sources can be reduced and accommodated by diversifying their technology type and geographical location, forecasting their variation, and integrating them with dispatchable renewables such as hydropower, geothermal, and biomass. Combining this with energy storage and demand response can create a power system that can reliably match real-time energy demand. A research group at Harvard University quantified the meteorologically defined limits to reduction in the variability of outputs from a coupled wind farm system in the Central US. The problem with the output from a single wind farm located in any particular region is that it is variable on time scales ranging from minutes to days, posing difficulties for incorporating relevant outputs into an integrated power system. 
The high-frequency (shorter than once per day) variability of contributions from individual wind farms is determined mainly by locally generated small-scale boundary-layer turbulence. The low-frequency variability (longer than once per day) is associated with the passage of transient waves in the atmosphere, with a characteristic time scale of several days. The high-frequency variability of wind-generated power can be significantly reduced by coupling outputs from 5 to 10 wind farms distributed uniformly over a ten-state region of the Central US. Large, distributed power grids are better able to deal with high levels of penetration than small, isolated grids. Matching power demand to supply is not a problem specific to intermittent power sources: existing power grids already contain elements of uncertainty, including sudden and large changes in demand and unforeseen power plant failures. Though power grids are already designed to have some capacity in excess of projected peak demand to deal with these problems, significant upgrades may be required to accommodate large amounts of intermittent power. The International Energy Agency (IEA) states: "In the case of wind power, operational reserve is the additional generating reserve needed to ensure that differences between forecast and actual volumes of generation and demand can be met. Again, it has to be noted that already significant amounts of this reserve are operating on the grid due to the general safety and quality demands of the grid. Wind imposes additional demands only inasmuch as it increases variability and unpredictability."
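The variance-reduction effect behind coupling geographically dispersed wind farms can be illustrated with a toy Python simulation. This is not the Harvard group's model; it assumes, unrealistically, that the farms' outputs are fully independent, simply to show that an aggregate of many fluctuating sources is smoother than any single one:

```python
import random
import statistics

random.seed(1)
HOURS, FARMS = 1000, 10

# Toy model: each farm's hourly capacity factor fluctuates independently
# around the same mean (real farms are only partly independent).
farms = [[random.gauss(0.35, 0.15) for _ in range(HOURS)] for _ in range(FARMS)]

single_sd = statistics.pstdev(farms[0])
pooled = [sum(farm[h] for farm in farms) / FARMS for h in range(HOURS)]
pooled_sd = statistics.pstdev(pooled)

print(f"one farm:  sd = {single_sd:.3f}")
print(f"ten farms: sd = {pooled_sd:.3f}")  # roughly single_sd / sqrt(10)
```

With partially correlated farms, as in reality, the smoothing is weaker, but the direction of the effect is the same: geographic dispersion suppresses the high-frequency variability.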
Climate is an attractor in the chaotic system of the Earth's overall meteorology, and the best-known climate parameter at present is the CO2 concentration in the atmosphere. In the language of chaos theory, intermittency denotes the occurrence of long periods of regular (laminar) behavior interrupted by short irregular (chaotic, turbulent, stochastic) episodes. In nonlinear dynamical systems, the occurrence of intermittent behavior is always linked to the loss of stability of a periodic orbit, and the transition to chaotic behavior proceeds through a sequence of bifurcations. Intermittency is found, for example, in turbulent flows, and the solar wind is often cited as a typical example of intermittent behavior. The Sierpinski triangle, by contrast, does not exhibit intermittency, because a normal distribution is present there; the interruption of the normal distribution is one of the conditions for intermittent behavior. The term also describes the unexpected dropout of an attractor as perturbations increase, where the normal reaction of the chaotic system would be a doubling of attractors.
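A standard minimal demonstration of (type-I) intermittency is the logistic map x -> r*x*(1-x) iterated just below the period-3 window, which opens at r = 1 + sqrt(8) ≈ 3.8284. The sketch below classifies steps as laminar (nearly period-3) or bursting, using arbitrary thresholds:

```python
def logistic_orbit(r, x0=0.5, n=3000, transient=100):
    """Iterate the logistic map x -> r*x*(1-x), discarding a transient."""
    x = x0
    orbit = []
    for i in range(n + transient):
        x = r * x * (1.0 - x)
        if i >= transient:
            orbit.append(x)
    return orbit

# Just below the period-3 window: long nearly-period-3 (laminar) stretches
# are interrupted by chaotic bursts.
orbit = logistic_orbit(r=3.8275)
diffs = [abs(orbit[i + 3] - orbit[i]) for i in range(len(orbit) - 3)]
laminar = sum(d < 0.01 for d in diffs)  # near-period-3 steps
bursts = sum(d > 0.10 for d in diffs)   # chaotic episodes
print(f"laminar steps: {laminar}, burst steps: {bursts}")
```

Moving r closer to 3.8284 lengthens the laminar phases; this scaling of laminar length with distance from the bifurcation is the signature of type-I intermittency.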
https://russkiy-suvenir.com/swiss-casino-online/intermittenz.php
While the Pirates of the Caribbean series isn't one of the more critically acclaimed film franchises, the first movie, The Curse of the Black Pearl, is still an enjoyable standalone adaptation that capitalized nicely on the Disneyland attraction. Among the 2003 movie's most notable elements was its score, composed by Hollywood icon Hans Zimmer. It's hard to imagine Jack Sparrow, Will Turner and Elizabeth Swann's first outing without Zimmer's musical accompaniment. However, when director Gore Verbinski originally told Zimmer about his work on Pirates of the Caribbean: The Curse of the Black Pearl, Zimmer was less than impressed. Let's be thankful that Hans Zimmer changed his tune upon seeing what Gore Verbinski had created with Pirates of the Caribbean: The Curse of the Black Pearl. It's one thing to have preconceived notions after just hearing an idea, but with actual footage in his head, Zimmer realized this was a project worth contributing to. However, because Zimmer was replacing Alan Silvestri, the original composer, he didn't have as long as he usually does to put together a score, as he told Vanity Fair. Hans Zimmer went on to score the next three Pirates of the Caribbean movies, passing along composing duties to Geoff Zanelli for Pirates of the Caribbean: Dead Men Tell No Tales. For now, it seems that his time with the franchise is over, but among the many movies he's composed for over the decades, his music on Pirates of the Caribbean (especially that theme) ranks as particularly memorable. Pirates of the Caribbean: Dead Men Tell No Tales is now available on Blu-ray, DVD and Digital HD, and while making Pirates of the Caribbean 6 has been discussed, the project has yet to be officially greenlit. To keep track of movies that will be released this year (including X-Men: Dark Phoenix, which Hans Zimmer is scoring), look through our 2018 premiere guide. 
https://www.cinemablend.com/news/2306211/what-hans-zimmer-originally-thought-about-pirates-of-the-caribbean
Applied Science: What are the only 2 letters that do not appear on the periodic table? Sunday Mar 24, 2019 at 5:00 AM The New Philadelphia Science Club is back with another science question in The Times-Reporter. Each week a new science-related question will be given. Everyone is invited to participate by either mailing the answer to the club at the address below or simply emailing the answer to the address given. At the end of the school year several winners will be chosen from all the correct entries submitted to receive a prize. To participate in this drawing, send your answer to: New Philadelphia High School-Room 331 343 Ray Ave. NW New Philadelphia, OH 44663 Email answers to: [email protected] Last week’s answer: The correct answer to last week’s question is C. Mary’s muscle cells performed anaerobic respiration and began to produce lactic acid. Under normal rest or light exercise conditions, cells produce ATP using an aerobic pathway. This involves glycolysis followed by the Krebs cycle (also known as the citric acid cycle) and electron transport chain. This happens when enough oxygen is present and the products of glycolysis can enter the mitochondria. However, human cells can perform anaerobic respiration in which glycolysis is followed by lactic acid fermentation. The production of lactic acid can cause the burning sensation that can be experienced immediately after or during intense exercise. A side effect of high lactic acid levels is an increase in the acidity of the muscle cells, as well as disruptions of other metabolites. The same metabolic pathways that permit the breakdown of glucose to energy perform poorly in an acidic environment. It seems counterproductive that a working muscle would produce something that would slow its capacity for more work. However, this is a natural defense mechanism for the body; it prevents permanent damage during extreme exertion by slowing the key systems needed to maintain muscle contraction. 
Once the body reduces exertion, oxygen becomes available, which allows continued aerobic metabolism and energy for the body’s recovery from the strenuous event. This week’s question: Individuals who take chemistry will spend time using the periodic table in calculations and understanding chemical properties. There are currently 118 elements making up the periodic table, with scientists experimenting to make even more. While some of the element names are fairly common (oxygen, hydrogen, carbon, sodium), there are many names that people do not know (seaborgium, actinium, hassium). Some of the elements have weird and surprising names. What are the only letters of the alphabet that do NOT appear on the periodic table? A. Y and Q B. Q and J C. X and Y D. J and Z
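Readers who prefer to check their answer with code can do so from the list of the 118 current element symbols; the Python snippet below collects every letter that appears in a symbol and reports the letters that never do:

```python
import string

# Symbols of the 118 currently recognized chemical elements, in atomic-number order.
SYMBOLS = (
    "H He Li Be B C N O F Ne Na Mg Al Si P S Cl Ar K Ca "
    "Sc Ti V Cr Mn Fe Co Ni Cu Zn Ga Ge As Se Br Kr Rb Sr Y Zr "
    "Nb Mo Tc Ru Rh Pd Ag Cd In Sn Sb Te I Xe Cs Ba La Ce Pr Nd "
    "Pm Sm Eu Gd Tb Dy Ho Er Tm Yb Lu Hf Ta W Re Os Ir Pt Au Hg "
    "Tl Pb Bi Po At Rn Fr Ra Ac Th Pa U Np Pu Am Cm Bk Cf Es Fm "
    "Md No Lr Rf Db Sg Bh Hs Mt Ds Rg Cn Nh Fl Mc Lv Ts Og"
).split()

used = {ch.lower() for sym in SYMBOLS for ch in sym}
missing = sorted(set(string.ascii_lowercase) - used)
print(missing)  # the letters that appear in no element symbol
```

Note that the question can also be read against element names rather than symbols; running the same check over the 118 official names gives the same two letters.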
The Categorical platform enables you to tabulate, plot, and compare categorical response data, including multiple-response data. You can use this platform to analyze data from surveys and other categorical responses, such as defect records and study participant demographics. The platform accepts a rich variety of data organizations, and the Categorical launch window enables you to specify analyses as well as data formats.
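The kind of multiple-response tabulation such a platform automates can be illustrated generically. The sketch below is plain Python, not JMP's scripting language, and the defect-survey records are made up for illustration:

```python
from collections import Counter

# Hypothetical multiple-response survey data: each record lists every
# defect category a respondent checked (zero, one, or several).
records = [
    ["scratch", "dent"],
    ["scratch"],
    [],
    ["dent", "discoloration"],
    ["scratch", "discoloration"],
]

counts = Counter(resp for record in records for resp in record)
n_records = len(records)
for response, count in counts.most_common():
    # Rates are per respondent, so multiple responses can sum past 100%.
    print(f"{response:15s} {count:3d}  {100 * count / n_records:5.1f}%")
```

Because each respondent may give several responses, per-respondent rates can sum to more than 100%, which is why multiple-response data needs its own tabulation rather than an ordinary frequency table.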
http://www.jmp.com/support/help/13-2/Categorical_Response_Analysis.shtml
Recanati v. Roberts (In re Roberts), 594 B.R. 484 (Bankr. N.D. Fla. 2018) – In connection with acquisition of a restaurant the debtor agreed to assume debts of the business. After the debtor filed a chapter 7 bankruptcy, the former owners sought to except this obligation from discharge. The sellers contended that the debt was nondischargeable under (1) section 523(a)(4) – obtained by “fraud or defalcation while acting in a fiduciary capacity,” or (2) section 523(a)(6) – intended to cause “willful or malicious injury” to another. In the court’s view the pleadings were more consistent with section 523(a)(2)(A) – incurred under “false pretenses, a false representation, or actual fraud.” So, it considered all three subsections. To put the facts in context, the court started by noting that the debtor retired from the US Postal Service where her annual salary had been ~$65,000-$70,000. Prior to retirement she patronized a restaurant and formed a relationship with one of the owners (Tony). She loaned Tony ~$8,000 to help improve the restaurant and became what she characterized as a “silent investor.” She paid rent, shopped for and purchased equipment, used personal funds to make payroll and help relocate the restaurant to a new location. The loans totaled ~$60,000. During this time, she also became responsible for navigating the BP oil spill claims process which eventually resulted in two payments to the company. The first payment of $84,000 was incorrectly released to the debtor in her personal checking account as opposed to the company’s business account. However, as soon as the mistake was discovered, the money was transferred to the business account. Eventually the debtor agreed to purchase the restaurant through a stock purchase. 
As consideration the debtor agreed to assume all debts, liabilities, and obligations of the business, to forgive the personal debts of Tony, and to have all personal guarantees, loans, and vendor agreements paid off or assumed by the debtor within 30 days after the sale closed. The debts included a bank line of credit and note, an oil spill bridge loan with a credit union, a balance due to the US Food Service, and back tax payments. The liabilities totaled ~$155,700 and the company had ~$35,000 in assets. The parties agreed that the second BP claim payment of $25,000 would be used to pay back rent, vendor accounts, taxes and other debts. At some point the debtor signed an undated writing acknowledging that the debts were her sole responsibility. In another undated writing she asked for an extension of time to obtain financing so that she could meet her obligations. The debtor was able to keep the business open for about a year, but closed when the liquor license expired and she no longer had any income to keep the restaurant open. After the restaurant closed the sellers sued the debtor in state court alleging breach of contract, fraud in the inducement, and unjust enrichment. They obtained a judgment on the breach of contract claim and eventually a judgment establishing payment terms. When they had problems enforcing the judgment, they recommenced the state court action. After the debtor filed bankruptcy, the sellers filed an adversary proceeding seeking a determination that the debt owed by the debtor was nondischargeable. Turning first to the section 523(a)(2)(A) discharge exception for debt obtained by false pretenses, false representation, or actual fraud, the court outlined the elements as follows: (1) the debtor had an intent to deceive, (2) the other party relied on the misrepresentation/deceptive conduct, (3) reliance was reasonably justified, and (4) the other party sustained a loss as a result of the fraud/deception. 
The court further noted that fraud involving unfulfilled promises required showing that when the debtor made promises she either knew that she could not fulfill them or had no intention of fulfilling them. The court found that as a threshold matter the sellers failed to prove that the debtor engaged in deceptive conduct intended to deceive. Rather, given the facts described above the court found the debtor “to be generous; not deceptive.” The court further noted that there was no requirement in the purchase agreement that the debtor have funding in place. Her inability to obtain financing might result in a breach of contract, but was not evidence of fraud. With respect to section 523(a)(4), the sellers contended that handling the BP oil spill claim proceeds placed the debtor in a fiduciary position which was violated when the debtor used the proceeds for her own benefit. The elements of this discharge exception were: (1) the debt resulted from a fiduciary defalcation under an express or technical trust, (2) the debtor was acting in a fiduciary capacity, and (3) the transaction was a defalcation. The court explained that the meaning of “fiduciary” was a matter of federal law, and did not mean the traditional concept of a relationship involving confidence, trust, and good faith. Rather it is a narrow concept that requires an express or technical trust. This in turn requires a segregated trust res, identifiable beneficiary, and affirmative trust duties established by statute or contract. The court found that the sellers failed on all counts. They did not establish, or even plead, the existence of an express or technical trust. Further, even if there was a fiduciary relationship the debtor did not violate that relationship. 
Finally, section 523(a)(6) regarding willful and malicious injury required proof that the injury was both willful – meaning an intentional or deliberate act, and malicious – meaning the debtor’s action was “wrongful and without just cause or excessive even in the absence of personal hatred, spite or ill-will.” The court found that the sellers did not present any evidence that the debtor intended to act in a way that she knew would injure the sellers. Accordingly, the court found for the debtor on all counts. This case highlights the difficult position of a seller when a significant portion of the consideration for the sale involves assumption of debt and the buyer is unable to handle the debt. If feasible, perhaps the seller could require an affirmative release of the seller’s liability in connection with the buyer’s assumption of a debt. If not, perhaps the seller can negotiate terms designed to provide assurance that the buyer will be able to perform (such as requiring the buyer to demonstrate adequate funding as a condition of closing). In any event, the seller should recognize that an individual buyer will probably be able to obtain a discharge in bankruptcy absent really egregious conduct. Vicki R Harding, Esq.
https://bankruptcy-realestate-insights.com/2019/04/10/discharge-of-debt-a-seller-relying-on-a-buyers-assumption-of-debt-may-be-out-of-luck-if-the-buyer-files-bankruptcy/
This post is based on a panel presentation that I gave at the 2015 M-Enabling Summit in Washington DC on June 2nd. The goal of my presentation was to raise awareness of issues regarding mobile app design and users with cognitive disabilities. Several people at the presentation commented that the issues described would benefit all users — that is true and a good reason to implement design patterns that support access by the widest range of people. One distinction that must be made though is that while these may be usability issues for all users, when an issue creates a barrier or overwhelmingly inequivalent experience for users with a disability it becomes an accessibility issue — an access barrier to the user. Background There is currently a group of people working on these issues at the Web Accessibility Initiative within the W3C. A Cognitive and Learning Disabilities Accessibility Task Force (Cognitive A11Y TF) has been created that is part of the Protocols and Formats Working Group and Web Content Accessibility Guidelines Working Group. I am not on the Cognitive Accessibility Task Force but I am on the Mobile Accessibility Task Force that is more generally looking at issues related to mobile accessibility. This post represents my own brief thoughts on the topic and may not reflect those of any task force. The Cognitive Accessibility Task Force is performing the following: - Researching gaps in current guidelines and best practices - Aggregating research - Proposing new techniques (that do not change WCAG) - Examining options including possible extension models to WCAG to support cognitive accessibility Benefits and Challenges of Mobile Apps for Users with Cognitive Disabilities The simplicity and context focused features of mobile apps have generally provided some benefit to users with cognitive disabilities. Apps tend to have: - Fewer features and thus fewer distractions - Features that tend to be context specific (e.g. Store locator, food menu, etc.) 
Thus the mobile-first design in some sense has helped to reduce the cognitive load of users and direct them to the most needed information without complex navigation requirements. At the same time the design may trigger other issues such as moving content out of sight and out of mind. Challenges of Mobile Apps for Users with Cognitive Disabilities include, but are not limited to: - A smaller screen can mean there is no room for descriptive labels, or that available options are hidden - Mobile apps often include advertisements that can easily be confused with main content or layered on top of content - Content, especially in a flat design paradigm, may not appear interactive - The speed of mobile connections may mean feedback from a user action is not initially obvious Mobile Best Practices for Users with Cognitive Disabilities Best practices can be grouped into (but not limited to) the following categories: adaptable, discoverable, distinguishable, and understandable (these categories are my own naming and don’t exactly map to the four WCAG principles) — instead they may represent guidelines within the four principles. Adaptable Keep the following in mind: - One size does not fit all – One size fits one - What works for one person may not work for another - Experience should be flexible and adaptable, but predictable and consistent In the example below the wtop.com news site is shown in the Safari browser on iOS. When sites are structured in certain ways the “reader” function of Safari can display the main content without distractions in a linear format at the text size desired by the user. The image on the left shows the website without reader mode enabled while the image on the right shows the same page in the “reader” mode. The reader mode removes colors, enlarges the font based on user settings and hides non-main content. The reader mode is presented to the user when the site is structured in a way that allows for the adaptation to be enabled. 
Discoverable The user should be able to discover hidden content through visual clues. Not so good apps – Require the user to swipe in from the side to display a menu or more options but do not provide any indication that a menu or drawer exists. Better apps – Provide a hamburger menu to disclose more options. The hamburger menu lets you know there are options available and where to find them. In the picture on the left a hamburger menu is displayed at the top left of the screen to indicate that a menu is available. The picture on the right shows the hamburger menu opened, although no button is shown to collapse the menu. Having a clear way to close the menu can also assist users. Overlay Tutorial Apps often have many and sometimes complex gestures. Some apps now present a gesture tutorial the first time the app is opened, allowing the user to learn and practice the gestures. The tutorial should also be available to review after it has been closed. Not so good apps – Gestures are unknown and there is no tutorial to learn the gestures. Better apps – Provide a tutorial (coaching) that can be reactivated to walk the user through the gestures. The screenshot above shows a gesture tutorial and provides a way for the user to dismiss the tutorial. The Nielsen Norman Group has some good information on Instructional Overlays and Coach Marks for Mobile Apps. Things that do and do not look interactive Not so good apps – Use images and text that don’t look interactive, e.g. don’t look like buttons or touch areas. In the screenshot above the Done button does not contain a border or background — in this case the text itself implies it is interactive — but other visual clues are beneficial. In the case of iOS 7+ an accessibility feature called “button shapes” exists that, for native apps, will display a background assisting users in recognizing interactive elements. Mobile web apps do not have the benefit of this setting.
Good apps – Have controls with visual clues letting you know they are interactive. Distinguishable Boundaries When the borders are removed from controls such as input fields it may not be apparent where the input fields are located and what information needs to be entered to submit a form. In the example above there are no borders on the input fields and the user may not know where to touch to select an input field or which fields accept input. In the screenshot above field boundaries are shown as borders and it is clearer which fields are for input and which text elements are labels. Understandable Labels & Placeholders All fields that require input must have visual labels. Placeholders are not labels — they are placeholders and as such should not be used as the sole method of providing visual labels. Not so good apps – Have fields without labels, which will likely be problematic for users who may not know what data to enter in input fields. In the screenshot above, placeholders are used to provide field labels. When the field has contents, or in some cases takes focus, the placeholder disappears. This may cause confusion about the purpose of the field, especially when content was entered in a way that the system will not accept. Labels that always appear should be added to make this accessible. Labels do not always have to be text — they can be images or icons as well. The Nielsen Norman Group has a good article on this subject called Placeholders in Form Fields Are Harmful. Understandable Icons Icons can be very helpful to users with disabilities as they do not require reading of text and their meaning may be picked up quickly. Keep the following in mind though: - Be consistent – e.g. don’t use a hamburger icon for favorites, or use a star for both favorites and ratings.
- Label icons with text or provide supports when users may not know what they mean - Make icons memorable — they should relate to things in the real world that the user can associate with Icon Challenges include:
https://www.levelaccess.com/designing-mobile-apps-for-use-by-people-with-cognitive-disabilities/
The sense of taste conveys crucial information about the quality and nutritional value of foods before they are ingested. Taste signaling begins in taste cells via taste receptors in the oral cavity. Activation of these receptors drives the transduction systems in taste receptor cells. Particular transmitters are then released from the taste cells and activate the corresponding afferent gustatory nerve fibers. Recent studies have revealed that taste sensitivities are defined by distinct taste receptors and modulated by endogenous humoral factors in specific groups of taste cells. Such peripheral generation and modification of taste signals would directly influence the intake of nutritive substances. This review will highlight the current understanding of the molecular mechanisms of taste reception, signal transduction in taste bud cells, transmission between taste cells and nerves, regeneration from taste stem cells, and modification by humoral factors in peripheral taste organs.
https://kyushu-u.pure.elsevier.com/en/publications/recent-advances-in-molecular-mechanisms-of-taste-signaling-and-mo
PROBLEM TO BE SOLVED: To provide a donation-raising system that allows a donor to easily donate an amount of money the donor finds satisfactory. SOLUTION: The system includes: a user portable terminal 3 incorporating a non-contact IC for storing an ID and a remaining amount; data processing parts 7, 8 for communication with the non-contact IC; and a donation-raising information storage part 6. The donation-raising information storage part 6 has a function of storing a unit donation amount, previously set by each user, in association with the ID stored in the user portable terminal. The processing part 8 includes: a function of specifying the unit donation amount corresponding to the ID, based on the ID read from the non-contact IC and the data stored in the donation-raising information storage part 6; a function of subtracting the specified unit donation amount from the remaining-amount data read from the non-contact IC, so as to calculate the remaining amount after the subtraction; a function of updating the remaining-amount data stored in the non-contact IC to the remaining-amount data after the subtraction; and a function of storing the unit donation amount in the donation-raising information storage part 6 as a donation-raising amount. COPYRIGHT: (C)2008,JPO&INPIT
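The core flow the abstract describes, looking up a preset unit amount by ID, deducting it from the card's remaining balance, and accumulating the raised total, can be sketched as follows. All names and data structures here are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the donation flow in the abstract. The class and
# field names are assumptions for illustration, not the patented design.

class DonationSystem:
    def __init__(self):
        # donation-raising information storage part 6: per-user preset
        # unit donation amounts, plus a running total of donations raised
        self.unit_amounts = {}   # user ID -> preset unit donation amount
        self.total_raised = 0

    def register_user(self, user_id, unit_amount):
        """Store the unit donation amount preset by a user, keyed by ID."""
        self.unit_amounts[user_id] = unit_amount

    def donate(self, card):
        """Process one tap of a non-contact IC card: deduct the user's
        preset unit amount from the stored balance and record the donation."""
        user_id, balance = card["id"], card["balance"]
        unit = self.unit_amounts[user_id]        # specify unit amount by ID
        if balance < unit:
            raise ValueError("insufficient remaining amount")
        card["balance"] = balance - unit         # update remaining amount
        self.total_raised += unit                # accumulate raised total
        return card["balance"]
```

For example, a card `{"id": "u1", "balance": 1000}` tapped once with a preset unit of 100 would end with a balance of 900 and add 100 to the raised total.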
Like Dutton/Turnbull, chances are at one time or another we’re going to find ourselves on the wrong side of failure. We all experience setbacks from time to time; some sting, some hurt. And then there are those blindsiding, jaw-dropping moments in our careers where we went out on a limb, and - for whatever reason - that limb wasn’t strong enough to hold us. The trick is knowing how to recuperate and recover. So I’ve put together a helpful guide for the next libspill hopeful - something we can all learn from - a way to support the team through the most trying of times and to come out all the stronger for it. Acknowledge The Risk Before You Go In Rather than relying on hindsight, it’s best to understand the likelihood of failure before we dive in. George Bell, a five-time CEO, writes that being explicit about the risk of venture failure makes the possibility less intimidating, and actually has the effect of making success more likely. Obviously these things aren’t always hugely predictable, but if you have an idea that success is not the likely outcome, you can protect your team by acknowledging it. Chances are they’ve figured it out for themselves anyway, but acknowledging it helps show that you’re on the same page, and also makes a potential success even more enticing. Provide Support During the Venture The two most critical things to communicate to your team are your faith in them, and that failure of the venture will not be held against them personally. This means that they will be able to take the leap you want them to, without compounding the stress of the project with stress about how failure might impact them personally. Feeling supported was identified by the famous psychologist Abraham Maslow as crucial for exploration, and the corporate world is no different. Before we can strive for higher ideals, basic needs of support must be met; otherwise, we have no ground to stand on. Debrief Like a Boss Say the proverbial feces hits the fan. Things got messy.
Malcolm won the challenge that you instigated. The leadership we show during this time sets the tone for how our team deals with failure. Casting blame, catastrophizing the situation, or losing our cool doesn’t reflect well on anyone, or help at all. This is a crucial moment for us and our team, where we can either grow together or fall apart. There can actually be a few different parts to the debrief process, as outlined by Better Health Channel, depending on the circumstance. Demobilisation The ‘demobilisation’ technique is usually used after a critical stress incident - e.g., if a surgery patient were to be lost in the operating theatre. But even if it’s not such a dire situation, if a failure has left the team reeling, feeling overwhelmed and unable to meet the demands of the situation, then it’s a good idea to practice this technique. A demobilisation meeting for those who were involved in the incident should take place as soon after the incident as possible, and definitely on the same day or shift as the stressor occurred. Its purpose is to calm the team and address any immediate psychological needs. An important part of the meeting is to first run through the incident as it happened, so that everyone is on the same page with regard to what the sequence of events was. It’s a time to invite questions, and show care and support. If need be, short-term work arrangements can be made, depending on the needs of workers - e.g., perhaps they could use a day off to recover, or a late start the next day to catch up on some sleep or practice other forms of self-care. Debrief A debrief is usually carried out within three to seven days after the incident, when we’ve had enough time to take in the experience. We can set the tone by introducing the agenda, and reminding everyone that the debrief is a non-judgmental place to share concerns, answer questions, and together learn what we can from the experience.
There are two major elements to be covered in a failure debrief; the first is to address any lingering psychological distress from the event. On this aspect, Better Health encourages the debrief to be facilitated by a trained individual, and to involve: - Clarifying the sequence of events - Discussing causes and consequences - Allowing individuals to share their own experiences - Discussing normal psychological reactions to acute stress incidents, and ways to manage them The second part of the debrief can involve discussing what can be learned. It’s a good opportunity to break out the whiteboard markers, ask the hard questions, and brainstorm how things could be done differently next time. As Bell writes, we should also make sure to examine our own role in the failure. Why did the venture fail? Was it customer experience? Do we look like a potato? Poor salesmanship? Did we take too long to make a decision, misread the situation, or were we poorly organised? Being a leader doesn’t mean we don’t make mistakes. Sharing those mistakes can encourage others to also bring their own honesty to the table, and help to destigmatize the idea of failure. Moving On “Failure is not the opposite of success, it is the stepping stone to success.” Arianna Huffington There will likely be another chance to succeed; to learn from those mistakes, and make it better next time. But as Emma Stewart, co-founder of Think Bold, writes, “While (starting a new venture straight away) seems like the best idea at the time, and it’s a great distraction from all the hurt, it’s often very rushed, and you miss out on that all-important time of reflection and learning from the last experience in business. This is the biggest mistake you can make. You need to allow time to reflect on things.” It’s also important to give the team time and space to process the events and their emotions; rushing into the next venture with the bitter taste of failure still at the back of our mouth is not the best idea. 
The best way to know when it’s the right time is to check in with ourselves, first and foremost. Do we still feel burnt out? From there, the best way to get an idea of where the team is sitting is to simply ask them. Broach the idea of a new venture, and pay careful attention to the response - verbal or otherwise. It will be the right time, again, soon. Just don’t rush it.
https://www.coworkme.com.au/2018/08/22/some-advice-for-the-next-libspill-hopeful-how-to-bounce-back-from-a-major-failure-at-work/
Illinois Minimum Wage Law increases minimum wage and adds penalties for record keeping missteps Illinois has been moving toward a minimum wage of $15 per hour ($9 per hour for tipped workers) by 2025. Under the Illinois Minimum Wage Law (IMWL), the minimum wage increased statewide on Jan. 1, 2021, to $11 per hour and $6.60 for tipped workers. The remaining scheduled minimum wage increases for adult employees in Illinois are as follows: - From Jan. 1 through Dec. 31, 2021, the minimum wage will be $11. - From Jan. 1 through Dec. 31, 2022, the minimum wage will be $12. - From Jan. 1 through Dec. 31, 2023, the minimum wage will be $13. - From Jan. 1 through Dec. 31, 2024, the minimum wage will be $14. - After Jan. 1, 2025, the minimum wage will be $15. It is important for employers to abide by these changes. “It is against public policy for an employer to pay his employees an amount less than that fixed by this Act [minimum wage]. Payment of any amount less than herein fixed is an unreasonable and oppressive wage, and less than sufficient to meet the minimum cost of living necessary for health.” See 820 Ill. Comp. Stat. Ann. § 105/2. Failing to pay the appropriate wage can result in violations of the IMWL and the Fair Labor Standards Act. Besides satisfying the minimum wage requirement, Illinois employers must comply with proper payroll record-keeping obligations. Illinois record-keeping requirements vary by industry and the age of the employee. In general, employee information (name, address, Social Security number, etc.) along with records of pay rate, payment date, hours worked and other standard payroll information must be maintained for at least three years. Although Illinois has long required employers to maintain payroll records, including daily and weekly hours worked for all employees, it only recently added steep financial penalties.
Starting in 2020, employers that failed to maintain payroll records could be required to pay a penalty of $100 for each infraction. Although Illinois permits employers to pay some employees on a salaried basis, there is no exception to the record-keeping requirement for salaried employees. Even though exempt or salaried employees are paid the same amount each week no matter the hours worked, by law, Illinois employers still must track their hours worked. Significantly, the IMWL increased the penalties for employer overtime violations to allow employees to recover triple the amount of the underpayment plus 5% of the underpayment for each month the underpayment remains unpaid. Since many IMWL cases are prosecuted on a classwide basis on behalf of large employee groups, a violation, even if inadvertent and unintentional, can have significant financial consequences. The Illinois Department of Labor (IDOL) is authorized to conduct random audits to ensure compliance with the law and enforce the new and enhanced penalties. The IDOL may recover $100 per impacted employee and a $1,500 penalty for failure to pay the proper minimum wage on top of the previously authorized penalty of 20% of the total underpayment. Please contact one of Chuhak & Tecson’s Employment attorneys if you have questions regarding this Illinois minimum wage change. Client alert authored by Loretto M. Kennedy (312 855 5444), Principal, and Jasmine D. Morton (312 855 4334), Associate.
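As a rough illustration of the overtime-penalty arithmetic described above (treble the underpayment, plus 5% of the underpayment for each month it remains unpaid), the sketch below shows the calculation for a single hypothetical employee. It is a simplification for illustration only, not legal advice; how months are counted and what the statute reaches are questions for counsel.

```python
def employee_recovery(underpayment, months_unpaid):
    """Simplified sketch of an employee's claim under the amended IMWL:
    treble damages plus 5% of the underpayment per month unpaid.
    (Illustrative assumption: the 5% accrues on the full underpayment.)"""
    return 3 * underpayment + 0.05 * underpayment * months_unpaid

# Example: a $1,000 underpayment left unpaid for 12 months
# 3 * 1000 + 0.05 * 1000 * 12 = 3,600
claim = employee_recovery(1000, 12)
```

Multiplied across a class of employees, even a small per-person underpayment can grow quickly, which is the point the alert makes about classwide prosecution.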
https://www.irglobal.com/article/illinois-minimum-wage-law-increases-minimum-wage-and-adds-penalties-for-record-keeping-missteps/
What is the difference between climate and weather? Climate: broad -describes conditions 1. over large regions 2. over seasons, years or longer Weather: narrow -describes conditions 1. locally 2. over hours or days What is the difference, and relationship between, global climate change and global warming? Global Climate Change: -Describes changes in Earth's climate Global Warming -Refers to the earth's warming -Often used synonymously with Global Climate Change. -Is only one aspect of Global Climate Change The term "Global Climate Change" can refer to two (related) things. What are they? -General: changes in Earth's climate, which have been happening forever -Current: speeding up of climate change due to anthropogenic causes What three main factors affect the Earth's climate? -sun -atmosphere -oceans How does the greenhouse effect heat cars? -Visible light is absorbed and heats objects in the car -Heat is emitted as infrared radiation, but can't go through windows. -Heat is trapped!! How does the greenhouse effect heat the earth? -Earth is like the car -Greenhouse gases in the atmosphere are like the glass of the car. What types of radiation don't pass through glass and greenhouse gases? Infrared radiation What is similar between glass and greenhouse gases (concerning the greenhouse effect)? Infrared radiation can't go through glass or greenhouse gases Is the greenhouse effect warming the earth a new phenomenon? No Why does CO2 go up and down every year? It goes up and down because in the summer plants take in CO2 and in the winter they don't take in as much With the greenhouse effect, there is a conversion of something into something else. What? Visible light is converted into infrared radiation (heat) Is the greenhouse effect natural? Yes If global warming leads to warmer average temperatures, why doesn't a year with a warmer average temperature prove global warming is occurring? Why doesn't an especially cold year prove global warming is not occurring?
variation from other factors will still lead to warmer and colder years What is the "Global Warming Potential" of a molecule? This is expressed in relationship to what? -Measure of how much one molecule contributes to warming -Expressed relative to CO2 Which gas is considered to contribute the most to greenhouse gases? What are two other gases that also contribute a large percentage? -carbon dioxide -methane and nitrous oxide CO2 is a greenhouse gas. It is presently at levels higher than any seen in the past ____ years? 650,000 What "sector" (i.e. human activity) contributes the most to increasing greenhouse gases? The second most? 1. land use change and forestry 2. waste How does deforestation contribute to the concentration of greenhouse gases? Forests remove carbon from the atmosphere: fewer forests, less CO2 removal. What are the two opposing effects an increase in water vapor would have on global warming? Which of these two are examples of positive and negative feedback? -Global warming is predicted to increase water vapor concentration (positive feedback) -But more clouds also lead to more reflection of the sun's rays (negative feedback) How do oceans moderate CO2 in the atmosphere? If they moderate CO2, why does it matter if people are adding more to the atmosphere? -The ocean absorbs CO2 from the atmosphere -Oceans contain 50 times more CO2 than the atmosphere -The rate of absorption is slower than the rate at which we are adding CO2 -The absorption rate is decreasing: warm water holds and absorbs less CO2 Warmer water leads to lower CO2 absorption. How does this lead to positive feedback in relationship to global warming? Water keeps getting warmer and absorbs less CO2, and as this continues, global warming continues to increase How do emissions compare between the United States and China in regards to total emissions, per capita emissions and intensity?
-total emissions: China>US -per capita: China<US -intensity: China>US Measurements used to indirectly describe global climate change: proxy measures What are 3 examples of proxy measures used to study GCC? -ice cores -tree rings -sediment cores International panel of scientists and government officials regarding climate change established in 1988: Intergovernmental Panel on Climate Change (IPCC) What does the IPCC do? -Represents the consensus of scientific research -Documents observed trends and predictions What are the predicted effects of global climate change on precipitation? -Will vary by region around the world. -Expect some areas to receive more rain, some less. -Droughts expected to be more severe. -Flooding expected to be more severe. What did the example of a farmer in Zambia exemplify? -Drought destroyed the corn crop of this farmer in Zambia. -Analyses suggest that drought may intensify across southern Africa due to warming of the Indian Ocean. What do old and recent pictures of glaciers indicate? Glaciers are disappearing How does a glacier melting at the top of a mountain affect the community at the bottom of the mountain in the long term? How could global climate change affect people that use water from glaciers? About how many people? -They will lose natural water storage -Global climate change causes glaciers to melt -Up to billions of people Why might a small body of water on a glacier get larger and larger? The water in these lakes is dark, so it absorbs heat and keeps melting the ice, and the lake keeps getting bigger What are the two main causes of rising sea levels? -Expansion of water with warming -Glacier ice melting People living in the Maldives are worried about what? Flooding How can GCC negatively affect forests? -Stresses trees -Bark beetles eat more and reproduce faster -Warmer winters kill fewer beetles
https://quizlet.com/134796100/isp-exam-3-global-climate-change-1-flash-cards/
The invention provides a ceramic matrix composite interface parameter identifying method. The method comprises the steps that a unidirectional ceramic matrix composite fatigue test is carried out to acquire a material loading and unloading stress-strain curve; stress and strain corresponding to the highest point and the lowest point of the loading and unloading stress-strain curve under different cycle numbers are extracted, and the slope thereof is calculated, wherein the calculated values are the experimental values of hysteresis loop secant modulus under different cycle numbers; a crack observation technology is used to measure the average crack spacing of the material under the maximum fatigue loading stress; based on a shear lag model, a symbolic-graphic combination method is used to determine the theoretical expression of the hysteresis loop secant modulus under different interface debonding and slip states; and the experimental values of the hysteresis loop secant modulus under different cycle numbers are brought into the theoretical expression of the hysteresis loop secant modulus to identify the numerical values of the interface friction force under different cycle numbers. The method provided by the invention can identify the numerical values of the interface friction force under different cycle numbers based on the fatigue loading and unloading stress-strain curve, is simple and easy to implement, and consumes less time.
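The first step of the method, computing the experimental hysteresis-loop secant modulus from the extremes of the loading/unloading curve, amounts to a simple slope calculation. The function below is an illustrative sketch; the name and example values are assumptions, not taken from the patent.

```python
def hysteresis_secant_modulus(stress_max, strain_max, stress_min, strain_min):
    """Secant modulus of a fatigue hysteresis loop: the slope of the line
    joining the highest and lowest points of the loading/unloading
    stress-strain curve (stress in Pa, strain dimensionless)."""
    return (stress_max - stress_min) / (strain_max - strain_min)

# Illustrative values: peak at (0.004, 200 MPa), valley at (0.001, 20 MPa)
# gives a secant modulus of 180e6 / 0.003 = 60 GPa.
modulus = hysteresis_secant_modulus(200e6, 0.004, 20e6, 0.001)
```

Repeating this for the loop recorded at each cycle number yields the experimental values that are matched against the shear-lag theoretical expression.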
Market Update: Like the weather here in Indiana, the market seems to be ever changing and quite volatile. One day, we experience euphoria, while the next, there's rain drizzle followed by heavy, wet snow conditions. The market data seems to be questioning the strength of the recent rally based upon suggested concerns over the amounts of debt companies are holding on their balance sheets.1 These concerns, albeit valid, should be causing market conditions to decline, but instead, this market has rallied recently, which is a bit of a head scratcher. According to Jeffrey Gundlach, the CEO of DoubleLine Capital, the US economy is being held afloat on a sea of corporate debt, and concerns over its ability to float indefinitely are weighing heavily.2 This concern certainly adds to the focus on Fed rate increases in 2019 and beyond. Recently, I have been searching for high-quality companies with low to zero debt on their balance sheets and was amazed that so far I have found only 28 of the S&P 500 companies with zero debt! This just goes to show you, in the past decade, since low borrowing costs have been the norm, just how many companies have not been able to resist the temptation of borrowing money. We know what happens with consumers when they borrow money and are found wanting when a job loss or critical illness happens. For corporate America, the same could happen with the next big recession and slow-down in the economy and consumer demand! We will have to wait and see... Till we speak again, baby it's cold outside, so bundle up and keep your shovel handy! Jon Sources:
https://summitfinancialgroupofindiana.com/newsletter-and-blog/the-northern-star-newsletter-11419-markets-relax-and-rally
Previous installments of this blog post series discussed the need to verify SPICE model accuracy and how to measure common-mode rejection ratio (CMRR), offset voltage versus common-mode voltage (Vos vs. Vcm), slew rate (SR), open-loop output impedance (Zo), input offset voltage (Vos) and open-loop gain (Aol). In this sixth and final installment, I’ll cover operational amplifier (op amp) noise, including voltage noise and current noise. Noise is simply an unwanted signal, usually random in nature, that when combined with your desired signal results in an error. All op amps, as well as certain other circuit elements like resistors and diodes, generate some amount of intrinsic – or internal – noise. In analog circuits, it’s critical to confirm that the noise level is low enough to obtain a clear measurement of your desired output signal. Figure 1 shows an example of input voltage, ideal output voltage and output voltage with noise for a circuit with gain of 3V/V. Figure 1: Noise example With an accurate model, predicting the noise performance of an op amp circuit becomes quite straightforward. This is very appealing to most engineers, as calculating noise by hand can be cumbersome and difficult. Input voltage noise density The voltage noise of an op amp is usually given as input voltage noise density (en) in nanovolts per square root hertz (nV/√Hz), which quantifies how much noise voltage the op amp generates at its input pins for any given frequency. To measure en, configure the op amp as a unity gain buffer with its noninverting input connected to an AC source Vin. Figure 2 shows the recommended test circuit. Figure 2: Input voltage noise density test circuit Let’s use this circuit to measure the en of the OPA1692, a low-noise amplifier from TI. Simply run a noise analysis over the desired frequency range and measure the noise level at node Vnoise with respect to Vin. In this case, the simulated en matches perfectly with the data-sheet spec, shown in Figure 3. 
Figure 3: OPA1692 en result Input current noise density Op amps also generate noise currents at their input pins, called input current noise density (in) and typically given in femtoamperes per square root hertz (fA/√Hz). You can measure this in a similar way to en, but you will need to perform a simple trick. Some simulators have trouble measuring noise in terms of current, so a current-controlled voltage source converts the current flowing into the noninverting input pin into a voltage. Figure 4 shows the recommended test circuit. Figure 4: Input current noise density test circuit Let’s use this circuit to measure the in of the OPA1692. Run a noise analysis over the desired frequency range and measure the noise level at node Inoise with respect to Vin. Keep in mind that the resulting plot will have converted amperes to volts due to the current-controlled voltage source (CCVS1). Figure 5 shows the results after converting back to amperes. Figure 5: OPA1692 in result Again, the noise characteristic matches the data-sheet curve extremely well. Total voltage noise While knowing the input-referred noise of an op amp is useful, it doesn’t paint a complete picture of your circuit’s overall noise performance. A combination of factors like closed-loop gain, bandwidth and the noise contributions of other circuit elements will affect the total amount of noise that appears at the circuit output. Thankfully, most simulators provide a way to measure this type of noise, called total noise or integrated noise, since it’s the integration of all noise sources over the circuit’s effective bandwidth. Figure 6 shows a more complex op amp circuit, with the OPA1692 configured for a noninverting gain of 10V/V and an additional resistor-capacitor (RC) filter at the output to limit the effective bandwidth to roughly 150kHz.
Figure 6: OPA1692 total noise example circuit Run a total noise analysis over a wide frequency range (shown in Figure 7) and measure the noise level at node Vnoise in order to find the total root mean square (RMS) noise, which will appear at the circuit output. You are looking for the level at which the total noise curve flattens out to a constant value at high frequency. Figure 7: OPA1692 total noise result The test result shows that the total noise of the circuit in Figure 6 is equal to 21.15µVrms, or 126.9µVpp. This is what you would expect to measure if you probed the output of this circuit in the real world. However, keep in mind that the random nature of noise means that the actual noise level may be somewhat higher or lower than what you calculated or simulated. For a deeper discussion, watch the TI Precision Labs – Op Amps video series on noise. Thanks for reading this sixth and final installment of the “Trust, but verify” blog series! I hope you’ve found the information and techniques in this series useful in your pursuit of more accurate SPICE simulations. If you have any questions about simulation verification, log in and leave a comment, or visit the TI E2E™ Community Simulation Models forum. Additional resources - Download the OPA1692 datasheet. - Learn more from industry expert Art Kay who literally wrote the book on noise.
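As a sanity check on simulated totals like the one above, output noise can be hand-estimated from the input voltage noise density, the closed-loop gain, and the effective noise bandwidth of a single-pole RC filter (ENB ≈ 1.57 × f_c). In the sketch below, the en value is an assumed ballpark for a low-noise op amp (not an official OPA1692 spec), and resistor thermal noise and current noise are ignored, so the estimate lands below the 21.15µVrms simulated for the full circuit.

```python
import math

def total_rms_noise(en, gain, f_cutoff):
    """Back-of-envelope total output noise: en (V/rtHz) times closed-loop
    gain, integrated over the effective noise bandwidth of a single-pole
    RC filter, ENB = 1.57 * f_cutoff (Hz). Other noise sources ignored."""
    enb = 1.57 * f_cutoff
    return en * gain * math.sqrt(enb)

# Assumed en of 2.8 nV/rtHz, gain of 10 V/V, 150 kHz cutoff
vn_rms = total_rms_noise(2.8e-9, 10, 150e3)   # roughly 13.6 uVrms
vn_pp = 6 * vn_rms                            # peak-to-peak via a crest factor of 6
```

The gap between this hand estimate and the simulated value is itself informative: it suggests how much of the total noise comes from the feedback resistors and current noise rather than en alone.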
https://e2e.ti.com/blogs_/b/analogwire/posts/trust-but-verify-spice-model-accuracy-part-6-voltage-noise-and-current-noise
The invention provides a triode amplification factor test circuit based on negative feedback. The circuit comprises an operational amplifier U1, the positive pole input end of which is connected with the C pole of a triode Q10 through a resistor R3 for negative feedback, wherein the negative pole input end of the operational amplifier U1 is respectively connected with a resistor R1 and a resistor R2, the resistor R1 is connected with a power supply, the resistor R2 is grounded, the output end of the operational amplifier U1 is connected with the B pole of the triode Q10 through a resistor R5, the power supply negative electrode of the operational amplifier U1 is grounded and the power supply positive electrode is connected with the power supply; a current sampling circuit, wherein the input end of the current sampling circuit is connected to the two ends of the resistor R3 through a follower circuit respectively; a differential operational amplifier circuit, wherein the input end of the differential operational amplifier circuit is connected with the output end of the current sampling circuit; a differential ADC chip, wherein the input end of the differential ADC chip is connected with the output end of the differential operational amplifier; the C pole of the triode Q10 is connected with the power supply through a resistor R4, and the E pole of the triode Q10 is grounded. The current gain beta of the triode can be accurately measured as it declines with the total space radiation dose received by the triode.
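The quantity the circuit ultimately identifies, the current gain beta = Ic / Ib, can be computed once the collector current is recovered from the voltage sampled across R3. The sketch below is a generic illustration under assumed component values, not the patented measurement logic itself.

```python
# Illustrative only: recover the collector current from the sampled
# voltage across the feedback resistor R3, then form beta = Ic / Ib.
# All values below are assumptions for the example.

def current_gain(v_r3, r3, i_base):
    """Estimate the triode's current gain beta, with the collector
    current Ic taken as the voltage across R3 divided by R3 (Ohm's law)."""
    i_collector = v_r3 / r3
    return i_collector / i_base

# Assumed: 1.0 V sampled across a 100-ohm R3 (Ic = 10 mA), Ib = 50 uA
beta = current_gain(v_r3=1.0, r3=100.0, i_base=50e-6)  # beta of about 200
```

Logging beta against accumulated radiation dose would then give the gain-degradation curve the abstract refers to.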
As announced in the message from Vice President for Human Resources Bryan Garey published Sunday, March 15, faculty and staff are encouraged to explore alternative work options, including telework, flexible schedules, and leave. Managers and unit leaders are encouraged to arrange for employees to work remotely if this can be achieved while still allowing the university to accomplish its important missions. While we acknowledge that not all jobs can be performed from another location, we encourage all managers and unit leaders to be as flexible as possible when working with employees. Alternative work options should be discussed at the unit level.

To clarify, formal telework agreements do not need to be completed for telework arrangements that result from the impact of COVID-19. Such arrangements are viewed as temporary accommodations for unusual circumstances. Departments should document the temporary agreement for department files by memo or email that outlines work expectations and duration and notes that this is a temporary modification to meet an employee's personal needs as a result of COVID-19. If an employee requests to regularly work from home or an alternative work location, an approved telework agreement is required.

If you have additional questions about alternative work options or telework agreements, please contact Hokie Wellness at [email protected]. If an employee has concerns about discrimination or harassment in violation of Policy 1025, please contact the Office for Equity and Accessibility at 540-231-2010.

NOTE: The guidance above will be updated should university operations change or should public guidance change at the locality, state, or federal level.
https://vtnews.vt.edu/notices/hr-telework-agreement.html
Learning how to play guitar is a long journey, but you have to start somewhere. As you learn to play guitar, you’ll likely run into some challenges, but don’t worry, the beginning is always the hardest. If you stick to it, you’ll learn one of the greatest musical instruments in the world and discover what a fun, rewarding, and fulfilling experience playing the guitar is. This guide will teach you the basics of how to play guitar, so keep reading. Table of Contents The Anatomy of a Guitar Before you start learning how to play guitar, you need to know the anatomy of the instrument. You’ll find it hard to understand the instrument if you can’t tell the headstock from the bridge. As a beginner, you’ll most likely work with an electric or acoustic guitar, and they both have the same parts (pretty much). The parts of the guitar include: - Body: This is the big curvy part made of wood. The body of an acoustic guitar is bigger and more hollow than that of an electric guitar. This design enables the acoustic guitar to amplify sound. - Neck: This part, also made of wood, connects the guitar’s body and the headstock. The neck is long and thin, and it is where you’ll find the frets and fretboard/fingerboard. - Headstock: The headstock or head (in short) is the part connected to the top of the neck. Here, you’ll find the machine heads, which are used to tune the guitar. - Machine heads: These are located on the headstock and are also known as the tuner pegs. Guitars usually have six machine heads, and one end of a string is attached to each of them. Depending on the direction you turn their knobs, you either tighten or loosen the string. Tightening a tuner peg raises the pitch of the attached string, while loosening it will lower the pitch. - Fretboard: The fretboard is the wooden material on the front of the neck between the nut and bridge. It contains the frets, which create notes when you press a string against them. 
- Frets: Frets are the spaces between the thin strips of metal distributed across the fretboard (the metal strips themselves are also called frets). Pressing a string at each fret produces a different musical note. - Nut: You’ll find the nut at the end of the neck. It supports the strings and leads them into the headstock and machine heads. The grooves on the nut evenly space the strings and hold them in place. - Strings: These are the long strands of wire, usually made from steel, that run from the bridge to the tuner pegs. A standard guitar usually has six of them. The guitar strings are arranged in order of thickness, from the thinnest to the thickest. - Bridge: The bridge is located on the soundboard. You’ll find the other end of the strings attached to the saddle (bridge nut) on the bridge. - Soundhole: If your guitar is acoustic, it will have a soundhole. The hole amplifies the sound of the guitar. - Soundboard: This is the top part of the guitar’s body. This thin wooden plate is responsible for projecting the guitar’s sound. The soundboard resonates with the vibrations received from the strings. Learning Guitar Terms While discussing the anatomy of a guitar, we have already covered some of the essential terms. Here are a few more that you need to understand to make guitar learning a little easier: - Chord: A combination of notes played together, such as A major, E major, and E minor. - Chord symbol: Characters used to identify chords, e.g., A for A major and Cm for C minor. - Chord diagram: A graphic that shows you where to place your fingers on the fretboard to create a particular chord. - Open string: A string plucked without placing a finger on a fret. - Barre chord: You play most chords by placing the tips of your fingers on specific frets. A barre chord is when you put one or more fingers flat, or barre, across 5-6 strings on the fretboard. - Tuning: The act of correcting the guitar’s pitch (when it is off) by tightening and loosening the tuner pegs. - Picking: Plucking an individual string with your fingers or a guitar pick. 
- Strumming: A sweeping motion with your fingers or guitar pick, allowing you to play several strings at once. - Action: The distance between the fretboard and the bottom of the string. The higher the action, the more pressure it takes to make the string come into full contact with a fret. How Guitars Work To learn how to play guitar, you need to understand how the instrument works. When you strum the guitar, its strings begin to vibrate. In an acoustic guitar, the vibrations travel to the bridge, which channels them into the soundhole, making the soundboard vibrate. The vibrations then travel out the soundhole again, producing the guitar sound we know and love. Electric guitars have pickups, small electromagnetic devices attached to the guitar’s body underneath the strings. Once these devices “pick up” the vibrations, they convert them into electrical energy. This energy is then transferred to amplifiers to produce the intense sound of an electric guitar. How to Hold a Guitar You can’t learn how to play guitar if you don’t know how to hold one. Holding a guitar might seem easy, but knowing the proper technique will eliminate bad habits that can lead to frustrations, as well as back and neck pain. To hold a guitar correctly (assuming you’re right-handed), you need to place it on your right leg, making sure you’re stabilizing it with your dominant hand. For an acoustic guitar, the first half of your arm, where the biceps are, should rest over the hip of the guitar in such a way that the forearm hangs loose over the soundboard. As you learn how to play guitar, keep in mind that the other hand is for fretting. If it is helping you stabilize the guitar, you won’t be able to freely move it up and down the neck. This movement is vital because that is how you play different chords. Also, make sure the guitar is straight on your leg; don’t lean it forward or backward. Making the guitar lean forward or backward will add extra tension to your arm, which you don’t want. Posture is also crucial when holding a guitar. 
Always make sure your back is straight. You might feel the urge to bend over so you can see what is going on with your hands. Resist that urge — your back and neck will thank you. Knowing the Strings To learn how to play guitar, you also need to be able to identify the strings. The bottom string (the thinnest) is the first string, and the top (the thickest) is the sixth. From top to bottom, the guitar’s strings are named E, A, D, G, B, and E. Why are there two Es? Well, the top E is the low version, while the bottom E is the high version. The names of the guitar strings can be hard to remember for beginners. Luckily, there are some helpful mnemonics that you can use. The easiest and most popular one is Eddie Ate Dynamite, Good Bye Eddie. You can even make up your own if that helps. Tips for Buying Your First Guitar Now that you know the basics, it’s time you learned how to pick the right guitar. While the type of guitar you get largely depends on preference, here are some tips that will help you avoid common guitar-buying pitfalls: - Make sure you buy the guitar at your local music shop or from a reputable online retailer; avoid flea markets, yard sales, and pawn shops. - Bring someone along (if you can) who knows a thing or two about guitars and how to play them so they can give you advice. - If buying at a shop, make sure the guitar is in tune; the salesperson should have a tuner on hand and be delighted to test it out for you. - Make sure that the guitar remains in tune by strumming a few times and measuring the strings’ pitch. - You don’t always need to purchase an expensive guitar for $300-$500, as you can find a quality guitar for $100-$200. - Look for package deals, e.g., a guitar that comes with a tuner and case, or buy the accessories you need separately. - Don’t buy a guitar because of the brand. Some big-name brands produce the worst beginner guitars. - For a beginner, the recommended action for acoustic guitars is 2-2.7mm and 2-2.3mm for electric. 
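As a side note to the string names above (E, A, D, G, B, E): the pitch of each open string in standard tuning follows from 12-tone equal temperament, where each semitone multiplies the frequency by the twelfth root of two. A small sketch (the helper function and MIDI note numbering are illustrative, not part of the guide):

```python
A4_FREQ = 440.0  # Hz, standard concert pitch
A4_MIDI = 69     # MIDI note number of A4

def note_freq(midi_note):
    """Frequency of a MIDI note in 12-tone equal temperament."""
    return A4_FREQ * 2 ** ((midi_note - A4_MIDI) / 12)

# Standard tuning, 6th (thickest) string to 1st (thinnest)
standard_tuning = {"E2": 40, "A2": 45, "D3": 50, "G3": 55, "B3": 59, "E4": 64}
for name, midi in standard_tuning.items():
    print(f"{name}: {note_freq(midi):.2f} Hz")
```

This is why a tuner reads about 82 Hz for the low E string and about 330 Hz for the high E: same note name, two octaves (a factor of four in frequency) apart.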
About BYJU’S FutureSchool Music Program BYJU’S FutureSchool music curriculum was developed to empower the next generation of guitar players. It introduces children to the wondrous world of music and instills them with a passion that will last a lifetime. Through research-based teaching methods that range from live sessions to 1:1 challenges to interactive projects, kids learn to unleash their musical creativity in a fun and nurturing environment.
https://www.byjusfutureschool.com/blog/how-to-play-guitar-a-complete-guide-on-learning-to-play-guitar-for-beginners/
Scientists uncover new genetic cause of lupus on World Lupus Day

A team of scientists and clinicians has identified a novel mutation causing an unusual form of the autoimmune disease lupus. The genetic analysis of a Belgian family sheds new light on the disease mechanisms underlying lupus, which could possibly yield new therapeutic approaches for patients. The findings are published in the Journal of Allergy and Clinical Immunology in the week leading up to World Lupus Day.

Lupus is an autoimmune disorder, meaning that the body’s immune system mistakenly attacks its own tissues. Lupus can affect multiple organs, but its cause is often not clear. Usually a combination of genetic and environmental factors is at play.

Researchers in Leuven have now discovered a novel genetic mutation in a patient who presented at the age of 12 with both lupus and problems in the ability of the immune system to fight common infections. This unusual combination of symptoms was quite puzzling. By analyzing the patient’s DNA and that of the parents, the scientists could trace the problem down to a specific mutation in the so-called Ikaros gene. This gene encodes the Ikaros protein, which in turn binds DNA to affect the expression of other proteins.

Erika Van Nieuwenhove, clinician and scientist at VIB-KU Leuven, explains how the mutation caused the patient’s immune system to be hyperactive: “Because of the mutation, Ikaros can no longer bind its target DNA properly. We also observed that certain immune cells of the patient were hyperactive, even in the absence of stimulation. The link between both observations turned out to be CD22, a protein that normally dampens the immune response. In normal conditions, Ikaros stimulates the expression of this inhibitor, but this was not the case in this patient.”

About 5 million people worldwide have lupus, but a causative mutation in Ikaros is very rare. 
“Small changes in Ikaros are associated with susceptibility to adult-onset lupus, but because the effects are weak it is hard to work out what Ikaros is doing to the immune system,” explains prof. Adrian Liston (VIB-KU Leuven), who heads the lab for translational immunology and is lead author of the study. “In this particular family, however, a mutation created a large change in Ikaros, causing early-onset lupus. The mutation was strong enough to allow us to work out how changes in Ikaros cause lupus and immune deficiency.”

Although the patient in this study has a very rare form of lupus, the discovery nevertheless helps to map the overall disease mechanisms, underscores prof. Carine Wouters, pediatric rheumatologist at University Hospitals Leuven and co-lead of the study: “The mechanism we uncovered in this patient could also be meaningful in a different context with other patients. Now that we understand what goes wrong in this particular case, it could help us think of better targeted treatments for others as well.”

Original research: Van Nieuwenhove et al. 2018, Journal of Allergy and Clinical Immunology. "A kindred with mutant IKAROS and autoimmunity"

If you would like to support our clinical research, and allow us to take on more cases like these, you can make a tax-deductible donation to the Ped IMID fund by transferring to IBAN number BE45 7340 1941 7789, BIC code KREDBEBB, with the label "voor EBD-FOPIIA-O2010".
http://liston.vib.be/blog/tag/immunology
Condominium and Cooperative Law in Virginia Cooperative and condominium communities are examples of a class of housing developments identified as "common interest communities." This is a type of community in which the individual residents rent or own residential units in a building, or collection of buildings, but are collectively responsible for maintaining the common areas of their communities, such as lawns, gardens, swimming pools, and the like. This responsibility is typically met by charging the residents a periodic maintenance fee that pays for the upkeep of the common areas. If you simply look at a condominium or cooperative community, you likely won't be able to tell if it's one or the other. There are no physical features distinct to either one which can be used to distinguish them. Rather, the difference lies in the legal arrangement that regulates the relationships between the residents and managers. In condominium communities, the residents own the units they live in, and collectively own the land and buildings in which they are located. In a cooperative community, the units are rented, and are owned by a single entity. Laws and Regulations Concerning Common Interest Communities in Radford, Virginia Various Radford, Virginia laws affect common interest communities. However, almost all of these laws govern real estate more generally, and there are very few laws written specifically for common interest communities. Such generally applicable laws include zoning regulations, contract law, and landlord-tenant law. One's daily life in a cooperative or condominium community is more likely to be affected by the rules set by the owners or managers of the property than by the regulations of your state or city. The manager or owner of the land on which your residence is located will likely have a lot of rules concerning what can and cannot be done in and near the houses. 
These rules typically mandate cleanliness, require that noise be kept to a minimum, and regulate the presence of pets. The enforceability of some of these rules may depend on Radford, Virginia's laws governing relations between landlords and tenants. Can a Radford, Virginia Attorney Help? If you have a dispute with a neighbor, your landlord, or your homeowners' association, a reliable Radford, Virginia real estate attorney can be instrumental in obtaining a desired outcome.
https://realestatelawyers.legalmatch.com/VA/Radford/condominiums-cooperatives.html
Pre-Calculus

In Pre-Calculus, students continue to build on the K-8, Algebra I, Algebra II, and Geometry foundations as they expand their understanding of mathematics. Students will use functions, as well as symbolic reasoning, to represent and connect ideas in geometry, probability, statistics, trigonometry, and calculus, and to model physical situations. Finally, students will use a variety of representations (concrete, pictorial, numerical, symbolic, graphical, and verbal), tools, and technology (including, but not limited to, calculators with graphing capabilities, data collection devices, and computers) to model functions and equations and solve real-life problems.

Course Objective: Pre-Calculus students will acquire and demonstrate knowledge of concepts, definitions, properties, and applications of the topics listed below. The main goal of Pre-Calculus is to help students obtain the critical thinking and decision-making skills that will allow them to connect concepts, develop computational skills, and learn strategies needed to solve mathematical problems. 
Course Assessment: 60% Exams, 40% Other Assignments

Chapter 1 Graphs
- The Distance and Midpoint Formulas
- Graphing: Intercepts; Symmetry; Graphing Key Equations
- Solving Equations Using a Graphing Utility
- Lines
- Circles

Chapter 2 Functions and Their Graphs
- 2.1 Functions
- 2.2 The Graph of a Function
- 2.3 Properties of Functions
- 2.4 Library of Functions; Piecewise-Defined Functions
- 2.5 Graphing Techniques: Transformations
- 2.6 Mathematical Models: Building Functions

Chapter 3 Linear and Quadratic Functions
- 3.1 Linear Functions and Their Properties
- 3.2 Linear Models: Building Linear Functions from Data
- 3.3 Quadratic Functions and Their Properties
- 3.4 Building Quadratic Models from Verbal Descriptions and from Data
- 3.5 Inequalities Involving Quadratic Functions

Chapter 4 Polynomial and Rational Functions
- 4.1 Polynomial Functions and Models
- 4.2 The Real Zeros of a Polynomial Function
- 4.3 Complex Zeros; Fundamental Theorem of Algebra
- 4.4 Properties of Rational Functions
- 4.5 The Graph of a Rational Function
- 4.6 Polynomial and Rational Inequalities

Midterm Exams

Chapter 5 Exponential and Logarithmic Functions
- 5.1 Composite Functions
- 5.2 One-to-One Functions; Inverse Functions
- 5.3 Exponential Functions
- 5.4 Logarithmic Functions
- 5.5 Properties of Logarithms
- 5.6 Logarithmic and Exponential Equations
- 5.7 Financial Models
- 5.8 Exponential Growth and Decay Models
- 5.9 Building Exponential, Logarithmic, and Logistic Models

Chapter 6 Trigonometric Functions
- 6.1 Angles and Their Measure
- 6.2 Trigonometric Functions: Unit Circle Approach
- 6.3 Properties of the Trigonometric Functions
- 6.4 Graphs of the Sine and Cosine Functions
- 6.5 Graphs of the Tangent, Cotangent, Cosecant, and Secant Functions
- 6.6 Phase Shift; Sinusoidal Curve Fitting

Chapter 7 Analytic Trigonometry
- 7.1 The Inverse Sine, Cosine, and Tangent Functions
- 7.2 The Inverse Trigonometric Functions (Continued)
- 7.3 Trigonometric Equations
- 7.4 Trigonometric Identities

Chapter 8 Applications of Trigonometric Functions
- 8.1 Right Triangle Trigonometry; Applications
- 8.2 The Law of Sines
- 8.3 The Law of Cosines
- 8.4 Area of a Triangle
- 8.5 Simple Harmonic Motion; Damped Motion; Combining Waves

***If time permits, the following lessons will be covered***

Chapter 9 Polar Coordinates; Vectors
- 9.1 Polar Coordinates
- 9.2 Polar Equations and Graphs
- 9.3 The Complex Plane; De Moivre’s Theorem

Chapter 12 Sequences; Induction; the Binomial Theorem
- 12.1 Sequences
- 12.2 Arithmetic Sequences
- 12.3 Geometric Sequences; Geometric Series

Chapter 14 A Preview of Calculus: The Limit, Derivative, and Integral
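As a small taste of the Chapter 1 material, the distance and midpoint formulas can be checked numerically. A minimal sketch (not part of the syllabus; the helper names are illustrative):

```python
import math

def distance(p, q):
    """Distance between two points in the plane:
    sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    """Midpoint of the segment joining p and q:
    ((x1 + x2)/2, (y1 + y2)/2)."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

p, q = (1, 2), (4, 6)
print(distance(p, q))   # 5.0 (a 3-4-5 right triangle)
print(midpoint(p, q))   # (2.5, 4.0)
```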
The invention discloses a cultural relic model retail system based on interactive display, belonging to the technical field of cultural relic model retail. The system comprises a display scene establishment unit, whose input is electrically connected to the output of a voice recognition unit; the display scene establishment unit is used to set up a VR display unit. According to the invention, a VR display scene is created in the display scene establishment unit, and through the interplay of picture and sound, the culture and background of each cultural relic model are conveyed, improving the retail display effect; in addition, display lighting and sound effects can be controlled in a linked manner through a first-person experience unit. By pairing 3D virtual display with individual relics, the system fully adapts to relic selling; the display route across multiple relics can be adjusted according to the visitors present, and the selling priority of particular relic models can be raised through comparison.
Parental sport achievement and the development of athlete expertise. This study sought to examine how parental sport involvement and attainment were related to the eventual level of competitive sport attained by their children. Athletes (n = 229) were divided into three skill-level groups (elite: n = 139; pre-elite: n = 33; non-elite: n = 57), based on the peak competition level achieved in their career, which were compared using chi-square tests of independence and analyses of variance according to parents' sport characteristics provided through the Developmental History of Athletes Questionnaire. Parental recreational and competitive sport participation was overrepresented among elite athletes, as were parents who reached an elite level of sport themselves. Results were found to differ according to parent sex, with athlete skill level significantly related to the sport participation and skill level of fathers, but not mothers. Results suggest parental sport experiences at different levels of competition influence the development of athletes, although these relationships are subject to many factors.
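The chi-square test of independence used above compares observed counts in a contingency table with the counts expected if the row and column variables were independent. A minimal standard-library sketch (the counts below are hypothetical, chosen only to match the reported group sizes; they are not the study's data):

```python
def chi_square_independence(table):
    """Pearson chi-square statistic for a contingency table.

    table: list of rows of observed counts.
    Returns (statistic, degrees_of_freedom), where the statistic is
    sum over cells of (observed - expected)^2 / expected, with
    expected = row_total * column_total / grand_total.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Hypothetical counts: parental sport participation (yes / no) by group,
# with row totals matching the study's group sizes (139, 33, 57)
observed = [[90, 49],   # elite
            [15, 18],   # pre-elite
            [20, 37]]   # non-elite
stat, dof = chi_square_independence(observed)
print(f"chi2 = {stat:.2f}, dof = {dof}")
```

The statistic is then compared against the chi-square distribution with the given degrees of freedom to obtain a p-value.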
Argued: Jan. 18 and 19, 1965. Frank I. Goodman, Beverly Hills, Cal., for petitioner. Matthew R. McCann, Worcester, Mass., for respondent.

The District Court upheld the validity of the warrant on a motion to suppress. The divided Court of Appeals held the warrant insufficient because it read the affidavit as not specifically stating in so many words that the information it contained was based upon the personal knowledge of Mazaka or other reliable investigators. The Court of Appeals reasoned that all of the information recited in the affidavit might conceivably have been obtained by investigators other than Mazaka, and it could not be certain that the information of these other investigators was not in turn based upon hearsay received from unreliable informants rather than their own personal observations. For this reason the court found that probable cause had not been established. 324 F.2d, at 868—870. We granted certiorari to consider the standards by which a reviewing court should approach the interpretation of affidavits supporting warrants which have been duly issued by examining magistrates. 377 U.S. 989, 84 S.Ct. 1910, 12 L.Ed.2d 1043. For the reasons stated below, we reverse the judgment of the Court of Appeals.

'An evaluation of the constitutionality of a search warrant should begin with the rule that 'the informed and deliberate determinations of magistrates empowered to issue warrants * * * are to be preferred over the hurried action of officers * * * who may happen to make arrests.' United States v. Lefkowitz, 285 U.S. 452, 464, 52 S.Ct. 420, 423, 76 L.Ed. 877. The reasons for this rule go to the foundations of the Fourth Amendment.' 378 U.S., at 110—111, 84 S.Ct., at 1512.

'The point of the Fourth Amendment, which often is not grasped by zealous officers, is not that it denies law enforcement the support of the usual inferences which reasonable men draw from evidence. 
Its protection consists in requiring that those inferences be drawn by a neutral and detached magistrate instead of being judged by the officer engaged in the often competitive enterprise of ferreting out crime. Any assumption that evidence sufficient to support a magistrate's disinterested determination to issue a search warrant will justify the officers in making a search without a warrant would reduce the Amendment to a nullity and leave the people's homes secure only in the discretion of police officers.' Johnson v. United States, supra, 333 U.S., at 13-14, 68 S.Ct., at 369. The fact that exceptions to the requirement that searches and seizures be undertaken only after obtaining a warrant are limited 2 underscores the preference accorded police action taken under a warrant as against searches and seizures without one. While a warrant may issue only upon a finding of 'probable cause,' this Court has long held that 'the term 'probable cause' * * * means less than evidence which would justify condemnation,' Locke v. United States, 7 Cranch 339, 348, 3 L.Ed. 364, and that a finding of 'probable cause' may rest upon evidence which is not legally competent in a criminal trial. Draper v. United States, 358 U.S. 307, 311, 79 S.Ct. 329, 332, 3 L.Ed.2d 327. As the Court stated in Brinegar v. United States, 338 U.S. 160, 173, 69 S.Ct. 1302, 1309, 93 L.Ed. 1879, 'There is a large difference between the two things to be proved (guilt and probable cause), as well as between the tribunals which determine them, and therefore a like difference in the quanta and modes of proof required to establish them.' Thus hearsay may be the basis for issuance of the warrant 'so long as there * * * (is) a substantial basis for crediting the hearsay.' Jones v. United States, supra, 362 U.S., at 272, 80 S.Ct., at 736. 
And, in Aguilar we recognized that 'an affidavit may be based on hearsay information and need not reflect the direct personal observations of the affiant,' so long as the magistrate is 'informed of some of the underlying circumstances' supporting the affiant's conclusions and his belief that any informant involved 'whose identity need not be disclosed * * * was 'credible' or his information 'reliable." Aguilar v. State of Texas, supra, 378 U.S., at 114, 84 S.Ct., at 1514. This is not to say that probable cause can be made out by affidavits which are purely conclusory, stating only the affiant's or an informer's belief that probable cause exists without detailing any of the 'underlying circumstances' upon which that belief is based. See Aguilar v. State of Texas, supra. Recital of some of the underlying circumstances in the affidavit is essential if the magistrate is to perform his detached function and not serve merely as a rubber stamp for the police. However, where these circumstances are detailed, where reason for crediting the source of the information is given, and when a magistrate has found probable cause, the courts should not invalidate the warrant by interpreting the affidavit in a hypertechnical, rather than a commonsense, manner. Although in a particular case it may not be easy to determine when an affidavit demonstrates the existence of probable cause, the resolution of doubtful or marginal cases in this area should be largely determined by the preference to be accorded to warrants. Jones v. United States, supra, 362 U.S., at 270, 80 S.Ct., at 735. The application of the principles stated above leads us to reverse the Court of Appeals. The affidavit in this case, if read in a commonsense way rather than technically, shows ample facts to establish probable cause and allow the Commissioner to issue the search warrant. The affidavit at issue here, unlike the affidavit held insufficient in Aguilar, is detailed and specific. 
It sets forth not merely 'some of the underlying circumstances' supporting the officer's belief, but a good many of them. This is apparent from the summary of the affidavit already recited and from its text which is reproduced in the Appendix. The Court of Appeals did not question the specificity of the affidavit. It rested its holding that the affidavit was insufficient on the ground that '(t)he affidavit failed to clearly indicate which of the facts alleged therein were hearsay or which were within the affiant's own knowledge,' and therefore '(t)he Commissioner could only conclude that the entire affidavit was based on hearsay.' 324 F.2d, at 868. While the Court of Appeals recognized that an affidavit based on hearsay will be sufficient, 'so long as a substantial basis for crediting the hearsay is presented,' Jones v. United States, supra, 362 U.S., at 269, 80 S.Ct., at 735, it felt that no such basis existed here because the hearsay consisted of reports by 'Investigators,' and the affidavit did not recite how the Investigators obtained their information. The Court of Appeals conceded that the affidavit stated that the Investigators themselves smelled the odor of fermenting mash, but argued that the rest of their information might itself have been based upon hearsay thus raising 'the distinct possibility of hearsay-upon-hearsay.' 324 F.2d, at 869. For this reason, it held that the affidavit did not establish probable cause. We disagree with the conclusion of the Court of Appeals. Its determination that the affidavit might have been based wholly upon hearsay cannot be supported in light of the fact that Mazaka, a Government Investigator, swore under oath that the relevant information was in part based 'upon observations made by me' and 'upon personal knowledge' as well as upon 'information which has been obtained from Investigators of the Alcohol and Tobacco Tax Division, Internal Revenue Service, who have been assigned to this investigation.' 
It also seems to us that the assumption of the Court of Appeals that all of the information in Mazaka's affidavit may in fact have come from unreliable anonymous informers passed on to Government Investigators, who in turn related this information to Mazaka is without foundation. Mazaka swore that, insofar as the affidavit was not based upon his own observations, it was 'based upon information received officially from other Investigators attached to the Alcohol and Tobacco Tax Division assigned to this investigation, and reports orally made to me describing the results of their observations and investigation.' (Emphasis added.) The Court of Appeals itself recognized that the affidavit stated that "Investigators' (employees of the Service) smelled the odor of fermenting mash in the vicinity of the suspected dwelling.' 324 F.2d, at 869. A qualified officer's detection of the smell of mash has often been held a very strong factor in determining that probable cause exists so as to allow issuance of a warrant. 3 Moreover, upon reading the affidavit as a whole, it becomes clear that the detailed observations recounted in the affidavit cannot fairly be regarded as having been made in any significant part by persons other than full-time Investigators of the Alcohol and Tobacco Tax Division of the Internal Revenue Service. Observations of fellow officers of the Government engaged in a common investigation are plainly a reliable basis for a warrant applied for by one of their number. 4 We conclude that the affidavit showed probable cause and that the Court of Appeals misapprehended its judicial function in reviewing this affidavit by giving it an unduly technical and restrictive reading. This Court is alert to invalidate unconstitutional searches and seizures whether with or without a warrant. See Aguilar v. State of Texas, supra; Stanford v. State of Texas, 379 U.S. 476, 85 S.Ct. 506; Preston v. United States, 376 U.S. 364, 84 S.Ct. 881, 11 L.Ed.2d 777; Beck v. 
State of Ohio, 379 U.S. 89, 85 S.Ct. 223, 13 L.Ed.2d 142. By doing so, it vindicates individual liberties and strengthens the administration of justice by promoting respect for law and order. This Court is equally concerned to uphold the actions of law enforcement officers consistently following the proper constitutional course. This is no less important to the administration of justice than the invalidation of convictions because of disregard of individual rights or official overreaching. In our view the officers in this case did what the Constitution requires. They obtained a warrant from a judicial officer 'upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the * * * things to be seized.' It is vital that having done so their actions should be sustained under a system of justice responsive both to the needs of individual liberty and to the rights of the community. That he has reason to believe that on the premises known as a one-family light green wooden frame dwelling house located at 148 1/2 Coburn Avenue, Worcester, occupied by Giacomo Ventresca and his family, together with all approaches and appurtenances thereto, in the District of Massachusetts, there is now being concealed certain property, namely an unknown quantity of material and certain apparatus, articles and devices, including a still and distilling apparatus setup with all attachments thereto, together with an unknown quantity of mash, an unknown quantity of distilled spirits, and other material used in the manufacture of non-tax-paid liquors; which are being held and possessed, and which have been used and are intended for use, in the distillation, manufacture, possession, and distribution of non-taxpaid liquors, in violation of the provisions of 26 USC 5171(a), 5173, 5178, 5179(a), 5222(a), 5602, and 5686. On or about July 28, 1961, about 6:45 P.M., an observation was made covering a Pontiac automobile owned by one Joseph Garry. 
Garry and one Joseph Incardone put thirteen bags of sugar into the car. These bags of sugar weighed sixty pounds each. Ten such bags were put into the trunk, and three were placed in the rear seat. Those in the rear seat were marked 'Domino.' The others appeared to have similar markings. After the sugar was loaded into the car, Garry together with Incardone drove it to the vicinity of 148 Coburn Avenue, Worcester, Massachusetts, where the car was parked. Some time later, the car with its contents was driven into the yard to the rear of 148 and between the premises 148 and 148 1/2 Coburn Avenue. After remaining there about twenty-five minutes, the same two men drove in the direction of Boston. On August 16, 1961 the Pontiac was observed. In the back seat bags of sugar were observed covered with a cloth or tarpaulin. A sixty-pound bag of sugar was on the front seat. Garry was observed after loading the above-described sugar into the car placing a carton with various five-pound bags of sugar on the top of the tarpaulin. The car was then driven by Garry with Incardone as a passenger to Worcester together with its contents into the yard at 148 and 148 1/2 Coburn Avenue to the rear of and between the two houses. About Midnight on the same night, the Pontiac driven by Garry with Incardone as a passenger was seen pulling up to the premises at 59 Highland Street, Hyde Park, where Garry lives. Garry opened the trunk of his car, and removed ten five-gallon cans therefrom, and placed them on the sidewalk. He then entered the house, and opened a door on the side. Incardone made five trips from the sidewalk to the side of the house carrying two five-gallon cans on each such trip. It appeared that the cans were filled. On each of these trips, Incardone passed the two cans to someone standing in the doorway. Immediately after the fifth such trip, Garry came out of the door and joined Incardone. They walked to the sidewalk, and talked for a few moments. 
Incardone then drove away, and Garry went into his home. About Midnight the Pontiac was observed pulling up in front of Garry's house at 59 Highland Street, Hyde Park. Garry was driving, and Incardone was a passenger. They both got out of the car. Garry opened the trunk, and then entered his house. From the trunk of the car there was removed eleven five-gallon cans which appeared to be filled. Incardone made six trips to a door on the side of the house. He carried two five-gallon cans on each trip, except the sixth trip. On that trip he carried one can, having passed the others to somebody in the doorway, and on the last trip he entered the house. He remained there at least forty-five minutes, and was not observed to leave. On August 28, 1961 Garry drove Incardone in his car to Worcester. On Lake Ave. they met Giacomo Ventresca, who lives at 148 1/2 Coburn Avenue, Worcester. Ventresca entered the car driven by Garry. The car was then driven into the yard to the rear of 148 and between 148 and 148 1/2 Coburn Avenue. An observation was made that empty metal cans, five-gallon size, were being taken from the car owned by Garry, and brought into the premises at 148 1/2 Coburn Avenue, which was occupied by Ventresca. Later, new cans similar in size, shape and appearance were observed being placed into the trunk of Garry's car while parked at the rear of 148 and in front of 148 1/2 Coburn Avenue. The manner in which the cans were handled, and the sound(s) which were heard during the handling of these cans, were consistent with that of cans containing liquid. With all deference, the present affidavit seems hopelessly inadequate to me as a basis for a magistrate's informed determination that a search warrant should issue. We deal with the constitutional right of privacy that can be invaded only on a showing of 'probable cause' as provided by the Fourth Amendment. 
That is a strict standard; what the police say does not necessarily carry the day; 'probable cause' is in the keeping of the magistrate. Giordenello v. United States, 357 U.S. 480, 486—487, 78 S.Ct. 1245, 1250, 2 L.Ed.2d 1503; Johnson v. United States, 333 U.S. 10, 14, 68 S.Ct. 367, 369, 92 L.Ed. 436. Yet anything he says does not necessarily go either. He too is bound by the Constitution. His discretion is reviewable. Aguilar v. State of Texas, 378 U.S. 108, 111, 84 S.Ct. 1509, 1512, 12 L.Ed.2d 723. But unless the constitutional standard of 'probable cause' is defined in meticulous ways, the discretion of police and of magistrates alike will become absolute. The present case illustrates how the mere weight of lengthy and vague recitals takes the place of reasonably probative evidence of the existence of crime. Of the 10 factual paragraphs, eight describe trips said to have been made to and from the vicinity of 148 1/2 Coburn Avenue by one Garry and one Incardone. On these trips, it is said, there were delivered to the vicinity of 148 1/2 Coburn Avenue large quantities of sugar (four deliveries) and empty metal cans (two deliveries, on one of which respondent himself is said to have been a passenger in the car); on one occasion it was observed only that the car was 'heavily laden.' It is said that on two occasions Garry and Incardone were seen taking apparently filled cans into Garry's house, 59 Highland Street, from the Pontiac; on one such occasion the Pontiac, it is said, had been at Coburn Avenue earlier in the day, apparently making a sugar delivery. And, finally, it is averred that on one occasion seemingly filled cans were loaded into the Pontiac near 148 1/2 Coburn Avenue, shortly after a delivery of empties to that address. The 'facts' recited in these eight paragraphs, it is said, permit the inference that a still was being operated on respondent's premises. But are these 'facts' really facts? A statement of 'fact' is only as credible as its source.
Investigator Mazaka evidently believes these statements to be correct; but the magistrate must, of course, know something of the basis of that belief. Nathanson v. United States, 290 U.S. 41, 54 S.Ct. 11, 78 L.Ed. 159. Is the belief of this affiant based on personal observation, or on hearsay, or on hearsay on hearsay? Nowhere in the affidavit is the source of these eight paragraphs of information revealed. In each paragraph the alleged events are simply described directly, or else it is said that certain events 'were observed.' Scarcely a clue is given as to who the observer might have been. It might have been the affiant, though one would not expect that he would so studiously refrain from revealing that he himself witnessed these events. The observers might have been some other investigators, though the affiant does not say so; yet in the two paragraphs next to be discussed the observers are prominently identified as investigators. Perhaps the ultimate source of most of these statements was one or more private citizens, who were interviewed by investigators, whose reports on these interviews came in due course to Investigator Mazaka, who then composed the affidavit. Perhaps many of the 'facts' recited in the affidavit were supplied by an unknown informant over the telephone. In most instances the language of the affidavit suggests that some investigator witnessed the alleged events. For example, the second paragraph begins: 'On or about July 28, 1961, about 6:45 P.M., an observation was made covering a Pontiac automobile owned by one Joseph Garry.' But the presumed investigator who may have been 'covering' this automobile is in no way identified. There is no way of knowing whether the report of this alleged observation was made directly to the affiant or whether it went through one or more intermediaries. 
The Court's unconcern over the failure of the affidavit to identify the sources of the information recited seems based in part on the detailed, lengthy nature of the factual recitals. The Court seems to say that even if we assume that only some small part of the information is trustworthy, still enough remains to establish probable cause. But I would direct attention to the fact that only one of the 12 paragraphs in this affidavit definitely points the finger of suspicion at 148 1/2 Coburn Avenue: that is the paragraph describing the alleged events of August 28, 1961. In every other paragraph the recitals point no more to 148 1/2 Coburn Avenue than they do to 148 Coburn Avenue. The August 28 paragraph is critical to the finding of the existence of probable cause for the search of 148 1/2 Coburn Avenue. Yet the source of the information contained in that paragraph is in no way identified and it is therefore impossible to determine the trustworthiness of that crucial information. A discussion of the legal principles governing the sufficiency of this affidavit must, unhappily, begin with Draper v. United States, 358 U.S. 307, 79 S.Ct. 329, 3 L.Ed.2d 327. There an officer had been told by an informer, known to the officer to be reliable, that a man of a certain description would get off a certain train with heroin in his possession. The officer met the train, observed a man of that description getting off, and arrested him. The Court held that there was probable cause for the arrest. In Jones v. United States, 362 U.S. 257, 80 S.Ct. 725, 4 L.Ed.2d 697, the Court applied the holding in Draper to find an affidavit sufficient to establish probable cause for the issuance of a search warrant, even though the facts stated in the affidavit did not rest on the affiant's personal observations but rather on the observations of another. The Court held that an affidavit could rest on hearsay, 'so long as a substantial basis for crediting the hearsay is presented.'
Id., at 269, 80 S.Ct., at 735. (Emphasis supplied.) In Jones the basis for crediting the informant's hearsay was: (1) the affiant swore that the informant had previously given information to him which was correct; (2) the affiant had been given corroborating information by other informants; and (3) the affiant was independently familiar with the persons claimed by the informants to be concealing narcotics in their apartment, and he knew them to have admitted to the use of narcotics. I dissented from the decisions of the Court in these two cases, for the reasons which I set forth most fully in Draper, supra, 358 U.S., at 314 et seq., 79 S.Ct., at 333. But though I regard these decisions * as taking a view destructive of the guarantees of the Fourth Amendment, they are in any event clearly not dispositive of the present case. As I have already shown, the affidavit here does not set forth a single corroborating fact that is sworn to be within the personal knowledge of the affiant. Moreover, there is not a single statement in the affidavit that could not well be hearsay on hearsay or some other multiple form of hearsay. We are told, however, that it is at least clear that 'Investigators' detected the smell of mash in the vicinity of 148 1/2 Coburn Avenue. And the Court says: 'Observations of fellow officers of the Government engaged in a common investigation are plainly a reliable basis for a warrant applied for by one of their number,' ante, p. 111. But I would make Taylor v. United States, 286 U.S. 1, 6, 52 S.Ct. 466, 467, 76 L.Ed. 951, my starting point, where the Court stated: 'Prohibition officers may rely on a distinctive odor as a physical fact indicative of possible crime; but its presence alone does not strip the owner of a building of constitutional guaranties against * * * unreasonable search.' In Johnson v. United States, 333 U.S. 10, 13, 68 S.Ct. 367, 369, 92 L.Ed. 
436, the Court explained what the decision in Taylor meant: 'That decision held only that odors alone do not authorize a search without warrant. If the presence of odors is testified to before a magistrate and he finds the affiant qualified to know the odor, and it is one sufficiently distinctive to identify a forbidden substance, this Court has never held such a basis insufficient to justify issuance of a search warrant.' (Emphasis supplied.) It is hardly necessary to point out that a magistrate cannot begin to assess the odor-identifying qualifications of persons whose identity is unknown to him. Nor is it necessary to belabor the point that these odors of mash are not ever stated in the affidavit to have emanated from 148 1/2 Coburn Avenue. The Court of Appeals was surely correct when it observed that 'the affidavit leaves as a complete mystery the manner in which the Investigators discovered their information.' 324 F.2d 864, 869. Such being the case, I see no way to avoid the conclusion of the majority below: 'If hearsay evidence is to be relied upon in the preparation of an affidavit for a search warrant, the officer or attorney preparing such an affidavit should keep in mind that hearsay statements are only as credible as their source and only as strong as their corroboration. And where the source of the information is in doubt and the corroboration by the affiant is unclear, the affidavit is insufficient.' Id., at 869—870. That conclusion states a relatively clear standard of probable cause and is in sharp contrast to the amorphous one upon which today's decision rests. In Jones v. United States, supra, this Court forgot, as it forgets again today, that the duty of the magistrate is not delegable to the police. Nathanson v. United States, 290 U.S. 41, 54 S.Ct. 11, 78 L.Ed. 159. It is for the magistrate, not the police, to decide whether there is probable cause for the issuance of the warrant. 
That function cannot be discharged by the magistrate unless the police first discharge their own, different responsibility: 'to evidence what is reliable and why, and not to introduce a hodge-podge under some general formalistic coverall.' 324 F.2d at 870. And see Masiello v. United States, 113 U.S.App.D.C. 32, 304 F.2d 399, 401—402. That is the duty of the police—the rest is not for them. The Fourth Amendment's policy against unreasonable searches and seizures finds expression in Rule 41 of the Federal Rules of Criminal Procedure. 'Unquestionably, when a person is lawfully arrested, the police have the right, without a search warrant, to make a contemporaneous search of the person of the accused for weapons or for the fruits of or implements used to commit the crime. Weeks v. United States, 232 U.S. 383, 392, 34 S.Ct. 341, 344, 58 L.Ed. 652 (1914); Agnello v. United States, 269 U.S. 20, 30, 46 S.Ct. 4, 5, 70 L.Ed. 145 (1925). This right to search and seize without a search warrant extends to things under the accused's immediate control, Carroll v. United States, supra, 267 U.S., at 158, 45 S.Ct. at 287, 69 L.Ed. 543, and, to an extent depending on the circumstances of the case, to the place where he is arrested, Agnello v. United States, supra, 269 U.S. at 30, 46 S.Ct. at 5, 70 L.Ed. 145; Marron v. United States, 275 U.S. 192, 199, 48 S.Ct. 74, 77, 72 L.Ed. 231 (1927); United States v. Rabinowitz, 339 U.S. 56, 61—62, 70 S.Ct. 430, 433, 94 L.Ed. 653 (1950). The rule allowing contemporaneous searches is justified, for example, by the need to seize weapons and other things which might be used to assault an officer or effect an escape, as well as by the need to prevent the destruction of evidence of the crime—things which might easily happen where the weapon or evidence is on the accused's person or under his immediate control. But these justifications are absent where a search is remote in time or place from the arrest. 
Once an accused is under arrest and in custody, then a search made at another place, without a warrant, is simply not incident to the arrest.' 376 U.S., at 367, 84 S.Ct., at 883. See, e.g., Monnette v. United States, 299 F.2d 847, 850 (C.A.5th Cir.). Cf. Chapman v. United States, 365 U.S. 610, 81 S.Ct. 776, 5 L.Ed.2d 828; Steeber v. United States, 198 F.2d 615, 616, 618, 33 A.L.R.2d 1425 (C.A.10th Cir.); United States v. Kaplan, 89 F.2d 869 (C.A.2d Cir.). See, e.g., Rugendorf v. United States, 376 U.S. 528, 84 S.Ct. 825, 11 L.Ed.2d 887; Chin Kay v. United States, 311 F.2d 317, 320 (C.A.9th Cir.); United States v. McCormick, 309 F.2d 367, 372 (C.A.7th Cir.); Weise v. United States, 251 F.2d 867, 868 (C.A.9th Cir.). In these cases we might have drawn a clear, unmistakable line and held that hearsay evidence could not support a search warrant. But we did not so hold; instead we held that hearsay was competent for this purpose if there was 'a substantial basis' for crediting it, thereby muddying the waters with considerations of corroboration and informer's reliability. Thus, by forsaking precise standards, the discretion of police and magistrates became less subject to judicial control.
https://www.law.cornell.edu/supremecourt/text/380/102/
BMR can be defined as the total number of calories the human body requires to optimally perform life-sustaining functions, including processing consumed nutrients, blood circulation, protein synthesis, cell production, ion transport and even breathing.

What is the difference between BMR (Basal Metabolic Rate) and RMR (Resting Metabolic Rate)? Although most exercise and weight loss guides use these two terms interchangeably, there is a slight difference. While BMR measures the amount of calories your body needs to perform basic functions, it is measured under restrictive lab conditions. RMR, on the other hand, also measures the number of calories your body needs to perform basic life-sustaining functions, but simply when the body is at rest (typically measured before eating or exercising, ideally after a full night's sleep).

How to calculate Basal Metabolic Rate

BMR can be calculated using a mathematical formula known as the Harris-Benedict Equation:

Men: BMR = 88.362 + (13.397 x weight in kg) + (4.799 x height in cm) - (5.677 x age in years)
Women: BMR = 447.593 + (9.247 x weight in kg) + (3.098 x height in cm) - (4.330 x age in years)

An easier alternative is an online Basal Metabolic Rate calculator, which will ask you for information such as weight, height and age. An online calculator gives you your BMR in addition to an estimate of the total number of calories you burn in a day.

Factors that determine BMR

Basal Metabolic Rate is determined by a number of factors, including gender, age, genetics and body composition. While you cannot alter your age or gender, you can definitely do something about your body composition and, as a result, boost your metabolic rate substantially. The most effective way to change your BMR is simply by building more muscle and cutting down on fat. This is because lean mass burns more calories than fat, even when the human body is completely at rest.
Numerous studies suggest that even a few weeks of resistance training can help you gain enough lean mass to boost resting metabolic rate by 7% to 8%.

How can Basal Metabolic Rate be used to lose weight?

Since Basal Metabolic Rate gives you an idea of how many calories you burn on a daily basis, you can work out how many calories you need to consume so that you burn more than you consume each day. Once you know your Basal Metabolic Rate, you can come up with a healthy diet plan that gives you the right amount of calories while still supplying the essential nutrients you require to remain healthy and fit.
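The Harris-Benedict arithmetic above is easy to check directly. Here is a minimal Python sketch; the function name and the example figures are illustrative only, not taken from the article:

```python
def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_years: float) -> float:
    """Estimate Basal Metabolic Rate in kcal/day using the revised
    Harris-Benedict equations quoted above."""
    if sex == "male":
        return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_years
    if sex == "female":
        return 447.593 + 9.247 * weight_kg + 3.098 * height_cm - 4.330 * age_years
    raise ValueError("sex must be 'male' or 'female'")

# Example: a hypothetical 30-year-old man, 70 kg, 175 cm tall
print(round(harris_benedict_bmr("male", 70, 175, 30)))  # -> 1696 (kcal/day)
```

To use the result for weight loss as described above, you would aim for a daily intake somewhat below this figure (plus activity calories), so that expenditure exceeds consumption.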
https://www.holisticboard.org/weight-loss/basal-metabolic-rate-bmr/
A Sign of Evolution, Transition and Change at a Time of Chaos

Clear Winged Hummingbird Moth, 6/1/2020

A Hovering Muse: an Encouraging Sign of Transition and Change

"It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change." Darwin

Animalia, Arthropoda, Insecta, Lepidoptera, Sphingidae, Hemaris thysbe (hummingbird clearwing moth)

It was after an unproductive few weeks that I had taken my writing out to the deck. I picked up my pen and just then saw, hovering above the bougainvillea, what I thought looked like a large bee. On closer inspection, I thought... no, can't be, it's a tiny hummingbird, must be the Tinker Bell of Ontario hummers, rare I guessed. I did note something strange: it had feelers. Quickly I consulted my copy of "A Field Guide to the Birds" and went directly to the section on hummingbirds. The ruby-throated hummingbird, at 3 ¾ inches, is the only eastern species in Ontario. This curiosity was much smaller than that hummingbird. No feelers indicated. My tiny whirring bird, I soon discovered, was not a hummingbird at all but a hummingbird clearwing moth. The moth was, in fact, closely related to Darwin's hummingbird hawk moth, a superb example of adaptive change and convergent evolution. Over millions of years it had developed hovering skill, rapid wing power and a well-developed proboscis for probing flowers and feeding on the sweet, energy-giving nectar it needed to survive. This creature, my muse, was all about change. The clear winged hummingbird moth had advanced through the miraculous stages of its own metamorphosis from egg to caterpillar to pupa, emerging from its cocoon as an adult moth. It used its own wing power to take to the air, where it was battered by wind, rain and the elements, all the while avoiding sharp-eyed hawks and other predators.
A survivor and a fighter, a rare specimen: I was seeing one individual that closely resembled another and yet in its own way was utterly unique, adapting to the challenges of change and thriving. Sometimes, when life seems out of sync and upside down, when you are seeking peace and grounding but find yourself surrounded by chaos and conflict, the inspiration to carry on may be found in an observation close to home, in the littlest of things, like the fortunate sighting on my garden deck of a small clear winged hummingbird moth, that determined miracle of evolution I saw hovering there right before my eyes.
http://www.artinpandemic.com/blog/june-01st-2020
At Boxted St Peter’s Church of England School, we believe these British Values are not just taught in discrete lessons, but rather, are demonstrated in how we speak to the children, what we offer them, the relationships we have with them, and how our curriculum is designed; in short, our tolerant, respectful, child-centred environment seeks to model the very values described in the previous paragraph. We think education is about helping people to understand how things work and how to challenge and change them for the better. Values won’t be assumed because schools demand them, particularly if they are very different from those at home. They have to be arrived at through mutual exploration and understanding.

Our aim: Our children will demonstrate:
- an understanding of how citizens can influence decision-making through the democratic process;
- an appreciation that living under the rule of law protects individual citizens and is essential for their wellbeing and safety;
- an understanding that there is a separation of power between the executive and the judiciary, and that while some public bodies such as the police and the army can be held to account through Parliament, others such as the courts maintain independence;
- an understanding that the freedom to choose and hold other faiths and beliefs is protected in law;
- an acceptance that other people having different faiths or beliefs to oneself (or having none) should be accepted and tolerated, and should not be the cause of prejudicial or discriminatory behaviour;
- an understanding of the importance of identifying and combatting discrimination.

How we ‘live’ the British Values at Boxted

Democracy (linked to church value, Hope): Democracy is embedded within the school. Pupils have the opportunity to have their voices heard through our School Council and Pupil Perception Questionnaires. The elections of School Councillors allow children to democratically elect their representatives.
The School Council works to improve the school. It also raises money for charities. Furthermore, we hold Head Boy and Head Girl elections at the beginning of each academic year. Year Six children run campaigns and vote on their preferred candidate. Following speeches and campaigns, the whole school votes in a real election with ballot boxes and papers. Sports Council and Faith Group are also voted for; these ‘committees’ work together to improve sports and collective worship, respectively. We encourage volunteerism. This includes roles such as librarians, buddies (for new entrants) and ‘Play Leaders’.

The Rule of Law (linked to church values Forgiveness, Compassion): The importance of laws, whether they be those that govern the class, the school, or the country, is consistently reinforced throughout regular school days and through school collective worship. Our children recently helped us to write our new Behaviour Policy; they developed a Code of Conduct and were instrumental in deciding how the Code should be reinforced through our ‘Class Dojo’ system. Pupils are taught the value and reasons behind laws (e.g. that they govern and protect us), the responsibilities that this involves, and the consequences when laws are broken. Visits from authorities such as the Police, Fire Service and Lifeguards are regular parts of our curriculum and help reinforce this message. Our Christian Values encourage children to know God’s law and to reflect upon how this relates to British law. Children are taught regularly about e-safety and are required to abide by a set of rules when using the internet.

Individual Liberty (linked to church values, Compassion and Perseverance): Within school, pupils are actively encouraged to make choices, knowing that they are in a safe and supportive environment. As a school we educate and provide boundaries for young pupils to make choices safely, through the provision of a safe environment and an empowering education.
Pupils are encouraged to know, understand and exercise their rights and personal freedoms, and they are advised how to exercise these safely, for example through our E-Safety and PSHE lessons. Whether it be through choice of challenge, ways in which work may be recorded, or participation in our numerous extracurricular clubs and opportunities, pupils are given the freedom to make choices. One example of this is our homework system: children choose what they learn from a grid of over 30 activities and are free to develop their learning at home in a style that suits them. Our school provides pastoral support and guidance through play therapy, a family support worker, and the ‘Summit’ programme, which includes access to therapists through Colchester Mind. Growth Mindset is an important part of our school ethos, encouraging all children, regardless of background, to become life-long learners and to aspire and achieve.

Mutual Respect (linked to church values, Compassion and Respect): An important part of our school ethos and Behaviour Policy revolves around our Church Values: Respect, Perseverance, Compassion, Hope, Peace and Forgiveness. We believe that positive, respectful relationships are at the heart of school life. Respect is modelled in the behaviour and attitudes of all adults in school, and is evident in the relationships that they have with each other, as well as with our children. We have high expectations of pupil conduct and this is reflected in our Behaviour Policy. Children are taught to respect each other and to work cooperatively and collaboratively. Mutual respect is also promoted through the teachings of Jesus in the RE curriculum, PSHE lessons and collective worship.

Tolerance of those of Different Faiths and Beliefs (linked to church value, Peace): We teach children that the freedom to choose and hold other faiths and beliefs is protected by law.
Tolerance of different faiths and beliefs is promoted through the Diocese of Chelmsford Religious Education Syllabus, supported by the Essex Scheme of Work. Children learn about different religions: their beliefs, places of worship and festivals. Collective worship and discussions involving prejudice and prejudice-based bullying have been followed and supported by learning in RE and PSHE. Our topic-based curriculum offers opportunities for children to study diverse cultures and faiths.

Extremism

Something which is clearly not part of any British or European value is extremism. It is important to remember that whilst the threat from so-called Islamic State has been a focus in the Counter Terrorism and Security Act, the Prevent Duty is clear that extremism of all kinds should be tackled too. In England, far-right groups such as Britain First and the English Defence League need to be tackled, too. Extremism is not a new topic in education, but schools have a relatively new statutory duty to pay “due regard to the need to prevent people from being drawn into terrorism”. Read the government's Prevent Duty Guidance and its Guidance for Schools.
https://www.boxted.essex.sch.uk/about-the-school/british-values/
Epigenetic mechanisms, particularly DNA methylation, are a possible link between environmental and biological determinants of health. As the DNA methylome undergoes rearrangement in utero and is susceptible to environmental insults, it may be a mechanism explaining the developmental origins of human disease, with public health importance. However, the epidemiologic studies needed to identify the role DNA methylation plays in mediating environmental exposure and disease risk still face several obstacles. This dissertation addresses knowledge gaps impeding the rigorous adoption of genome-scale measures of site-specific DNA methylation, like the Illumina Infinium HumanMethylation450 (450K) BeadChip®, into epidemiologic study designs. We then investigate the impact of prenatal exposure to polybrominated diphenyl ethers (PBDEs) on DNA methylation of children at birth. PBDEs are a class of flame-retardant chemicals widely used in U.S. consumer products over the last 40 years that have previously been associated with adverse neurobehavioral outcomes, obesity, and other effects. Specifically, we aimed to:
1) Identify and minimize sources of technical variation for site-specific DNA methylation measured by the 450K BeadChip assay in epidemiologic studies;
2) Characterize sources of biological variation due to host factors (e.g. blood cell composition and sex) in measures of whole blood DNA methylation at birth;
3) Determine whether prenatal exposure to PBDEs is associated with differential methylation patterns of CpG sites in umbilical cord blood.
Our results showed that the newly proposed All Sample Mean Normalization (ASMN) procedure performed consistently well, both at reducing batch effects and at improving replicate comparability, compared to several other leading normalization methods. It can be successfully implemented in epidemiologic studies to enhance 450K DNA methylation data preprocessing.
In our examination of biological variation, we found that a standard approach in epigenome-wide analysis, minfi white blood cell composition estimation, did not correlate well with white cell counts from newborns (ρ = -0.05 for granulocytes; ρ = -0.03 for lymphocytes), but improved substantially (ρ = 0.77 for granulocytes; ρ = 0.75 for lymphocytes) in older children, likely due to increasing similarity with minfi's adult reference data as children aged. This suggests that minfi may not currently be appropriate for analyses involving newborns or young children. Additionally, results on DNA methylation differences by sex identified 3,031 differentially methylated positions (DMPs) and 3,604 sex-associated differentially methylated regions (DMRs) on autosomes that were mostly hypermethylated in girls compared to boys. Our hits were significantly enriched for gene ontology terms related to nervous system development and behavior. Finally, we investigated the impact of exposure to PBDEs during the highly susceptible prenatal period on DNA methylation at birth of Mexican-American children enrolled in the Center for Health Assessment of Mothers and Children of Salinas (CHAMACOS) study. We identified between 6 and 48 DMRs in umbilical cord blood associated with different measures of prenatal PBDE exposure. BDE-47, -99 and Σ4BDE were associated with fewer (from 6 to 9), mostly hypomethylated DMRs. Prenatal BDE-100 and -153 levels were associated with more DMRs (11 and 48, respectively), the majority of which were hypermethylated. The PBDE-associated DMRs we found were located in genes (e.g. NRBP1, CDH9, NTN1, S100A13) involved in biologically relevant functions (including axon guidance and tumor suppression), given the health effects observed in association with BDE exposure to date. In the last 30 years there has been a sharp increase in obesity among children, and minority populations are particularly vulnerable.
Although the etiology of obesity is thought to be multifactorial, with causes stemming from diet, environment, genetics, and their interaction, no clear molecular pathways have been identified. Underlying obesity development are changes in the critical energy balance hormones adiponectin and leptin (adipokines); however, their development and determinants over the childhood period remain poorly understood. Previous studies indicate that certain features of the early life environment may have lasting effects on future child metabolic health and highlight the potential obesogenic role of bisphenol A (BPA), a high production volume chemical detectable in 93% of the United States population. Mechanisms of BPA action remain uncertain; however, a leading hypothesis holds that BPA exposure may result in epigenetic changes, such as altered deoxyribonucleic acid (DNA) methylation, affecting expression of adipogenic genes. To address these data gaps, we proposed the following specific aims: (1) to measure plasma adiponectin and leptin in Mexican-American children from the Center for Health Assessment of Mothers and Children of Salinas (CHAMACOS) cohort at birth and again at 2, 5, and 9 years, examining heterogeneity in adipokine growth patterns and their association with candidate perinatal factors; (2) to determine whether maternal or concurrent urinary BPA concentrations are associated with adiponectin and/or leptin levels in children; and (3) to characterize the DNA methylation structure of peroxisome proliferator-activated receptor gamma (PPARγ), the master regulator gene in adipogenesis, and determine whether PPARγ methylation is associated with child adipokines and/or body size and whether prenatal or concurrent BPA may influence PPARγ methylation. Our results highlight several developmental differences between adiponectin and leptin over the childhood period. 
While leptin levels closely and positively correlated with child body size at all ages, adiponectin had inverse and weaker associations with body mass index (BMI) at 2, 5, and 9 years. Further, after adjusting for BMI, adiponectin reflected an improved lipid profile while leptin was directly related to systolic and diastolic blood pressure in 9-year-old children. Of the candidate perinatal factors examined, we identified maternal consumption of sugar-sweetened beverages (SSB) during pregnancy and an increased rate of growth during the first 6 months of life as significant risk factors for altered adiponectin levels during childhood. Further, children with greater birth weight had rapidly rising leptin levels over the birth-to-9-year period and the highest BMI and waist circumference at 9 years. Our BPA analyses indicated sexually dimorphic responses similar to those previously reported in animal studies. While BPA concentrations during early pregnancy were directly associated with adiponectin levels in 9-year-old girls (b=3.71, P=0.03, N=131), BPA concentrations during late pregnancy were associated with increased plasma leptin in 9-year-old boys (b=0.06, P=0.01, N=179), controlling for sociodemographics, dietary variables, and child BMI. Finally, using the Illumina Infinium 450K Array, we examined DNA methylation at 23 sites spanning the PPARγ promoter and gene body region in discovery (N=117 at birth, N=108 at 9 years) and validation (N=116 at birth, N=131 at 9 years) sets of children. We report that methylation at site 1 was significantly and negatively associated with child size at birth (b=-2.5, P=0.04) and at 9 years (b=-4.8, P<0.001) in the discovery set, and these relationships were replicated in the validation set. Overall, our research adds evidence in support of the hypothesis that children's metabolic health may be programmed during early life and suggests that epigenetic mechanisms may play an important role in determining child size. 
The alteration of gene expression mediated by epigenetic modifications has been proposed as a mechanism by which chemical and biological factors during gestation and childhood may influence health and adult disease onset. Changes in DNA methylation, the most commonly assessed epigenetic mechanism, have been linked to numerous exposures including diet, metals, and chemicals such as phthalates. The majority of individuals, including pregnant women and children, have detectable levels of metabolites of phthalates, chemicals used in consumer products to increase the flexibility of plastics and as solvents. Phthalate exposure during early life has been associated with poor birth and developmental health outcomes, which may be mediated by epigenetics. Increasing evidence in human and animal models has shown an association between exposure to phthalates and DNA methylation levels. However, there is limited research on the relationship between pregnancy phthalate exposure and imprinted gene methylation. Imprinted genes exhibit expression of only one parental allele, and many are involved in early growth and development. Progress in the field of metabolomics has allowed for the assessment of thousands of metabolites in biological specimens. Metabolite levels in humans have been associated with age and obesity; however, research on the effect of metabolites, especially the diverse classes of lipid metabolites, on DNA methylation is sparse. Animal studies have identified associations between maternal fatty acid dietary supplementation and offspring DNA methylation. The relationship between maternal lipid metabolites and DNA methylation of newborns has thus far been examined only in 40 mother-child dyads from a predominantly Caucasian study population. Environmental exposures, in addition to their role in epigenetic regulation, have been shown to impact human health outcomes such as obesity status. 
Despite advances in the identification of additional risk factors of obesity, the influence of maternal psychosocial factors on child obesity status and biomarkers has been poorly examined. The aims of this dissertation are to address key knowledge gaps of the relationships of early life exposures, epigenetics, and childhood obesity. The analysis benefitted from numerous samples and longitudinal data accumulated since 1998 by the Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS) birth cohort study of hundreds of Mexican-American children and their mothers. The well-documented environmental exposures and the high prevalence of parental and child obesity in the CHAMACOS population make this an excellent study population to assess the role of early life exposure on epigenetic mechanisms and health. Results from the research in this dissertation can inform public health prevention strategies in the general population and more targeted approaches in Hispanic subpopulations, which are projected to comprise a greater portion of the United States population in the upcoming decades. Breastfeeding has numerous benefits to mother and child including improved maternal post-partum health, maternal/child bonding, and infant neurodevelopment and immune function. However, concern has been expressed about potential health risks posed to infants from environmental chemicals in human milk. The Food Quality Protection Act of 1996 requires the United States Environmental Protection Agency to set pesticide tolerance levels in food that ensure the safety of sensitive sub-populations, particularly pregnant women and children. 
Maternal dietary and environmental exposures to organophosphate (OP), organochlorine (OC), carbamate, and pyrethroid pesticides and polychlorinated biphenyls (PCBs) may lead to measurable levels of these chemicals in breast milk, and because some of these chemicals interfere with hormone regulation, a mother's ability to lactate may be compromised by exposure. Lactational exposures to infants are of particular concern because infants' metabolic, neurologic, and other systems are still developing, making children more susceptible to the hazards of pesticides than adults. Although persistent pesticides, such as dichlorodiphenyltrichloroethane (DDT), have been biomonitored in human milk for decades, there are few studies measuring non-persistent pesticides in milk and no studies examining potential sources of non-persistent pesticides in milk. Using data and samples from the Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS), another study on peripartum pesticide excretion, and a study of breast milk samples collected from San Francisco Bay Area women, this research aimed to: 1) determine whether persistent organic pollutants measured in the blood of CHAMACOS participants are associated with shortened lactation duration; 2) measure and compare the concentrations of OPs, OCs, carbamates, pyrethroids, and PCBs in the milk of women residing in a rural area with those of women residing in an urban region; and 3) investigate whether concentrations of two non-persistent pesticides frequently detected in milk are correlated with concentrations measured in other biological samples, and determine the potential predictors or sources of maternal exposure. Concentrations of potentially endocrine disrupting chemicals measured in maternal serum were not associated with shortened lactation duration. 
Breast milk samples from both urban and agricultural populations contained all of the persistent chemicals measured as well as the non-persistent pesticides chlorpyrifos and permethrin. Concentrations of these two non-persistent pesticides were positively, but not statistically significantly, correlated with concentrations measured in the plasma and urine of the same women. Lastly, some dietary and household factors were identified as potential sources of exposure for the mothers studied. This research provides information on maternal exposure and on lactational exposure to non-persistent and persistent pesticides and PCBs in our most sensitive population, infants. Understanding whether lactation is potentially disrupted, and the extent of dietary exposures to infants, will allow for informed policy decisions regarding the use of pesticides and for the design of effective interventions to ensure the safety of this food for infants.
https://escholarship.org/search/?q=author%3A%22Holland%2C%20Nina%22
CLASS DESCRIPTIONS: Toddler Class (ages 3-4) This class introduces children to the language through playing games, making crafts, singing songs, and storytelling. These activities supplement our standard German kindergarten activities, especially learning the concepts of same and different, size, opposites, left and right, numbers, colors, and more. Seasonal topics are also incorporated into the lesson plan. Mondays 3:30 pm Primary Class (ages 5-6) This class teaches children to learn and practice German through playful activities that encourage the kids to communicate in German. The curriculum is filled with activities that encourage language and literacy development in young children, e.g. rhythmic games and playing with syllables and letters. Reading and writing are introduced through games and other supplemental learning materials. Mondays 4:40 pm Thursdays 3:30 pm Lower Elementary Class (ages 7-8) In this class children focus on conversation, expand their vocabulary, and work on grammar as well as reading and writing in German. Reading and writing are introduced within the learning topic, and other supplemental learning materials support reading and writing development as well. Mondays 4:40 pm Elementary Class (ages 9-10) This class focuses on conversation. A strong literary component also ensures that the students not only listen, speak, and play in German, but also read, write, and work on grammar. The topics are chosen according to children's interests to provide a fun yet educational environment similar to schools in Germany. Tuesdays 4:40 pm Upper Elementary Class (ages 11-12) This class is for students who already communicate well but need to work on grammar and orthography. Students will apply word analysis and vocabulary skills to comprehend selections, and they will read and comprehend unfamiliar words using root words, synonyms, antonyms, word origins, and derivations. 
Thursdays 5:50 pm Teenager Class (ages 13-15) Through reading a variety of stories and hands-on activities, students will make inferences, draw conclusions, compare and contrast, summarize stories, use context clues, and establish purposes for reading. We will also focus on thinking in the language to help students improve their written communication skills.
http://as.nyu.edu/deutscheshaus/childrens-program/class-descriptions.html
Vladimir Dinets is a zoologist and author, known for his studies of crocodilian behavior and of numerous rare animals in remote parts of the world, as well as for popular writings in Russian and English. Vladimir Dinets with a skull of a black caiman, Puerto Francisco de Orellana, Ecuador. Dinets was interested in zoology from an early age and was a winner of the all-USSR Student Biology Olympiad at Moscow State University. However, due to his Jewish ancestry, he was unofficially banned from entering that university, and instead obtained a master's degree in biological engineering from the Moscow State Institute of Radio Engineering, Electronics and Automation. In 1997 Dinets emigrated to the United States, and in 2011 he obtained a Ph.D. from the University of Miami (adviser Steven Green). In 2017 he moved to Okinawa, Japan. Dinets maintains a popular bilingual blog on LiveJournal and a website with a number of illustrated essays on biology, conservation, and travel. Dinets' early zoological studies were conducted in remote areas of the USSR, China, and South America; he also participated in a number of conservation projects in Russia, Mongolia, Israel, and Peru. In 1992 he solved the mystery of how rock ptarmigans are able to winter on Arctic islands in total darkness: they survive by feeding on the rich vegetation of sea cliffs where seabird colonies are located in summer. In 1996-1999 Dinets conducted a study of the international trade in endangered insects and consulted the governments of Nepal and Sikkim on the issue, providing a set of recommendations for improving anti-poaching and anti-trafficking controls. In 2000-2005 Dinets participated in studies of marine mammals, as well as of the natural circulation of plague on the Great Plains (at the University of Colorado) and of Sin Nombre hantavirus in the American Southwest (at the University of New Mexico). 
He also conducted a number of solo expeditions in North America, South America, Asia, and Africa, and studied a few species of birds and mammals never before observed by scientists, such as the bay cat on Borneo, the woolly flying squirrel in the mountains of Pakistan, and the Cameroon scaly-tail in the Central African Republic. In 2005-2013 Dinets conducted a comparative study of the social behavior of crocodilians, working in 26 countries. In 2005 he discovered "alligator dances". By 2010 he had elucidated the roles of many signals used by crocodilians and proposed their possible evolutionary history. In 2009-2013 he documented the ability of crocodiles and alligators to use coordination and role separation during cooperative hunting, and to use sticks as lures for hunting birds looking for nesting material. He also conducted the first scientific studies of play behavior in crocodilians and of coordinated hunting in snakes. In 2011 Dinets took part in a WWF expedition to Vietnam to study the saola, and became the first zoologist to find and photograph saola tracks in the wild. In 2012-2013 Dinets was a Research Associate at Louisiana State University, working on whooping crane reintroduction to Louisiana and studying behavioral ecology. Since 2011 Dinets has been a Research Assistant Professor at the University of Tennessee, where he studies behavioral ecology and its applications to conservation. His most recent work is on predicting the effects of possible invasions of brood parasites from Eurasia into North America. Since 2017 Dinets has been a Science and Technology Associate at the Okinawa Institute of Science and Technology. In 1993-1997 Dinets wrote a number of books about travel that remain popular in Russia. Volumes of the Encyclopedia of Russian Nature series, Actual Biology Fund, 26,000 copies published: A. Beme, A. Cherenkov, V. Dinets, V. Flint. Birds of Russia (1995); V. Dinets, E. Rotshild. Mammals of Russia (1997); V. Dinets, E. Rotshild. Domestic Animals (1998). J. Newell (ed.) 
The Russian Far East: A Reference Guide for Conservation and Development. Daniel & Daniel Publishers (2004). V. Dinets. Dragon Songs: Love and Adventure among Crocodiles, Alligators, and Other Dinosaur Relations. Arcade Publishing (2013). V. Dinets. Peterson Field Guide to Finding Mammals in North America (Peterson Field Guides series). Houghton Mifflin Harcourt (2015). V. Dinets. Wildlife Spectacles: Mass Migrations, Mating Rituals, and Other Fascinating Animal Behaviors. Timber Press (2016). G. Burghardt, V. Dinets, S. M. Doody. Reptile Social Behavior. In press, Johns Hopkins University Press.
References:
Dinets, Vladimir (2011). "The Role of Habitat in Crocodilian Communication". Open Access Dissertations.
"Chasing butterfly poachers".
Dinets, V. First Photo of a Bay Cat in the Wild. IUCN/SSC Cat News 38: 5.
Dinets, V. Observations of the woolly flying squirrel Eupetaurus cinereus in Pakistan. Mammalia 75(3): 277-280.
Dinets, V. Nocturnal behavior of American Alligator (Alligator mississippiensis) in the wild during the mating season. Herpetological Bulletin 111: 4-11.
Dinets, V. Effects of aquatic habitat continuity on signal composition in crocodilians. Animal Behaviour 82(2): 191-201.
Dinets, V. Coordination and collaboration in cooperatively hunting crocodilians. Ethology Ecology & Evolution, DOI: 10.1080/03949370.2014.915432.
Dinets, V. Coordinated hunting by Cuban boas. Animal Behavior and Cognition 4: 24-29.
Dinets, V. Long-term cave roosting in the spectral bat (Vampyrum spectrum). Mammalia 81: 529-531.
http://conceptmap.cfapps.io/wikipage?lang=en&name=Vladimir_Dinets
Blue and white porcelain during the Ming Dynasty Blue and white porcelain was first developed from qingbai porcelain during the Yuan period. This period was a time of experimentation, as potters tried to develop a transparent glaze that would not obscure the blue motifs on the white porcelain body. During this period they also experimented with multiple shapes and motifs, ranging from sturdy and severe to graceful and refined. These pieces reflected the dynamic, high-spirited age in which they were made and are classified under the Zhizheng style. The success of this style resulted in the mid-fourteenth century being regarded as the ‘golden age’ of blue and white ware. The majority of blue and white porcelain from this period is densely painted with multiple motifs, with a few exceptions that use a light underglaze cobalt and a scattering of motifs. A few of the more densely painted pieces have slightly raised white motifs of flowers and dragons carved on a blue and white painted background, thereby creating a strong, vivid impression. The Zhizheng style was created to meet the growing demands of West Asia and the Near East. This style included larger wares and a West Asian-style arabesque design, a pattern of flowers and foliage intricately and repeatedly drawn to cover the majority of the surface of the vessel. This avoidance of white space is characteristic of the West Asian style. The Chinese potters managed to transform these designs to create a flexible composition that did not seem frivolous or crowded, as seen in the pictures below. The quality and look of blue and white porcelain continued to transform during the Ming dynasty. The early years of the Ming dynasty, the Hongwu era, produced blue and white porcelain of low quality. The style, known as the Hongwu style, did not stand out and is commonly described as traditional. The Hongwu style is also decorated rather crudely, without the strength of character of older wares. 
The lack of compactness in design also resulted in a rather lifeless style. The weakness of the style may be attributed to the lack of smalt, a specific kind of cobalt mixture, from West Asia. To overcome this, potters had to substitute a faint blackish blue for smalt, a pigment incomparable to the Zhizheng blue even when painted with the same expertise. Blue and white ware during the Yongle and Xuande eras Blue and white porcelain began to regain its former glory during the Yongle reign, when smalt was readily available again. The blue and white porcelain also began to transform to meet the taste of the Chinese emperor and people, who preferred the more traditional Chinese style. Thus the design of porcelain shifted from the West Asian style to a more distinctly Chinese style. Though the larger vessels of the West Asian style remained, the more complicated shapes were gradually replaced by simpler ones. These simpler shapes included both small and medium-sized ones that could be used by people in their everyday lives. The smalt was further refined to produce brighter underglaze blue motifs. The clay and the overglaze were also improved, resulting in a final product considerably different from the darker blue and white ware of the Zhizheng era. To match the higher quality materials used, motifs became more sophisticated, often having a musical tempo: water plants quiver in a running stream, and vines and floral scrolls undulate in a rhythmic pattern across the surface of the vessel. A vase from the Yongle reign, with intricately drawn floral scrolls and an abundance of white space, can be seen below. Unlike previously, large areas of blank white ground were reserved to better set off the motifs. This new style marked the end of the West Asian style of the Zhizheng era. The Xuande era followed the Yongle era. Though the era lasted only ten years, the official kilns had never been more active, reflecting the stability of the country during this period. 
This period marked the beginning of the inscription of the reign name on the pieces, allowing them to be easily dated. However, the main distinguishing factor between the two eras is that Xuande ware has a certain grandeur. Despite the transition to a more sophisticated design, the blue and white ware from these two eras is still of the same quality, and both represent the move towards a distinctive Chinese style. Instead of reserving white space to create an airiness in design as done in the Yongle style, the Xuande potters confined their motifs to carefully outlined areas in several bands, leaving the intermediary space white. They painted their motifs with blue smalt to which a small amount of domestic cobalt had been added, creating a blackish tint that seemed to enhance the refined quality of the porcelain. The white background set off both the white clay and the blue motifs. The ware was unique due to the presence of extremely fine bubbles in the overglaze, creating a finely pitted, creamy white surface. The potters left large areas white to emphasize the matte-like, delicate finish of the white clay. The inscriptions were also carried out in underglaze cobalt, in the centre of the foot or the inside of the vessel.
http://www.worksoforient.com/?page_id=1349
The Dilemma of Kachi Abadis in Pakistan According to the World Bank’s development indicators, 40.1% of Pakistan’s population was reported to reside in slum areas as of 2018. The term ‘Kachi Abadi’, commonly translated as ‘slum’, can be associated with three types of settlements: slums, informal settlements, and squatter settlements. These terms differ only in which specific property the settlement lacks: - A ‘squatter settlement’ lacks proper land tenure. - An ‘informal settlement’ lacks ‘formal’ control of land use and planning. - The most widely used term, ‘slum’, denotes a settlement lacking the basic resources of life. The Abadis are a common sight in the urban cities of Pakistan, brimming with poverty and lacking basic resources. The world’s largest slum, Orangi Town, is located in Karachi, with around 2,400,000 residents. The word ‘slum’ first emerged in 19th-century London to designate a lowly part of town or a room of low repute. Over the years, the definition has evolved systematically. According to the United Nations (UN) definition, a slum is a ‘contiguous settlement’ lacking basic services and adequate housing infrastructure. Slums are often not legally recognised by the official authorities as an equal part of the city. The UN further describes a slum dweller as an inhabitant lacking five necessities: strong walls, ample living space, clean drinking water, sanitation, and a secure title. The UN’s Millennium Development Goals (MDGs) incorporate the target of ‘significantly improving the lives of a hundred million inhabitants of slums by 2020’ under the seventh goal, ‘Ensure Environmental Sustainability’. Yet even as the lives of 200 million slum dwellers improved, another wave of around 100 million people entered slums over the same period, according to data released by the UN. 
Contributing Factors to the Emergence of Urban Slums: Slums do not emerge on their own but are an outcome of varying economic, social, and often political forces. Listed below are some leading causes that foster slums and informal settlements in Pakistan. High Rate of Urbanisation In Pakistan, cities are expanding rapidly, with 37.1% of the total population residing in cities. Increasing rural-to-urban migration in search of economic opportunities contributes to the emergence of slums, as high urban density pushes the population into the poorer sections of the cities. Urbanisation of Poverty Rapid rural-urban migration causes a phenomenon called the ‘urbanisation of poverty’, whereby poverty shifts from rural areas to urban centres as people move in search of employment opportunities. Lack of Affordable Housing When housing is not planned efficiently to cater to the affordability of the lower-middle and lower classes, slum areas tend to multiply as impoverished residents move to low-cost, congested, and deteriorated parts of the city. Informal Economy In Pakistan, around 68.1% of the labour force is employed in the informal sector. Much of this informal labour force comes from the cities’ slums and informal settlements. They are an integral part of the city’s economy as active sources of cheap labour and daily wagers. Lack of Secure Tenure/Titles When informal settlements do not have secure tenure, there is no incentive or opportunity to enhance living conditions. Without secure tenure, livelihood opportunities and public services are often inaccessible. Due to the lack of formal oversight, slums fall victim to rapid urban decay and degradation of living standards. Social Inequality The phenomena of social exclusion, economic inequality, lack of essential services, lack of education and health facilities, and marginalisation of minorities readily contribute to the proliferation of slums. 
One definition of slums, by the Cities Alliance Action Plan, states that slums are ‘neglected’ portions of societies where housing conditions and living standards are highly impoverished. Poor Land Use Planning Slums are a literal manifestation of poor land-use planning in urban areas. They are characterised by a lack of zonal oversight and enforcement of land use regulations. In Pakistan, corruption in land use is highly prevalent, marked by illegal construction and land mafias that resist any effort to redevelop or eradicate slums. Urban Renewal Urban renewal is mostly an unfounded concept in Pakistan’s urban planning. Urban renewals are state-authorised redevelopment programs that renovate areas suffering from urban decay, economic downturn, or lack of security. In practice, however, urban renewal programs aimed at introducing new housing societies or hosting events are often the cause of slum evictions, whereby large proportions of slum dwellers have no choice but to relocate to more deteriorated areas farther away from economic centres. Population Growth Pakistan’s population growth rate in 2020 was estimated at 1.98%. When the population growth rate is so high, it outpaces the rate at which affordable housing is being constructed. This significantly hampers the dynamics of livelihood opportunities, and people are forced to find homes in poverty-stricken areas. Job Opportunities Compared to rural areas, cities offer a versatile plethora of job opportunities, a higher spirit of entrepreneurship, and a booming services sector. There is also a greater demand for cheap labour and daily wagers. These dynamics attract rural inhabitants, who pursue these opportunities to raise their earnings and use the greater social mobility of the city to climb the ladder of social classes. Slums and the Poverty Trap Slums are marked by intergenerational poverty, as entire households are subjected to financial and socio-economic burdens. 
Exposure to recurrent diseases, lack of access to public facilities, low-paid jobs, and perpetual poverty trap the inhabitants in a cycle of poverty that is hard to escape. Slum areas lack educational and proper vocational opportunities, limiting the futures of their young people. The only job opportunities available to them are part of the informal economy, consisting of daily wages, cheap labour, and low incomes. With the lack of essential services, their cost of living tends to increase, compounded by difficulty in accessing water, dependency on gas cylinders, and lack of electricity. Hence, as the conditions of slums continue to deteriorate, more and more inhabitants are entangled in the poverty trap. Given these constricting factors, it is high time that urban planning in Pakistan shifts its attention specifically towards the emancipation of slum areas and devises efficient land-use policies to counter urban decay and informal settlements.
https://gchf.pk/the-dilemma-of-kachi-abadis-in-pakistan/
1. Field of the Invention The present invention relates to electromagnetic couplers suitable for use in wireless communication systems to transfer information between information communication devices disposed at a short distance from each other via an electrostatic field or an induced electric field. The present invention also relates to information communication devices equipped with the electromagnetic couplers. 2. Description of Related Art Conventional electromagnetic couplers include one disclosed in JP-B 4345851. This electromagnetic coupler (high-frequency coupler) is formed by connecting a plate-like electrode to a series inductor and a parallel inductor via a high-frequency transmission line. Such an electromagnetic coupler is to be disposed in an information communication device, such as a transmitter or a receiver. In the case where a transmitter and a receiver are disposed so that the electrodes of their electromagnetic couplers face each other, when the distance between the two electrodes is 2λ/15 or smaller, where λ is the wavelength at the frequency used, the two electrodes are coupled by the electrostatic field component of longitudinal waves to behave as one capacitance, and as a band pass filter as a whole, making it possible to efficiently communicate information between the two electromagnetic couplers. Also, when the distance between the two electrodes is in the range from 2λ/15 to 8λ/15 of the wavelength λ, information can be transferred by using an induced electric field of longitudinal waves. Meanwhile, when the distance between the electromagnetic couplers is greater than a certain value, information cannot be transferred. 
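As a rough illustration of the scale of these coupling ranges, the sketch below computes the 2λ/15 and 8λ/15 distance limits for an assumed UWB center frequency of 4.48 GHz; the frequency value is an illustrative assumption, not a figure taken from the patent.

```python
# Sketch: convert the lambda-based coupling limits described above into
# physical distances. The 4.48 GHz operating frequency is an assumption
# chosen only to illustrate the order of magnitude involved.

C = 299_792_458.0  # speed of light in vacuum, m/s


def coupling_ranges(freq_hz):
    """Return (wavelength, 2*lambda/15 electrostatic limit,
    8*lambda/15 induced-field limit), all in metres."""
    wavelength = C / freq_hz
    return wavelength, 2 * wavelength / 15, 8 * wavelength / 15


lam, electrostatic_max, induced_max = coupling_ranges(4.48e9)
# At 4.48 GHz the wavelength is roughly 6.7 cm, so electrostatic-field
# coupling applies out to about 9 mm, and induced-electric-field coupling
# from there out to about 36 mm.
```

This makes concrete why such couplers are inherently short-range: beyond a few centimetres at typical UWB frequencies, neither coupling mode operates.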
As a result, other wireless devices do not suffer interference from electromagnetic waves generated from the electromagnetic couplers, and a wireless communication system in which information communication devices equipped with the electromagnetic couplers are used does not suffer interference from other wireless communication systems. Because of these characteristics, wireless communication systems in which the conventional electromagnetic coupler is used make it possible, by using an electrostatic field or an induced electric field of longitudinal waves over a short distance, to communicate a large volume of data between information communication devices by using the UWB (Ultra Wide Band) communication system, in which wide band signals are used. As described above, when the distance between the two electrodes is 2λ/15 or smaller of the wavelength λ of the frequency used, information can be efficiently communicated between the electromagnetic couplers by forming a band pass filter. In other words, the electrodes of the two conventional electromagnetic couplers are coupled by the electrostatic field component of longitudinal waves to behave as one capacitance, and a band pass filter is formed by the series and parallel inductors. However, when the match between the two electromagnetic couplers is not good, signal transmission efficiency is degraded. On the other hand, in the case of wireless communications carried out by using devices provided with this electromagnetic coupler, for example, a cover of each device including a dielectric exists between the electromagnetic couplers, resulting in variations in the dielectric constant between the electromagnetic couplers. Then, variations occur in the value of the capacitance between the electrodes of the two electromagnetic couplers and in the frequency characteristics of the band pass filter, which in some cases may degrade the information transmission characteristics in the frequency band of interest. 
Even if the expected variations in the dielectric constant in some cases are taken into account in designing the electromagnetic couplers, in the case of wireless communications carried out by using other devices made of different materials and/or differently designed, the value of the dielectric constant between the electromagnetic couplers varies, which similarly degrades the information transmission characteristics in the frequency band of interest. Also, when the distance between the electrodes of the two electromagnetic couplers is in the range from 2λ/15 to 8λ/15 of the wavelength λ of the frequency used, information is communicated by using the induced electric field component of longitudinal waves. In this case, when the relative position of the two electromagnetic couplers and the environment are kept constant, the information transmission characteristics depend on matching conditions between the electromagnetic couplers and the feeding system. In other words, the signal intensity from the electromagnetic couplers to the communication module including the feeding system increases under a good matching condition, while the signal intensity from the electromagnetic couplers to the communication module including the feeding system decreases under a poor matching condition. In the conventional art, electromagnetic couplers are designed so that a band pass filter is formed when the distance between the electromagnetic couplers is 2λ/15 or smaller of the wavelength λ of the frequency used; however, the matching condition when the distance between the electromagnetic couplers is in the range from 2λ/15 to 8λ/15 of the wavelength λ of the frequency used is not particularly taken into account in design. 
Therefore, in the case of an insufficient signal intensity when the distance between the electromagnetic couplers is in the range from 2λ/15 to 8λ/15 of the wavelength λ of the frequency used, for example, a redesign is required with a view to forming a band pass filter when the distance between the electromagnetic couplers is 2λ/15 or smaller of the wavelength λ of the frequency used. This means that much time and effort is required in designing the electromagnetic couplers. In addition, when the frequency band to be used is broad, it is required to obtain a large number of frequencies in which the matching condition is suitable, which means that even more time and effort is required.
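The sensitivity to dielectric variation described in this section can be illustrated with the textbook resonance formula for an ideal LC section, f0 = 1/(2π√(LC)). The inductance and capacitance values below are arbitrary illustrative choices, not taken from the patent; the point is only that scaling the inter-electrode capacitance with the relative permittivity εr of an intervening cover shifts the pass band.

```python
import math

def center_frequency(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC section: f0 = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only (not from the patent): 1 nH inductance,
# 1 pF nominal capacitance between the facing electrodes.
L = 1e-9
C_nominal = 1e-12

# A higher-permittivity cover between the couplers scales C roughly with eps_r,
# pulling the center frequency down and detuning the band-pass filter.
for eps_r in (1.0, 2.0, 4.0):
    f0 = center_frequency(L, C_nominal * eps_r)
    print(f"eps_r = {eps_r}: f0 = {f0 / 1e9:.2f} GHz")
```

Doubling the effective permittivity lowers the center frequency by a factor of √2, which is exactly the kind of shift that can push the pass band away from the frequency band of interest.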
Mathematical Impacts

The art of mathematics is an intrinsic part of the many physical sciences which humanity strives to learn; it began as a way to explain the celestial guides, and grew into the science of astronomy and astrophysics. This essay will explain the use of math in astronomy, chemistry, and physics, and their relation to one another. The study of astronomy is the oldest of the physical sciences; it began as an inspiration. For the purpose of this essay, the study will begin with the ancients' knowledge of this science. They had many different views on how those nocturnal guides worked. Instead of sticking to his course of study, Galileo learned by investigating his everyday activities, inquiring further about whatever interested him. "Philosophers had tried to explain motion; now their task was to explain changes in motion." In conclusion, Galileo's discoveries are still looked upon today. By the 1640s, no other astronomer could look past Galileo's discoveries. The work of Galileo, along with that of Copernicus and Kepler, could not be completed until Sir Isaac Newton, the "greatest genius of the Scientific Revolution," came along and made his own set of new discoveries.

Isaac Newton not only changed the world with the invention of calculus, but also with his theory of light and color and his law of universal gravitation (Margaret, 11). To begin with, Isaac Newton laid down the foundations for differential and integral calculus. It all began when Newton was enrolled at Cambridge, the university that helped him along in his studies. There he began reading whatever he could find, especially if it had something to do with mathematics. He read books on geometry by Descartes and algebra books by John Wallis, and eventually developed the binomial theorem, a shortcut for multiplying binomials (Margaret, 46).
They answered the questions that they asked the audience in the introduction in great detail, but without becoming overbearing. I learned much about the mathematical community during different eras, including the struggle between Leibniz and Newton, and the method Archimedes used when he helped form calculus. The only two negatives I have seen in the article are the large leaps, from 225 B.C. to the fifteenth century and from the late fifteenth century to the late twentieth century, and the abrupt ending.

Newton published Philosophiae Naturalis Principia Mathematica (Latin for "Mathematical Principles of Natural Philosophy," usually called the Principia) in 1687. It was recognized as a masterpiece as soon as it was published. This great book includes a theory of gravity and Newton's three laws of motion. He also formulated an empirical law of cooling and studied the speed of sound. Moreover, he supported heliocentrism through his theory of gravitation and Kepler's laws of planetary motion. "Nature and Nature's laws lay hid in night; God said, 'Let Newton be,' and all was light." -- Alexander Pope

"Our society depends upon science, and yet to many of us what scientists do is a mystery" (Hall, 1992, p. XI). Sir Isaac Newton, English mathematician and physicist, was considered one of the greatest scientists in history. Without Newton's contributions, the world would not be the same: modern technology such as computers and televisions would not exist, and space and many other things would not have been explored. During his early life, Sir Isaac Newton was able to develop calculus as well as theories of natural forces and optics, building initially upon the knowledge left by his predecessors.

For six years, Kepler taught geometry, Virgil, arithmetic, and rhetoric. There he worked out a complex geometric hypothesis to account for the distances between the planetary orbits, orbits that he mistakenly assumed were circular.
Kepler then proposed that the sun emits a force that diminishes inversely with distance and pushes the planets around in their orbits. Kepler published his account in a thesis entitled Mysterium Cosmographicum ("Cosmographic Mystery") in 1596. This work is significant because it presented the first comprehensive and logical account of the geometrical advantages of Copernican theory.

Since Hilbert's 1900 address on mathematical problems, his questions have continued to influence mathematics (Jeremy Gray). David Hilbert was born on 23 January 1862 in Königsberg, Germany. He attended the Friedrichskolleg gymnasium from 1872 to 1879, the Wilhelm gymnasium from 1879 to 1880, and the University of Königsberg from 1880 to 1885. Some of the books that David Hilbert wrote include works on statistical mechanics, the theory of algebraic number fields, the foundations of geometry, and the principles of mathematical logic. Hilbert's 23 mathematical problems were more than just a collection of mathematical problems, because he outlined problems that addressed his mathematical philosophy. As indicated by his various inventions, he was also interested in applying his knowledge to practical problems.

Galileo helped establish the modern scientific method through his use of observation and experimentation. His work in mathematics, physics, and astronomy made him a leading figure of the early scientific revolution.

The three most important contributions of Newton are solving the mysteries of light and optics, formulating his three laws of motion, and deriving from them the law of universal gravitation. He also contributed a great deal to the field of mathematics.
While he was still a student at Cambridge University in 1664, he took a great interest in the mysteries of light, optics, and color. He read the works of Robert Boyle, Robert Hooke, and René Descartes for inspiration. He investigated the refraction of light by passing a beam of sunlight through a prism, which split the beam into separate colors like a rainbow. Over a few years, in a series of elaborate experiments, Newton discovered measurable, mathematical patterns in the phenomenon of color.
https://www.123helpme.com/essay/The-Elements-of-Newtons-Philosophy-By-Voltaire-110531
How Long Before Homemade Wine Is Drinkable? (4 Facts)

If you're making your own wine, you're probably eager to try it before it's completely ready. However, winemaking takes a long time, and it's important to wait the appropriate amount of time before drinking. Homemade wine is drinkable about 1-4 months after you start the winemaking process. The wait depends on the kind of wine you're making, how clear it is, and its specific gravity value. Let's take a closer look at the factors to consider when determining whether a homemade wine is drinkable.

Things To Know Before Drinking Homemade Wine

When you're making homemade wine, you probably want to taste it as soon as possible to see how well you did. Unfortunately, winemaking can be quite a lengthy process, so you won't get the instant gratification you may desire. The good news is that your homemade wine will be drinkable after some time. Here are some things to keep in mind.

1. Wine Should Have No More Sediment Before It Goes Into the Bottle

When you buy wine in the store, it's already clear and without sediment, but it doesn't start out that way, so you may be a bit shocked to find sediment in your wine. However, if you give the wine enough time, it will clear itself by dropping sediment to the bottom of the fermenter. This sediment is mostly tiny yeast cells. If you want to speed up this process, you can use bentonite, a fining agent that binds suspended particles and pulls them to the bottom. This clears up the wine quickly and lets you get it into bottles sooner.

2. Your Wine Should Be Less Than 0.998 on the Specific Gravity Scale

Another thing you have to be aware of before putting your homemade wine in the bottle is its specific gravity. The specific gravity scale measures the ratio of a liquid's density to the density of water. You can measure specific gravity with a hydrometer, and the value indicates whether the fermentation process is finished or not.
If the value is less than 0.998, the wine is ready for bottling. Check the specific gravity regularly throughout the fermentation process to determine whether you need to add more sugar to increase the alcohol level. Knowing this value also tells you whether your wine is progressing or whether something has gone wrong and the fermentation isn't continuing as it's meant to. My favorite hydrometer is this Triple Scale Hydrometer from Amazon.com, which is extremely accurate and easy to use. I also like that it doesn't contain any mercury, lead, or other hazardous compounds, so you can check your wine without worrying about adding any toxic material to the brew.

3. Homemade Wine Needs To Spend Time in the Bottle Before Drinking

You may think that you just need to wait for the fermentation process to finish and then you'll be able to drink your wine, but this isn't true. Your wine will taste better if you let it sit in the bottle for a while, as aging deepens its flavors and character. Wine tastes better with age because more time allows reactions to occur among sugars, acids, and phenolic compounds. As wines age, more flavors emerge as the acids and alcohols react with each other to form new compounds, which can then dissolve and rearrange to form yet another set of compounds. Whenever you open a bottle of wine, it's at a distinct stage of these reactions, which means it will have a unique taste. Allowing wine to rest in the bottle also helps develop its texture. In freshly made wines, the tannins, which are phenolic compounds, repel each other and therefore remain suspended in the wine. As they age, they lose their charge and begin to combine with each other instead of repelling. They form large, heavy chains as they combine, which reduces their surface area and makes them taste smoother.
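The hydrometer check described above can be sketched in a few lines, together with a widely used homebrew rule of thumb for estimating alcohol content from two gravity readings, ABV ≈ (OG − FG) × 131.25. The gravity values below are illustrative, not measurements.

```python
def is_ready_to_bottle(specific_gravity: float) -> bool:
    """Fermentation is considered finished once gravity drops below 0.998."""
    return specific_gravity < 0.998

def estimate_abv(original_gravity: float, final_gravity: float) -> float:
    """Common homebrew approximation: ABV ~= (OG - FG) * 131.25."""
    return (original_gravity - final_gravity) * 131.25

og = 1.090  # hydrometer reading before fermentation (illustrative)
fg = 0.996  # reading once fermentation has settled (illustrative)

print(is_ready_to_bottle(fg))          # True
print(f"{estimate_abv(og, fg):.1f}%")  # 12.3%
```

A reading that stays above 0.998 across several days usually means fermentation is still active (or stuck), so you would hold off on bottling and keep checking.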
For this reason, red wine often tastes smoother after it has aged in the bottle for some time. By letting your homemade wine spend time in the bottle before you drink it, you allow the wine to reveal its true nature and develop complex, unique tertiary notes. This makes for a better and more individual drinking experience for you and all who are lucky enough to snag a taste of your wine.

4. Wait Time Depends on the Kind of Wine You're Making

Some types of wine benefit more from aging than others. All wines benefit from at least some time to settle in the bottle, but some white wines don't have a firm structure, so they don't benefit from extended aging. Therefore, the waiting period depends on what kind of wine you're making. The following chart outlines the potential aging window of different kinds of wines, but keep in mind that you certainly don't have to wait this long to drink your homemade wine. Most homemade red wines taste good after six months of aging, and three months is sufficient for most white wines. If you have the patience to wait years before trying your homemade wine, you may be surprised by the unique and complex flavors that emerge to reward you.

Final Thoughts

If you're patient and give your homemade wine enough time to develop a strong, distinct flavor profile during aging, clear itself of all sediment, and reach an acceptable value on the specific gravity scale, it is likely to taste the way you want it to. Winemaking may simply require a little more patience than you were expecting.
https://homebrewadvice.com/how-long-before-homemade-wine-is-drinkable
PLEA 2016: Cities, Buildings, People - Towards Regenerative Environments, 11-13 July 2016

PLEA is an organization engaged in a worldwide discourse on sustainable architecture and urban design through annual international conferences, workshops and publications. It has a membership of several thousand professionals, academics and students from over 40 countries. Participation in PLEA activities is open to all whose work deals with architecture and the built environment, who share our objectives and who attend PLEA events. PLEA stands for "Passive and Low Energy Architecture", a commitment to the development, documentation and diffusion of the principles of bioclimatic design and the application of natural and innovative techniques for sustainable architecture and urban design. PLEA serves as an open, international, interdisciplinary forum to promote high quality research, practice and education in environmentally sustainable design. PLEA is an autonomous, non-profit association of individuals sharing the art, science, planning and design of the built environment. PLEA pursues its objectives through international conferences and workshops; expert group meetings and consultancies; scientific and technical publications; and architectural competitions and exhibitions. Since 1982 PLEA has organized international conferences and events across the globe. PLEA annual conferences are highly ranked, attracting academics and practicing architects in equal numbers. Past conferences have taken place in the United States, Europe, South America, Asia, Africa and Australia. This is the first time since 1981 that the United States has hosted a PLEA conference. PLEA 2016 is organized by Cal Poly Pomona, the University of Southern California and Cal Poly San Luis Obispo. The theme of PLEA 2016 is Cities, Buildings, People: Towards Regenerative Environments; the conference will explore the interactions between people and buildings to achieve livable, regenerative environments at multiple scales.
Given that we at PLEA value both research and practice, each track will strive to combine exemplary case studies and research papers.
https://www.activehouse.info/plea-2016-cities-buildings-people-towards-regenerative-environments-11-13-july-2016/
Book Review | Final Frontier: India and Space Security

The book "Final Frontier: India and Space Security" is written by Dr Bharath Gopalaswamy and published by Westland Publications Private Limited, Chennai. Its primary focus is the dynamic nature of warfare and the need for the militarisation of space by India. According to the author, the shift in the aerial frontier at the end of the First World War, from reconnaissance and information gathering to artillery guns and bombings, along with advances in aircraft technology, played a major role in the development of space exploration. The Cold War acted as a catalyst for powerful nations like the United States and the erstwhile Soviet Union in securing national interests in the domain of space. India, on the other hand, has gradually modernised its defence and seems to have accepted the Revolution in Military Affairs, which includes technological advancements in war and the growing need for space and cyber assets. The first chapter of the book gives an overview of the origins of India's space programme. Unlike most countries in the 20th century, India chose a civil administrative body to handle its space affairs and planned to use it for the socio-economic development of the country. This distinctive space programme strengthened India's status as a leading member of the Non-Aligned Movement, and enabled research-related projects with both the United States and the Soviet Union. Dr Gopalaswamy also mentions the potential use of satellites at that time as a means of communication through television and other broadcasting methods. The major political factor, according to the author, that led to a shift in India's space programme was the 1971 Bangladesh war, though it is not clearly stated what kind of security threat drove that change.
Further into the chapter, a brief history of the various departments that handle India's space programme is provided, along with a timeline of events that serves as background information. Much of the following chapters follow a similar theme, providing extensive overviews of important fields such as remote sensing and navigation. Technical aspects of these satellites and the development of the associated technology in major countries, namely the United States, China and Russia, are provided to give the reader a better understanding of the subject at hand. On the arms race in space, the author notes: "Many countries and organisations fear an arms race in space could pose a serious risk to military-support assets in space, and also endanger critical civilian space infrastructure—including communications, navigation, and research". Although deterrence seems the better choice when it comes to space militarisation, the rapid growth and advancement of space weaponry in key nations makes it a "necessity" for India to follow a similar path. One such development, according to the author, was China's ASAT (anti-satellite weapon) test, which prompted a sudden and swift reaction by the Indian Government towards its space programme. Recently, with its own demonstration of ASAT capabilities, India under the present government might be diverging from its primarily civilian roots towards a weaponised space programme. The last few chapters delve into governance, law and situational awareness of space. According to Dr Gopalaswamy, "Space Situational Awareness can be functionally defined as the protection of space assets through monitoring and prediction of space objects or conditions that can, intentionally or unintentionally, cause damage to the assets". The chapter then dives into India's SSA capabilities and the potential military use of ISRO's indigenous Multi-Object Tracking Radar (MOTR).
After analysing the global code of conduct for outer space and the lack of national space laws, the book concludes by noting the need for self-dependency in the country's space regime in today's world. The author adopts a more neorealist perspective on the dire need for space militarisation or weaponisation as a result of strategic interests and the growing technological advancements of countries such as China and the United States. Although the book gives a detailed insight into changing perspectives on space regimes around the world, it also acts as a factual piece on the history, science and capabilities of specific nations, by way of comparison. Not enough information is provided regarding the development of India's ASAT test, which seems to be a major feat in its militarised space capabilities. Even though it is mentioned that most nations prefer a less weaponised space, the author hints at an inevitable dependency on space-related assets and security due to the growing arms race in this sector (for example, the ASAT test and India's lack of SSA capabilities at the time), which will lead to a less secure and highly volatile space programme. Ultimately, the need to be a leading spacefaring nation can be treated as a highly contested subject, but the book acts as a faithful guide with its factual overviews of the Indian space programme and its development over the years.
https://www.claws.in/book-review-final-frontier-india-and-space-security/
I am a video gamer. Have been since I first learned how to play Super Mario Brothers. Then came Half Life, Half Life: Opposing Force, Age of Empires 1 & 2, and Mechwarrior 3. All games with a story that hooked the player into it. I remember the night I beat Mechwarrior 3. It was a school night; I was a sophomore in high school then. I believe it was a Thursday. Everybody else in the house was asleep, except for me. I had stayed up late to finish homework, and figured I'd reward myself with a quick attempt at beating the level I had last paused on. Mechwarrior is one of those games with a deep, engrossing storyline, from tabletop to computers to the fairly well-written books, with a strong continuity. And yours truly was a junkie for mech-on-mech combat. It should come as no surprise that Mong the Magnificent arrived at this point, proclaiming "it'll only take a couple more levels to beat the game, why not give it a go?" So I did. Mind you, Mong visited sometime around 2100. Near 0300, I finally killed Galaxy Commander Brendon Corbett, after playing the last half of the game! I didn't notice the passage of time because I was busily having fun and enjoying a beautifully crafted story. I got approximately 2 hours of shut-eye, made it through class, and slept like a dead man when I came home. But I felt accomplished. I felt I had done something that mattered, even if only to me. If story matters, then the elements within that story are what sell it. Video games are unique in that they are not simply something we view with our eyes; we hear them with our ears and physically participate in them. Video game stories are more than just "move Character A from position X to position Y." I own the original Halo soundtrack on CD. The music helped make that game. And it could really play on your nerves. I remember hating the Covenant, but I feared the Flood.
Listen to the music and notice, from one level to another, how much worse it is trying to move through a nighttime jungle shrouded in mist while fighting off the parasitic hell creatures of the Flood, compared with simply fending off Covenant boarding parties. It makes a difference both in the atmosphere of the game and in how players respond. Do we as writers take the time to really play on our readers' emotions? Do we set the atmosphere such that they are drawn in? Halo's game designers deliberately added certain factors which shape the emotional state of the player: it's a jungle, everything is dark, you've got ersatz music playing on frayed nerves, and now you're running around in the dark, chased by zombies, your NPC teammates are dropping every time you take contact, and you're lost. I'll say that again: YOU ARE LOST. There is no magic beacon telling you how far you have to go to reach safety. You're simply running as fast as you can and trying not to become a lunchable for zombies. Oh, and you're low on ammo. It's rough on the psyche. As it should be. When we can draw out strong emotional responses, even negative feelings, from our readers and our viewers, we're doing something right. When we can help direct them a certain way, through sound, through character experience and interaction, we're doing something right. We're creating a story that people will recommend to others. If we've set it up for a sequel, we're absolutely going to get readers coming back and buying our wares again. They're mentally, emotionally, and physically invested in what we're doing. Fear shouldn't be the only thing we can create, but it ought to be in our repertoire. Constant joy is only good if your name is Joel Osteen. Fiction readers want the whole emotional roller coaster, from fight to flight and everything in between. Examine the matter for yourself, look at what readers are demanding, and figure out how you're going to fill that need.
Because readers represent buyers, with cash. And brother, exposure doesn’t pay the rent, or put shoes on your feet, but money? Money talks. Just like a well-crafted story talks to the emotional state and soul of the reader.
https://madgeniusclub.com/2020/02/07/rolling-through-emotions/
Every entrepreneur wants their business to be alone on the market, without competition. But that is an unrealistic and probably unachievable situation; no business operates without competitors for long. Competition is something that cannot be avoided, regardless of the size of the business or the industry and market in which it operates. Even if there is no competition currently, that doesn't mean the situation will last forever or that competitors will not appear soon. You should not be afraid of competition, but you must take a strategic and systematic approach to analyzing it. After the analysis, you will need to use that knowledge in your daily business operations and in the management of your business. If you ignore your competition, you will have real problems, which will show up as declining market share, lower sales, fewer customers and, worst of all, cash-flow problems. These problems can destroy your business. The fact is that you cannot allow yourself to ignore such possible problems. Because of that, you will always need to ask yourself, and answer, the following questions about your competition.

1. Who are your main competitors that can have a large influence on your business?

You need to identify your main competitors based on the influence they have, or will have in the future, on your company. Make a list of the 5 to 10 most important companies you have identified as competitors.

2. Where are they in relation to you when you conduct a competitive analysis?

Once you have a list of important competitors, you can start to analyze them to answer this question. Use a competitive analysis; its result will be a simple comparison between your business and your competitors based on the most important elements of doing business.

3. Who are the potential competitors that are not competitors yet but could become so in the future?
You also need to think about future competitors: companies that aren't your competitors yet but could become so. Think about possible substitute products or innovations that may arrive in the future and compete with your current offer.

4. What are their products or services?

Once you have defined your competitors, think about the products and services they offer on the market. Make a list of all the products and services your competitors currently offer to customers, as well as the future products and services they are developing and plan to offer.

5. Where are their products or services in relation to yours?

Because you have a list of your competitors' products and services, answering this question gives you an analysis similar to the one presented in the figure for question two.

6. What is the value that they offer to their customers?

Think about the value your competitors offer their customers. This can include product characteristics, features, quality, benefits, the problems they solve, and so on.

7. Are they delivering more or less value to their customers compared with your offer and deliverables?

Once you have defined their value, you can compare it with the value you offer your own customers.

8. What is something good about their offer that you don't have in your offer?

Value is one thing, but it is also important to analyze and compare their offerings with your own.

9. Can you implement something your competitors have that you currently don't?

From the previous questions, you have already drawn conclusions about the differences between you and your competitors. Now you need to start thinking about possible improvements: what you can implement where your company is behind its competitors.

10. What is their total offer?

As you already know, products and services are part of your offer, but only a part, not the total offer.
Your total offer must include more things that will constitute your superior value.

11. Is it different from yours?

If it differs from your total offer, what are the differences? Do they know something more about the customers in the market?

12. What are the elements where their offer is better than yours?

To answer this question, make a list of the elements of their total offer that you don't currently offer to your customers.

13. What are the elements where your offer is better than theirs?

You will probably have areas where your offer is better than your competitors'. Think about them and relate them to your customers. Are your customers satisfied with, and do they value, these differences?

14. What are the benefits and features that they offer with their products or services?

You can also analyze their products and services, specifically the benefits and features included in them.

15. How do they develop and find their customers?

This question is important for digging into the marketing differences between you and your competitors.

16. Who are their most important customers?

Think about their segmentation and whom they are targeting. Why is that the case?

17. How do your competitors encourage their customers to buy from them?

It is important to know what techniques and approaches your competitors use to encourage their customers to buy from them.

18. How much do they charge for their products or services?

Now it is time to ask and answer the questions related to pricing strategies. The answers to the previous questions give you many attributes that can help you check where you are positioned when it comes to pricing.

19. Where are they advertising their products or services?

It is important to know and follow the advertising strategies and tactics used by your competitors.

20.
How are they recruiting new staff with high skills for their business? This is an important question related to their human resource approach. 21. How often you analyze them? This is not a process that can be done once and never again. You need to continue with these questions. 22. Do you apply the findings of the analysis in the everyday operations of your business? As the last question, you want to check what you have done when you finish answering all these questions. Why you need to lose your time if you don’t take action steps to improve your business and become better from your competitors. Question: Have you ever analyzed your competition? How this type of analysis help your business to become better than competitor’s business?
https://www.entrepreneurshipinabox.com/1883/22-questions-about-competition/
The Lower Elementary is based on the unique needs of children between the ages of six and nine. HPMS offers students the intellectual, social, and spiritual tools they need to flourish not just in school, but in life beyond school. The curriculum is both broad and deep, fully integrating rigorous academic study with practical work in an atmosphere that fosters social, emotional and intellectual development. Children entering the elementary years combine vigorous stamina and curiosity with the excitement that comes with mastering the basics of reading and writing, time management, independent work, and harmonious community life. Children spend their days working both individually and collaboratively on a variety of highly engaging projects in the core subject areas of math, language, cultural studies (geography, history, sciences), and practical life. Our highly skilled teachers customize research activity, field study, and art and music instruction based on a balance of student interest and the core goals of the curriculum. 
Mathematics
- Increasingly capable of abstract thought, students become less dependent on materials as they broaden and deepen their working knowledge of hierarchies, numerical operations, and geometry
- Students extend and apply acquired skills, exploring preliminary concepts of algebra
- Students eagerly observe mathematical properties and functions in the world around them and apply new concepts to studies in other areas, such as science and culture
- Mathematics studies include: operations with whole, decimal and negative numbers; fractions, percentages, squaring, cubing and number bases

Language
- Effective written and verbal expression is emphasized as students learn to make themselves understood and get to know others
- Language studies include: sentence analysis, novel study, spelling and grammar exercises, daily reading and writing workshops, experience with different writing genres, and public speaking
- Additional curriculum highlights include a weekly newsletter published by sixth-year students and a semester-long Independent Research Project (IRP) that teaches research, note-taking, expository writing, and presentation skills

Cultural Studies
- Through rigorous research, students refine their critical-thinking capacities
- Lower Elementary students trace the story of human evolution
- Students compare and contrast world civilizations, and they begin thematic studies of American history
- The interdisciplinary, research-based thrust of the curriculum culminates in the sixth year as students undertake a yearlong research project

Physical and Life Sciences
- Our science program brings students into direct contact with the central work of scientists: our students identify, question, explore, and conclude
- Students take responsibility for cultivating a peaceful, cooperative community
- Activities include: preparing lunch together, planting and maintaining an organic garden, organizing the library, staffing school events, and participating in service projects such as fundraising for the OBX SPCA and the Outer Banks Food Pantry

Additional Curricula
Special lessons in music, physical education, Spanish, health and art are offered.
https://www.heronpondmontessori.com/lower-elementary
Step 1: Create a hole in the top of the cup using the drill and screw.
Step 2: Insert the shaft of the motor into the hole. Fix the body of the motor to the top of the cup using glue.
Step 3: Glue the battery to the top of the cup, beside the motor.
Step 4: Attach a piece of breadboard wire to each of the terminals of the motor. Knot the wire to the connector or secure it with a dot of glue.
Step 5: Attach one of the wires connected to the motor to the positive supply terminal of the battery using a dot of glue.
Step 6: Glue the crocodile clip to the end of the wire connected to the other terminal of the motor.
Step 7: Glue the felt pens around the cup. Make sure they are equally spaced.
Step 8: Place a sheet of plain paper on the worktable.
Step 9: Remove the lids of the markers and place the machine on the paper.
Step 10: Connect the crocodile clip to the negative terminal of the battery and observe what happens.
Step 11: To stop the machine, disconnect the crocodile clip from the battery.

- The motor does not have to be purchased; it could be salvaged from an old toy, as long as its specifications are similar to those suggested in the ‘Materials Required’ section.
- An adult should operate the drill.
- Help keep our environment clean by reducing waste, reusing materials, and recycling whenever possible!
- When drilling into the cup, excessive pressure may cause the cup to crack. Apply gentle pressure.
- The hot glue gun should be used with the help of an adult and under strict supervision, since the hot glue is a burn risk. Younger children should not be allowed to operate the device.
- Use adequate protection to prevent the work surface from being damaged by the hot glue or the colours.
- Make sure the markers are aligned so that the machine can operate properly.

Imagine you’re sitting all comfy on a sofa. But then your friend comes along and asks if you want to go for a hike. It’s a lovely day, but you just don’t want to move – your chill time is just too appealing.
Your friend, however, is very insistent, manages to convince you, and off you go. It’s a hot day and your hike is full of hills. It takes a lot of effort to put one foot in front of the other, but you’ve got a good pace going, and now that you’re moving, you don’t really want to stop. You can imagine that, right? Well, then you already understand a bit about how robots work. You sitting comfy at home is like a robot at a standstill, what scientists call at rest. Your reluctance to move represents the inertia of rest of the robot. Your friend represents the force causing the robot to start moving by overcoming the frictional forces keeping the robot at rest. While you’re on your hike, the obstacles trying to stop you represent the frictional forces which try to stop the robot from continuing to move. Your desire to keep up your pace is like the robot’s inertia of motion, which makes it reluctant to stop moving once it starts.

Why does the cup rotate? The motor shaft rotates, causing the cup to rotate.
Why does the cup move with a jerky motion? The markers are not perfectly aligned. Also, the motor has to overcome friction and the weight of the machine.
Why does the cup stop moving? The motor was disconnected from the battery.
Why can the motor make the cup move? The motor produces enough (kinetic) energy to move the cup.
What forces act on the cup? Torque from the motor and friction.

The motor transforms chemical energy within the battery to kinetic energy in the axle. The axle uses this energy to rotate. The power provided by the motor is sufficient to cause the cup to rotate with the axle. The motor must overcome the machine’s inertia and frictional forces in order to move the cup. Any object at rest (not moving) has some reluctance to begin moving, and an object which is already moving is reluctant to stop moving. This reluctance is known as the object’s inertia.
The motor must also overcome the friction between the pen nibs and the paper, which opposes motion. Before the object begins moving, this force is known as static friction. Static friction occurs due to the ‘roughness’ of the surfaces. Once the machine begins moving, the forces change to dynamic (kinetic) frictional forces. These frictional forces are due to irregularities in both of the touching surfaces: when the surfaces are pressed together, these irregularities push against each other and resist the motion of either object. While the axle only rotates, the cup both translates and rotates, so the system has both kinetic energy of translation and kinetic energy of rotation.

Kinetic energy of rotation: KE_rot = ½ I ω², where I is the moment of inertia of the object and ω is its angular velocity.

Kinetic energy of translation: KE_trans = ½ m v², where m is the mass of the object and v is its translational velocity.

Heavier objects have greater inertias of rest and motion. An object’s moment of inertia represents its reluctance to rotational motion, measured in kg·m². It depends heavily on the distribution of the object’s mass about its axis and is calculated based on the shape of the object. The frictional force between two stationary, touching surfaces is called static friction; it prevents motion until the forces promoting motion surpass a threshold determined by the coefficient of static friction. Dynamic friction produces a force which opposes the relative motion of two objects sliding against each other; its magnitude depends on the coefficient of kinetic friction associated with the two surfaces.

Applications
Motors similar to the one used in this experiment are widely used in motorized toys bought from shops, as well as in those made by hobbyists. These include toy cars, planes, robots and miniature generators. They can be powered via a battery, a solar panel or a wind turbine.
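As a rough numerical illustration of the two kinetic-energy formulas above, the short Python snippet below computes both energies for the cup machine. All the numbers here are invented for illustration (they are not measurements from the experiment), and the cup is crudely modelled as a thin cylindrical shell, for which I = m r².

```python
# Worked example of the two kinetic-energy formulas.
# All values are assumed, purely for illustration.
m = 0.05       # mass of the machine in kg (assumed)
r = 0.035      # cup radius in m (assumed)
v = 0.10       # translational speed in m/s (assumed)
omega = 5.0    # angular velocity in rad/s (assumed)

# Approximate the cup as a thin cylindrical shell about its axis: I = m * r^2
I = m * r ** 2

ke_rot = 0.5 * I * omega ** 2    # kinetic energy of rotation, in joules
ke_trans = 0.5 * m * v ** 2      # kinetic energy of translation, in joules

print(f"Rotational KE:    {ke_rot:.6f} J")
print(f"Translational KE: {ke_trans:.6f} J")
```

Even with these toy numbers, both energies come out well under a millijoule, which is why a small hobby motor is more than powerful enough to drive the machine.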
Larger-scale DC motors are used in some electric cars.

Research
There has been a surge in research into the DC motors used in electric and hybrid cars. One area of interest has been modular motors, which can be scaled by adding modules to vary the power of the motor according to the vehicle it is used in, be it a small car or a truck. This involves designing new converters and power-management schemes.

- Change the battery to a 1.5 V battery. The machine will be observed to move very slowly or not at all, because the voltage and current capability of a 1.5 V battery are not adequate to power the motor.
- Add extra markers to the cup. Observe how each additional marker affects operation.
- Swap the battery connections and observe how the direction of axle rotation is affected.
https://steamexperiments.com/experiment/motorized-drawing/
Few question the societal benefit of providing wider access to research that could help citizens make informed decisions about their health care or enable a small business to innovate and help fuel the economy, but an equally compelling case can be made for providing access to political or policy research that improves the effectiveness of an NGO working in the Democratic Republic of the Congo or literary criticism on the works of Alice Munro or Edwidge Danticat that inspires students at an under-resourced high school. Restricting access to research benefits no one and runs counter to the stated mission of educational and not-for-profit institutions. While the Open Access Network challenges the traditional focus, work processes, and financial operations of our institutions, it also enables them to more fully achieve their stated mission. To facilitate the establishment of the Open Access Network, we have created a membership program that any individual, institution, library, scholarly society, foundation, organization, or company can join. By becoming a member, you demonstrate your support for an OA model that is scalable, sustainable, and fair. Membership funds will go to support the organizational and administrative infrastructure for the OAN and the pilot projects we will use to test its model. This seed funding will allow us to put into place the structures to enable us to proceed with the post-launch phase of the OAN. Individual donations may be made online (either one-time or ongoing) or by check, payable to the Open Access Network, sent c/o K|N Consultants, Cathedral Station, P. O. Box 428, New York NY 10025. In either case, you will become a member of the OAN and receive a letter confirming receipt of your contribution that includes our federal non-profit tax ID number. Reach out to your colleagues and campus administrators. Help them better understand the issues. Urge them to join the OAN. 
And if they have questions, concerns, or need more information, you can share our white paper or have them contact us. There is no shortage of work to be done, from education to outreach to fundraising. Contact us if you would like to get involved. Raise these issues on social media and follow our progress on Twitter. Read your author agreements. Retain your rights. Publish in a reputable open-access journal or with an open-access-friendly publisher. Ensure your work is communicable — and regularly communicated to others. Participate regularly in the scholarly conversation, both face to face and via social media.
http://openaccessnetwork.org/take-action/
$6m EPZ Project: Peace Deal Breaks Down As Ugborodo Leaders Reject Compromise Committee

BEVERLY HILLS, CA, February 02, (THEWILL) - Indications emerged at the weekend that the peace deal between the Ugborodo community in Warri South West Local Government Area of Delta State and the State Government over the $6 million Export Processing Zone (EPZ) project expected to be sited in Ogidigben may have broken down. This followed the rejection on Saturday by Ugborodo leaders and elders of the compromise committee proposed by the Delta State Government to the community. Arising from the enlarged stakeholders' meeting at the Ugborodo town hall, the community leaders and elders disowned the Comrade Ovouzorie Macaulay-led committee set up by the Delta State Government to reconcile the factions in the Ugborodo crisis. The State Government had mandated the committee to ensure that the Chief Thomas Ereyitomi and Hon. David Tonwe factions each nominate 10 persons who, together with two persons from the government side, would constitute the committee. But the community leaders and elders said at the weekend that both factions, which had been laying claim to the Ugborodo Community Trust, have since been disbanded. They said there are no more factions in the community, since Ereyitomi and Tonwe, who had been at daggers drawn, have been asked to step aside to ensure that peace reigns in the community. In a unanimous agreement, the community said they have agreed to constitute a 21-member committee to interface between the people and the Federal Government, and would resist any committee supervised by the Delta State Government. The community leaders and elders said they would resist any interference by Governor Emmanuel Uduaghan in the Ugborodo community's affairs, particularly the EPZ project expected to be sited in Ogidigben.
While expressing gratitude to the Federal Government for the project, the elders and leaders said they have resolved to take over the affairs of the community and to constitute a committee to liaise with the state and Federal Government. They said the committee would be constituted later, after a consultative meeting of the leaders and elders of the community. Present at the meeting were the spiritual head of Ugborodo (the Olaja-Orori), Mr. Benson Dube Omadeli; the Eghare-Aja (traditional prime minister of Ugborodo), Pa Wellington Ojogho; Pa Anderson Ebiecutan; J. O. Ayomike; M. E. Okoro; A. L. Eyeoyibo; Dr Lucky Akaruese; Wilson Akperi; Boyo Solomon; Moses Ejijala; Joseph Uwawah; and Michael Ruwenor, among others. "Our meeting today has nothing to do with factions. There is no faction in Ugborodo. We are the community leaders and we are saying we are the ones with the right to constitute a committee to interface with the federal and state government. "There must be no imposition of persons. We are not here to project any faction, we are projecting Ugborodo's interest, nothing more," one of the leaders, Ejijala, was quoted as saying at the meeting. When contacted on the new development, the Itsekiri youth president, David Tonwe, said he would abide by any decision reached by the community elders and leaders, since the community is bigger than any individual. In the same vein, the Ugborodo youth president, Michael Logde, said the youths would abide by any decision reached by the leaders and elders on the committee to be set up to dialogue with the Federal and State Government over the EPZ project. A faction led by Ereyitomi had recently raised the alarm that Tonwe's faction was trying to scuttle the agreement reached with the Delta State Government for the two factions to nominate 10 persons each to the committee.
The Ereyitomi group had alleged that Tonwe's group was planning to constitute the entire committee itself, against the agreement earlier reached by both factions to sheathe their swords and allow each faction to nominate persons to the committee.
https://www.thenigerianvoice.com/news/136711/6m-epz-project-peace-deal-breaks-down-as-ugborodo-leaders.html
Population growth in the world's major cities is creating a need for a "dramatic change" in the way urban centres are planned, according to University of Melbourne Professor Marimuthu Palaniswami. Speaking at CeBIT's Internet of Things conference - for which IoT Hub is a media partner - Palaniswami, who is also the director of the ARC Research Network on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), said city authorities had reason to investigate ways to make infrastructure smarter. "We’re trying to use the Internet of Things to change the way we conduct business in cities,” Palaniswami said. Palaniswami defined a smart city as “ICT connected to smart infrastructure, from which you are able to monitor the [city’s] environment.” He likened it to “skin around the body”, where the systems can “feel the temperature, or trigger actions.” He added that, like skin, connected infrastructure in itself doesn’t make a city “intelligent”: you need to process the inputs to take meaningful actions. “As a result of technology, you collect data and that data is processed in the cloud, perform analytics, then create actionable knowledge you can use and feed back into actuators to enact change,” Palaniswami explained. “The smart city should 'think', include people, and take actions so it is productive.” He said that IoT enables businesses and citizens of a smart city to engage with local governments, use the data made available to them via open data platforms, and provide input back into these platforms, thereby creating a loop of information and feedback that benefits all parties. “The most important thing in a smart city environment is that citizens feel involved in the city,” he said.
“When [a city] takes actions based on input from its citizens, they feel like they are a part of it.” Palaniswami also said that the various systems that monitor and manage different aspects of a city’s operations need to be integrated to provide a complete picture. He envisaged components such as environmental monitoring, infrastructure systems (transport and utilities), emergency and disaster management, community health and crowdsourced sensor data coming together to create a “common operating procedure” and a “system of systems”. “Currently, there are many [IoT] platforms that exist [to perform different functions], and what happens if they’re not integrated is you create [data] silos,” he explained. “If you have a transport monitoring system [for example] which receives data from a pollution monitoring platform and integrate it with the transport schedules for buses, taxis and trains, you can create decision-making opportunities to optimise travel conditions based on environmental inputs.” He added that the power of cloud computing and cloud-hosted applications and data enables “assimilators” to be created, which combine data and platforms to create integrated systems capable of making automatic decisions and allowing citizens to take their own actions based on the actionable insight provided. Palaniswami cited IoT-enabled initiatives in progress in Melbourne as early examples of what could be achieved. IoT Hub recently revealed the city's future smart city ambitions. He highlighted the use of IP cameras to monitor foot traffic and crowd management at the Melbourne Cricket Ground, and the temperature monitoring of the “green areas” around the city. He said that those areas experienced a drop of up to two degrees Celsius compared to areas without vegetation, which could potentially lead to energy savings for buildings and their air conditioning systems.
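The transport-plus-pollution integration Palaniswami describes can be sketched in a few lines of code. Everything here is hypothetical: the route names, readings, weights and the `rank_routes` helper are invented for illustration, and a real deployment would pull live data from the respective monitoring platforms rather than use hard-coded dictionaries.

```python
# Minimal sketch of a "system of systems": combining a pollution feed
# with transport schedules to rank travel options. All data is invented.

pollution = {"route_a": 42, "route_b": 18}   # air-quality index per route
schedules = {"route_a": 5, "route_b": 12}    # minutes until next departure

def rank_routes(pollution, schedules, aqi_weight=1.0, wait_weight=2.0):
    """Score each route by combining air quality and waiting time;
    lower scores are better, so the best route comes first."""
    scores = {}
    for route in pollution:
        scores[route] = aqi_weight * pollution[route] + wait_weight * schedules[route]
    return sorted(scores, key=scores.get)

print(rank_routes(pollution, schedules))
```

The point of the sketch is the design choice, not the arithmetic: once the two data silos share a common key (the route), a simple weighted score is enough to turn environmental inputs into a travel decision, which is the "actionable knowledge" loop described above.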
https://www.iothub.com.au/news/smart-city-talk-driven-by-population-growth-419052
We are committed to the protection and conservation of New Zealand’s heritage including minimising the impacts of our transport system activities on heritage places located within or adjacent to our transport network. Our land transport system follows many routes of early trails and roads, so the record of early exploration and settlement is frequently found whenever transport works are carried out. Heritage includes natural and physical resources that contribute to an understanding and appreciation of New Zealand’s history and cultures, derived from the associated archaeological, architectural, cultural, historic, scientific or technical qualities they possess. Such resources include: We are bound by legislation, including the Resource Management Act 1991 and Heritage New Zealand Pouhere Taonga Act 2014, and by agreements we have made with Heritage NZ Pouhere Taonga and through partnerships with iwi. Regional policy statements and local authority district plans require the protection of heritage. Resource consents and archaeological authorities need to be obtained for any works that could have an adverse effect on heritage and archaeological values. We recognise the connection between heritage and community wellbeing. Heritage places can have spiritual associations and cultural and social value, and can be important for identity, belonging and social interaction. For example, for Māori, heritage is an essential component of the history, traditions, culture and identity of whānau, hapū and iwi. Heritage can make places more liveable, contribute to sense of place and can have economic benefits. We follow best practice guidance on conserving heritage and have developed our guidelines and tools to assess and manage heritage and archaeology in our land transport activities. Our policy on heritage can be found in: In addition to the requirements specific to each technical area, all land transport infrastructure activities must fulfil the requirements of:
https://www.nzta.govt.nz/roads-and-rail/highways-information-portal/technical-disciplines/environment-and-sustainability-in-our-operations/environmental-technical-areas/heritage/
Another in a series of posts of "Stories about stories" -- segments from "This American Life" and other sources that are about storytelling itself.

"RETRACTION": The entire last episode of This American Life is a discussion of why the show had to retract an earlier episode, which was of theater artist Mike Daisey's "reporting" on Apple Computer's manufacturing practices in China. It turns out that Mike Daisey made up a bunch of the material to make it more dramatic. He half defends himself by saying this was a theater piece, not journalism, but admits that he should never have passed it off to This American Life as fact. And in an exchange between Daisey and TAL host Ira Glass, there are some of the most uncomfortable moments -- and longest silences -- I've heard on radio! It's not only compelling listening, but a further demonstration of This American Life's integrity. A great story about storytelling, fact vs. fiction, journalistic responsibility, and the like.

Posted by Paul VanDeCarr at 8:00 PM

Sunday, March 18, 2012
WEEKEND ROUND-UP: JAPANESE ART, ORAL HISTORY, AND NARRATIVE THEOLOGY

STORYTELLING IN JAPANESE ART: The Metropolitan Museum has up an impressive show on the topic through May 6. And, handily, they have some great images and explanatory text on their website, so click the link above. The stories are as varied as the formats -- myths, war stories, tales of adventure and romance, told through text and illustrations on playing cards, folding panels, and, most interestingly, long scrolls. The scrolls are especially evocative of time; sure, there's the sense of time passing in a bound book as you turn the pages, but as you read the scrolls from one end to the other there's a sense of the flow of time, of events literally rolling out before you. These are, of course, metaphors for how real-life events happen -- they unfold or roll out, or you turn the page.
Perhaps as our reading formats change yet again, we'll soon be saying "swipe the screen" to connote the same thing. But I digress; check out the Met's page on the show!

NARRATIVE THEOLOGY AT MARS HILL BIBLE CHURCH: Mars Hill Bible Church, in Grand Rapids, Michigan, founded by (now) best-selling author Rob Bell, has what it calls a "narrative theology." All faiths and denominations and houses of worship tell stories, and "narrative theology" has been around for decades (e.g. Hans Frei). Still, I haven't seen a church that so clearly states that storytelling is its very mode of theology: "The word theology comes from two Greek words: 'theos', meaning 'God', and 'logos', meaning 'word'. So theology is words about God. When we put to words what we believe about God, we discover that God has been writing a story of hope and redemption for all the world. This story is a movement from creation to new creation, and God has given us a role to play in that story, in the restoration of our relationships with God, each other, ourselves, and creation. Since story is central to our belief about God, our words about God—our theology—exists in the form of a narrative." The church's website also has a space for members' personal stories of faith, and stories of good works they want to highlight.
http://www.insidestoriesonline.com/2012/03/
Six years into the project and with another four left until trains hit the capital, Crossrail surface works have today reached the halfway mark. Although these works take only a £2.3bn slice of the overall £15bn Crossrail funding, three-quarters of the project’s 100km-plus route will run above ground on the existing rail network through outer London, Berkshire and Essex. Surface works have therefore been vital to add more than 30 miles of new track to increase capacity on the network, as well as to deliver a new flyover in Hillingdon and a new dive-under tunnel beneath the railway at Acton. [Image: Construction of the new dive-under at Acton] They also include improvement works at 29 stations, such as extending platforms in 16 locations to accommodate the longer trains and, in nine cases, completely rebuilding the site to provide larger and brighter ticket halls. Network Rail’s so-called ‘orange army’ will also install modernised signalling systems to increase service reliability and construct new sidings to stable Crossrail trains overnight. But a vital element of the surface works is also the electrification of the Great Western Main Line (GWML) in west London and Berkshire which, as widely reported, is not on schedule in terms of its timescale or budget. Yet some GWML works have already been carried out, including the demolition and replacement of six bridges over the railway in Slough and South Bucks. It has also been less than a year since Network Rail said, in December 2014, that surface works were one-third complete. Take a look at what has been completed since then in West London and Berkshire, and in East London and Essex. [Image: New tracks in Abbey Wood]
https://www.railtechnologymagazine.com/Rail-News/halfway-there-for-crossrail-surface-works
Zara Cully - a female celebrity - was born on Tuesday, January 26th, 1892, in Worcester, Massachusetts, USA. With her omnipresent energy and intuition, Zara Cully has the potential to be a source of inspiration and illumination for people. With no conscious effort, she galvanizes every situation she enters, and energy seems to flow through her without her being aware of this great potential or controlling it. When she is aware of her outstanding personality, Zara tries to blend with her environment, feeling conspicuous, alien, and out of place. Zara channels information between the higher and the lower, between the realm of the archetype and the relative world. Ideas, thoughts, understanding, and insight can all come to her without having to go through a rational thought process, as if there is a bridge between her conscious and unconscious realms, attuning her to a high level of intuition through which even psychic information can flow. On the other hand, there is so much going on in her psyche that Cully is often misunderstood early in life, making her shy and withdrawn. Zara Cully shares her great capacity for invention with many inventors, artists, religious leaders, prophets, and leading figures in history. But because she is so highly charged, she experiences constant conflict between her great abilities and indulgence in self-reflection and self-criticism, leaving her highly self-conscious. Although blessed with a message or a specific role to play in life, Zara Cully must develop herself sufficiently to take full advantage of that opportunity. Until that time, her inner development takes precedence over her ability to materialize the great undertaking she was chosen to perform. Consequently, Zara seems to develop slowly, but she simply has more to accomplish in her evolution than the average person. Thus, Cully's real success does not usually begin until maturity, between the ages of 35 and 45, when she has progressed further along her path.
When she has found her niche in life and begun to realize her true potential, Zara's rewards will more than compensate for her trials earlier in life. Zara Cully tends to be quite adaptable, and she finds it easy to fit into most social settings and vocational fields. Learning to be wisely assertive is a major lesson to be taken up by Zara Cully throughout her life.
https://www.celebrities-galore.com/celebrities/zara-cully/home/
In the recovery process, the role of the family is essential for a loved one who is struggling with addiction. The family is crucial when it comes to support: it can help the loved one in recovery pursue treatment goals, promote a recovery lifestyle, and recognize and identify the warning signs of relapse. However, family involvement in the recovery process can be delicate and difficult. Family members often do not know how to approach a loved one who is struggling with substance abuse about treatment. Not only are family members reluctant to talk about addiction and therapy, they may also push the issues aside or deny there are addiction issues at all. The family of the addicted loved one may be enabling the behavior to continue, whether explicitly or on a more subtle level. While the family may have legitimate concerns about approaching their loved one about treatment, and about the denial and confrontation that would arise, they also need to realize that those who seek treatment often do so because of the positive, gentle and supportive nature of the family as a whole. Families, much like the person struggling with substance abuse, undergo stress on multiple levels, whether physical, emotional, social or spiritual. Just as the recovering individual needs support, families also need support in order to minimize stress and enable the recovery process to move forward. It is important to note that family can take on numerous forms and can encompass anyone who is supportive of the person's recovery: the immediate family, extended family (relatives, cousins, etc.), friends, colleagues or other supportive people. Removing the addicted person from the previous toxic environment can give both the patient and the patient's family the necessary space to work through the issues, concerns, and maladaptive patterns of behavior that led to and compounded the addiction.
Also, both the patient and the patient’s family can attend meetings such as Al-Anon or Nar-Anon, which are free programs that provide group support. Ultimately, drug and alcohol addiction needs to be looked at as a family issue, and the family as a whole needs to be given knowledge, acknowledgment, and support. Addiction is more than a disease that affects individuals; it affects the entire family. The stresses that families feel in dealing with a loved one who is struggling with substance abuse can reach a breaking point and can throw the entire family structure into chaos. Addiction impacts the stability of the home, the family’s mental and physical health, and its finances, and affects how family members interact with each other as a whole. It is important that when a loved one undergoes drug treatment, the whole family becomes involved in the treatment process. The following are five reasons why family programs help addicts and their families heal and move forward in recovery. Addiction is a complex disease with deep roots, many facets, and many causes. These causes include genetics, social and environmental factors, and family dynamics, as well as a family history of substance use and abuse. Through family programs, experienced counselors can bring the addict and their family together to see how these factors have impacted their lives. The addict should understand the underlying roots of their addiction and how their actions have impacted not only their own life but also the lives of their family. The family of the addict will come to understand how their patterns of communication and interaction with their loved one contributed to the addiction. In family programs, the family is seen as a system with each part related to all other parts. 
Through counseling and therapy, both the addict and family can slowly rebuild the dysfunctional parts of the system and build a new system of interaction and communication that supports healthy relationships and recovery. With family programming, both the addict and family learn they are not alone in overcoming addiction. During drug treatment, the addict is encouraged to seek the support that is found in twelve-step groups such as AA and NA. These mutual self-help groups are made up of peers who are going through similar experiences, and through their support and encouragement, the addict can become empowered in their own recovery. The family is also encouraged to seek out the support of groups such as Al-Anon, Alateen, Nar-Anon, and Learn to Cope. These groups are for family members of addicts who share their knowledge, strength, and hope in order to help one another understand and overcome the disease of addiction. Ultimately, both the addict and family learn that the family unit itself is the best source of support. By understanding the disease of addiction and receiving support through these self-help groups, both the addict and the family become empowered to make the necessary changes to move toward healthy relationships and communication that fosters long-term recovery for everybody. Enabling behaviors can include giving money to the addict, paying their rent, and making excuses for their behavior. During family programming, counselors will bring awareness to these behaviors and will allow families to acknowledge how they helped perpetuate their loved one’s addiction. In addition to focusing on enabling behaviors, family programming will also focus on other behaviors that can create roadblocks in recovery. These behaviors include blaming, rationalizing, minimizing the addiction, and denial. These methods of coping with addiction are ineffective and also contribute to a loved one’s drug and alcohol problems. 
Ultimately, counselors in family programs will help the addict and their family recognize their roles in a loved one's addiction and help them build healthier coping and communication skills that will strengthen the overall family unit. Families that experience drug and alcohol addiction go through a roller coaster of emotions that significantly impacts the physical and mental well-being of each family member. Each family member may feel as though they are powerless in dealing with a loved one's addiction, and the family unit as a whole can suffer. A major goal of family counseling is for family members to give voice to what they are experiencing; once those feelings are out in the open, counselors can help families find the strength to overcome those feelings of powerlessness and find empowerment. In family counseling, each family member can be open and honest and feel they have an important role in recovery. The family learns to support each other in their own recovery, and in turn this provides motivation for the addict to find the strength to break the cycle of addiction. The family learns to detach with love from a loved one's addiction and to allow the addict to assume responsibility for their own recovery. In turn, the family works on assuming responsibility for their own behaviors. It is often said that recovery is a journey and an ongoing process. There is no clear-cut timetable for the addiction therapy process, and the possibility of relapse is as real for family members as it is for the addict. It can take months and even years for addicts and families to regain their physical and mental health and to repair their relationships. In order to achieve the best long-term outcomes, both addicts and their families need to continue pursuing the resources that family programs can offer after formal treatment has concluded. 
Counselors in family programs will encourage the addict to pursue aftercare options such as sober living homes and continued participation in peer support groups. For families of addicts, counselors will encourage continued attendance at Al-Anon or Nar-Anon meetings in order to get the ongoing education and emotional support they need to progress in their recovery. No matter what options the addict and the family pursue, family programs are available to help guide them through whatever challenges may lie ahead on the road to recovery. If you or a loved one is struggling with alcohol and drug addiction, contact us at the Palm Beach Institute by calling 855-534-3574 today. We can help you achieve sobriety.
https://www.pbinstitute.com/blog/role-family-recovery-process/
In March 2007 at Culture Lab Newcastle I was parachuted into a motion capture session with Gretchen Schiller and Peter Wiegold. Negotiating a role within the supposed hierarchies of conductor, musician, dancer was a liberating experience, in which I discovered the joys of conducting as the art of dancing and making music at the same time. The AHESSC report states that: ‘From the obvious starting point which consisted of “conducting” dance, the session proceeded to explore ways of avoiding reciprocal mirroring and simplistic mappings in responsive, gestural dialogue-type situations. Musician John Ferguson, specialised in improvisatory techniques, joined the session and played guitar to add a further layer to the interaction. Relations between the musician, conductor and dancer rapidly acquired exciting fluidity and complexity, with shifts in steer of improvisation architecture (music ranging from accompaniment to dramaturgical interaction with the other two performers).’ ‘Viewing the motion capture data after the session had a disinhibiting effect and the relatively abstract trajectories reflected - and encouraged - types and ranges of gesture that would probably not have been inspired by realistic film footage. This session raised the question of possibly using capture data as a score for improvising artists, which would in turn place a new set of constraints on indexing. This led to concrete discussion about the design of data annotation and retrieval processes which might accommodate developments of this kind.’ On 4th and 5th October 2008, Culture Lab hosted ‘Performance Technologies: Interaction & Improvisation’, a workshop involving several Culture Lab residents and a group of performing artists from Brunel University. 
The workshop, led by Sally Jane Norman (Culture Lab), Bennett Hogg (Newcastle) and Gretchen Schiller (Brunel), was designed to explore methods and models of improvisation across disciplines and how these might work together to create interesting tensions between performing artists. The performers from Brunel, who are all currently undertaking an MA course in Digital Performance, had styles ranging from ballet to breakdancing. Newcastle University music students and Culture Lab residents Paul Bell, John Ferguson, Adam Parkinson, Will Schrimshaw and Nick Williams combined a range of musical improvisation techniques with the physical improvisation of the dancers. These exchanges were captured using Culture Lab’s motion capture system, and the physical trajectories of both musicians and movers were reintroduced into the arena as an aspect of the performance. The event was an excellent opportunity to cross disciplinary and institutional barriers, and the ongoing debates and discussions which arose from the event have been solidified into a number of emerging proposals for collaborative projects. Images courtesy of Will Schrimshaw.
http://www.fergalstrandy.co.uk/?page_id=28
To recognize the poet, actress, author, singer, teacher and civil rights activist who has inspired people worldwide and taught generations of Wake Forest students, the University will establish the Maya Angelou Artist-in-Residence Award. The new award will honor world-renowned artists who reflect Maya Angelou’s passions for creating, performing and teaching. The award will celebrate exceptional artists for combining achievement in the arts and a commitment to improving the human condition in the spirit of the University’s motto, Pro Humanitate. The award winners will visit Wake Forest to educate and engage students, as well as collaborate with faculty. “The significance and beauty of this award is that it honors the life and life's work of my Mother while inspiring artists who have demonstrated a powerful commitment to uplifting humanity through exercising virtues she lived by: courage, creativity, hope, tolerance and social activism,” said Guy Johnson, Dr. Angelou's son. “To honor her legacy, we must look upon ourselves and ask – ‘What are we doing to improve the human condition?’” Johnson said. Additional details about the award and the nomination process will be announced in the fall of 2021. The first Wake Forest Artist-in-Residence Award will be made in the spring of 2022. “Dr. Angelou taught students – as she taught her readers worldwide – that artistic expression is at the heart of human courage, renewal and liberation. Artists honored with this award bearing her name will reflect that commitment, in their work as in their lives,” said Rogan Kersh, Wake Forest Provost. “Their engagement with students and other community members will further affirm the ‘wonder-working power’ of the arts, to quote a favored Dr. Angelou phrase, on our campus,” Kersh added. 
Honoring a Legacy
A generous gift from a Wake Forest alumnus will provide funding to launch the award. 
Angelou first came to Wake Forest in 1973 for a speaking engagement, starting what would become a long relationship with the University. Wake Forest awarded Angelou an honorary degree in 1977. She was named the University’s first Reynolds Professor of American Studies in 1982 and continued teaching at Wake Forest until her death in 2014. Angelou would have celebrated her 93rd birthday on April 4. Over the past four months, a committee has worked to create the framework for the award. Wake Forest junior Adarian Sneed has represented the student perspective for that group. “Her legacy on this campus gives inspiration for the present and hope for the future,” she said. “She understood and could empathize with people from all walks of life. Furthermore, her life’s work could touch the hearts of so many different people. I want the winner of this award to hold true to Maya Angelou’s light,” said Sneed, who is a theater and business student and was in 7th grade when her grandmother encouraged her to read Maya Angelou’s I Know Why the Caged Bird Sings. As the personification of Angelou’s philanthropic legacy, the Dr. Maya Angelou Foundation has collaborated with Wake Forest in support of this meaningful tribute and most prestigious honor for artist awardees. Guided by Dr. Angelou’s vision of “…sharing in the glory of a good life, lived joyously,” the foundation’s mission is to activate positive change and inspire the next generation of critical thinkers, creative writers and courageous leaders through innovative programs rooted in greater access to education, equality, and justice for all. Maya Angelou: Artist and Teacher In a 2008 interview, Angelou talked about her love of teaching: “I’m not a writer who teaches. I’m a teacher who writes. 
But I had to work at Wake Forest to know that.” Over the years, she taught a variety of humanities courses, including “World Poetry in Dramatic Performance,” “Race, Politics and Literature,” “African Culture and its Impact on the U.S.,” “Race in the Southern Experience” and “Shakespeare and the Human Condition.” Angelou had immense creative energy for teaching and artistic endeavors. In 1985, she directed an innovative production of “Macbeth” for the Wake Forest University Theatre. President Bill Clinton invited Angelou to present a poem at his first inauguration in 1993; her poem “On the Pulse of Morning” was later set to music by Wake Forest’s composer-in-residence Dan Locklair. Angelou narrated the premiere performance in Wait Chapel. Angelou is the author of more than 30 books of fiction and poetry, including her powerful autobiographical account of her early life in Stamps, Arkansas, I Know Why the Caged Bird Sings (nominated for a National Book Award), and six other autobiographical books. This year, a group of Wake Forest alumni hosted a virtual celebration honoring 50 years of Caged Bird’s influence in American culture. Her volume of poetry, Just Give me a Cool Drink of Water ‘Fore I Die (1971), was nominated for a Pulitzer Prize. In 2012, Business Insider magazine named Angelou to its top 10 list of the most famous college professors. In recent years, Angelou donated movie scripts, drafts of plays and other materials related to her work in film, television and theater to Wake Forest’s Z. Smith Reynolds Library. Many of the materials are handwritten on legal pads or in notebooks with handwritten margin notes or corrections. Highlights include the movie scripts for “Georgia, Georgia” (1972) and “Down in the Delta” (1998). The University named a residence hall in Angelou’s honor that was dedicated in 2017.
https://news.wfu.edu/2021/04/14/wake-forest-to-establish-maya-angelou-artist-in-residence-award/
The faculty who will guide you through the curriculum at Goucher are not just professors, and they're not just at Goucher. They are distinguished leaders in their fields. They bring a depth of practical experience that is invaluable to students. Amy Skillman Academic Director Phone: 410-337-6415 Email: [email protected] Amy Skillman is a folklorist whose work occurs at the intersection of culture and tension, where paying attention to culture can serve to mediate social change and foster cultural equity. She advises artists and community-based organizations on the implementation of programs that honor and conserve their cultural traditions, guides them to potential resources, and works with them to build their capacity to sustain these initiatives. For over 20 years, her work has integrated personal experience narratives of immigrant and refugee women into leadership empowerment initiatives. Working in collaboration with the PA Immigrant and Refugee Women’s Network, she has co-produced an exhibition called Our Voices, a theater piece about Coming to America in the 21st Century, and a reader’s theater called Magnificent Healing, which explores various cultural collisions with our healthcare system. She is currently working with the Susquehanna Folk Music Society to document traditional artists in Central Pennsylvania and create public programming that draws attention to and honors the breadth and depth of their work. Other work includes a Grammy-nominated recording of Old Time fiddlers in Missouri, a yearlong arts residency with alternative education high school students rooted in the ethnography of their lives, and a traveling exhibition called Making It Better, about the role of folk arts as a catalyst for activism in communities throughout Pennsylvania. She has been teaching in the MACS program since 2011 and became Director in 2012. She is a Fellow of the American Folklore Society and a recipient of the society’s Benjamin A. 
Botkin award for significant lifetime achievement in public folklore. M.A. in Folklore and Folklife, University of California-Los Angeles B.A. in Cultural Minorities and the Immigrant Experience, St. Lawrence University Michelle Banks Michelle Banks is a cultural worker from Washington, DC, and Lead Curator for the 2023 Living Religion program of the Smithsonian Folklife Festival. A 2012 graduate of the M.A.C.S. program, she completed her PhD in Sustainability Education at Prescott College where she is also an associate faculty member. A transient resident of San Cristóbal Verapaz, Guatemala, Michelle's dissertation research explores the nexus of violence and placemaking in post-conflict Guatemala. Her research disciplines include historical memory, biocultural diversity, critical place inquiry, and epistemicide. Ph.D. in Sustainability Education, Prescott College M.A. in Cultural Sustainability, Goucher College B.A. in Cultural Studies, Union Institute & University (Vermont College) Robert Baron Robert Baron has directed the Folk Arts Program, Music Program and Museum Program of the New York State Council on the Arts (NYSCA), served as Folklore Administrator of the National Endowment for the Humanities and was a museum educator at The Brooklyn Museum. Baron has been a Fulbright Senior Specialist in Finland, the Philippines and Slovenia, a Smithsonian Museum Practice Fellow, and Non-Resident Fellow of the W.E.B. Du Bois Institute for African-American Research at Harvard University. He is a Fellow of the American Folklore Society (AFS), which he serves as Secretary, and received the AFS’s Benjamin A. Botkin award for significant lifetime achievement in public folklore. Baron is the Secretary of the Steering Committee of the ICH NGO Forum, which provides advisory services in the framework of the 2003 UNESCO Convention for safeguarding intangible cultural heritage. 
He has carried out field research in the Caribbean, US and Japan, and his research interests include heritage studies, public folklore, cultural policy, creolization and museum studies. Baron’s publications include Public Folklore, edited with Nick Spitzer; Creolization as Cultural Creativity, edited with Ana Cara; and articles in Curator, International Journal of Heritage Studies, Journal of American Folklore, Western Folklore and the Journal of Folklore Research. Ph.D. in Folklore and Folklife, University of Pennsylvania M.A. in Folklore and Folklife, University of Pennsylvania A.B. in Anthropology, University of Chicago Mary Briggs Mary Briggs is an independent cultural worker living in southwestern Pennsylvania. Until 2012 she held the position of Director of Cultural Development at the Cultural Affairs Division of Arlington County, Virginia where, in 1999, she introduced folk and traditional arts programming within the county. The initiative included field research, public programs, and systematized services to ethnically and culturally diverse traditional artists and communities. She has a personal interest in topics related to the Appalachian region and sense of place, is a moderately good fiddler, and is active in promoting local art and culture as a strategy for social and economic change. She is currently contracted by Rivers of Steel National Heritage Area, an organization that works to document and preserve the cultural heritage of southwestern Pennsylvania. Carol Brooks Carol Brooks received her master’s degree in Cultural Sustainability in 2016, and continues to build upon her capstone thesis, Living History – Finding Myself in the Reflection of My Elders. She founded “The Intergenerational Griot Project” (IGP), a forum and platform created to honor and share the untold stories of her family’s ancestors, while capturing the stories of those elders who are still with us. 
Carol is a collaborative activist, currently working with African American elders and community leaders from throughout Baltimore County, connecting them with the resources and support they need to sustain their cultural and historic preservation efforts. Her work includes launching the Diggs-Johnson Museum Legacy Preservation Project to document, digitize, and formally archive an irreplaceable collection of oral history recordings, photographs, research, books, and the embodied knowledge of historian Louis Diggs. She also played a key leadership role in establishing the Baltimore County Coalition of the Maryland Lynching Memorial Project (MLMP) to raise awareness of the dark legacy of racial terror lynching in Maryland, and to develop a forum for civic engagement and restorative justice activities in the local community. Carol is part of a small team of Coalition members conducting oral history interviews as part of the Maryland Lynching Truth, Racial Healing, & Transformation Oral History Initiative. Carol is a graduate of the Maryland Equity and Inclusion Leadership Program (MEILP) through the Maryland Commission on Civil Rights and University of Baltimore Schaefer Center for Public Policy and a Senior Workforce Analyst for Baltimore County. M.A. in Cultural Sustainability, Goucher College B.A. in Fine Arts (Dance Concentration), University of Maryland Baltimore County Barry Dornfeld Barry Dornfeld is a Principal at CFAR, a management consulting firm in Philadelphia, a documentary filmmaker, a media researcher, and an educator. 
His documentary work includes: "Eatala: A Life in Klezmer," co-produced with the Philadelphia Folklore Project and broadcast in Philadelphia; "LaVaughn Robinson: Dancing History;" "Gandy Dancers," portraying the expressive culture and history of African-American railroad workers in the US; "Look Forward and Carry on the Past: Stories from Philadelphia's Chinatown;" "Powerhouse for God" and "Plenty of Good Women Dancers: African-American Women Hoofers in Philadelphia." Dornfeld recently co-authored The Moment You Can't Ignore: When Big Trouble Leads to a Great Future, with Mal O'Connor (Public Affairs, 2014). He has taught at New York University and chaired the Communication Department at the University of the Arts, Philadelphia. Ph.D., Annenberg School for Communication B.A., Tufts University Susan Eleuterio Susan Eleuterio is a professional folklorist, educator, and consultant to non-profits. She has conducted fieldwork and developed public programs including exhibits, performances, folk arts education workshops and residencies in schools, along with professional development programs for teachers, students, adults, and artists for schools, museums, arts education agencies and arts organizations across the United States. She serves as Chair of the Board of Directors for Illinois Humanities and is the former Co-Chair of the Chicago-based Crossroads Fund Board of Directors. 
Eleuterio is the author of Irish American Material Culture: A Directory of Collections, Sites and Festivals in the United States and Canada, as well as essays in the Encyclopedia of Chicago History, the Encyclopedia of American Folklore, the Encyclopedia of Women’s Folklore and Folklife, Ethnic American Food Today: A Cultural Encyclopedia, "Statewide Models for Folk Arts in Education" in the Missouri Folklore Society Journal, and a collaboratively written chapter: "Even Presidents Need Comfort Food; Tradition, Food and Politics at the Valois Cafeteria" in Comfort Food, Meanings and Memories (2017, University Press of Mississippi). Recent work includes publication of the chapter "Pussy Hats: Common Ground at the Chicago Women's March" in Pussy Hats, Politics and Public Protest (2020, University Press of Mississippi) and work as a consultant in exhibit development, public programming, and K-12 curriculum for the Center for Folklore Studies at the Ohio State University’s Placemaking in Scioto County project. She also serves as the Board Treasurer for Southern Ohio Folklife and is Co-Chair of the American Folklore Society’s Media and Public Outreach Committee. M.A. in American Folk Culture, SUNY B.A. in English/Education, University of Delaware Robert Forloney Robert Forloney is a Cultural Institution Consultant working with a number of clients to develop innovative programs, train interpreters and facilitate strategic planning. He has worked in the museum field for more than twenty years, as a teacher for the New York City Museum School as well as an educator, administrator and consultant at institutions such as the Brooklyn Museum of Art, the Museum of the City of New York, the Morgan Library, American Museum of Natural History, the Museum of Modern Art and the South Street Seaport Museum. 
Most recently he served as the Director of the Breene Kerr Center for Chesapeake Studies at the Chesapeake Bay Maritime Museum, where he oversaw interpretative, academic, folklife and exhibition programs. In addition, he has formally taught as a classroom teacher for the New York City Museum School and as adjunct faculty at Goucher College, the University of Delaware and Johns Hopkins University. Trained in both formal and informal teaching methodologies, he has directed much of his work toward integrating these theories into innovative programming for diverse public audiences. His goal is to enable all audiences to actively engage objects, images and exhibitions in order to successfully access visual and textual information, acquire new knowledge and create personal meaning. Robert strives to ensure that communities have their voice heard and are empowered by the cultural institutions that attempt to share their stories. Areas of expertise include program development for diverse audiences, interpreter training, staff supervision and coordination, community engagement, exhibition design, grant writing and management as well as strategic planning for cultural institutions. M.A. in Humanities and Social Thought, New York University Teaching Certificate, Bank Street College of Education B.F.A. in Fine Arts/Sculpture, Parsons School of Design, New School for Social Research Heather Gerhart Heather Gerhart (’17) is a graduate of the M.A.C.S. program. Her capstone research explored digital storytelling as a method to complement traditional cultural documentation, as well as a model for cultural work practice that involves community members as partners in knowledge production. Heather was awarded the 2017 Rory Turner Prize in Cultural Sustainability for her capstone research. 
Heather trained in digital storytelling facilitation and has worked as a co-facilitator for several StoryCenter workshops involving community- and capacity-building with diverse groups, including HIV-positive women and their case workers, transgender men and women, and Native health educators, among others. Heather also supported Saving What Matters, an international heritage preservation collaboration (St. Michaels, MD/Bosnia & Herzegovina) that was funded in part by the U.S. Department of State’s Communities Connecting Heritage Program. The project, which involved Goucher College students in documenting maritime heritage through digital stories, was awarded the 2018 Best New Heritage Initiative by the Lower Eastern Shore Heritage Council. Heather currently coordinates diversity initiatives at Keystone Symposia, a non-profit dedicated to connecting the biomedical research community by convening international scientific conferences. She manages programming that broadens the scope of inquiry at Keystone Symposia's conferences and focuses on the research contributions of women and scientists from diverse backgrounds. M.A. in Cultural Sustainability, Goucher College B.A. in Anthropology, University of Colorado at Colorado Springs Mary Hufford Folklorist and independent scholar Mary Hufford has worked over the past three decades in both government and academic settings and is currently a Fellow of the American Folklore Society and a Guggenheim Fellow. Her scholarship, teaching, and writing have centered on the interrelations of social, ecological, and cultural systems, and the formation of democratic public space through community-based, participatory research. As folklife specialist at the American Folklife Center, Library of Congress, she led regional team fieldwork projects in the New Jersey Pine Barrens and the southern West Virginia coalfields. 
She has served on the faculty of folklore and folklife and directed the Center for Folklore and Ethnography at the University of Pennsylvania. Roxanne Kymaani Dr. Roxanne Kymaani is a life transformation strategist and owner of Kymaani Catalyst Consulting. Her work includes the development and implementation of transformation initiatives for both individuals and organizations. She uses her expertise in dialogue, identity construction, and group relations to successfully lead in diverse and challenging environments while gaining and maintaining the trust of those she engages with. Dr. Kymaani also serves as the System Leader in Residence with the National Equity Project, whose mission is to transform the experiences, outcomes, and life options for children and families who have been historically underserved by our institutions and systems. Kymaani specializes in identity construction, group dynamics, conflict resolution, dialogue facilitation, leadership development, diversity, inclusion, and equity, emphasizing self-awareness, empathy, authenticity, and collective discovery. Ph.D. in Leadership Studies, University of San Diego M.S. in Leadership Studies, National University B.A. in History, University of California-San Diego Life Coach Training Program, Accomplishment Coaching Amy S. Millin Trained as a clinical social worker, Amy Millin previously worked with children and youth at Jane Addams Hull House (Chicago) and Carson Valley School (Flourtown, PA). So began what has become a life journey of exploring what it means to be resilient, the role stories have in our lives, the strength of communities, and the power that results through partnerships. This journey led her to the M.A.C.S. program for a second MA degree. Her research explored the intersection of cultural health, equity, and the use of public space, for which she was recognized with the Harold Atwood Anderson Fund, the Julia Rogers Research Prize, and the Rory Turner Prize in Cultural Sustainability. 
Amy currently works as a consultant providing development services and research for the National Council for the Traditional Arts, where she also supports the National and Legacy folk festivals. She is one of the founders and co-leaders of the Baltimore County Coalition of the Maryland Lynching Memorial Project where her work is centered on the ways that community programming creates opportunity for conversation. An ongoing focus in exploring the relationship between people, community, and place/space has led Amy to deepen her skills in the areas of digital storytelling and ethnography through additional training from the Vermont Folklife Center, StoryCenter, and the University of Maryland, Baltimore County. M.A. in Cultural Sustainability, Goucher College M.S.W. in Family/Child Welfare, University of Pennsylvania B.A. in Sociology, Oberlin College Rita Moonsammy Dr. Rita Moonsammy has been conducting research, teaching, and developing programs for the support of traditional culture for 30 years. While serving as the state's Folk Arts Coordinator at the New Jersey State Council on the Arts, she was responsible for creating a multifaceted program to work with artists and communities in sustaining their culture. Her public programming has included exhibits, films ("The Seabright Skiff," "Pinelands Sketches," "Schooners on the Bay"), books (Pinelands Folklife, Passing It On), articles, workshops, conferences, festivals, teacher education, curriculum development, and community cultural planning. Her research interests include semiotics, metaphor and material culture, occupational folklife, food studies, folk art, and narrative. Ph.D. in Folklore and Folklife, University of Pennsylvania Kelly Elaine Navies Kelly Elaine Navies is an oral historian, writer, and poet. As Museum Specialist in oral history, she coordinates the Oral History Initiative at the Smithsonian National Museum of African American History and Culture (NMAAHC). Beyond her A.B. and M.S. 
degrees, Kelly also studied at the Southern Oral History Program (SOHP) when she was a graduate student in History at UNC Chapel Hill. Her oral history work at NMAAHC spans the entire field of African American History and Culture, but her specific areas of expertise are narratives of racial segregation and memories of racial trauma, as well as the narratives of Black artists. She hails from the California Bay Area, but her passion for oral history sprang from learning more about her family’s deep roots in the Appalachian region of Western North Carolina. Navies’ oral history projects and interviews are located at the SOHP, The Reginald F. Lewis Maryland Museum of African American History and Culture, the Washington DC Public Library Peoples’ Archive, and the Smithsonian National Museum of African American History and Culture. Her writing may be found in several publications, including June Jordan’s Poetry for The People: A Revolutionary Blueprint, edited by Lauren Muller, and Bum Rush the Page: A Def Poetry Jam, edited by Tony Medina and Louis Reyes Rivera. Finally, she is currently the Vice President of the Oral History Association. M.S. in Library and Information Science, Cultural Heritage Track, Catholic University of America A.B. in African American Studies/Humanities, University of California Berkeley Guha Shankar Dr. Shankar is Senior Folklife Specialist in the American Folklife Center at the Library of Congress in Washington, D.C. Dr. Shankar has experience and training in multi-media production, reparative archival collections description, digital assets management, intellectual property and cultural heritage management for indigenous communities, public programs and educational outreach, and training in ethnographic field methods. He coordinates a Native American collections return and curation initiative at the Center. Dr. 
Shankar is Co-Director of the national Civil Rights History Project that documents the memories and experiences of activists in the Black Freedom Struggle. His research interests include diasporic community formations in the Caribbean, ethnographic media, visual representation, and performance studies. Ph.D. in Folklore and Public Culture, University of Texas-Austin M.A. in Folklore and Public Culture, University of Texas-Austin B.A., University of North Carolina-Chapel Hill Michael Alvarez Shepard Dr. Shepard is an anthropologist who teaches for both the Master's in Cultural Sustainability and Master's in Environmental Studies programs at Goucher College. His research focuses on documentation and dissemination of endangered Indigenous languages, cultural resource management, treaty rights, sovereignty and environmental preservation. He specializes in linguistic anthropology, ethnography, applied research methods and the application of collaborative Internet technologies. Dr. Shepard supports online course design and development in Goucher's Welch Center as an Instructional Designer. Michael lives with his wife and two children in Bellingham, Washington. Ph.D. in Cultural and Political Anthropology, University of British Columbia M.A. in Cultural Anthropology, Western Washington University B.A. in Indigenous and Environmental Studies, Western Washington University Rory Turner Rory Turner is a Professor of Practice in Goucher College's Center for People, Politics, and Markets’ Sociology and Anthropology program. He designed, launched and continues to teach in Goucher College's Master of the Arts in Cultural Sustainability Program. Formerly Program Director for Folk and Traditional Arts and Program Initiative Specialist at the Maryland State Arts Council, he co-founded and directed the Maryland Traditions program from 2000-2007. He also founded and subsequently revived the Baltimore Rhythm Festival. 
Fieldwork has taken him to Bali, Senegambia, Nigeria, and Ghana, as well as the neighborhoods and communities of Maryland. In 2021, he produced African Strings: A House Concert, a film for The Performing Arts Center for African Cultures (PACAC). The concert featured noted musicians Cheikh Diabate, Amadou Kouyate, Osei Korankye, and Kweku Owusu. Recent publications include “Radical Critical Empathy and Cultural Sustainability,” a chapter in Cultural Sustainabilities: Music, Media, Language, Advocacy (University of Illinois Press); “Cultural Sustainability: A Framework for Relationships, Understanding, and Action,” co-written with Michael A. Mason, in the Journal of American Folklore; and “Talking about the Weather: Radical Critical Empathy and the Reality of Communitas” in The Intellectual Legacy of Victor and Edith Turner. Additional academic and creative writing can be found in such journals as Journal of American Folklore, Folklore Forum, Journal of Folklore Research, Anthropology and Humanism, and TDR (The Drama Review). Ph.D. in Folklore, Indiana University-Bloomington M.A. in Folklore, Indiana University-Bloomington B.A. in Religious Studies, Brown University Thomas Walker Thomas Walker currently directs the Master's programs in Environmental Studies and Historic Preservation at Goucher College and previously served as a co-director of the M.A. in Cultural Sustainability program. With ties to these different programs, he promotes their complementarity, interrelationship, and common focus on a human dimensions approach to the study of the natural and built environments. He has worked in museums and arts organizations involving historic preservation projects, including a virtual museum developed at Indiana University based on a collection of historic log buildings and documentation of traditional culture of the area. 
He has also conducted oral histories of historic preservation in Indiana and documented maritime culture in the Chesapeake Bay region as well as in New York harbor to contextualize the history of the seaport and its collection of historic vessels and buildings. As a venture philanthropist, he has served as a trustee for a foundation (www.walker-foundation.org) that funds research, policy, and projects investigating environmental economics in areas of climate change, energy and tax policy, ecosystem services, ecotourism, and sustainability in forests and fisheries. Ph.D. in Folklore and Anthropology, Indiana University-Bloomington M.A. in Folklore and Anthropology, Indiana University-Bloomington B.A. in English, St. Lawrence University Jason Yoon Mr. Yoon is the Director of Education at the Queens Museum (QM) in New York City, where he oversees the museum's visual arts education programs both at the museum and in community settings around the borough of Queens. Prior to joining QM, Jason served for five years as the executive director of New Urban Arts, a nationally recognized non-profit art studio and gallery for high school students and emerging artists in Providence, RI. He was a teaching artist and museum educator at the Brooklyn Museum; founded and directed his own youth arts mentoring program 7ARTS, which was featured on NY1 News; and worked as a grant writer and Development Associate for the DreamYard project. Jason is a proud graduate of Cooper Union's free visual arts high school outreach programs.
https://www.goucher.edu/learn/graduate-programs/ma-in-cultural-sustainability/faculty/
Jean-Paul Sartre was born 117 years ago today. A French philosopher, playwright, novelist, screenwriter, political activist, biographer and literary critic, Sartre was one of the key figures in the philosophy of existentialism and phenomenology and a leading figure in 20th-century French philosophy and Marxism. Sartre’s work has also influenced sociology, critical theory, post-colonial theory and literary studies, and he continues to influence these disciplines. Sartre has also been noted for his open relationship with the prominent feminist theorist Simone de Beauvoir. He was awarded the 1964 Nobel Prize in Literature but refused it, saying that he always declined official honors and that "a writer should not allow himself to be turned into an institution." Sartre died in Paris in 1980 at age 74. He is pictured in 1950.
https://www.beachamjournal.com/journal/2022/06/jean-paul-sartre-was-born-117-years-ago-today.html
Diet and nutrition A balanced diet is essential for good health. Although there are no foods that can cure lymphoma, eating well can help you to cope with treatment and support your recovery. On this page How can I eat well during treatment for lymphoma? This page gives general guidance on following a healthy diet, including how to eat well during your treatment for lymphoma. You should speak to your medical team before making any changes to your diet. If you have neutropenia (low neutrophils), you may need to avoid certain foods. We have separate information about food safety and neutropenia. What is a healthy diet? A healthy diet is made up of different food groups, which give your body nutrients to grow, repair, and work well. Carbohydrates (starchy foods) Carbohydrates are the main source of energy for your body. They also provide fibre, which is important in digestive health. Carbohydrates should make up around a third of your daily food intake. Foods that are high in carbohydrates include rice, bread and pasta. For a healthy option, choose brown, wholegrain or wholemeal varieties. Protein Protein is important for your body to grow and repair. You may need more protein than usual to help your body heal during and after your treatment for lymphoma. Foods high in protein include meat, fish, eggs, beans and lentils. For a healthy option, choose lean, grilled cuts of meat. Red meat is a good source of iron and zinc as well as protein. The World Cancer Research Fund reports a link between eating a lot of red meat and some cancers (eg bowel cancer). Limit the amount of cooked red meat you eat to 70g per day. Aim for at least 2 portions of fish a week; 1 of these should be an oily fish such as salmon. If you are pregnant, current NHS guidance is not to eat more than 2 portions of oily fish a week. You should also include dairy products (made from milk) in your diet. 
Dairy provides calcium (important for bone health), zinc (a mineral with various functions, including helping wounds heal) and protein. Milk, yoghurts and cheese are good sources of dairy. For a healthy option, choose lower-fat varieties and use low-fat spreads instead of butter, which is high in saturated fat. If you are trying to gain weight, however, you might find it helps to eat some of the higher-fat options. Fat Fat is an important source of energy and provides useful vitamins. Unsaturated (‘good’) fats can help keep your heart healthy and lower your cholesterol. Foods such as avocados, brazil nuts and oily fish are examples of sources of unsaturated fats. You can also include these types of good fat in your diet if you cook with oils or use oils as a dressing. You should limit your intake of saturated fats. This type of fat is found in foods such as butter, meat, cakes, and many processed foods, eg sausages and crisps. It’s fine to have a little bit of saturated fat – women should eat no more than 20g a day; men should eat no more than 30g a day. Too much of this type of fat increases health risks including heart disease and stroke. You can see how much of each type of fat is in a product by checking the nutritional information on the packaging. Vitamins and minerals Fruits and vegetables are good sources of vitamins and minerals. Vitamins and minerals have many different functions, including keeping your immune system, bones, teeth and skin healthy. Minerals are important for the strength of your teeth and bones. They also help change the food you eat into energy you use. The recommended intake of fruit and vegetables is at least 5 portions (of 80g each) per day. Examples of what counts as one portion are: - an apple, banana or slice of melon - 3 heaped tablespoons of cooked vegetables (eg carrots, peas or sweetcorn) - 7 cherry tomatoes. See NHS Choices for more information about portion sizes of fruit and vegetables. 
If you think you might not be getting enough vitamins and minerals, speak to your doctor. Do not take nutritional supplements without medical advice because some can react with other medication. Fibre Fibre helps to keep your heart healthy and your digestive system working well. It is found in foods that come from plants, for example fruits, vegetables, cereals and potatoes. Although it is not classed as a separate food group, you should aim to eat 30g of fibre each day. You can find ways of including fibre in your diet on NHS Choices. If you have a good appetite and are eating well, use the Eatwell plate as a guide. The Eatwell plate shows in what proportion various food groups should make up your daily diet. The key points are to eat: - plenty of fruit and vegetables - plenty of carbohydrate (starchy) foods - some meat, fish, eggs, and pulses - some milk and other dairy foods - small amounts of foods high in fat and sugar. How can I eat well during treatment for lymphoma? Getting the nutrients you need through a healthy diet is important during treatment for lymphoma. It may help you tolerate higher doses of chemotherapy and protect you from infection. Eating well can also help you to feel well, both emotionally and physically. If you struggle to eat and drink during your treatment, speak to a member of your medical team for advice. They may offer you nutritional supplements or refer you to a dietitian. A dietitian assesses your nutritional wellbeing and gives support tailored to your specific nutritional needs. We offer some suggestions to help with eating problems that commonly affect people who are living with lymphoma, including guidance on food safety if you are neutropenic. We also have some basic advice if you have a sore mouth as a side effect of treatment. Speak to a member of your medical team before making any changes to your diet. 
Loss of appetite or feeling full quickly Some medicines and treatments for lymphoma can lower your appetite or make you feel full soon after you start to eat. This could be a side effect of chemotherapy. It may also happen if you have lymphoma in your gut or if you have radiotherapy to the gut. If you find it difficult to eat enough, you may find the following tips helpful: - Drink at least 30 minutes before your food to avoid filling up right before you eat. - Serve your food on a smaller plate – a large plateful can be off-putting. - Eat when you are hungry instead of at set meal times. - Eat little and often, with small snacks between meals. - Choose high energy foods (eg omelettes, cheese and biscuits) over foods that are filling but low in energy (eg salads and soups). - Fortify your meals with high energy foods such as olive oil, cream, cheese or milk powder. Weight loss If you have lost weight during your treatment, you can boost your energy (calories) intake in the following ways: - Choose full-fat options (eg whole milk) over low-fat alternatives. - Add cheese or sauces to pasta or vegetables. - Add sugar, honey or syrup to drinks and puddings. - Add butter or oil to bread, pasta, potatoes and vegetables. If you continue to lose weight, ask to be referred to a dietitian. Nausea and sickness Nausea (feeling or being sick) is a common side effect of many chemotherapy drugs. You may also feel nauseous with radiotherapy. To help with nausea: - take antiemetics (anti-sickness medication) - eat dry plain foods such as crackers, toast or rice - add ginger to your diet, for example in the form of ginger beer, ginger tea, ginger biscuits, or root ginger, as ginger may reduce nausea - eat food cold or cook it in a microwave to minimise the smell of food that can worsen nausea. Changes in taste A side effect of some medications, including chemotherapy and some biological therapies, is that food tastes different. Many people say food tastes bland. 
Others describe a metallic taste, or find that food tastes more salty or bitter than usual. Flavouring your food might help if your food starts to taste different. Herbs, spices, sauces and chutneys can flavour savoury food. A fruit coulis could help to flavour puddings. You might find sharp tasting fizzy drinks (eg lemonade or ginger beer) more enjoyable than milder flavours. Energy drinks are often high in calories (energy) but contain very few other nutrients. A more nourishing option is a milk-based drink which provides protein, vitamins and minerals as well as energy. Many people stop enjoying the taste of tea and coffee during their treatment for lymphoma. If this is the case for you, you could try herbal teas. During your treatment, you may be more at risk of infection, such as oral thrush, which can make food taste unpleasant. To avoid infection, keep good mouth care. Brush your teeth regularly with a soft bristled brush and use an alcohol-free mouthwash. The effects of treatment on your taste may change over time. Taste changes at the start of your treatment may not be the same as the taste changes you experience later on in your treatment. For this reason, keep trying different foods throughout your treatment, even if you didn’t like their taste at the start of it. Once treatment has finished, taste changes should start to fade. Diarrhoea Diarrhoea can be a side effect of some treatments for lymphoma. It is important to speak with your doctor or nurse who may be able to give you some medication to help with it. You may find the following helpful: - Drink plenty of fluids to prevent dehydration. As well as water, soup, jelly and ice lollies can be a source of fluids. - Be aware of symptoms of dehydration eg passing urine less often or passing only small amounts of dark coloured urine. - Take a little and often approach to eating. Ask your dietitian, doctor or nurse if you need to change your diet to manage your diarrhoea. 
Constipation Constipation is a side effect of some chemotherapy drugs, antiemetics (anti-sickness medications) and pain medications (especially morphine-based ones, such as codeine). To ease constipation, increase the amount of fibre in your diet. You can find tips to help you do this on the NHS Choices website. Drinking plenty of fluids and taking gentle exercise may also help. Talk to your doctor or nurse about suitable laxatives. FAQs about diet and lymphoma You may come across news stories about whether certain foods can prevent or cure cancer. There is no evidence that food can cure cancer – be wary of any claims that it can. A healthy diet can, however, have wide-reaching benefits. In this section, we answer some common questions people have about diet and lymphoma. Speak to your medical team for advice specific to your situation. Is it safe to eat grapefruit? You may have heard that it is unsafe to eat grapefruit while you are having treatment for lymphoma. Some foods affect how well drugs work. Before they can take effect, drugs first need to be broken down and absorbed into your bloodstream. Proteins called ‘enzymes’, particularly one known as ‘CYP3A’, are important in this process. Foods that block the action of these enzymes can change the amount of the drug that reaches your bloodstream. Grapefruit can block CYP3A. You may, therefore, be advised to avoid eating grapefruit or drinking grapefruit juice while you are having treatment for lymphoma. Other fruits that may block CYP3A include Seville orange, blackberry, pomegranate and some varieties of grape. Speak to your consultant to find out if there are any foods you should avoid. He or she can base their advice on how your specific drugs work. Is green tea safe for people with lymphoma? Green tea is made from the leaves of Camellia sinensis, a plant that grows in China and India. Scientists think green tea has the potential to prevent some cancers and to stop cancer cells from growing. 
However, far more research is needed before they can reach a conclusion. There hasn’t been much research into whether green tea can help in the treatment of cancer. In a small trial of 42 people who had chronic lymphocytic leukaemia (CLL), a third had a reduction in the number of their cancerous cells and in the size of their swollen lymph nodes after drinking green tea. Recently, researchers reported that green tea could stop the drug bortezomib (Velcade®) working as well as it would do otherwise. Findings so far have come only from animal studies and more research is needed to tell whether this also applies to humans. Is it OK to drink alcohol while I am having treatment for lymphoma? Alcohol can interact with some drugs and make them less effective. Check with your consultant whether this is the case with the treatment you are having, and if it is safe for you to drink alcohol. The UK Government updated its guidance in January 2016 following new evidence linking alcohol to certain health risks. The guidance, under consultation at the time of writing, states that adults should not drink more than 14 units of alcohol per week. These should be spread out during the week. As a general guide, 14 units is 6 pints of beer, 6 glasses of wine, or 14 glasses (25ml) of spirits. You can find out more about alcohol, its effects and how to cut down on the Drinkaware website. Should I eat only organic foods? At the moment there is no good-quality evidence that organic foods can prevent cancer or stop cancer recurring. The term ‘organic’ means food produced with restricted use of man-made fertilisers and pesticides. In the UK, this is set by the Department for Environment, Food and Rural Affairs (DEFRA). Research has shown that organic cereals, fruit and vegetables have higher levels of compounds (chemicals) that have antioxidant activity. Antioxidants absorb free radicals, which can damage cells. 
No research has yet looked at the potential additional health benefits of increased amounts of antioxidants. Some people choose to eat organic food as they are concerned about the residues (traces) of pesticides and herbicides left in food. These levels are closely monitored and reviewed. In 2015, for example, the International Agency for Research on Cancer (IARC) re-classified glyphosate (a residue commonly found in bread) as ‘probably carcinogenic’ (having the potential to cause cancer). This led to the European Union further restricting the use of glyphosate. The levels of residues in food are considered to be well below the maximum level that would pose a health risk. There have been studies looking at risk of non-Hodgkin lymphoma and occupational exposure to pesticides and herbicides in agricultural workers. The results are conflicting; further research is needed to clarify these risks. Will nutritional supplements help me? If you are able to eat a healthy balanced diet, you do not need to take an additional vitamin or mineral supplement. If eating is difficult, you may need to take an additional general multivitamin and mineral supplement. Some vitamins and minerals can be harmful if taken in high doses and can react with some medications and cancer treatment. Speak to your pharmacist, doctor or dietitian before starting to take any supplements. Are there foods I should avoid if my immune system is suppressed (lowered)? If your immune system is lowered, doctors may say you are ‘immunosuppressed’. If you have human immunodeficiency virus (HIV) or if you are neutropenic (have low neutrophils), you are immunosuppressed and vulnerable to infection. Speak to your medical team for advice about any foods you should avoid in order to prevent infection. Will sugar make my lymphoma worse? Some studies show that cancer cells use energy more quickly than do healthy cells. However, there is no evidence that eating sugar makes lymphoma, or any type of cancer, grow. 
There are also no research findings to show that if you do not eat sugar, your lymphoma will go away. If you are losing weight, sugar is a good source of energy and may help you to stabilise and regain weight. If you have no eating difficulties, consume sugar in moderation, as per the guidance on the Eatwell plate. Eating a lot of sugar can have other health risks, including obesity, which is linked to the development of other cancer types. You can find out more about body weight and the risk of cancer from Cancer Research UK. Can Echinacea help me? Echinacea (purple coneflower) is a herb that grows in North America. Some people believe that Echinacea can boost immunity, fight cancer and improve side effects of chemotherapy and radiotherapy. Research continues but there is no evidence to support these claims at the time of writing. Can I eat out? You may feel anxious about eating out if you have difficulties eating. If your appetite is small, you could order a starter instead of a main course or order a child’s portion. If you are neutropenic, please see our separate information about food safety, which includes guidance on eating out. Is it safe to diet while I am having treatment for lymphoma? Generally, you should not try to lose weight during treatment because doing so can make it harder for your immune system to recover from treatment. Steroids can stimulate your appetite, and cause fluid retention leading to weight gain. Your weight should return to normal once you stop taking steroids. Is nutrition still important once I finish my treatment for lymphoma? A healthy diet is just as important once you complete your treatment for lymphoma as it is during treatment. The benefits of good nutrition include: - helping your physical and mental recovery - lowering your risk of infection - increasing your energy and strength - helping to reduce the risk of other types of cancer. 
Further recommendations are available from the World Cancer Research Fund in their publication, Healthy living after cancer. Further information and resources Speak to your medical team for information about your diet – they can give you the best advice based on your specific diagnosis and personal circumstances. We have listed a few organisations and resources you might find useful in finding out more about diet and nutrition. Our helpline team is available if you wish to talk through any aspect of your lymphoma. You can call them on 0808 808 5555. You may also wish to use our online forums to get in touch with others affected by lymphoma. Macmillan Cancer Support provides information about diet and nutrition. They publish a booklet (which you can download online) called Eating problems and cancer and recipes specifically for people who are living with cancer. UK Government Meals on Wheels service is for people who live in England or Wales. If you are living with a diagnosis of cancer or caring for someone with cancer, you may be eligible to have meals delivered to you at home. To find out whether your geographical area is covered, use the online postcode search tool or call your local council. Eating well when you have cancer is a booklet published by The Royal Marsden NHS Trust. It includes information about nutrition, advice on how to deal with common difficulties related to eating, and meal ideas.
https://lymphoma-action.org.uk/about-lymphoma-living-lymphoma/diet-and-nutrition
The Product Strategy and Pricing Manager reports to the Director of Strategy and Business Development as a member of the Viewpoint Product team. The role’s objective is to drive the creation and implementation of Viewpoint’s product portfolio and pricing strategies. This will include conducting market and product research and analyses, building and managing comprehensive financial and pricing models, and delivering data-driven insights and recommendations. This will require strong cross-functional collaboration including Product Management, Finance, Accounting, Sales, Support, Professional Services and IT. Additionally, the Product Strategy and Pricing Manager will lead and own the development of quantitative product dashboards, and coordinate the preparation and execution of regular product lifecycle reviews with the Chief Product Officer and the Product Management team. The position has the following job responsibilities: Product Strategy: - Conduct market research, benchmarking and quantitative analyses - Create presentations that clearly convey complex analyses to aid executive decision making - Recommend strategies based on deep research, thoughtful analyses and complex financial modeling - Develop and execute on product strategy, including new product introduction and portfolio optimization Product Lifecycle Management: - Create framework and structure to aid product lifecycle decision making - Complete analyses to determine Make, Buy, Partner decisions based on lifecycle stage. Product Pricing: - Develop and manage complex financial and pricing models, ensuring precision, accuracy and reliability of model outputs - Develop comprehensive pricing-implementation strategy and lead pricing reviews on opportunities with cross-functional teams - Ensure all value-creating programs have appropriate value pricing strategies that maximize profitability and win rates - Identify pricing and licensing opportunities, including pricing optimization to increase profitability - 
Lead the organization and presentation of pricing recommendations to pricing committees as per current review policy. Product Analytics and Metrics Management: - Own and master capturing all data necessary to conduct analyses and to identify and track metrics - Identify areas of opportunity and proactively conduct analyses to gain unique insights that can help drive product performance improvements (e.g., financial, customer success, sales) - Create KPIs and appropriate tracking mechanisms to monitor product profitability and strategy impact. Qualifications. Required: - Bachelor’s degree plus a minimum of five years of experience in investment banking, consulting, corporate development, product strategy / pricing or any equivalent combination of education and experience. - Experience with various software pricing models including SaaS pricing frameworks - Extensive experience with quantitative analysis including business modeling and planning - Extensive experience in the graphical representation and presentation of quantitative analysis - Experience leading cross-functional teams on business planning and execution projects - Experience with enterprise SaaS software Viewpoint is an Equal Opportunity Employer DISCLOSURE for US-BASED POSITIONS ONLY: Viewpoint requires a criminal background investigation, and employment and education verification as a condition of employment.
https://careercenter.bauer.uh.edu/jobs/viewpoint-product-strategy-pricing-manager/
Should we screen for occupational lung cancer with low-dose computed tomography? The objective of this study was to assess the potential value of screening for occupational lung cancer through the use of low-dose computed tomography (LDCT). A literature review of Medline was conducted to assess: 1) screening studies of occupational lung cancer that used LDCT; 2) screening studies of nonoccupational lung cancer that used LDCT; and 3) position papers of medical professional societies and nongovernmental health organizations that have addressed the value of screening for lung cancer with LDCT. No screening studies of occupational lung cancer with LDCT were uncovered; however, numerous observational and population-based studies have addressed the value of screening for lung cancer among cigarette smokers. Results of these studies are difficult to interpret in light of numerous biases associated with these types of studies. No randomized, controlled studies on screening for lung cancer have been published at this time. No professional, governmental, or nonprofit health organization recommends screening asymptomatic people at risk of lung cancer with LDCT at this time. In the absence of randomized, controlled studies that can address biases commonly encountered in observational and population-based studies, it is unclear whether LDCT reduces mortality from lung cancer. The National Cancer Institute is sponsoring a randomized, controlled study of over 50,000 current and former smokers with the results expected in 2009.
The Stoics believed that our true education is a lifelong pursuit. “You should keep learning as long as you are ignorant, even to the end of your life, if there is anything in the proverb. And the proverb suits the present case as well as any: ‘As long as you live, keep learning how to live.’ For all that, there is also something which I can teach in that school. You ask, do you, what I can teach? That even an old man should keep learning.” This unending pursuit of self-development is the foundation of Stoic philosophy. Athletics has taught many of us the value of such labor. Sports are the supreme vehicle for developing one’s virtues in a controlled atmosphere. But as time passes our bodies slow down and we inevitably give up on athletic pursuits. Stoicism is an ideology fit to guide a man through his entire life. Our highest pursuit, ourselves, is the path perpetually beneath our feet. We are never too feeble to mold our character. Stoicism can be practiced equally in wealth and scarcity, in sickness and health, in youth and old age, in the crowd and in solitude. Every moment is a vehicle for our development. “Why do you wait? Wisdom comes haphazard to no man. Money will come of its own accord; titles will be given to you; influence and authority will perhaps be thrust upon you; but virtue will not fall upon you by chance.” Seneca’s words hold true: no man becomes wise by chance. There is much comfort to be found in knowing that the true measure of a man is that over which he has most control. Despite our best efforts, the majority of our lives are outside of our control. We have no say in when, where, and to whom we are born. We each come predisposed with certain strengths and weaknesses, and much of our experience comes at the hands of fate. Chance will have much say in the ordering of our outer lives but ultimately cannot touch us in our depths.
“But when you wish to inquire into a man’s true worth, and to know what manner of man he is, look at him when he is naked; make him lay aside his inherited estate, his titles, and the other deceptions of fortune; let him even strip off his body. Consider his soul, its quality and its stature, and thus learn whether its greatness is borrowed, or its own.” Tony Robbins teaches that we each seek to fulfill six human needs: certainty, uncertainty, significance, love and connection, growth, and contribution. Though we value each differently, we must address all six if we are to live a complete life. This is the true value of Stoicism: it is a practice which forces us to consistently grow, and in so doing makes the attainment of the remainder of our human needs possible. Stoicism is the subject, the world is our classroom, and each day our homework is the cultivation of ourselves. No man ever graduates this course while he lives. Our work is never done, as our greatest potential can never be reached. We climb anyway, all the while remembering that the higher we ascend, the more beautiful the view.
https://www.chrismatakas.com/post/seneca-on-continued-education
Researchers at Oregon State University have shown for the first time that loss of biodiversity may be contributing to a fungal infection that is killing amphibians around the world, and provides more evidence for why biodiversity is important to many ecosystems. The findings, being published this week in Proceedings of the National Academy of Sciences, used laboratory studies of amphibians to show that increased species richness decreased both the prevalence and severity of infection caused by the deadly chytrid fungus, Batrachochytrium dendrobatidis. "With greater diversity of species, you get a dilution effect that can reduce the severity of disease," said Catherine Searle, an OSU zoologist and lead author on the study. "Some species are poor hosts, some may not get infected at all, and this tends to slow disease transmission. "This has been shown in other systems like Lyme disease which infects humans, mice and deer," she said. "No one has really considered the dilution effect much in amphibians, which are experiencing population declines throughout the world. It's an underappreciated value of biodiversity." It's generally accepted, the researchers said, that a high diversity of species can protect ecosystem function, help to recycle nutrients, filter air and water, and also protect the storehouse of plant or animal species that may form the basis of medicines, compounds or natural products of value to humans. Protection against the spread of disease should more often be added to that list, they said. "Emerging infectious diseases are on the rise in many ecosystems," said Andrew Blaustein, a co-author on this study, professor of zoology at OSU and leading researcher on the causes of amphibian declines. "Protection of biodiversity may help reduce diseases," he said. "It's another strong argument for why diverse ecosystems are so important in general. And it's very clear that biodiversity is much easier to protect than it is to restore, once it's lost." 
The fungus, B. dendrobatidis, can lead to death from cardiac arrest when it reaches high levels in its amphibian hosts. It is not always fatal at lower levels of infection, but is now causing problems around the world. One research team has called the impact of the chytrid fungus on amphibians "the most spectacular loss of vertebrate biodiversity due to disease in recorded history." Amphibians face threats from multiple causes, including habitat destruction, pollution, increases in ultraviolet light due to ozone depletion, invasive species, and infectious disease. The dilution effect can occur in plants and animals, but also in human diseases. In a different report published last year in Nature, researchers noted an increased risk of West Nile encephalitis in the U.S. in areas with low bird diversity. And in more diverse communities, the infection of humans by schistosomiasis -- which infects 200 million people worldwide -- can be reduced by 25-99 percent.
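The dilution effect described above can be made concrete with a toy calculation. All numbers below are hypothetical, chosen only to illustrate the mechanism, and are not taken from the OSU study:

```python
# Toy calculation of the dilution effect: transmission potential modeled as
# the mean host "competence" of the community. The competence values are
# hypothetical, chosen only to show the mechanism, not taken from the study.
def community_transmission(competences):
    """Mean per-contact transmission competence of a host community."""
    return sum(competences) / len(competences)

single = community_transmission([0.9])             # one highly competent host
diverse = community_transmission([0.9, 0.2, 0.1])  # same host plus two poor hosts

print(single, diverse)   # richer community -> lower transmission potential
```

Adding species that are poor hosts lowers the community's average competence, which is the intuition behind slower disease spread in diverse amphibian assemblages.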
The antibody assay for tissue transglutaminase is used to evaluate patients suspected of having celiac disease, including patients with typical clinical symptoms, patients with atypical symptoms, and people at high risk due to family history, diagnosis of an associated disease, or a positive molecular test. The assay is also used to monitor adherence of patients with dermatitis herpetiformis and celiac disease to their gluten-free diet. Gliadin and gluten are proteins that are mainly found in wheat and wheat products. Patients suffering from celiac disease (also known as gluten enteropathy) cannot tolerate the consumption of these proteins or any products containing wheat (as well as barley and rye). These proteins are toxic to the mucosa of the small intestine and are the cause of its characteristic pathological lesions. Patients with celiac disease develop severe gastrointestinal symptoms of malabsorption. The only effective treatment for celiac disease is to abstain from wheat and wheat-containing products. When a patient with celiac disease consumes foods containing wheat (or barley or rye), gluten and gliadin accumulate in the intestinal mucosa. These proteins (and their metabolites) cause immediate mucosal damage. In addition, IgA antibodies against gliadin, endomysium, and tissue transglutaminase (tTG) appear in the intestinal mucosa and serum of patients. Identification of these specific antibodies in the blood of patients with malabsorption is very useful in supporting the diagnosis of celiac disease or dermatitis herpetiformis. However, a definitive diagnosis of celiac disease can be made only when the characteristic pathological intestinal lesions of celiac disease are found (by biopsy) and the patient's symptoms improve on a gluten-free diet. Both conditions are necessary for the diagnosis of the disease.
In patients with celiac disease, the determination of these antibodies can be used to monitor disease progression and adherence to dietary guidelines. In addition, these antibodies are indicative of successful treatment, because they become negative in patients on a gluten-free diet. Common clinical manifestations of celiac disease associated with gastrointestinal inflammation include abdominal pain, malabsorption, diarrhea, or constipation. However, the clinical symptoms of the disease are not limited to the gastrointestinal tract. Other common manifestations of celiac disease may include developmental retardation (delayed puberty and short stature), iron deficiency, repeated miscarriages, osteoporosis, chronic fatigue, recurrent aphthous stomatitis, dental enamel hypoplasia, and dermatitis herpetiformis. Patients with celiac disease may also develop neuropsychiatric manifestations, including ataxia and peripheral neuropathy, and are at increased risk for developing non-Hodgkin's lymphoma. The disease may also be associated with other clinical disorders such as thyroiditis, type 1 diabetes, Down syndrome, and IgA immunodeficiency.

Important Note: Laboratory test results are the most important parameter for the diagnosis and monitoring of all pathological conditions. 70%-80% of diagnostic decisions are based on laboratory tests. The correct interpretation of laboratory results allows a doctor to distinguish "healthy" from "diseased". Laboratory test results should not be interpreted from the numerical result of a single analysis; they should be interpreted in relation to each individual case, family history, clinical findings, and the results of other laboratory tests. Your personal physician should explain the importance of your test results. At Diagnostiki Athinon we answer any questions you may have about tests performed in our laboratory, and we contact your doctor to ensure you get the best possible medical care.
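The two-condition diagnostic rule stated above (biopsy-confirmed lesions plus symptom improvement on a gluten-free diet) can be summarized in a few lines of illustrative logic. This is a didactic sketch, not a clinical tool, and the function name is invented:

```python
# Didactic sketch (not a clinical tool) of the two-condition rule above:
# a definitive celiac diagnosis requires characteristic lesions on biopsy
# AND symptom improvement on a gluten-free diet; serology alone only
# supports the diagnosis. The function name is invented for illustration.
def definitive_celiac_diagnosis(biopsy_lesions_found: bool,
                                improves_on_gluten_free_diet: bool) -> bool:
    return biopsy_lesions_found and improves_on_gluten_free_diet

print(definitive_celiac_diagnosis(True, False))  # one condition alone is not definitive
print(definitive_celiac_diagnosis(True, True))   # both conditions met
```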
https://eshop.athenslab.gr/en/diagnostikes-exetaseis/tissue-transglutaminase-ttg-iga-antibodies-259
PROJECT SUMMARY/ABSTRACT Eating disorders (EDs) are severe mental illnesses with the highest mortality rate of any psychiatric disorder. The most widely used empirically supported treatment for EDs (cognitive behavior therapy) is only efficacious for ~50% of individuals. This low response rate is due to the fact that EDs are heterogeneous conditions with diverse symptom trajectories that are not adequately addressed in current "one-size-fits-all" psychotherapies. Until we can identify what maintains or exacerbates individual symptoms, clinicians will continue to have difficulty accurately predicting prognosis and will have no empirical guidance to develop targeted treatment plans to promote recovery. Our scientific premise, developed from our past work, is that the application of network theory will enable the identification of cognitive-behavioral symptom networks that maintain and "trigger" EDs both between and within individuals. Our study goals are to (1) identify individual ED "trigger" symptoms (cognitions, behaviors, affect, and physiology) and (2) correlate trigger symptoms with real-time physiological data to create an algorithm predicting onset of ED behaviors. These goals will ultimately identify symptoms that prevent full remission and lead to relapse. We will use a multiple units of analysis approach combined with novel, cutting-edge advances in network science. We will collect intensive real-time data on cognitions, behavior, affect, and physiology using mobile and sensor technology from 120 individuals with a diagnosis of anorexia nervosa (AN), atypical AN, and bulimia nervosa across 30 days. At 1-month and 6-month follow-ups we will assess ED outcomes (e.g., remission status, ED behaviors) to test if "trigger" symptoms predict ED outcomes. Network science and state-of-the-art machine learning techniques will allow us, for the first time, to discover whether certain trigger symptoms predict worse outcomes.
Specific aims are to (1) develop personalized networks to identify which cognitive, behavioral, affective, and physiological symptoms maintain EDs and predict ED outcomes and (2) utilize sensor data to identify physiological patterns both within and across people that correlate with core maintaining symptoms and that predict ED behaviors. The proposed research uses highly innovative methods, combining intensive longitudinal data collection methods, all remote procedures, novel advances in network science and sensor-technology, and state-of-the-art machine learning techniques to answer previously unresolvable questions pinpointing which personalized symptoms trigger EDs. The proposed research has clinical impact. If we identify patterns that contribute to symptom network variation within individuals, these data will provide a model of personalized medicine for the entire field of psychiatry, as well as providing novel intervention targets to prevent and treat EDs.
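As a rough sketch of the kind of personalized network the aims describe, one could correlate an individual's symptom time series and keep strong links as edges. This is a minimal illustration under assumed data and symptom names, not the project's actual network-science pipeline:

```python
import numpy as np

# Rough sketch of a "personalized symptom network": correlate one person's
# 30-day symptom time series and keep strong links as network edges.
# Symptom names and data are hypothetical, for illustration only.
rng = np.random.default_rng(0)
symptoms = ["restriction", "anxiety", "heart_rate", "body_checking"]

data = rng.normal(size=(30, len(symptoms)))   # 30 days x 4 symptoms
data[:, 1] += 0.8 * data[:, 0]                # make "anxiety" track "restriction"

corr = np.corrcoef(data, rowvar=False)        # symptom-by-symptom correlations

# Keep only strong pairwise links as edges of the personalized network.
edges = [(symptoms[i], symptoms[j], round(corr[i, j], 2))
         for i in range(len(symptoms))
         for j in range(i + 1, len(symptoms))
         if abs(corr[i, j]) > 0.3]
print(edges)
```

In practice the grant proposes far richer models (temporal networks, sensor streams, machine learning), but the edge-thresholding step above conveys the basic idea of a within-person symptom network.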
Olmec Jade Mask
SKU CK.0733
Circa 1200 BC to 500 BC
Dimensions 6.625″ (16.8cm) high x 6″ (15.2cm) wide
Medium Jade
Origin Mesoamerica
Gallery Location USA

The Olmecs are generally considered to be the ultimate ancestor of all subsequent Mesoamerican civilisations. Thriving between about 1200 and 400 BC, their base was the tropical lowlands of south central Mexico, an area characterized by swamps punctuated by low hill ridges and volcanoes. Here the Olmecs practiced advanced farming techniques and constructed many permanent settlements. Their influence, both cultural and political, extended far beyond their boundaries; the exotic nature of Olmec designs became synonymous with elite status in other (predominantly highland) groups, with evidence for exchange of artefacts in both directions. Other than their art (see below), they are credited with the foundations of writing systems (the loosely defined Epi-Olmec period, c. 500 BC), the first use of the zero – so instrumental in the Maya long count vigesimal calendrical system – and they also appear to have been the originators of the famous Mesoamerican ballgame so prevalent among later cultures in the region. The art form for which the Olmecs are best known, the monumental stone heads weighing up to forty tons, are generally believed to depict kingly leaders or possibly ancestors. Other symbols abound in their stylistic repertoire, including several presumably religious symbols such as the feathered serpent and the rain spirit, which persisted in subsequent and related cultures until the middle ages. Comparatively little is known of their magico-religious world, although the clues that we have are tantalising. Technically, these include all non-secular items, of which there is a fascinating array. The best-known forms are jade and ceramic figures and celts that depict men, animals and fantastical beasts with both anthropomorphic and zoomorphic characteristics.
Their size and general appearance suggests that they were domestically- or institutionally-based totems or divinities. The quality of production is astonishing, particularly if one considers the technology available, the early date of the pieces, and the dearth of earlier works upon which the Olmec sculptors could draw. Some pieces are highly stylised, while others demonstrate striking naturalism with deliberate expressionist interpretation of some facial features (notably upturned mouths and slit eyes) that can be clearly seen in the current figure. In the Olmec culture the mask was considered an icon of transformation. It makes visible the charismatic and shamanic power of the wearer, who was either a ruler or shaman. Often the mask has an expression of an otherworldly nature, as if submerged in an ecstatic trance. A mask will never change; it is unaffected by emotion or time, and will forever express the virtues the sculptor endowed upon it. This quality of the eternal appealed to Olmec rulers. The sheer power of this stone mask is monumental in scope. There is a sense it is a product of nature, elemental and beyond comprehension. Yet a very skilled sculptor was needed to carve the intricate designs. This is not difficult to imagine given its almost primordial character, which seems to come from another dimension. In many respects the Olmec themselves seem not to have been of this world, and objects such as this extraordinary mask appear as living proof. Today, masks are worn mostly for the fun of Halloween parties or the profit of robbing banks. In either case their purpose is simply to conceal the identity of the wearer. The peoples of ancient cultures, however, believed that masks were magical and that by donning one the wearer actually became the god, demon or animal it represented and was, therefore, endowed with all its powers of good or evil. Masks of every conceivable non-perishable material and varying sizes have been found all over Mexico.
The earliest we know of were made of clay but it is probable that others made of gourds or even paper have not survived. Jade, as the symbol of life and the most precious substance known, was often used for the most prestigious kings and powerful gods. Masks were frequently laid over the faces or on the chests of the dead. Though their actual purpose is obscure, at least one, that found in the tomb of Pakal, ruler of Palenque, seems to have been a true portrait of the deceased.
Russell, Sarah L (2011) A Randomised Clinical Trial Investigating The Most Appropriate Conservative Management Of A Frozen Shoulder. Masters thesis, University of Central Lancashire.

Abstract
‘Frozen Shoulder’ is a term which describes a combination of shoulder pain and stiffness that causes sleep disturbance and marked disability, and which runs a prolonged course (Hanchard et al 2011). Physiotherapy has been advocated; however, there is no robust evidence on the superiority of any one treatment modality (Callinan et al 2003). The aim of this study was to evaluate the effect of an exercise class compared to multimodal physiotherapy and a home exercise programme in patients with frozen shoulder. The objectives were to identify whether clinical scores were effective at detecting change in the different treatment groups and to provide recommendations for the physiotherapeutic management of frozen shoulder. The study design was a randomised controlled trial with seventy-five patients enrolled. The primary outcome measure was the Constant score; secondary outcome measures included the Oxford Shoulder Score (OSS), Hospital Anxiety and Depression Scale (HADS) and the short form 36 item health survey (SF-36). A repeated measures one-way analysis of variance on the outcome data was conducted. Results from the Constant score and OSS indicate that at six weeks, six months and one year, an Exercise Class was more effective than Multimodal Physiotherapy or Home Exercises. The results from the HADS indicate that the Exercise Class was more effective than Multimodal Physiotherapy or Home Exercises at six weeks and six months. However, at one year Multimodal Physiotherapy was more effective than the Exercise Class and Home Exercises. This study provides an original contribution to knowledge in frozen shoulder and has important implications for enhancing clinical practice.
The findings suggest that a hospital based exercise class produced a rapid recovery with a minimum number of visits to the hospital. Physiotherapy could also be considered to optimise speed of recovery of frozen shoulder. The Constant score, OSS and HADS are recommended in the management of frozen shoulder. Finally, GPs and physiotherapists require training in the clinical diagnostic accuracy of frozen shoulder. The need for further research in this area is emphasized.
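The repeated-measures one-way ANOVA reported in the abstract can be sketched with the standard sum-of-squares decomposition. The scores below are invented for illustration and do not reproduce the study's data:

```python
import numpy as np

# Hand-rolled repeated-measures one-way ANOVA of the kind used on the
# outcome scores. Rows are patients, columns are repeated assessments
# (e.g. baseline, six weeks, six months); the numbers are invented.
scores = np.array([[35.0, 55.0, 70.0],
                   [40.0, 58.0, 75.0],
                   [30.0, 50.0, 68.0],
                   [45.0, 60.0, 80.0]])

n, k = scores.shape
grand = scores.mean()

ss_total = ((scores - grand) ** 2).sum()
ss_subjects = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between patients
ss_time = n * ((scores.mean(axis=0) - grand) ** 2).sum()      # between time points
ss_error = ss_total - ss_subjects - ss_time                   # residual

df_time, df_error = k - 1, (n - 1) * (k - 1)
f_stat = (ss_time / df_time) / (ss_error / df_error)
print(round(f_stat, 1))   # a large F suggests scores change over time
```

Removing the between-subject variability from the error term is what distinguishes the repeated-measures design from an ordinary one-way ANOVA and gives it more power for within-patient change.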
http://clok.uclan.ac.uk/5311/
HOTSPOTS enables new insights and perspectives for city development. According to the project idea, innovations in acquisition and sensing, as well as densification of geo-referenced city-related data, are supplemented by novel processing chains in city data analytics. Driven by an integrated scientific approach, we develop a novel method for the selection, evaluation and prioritization of infrastructural city development measures which is directly derived from sensed data, hence reducing the risk of ad-hoc decisions or lack of impact.

Starting point / Motivation
City-related data are stored in fragmented form by different data holders, in varying quality, captured at different time stamps and with varying spatial resolution. Thus, a common base of consolidated and harmonized data sets is lacking. But is all relevant information already recorded and available? Energy efficiency is an important criterion of modern urban planning and optimization. Which data collection ensures that knowledge about actual energy consumption, user behavior and its causes is stored promptly and site-specifically, in contrast to temporally/spatially blended data?

Contents and Objectives
HOTSPOTS pursues the goal of providing tools and scientific methods to cities in order to capture the current condition of existing buildings in terms of energy efficiency and to provide a decision-making basis for improving this condition. As part of the planned project, a continuous process chain will be developed and validated in the model city Gleisdorf, which aims to help cities in the future to identify, assess and accurately address optimization potentials. Future selection processes in the field of structural measures of urban development are to be made on a transparent and measurable, data-driven basis, which reduces the risk of ad-hoc decisions or bad investments.

Methods
HOTSPOTS is a methodically closed process chain, realized by overlapping project modules.
A 3D Thermal Register, created from aerial imagery, forms the data basis for the project. The task is the region-wide collection of thermal data in the urban area. These single frames are linked into a holistic city-wide data base and lifted into the third dimension by deriving a 3D model from the image data. Within the 3D Thermal Register, "critical spots" are then identified. Critical spots in the city define infrastructure cells at the district level which have a particularly large potential for optimization. These areas are then analyzed in detail in the following project modules. Close-range data acquisition is then performed to selectively expand the database within a selected infrastructure cell. This includes mobile data acquisition using a UAV for the generation of 3D building models. From these models, relevant geometric parameters and detailed thermal information can be derived and serve as input for simulations of optimization scenarios. Another research aspect is the exploitation of selective acquisition and densification of the data in terms of a three-dimensional gas layer model. Based on this data, a cell-wide but focused critical point analysis takes place. This is followed by the creation of an effective catalog of measures, including impact factors influencing the defined critical spots. Furthermore, a decision support tool is applied for the interactive selection and localization of energy efficiency measures and the simulation of the resulting effects, with the calculation of optimal combinations of measures for subspaces.
Results
The highlights of the project can be summarized as follows:
- Metrological and data-driven innovations
- Region-wide thermography of Gleisdorf in 3D, captured by a specially developed measuring unit
- Acquisition and generation of a three-dimensional gas layer model
- Use of air balloons and UAVs for data acquisition
- Correction of measured thermal data by semantically interpreting the image content
- Methodological access to the integration of thermal and statistical data
- Semi-automatic update of the state data by additional flights, and thus the possibility of monitoring the effects of energy efficiency and heating optimization measures with regard to the fulfillment of Smart City goals
- Simulation of individual energy-saving measures at different spatial scales
- Identification of realistically implementable solutions for renewable and CO2-neutral implementation of renovation strategies
- Development of specific implementation scenarios in terms of short-, medium- and longer-term action plans
- Generation of methods and evaluation metrics which allow transferability of the findings to other small towns and districts

Prospects / Suggestions for future research
The processing chain developed in this project was successfully evaluated in the "model city" of Gleisdorf. The suggested optimization measures were verified via on-site inspections. The implemented methods are applicable to any town or city. Data acquisition for generating the 3D Thermal Register was carried out using a hot air balloon. This method proved to be feasible and low-cost in principle, but depends heavily on weather and especially wind conditions. In future scenarios, using a light plane or a helicopter instead could be of interest, to identify alternatives. Future application of the processing chain could also help in developing a deeper understanding of thermal and air quality data.
Especially the differentiation of relevant information from weather-induced, temporal phenomena is of special interest. The suitability of the approach will be put to test when it comes to concrete rehabilitation projects derived from or suggested by the simulations, involving owners and decision-makers.
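How a 3D Thermal Register might be reduced to "critical spots" can be sketched with a simple grid ranking. This is an assumed illustration of the idea, with made-up data, not the project's actual algorithm:

```python
import numpy as np

# Assumed illustration (not the project's actual algorithm): aggregate a
# city-wide thermal raster into district-level cells and flag the warmest
# cells as "critical spots" with large optimization potential.
rng = np.random.default_rng(1)
thermal = rng.uniform(5.0, 25.0, size=(12, 12))  # hypothetical surface temps, deg C

# Aggregate 3x3 pixel blocks into a 4x4 grid of infrastructure cells.
cells = thermal.reshape(4, 3, 4, 3).mean(axis=(1, 3))

threshold = np.quantile(cells, 0.75)        # top quartile of mean heat loss
critical = np.argwhere(cells >= threshold)  # cells to inspect at close range

print([tuple(rc) for rc in critical])
```

The flagged cells would then be the candidates for detailed close-range (e.g. UAV-based) acquisition, mirroring the coarse-to-fine structure of the process chain described above.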
https://nachhaltigwirtschaften.at/en/sdz/projects/hotspots-holistic-thermographic-screening-of-urban-physical-objects-at-transient-scales.php
Classic symptoms of DM are polyuria, polydipsia, and weight loss. In addition, patients with hyperglycemia often have blurred vision, increased food consumption (polyphagia), and generalized weakness. When a patient with type 1 DM loses metabolic control (such as during infections or periods of noncompliance with therapy), symptoms of diabetic ketoacidosis occur. These may include nausea, vomiting, dizziness on arising, intoxication, delirium, coma, or death. Chronic complications of hyperglycemia include retinopathy and blindness, peripheral and autonomic neuropathies, glomerulosclerosis of the kidneys (with proteinuria, nephrotic syndrome, or end-stage renal failure), coronary and peripheral vascular disease, and reduced resistance to infections. Patients with DM often also sustain infected ulcerations of the feet, which may result in osteomyelitis and the need for amputation. Monogenic diabetes is caused by mutations, or changes, in a single gene. These changes are usually passed through families, but sometimes the gene mutation happens on its own. Most of these gene mutations cause diabetes by making the pancreas less able to make insulin. The most common types of monogenic diabetes are neonatal diabetes and maturity-onset diabetes of the young (MODY). Neonatal diabetes occurs in the first 6 months of life. Doctors usually diagnose MODY during adolescence or early adulthood, but sometimes the disease is not diagnosed until later in life. Our bodies break down the foods we eat into glucose and other nutrients we need, which are then absorbed into the bloodstream from the gastrointestinal tract. The glucose level in the blood rises after a meal and triggers the pancreas to make the hormone insulin and release it into the bloodstream. But in people with diabetes, the body either can't make or can't respond to insulin properly. For example, the environmental trigger may be a virus or chemical toxin that upsets the normal function of the immune system. 
This may lead to the body’s immune system attacking itself. The normal beta cells in the pancreas may be attacked and destroyed. When approximately 90% of the beta cells are destroyed, symptoms of diabetes mellitus begin to appear. The exact cause and sequence is not fully understood but investigation and research into the disease continues. Regular ophthalmological examinations are recommended for early detection of diabetic retinopathy. The patient is educated about diabetes, its possible complications and their management, and the importance of adherence to the prescribed therapy. The patient is taught the importance of maintaining normal blood pressure levels (120/80 mm Hg or lower). Control of even mild-to-moderate hypertension results in fewer diabetic complications, esp. nephropathy, cerebrovascular disease, and cardiovascular disease. Limiting alcohol intake to approximately one drink daily and avoiding tobacco are also important for self-management. Emotional support and a realistic assessment of the patient's condition are offered; this assessment should stress that, with proper treatment, the patient can have a near-normal lifestyle and life expectancy. Long-term goals for a patient with diabetes should include achieving and maintaining optimal metabolic outcomes to prevent complications; modifying diet and lifestyle to prevent and treat obesity, dyslipidemia, cardiovascular disease, hypertension, and nephropathy; improving physical activity; and allowing for the patient’s nutritional and psychosocial needs and preferences. Assistance is offered to help the patient develop positive coping strategies. It is estimated that 23 million Americans will be diabetic by the year 2030. The increasing prevalence of obesity coincides with the increasing incidence of diabetes; approx. 45% of those diagnosed receive optimal care according to established guidelines. 
According to the CDC, the NIH, and the ADA, about 40% of Americans between ages 40 and 74 have prediabetes, putting them at increased risk for type 2 diabetes and cardiovascular disease. Lifestyle changes with a focus on decreasing obesity can prevent or delay the onset of diabetes in 58% of this population. The patient and family should be referred to local and national support and information groups and may require psychological counseling. Type 2 diabetes is partly preventable by maintaining a normal weight, exercising regularly, and eating properly. Treatment involves exercise and dietary changes. If blood sugar levels are not adequately lowered, the medication metformin is typically recommended. Many people may eventually also require insulin injections. In those on insulin, routinely checking blood sugar levels is advised; however, this may not be needed in those taking pills. Bariatric surgery often improves diabetes in those who are obese. Patients may need to take medications in order to keep glucose levels within a healthy range. Medications for type 2 diabetes are usually taken by mouth in the form of tablets and should always be taken around meal times and as prescribed by the doctor. However, if blood glucose is not controlled by oral medications, a doctor may recommend insulin injections. Type 2 diabetes occurs when the pancreas does not make enough insulin or the body does not use insulin properly. It usually occurs in adults, although in some cases children may be affected. People with type 2 diabetes usually have a family history of this condition and 90% are overweight or obese. This condition occurs most commonly in people of Indigenous and African descent, Hispanics, and Asians. As with many conditions, treatment of type 2 diabetes begins with lifestyle changes, particularly in your diet and exercise.
If you have type 2 diabetes, speak to your doctor and diabetes educator about an appropriate diet. You may be referred to a dietitian. It is also a good idea to speak with your doctor before beginning an exercise program that is more vigorous than walking, to determine how much and what kind of exercise is appropriate. Type 2 diabetes, which accounts for 85-95 per cent of all diabetes, has a latent, asymptomatic period of sub-clinical stages which often remains undiagnosed for several years [1]. As a result, in many patients the vascular complications are already present at the time of diagnosis of diabetes, which is often detected by opportunistic testing. Asian populations in general, and Asian Indians in particular, have a high risk of developing diabetes at a younger age when compared with western populations [5]. Therefore, it is essential that efforts are made to diagnose diabetes early so that the long-term suffering of patients and the societal burden can be considerably mitigated. Longer-term, the goals of treatment are to prolong life, reduce symptoms, and prevent diabetes-related complications such as blindness, kidney failure, and amputation of limbs. These goals are accomplished through education, insulin use, meal planning and weight control, exercise, foot care, and careful self-testing of blood glucose levels. Self-testing of blood glucose is accomplished through regular use of a blood glucose monitor.
This machine can quickly and easily measure the level of blood glucose by analysing a small drop of blood, usually obtained from the tip of a finger. You will also require regular tests for glycated haemoglobin (HbA1c). This measures your overall control over several months. Hypoglycemia. Hypoglycemia or “insulin shock” is a common concern in DM management. It typically develops when a diabetic patient takes his or her normal dose of insulin without eating normally. As a result, the administered insulin can push the blood sugar to potentially dangerously low levels. Initially the patient may experience sweating, nervousness, hunger and weakness. If the hypoglycemic patient is not promptly given sugar (for example, cola or cake icing), he or she may lose consciousness and even lapse into coma. Questions and Answers about Diabetes and Your Mouth Q: If I have diabetes, will I develop the oral complications that were mentioned? A: It depends. There is a two-way relationship between your oral health and how well your blood sugar is controlled (glycemic control). Poor control of your blood sugar increases your risk of developing the multitude of complications associated with diabetes, including oral complications. Conversely, poor oral health interferes with proper glucose stabilization. Indeed, recent research has shown that diabetic patients who improve their oral health experience a modest improvement in their blood sugar levels. In essence, “Healthy mouths mean healthy bodies.” Q: What are the complications of diabetes therapy that can impact my oral health? A: One of the most worrisome urgent complications associated with diabetes management is the previously described hypoglycemia or insulin shock. In addition, many of the medications prescribed to treat diabetes and its complications, such as hypertension and heart disease, may induce adverse side effects affecting the mouth. 
Common side effects include dry mouth, taste aberrations, and mouth sores. Q: I have type 2 diabetes. Are my dental problems different than those experienced by people with type 1 diabetes? A: No. All patients with diabetes are at increased risk for the development of dental disease. What is different is that type 2 disease tends to progress more slowly than type 1 disease. Thus, most type 2 diabetes patients are diagnosed later in life, a time in which they are likely to already have existing dental problems. Remember, there is no dental disease unique to diabetes. Uncontrolled or poorly controlled diabetes simply compromises your body’s ability to control the existing disease. Insulin is a hormone produced by the beta cells within the pancreas in response to the intake of food. The role of insulin is to lower blood sugar (glucose) levels by allowing cells in the muscle, liver and fat to take up sugar from the bloodstream that has been absorbed from food, and store it away as energy. In type 1 diabetes (previously called insulin-dependent diabetes mellitus), the insulin-producing cells are destroyed and the body is not able to produce insulin naturally. 
This means that sugar is not stored away but is constantly released from energy stores, giving rise to high sugar levels in the blood. This in turn causes dehydration and thirst (because the high glucose ‘spills over’ into the urine and pulls water out of the body at the same time). To exacerbate the problem, because the body is not making insulin it ‘thinks’ it is starving, so it does everything it can to release even more stores of energy into the bloodstream. So, if left untreated, patients become increasingly unwell, lose weight, and develop a condition called diabetic ketoacidosis, which is due to the excessive release of acidic energy stores and causes severe changes to how energy is used and stored in the body. Polyuria is defined as the production of abnormally large volumes of urine. When you have abnormally high levels of sugar in your blood, your kidneys draw in water from your tissues to dilute that sugar, so that your body can get rid of it through the urine. The cells are also pumping water into the bloodstream to help flush out sugar, and the kidneys are unable to reabsorb this fluid during filtering, which results in excess urination. There is evidence that certain emotions can promote type 2 diabetes. A recent study found that depression seems to predispose people to diabetes. Other research has tied emotional stress to diabetes, though the link hasn't been proved. Researchers speculate that the emotional connection may have to do with the hormone cortisol, which floods the body during periods of stress. Cortisol sends glucose to the blood, where it can fuel a fight-or-flight response, but overuse of this system may lead to dysfunction. 
Findings from the Diabetes Control and Complications Trial (DCCT) and the United Kingdom Prospective Diabetes Study (UKPDS) have clearly shown that aggressive and intensive control of elevated levels of blood sugar in patients with type 1 and type 2 diabetes decreases the complications of nephropathy, neuropathy, and retinopathy, and may reduce the occurrence and severity of large blood vessel diseases. Aggressive control with intensive therapy means achieving fasting glucose levels between 70-120 mg/dl, glucose levels of less than 160 mg/dl after meals, and a near-normal hemoglobin A1c level. It is especially important that persons with diabetes who are taking insulin not skip meals; they must also be sure to eat the prescribed amounts at the prescribed times during the day. Since the insulin-dependent diabetic needs to match food consumption to the available insulin, it is advantageous to increase the number of daily feedings by adding snacks between meals and at bedtime. Although many of the symptoms of type 1 and type 2 diabetes are similar, they present in very different ways. Many people with type 2 diabetes won’t have symptoms for many years; when symptoms do appear, they often develop slowly over time. Some people with type 2 diabetes have no symptoms at all and don’t discover their condition until complications develop. 
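The unit conversions and targets above can be sketched in a few lines of Python. This is purely an illustrative sketch, not medical advice: the target ranges are the intensive-therapy figures quoted above, the mg/dL-to-mmol/L factor follows from glucose's molar mass (about 180.16 g/mol), and the estimated-average-glucose regression is the widely published ADAG formula, which is not stated in the text itself.

```python
MG_DL_PER_MMOL_L = 18.016  # glucose molar mass ~180.16 g/mol, so 1 mmol/L ~ 18 mg/dL

def mmol_l(mg_dl: float) -> float:
    """Convert a glucose reading from mg/dL to mmol/L."""
    return mg_dl / MG_DL_PER_MMOL_L

def meets_intensive_target(mg_dl: float, fasting: bool) -> bool:
    """True if a reading falls within the intensive-control targets cited
    from the DCCT/UKPDS discussion: 70-120 mg/dL fasting, <160 mg/dL after meals."""
    if fasting:
        return 70 <= mg_dl <= 120
    return mg_dl < 160

def estimated_average_glucose(a1c_percent: float) -> float:
    """Estimated average glucose (mg/dL) from HbA1c, per the ADAG regression."""
    return 28.7 * a1c_percent - 46.7

print(round(mmol_l(120), 1))                      # 120 mg/dL ~ 6.7 mmol/L
print(meets_intensive_target(110, fasting=True))  # True
print(round(estimated_average_glucose(7.0), 1))   # ~154.2 mg/dL
```

This also makes concrete why an HbA1c near 7% corresponds to average glucose close to the upper end of the quoted post-meal target.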
DM is a metabolic disease of multiple aetiologies, due to reduced or absent production of pancreatic insulin and/or resistance to insulin at peripheral tissue insulin receptors. It is characterized by reduced carbohydrate metabolism and increased fat and protein metabolism, leading to hyperglycaemia, increasing glycosuria, water and electrolyte imbalance, and, if left untreated, ketoacidosis, coma and death. Chronic long-term complications of DM include nephropathy, retinopathy, neuropathy and generalized degenerative changes in large and small arteries. Treatment (with insulin, oral hypoglycaemic agents and diet) aims to stabilize blood glucose levels within the normal range, although this is difficult to achieve fully, and patients may tend toward hyperglycaemia or hypoglycaemia through mismanagement of glycaemic control. These diabetes complications are related to blood vessel diseases and are generally classified into small vessel disease, such as those involving the eyes, kidneys and nerves (microvascular disease), and large vessel disease involving the heart and blood vessels (macrovascular disease). Diabetes accelerates hardening of the arteries (atherosclerosis) of the larger blood vessels, leading to coronary heart disease (angina or heart attack), strokes, and pain in the lower extremities because of lack of blood supply (claudication). Impaired glucose tolerance (IGT) and impaired fasting glycaemia (IFG) refer to levels of blood glucose concentration above the normal range, but below those which are diagnostic for diabetes. 
Subjects with IGT and/or IFG are at substantially higher risk of developing diabetes and cardiovascular disease than those with normal glucose tolerance. The benefits of clinical intervention in subjects with moderate glucose intolerance are a topic of much current interest. The problem with sugar, regardless of type, is the sheer amount of it that’s found in the Standard American Diet (SAD), which is the typical eating plan many people in the United States — as well as those in an increasing number of modernized countries — have developed a taste for. When consumed in excess, foods in this category can lead to heart disease, stroke, and other serious health issues. “Often, foods with added sugar also contain fat,” explains Grieger, noting that these components go hand in hand when it comes to the risk for insulin resistance, the hallmark of type 2 diabetes. When you have diabetes, your body becomes less efficient at breaking food down into sugar, so you have more sugar sitting in your bloodstream, says Dobbins. “Your body gets rid of it by flushing it out in the urine.” So going to the bathroom a lot could be one of the diabetes symptoms you’re missing. Most patients aren’t necessarily aware of how often they use the bathroom, says Dr. Cypess. “When we ask about it, we often hear, ‘Oh yeah, I guess I’m going more often than I used to,’” he says. But one red flag is whether the need to urinate keeps you up at night. Once or twice might be normal, but if it’s affecting your ability to sleep, that could be a diabetes symptom to pay attention to. Make sure you know these diabetes myths that could sabotage your health. Diabetes insipidus is characterized by excessive urination and thirst, as well as a general feeling of weakness. While these can also be symptoms of diabetes mellitus, if you have diabetes insipidus your blood sugar levels will be normal and no sugar will be present in your urine. 
Diabetes insipidus is a problem of fluid balance caused by the kidneys’ inability to stop excreting water. Polyuria (excessive urine) and polydipsia (excessive thirst) occur in diabetes mellitus as a reaction to high blood sugar. The good news is that prevention plays an important role in warding off these complications. By maintaining tight control of your blood glucose—and getting it as close to normal as possible—you’ll help your body function in the way that it would if you did not have diabetes. Tight control helps you decrease the chances that your body will experience complications from elevated glucose levels. Alternatively, if you hit it really hard for 20 minutes or so, you may never enter the fat burning phase of exercise. Consequently, your body becomes more efficient at storing sugar (in the form of glycogen) in your liver and muscles, where it is needed, as glycogen is the muscles’ primary fuel source. If your body is efficient at storing and using glycogen, it means that it is not storing fat. Whether you’re dealing with frequent UTIs or skin infections, undiagnosed diabetes may be to blame. The high blood sugar associated with diabetes can weaken a person’s immune system, making them more susceptible to infection. In more advanced cases of the disease, nerve damage and tissue death can open people up to further infections, often in the skin, and could be a precursor to amputation. Glucose is a simple sugar found in food. Glucose is an essential nutrient that provides energy for the proper functioning of the body cells. 
Carbohydrates are broken down in the small intestine and the glucose in digested food is then absorbed by the intestinal cells into the bloodstream, and is carried by the bloodstream to all the cells in the body where it is utilized. However, glucose cannot enter the cells alone and needs insulin to aid in its transport into the cells. Without insulin, the cells become starved of glucose energy despite the presence of abundant glucose in the bloodstream. In certain types of diabetes, the cells' inability to utilize glucose gives rise to the ironic situation of "starvation in the midst of plenty". The abundant, unutilized glucose is wastefully excreted in the urine. Kidney damage from diabetes is called diabetic nephropathy. The onset of kidney disease and its progression are extremely variable. Initially, diseased small blood vessels in the kidneys cause the leakage of protein in the urine. Later on, the kidneys lose their ability to cleanse and filter blood. The accumulation of toxic waste products in the blood leads to the need for dialysis. Dialysis involves using a machine that serves the function of the kidney by filtering and cleaning the blood. In patients who do not want to undergo chronic dialysis, kidney transplantation can be considered. Healthy lifestyle choices can help you prevent type 2 diabetes. Even if you have diabetes in your family, diet and exercise can help you prevent the disease. If you've already received a diagnosis of diabetes, you can use healthy lifestyle choices to help prevent complications. And if you have prediabetes, lifestyle changes can slow or halt the progression from prediabetes to diabetes. Though it may be transient, untreated GDM can damage the health of the fetus or mother. Risks to the baby include macrosomia (high birth weight), congenital heart and central nervous system abnormalities, and skeletal muscle malformations. 
Increased levels of insulin in a fetus's blood may inhibit fetal surfactant production and cause infant respiratory distress syndrome. A high blood bilirubin level may result from red blood cell destruction. In severe cases, perinatal death may occur, most commonly as a result of poor placental perfusion due to vascular impairment. Labor induction may be indicated with decreased placental function. A caesarean section may be performed if there is marked fetal distress or an increased risk of injury associated with macrosomia, such as shoulder dystocia. Some risks of the keto diet include low blood sugar, negative medication interactions, and nutrient deficiencies. (People who should avoid the keto diet include those with kidney damage or disease, women who are pregnant or breast-feeding, and those with or at a heightened risk for heart disease due to high blood pressure, high cholesterol, or family history.) Type 2 diabetes, the most common type of diabetes, is a disease that occurs when your blood glucose, also called blood sugar, is too high. Blood glucose is your main source of energy and comes mainly from the food you eat. Insulin, a hormone made by the pancreas, helps glucose get into your cells to be used for energy. In type 2 diabetes, your body doesn’t make enough insulin or doesn’t use insulin well. Too much glucose then stays in your blood, and not enough reaches your cells. Considering that being overweight is a risk factor for diabetes, it sounds counterintuitive that shedding pounds could be one of the silent symptoms of diabetes. “Weight loss comes from two things,” says Dr. Cypess. “One, from the water that you lose [from urinating]. Two, you lose some calories in the urine and you don’t absorb all the calories from the sugar in your blood.” Once people learn they have diabetes and start controlling their blood sugar, they may even experience some weight gain—but “that’s a good thing,” says Dr. 
Cypess, because it means your blood sugar levels are more balanced. It’s no surprise that most people could stand to drink more water. In fact, the majority of Americans are drinking less than half of the recommended eight glasses of water each day. However, if you’re finding yourself excessively thirsty, that could be a sign that you’re dealing with dangerously high blood sugar. Patients with diabetes often find themselves extremely thirsty as their bodies try to flush out excess sugar in their blood when their own insulin production just won’t cut it. If you’re parched, instead of turning to a sugary drink, quench that thirst with one of the 50 Best Detox Waters for Fat Burning and Weight Loss! Diabetes mellitus is a chronic disease for which there is treatment but no known cure. Treatment is aimed at keeping blood glucose levels as close to normal as possible. This is achieved with a combination of diet, exercise and insulin or oral medication. People with type 1 diabetes need to be hospitalized right after they are diagnosed to get their glucose levels down to an acceptable level. In fact, being sick can actually make the body need more diabetes medicine. If you take insulin, you might have to adjust your dose when you're sick, but you still need to take insulin. People with type 2 diabetes may need to adjust their diabetes medicines when they are sick. Talk to your diabetes health care team to be sure you know what to do. Type 2 diabetes is a condition of blood sugar dysregulation. In general blood sugar is too high, but it also can be too low. This can happen if you take medications then skip a meal. Blood sugar also can rise very quickly after a high glycemic index meal, and then fall a few hours later, plummeting into hypoglycemia (low blood sugar), with the warning signs described earlier, such as sweating, nervousness, hunger and weakness. Diabetes mellitus occurs throughout the world but is more common (especially type 2) in more developed countries. 
The greatest increase in rates has however been seen in low- and middle-income countries, where more than 80% of diabetic deaths occur. The fastest prevalence increase is expected to occur in Asia and Africa, where most people with diabetes will probably live in 2030. The increase in rates in developing countries follows the trend of urbanization and lifestyle changes, including increasingly sedentary lifestyles, less physically demanding work and the global nutrition transition, marked by increased intake of foods that are energy-dense but nutrient-poor (often high in sugar and saturated fats, sometimes referred to as the "Western-style" diet). The global prevalence of diabetes might increase by 55% between 2013 and 2035. What does the research say about proactive type 2 diabetes management? Research shows that proactive management can pay off in fewer complications down the road. In the landmark UKPDS study, 5,102 patients newly diagnosed with type 2 diabetes were followed for an average of 10 years to determine whether intensive use of blood glucose-lowering drugs would result in health benefits. Tighter average glucose control (an A1c of 7.0% vs. an A1c of 7.9%) reduced the rate of complications in the eyes, kidneys, and nervous system by 25%. For every percentage point decrease in A1c (e.g., from 9% to 8%), there was a 25% reduction in diabetes-related deaths, and an 18% reduction in combined fatal and nonfatal heart attacks. In type 1 diabetes, other symptoms to watch for include unexplained weight loss, lethargy, drowsiness, and hunger. Symptoms sometimes occur after a viral illness. In some cases, a person may reach the point of diabetic ketoacidosis (DKA) before a type 1 diagnosis is made. DKA occurs when blood glucose is dangerously high and the body can't get nutrients into the cells because of the absence of insulin. The body then breaks down muscle and fat for energy, causing an accumulation of ketones in the blood and urine. 
Symptoms of DKA include a fruity odor on the breath; heavy, taxed breathing; and vomiting. If left untreated, DKA can result in stupor, unconsciousness, and even death. A third notion is that changes in how babies are fed may be stoking the spread of type 1. In the 1980s, researchers noticed a decreased risk of type 1 in children who had been breast-fed. This could mean that there is a component of breast milk that is particularly protective for diabetes. But it has also led to a hypothesis that proteins in cow's milk, a component of infant formula, somehow aggravate the immune system and cause type 1 in genetically susceptible people. If true, it might be possible to remove that risk by chopping those proteins up into little innocuous chunks through a process called hydrolysis. A large-scale clinical trial, called TRIGR, is testing this hypothesis and scheduled for completion in 2017. There is strong evidence that the long-term complications are related to the degree and duration of metabolic disturbances. These considerations form the basis of standard and innovative therapeutic approaches to this disease that include newer pharmacologic formulations of insulin, delivery by traditional and more physiologic means, and evolving methods to continuously monitor blood glucose to maintain it within desired limits by linking these features to algorithm-driven insulin delivery pumps for an “artificial pancreas.” Type 2 Diabetes: Accounting for 90 to 95 percent of those with diabetes, type 2 is the most common form. 
Usually, it's diagnosed in adults over age 40, and 80 percent of those with type 2 diabetes are overweight. Because of the increase in obesity, type 2 diabetes is being diagnosed at younger ages, including in children. Initially in type 2 diabetes, insulin is produced, but the insulin doesn't function properly, leading to a condition called insulin resistance. Eventually, most people with type 2 diabetes suffer from decreased insulin production.
Journal of the American Academy of Physician Assistants (JAAPA) has specific instructions and guidelines for submitting articles. These author instructions and guidelines are readily available on the submission service site. Please read and review them carefully. Articles that are not submitted in accordance with our instructions and guidelines are more likely to be rejected. If you are a potential author and would like to hear more about writing for JAAPA, please feel free to listen to a session on 'Writing for JAAPA' presented at the AAPA Annual Conference by Tanya Gregory, PhD, Assistant Professor, Department of Physician Assistant Studies, Wake Forest School of Medicine and former JAAPA editor. Please note: JAAPA does not pay honorariums for articles published. JAAPA accepts manuscript submissions through a submission service on another website. Clicking on the submission service links on this page will open our manuscript submission service website in a new browser window. Submit a manuscript now. JAAPA seeks qualified PAs and other health professionals willing to review and critically evaluate manuscripts to determine their suitability for publication, relevance to readers, and consistency with evidence-based practice. We are seeking reviewers in primary care, surgery, internal medicine subspecialties, and research. Peer reviewers are asked to review 3 or 4 manuscripts per year. Invitations to review a manuscript are sent via e-mail, and reviews are submitted via the manuscript management portal, Editorial Manager. The invitation to review e-mail provides an abstract, a direct link to access the manuscript, and additional instructions. Reviewers have the option of declining to review a manuscript if they feel the topic is unsuitable or circumstances at the time make completing the review impossible; however, reviewers who decline three manuscripts in a row are removed from the peer reviewer panel. 
If a reviewer does not acknowledge interest or decline a review (using the weblink provided within the invitation to review) within 7 days, the reviewer is automatically uninvited. Peer reviewers are not paid but are able to list their service on their CV. Each peer review is evaluated for quality and thoroughness by an editor and then scored. High-quality reviews earn qualified peer reviewers Category I continuing education credits. Our publisher is accredited to provide continuing education credits to physician assistants, nurses, and physicians. During the peer review, reviewers are asked several questions, including whether they would like to receive continuing education credits. After a high-quality review is completed, a certificate of continuing education is mailed to the address provided by the peer reviewer within Editorial Manager. If you would like to join our peer reviewer panel, or you have questions before deciding, please e-mail Andrei Greska. Volunteer reviewers should list their areas of expertise, so we know which types of manuscripts to send, and should attach a current CV to the e-mail as well. Detailed information on ethical guidelines for peer reviewing is provided here [PDF]. A guiding rubric to focus and enhance reviews is also provided here [DOC].
https://journals.lww.com/jaapa/Pages/authorguidelines.aspx
WASHINGTON, D.C. (July 28, 2010) As part of its continuing efforts to make its findings and recommendations available to a wide range of audiences, the U.S. Government Accountability Office (GAO) has launched a mobile version of its website that will allow users to more easily access GAO reports on their BlackBerrys, iPhones, Droids, and other small-screen mobile devices. “People are increasingly using mobile devices to access the internet, and GAO’s mobile website will enable more users to quickly get to GAO’s most recent reports, testimonies, and legal decisions and learn about our efforts to promote accountability and help improve government performance,” said Gene Dodaro, Acting Comptroller General of the United States and head of the GAO. “This initiative will help ensure our work is as accessible as possible, both for Congress and the public.” GAO’s mobile website is intended to meet the public’s evolving preferences for sharing and receiving information. The Pew Research Center reports that 40 percent of adults were using the internet, email, or instant messaging on a mobile phone in 2010, up from 32 percent in 2009.[1] This trend is expected to continue in the coming years. The mobile website’s content is organized under three main tabs. The first allows users to browse the latest GAO reports and testimonies; the second provides users with the agency’s legal decisions and opinions; and the third, entitled In the Spotlight, highlights especially timely GAO work. Visitors to the website can also use its search feature to quickly find specific GAO products. The mobile version of GAO’s website is automatically opened by visiting the same URL as GAO’s regular website, www.gao.gov, via a mobile device. For more information, contact Chuck Young, Managing Director of Public Affairs, at (202) 512-4800. The Government Accountability Office exists to support Congress in meeting its constitutional responsibilities. 
GAO also works to improve the performance of the federal government and ensure its accountability to the American people. The agency examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other data to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
[1] http://pewinternet.org/Reports/2010/Mobile-Access-2010.aspx
Next Release: GAO Announces Appointments to CO-OP Program Advisory Board
WASHINGTON, D.C. (June 23, 2010) Gene L. Dodaro, Acting Comptroller General of the United States and head of the U.S. Government Accountability Office (GAO), today announced the appointment of 15 members to the Advisory Board to the Consumer Operated and Oriented Plan (CO-OP) Program. The board, newly created by the Patient Protection and Affordable Care Act, will make recommendations to the Department of Health and Human Services on grants and loans to establish nonprofit, member-run health insurers serving the individual and small-group markets.
https://www.gao.gov/press-release/gao-launches-mobile-website
Taxonomy and green finance: four fundamental documents referring to the European Commission’s Action Plan for a greener and cleaner economy were recently presented. The project, initiated in March 2018, aims to channel capital towards a low-carbon economy. In May 2018, the European Commission began implementing the first measures contained in the Action Plan by introducing three regulatory proposals related to:
- the taxonomy of eco-compatible activities;
- low-carbon and positive-carbon-impact benchmarks;
- institutional investors’ disclosure on ESG risks.
The European Commission subsequently appointed the Technical Expert Group on Sustainable Finance (TEG), a multi-stakeholder group of experts brought together by the Commission to establish the guidelines for sustainable finance in Europe and to provide consulting on four specific issues: the taxonomy, prioritising environmental issues and, more specifically, mitigation of and adaptation to climate change; improving the guidelines on reporting climate-related information by larger and public-interest companies (including banks, asset managers and insurance companies); the introduction of a European Green Bond Standard, namely a European quality certification for green bonds; and common criteria to build low-carbon and positive-carbon-impact benchmarks. The TEG undertook to compile four reports, one on each of the subject areas specified by the EU Commission. On 18 June, two final reports were presented in Brussels, relating to Taxonomy and the European standard for Green Bonds, together with a provisional report on Climate Benchmarks and Benchmarks’ ESG Disclosures. At the same time, the European Commission also presented the new guidelines for reporting climate information. These four documents mark a further acceleration by Europe to comply with the Paris Agreement climate objectives (COP21). 
The fundamental sustainable finance documents:
- the EU classification system (European taxonomy) to determine whether an economic activity is sustainable from an environmental perspective;
- the guidelines on reporting information on climate change.
Starting with the proposals drawn up by the TEG, the European Commission published guidelines directed at companies, so that the latter could improve reporting on their impact on climate and the impact that climate change could have on their business. The document contains practical recommendations aimed at facilitating businesses’ task in providing this important information, in line with the requirements set out in the Directive on Non-Financial Reporting (Directive 2014/95/EU) and integrating the guidelines from the Task Force on Climate-related Financial Disclosures (Financial Stability Board TCFD). Around 6,000 businesses, including listed companies, banks and insurance companies, are obliged to provide non-financial information on their environmental impact.
TEG reports on sustainable finance: In addition to the guidelines, the European Commission also published three reports on sustainable finance by the TEG work group. The first, and probably the most important because it provides the basis for the others (and for the guidelines), is the one on European taxonomy, which aims to define the classification system for environmentally sustainable economic activity. We will look at how these reports are structured.
1. The technical report on European taxonomy
A practical guide, and the only one formulated at European level, for policy makers, industries and investors, which aims to identify green economic activities. For the European Commission, this forms the basis for channelling private investments towards a low-carbon economy. The document comprises 414 pages, in which the TEG defines the criteria for an activity to be classed as “sustainable”. 
Namely, one that has a positive impact on at least one of the six environmental protection objectives identified, without causing damage to the others. The objectives are: - mitigation of the effects of climate change; - adaptation to climate change; - sustainable use and protection of water and marine resources; - transition to a circular economy, waste prevention and the recycling of materials; - pollution prevention and control; - protection of healthy ecosystems. For an economic activity to be considered sustainable, it must meet these four conditions: - contribute positively to at least one of the six environmental objectives referred to above; - have no significant negative impact on the other objectives; - comply with minimum social safeguards; - comply with specific technical criteria (qualitative or quantitative, based on scientific evidence and on current market practices). These criteria define whether an activity contributes positively (“substantially contribute”) or does not involve negative effects (“do no significant harm”) in relation to the environmental objectives. The macro-sectors chosen by the TEG were: agriculture, forestry and fishing; manufacturing; electricity, gas, steam supply and air conditioning; storage and transport; and construction and property. Based on the criteria set by the TEG, every company will be able to establish and report whether the activities it is involved in comply with the European Commission’s taxonomy. In particular, it can show which portion of its turnover can be considered “sustainable” according to the taxonomy definitions. 2. The report on the European Green Bond Standard The second report refers to the “European Green Bond Standard” and defines clear and comparable criteria for the issuing of green bonds. In particular, by referencing the taxonomy, these standards attempt to determine which activities deserve to be financed with a green bond. 
Specifically, the TEG recommends that a Green Bond under the EU GBS be defined as a bond or any debt security, whether listed or not, issued by a European, non-EU or international organisation, that meets these three requirements: - a specific declaration of the issuer’s compliance with the EU GBS; - use of proceeds in green projects (financing and refinancing of new or existing projects); - verification of compliance with the EU GBS by a recognised External Reviewer accredited with the competent European institutions. Based on market best practices, the EU Green Bond Standard identifies four key elements that must be respected: - compliance with the European taxonomy; - disclosure of the environmental objectives and of the strategy implemented to achieve them; - mandatory reporting on the allocation of proceeds and on environmental impacts; - mandatory verification by an External Reviewer. In this report, the Commission sets the objective of supporting the green bond market and increasing sustainable and responsible investment. The TEG further explained how this tool, with the proposed characteristics, can contribute to overcoming the obstacles limiting the development of the green bond market and to increasing financial flows to green projects. 3. The (provisional) report on European climate benchmarks The third report refers to European climate benchmarks and to benchmarks’ ESG disclosures. Low-carbon benchmarks imply the “decarbonisation” of traditional benchmarks, i.e. selecting securities associated with lower CO2 emission levels. Positive-carbon-impact benchmarks, by contrast, include activities that avoid more emissions than they actually emit: they therefore allow investors to compare portfolios in a securities basket that contribute to creating scenarios consistent with the Paris Agreement objectives. 
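The "decarbonisation" idea above can be sketched numerically. This is an illustration only, not the TEG methodology: the 30% minimum reduction used below is an assumption standing in for whatever threshold the final rules specify, and `carbon_intensity` is a simplified weighted average.

```python
def carbon_intensity(holdings):
    """Weighted-average emissions intensity of a portfolio,
    given (weight, intensity) pairs, e.g. tCO2e per $M revenue."""
    return sum(w * i for w, i in holdings)

def qualifies_as_low_carbon(benchmark, parent, min_reduction=0.30):
    """Hypothetical screen: a 'decarbonised' index must cut overall
    emissions intensity by at least `min_reduction` vs its parent index."""
    return carbon_intensity(benchmark) <= (1 - min_reduction) * carbon_intensity(parent)
```

The point of the comparison is that a low-carbon index is defined relative to its traditional parent benchmark, not by an absolute emissions figure.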
The methodology specifies the technical requirements for the reference indices (benchmarks) that help investors to invest in an effectively sustainable manner (and thus combat greenwashing). The report also defines the ESG disclosure requirements for benchmark providers. To qualify as reference indices for the climate transition, these benchmarks must meet specific criteria, inter alia: - show a significant reduction in the overall intensity of greenhouse gas emissions relative to traditional benchmarks; - be sufficiently exposed to the sectors relevant to the fight against climate change; - demonstrate the ability to reduce the intensity of their emissions on an annual basis. The way forward The TEG mandate has been extended until the end of the year. Over this time, the working group will review the additional feedback received through the relevant consultation process, which closes on 13 September (Call for feedback on the TEG report on the EU Taxonomy), and prepare an implementation guide on the taxonomy. In September, a final report will be published that will serve as the basis for drawing up the European Commission’s delegated acts. Finally, to implement the Paris Agreement and achieve the United Nations’ Sustainable Development Goals, the European Commission proposes accelerating climate programmes throughout Europe, raising the share of EU expenditure devoted to climate targets to 25% for the period 2021-2027.
https://www.eticasgr.com/en/storie/insights/taxonomy-and-green-finance-europes-4-basic-steps-for-climate
Colonialism and its Direct Effect on the Rise of Nationalism in African Culture In America today, the struggles of Africans over the course of history have gone widely unnoticed, with the exception of slavery in the Americas. Africa is home to a diverse collection of peoples of many different backgrounds and languages, and their modern history has been shaped by the colonization of Africa by Europeans, which was followed by many struggles to regain their independence as their own people. To fully understand this, a person must take a closer look at colonialism and its direct effect on the rise of nationalism in African culture. Colonialism is defined as a policy by which a nation maintains or extends its control over foreign dependencies, or, in more realistic terms, exploitation by a stronger country of a weaker one; the use of the weaker country’s resources to strengthen and enrich the stronger country (dictionary.com). A broad historical understanding of direct European colonial influence on the African continent dates back at least to the spread of the Roman Empire to North Africa. The more contemporary era of European colonialism, which was consecrated by the Berlin Conference of 1884-85, was preceded by a gradual process of European expansion into Africa over roughly four hundred and fifty years (Schraeder 50-1). Beginning in 1434, Portuguese explorers under the leadership of Prince Henry the Navigator began sailing the West African coastline with the intent of spreading Christianity and enhancing Portuguese political-military power. The steady advance of Portuguese explorers marked the beginning of what is commonly called in the West the age of exploration (the charting and mapping of lands previously unknown to European powers, before the ultimate imposition of colonial rule). 
One of the most devastating aspects of increasing foreign influence in Africa at the end of the fifteenth century was the global perception that slavery was a legitimate and necessary tool of political-military and economic expansion (51). Many slave trade routes appeared with the overwhelming acceptance of slavery by the world outside of Africa. The most prominent was the Atlantic slave trade, also called the European slave trade, which primarily shipped slaves to the Western Hemisphere (52). The Atlantic slave trade began during the fifteenth century and was dominated by the European powers. Slaves were sought as cheap labor to work the colonial plantations in the Americas that produced a variety of products exported to Europe. For Africans, the slave trade era sowed the seed of nationalism as Europeans divided and separated families, taking the most able people to work in the Western Hemisphere as slaves. Taking the most able Africans slowed development in the rest of Africa, and the slaves were kept in conditions that no animal, let alone a human being, should have to suffer through. Many Africans chose death, by jumping into the shark-infested waters, rather than continue to live as slaves. While the slave trade sowed the seed of nationalism, the imposition of the nation-state system spurred its further growth. The origins of the nation-state system lie in the 1648 Treaty of Westphalia, which ended the Thirty Years’ War in Europe. The treaty marked the beginning of the nation-state system, in which sovereign political entities independent of any outside authorities exercised control over peoples residing in separate territories with officially marked boundaries. The imposition of the European nation-state system created a series of artificial states that, unlike their counterparts in Europe, did not evolve gradually according to the wishes of local African peoples. 
They were instead constructed by European authorities with little concern for local socioeconomic or political-military conditions. Another impact of colonialism was the division of African ethnic groups among numerous colonial states (62). The Somali people of the Horn of Africa are a notable example. Previously united by a common culture but lacking a centralized authority, this classically segmented political system was subjugated and divided among four imperial powers: Britain, France, Italy, and an independent Ethiopia. The problem with the division of one people among many states is irredentism, the political desire of nationalists to reunite their separated peoples in one unified nation-state (63). Another problem with the nation-state system is the opposite of the division of one people among many states. A third impact of European colonialism was the incorporation of previously separate and highly diverse African peoples into one colonial state. Britain’s creation of Nigeria illustrates this colonial practice and its consequences. Nigeria is composed of over two hundred and fifty different ethnic groups. Just three of those ethnic groups comprise roughly sixty-six percent of the total population, and they primarily reside in three different areas of Nigeria (64). There are many problems associated with this collection of diverse groups that had never been under the same rule until the arrival of colonialism and the nation-state system. It leads to language barriers that slow the development of the nation-state as a whole, and it causes clashes between political cultures. For example, Britain chose a specific ethnic group residing in Nigeria to hold power, which led to feuding among the rest of the tribes and ethnic groups because they all believed they should be the elites. The biggest impact the nation-state system had on the African people was its division of families and friends, ties that are vital in every African’s life. 
The nation-state system imposed boundaries right through the middle of villages, dividing the people among the different countries that would rule over them, such as Britain and France. Each country kept strict control over who entered and left, making it hard for families and friends to stay in touch and often leading to a total loss of contact with one’s family. Europeans imposed political, judicial, and police systems that were foreign to Africans, and made them change their social structures to fit European models. Instead of relying on a chief, privy council, council of elders, or village assembly, which is what Africans were working with at the time, they had to change their ways of life for the Europeans or face the consequences. Colonialism also imposed a direct export economy. Europeans stripped the lands of Africa for their own benefit and left locals with very little to spare. The hardships that the Europeans imposed developed a sense of identity and pride throughout Africa. Nationalism is defined as a sense of collective identity in which a people perceives itself as different from (and often superior to) other peoples. Nationalism also implies the existence of a variety of shared characteristics, most notably a common language and culture, but also race and religion. The emergence of European “nations” (or cohesive group identities) generally preceded and contributed to the creation of European “states”. The result was the creation of viable nation-states that enjoyed the legitimacy of their peoples. This process was reversed in Africa: in most cases, the colonial state was created before any sense of nation existed (81). The idea of freedom, the underdevelopment of Africa, and the development of the concept of Pan-Africanism (feelings of unity) were the reasons why the seed of nationalism that had been sown and sprouted began to fully grow. 
Adding fuel to the fire were the constant treatment of Africans by Europeans as inferiors, the development of African national unions, the rise of Islamic movements, and the rise of the educated class. America, along with other countries, also had a direct effect on African nationalism by creating examples for Africans to follow. The Atlantic Charter of 1941, the agreement between Roosevelt and Churchill, promised that Africans could choose independence and self-governance. The development of nationalism in Asia also encouraged Africans (in 1947 India gained its independence from Britain), and the founding of the UN in 1945 increased the hope of all Africans for complete independence. A unique aspect of African nationalism was its inherently anti-colonial character. African nationalist movements were sharply divided on political agendas, ideological orientation, and economic programs. Regardless of their differences, however, the leaders of these movements did agree on one point: the necessity and desirability of independence from foreign control. That desire became a reality for the African leaders and people, but not all at once. There are four major waves of independence in the history of Africa (82). The first wave of independence was marked by peaceful transitions and took place during the 1950s. The wave was led by the heavily Arab-influenced North African countries. Three countries outside North Africa also obtained independence during this period, followed by the former French colony of Guinea in 1958. The second wave of independence took place during the 1960s, when more than thirty African countries achieved independence. Most of these countries were former British and French colonies. All three Belgian colonies also acquired independence during this period and were joined by the Republic of Somalia. 
Aside from some noteworthy exceptions, most notably France’s unsuccessful attempt to defeat a pro-independence guerrilla insurgency in Algeria and the emergence of the Mau Mau guerrilla insurgency in Kenya, the decolonization process of the 1960s was largely peaceful. The departing colonial powers had already accepted the inevitability of decolonization; questions simply remained as to when and under what conditions (83). The third wave of independence began in 1974, when a military coup d’état in Portugal, led by junior military officers, resulted in a declaration that the Portuguese government intended to grant immediate independence to its colonies in Africa. The coup plotters sought to end Portugal’s colonial presence because poorly trained and unmotivated Portuguese military forces had repeatedly fought against highly motivated and increasingly adept African guerrilla insurgencies. The violent path to independence in the former Portuguese colonies was further complicated in 1975, when Angolan guerrilla groups clashed in what would become an extended civil war over who would lead an independent Angola. The former French colonies of Comoros, Seychelles, and Djibouti, however, achieved independence under largely peaceful terms. The fourth wave of independence emerged during the 1980s. This wave was directed against the minority white-ruled regimes in Southern Africa. Since 1948, South Africa had been controlled by the descendants of white settlers known as Afrikaners. This minority elite established a highly racist system in which blacks and other minorities (roughly eighty-five percent of the population) were denied political rights. The minority white-ruled regimes of Southern Africa were confronted by guerrilla organizations that enjoyed regional and international support. Military struggles were suspended after the white minority regimes agreed to negotiate transitions to black majority rule. 
Nelson Mandela’s emergence in 1994 as the first democratically elected leader of South Africa signaled the end of the decolonization process and the transition to the contemporary independence era. Through colonialism, which led to slavery and the application of the nation-state system, Africans developed a sense of Nationalism that sparked their movements toward independence. It is through their own will to be their own people that they achieved their current state of independence.
https://freeonlineresearchpapers.com/colonialism-in-africa/
What do the flashes of a firefly and the chirpings of a cricket have in common? Both occur in a regular rhythm, which is controlled by an oscillating biological clock1. Another oscillating genetic clock controls the development of embryonic structures called somites, which give rise to the vertebrae that protect the spinal cord. Our knowledge of this segmentation clock stems almost entirely from research on animals2,3, because technical and ethical considerations limit the study of human embryos in culture. Writing in Nature, Diaz-Cuadros et al.4 and Matsuda et al.5 now report a breakthrough that enables studies of the human segmentation clock in vitro. In addition, Yoshioka-Kobayashi et al.6 use sophisticated techniques in mice to provide insights into the mechanisms that control the mammalian segmentation clock. Somites arise from a tissue called the presomitic mesoderm (PSM). During somite formation, temporally and spatially controlled oscillations in transcription yield gene-expression waves that propagate through the PSM along the embryo’s head-to-tail axis. The result is a striped pattern of somites that forms the blueprint for the spine. Although the molecular components of the segmentation clock are highly evolutionarily conserved across vertebrates, new somites form with different rhythms in each species. For instance, gene oscillations have a period of 30 minutes in zebrafish and 2 hours in mice. Oscillations have been estimated to occur every 4 to 5 hours in humans2 — although until now they have never been directly observed. Diaz-Cuadros et al. and Matsuda et al. set out to model the human clock using induced pluripotent stem cells (iPSCs) — cells that are generated in vitro from differentiated human cells and, similarly to embryonic stem cells, can give rise to every cell type in the body. The groups used established protocols7–9 to convert iPSCs into PSM in vitro. 
To visualize and monitor the dynamic oscillations of clock genes in the cultured PSM in real time, each group used a different ‘reporter’ protein. Matsuda and colleagues used a reporter in which a key segmentation-clock gene10, Hes7, drives production of the bioluminescent enzyme luciferase. As Hes7 expression oscillates, levels of the reporter increase and decrease. Diaz-Cuadros et al. used an engineered version of Hes7 fused to a gene that encodes Achilles, which is a more rapidly generated variant of yellow fluorescent protein developed by Yoshioka-Kobayashi and colleagues. The use of Achilles enabled Diaz-Cuadros and co-workers to track fluorescent waves of Hes7 expression at the single-cell level4 — a resolution not possible with the luciferase reporter. Analyses using both reporters provide the first definitive evidence that the human segmentation clock has a period of approximately 5 hours (Fig. 1a). Figure 1 | Modelling embryonic segmentation in vitro. A tissue called the presomitic mesoderm (PSM) gives rise to somites — embryonic precursors of vertebrae. This process involves a ‘segmentation clock’ that drives rhythmic oscillations of gene expression, including that of the gene Hes7. Three groups have developed systems to analyse the clock in culture using live-cell imaging. a, Diaz-Cuadros et al.4 and Matsuda et al.5 directed wild-type (WT) human induced pluripotent stem cells (iPSCs) to become PSM cells. The iPSCs had been engineered to express a version of Hes7 that drives expression (arrow) of genes encoding the fluorescent molecule Achilles4 or the luminescent molecule luciferase5. Monitoring the oscillations of these genes in PSM cells revealed that the human segmentation clock has a period of about 5 hours. b, Matsuda et al. performed the same experiment using iPSCs in which Hes7 is mutated, as in the skeletal disorder spondylocostal dysostosis, and found a lack of oscillations. 
c, Yoshioka-Kobayashi et al.6 isolated the PSM from mouse embryos carrying a Hes7–Achilles reporter, and monitored oscillations, which have a 2-hour period. Three key signalling pathways — the Notch, Wnt and FGF pathways — act in sequential negative feedback loops to regulate oscillating gene expression during somite formation2,3,11,12. Diaz-Cuadros and colleagues used their culture system to investigate these pathways in detail. They confirmed the roles of these pathways in PSM cells taken from mouse embryos, and then showed that similar pathways govern segmentation in human PSM differentiated from iPSCs, with oscillations dependent on Notch signalling and another pathway, mediated by a protein called YAP. They found that FGF signalling not only determines the positions along the body axis at which oscillations stop, as previously reported2, but also regulates the complex dynamics of the oscillations — their period, phase and amplitude. Matsuda and colleagues used their culture protocol to study a human genetic disease, congenital spondylocostal dysostosis, in which defects in segmentation of the vertebrae lead to skeletal anomalies13,14. The authors generated PSM from iPSCs derived from two people with the disease, who each had mutations in a different gene of the Notch signalling pathway. Surprisingly, despite these mutations and differences in overall gene expression, the authors observed normal oscillations in the PSM. By contrast, when the authors produced PSM from cells genetically engineered to carry a Hes7 mutation that had previously been identified as a cause of spondylocostal dysostosis15, they observed a dramatic loss of oscillations (Fig. 1b). This work highlights the potential of using iPSC-derived PSM to determine the relative roles of various clock components in development. 
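The delayed negative-feedback logic that underlies Hes7 oscillations can be illustrated with a minimal numerical sketch. This is an idealised toy model, not the groups' analysis code: a single repressor whose production is inhibited by its own level some minutes earlier, with illustrative parameter values.

```python
def simulate_hes7(delay_min, total_min=2000.0, dt=1.0):
    """Euler integration of a delayed negative-feedback loop:
    production is repressed by the protein level `delay_min` minutes ago."""
    lag = int(delay_min / dt)
    x = [1.0] * (lag + 1)                      # history buffer
    for _ in range(int(total_min / dt)):
        past = x[-lag - 1]                     # level `delay_min` ago
        production = 1.0 / (1.0 + past ** 4)   # Hill-type self-repression
        x.append(x[-1] + dt * (production - 0.1 * x[-1]))
    return x

def estimate_period(series, dt=1.0):
    """Average spacing between successive local maxima (transient dropped)."""
    peaks = [i for i in range(1, len(series) - 1)
             if series[i - 1] < series[i] > series[i + 1]]
    peaks = peaks[len(peaks) // 2:]
    if len(peaks) < 2:
        return None
    return (peaks[-1] - peaks[0]) * dt / (len(peaks) - 1)
```

With these parameters the loop settles into sustained oscillations once the delay exceeds a few minutes, and the period grows with the delay, which is the kind of mechanism thought to underlie the species-specific rhythms discussed above (roughly 30 min in zebrafish, 2 h in mice, 5 h in humans).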
It is known that, although individual PSM cells show autonomous oscillations, Notch signalling between cell neighbours synchronizes these oscillations1,16 to produce gene-expression waves at the population level. Yoshioka-Kobayashi et al. set out to examine this role for Notch signalling in detail. The authors engineered mice to carry a Hes7–Achilles reporter, and to lack a protein called Lunatic fringe that modulates Notch signalling. They then isolated the entire PSM from embryos that lacked Lunatic fringe and from controls that did not, and made use of optogenetics, a light-triggered gene-expression system, to visualize somite development in culture by tracking Hes7 oscillations over time (Fig. 1c). Although the autonomous oscillations of single PSM cells were unaffected by loss of Lunatic fringe, the researchers observed oscillation defects at the population level. Notch signalling involves the release of the protein DLL1 from one cell and its binding by Notch receptors on another. This interaction triggers a downstream signalling cascade in the receiving cell that causes increases in the expression of various genes, including Hes117. This sender–receiver system can be modulated using a genetically engineered optogenetic variant of the Dll1 gene that is expressed in response to stimulation by light18. The authors stimulated Dll1, and compared how long it took for neighbouring cells to exhibit Hes1 upregulation in mice lacking Lunatic fringe with the time it took in controls. The study revealed that Lunatic fringe controls population-level oscillations by regulating the timing and amplitude of the signal-sending and signal-receiving process in adjacent cells. This work underscores the intricate role of Notch components in the cell–cell interactions that control clock oscillations. 
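The synchronizing effect of cell-cell coupling can be caricatured with two phase oscillators. This is a standard Kuramoto-style toy model, not the authors' framework: `coupling` loosely stands in for Notch-mediated signalling between neighbours, and `delay_steps` for the coupling delay studied in the paper.

```python
import math

def simulate_pair(coupling, delay_steps=0, steps=5000, dt=0.01):
    """Two phase oscillators with mutual (optionally delayed) coupling."""
    omega = 1.0                      # shared intrinsic frequency
    h1, h2 = [0.0], [2.0]            # start 2 radians out of phase
    for t in range(steps):
        # each cell reads its neighbour's phase `delay_steps` steps ago
        p1 = h1[-delay_steps - 1] if t >= delay_steps else h1[0]
        p2 = h2[-delay_steps - 1] if t >= delay_steps else h2[0]
        h1.append(h1[-1] + dt * (omega + coupling * math.sin(p2 - h1[-1])))
        h2.append(h2[-1] + dt * (omega + coupling * math.sin(p1 - h2[-1])))
    return h1, h2

def phase_gap(h1, h2):
    """Final phase difference, wrapped into [0, pi]."""
    d = (h1[-1] - h2[-1]) % (2 * math.pi)
    return min(d, 2 * math.pi - d)
```

With coupling switched on, the two initially out-of-phase oscillators lock together; with coupling off, the gap persists. This mirrors the observation that single-cell oscillations can be normal while population-level synchrony depends on intact intercellular signalling.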
Together, the current studies provide a remarkable demonstration that simple iPSC culture systems can be used for in-depth analysis of the oscillatory gene expression associated with somite segmentation at single-cell resolution. However, they also have limitations. For instance, Diaz-Cuadros et al. and Matsuda et al. did not observe final stages of somite development and vertebra formation in their human culture systems. Nonetheless, their protocols will undoubtedly help to advance our understanding of the molecular basis of normal segmentation and to reveal the genes that, when mutated, lead to the development of disorders of the spine. More broadly, gene-regulatory networks are highly conserved between mammals, regardless of the animals’ size or whether they are bipedal or quadrupedal. This is in stark contrast to the species-specific timing of gene oscillations, which is fundamental to body-plan development. What causes these crucial differences in timing remains an enigma — but one that can now begin to be unravelled.
Generating Realistic Non-Player Characters for Training Cyberteams Since 2010, researchers in the SEI CERT Division have emphasized the crucial need for realism within cyberteam training and exercise events. Our approach to the construction and execution of these events has led to publication of a design framework for cyberwarfare exercises that we call Realistic Environment, Adversary, Communications, Tactics, and Roles (R-EACTR), which provides guidance on how to produce realistic training and exercise events. In this blog post, we describe efforts underway to improve the realism of non-player characters (NPCs) in training exercises with new software we have created called ANIMATOR. The ability of ANIMATOR to increase the realism of NPCs will be relevant and useful to anyone who is tasked with developing training for cyberteams. Moreover, as we describe below, the generation of highly realistic non-player characters could also be beneficially applied to machine-learning algorithms, honeypot payloads, insider-threat modeling, and social-network and relationship modeling. Unrealistic scenarios that do not match real-world operations are unengaging for participants. To construct a comprehensive and optimally beneficial exercise, we want participants to work in an environment that resembles situations they will encounter in the real world. Realism extends beyond network topology to include other areas, such as scenarios, workflows, and behaviors. Building this experience requires replicating many things for the purposes of training and exercise—networks, workstations, organizations, groups, users, events, intelligence, reports, etc. For many of these things, we have proper DevOps processes in place to create the artifacts, documents, and other items we need for an engagement of any size. This automation spans the construction of the range network itself: routers, switches, servers, workstations, and other machines. 
It also includes components of the scenario that participants will operate or interact with, such as the road to war, intel specific to scenario-threat types, and the NPCs that have a role to play within the exercise. One existing platform that we use often is CERT GHOSTS, an NPC simulation-and-orchestration platform for realistic network behavior and resulting traffic. We use this software to bring users to life on a computer network and have them perform the activities that our cybersecurity-professional participants see in their work networks. In practice, however, we have always been a bit disappointed in the tools available to generate the personas of these NPCs: names, addresses, email addresses, and other datapoints. The results never feel quite real enough, and often what is generated for one datapoint does not correspond in any way to another already-generated one, even where the two clearly bear on one another. For example, we started to ask questions such as - If we generate a six-foot-tall NPC, how much should they weigh? - What is the probability of their having blood type O positive? - How many social-media accounts should they have? - Based on their age, what types of career positions have they had? - If an NPC is in the military, what rank could they be? - What unit would they serve with, and in what capacity? ANIMATOR Software for Generating Realistic NPC Data To better address the questions we found ourselves asking, we set out to build our own software that generates more realistic NPC data for use in simulation, training, and exercises—ANIMATOR. One of our early ideas was to add robust support for military personnel with regard to rank, units, billets, and military occupational specialty (MOS) codes. Another idea was to factor in education, career, and event history that would enable detailed analysis of insider-threat potential. 
Moreover, we added types of accounts and security measures (such as PGP keys and certificates) that we might use during an exercise. For each datapoint that ANIMATOR generates, we tried to follow some public reference for matching the output of the engine to how one would find percentage breakdowns of this metric in the real world. For example, if we generate an individual at random within the U.S. military branch, how do we determine the branch in which they are likely to be a member? Here we follow guidance from the Department of Defense directly. Each NPC has more than 25 categories of associated details and more than 100 pieces of metadata defining who they are. Each piece of information is generated using sourced datasets to distribute characteristics realistically. Applying ANIMATOR Data Beyond Cyberteam Training The data generated by ANIMATOR can be leveraged in many ways, but is particularly applicable in four key areas: - Training machine-learning algorithms—ANIMATOR creates large sets of realistic user data and could easily be leveraged to generate datasets used for training machine-learning (ML) algorithms. This capability enables the rapid training of anthropology-related ML algorithms leveraging one or more of the 100-plus datapoints generated by ANIMATOR. - Honeypot payloads—NPC details generated by ANIMATOR make the user data convincingly real while still being completely fabricated. Therefore, the data is ideal for use in applications like honeypots, where the goal is to trick attackers into thinking they are compromising an asset with real user data. - Insider-threat modeling—Each ANIMATOR NPC is given an insider-threat profile. This profile determines how likely an NPC is to be an insider threat by incorporating the Center for Development of Security Excellence's (CDSE’s) insider-threat potential risk indicators. 
As we continue developing ANIMATOR, it will be possible to configure NPCs so they are more or less likely to be insider threats according to factors like finances, criminal history, foreign contacts, and mental health. - Social-network and relationship modeling—ANIMATOR can establish relationships between the NPCs it generates. As we increase the fidelity of relationships, ANIMATOR NPCs create larger and more realistic social networks. By leveraging ANIMATOR’s ability to generate thousands of interrelated NPCs quickly, it can easily be used to perform social-network modeling and research. From a technical perspective, we layered our approach in hopes that others could choose the use case that best suited their own projects. ANIMATOR provides a C# dotnetcore common library for other projects to leverage its generation capabilities. Moreover, individual NPCs can be connected to others and to a larger group-of-groups NPC chain via an API that is distributed as a buildable web application or directly as a Docker container. For example, for a request to create a new NPC, ANIMATOR does the following: - Once ANIMATOR receives a request to create NPCs, it starts by creating an empty NPC profile. - ANIMATOR then iterates through all 100+ datapoints for the NPC and generates synthetic data to associate with that NPC. Example datapoints are name, address, mental health, career, finances, and family members. Datapoints are generated either at random or using weighted randomization. Weighted randomization leverages verified datasets to influence the distribution of randomly generated datapoints so that it matches reality much more closely. Our primary goal in ANIMATOR is to make our data as realistic as possible by using weighted randomization for as many datapoints as we can find datasets for. - ANIMATOR completes this process for as many users as were requested. This information can be exported through the API or stored in a local database. 
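The generate-then-fill loop and weighted randomization described above can be sketched as follows. This is an illustration only: ANIMATOR itself is a C# library, and the branch weights and the height-weight relation below are made-up stand-ins for its sourced datasets.

```python
import random

# Illustrative weights only; ANIMATOR draws such tables from published
# datasets (e.g., DoD demographic breakdowns).
BRANCH_WEIGHTS = {
    "Army": 0.36, "Navy": 0.25, "Air Force": 0.24,
    "Marine Corps": 0.13, "Coast Guard": 0.02,
}

def weighted_pick(table, rng):
    """Pick one key with probability proportional to its weight."""
    keys = list(table)
    return rng.choices(keys, weights=[table[k] for k in keys], k=1)[0]

def generate_npc(rng):
    """Sketch of the loop: start from an empty profile, then fill each
    datapoint, letting earlier fields constrain later ones."""
    npc = {}
    npc["branch"] = weighted_pick(BRANCH_WEIGHTS, rng)
    npc["height_in"] = rng.gauss(69.0, 3.0)
    # Weight is conditioned on height, so the two datapoints agree
    # (answering "if an NPC is six feet tall, how much should they weigh?").
    npc["weight_lb"] = 110 + 4.5 * (npc["height_in"] - 60) + rng.gauss(0, 15)
    return npc
```

The key design choice is that later datapoints sample from distributions conditioned on earlier ones, rather than independently, which is what keeps a generated persona internally consistent.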
ANIMATOR currently stores NPC data in a local Mongo database, and this feature is still being actively improved and expanded. We continue to work on ANIMATOR, fixing issues and adding new features as they are identified. Some of these enhancements are driven by our own internal use of ANIMATOR in the many exercises we build and execute for our customers, but we also strive to respond quickly and proactively to requests from the community of users and potential users of the CERT GHOSTS platform, and to ask the community for feedback and improvements. This strategy has served us well for other GHOSTS projects hosted on GitHub. We welcome your feedback as we continue to move forward on this and other projects in the exercise-realism space.
Additional Resources
Learn more about CERT Cyber Workforce Development (CWD).
Read the SEI technical report, The CERT Approach to Cyber Workforce Development.
Read the SEI technical report, R-EACTR: A Framework for Designing Realistic Cyber Warfare Exercises.
Read the SEI technical report, GHOSTS in the Machine: A Framework for Cyber-Warfare Exercise NPC Simulation.
Watch an SEI Cyber Minute video on Cyber Workforce Development.
Watch the SEI video, Cyber Workforce Development and the Cybersecurity Engineer.
Watch the SEI video, R-EACTR: A Framework for Designing Realistic Cyber Warfare Exercises.
Watch a video demonstration of GHOSTS.
Learn more about GHOSTS software and supporting materials.
Download the SEI fact sheet, GHOSTS: A Framework for Realistic NPC Orchestration.
View presentation materials for the presentation, GHOSTS in the Machine: Orchestrating a Realistic Cybersecurity Exercise Battlefield.
https://insights.sei.cmu.edu/blog/generating-realistic-non-player-characters-for-training-cyberteams/
Reliability of physiological cost index measurements. The reliability of physiological cost index (PCI) measurements under steady state, non-steady state and post-exercise conditions was evaluated. Thirty volunteers (15 male and 15 female) aged 20-30 years participated in the study. None of the volunteers had a history of smoking, respiratory disorder or locomotor abnormalities. Each subject walked at his/her preferred walking speed on an electronic treadmill (Weider TM 2000) and pulse rates (radial pulse) were monitored by the palpatory method. The test was performed twice with an interval of one week between tests. The gender effect on PCI values was evaluated using the independent t-test, while test-retest reliability was determined by the Pearson product-moment correlation method. Results showed no consistent gender difference in PCI. They also revealed high test-retest reliability (r = 0.843-0.944) for the non-steady state PCI (NSSPCI), the steady state PCI (SSPCI) and the post-exercise PCI (PEPCI). Analysis of variance showed that the correlation coefficients were significantly different from zero (p < 0.001). It is concluded that the PCI is an easy-to-use, valid and reliable measure of energy expenditure, and it is recommended as a useful tool for physiotherapists in the assessment and evaluation of functional performance.
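For context, the PCI referred to in this abstract is conventionally computed as the increase in heart rate during walking divided by walking speed, giving heart beats per metre travelled. A minimal sketch of that calculation (the function name and the example numbers below are illustrative, not taken from the study):

```python
def physiological_cost_index(resting_hr, walking_hr, speed_m_per_min):
    """PCI in heart beats per metre walked.

    resting_hr and walking_hr are in beats per minute; speed is in
    metres per minute, so the minutes cancel and the result is beats/metre.
    """
    if speed_m_per_min <= 0:
        raise ValueError("walking speed must be positive")
    return (walking_hr - resting_hr) / speed_m_per_min

# Illustrative: resting 70 bpm, steady-state walking 100 bpm at 80 m/min
# gives PCI = (100 - 70) / 80 = 0.375 beats per metre.
pci = physiological_cost_index(70, 100, 80)
```

A lower PCI indicates more efficient gait, which is why the index is used as a simple proxy for energy expenditure in functional assessment.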
Here at Glacier Media, our internal agency ecosystem is made up of 150+ folks. It’s time for another edition of Team GMD Q&A! It’s kind of like Vogue’s 73 Questions. Instead, it’s in blog form and there are fewer than 10 questions – because you’re busy and we respect your time. This week we speak to Mona, one of the newest Digital Coordinators to join the team. Mona comes from Rasht, in northern Iran, and has a bachelor’s degree in English Translation and an MBA. She has an impressive background working as a Translator, English Teacher, Team Lead Manager, and Marketing Director, and recently completed an internship at MNU Digital in the USA. Read on to learn more about Mona! Your GMD title: Digital Coordinator In a short paragraph, tell us a little bit about your role at GMD. What do you do on a daily basis: I started my job as a Digital Coordinator at GMD in April 2021. As a Digital Coordinator, I am responsible for monitoring projects, creating project assets, generating reports, and auditing presentations. All in all, enabling sales by providing excellent support to digital marketing managers and sales reps. Personal kryptonite: Decoration stuff, and everything related to art. Hidden talent: I am a good artist, especially at drawing (without ever attending classes), and at cooking. Hobbies: Reading books, listening to music, watching movies, running and biking, and playing football with my son. I love to travel and visit different places near or far, no matter where. Last but not least is hiking. What are you currently listening to? The two songs that I like to listen to whenever I get a chance are Dance Monkey and Bella Ciao. There are also a lot of Persian songs I love; one that I can name is Rana, a classic song from my home city. What are you currently reading? I am reading The Game of Life and How to Play It by Florence Scovel Shinn again. I never get tired of reading it and recommend everyone read this book at least once in their life.
If you had to give a TED talk, what would it be about? I think it would be about humanity and being kind to each other. First thing you’re going to do once we’re all vaccinated? I am planning to visit my family in Iran or invite them to come here. Favourite Vancouver coffee spot? Shipyards Coffee, for their Espresso and Rosewater Pistachio Dacquoise. Last thing you binge-watched: The animated film Soul. I recommend everyone watch it! If you could learn a new skill in an instant, what would it be? Computer programming; and one thing related to art would be playing the piano brilliantly. Favourite quote:
https://blog.glaciermediadigital.ca/index.php/2021/05/28/meet-team-gmd-mona-yazdandoust-mofarah/
Stress can lead to numerous types of hair loss
There are actually three different types of hair loss which can be triggered by stress:
- Alopecia areata. This condition varies in severity, but it’s typically caused when the immune system starts to attack the hair follicles. It typically results in patchy hair loss, with clumps falling out at a time. However, it can also just cause the hair to thin, without showing any obvious bald spots.
- Trichotillomania. This causes you to literally start pulling your own hair out. As you become stressed, you instinctively start tugging at the hair, causing it to break and fall out. If not treated early enough, the condition could soon leave you with very noticeable bald spots.
- Telogen effluvium. This causes more hair than usual to enter the resting phase. Then, after a set amount of time, the hair sheds together at the same time. It’s worth noting, however, that this type of hair loss tends to occur after a traumatic event, or after a period of significant stress, rather than gradually.
https://www.hishairclinic.com/de-stressing-could-save-your-hair/
Our client, an Oil & Gas Exploration & Production Company, is looking for a PDMS Coordinator. Activities • Promote & comply in all activities with applicable safety instructions and other CLIENT HSE procedures, • Carry out work in compliance with CLIENT values and policies, relevant laws and regulations, agreed CLIENT priorities and objectives, CLIENT standards and procedures and good industry practices, • Follow up the progress of pre-FEED and FEED engineering development activities related to 3D model development for the P.O.R. project by the contractors, the LLI and packages vendors (as applicable) for the Major Projects. The scope of activities will mainly concern brownfield activities on the existing A platform. The activities include the review and comment of engineering and vendor (if applicable) documentation to ensure compliance with the design dossier, SOR, Company specifications, Corporate standards and HSEQ regulations, international codes, and applicable BV and SOLAS rules, • Review and comment on the 3D model related to the P.O.R. project, address all punches to contractors with resolution proposals. Develop and update punch list registers with the contractor, report resolution progress regularly to Management. Review the punch point clearance proposals made by the contractor, define their acceptability with CLIENT involved parties (engineering, operations, etc.). Remain pro-active in proposing solutions to clear the punch points. • Organize and lead internal CLIENT 3D model reviews with engineering disciplines in order to capture comments and clashes, to ensure compliance with discipline requirements and CLIENT specifications. • Ensure the models are developed in accordance with project specifications, procedures and HSE requirements within the time frame while maintaining the highest work quality. In particular, ensure escape ways and safety devices are modelled in compliance with safety requirements. • Focus on access, handling and maintainability of the various equipment.
Lead the review of the handling philosophies and dedicated handling reports, ensure compliance with vendor requirements. • Define and ensure application of 3D model development and review procedures, define progress and milestone criteria for the 30%, 60% and 90% model reviews. • Promote cost-saving and weight-saving solutions in the design, promote design alternatives to optimise schedule, • Check the quality of the engineering documents prepared by the Contractor(s) and his subcontractor(s) & vendors, provide comments in due time and approve the documentation promptly via PRODOM, in accordance with contract procedures and requirements, • Check the incorporation of CLIENT comments by the Contractor(s) and his subcontractors / vendors in subsequent submissions, • Ensure close coordination with other disciplines on all engineering, constructability and operational matters during FEED & detailed design developments; typically 3D model development, layout reviews, etc. • Treat all information obtained during the course of the work with confidentiality, • Keep a pro-active attitude towards future interlocutors (contractors, vendors, etc.), • Participate in the preparation of the various detailed scopes of work for the 3D modelling discipline covering FEED and EPC, all in line with the Pre-FEED and the Statement of Requirements (SOR), • Participate in the technical offer analysis (FEED and EPC), make recommendations on the best technical terms and assist in vendor selection, • Attend meetings with potential bidders, as required, during the various stages of the FEED & EPC CFT, • Assist in the finalization of contractual documents, • Review and suggest to management possible design solutions and strategies related to the brownfield tie-ins scope in relation with the PVV and handling engineering scope, • Identify potential deviations from the CLIENT referential and report these to the Engineering Manager for a final decision on whether or not to implement, • Ensure that relevant feedback from previous
similar projects is considered in the design performance, • Participate in weekly and monthly meetings with Contractors, and in specific ad hoc meetings as required by management, • Provide technical answers to technical queries and provide support on deviation requests related to PVV and handling engineering issues, • Request assistance from shareholder specialists (remote head office), when required, • Participate in the HAZID/HAZOP/Project review sessions with CLIENT, Contractor and vendor teams and provide technical answers and clarifications, • Participate in inspections at site/vendor premises, as required, • Ensure detailed progress reporting to project management, • Assist the project management team in replying to various correspondence by providing technical support, • Carry out any other duties or tasks that may be assigned by hierarchical superiors. Qualifications & Experience Required • Qualification: Master’s degree in mechanical engineering, • Graduate engineer with a good level of technical understanding of all relevant technical disciplines, with at least 10 years’ experience of PVV engineering on onshore/offshore oil & gas projects, • Offshore experience is preferred, • Good knowledge of industry codes, standards and legislation on SSHE aspects, • Familiar with 3D model software such as E3D, PDMS, SM3D, Navisworks, etc. • Strong knowledge of SIMOPS and brownfield projects, • Available for worldwide missions, • Familiarity with Total general specifications is a plus, • Language: fluent English, • Computer literate, • Strong leadership skills and good communication skills, • Ability to work on projects in a complex and multicultural environment,
http://www.allworldjob.com/english/job/2350/PDMS+Coordinator+posted+by+All+jobs+by+Orion+Group+.html
In ancient Egypt, no god was worshipped more than Ra (pronounced Ray, and not Raw, as I always thought), the sun god. Ra was considered the bringer of light and life and the ruler of the skies, the earth and the underworld. Ra, of course, was different from other gods because he flew above the earth and could therefore look after all his people. The Egyptians believed that he was born each morning in the east, and as the sun, which they perceived to be his chariot, passed to the west, he died each evening and descended to the underworld, only to be reborn again in the east the following day. The Aborigines in Australia had a slightly different view of the sun. According to Roslynn Haynes in Sky and Telescope magazine, the Aborigines viewed the sun as a woman who would wake up every morning in her camp in the east, light a fire and carry a torch across the sky to the west. At night, the sun-woman began her long journey underground back to the east, and her torch would heat the earth, causing plants to grow. Native Americans such as the Hopi often displayed the sun as a marker of creativity and natural energy. The sun was a symbol for their supreme god because they depended on it. It represented the heart of the cosmos, along with their passion, vitality and growth. The Maya, on the other hand, tended to internalize the sun’s powers, thinking about how it could bring philosophical productivity into their lives in addition to bringing them healthy crops. The Mayans used the sun in their meditations to bring warmth into their consciousness and allow their divinity to blossom, per the wonderful world of the internet. I for one fully support this theory. After not seeing the sun for the first 13 days of 2021, my spirits soared when it made an appearance last Wednesday. Around 150 AD, Ptolemy put forth the theory that the sun and planets all rotated around the Earth, a geocentric theory that is embodied by teenagers everywhere to this day.
Later, before the invention of the telescope, Copernicus used mathematics to put forth the theory that the Sun, and not the Earth, was the center of our solar system, and in 1543, with encouragement from church scholars, published his findings. Finally, in 1609, Galileo built a telescope and began to study the heavens. Among his many discoveries, he found that Jupiter’s moons orbited the planet itself, was able to discern from this that not everything was orbiting the Earth, and came to fully support the heliocentric theory. Of course, in 1633 he was called before the Inquisition and made to recant his claims, but he supposedly muttered that the earth does move as he exited the court. Why the history lesson? Because too often, these days, we are being asked to believe things without being shown any scientific evidence. Any studies that show that playing high school sports, or eating in a restaurant, or going bowling promotes the spread of COVID-19 should be widely shared. Instead we get crickets. While life goes on in every state that surrounds us. By checking tracking data, it’s become pretty apparent that prolonged close exposure, like that in senior centers or crowded apartments or factories, is where the real spread is coming from. Not from short bursts of contact at little Jimmy’s football game or while having a burger and a beer at the local pub. It’s time to give serious consideration to the voices of the athletes and their families and let them play, just as it’s time to give serious consideration to re-opening unless there is definitive evidence that says we shouldn’t.
https://www.thetuscolajournal.com/2021/01/20/hook-line-and-sinker-66/
- This course is Elective for the CMPE degree.
- EE Degree - This course is Elective for the EE degree.
- Lab Hours - 0 supervised lab hours and 3 unsupervised lab hours
- Course Coordinator - Blough, Douglas M
- Prerequisites - ECE 2035 [min C] or ECE 2036 [min C]
- Corequisites - None
- Catalog Description - Course covers a number of programming techniques for distributed and parallel computing and other advanced methods, such as multiprecision arithmetic and nonblocking I/O.
- Textbook(s) - No Textbook Specified.
- Course Outcomes - Upon successful completion of this course, students should be able to:
- Determine when to use distributed computing methods or parallel computing methods to solve complex engineering applications.
- Create high-quality visual 3-D images of complex objects using the OpenGL graphics library.
- Implement several multi-precision public key and private key encryption methods using the GNU multi-precision mathematical library.
- Create programs without memory leaks using RAII and smart pointers.
- Implement client/server applications using the sockets API, using the non-blocking approach for handling multiple clients simultaneously.
- Manage large programming tasks using CMake.
- Student Outcomes - In the parentheses for each Student Outcome: "P" for primary indicates the outcome is a major focus of the entire course. “M” for moderate indicates the outcome is the focus of at least one component of the course, but not the majority of course material. “LN” for “little to none” indicates that the course does not contribute significantly to this outcome.
- ( P ) An ability to identify, formulate, and solve complex engineering problems by applying principles of engineering, science, and mathematics
- ( LN ) An ability to apply engineering design to produce solutions that meet specified needs with consideration of public health, safety, and welfare, as well as global, cultural, social, environmental, and economic factors
- ( LN ) An ability to communicate effectively with a range of audiences
- ( LN ) An ability to recognize ethical and professional responsibilities in engineering situations and make informed judgments, which must consider the impact of engineering solutions in global, economic, environmental, and societal contexts
- ( LN ) An ability to function effectively on a team whose members together provide leadership, create a collaborative and inclusive environment, establish goals, plan tasks, and meet objectives
- ( M ) An ability to develop and conduct appropriate experimentation, analyze and interpret data, and use engineering judgment to draw conclusions
- ( P ) An ability to acquire and apply new knowledge as needed, using appropriate learning strategies.
- Topical Outline
1. Distributed programming with MPI (3 lectures) (a) Synchronous and Asynchronous communications (b) Group Communication and Synchronization
2. Parallel programming with pthreads (3 lectures) (a) Mutual Exclusion (b) Thread Synchronization
3. Object-Oriented code templates (2 lectures) (a) Typesafe callbacks with templates (b) Re-usable code with templates
4. Introduction to Data Mining using Map-Reduce (3 lectures) (a) Google's approach to managing large datasets
5. Event-based Programming (2 lectures) (a) Typesafe event handlers
6. Introduction to graphics programming using OpenGL (3 lectures) (a) 2-D and 3-D coordinate transformations
7. Using web services (3 lectures) (a) Introduction to SOAP (b) Performance considerations with web services
8.
Using non-blocking system I/O (2 lectures) (a) Asynchronous input-output programming (b) Handling multiple sockets with select
9. Introduction to database programming using MySQL (2 lectures) (a) The MySQL database access API (b) Security issues with database programming.
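The select-based approach named in item 8(b) can be illustrated in a few lines. The course works with the C sockets API, but the same pattern is sketched here in Python for brevity; the serve_ready helper is a hypothetical example, not course material:

```python
import select
import socket

def serve_ready(sockets, timeout=1.0):
    """Service every socket that is ready to read, without blocking on any one.

    select() waits until at least one socket is readable (or the timeout
    expires), so a single thread can handle many clients without busy-waiting
    or dedicating a thread per connection.
    """
    readable, _, _ = select.select(sockets, [], [], timeout)
    for sock in readable:
        data = sock.recv(4096)
        if data:
            sock.sendall(data.upper())   # toy "service": echo in uppercase
    return len(readable)

# Demo with a local socket pair standing in for a client/server connection.
client, server = socket.socketpair()
client.sendall(b"hello")
serve_ready([server])
print(client.recv(4096))  # b'HELLO'
```

The same loop scales to many sockets: pass the full list of client connections to serve_ready and only the ready ones are touched each iteration.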
https://www.ece.gatech.edu/courses/course_outline/ECE4122
BACKGROUND
BRIEF SUMMARY OF THE PREFERRED EMBODIMENTS OF THE INVENTION
NOTATION AND NOMENCLATURE
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
1. Field of the Invention
The present invention generally relates to portfolio analysis. More particularly, the invention relates to forecasting a distribution of possible future values for an investment portfolio.
2. Background Information
An investor has numerous investment options. Examples of investment options include stocks, bonds, commodities, etc. In general, investors desire for their investments to achieve returns that are acceptable to the investors. How best to invest one's money to achieve desired results is an age-old dilemma. To a certain extent, modern investment theories are generally based on the work of Nobel Laureate Harry Markowitz. Markowitz developed and published a solution to the following problem: Given a set of n stocks and a capital to be invested of C, what is the allocation of capital that maximizes the expected return, at a future time t, of the portfolio for an acceptable volatility of the total portfolio? “Volatility” has been quantified in different ways. One definition of volatility is the square root of the variance of the value of a portfolio (typically designated by the Greek letter sigma, σ). A problem with using σ as a surrogate for volatility is that many investors have no particular “feel” for what σ means. Consequently, many investors have no idea as to what value of σ is appropriate for them. In short, even if the theory offered by Markowitz is sound, it may be difficult to implement in a practical way. A different approach to portfolio analysis is needed that addresses this issue. One or more of the problems noted above may be solved by a portfolio analysis technique that computes, estimates, or otherwise determines a distribution of possible values of a portfolio at a future time and percentiles (cumulative probabilities) for each possible future value.
The future return distribution may then be determined in accordance with a variety of techniques. In accordance with some embodiments, the distribution is determined using a model of future returns. In accordance with other embodiments, the distribution may be determined by resampling past returns of the portfolio and extrapolating these into the future. The techniques described herein may be implemented on an electronic system, such as a computer that executes appropriate software. Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, various organizations and individuals may refer to a component by different names. This document does not intend to distinguish between components that differ in name, but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ”. Also, the term “couple” or “couples” is intended to mean either an indirect or direct electrical connection. The term “portfolio” refers to one or more securities in a group. The term “security” refers to any type of investment such as stocks, bonds, commodities, etc. Unless otherwise stated below, the verb “determines” includes computing, calculating, estimating, or in any other way obtaining a desired object or result. As used herein, the word “formula” may include one formula or more than one formula. The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted or otherwise used as limiting the scope of the disclosure, including the claims, unless otherwise specified. 
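The resampling approach mentioned above (determining the distribution by resampling past returns and extrapolating them into the future) can be sketched as a simple bootstrap. This is an illustrative sketch under the assumption that past period log-returns are drawn with replacement and compounded; it is not the patent's actual implementation, and the function names are hypothetical:

```python
import math
import random

def resample_future_values(initial_value, past_returns, horizon,
                           n_paths=10000, seed=0):
    """Bootstrap a distribution of future portfolio values.

    For each simulated path, draw `horizon` period log-returns at random,
    with replacement, from the observed history, sum them, and compound
    the initial value. Returns the sorted list of terminal values.
    """
    rng = random.Random(seed)
    values = []
    for _ in range(n_paths):
        total = sum(rng.choice(past_returns) for _ in range(horizon))
        values.append(initial_value * math.exp(total))
    return sorted(values)

def percentile(sorted_values, p):
    """Value at cumulative probability p (0 < p < 1), nearest-rank method."""
    idx = min(int(p * len(sorted_values)), len(sorted_values) - 1)
    return sorted_values[idx]
```

Reading off, say, the 20th, 50th, and 80th percentiles of the sorted terminal values yields exactly the kind of value-versus-cumulative-probability curve the patent describes.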
In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
FIG. 1 illustrates a preferred embodiment of an electronic system 100 capable of providing some or all of the functionality described below. The system 100 preferably comprises a computer system and, as such, includes at least one central processing unit (“CPU”) 102, bridge device 106, volatile memory 104, a display 110, input/output (“I/O”) controller 120, an input device 122 and non-volatile memory 124. FIG. 1 shows an exemplary configuration of the components comprising computer system 100, and the system 100 may comprise numerous other configurations. As shown, the CPU 102, memory 104, and display 110 couple to the bridge device 106. Images (e.g., graphics, text) are provided to display 110 for presentation to a user of the system 100. Via the input device 122 (which may comprise a keyboard, mouse, trackball, etc.), a user can interact with the computer system 100. Input signals from the input device 122 (e.g., keyboard, mouse, etc.) may be provided to the CPU 102 via the I/O controller 120.
The non-volatile memory 124 may comprise a hard disk drive or other type of storage medium. The non-volatile memory 124 may include one or more applications that are executable by the CPU 102. At least one such application is a portfolio analysis application 130, which implements some, or all, of the functionality described below. The application 130 thus comprises instructions that are retained in a storage medium and are executed by the computer. Any of the values described below which are provided by a user of the portfolio analysis application to the computer system 100 may be input via the input device. The non-volatile memory 124 may also include a “historical data” file 132, which will be explained below.
Although the computer system 100 in FIG. 1 includes a display 110 and an input device 122, other embodiments of an electronic system include computers without displays and input devices. For example, the electronic system may comprise a “headless” server computer which may include one or more CPUs, volatile and non-volatile memory, and other logic, but may not include input and/or output devices (display, keyboard, etc.). Such a server can be manufactured as a module that mounts in a support structure such as a rack. The rack may have capacity to accommodate a plurality of servers. One or more servers in the rack may be capable of performing the processes described below. A console including an input device (e.g., keyboard) and an output device (e.g., a display) may be operatively coupled to any of the servers in the rack to program, configure, or otherwise operate the server as described below, as well as to provide a display for any results.
In accordance with the preferred embodiments of the invention, the CPU 102, executing portfolio analysis application 130, determines a distribution of possible returns for a portfolio at a future time. FIG. 2 shows an example of an estimated return distribution 50 for a particular portfolio at a future time. The time associated with the exemplary return distribution 50 is 20 years into the future. The return distribution 50 shows various possible portfolio values along the x-axis 52 and corresponding cumulative percentiles along the y-axis 54. In the example of FIG. 2, the return distribution 50 pertains to a starting amount of capital of $100,000. The return distribution 50 may be shown on the display 110 or stored in a file in the non-volatile memory 124 for subsequent access.
The 20, 50 and 80 percentile values along the y-axis 54 will now be described to aid in understanding the meaning of the estimated return distribution. The 20 percentile point corresponds to a portfolio value on the x-axis 52 of $417,000. This means that a portfolio value of $417,000 may be obtainable 20 years into the future, but that (relatively low) value (or a lower value) is likely to occur only 20% of the time. Alternatively stated, the 20 percentile point means that the chances are 80% that the portfolio will be worth at least $417,000. The 50 percentile point corresponds to a portfolio value of $873,000, meaning that 50% of the time, the portfolio depicted in FIG. 2 will result in a value of at least $873,000 from an initial $100,000, 20 years into the future. Similarly, the 80 percentile point, which corresponds to a portfolio value of $1.82M, indicates that such a relatively high portfolio value of $1.82M or more 20 years into the future may be achievable only 20% of the time. Thus, the return distribution reflects the probability of achieving a spectrum of results at a particular time in the future.
Referring now to FIG. 3, a method is shown, in accordance with a preferred embodiment of the invention, that may be performed by the computer system's portfolio analysis application 130. As shown, the preferred method includes obtaining historical data pertaining to a portfolio (block 200). In block 202, the return distribution 50 for the portfolio is determined at a future time as described below.
The historical data obtained in block 200 may include the historical return (generally price change plus dividends paid) of a single investment, in the case in which the portfolio contains only a single security, or the historical prices of multiple securities when the portfolio comprises multiple securities. The historical prices include the price(s) of the investment(s) on a periodic basis over a past period of time. For example, the historical data may include monthly stock prices over the last 75 years or yearly prices for the last 75 years. In general, the historical data reflects the change in price of the investment(s) over a previous period of time. Historical data may be obtained in accordance with any suitable technique such as from on-line databases, newspapers, etc. The historical data preferably is stored in a file 132 located on the non-volatile memory 124 (FIG. 1). The act of obtaining the historical data may include retrieving the data from a suitable source and loading such data into file 132. Alternatively, the act of obtaining the historical data may include retrieving the data from file 132.
FIG. 4 shows an exemplary embodiment for determining a return distribution 50 at a future time T (block 202 in FIG. 3) for a portfolio having a single security. As shown in FIG. 4, determining a return distribution may comprise blocks 220-224. In block 220, the initial value of the security at issue is obtained. The initial value may be “today's” trading price for the security. The initial security value is represented as “S(0).” The “0” means that the security price is given at time 0 (i.e., current price). Block 220 may also include selecting a future time for which the portfolio analysis application is to compute a return distribution such as distribution 50. The future time is represented as “T” and preferably is measured relative to time 0 at which S(0) is obtained.
Block 222 includes estimating the security's growth and volatility. A variety of measures of growth and volatility are acceptable.
One suitable representation of volatility includes an estimate of volatility that is commonly referred to as σ̂ and is given by the following formula:

$$\hat{\sigma} = \sqrt{\frac{S_R^2}{\Delta t}} \qquad (1)$$

where $S_R^2$ is represented by

$$S_R^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(R(i)-\bar{R}\right)^2$$

and Δt represents the time interval between successive security prices in the historical data.
A suitable representation of growth includes an estimate of growth that is commonly referred to as μ̂ and is given by the following formula:

$$\hat{\mu} = \frac{\bar{R}}{\Delta t} + \frac{\hat{\sigma}^2}{2} \qquad (2)$$

where $\bar{R}$ is given by

$$\bar{R} = \frac{1}{N}\sum_{i=1}^{N}R(i).$$

In the preceding formulae, N represents the number of time intervals represented by the historical data and R(i) represents the period-to-period growth of the security. One suitable representation of R(i) includes

$$R(i) = \log\left[\frac{S(t_i)}{S(t_{i-1})}\right].$$

In block 224, the portfolio analysis application 130 preferably computes the security's return distribution at future time T using a suitable model of returns. A variety of suitable return models may be used for block 224. FIG. 5 illustrates one exemplary technique for implementing block 224.
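The growth and volatility estimates of equations (1) and (2) can be sketched in a few lines; this is a minimal illustration, and the function name and the sample price series are illustrative rather than part of the disclosed method:

```python
import math

def estimate_growth_and_volatility(prices, dt):
    """Sketch of equations (1) and (2): estimate sigma-hat and mu-hat
    from periodic security prices spaced dt (in years) apart."""
    # R(i) = log(S(t_i) / S(t_{i-1})), the period-to-period growth
    r = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(r)                                         # N time intervals
    r_bar = sum(r) / n                                 # R-bar
    s_r2 = sum((x - r_bar) ** 2 for x in r) / (n - 1)  # S_R^2
    sigma_hat = math.sqrt(s_r2 / dt)                   # equation (1)
    mu_hat = r_bar / dt + sigma_hat ** 2 / 2           # equation (2)
    return mu_hat, sigma_hat

# Example with illustrative monthly prices (dt = 1/12 year)
mu_hat, sigma_hat = estimate_growth_and_volatility(
    [100, 102, 101, 105, 107, 104, 110], dt=1 / 12)
```

Note that the sample variance uses N−1 in the denominator, matching the definition of $S_R^2$ above.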
A value from a suitable random probability distribution is computed in block 230. Although a variety of random probability functions may be used in this regard, the "Z" probability distribution preferably is used. The density function of Z may be given by

$$f(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}.$$

In general, Z comprises a normal (i.e., Gaussian) random variable whose mean is equal to zero and variance is equal to one. Software routines are widely known that can be implemented on computer system 100 to compute values of the Z random probability function. By way of example, Table I below lists various values of the Z distribution function for various probabilities.

TABLE I
VALUES OF Z
Z        PROBABILITY
2.3263   .01
1.6449   .05
1.2816   .10
1.0364   .15
 .8416   .20
 .6745   .25
 .5244   .30
 .2533   .40
0        .50

In block 232, an equation that suitably models future security returns is evaluated using the value of Z computed in block 230. One suitable model of returns comprises the Geometric Brownian Motion model which, in the context of security analysis, is given by:

$$S(T) = S(0)\,e^{\left[(\mu - \tfrac{1}{2}\sigma^2)T + \sigma\sqrt{T}\,z\right]} \qquad (3)$$

where the estimates of growth (μ̂) and volatility (σ̂) are used in equation (3) above in place of μ and σ, respectively. Other estimates or proxies of μ and σ may also be used. Once solved, equation (3) provides a value of S(T) which indicates the value of the security at future time T for the value of Z computed in block 230.
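Equation (3) can be exercised as a Monte Carlo: draw many standard-normal Z values, evaluate S(T) for each, and sort the results to read off percentiles. The sketch below assumes illustrative parameters (the initial value, growth, volatility, and seed are not taken from the disclosure):

```python
import math, random

def gbm_return_distribution(s0, mu, sigma, t, b=10_000, seed=42):
    """Sketch of equation (3) run as a Monte Carlo: draw B standard-normal
    Z values, evaluate S(T) for each, and sort the results."""
    rng = random.Random(seed)
    samples = [
        s0 * math.exp((mu - 0.5 * sigma ** 2) * t
                      + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0))
        for _ in range(b)
    ]
    samples.sort()
    return samples

# Illustrative parameters: $100,000 initial value, 20-year horizon.
# With B = 10,000 sorted values, the 2,000th from the smallest is the
# lower 20 percentile, the 5,000th the median, the 8,000th the 80th.
dist = gbm_return_distribution(s0=100_000, mu=0.11, sigma=0.15, t=20)
p20, p50, p80 = dist[2_000], dist[5_000], dist[8_000]
```

Reading percentiles off the sorted array is the same interpretation applied to the 20/50/80 percentile points discussed above for FIG. 2.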
The value of Z used in equation (3) corresponds to a particular probability and thus the computed value of S(T) is similarly associated with a probability. Blocks 230 and 232 may be performed once or repeated with different values of Z (and thus different probabilities). Using multiple Z values to compute multiple S(T) values results in a return distribution for the security at future time T.

Another embodiment for determining a security return distribution 50 is shown in FIG. 6, which includes blocks 240-244. In block 240, using any one of the many standard computer routines for generating simulated observations from a normal distribution, a value of Z is generated from a normal (Gaussian) distribution with mean zero and variance one. In block 242, S(T) is computed using the value of Z computed in 240. Blocks 240 and 242 preferably are repeated B times (e.g., 10,000 times). The B values of S(T) are then sorted (block 244). Once sorted, the B values of S(T) represent the return distribution for the security. For example, and also referring to FIG. 2, if B is 10,000, the value of S(T) that is 1,000 from the smallest represents the lower 10 percentile value. Further, the value of S(T) that is at the mid-point (i.e., 5,000 from the lowest/highest) represents the median point (the 50 percentile point).

As is commonly known, a market (such as the stock market) may suffer general downturns or upturns. These downturns/upturns may appear to occur again and again. The embodiments of FIGS. 5 and 6 may be modified to account for such downturns/upturns. The modifications to the processes of FIGS. 5 and 6 to account for downturns/upturns are shown in FIGS. 7 and 8, respectively. The modifications shown in FIGS. 7 and 8 assume that two types of downward jumps occur periodically. The first type is a 10% downturn that is assumed to occur on average once every year. This assumption is modeled with the variable λ₁ set equal to a value of 1.
The second type is a 20% downturn that is assumed to occur on average once every five years, which is modeled with the variable λ₂ set equal to 0.2. Upturns may also be modeled. Further, one or more jumps may be modeled, not just the two jumps modeled in FIGS. 7 and 8.

Referring now to FIG. 7, blocks 230 and 232 are the same as in FIG. 5. Blocks 252-260 generally pertain to modeling the 10% and 20% downturns noted above. The jumps depicted in FIG. 7 are represented using a limiting distribution function. One suitable type of limiting distribution function comprises a Poisson function which may be given by:

$$P_\mu(c) = e^{-\mu}\left[\frac{\mu^c}{c!}\right] \qquad (4)$$

where μ is an input value. The downturns/upturns may also be represented by more gradual drift or other non-jump processes.

In block 252, the Poisson variate c is generated using (4) with the input value for μ being λ₁×T. The resulting value of the Poisson variate c using λ₁ is represented by m₁. In block 254, the value of S(T) computed in 232 is multiplied by 0.9^m₁ and the resulting product replaces the previously calculated value of S(T). In blocks 256 and 258, the newly computed value of S(T) from block 254 is again changed, this time by multiplying S(T) by 0.8^m₂ (m₂ represents a generated Poisson variate using a μ value of λ₂×T).

The newly computed S(T) value in 258 may be used in all the following steps in the process. Blocks 230-258 may be performed one or more times for all values of Z to generate a return distribution (e.g., distribution 50).
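The jump adjustment described above can be sketched as follows; this is a minimal illustration assuming the 10%/20% downturn sizes and rates given in the text, with the function name and seeding being illustrative:

```python
import math, random

def apply_downturn_jumps(s_t, t, lam1=1.0, lam2=0.2, rng=None):
    """Sketch of blocks 252-258: adjust one simulated S(T) for a 10%
    downturn at rate lam1 per year and a 20% downturn at rate lam2 per
    year, with m1 and m2 drawn from Poisson distributions of means
    lam1*T and lam2*T (equation (4))."""
    rng = rng or random.Random(0)

    def poisson(mean):
        # Inverse-transform sampling from the Poisson distribution
        c, p = 0, math.exp(-mean)
        cum, u = p, rng.random()
        while u > cum:
            c += 1
            p *= mean / c
            cum += p
        return c

    m1 = poisson(lam1 * t)  # number of 10% downturns by time T
    m2 = poisson(lam2 * t)  # number of 20% downturns by time T
    return s_t * (0.9 ** m1) * (0.8 ** m2)
```

Because each 10% downturn multiplies the value by 0.9 and each 20% downturn by 0.8, the adjusted S(T) is the GBM value scaled by 0.9^m₁ · 0.8^m₂.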
Referring to FIG. 8, block 224 (from FIG. 4) may be implemented with blocks 240, 242, 270, 272, 274, 276, and 244. Blocks 240, 242 and 244 preferably are the same as discussed above with regard to FIG. 6. Blocks 270-276 adjust the computed value of S(T) from block 242 for the two downward jumps (modeled by λ₁ and λ₂). In block 270, a Poisson variate is generated using λ₁×T as an input value μ, with the resulting Poisson value being represented by m₁. In block 272, the value of S(T) computed in 242 is multiplied by 0.9^m₁ and the resulting product replaces the previously calculated value of S(T) from 242. In blocks 274 and 276, the newly computed value of S(T) from block 272 is again changed, this time by multiplying S(T) by 0.8^m₂ (m₂ represents a generated Poisson variate using λ₂×T as input value μ). The newly computed S(T) value in 276 is used in the process from that step forward. Blocks 240-276 may be repeated B times (e.g., 10,000) and the resulting B S(T) samples are sorted in 244 to generate the return distribution for the security.

As mentioned above, the analysis described to this juncture is related to a portfolio having only a single security. Referring to FIG. 9, another embodiment of block 224 from FIG. 4 is shown in which a portfolio's return distribution is computed for a plurality of securities. The following discussion introduces various quantities that are used in the process of FIG. 9. The number of securities in the portfolio in the example of FIG. 9 is represented by p. As will be explained below, the process of FIG. 9 uses a matrix "L" which preferably comprises the Cholesky decomposition of a covariance matrix comprising variance values pertaining to the p securities. The matrix L may be computed in the following way, although other techniques may be possible. It is assumed that the time intervals between adjacent times in the historical data are equal and represented by Δt.
For the jth security in the portfolio, the return from time period to time period may be represented by R_ij as follows (generally, we will take S_j(t_i) to be the price of the jth security at time t_i plus the dividends paid since t_{i-1}):

$$R_{ij} = \log\left[\frac{S_j(t_i)}{S_j(t_{i-1})}\right] \qquad (5)$$

Accounting for the return rates of the securities, letting

$$\bar{R}_j = \frac{1}{N}\sum_{i=1}^{N}R_{i,j},$$

we have

$$\hat{\sigma}_{jk} = \frac{1}{(N-1)(\Delta t)}\sum_{i=1}^{N}\left(R_{i,j}-\bar{R}_j\right)\left(R_{i,k}-\bar{R}_k\right) \qquad (6)$$

An estimated variance-covariance matrix then can be generated as follows:

$$\hat{\Sigma} = \begin{pmatrix} \hat{\sigma}_{11} & \hat{\sigma}_{12} & \cdots & \hat{\sigma}_{1p} \\ \hat{\sigma}_{21} & \hat{\sigma}_{22} & \cdots & \hat{\sigma}_{2p} \\ \cdots & \cdots & \cdots & \cdots \\ \hat{\sigma}_{1p} & \hat{\sigma}_{2p} & \cdots & \hat{\sigma}_{pp} \end{pmatrix} \qquad (7)$$

The Cholesky decomposition may be determined from the variance-covariance matrix Σ̂ as:

$$\hat{\Sigma} = LL^{T} \qquad (8)$$

Further, estimates of μ_j and σ_j are given by

$$\hat{\mu}_j = \frac{\bar{R}_j}{\Delta t} + \frac{\hat{\sigma}_j^2}{2}$$

and, from (1),

$$\hat{\sigma}_j = \sqrt{\frac{S_{R_j}^2}{\Delta t}} = \sqrt{\hat{\sigma}_{jj}},$$

respectively, for a multi-security portfolio.

The embodiment of FIG. 9 preferably comprises blocks 280-290. In block 280, p values of Z are computed and placed into a row vector designated as Z = (z₁, z₂, . . . , z_p). In block 282, a row vector V is computed by multiplying the row vector Z by the transpose of matrix L. That is, V = ZLᵀ = (v₁, v₂, . . . , v_p).
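The covariance estimation and Cholesky steps of equations (6)-(8), together with the correlated draw V = ZLᵀ, can be sketched as below; the price matrix, the time spacing dt, and the seed are illustrative, not values from the disclosure:

```python
import numpy as np

# Sketch of equations (5)-(8): estimate the annualized variance-covariance
# matrix of log returns, take its Cholesky factor L, and correlate a row
# vector Z of independent standard-normal draws via V = Z L^T.
rng = np.random.default_rng(1)

prices = np.array([[100.0, 50.0, 20.0],   # rows: times; columns: p securities
                   [104.0, 51.0, 21.0],
                   [102.0, 53.0, 20.0],
                   [108.0, 52.0, 22.0],
                   [110.0, 55.0, 23.0]])
dt = 1 / 12                                # monthly observations

R = np.log(prices[1:] / prices[:-1])       # N by p log returns, equation (5)
cov = np.cov(R, rowvar=False) / dt         # equation (6), annualized
L = np.linalg.cholesky(cov)                # equation (8): cov = L L^T

Z = rng.standard_normal(R.shape[1])        # p independent standard normals
V = Z @ L.T                                # correlated draws, V = Z L^T
```

`np.cov` with its default divisor N−1 matches the 1/(N−1) factor in equation (6), and `np.linalg.cholesky` returns the lower-triangular factor L.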
In block 284, for each of the p securities in the portfolio, a forecast future security value S_j(T) may be computed as:

$$S_j(T) = S_j(0)\,e^{\left[\left(\mu_j - \tfrac{1}{2}\sigma_j^2\right)T + V_j\sqrt{T}\right]} \qquad (9)$$

In accordance with block 286, the S_j(T) values of the p stocks are saved as a row vector S_i(T). Blocks 280-286 may be repeated B times (e.g., 10,000), thereby creating B S_i(T) row vectors. The result is a B by p matrix in which each row generally corresponds to a particular value of Z and its associated probability. In block 290, the S_i(T) values preferably are stored in the B by p matrix S(T).

FIG. 10 shows another embodiment of block 224 of FIG. 4 in which two downward jumps are modeled, as described previously. The process depicted in FIG. 10 includes blocks 280, 282, 284 and 286 as discussed above with regard to FIG. 9. Blocks 280-286 preferably are repeated B times. In block 292, the S_ij(T) values are stored in a B by p dimensional matrix, S(T). In block 294, a computer generated value (m₁) from the Poisson probability distribution is computed using λ₁×T as an input value. Then, as described previously, the S_ij(T) value is replaced by 0.9^m₁×S_ij(T) (block 296).
In blocks 298 and 300, the newly computed S_ij(T) value is again replaced by S_ij(T)×0.8^m₂, where m₂ is a computer generated value from the Poisson distribution computed with an input value of λ₂×T. Blocks 294-300 preferably are repeated B times, after which the resulting S_i(T) values are stored in matrix S(T) (block 302).

Referring now to FIG. 11, an alternative process is shown for implementing block 202 in FIG. 3. The process shown in FIG. 11 may not use a model of future returns, as was the case for the processes described above. As shown, the exemplary process of FIG. 11 includes computing period-to-period returns using the historical data (block 216). One exemplary technique for computing such historical returns includes resampling the period-to-period growth (or decline) of each of p stocks (the value S_j(t_i) will include the stock price plus dividends paid since the last time t_{i-1}). Such growth is given by:

$$R_{i,j} = \log\left[\frac{S_j(t_i)}{S_j(t_{i-1})}\right] \qquad (10)$$

For p stocks considered over N time periods, the R_ij values are stored in an N by p matrix R. In block 218, the distribution of the p securities' return at future time T may be computed based on a sampling of the historical returns computed in 216.

FIG. 12 shows an exemplary process usable to perform block 218 in FIG. 11. As shown in FIG. 12, the exemplary block 218 process may comprise blocks 320-326.
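The resampling forecast of FIG. 11 can be sketched as follows: draw historical log returns with replacement and compound them onto the current price. The function name, sample data, and the choice of the latest price as S(0) are illustrative assumptions:

```python
import math, random

def resample_forecast(prices, n_steps, rng):
    """One bootstrap forecast in the spirit of FIG. 11: sample n_steps
    historical log returns with replacement and compound them onto the
    current price (illustrative sketch)."""
    # Period-to-period log returns from the historical series
    r = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    picks = [rng.choice(r) for _ in range(n_steps)]  # sample with replacement
    return prices[-1] * math.exp(sum(picks))         # compound onto S(0)

# Repeat B times and sort to obtain the forecast distribution
rng = random.Random(7)
hist = [100, 102, 101, 105, 107, 104, 110]
b = 10_000
dist = sorted(resample_forecast(hist, n_steps=24, rng=rng) for _ in range(b))
median = dist[b // 2]
```

Unlike the GBM approach, no parametric model of returns is assumed; the empirical returns themselves drive the forecast.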
In block 320, the number of "time steps" n is computed that corresponds to future time T. Blocks 322 through 325 preferably are repeated B times (e.g., 10,000). In block 322, n random values from N are sampled with replacement (i.e., the same value may be sampled more than once). For example, the ith sample of size n from the N integers will be denoted as l_{i,1}, l_{i,2}, . . . , l_{i,n}. In block 324, future security values are forecast using, for j = 1, 2, . . . , p, the equation:

$$SS_j(T) = S_j(0)\exp\left[R_{l(i,1),j} + R_{l(i,2),j} + \cdots + R_{l(i,n),j}\right] \qquad (11)$$

The values (SS_{i,1}, SS_{i,2}, . . . , SS_{i,p}) preferably are stored as a row vector in block 325. In block 326, the B such row vectors are stored in a B by p matrix SS.

FIG. 13 provides blocks 330-338, which may follow block 326 (FIG. 12) to account for downturns/upturns as discussed above. In blocks 330 and 332, a first value (m₁) of a Poisson distribution is generated based on the first downturn model (λ₁) and used to adjust the return values accordingly. Similarly, in blocks 334-336, the return values are again adjusted based on the other downturn model (λ₂). The B by p matrix of resampled row vectors is then stored in SS (block 338).

There are numerous uses of the portfolio analysis application 130. The following examples are provided merely for illustrative purposes and, in no way, should be used to limit the scope of the disclosure including the claims. Let us suppose we have a portfolio of current value P(0), invested in p stocks with the fraction invested in the jth stock being c_j.
The c_j satisfy the condition that typically, but not necessarily, they are nonnegative and that, for a total investment at time zero of P(0),

$$P(0) = \sum_{j=1}^{p} c_j S_j(0) \qquad (12)$$

Further, at least one time horizon T may be used. Referring to block 350 in FIG. 14, for an empirical distribution function F_T of forecasted values of the portfolio at time T, it may be desired to modify the weights such that equation (12) is satisfied, along with one or several constraints explicitly or implicitly based on F_T, say C₁(F_T), and a criterion function explicitly or implicitly based on F_T, say C₂(F_T). Then, in block 370, a B by p matrix of simulated forecast stock values S(T) may be entered for time horizon T from 290, 302, 326, or 338.

In block 380, the p initial weightings are also entered so that c₁S₁(0)+c₂S₂(0)+ . . . +c_pS_p(0)=P(0). For each of the B row vectors, denoted by i, the expression c₁S_i1(T)+c₂S_i2(T)+ . . . +c_pS_ip(T)=P_i(T) preferably is evaluated (block 390). Next, the B P_i(T) values may be ranked to generate the empirical distribution function F_T in block 390 (via F_T(x) = (1/B)·(number of P_i(T) < x)). A constrained optimization procedure (block 410) is then performed whereby the weights may be changed such that (12) is satisfied as well as the constraint C₁(F_T), and the criterion function C₂(F_T) is optimized.
This may be done in an iterative fashion until the change in C₂(F_T) is small (i.e., less than a predetermined threshold). Any one of a variety of techniques of optimization may be utilized. For example, the "polytope" procedure of Nelder and Mead, described in many sources, e.g., by James R. Thompson, Simulation: A Modeler's Approach (John Wiley & Sons, 2000), and incorporated herein by reference, may be used. Moreover, it is possible to look at the forecast F_T for several future times, or a continuum of future times, and base the constraints on these empirical distribution functions or the empirical distribution functions at other times. For example, one could use the B row vectors of width p from block 292 (or blocks 302, 326, or 338) to obtain the forecasted portfolio value for the ith of B simulations at time T. That is,

$$P_i(T) = \sum_{j=1}^{p} c_j S_{ij}(T) \qquad (13)$$

These B values of the portfolio then may be sorted to obtain the simugram of the value of the portfolio with these weights.
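The portfolio evaluation of equation (13) and the empirical distribution function F_T can be sketched as below; the tiny simulated matrix and the weights are illustrative (a real run would use the B by p matrix from the simulation steps above):

```python
import bisect

# Sketch of forming P_i(T) per equation (13) and the empirical
# distribution F_T(x) = (1/B) * (number of P_i(T) < x).
S_T = [  # B = 4 illustrative simulated rows, p = 3 securities
    [95.0, 52.0, 18.0],
    [110.0, 49.0, 22.0],
    [130.0, 55.0, 25.0],
    [150.0, 60.0, 30.0],
]
c = [2.0, 1.0, 5.0]  # illustrative holdings c_j of each security

# P_i(T) = sum_j c_j * S_ij(T), then sort to obtain the simugram
P = sorted(sum(cj * sij for cj, sij in zip(c, row)) for row in S_T)

def F_T(x):
    """Fraction of simulated portfolio values P_i(T) below x."""
    return bisect.bisect_left(P, x) / len(P)
```

Sorting P gives the simugram directly; `bisect_left` counts the P_i(T) values strictly below x, matching the definition of F_T above.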
Alternatively, block 326 or 338 may be used to obtain the forecasted portfolio value as:

$$P_i(T) = \sum_{j=1}^{p} c_j SS_{ij}(T) \qquad (14)$$

These B values of the portfolio may be sorted to obtain the simugram of the value of the portfolio with these weights. Optimization of the portfolio may be achieved using many different criterion functions and subject to a variety of constraints by readjusting the c_j values (subject to equation (12)).

The following provides additional examples. The numbers included in the examples below are included merely to facilitate understanding the examples. Referring to the example of FIG. 2, the application 130 may be used to determine a portfolio (via readjustment of weights) that maximizes the return computed for the lower 20 percentile. This approach may be desirable for investors who wish to minimize their downside risk.

By way of an additional example, an investor may wish to consider multiple criteria. First, the investor may desire to know the portfolio whose 20-year, lower 20 percentile is at least 4%. Subject to that constraint, the investor also may wish to maximize the 50 percentile (median) return. Further still, the application 130 could be used to help an investor who desires to be 90% sure his or her portfolio will achieve a compounded 5% rate and, subject to that constraint, further desires to maximize the median return. Time is easily varied in the portfolio analysis application 130.
This flexibility permits multiple return distributions to be computed for different time periods. In yet an additional example, 90 components of the Standard & Poor's ("S&P") 100 were used that were active from May 1996 through December 2001. The forecast distribution is considered 5 years out in this example, and the maximization criterion is the mean. The minimum risk percentile is 20 and the minimum risk return value is 1.15. A maximum allocation of 0.05 for any single component is used. The number of simulations was 10,000 for determination of the parametric allocation ("par alloc"), and the number of resamples was 10,000 for the parametric and nonparametric ("npar alloc") allocations. The resulting weights that should be applied to the various stocks to maximize the forecasted mean are computed as described above and as shown in Table II below. ("id" is the number of the stock in the portfolio, "permno" is an identifier number for the stock, and "ticker" is the symbol for the stock of the company traded on the NYSE, the American Stock Exchange, or the NASDAQ.)
TABLE II
id  permno  ticker  μ̂      σ̂     par alloc  npar alloc
1   10104   ORCL    0.44   0.65   0.05   0.05
2   10107   MSFT    0.39   0.48   0.05   0.05
3   10145   HON     0.12   0.44   0.00   0.00
4   10147   EMC     0.48   0.61   0.05   0.05
5   10401   T       0.05   0.40   0.00   0.00
6   10890   UIS     0.36   0.68   0.05   0.05
7   11308   KO      0.07   0.31   0.00   0.00
8   11703   DD      0.05   0.28   0.00   0.00
9   11754   EK     −0.11   0.35   0.00   0.00
10  11850   XOM     0.12   0.17   0.00   0.00
11  12052   GD      0.19   0.25   0.00   0.00
12  12060   GE      0.23   0.26   0.00   0.00
13  12079   GM      0.08   0.36   0.00   0.00
14  12490   IBM     0.32   0.34   0.05   0.05
15  13100   MAY     0.07   0.29   0.00   0.00
16  13856   PEP     0.13   0.28   0.00   0.00
17  13901   MO      0.13   0.32   0.00   0.00
18  14008   AMGN    0.32   0.39   0.05   0.05
19  14277   SLB     0.12   0.36   0.00   0.00
20  14322   S       0.05   0.36   0.00   0.00
21  15560   RSH     0.27   0.49   0.05   0.00
22  15579   TXN     0.40   0.56   0.05   0.05
23  16424   G       0.09   0.33   0.00   0.00
24  17830   UTX     0.21   0.35   0.00   0.00
25  18163   PG      0.16   0.31   0.00   0.00
26  18382   PHA     0.12   0.30   0.00   0.00
27  18411   SO      0.14   0.24   0.00   0.00
28  18729   CL      0.25   0.33   0.00   0.05
29  19393   BMY     0.20   0.26   0.00   0.00
30  19561   BA      0.05   0.35   0.00   0.00
31  20220   BDK     0.06   0.38   0.00   0.00
32  20626   DOW     0.07   0.30   0.00   0.00
33  21573   IP      0.07   0.36   0.00   0.00
34  21776   EXC     0.17   0.33   0.00   0.00
35  21936   PFE     0.26   0.28   0.05   0.05
36  22111   JNJ     0.20   0.26   0.00   0.00
37  22592   MMM     0.14   0.25   0.00   0.00
38  22752   MRK     0.17   0.31   0.00   0.00
39  22840   SLE     0.11   0.31   0.00   0.00
40  23077   HNZ     0.06   0.25   0.00   0.00
41  23819   HAL    −0.03   0.47   0.00   0.00
42  24010   ETR     0.11   0.29   0.00   0.00
43  24046   CCU     0.27   0.38   0.02   0.05
44  24109   AEP     0.04   0.22   0.00   0.00
45  24643   AA      0.21   0.37   0.00   0.00
46  24942   RTN     0.02   0.44   0.00   0.00
47  25320   CPB     0.04   0.29   0.00   0.00
48  26112   DAL     0.00   0.33   0.00   0.00
49  26403   DIS     0.05   0.33   0.00   0.00
50  27828   HWP     0.11   0.48   0.00   0.00
51  27887   BAX     0.21   0.24   0.00   0.00
52  27983   XRX     0.04   0.61   0.00   0.00
53  38156   WMB     0.14   0.33   0.00   0.00
54  38703   WFC     0.20   0.31   0.00   0.00
55  39917   WY      0.07   0.32   0.00   0.00
56  40125   CSC     0.17   0.48   0.00   0.00
57  40416   AVP     0.24   0.47   0.03   0.00
58  42024   BCC     0.00   0.33   0.00   0.00
59  43123   ATI    −0.05   0.39   0.00   0.00
60  43449   MCD     0.05   0.26   0.00   0.00
61  45356   TYC     0.37   0.33   0.05   0.05
62  47896   JPM     0.15   0.38   0.00   0.00
63  50227   BNI     0.03   0.27   0.00   0.00
64  51377   NSM     0.35   0.69   0.05   0.05
65  52919   MER     0.31   0.44   0.05   0.05
66  55976   WMT     0.32   0.31   0.05   0.05
67  58640   NT      0.23   0.64   0.00   0.00
68  59176   AXP     0.19   0.31   0.00   0.00
69  59184   BUD     0.20   0.21   0.00   0.00
70  59328   INTC    0.37   0.52   0.05   0.05
71  59408   BAC     0.14   0.34   0.00   0.00
72  60097   MDT     0.28   0.28   0.05   0.05
73  60628   FDX     0.23   0.35   0.00   0.00
74  61065   TOY     0.06   0.48   0.00   0.00
75  64186   CI      0.20   0.28   0.00   0.00
76  64282   LTD     0.16   0.42   0.00   0.00
77  64311   NSC    −0.02   0.34   0.00   0.00
78  65138   ONE     0.10   0.37   0.00   0.00
79  65875   VZ      0.11   0.29   0.00   0.00
80  66093   SBC     0.12   0.29   0.00   0.00
81  66157   USB     0.12   0.37   0.00   0.00
82  66181   HD      0.33   0.32   0.05   0.05
83  66800   AIG     0.26   0.26   0.05   0.05
84  69032   MWD     0.34   0.47   0.05   0.05
85  70519   C       0.35   0.36   0.05   0.05
86  75034   BHI     0.11   0.41   0.00   0.00
87  75104   VIA     0.21   0.37   0.00   0.00
88  76090   HET     0.09   0.39   0.00   0.00
89  82775   HIG     0.24   0.37   0.00   0.00
90  83332   LU      0.10   0.54   0.00   0.00

By producing a distribution of returns for a portfolio, growth and risk can be integrated in a manner easily understandable by a lay investor.

The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

BRIEF DESCRIPTION OF THE DRAWINGS

For a detailed description of the preferred embodiments of the invention, reference will now be made to the accompanying drawings in which:

FIG. 1 shows a block diagram of an electronic system usable in accordance with the preferred embodiments of the invention;

FIG. 2 shows an exemplary return distribution in accordance with a preferred embodiment of the invention;

FIG. 3 shows a preferred process for obtaining a distribution that is indicative of a portfolio's estimated returns at a future time, including determining a return distribution based on historical data;

FIG. 4 shows a preferred process for determining the estimated return distribution based on historical data as shown in FIG. 3, including computing a security's returns using a model of returns;

FIG. 5 shows a process of forecasting a security's returns as in FIG. 4 in accordance with a preferred embodiment of the invention;

FIG. 6 shows another process of forecasting a security's returns as in FIG. 4 in accordance with a preferred embodiment of the invention;

FIG. 7 shows another process of estimating a security's returns as in FIG. 4 in accordance with a preferred embodiment of the invention which takes into account upturns and/or downturns;

FIG. 8 shows another process of estimating a security's returns as in FIG. 4 in accordance with a preferred embodiment of the invention which takes into account upturns and/or downturns;

FIG. 9 shows another process of forecasting future returns of multiple securities in accordance with a preferred embodiment of the invention;

FIG. 10 shows another process of forecasting future returns of multiple securities in accordance with a preferred embodiment of the invention which takes into account upturns and/or downturns;

FIG. 11 shows a preferred process for estimating the return distribution based on historical data as shown in FIG. 3, including resampling previous returns of a security or securities;

FIG. 12 shows a preferred embodiment for resampling previous returns of a security or securities to determine the return distribution;

FIG. 13 shows a preferred embodiment for how the process of FIG. 12 may be modified to take into account upturns and/or downturns; and

FIG. 14 shows a flow chart usable in an example given below.