Scientific Name: Orcinus orca
Family Name: Delphinidae
Size: average 5.8 to 6.7 m
Weight: between 3,628 and 5,442 kg
Color: black and white
Diet: fishes, squids, seals, sea lions, walruses, birds, sea turtles, otters, penguins, cetaceans (both mysticete and odontocete), polar bears, reptiles, and even a moose
Killer Whale
Killer Whale Description
The orca, or killer whale, is a toothed whale and an efficient predator, even attacking huge young blue whales. Its only enemy is human beings. Orcas live in small, close-knit, lifelong pods and have a single blowhole. The killer whale belongs to the dolphin family and is the biggest dolphin. It is sometimes called the "wolf of the sea" because its behavior is similar to that of wolves.
Copyright 2004, | http://www.worldscreatures.com/water-species/dolphins/killer-whale.htm |
Highland Park Independent School District Athletic Director Johnny Ringo has spent his entire career in pursuit of helping middle and high school athletes attempt to reach their full potential. With over 30 years' worth of coaching and athletics administration, Ringo has seen sports – especially football – evolve.
Now in his fifth season with the Scots, one of the chief changes Ringo has seen is the specialization of athletes. To counter it, Ringo oversees a district policy – beginning in middle school – that requires student-athletes from seventh grade through their sophomore year to participate in at least two sports.
RELATED CONTENT: NFL Draft 2018: First round loaded with multi-sport athletes
This policy has been a beneficial one not only to the student athletes, but also to Highland Park schools. In football, the Scots have won back-to-back state championships. For Ringo, the success is part and parcel of having multi-sport athletes in the athletic program.
EM: Studies have shown that there are numerous benefits for kids to be multisport athletes. What in particular does Highland Park I.S.D. do to promote this philosophy?
JR: "Being a multisport athlete is definitely something that we promote. Our goal is for all of our kids to play at least two sports through their sophomore year. We start telling them that in the sixth grade when we first get a chance to talk to them and teach them about our philosophy as an athletic department.
"In my role, one of the things I believe is that to have a great high school athletic experience, our kids need to be involved in everything they want to be involved in. We tell the athletes that what they want to participate in needs to be their decision and not their parents or their club coaches or trainers' decisions.”
RELATED CONTENT: The Spring League: A game of second chances
EM: What are some of the reasons, especially some of the tangible ones, that it is important to have multisport athletes?
JR: "On one level, I think it’s just important as kids for them to get the most out of their experience in school and not have any regrets about not doing something athletically. Once they are out of high school and are young adults, we don’t want them to look back and say, ‘I wish I would have done this or done that.’
"We try to explain to the students and their parents that it really isn’t a good thing to be playing any sport 12 months out of the year. There are injury factors, burnout, so many different things that come up when you specialize on a sport.
"With football, it really isn’t something we have to deal with too much, although 7-on-7s are a big thing and a lot of kids will have their own personal coaches and trainers. But in sports like volleyball and baseball, [specialization] becomes an issue and it can lead to injuries that often make kids want to stop playing sports altogether.”
RELATED CONTENT: Former Nebraska QB Jerry Tagge shows how perseverance pays off in football and life
EM: How do you explain to the students and the parents about the benefits of playing multiple sports? I think, ultimately, parents want what is best for their kids and that is why they lead them down this path of playing and training year-round. How do you educate them otherwise?
JR: "Well, I’m not saying that the kids don’t get good coaching when they do a sport with a private coach or a club coach. But one thing I try to get across to everyone is that while they are here, it is about playing for Highland Park and playing for the Scots. You live in this community and that’s what it should be about and I think people get that.
"We’ve had a tremendous amount of success as an athletics department because we get the best athletes we have playing all the sports we offer. When we succeed on the field, it raises the morale of the school and the community as a whole. It’s to everyone’s benefit for us as an athletics department to be successful and again one of the main reasons for that is our philosophy of multisport athletes."
EM: Earlier you mentioned about the potential for athletes to suffer from burnout from playing a sport year-round. Is this something you’ve encountered often in your career?
JR: "Yeah, I’ve seen it quite a few times. Again, not so much in football, but in sports like volleyball, I’ve seen a number of really great players that have had scholarship offers decide they don’t want to play anymore. | https://blogs.usafootball.com/blog/6133/see-why-this-texas-school-district-requires-student-athletes-to-play-more-than-one-sport?utm_source=facebook.com&utm_medium=social&utm_campaign=usaf&utm_term=content&utm_content=blog |
Much like the difficulties of upgrading aging infrastructure, the United States is falling behind emerging economies such as China and Brazil because its corporate tax code, political gridlock and bureaucratic red tape are too much to bear for the biggest multinational companies.
John Quelch writes at the Harvard Business Review that there's too much going right in China for it not to surpass the U.S. when it comes to international economic clout, and that American dismissal of the Asian powerhouse as a manufacturer, not innovator, severely underestimates the nation's ability to do so.
He writes:
From gunpowder on, the history of Chinese innovation is strong. Chinese society is highly competitive. When the Chinese can no longer make easy money imitating, they will start innovating. Home-grown innovations will motivate tougher enforcement of intellectual property regulations.
From his post, I gathered three reasons why China is going to succeed.
Those were:
- Education. Chinese parents invest heavily in their children's education, often because it directly impacts their comfort in old age.
- Regional competition. The Chinese understand that owning a brand is much more profitable than simply manufacturing it; major corporations in Japan and South Korea are great targets to compete against.
- The new generation. What China is today will not be what it is tomorrow. Millennials are "leading vibrant arts and fashion scenes in the major cities," building cultural infrastructure that encourages innovation.
The silver lining for the U.S. as far as competition is concerned is that it will take several decades for China's innovation play to reach maturation. But the writing is on the wall: China has a clear, coordinated strategy, and the U.S. does not. | https://www.zdnet.com/article/china-for-innovation-too-big-to-fail/ |
When working with categorical data, it is common to conduct tests such as the Tukey Honest Significant Difference (HSD) test. This is a powerful test that compares all pairs of groups, but as the number of groups increases, the number of comparisons grows quickly. The following is the method I use to visualize differences in categorical data.
Disclaimer: I am not claiming to have invented something new, this is just how I like to visualize this analysis, I am sure there are similar methods out there.
T-Test
Say you have two samples, and you want to determine if they come from the same population, i.e. are they "different". You could just compare their means and if they are different then you are good to go... right? Well, what if they are pretty close? How close is close enough?
To test this we have the t-test. We can test if two samples are significantly different from one another. | https://mikegrantham.me/blog/sqq/ |
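As a hedged illustration of the two-sample t-test just described (and the Tukey HSD comparison mentioned at the top of the post), the sketch below uses SciPy's ttest_ind and statsmodels' pairwise_tukeyhsd on made-up data; the 0.05 significance threshold is an assumption, not something the post specifies.

```python
# Illustrative sketch only: compares two samples with a t-test, then runs
# Tukey HSD across three groups. The data and alpha=0.05 are assumptions.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
a = rng.normal(loc=10.0, scale=2.0, size=30)   # sample A
b = rng.normal(loc=11.0, scale=2.0, size=30)   # sample B

# Two-sample t-test: are the means "different enough" given the spread?
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null: the sample means differ significantly.")
else:
    print("Fail to reject the null: no significant difference detected.")

# Tukey HSD: all pairwise comparisons once there are more than two groups.
c = rng.normal(loc=12.0, scale=2.0, size=30)   # a third group
values = np.concatenate([a, b, c])
labels = ["A"] * 30 + ["B"] * 30 + ["C"] * 30
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```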
The first session of the ‘Selfless Teacher Accomplishes Last Lift from the Water at the Price of His Own Life — Lecture Tour of the Heroic Deed of “Role Model of the Times” Wang Hongxu’ was held in Chongqing on September 28. The audience was moved by his spirit of selflessness and sacrifice as a teacher in his short life.
Wang Hongxu, Role Model of the Times. (iChongqing file photo)
Xu Linsheng, a good friend of Wang Hongxu, has experienced and witnessed how Wang Hongxu jumped into the water and rescued the two children.
He spoke first at the session, his voice stirring and powerful, deep and low. Perhaps because he was Wang Hongxu’s good friend, his lecture particularly resonated with and moved the audience.
“Till today, I still remember Hongxu lifting the child out of the water and pushing him to me,” said Xu. “If it were not for him wanting to rescue the second child and mustering his last energy to push the child to me, he might not have died.”
Xu choked with sobs, “Hongxu dashed into the water to save the two children without a thought for his own safety. He tried his best to save the children and accomplished the last lift from the water at the cost of his own life.”
As Xu’s eyes grew moist, many people in the audience were moved to tears.
Wang Shuya, a student of Wang Hongxu, walked on the rostrum and told some interesting stories about her teacher.
When talking about her teacher’s sacrifice to save lives, the little girl could not help crying, “I can’t believe it’s true. I can’t believe our warm-hearted and good-natured teacher has passed away so suddenly.”
Like her, many people in the audience also could not help sobbing.
From the perspective of a journalist, Cui Yao told audiences about Wang’s spirit of selflessness and sacrifice as a teacher in his short life.
In the first stage, Cui worked to establish Wang’s life trajectory as a teacher. In the second stage, he learned from the members of the lecture team that Wang was a good friend, husband, worker, and teacher.
“He was honest, frank, and always ready to help others,” Cui said. “He was also considerate, warm-hearted, and dedicated to his work. He loved his students as he loved his own children.”
“His sacrifice demonstrated his endless love for students,” said Cui.
Cui hopes this lecture tour can allow more people to know the great love pursued by Wang, and let more people understand and know him, and be encouraged by his spirit.
Having heard all the lectures, Yao Xi, Vice President of Dadukou Experimental Primary School, said that though Wang passed away more than three months ago, he feels moved and filled with drive every time he hears of his heroic deeds.
“Heroes of a city can always shape the city in one way or another,” said Yao. “As a role model for everyone in the teaching profession in Chongqing, Wang leads by example of how love and kindness can inspire us to protect lives.” | https://www.ichongqing.info/2021/10/01/wang-hongxu-a-hero-a-teacher-and-an-ordinary-chinese/ |
# Coenzyme M
Coenzyme M is a coenzyme required for methyl-transfer reactions in the metabolism of archaeal methanogens, and in the metabolism of other substrates in bacteria. It is also a necessary cofactor in the metabolic pathway of alkene-oxidizing bacteria. CoM helps eliminate the toxic epoxides formed from the oxidation of alkenes such as propylene. The structure of this coenzyme was discovered by CD Taylor and RS Wolfe in 1974 while they were studying methanogenesis, the process by which carbon dioxide is transformed into methane in some anaerobic bacteria. The coenzyme is an anion with the formula HSCH₂CH₂SO₃⁻. It is named 2-mercaptoethanesulfonate and abbreviated HS–CoM. The cation is unimportant, but the sodium salt is most available. Mercaptoethanesulfonate contains both a thiol, which is the main site of reactivity, and a sulfonate group, which confers solubility in aqueous media.
## Biochemical role
### Methanogenesis
The coenzyme is the C1 donor in methanogenesis. It is converted to methyl-coenzyme M, the thioether CH₃SCH₂CH₂SO₃⁻, in the penultimate step to methane formation. Methyl-coenzyme M reacts with coenzyme B, 7-thioheptanoylthreoninephosphate, to give a heterodisulfide, releasing methane:
CH₃–S–CoM + HS–CoB → CH₄ + CoM–S–S–CoB
This reaction is catalyzed by the enzyme methyl-coenzyme M reductase, which contains cofactor F430 as the prosthetic group.
### Alkene metabolism
Coenzyme M is also used to make acetoacetate from CO2 and propylene or ethylene in aerobic bacteria, specifically in bacteria that oxidize alkenes into epoxides. After the propylene (or other alkene) undergoes epoxidation and becomes epoxypropane, it becomes electrophilic and toxic. These epoxides react with DNA and proteins, affecting cell function. Alkene-oxidizing bacteria like Xanthobacter autotrophicus use a metabolic pathway in which CoM is conjugated with an aliphatic epoxide. This step creates a nucleophilic compound which can react with CO2. The eventual carboxylation produces acetoacetate, breaking down the propylene. | https://en.wikipedia.org/wiki/Coenzyme-M |
Have you placed an important object down in a certain area, only to find it moved to another room minutes later? At first, you might think you imagined it, but as these instances become more frequent, you may start to worry.
One reason for this situation might simply be teleportation. Though it's not a common explanation that people turn to during these occurrences, teleportation can be defined as objects being moved to other areas through paranormal phenomena.
Similar to telekinesis, many people believe that teleportation is something that can take place through the use of the mind. Energy and mind strength is thought to be enough to relocate objects of all different sizes. Perhaps this is the reason why you keep finding your objects in random places around your house!
Speak to your psychic if you're interested in learning more about the possibility of teleportation or telekinesis taking place in your home. Psychic phone readings can help you find any lost items and get a better grasp on the situation. After all, it can be a little unsettling to continue living in wonder over this unusual occurrence. | https://www.psychicsource.com/article/other-psychic-topics/the-possibility-of-teleportation-and-telekinesis/1715 |
The invention provides a remote centralized control method for unmanned platforms used in bridge detection. By combining two communication means, a 5G mobile communication network and a microwave ad hoc network, it solves the information-transmission problem that arises when an unmanned platform executes a bridge-detection task in a signal blind area of the public mobile network, ensuring full coverage of communication signals for the unmanned platform across the detection area. The method also unifies the control-instruction message format, addressing the difficulty of centrally controlling both unmanned aerial vehicles and unmanned ships. With the invention, communication signals can cover the area below the bridge deck for large bridges such as cross-sea bridges, meeting the information-transmission requirement between the unmanned platform and a shore-based control center during bridge detection. | |
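The abstract does not disclose the actual instruction message format, so the sketch below is purely hypothetical: every field name is invented, and it only illustrates the general idea of one control-message schema shared by aerial and surface platforms and carried over either communication link.

```python
# Hypothetical illustration only: the patent abstract does not disclose the
# real message format. All names here are invented to show the idea of one
# instruction schema shared by aerial (UAV) and surface (ship) platforms.
import json
from dataclasses import dataclass, asdict
from enum import Enum

class PlatformType(str, Enum):
    UAV = "uav"            # unmanned aerial vehicle
    USV = "usv"            # unmanned surface vessel (ship)

class LinkType(str, Enum):
    CELLULAR_5G = "5g"                  # public mobile network, where coverage exists
    MICROWAVE_MESH = "microwave_adhoc"  # ad hoc link for blind areas under the deck

@dataclass
class ControlInstruction:
    platform_id: str
    platform_type: PlatformType
    command: str           # e.g. "goto_waypoint", "capture_image", "hold"
    payload: dict          # command-specific parameters
    preferred_link: LinkType
    sequence_no: int       # lets the platform discard duplicate or stale commands

msg = ControlInstruction(
    platform_id="uav-07",
    platform_type=PlatformType.UAV,
    command="goto_waypoint",
    payload={"lat": 22.345, "lon": 113.567, "alt_m": 35.0},
    preferred_link=LinkType.MICROWAVE_MESH,
    sequence_no=42,
)
print(json.dumps(asdict(msg), indent=2))  # one wire format for every platform type
```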
Tank — Pet of the Week
Tank was surrendered to the Coshocton County Animal Shelter and Humane Animal Treatment Association due to food aggression and fighting with another unaltered male in the home. Tank’s socialization has been worked on in the many weeks he’s been at the shelter, and more positive behavior with other dogs has been observed. Sponsors are still being sought to assist with his neutering.
Tank is a young adult pitbull who is housebroken. He would not be appropriate for a home with cats. It’s believed Tank can learn to respect other dogs with proper training and supervision. He’s a very loving boy who absolutely adores people.
Adoption fees for dogs and puppies are $50 and include vaccines, a dog license and any additional treatments needed. Ask about low-cost spay and neuter clinics.
For more information on any adoptions, call the shelter at 740-622-9741. It’s open from 10 a.m. to 3 p.m. Monday and Tuesday, 10 a.m. to 7 p.m. Wednesday and 10 a.m. to 3 p.m. Thursday and Friday. | https://www.coshoctontribune.com/story/news/local/2017/01/15/tank-pet-week/96537910/ |
We hope to reschedule this event. Stay tuned for updates.
Food Bank of Central and Eastern N.C., 1924 Capital Blvd., Raleigh
There is no cost to be a Helping Heel. To sign up, email [email protected]. Must be 12 or older to volunteer.
The GAA will host a group service project with the Food Bank of Central and Eastern North Carolina. This project will involve repackaging bulk food items into family-size bags to be redistributed to those in need. Registration is limited and by signing up, you are committing to be there that day.
The Food Bank of Central and Eastern North Carolina works every day to provide food to people in need while building solutions to end hunger in our communities. Since 1980, the Food Bank has worked across the food system to provide access to nutritious food that nourishes families, children, seniors and individuals.
Through partnerships, education and programs, the Food Bank empowers communities to overcome hunger, creating an environment where all North Carolinians thrive. In the counties the Food Bank serves, nearly 600,000 people currently struggle to access nutritious and adequate amounts of food necessary for an active and healthy life.
Don’t forget to post a photo on social media during this event – tag @UNCGAA and #UNCAlumni. | https://alumni.unc.edu/events/volunteer-at-the-food-bank/ |
Appreciation: Happy 50th, NASA
NASA has compiled an impressive list of achievements. It's launched numerous successful manned missions, culminating with the six Moon landings. Manned space stations going back to Skylab have tested the ability of humans to live and work for prolonged periods in space. Earth-observing satellites have greatly deepened our understanding of Earth's oceans, climate, geology, vegetation, resources, and more.
Orbiting observatories, exemplified by the Hubble Space Telescope but also including satellites that study celestial objects at every conceivable wavelength, have made numerous discoveries. Our fleet of robotic space probes has visited all of the planets in our solar system, revealing them and their moons, as well as comets and asteroids, as fascinating and enigmatic worlds. (Five of these probes are heading clear out of the solar system on one-way journeys to the stars.)
NASA's efforts have revolutionized our understanding of the universe and our place in it. Particularly in its early decades, the space program helped engender an excitement that led many students to pursue careers in sciences. And NASA technology has been applied to numerous areas unrelated to space.
Along with such obvious things as missile design and communications systems, NASA technology has found applications in areas as diverse as wireless headsets, plasma displays, freeze-dried foods, single-crystal silicon solar cells, portable electric vacuum cleaners, cochlear implants, bicycle helmets, air purifiers, airplane collision-avoidance and anti-icing systems, oil-spill remediation, flame-retardant coatings, better two-way radios, anthrax detectors, skis, tennis racquets, and dialysis machines. Read the rest of this post at Gearlog: "Happy Birthday, NASA: 50 Years and Counting" | https://www.pcmag.com/news/232630/appreciation-happy-50th-nasa |
This is an advanced training course on identifying and attempting to build indicators from a listening test (objectification).
Learning Outcome
Upon completion of this course, you will be able to:
- Understand Descriptive Statistics and Statistical Inference
- Test your hypothesis
- Analyze results
Prerequisites
- Completion of ANSYS VRXPERIENCE Sound - Sound Perception Psychoacoustics Indicators and Measurement.
Target Audience: Acoustics Engineers
Teaching Method: Lectures and computer practical sessions to validate acquired knowledge.
Learning Options: Training materials for this course are available with an ANSYS Learning Hub Subscription. If there is no active public schedule available, private training can be arranged. Please contact us.
| https://www.ansys.com/services/training-center/systems/ansys-vrxperience-sound-sound-perception-advanced-listening-test-analysis-and-objectification |
Dear Dr. Ted: I have noticed that most people seem to disappear after a loss and then they seem to reach out to the world or others. Is this a normal process or what appears to happen? Thanks, Jane
Dear Jane: You are correct in that healing needs to start with an internal process before stepping out to the rest of the world. When you have a loss, it is similar to a physical wound and your first response will be to retract inside yourself in order to assess your wound and mend before you step back into the world. This is a natural and normal reflex as you want the highest level of health in order to navigate the world around you.
Even within a family or an organization, the same process happens, where there is the need to have internal stabilization from the chaos that a loss brings, and as the center core begins to have its new baseline, there can be more outreach as the person, family or organization steps back into the world. I often compare it to a medical procedure in which you have an injury, are surrounded by medical supports with the focus on your injury, then after surgery you move into the recovery room where you are still isolated from external factors; as you heal, you go home and heal more. Then you are slowly ready to reintegrate into the world once again.
This scaffolding process or continuum of care is important as it allows time for you to integrate the present situation into your own consciousness, and become aware of how the event has possibly changed you in ways that now impact how you interact with your family, work and others.
Loss can have such a profound effect that you may find it changes how you interact with yourself internally. There are often a lot of interruptions in emotional healing, and becoming more skilled in being able to reestablish your own baseline before other issues arise is always beneficial. Self-care such as rest, diet, exercise and solitude and/or spiritual disciplines can help you start to move from the emotional free fall you might be feeling. Avoid behaviors with addictive tendencies, other people's dramas, unnecessary arguments and other behaviors that only exacerbate the present situation as well as build future problems. Similar to the airline protocol of putting on your own oxygen mask before you help others, a loss needs you to take care of yourself first, and then reach back out into the different outlying circles so that you make conscious and healthy decisions as your own needs are being covered. Thank you for the question. Until next week, take care.
Golden Willow Retreat is a nonprofit organization focused on emotional healing and recovery from any type of loss. Direct any questions to Dr. Ted Wiard, EdD, LPCC, CGC, founder of Golden Willow Retreat at [email protected].
This column seeks to help educate our community about emotional healing through grief. People may write questions to Golden Willow Retreat and they will be answered privately to you and possibly as a future article for others. Please list a first name that grants permission for printing.
| https://www.taosnews.com/stories/healing-an-internal-to-external-process,55057 |
Startup company Skyscrape plans to make clothing that naturally adapts to the temperature of both the wearer and the environment, using yarn that changes shape as temperatures change and becomes thicker as temperatures drop, providing more insulation and comfort to the wearer.
The company reports that active yarns in the fabric expand and contract and, in the process, bend, which increases or decreases the thickness of the fabric. The fabric itself acts as a thermometer, without wires or sensors.
The team of researchers, designers, engineers, weavers and knitters has incorporated their knowledge gained from the small swatches woven by hand on a loom into a prototype jacket. The active fabric jacket prototype is designed as outerwear for the city. Skyscrape is finalizing designs for a 2020 product launch.
Skyscrape was born out of a vision to create clothing so thermally comfortable that it would impact the energy used in heating and cooling buildings. To view a video of the technology, visit https://www.skyscrape.us/. | https://advancedtextilessource.com/2020/02/24/shape-shifting-thermal-control-jacket-to-be-launched/ |
When a woman owns her human rights, she can change the world for herself and the people around her.
For three decades, we have centred our work on the fact that human development cannot evolve if 51% of the world’s population face persistent discrimination. IWDA advances the global goal of gender equality by focussing on women’s rights.
The Universal Declaration of Human Rights adopted by the United Nations in 1948 states “all human beings are born free and equal in dignity and rights.” Even so, nearly 70 years later no country in the world has achieved gender equality.
The International Covenant on Economic, Social and Cultural Rights and the International Covenant on Civil and Political Rights, both adopted in 1966, spell out the principles of the Declaration in international law. Members of the United Nations that accept these covenants must ensure that women and men can equally enjoy all the rights they outline.
From the time it was established in 1946, the UN Commission on the Status of Women worked to spell out what these responsibilities mean in practice. Concern that the existing human rights frameworks were not leading to comprehensive action towards women’s rights led to the development of a specific Convention on the Elimination of All Forms of Discrimination Against Women. Adopted unanimously in 1979, this spells out what countries need to do to end discrimination against women.
These international instruments provide a framework for IWDA’s work. Our road map to women’s rights is the Beijing Declaration and Platform for Action. The Beijing Platform for Action, adopted unanimously by 189 countries at the fourth UN World Conference on Women in 1995, was a visionary agenda. IWDA’s program partnerships help to advance most of the 12 areas of action in the Beijing Platform.
In addition to our program partnerships, we engage broadly as members of global networks, research partnerships and coalitions. By linking our program partnerships with research and advocacy initiatives, we create space for women’s voices to be heard and amplified.
IWDA works to show and measure positive change for women’s rights. The structural change we aim for is intergenerational. In other words, we address issues with a long-term view. Over 30 years of experience, we’ve built long-term partnerships with other women’s agencies across Asia Pacific. These partnerships strengthen our approaches, because local knowledge brings long-term solutions. As active participants in the global movement for gender equality, we bring our deep experience advancing women’s rights in Asia Pacific to regional and global stages. | https://iwda.org.au/learn/womens-rights/ |
We are builders of fine pipe organs both large and small. Being a small artisan shop, we emphasize the custom aspect of our craft. In every project we encounter new challenges, and enjoy creating personalized solutions for each situation.
We built our web site to guide you through some of the different institutional and residential projects we have done. We hope these pages will show the quality and attention to detail we put into every instrument.
Please get in touch to offer comments or join our mailing list. | https://wigtonpipeorgans.com/ |
This reciprocating shaker has been specifically designed to hold platelet bags which include the new larger bags being adopted. This model operates at a fixed speed of 60 strokes per minute. Seven slide out shelves are supplied standard, plus one fixed top shelf on the smaller model. When fully loaded, this model can hold 16 large bags in total (or 48 small bags). The unit is attractively finished in a powder coated steel. | https://www.cskgroup.com.au/product/thermoline-48-bag-platelet-bag-shaker/ |
The CPCB also pointed out that emission from different sources, apart from vehicular pollution, also varied, making it difficult to analyse the scientific benefits of the scheme.
In a report to the National Green Tribunal (NGT) on pollution levels before and during the second round of the odd-even scheme, the Central Pollution Control Board (CPCB) said it has found an “increase in the concentration of pollutants at most monitoring locations during the scheme”.
The submission comes in response to a plea filed by petitioner Vardhaman Kaushik concerning air pollution in Delhi-NCR.
The Delhi Pollution Control Committee (DPCC) and the CPCB were directed by the NGT to submit a status report on the odd-even scheme. The NGT is expected to take up the issue Tuesday.
The CPCB’s analysis, submitted to the NGT Friday, collected data on pollutants including PM 10, PM 2.5, Sulphur dioxide (SO2), Benzene, Ozone (O3), Nitrogen dioxide (NO2) and Carbon monoxide (CO) in the fortnight before the scheme (April 1-14) and during the scheme (April 15-30).
In its report, the CPCB noted that based on data collected from seven stations, “the decrease in vehicular emission was not a dominant enough factor to impact observed data”.
CPCB scientists said air quality was affected by various meteorological factors such as wind speed, temperature, solar radiation, humidity and “mixed height of pollutants”.
“These factors come into play, besides emission from various sources. The fortnight before the scheme, prominent wind direction was from west, followed by northwest and southwest.
During the scheme, the wind direction was from the west initially, but later shifted to southwest and northwest,” said a scientist.
According to meteorological factors monitored by CPCB during the scheme, wind speed and mixing height were comparatively lower from April 17 to 25.
| https://www.financialexpress.com/india-news/during-odd-even-ii-most-air-quality-stations-showed-spike-in-pollutants/255128/ |
TESL Saskatchewan 41st Annual General Meeting & Mental Health PD Event
Registration is open until 11:59pm, May 14, 2021.
I am pleased to invite you to attend the forthcoming Annual General Meeting and professional development opportunity exploring the principles of wellness in order to support teachers to find joy and well-being even under challenging circumstances. Being well and joyful is its own reward; however, research also shows that when educators prioritize their own well-being, it supports student success.
Presentation 3:00- 4:00 pm
AGM 4:00-4:30 pm
Presented by Jacquie Cuthbert and Tyler Bergen
Jacquie Cuthbert is one of two Behaviour Consultants with Saskatoon Public Schools. Prior to her role as consultant, she has supported students, families, and staff at several SPS schools as an elementary school counsellor.
Tyler Bergen is the Coordinator of Counselling, Behaviour, and Safe Schools for Saskatoon Public Schools. His career with SPS as teacher, consultant, and coordinator has focused on special education and behavioural support.
| https://teslsask.com/register/tesl-saskatchewan-41-annual-general-meeting-mental-health-pd-event |
Description:
The goal of the initiative is to increase public safety and visitor experience in national parks and federal lands by ensuring that they are better protected. The program finances the construction and planning costs of alternative transport systems such as buses and trams in government-controlled parks and public lands.
Any person or organization with a valid DOI permit is eligible to apply. The Department of the Interior, after consultation with and in coordination with the Federal Transit Administration (FTA), will select and fund projects. Proposals must be submitted by 12:00 a.m. Eastern Standard Time on February 27, 2009.
Focus: Natural Environment, Built Environment, Recreation, Social/Economic Health, Historical/Cultural Heritage
Region(s): National
Type: | https://greeninfrastructure.net/call-for-proposals-paul-s-sarbanes-transit-in-parks/ |
- Publication Date:
- Publication Type: Notice
- Fed Register #: 66:50683
- Standard Number:
- Title: Submission for OMB Review; Comment Request
DEPARTMENT OF LABOR
Office of the Secretary
Submission for OMB Review; Comment Request
September 27, 2001.
The Department of Labor (DOL) has submitted the following public information collection requests (ICRs) to the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork Reduction Act of 1995 (Pub. L. 104-13, 44 U.S.C. Chapter 35). A copy of this ICR, with applicable supporting documentation, may be obtained by calling the Department of Labor. To obtain documentation contact Darrin King at (202) 693-4129 or E-Mail: [email protected].
Comments should be sent to Office of Information and Regulatory Affairs, Attn: Stuart Shapiro, OMB Desk Officer for OSHA, Office of Management and Budget, Room 10235, Washington, DC 20503 ((202) 395-7316), within 30 days from the date of this publication in the Federal Register.
The OMB is particularly interested in comments which:
Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility;
Evaluate the accuracy of the agency's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used;
Enhance the quality, utility, and clarity of the information to be collected; and
Minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses.
Agency: Occupational Safety and Health Administration (OSHA).
Type of Review: Extension of a currently approved collection.
Title: Notice of Alleged Safety and Health Hazards, OSHA-7 Form.
OMB Number: 1218-0064.
Affected Public: Individuals or households.
Type of Response: Reporting.
Frequency: On occasion.
Number of Respondents: 55,132.
Number of Annual Responses: 55,132.
Estimated Time Per Response: Varies from 15-25 minutes.
Total Burden Hours: 14,767.
Total Annualized capital/startup costs: $0.
Total Annual Costs (operating/maintaining systems or purchasing services): $882.
Description: The Occupational Safety and Health Act, Section 8(f)(1), and 29 CFR 1903.11(a) and (c) authorize employees or representatives of employees to report an alleged violation of a safety and health standard to OSHA. The OSHA-7 Form is one mechanism for reporting alleged violations. The Form also provides an employer with notice of the complaint. The information is used by OSHA to evaluate the alleged hazards to determine if reasonable grounds exist to conduct an inspection of the workplace.
Ira L. Mills,
Departmental Clearance Officer. | https://www.osha.gov/laws-regs/federalregister/2001-10-04 |
With effect from today, the Board of Directors will consist of the following persons:
- Andreas Hansson (chairperson)
- Christer Stefan Blom
- Lori Varner Wright
- Joanne Kuhn Bradford
- Akshay Naheta
- Sarah Blystad (employee representative)
- Alexander Remen (employee representative)
- Daria Golubeva (deputy employee representative)
- Patrik Jandusik (deputy employee representative)
Strong and experienced new Board of Directors
“We are thrilled that both Andreas and Akshay assume roles as chairperson and board member, respectively. We have developed a strong relationship with them both since SB Northstar became a major shareholder last year, now holding 15.9% of the shares in Kahoot!, and I am confident that the experience, insights and network Andreas and Akshay can contribute, alongside the rest of the Board, will be instrumental to Kahoot! in our next phase of growth. I want to thank Harald Arnet and Sindre Østgård, who have been invaluable to Kahoot!, in their capacity as chairperson and board member in Kahoot! They have represented all shareholders in a formidable way. We are also grateful that Harald has accepted to continue in his new role as chair of the nomination committee,” said Eilert Hanoa, CEO of Kahoot!
“Following Kahoot!’s main listing on the Oslo Stock Exchange in March, the new Board will strengthen Kahoot! further. With Stefan Blom, Lori Wright and Joanne Bradford continuing as board members, together with Andreas Hansson and Akshay Naheta, we have established a Board with extensive international experience, as well as deep technology insight and an unparalleled network. Furthermore, the Board has been supplemented by two employee representatives with deputies, elected by and among the employees of Kahoot! ASA.”
“I’m excited to take on the role as chairperson of the Kahoot! Board. I am truly looking forward to representing Kahoot!´s shareholders and working with such an experienced and diverse Board.
Kahoot! has demonstrated impressive growth and has steadily carved a stronger position in the global learning market. We see that digital learning, whether virtual or physical, continues to increase in impact and importance globally, and I see great opportunities for Kahoot! going forward,” said Andreas Hansson, chairperson of the Kahoot! Board and Managing Director at SB Management (a wholly-owned subsidiary of SoftBank Group Corp. and investment manager of SB Northstar).
Read more about the new Board on the Kahoot! leadership page.
This information is subject of the disclosure requirements pursuant to section 5-12 of the Norwegian Securities Trading Act.
For further information, please contact:
Eilert Hanoa, CEO
Phone: +47 928 32 905
Email: [email protected]
Ken Østreng, CFO
Phone: +47 911 51 686
Email: [email protected]
About Kahoot!
Kahoot! is on a mission to make learning awesome! We want to empower everyone, including children, students, and employees to unlock their full learning potential. Our learning platform makes it easy for any individual or corporation to create, share, and play learning games that drive compelling engagement. Launched in 2013, Kahoot!’s vision is to build the leading learning platform in the world. In the last 12 months, 279 million games have been played on the Kahoot! platform with 1.6 billion participating players in more than 200 countries. The Kahoot! family also includes the award-winning DragonBox math learning apps, the Poio learn to read app, the Drops language learning apps, the Actimo and Motimate employee engagement and corporate platforms and Whiteboard.fi, the online whiteboard tool for all educators, teachers and classrooms. The Kahoot! Group is headquartered in Oslo, Norway with offices in the US, the UK, France, Finland, Estonia, Denmark and Spain. Let’s play! | https://kahoot.com/investor/announcements/kahoot-asa-minutes-from-the-annual-general-meeting/ |
The growing popularity and awareness of 3D printing is being exploited in every industry from design to manufacturing, making what was previously impossible viable and accessible to anyone with just a basic understanding of the technology.
This class aims at igniting the ability to think in terms of 3D modeling and additive manufacturing and to impart a hands-on learning of 3D design and printing. Participants get an understanding of 3D printing technologies, processes and materials that make 3D printing possible as well as understanding the operation of Filament 3D printers.
Duration
3 Hours
Tools and Equipment Access
After successful completion of workshop members get full access to the filament (FFF) 3D Printers. | https://www.originbase.com/3d-printing-filament |
Clarifying pre-clearance at the airport
Why is it that Cayman Airways, the national flag carrier, has not managed to integrate their reservation system into the Immigration system (but are able to provide the necessary information to the US authorities) whereas the foreign carriers have?
Auntie’s answer: After a bit of back and forth with two Department of Immigration (DOI) officials, who proved both helpful and patient, the conclusion was reached that you are probably referring to pre-clearance rather than integration of systems. If this is incorrect, please feel free to write me again. Meanwhile, I can still offer information on the integration side and then address the pre-clearance issue.
You are right that Cayman Airways has not integrated their reservation system into the immigration system, but I was informed that none of the airlines flying here has done that. That is why the thinking is that you are referring to pre-clearance, which is tied to the sharing of electronic manifests, or lists of passengers, and allows travellers to bypass the Immigration exit control counters at the airport when departing. And Cayman Airways does pre-clear passengers.
“All carriers operating in our islands provide the DOI with electronic manifests in advance. Electronic manifests are also provided to US authorities in advance by the airlines,” it was explained. | https://cnslocallife.com/2017/08/clarifying-pre-clearance-airport/ |
InteAct by StepAhead is an interactive mobile and web-based employee engagement survey, designed to analyze and map communication dynamics within the organization and identify key personnel, such as connectors, internal champions, bottlenecks, bypass managers and more.
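The description does not say how InteAct's proprietary algorithms work; purely as a general, hedged illustration of how organizational network analysis often surfaces such roles, the sketch below uses the open-source networkx library on made-up survey data, treating high degree centrality as a rough proxy for "connectors" and high betweenness centrality as a rough proxy for "bottlenecks".

```python
# Generic organizational-network sketch; not StepAhead's actual method.
# Edges represent "who communicates with whom" answers from a survey.
import networkx as nx

edges = [
    ("Ana", "Ben"), ("Ana", "Carla"), ("Ana", "Dev"),
    ("Ben", "Carla"), ("Dev", "Elif"), ("Elif", "Fumi"),
    ("Elif", "Gus"), ("Fumi", "Gus"),
]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)            # well-connected people: candidate "connectors"
betweenness = nx.betweenness_centrality(G)  # people many paths run through: candidate "bottlenecks"

connectors = sorted(degree, key=degree.get, reverse=True)[:2]
bottlenecks = sorted(betweenness, key=betweenness.get, reverse=True)[:2]
print("Likely connectors:", connectors)
print("Likely bottlenecks:", bottlenecks)
```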
The survey is intuitive to all users and requires no longer than 4 minutes to complete. The collective feedback is cross-analyzed with big data pools and proprietary algorithms based on reliable academic research and industry benchmarks.
Detailed reports regarding your internal networks are coupled with our teams’ ongoing guidance and support so you can implement processes of change that drive efficient collaboration.
The survey provides metrics in the following domains:
Overcome bureaucratic roadblocks
Minimizes bureaucratic challenges common in the organization's hierarchy by identifying bottlenecks and opening direct communication channels between decision-makers and front-line personnel.
Synergy
Explore the different dimensions of your organization. Drill down to view employee engagement or explore the dynamics and synergy across teams and departments
Network Performance
Build a network that supports the flow of information and direct communication. Maximize employee potential, based on universal feedback on employee accomplishments and collective performance. | http://step-ahead.com/interact/ |
In enzymology, a sorbose 5-dehydrogenase (NADP+) (EC 1.1.1.123) is an enzyme that catalyzes the chemical reaction
L-sorbose + NADP+ ⇌ 5-dehydro-D-fructose + NADPH + H+
Thus, the two substrates of this enzyme are L-sorbose and NADP+, whereas its 3 products are 5-dehydro-D-fructose, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is L-sorbose:NADP+ 5-oxidoreductase. Other names in common use include 5-ketofructose reductase, 5-keto-D-fructose reductase, sorbose (nicotinamide adenine dinucleotide phosphate) dehydrogenase, reduced nicotinamide adenine dinucleotide phosphate-linked reductase, and sorbose 5-dehydrogenase (NADP+). | https://alchetron.com/Sorbose-5-dehydrogenase-%28NADP-%29 |
Photo: Ce Blue Villas is one of several popular Anguilla resort properties.
In a “state of the nation” address Tuesday following his sweeping victory in April’s general election, Anguilla’s new chief minister promised to re-structure the country’s tourism agency.
Victor Banks, a former finance minister whose Anguilla United Front party won six of the seven government seats at stake, said his government will make “some early and sweeping adjustments to a number of statutory bodies” including the country’s tourist board. In his new role as chief minister, Banks is also minister of finance, economic development, investments and tourism.
“There is no question that these institutions deal with extremely important aspects of life on Anguilla,” said Banks, who promised to make “urgent critical decisions” regarding the tourism board’s future.
He said the previous administration under the Anguilla United Movement party had demonstrated “a total absence of clear leadership and direction among government and quasi-government tourism agencies,” according to Banks. “Indeed there has been confusion."
Banks also pointed to the British overseas territory’s financial status, including tax arrears of more than $4.4 million owed by the operators of the Cap Juluca luxury resort, which has been mired in financial difficulties since a 2012 bankruptcy filing.
“Checks from Cap Juluca have not been cashed over that period, yet the property has been operating in a state of flux for over two and a half years,” said Banks.
“The impact of a lack of decision-making on Cap Juluca is both affecting the viability of that property as well as the ability of the government to meet its budgetary commitments,” Banks added. Local media reports say “over one hundred” Cap Juluca employees have been terminated and several of the resort’s villa accommodations have been removed from the hotel’s inventory.
Long considered a “boutique” Caribbean destination that features several upscale resorts and villa properties popular with high-end travelers, Anguilla hosted 70,927 overnight tourist arrivals in 2014 according to Caribbean Tourism Organization (CTO) data, a 2.7 percent increase over 2013 but the third-fewest among the 28 destinations tracked by the CTO.
Anguilla’s tourist board has supported several Anguilla events including this month’s Anguilla Lit Fest, the Moonsplash Music Festival and the Festival del Mar. The popular Malliouhana Hotel & Spa re-opened in November 2014 following a three-year closure and subsequent restoration by Auberge Resorts.
Still, Banks said that, much like the country’s tourism agency, Anguilla’s public infrastructure requires an overhaul.
We, the undersigned organisations and individuals across the globe, are again concerned that the United Nations Commission on the Status of Women (CSW) is wavering in its commitment to advance women's human rights as demonstrated in the constant negotiation of the language in the outcome document.
On the occasion of celebrating the International Women's Day we call on the states to reaffirm its commitment to agreed upon standards in promoting women's human rights as articulated in the Convention on the Elimination of All Forms of Discrimination Against Women, the Vienna Declaration and Programme of Action, the Declaration on the Elimination of Violence against Women, the Beijing Declaration and Platform for Action and the International Conference on Population and Development Programme of Action as well as other international humanitarian and human rights law.
We say NO to any re-opening of negotiations on the already established international agreements on women's human rights and call on all governments to demonstrate their commitments to promote, protect and fulfill human rights and fundamental freedoms of women.
We commend those states that are upholding women's rights in totality. We urge states to reaffirm standards that they have agreed to. Considering the lack of an outcome document last year we hope that this is not the pattern when it comes to advancing women's human rights agenda. Women's human rights are not to be negotiated away.
Similar to last year, we strongly hold the position that given the progressive development in the international era on standard setting there should no longer be any contention on any issues related to the definition and intersectionality of women and girls experiencing violence against women, including in relations to sexual and reproductive health and rights, sexual orientation and gender identity, harmful practices perpetuated in the context of negative culture and traditions, among others. We remind states that the CSW is the principal global policy-making body dedicated exclusively to gender equality and advancement of women with the sole aim of promoting women's rights in political, economic, civil, social and educational fields. Its mandate is to ensure the full implementation of existing international agreements on women's human rights and gender equality.
We strongly demand all governments and the international community to reject any attempt to invoke traditional values or morals to infringe upon human rights guaranteed by international law, nor to limit their scope. Customs, tradition or religious considerations must not be tolerated to justify discrimination and violence against women and girls whether committed by State authorities or by non-state actors. Given the current global activism around violence against women it is imperative that member states take the lead in agreeing on a progressive outcome document that reaffirms its commitments to universal human rights standards.
This is an important moment as we are planning the post 2015 process. The outcome document has to advance women's human rights and not lower the bar for women's human rights. Future international negotiations must move forward implementation of policies and programmes that secure the human rights of girls and women.
We call upon the member states of the UN and the various UN human rights and development entities to recognise and support the important role of women's groups and organisations working at the forefront of challenging traditional values and practices that are intolerant to fundamental human rights norms, standards and principles. | https://www.peacewomen.org/content/statement-concerns-womens-organizations-over-negotiations-csw-57-outcome-document |
Children and Adults with High and Complex Needs:
There is a very small segment of the disabled population that needs to be recognised as different from the rest, with government policy allowing for these differences. This minority sector of society is actually a distinct group in their own right whose extreme needs can span the spectrum of disabilities and medical conditions. Their lack of recognition means they slip between the cracks of policy, services and funding.
These children and adults with disabilities are some of society’s most vulnerable citizens and may have some or all of the following:
They will probably never work or earn a wage. Their needs may be expressed in such a unique way that only those close to them will understand.
Recognition that this small group of disabled people is a distinct group with diverse needs, which differ from other disabled people, is essential. When those who have high and complex needs are treated under the general disability umbrella, supports are not only inadequate, but in many cases non-existent.
For the first time society is seeing a number of developmentally delayed children reaching adulthood with complex illnesses and disabilities who would previously not have survived. Many would not have lived until adulthood and for those that did the medications required to control their epilepsy and other neurological conditions would have left them heavily sedated and institutionalised. Although they are now resident in ordinary communities, they are mostly invisible. They are not able to voice their needs in appropriate forums, and for this reason their voices are not heard and policies and services have not been developed specifically to meet their needs.
It is important to note that society and government has not had to support these people in the past, outside of institutions, making them out of sight and out of mind. With the closure of institutions, however, many of the essential support services have not transferred into the community, resulting in families now being required to provide these supports. Many have outlived their initial prognosis and it is now clear that to provide appropriate and skilled support for this group it is costly in both financial and human terms.
Because of the complex and unique needs of this population group basic human rights are not being recognized. Through a lack of awareness and neglect these citizens and their carers are devalued.
People with complex needs are yet to find their voice within the disability community.
There is an emerging debate about who should be the voice for people with complex needs. Barriers exist in the form of lack of knowledge and understanding about their existence, their vulnerability, their needs, unskilled and ineffective support staff, lack of services and funding that creates daily struggles for carers.
Parents’ influence is diminished, as society perceives the young person as an autonomous adult. However in many cases this is not a parental reality, as families continue to care for the young person/adult with the cognitive ability of a child.
Complex Care Group provides support, networking and information for the children, young people and families of this population. | https://www.complexcaregroup.org.nz/voice/definintion-document/ |
Richard Wayne Penniman (December 5, 1932 – May 9, 2020), better known as Little Richard, was an American singer, songwriter, and musician. An influential figure in popular music, Richard's most celebrated work dates from the mid-1950s, when his dynamic music and charismatic showmanship laid the foundation for rock and roll, earning him the nickname "The Innovator, The Originator, and The Architect of Rock and Roll". Characterized by his frenetic piano playing and raspy singing voice, Richard's music also played a key role in the formation of other popular music genres, including soul and funk. He influenced numerous singers and musicians across musical genres from rock to hip hop, and his music helped shape rhythm and blues for generations to come.
Full name at birth: Richard Wayne Penniman
Claim to fame: 'Tutti Frutti', 'Long Tall Sally'
Date of birth: 5 December 1932
Place of birth: Macon, Georgia, USA
Date of death: 9 May 2020
Place of death: Tullahoma, Tennessee, USA
Cause of death: Bone cancer
Resting place: Oakwood University Memorial Gardens Cemetery, Huntsville, Alabama, USA
Occupation: Singer-songwriter, musician
Height: 5' 10" (178 cm)
Residence: Macon, Georgia, USA
| https://m.famousfix.com/topic/little-richard |
Andrea Dunlop discusses the importance of building strategic partnerships and Paysafe’s entrepreneurial culture.
In this issue of the PaymentEye e-newsletter we speak to Andrea Dunlop, CEO of Acquiring & Card Solutions at Paysafe Group, about the importance of building strategic collaborations and how PSD2 will drive innovation and create strategic partnerships between banks and fintechs.
And in other news, Darryl Proctor, Product Director at Temenos discusses why banks should offer SEPA Instant Credits to their customers and what’s needed for banks to efficiently achieve the transition.
"Why your PSP should be your best defence against fraud", a new whitepaper released this month by Paysafe, details the benefits to merchants of partnering with a PSP that shoulders much of the responsibility for fraud detection.
SEPA Instant Credit enablement: Why bother?
Darryl Proctor, Product Director at Temenos, discusses why banks should offer customers SEPA Instant Credits and what is needed to make the transition a reality.
Fibrosarcoma is a tumour, or cancer, of the fibrous soft tissues that connect and bind the body together. It is a rare (sporadic) type of cancer that starts in the tissues wrapped around tendons, ligaments, and muscles, attaching muscles to bones and bones to one another. It is broadly of two types:
Congenital or infantile Fibrosarcoma (under the age of 1)
Adult Fibrosarcoma
Fibrosarcoma can develop anywhere in the connective tissues of nerves, tendons, ligaments, fat, muscle, deep skin, blood vessels, and lymph vessels. More than 50 types of fibrosarcoma have been described; it is diagnosed most often in the thighs, legs, knees, and trunk, and is made up of malignant spindled fibroblasts, also known as myofibroblasts.
Fibrosarcoma is most often found in people aged between 20 and 60 years, and men are more prone to the condition than women. It may not present with any apparent symptoms at the initial stage, but if you notice changes in the body such as a new lump, swelling, or persistent pain, do meet your doctor immediately.
There is no single known cause of fibrosarcoma. However, genetic mutations may play a role in triggering these cancerous growths, and other significant risk factors include certain inherited conditions.
Fibrosarcoma is diagnosed by running blood work and correlating the results with imaging tests like X-rays, MRI, CT-scan, PET-CT, and bone scans. In case of a mass, a biopsy will be performed to collect the tissue sample to understand the type and growth of the cancer cells. Though lymph node metastasis is rare, your doctor might sometimes excise nearby lymph nodes.
Fibrosarcoma is graded on a scale of 1 to 3, depending on the size, aggressiveness, and appearance of the tumour cells. If the tests reveal the presence of a tumour, the doctor assigns it a grade on this scale. The less the cancer cells resemble normal cells, the higher the grade, and high-grade cells tend to spread faster to the nearby lymph nodes. Grade and size together determine the stage, as summarized below (a simple classification sketch follows the list).
Stage 1: If the tumour is low-grade and is measuring less than 5 cm
Stage 2: If the tumour is intermediate- or high-grade and 5 cm or larger
Stage 3: If the tumour is high-grade and larger than 5 cm, or if it has already spread to nearby lymph nodes
Stage 4: If the tumour has spread to other parts of the body
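To make the grade-and-size logic above easier to follow, here is a minimal Python sketch that simply restates the staging list as code. It is an illustration of the text only, not a clinical tool; the function name and the exact thresholds are assumptions made for this example.

```python
def fibrosarcoma_stage(grade: int, size_cm: float,
                       lymph_node_spread: bool = False,
                       distant_spread: bool = False) -> int:
    # Simplified restatement of the staging list above (illustration only).
    # grade: 1 (low), 2 (intermediate), 3 (high); size_cm: largest dimension.
    if distant_spread:
        return 4          # Stage 4: spread to other parts of the body
    if lymph_node_spread or (grade == 3 and size_cm > 5):
        return 3          # Stage 3: high grade and large, or nodal spread
    if grade >= 2 and size_cm >= 5:
        return 2          # Stage 2: intermediate/high grade, 5 cm or larger
    return 1              # Stage 1: low grade and smaller than 5 cm

print(fibrosarcoma_stage(grade=1, size_cm=3.0))   # -> 1
print(fibrosarcoma_stage(grade=3, size_cm=6.0))   # -> 3
```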
In the majority of cases, surgery forms the first line of treatment. The treatment plan is decided after considering various factors such as the grade, size, and location of the tumour and whether it has already metastasized. A tailor-made plan is then carried out based on the patient's age, general health history, and whether the cancer has recurred.
Chemotherapy: Chemotherapy is administered either intravenously or through oral pills, and it plays a crucial role in killing cancer cells. It is recommended when the cancer has spread to lymph nodes or other nearby organs, and it can be given either before or after surgery.
Radiation Therapy: Radiation therapy uses high-energy X-rays to kill cancer cells and stop their growth. It can be used to shrink the tumour prior to surgery, and in specific instances it can be prescribed post-surgery to eliminate residual cancer cells in the body.
Getting treated for cancer could be physically and emotionally taxing, but positivity is the key. Try eating a healthy diet, rest well and spend time with your loved ones. If you are a smoker, quit the habit right now!
Ensure regular follow-ups and check-ups with your doctor at least for the first year and then later as per the schedule. Go for periodic blood and imaging tests as recommended by your doctor. If you notice any new lump or pain, talk to your doctor immediately. For more information on Fibrosarcoma and its treatment, contact Fortis Bangalore.
With 28 hospitals across the nation and over 4000+ beds, Fortis Healthcare Limited is a leading integrated healthcare delivery service provider in India. For over 26 years, Fortis Hospitals have been committed to the cause of getting people back to their lives faster and stronger. | https://fortisbangalore.com/blogs/fibrosarcoma-causes-symptoms-diagnosis-and-treatment |
CBSE Board Exam 2018: Modi had also asked students who were due to appear for the board examinations to adopt a "never give-up" attitude during a recent interaction here.
CBSE Board Exam 2018: Prime Minister Narendra Modi today extended his best wishes to students appearing in the Class 10 and 12 board examinations, saying they should write their paper with a smile and lots of confidence. Modi had also asked students who were due to appear for the board examinations to adopt a “never give-up” attitude during a recent interaction here. “Best of luck to all my young friends appearing for the CBSE class XII and class X examinations! Write these exams with a smile and lots of confidence,” he said on Twitter today. More than 28 lakh candidates will appear for the Class 10 and 12 examinations being conducted by the Central Board of Secondary Education. Over 16 lakh candidates have registered for the Class 10 examination, while over 11 lakh have registered for the Class 12 examination.
Best of luck to all my young friends appearing for the CBSE Class XII and Class X examinations! Write these exams with a smile and lots of confidence.
Students of class 10 and 12 are appearing for the examinations conducted by the Central Board of Secondary Education from today, a CBSE official said. A total of 16,38,428 candidates have registered for the class X examination, while 11,86,306 candidates have registered for the class XII examination. The class X board examination has been reintroduced from this year after the government decided to do away with the Continuous and Comprehensive Evaluation (CCE) adopted earlier. The exam will be conducted at 4,453 centres across India and 78 centres outside India. Similarly, for class XII, the exam will be held at 4,138 centres in India and 71 centres abroad.
“The board has made appropriate arrangements with state authorities and local police to ensure trouble free examinations throughout the country,” the CBSE official said. Candidates suffering from diabetes are allowed to carry eatables inside the examination centres. From this year, CBSE is also allowing candidates with special needs to write their exams using laptops but their device will have to undergo an inspection by the computer teacher at the exam centre and no Internet access will be allowed. A total of 4,510 and 2,846 differently-abled candidates have registered for class X and XII examinations respectively. | |
What is an Intentional Tort?
A tort is defined as an act committed by one party that causes harm to another, including physical injury, property damage, or even damage to one’s reputation. Torts typically stem from negligence, i.e. the careless disregard for someone else’s well-being. Most torts do not deal with intent whatsoever—rather, they focus on the harm caused and who was at fault in the incident.
However, some torts go beyond simple negligence. An intentional tort is an act committed with the intention of causing harm. What separates an intentional tort from any other tort is the mindset of the party at fault. For example, say someone rear-ends you at a stop light because he was distracted by a phone call; this would constitute a regular tort. But instead, say the driver was upset because you accidentally cut him off in traffic. To retaliate, he sped to catch up with you and sideswiped your car on purpose; this would constitute an intentional tort because the other driver clearly meant to cause harm.
The most difficult aspect of intentional torts is proving that the other party did, in fact, intend to cause you harm. In most personal injury cases, intent is irrelevant. But in an intentional tort, intent is everything. If the at-fault party can claim he or she did not actually intend to commit the harmful act, they could escape liability for the harm caused.
Some common examples of intentional torts are:
- Fraud: an intentional, deceptive representation of a matter of fact, whether by words or conduct, false or misleading allegations, or by concealment of necessary information
- Misrepresentation: an assertion (either by words or conduct) that is not true to the facts
- Slander: oral defamation in which someone spreads an untruth about another person, knowing the untruth will harm the reputation of the other person
- Libel: publishing an untruth about someone else that will cause harm to that person or his or her reputation
- False imprisonment: the illegal confinement of one individual against his or her will, violating the confined person’s right to be free from restraint of movement
- Assault: the threat of bodily harm coupled with an apparent, present ability to cause harm
- Battery: harmful or offensive contact with someone else
Certain intentional torts also qualify as crimes, such as assault, battery, and fraud. In these cases, the guilty party could face criminal proceedings brought by the state as well as a civil lawsuit brought by the injured party. It is entirely possible that the party at fault could be cleared of criminal charges but still be found liable in civil court; the burden of proof is lower in civil trials, so the negligent person(s) could be required to pay damages, regardless of guilt in criminal court.
Proving intent is perhaps the most difficult aspect of an intentional tort. Because the at-fault party rarely expresses harmful intent in writing or out loud, you are left trying to prove what the other party was thinking or feeling. However, if you can prove intent, you could be eligible to win punitive damages as well as normal compensatory damages.
If you believe you have grounds for an intentional tort (or simply want to learn more about the process), call Maggiano, DiGirolamo & Lizzi at (201) 585-9111 or contact us online. | https://www.maggianolaw.com/blog/intentional-tort/ |
The molecular orbitals (MOs) of molecules can be constructed by linear combination of atomic orbitals (LCAO). Though the exact Schrödinger equation cannot be solved for many-electron systems such as molecules, the solution can be numerically approximated by ab initio methods or density functional theory (DFT).
This page gives an overview of the molecular orbitals of methane calculated by DFT methods using the B3LYP functional and a 6-311++G** basis set. All MO representations are 90% or 90-25% iso-contour probability surfaces of the electron density (ψ²), i.e. they resemble the spatial volume around the nuclei of the molecule in which the electrons are found with the corresponding certainty. The different colors (yellow and blue) represent regions with opposite sign of the wave function ψ; nodal planes (not necessarily real "planar" planes) where ψ passes through zero and changes sign are indicated in orange.
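For readers who want to reproduce this kind of calculation, the sketch below shows how a comparable B3LYP/6-311++G** run on methane might be set up with the open-source PySCF package. This is only an illustrative assumption: PySCF is not the software used for the figures on this page, and the geometry is an idealized tetrahedral CH4 rather than the authors' exact structure.

```python
# Minimal sketch (assumption: PySCF, not the software used for this page).
from pyscf import gto, dft

# Idealized tetrahedral CH4 geometry in Angstrom (C-H roughly 1.09 A).
mol = gto.M(
    atom="""
    C  0.0000  0.0000  0.0000
    H  0.6293  0.6293  0.6293
    H -0.6293 -0.6293  0.6293
    H -0.6293  0.6293 -0.6293
    H  0.6293 -0.6293 -0.6293
    """,
    basis="6-311++g**",
)

mf = dft.RKS(mol)   # restricted Kohn-Sham DFT
mf.xc = "b3lyp"     # hybrid functional named on this page
mf.kernel()         # run the SCF and print the total energy

# List the occupied MOs: 10 electrons give five doubly occupied orbitals.
for i, (energy, occ) in enumerate(zip(mf.mo_energy, mf.mo_occ), start=1):
    if occ > 0:
        print(f"MO {i}: energy = {energy:.4f} Hartree, occupation = {occ:.0f}")
```

The printout should list five doubly occupied orbitals, the lowest being the carbon 1s core orbital, in line with the electron count discussed below.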
The total electron density (clipped 99, 95, 90, 80, 70, 60, and 50% iso-density contours depicted on the right) renders the molecule with its characteristic shape (note the different iso-contour values of the MO orbitals and the total electron density contours).
Below on the right, the schematic drawing indicates the major contributions of atomic orbitals (AOs) to the molecular orbitals (MOs) of methane. With increasing energy of the orbitals (from bottom to top), the number of nodal planes (not necessarily real "planar" planes) increases and the symmetry decreases. For methane, there are 10 electrons (CH4, 6 + 4×1 = 10 electrons) and five occupied orbitals. The lowest-energy orbital is the 1s-carbon core orbital (two electrons); the remaining four valence MOs are constructed from the carbon 2s, 2px, 2py, 2pz, and hydrogen 1s-orbitals. The highest occupied molecular orbitals (HOMOs) are three-fold degenerate, i.e. all three orbitals 3a, 3b, and 3c have equal energy. Further information on AOs is available from the gallery of hydrogenic orbitals and hybrid orbitals.
The thumbnail images on the left provide access to enlarged graphics as well as 3D-models (VRML) of the orbitals. All images and models are scaled relative to each other, and their sizes can be compared directly; only the 1s-carbon core orbital has been expanded by a factor of 1.5.
Notes: a) Orbital number (see scheme on the right, degenerate orbitals with equal energy are denoted by same numbers); b) Label (symmetry descriptor in parenthesis); c) Nodal planes (ψ = 0.0); d) 90% Probability contours of MO electron density (ψ^2); e) 90, 80, 70, 60, 50, 40, and 25% Probability contours; f) 3D-Models require a VRML plugin to be installed (large files with sizes between 50-2000 KBytes).
The size of the carbon 1s-core orbital has been expanded by a factor of 1.50.
For a more detailed description of atomic orbitals see the corresponding gallery of orbitals. All graphics and iso-contour surfaces shown on this page were created using the MolArch+ program and POVRAY Persistence of Vision Raytracer. Electron densities were calculated on three dimensional grids for the corresponding molecules using the JAGUAR program. | http://csi.chemie.tu-darmstadt.de/ak/immel/tutorials/orbitals/molecular/methane.html |
The avalanche prone locations are to be found on very steep shady slopes above approximately 2400 m and in gullies and bowls, and behind abrupt changes in the terrain. In very isolated cases avalanches can be triggered in the faceted old snow and reach medium size. Caution is to be exercised in areas where the snow cover is rather shallow. Older wind slabs are to be evaluated with care and prudence in extremely steep terrain.
Apart from the danger of being buried, restraint should be exercised as well in view of the danger of avalanches sweeping people along and giving rise to falls.
dp.7: snow-poor zones in snow-rich surrounding
dp.4: cold following warm / warm following cold
The spring-like weather conditions gave rise to increasing consolidation of the snowpack. Field observations and stability tests confirm good snowpack stability.
The old wind slabs have bonded well with the old snowpack. They are in individual cases still prone to triggering. In very isolated cases weak layers exist in the centre of the snowpack. This applies in particular on very steep shady slopes above approximately 2400 m.
At elevated altitudes snow depths vary greatly, depending on the influence of the wind. On sunny slopes below approximately 2200 m only a little snow is lying.
As a consequence of a sometimes strong wind from northerly directions, mostly small wind slabs will form on Monday at elevated altitudes. In the north and in the regions with a lot of snow the wind slabs are larger. | https://avalanche.report/simple/2022-01-16/en.html |
Mapping and Geographic Information Systems (GIS)
Geographic Information Systems (GIS) enable users to create, combine, and analyze geocoded data—that is, information connected in some manner to geographic coordinates. GIS tools range from simple and lightweight applications to extremely powerful and complicated suites of software.
Whether you would like to create a spatio-temporal exploration of historical events, embed an interactive map in a web-based project, or conduct a spatial analysis of research data, we can assist you with your project. Our GIS Coordinator will help you identify the right tool for your project—from Google Maps to ArcGIS.
Please contact Stacy Curry-Johnson for a consultation. | https://www.library.vanderbilt.edu/disc/gis.php |
Welcome to Hidden Trace recruitment thread!
It's a small project I'm working on right now.
I've created the general concept and some artwork and music assets, and I'm currently building the game mechanics.
Unfortunately, the game progression itself is very minimal, because I've been focusing primarily on creating the assets.
So I guess I need some help.
Let's go straight to the topic then.
Engine
RPG Maker MV
Play Time
Roughly 1-2 hours
Synopsis
Being immortal must be a beautiful thing. Enjoying the pleasure of life eternally, without the fear of death.
No.
Gale, a former treasure hunter who became a vampire, knows all too well the dark side of being immortal.
One snowy day, he met the Master of Time.
A glimpse of his past flows through his memory...
He remembered the time when he drank the Vampire's Blood to become an immortal.
Gale asked the Master of Time to send himself to the past.
He wanted to kill his past self before he could get the chance to drink the vampire's blood.
Can he do it?
*Okay, maybe that's not the best kind of synopsis. But you get the idea.
If you want to know more about this project, see the game's thread here.
Roles Needed
Mapper (filled)
Honestly I already created some maps.
Your role here is to create the remaining maps and (if you are able to) beautify the existing maps.
How to apply:
PM me with at least 4 map screenshots consisting of 1 world map, 1 outside, 1 inside, and 1 dungeon. (the more the better)
You can use RTP or custom tileset for the examples.
If you are using the custom one, make sure you're allowed to use it legally.
Monster Designer (filled)
I've made some monsters. But I'd love some help from you.
Create interesting monsters using the available graphics, designing the fighting pattern and the skills.
How to apply:
PM me with at least 3 monster designs and 1 boss. Explain the monsters' strengths, weaknesses, fighting patterns, and skills.
Follow this template:
-Monster Name
-Strength
-Weakness
-Fighting Pattern
-Skills (explain the monster's skills in detail)
Optional Roles
The optional roles are resource-based. I do realize making resources is not an easy task.
So only apply if you are truly interested with the project.
GUI Maker (filled)
I've made some parts of the custom battle GUI and a custom windowskin.
Your role is to create the remaining GUI for the game. (menu and title GUI for example)
Icon Maker
Create custom icons for the game
Battleback Maker
I need battlebacks that fit my characters' art style and suit the maps
Tileset Artist (Highly Optional) (filled, but still open)
Doing tileset recolors or small edits.
I noticed the RTP tileset is lacking an atmospheric element.
With its wild choice of colors, it doesn't seem really cohesive either.
The tileset artist role is to make the RTP tileset more atmospheric and cohesive-looking.
Spriter (Highly Optional)
Create unique sideview battlers for the characters. There are 4 characters in-game.
If you are up to the task, I will give you an example of what I want to create
How to apply for the optional roles:
Send me a PM with at least 3 examples of your work.
*Note regarding recruitment:
Regardless of the recruitment state, I will try my best to keep progressing and at least finish a demo.
But I will open this recruitment as long as there are still roles to fill.
**Note regarding the applications:
I will notify you as soon as I receive your applications.
But I will review your applications before I can decide whether you are able to join the team or not.
So give me some time before I make the conclusions.
***Note regarding credits:
I will credit your username and your real name (if I am allowed to)
My Roles
Project Manager
Managing the game's progression
Artist
Create artworks for the game
Composer
Composing the soundtracks for the game
Puzzle Maker
Creating puzzles on the maps
Eventer
Creating cutscenes and events in general
Current Team
@Vincent Chu - Writer
@OnlyMe_ - Monster Designer
@HexMozart88 - GUI Maker
@Leon Kennedy - Tileset Artist
@cradth - Mapper
One more thing.
Although this is an unpaid project, I will give you some special rewards
Reward
I will create a special resource pack (art and music) only for the Hidden Trace team members. The amount of content will depend on how well the project goes. (of course there is additional content for the optional roles)
The reward is available once the project is done.
In case you're curious about my arts and music, open these links:
Check my artworks here.
Check my music here.
Oh, and I have to give special thanks to @Vincent Chu for his willingness to join the project as a writer, even before this thread was started. Thank you very much, Vincent Chu!
So that's it guys!
If you're interested to join, you can PM me straight away or leave some comments below.
You can also ask questions or feedbacks on the comment section.
Thank you in advance and happy new year! (although it's kinda late to say it xD)
Best wishes for all of you
Last edited: | https://forums.rpgmakerweb.com/index.php?threads/hidden-trace-recruitment.89301/ |
Last decade was easily the hottest on record, as were the 1990s and, before that, the 1980s — all part of a multi-decadal trend driven primarily by human-caused emissions.
We’ve known for a while that warming appeared to slow over a short, cherry-picked time frame of 1998 to 2008 because:
- The starting year (1998) was a very strong El Niño, which temporarily boosts global temps, and the ending point (2008) was a moderate La Niña, which lowers them.
- The end point was near the bottom of “the deepest solar minimum in nearly a century.”
- One key global temperature dataset, the Hadley/CRU one used by the UK’s Met Office, had numerous flaws that led to a slower warming trend than most of the others.
Even so, as we'll see, the land and the oceans just kept warming. It is just hard to stop the radiative forcing of the CO2 humans have put in the air, which equals 1 million Hiroshima bombs a day.
What’s clever about the new Proceedings of the National Academy of Sciences study is it demonstrates that sulfur pollution from China’s massive buildup of coal plants also helped slow the warming:
Conclusion
The finding that the recent hiatus in warming is driven largely by natural factors does not contradict the hypothesis: “most of the observed increase in global average temperature since the mid 20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations (14).” As indicated in Fig. 1, anthropogenic activities that warm and cool the planet largely cancel after 1998, which allows natural variables to play a more significant role. The 1998–2008 hiatus is not the first period in the instrumental temperature record when the effects of anthropogenic changes in greenhouse gases and sulfur emissions on radiative forcing largely cancel. In-sample simulations indicate that temperature does not rise between the 1940’s and 1970’s because the cooling effects of sulfur emissions rise slightly faster than the warming effect of greenhouse gases. The post 1970 period of warming, which constitutes a significant portion of the increase in global surface temperature since the mid 20th century, is driven by efforts to reduce air pollution in general and acid deposition in particular, which cause sulfur emissions to decline while the concentration of greenhouse gases continues to rise.
The results of this analysis indicate that observed temperature after 1998 is consistent with the current understanding of the relationship among global surface temperature, internal variability, and radiative forcing, which includes anthropogenic factors that have well known warming and cooling effects.
What’s not clever about this study is that it repeats the myth that there was a ‘hiatus’ in the first place. The top figure, from John Cook’s Skeptical Science website, makes that clear.
And that’s without even discussing the oceans, where climate science says the vast majority of the warming goes:
There has been no hiatus in global warming.
I have no clue why the PNAS authors use the favorite cherry-picked endpoints of the deniers, designed to minimize the actual trend. Or why they seem unaware that in the NASA dataset 2005 was the warmest year on record — and that 2010 was tied for the warmest year, according to NASA, NOAA, and the World Meteorological Organization.
They seem unaware of the June 2010 analysis by NASA that found:
“We conclude that global temperature continued to rise rapidly in the past decade” and “there has been no reduction in the global warming trend of 0.15–0.20°C/decade that began in the late 1970s.”
The Met Office has been ignoring the warming in the places where it has been warming the most–especially the Arctic — which they themselves acknowledged in a 2009 analysis (see Finally, the truth about the Hadley/CRU data):
… in data-sparse regions such as Russia, Africa and Canada, warming over land is more extreme than in regions sampled by HadCRUT. If we take this into account, the last decade shows a global-mean trend of 0.1 °C to 0.2 °C per decade. We therefore infer with high confidence that the HadCRUT record is at the lower end of likely warming.
That was published 18 months before the PNAS paper!
Indeed, in November 2010, the Met Office Hadley Center admitted they had flawed data — that led them to underestimate the rate of recent global warming. The Guardian itself reported at the time:
World is warming quicker than thought in past decade, says Met Office
… Including the new sea surface temperatures, which push up global temperatures by 0.03C, the warming rate for the past 10 years is estimated at 0.08–0.16C.
So it is odd that the Guardian gets this wrong now.
But the key point is that it now seems clear that when you accurately model all of the data and all of the forcings, the surface temperature data shows, as NASA said last year, “global temperature continued to rise rapidly in the past decade” and “there has been no reduction in the global warming trend of 0.15–0.20°C/decade that began in the late 1970s.”
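As a rough illustration of how a "per decade" figure like this is computed, here is a minimal ordinary-least-squares sketch in Python. The anomaly values are invented placeholders rather than real observations; substitute an actual annual series (for example NASA GISTEMP annual means) to get a meaningful trend.

```python
# Sketch: estimate a warming trend in deg C per decade from annual anomalies.
# The numbers below are made-up placeholders, not a real temperature record.
years = list(range(1999, 2011))                     # 1999..2010
anomalies = [0.42, 0.42, 0.54, 0.63, 0.62, 0.54,
             0.68, 0.64, 0.66, 0.54, 0.64, 0.72]    # deg C, placeholder values

n = len(years)
mean_x = sum(years) / n
mean_y = sum(anomalies) / n

# Ordinary least-squares slope: covariance(year, anomaly) / variance(year)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
         / sum((x - mean_x) ** 2 for x in years))

print(f"Trend: {slope:.3f} deg C/year, i.e. {slope * 10:.2f} deg C/decade")
```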
The know-nothing anti-science deniers are touting the not-clever part of the study to brag that a peer-reviewed paper vindicates their inane, cherry-picked talking point that 'there wasn't warming from 1998 to 2008.' This is doubly bizarre.
First, even this study makes clear that there was warming consistent with our current understanding of climate science and forcings — it was just masked:
Declining solar insolation as part of a normal eleven-year cycle, and a cyclical change from an El Nino to a La Nina dominate our measure of anthropogenic effects because rapid growth in short-lived sulfur emissions partially offsets rising greenhouse gas concentrations. As such, we find that recent global temperature records are consistent with the existing understanding of the relationship among global surface temperature, internal variability, and radiative forcing, which includes anthropogenic factors with well known warming and cooling effects.
[Note to the authors, the last 11-year solar cycle was not “normal.”]
Second, as the lead author explains:
But rather than suggesting that cutting carbon emissions is less urgent due to the masking effect of the sulphur, Prof Robert Kaufman, at Boston University and who led the study, said: “If anything the paper suggests that reductions in carbon emissions will be more important as China installs scrubbers [on its coal-fired power stations], which reduce sulphur emissions. This, and solar insolation increasing as part of the normal solar cycle, [will mean] temperature is likely to increase faster.”
The time to start ignoring the dangerous falsehoods of the climate science deniers was a long time ago, but now is better than later — at least if we want a fighting chance to preserve the health and well-being of billions of people, including our own children and grandchildren. | https://thinkprogress.org/study-hottest-decade-on-record-would-have-been-even-hotter-but-for-chinese-coal-plant-sulfur-dcde7b06846/ |
» Job description:
KEY OBJECTIVES
Lead a resilient, agile, and flexible organization in the best interests of all its stakeholders.
Take overall operational responsibility for group.
Ensure all strategic objectives are fully met.
Maintain full transparency and accountability to the Board.
Be a visible leader both internally and externally as representative of the group in India.
Develop thought leadership on the us [...]
» Job description:
1. Develop and execute a business plan for Apple in alignment with the global / regional KA strategy, including:
Identifying core customer needs
Understanding the organization's structure, internal decision-making alignment, etc.
Pulling together resources locally, regionally, globally during the key milestones of the development and execution of the plan
Ensuring all internal processes and activities [...]
» Job description:
They are looking for highly talented software engineers to be part of Firmware team working on current/next generation WLAN products.
As part of this team, you will be responsible for
Software architecture and design for WLAN features in firmware
Working with peer teams to define SW interfaces
Development, release and maintenance of WLAN firmware
» Job description:
• Strategic planning and execution of action plan to expand addressable market and gain market share to deliver revenue targets for the country.
• Development and execution of business blueprint / growth strategy – 3 years horizon
• Leads all India sales and marketing activities, working closely with BU management in setting up the sales plans, objectives and the development of marketing [...]
» Job description:
Acting as the go-to within the business for anyone with questions or queries regarding training and development plans
Managing training budgets
Confident in both written and spoken communication with the ability to present to large audiences
Monitor the success of development plans and help employees make the most of learning opportunities
Strong communication and negotiation skills, with a good [...]
» Job description:
Lead the execution and production of content in collaboration with other teams, from the concept phase to final delivery.
Lead the development and execution of exceptional, informative, and appealing content to attract their targeted audience and build customer preference and loyalty, based on deep insights into a wide variety of audiences, propositions and channels
Manage production partners to [...]
» Job description:
Position Summary
Responsible for an annual sales revenue quota within a dedicated set of large enterprise accounts
Role and Responsibilities
Define and steer strategic go-to-market strategies and growth plans for large enterprise accounts within assigned segments.
Establish close partnership with Product Management team to drive/exceed product category revenue targets and grow market share.
Man [...]
» Job description:
Works closely with management in the implementation of corporate policies related to human resources, organizational and employee development in R&D department
Consults with employees and managers to address root causes of human resources issues, attempting to resolve employee relations issues with a strategic approach
Participates in company-wide programs and initiatives (e.g., retention pro [...]
» Job description:
Work closely with Geographical Category Managers in the zones to support as per Marketing plan and adoption of the transformation actions
Lead development program related to Protection Relays deployment in Intl Operations
Coach & Support Operations: Price positioning, Go to market, Promotion, Pre-Sales support
Drive qualifications programs with key End Users & provide prescription support [...]
» Job description: | https://www.execboardinasia.com/jobs-listing/?fwp_industry=electronics-electrical-equipment |
Everything is a clue....
1/29/2017
A private investigator must be aware of any possible leads or clues when conducting an investigation. Private investigators maintain awareness, take photos, make notes, and diagram scenes. They don't overlook anything. Following are important items and information that can lead private investigators to important clues:
1. Timing - "Timing is everything!" In an investigation this cliche may be true. What time was the witness at the scene? When did the target arrive? Private investigators always ask the people they interview for times - approximate if they did not specifically know - and then ask how they knew the approximate time. You can easily create a timeline documenting key facts (see the short sketch after this list).
2. Clothing Descriptions - Specific - "She was wearing a red shirt and blue jeans." Not a lot of detail there. When playing back the video, you may be surprised to find 3 or more women wearing red shirts and blue jeans. Ask for specific types of clothing. Collared shirt? Specific colors, such as dark or light? Ask what type of shoes they were wearing. The more detailed information you can get, the smaller your list of suspects will become.
3. Phone Call Times & Numbers & Sounds!! - If your investigation includes information from a phone call, ask for specific times, phone numbers, and then inquire if there were any background noises. Smartphones log calls so times and numbers are easy to get. Background sounds the person being interviewed heard while talking could be important. Maybe there were traffic noises in the background at a time the suspect was claiming to be in the office.
4. Items laying around - Any good private investigator knows all clues do not stand out at first. Make sure to photograph and document anything you see whether you think it is relevant or not. This is important when conducting surveillance. Maybe there is dry cleaning hanging in the back of the car, hamburger wrappers laying on the desk in the office late in the afternoon, or other items. This is important information that can be used in interviews, or creating a timeline for a target's movements. Not all items you see will be clues, nor will there be a clear sign indicating that an item is a clue.
5. Receipts or actions that create a paper trail - In any investigation, this information is important. Maybe your target claimed to be eating dinner late at night. A quick scan of a credit or bank card statement can help determine if it is accurate. Of course you may need to ask, "How did you pay for that?". Leaving a secure parking deck, or card controlled access point at a building can be another important clue that can help you separate facts from fantasy.
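Since several of the items above (times, phone logs, receipts) come down to putting time-stamped facts in order, here is a minimal Python sketch of the kind of timeline mentioned in item 1. Every name, time, and fact in it is invented purely for illustration.

```python
from datetime import datetime

# Each entry is (time, source, fact). All details below are invented examples.
events = [
    (datetime(2017, 1, 12, 17, 45), "witness interview",  "Target seen leaving office"),
    (datetime(2017, 1, 12, 18, 10), "phone log",           "Call placed to 555-0142"),
    (datetime(2017, 1, 12, 18, 55), "credit card receipt", "Dinner purchase, Main St diner"),
]

# Sorting by time turns scattered notes into a timeline of key facts.
for when, source, fact in sorted(events):
    print(f"{when:%Y-%m-%d %H:%M}  [{source}]  {fact}")
```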
Information that may not appear to be relevant at first, may later become important to the investigation. If not captured in writing or in photographs, it may be lost. Information that can make or break a case.
Private Investigators Notebook
9/11/2016
We have discussed this topic before but it is important enough to repeat and update.
Frequent questions are: "What do I take notes about?" and "How to take notes?". The best question is "Why do I take notes?" Let's start with the last question first:
Why does a private investigator need to take notes? This question generally is referring to content more than the need to take notes, but here are a few reasons why you should: 1. Writing things down helps you remember them. The best way to learn is repetition. 2. Record of work performed. Let's face it, you really enjoy being a private investigator, but getting paid is important too. You can quickly and easily complete your work history for billable hours by referring to your notes. 3. When working on multiple investigations it is easy to confuse facts. Referring back to your notes will increase your accuracy and you will not forget important details. One investigator said, "My notes have led to the capture of many a suspect as I have found the clues I needed in them."
How to take notes? This question centers on two aspects of taking notes. First, what is the best method for taking notes? A tablet or pen and paper. We opt for the pen and paper method. It's easier, allows you to draw sketches, scratch through information, and go back to earlier details you have noted. Second, what is an efficient way to take notes? Learning shorthand can be time consuming and costly. We suggest using shortcuts common to texting. For example: "V could c S put items dwn front of pants." You can probably figure this one out, "Victim could see the suspect put the stolen items down the front of his pants." Using common abbreviations such as "V" for victim, "S" for suspect, and "W" for witness will help you quickly take notes. If there are more than one, then assign a number for each. Later you can transcribe your notes for the report into complete sentences.
What do I take notes about? This question centers on the idea that people tend to talk quickly and a lot of the information they provide is not relevant to the investigation. Trying to write down everything that is said is almost impossible and can be distracting to the individual you are interviewing. The key items to capture are: names, phone numbers, addresses, locations, descriptions of people & property, times/dates, and facts that support or disprove the allegations. Following is an example that you can use as a template for taking notes consistently:
"Monday, September 2, 2016 - 1600-1630hrs @ Jones Office Bldg Suite 123.
V - Brenda Jones, 231 345 5454, office manager
V could c S put items dwn front of pants in waiting room. S left by front door, in late model Honda 4 door. S drove north on Williams St.
S - w/m, slender, about 5'10"-5'11", blue jeans, white t-shirt and tennis shoes.
Items: small clock radio and V's purse. Purse brn leather, "Coach", contained $20 and several cc's."
You can see in the example that "PI shorthand" was used. You can easily transpose the information into complete sentences later when completing the report. Here are a few tips:
1. Don't let note taking interfere with the interview.
2. Always review your notes with the person being interviewed to make sure your information is accurate. It will also prompt them to add additional details.
3. Leave spaces as you take notes so you can go back and add important details as needed.
4. Transcribe your notes as soon as possible while they are fresh in your memory.
5. Always take notes. Develop a habit of taking notes even if you think you don't need to.
6. Keep all of your notes together in a notebook. This will allow you to keep a journal of your work.
Always take good notes to improve your investigation!
Case File Construction
11/23/2015
There is a lot of focus on writing the investigative report and there should be since it is the primary deliverable a private investigator provides to their client. For private investigators there is not always the luxury of using a pre-formatted form for every investigation as police often do. Uniqueness of the investigation and varying types of evidence often create the need for a well written private investigative report devoid of a commonly used form.
The tried and true method is the development of a case file. Case files include all information pertinent to the case investigated. It is important to remember that the case file is for use "by others", not just the investigator.
Listen to this podcast on the "Basics of Case File Construction". A transcript is also provided.
Note taking skills: 5 step method
7/25/2013
When you graduated high school you probably thought taking notes was a thing of the past!!! Professionals in every field find note taking to be a critical skill. Although technology provides several note taking tools, pen and paper (a notebook) are still the best for taking notes. Whether you are a novice or a skilled professional, honing your note taking skills will allow you to accurately record information, verify you have the information you need, and reduce the time you need to complete your investigative report.
Pen/pencil and paper is the best note taking tool! - Although technology provides many benefits, and without smartphones, computers, tablets, and audio and video recorders you would not be able to complete your tasks as a professional private investigator, pencil and paper is still the best choice for notes. Just like a car mechanic, you need to pick the right tool for the right task.
Using technology to take notes during an interview is distracting and takes more time than a notebook and pencil. Audio/video recorders are great tools for recording action as it happens, but in an interview they make it harder for the person being interviewed to freely share information. Later, when you are trying to write your report, it is difficult to find a specific fact-- rewinding and fast-forwarding is frustrating when you are in a hurry.
SAFETY FIRST! When using technology it is easy for you to get distracted scrolling, highlighting, saving, etc. With paper and pencil you maintain awareness of your situation.
Five steps to improve your note taking abilities:
1. Focus on the information you NEED! When interviewed, people provide a lot of information that is not relevant to the investigation. Write down the important items. For example; you are interviewing a witness about a crime they observed. The interviewee may respond to your question, “What did you see?” with,
“I had just got my coffee from the store around the corner and was walking back to the office. When I walked in to the office I saw a lot of people standing in front of Tom’s office. Tom has been the manager for a couple of years and doesn’t talk to a lot of people. There was a man wearing a red shirt and dark pants yelling at Tom. The man in the red shirt then threw a notebook at Tom. Tom is really a nice guy so I thought it was odd that someone was so mad at him.”
Notes: Entered office and saw man, red shirt dark pants yelling at Tom. Several people standing near office. Man threw notebook at Tom.
2. Draw diagrams when appropriate. Not only will it help the interviewee recall facts, they can show you what happened. In the previous example, having the interviewee show where everyone was standing in relation to their position will help you verify they could actually see the man throwing the notebook and will assist in identifying additional witnesses.
3. Don't try to catch every word they are saying. Much of the English language is made up of filler words. Don't worry about using complete sentences in your notes. You can fill in the blanks later when you have time.
4. Review your notes with the person being interviewed for accuracy. At the end of the interview review your notes with the interviewee. You verify your notes and it provides the interviewee the opportunity to recall additional information they may have left out.
“You saw a man in a red shirt with dark pants throw a notebook at Tom.”
“Yes. Did I mention that he had a cowboy hat on too? I believe he might have been wearing cowboy boots and was wearing a large belt buckle like a rodeo rider.”
5. Edit your notes as soon as possible. Scribbling a few key words during the interview makes a lot of sense to you at the time. If you wait too long after the interview, your notes may no longer make sense.
Getting in the habit of following these five simple steps will improve your overall investigation.
Experienced private investigators realize the importance of professional investigative reports. Reports you provide your client 'live' a long time and are read by several people weeks, months or years after you completed the investigation. Professional private investigators view the finished report as product that advertises their firm. People that are impressed with your work may contract with you in the future.
The first step in creating professional reports is to avoid common mistakes that create a bad impression of your work:
Report writing: Capitalization
6/12/2013
Report writing is easy with word processors. Using grammar and spell check helps us proofread a report. Unfortunately, capitalization is still tricky at best when writing a report, even with all the tools we have available.
Proper rules for capitalization in the English language (or is that english?) are very tricky.
Listen to this podcast by Grammar Girl to learn more about when or when not to capitalize proper nouns. You can get a transcript or look for more grammar techniques by visiting Grammar Girl.
Pro PI staff
Experienced professionals and trainers. | http://www.propiacademy.com/academy-blog/category/report%20writing |
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The present invention relates to a low depth, nestable tray for transporting and storing beverage containers having substantially equal diameters and differently sized top and bottom rims. Examples of such containers are twelve-ounce aluminum cans (ca. 0,34 kg) which are made with similar body diameters and different top and bottom rim diameters.
Cans for soft drinks, beer and other beverages are often stored and transported during the distribution stages thereof in trays or boxes. Previously, single serving sized cans such as those which hold twelve fluid ounces were made in generally uniform sizes. The body diameters and the bottom and top rim diameters were generally consistent so that a tray could accommodate any can in stacked and cross-stacked configurations. However, currently the beverage industries are manufacturing cans having substantially equal body diameters and smaller top and bottom rim diameters.
An explanation for the varying diameters of the top and bottom rims on aluminum cans is economics. The cost of manufacturing is decreased by making cans with smaller top and bottom rims. Therefore, as the beverage industries switch to cans having smaller top and bottom rims, there has been a need for a returnable and reusable tray for storing, displaying and transporting cans which is light weight, easy to handle and economical. The prior art does not provide a tray which can accommodate cans with varying top and bottom rim diameters in both stacked and cross-stacked configurations.
Examples of returnable and reusable single purpose trays are disclosed in U.S. Patent No. 4,932,532; U.S. Patent 4,823,955 and U.S. Patent No. 5,031,774. The previous trays are configured for use with cans of substantially the same top and bottom rim diameters. A bottler or distributor which uses the newly introduced cans with smaller top and bottom rims cannot effectively use the prior trays since any interlocking features for stacking or cross-stacking loaded trays would not fit correctly. The result may be unstable loads of stacked and cross-stacked trays, and inefficiency.
Plastic low depth cases have been developed wherein the side walls are lower than the height of the stored containers. Since containers placed in the cases would extend above the side walls, the containers in a lower case support the weight of the other cases stacked on top of them. Metal cans generally have the structural integrity to bear the compressive loads of loaded and stacked trays.
Some major problems experienced with previous nestable trays are spreading or fraying of the side walls and "shingling" between trays placed in close side-by-side or end-to-end relation. The spreading or fraying problem often compounds the "shingling" problem. The present invention addresses both of these frequent complaints of previous trays. Structural supports to prevent spreading or fraying of the side walls are provided, which in turn help alleviate the "shingling" problem. Moreover, the side walls of the present tray are provided with additional structural improvements to avoid "shingling."
As to "shingling," previous nestable trays, which have nesting tabs or ribs on the exterior of the side walls, often are not easily handled because the tabs or ribs on the exterior of the side walls provide a catch surface between trays which come into contact. When stacks of trays are placed in close side-by-side and end-to-end relation, any catch surface such as a rib or tab on the exterior of the band will tend to land and rest on the upper edge or rim of the band of a neighboring tray. This overlapping of adjacent trays causes one end of the tray to be raised with respect to the other and is commonly referred to as "shingling". Shingling is disruptive of load stability on a pallet since it initially prevents the achievement of a perfectly squared load. Stacks which are unstable because of shingling are undesirable and can be a hazard. There exists a need for a nestable tray which is constructed to avoid shingling.
Spreading or fraying of side wall structures from nesting is a problem encountered with previous nestable trays. When a large number of trays are nested, the side walls of the trays near the bottom of the nested stack, which bear more of the load, have a tendency to spread or splay outward because no structural provision has been made for supporting the weight of trays nested above. This damage has a cumulative effect and results in a shorter service life for the trays, and thus additional expense for replacement. The shingling problem can be compounded in trays having no provision to prevent spreading or fraying. Any nesting tabs or ribs on the exterior of the side walls are even more prone to shingling or catching on other trays as the side walls spread outward. There has been a need for trays which maintain their structural integrity over repeated uses in both nested and loaded configurations. The present invention addresses the spreading problem by providing structural features to support the weight of stacked and nested trays. Since shingling is often compounded by spreading, this improvement alone would greatly alleviate the shingling problem. Moreover, as discussed above, the present tray also provides structural features on the outside of the side walls to prevent shingling.
Another problem encountered in using previous trays, particularly for cans, has been damage to the sides of the cans, ranging from slight scratches to more severe dents and even ruptures, from excessive contact with the walls of the trays during handling and transport. Simply the operating vibration of a truck containing the loaded trays can cause damage to the cans if there is excessive contact and rubbing between the walls of the tray and the cans. There is a need for a tray which can hold cans in spaced relation to one another and the wall structure to prevent damage to the cans and to other types of containers as well.
Accordingly, it is a principal object of the present invention to provide a nestable, low depth tray for storing, displaying and transporting containers having substantially equal body diameters and varying top and bottom rim diameters, such as single serve cans. The tray of the present invention combines the features adapted to accommodate cans with differing top and bottom rim diameters into a single tray.
Another object of the present invention is to provide a low depth, nestable tray which has sufficient structural features to prevent the side walls of the tray from spreading or fraying due to the weight of trays nested above it.
Still another object of the present invention is to provide a low depth, nestable tray which avoids shingling or catching on other trays during handling.
A further object of the present invention is to provide a low depth tray which is securely supported when loaded and stacked on another loaded tray beneath, but can easily be moved along the tops of the containers, particularly can tops.
A still further object of the present invention is to provide a low depth nestable tray which makes efficient use of space both when loaded and stacked and when empty and nested.
Another object of the present invention is to provide a low depth, nestable tray which holds the containers spaced apart from one another and from the wall structure of the tray to prevent any damage to the containers from excessive contact.
Directed to achieving these objects, a new low depth, nestable tray for containers having varying top and bottom rim diameters is herein provided. The preferred configuration is for single serve sized cans. This tray is formed by integrally molding from plastic, three basic components -- a floor, a band and a plurality of columns interconnecting the band and floor.
The floor preferably has an open lattice design which not only allows unwanted fluids to drain out of the tray, but also requires less material and thus is lighter than a solid floor design. The floor also has container support areas sized to receive cans, and includes a shallow groove for engaging the bottoms of cans of varying bottom rim diameter.
The floor of the tray has an outer or bottom surface which is configured for accommodating the tops of cans in a tray underneath. The floor bottom surface preferably has two sets of downwardly projecting redoubts, one set which are located to be disposed within the top rims of cans in a tray therebeneath and a second set which are located to be disposed between the top rims of adjacent cans in a tray therebeneath. The redoubts also block a tray from sliding along the tops of cans in a tray underneath it. The redoubts are positioned on the floor bottom surface of the tray so as to be able to accommodate differing top rim diameters of cans in a tray therebeneath. In particular, the first set of redoubts, the inside the rim redoubts, are designed to lock snugly with the smallest diameter rim. The second set of redoubts, the outside the rim redoubts, are designed to lock snugly with the largest diameter rim.
The band is substantially upright and extends around the periphery of the tray forming a wall structure. The band is positioned above the floor so as to be below the tops of the containers when the containers are positioned on the floor of the tray. However, the low depth arrangement is high enough relative to the containers to prevent them from tipping. The band is substantially flat and is designed specifically to avoid contact with the containers. The exterior lower surface is smoothly beveled inward and downward so as to provide no extension or surface which can catch or shingle on another tray.
The columns extend between, interconnect, and merge the floor with the band. They are spaced around the periphery of the floor between adjacent support areas. The areas between the adjacent columns and between the band and floor along the sides are open, providing a light weight design allowing for visualization and display of the containers held in the tray. An important aspect of column height is that it is designed to hold the band far enough above the floor of the tray to enable a UPC code on a can contained in the tray to be read through the space between the columns.
At least one column along each side wall is preferably a ledged column. A ledged column has an interior wall rib extending vertically upward and inward from the floor of the tray and a vertical wall slot indented into the column above the interior wall rib. The top surface of the interior wall rib and the bottom surface of the wall slot form a wall nesting ledge. The nesting ledge acts as structural support for trays nested above it. To enhance the strength of the nesting ledge, the exterior of the ledged column includes an exterior wall rib extending vertically downward and outward opposite the interior wall rib. The bottom of the exterior wall rib is substantially flush with the floor bottom surface.
Each corner of the tray preferably has a corner column or post. Each corner post has an interior corner rib extending vertically upward and inward from the floor of the tray and a vertical corner slot indented into the corner post above the interior corner rib. The top surface of the interior corner rib and the bottom surface of the corner slot form a corner nesting ledge. The nesting ledge acts as structural support for trays nested above it. To enhance the strength of the nesting ledge, the exterior of the corner post includes an exterior corner rib extending vertically downward and outward opposite the interior corner rib. The bottom of the exterior corner rib is substantially flush with the floor bottom surface.
The corner slots and wall slots also serve to matingly engage the exterior corner ribs and exterior wall ribs respectively, of another tray nested thereabove.
The corner nesting ledges and wall nesting ledges are of the same height so that the weight of any trays nested thereabove would be distributed among the various nesting ledges. Preferably a tray of the present invention has a corner nesting ledge construction at each corner of the tray, and a wall nesting ledge construction on each of the longer side walls. In this way, the weight of nested trays above will be generally evenly distributed to the six nesting ledges.
These and other features and advantages of the invention may be more completely understood from the following detailed description of the preferred embodiments of the invention with reference to the accompanying drawings.
Fig. 1 is a top plan view of the tray in accordance with the present invention;
Fig. 2 is an elevational view of a side wall of the tray;
Fig. 3 is an elevational view of an end wall of the tray;
Fig. 4 is a bottom plan view of the tray;
Fig. 5 is a cross section taken along line 5-5 of Fig. 1;
Fig. 6 is a cross section taken along line 6-6 of Fig. 1;
Fig. 7 is a cross section taken along line 7-7 of Fig. 1;
Fig. 8 is a cross section taken along line 8-8 of Fig. 1;
Fig. 9 is a cross section taken along line 9-9 of Fig. 1;
Fig. 10 is a perspective view of a corner post of the tray shown from the inside of the tray;
Fig. 11 is a perspective view of corner posts of nested trays shown from the outside of the respective trays;
Fig. 12 is a cross section taken along line 12-12 of Fig. 1;
Fig. 13 is a cross section taken along line 13-13 of Fig. 5;
Fig. 14 is a schematic top view of the tray showing the area that comes in direct contact with the bottoms of the cans; and
Fig. 15 is a schematic bottom view of the tray showing the area that comes in direct contact with the tops of the cans.
The present invention is a single tray which can be used to hold containers of similar capacity and body diameters but varying top and bottom rim diameters. The present invention is especially adaptable for twelve-ounce metal or aluminum cans. Trays loaded with cans having different top and bottom diameters may be stacked and cross-stacked.
Referring to Figs. 1-6, the tray 20 of the present invention comprises three basic elements, a band 30, a floor 50, and a plurality of columns 70. The wall structure that defines the periphery of the tray 20 comprises the band 30 which is generally vertical and above the floor 50, and is spaced above and connected to the floor 50 by a plurality of columns 70. The columns 70 are arranged along the sides of the tray 20. The tray 20 may have corner posts 100 at each of the corners of the wall structure. The wall structure includes side walls 26 and end walls 27.
The band 30 extends around the periphery of the tray 20. Band 30 is substantially smooth along its length in the areas between the columns 70. The portion of the band 30 between the columns 70 has a generally flat interior surface 31. The interior surface 31 of band 30 is not contoured or scalloped in any way so as to avoid excessive contact with the containers when the tray 20 is loaded.
Since the band 30 is normally spaced apart from the sides of the containers, damage due to excessive contact between the containers and the band is prevented. The spaced apart relationship between the containers and the band 30 also provides a protective zone around the perimeter of the loaded containers which prevents external forces from impacting and damaging the containers. The band 30 is flexible so as to flex upon impact and thereby prevent the containers from being substantially affected by external forces.
The portion of the band 30 between the columns 70 has a generally flat exterior surface 39. The lowermost portion of the exterior surface 39 has a smooth, downwardly and inwardly inclined beveled or cam surface 40 best shown in Figs. 2 and 3. The beveled or cam surface 40 is important in preventing the shingling problems of previous trays. The beveled surface 40 provides a cam surface, which when in contact with the lip or top edge of an adjacent tray, tends to drop down without resting on the adjacent top edge. To further prevent shingling, the lip 33 of the end walls 27 of the present tray is preferably provided with a plurality of end wall ribs 35, best shown in Figs. 2-5, which follow the bevel 40 of the lowermost portion of the exterior surface 39 of the band 30. The end wall ribs 35 of the end walls 27 will tend to cam downward when in contact with the top edge of an adjacent tray, thereby preventing the end walls 27 from resting on the adjacent tray. Any other structural feature disposed on the exterior of the band 30 should also be downwardly and inwardly inclined such as the beveled or cam surface 40 so as to avoid providing a catch surface prevalent in trays which have shingling problems.
The top of the band 30 along the side walls 26 preferably is slightly different than along the end walls 27. Along the exterior of the side walls 26, the top of the band 30 and the top portions of the columns 70 are substantially flush. However, along the exterior of the end walls 27, the top of the band 30 and the top portions of the columns 70 have a lip 33 at or near the top of the end walls, best shown in Figs. 1 and 11. The lip 33 forms a slight overhang over the exterior of the columns 70 as well as over the corner posts 100.
The floor 50 preferably has a lattice-like configuration having a pattern of open spaces as shown in Figs. 1 and 4. The open floor design provides a light weight tray, and is practical for allowing any liquids to drain through the floor 50. Referring specifically to Fig. 3, the floor 50 has an upper or top surface 51 defining a plurality of preferably circular support areas 53 for supporting containers thereon. The support areas 53 are connected to each other by a system of grid-like perpendicular struts 61 traversing the floor 50 in longitudinal and lateral directions, and diagonal struts 62 extending preferably radially from the circular support areas 53. Lattice members 63 are preferably diamond-shaped members located between the support areas 53. The perpendicular struts 61 extend the full length and width of the floor 50, and between the rows and columns of support areas 53. The perpendicular struts 61 connect to the lattice members 63 substantially at the points of the diamond-shapes. The diagonal struts 62 connect to the lattice members 63 substantially at the middle of the sides of the diamond-shapes. The open lattice-work floor is made up of support areas 53, perpendicular struts 61, diagonal struts 62 and lattice members 63. Lattice members 63 are preferably open in their centers with a shelf 63a extending inward. The shelves 63a provide more surface area to the bottoms of the lattice members 63. The central lattice member 64 is preferably solid and shown in cross-section in Fig. 12.
The support areas 53 are arranged in rows and columns to thereby define one or more arrays. In the preferred design, there are four two-by-three arrays to accommodate four six-packs of containers or cans, in other words, there are a total of twenty-four support areas 53 in a four-by-six arrangement.
Each support area 53 is sized to hold containers or cans of similar capacity but having varying bottom rim diameters. Each support area 53 includes a supporting ring 54 which is generally centered in the support area. The perpendicular struts 61 and diagonal struts 62 are connected to the rings 54. The rings 54 are preferably open in their centers with an annular shelf 54a extending inward near the bottoms of the rings 54. The annular shelves 54a provide more surface area to the bottoms of the rings 54. A can seat 55 is formed at each support area 53 by relatively shallow indentations 55a on the struts 61 and 62 near the ring 54. The indentations 55a are located on the struts 61 and 62 and sized to seat or engage the bottom of cans having varying bottom rim diameters. The seats 55 retain the bottoms of the cans in place which prevents the sides of the cans from being damaged due to excessive contact with side walls and other cans. Indentations 55a are best shown in Fig. 10. The range of bottom rim diameters that can be securely seated in the can seats 55 is shown schematically in Fig. 14. The shading 56 represents the size of seats 55. Therefore, any can with a bottom rim diameter within that shaded range 56 will be securely seated in a can seat 55.
The floor 50 has a bottom surface 57 which has distinctive structural features. The floor bottom surface 57 is configured to allow for stacking and cross-stacking of loaded trays. Cross-stacking is done by rotating a top tray 90 degrees about a vertical axis and lowering onto a bottom tray or trays. During shipping and handling trays may be moved by machines and it is advantageous to use trays which can be stacked or cross-stacked. Additionally, when the trays are used to display the containers in a retail setting, the retailer may wish to cross-stack the trays for display or space reasons. The floor bottom surface 57 has structural features which help hold the tray securely on other trays beneath when stacked and cross-stacked. When a tray is loaded and stacked or cross-stacked above a similarly loaded tray, the tops of the cans in the tray beneath are loosely retained in position by the floor bottom surface of the tray above. The floor bottom surface of the present invention has the necessary features to accommodate the retention of the tops of cans of varying top diameters loaded in a tray beneath.
The floor bottom surface 57 also has two sets of downwardly projecting redoubts. The first set of redoubts 58 are preferably circular in shape and are located on the floor bottom surface 57 so that they will be disposed within the top rims of cans in a loaded tray beneath. In other words, the redoubts 58 are generally centered under the support areas 53, and are the bottom surfaces of the rings 54. These redoubts 58 are also called "inside the can" redoubts. The second set of redoubts 59 are preferably diamond-shaped and are located on the floor bottom surface 57 so that they will be disposed between the top rims of cans in a loaded tray beneath. Redoubts 59 are located generally between the support areas 53, and are the bottom surfaces of the lattice members 63. Redoubts 59 are also called "outside the can" redoubts. The edges of the redoubts 58 and 59 are contoured to facilitate handling of loaded trays. Redoubts 58 and 59 are preferably contoured by providing rounded bevels 58a and 59a respectively. The annular shelves 54a and lattice member shelves 63a on the bottoms of the rings 54 and lattice members 63 respectively provide added surface area to the redoubts 58 and 59. The added surface area to the redoubts 58 and 59 results in a smoother "ride up" operation described below.
When loaded trays are stacked and cross-stacked, it is advantageous to have the loaded trays interlock. The interlocking feature provides stability to a stacked and cross-stacked pallet of loaded trays. Also during transport, the stacked trays should be prevented from moving relative to each other. An important aspect of the present invention concerns the interlocking feature, i.e., the redoubts. Since manufacturers make cans with varying top and bottom diameters, a single tray that can be used with all of those cans must be able to safely interlock loaded trays of those cans. The present invention accommodates the various sizes of can tops and bottoms by providing redoubts 58 and 59 sized to be able to interlock a range of can tops and bottoms. Regardless of how small the can top diameter is, as long as the inside redoubt 58 can fit within it, a loaded tray can be safely interlocked. Also, regardless of how large the can top diameter is, as long as outside redoubts 59 can surround it, a loaded tray can be safely interlocked. The range of top rim diameters that can fit on the floor bottom surface 57 to safely interlock loaded trays is shown schematically in Fig. 15. The shading 60 represents the area within which the can top rim diameter can fall. Therefore, any can with a top rim diameter within that shaded range 60 can safely interlock with the redoubts 58 and 59 in the floor bottom surface 57 of the tray above it.
The redoubts both help hold loaded and stacked trays in a blocked position, and facilitate movement of an upper tray along the tops of cans in a lower tray in an unblocked position. The blocked position refers to when loaded trays are firmly stacked or cross-stacked with redoubts 58 disposed inside the top rims of the cans, and redoubts 59 disposed between the tops of cans in the lower tray. In the blocked position, the upper tray is effectively blocked from moving along the tops of the containers by the downwardly projecting redoubts 58 and 59 which are disposed inside and between the top rims of the containers beneath and resist sliding movement of the upper tray. On the other hand, redoubts 58 and 59 also help the upper tray to slide when it has been unblocked from the tops of the lower cans.
To unblock a loaded tray from a lower loaded tray, a positive twist or rotation of the upper tray about a vertical axis causes the upper tray to "ride up" the redoubts' beveled surfaces 58a and 59a onto the tops of cans below and thus the surfaces of redoubts 58 and 59 of the upper tray can slide freely on the tops of cans below. It is this "ride up" operation which is improved by the addition of shelves 63a and annular shelves 54a. In the unblocked position, redoubts 58 and 59 provide a sliding surface so that a loaded tray can be easily slid along the tops of cans loaded in a similar tray below without having to be lifted. The use of redoubts 58 and 59 to move trays along other trays below facilitates shipping and handling. It should be noted that redoubts 58 and 59 are placed so that if the trays are not in either the stacked or cross-stacked positions, that is, in line or at 90 degrees with each other, at least some of the redoubts are always on the tops of the cans thereby preventing the top tray from falling into the blocked position. Only the stacked or cross-stacked positions are blocked positions. In other words, once the top tray is unblocked, redoubts 58 and 59 prevent blocking in all but the stacked or cross-stacked configurations. Redoubts 58 and 59 are also positioned on the floor bottom surface 57 so as not to impede cross-stacking of loaded trays of cans having varying top diameters. They are also designed with a clearance for cans which do not line up exactly in their support areas.
The columns 70 along the walls 26 and 27 of the tray 20 which connect the floor 50 to the band 30 are positioned between adjacent support areas 53 at the outermost edges of the floor 50. Since the wall structure is preferably open between the columns 70, windows 28 are formed between the columns 70 and under the band 30. The windows 28 are sized to expose the UPC labels on the cans in the tray. It is advantageous to be able to have the UPC code scanned without removing the cans from the tray. The height of the columns 70 and the width of the band 30 are preferably configured to allow the UPC code on a can in the tray to be read through the window 28. The height of the columns 70 is also sufficient to prevent the containers from tipping when transported and handled, yet low enough that the tops of the containers extend above the band 30 and a stack of nested trays takes up minimal vertical space. As shown in Fig. 11, each empty tray only adds minimal additional height to a nested stack of trays.
Referring to Figs. 1, 2, 5 and 13, the exterior surfaces of the columns 70 include slots 72. The slots 72 are configured to receive the inwardly disposed surfaces 74 of the columns 70 of a tray nested above. The inward surfaces 74 are generally vertical and preferably have three angled faces 74a, 74b, 74c which would mate in the corresponding slot 72 having mating angled surfaces 72a, 72b and 72c. The slots 72 receive the inward surfaces 74 of columns of another tray to provide a deeply nested arrangement.
Of the columns 70, preferably a column along each of the side walls 26 is a ledged column 80. A ledged column 80 is best illustrated in Figs. 1, 2, 5 and 8, and has most of the features of the other columns 70. The portion of a ledged column 80 directly below the band 30 has an indented vertical slot 82, which will be referred to as the vertical wall slot to distinguish it from similar slots in the corner posts. The vertical wall slot 82 has a bottom surface 83. The ledged column 80 also includes an interior wall rib 84 extending upward and inward from the floor top surface 51. The top surface 85 of the interior wall rib 84 is substantially flush with the wall slot bottom surface 83. Together, surfaces 83 and 85 form a wall nesting ledge 90 as shown in Figs. 1 and 8. The wall nesting ledge 90 is a shelf-like structure in the ledged column 80. The ledged column 80 also includes an exterior wall rib 86 disposed opposite the interior wall rib 84, and extending downward and outward from the ledged column as shown in Figs. 2 and 4. The bottom surface 87 of the exterior wall rib 86 is substantially flush with the floor bottom surface 57.
When empty trays are nested, the wall slot 82 receives the exterior wall rib 86 of another tray nested thereabove so that the bottom surface 87 of the exterior wall rib rests on the wall nesting ledge 90 of the tray below. In this way, the wall nesting ledges 90 support the weight of any trays nested above. The exterior wall ribs 86 reinforce the strength of the wall nesting ledges 90. The exterior wall rib 86 is substantially flush with column exterior face 72b.
Since the wall nesting ledges support the weight of above-nested trays, the wall structures of the trays are relieved of that load and consequently are not as prone to splaying outward or fraying. Thus the trays of the present invention maintain their structural integrity and will have a longer service life. Moreover, controlling the spreading or fraying of the wall structures lessens the chances of shingling.
In addition to the ledged columns 80, the tray of the present invention preferably includes corner posts 100 also having structural features for supporting the weight of above-nested trays. Referring to Figs. 1-4 and 9-11, a corner post 100 has an indented vertical corner slot 102 directly below the band 30. The slot 102 has a bottom surface 103. The corner post 100 also includes an interior corner rib 104 extending upward and inward from the floor top surface 51. The top surface 105 of the interior corner rib 104 is substantially flush with the corner slot bottom surface 103. Together, surfaces 103 and 105 form a corner nesting ledge 110 as shown in Figs. 1, 5, 9-11. The corner nesting ledge 110 is a shelf-like structure in the corner post 100. The corner post 100 also includes an exterior corner rib 106 disposed opposite the interior corner rib 104, and extending downward and outward from the corner post as shown in Figs. 2, 3 and 9. The bottom surface 107 of the exterior corner rib 106 is substantially flush with the floor bottom surface 57.
When empty trays are nested, the corner slot 102 receives the exterior corner rib 106 of another tray nested thereabove so that the bottom surface 107 of the exterior corner rib rests on the corner nesting ledge 110 of the tray below. In this way, the corner nesting ledges 110 support the weight of any trays nested above. The exterior corner ribs 106 reinforce the strength of the corner nesting ledges 110.
The corner nesting ledges also support the weight of above-nested trays, so the wall structures of the trays are relieved of that load. Thus, as with the wall nesting ledges, the wall structures are not as prone to splaying outward or fraying. The advantages of maintaining structural integrity, longer service life and reduced chances of shingling are gained by use of corner nesting ledges.
A detailed look at the figures reveals that the corner nesting ledges 110 are preferably off-center on the corner posts 100. The corner nesting ledges 110 are preferably located closer to the end walls 27 than to the side walls 26. The reason for this preferred position is to avoid interference with secondary wraps around cans or containers. Since the tray of the present invention is contemplated to be used with loose cans as well as those wrapped or otherwise bound into six-packs or twelve-packs, the off-center positioning of the corner nesting ledges 110 ensures that the ledge structure does not interfere with wraps or other binding means around the six- or twelve-packs of cans.
The preferred embodiment of the present invention comprises wall nesting ledges and corner nesting ledges, but a tray with only wall nesting ledges to support the weight of nested trays is within the scope of the invention. Any number of columns 70 could be ledged columns 80, that is, there is no limit to the number of wall nesting ledges which can be provided. Alternatively, a tray with only corner nesting ledges is also within the scope of the invention.
The columns 70, in addition to their nestability function, must also be substantial enough to support the top band 30 so that the tray 20 does not break apart when the containers push against the band 30. The columns 70 preferably have a pyramidal design allowing them to have the largest area at their bottoms, making it unlikely that they will be torn away from the floor 50 in the event of a severe impact. The columns 70 of the present tray 20 are disposed between the container support areas which are along the periphery of the tray. By this placement of the columns 70, excessive contact with the containers during normal tray handling, and any resultant damage, is avoided.
An additional feature of the present invention is the provision of a continuous exterior band portion 42 for stamping, printing or engraving logos or advertisements or other printed matter. The continuous portion 42 is preferably centered on each of the side walls 26, but could be placed anywhere on the band as best shown in Fig. 2. The continuous portion 42 is continuous over an intermediate column 71, that particular column not having exterior slots 72 which extend as far upward as the ones on the other columns. Any column which is positioned at the center of a continuous portion 42 is an intermediate column 71. An important aspect of the continuous portion 42 is that on the inside of that intermediate column 71 the upper part of the column 71 does not have the interior faces 74, but instead has only structural ribs 75. The material for interior faces 74, if present, would make that portion of the column 71 too thick and may cause processing problems. An example of a processing problem is the possibility of shrinking occurring in very thick areas. Other processing problems will be apparent to persons familiar with plastic processing.
In the preferred embodiment of the present invention, the intermediate column 71 coincides with the ledged column 80 so that the continuous portion 42 is positioned above the wall nesting ledge structure 90. Of course any configuration of continuous portions 42, intermediate columns 71 and ledged columns 80 is within the scope of the invention.
From the foregoing detailed description, it will be evident that there are a number of changes, adaptations, and modifications of the present invention which come within the province of those skilled in the art. However, it is intended that all such variations not departing from the spirit of the invention be considered as within the scope thereof as limited solely by the claims appended hereto. |
Mention the word “integrity” and many people think of someone who is honest or trustworthy. While this is sometimes considered the definition of integrity, and these traits of honesty and trustworthiness are necessary traits of leadership, the better or more accurate definition of integrity is broader and deeper than what most people think. The Merriam-Webster dictionary defines integrity as the quality or state of being complete or undivided. The word integrity comes from the Latin words integer or integritas; both are words that in ancient Roman times meant completeness or wholeness. During inspection, Roman soldiers would pound their chest and shout these words to indicate that both their armor and their character were complete or sound.
Integrity carries the concept of consistency of actions, values, principles, expectations, and outcomes. It also indicates a high level of moral standards that drive everything that the leader does. It is said that “actions speak louder than words,” but perhaps integrity speaks even louder because the congruity or incongruity between our words and actions determines whether people can wholeheartedly follow us or not. Integrity is critical for leadership because it is the trait that communicates just how committed we are to our stated values and principles; it shows the real person.
Integrity, as stated, is a combination of a range of character traits that each stand on their own but work in concert. It can be thought of as the fabric that is interwoven with these traits. Therefore, leaders of integrity:
- Value honesty – They value honesty both from themselves and from those around them. A person of integrity doesn’t bend or whitewash the truth to suit themselves. And they value truth from those around them even when it might be negative feedback or bad news.
- Are authentic – They accept who they are and allow others to see into the strengths and weaknesses of their lives. They are comfortable in their own skin.
- Demonstrate moral courage – They do what is right in all circumstances, even when doing so costs them on a personal level. They don’t give themselves “wiggle room” on their moral standards.
- Have clear and strong values – They have defined their values over their lifetime, allowing these values to then drive their behavior. They know their values and show their values.
- Are consistent – A leader of integrity has a well-defined set of values that guide every decision, therefore the decisions that they make are consistent. Everything about their life reflects their set of values.
- Value and respect others – They have compassion for other people. While recognizing that different people contribute differently, their moral code gives equal value to each person and separates the person from the contribution.
- Demonstrate selflessness – Their compassion for others demonstrates their humility, thinking of others first. They cheer and contribute to the successes of others.
- Do what they say – They stand by their commitments. Their beliefs, words, and actions are consistently aligned.
- Are decisive and effective – Because they are committed to a solid set of values, decisions become easier to make. The choice that is consistent with their values is the right choice.
- Generate trust – Trust results from their reliability and positive commitment, all built upon their positive moral code.
Without integrity, a person cannot generate the trust and respect that followers require before they make their commitment to follow. Without this relationship of trust and respect, it is difficult to draw followers. Without followers, a person is not really a leader at all. On the other hand, integrity in a leader attracts followers.
If integrity is so crucial for effective leadership, how does a person become a leader of integrity? By its very nature, integrity cannot be learned or implemented; it must be lived, and lived consistently over time. If you feel that you are lacking in integrity, it may be time to think long and hard about your core values. Are they the right ones, and do you have a strong conviction about them? Are you living them day-by-day and minute-by-minute? By examining and perhaps realigning values and actions, a person can begin to build integrity into their life.
Ken Vaughan
[email protected]
Ken is a business strategy consultant and leadership coach. His passion is helping companies and people grow and succeed. With an engineering degree and an MBA, he spent more than 20 years working in M&A and business development in the corporate world before founding New Horizon Partners, Inc. in 2002. His consulting practice works with a wide variety of industrial companies, helping them make good decisions about where and how to compete and building their leadership capabilities. To read other articles by Ken on business strategy and leadership, visit the New Horizon Partners website. | https://ohmanufacturing.org/demonstration-of-real-integrity-key-to-true-leadership/ |
Fractals and Fearless Predictions
What is it about man’s fascination with predicting the future? Can man really predict earthquakes and volcanic eruptions? A JCU scientist believes we can.
by Anastasia Koninina
Halfway around the world from Australia, German football fans watched eagerly as Paul, the octopus, ever so slowly made a choice between two competing teams vying for the 2010 FIFA World Cup. Seven times out of seven, Paul correctly predicted the winner of each of the German national football team’s matches in the World Cup, as well as the outcome of the final between Spain and the Netherlands. Spain won.
Somewhere in India, a 13-year-old high school student is busy observing a cockroach sitting placidly under a glass dome. As he observes, he pulls a string tied to a rough, red brick across a plank to make the table vibrate. Soon, the cockroach begins running around as the tabletop starts shaking. The student’s experiment hopes to show that insects such as the cockroach can sense small tremors in the ground that are the usual precursors of an earthquake.
In 2009, an Italian scientist made a bold prediction that a destructive earthquake would hit central Italy. His announced date came and went with no result and many accused him of spreading panic. However, less than a week later, a magnitude 6.3 earthquake did strike in the Abruzzo region sparking renewed interest in his earlier claim. His method? He had been measuring spikes in radon gas in the area.
Radon is a radioactive, colourless noble gas that is formed as part of the normal decay chain of uranium and thorium. His theory goes that before a major earthquake, the fault line adjusts itself and, during this process, the gases are released.
Structural and Economic Geology Professor Dr Tom Blenkinsop had always been interested in rocks and fossils as a child. Growing up in the UK, he recalled accompanying his grandfather to collect fossils in Dorset at age seven.
His own interest in geology and fractals was inspired by a teacher who told him that fractals would become a new movement. Professor Blenkinsop said he believes his study of fractals might lead him to the prediction of natural events such as volcanic eruptions and earthquakes.
So what are fractals? In 1975, French-American mathematician Benoit Mandelbrot coined the term fractal from the Latin adjective, fractus, to denote anything that is like a broken-up stone—irregular and fragmented.
Mandelbrot’s study focused on the occurrence of many ‘rough’ phenomena in the real world such as mountains, coastlines and river ways that he said were, far from unnatural, actually quite intuitive and natural.
Professor Blenkinsop met Professor Mandelbrot in Munich shortly before Mandelbrot’s death in 2010.
“I was organising a conference on fractals that occurs roughly every four years. It was not a big conference; probably around 70 or 80 people. That’s what he enjoyed. He could talk to each person individually.”
Professor Blenkinsop said Mandelbrot gave him useful advice at that meeting, which inspired him to continue with his work.
“Fractals are evident not only in mineral deposits but also in a whole variety of phenomena such as volcanoes, earthquakes and faults in the earth’s crust.
“I rapidly discovered that fractals were in an amazing amount of geological phenomenon and how mineral deposits follow fractal distributions,” he said.
“Fractals encapsulate self-similarity: when we look at an object that consists of small and large parts – it looks similar at different scales.”
Examples of fractals might include the humble broccoli with its compound structure of small segments that resemble the whole piece, or seashells with chambers, starting from the small size in the centre and continuing with segments of a larger size to its outer edges.
“In a group of objects, there are often many more small objects than large ones, and the ratio between them is given by a number called the fractal dimension,” Professor Blenkinsop said.
He said fractal patterns are usually found in the way rivers, rocks and mineral deposits are constructed.
“Floods and rivers tend to follow fractal distributions. These patterns are used to estimate how often floods occur in that river, which is very important information if you live in a flood-prone area.”
However, the list of fractals in nature is not limited to these features. Earthquakes and volcanoes tend to follow the same distributions.
“Fractal distributions of earthquakes are very important, because knowing the relationship between the numbers of small and large events with some degree of confidence, we can say that there will be so many earthquakes of a certain size in a certain region over a certain period of time.”
Despite the fact that it is evident that earthquakes follow the same fractal distributions, the JCU scientist said the current knowledge of fractals would not be sufficient to predict the precise time and location of earthquakes at this stage.
“That’s more difficult. Ideally, an earthquake prediction would show when and where an earthquake will occur and how big it is going to be,” Professor Blenkinsop said.
“What we can do and might do is to estimate the number of earthquakes that might occur within a certain period of time at a certain place.
“We are not able to understand the process of earthquakes enough in order to be able to make these predictions.”
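The article does not spell out the arithmetic, but the size–frequency idea behind a fractal dimension can be sketched in a few lines of code. The sketch below is illustrative only: the data are hypothetical (not from Professor Blenkinsop’s research), and it simply estimates the dimension D in a power-law relation N(s) ∝ s^(-D) from the slope of a log–log least-squares fit, written here in Ruby.

# Illustrative sketch only: estimate a fractal dimension D from a
# hypothetical size-frequency relation N(s) ~ s^(-D) using a simple
# least-squares fit on log-transformed data.
sizes  = [1.0, 2.0, 4.0, 8.0, 16.0]   # event size s (arbitrary units)
counts = [1000, 260, 70, 18, 5]       # number of events larger than s

xs = sizes.map  { |s| Math.log(s) }
ys = counts.map { |c| Math.log(c) }

n      = xs.size.to_f
mean_x = xs.sum / n
mean_y = ys.sum / n
slope  = xs.zip(ys).sum { |x, y| (x - mean_x) * (y - mean_y) } /
         xs.sum { |x| (x - mean_x)**2 }

puts "Estimated fractal dimension D is roughly #{(-slope).round(2)}"

Run as-is, this prints a D of about 1.9; with real earthquake catalogues the same slope-fitting idea underlies the size–frequency statements quoted above.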
Another example of fractals can be seen in the construction of the World Wide Web.
Fractals, with their compound structure of particles of similar appearance, are being used by artists to produce compound images.
“To me, fractal art involves a great effect and great subtlety, and perhaps more creativity.
“I think fractal art is a creative and expressive activity. It is not simple or random; I think it is not something anyone of us can do.”
Digital artist and former JCU student Rob Donaldson creates images that showcase his fascination with fractals.
On his website, Donaldson said he finds this method brings the “happy accidents” into his pieces.
He specialises in digital images and uses fractals and other randomly generated structures to give a “nature of chaos” to his works.
For the future, Professor Blenkinsop hopes his research will prove fractals can help find new mineral deposits as well as possibly predict natural disasters.
He gamely predicts there will be more scientific as well as popular interest in fractals. | http://jcnn.com.au/spotlight/fractals-and-fearless-predictions/ |
Here are 6 events in Track & Field that aren’t official IAAF events, but nevertheless good fun and great for generating interest in Athletics.
Note that the beer mile is omitted because I think it’s unhealthy (at least for me) to drink a full beer, run 400m, and repeat 4 times! (Women only have to drink 3 beers, because of their bodyweight) My eyes would be bloodshot, and my cheeks would be as red as Rudolph’s nose. Not to mention how sick I would be when consuming that much alcohol in such a short period of time.
We’ve come a long way since 1896 and the Panathenaic Stadium, which had a track straightaway 204.07 meters long and 33.35 meters wide! (My ankles just cringe when I think of running a 400m there! Or 400mH :)
#6 – 19 x 84m Mile Relay
I wrote about the 105 x 400m Relay which is a full 26 Mile 385 Yard Marathon (actually, it’s 105.5 laps). How about running a mile on an 84-meter track in Anchorage, Alaska?
Or running a Mile relay on that same track? More specifically, 19 X 84m Mile Relay?
This group still ran 4:21, which is 13.7 seconds per lap (of 84 meters with the two tight turns).
#5 – Running Clockwise, not Counter Clockwise
Running counter clockwise started in the 6th century BC and has stayed that way ever since. That’s over 2600 years!
The University of Ghent conducted a study on why all athletic track events run anti-clockwise, and the 2011 Brussels Diamond League hosted an unusual event… the Reverse 400 meters running clockwise! In this race, the winner just misses the World Best (since it’s not really a world record).
#4 – Fastest 400-Meter WR While Juggling Three Balls
How about a 400 meters sprint while juggling 3 balls?
Franz Roos completed a 400-meter run in 56.06 seconds while juggling three balls on September 11, 2000. How many Masters men can run 56 seconds for 400 meters?
#3 – Full Decathlon on Ice with Skates
Imagine one day you wake up and your entire track is covered in a sheet of ice?
No worries, Spikey the Mascot has it covered.
Some of the events I cringe upon thinking, like pole vault.
Other events like the 1500m should be much faster than regular track spikes (without ice!).
How about the 400m? His time may surprise you!
#2 – Parallel High Jump or Synchronized Jumping
If you like synchronized swimming, especially the duets, then you’ll like this event.
It’s hard enough to high jump 2.18 meters (7 feet 1.5 inches), let alone one of you jumping it, while the other guy knocks it down, and keeping your composure without strangling the other guy :)
#1 – Retrorunning or Running Backwards
The world bests for a male athlete running backwards are 13.6 for 100 meters, 31.56 for 200 meters, and 69.56 for 400 meters. The best for four men running the 4×100 meter relay was set in a time of 62.55.
[Tweet “World Best for Running backwards is 13.6 for 100m, 31.56 200m, 69.56 400m, and 4×100 in 62.55”]
For the record, the Women’s records are 16.8, 38.47, 1:29.0 and 1:17.8 respectively (100-200-400-4×100)
Other world records are listed here. My hamstrings hurt watching this video! | https://speedendurance.com/2016/01/09/top-6-bizarre-events-in-track-field-3-will-blow-you-away/ |
Q:
Parsing JSON object in RUBY with a wildcard?
Problem:
I'm relatively new to programming and learning Ruby, I've worked with JSON before but have been stumped by this problem.
I'm taking a hash, running hash.to_json, and returning a json object that looks like this:
'quantity' =
{
"line_1": {
"row": "1",
"productNumber": "111",
"availableQuantity": "4"
},
"line_2": {
"row": "2",
"productNumber": "112",
"availableQuantity": "6"
},
"line_3": {
"row": "3",
"productNumber": "113",
"availableQuantity": "10"
    }
}
I want to find the 'availableQuantity' value that's greater than 5 and return the line number.
Further, I'd like to return the line number and the product number.
What I've tried
I've been searching on using a wildcard in a JSON query to get over the "line_" value for each entry, but with no luck.
to simply identify a value for 'availableQuantity' within the JSON object greater than 5:
q = JSON.parse(quantity)
q.find {|key| key["availableQuantity"] > 5}
However this returns the error: "{TypeError}no implicit conversion of String into Integer."
I've googled this error but I can not understand what it means in the context of this problem.
or even
q.find {|key, value| value > 2}
which returns the error: "undefined method `>' for {"row"=>"1", "productNumber"=>111, "availableQuantity"=>4}:Hash"
This attempt looks so simplistic I'm ashamed, but it reveals a fundamental gap in my understanding of how to work with looping over stuff using enumerable.
Can anyone help explain a solution, and ideally what the steps in the solution mean? For example, does the solution require use of an enumerable with find? Or does Ruby handle a direct query to the json?
This would help my learning considerably.
A:
I want to find the 'availableQuantity' value that's greater than 5 and [...] return the line number and the product number.
First problem: your value is not a number, so you can't compare it to 5. You need to_i to convert.
Second problem: getting the line number is easiest with regular expressions. /\d+/ is "any consecutive digits". Combining that...
q.select { |key, value|
value['availableQuantity'].to_i > 5
}.map { |key, value|
[key[/\d+/].to_i, value['productNumber'].to_i]
}
# => [[2, 112], [3, 113]]
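For what it's worth, here is a self-contained version of the same approach that can be pasted into irb and run as-is; the sample data simply mirrors the question. (The TypeError in the first attempt, incidentally, comes from find on a Hash handing each [key, value] pair to a single block parameter as an Array, so key["availableQuantity"] becomes an Array index lookup, which expects an Integer.)

require 'json'

# Sample JSON string mirroring the question; in practice this would be
# the string produced by hash.to_json.
quantity = '{
  "line_1": { "row": "1", "productNumber": "111", "availableQuantity": "4" },
  "line_2": { "row": "2", "productNumber": "112", "availableQuantity": "6" },
  "line_3": { "row": "3", "productNumber": "113", "availableQuantity": "10" }
}'

q = JSON.parse(quantity)

result = q.select { |key, value|
  value['availableQuantity'].to_i > 5
}.map { |key, value|
  [key[/\d+/].to_i, value['productNumber'].to_i]
}

p result
# => [[2, 112], [3, 113]]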
| |
Cherry Castellvi arrived at the Department of State Building on William Street at 4 a.m. and spent the next four hours shifting her weight from foot to foot while feverishly studying a real estate textbook.
Like the roughly 400 aspiring agents behind her, Ms. Castellvi joined what has become a weekly ritual, a snaking line around the downtown block to take a one-hour real estate sales exam.
The multiple-choice test, administered every Tuesday morning and afternoon, has become a rite of passage for a multitude of New Yorkers looking to join the real estate craze. With more than 20,000 people taking the test annually in the city alone, the Department of State is so overwhelmed it is adding a third test to its roster beginning in July.
“Hands down it is the most popular test we administer,” said the deputy secretary of state for business and licensing services, Keith Stack.
Despite the headache of waking at dawn, waiting on an endless line, and enduring an exam, hundreds of hopeful agents do it. Investment bankers mix with college dropouts, new immigrants squeeze next to retirees, and everyone clasps frayed notebooks close to their chests periodically glancing down to check last-minute answers.
“I broke my neck and jaw, lost 24 teeth, and had $600,000 worth of surgery,” the 28-year-old said, sitting on the sidewalk, study sheets spread around him. He said he escaped certain death by taking a cigarette break outside when he wasn’t supposed to.
about” and got him to take a job with a commercial developer who requires employees to take the test.
Others waiting in line included Zaza Chicareli, a 33-year-old used-car salesman from California; Elizabeth Cambanaos, who spent 20 years selling wholesale Greek music CDs, and Roewna Rothman, a corporate banker for Citigroup who wanted a more flexible job after having a child.
Some in line were on a return visit. The test is graded pass/fail, and the passing rate is roughly 66%.
“I’m determined this time to pass,” said Celeste Browne, a 43-year-old who was taking the test for the second time. A cosmetologist for more than 20 years, she chose to change careers after getting injured.
Rosa Mercado was also going to take the test for the second time.
“When you get up at 3 a.m. and wait for hours in the cold, you can’t be expected to think right,” she said.
to sponsor a new agent, including training and office infrastructure.
“With the average New York City apartment costing $1 million, everyone thinks they will sell that apartment and make a ton of money,” said the head of residential sales at Heron, Corinne Pulitzer. “There is going to be a large fallout rate among agents” when those who are in it for the wrong reasons get weeded out, she said.
Other longtime real estate pros agreed. | http://esthermuller.com/2019/01/27/real-estate-prices-have-job-seekers-going-for-brokers/ |
Since hose may change in length from +2% to -4% under the surge of high pressure, provide sufficient slack for expansion and contraction.
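As a rough worked example (figures for illustration only, not from the specification): a hose with a free length of 1,000 mm that shortens by the full 4% under pressure loses about 40 mm, so at least that much slack should be allowed in the installed length.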
Hose should not be twisted. Hose is weakened when installed in a twisted position. Also, pressure in a twisted hose tends to loosen fitting connections. Design so that machine motion produces bending rather than twisting.
Never use a bending radius less than the minimum shown in the hose specification tables. The bend in the hose should be kept well away from the hose fitting (A > 1.5R).
Leave a proper length when the hose is connected.
Allow a larger bending radius when the hose is in motion.
Choose proper fittings; avoid too small a bending radius and excess force.
Choose proper fittings; avoid twisting in hose lines bent in two planes.
Choose proper fittings; avoid excessive hose length.
Avoid twisting in the hose by using clamps properly.
Reduce friction; avoid the hose touching the object directly, or keep it far away from the object. | https://www.cnshunda.com/specification/hydraulic-hose-assembly/correct-assembly-installation.html
British Airways is launching a daily service between Gatwick and Malta next summer.
The airline will deploy a combination of aircraft — a Boeing 737 and Airbus A319 and A320 — on the direct route to Luqa International Airport.
The route starts on March 30. It will have been five years since BA last flew to Malta in 2009.
Peter Simpson, BA's director at Gatwick, said: "We are delighted to be launching this new route to Malta for summer 2014 allowing us to return to the island, which we last served in 2009.
"We believe the convenient timings for the services will make the route attractive to tourists as well as those visiting friends and relatives."
BA will also be increasing the frequency of flights from Gatwick to Salzburg, Naples, Dubrovnik, Marrakech and Catania during summer 2014.
Last week, the airline's A380 superjumbo made its first long-haul flight to Los Angeles (see news, September 24). | https://www.businesstraveller.com/news/2013/10/04/ba-to-launch-daily-service-to-malta/ |
In base 11, what is -480 - -323426?
322a56
In base 15, what is -c450 - 734?
-cb84
In base 3, what is 202110000201 + -10111?
202102220020
In base 9, what is 253 - 127832?
-127568
In base 12, what is -312222 - 139?
-31235b
In base 14, what is -27 + -130b2d?
-130b56
In base 7, what is -12 - 140443655?
-140444000
In base 2, what is -110110001100010101 + 111000?
-110110001011011101
In base 14, what is acb732 - -5?
acb737
In base 14, what is -2b1565 - 3?
-2b1568
In base 16, what is -14fc5e - -7?
-14fc57
In base 11, what is -386 - -82a99?
82713
In base 16, what is -1edf - -77?
-1e68
In base 13, what is 47a2680 + 3?
47a2683
In base 15, what is -d12901a - 1?
-d12901b
In base 2, what is 1001111100 + 101100111010001101?
101101000100001001
In base 12, what is 2 + -8037910?
-803790a
In base 2, what is -11101101111110 - 11010101011001?
-111000011010111
In base 16, what is -5 - 72eb43?
-72eb48
In base 5, what is -200410 - 302121?
-1003031
In base 3, what is 2221120100002 + 2002?
2221120102011
In base 10, what is 4 + 204391429?
204391433
In base 5, what is 2 - 343304402312?
-343304402310
In base 10, what is 51810142 - -3?
51810145
In base 14, what is 194d - -86c?
23bb
In base 9, what is 6 + -168577733?
-168577726
In base 14, what is 262 + c4d33?
c5195
In base 9, what is -1540 - 18070?
-20620
In base 3, what is 220210101121000 - 110?
220210101120120
In base 5, what is 2 - -30000202?
30000204
In base 11, what is -2568 + -12a6?
-3863
In base 6, what is -2325251 + 105?
-2325142
In base 8, what is 17564110 + 5?
17564115
In base 3, what is 222221012 + 1112021?
1001110110
In base 11, what is 3 - 10321751?
-10321749
In base 2, what is 100110111011110110000000000 - 1?
100110111011110101111111111
In base 16, what is 256 + -136e?
-1118
In base 6, what is 134 - 1225410134?
-1225410000
In base 13, what is -724135 + 1?
-724134
In base 4, what is 12330101 + -21011?
12303030
In base 15, what is -1004 - ebe?
-1ec3
In base 16, what is 51875b + e?
518769
In base 13, what is -b41a2 + 3b9?
-b3ab6
In base 13, what is 10 - 809801?
-8097c1
In base 11, what is 619 - 2a3485?
-2a2967
In base 13, what is a49ab27a + 0?
a49ab27a
In base 16, what is -f + -1149b6e?
-1149b7d
In base 16, what is -6 + -6893fc?
-689402
In base 14, what is -22a - 9c3b?
-a067
In base 6, what is 41 + 42125002?
42125043
In base 13, what is 31a9 - 4a2?
2a07
In base 5, what is -3214 - 1042201?
-1100420
In base 9, what is -8227041 - 0?
-8227041
In base 14, what is -1b9723 - -6?
-1b971b
In base 3, what is 2 - 10101022011022?
-10101022011020
In base 16, what is 342849 - -2?
34284b
In base 8, what is 2254 - -3240050?
3242324
In base 2, what is 10100001110110111100011 + 0?
10100001110110111100011
In base 14, what is 2b + 2242c5?
224312
In base 2, what is -10100 - -1000111001011110000?
1000111001011011100
In base 15, what is -36575e28 + 3?
-36575e25
In base 7, what is -2 + 40035405?
40035403
In base 7, what is -346516432 - -2?
-346516430
In base 14, what is 3153a - -62?
3159c
In base 2, what is 101100111000010110 - 110111010011?
101100000001000011
In base 11, what is -322 + 51759?
51437
In base 10, what is -73856041 - -2?
-73856039
In base 14, what is -345 + -5bb8?
-611d
In base 13, what is 2684b1 - 4?
2684aa
In base 5, what is 4031010 + -244301?
3231204
In base 13, what is b - 5602512?
-5602504
In base 3, what is -111212 - -1210202012?
1210020100
In base 14, what is 3a3 - 5a62d?
-5a26a
In base 14, what is 49 - 1a5971?
-1a5926
In base 11, what is -56 + -85203?
-85259
In base 16, what is e1b0 - 23c8?
bde8
In base 14, what is b56a2 - -3b?
b56dd
In base 9, what is -10813 - 83225?
-104138
In base 16, what is 60301b - 6?
603015
In base 16, what is 1350792 - -3?
1350795
In base 12, what is bb - 20418?
-20319
In base 7, what is -5 - -1335254364?
1335254356
In base 10, what is -28047 + -498?
-28545
In base 4, what is -10030113 + -1131?
-10031310
In base 4, what is -31 - 11231100212?
-11231100303
In base 16, what is -5 + -b3c505?
-b3c50a
In base 5, what is 1432312 - -1042?
1433404
In base 8, what is -755001 - 133?
-755134
In base 16, what is -47c39b30 - 2?
-47c39b32
In base 11, what is -577449 - -220?
-577229
In base 11, what is 15a96 - -5?
15aa0
In base 13, what is 1201 + -507?
9c7
In base 16, what is 2f071 + -26?
2f04b
In base 14, what is 9ad7 + -229?
98ac
In base 15, what is -4 - -1344ea8?
1344ea4
In base 7, what is -1621 + 101160?
66236
In base 8, what is -16702 + -333?
-17235
In base 10, what is -10 - -1680214?
1680204
In base 5, what is 32 - 2340331?
-2340244
In base 4, what is -1002331333302201 + 1?
-1002331333302200
In base 4, what is -100002 - -1000001202?
333301200
In base 10, what is 746 + -102003?
-101257
In base 2, what is -1000000010011110101110111 - 100?
-1000000010011110101111011
In base 15, what is -8603 - -12b89?
9586
In base 5, what is -10243443 - 10?
-10244003
In base 12, what is -2a1ba - 3a?
-2a238
In base 15, what is -1d0 - 4ed0?
-51b0
In base 10, what is -560330 - -21?
-560309
In base 11, what is -5 + -85a50?
-85a55
In base 3, what is 2 - 1222010101222010?
-1222010101222001
In base 5, what is -4 + -10024204214?
-10024204223
In base 8, what is -11233673 + 0?
-11233673
In base 6, what is -110052 - -222333?
112241
In base 7, what is 15360 - -21156?
36546
In base 11, what is -1333a363 - 28?
-1333a390
In base 14, what is 303b4c + -53?
303ad9
In base 13, what is 3496 - -3a8?
3871
In base 12, what is 82 + 1233384?
1233446
In base 12, what is -66996a53 + 0?
-66996a53
In base 3, what is 0 + -12021110122121122?
-12021110122121122
In base 4, what is -322311231 - -122?
-322311103
In base 7, what is -10 - 46136263?
-46136303
In base 16, what is 52190 - 6a6?
51aea
In base 12, what is 14 - -12027a5?
12027b9
In base 3, what is 200102110 + 102222212?
1010102022
In base 5, what is 1100231302 + -1324?
1100224423
In base 16, what is 23 - 3a54?
-3a31
In base 11, what is 2a4 - -22752?
22a46
In base 4, what is 1 + 32001330010?
32001330011
In base 10, what is -112 + 958?
846
In base 16, what is af8322 - -5?
af8327
In base 12, what is 4 + -204ab61?
-204ab59
In base 9, what is -1 + -22012038?
-22012040
In base 4, what is -10 - -212300213?
212300203
In base 6, what is 25034544 + 323?
25035311
In base 11, what is -19a + -67993?
-68082
In base 4, what is -22 + -2220001310212?
-2220001310300
In base 11, what is -6a00883 - a?
-6a00892
In base 14, what is -2c35 + -17b71?
-1a9a6
In base 12, what is 3b + -948a13?
-948994
In base 14, what is -c40d87 - -8?
-c40d7d
In base 9, what is 55143567 - -3?
55143571
In base 6, what is -33033042 - -30?
-33033012
In base 5, what is -1430102241 + 3?
-1430102233
In base 4, what is -2111110 + -12220000?
-20331110
In base 8, what is -360454107 - -5?
-360454102
In base 4, what is 2022211020 + 20?
2022211100
In base 3, what is 21211122122201222 - 1?
21211122122201221
In base 8, what is 1450 - -22232?
23702
In base 5, what is -244 + -202441010?
-202441304
In base 4, what is -101013023133 + -21?
-101013023220
In base 6, what is 1255 - 1355445?
-1354150
In base 7, what is 41240 - -6406?
50646
In base 2, what is 110 + 1111011011110001110100110111?
1111011011110001110100111101
In base 6, what is -103 + -34453004?
-34453111
In base 5, what is -130133 + 3324?
-121304
In base 13, what is 5 + -4179573?
-417956b
In base 12, what is -699 + 9ba?
321
In base 11, what is a0a26 - -11a?
a1045
In base 13, what is 5 - 2470011?
-2470009
In base 9, what is -4 + -276548?
-276553
In base 9, what is 1017453 - -228?
1017682
In base 7, what is 14260651 - 32?
14260616
In base 7, what is -42 - -35302605?
35302533
In base 6, what is 501 + 1002540503?
1002541404
In base 4, what is 12103 - -1031230110?
1031302213
In base 9, what is -36246 + -5760?
-43116
In base 8, what is 2030 + -724075?
-722045
In base 5, what is -1340030133 + 22?
-1340030111
In base 16, what is 16 + 101abca?
101abe0
In base 3, what is -21 + -101210100201100021?
-101210100201100112
In base 6, what is -552100 - -4430?
-543230
In base 14, what is 31c - -5b3a2?
5b6c0
In base 4, what is -20200 - -1233303?
1213103
In base 13, what is 4 + b4a83c?
b4a843
In base 10, what is 245903476 - -3?
245903479
In base 3, what is 2000 + -1112102010021?
-1112102001021
In base 14, what is bc7d - 1b?
bc62
In base 6, what is 24203 - 4421?
15342
In base 8, what is 271 - -306655?
307146
In base 16, what is
| |
Cochin Port Trust has issued a work order for the construction of a new cruise terminal, which will be operational at Ernakulum Wharf by February 2020.
The terminal, which will cover an area of more than 2,250m2, will include passenger and crew lounges, immigration and customs clearance facilities, security check counters, a tourist information facility, duty free shopping and a cafeteria.
In recent years the Ministry of Shipping and Ministry of Tourism have jointly taken a number of initiatives to promote cruise tourism in India. This includes the rationalisation of tariffs for cruise vessels entering India’s ports and introducing easier visa requirements.
As one of the prime cruise tourism destinations in India, Cochin has been getting around 40 cruise ship calls every year, bringing tens of thousands of high net worth international tourists to Kerala. It is estimated that every cruise tourist spends on average US$400 per day during local visits. Building the new terminal is expected to further strengthen this fast-growing business, generating economic benefits for the region.
Currently cruise vessels up to 260m in length are handled at the BTP berth and Samudrika, the cruise passenger facilitation centre. However, vessels over 260m are handled at Ernakulum Wharf, and this is the reason that the new cruise terminal will be built here, the Trust says. | https://www.themaritimestandard.com/cochin-build-new-cruise-terminal/ |
We’re incredibly pleased and proud to report that this season there are now three of our translocated Ospreys breeding and incubating eggs! Of course, only one of these nests is in Poole Harbour (or indeed England!), despite the main aim of the translocation project being to restore a breeding population on the South Coast. So what’s going on?
The translocation project is underpinned by a key behaviour that many birds display, known as natal philopatry. Put simply, natal philopatry means that the species is generally attracted to settle and breed in their natal area, often as close as possible to the site they themselves hatched from. In migratory species like Ospreys, this is facilitated by their ability to imprint on (i.e. memorise and form an attachment to) their local area, which appears to occur after fledging and prior to leaving on migration for the first time. Through translocation, and by moving young birds before this period, we can manipulate this behaviour so that they are inclined to return to the area of their release site, rather than the area of their natal nest. This behaviour tends to present most strongly in male Ospreys, though plenty of females also show an inclination to return to their natal site. But, at the other end of the spectrum, some females disperse enormous distances, and two Poole Harbour translocated females, 014 and 019, have done just that…
Last year, we were delighted to report that 014 was the first translocated Osprey from the project to raise young: a single male chick, which was ringed Blue 494. For those not familiar with the story, 014 was released in Poole Harbour in 2018 and was first reported back in the UK in May 2020. We were delighted to hear of her return, though slightly surprised that the report had come from our friends all the way over at the Dyfi Osprey Project in West Wales! Over the next few months, she continued to make herself known in different areas of Wales, exploring potential territories (which is very typical of young Ospreys), and even ventured down to Devon for a brief period of time. The South Coast clearly wasn’t her cup of tea however, as she was soon back in Wales, and frequently being reported with at least one young male in tow.
It came as no surprise, therefore, that when 014 returned from migration in 2021, she made a beeline straight for Wales. She set up on a nest only a stone's throw away from the Glaslyn site at Pont Croesor, with a young male (Z2) known as Aeron, who fledged from Dyfi in 2017. The fact that the new pair raised a chick together was momentous. It meant that our project was already beginning to fulfil some of its wider, more long-term aims, in creating a link between populations in Wales, Poole Harbour and Rutland. Excitingly, 014 and Z2 are back at their nest at Pont Croesor again this year, and we are very pleased to hear that they are incubating eggs. Without a camera on the nest yet, we are unable to know how many eggs are in the clutch, but time will soon tell.
The success in Wales doesn’t stop there, however. On Monday 9th May, we were absolutely elated to hear that a second Poole Harbour translocated female, 019, has also settled on a Welsh nest with a young male, and laid her first egg! The male is KS6, another Dyfi fledged male from 2018, known locally as Dinas. This is the first breeding attempt for the pair and it comes after an interesting couple of years of sightings of 019, both in the UK and in her wintering grounds.
019 was one of the females released in the 2019 cohort of translocated juveniles (along with 022!). During that season, she was certainly a late bloomer, being the last to reach milestones throughout the project, including flight within the pens, fledging and leaving on migration. She therefore appeared to have defied the odds when she was reported in The Gambia on the 24th December 2019; an early Christmas present for the team! We were pleased to hear that she had settled in Gunjur Quarry, which is a fantastic area to choose for her wintering grounds, perfect for hunting alongside dozens of other overwintering Ospreys. Since then we’ve received regular reports of her from The Gambia, providing insight into the other side of these birds’ lives.
Hopes were therefore high for 019 to return to the UK in 2021, and she did not disappoint, arriving back in early June. But, just like 014, she was first reported in Wales, this time putting in an appearance at Glaslyn. It was great to see her back and looking in excellent condition, and bizarrely within minutes of 014 and Z2’s nest. We had a sense of déjà vu and anticipated that this would not be the last time that 019 was seen in Wales. Fast-forward to 2022, and after putting in a first brief appearance at Llyn Brenig in North Wales on May 19th, here we are, with the fantastic news that 019 is incubating on a nest with KS6, on another Friends of the Osprey nest in the exact same valley as 014 and Z2!
So, why are these females turning up in Wales? As we previously mentioned, the imprinting bond that draws a bird back to its release site is weaker in females than in males (though there are exceptions there, too!). Both Dyfi and Glaslyn are on a perfect migratory flyway for birds returning to and exploring the UK for the first time. When they reach Wales, another behaviour also comes into play, referred to as conspecific attraction: Ospreys are attracted to areas where a population already exists. Happening upon the Welsh nests would have given both 014 and 019 the impression that the local area was a productive site to find a nest and a mate, and later raise young. It also doesn't hurt that there are so many surplus bachelor males in the growing Welsh population and plenty of available nest sites thanks to conservationists putting up new artificial nests. So it didn't take long for the two females to settle in Wales, and we shouldn't be too surprised by this movement. Indeed, it was a very similar situation that caused CJ7 to be attracted to Poole Harbour, having fledged in Rutland. With an abundance of translocated juvenile Ospreys greeting her arrival in 2017, it was easy for her to be convinced that the area would be an excellent place to raise young.
All the pieces are therefore in place for the long-term success of the project, both with the nest in Poole Harbour paving the way to establish a new population, as well as connections being formed with Wales and Rutland. We couldn’t be happier or more proud of what has been achieved so far, and we’re extremely grateful for the teams across Wales and West Africa for their work and their updates on the translocated Ospreys. We can’t wait for the rest of the season to unfold, and will be following the progress of all three of our birds with avid excitement! | https://www.birdsofpooleharbour.co.uk/osprey-news/3-translocated-ospreys-breeding/ |
# Landesjugendorchester Baden-Württemberg
The Landesjugendorchester Baden-Württemberg (Youth Orchestra of Baden-Wuerttemberg, LJO), founded in 1972, is a youth orchestra based in the German state of Baden-Württemberg. The orchestra gives a concert tour in Baden-Württemberg twice a year, and has travelled abroad on several occasions.
## History and structure of the LJO
The Landesjugendorchester Baden-Württemberg was founded in 1972 by Klaus Matakas and Dietmar Mantel. They put together an ensemble of young musicians, who at that time had already been playing in the symphony orchestra of the music school in Lahr, appointing Christoph Wyneken as conductor. Shortly thereafter, a hand-picked selection of musicians as well as "Jugend musiziert" (Teenagers performing Classical Music) laureates from all over Baden-Wuerttemberg applied to audition.
The LJO has numerous partnerships with other German orchestras. On 7 November 2005, for instance, on the occasion of the joint initiative of the Association of German Orchestras, the German Jeunesses Musicales and the Association of German Conservatoires, the Stuttgart State Orchestra and the Youth Orchestra of Baden-Wuerttemberg launched the "tutti pro" orchestra partnership. Other partnerships were formed during concert tours abroad.
To enter the "LJO Pool" musicians have to pass an audition. The "LJO Pool" contains about 350 young musicians aged between 13 and 22, who play all kinds of orchestral instruments. Between 85 and 120 musicians participate in each working phase.
The office of the LJO is in Stuttgart-North in the rooms of the "Landesverband der Musikschulen Baden-Württemberg" (roughly translated as "state association of the music schools of Baden-Wuerttemberg"). Since 2008, the LJO has been employing one volunteer each year ("voluntary cultural year"(FSJ Kultur)).
## Artistic director
The co-founder of the Youth Orchestra of Baden-Wuerttemberg, Christoph Wyneken, served as artistic director until 2013; since then, a new artistic director and conductor have been appointed for each working phase. Johannes Klumpp is the creative advisor.
## Working phase (schedule)
As the LJO is a project orchestra, no regular weekly rehearsals take place but working phases twice a year, always in the Easter and autumn holidays. In these periods, full-length concert programmes are rehearsed, which contain classical-romantic pieces of the concert literature as well as new music.
Every working phase contains extensive rehearsals of the chosen pieces with section (register) rehearsals which are supervised by top-class tutors as well as tutti-rehearsals with the artistic director of the respective working phase. Every working phase is followed by a concert tour throughout Baden-Württemberg with up to 7 concerts. There are also some special projects with shorter rehearsal times and fewer concerts.
## Audition
Auditions usually take place once a year in the Stuttgart area, giving musicians the opportunity to showcase their talent and skills. The audition panel comprises the creative advisors and tutors of the respective instrument. Audition dates are announced on the official website of the Youth Orchestra of Baden-Wuerttemberg. Once the dates have been set and made public, musicians interested in participating can register and choose the most convenient date from the list of possible audition dates. The orchestral audition comprises a ten-minute performance. The applicant is expected to prepare and play orchestral excerpts and a fast and a slow movement of their own choice, usually covering two different periods of classical music. If the participant passes the audition, they become part of the orchestra pool and may be selected for upcoming rehearsals and concerts.
## International trips
- Italy (1983)
- France (1989)
- Spain (1991)
- United Kingdom (1994)
- Russia (1995)
- Poland (2002)
- Egypt (2006)
- Madagascar (2008)
- Ecuador (2009)
### Madagascar
On 15 May 2008, Germany and Madagascar celebrated the 125th anniversary of the German–Madagascan Treaty of Friendship. As part of "Aktion Afrika" ("Action Africa") and in cooperation with the German Embassy in Antananarivo, the Foreign Office planned the official festivities on the fourth biggest island in the world. Having received an invitation to provide the accompanying cultural programme between 9 and 19 May, the LJO sent a 19-member brass ensemble.
### Ecuador
From September 1 to 18 2009, the Youth Orchestra of Baden-Wuerttemberg, boasting an ensemble of 45 musicians, performed a total of fifteen concerts to celebrate the 200th anniversary of Ecuadorian independence, all under the leadership of artistic director Christoph Wyneken. The orchestra also hosted a number of workshops and meet-and-greet events with Ecuadorian school children. After meeting with its fellow orchestra "Orquestra Sinfónica Juvenil de Guayaquil” in Guayaquil, the Youth Orchestra of Baden-Wuerttemberg performed a joint concert alongside its Ecuadorian counterpart, playing the first movement of Ludwig van Beethoven’s Symphony No. 5.
## Financing
The Youth Orchestra of Baden-Wuerttemberg has numerous supporters and sponsors including the federal state of Baden-Wuerttemberg, the association of savings banks of Baden-Wuerttemberg, the regional association of music schools of Baden-Wuerttemberg and the Stuttgart State Orchestra.
Concert revenue constitutes another major source of income. The sponsorship association and the Foundation of the Youth Orchestra of Baden-Wuerttemberg provide additional financial backing, as well as the revenue generated by the sales of CDs, DVDs and programmes.
## Awards and nominations
### Awards
- European Prize of the Youth Orchestra of the foundation "Pro Europa" ("Pro Europe"), on 2 November 2008
- "Leonberger Jugendmusikpreis" (prize of the city of Leonberg, near Stuttgart, for young musicians), 27 times in a row, most recently in 2010
### Nominations
"Kulturmarken-Award" ("Cultural Mark Award") 2008 (presented by KulturSPIEGEL) – one of the "trend marks in 2008" (as first youth orchestra ever). The Kulturmarken-Award is promoted by Škoda Germany, Apollinaris (mineral water company), Bionade and the Deutschen Bahn AG. "Kulturmarken-Award" 2015 – "European Education Programme of 2015" with the special programme Apollo 18 – Musiktheater im Jugendknast ("Apollo 18 – music theatre in the juvenile prison")
## Productions
The LJO produced several CDs and DVDs, some of which include the following:
## Guest conductors and soloists
The LJO has worked together with many reputable artists. A selection:
### Guest conductors/artistic directors
- Thomas Ungar
- Till Drömann
- Patrick Strub
- Wolf-Dieter Hauschild
- Nicolas Pasquet
- David Afkham
- Hannes Krämer (Autumn 2014)
- Hermann Bäumer (Spring 2015)
- Anna-Sophie Brüning (Autumn 2015 – Apollo 18!)
- Johannes Klumpp (Autumn 2015)
- David Philip Hefti (Spring 2016)
- Peter Tilling (Autumn 2016)
### Soloists
- Aaron Rosand, Alexander Sitkowetski, Alexander Zeiher, Koh Gabriel Kameda, Ulrike Anima Mathé, Maria-Elisabeth Lott, Lukas Stepp, Elena Graf, Oscar Bohórquez, Kathrin Scheungraber (violin)
- Tabea Zimmermann, Boris Faust, Hanna Breuer (viola)
- Christoph Henkel, Claudio Bohórquez, Emanuel Graf (cello)
- Kersten McCall (flute)
- Markus Frank (horn)
- Wolfgang Bauer, Reinhold Friedrich (trumpet)
- Xiao Xiao Zhu, Moye Kolodin, Alexej Gorlatch (piano)
- Jakob Spahn (cello)
- Esther Hoppe (violin)
Mooney Falls is the third main waterfall in the canyon. It is named after D. W. "James" Mooney, a miner, who in 1882 (according to his companions) decided to mine the area near Havasu Falls for minerals. The group then decided to try Mooney Falls. One of his companions was injured, so James Mooney decided to try and climb up the falls with his companion tied to his back, and subsequently fell to his death. The Falls are located 2.25 miles (3.6 km) from Supai, just past the campgrounds. The trail leads to the top of the falls, where there is a lookout/photograph area that overlooks the 210-foot (64 m) canyon wall that the waterfall cascades over. In order to gain access to the bottom of the falls and its pool, a very rugged and dangerous descent is required. Extreme care and discretion for the following portion is required; it is highly exposed and should not be attempted when the weather and/or conditions are not suitable.
The trail down is located on the left side (looking downstream), up against the canyon wall. The first half of the trail is only moderately difficult until the entrance of a small passageway/cave is reached. At this point the trail becomes very difficult and very precarious. The small passageway is large enough for the average human, and leads to a small opening in which another passageway is entered. At the end of the second passageway the trail becomes a semi-vertical rock climb. At this point it is advisable to turn your body around like you are descending a ladder. There are strategically placed chains, handholds, and ladders to aid in the climb. Take extreme caution and do not rush.
More than likely the rock will become slippery due to the mist from the falls, and there will probably be people heading up. Always let the person who is the most exposed to pass. The pool is the largest of the three, and along with the others there are some places for cliff-jumping (please use extreme caution). It is possible to swim to the left of the falls to the rock wall, and then shimmy your way across the rock (while staying in the water) to a small cave that is located just above the water line, approximately 15-20 feet (5 to 6 meters) away from the falls (only attempt if you are a strong swimmer). There is an island located in the middle, which breaks the pool into two streams. | http://www.ohranger.com/grand-canyon/poi/mooney-falls |
Fix compiler warning on msys2: enchmorc.c:35:31: error: unknown escape sequence: '\o' The absolute path returned by current_source_dir() returns backward slashes, which don't work well when used as a C string constant. join_paths() will make it all forward-slashes.
- 04 Jul, 2020 1 commit
Simplifies code and removes build path fragments from generated file (paths may be different if generated in a gst-build setup). There shouldn't be any portability issues with this. Meson has been using this in its generated config.h for years. Part-of: <!45>
- 02 Jul, 2020 3 commits
If the size of the JIT code is 0, there's no code and the *mem is uninitialized. This can happen when orcc.exe is used to generate backup C code. Part-of: <!44>
This is actually more useful because the constants are all bitfields and it's fairly straightforward to look it up: https://docs.microsoft.com/en-us/windows/win32/memory/memory-protection-constants Part-of: <!44>
Only whitespace changes. Part-of: <!44>
- 01 Jul, 2020 3 commits
https://gitlab.freedesktop.org/nirbheek/orc/-/pipelines/169274 Update to latest image and use the gstreamer runner tag. Matches the gst-ci template. Also try to make MSYS2 CI more resilient by following: https://github.com/msys2/setup-msys2/blob/master/main.js#L98 Part-of: <!42>
Part-of: <!42>
VirtualAlloc is VirtualAllocFromApp when targeting UWP, and you can only allocate executable pages if you have the codeGeneration capability set in the app manifest. Check for that capability in _orc_compiler_init() and switch to backup code or emulation if it isn't available instead of crashing when VirtualAllocFromApp returns NULL. Also you cannot allocate pages that are both READWRITE and EXECUTE, so we allocate as read-write first, then set the memory as execute-only after the code has been compiled and copied over. Part-of: <!42>
- 30 Jun, 2020 1 commit
On Windows, getenv() is deprecated and does not work in all cases. On the Universal Windows Platform (UWP) it always returns NULL. Add a wrapper orc_getenv() that calls GetEnvironmentVariable on Windows. Also change semantics to always make a copy before returning. Part-of: <!42>
- 13 Mar, 2020 1 commit
Pass bool_yn kwarg to summary() to make it print boolean arguments as nice coloured YES/NO instead of true/false. We can also pass multiple arguments like a bool and a disabled_reason string. In meson 0.54 these can be printed on one line if we set the line_sep kwarg. In meson 0.53 these will always be printed on two lines (and it will warn about the line_sep arg), so only pass two args if docs are disabled and otherwise just pass one arg, so we don't end up with an ugly empty line with meson 0.53.
- 08 Feb, 2020 2 commits
- 02 Nov, 2019 1 commit
- 14 Oct, 2019 1 commit
Some of the instructions are not NEON (which always uses FTZ), but are actually VFP, which requires enabling FTZ mode.
- 20 Sep, 2019 1 commit
- 14 Sep, 2019 4 commits
The VSX vector instructions don't support automatically treating source denormalized FP numbers as 0 or converting the denormalized results to zero.
Even with the PowerPC copy improvements we still exceed the 30s time limit. Power8 has an 8MB L3 cache, resulting in a total copy of ~9GB. Before PowerPC copy: ~48s After PowerPC copy: ~38s now: ~18s
Provides ~20x speedup for fully aligned buffers, although still slower than the builtin memcpy.
- 13 Sep, 2019 5 commits
Some of the constants are used by the invariants so add them to the precheck and then load constants first.
Two emit helpers had their parameters in the wrong location in the generated opcode. Hasn't mattered because they were using the same register for source & dest. Constant flags are now at the end, so the label should be marked forward, not backward. | https://gitlab.freedesktop.org/gstreamer/orc/-/commits/ac10d5abd11d56783e7612ad64bdcf224302ce18 |
"
"
Supercharge Your Restaurant Marketing
Written by Peter Smalls on January 10, 2020
A Startling Fact about Sustainability Science Uncovered Though this is quite a competitive industry and difficult to make it, it is totally worthwhile to experience the rigor just to be part of the newest technology, engineering and innovation. And scientists want to work with companies to learn how to translate ecosystem science into metrics and […]
A Startling Fact about Biology Cellular Respiration Uncovered The Basics of Biology Cellular Respiration The point is we all must eat. The very first half is called the energy requiring steps. In order in order to use a respirometer, you will have to use the perfect gas law, which describes the connection between temperature, pressure […]
Inside this still image the M-mode captures the movement of a specific region of the heart. Your mind does not really think, it’s far from being smart, it’s more like a big beast with a rather limited capability for reason. However, the downside of this course of action is it makes our eyes online essay […]
Written by Peter Smalls on January 09, 2020
What You Need to Do About Potential Difference Physics Beginning in the Next Three Minutes Generators produce a comparatively low voltage. The equation can thus be employed to figure the most voltage. It is essential that you know about electricity. Because electric potential distinction is expressed in units of volts, it is occasionally known as […]
Rubbing hands together is among the illustration of friction. Whatever flaws there were in these particular predictions, the approach to attempting to understand the unknown proved to be a worthy one. This is an excellent hands-on class that is going to keep you busy from the second you get there to the second you leave. […]
Written by Peter Smalls on December 27, 2019
You need to attempt to produce your essay a whole lot easier to remember, and also the ideal method to do it would be to compose a narrative as a debut. When you obtain your completed essay, be certain you tell all your friends what a fantastic service it is and what’s the perfect place […]
Written by Peter Smalls on December 21, 2019
Where to Find Theories Used in Nursing Students have to learn more concerning the circumstance and suggest the ideal solution of the scenario. Writing are likely as a way to work with you to become in contact with the silence within yourself. Taking career decisions in the correct way is essential for all of us. […]
Written by Peter Smalls on December 16, 2019
The Unexpected Truth About Hypothesis Examples Biology Today, there are lots of goods available online that could get the job done as great as a hair loss treatment to stimulate hair growth and put an end to hair loss. Although this procedure is common as https://papernow.org/buy-speech a consequence of aging in middle-aged women and men, […]
Cookie Chemistry – Is it a Scam? I used that number to figure the typical error of the mean, that is the possible assortment of all of the widths in all prospective batches of cookies. And to guarantee maximum fairness in the peer-review procedure, we wouldn’t understand who had baked which cake. I am, though, […]
There are a multitude of assumptions about the workings of the experimental apparatus that you need to accept so as to conclude that the experiment indicates the effect you’re looking for. Within this description of reality, where space is similar to a woven blanket rather than a http://www.samedayessay.com/ smooth expanse, gravity may also be separated […]
VoucherZ is a digital replacement for traditional printed vouchers, but with the power to do so much more.
This blog is aimed at helping share our knowledge of online marketing for the restaurants industry, having helped 1000's of restaurants around the world.
Sign up to our mailing list below and you will be the first to know when we have a new post. | https://blog.voucherz.co/ |
Under the concept of “Duty of care”, this team offers preventive messaging, testing of suspicious cases and travelers, and care of positive cases. The team comprises medical doctors and nurses.
In that line, the position of nurse is key to offering comprehensive services to affected staff and dependents.
Duties and Responsibilities
1. Clinical Duties:
- Responds to emergency calls and assists Medical Officers.
- Ensures effective liaison between patient and private doctor, paramedics, family members, and colleagues as appropriate, documents case findings.
- Assists the medical officer in the management, organization and coordination of the fever clinic, including the isolation ward, at the duty station.
- Assists the medical officer in the follow-up and tracking of positive cases and close contacts.
- Performs point of care diagnostic and screening tests.
- Performs clinical assessment of patients visiting the walk-in clinic; provides care/advice accordingly or facilitates referral to the UN Medical Officer or to an outside physician, as indicated.
- Ensures preparedness of staff travelling on missions or reassignments, including administration of appropriate vaccine, instructions on malaria prophylaxis and other travel-related ailments.
Assists in providing health education and health promotion programs; participates in work environment assessment with special emphasis on the prophylaxis of the relevant diseases of the outbreak.
2. Medico-administrative duties:
- Assists Head Nurse in all her/his activities as required.
- In the UN Clinics administered by UNDP and the Regional Commissions, acts as Head Nurse.
- Ensures that medical instruments are properly sterilized.
- Ensures proper filing of medical and other records.
Evaluates, orders and maintains an efficient inventory and stock control of medications/vaccines and other medical supplies and equipment.
3. Supervisory Duties:
Assumes all supervisory and medico-administrative duties of the Head Nurse in her absence. In the case of the Regional Commissions and the UN Clinics administered by UNDP, acts as Head Nurse and supervises the work of other nurses.
4. General:
Performs other related duties as required.
Competencies
Core
- Innovation
- Leadership
- People Management
- Communication
- Delivery
Technical/Functional
- Relationship Management
- Universal Health Coverage Monitoring
- Operational Efficiency
- Internal Reporting and Accountability
- Mentoring and capacity building
- Knowledge Management
Required Skills and Experience
Education:
Registered Professional Nurse with a high school diploma who is a graduate of either an accredited baccalaureate nursing programme or an accredited diploma programme (3 years). National registration and a license are required.
Experience:
Five years of experience in nursing or a related area. UN field experience is highly desirable. Demonstrated expertise in the care of the mechanically ventilated patient, including the set-up and maintenance of circuits and ventilator machines. Previous experience in humanitarian crises and infectious diseases is highly desirable. Certification as an Intensive Care Nurse, or a minimum of 1 year's experience as an Intensive Care Nurse in the past 5 years. Proven experience in barrier nursing and use of PPE.
Language Requirements:
English and French are required. Knowledge of another official United Nations language is an advantage
All applications must be submitted ONLINE at : https://jobs.partneragencies.net/erecruitjobs.html?JobOpeningId=36143&HRS_JO_PST_SEQ=1&hrs_site_id=2
just got off a discussion with the dealership and they said otherwise. The sales manager said that he would stop the car engine from turning on, however the disabled key can still unlock the car doors???
not sure he was a technical person, however I refused his solution to my missing key problem.
however, if this is possible, I will accept it as a solution to my problem....
thanks once more...
Right.
Without changing the locks, the physical part of the key can turn the locks and open the doors. The ignition cylinder in the steering column has a system called EWS that talks to a chip in the key. If the dealer disables that missing key, if that key is ever found, it will not be able to start the car, but it can still open the doors manually. There is no EWS at the doors, only the ignition.
Not sure how much more we can explain it. Basic lock and key stuff here, mastered millennia ago...
---
author:
- 'Shintaro Mori$^1$[^1], Masato Hisakado$^2$, and Taiki Takahashi$^{3}{}^{,}{}^{4}$'
bibliography:
- '65757.bib'
title: ' Collective Adoption of Max-Min Strategy in an Information Cascade Voting Experiment '
---
\[sec:intro\]Introduction
=========================
Even if each person has limited information, aggregated information can become very accurate [@Smi:1996]. This is the wisdom-of-crowds effect, and it is supported by many examples from political elections, sports predictions, quiz shows, and prediction markets [@Sur:2004; @Pag:2008; @Mil:2011]. However, in order for aggregation to give accurate results, three conditions need to be satisfied: diversity, independence, and decentralization. If these conditions are not satisfied, aggregated information becomes unreliable or worse [@Sur:2004; @Lor:2011]. In an ever-more connected world, it becomes more and more difficult to retain independence. Furthermore, if the actions or choices of others are visible, neglecting them is not realistic in light of the merit of social learning [@Ren:2010; @Ren:2011]. In this case, an information cascade may emerge and information aggregation ceases [@Bik:1992; @And:1997; @Kub:2004; @Goe:2007; @Lee:1993; @Dev:1996; @Wat:2002].
More concretely, we consider a situation where people sequentially answer a two-choice question with choices A and B. The payoff for the correct choice is constant. Before this question is asked, many other people have already answered, and their choices are made known as $C_{A}$ people choosing A and $C_{B}$ people choosing B; this is called social information. If the person answering knows the correct choice, he should choose it. His choice is not affected by social information, and we then call him an independent voter. However, if he does not know the correct choice, he will be affected by social information [@Lat:1981]. He tends to go with the majority, and we then call him a herder. With herding, the wisdom of crowds is at risk. If herders are isolated from one another, their choices split evenly between A and B and cancel out. As a result, the choices of the independent voters remain, and the majority choice always converges to the correct one in the limit of a large number of people. This is known as Condorcet's jury theorem [@Smi:1996]. However, if others' choices are given as social information, the cancellation mechanism does not work. The herder copies the majority and ignores the correct information given by the independent voters. If the proportion of herders $p$ exceeds some threshold value $p_{c}$, there occurs a phase transition from the one-peak phase, where the majority choice always converges to the correct one, to the two-peak phase, where the majority choice converges to the wrong one with a finite and positive probability [@Mor:2012]. We call this phase transition the information cascade transition [@His:2011; @Mor:2012; @His:2012]. This is the risk of imitation in the wisdom of crowds. How can we avoid this risk? There exists a hint in race-track betting markets [@Hau:2008; @Ali:1977] and prediction markets [@Wol:2004; @Man:2006]. In order to aggregate information scattered among people, the market mechanism can be very effective [@Sur:2004; @Pag:2008; @Mil:2011].
We consider a situation in which each choice $\alpha\in \{A,B\}$ has a multiplier $M_{\alpha}$ that is inversely proportional to the number of subjects $C_{\alpha}$ who chose it. The payoff for the correct choice is proportional to the multiplier. If the multiplier of a choice is large, the number of people who chose it is small. If the return were constant, a herder would usually avoid such a choice. Now, however, the return on the correct choice is proportional to the multiplier, and hence we cannot say that the herder avoids it. Copying the majority gives him a small return, even if it is the correct choice. The multiplier plays the role of a “tax” on herding (free riding), and copying the minority can be an attractive choice. The situation is a zero-sum game between the herder answering and all the previous subjects who have set the multipliers. In zero-sum games, the max-min strategy maximizes the guaranteed expected return and is optimal [@Neu:1944]. In the above two-choice quiz, the max-min strategy is the one where a herder chooses $\alpha$ with a probability proportional to $C_{\alpha}$, which cancels the risk in the expected return coming from the multipliers. We call a herder who adopts the optimal max-min strategy an analog herder [@Mor:2010; @His:2010]. If herders behave as analog herders, the convergence to the equilibrium state becomes slow as $p$ increases, and there occurs a phase transition in the convergence speed as $p$ exceeds one half [@His:2010]. However, the information cascade phase transition does not occur for $p<1$. A majority of people always choose the correct option in the limit of a large number of people (thermodynamic limit), and the system is in the one-peak phase for any value of $p$ if the accuracy $q$ of the independent voters' information satisfies $q>1/2$ [@His:2010]. Furthermore, the analog herders' choices do not affect the limit value of the percentage of correct answers, which converges to $q$. As for the two-choice quiz, the independent voter knows the correct choice and $q=1$ holds. In this case, the system of analog herders maximizes the probability of the correct choice for $p<1$ in the thermodynamic limit. Even in the limit $p\to 1$, the system can take the probability to one.
In this paper, we have adopted an experimental approach to study whether herders adopt the max-min strategy and behave as analog herders if the choices have multipliers. We have also studied a herder’s probability of correct choice. The organization of the paper is as follows. We explain the experiment and derive the optimal max-min strategy in section \[sec:exp\]. The subjects answer a two-choice quiz in three cases $r\in \{O,C,M\}$. In case $O$, the subjects answer without social information. In cases $C$ and $M$, they receive social information based on previous subjects’ choices. Social information is given as summary statistics $\{C_{A},C_{B}\}$ in case C and as multipliers $\{M_{A},M_{B}\}$ in case M. Sections \[sec:analysis\] and \[sec:analysis2\] are devoted to the analysis of the experimental data. In section \[sec:analysis\], we summarize data about the macroscopic aspects of the system. In section \[sec:analysis2\], we derive a microscopic rule regarding how herders copy others in each case $r \in \{C,M\}$. In section \[sec:model\], we introduce a stochastic model that simulates the system. We study the transition ratio $p_{c}(r)$ for cases $r\in \{C,M\}$. We estimate the probability of correct choice by the herders in the experiment and compare it with that of the optimal analog herders system. Section \[sec:conclusions\] is devoted to the summary and discussions. In the appendices, we give some supplementary information about the experiment and a simulation study of the convergence exponent. We also prove that only the system of analog herders can take the probability of correct choice to one in limit $p\to 1$.
\[sec:exp\]Experimental setup and optimal strategy in case $M$
==============================================================
Experimental setup
------------------
The experiment reported here was conducted at the Group Experiment Laboratory of the Center for Experimental Research in Social Sciences at Hokkaido University. We conducted two experiments, which we call EXP-I and EXP-II. In EXP-I (II), we recruited 120 (104) students from the university. We divided them into two groups, Group A and Group B, and prepared two sequences of subjects of average length 60 (52). The main motive for dividing the subjects into two groups is to obtain many choice sequences in order to estimate the average values of macroscopic quantities. In addition, we can check the estimation of the herders' ratio $p$ by comparing the values from the two groups for the same question [@Mor:2012].
The subjects sequentially answered a two-choice quiz of 120 questions. Some subjects could not answer all the questions within the allotted time, and so the number $T$ of subjects who answered a particular question varied. We label the questions by $i\in \{1,2,\cdots,120\}$ and denote the length of the sequence of subjects for question $i$ by $T_{i}$. In EXP-I, the subjects answered in the three cases $r \in \{O,C,M\}$ in this order. We denote the answer to question $i$ in case $r$ after $t-1$ subjects' answers by $X(i,t|r)$, which takes the value 1 (0) if the choice is true (false). The order $t$ of the subject in the choice sequence $\{X(i,t|r)\}$ plays the role of time. $\{C_{0}(i,t|r),C_{1}(i,t|r)\}$ are the numbers of subjects who choose the false and true options, respectively, for question $i$ among the prior $t$ subjects, and they are given as $$\begin{aligned}
C_{1}(i,t|r)&=&\sum_{t'=1}^{t}X(i,t'|r), \nonumber \\
C_{0}(i,t|r)&=&t-C_{1}(i,t|r) \nonumber . \end{aligned}$$ In case $O$, the subject answered without any social information. Then, he answered in case $C$. When $t-1$ subjects have already answered question $i$ before him in his group, he received summary statistics $\{C_{0}(i,t-1|C),C_{1}(i,t-1|C)\}$ from all of them. For the correct choice in cases $O$ and $C$, the subject gets two points. Finally, in case $M$, when $t-1$ subjects have already answered question $i$ before him in his group, the subject receives multipliers $\{M_{0}(i,t-1),M_{1}(i,t-1)\}$ from all previous $t-1$ subjects. For the correct choice, the subject gets the points which is given by the multiplier. The multiplier $M_{\alpha}$ for $\alpha \in \{0,1\}$ was calculated based on the summary statistics in case $M$ as $$\begin{aligned}
M_{\alpha}(i,t-1)
&=&\frac{C_{0}(i,t-1|M)+C_{1}(i,t-1|M)+1}{C_{\alpha}(i,t-1|M)+1}
\nonumber \\
&=&
\frac{t}{C_{\alpha}(i,t-1|M)+1} \nonumber .\end{aligned}$$ The multiplier is obtained by dividing the total points of all subjects, $C_{0}+C_{1}+1=t$, among the $C_{\alpha}+1$ subjects who have chosen $\alpha$. This is similar to the payoff odds of the parimutuel system in gambling.
In EXP-II, in addition to the three cases $r\in \{O,C,M\}$, the subjects answered in at most four cases $r\in \{1,5,11,21\}$ between cases $O$ and $C$. In cases $r\in \{1,5,11,21\}$, the subject received summary statistics $\{C_{0}(i,t-1|r),C_{1}(i,t-1|r)\}$ from previous $r$ subjects. $C_{0}(i,t-1|r)+C_{1}(i,t-1|r)=r$ holds and as $r$ increases, the amount of social information increases. In EXP-I, the amount of social information increases rapidly from $r=0$ in case $O$ to $r=t-1$ in case $C$. In EXP-II, $r$ gradually increases. The payoff for the correct choice is 1 in cases $r\in \{O,1,5,11,21,C\}$ and the multiplier in case $M$. Detailed information about EXP-II has been presented in our previous work [@Mor:2012], where we have studied the experimental data for cases $r\in \{O,1,5,11,21,C\}$. In this paper, we concentrate on case $M$ and take case $C$ as the control case.
We repeated the same experiment for both Groups A and B. We obtained $120\times 2$ sequences $\{X(i,t|r)\}$ for each $r \in \{O,C,M\}$. We label the sequence in Group B by $i+120$, so that $i \in \{1,2,\cdots,240\}$. The experimental design is summarized in Table \[tab:design\].
Experiment Group $T$ Cases $\{r\}$ $I$
------------ ------- ----- ----------------------- -----
EXP-I A 57 $\{O,C,M\}$ 120
EXP-I B 63 $\{O,C,M\}$ 120
EXP-II A 52 $\{O,1,5,11,21,C,M\}$ 120
EXP-II B 52 $\{O,1,5,11,21,C,M\}$ 120
: \[tab:design\] Experimental design. $T$ means the number of subjects and $\{r\}$ means the cases where the subjects answered the quiz. $I$ means the number of questions. The length $T_{i}$ of sequence $\{X(i,t|r)\}$ for question $i$ is almost the same as $T$ in EXP-I. In EXP-II, it depends on $i$ and the average value is $50.8$.
\[Max\]Max-Min Strategy in case $M$
-----------------------------------
We derive the optimal strategy for herders in case $M$. A subject can choose $\alpha \in \{A,B\}$. We suppose that he votes one unit for a choice and call him a voter. Here, we consider the case where one vote can be divided by the voter. If a voter believes $A$ is correct, he votes one unit for $A$. If the voter does not know the answer at all, he votes 0.5 unit for $A$ and 0.5 unit for $B$. We assume that a voter thinks the probability that $A$ is correct is $\beta$, and the probability that $B$ is correct is $1- \beta$. The voter divides one unit vote into $x$ for $A$ and $1-x$ for $B$ by his decision making. Expected return $R$ is $$\begin{aligned}
R&=& \beta \cdot M_{A} \cdot x+ (1-\beta) \cdot M_{B} \cdot (1-x)
\nonumber \\
&=&\beta(M_{A} x-M_{B}(1-x)) +M_B(1-x).
\label{b1}\end{aligned}$$
We assume that herders have no information about the correct answer other than the multipliers $\{M_{A},M_{B}\}$. Hence, a herder cannot estimate the probability $\beta$ of a correct answer; he faces Knightian uncertainty, because he has no knowledge with which to answer the question [@Kni:1921]. The situation is a zero-sum game between the herder and the previous voters, as the multipliers are set such that all votes are divided among the voters who have chosen the correct option. The max-min strategy has been proved to be optimal in game theory [@Neu:1944]. The voter minimizes the expected loss due to the uncertainty in the choice. In order to minimize this expected loss, $x$ should be chosen such that $M_{A}\cdot x = M_{B}\cdot (1-x)$ holds, from (\[b1\]). This position is insensitive to $\beta$.
We can calculate $x$ from (\[b1\]), $$x=\frac{M_{B}}{M_{B}+M_{A}}.$$ As multiplier $M_{\alpha}$ is calculated as $$M_{\alpha}=\frac{t+1}{C_{\alpha}+1},$$ ratio $x$ for $A$ is then $$x=\frac{C_A+1}{t+2} \sim \frac{C_A}{t} \hspace*{0.3cm}
\mbox{for}\hspace*{0.3cm}t>>1.$$ $x$ becomes proportional to $C_A$ and it is the voting strategy of analog herders [@His:2010].
The discussion shows that the strategy of analog herders is optimal for a herder, as it maximizes his guaranteed expected return under the uncertainty about $\beta$. In our experiment, a voter cannot divide his vote (choice). Hence, when herders adopt the optimal strategy, their averaged behavior becomes akin to that of analog herders.
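To make the max-min calculation concrete, here is a minimal sketch (our own Python illustration, not part of the experimental software; the function names and the counts are hypothetical) that computes the multipliers from the current counts, forms the max-min split $x=M_{B}/(M_{A}+M_{B})$, and checks numerically that the resulting expected return does not depend on $\beta$:

```python
import numpy as np

def multipliers(c_a: int, c_b: int) -> tuple[float, float]:
    """Parimutuel-style multipliers after t = c_a + c_b prior votes.

    The next voter's unit stake is included, so the pot is t + 1 and
    M_alpha = (t + 1) / (C_alpha + 1), as in the text.
    """
    t = c_a + c_b
    return (t + 1) / (c_a + 1), (t + 1) / (c_b + 1)

def maxmin_split(c_a: int, c_b: int) -> float:
    """Fraction x of the unit vote placed on A under the max-min strategy."""
    m_a, m_b = multipliers(c_a, c_b)
    return m_b / (m_a + m_b)              # equals (C_A + 1) / (t + 2)

c_a, c_b = 30, 10                          # hypothetical counts
m_a, m_b = multipliers(c_a, c_b)
x = maxmin_split(c_a, c_b)

# Expected return R(beta) = beta*M_A*x + (1 - beta)*M_B*(1 - x) is flat in beta.
for beta in (0.0, 0.3, 0.7, 1.0):
    r = beta * m_a * x + (1 - beta) * m_b * (1 - x)
    print(f"beta = {beta:.1f}  R = {r:.4f}")
print(f"x = {x:.4f}, close to C_A/t = {c_a / (c_a + c_b):.4f}")
```

The split reproduces the analog herder probability $x=(C_{A}+1)/(t+2)\sim C_{A}/t$ derived above.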
We make one comment about the optimal strategy for the independent voter. When $\beta=1$, the voter believes his information and chooses what he believes to be true. When $\beta<1$, it is not optimal to do so in general. The expected return $R$ in (\[b1\]) is $$R= (\beta \cdot M_{A}-(1-\beta)\cdot M_{B})x+M_{B}(1-\beta).
\label{B1}$$ By maximizing $R$, we obtain $x$ as $$x=\theta(\beta\cdot M_{A}-(1-\beta)\cdot M_{B}).$$ Here, $\theta$ is a Heaviside (step) function. If $\beta\cdot M_{A}>(1-\beta)\cdot M_{B}$, he chooses $A$ and vice versa. He behaves as an “arbitrager” for $\beta<1$. It is the risk-neutral strategy that has been discussed in the context of racetrack betting markets and prediction markets [@Ali:1977; @Man:2006].
\[sec:analysis\]Data analysis : Macroscopic Aspects
===================================================
We obtained 240 sequences $\{X(i,t|r)\},t\in\{1,2,\cdots,T_{i}\}$ for questions $i \in \{1,\cdots,240\}$ and cases $r\in\{O,C,M\}$ in each experiment. The data $\{X(i,t|r)\}$ for both experiments are downloadable at http://arxiv.org/abs/1211.3193. The percentage of correct answers in sequence $\{X(i,t|r)\}$ for question $i$ in case $r$ is defined as $Z(i|r)=\sum_{s=1}^{T_{i}}X(i,s|r)/T_{i}$. In the analysis, the subjects are classified into two categories, independent voters and herders, for each question. We assume that the probability $q$ of a correct choice is 100% for independent voters and 50% for herders [@Mor:2012]. For a group in which a fraction $p(i)$ of the subjects are herders and $1-p(i)$ are independent voters, the expectation value of $Z(i|O)$ is $1-p(i)/2$. The maximum likelihood estimate of $p(i)$ is therefore given as $p(i)=2(1-Z(i|O))$.
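As a minimal illustration of this estimate (our own sketch; the variable names and the random sequence below are hypothetical placeholders for the downloadable data), inverting $E[Z(i|O)]=1-p(i)/2$ gives:

```python
import numpy as np

def herder_ratio(x_case_o: np.ndarray) -> float:
    """Maximum likelihood estimate p(i) = 2(1 - Z(i|O)).

    x_case_o is the 0/1 answer sequence of one question in case O
    (1 = correct).  Independent voters are correct with probability 1
    and herders with probability 1/2, so E[Z(i|O)] = 1 - p(i)/2.
    """
    z = float(np.mean(x_case_o))
    return float(np.clip(2.0 * (1.0 - z), 0.0, 1.0))

# Hypothetical sequence of 60 answers with ~70% correct -> p close to 0.6.
rng = np.random.default_rng(0)
sequence = (rng.random(60) < 0.7).astype(int)
print(herder_ratio(sequence))
```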
Distribution of $Z(i|r)$
------------------------
No. $Z(i|r)[\%]$ $N(\mbox{No.}|O)$ $N(\mbox{No.}|C)$ $N(\mbox{No.}|M)$ $|I(\mbox{No.})|$ $p_{avg}(\mbox{No.})[\%]$ $Z(i|C)<1/2$ $Z(i|M)<1/2$
------- -------------- ------------------- ------------------- ------------------- ------------------- --------------------------- -------------- --------------
1 $<5$ 0 5 0 NA NA NA NA
2 $5\sim 15$ 3 33 7 NA NA NA NA
3 $15\sim 25$ 5 28 25 NA NA NA NA
4 $25\sim 35$ 18 9 30 NA NA NA NA
5 $35\sim 45$ 35 5 13 NA NA NA NA
6 $45\sim 55$ 38 5 13 38 97.5 18/38 17/38
7 $55\sim 65$ 57 5 14 52 78.3 7/52 5/52
8 $65\sim 75$ 29 7 19 26 60.3 0/26 0/26
9 $75\sim 85$ 41 17 44 38 40.6 0/38 0/38
10 $85\sim 95$ 11 57 62 11 21.3 0/11 0/11
11 $\ge 95$ 3 69 13 2 5.1 0/2 0/2
Total 240 240 240 167 66.8% 25/167 22/167
No. $Z(i|r)[\%]$ $N(\mbox{No.}|O)$ $N(\mbox{No.}|C)$ $N(\mbox{No.}|M)$ $|I(\mbox{No.})|$ $p_{avg}(\mbox{No.})[\%]$ $Z(i|C)<1/2$ $Z(i|M)<1/2$
------- -------------- ------------------- ------------------- ------------------- ------------------- --------------------------- -------------- --------------
1 $<5$ 0 2 0 NA NA NA NA
2 $5\sim 15$ 0 18 6 NA NA NA NA
3 $15\sim 25$ 8 22 18 NA NA NA NA
4 $25\sim 35$ 16 20 23 NA NA NA NA
5 $35\sim 45$ 36 8 16 NA NA NA NA
6 $45\sim 55$ 43 9 19 43 96.7 16/43 15/43
7 $55\sim 65$ 46 10 16 45 79.3 8/45 3/45
8 $65\sim 75$ 45 14 26 45 62.7 2/45 0/45
9 $75\sim 85$ 33 33 56 33 41.9 0/33 0/33
10 $85\sim 95$ 11 67 54 11 21.3 0/11 0/11
11 $\ge 95$ 2 37 6 0 NA NA NA
Total 240 240 240 177 68.7% 26/177 18/177
There are $240$ samples of sequences of choices for each $r$. We divide these samples into 11 bins according to the size of $Z(i|r)$, as shown in Table \[tab:table\]. The number of data samples in each bin for cases $r \in \{O,C,M\}$ is given in the second, third, and fourth columns as $N(\mbox{No.}|r)$. Social information causes remarkable changes in the subjects' choices. For case $O$, there is one peak at No. 7, and for case $C$ ($M$), there are peaks at No. 2 (4) and No. 11 (10) in EXP-I. The samples in each bin of case $O$ share almost the same value of $p$. For example, in the samples of the No. 6 bin ($0.45<Z(i|O)\le 0.55$), there are almost only herders in the subjects' sequence and $p(i) \simeq 100\%$. In contrast, in the samples of the No. 11 bin ($Z(i|O)>0.95$), almost all subjects know the answer to the question and are independent $(p(i) \simeq 0\%)$. An extremely small value of $Z(i|O)$ indicates some bias in the question, and we omit the samples that satisfy $Z(i|O)<0.45$. In addition, the minimum value of $Z(i|r)$ should be $1-p(i)$. If $Z(i|r)<1-p(i)$, the estimation of $p(i)$ for the sequence $\{X(i,t|r)\}$ fails; the true value of $p(i)$ should be larger than the estimated value. We cannot give an appropriate estimate of $p(i)$ for such a choice sequence, and we omit the samples that satisfy $Z(i|C)<1-p(i)$ or $Z(i|M)<1-p(i)$. These procedures leave us with 167 (177) samples in EXP-I (II), and we denote the set by $I'$. $I(\mbox{No.})$ denotes the set of samples in each bin in case $O$ among $I'$.
We comment on the above data elimination procedure. The main purpose of the experiment is to clarify how herders copy others' choices. For this purpose, it is necessary to ensure that herders choose each option with equal probability in case $O$, i.e., that the herders' $q$ is 50%. This is the precondition of the experiment. We assume $q=0.5$ and derive the above three conditions that $Z(i|r)$ should satisfy. If $Z(i|r)$ contradicts at least one of the conditions, there is some bias in the options. The data for question $i$ then do not meet the precondition, and we discard them in the analysis of the experimental data. Although the elimination procedure cannot guarantee the precondition with absolute certainty, it is indispensable.
We calculate the average value of $p(i)$ for the samples in $I(\mbox{No.})$. We denote it as $p_{avg}(\mbox{No.})$ and estimate it as $$p_{avg}(\mbox{No.})=\frac{1}{|I(\mbox{No.})|}\sum_{i\in I(\mbox{\tiny{No.}})}p(i).$$ Here, $|I(\mbox{No.})|$ in the denominator is the number of samples in $I(\mbox{No.})$, which is given in the sixth column of the table. In the last two columns, we show the ratio of samples with $Z(i|r)<1/2$ for $r\in \{C,M\}$ among the samples in $I(\mbox{No.})$. In both cases, as $p_{avg}$ increases, the ratio increases rapidly to about half.
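The elimination conditions and the binning just described can be sketched as follows (our own illustrative code; the ratios fed in below are random placeholders, and for the actual set $I'$ the condition $Z(i|r)\ge 1-p(i)$ is imposed for both $r=C$ and $r=M$):

```python
import numpy as np

def bin_and_average(z_o: np.ndarray, z_r: np.ndarray) -> None:
    """Apply the elimination conditions and bin the samples by Z(i|O).

    z_o and z_r hold Z(i|O) and Z(i|r) for one case r.  The conditions
    are Z(i|O) >= 0.45 and Z(i|r) >= 1 - p(i), with p(i) = 2(1 - Z(i|O)).
    Bins Nos. 6-11 follow Table [tab:table].
    """
    p = 2.0 * (1.0 - z_o)
    keep = (z_o >= 0.45) & (z_r >= 1.0 - p)
    edges = [0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.01]
    for k in range(6):
        in_bin = keep & (z_o >= edges[k]) & (z_o < edges[k + 1])
        if in_bin.any():
            print(f"bin No.{k + 6}: n = {in_bin.sum():3d}, "
                  f"p_avg = {p[in_bin].mean():.2f}, "
                  f"Z(i|r) < 1/2: {(z_r[in_bin] < 0.5).mean():.2f}")

# Hypothetical placeholder ratios for 240 questions.
rng = np.random.default_rng(1)
z_o = rng.uniform(0.3, 1.0, 240)
z_c = np.clip(z_o + rng.normal(0.0, 0.2, 240), 0.0, 1.0)
bin_and_average(z_o, z_c)
```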
![\[fig:scatter\_Z\] Scatter plots of $Z(i|O)$ vs. $Z(i|r)$ for (A) Case $C$ and (B) Case $M$. The vertical lines show the borders of the bins in Table \[tab:table\]. The rising diagonal line from $(0.5,0)$ to the top right shows the boundary condition $Z(i|r)=1-p$.](65757Fig1a.eps "fig:"){width="7cm"} ![](65757Fig1b.eps "fig:"){width="7cm"}
In order to see the social influence more pictorially, we show the scatter plots of $Z(i|O)$ vs. $Z(i|r),r \in \{C,M\}$ of EXP-I in Fig. \[fig:scatter\_Z\]. The $x$-axis shows $Z(i|O)$ and the $y$-axis shows $Z(i|r)$. The vertical lines show the boundaries between the bins (from No. 1 to No. 11) for case $O$ in Table \[tab:table\]. The rising diagonal line from $(0.5,0)$ to the top right shows the boundary condition $Z(i|r)=1-p$. If subjects' answers were not affected by social information, the data would lie on the diagonal line from $(0,0)$ to the top right. As the plots clearly indicate, the samples scatter more widely in the plane in case $C$ than in case $M$, which means that the social influence is bigger in case $C$. For the samples with $Z(i|O)\ge 0.65$ in case $O$ (Nos. 8, 9, 10, and 11 bins in Table \[tab:table\]), the changes, $Z(i|C)-Z(i|O)$, are almost all positive and $Z(i|C)$ takes a value of about 1 in case $C$. In case $M$, the changes, $Z(i|M)-Z(i|O)$, are also almost all positive and $Z(i|M)$ takes a value of about 0.9. The average probability of choosing the correct option improves with social information for the samples in both cases. In contrast, for the samples with $0.45 \le Z(i|O)<0.65$ (Nos. 6 and 7 bins in Table \[tab:table\]), social information does not necessarily improve average performance. There are many samples with $Z(i|r)-Z(i|O)<0$ in both cases. These samples constitute the lower peak in Table \[tab:table\].
Asymptotic behavior of the convergence
--------------------------------------
We have seen drastic changes in the distribution of $Z(i|r)$ from the distribution of $Z(i|O)$. Table \[tab:table\] and Figure \[fig:scatter\_Z\] show the two-peak structure in the distribution of $Z(i|r)$. In our previous work on the information cascade phase transition [@Mor:2012], we have studied the time dependence of the convergence behavior of the sequences $\{X(i,t|r)\}$.
![\[fig:herd\_macro\] Convergent behavior, given by the double logarithmic plot of $\mbox{Var}(Z(i,t|r))_{\mbox{No.}}$ vs. $t$ using the samples in four bins (Nos. 6 $(\circ)$, 7 $(\triangle)$, 8 $(\diamond)$, and 9 $(\times)$ in Table \[tab:table\]) for (A) Case $C$ in EXP-I, (B) Case $M$ in EXP-I, (C) Case $C$ in EXP-II, and (D) Case $M$ in EXP-II. The dotted lines are fitted results with $\propto t^{-\gamma}$ for $t\ge 10 (20)$ in EXP-I (II).](65757Fig2a.eps "fig:"){width="7cm"} ![](65757Fig2b.eps "fig:"){width="7cm"} ![](65757Fig2c.eps "fig:"){width="7cm"} ![](65757Fig2d.eps "fig:"){width="7cm"}
We denote the ratio of correct answers, $\frac{C_{1}(i,t|r)}{t}$, as $$Z(i,t|r)\equiv \frac{C_{1}(i,t|r)}{t}=\frac{1}{t}\sum_{s=1}^{t}X(i,s|r).$$ $Z(i,T_{i}|r)=Z(i|r)$ holds by definition. By studying the asymptotic behavior of the convergence of sequence $\{Z(i,t|r)\}$ for the samples in $I(\mbox{No.})$, one can clarify the possibility of the information cascade transition by varying $p$. The variance of $Z(i,t|r)$ for the samples in $I(\mbox{No.})$ is defined as $$\begin{aligned}
&&\mbox{Var}(Z(i,t|r))_{\mbox{No.}}\nonumber \\
&=&\frac{1}{|I(\mbox{No.})|}\sum_{i \in
I(\mbox{\tiny{No.}})}(Z(i,t|r)-<Z(i,t|r)>_{\mbox{No.}})^{2} \nonumber \\
&&<Z(i,t|r)>_{\mbox{No.}}=\frac{1}{|I(\mbox{No.})|}\sum_{i \in
I(\mbox{\tiny{No.}})}Z(i,t|r) \nonumber . \end{aligned}$$ Here, we denote the average value of $Z(i,t|r)$ over the samples in $I(\mbox{No.})$ by $<Z(i,t|r)>_{\mbox{No.}}$. In the one-peak phase, the variance of $Z(i,t|r)$ for samples with the same $p$ converges to zero in the thermodynamic limit $t\to \infty$. In the analysis of the experimental data, the values of $p$ have some variance among the samples in each bin, and $\mbox{Var}(Z(i,t|r))_{\mbox{No.}}$ takes small values in the limit. Depending on the convergence behavior, the one-peak phase is classified into two phases [@His:2012]. If $\mbox{Var}(Z(i,t|r))_{\mbox{No.}}$ shows normal diffusive behavior, $\mbox{Var}(Z(i,t|r))_{\mbox{No.}}\propto t^{-1}$, it is called the normal diffusion phase. We note that the variance is estimated for the ratio, $C_{1}(i,t|r)/t$, and the usual behavior $t^{1}$ for the sum of $t$ random variables is replaced by $\propto t/t^{2}=t^{-1}$. If convergence is slow and $\mbox{Var}(Z(i,t|r))_{\mbox{No.}}\propto t^{-\gamma}$ with $0<\gamma<1$, it is called the super diffusion phase [@Hod:2004]. In the two-peak phase, $\mbox{Var}(Z(i,t|r))_{\mbox{No.}}$ converges to some finite value in the limit $t\to\infty$ [@His:2011].
Figure \[fig:herd\_macro\] shows the double logarithmic plots of $\mbox{Var}(Z(i,t|r))_{\mbox{No.}}$ as a function of $t$. We see that convergence becomes very slow as $p_{avg}(\mbox{No.})$ increases in general. The convergence exponent $\gamma$ is estimated by fitting with $\propto t^{-\gamma}$ for $t\ge 10$ in EXP-I. It decreases almost monotonically from about 1 to $-0.02$ $(0.14)$ with an increase in $p_{avg}$ in case $C$ ($M$). Taking into account the estimate error of the exponent given in Appendix E,
the $\gamma$ values are almost 1 for the samples in $I(9)$ and $I(8)$, and the system is in the normal diffusion (one-peak) phase in both cases $r \in \{C,M\}$. For the samples in $I(7)$, the $\gamma$ values are apparently smaller than 1 and the system might be in the super diffusion phase. For the samples in $I(6)$, $\gamma$ becomes negative $(\gamma=-0.02)$ in case $C$. This suggests that the system is in the two-peak phase for the samples in $I(6)$ [@Mor:2012]. In case $M$, $\gamma$ is positive even for the samples in $I(6)$ and the system might be in the super diffusion phase. However, this result does not necessarily rule out the existence of the two-peak phase, taking into account the variance of $p(i)$ and the estimate error of $\gamma$ from the limited sample size. We can only say that if the two-peak phase exists, the threshold value $p_{c}$ in case $M$ is considerably larger than that in case $C$.
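The variance estimate and the power-law fit for $\gamma$ can be reproduced along the following lines (an illustrative sketch we add here, assuming the 0/1 choice sequences of one bin have been stacked into an array `X` of shape (samples, $T$)):

```python
import numpy as np

def variance_of_ratio(X: np.ndarray) -> np.ndarray:
    """Var(Z(i,t|r))_No. for t = 1..T, where X[i, s] is 0 or 1.

    Z(i,t) is the running fraction of correct answers of sequence i
    after t subjects; the variance is taken across the sequences.
    """
    T = X.shape[1]
    t = np.arange(1, T + 1)
    Z = np.cumsum(X, axis=1) / t           # shape (samples, T)
    return Z.var(axis=0)

def convergence_exponent(var_z: np.ndarray, t_min: int = 10) -> float:
    """Fit Var(Z) ~ t^(-gamma) by least squares on the log-log plot."""
    t = np.arange(1, len(var_z) + 1)
    mask = (t >= t_min) & (var_z > 0)
    slope, _ = np.polyfit(np.log(t[mask]), np.log(var_z[mask]), 1)
    return -slope

# Hypothetical bin: 40 sequences of 60 independent fair coin flips,
# which should show normal diffusion, i.e. gamma close to 1.
rng = np.random.default_rng(2)
X = (rng.random((40, 60)) < 0.5).astype(int)
print(f"gamma ~ {convergence_exponent(variance_of_ratio(X)):.2f}")
```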
\[sec:analysis2\]Data analysis: Microscopic Aspects
===================================================
In this section, we study the microscopic aspects of the herders. We clarify how they copy others’ choices and derive a microscopic rule in each case $r\in \{C,M\}$. In particular, we study whether they behave as analog herders in case $M$.
How do herders copy others?
---------------------------
We determine how a herder's decision depends on social information. For this purpose, we need to subtract the independent subjects' contribution from $X(i,t+1|r)$. The probability of being independent is $1-p(i)$, and such a subject always chooses 1. A herder's contribution is estimated as $$( X(i,t+1|r)-(1-p(i))) /p(i).$$ How the herder's decision depends on $C_{1}(i,t|r)=n_{1}$ is estimated by the expectation value of $(X(i,t+1|r)-(1-p(i)))/p(i)$ under this condition. This expectation value is the probability that a herder chooses an option when $n_{1}$ of the prior $t$ subjects have chosen that option. We denote it by $q_{h}(t,n_{1}|r)$, and estimate it as $$q_{h}(t,n_{1}|r)
=\frac{\sum_{i \in
I'}\left[\frac{X(i,t+1|r)-(1-p(i))}{p(i)}\right]\delta_{C_{1}(i,t|r),n_{1}}}{\sum_{i\in
I'}
\delta_{C_{1}(i,t|r),n_{1}}} \label{eq:q_h}.$$ Here, $\delta_{i,j}$ is 1 (0) if $i=j \hspace*{0.2cm}(i\neq j)$ and the denominator is the number of sequences where $C_{1}(i,t|r)=n_{1}$. From the symmetry between $1\leftrightarrow 0$, we assume that $q_{h}(t,n_{1}|r)=1-q_{h}(t,t-n_{1}|r)$. We study the dependence of $q_{h}(t,n_{1}|r)$ on $n_{1}/t$ and round $n_{1}/t$ to the nearest values in $\{k/13 (12) |k\in \{0,1,2,\cdots,13 (12)\}\}$ in EXP-I (II).
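For illustration, a minimal sketch of this estimator is given below. It assumes the answer sequences and the herder ratios are available as plain Python lists; the toy data at the end are invented and only show the mechanics.

```python
# Hedged sketch of the estimator in Eq. (q_h): subtract the independent subjects'
# contribution and average the herder part, binned by the rounded ratio n1/t.
# `choices[i]` is the 0/1 answer sequence for question i and `p[i]` its herder
# ratio; both are illustrative stand-ins for the experimental records.
from collections import defaultdict

def estimate_qh(choices, p, n_bins=13):
    num = defaultdict(float)    # numerator, keyed by rounded n1/t
    den = defaultdict(int)      # number of (i, t) pairs in each bin
    for x, p_i in zip(choices, p):
        c1 = 0
        for t in range(1, len(x)):
            c1 += x[t - 1]                         # C1(i,t): '1' choices among first t subjects
            herder_part = (x[t] - (1.0 - p_i)) / p_i
            key = round(n_bins * c1 / t) / n_bins  # round n1/t to the nearest k/n_bins
            num[key] += herder_part
            den[key] += 1
    return {k: num[k] / den[k] for k in sorted(den)}

# Toy data: two short answer sequences with p = 0.5 for both questions.
qh = estimate_qh([[1, 1, 0, 1, 1], [0, 1, 1, 1, 0]], [0.5, 0.5])
print(qh)
```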
![\[fig:herd\_micro\] Microscopic rule of herder’s decision for (A) Case $C$ and (B) Case $M$. It shows the probability $q_{h}(t,n_{1}|r)$ that a herder chooses an option under the influence of prior $n_{1}$ subjects among $t$ choosing that option in case $r$. The thin dashed line in (A) shows $2(n_{1}/t-1/2)+1/2$. The dotted diagonal line in (B) shows the analog herder model $q_{h}(t,n_{1})=n_{1}/t$. ](65757Fig3a.eps "fig:"){width="7.5cm"}
![\[fig:herd\_micro\] Microscopic rule of herder’s decision for (A) Case $C$ and (B) Case $M$. It shows the probability $q_{h}(t,n_{1}|r)$ that a herder chooses an option under the influence of prior $n_{1}$ subjects among $t$ choosing that option in case $r$. The thin dashed line in (A) shows $2(n_{1}/t-1/2)+1/2$. The dotted diagonal line in (B) shows the analog herder model $q_{h}(t,n_{1})=n_{1}/t$. ](65757Fig3b.eps "fig:"){width="7.5cm"}
Figure \[fig:herd\_micro\] shows the plot of $q_{h}(t,n_{1}|r)$ for (A) case $C$ and (B) case $M$. We can clearly see the strong tendency to copy others in case $C$. As $n_{1}/t$ increases from $1/2$, $q_{h}(t,n_{1}|C)$ increases rapidly, and the slope at $n_{1}/t=1/2$ is about 2.0 in EXP-I. Such nonlinear behavior is known as a quorum response in social science and ethology [@Sum:2009]. The magnitude of the slope measures the strength of the herders’ response. Comparing EXP-I and EXP-II, the herders’ response is sharper in EXP-I than in EXP-II. In EXP-II, where the amount of social information increases gradually, the subjects copy others’ choices more prudently than in EXP-I. If the slope exceeds 1, the system shows the information cascade transition, and the transition ratio $p_{c}$ depends on the slope. In the digital herders case, where $q_{h}(t,n_{1})=\theta(n_{1}-t/2)$ and the slope is infinite, $p_{c}$ takes the value $0.5$ [@His:2011]. As the slope decreases toward 1, $p_{c}$ increases to 1 and the phase transition disappears in that limit [@His:2010].
In contrast to case $C$, the dependence of $q_{h}(t,n_{1}|M)$ on $n_{1}/t$ is weak, and the slope at $n_{1}/t=1/2$ is almost 1 in case $M$. In the range $1/4\le n_{1}/t \le 3/4$, $q_{h}(t,n_{1}|M)$ lies on the diagonal dotted line, so the herders behave almost as analog herders. As the multiplier $m$ is the inverse of $n_{1}/t$ for large $t$, the average herder adopts the optimal max-min strategy in the range $4/3\le m \le 4$. Because the slope at $n_{1}/t=1/2$ is small, if the information cascade phase transition occurs, the transition ratio $p_{c}$ should be larger than in case $C$. One can also see an interesting behavior of the herders. If the minority choice ratio $n_{1}/t$ is smaller than $1/4$ and the multiplier $m$ exceeds 4, some herders nevertheless make that choice. As a result, for $n_{1}/t> 3/4$, $q_{h}(t,n_{1}|M)$ becomes almost constant at about $3/4$. We interpret this as some of the herders preferring a big multiplier (a long shot), so that $q_{h}(t,n_{1})$ saturates at $3/4$.
\[sec:model\]Analysis with stochastic model
===========================================
In this section, we simulate the system with a stochastic model, which we call a voting model. We consider a system with a proportion $p$ of herders and $1-p$ of independent voters. We estimate the transition ratio $p_{c}$ and the herders’ probability of correct choice in the experiment and compare them with those of the analog herders system.
Voting model and thermodynamic limit
------------------------------------
We introduce a stochastic process $\{X(t|p)\}, t\in \{1,2,3,\cdots,T\}$ for $p \in [0,1]$. $X(t+1|p) \in \{0,1\}$ is a Bernoulli random variable. Its probabilistic rule depends on $C_{1}(t)=\sum_{t'=1}^{t}X(t'|p)$ and the herders’ proportion $p$. Given $\{C_{1}(t)=n_{1}\}$, we denote the probability that a herder chooses (copies) the correct option by $q_{h}(t,n_{1})$. As $q_{h}(t,n_{1})$ has the symmetry $q_{h}(t,n_{1})=1-q_{h}(t,t-n_{1})$, it takes the value $1/2$ at $n_{1}/t=1/2$. We assume that $q_{h}(t,n_{1})$ is a smooth and monotonically increasing function of $n_{1}/t$. The probabilistic rule that $X(t+1|p)$ obeys under this condition is $$\begin{aligned}
\mbox{Prob}(X(t+1|p)=1|n_{1})
&=&(1-p)+p\cdot q_{h}(t,n_{1}), \nonumber \\
\mbox{Prob}(X(t+1|p)=0|n_{1})
&=&p\cdot (1-q_{h}(t,n_{1})). \nonumber \end{aligned}$$ We denote the probability that $X(t+1|p)$ takes 1 under the condition by $q(n_{1}/t|p)$ and the probability function Prob($C_{1}(t)=n)$ for $p$ by $P(t,n|p)$. The master equation for $P(t,n|p)$ is $$\begin{aligned}
P(t+1,n|p)
&=&q((n-1)/t|p) \cdot P(t,n-1|p) \nonumber \\
&+& (1-q(n/t|p))\cdot P(t,n|p) \label{eq:master}. \end{aligned}$$ The expected value of $Z(t|p)=\frac{1}{t}C_{1}(t)$ is then estimated as $$\mbox{E}(Z(t|p))=\sum_{n=0}^{t}P(t,n|p)\cdot \frac{n}{t}.$$
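For illustration, the master equation can be iterated directly. The sketch below is not the paper's code; the analog-herder rule is used only as a placeholder for $q_{h}$, and it computes $P(t,n|p)$ and $\mbox{E}(Z(t|p))$ up to a finite $T$.

```python
# Hedged sketch: iterate the master equation for P(t, n | p) and compute E(Z(t|p)).
# q_h here is the analog-herder rule n1/t (0.5 for the first subject); any other
# monotone rule can be plugged in instead.
import numpy as np

def evolve(T, p, q_h=lambda t, n1: 0.5 if t == 0 else n1 / t):
    P = np.zeros(T + 1)
    P[0] = 1.0                                   # at t = 0 no subject has answered yet
    expected_z = []
    for t in range(T):
        q = np.array([(1 - p) + p * q_h(t, n) for n in range(t + 1)])  # q(n/t | p)
        P_next = np.zeros(T + 1)
        P_next[1:t + 2] += q * P[:t + 1]         # a '1' is added: n -> n + 1
        P_next[:t + 1] += (1 - q) * P[:t + 1]    # a '0' is added: n stays
        P = P_next
        expected_z.append(sum(n * P[n] for n in range(t + 2)) / (t + 1))
    return P, expected_z

P, ez = evolve(T=60, p=0.8)
print(ez[-1])   # E(Z(60|0.8)); approaches 1 for analog herders as T grows
```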
We are interested in the limit value of $Z(t|p)$ as $t\to \infty$, which we denote as $z$: $$z\equiv \lim_{t\to \infty}Z(t|p).$$ In the one-peak phase, $Z(t|p)$ always converges to E($Z(t|p)$) in the limit, which we denote as $z_{+}$. In the two-peak phase, in addition to $z_{+}$, $Z(t|p)$ converges with some positive probability to a value smaller than one half, which we denote as $z_{-}$. It is a probabilistic process, and one cannot predict to which fixed point $Z(t|p)$ converges. To determine the threshold value $p_{c}$ between these phases and the limit values $z_{\pm}$, one needs to solve the following self-consistent equation [@His:2012]: $$z=q(z|p)=(1-p)+ p \cdot q_{h}(t,t\cdot z) \label{eq:self}.$$ Given $p$, if there is only one solution, it is $z_{+}$ and the system is in the one-peak phase. The convergence exponent $\gamma$ is obtained by estimating the slope of $q(z|p)$ at $z=z_{+}$ [@His:2012; @Hod:2004]. If there are three solutions, which we denote as $z_{1}<z_{u}<z_{2}$, then $z_{1} \hspace*{0.2cm}(z_{2})$ corresponds to $z_{-} \hspace*{0.2cm}(z_{+})$. The middle solution $z_{u}$ is an unstable state, and $Z(t|p)$ departs from $z_{u}$ as $t$ increases. The method gives rigorous results for $z$ and $\gamma$ when $q(z|p)$ is given as a smooth function of $z$.
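For illustration, the solutions of the self-consistent equation can be located numerically by scanning for sign changes of $q(z|p)-z$ and refining each root by bisection. The tanh-shaped $q_{h}$ below is only an illustrative nonlinear rule, not the experimental one.

```python
# Hedged sketch: find the fixed points of z = q(z|p) = (1-p) + p*q_h(z).
import numpy as np

def fixed_points(p, q_h, grid=10001, tol=1e-10):
    f = lambda z: (1 - p) + p * q_h(z) - z
    zs = np.linspace(0.0, 1.0, grid)
    roots = []
    for a, b in zip(zs[:-1], zs[1:]):
        fa, fb = f(a), f(b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0:                    # sign change: refine by bisection
            while b - a > tol:
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    if f(zs[-1]) == 0.0:
        roots.append(zs[-1])
    return roots

q_h = lambda z: 0.5 * (np.tanh(4.0 * (z - 0.5)) + 1.0)   # illustrative herder rule
print(fixed_points(0.95, q_h))   # three solutions -> two-peak phase at this p
```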
![\[fig:analog\] Schematic view of the self-consistent equation $z=q(z|p)=(1-p)+p\cdot q_{h}(t,t\cdot z)$ for the system of analog herders : $q_{h}(t,t\cdot z)=z$. $(z,q(z|p))$ connects $(0,1-p)$ and $(1,1)$ by a direct line. There is only one stable solution $z_{+}$ at $z=1$ for $p<1$. ](65757Fig4.eps){width="6cm"}
Figure \[fig:analog\] shows the case of analog herders and $q(z|p)=(1-p)+p\cdot q_{h}(t,t\cdot z)$ with $q_{h}(t,t\cdot z)
=z$ [@His:2010]. As one can easily see, for any value of $p<1$ there is only one stable solution $z_{+}$ at $z=1$. The system is in the one-peak phase and $Z(t|p)$ always converges to $z_{+}=1$ for $p<1$. As the independent voters’ probability of correct choice $q$ is 100%, that of the herders, estimated as $(z_{+}-(1-p)\cdot 1)/p$, is also 1. Even in the worst limit $p\to 1$, the system of analog herders can take the probability of correct choice to one.
Transition ratio $p_{c}(r)$ for cases $r\in\{C,M\}$
---------------------------------------------------
EXP. $r$ $p_{c}(r)$ $r$ $p_{c}(r)$
------ ----- ------------ ----- ------------
I $C$ 86.0% $M$ 95.7%
II $C$ 86.5% $M$ 96.7%
: \[tab:pc\] Transition ratio $p_{c}$ of the voting (average herders) model. We determine $p_{c}$ using the condition that the self-consistent equation (\[eq:self\]) has three or more solutions for $p>p_{c}$.
We introduce an average herders model in which $q_{h}(t,t\cdot z)$ is given by linear extrapolation of the values $q_{h}(t,n_{1}|r)$ in equation (\[eq:q\_h\]). In our previous work [@Mor:2012], we modeled the behavior of herders by the following functional form with two parameters $a$ and $\lambda$: $$\frac{1}{2} \left( a\tanh (\lambda(n_{1}/t-1/2))+1 \right).
\label{q_model}$$ However, the result fitted by standard maximum likelihood estimation cannot capture the behavior of the herders in the crucial region $n_{1}/t\sim 1/2$. We therefore adopt the linearly extrapolated $q_{h}(t,n_{1}|r)$ above for $q_{h}(t,t\cdot z)$ and solve the self-consistent equation (\[eq:self\]). We determine $p_{c}(r)$ for cases $r \in \{ C,M \}$ by the condition that the self-consistent equation has three or more solutions. We summarize the results in Table \[tab:pc\]. In case $C$, $p_{c}(C)$ ranges from 86.0% (EXP-I) to 86.5% (EXP-II). In case $M$, $p_{c}(M)$ ranges from 95.7% (EXP-I) to 96.7% (EXP-II). However, these estimates depend on the behavior of $q_{h}(t,n_{1})$ near $n_{1}/t=1/2$, where the estimation errors are large. We can at most say that $p_{c}(M)>p_{c}(C)$.
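For illustration, this procedure can be sketched as follows: interpolate measured values of $q_{h}$ on the $k/13$ grid and scan $p$ for the smallest value at which the self-consistent equation has three or more solutions. The "measured" values below are invented placeholders, not the experimental estimates.

```python
# Hedged sketch: determine p_c from a linearly interpolated herder rule.
import numpy as np

ratio = np.linspace(0.0, 1.0, 14)                            # k/13 grid as in EXP-I
qh_measured = 0.5 * (0.9 * np.tanh(5 * (ratio - 0.5)) + 1)   # placeholder data

def n_solutions(p, grid=20001):
    z = np.linspace(0.0, 1.0, grid)
    f = (1 - p) + p * np.interp(z, ratio, qh_measured) - z
    return int(np.count_nonzero(np.sign(f[:-1]) != np.sign(f[1:])))

def p_c(p_grid=np.linspace(0.5, 1.0, 501)):
    for p in p_grid:
        if n_solutions(p) >= 3:
            return p
    return None   # no transition found below p = 1

print(p_c())
```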
Herder’s probability of correct choice
--------------------------------------
![\[fig:p\_vs\_EHZ\] Plot of herders’ probability of correct choice, $(\mbox{E}(Z(T|r))-(1-p))/p$ vs. $p$, for the voting model. Symbol $\circ$ ($\triangle$) indicates the experimental data for the four bins $I(6),I(7),I(8)$, and $I(9)$ in Table \[tab:table\] for case $C \hspace*{0.2cm}(M)$. The lines show the results of the stochastic model with system size $T=60, r=C$ (thin solid); $T=60, r=M$ (thin dashed); $10^{6}, r=C$ (thick solid); and $10^{6},
r=M$ (thick dashed). We also plot the result of the stochastic model for analog herders $q_{h}(t,n_{1})=n_{1}/t$ with $T=60$ (thin dotted) and $10^{6}$ (thick dotted). ](65757Fig5.eps){width="8cm"}
We estimate the probability of correct choice by a herder as a function of $p$ [@Cur:2006]. For the voting model, it can be estimated using the expectation value of $Z(t|p)$ as $$\mbox{E}((Z(t|p)-(1-p)\cdot 1)/p).$$ For the experimental data, we take the average of $(Z(i|r)-(1-p(i))\cdot 1 )/p(i)$ over the samples in $I(\mbox{No.})$: $$\frac{1}{|I(\mbox{No.})|}\sum_{i \in I
(\mbox{\tiny{No.}})}(Z(i|r)-(1-p(i))\cdot 1)/p(i) .$$ We plot the results in Figure \[fig:p\_vs\_EHZ\]. The experimental results show that the probability of correct choice in case $C$ is better than that in case $M$ except for the samples in $I(6)$. As the system size $T$ increases, for $p<p_{c}(C)$ the probability of correct choice in case $C$ remains better than that in case $M$. However, the maximum value of $q_{h}(t,n_{1}|C)$ is about 0.9, and the probability of correct choice saturates at that value for $p<p_{c}(C)$. As $p$ exceeds $p_{c}(C)$, the probability of correct choice in case $C$ rapidly decreases and dips below that in case $M$. Owing to the information cascade transition, the herders’ probability of correct choice is greatly lowered, which results in the poor performance. In contrast, the poor performance of the herders in case $M$ for $p<p_{c}(C)$ comes from the saturation of $q_{h}(t,n_{1}|M)$ at $n_{1}/t=3/4$. Because of this saturation, the probability of correct choice cannot reach a high value. For comparison, we show the results of the optimal system of analog herders with $T=60$ and $10^{6}$. In the thermodynamic limit, its probability of correct choice converges to one for $p<1$.
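For illustration, a minimal sketch of this estimate (toy numbers only):

```python
# Hedged sketch: herders' probability of correct choice, (E(Z) - (1-p))/p, computed
# from a model expectation value and averaged over samples for the data.
import numpy as np

def herder_accuracy_model(expected_z, p):
    return (expected_z - (1.0 - p)) / p

def herder_accuracy_data(z_samples, p_samples):
    z = np.asarray(z_samples, dtype=float)
    p = np.asarray(p_samples, dtype=float)
    return np.mean((z - (1.0 - p)) / p)

print(herder_accuracy_model(0.95, 0.8))                 # model value for E(Z)=0.95, p=0.8
print(herder_accuracy_data([0.9, 0.8], [0.7, 0.75]))    # toy experimental bin
```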
\[sec:conclusions\]Conclusions
==============================
Social influence, which here is restricted to information about the choices of others, yields inaccuracy in the majority choice. If a herder receives the summary statistics $\{C_{A},C_{B}\}$ and the payoff for the correct choice is constant, he strongly tends to copy the majority. The correct information given by independent voters is buried beneath the herd, and the majority choice does not necessarily indicate the correct one if the herders’ proportion exceeds $p_{c}(C)$ [@Mor:2012]. When the return is set to be proportional to multipliers $\{M_{A},M_{B}\}$ that are inversely proportional to the summary statistics $\{C_{A},C_{B}\}$, the situation is a zero-sum game between a herder and the previous subjects who have set the multipliers. The optimal max-min strategy is that of analog herders, who choose $\alpha \in \{A,B\}$ with probability proportional to $C_{\alpha}$. Furthermore, the system of analog herders with $q=1$ maximizes the probability of the correct choice for any value of $p$ in the thermodynamic limit. Even in the limit $p\to 1$, only this system can take the probability of correct choice to one.
We performed a laboratory experiment to study herders’ behavior under the influence of the multipliers $\{M_{A},M_{B}\}$. We showed that they collectively behave almost as analog herders for $4/3 \le m \le 4$, where $m$ is the multiplier. Outside this region, the herders’ copy probability $q_{h}(t,n_{1}|M)$ saturates at about $3/4$ for $n_{1}/t \ge 3/4$, deviating from the analog herders’ rule $q_{h}(t,n_{1})=n_{1}/t$. As a result, the probability of correct choice by a herder cannot reach as high a value as in the system of analog herders.
The system size and the number of samples in our experiment are very limited, and thus it is difficult to estimate $p_{c}$ precisely. More importantly, in the estimation of $p$, we assume that the herders’ $q$ is 50%. This is a precondition of the experiment, and we eliminated data that did not fulfill it. However, this procedure does not guarantee the precondition. In order to estimate $p$ more precisely and check the precondition, it is necessary to improve the experimental design or the data analysis procedure. In addition, in our experimental setup the subjects had to choose between A and B, and in our analysis of the experimental data we only observe the average behavior of many herders. An interesting problem is whether a herder can adopt the max-min strategy at the individual level, or whether only the average herder can do so. One good way to clarify this is to permit people to divide their choice and vote fractionally. If the fraction voted by a subject is proportional to the summary statistic of previous subjects’ choices, it suggests that the subject can adopt the max-min strategy at the individual level. We think that a more extensive experimental study of this system and of related systems deserves further attention [@Sal:2006]. Such experimental studies should provide a new approach to econophysics [@Man:2008; @Lux:1995; @Kir:1993; @Con:2000; @Gon:2011; @Mor:2010] and socio-physics [@Gal:2008].
We thank Yosuke Irie and Ruokang Han for their assistance in performing the experiment. This work was supported by Grant-in-Aid for Challenging Exploratory Research 25610109.
\[A\]Additional information about the Experiment
================================================
In EXP-I, 120 subjects were recruited from the Literature Department of Hokkaido University. We formed two groups of about sixty subjects each, and the subjects in each group answered 120 questions one by one. Because of the capacity of the laboratory, we could not run the whole experiment at one time, so we divided the subjects of each group into five sub-groups of about 12 subjects. In one session, the subjects in a sub-group sequentially answered the questions. After five sessions we had gathered the data from all the subjects in a group.
Subjects were paid in cash upon being released from the session. There was a 500 yen (about 5 dollars) participation fee and additional rewards proportional to the number of points gained. In cases $O$ and $C$, one correct choice was worth two points, and one point was worth one yen (about one cent). In case $M$, one correct choice was worth the multiplier itself. In the main text, we treat case $M$ as a zero-sum game. Considering the participation fee, we can regard it as a constant-sum game, which is equivalent to a zero-sum game. As for EXP-II, detailed information can be found in [@Mor:2012].
\[PB\]Experimental procedure
============================
We explain the experimental procedure in EXP-I in detail. All the subjects in a sub-group entered the laboratory and sat in the partitioned spaces. Using slides, we showed subjects how the experiment would proceed. We explained that we were studying how their choices were affected by the choices of others. In particular, we emphasized that social information was realistic information calculated from the choices of previous subjects. Through the slides, we also explained how to calculate multipliers $\{M_{A},M_{B}\}$ in case $M$, with a concrete example.
After the explanation, the subjects logged into the experiment web site using their IDs and started to answer the questions. Interaction between subjects was permitted only through the social information given by the experiment server. A question was chosen by the experiment server and displayed on the monitor. First, the subjects answered the first half of the 120 questions, $i \in \{1,2,\cdots,60\}$, using only their own knowledge ($r=O$). After answering all sixty questions in case $O$, the subjects answered the same 60 questions in case $C$. Finally, the subjects answered the same questions in case $M$. In each case, the experiment server chose at random a question among the sixty that was not being served to another subject at the time; otherwise, we could not give correct social information from all previous $t$ subjects to the $(t+1)$-th subject. After a five-minute interval, we repeated the same procedure so that the subjects answered all 120 questions.
![\[fig:experience\] The screen presented to a subject in case $M$.]{width="13cm"}\
Figure \[fig:experience\] shows what a subject sees in case $M$. In the example shown in the figure, nine subjects have already answered question 30. The multipliers are given in the second row, along with the number of subjects who have answered the question. Only one subject among the nine has chosen A, and the remaining eight subjects have chosen B. Multiplier $M_{A}\hspace*{0.2cm}(M_{B})$ is calculated as $10/(1+1)=5\hspace*{0.2cm}(10/(8+1)=1.1)$. The multipliers are rounded off to one decimal place.
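For illustration, a minimal sketch of the multiplier rule, assuming the "10" in the example is the nine previous subjects plus the current one, i.e. $M_{\alpha}=(t+1)/(C_{\alpha}+1)$ rounded to one decimal place:

```python
# Hedged sketch of the multiplier rule described above (the (t+1)/(C+1) form is
# inferred from the worked example, not stated explicitly in this appendix).
def multipliers(c_a, c_b):
    total = c_a + c_b + 1          # previous subjects plus the current one
    return round(total / (c_a + 1), 1), round(total / (c_b + 1), 1)

print(multipliers(1, 8))   # (5.0, 1.1), matching the example for question 30
```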
In EXP-II, the experience of the subjects was almost the same as in EXP-I [@Mor:2012]. The difference lies in how the experiment proceeds. In EXP-II, each subject answered each question from case $O$ through case $M$ before the experiment server chose another question, and this process continued until the subject had answered all questions. The subjects could therefore easily remember their answers from the earlier cases with different social information and be more careful in choosing answers in the later cases. In order to exclude such an effect, we changed the procedure to the one used in EXP-I.
\[C\] Controllability of the difficulty level of a question
============================================================
We have used the same 120 questions in EXP-I and EXP-II. For the selection process, please refer to our previous paper [@Mor:2012]. Here, we study whether the difficulty of a question is an inherent property or not. For this purpose, we compare the percentage of correct answers to each question in case O in Group A and in Group B. It is defined for Group A as $Z(i|O)=\sum_{s=1}^{T_{i}}X(i,s|O)/T_{i}$ and for Group B as $Z(i+120|O)=\sum_{s=1}^{T_{i+120}}X(i+120,s|O)/T_{i+120}$. We show the scatter plot $\{Z(i|O),Z(i+120|O)\}$ in Figure \[fig:compare\].
![\[fig:compare\] Scatter plots of $Z(i|O)$ vs. $Z(i+120|O)$ in EXP-I. Pearson’s correlation coefficient $\rho$ is $0.8997$. ](65757FigC1.eps){width="7cm"}
As one can clearly see, the points lie almost on the diagonal line, and we can infer that there is a strong correlation. Pearson’s correlation coefficient $\rho$ is about $0.90$. In EXP-II, we observe the same feature and $\rho$ is about $0.82$. The strong correlation means that if a question is difficult (easy) for the subjects in one group, it is also difficult (easy) for the subjects in the other group. The system sizes in our experiments are very limited and there remains some fluctuation in the estimation of $Z(i|O)$, but this would disappear for a large system. We can thus control the difficulty levels of the questions in the experiment and study the responses of the subjects under such control. This aspect is important when one makes predictions based on the results presented in this paper.
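For illustration, the correlation coefficient can be computed as follows (toy values, not the experimental data):

```python
# Hedged sketch: Pearson's correlation between the two groups' per-question
# correct-answer ratios Z(i|O) and Z(i+120|O). The arrays here are invented.
import numpy as np

z_group_a = np.array([0.9, 0.4, 0.7, 0.2, 0.55])
z_group_b = np.array([0.85, 0.5, 0.65, 0.25, 0.6])
rho = np.corrcoef(z_group_a, z_group_b)[0, 1]
print(round(rho, 4))
```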
\[D\]Uniqueness of the analog herders system
============================================
In the main text, we show that the system of analog herders maximizes the probability of correct choice for $p<1$ and can take it to one for any $p<1$. Here, we show that only the system of analog herders can do it.
![\[fig:general\] Schematic view of the self-consistent equation $z=q(z|p)=(1-p)+p\cdot q_{h}(t,t\cdot z)$ with general $q_{h}(t,t\cdot z)$. $(z,q(z|p))$ connects $(0,1-p)$ and $(1,1)$ by a continuous curve. As $z_{+}=1$ is a stable solution, $\frac{dq(z|p)}{dz}$ at $z=1$ is one or less. If $q_{h}(t,t\cdot z)$ deviates from $z$, for $p'>p_{c}$, in addition to the stable solution $z_{+}$ at $z=1$, there is at least one stable solution $z_{-}$ for $z<1$. ](65757FigD1.eps){width="7cm"}
As the system of analog herders ensures that the probability of correct choice is one for any $p<1$, the self-consistent equation for any such system must have only one stable solution $z_{+}$, at $z=1$. If the equation has more than one stable solution and the probability of convergence to solutions less than one is finite, the probability of correct choice cannot be one. For the self-consistent equation to have a solution $z_{+}$ at $z=1$, $q_{h}(t,t\cdot z)$ must take the value 1 (0) at $z=1$ $(0)$. In addition, as $z_{+}$ is stable, the slope of $q(z|p)$ at $z=1$ is one or less. The curve $(z,q(z|p))$ connects $(0,1-p)$ and $(1,1)$ as in Figure \[fig:general\]. The curve of the system of analog herders connects the two points by a straight line. If $q_{h}(t,t\cdot z)$ deviates from $z$, the curve between the two points ripples above and below the straight line. Then one can see that there is some threshold value $p_{c}<1$ such that, for $p>p_{c}$, the curve has three or more intersections with the diagonal line $y=z$. In this case, in addition to the stable solution $z_{+}$, there exists another stable solution $z_{-}$ less than one. The probability of correct choice then becomes less than one, and the statement is proved.
\[E\]Exponent $\gamma$
======================
![\[fig:p\_vs\_gamma\] Plot of $\gamma$ vs. $p$. We plot the results of the average herders model for EXP-I for (A) Case $C$ and (B) Case $M$. Symbol ($\circ$) denotes $\gamma$s vs. $p_{avg}$ in EXP-I, which are estimated in Figure \[fig:herd\_macro\]. The lines show the results of the stochastic model with system size $T=60$ (thin solid) and $T=\infty$ (thick solid).](65757FigE1a.eps "fig:"){width="7cm"}
![\[fig:p\_vs\_gamma\] Plot of $\gamma$ vs. $p$. We plot the results of the average herders model for EXP-I for (A) Case $C$ and (B) Case $M$. Symbol ($\circ$) denotes $\gamma$s vs. $p_{avg}$ in EXP-I, which are estimated in Figure \[fig:herd\_macro\]. The lines show the results of the stochastic model with system size $T=60$ (thin solid) and $T=\infty$ (thick solid).](65757FigE1b.eps "fig:"){width="7cm"}
In order to check the validity of the stochastic model for cases $r\in \{C,M\}$, we study the convergence exponent $\gamma$. We solve the master equation (\[eq:master\]) recursively and obtain $P(t,n|p)$ for $t\le T=60$ for EXP-I. We estimate the convergence exponent $\gamma$ from the slope of $\mbox{Var}(Z(t|p))$ as $$\gamma=\log \frac{\mbox{Var}(Z(T-\Delta T|p))}
{\mbox{Var}(Z(T|p))}/\log \frac{T}{T-\Delta T}.$$ We take $\Delta T=50$ to match the analysis of the experimental data in Figures \[fig:herd\_macro\]A and B. In order to give the error bar of $\gamma$ for the experimental results, we used the voting model to simulate the system and estimated the 95% confidence interval [@Mor:2012]. For $T=\infty$ (thermodynamic limit), we estimate the gradient $q'(z_{+}|p)$ of $q(z|p)$ at $z=z_{+}$ and use the formula $\gamma=\mbox{Min}(1,2-2\cdot q'(z_{+}|p))$ [@His:2012]. The results are summarized in Figure \[fig:p\_vs\_gamma\] for (A) case $C$ and (B) case $M$. For $T=60$, the model describes the experimental results well. In the limit $T\to \infty$, $\gamma$ decreases monotonically from 1 to 0.
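For illustration, both estimates of $\gamma$ can be sketched as follows (toy inputs only):

```python
# Hedged sketch of the two estimates used above: the finite-size exponent from the
# variance at T - dT and T, and the thermodynamic-limit formula
# gamma = min(1, 2 - 2*q'(z_+)).
import numpy as np

def gamma_finite(var_early, var_late, T, dT):
    return np.log(var_early / var_late) / np.log(T / (T - dT))

def gamma_limit(q_prime_at_z_plus):
    return min(1.0, 2.0 - 2.0 * q_prime_at_z_plus)

# Toy inputs: variances consistent with Var ~ 1/t give gamma = 1.
print(gamma_finite(var_early=0.025, var_late=0.025 * 10 / 60, T=60, dT=50))  # -> 1.0
print(gamma_limit(0.8))                                                      # -> 0.4
```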
[^1]: E-mail: [email protected]
| |
How Patriots’ Path To Playoffs Can Change Dramatically In Coming Days
BOSTON (CBS) — The Patriots won on Sunday. In their quest to make the postseason, that was essentially necessary.
Unfortunately for New England, the remaining contenders in the wild-card mix also won. The Raiders benefited from the Jets being the Jets, the Colts benefited from the Texans being the Texans, and the Dolphins avoided a slip-up (while engaging in some fisticuffs) against the Bengals. The Browns also manhandled the Titans.
As a result, after Sunday’s action, here’s how the race for the three AFC wild-card spots stands:
The outlook for the Patriots remains, essentially, the same. Last week, FiveThirtyEight’s prediction tool gave the Patriots a 13 percent chance of making the playoffs. Sunday’s win boosted that number to … 16 percent. The New York Times’ prediction model has them at 15 percent, up from 13 percent a week ago.
That’s not the type of progress the Patriots were hoping to make on Sunday, of course. However, there’s a chance that the outlook changes dramatically for New England in just a matter of days.
The first key moment will come Tuesday night, when the Ravens host the Cowboys (in a game that was rescheduled from Thursday, due to the Ravens’ COVID-19 outbreak). Baltimore should have a number of key players back, including quarterback Lamar Jackson, but the Patriots have to hope that the disruptions to the Ravens’ schedule and practices will result in a losing performance on Tuesday.
That’s because if the Ravens lose, the Patriots’ playoff chances increase to 21 percent.
And if the Patriots can go out and beat the Rams on Thursday night? Those chances skyrocket (relatively speaking) to 40 percent.
That would, obviously, be significant. From there, the Patriots would still need some outside help. But if Bill Belichick’s team can win the following weekend in Miami? The chances jump all the way to 67 percent. And if that theoretical Dolphins loss comes after Miami loses to the Chiefs this weekend, the Patriots’ chances of making the playoffs will be at 69 percent.
With the Colts and Raiders set to play head-to-head in Week 14, one team will (almost certainly) lose. The Patriots would prefer a Colts win, as it would increase the Patriots’ chances to 74 percent (provided the aforementioned scenario with Miami plays out as explained.) Fire up a Colts loss to Houston in Week 15, and the Patriots would be at 75 percent.
Obviously, the ifs and buts can run wild when imagining future scenarios. But in the short term, the easiest way to look at it is quite simple. If the Ravens lose on Tuesday and the Patriots win on Thursday, then New England’s chances of reaching the playoffs will jump from 16 percent to 40 percent. Even if the Ravens do win on Tuesday, the Patriots can still increase their playoff chances to 32 percent just by beating the Rams to improve to 7-6, before watching the Sunday slate like the rest of us.
On the flip side, a loss to the Rams would plummet the Patriots to just a 7 percent chance of reaching the postseason. | |
Bragg-Williams approximation for the Ising model
Again, we consider a system with the following Hamiltonian:
First of all:
Then, for the other term of we have:
The total energy of the system will be:
We can thus see that in the Bragg-Williams approximation also the and exponents are the same of the Weiss mean field theory; in fact, we have seen in Critical exponents of Weiss mean field theory for the Ising model that they come from the expansion of the free energy density for small values of the magnetization when . If we now set into , so that , we get:
We can also say something in the case . Supposing that our system is uniform, i.e. , we can rewrite as:
In particular if (i.e. ), expanding the right hand side of the self-consistency equation we get a positive linear term (), so that the behaviour of is as shown in the first figure, and the equation has only one solution. On the other hand if then the linear term changes sign and behaves as in the second figure: in this case if is small enough there are three possible solutions, which we have called , and . These are all extrema of , but how can we understand which is a minimum and which is a maximum? And, above all, which of them is the global minimum? If we suppose to be large there will be only one solution, , and as decreases also and will appear; we can therefore argue by continuity that the solution is still a minimum also when and are present. Similarly, if we take we can conclude that also is a minimum; therefore will necessarily be a maximum. Now, in order to see which between and is the global minimum of , let us take and compute:
This means that as soon as changes sign the global minimum of changes abruptly from to . We are thus obtaining the phenomenology that is indeed observed for a magnet when we change the external field . In other words the sets of points are exactly the graphs of the phase diagram we have seen in Phase transitions and phase diagrams, i.e. the graphs of the magnetization seen as a function of the external field.
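For illustration, the three solutions and the jump of the global minimum can be reproduced numerically. The sketch below assumes the standard Bragg-Williams (Weiss) self-consistency equation m = tanh(beta*(zJ*m + h)) and the corresponding mean-field free energy per site f(m) = (zJ/2)m^2 - (1/beta) ln 2cosh(beta*(zJ*m + h)); the notation is generic rather than tied to the symbols used above.

```python
# Hedged numerical sketch of the mean-field self-consistency analysis.
import numpy as np

def solutions(beta, Jz, h, grid=200001):
    m = np.linspace(-1, 1, grid)
    g = np.tanh(beta * (Jz * m + h)) - m
    sign_change = np.nonzero(np.sign(g[:-1]) != np.sign(g[1:]))[0]
    return m[sign_change]                     # approximate roots of the equation

def free_energy(m, beta, Jz, h):
    return 0.5 * Jz * m**2 - np.log(2 * np.cosh(beta * (Jz * m + h))) / beta

beta, Jz, h = 1.5, 1.0, 0.01                  # T < Tc and a small positive field
ms = solutions(beta, Jz, h)
print(ms)                                      # three extrema: m-, m_u, m+
print([round(free_energy(m, beta, Jz, h), 5) for m in ms])  # global minimum is m+ for h > 0
```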
- ↑ This form of is very general, and does not depend on the fact that the degrees of freedom of the system can only assume two values: if there is a different number of possible states, say , then can be written in the same form, but will be the probability of one state while the probability of the remaining ones. We will shortly see this when we will apply the Bragg-Williams approximation to the Potts model.
- ↑ Note that also the temperature of the transition is still the same, considering also the factor we have already mentioned.
- ↑ These considerations apply in general also in the other mean field theories considered, but we show them now. | https://en.wikitolearn.org/Course:Statistical_Mechanics/Mean_field_theories/Bragg-Williams_approximation_for_the_Ising_model |
Chitwood said several departments don't have the manpower to investigate all petty crimes, such as theft.
But that's not the case in Volusia County.
"We have never done that since 2006," Chitwood said. "We're looking for DNA, fingerprints, video evidence. We're trying to track where the stolen items were going, who's on probation in the neighborhood, who did we arrest before."
Chitwood said, in Volusia County, his detectives get results by giving every case the same attention.
"We investigate a stolen Christmas decoration with the same energy and the same paradigm as we would a homicide or armed robbery," Chitwood said.
That's why the number of crimes is going down and the number of cases cleared is going up in Volusia County, according to Chitwood.
The Sheriff's Office provided statistics to News 6 that show there were 206 burglaries in the Deltona area in 2018 and 86 of those cases were solved, resulting in a clearing rate of 42 percent.
In 2017, in the same area, there were 212 total burglaries and 77 were solved, for a clearing rate of 36 percent.
In 2016, in the same area, there were 326 burglaries and only 83 were solved, for a clearing rate of 25 percent.
Chitwood said the countywide numbers reflect the same trend.
Sheriff's Office statistics show burglaries across Volusia County have steadily dropped since 2016, while the clearing rate has increased. In 2018, 36 percent of burglaries were cleared countywide.
In December, when detectives learned of a spike in burglaries in one Deltona neighborhood, Deltona-based Detective Jarett Wooleyhan said the entire office collaborated.
"If there's a lead we can follow up on and we can solve something, we're going to solve it, no matter what it is," Wooleyhan said. "It's kind of a personal attack on somebody's privacy. If somebody broke into my home, I would like to believe my local law enforcement would do everything to find out who and get my stuff back, so I try and do the same in Deltona."
Wooleyhan said a couple broke into 15 homes within a 2-mile radius in two weeks.
A homeowner spotted one of the suspects carrying a TV out of a house and gave detectives a description of the getaway car, which a deputy discovered was parked only blocks away, Wooleyhan said.
"They live in our community," Wooleyhan said. "One of the victims was actually their direct neighbor, their next-door neighbor."
Wooleyhan said detectives arrested Gabriella Soto and Desmond Watson and charged them in 14 of the 15 break-ins.
Wooleyhan said the couple was selling the stolen items at pawn shops. He recovered many of the items and returned them to the homeowners.
"When you have a group of dedicated professionals who come to work in this city and they say we're going to get results, and if they don't, at least I know -- and their victims should know -- they did everything possible to solve the case," Chitwood said. | |
The conference proceedings are available in PDF format here.
Abstracts
Session 1
Working with pch2csd — Clavia NM G2 to Csound Converter
- Abstract
- The paper presents a detailed review of the pch2csd application, developed to convert the popular Clavia Nord Modular G2 synthesizer patch format pch2 into a Csound-based metalanguage. The Nord Modular G2 was one of the most remarkable synthesizers of the late 90s. The considerable number of different patches available makes the Nord Modular G2 a desirable target for software emulation. In this paper we describe the pch2csd workflow, including the modeling approach, so that developers may use the paper as a starting point for further experiments. Each of the Nord Modular's units is implemented as a User-Defined Opcode. The paper presents an approach to modeling, including a description of the ancillary files needed for correct operation. First presented at the International Csound Conference 2015 in St. Petersburg, the pch2csd project continues to develop. Some directions for future developments and strategic plans are suggested. An example of the transformation of a Nord Modular G2 patch into Csound code concludes the paper.
- Keywords
- Nord Modular G2, converter, metalanguage.
- Video of the presentation
- 1080 – 720 – 450
Daria: A New Framework for Composing, Rehearsing and Performing Mixed Media Music
- Abstract
- In this paper we present a new modular software framework for composing, rehearsing and performing mixed media music. By combining and extending existing open-source software we were able to synchronize the playback of the free Musescore music notation editor with three VST audio effects exported using the Csound frontend Cabbage. The JACK Audio Connection Kit sound server was used to provide a common clock and a shared virtual timeline to which each component could adhere to and follow. Moreover, data contained on the musical score was used to control the relative position of specific Csound events within the aforementioned timeline. We will explain the nature of the plugins that were built and briefly identify the five new Csound opcodes that the development process required. We will also comment on a generic programming pattern that could be used to create new compatible VST audio effects and instruments. Finally, we will conclude by mentioning what other related software exists that can interact out-of-the-box with our framework, how instrument players and computer performers can simulate the performance experience while practicing their corresponding parts at home and what our future plans for this software ecosystem are.
- Keywords
- Mixed media music, Musescore, Csound, Cabbage, JACK.
- Video of the presentation
- 1080 – 720 – 450
Session 2
Interactive Csound Coding with Emacs
- Abstract
- This paper will cover the features of the Emacs package csound-mode, a new major mode for coding with Csound. The package is for the most part a typical Emacs major mode, providing indentation rules, completions, docstrings and syntax highlighting, with the extra feature of a REPL based on running a Csound instance through the csound-api. Similar to csound-repl.vim, csound-mode strives to give the Csound user a faster feedback loop by offering a REPL instance inside of a text editor, making the gap between development and the final output reachable within real-time interaction.
- Keywords
- Emacs, Csound, REPL.
- Video of the presentation
- 1080 – 720 – 450
Vim tools for Coding and Live Coding in Csound
- Abstract
- Vim is a powerful, free, cross-platform text editor, very popular among programmers and developers. Luis Jure's csound-vim plugin provides a set of tools for editing Csound files with Vim, such as syntax recognition and highlighting, folding, autocompletion, on-line reference, and templates, as well as macros for compiling Csound orchestras from within Vim. Steven Yi's csound-repl plugin provides functionality for live coding with Csound. In this talk, we will demonstrate the features found in each of these plugins and discuss workflows for Vim and Csound.
- Keywords
- Vim, Csound, REPL.
- Video of the presentation
- 1080 – 720 – 450
Session 3
Chunking: A new Approach to Algorithmic Composition of Rhythm and Metre for Csound
- Abstract
- A new concept for generating non-isochronous musical metres is introduced, which produces complete rhythmic sequences on the basis of integer partitions and combinatorics. It was realized as a command-line tool called chunking, written in C++ and published under the GPL licence. Chunking produces scores for Csound as well as standard notation output using Lilypond. A new shorthand notation for rhythm is presented as intermediate data that can be sent to different backends. The algorithm uses a musical hierarchy of sentences, phrases, patterns and rhythmic chunks. The design of the algorithms was influenced by recent studies in music phenomenology, and makes references to psychology and cognition as well.
- Keywords
- Rhythm, NI-Metre, Musical Sentence, Algorithmic Composition, Symmetry, Csound Score Generators.
- Video of the presentation
- 1080 – 720 – 450
Interactive Visual Music with Csound and HTML5
- Abstract
- This paper discusses aspects of writing and performing interactive visual music, where the artist controls, in real time, a computerized process that simultaneously generates both visuals and music. An example piece based on Csound and HTML5 is presented.
- Keywords
- Visual music, generative art, algorithmic composition, computer music, Csound, HTML5.
- Video of the presentation
- 1080 – 720 – 450
Session 4
Spectral and 3D spatial granular synthesis in Csound
- Abstract
- This work presents ongoing research based on the design of an environment for Spatial Synthesis of Sound using Csound through granular synthesis, spectral-data-based synthesis and 3D spatialisation. Spatial Synthesis of Sound may be conceived as a particular way of sonic production in which the composer generates the sound together with its spatial features. Although this type of conception has long lived in the minds and work of most composers (especially in electroacoustic music), some of the strategies applied here were inspired by the work of Gary Kendall. Kendall makes specific mention of both granular synthesis and spectral-data-based synthesis as examples of resources through which the composer may partition the sonic stream in the time domain and the frequency domain, respectively. These procedures allow a detailed spatial treatment of each of the obtained parts of a sound, which, in turn, may lead to realistic or unusual spatial images. The aim is not to describe granular synthesis, spectral-data-based synthesis or sound spatialisation techniques in detail, but to describe the particular strategies in the design for the aforementioned purposes.
- Keywords
- Spectral data based synthesis, granular synthesis, sound spatialisation.
- Video of the presentation
- 1080 – 720 – 450
Integrated Tools for Sound Spatialization in Csound
- Abstract
- This talk is a report of an ongoing project aiming to develop a series of opcodes providing an integral solution for sound spatialization in Csound, using state–of–the–art techniques. The specific goals include extending and improving the present Ambisonics opcodes, developing a configurable high quality 3D Ambisonics FDN reverberator, and an opcode for sound localization with Ambisonics plus distance.
- Keywords
- Spatialization, Ambisonics, FDN reverberation. | https://csound.com/icsc2017/abstracts.html |
Beads date to the origins of humankind. They have served as currency, conveyed social and political messages, and expressed human creativity.
In 2011, the Museum acquired the entire collection of beads from The Bead Museum in Glendale, Arizona, spanning nearly every era, culture, and medium.
These beaded crowns are revered objects in Yorubaland as the essence of sacred rulership.
Each one of these 112 colorful beaded strands has a hidden meaning.
Produced in the millions and traded worldwide, these beads were highly valued and used as currency to purchase goods.
This highly prized necklace would have been part of an Ainu woman’s attire for formal occasions.
Hundreds of tiny steel cut beads, adorn this vintage tapestry style fringed purse.
In need of creative inspiration? Or perhaps just some retail therapy? Visit The Collectors’ Gallery, Mingei's store, there is always something beautiful to try on, touch, read or simply admire. | https://mingei.org/collection/beads/ |
Made for More – Exodus90
Made for More is a series to help you discover Christ and unfold the way he desires us to live our lives.
The main reason men make an Exodus is this: they want to grow. Closer to God, their spouse, children, and friends. Closer to that man that God has called them to be for them. Download a mobile-friendly Holy Hour guide for free.
Meet Nathaniel Binversie
Best known for his work in developing the road map for the Exodus Biblical Series and re-writing The Exodus 90 Spiritual Exercise, Nathaniel is an author, editor, podcaster and international speaker. His more recent stops have included Ireland, Poland, and Slovakia. In addition to speaking, he creates challenging and practical content for men in both video and written forms. For postings of his most recent work, check out Nathaniel’s blog.
Made for More is an initiative of the Faith Formation Office of the Diocese of Tulsa and Eastern Oklahoma. If you have any questions or would like to attend these events, please contact Sarah Jameson: [email protected]. | https://alcuininstitute.org/captivate-podcast/nathanielbinversie |
October 11th, 2012
A team of international researchers has provided the first comprehensive DNA evidence that the Addis Ababa lion in Ethiopia is genetically unique and is urging immediate conservation action to preserve this vulnerable lion population.
While it has long been noted that some lions in Ethiopia have a large, dark mane, extending from the head, neck and chest to the belly, as well as being smaller and more compact than other lions, it was not known until now if these lions represent a genetically distinct population.
The team of researchers, led by the University of York, UK, and the Max Planck Institute for Evolutionary Anthropology, Germany, has shown that captive lions at the Addis Ababa Zoo in Ethiopia are, in fact, genetically distinct from all lion populations for which comparative data exists, both in Africa and Asia.
The researchers compared DNA samples from 15 Addis Ababa Zoo lions (eight males and seven females) to lion breeds in the wild. The results of the study, which also involved researchers from Leipzig Zoo and the Universities of Durham and Oxford, UK, are published in the European Journal of Wildlife Research.
Principal Investigator Professor Michi Hofreiter, of the Department of Biology at the University of York, said: “To our knowledge, the males at Addis Ababa Zoo are the last existing lions to possess this distinctive mane. Both microsatellite and mitochondrial DNA data suggest the zoo lions are genetically distinct from all existing lion populations for which comparative data exist.
“We therefore believe the Addis Ababa lions should be treated as a distinct conservation management unit and are urging immediate conservation actions, including a captive breeding programme, to preserve this unique lion population.”
Click here to read more at Science Daily. | http://www.tadias.com/10/11/2012/dna-confirms-genetically-distinct-lion-population-for-ethiopia/ |
A haunting photo has captured the moment before a lion mauled a woman in the Ukraine.
Olga Solomina, 46, was visiting the Taygan safari park in Crimea when she was told she could step into the lions' enclosure to pose for pictures.
All was well until she placed her hand on the male lion's mane, provoking the creature into mauling her. The big cat seized her arm in his jaws and tried to drag her away "like a rag doll", she told Metro.
"The other lions jumped to their feet. I closed my eyes in fear waiting to be torn apart by the pride. It lasted several seconds that felt like eternity for me."
Fortunately for Ms Solomina, zoo director Oleg Zubkov was close enough to chase the lion away and drive her to safety.
However she says he refused to call an ambulance, instead ordering a vet to treat her injuries and offering her alcohol to soothe her pain.
She was eventually taken to hospital the following day, having contracted a serious infection from the untreated lion bite. She had surgery and her condition improved, but it's still unknown if she will ever regain full use of her arm.
After the accident, Ms Solomina demanded 1,000,000 RUB (NZ$23,150) from the safari park as compensation for her ordeal.
Mr Zubkov has refused, saying the woman was drunk and provoked the attack, and that she signed a disclaimer before entering the lions' enclosure.
Newshub. | https://www.newshub.co.nz/home/world/2018/07/photo-taken-moments-before-lion-mauled-woman.html |
I am trying to automate our version control process so we don't get smacked by ISO auditors and I'm having some difficulty.
Currently, we have all ISO documents on one network share with each section being its own word document. Within each document is contained a Revision letter (A, B, C, D, etc). Then, we have a master revision level tracking document (currently word but this could be changed to excel if necessary) that contains a list of all ISO documents and their current revision (A, B, C, D, etc.).
The goal is to just be able to update the Revision letter in each document, and then have the master revision level tracking document (either by links or whatever will work!) update itself with the individual documents' revision letters.
I've tried doing this with Bookmarks and links in Word--and it works...until you change a document's revision letter. Because the bookmark lets you select and mark highlighted text (e.g. "E"), when the "E" is changed to "F", the bookmark disappears as soon as the "E" disappears, rendering the link useless.
Next I tried embedding an excel sheet (for a single cell in which to place the revision letter) in the section documents so that I could instead reference the cell (which would allow data to be changed and referenced without the use of bookmarks), but I cannot seem to successfully link it, and my guess is because it's embedded.
Anyone have any ideas?
Thanks, all, in advance!
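For illustration, one scripted route could look roughly like the sketch below. It is untested and rests on assumptions: each section document contains a paragraph beginning "Revision:" followed by the letter, the master tracker has been converted to an Excel workbook whose column A holds the section file names and column B the revision letters, and the python-docx and openpyxl libraries are installed. All paths are placeholders.

```python
# Rough sketch (assumptions noted above): pull the revision letter from each Word
# section document and write it into a master Excel tracking workbook.
import glob
import os
import re
from docx import Document           # pip install python-docx
from openpyxl import load_workbook   # pip install openpyxl

def read_revision(path):
    """Return the first revision letter found in a paragraph like 'Revision: C'."""
    for paragraph in Document(path).paragraphs:
        match = re.search(r"Revision:\s*([A-Z])", paragraph.text)
        if match:
            return match.group(1)
    return None

def update_master(master_path, doc_folder):
    revisions = {os.path.basename(path): read_revision(path)
                 for path in glob.glob(os.path.join(doc_folder, "*.docx"))}
    workbook = load_workbook(master_path)
    sheet = workbook.active
    for row in sheet.iter_rows(min_row=2):   # row 1 assumed to hold headers
        name_cell, rev_cell = row[0], row[1]
        if revisions.get(name_cell.value):
            rev_cell.value = revisions[name_cell.value]
    workbook.save(master_path)

# Placeholder paths -- replace with the real network share locations.
update_master(r"\\server\iso\master_tracker.xlsx", r"\\server\iso\sections")
```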
vishu4v, my guess is you need to set the revision level equal to a variable and simply make your link include the variable rather than the level itself. That way you can alter the variable as needed without affecting anything else.
Perhaps a script? Maybe find 'nuts & bolts' help in the programming forum @:
http://www.computing.net/forum/prog...
HTH.
Ed in Texas. | https://www.computing.net/answers/windows-xp/version-control-iso-9001/183756.html |
Monkeys — Baboons — What’s the difference?
Well, just like all roses are flowers, but not all flowers are roses…
I preferred watching the antics of the monkeys, but let’s first take a look at the baboons.
There are five species of baboons, but the one we saw in South Africa was the Chacma Baboon. It is one of the heaviest, with males weighing from fifty to nearly one hundred pounds, and lives in social groups. These troops did not seem at all worried about the humans driving slowly by.
While the monkeys we will see later are both arboreal and terrestrial, the baboons are not arboreal. You will find them on the ground eating, walking, or hitching a ride. Their diet consists of everything from fruit, leaves and insects to rodents, birds, small antelope, and Vervet monkeys. They are not looked upon favorably because they will also raid human dwellings to feast on goats, sheep, and chickens.
There are two kinds of monkeys in South Africa — The Vervet and the Samango
Both Vervet and Samango monkeys are arboreal where they have a diet primarily of fruit, leaves, and insects. The Vervet is more common in South Africa and can cause a lot of damage to commercial fruit orchards.
This shot clearly shows the silver-gray body and black face of the Vervet. Although my pictures show individual monkeys, they are social animals and also live in troops.
And I was lucky enough to get this little guy quenching his thirst.
As you can see, their usual diet is often supplemented by anything they might find–or steal–from humans. You will find monkeys anywhere people picnic and at many outdoor restaurants. While stopping for lunch in Kruger National Park, one jumped down and attempted to swipe my food, but Andries Van Wijk moved quickly to intervene. They were both so quick, I barely knew what happened. By the time I spied the little pilferer, he was already looking for his next victim.
Next we’ll take a look at some beautiful and fascinating antelopes. Let me know if there is anything else you’re curious about or would like to see.
There is so much to see in South Africa, and with the help of our wonderful guides Andries and Steffi Van Wijk, I look forward to seeing more on our next trip there. | https://gkbostic.com/2020/03/01/its-primate-time/ |
Scotland’s premature birth rate has fallen by 10% since the public smoking ban came into force in 2006, BBC News reported today.
The news is based on the results of a large Scottish study that looked at trends in numbers of premature births and small babies born between 1996 and 2009, and how these related to the introduction of the smoking ban in March 2006. The researchers found that there was a decline in the number of premature births in the three months before the introduction, but since then there has been a slight fluctuation and numbers have begun to rise again overall. Conversely, the number of babies born small for the length of time they were in the womb declined around 2006, and has generally continued to fall.
Smoking is a known risk factor for premature birth and babies born small for the length of time they were in the womb (gestational age), and this research provides valuable clues to the potential impact of the smoking ban. However, the study only found trends, which means it cannot prove the legislation caused the drop in rates seen. It is possible that other factors may be responsible, such as general improvements in antenatal care.
Both smoking in pregnancy and passive smoking are associated with a higher risk of premature birth, babies born small and other pregnancy complications.
The study was carried out by researchers from University of Glasgow and Western General Hospital, Edinburgh. It was funded by Scotland’s Chief Scientist Office. The study was published in the peer-reviewed medical journal PLoS Medicine.
It was reported accurately by the BBC, which pointed out that other factors might have influenced the results.
This time-trend study looked at the numbers of babies born prematurely or small for gestational age in Scotland before and after the introduction of the smoking ban in March 2006. It looked at data on babies born to nearly 717,000 pregnant women between 1996 and 2009.
The researchers examined both trends in data and the possible impact of the legislation in Scotland. However, while this type of study can identify trends, it cannot confirm the various factors that caused the trends. It examined how trends related to both “active” and “passive” smoking, also known as first-hand and second-hand smoking respectively.
Both active and passive smoking during pregnancy are known to increase the risk of various complications. The researchers say the legislation – the Smoking, Health and Social Care (Scotland) Bill – has been very successful in reducing exposure to environmental tobacco smoke (ETS) in public places. It has also been associated with greater voluntary restrictions on smoking in the home. They say there was an increase in attempts to quit among current smokers three months before the legislation was introduced, and a reduction in the amount smoked by those who continued to smoke.
The researchers gathered data from a national administrative database on pregnancy, which collects information on all women discharged from Scottish maternity hospitals and records many factors, including pregnancy complications and smoking status. Data on smoking status were based on women’s self-reported smoking habits, which were reported as “current”, “never” and “former” smokers. The researchers obtained data on all singleton, live-born babies delivered at 24 to 44 weeks of pregnancy between January 1996 and December 2009. They used postcodes as an indicator of the women’s socioeconomic status.
From this data, they collected information on the rates of two complications of pregnancy: babies born small for gestational age and premature delivery. Babies were classed as small for gestational age if their birth weight was within the lowest 10% of babies of the same sex born at the same point in pregnancy. Premature delivery was defined as delivery before 37 weeks of pregnancy, and was categorised as:
The researchers also looked at other outcomes, such as spontaneous premature delivery (as opposed to overall premature deliveries which would include those that were planned, for example premature induced labour or caesarean due to complications with the mother or baby).
Researchers looked at the trends in these outcomes before and after the introduction of smoking legislation. They were particularly interested in two time points: the date when the legislation was implemented (March 26 2006), and three months before (January 1 2006). The latter date, they explain, allows for the possibility of people making changes to smoking behaviour in anticipation of the legislation and was chosen because it coincided with a New Year peak in attempts to quit smoking found in a previous study.
In their analysis, the researchers adjusted their results to account for other factors that may affect these pregnancy outcomes, including maternal age, sex of the infant and socioeconomic factors.
The researchers included 716,941 women who fulfilled all their criteria and for whom they had information on smoking status. They found that the number of current smokers fell from 25.4% before legislation to 18.8% after legislation. From looking at the trends in numbers of babies born prematurely or small for gestational age, they noticed that, of the two dates, January 1 2006 (three months before the smoking ban) seemed to have a greater influence on the fall.
The graph depicting the trend in premature births between 1996 and 2009 shows a general fluctuation in rates. Around January 2006, there is an apparent decline in rates, but over the following three years, there has been continued fluctuation and numbers have begun to rise again. For the number of babies born small for gestational age, there was a similar decline around 2006. However, the trend, though still fluctuating, seems to have continued downwards since then, rather than rising as it did with premature births.
The researchers report that, after January 1 2006, there were significant reductions in the proportions of babies born prematurely and babies born small for gestational age. These significant reductions were found among all women, including those who still smoked and those who had never smoked.
The researchers say that three months before the introduction of the new legislation, the numbers of premature deliveries and babies born small for gestational age fell significantly, although they point out that rates of premature births have since begun to rise again. They say this is consistent with a previous study which showed that smokers anticipated legislation, resulting in a significant peak in prescriptions for nicotine replacement therapy in January 2006.
Overall, this analysis of the relationship between pregnancies and the smoking ban in Scotland provides a valuable insight into the possible results of anti-smoking legislation. In particular, the decline in rates of premature births and babies born small for gestational age around January 2006 is interesting. As smoking is a known risk factor for these outcomes, the trend could be the result of higher quit rates, both among pregnant women and the public in general, in anticipation of the new law.
However, the trend analysis performed in this study cannot prove there is a definite relationship between the two, but only that there are associations. It is possible that other factors are involved, such as general improvements in antenatal care and management of pregnant women who are at risk of these complications. Furthermore, there has been continuing fluctuation in numbers of babies born prematurely or small for gestational age since the smoking ban in 2006. The subsequent general increase in premature births makes it even harder to draw any conclusions about the reasons behind this trend.
A further limitation of the study was that women’s smoking status was based on them reporting whether or not they smoked. As the authors point out, there is evidence that pregnant women underestimate how much they smoke and it is possible they felt under pressure to conceal their smoking following the new law. However, this would not affect the overall results of the study, which related to all deliveries irrespective of smoking status.
The study cannot prove that smoking legislation – or anticipation of it – reduced the risk of pregnancy complications. Nevertheless, smoke-free legislation is now recognised as having health benefits and it is plausible that improved pregnancy outcomes are one of them. | https://www.nicswell.co.uk/health-news/premature-births-fell-10-after-smoking-ban |
Feb 10, 2014: Your building costs per square metre vary according to where you live, unfortunately. You can get a quote to build the same house in a lower-cost suburb and a quote to build the same house in a more upper-class area, and it will cost more to build in the affluent suburb.
Nationally, it would cost you an average of R8 163 per square metre to build flats, according to building plans passed by municipalities in 2015 Flats are followed by office space R8 092 per square metre , shopping space R7 364 per square metre , townhouses R6 802 per square metre , and free-standing houses R5 932 per square metre...
May 27, 2018: According to the report, the average building cost of new housing completed increased by 3.1% y/y to R7,360 per square metre in the first quarter of 2018, compared with R7,135 per square metre a year earlier.
Aug 27, 2018: The North West currently has the cheapest construction costs in South Africa, while KwaZulu-Natal is the most expensive, according to a release by Stats SA. Stats SA recorded building plans passed by larger municipalities at current prices per province for the period January-July 2018.
Related: Cost of Building a Detached Double Garage made of Brick Walls in South Africa; Cost of Building a 24 x 24 Detached Double Garage made of Brick Walls in the USA; 10 Best Estimating Takeoff Software for Precast Concrete, Modular Building and Prefabricated Metal / Timber Construction; Building Costs Per Square Metre in the UK / England and Wales.
COST PER SQUARE METRE
- For example, the total wall area required for a square building will be less than that for a rectangular one but still provide the same floor area.
- Region - Construction costs vary between different regions in South Africa.
- General - Paving, swimming pools, tennis courts etc. also affect building costs.
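A per-square-metre rate only ever gives a ballpark figure, but the arithmetic is straightforward: floor area multiplied by the rate. The sketch below uses the 2015 national average rates quoted above purely as example inputs; actual quotes vary by region and suburb, as the text notes.

```python
# Ballpark construction cost from a rate per square metre.

RATES_ZAR_PER_M2 = {
    "flats": 8163,
    "office space": 8092,
    "shopping space": 7364,
    "townhouses": 6802,
    "free-standing houses": 5932,
}

def estimate_build_cost(building_type: str, floor_area_m2: float) -> float:
    """Return a rough cost in rand: floor area x national average rate."""
    return floor_area_m2 * RATES_ZAR_PER_M2[building_type]

# A 180 m2 free-standing house at the 2015 national average rate:
print(f"R{estimate_build_cost('free-standing houses', 180):,.0f}")  # R1,067,760
```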
6 Things to Know About Basement Wall & Floor Cracks
Who knew that one little word could mean so many different things? Crack: adj., super, first-rate; Crack: verb, break, usually into parts; Crack: verb, lose self-control; Crack: verb, hit very hard; Crack: verb, discover meaning, answer; Crack: noun, joke; Crack: noun, attempt to do something; Crack: noun, loud sound, usually from hitting; and finally, Crack: noun, break, crevice. Whew! That’s a lot of work for one little word.
And speaking of little, what about those tiny cracks that often appear in the concrete of your basement’s walls and floor–should you be concerned? What about the bigger ones? At what point should you start to worry?
3 Types of Basement Concrete Cracks in Foundation Walls:
1. Shrinkage or Curing
Concrete, by nature, shrinks as it dries and cures over time. The degree of shrinkage is largely affected by the conditions present at the time the concrete was poured and directly after. Dramatic changes in temperature can also affect curing and cause cracks to occur. Even in optimal conditions, however, basement concrete will experience some degree of shrinkage.
2. Settlement Cracks
Another often inevitable cause of cracking in basement slabs results from settlement in the soil beneath the slab itself. Most homes are designed to accommodate some movement in the soil, as it is a common occurrence. Other sources of settlement cracks include water leakage and aggressive tree roots.
3. Movement
A condition known as the frost-heave cycle explains how the moisture in damp soil under a home’s foundation freezes when cold and expands, then thaws, potentially shifting the concrete of the foundation and forming cracks in the concrete. Cracks caused by frost-heave are most often seen around the support columns of the basement floor, or as horizontal cracks along the upper part of a basement wall where it meets the surrounding topsoil, which is the soil most affected by the freeze-thaw cycle.
Potential Damage from Wall and Floor Cracks:
1. Water Leaks
Depending upon the size and amount of cracks in basement concrete, water can seep into the basement through them, which can in turn exacerbate the problem.
2. Structural Problems
Though most concrete cracks are not structural in nature, other types of cracks can be signs of more serious damage to a home’s structure. Take note of any of the following types of cracks or unnatural spaces in your home:
- Gaps forming between the floor and walls
- Gaps forming between the walls and ceiling
- Walls pulling away from each other
- Cracks on the walls, especially near corners
Should any of these red flags become apparent, look for other clues that might suggest settlement issues, such as doors or windows that don't open correctly or floors and surfaces that noticeably slope. Settlement is a problem best addressed sooner than later, as cracks will continue to form and the home's foundation could suffer considerable damage.
3. Invasive Pests
While those minor settlement cracks may look (and even be) inconsequential, they can serve as a welcome spot for small pests to call home. Keep your eye on the cracks to make sure that you don’t have any uninvited guests taking up residence.
How to Fix Basement Floor Cracks
The best way to repair floor cracks is to pour in a polymer compound that forms a bond with the concrete on both sides. For wider cracks, use an epoxy filler to recreate the original monolithic pour. It’s important to do a thorough job of sealing the crack to prevent further issues, rather than just doing a cosmetic fix by covering it with a surface filler. | https://finishedbasementsandmore.com/blog/6-things-to-know-about-basement-wall-floor-cracks/ |
Q:
Doc knows he will invent time machine 30 years ahead and it doesnt change the future
In Back to the Future, Doc invented the time machine in 1985, and he was really excited and a little surprised that it worked. But later in the movie, Marty goes back and tells young Doc about it.
Why was Doc so surprised that it worked, when he had known since 1955 that he would invent it in 1985?
But the more serious question: since he had known it since 1955, wouldn't this knowledge change the date of the first time travel? (Make it sooner, because he had lots of information from the future, or even later, because he "knew" he would invent it and could stop trying.)
Or why didn't he change the meeting place with Marty to prevent the attack by the Libyans (any change to past events could alter that moment, when Marty escaped only because the gun jammed)? Is it because he did not want to ruin the space-time continuum?
I know this is not one specific question but rather a few confused questions; I hope you understand what is blurry about this movie for me.
A:
Technically, in 1955, Doc Brown didn't invent time travel; he got the idea for the flux capacitor "which makes time travel possible." Doc Brown would first need to actually build a flux capacitor, and then he would need to work out how to use it to build a time machine. Just knowing that something is possible or that it will work does not tell you how to actually make it work.
As to the rest of your question, there are many different theories of time travel and how it can affect the time line. One of the areas of contention (for which we do not have an answer) is the overall stability of the time line. Some theories say that even a minute change will cause a magnified ripple effect of larger and larger side effects - this is commonly known as the butterfly effect. At the opposite end of the spectrum, some science fiction folks speculate that a time stream is inherently stable and events and circumstances will tend to stay, as closely as possible, to what they were originally. Some theories even hold that the time stream will constantly work to repair itself and, eventually, only the largest and most significant changes will have any lasting effects.
It seems that, for Back to the Future, the theory of time travel in operation is more towards the stable time-line end of the continuum, though definitely not self-repairing. For the most part, things will tend to be very much like they were before the changes introduced by Marty's time traveling. As a result, the completion of the time machine, and the first trip will end up on the same day and in the same place, and probably not as a result of any conscious decision by Doc Brown.
This is in response to your letter of July 21, 1992, requesting a ruling as to the classification and country of origin for aluminum cookware under the Harmonized Tariff Schedule of the United States (HTSUS). In response to your letter of June 8, 1992, by letter dated October 20, 1992 (HQ 734734), the Chief, Value and Marking Branch of Customs Headquarters, provided you with general information concerning the country of origin marking of this cookware.
The articles in question are non-stick, hard anodized aluminum pans sold under the name, "Circulon." You state that you intend to locate a plant in the U.S. to produce these pans. Your intended plan is to provide aluminum circles from your Hong Kong plant to your Thailand plant.
In Thailand, the aluminum circles will go through the following manufacturing processes: (1) bottom mark stamping; (2) scroll trimming (grooves cutting); (3) drawing (forming of pan); (4) edge trimming; (5) holes punching (for handle assembling); (6) machine etching (cleaning); (7) interior sandblasting; (8) exterior sunray polishing; (9) packing.
After importation into the U.S., the following will occur: (1) hard anodizing; (2) sealing; (3) washing; (4) interior non-stick PTFE coating; (5) sealing; (6) handle riveting (assembling); (7) cleaning; (8) packaging with cover and knob (if lid is stainless steel, rather than glass, a screw must be welded onto the lid). The complete pans will then be packaged and ready for sale.
1. Whether the cookware, as imported into the U.S., is classifiable as an aluminum table, kitchen or other household article, under Heading 7615, HTSUS.
2. What is the country of origin of the cookware?
The General Rules of Interpretation (GRI's) to the HTSUS govern the classification of goods in the tariff schedule. GRI 1 states in pertinent part that "for legal purposes, classification shall be determined according to the terms of the headings and any relative section or chapter notes and, provided such headings or notes do not otherwise require, according to the [remaining GRI's]." Heading 7615, HTSUS, provides for aluminum table, kitchen or other household articles and parts thereof.
EN 76.15, pg. 1069, states that Heading 7615, HTSUS, covers the same types of articles as are described in the ENs to Heading 7323, HTSUS (table, kitchen or other household articles of iron or steel). EN 73.23, pg. 1035, states that Heading 7323, HTSUS, covers articles for kitchen use such as "frying pans."
As imported into the U.S., the articles in question are aluminum pans that have been stamped, trimmed, grooved, sandblasted, polished and have had holes punched for handle assembly. The pans have not been hard anodized, sealed, or non- stick coated. The handles have not yet been assembled, nor have the pans been fitted with lids (glass or stainless steel).
GRI 2(a) states that any reference in a heading to an article shall be taken to include a reference to that article incomplete or unfinished, provided that, as entered, the incomplete or unfinished article has the essential character of the complete or finished article. Thus, if the pans in question have the essential character of complete or finished pans, they would be classifiable under Heading 7615, HTSUS.
For an item to have the essential character of the finished product, it must be recognizable as such a product. In determining an article's essential character, one must look to the merchandise in question--as it changes, so too may the factors which determine its essential character. Factors found to be relevant in other contexts include the significance of the imported component (bulk, quantity, weight), its role in relation to the use and overall functioning of the complete article and, to the extent it validates the comparison, the cost or value of the complete article versus the cost or value of the imported component. See HQ 084845, dated November 24, 1989; HQ 086555, dated April 16, 1990; EN to GRI 3(b), pg. 4.
The articles in question have the essential character of complete, aluminum pans. The finished aluminum pans would be classified under Heading 7615, HTSUS, which provides for aluminum kitchen articles. Specifically, they would be classifiable under subheading 7615.10.30, HTSUS, which provides for aluminum cooking and kitchen ware containing non-stick interior finishes. Thus, according to GRI 2(a), the unfinished aluminum pans are likewise classifiable under subheading 7615.10.30, HTSUS.
A substantial transformation occurs when a new and different article emerges from the processing, one having a new name, character or use. Anheuser-Busch Brewing Association v. United States, 207 U.S. 556 (1908).
Your intended plan is to provide aluminum circles from your Hong Kong plant to your Thailand plant. In Thailand, the aluminum circles will be manufactured into aluminum pans that have been stamped, trimmed, grooved, sandblasted, polished and have had holes punched for handle assembly. After importation into the U.S., the handles will be assembled and the pans will be fitted with lids (glass or stainless steel). The pans also will be hard anodized, sealed, and coated with interior non-stick coating in the U.S.
The raw aluminum circles from Hong Kong are substantially transformed in Thailand into an article having a name, character and use different than that possessed by the article as it originally entered Thailand. As shown above, the aluminum circles are transformed into unfinished, aluminum pans which have the essential character of complete, aluminum pans.
The imported aluminum pot/pan is not substantially transformed in the U.S. into a new article with a new name, character or use. Although it is necessary to attach the handle in order for the pot/pan to be functional, the imported article could only be used to make a pot/pan to be used for cooking. The name, character and use of the pot/pan would not change when the handle is attached. The aluminum pot/pan is the very essence of the finished product. A significant amount of work is not done on the pot/pan itself; the processing done on the pot/pan in the U.S. is merely finishing and coating the pot/pan.
Because the imported unfinished pans are not substantially transformed in the U.S., the country of origin of the finished aluminum pans is also Thailand.
In your ruling request, you state that if the hard anodizing process is not sufficient to qualify the pans as products of the U.S., you may send the aluminum circles from Hong Kong to Thailand for stamping and groove cutting only, and then on to the U.S. (for drawing, trimming, hole punching, etching, sandblasting, polishing, hard anodizing, sealing, washing, non- stick coating, handle riveting, cleaning and packaging). You then ask for the classification and country of origin marking requirements for these flat, stamped, grooved aluminum disks.
In this instance, the imported articles would not have the essential character of complete or finished, aluminum frying pans, according to GRI 2(a). Thus, they would not be classifiable under Heading 7615, HTSUS.
The flat, stamped, grooved aluminum disks would be classifiable under Heading 7606, HTSUS, which provides for "[a]luminum plates, sheets and strip, of a thickness exceeding 0.2% mm." If the disks are of aluminum, not alloyed, not clad, they would be classifiable under subheading 7606.91.30, HTSUS. If they are of aluminum, alloyed, not clad, they would be classifiable under subheading 7606.92.30, HTSUS.
With regard to the articles' country of origin, it is our opinion that the aluminum disks are not substantially transformed in Thailand by the stamping and grooving operation. However, they are substantially transformed in the U.S., and, therefore, the country of origin of the finished pans is the U.S.
As stated above, the unfinished pans are classifiable under subheading 7615.10.30, HTSUS, which provides for aluminum cooking and kitchen ware containing non-stick interior finishes . . . [o]ther. The corresponding rate of duty for articles of this subheading is 5.7% ad valorem. This provision is eligible for a free rate of duty under the Generalized System of Preferences. See sections 10.171-10.178 of the Customs Regulations (19 CFR 10.171-10.178).
The described processing operations in Thailand result in a substantial transformation of the aluminum circles, or a product having a new name, character and use, namely, the unfinished pans. Thus, the imported articles are products of Thailand.
The subsequent processing of the imported articles in the U.S. does not result in a substantial transformation of these articles. Thus, the country of origin of the finished pans is also Thailand.
With regard to your alternative proposal, the flat, stamped, grooved aluminum disks are classifiable under Heading 7606, HTSUS, which provides for "[a]luminum plates, sheets and strip, of a thickness exceeding 0.2 mm." If the disks are of aluminum, not alloyed, not clad, they would be classifiable under subheading 7606.91.30, HTSUS. If they are of aluminum, alloyed, not clad, they would be classifiable under subheading 7606.92.30, HTSUS. The corresponding rate of duty for articles of these subheadings is 3% ad valorem.
Aluminum circles from Hong Kong which are stamped and groove cut only in Thailand are not substantially transformed in Thailand. Thus, these articles are products of Hong Kong.
However, the subsequent processing in the U.S. results in a substantial transformation of the flat, stamped, grooved aluminum disks. Thus, for Customs purposes the country of origin of the finished pans under this proposal is the U.S. The pans may not be marked "Made in the U.S.A." unless authorized by the Federal Trade Commission. | http://www.faqs.org/rulings/rulings1993HQ0952033.html |
In Sainte-Verge, the summers are warm and partly cloudy and the winters are very cold, windy, and mostly cloudy. Over the course of the year, the temperature typically varies from 36°F to 78°F and is rarely below 25°F or above 88°F.
Based on the tourism score, the best time of year to visit Sainte-Verge for warm-weather activities is from mid June to mid September.
The warm season lasts for 3.1 months, from June 10 to September 14, with an average daily high temperature above 71°F. The hottest day of the year is August 3, with an average high of 78°F and low of 58°F.
The cool season lasts for 3.7 months, from November 17 to March 6, with an average daily high temperature below 52°F. The coldest day of the year is February 8, with an average low of 36°F and high of 47°F.
Seattle, Washington, United States (5,035 miles away) is the far-away foreign place with temperatures most similar to Sainte-Verge (view comparison).
In Sainte-Verge, the average percentage of the sky covered by clouds experiences significant seasonal variation over the course of the year.
The clearer part of the year in Sainte-Verge begins around May 17 and lasts for 4.9 months, ending around October 12. On July 24, the clearest day of the year, the sky is clear, mostly clear, or partly cloudy 67% of the time, and overcast or mostly cloudy 33% of the time.
The cloudier part of the year begins around October 12 and lasts for 7.1 months, ending around May 17. On January 5, the cloudiest day of the year, the sky is overcast or mostly cloudy 72% of the time, and clear, mostly clear, or partly cloudy 28% of the time.
A wet day is one with at least 0.04 inches of liquid or liquid-equivalent precipitation. The chance of wet days in Sainte-Verge varies throughout the year.
The wetter season lasts 8.6 months, from September 21 to June 9, with a greater than 24% chance of a given day being a wet day. The chance of a wet day peaks at 33% on December 28.
The drier season lasts 3.4 months, from June 9 to September 21. The smallest chance of a wet day is 16% on August 27.
To show variation within the months and not just the monthly totals, we show the rainfall accumulated over a sliding 31-day period centered around each day of the year. Sainte-Verge experiences some seasonal variation in monthly rainfall.
Rain falls throughout the year in Sainte-Verge. The most rain falls during the 31 days centered around October 25, with an average total accumulation of 2.4 inches.
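The 31-day sliding accumulation described above is a centred rolling sum: for each day, the rainfall over the 15 days either side is added to that day's total. A minimal sketch is below; the daily totals it uses are synthetic placeholders, not the station data behind this report.

```python
# Centred 31-day rainfall accumulation over a year of daily totals.
import numpy as np

rng = np.random.default_rng(0)
daily_rain_inches = rng.gamma(shape=0.3, scale=0.25, size=365)  # fake daily totals

def centered_31_day_totals(daily: np.ndarray) -> np.ndarray:
    """Rainfall accumulated over a 31-day window centred on each day."""
    window = np.ones(31)
    # mode="same" keeps one value per day; the edges are effectively zero-padded.
    return np.convolve(daily, window, mode="same")

totals = centered_31_day_totals(daily_rain_inches)
print(totals.shape, round(totals.max(), 2))
```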
The length of the day in Sainte-Verge varies significantly over the course of the year. In 2019, the shortest day is December 22, with 8 hours, 30 minutes of daylight; the longest day is June 21, with 15 hours, 54 minutes of daylight.
The earliest sunrise is at 6:05 AM on June 16, and the latest sunrise is 2 hours, 42 minutes later at 8:46 AM on January 1. The earliest sunset is at 5:11 PM on December 11, and the latest sunset is 4 hours, 49 minutes later at 10:00 PM on June 26.
Daylight saving time (DST) is observed in Sainte-Verge during 2019, starting in the spring on March 31, lasting 6.9 months, and ending in the fall on October 27.
The perceived humidity level in Sainte-Verge, as measured by the percentage of time in which the humidity comfort level is muggy, oppressive, or miserable, does not vary significantly over the course of the year, staying within 3% of 3% throughout.
The average hourly wind speed in Sainte-Verge experiences significant seasonal variation over the course of the year.
The windier part of the year lasts for 6.4 months, from October 13 to April 24, with average wind speeds of more than 9.7 miles per hour. The windiest day of the year is January 16, with an average hourly wind speed of 11.3 miles per hour.
The calmer time of year lasts for 5.6 months, from April 24 to October 13. The calmest day of the year is August 5, with an average hourly wind speed of 8.1 miles per hour.
The predominant average hourly wind direction in Sainte-Verge varies throughout the year.
The wind is most often from the west for 8.3 months, from January 27 to October 6, with a peak percentage of 46% on July 20. The wind is most often from the south for 3.7 months, from October 6 to January 27, with a peak percentage of 36% on January 1.
To characterize how pleasant the weather is in Sainte-Verge throughout the year, we compute two travel scores.
The tourism score favors clear, rainless days with perceived temperatures between 65°F and 80°F. Based on this score, the best time of year to visit Sainte-Verge for general outdoor tourist activities is from mid June to mid September, with a peak score in the first week of August.
The beach/pool score favors clear, rainless days with perceived temperatures between 75°F and 90°F. Based on this score, the best time of year to visit Sainte-Verge for hot-weather activities is from mid July to mid August, with a peak score in the first week of August.
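As a rough illustration of how such a score could be assembled from hourly conditions, consider the sketch below. The linear falloff outside the temperature band and the way cloud cover and rain are combined are assumptions of this sketch, not WeatherSpark's actual formula.

```python
# Toy hourly comfort score: rewards clear, rain-free hours inside a
# perceived-temperature band (65-80 F for tourism, 75-90 F for beach/pool).

def comfort_component(temp_f: float, low: float, high: float, falloff: float = 10.0) -> float:
    """1.0 inside [low, high], tapering linearly to 0 within `falloff` degrees outside."""
    if low <= temp_f <= high:
        return 1.0
    distance = (low - temp_f) if temp_f < low else (temp_f - high)
    return max(0.0, 1.0 - distance / falloff)

def hourly_score(temp_f: float, cloud_cover: float, raining: bool,
                 band=(65.0, 80.0)) -> float:
    """Combine temperature comfort, clearness and dryness into one 0-1 score."""
    if raining:
        return 0.0
    return comfort_component(temp_f, *band) * (1.0 - cloud_cover)

# The same hour scored against the tourism band and the beach/pool band:
print(hourly_score(72, cloud_cover=0.2, raining=False))                 # 0.8
print(hourly_score(72, cloud_cover=0.2, raining=False, band=(75, 90)))  # 0.56
```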
The growing season in Sainte-Verge typically lasts for 7.5 months (230 days), from around April 1 to around November 17, rarely starting before March 8 or after April 22, and rarely ending before October 24 or after December 24.
Based on growing degree days alone, the first spring blooms in Sainte-Verge should appear around April 3, only rarely appearing before March 19 or after April 20.
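Growing degree days are typically accumulated as the amount by which each day's mean temperature exceeds a base temperature, floored at zero. The sketch below shows that accumulation; the 50°F base and the sample week of temperatures are assumptions for illustration, not values taken from this report.

```python
# Accumulating growing degree days (GDD) over a sequence of days.

def daily_gdd(t_max_f: float, t_min_f: float, base_f: float = 50.0) -> float:
    """Degree days from one day: mean temperature above the base, floored at zero."""
    return max(0.0, (t_max_f + t_min_f) / 2.0 - base_f)

def accumulated_gdd(days, base_f: float = 50.0) -> float:
    """Running total over a sequence of (t_max, t_min) pairs."""
    return sum(daily_gdd(hi, lo, base_f) for hi, lo in days)

spring_week = [(55, 40), (58, 42), (62, 45), (60, 44), (57, 41), (63, 46), (65, 47)]
print(accumulated_gdd(spring_week))  # 16.0
```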
For the purposes of this report, the geographical coordinates of Sainte-Verge are 47.008 deg latitude, -0.210 deg longitude, and 217 ft elevation.
The topography within 2 miles of Sainte-Verge contains only modest variations in elevation, with a maximum elevation change of 197 feet and an average elevation above sea level of 230 feet. Within 10 miles contains only modest variations in elevation (509 feet). Within 50 miles contains significant variations in elevation (1,066 feet).
The area within 2 miles of Sainte-Verge is covered by cropland (67%) and artificial surfaces (25%), within 10 miles by cropland (76%) and grassland (10%), and within 50 miles by cropland (63%) and grassland (16%).
This report illustrates the typical weather in Sainte-Verge, based on a statistical analysis of historical hourly weather reports and model reconstructions from January 1, 1980 to December 31, 2016.
There are 6 weather stations near enough to contribute to our estimation of the temperature and dew point in Sainte-Verge.
For each station, the records are corrected for the elevation difference between that station and Sainte-Verge according to the International Standard Atmosphere , and by the relative change present in the MERRA-2 satellite-era reanalysis between the two locations.
The estimated value at Sainte-Verge is computed as the weighted average of the individual contributions from each station, with weights proportional to the inverse of the distance between Sainte-Verge and a given station.
The stations contributing to this reconstruction are: Angers-Avrillé (21%, 61 kilometers, northwest); Poitiers–Biard Airport (21%, 61 kilometers, southeast); Angers – Loire Airport (20%, 62 kilometers, north); Niort-Souché (14%, 79 kilometers, south); Tours Val de Loire Airport (13%, 85 kilometers, northeast); and La Roche-sur-Yon - Les Ajoncs (11%, 95 kilometers, west). | https://weatherspark.com/y/44142/Average-Weather-in-Sainte-Verge-France-Year-Round |
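A minimal sketch of the inverse-distance weighting described above is shown next. It uses the kilometre distances listed for the six stations; the temperature values are made up and stand in for records that have already been elevation-corrected. The published contribution percentages also reflect other adjustments, so this sketch will not reproduce them exactly.

```python
# Weighted average of station values with weights proportional to 1/distance.

stations = {
    "Angers-Avrillé": (61.0, 11.2),
    "Poitiers–Biard Airport": (61.0, 11.6),
    "Angers – Loire Airport": (62.0, 11.1),
    "Niort-Souché": (79.0, 12.0),
    "Tours Val de Loire Airport": (85.0, 11.4),
    "La Roche-sur-Yon - Les Ajoncs": (95.0, 11.8),
}  # name -> (distance_km, corrected value, e.g. temperature in °C)

def inverse_distance_estimate(obs: dict) -> float:
    """Weighted average with weights proportional to the inverse of the distance."""
    weights = {name: 1.0 / dist for name, (dist, _val) in obs.items()}
    total = sum(weights.values())
    return sum(weights[name] * val for name, (_dist, val) in obs.items()) / total

print(round(inverse_distance_estimate(stations), 2))
```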
TECHNICAL FIELD
BACKGROUND ART
DISCLOSURE OF THE INVENTION
Problem to be Solved by the Invention
Means for Solving the Problem
Effect of the Invention
EXPLANATIONS OF NUMERALS
BEST MODES FOR CARRYING OUT THE INVENTION
INDUSTRIAL APPLICABILITY
The present invention relates to a process for production of a multilayer film having a plurality of adhesive films formed on a support film, and for production of a multilayer film having a plurality of adhesive films and a pressure-sensitive adhesive film formed on a support film.
Multilayer films having a plurality of adhesive films and a pressure-sensitive adhesive film formed on a support film include, for example, die bonding-dicing integrated films wherein a die bonding adhesive layer and a dicing pressure-sensitive adhesive layer are formed on a support film.
Such multilayer films may be produced, for example, by a method in which a die bonding film comprising a support film and adhesive layers formed on the support film and separated at a prescribed spacing is laminated with a dicing film comprising a base film and pressure-sensitive adhesive layers formed on the base film, with the adhesive layers and the pressure-sensitive adhesive layers facing inward (Patent document 1).
[Patent document 1] Japanese Unexamined Patent Publication No. 2004-221336
A plurality of adhesive layers situated at a prescribed spacing can be formed by a method in which first an adhesive layer is formed to cover the entirety of one side of the support film, after which the unneeded sections are removed leaving only portions thereof, and for industrial purposes this method is preferably employed from the viewpoint of production efficiency.
When multilayer films are produced by conventional methods, however, it is necessary to form the adhesive film on the support film while creating the spacing that is required for the final product, and this results in large amounts of unneeded portions of the adhesive layer that must be discarded. Particularly when expensive materials are used for die bonding adhesive films, it is highly desirable from an industrial point of view to minimize the amounts of unneeded portions that are to be discarded.
Another problem with production of multilayer films by conventional methods is that the adhesive film must be cut on the support film, and this results in cuts being created in the surface of the support film around the outer periphery of the adhesive film as the adhesive film is cut. The introduction of cuts invites contaminants such as film dust to collect at those sections, and therefore it is highly desirable to avoid such cuts.
It is an object of the present invention to provide a process for production of a multilayer film comprising a plurality of adhesive films formed on a support film, wherein the plurality of adhesive films can be efficiently formed at any desired predetermined spacing which may be different from the spacing in the final product, or even without spacing, and whereby cuts in the surface of the support film can be avoided.
The present invention relates to a process for production of a multilayer film comprising a support film and a plurality of adhesive films situated on the support film along the lengthwise direction of the support film. The production process of the invention comprises a step (A) in which an adhesive layer formed on a temporary base is situated along the lengthwise direction of the temporary base either at a prescribed spacing or without spacing, and cut in such a manner as to partition the plurality of sections which are to serve as adhesive films from the other sections, and a step (B) in which the adhesive films on the temporary base are moved onto the support film at a prescribed spacing along the lengthwise direction of the support film. While the plurality of sections which are to serve as adhesive films are situated along the lengthwise direction of the temporary base at the prescribed spacing, the spacing between adjacent adhesive films on the temporary base differs from the spacing between the adjacent adhesive films on the support film.
According to the production process of the invention, a plurality of adhesive films for the multilayer film are first formed on the temporary base, after which the formed adhesive films are moved onto the support film. It is therefore possible to form the adhesive films in such a manner that they are situated at any desired predetermined spacing different from the spacing in the final product, or without spacing.
Also, since the adhesive layer is cut on the temporary base and the formed adhesive films are subsequently moved onto the support film in the production process of the invention, it is possible to avoid cuts in the support film surface.
The production process of the invention may further comprise a step (C) in which all or some of the non-adhesive-film sections of the adhesive layer on the temporary base are removed to leave the adhesive films on the temporary base, as a step between step (A) and step (B). This will make it easier for the adhesive film on the temporary base to be moved onto the support film.
When the plurality of sections which are to serve as adhesive films are situated along the lengthwise direction of the temporary base at the prescribed spacing, the spacing between adjacent adhesive films on the temporary base is preferably narrower than the spacing between the adjacent adhesive films on the support film. This will form the adhesive films on the temporary base at a high density, with a narrower spacing than the spacing required for the final product. This can reduce the amount of adhesive layer that must be removed as unneeded sections.
The multilayer film may further be provided with a pressure-sensitive adhesive film having an overhanging section extending from the outer periphery of the adhesive film, which is formed on the adhesive film. In this case, the production process of the invention preferably further comprises a step (a) wherein the adhesive film and pressure-sensitive adhesive layers are attached, a step (b) wherein the pressure-sensitive adhesive layer on the adhesive film is cut so as to partition the plurality of sections which are to serve as pressure-sensitive adhesive films from the other sections, and a step (c) wherein all or some of the sections of the non-pressure-sensitive-adhesive-film sections of the pressure-sensitive adhesive layer are removed, leaving the pressure-sensitive adhesive films on the adhesive film.
For example, by attaching the adhesive films and the pressure-sensitive adhesive layer, the plurality of adhesive films on the temporary base may be moved onto the pressure-sensitive adhesive layer at a prescribed spacing that is different from the spacing between the adjacent adhesive films on the temporary base, along the lengthwise direction of the pressure-sensitive adhesive layer, and then the adhesive films and support film may be attached to the pressure-sensitive adhesive layer to move them onto the support film at a prescribed spacing along the lengthwise direction of the support film while the plurality of adhesive films are still attached to the pressure-sensitive adhesive layer. In this case, step (a) will be included in step (B), and steps (b) and (c) will be carried out after step (B). As another example, the adhesive films moved onto the support film in step (B) may be attached to the pressure-sensitive adhesive layer while still attached to the support film. In this case, steps (a), (b) and (c) will be carried out after step (B).
With a multilayer film comprising the aforementioned pressure-sensitive adhesive film, it is necessary to significantly increase the spacing between adhesive films in order to ensure space for the overhanging sections of the pressure-sensitive adhesive film, but the amount of adhesive layer that must be removed as unneeded sections can still be minimized in such cases according to the invention.
In step (c), the pressure-sensitive adhesive layer is preferably cut in such a manner that the plurality of sections that are to serve as the pressure-sensitive adhesive films are partitioned from the sections surrounding those sections as well as the other sections, and the sections of the pressure-sensitive adhesive layer surrounding the plurality of pressure-sensitive adhesive films are preferably removed leaving the pressure-sensitive adhesive films on the adhesive film. In this case, the exposed sections of the support film are formed around the pressure-sensitive adhesive film in the multilayer film. This improves the handleability of the multilayer film, facilitating release from the support film when using a laminated body comprising the adhesive films and pressure-sensitive adhesive film.
In the production process of the invention, the adhesive films may be die bonding adhesive films and the pressure-sensitive adhesive films may be dicing pressure-sensitive adhesive films.
The production process of the invention may further comprise a step wherein the outer appearance of the adhesive film on the temporary base is inspected. This can help prevent defective final products, if the only products moved onto the support film are those among the plurality of adhesive films on the temporary base that are judged as satisfactory in the step in which the outer appearance of the adhesive film is inspected.
By producing a multilayer film having a plurality of adhesive films formed on a support film according to the invention, it is possible to efficiently form the plurality of adhesive films with any desired predetermined spacing that is different from the spacing in the final product, or without spacing. As a result, reduced waste and lower production cost are achieved since the amount of adhesive layer to be removed as unneeded sections can be minimized. Also according to the invention, it is possible to avoid cuts in the support film surface and prevent residue of contaminants such as film dust caused as a result.
1: Multilayer film, 3: laminated body (die bonding-dicing integrated film), 5: semiconductor wafer, 7: ring frame, 9: dicing film, 10: support film, 11: temporary base, 12: base film, 13: cover film, 21: adhesive layer, 21a: adhesive film, 22: pressure-sensitive adhesive layer, 22aA: overhanging section, 22a: pressure-sensitive adhesive film, 31, 32: cutters, 40: release sheet, 50, 51: adsorption pads, 80: inspecting device, 90: exposed section.
Preferred embodiments of the invention will now be explained in detail, with reference to the accompanying drawing as necessary. Throughout the drawings, elements with identical or corresponding structures in the drawings will be referred to by like reference numerals, and where appropriate they will be explained only once.
FIG. 1 is a plan view showing an embodiment of a multilayer film, and FIG. 2 is an end view of FIG. 1 along line II-II. The multilayer film 1 shown in FIGS. 1 and 2 is composed of a long support film 10, a plurality of adhesive films 21a having circular main sides, pressure-sensitive adhesive films 22a laminated on the sides of the plurality of adhesive films 21a opposite the respective support film 10 sides, and a base film 12 covering the pressure-sensitive adhesive films 22a.
The adhesive films 21a are situated on the support film 10 at a prescribed spacing D2 along the lengthwise direction of the support film 10. Each of the adhesive films 21a is a die bonding adhesive film used for bonding of semiconductor elements to semiconductor element mounting boards. The pressure-sensitive adhesive films 22a and base films 12 have circular main sides with larger areas than the main sides of the adhesive films 21a, and each of the pressure-sensitive adhesive films 22a has a ring-shaped overhanging section 22aA that extends out from the outer periphery of the main side of each adhesive film 21a. Each of the pressure-sensitive adhesive films 22a is a dicing pressure-sensitive adhesive film used for anchoring of semiconductor wafers during individuation of semiconductor wafers by dicing.
The laminated body 3 having a laminated structure with the adhesive film 21a, pressure-sensitive adhesive film 22a and base film 12 laminated in that order is released from the support film 10 and used as a die bonding-dicing integrated film functioning both for die bonding and dicing. In order to facilitate release of the laminated body 3, a ring-shaped exposed section 90 is formed to expose the support film 10 around the perimeter of the laminated body 3. A portion of the pressure-sensitive adhesive layer 22 and the base film 12 are laminated on the support film 10 at the areas further outward from the exposed section 90. The overhanging section 22aA of each pressure-sensitive adhesive film 22a is shown to be at a distance from the support film 10 in the drawing only to facilitate explanation, as the overhanging section 22aA will normally also be partially in contact with the support film 10.
FIG. 3 is an end view showing an embodiment of a step of dicing a semiconductor wafer using the laminated body (die bonding-dicing integrated film) 3. Each adhesive film 21a is attached to the semiconductor wafer 5, while the overhanging section 22aA of each pressure-sensitive adhesive film 22a is attached to a ring frame 7 provided surrounding the perimeter of the semiconductor wafer 5. The die bonding-dicing integrated film 3 is anchored to the ring frame 7 by the pressure-sensitive adhesive force of the overhanging sections 22aA. The semiconductor wafer 5 is diced into a lattice together with the adhesive films 21a along the lines A in the drawing. After dicing, the pressure-sensitive adhesive force of the pressure-sensitive adhesive films 22a is reduced by light irradiation if necessary, and is picked up together with adhesive films 21a to one side of which the individuated semiconductor wafer (semiconductor chip) is attached. The picked-up semiconductor chip is bonded to the semiconductor-mounting board via the adhesive films 21a.
FIGS. 4 and 5 are schematic drawings for an embodiment of a process for production of the multilayer film 1. The production process for the embodiment shown in FIG. 4 comprises a step in which the adhesive layer 21 formed to cover the entirety of one side of the temporary base 11 is cut in such a manner as to be partitioned into a plurality of sections that are to serve as the adhesive films 21a situated along the lengthwise direction of the temporary base 11 with a prescribed spacing D1 between them, and the sections other than those sections, a step in which the sections of the adhesive layer 21 other than the adhesive films 21a on the temporary base 11 are removed, a step in which the adhesive films 21a on the temporary base 11 are moved onto the pressure-sensitive adhesive layer 22 along the lengthwise direction of the pressure-sensitive adhesive layer 22 with a spacing D2 between them, and a step in which the support film 10 is attached onto the adhesive films 21a that have been moved onto the pressure-sensitive adhesive layer 22. The spacing D1 between the adjacent adhesive films 21a on the temporary base 11 is narrower than the spacing D2 between the adjacent adhesive films 21a on the support film 10. The production process for the embodiment shown in FIG. 5 comprises a step in which the pressure-sensitive adhesive layer 22 and base film 12 on the adhesive films 21a are further cut in such a manner that the pressure-sensitive adhesive layer 22 is partitioned into a plurality of sections that are to serve as the pressure-sensitive adhesive films 22a, ring-shaped sections 22c surrounding those sections, and sections other than those sections, and a step in which the ring-shaped sections 22c of the pressure-sensitive adhesive layer 22 are removed together with the sections of the base film 12 on the ring-shaped sections 22c.
The adhesive layer 21 is formed on the temporary base 11, for example, by a method in which an adhesive solution containing the adhesive and a solvent dissolving or dispersing the adhesive is coated onto the temporary base 11, and the solvent is then removed from the coated adhesive solution. A resin film is preferably used as the temporary base 11, and it is preferably a polyethylene terephthalate film that has been release-treated with a silicone-based release agent. The adhesive layer 21 is preferably supplied in a form covered with a cover film 13.
After the cover film 13 has traveled over the peripheral surface of a roll 61 to release it from the adhesive layer 21, the adhesive layer 21 is cut into a circle using a cutter 31 having a circular blade. Only the adhesive layer 21 is cut at this time, avoiding cutting the entire temporary base 11. Sections 21b of the adhesive layer, as sections other than the circular sections left as adhesive films 21a, are removed around the peripheral surface of a roll 63 provided downstream from the cutter 31. After removing the sections 21b of the adhesive layer, the adhesive films 21a are left on the temporary base 11, situated at a prescribed spacing D1.
The spacing D1 may be set as desired, without any dependence on the spacing D2 between the adjacent adhesive films 21a on the multilayer film 1 as the final product. For this embodiment, the spacing D1 is set to be smaller than the spacing D2.
Forming the adhesive films 21a at high density with a smaller spacing D1 significantly reduces the amount of the sections 21b of the adhesive layer that must be removed as the unneeded sections. Adhesives used for die bonding as in this embodiment are particularly expensive, and therefore a notable advantage is provided by reducing the amount of unneeded sections. In order to guarantee space for the overhanging section 22aA to be used for attachment onto a ring frame during dicing as described above, it is necessary to create a relatively large spacing D2 between the adhesive films 21a on the multilayer film 1, and therefore the advantage provided by the present invention is notable.
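The saving from the narrower spacing D1 can be put into rough numbers. The sketch below is purely illustrative and is not taken from the patent: the film diameter, web width and the two spacings are assumed values, and the calculation only counts the material lost along the strip per film pitch.

```python
# Illustrative estimate of the fraction of the coated adhesive layer 21
# discarded as removed sections when circular films of diameter d are
# cut from a strip of width w at a centre-to-centre pitch of d + spacing.
import math

def waste_fraction(diameter_mm: float, spacing_mm: float, web_width_mm: float) -> float:
    """Fraction of coated area discarded per film pitch along the strip."""
    pitch_area = (diameter_mm + spacing_mm) * web_width_mm
    film_area = math.pi * (diameter_mm / 2.0) ** 2
    return 1.0 - film_area / pitch_area

# 220 mm films on a 240 mm wide web: a narrow spacing on the temporary
# base (e.g. D1 = 10 mm) versus the wider spacing needed on the support
# film for the overhanging sections (e.g. D2 = 60 mm).
print(round(waste_fraction(220, 10, 240), 3))   # ~0.311
print(round(waste_fraction(220, 60, 240), 3))   # ~0.434
```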
Preferably, the spacing D1 (the minimum distance between adjacent adhesive films) is set within a range of 0-60 mm. The adjacent adhesive films 21a on the temporary base 11 may also be situated without a spacing, i.e. with a spacing D1=0. If the spacing D1 is larger than this range, the amount of the sections 21b of the adhesive layer that are removed as unneeded sections will increase, thus reducing the effect of the invention.
The spacing D2 may be adjusted irrespective of the spacing D1, and different spacings D2 may also differ from each other.
In the embodiments illustrated in FIGS. 4 and 5, the step in which the adhesive films 21a are moved onto the support film 10 is carried out in a simultaneous progressive manner with the step of attaching the adhesive films 21a to the pressure-sensitive adhesive layer 22. The adhesive films 21a are transferred from the temporary base 11 onto the pressure-sensitive adhesive layer 22, and the support film 10 is attached onto the adhesive films 21a on the pressure-sensitive adhesive layer 22. As a result of this continuous process, the adhesive films 21a are moved from the temporary base 11 onto the support film 10. The adhesive films 21a may be attached onto the support film 10 and pressure-sensitive adhesive layer 22 in either order, and both attachments may even be carried out simultaneously. Specifically, for example, instead of first attaching the adhesive films 21a on the temporary base 11 onto the pressure-sensitive adhesive layer 22 to move them onto the pressure-sensitive adhesive layer 22 and then attaching the support film 10 onto the adhesive films 21a on the pressure-sensitive adhesive layer 22, as shown by the embodiment in FIG. 4, the adhesive films 21a on the temporary base 11 may be attached to the support film 10 to move them onto the support film 10, and then the pressure-sensitive adhesive layer 22 attached to the adhesive films 21a on the support film 10.
The pressure-sensitive adhesive layer 22 is supplied in the form of a dicing film 9 comprising a base film 12 and pressure-sensitive adhesive layer 22. The pressure-sensitive adhesive layer 22 may be appropriately selected from among those commonly used as film-like pressure-sensitive adhesives for dicing. The long dicing film 9 wound out from the supply roll travels around the peripheral surface of the roll 66. A roll 65 is situated opposite the roll 66, and the roll 66 is set so as to be movable along the direction of the arrow B. The temporary base 11 on which the adhesive layer 21 has been placed travels around the peripheral surface of the roll 65, and when the adhesive film 21a has reached the point between the rolls 65, 66, the roll 66 presses against the roll 65. This causes the adhesive film 21a to be transferred to the pressure-sensitive adhesive layer 22. After the adhesive film 21a has passed between the rolls 65, 66, the roll 66 moves along the direction of the arrow B so that it separates from the roll 65 and waits at a prescribed location until the next adhesive film 21a arrives. The temporary base 11 released from the adhesive films 21a is ejected via the peripheral surface of the roll 65.
An inspecting device 80 such as a CCD camera capable of detecting the outer appearance of the adhesive films 21a is situated upstream from the rolls 65, 66, and the outer appearance of the adhesive films 21a is inspected at the inspecting device 80. When an adhesive film 21a judged as unacceptable by inspection passes between the rolls 65, 66, the roll 66 maintains its position distant from the roll 65, and the unneeded adhesive film 21a is ejected together with the temporary base 11 without being transferred to the pressure-sensitive adhesive layer 22. In other words, the only adhesive films 21a on the temporary base 11 that are moved onto the support film 10 are those which are judged to be satisfactory by the step of inspecting the outer appearance of the adhesive films 21a. The acceptability of the adhesive films 21a is judged based on a predetermined standard, according to the desired product specifications. This will allow a multilayer film 1 to be obtained that contains essentially no defective adhesive films 21a.
10
68
21
22
68
10
22
32
22
12
90
22
12
69
22
1
a
c
c
a.
FIG. 5
FIGS. 1 and 2
The support film supplied around the peripheral surface of the roll is attached to the adhesive films which have been laminated with the pressure-sensitive adhesive layer . The roll contact bonds the support film to the pressure-sensitive adhesive layer . Next, as shown in , a cutter having a ring-shaped blade is used to cut the pressure-sensitive adhesive layer and base film along the shape of the ring-shaped exposed section . Portions of the ring-shaped pressure-sensitive adhesive layer and portions of the base film are removed via the peripheral surface of the roll , thus forming the pressure-sensitive adhesive films These steps yield a multilayer film as illustrated in .
The support film 10 is preferably a resin film such as a polyethylene terephthalate film. When an adhesive film is formed on the support film via a step of coating an adhesive solution onto a support film, it is generally difficult to ensure sufficient releasability of the adhesive film from the support film. With insufficient releasability, the adhesive film and pressure-sensitive adhesive film may peel when the laminated body comprising the adhesive film is released from the support film. Therefore, the surface of the support film must usually be release-treated. This creates a problem, however, in that when the layer of the release agent such as a silicone-based release agent is formed on the surface of the adhesive film side of the support film, the release agent is transferred onto the surface of the part of the overhanging section of the pressure-sensitive adhesive film that contacts the support film, potentially reducing the pressure-sensitive adhesive force of the overhanging section. In addition, the properties of the adhesive film may be impaired by inclusion of the release agent, or the semiconductor element may become fouled by the release agent.
Instead, according to this embodiment the adhesive films 21a formed on the temporary base 11 are transferred to the support film 10 without coating the adhesive solution onto the support film 10, and therefore the adhesive films 21a are satisfactorily releasable from the support film 10 even without subjecting the surface of the adhesive film 21a side of the support film 10 to release treatment. According to this embodiment, therefore, it is possible to employ a support film 10 with essentially no release agent on the surfaces of the adhesive films 21a yet while maintaining sufficient releasability of the adhesive films 21a from the support film 10. Since release treatment of the support film 10 is not necessary, this embodiment can avoid the problem described above. Moreover, since the adhesive film 21a can be removed and reattached even with some strength of adhesiveness between the adhesive film 21a and temporary base 11, release treatment of the temporary base 11 will not be necessary in all cases.
When the adhesive film is formed on the support film by a step of coating an adhesive solution on the support film, portions of the adhesive layer may remain on the support film around the adhesive film remaining on the support film. Adhesive layer remnants will tend to remain, particularly when the support film is not release-treated. The remaining adhesive may be transferred onto the overhanging section of the pressure-sensitive adhesive film, potentially lowering the adhesive property of the overhanging section. This problem is also solved by this embodiment.
Furthermore, when the adhesive film is formed on the support film by a step of coating an adhesive solution on the support film, it is necessary to cut the adhesive film on the support film, and therefore cuts will tend to be created in the support film surface during cutting of the adhesive film. The presence of such cuts can leave extraneous material such as film dust on the support film surface around the adhesive film. This embodiment of the invention can minimize such cuts and extraneous material.
FIGS. 6, 7 and 8 are schematic drawings for another embodiment of a process for production of the multilayer film 1. In the embodiment shown in FIG. 6, the temporary base 11 is removed by pulling the acute angle section of a release sheet 40 having a cross-section with an acute angle shape, instead of with the roll 63 shown in FIG. 4. The adhesive films 21a are inserted between a pair of mutually opposing rolls 70, 71, while the temporary base 11 is removed. A dicing film 9 is supplied around the peripheral surface of the roll 70 while the support film 10 is supplied around the peripheral surface of the roll 71. The adhesive films 21a are sandwiched between the pressure-sensitive adhesive layer 22 and support film 10 between the pair of rolls 70, 71.
In the embodiment shown in FIG. 7, a cutter 31 is used to cut the adhesive layer 21 on the temporary base 11 into circles together with cover films 13. The adhesive films 21a and the cover films 13 on the adhesive films 21a remain, so that portions 21b of the adhesive layer and portions 13b of the cover films, as the unneeded portions, are removed around the peripheral surface of the roll 72. Next, the temporary base 11 is removed by pulling the acute angle section of the release sheet 40 having a cross-section with an acute angle shape. Together with removal of the temporary base 11, the adhesive films 21a and cover films 13 become attached to the pressure-sensitive adhesive layer 22 of the dicing film 9 that is supplied via the roll 74. Downstream from the release sheet 40 there is provided an adsorption pad 50 at a location on the side of the dicing film 9 opposite the adhesive films 21a, and the action of the adsorption pad 50 causes the dicing film 9 to be adsorbed onto the adsorption pad 50. The adsorption pad 50 attracts the dicing film 9 by the force of static electricity, vacuum pressure or the like. The adhesive films 21a are contact bonded by the roll 75 onto the dicing film 9 which has been adsorbed onto the adsorption pad 50. The cover films 13 are then removed, and the support film 10 is laminated onto the exposed adhesive films 21a by a roll 76.
In the embodiment shown in FIG. 8, a roll releases the temporary base from the laminated body comprising the adhesive films and cover films. The laminated body is adsorbed onto an adsorption pad standing by on the side of the laminated body opposite the temporary base, at the position where the temporary base is to be released. The adsorption pad bearing the laminated body comprising the adhesive films and cover films is transported downstream, and the adhesive films are attached to the pressure-sensitive adhesive layer of the dicing film that is supplied by a roll. The cover films are then removed, and the support film is laminated onto the exposed adhesive films by another roll.
The steps illustrated in FIGS. 6 to 8 result in formation of a pressure-sensitive adhesive film in the same manner as the embodiment of FIG. 5.
The present invention is not limited to the embodiments described above, and it may incorporate appropriate modifications that still fall within the gist of the invention. For example, the width of the temporary base may be smaller than the width of the support film. This can reduce the amount of unneeded portions of the adhesive layer in the widthwise direction. Also, the process is not particularly restricted so long as it is a continuous process in which the adhesive film is moved from the temporary base onto the support film and a pressure-sensitive adhesive film is formed on the adhesive film, and the order of attachment or reattachment of each of the constituent members may be modified as appropriate.
By producing a multilayer film having a plurality of adhesive films formed on a support film according to the invention, it is possible to efficiently form the plurality of adhesive films with any desired predetermined spacing that is different from the spacing in the final product, or without spacing. As a result, reduced waste and lower production cost are achieved since the amount of adhesive layer to be removed as unneeded sections can be minimized. Also according to the invention, it is possible to avoid cuts in the support film surface and prevent residue of contaminants such as film dust caused as a result.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a plan view showing an embodiment of a multilayer film.
FIG. 2 is an end view of FIG. 1 along line II-II.
FIG. 3 is an end view showing the step of dicing a semiconductor wafer.
FIG. 4 is a schematic drawing showing an embodiment of a process for production of a multilayer film.
FIG. 5 is a schematic drawing showing an embodiment of a process for production of a multilayer film.
FIG. 6 is a schematic drawing showing an embodiment of a process for production of a multilayer film.
FIG. 7 is a schematic drawing showing an embodiment of a process for production of a multilayer film.
FIG. 8 is a schematic drawing showing an embodiment of a process for production of a multilayer film.
Katie Speicher recently found herself spending a sunny weekend on the Chesapeake Bay -- but it wasn’t for a vacation. Instead, the trip was just one of the many highlights she experienced while taking an environmental resources management class.
The course, offered by the College of Agricultural Sciences, specifically focuses on the issues surrounding the health and stability of the Chesapeake Bay watershed, with an emphasis on real-world application. Speicher, a junior majoring in environmental resource management, elaborated on the aspects that make this course different from others at Penn State.
"The best part of this course is how applicable it is to real life. There were no exams in this class, but rather we were given large assignments that mimicked something we might have to do in a real job, like developing grant proposals and public outreach tools."
Speicher decided to take the course because she grew up in the Chesapeake Bay watershed and is interested in water-quality issues. She also hoped to gain practical experience for a possible career in an environmental field. She found the networking opportunities offered by the course to be particularly valuable.
"Through our guest speakers and multiple field trips we were able to make a lot of professional contacts," she said. "As one of the speakers said, you need to put yourself inside the circle instead of just standing outside of it, and I definitely think this class helped all of us to do that."
In addition to the weekend trip to the Chesapeake Bay, the course also featured field trips to local wastewater treatment facilities and a tour of the campus' stormwater best-management practices. All of the trips are led by local professionals in the industry, adding to the real-world experience gained by students like Speicher.
After the completion of each trip, students submit a written travel retrospective that includes analysis and discussion, accounting for 20 percent of their final grade. Speicher found that developing these written assignments was one of the tougher aspects of the course.
"I could memorize a thousand facts about the Chesapeake Bay, but actually having to take that information and make it understandable to a variety of audiences was challenging, but also rewarding."
After graduation, Speicher hopes to continue working in an environmental field, improving the natural environment using skills developed from this course.
"Coming to college has made me realize how uncommon it is to understand the relationship between agriculture and the environment," she said. "It is really sad to me that more people aren't aware of the natural world, and it has made me aware of how vital education and outreach are." | https://news.psu.edu/story/351971/2015/04/08/academics/student-stories-baywatch-environmental-major-immersed-chesapeake |
---------------------------------------------------------------------- |-- Module : Text.XML.Light.Input-- Copyright : (c) Galois, Inc. 2007-- License : BSD3---- Maintainer: Iavor S. Diatchki <[email protected]>-- Stability : provisional-- Portability: portable---- Lightweight XML parsing--moduleText.XML.Light.Input(parseXML,parseXMLDoc)whereimportText.XML.Light.TypesimportText.XML.Light.ProcimportText.XML.Light.Output(tagEnd)importData.Char(isSpace)importData.List(isPrefixOf)importNumeric(readHex)-- | parseXMLDoc, parse a XMLl document to maybe an elementparseXMLDoc::String->MaybeElementparseXMLDocxs=strip(parseXMLxs)wherestripcs=caseonlyElemscsofe:es|"?xml"`isPrefixOf`qName(elNamee)->strip(mapElemes)|otherwise->Juste_->Nothing-- | parseXML to a list of content chunksparseXML::String->[Content]parseXMLxs=parse$tokens$preprocessxs------------------------------------------------------------------------parse::[Token]->[Content]parse[]=[]parsets=let(es,_,ts1)=nodes([],Nothing)[]tsines++parsets1-- Information about namespaces.-- The first component is a map that associates prefixes to URIs,-- the second is the URI for the default namespace, if one was provided.typeNSInfo=([(String,String)],MaybeString)nodes::NSInfo->[QName]->[Token]->([Content],[QName],[Token])nodesnsps(TokCRefref:ts)=let(es,qs,ts1)=nodesnspstsin(CRefref:es,qs,ts1)nodesnsps(TokTexttxt:ts)=let(es,qs,ts1)=nodesnspsts(more,es1)=caseesofTextcd:es1'|cdVerbatimcd==cdVerbatimtxt->(cdDatacd,es1')_->([],es)in(Texttxt{cdData=cdDatatxt++more}:es1,qs,ts1)nodescur_infops(TokStartptasempty:ts)=(node:siblings,open,toks)wherenew_name=annotNamenew_infotnew_info=foldraddNScur_infoasnode=ElemElement{elLine=Justp,elName=new_name,elAttribs=map(annotAttrnew_info)as,elContent=children}(children,(siblings,open,toks))|empty=([],nodescur_infopsts)|otherwise=let(es1,qs1,ts1)=nodesnew_info(new_name:ps)tsin(es1,caseqs1of[]->nodescur_infopsts1_:qs3->([],qs3,ts1))nodesnsps(TokEndpt:ts)=lett1=annotNamenstincasebreak(t1==)psof(as,_:_)->([],as,ts)-- Unknown closing tag. Insert as text.(_,[])->let(es,qs,ts1)=nodesnspstsin(TextCData{cdLine=Justp,cdVerbatim=CDataText,cdData=tagEndt""}:es,qs,ts1)nodes_ps[]=([],ps,[])annotName::NSInfo->QName->QNameannotName(namespaces,def_ns)n=n{qURI=maybedef_ns(`lookup`namespaces)(qPrefixn)}annotAttr::NSInfo->Attr->AttrannotAttrnsa@(Attr{attrKey=k})=case(qPrefixk,qNamek)of-- Do not apply the default name-space to unqualified-- attributes. 
See Section 6.2 of <http://www.w3.org/TR/REC-xml-names>.(Nothing,_)->a_->a{attrKey=annotNamensk}addNS::Attr->NSInfo->NSInfoaddNS(Attr{attrKey=key,attrVal=val})(ns,def)=case(qPrefixkey,qNamekey)of(Nothing,"xmlns")->(ns,ifnullvalthenNothingelseJustval)(Just"xmlns",k)->((k,val):ns,def)_->(ns,def)-- Lexer -----------------------------------------------------------------------typeLChar=(Line,Char)typeLString=[LChar]dataToken=TokStartLineQName[Attr]Bool-- is empty?|TokEndLineQName|TokCRefString|TokTextCDataderivingShowtokens::String->[Token]tokens=tokens'.linenumber1tokens'::LString->[Token]tokens'((_,'<'):c@(_,'!'):cs)=specialccstokens'((_,'<'):cs)=tag(dropSpacecs)-- we are being nice heretokens'[]=[]tokens'cs@((l,_):_)=let(as,bs)=breakn('<'==)csinmapcvt(decode_textas)++tokens'bs-- XXX: Note, some of the lines might be a bit inacuaratewherecvt(TxtBitx)=TokTextCData{cdLine=Justl,cdVerbatim=CDataText,cdData=x}cvt(CRefBitx)=casecref_to_charxofJustc->TokTextCData{cdLine=Justl,cdVerbatim=CDataText,cdData=[c]}Nothing->TokCRefxspecial::LChar->LString->[Token]special_((_,'-'):(_,'-'):cs)=skipcswhereskip((_,'-'):(_,'-'):(_,'>'):ds)=tokens'dsskip(_:ds)=skipdsskip[]=[]-- unterminated commentspecialc((_,'['):(_,'C'):(_,'D'):(_,'A'):(_,'T'):(_,'A'):(_,'['):cs)=let(xs,ts)=cdatacsinTokTextCData{cdLine=Just(fstc),cdVerbatim=CDataVerbatim,cdData=xs}:tokens'tswherecdata((_,']'):(_,']'):(_,'>'):ds)=([],ds)cdata((_,d):ds)=let(xs,ys)=cdatadsin(d:xs,ys)cdata[]=([],[])specialccs=let(xs,ts)=munch""0csinTokTextCData{cdLine=Just(fstc),cdVerbatim=CDataRaw,cdData='<':'!':(reversexs)}:tokens'tswheremunchaccnesting((_,'>'):ds)|nesting==(0::Int)=('>':acc,ds)|otherwise=munch('>':acc)(nesting-1)dsmunchaccnesting((_,'<'):ds)=munch('<':acc)(nesting+1)dsmunchaccn((_,x):ds)=munch(x:acc)ndsmunchacc_[]=(acc,[])-- unterminated DTD markup--special c cs = tag (c : cs) -- invalid specials are processed as tagsqualName::LString->(QName,LString)qualNamexs=let(as,bs)=breaknendNamexs(q,n)=casebreak(':'==)asof(q1,_:n1)->(Justq1,n1)_->(Nothing,as)in(QName{qURI=Nothing,qPrefix=q,qName=n},bs)whereendNamex=isSpacex||x=='='||x=='>'||x=='/'tag::LString->[Token]tag((p,'/'):cs)=let(n,ds)=qualName(dropSpacecs)inTokEndpn:casedsof(_,'>'):es->tokens'es-- tag was not properly closed..._->tokens'dstag[]=[]tagcs=let(n,ds)=qualNamecs(as,b,ts)=attribs(dropSpaceds)inTokStart(fst(headcs))nasb:tsattribs::LString->([Attr],Bool,[Token])attribscs=casecsof(_,'>'):ds->([],False,tokens'ds)(_,'/'):ds->([],True,casedsof(_,'>'):es->tokens'es-- insert missing > ..._->tokens'ds)(_,'?'):(_,'>'):ds->([],True,tokens'ds)-- doc ended within a tag..[]->([],False,[])_->let(a,cs1)=attribcs(as,b,ts)=attribscs1in(a:as,b,ts)attrib::LString->(Attr,LString)attribcs=let(ks,cs1)=qualNamecs(vs,cs2)=attr_val(dropSpacecs1)in((Attrks(decode_attrvs)),dropSpacecs2)attr_val::LString->(String,LString)attr_val((_,'='):cs)=string(dropSpacecs)attr_valcs=("",cs)dropSpace::LString->LStringdropSpace=dropWhile(isSpace.snd)-- | Match the value for an attribute. 
For malformed XML we do-- our best to guess the programmer's intention.string::LString->(String,LString)string((_,'"'):cs)=break'('"'==)cs-- Allow attributes to be enclosed between ' '.string((_,'\''):cs)=break'('\''==)cs-- Allow attributes that are not enclosed by anything.stringcs=breakneoscswhereeosx=isSpacex||x=='>'||x=='/'break'::(a->Bool)->[(b,a)]->([a],[(b,a)])break'pxs=let(as,bs)=breaknpxsin(as,casebsof[]->[]_:cs->cs)breakn::(a->Bool)->[(b,a)]->([a],[(b,a)])breaknpl=(mapsndas,bs)where(as,bs)=break(p.snd)ldecode_attr::String->Stringdecode_attrcs=concatMapcvt(decode_textcs)wherecvt(TxtBitx)=xcvt(CRefBitx)=casecref_to_charxofJustc->[c]Nothing->'&':x++";"dataTxt=TxtBitString|CRefBitStringderivingShowdecode_text::[Char]->[Txt]decode_textxs@('&':cs)=casebreak(';'==)csof(as,_:bs)->CRefBitas:decode_textbs_->[TxtBitxs]decode_text[]=[]decode_textcs=let(as,bs)=break('&'==)csinTxtBitas:decode_textbscref_to_char::[Char]->MaybeCharcref_to_charcs=casecsof'#':ds->num_escds"lt"->Just'<'"gt"->Just'>'"amp"->Just'&'"apos"->Just'\''"quot"->Just'"'_->Nothingnum_esc::String->MaybeCharnum_esccs=casecsof'x':ds->check(readHexds)_->check(readscs)wherecheck[(n,"")]=cvt_charncheck_=Nothingcvt_char::Int->MaybeCharcvt_charx|fromEnum(minBound::Char)<=x&&x<=fromEnum(maxBound::Char)=Just(toEnumx)|otherwise=Nothingpreprocess::String->Stringpreprocess('\r':'\n':cs)='\n':preprocesscspreprocess('\r':cs)='\n':preprocesscspreprocess(c:cs)=c:preprocesscspreprocess[]=[]linenumber::Line->String->LStringlinenumber_[]=[]linenumbern('\n':s)=n'`seq`((n,'\n'):linenumbern's)wheren'=n+1linenumbern(c:s)=(n,c):linenumberns
| |
About a week or so ago this article titled “The Really Big One” by Kathryn Schulz from The New Yorker made the rounds on Facebook and it got me to really consider what a Cascadia earthquake meant for me (I live in the Pacific NW) as well as for the many millions of others who call this beautiful area home.
If you haven’t read the article you should take the time to do so as it does a good job of painting a vivid picture of the devastation that such an earthquake could have to our area.
But if you’re unaware I’ll quickly explain the scenario…
Earthquakes are caused when earth’s large tectonic plates get stuck trying to pass by or under/over each other; eventually they “break free” and that’s when all of the rumblings and other bad things happen.
If you live in California then you’re no stranger to the small ones and I’m sure you’ve heard that the San Andreas is going to break off California from the rest of the states–or at least cause a ton of damage–for at least the past few decades. Heck, they even made a movie about it… go figure.
Fortunately, that’s yet to happen and I’m that sure so many folks are nearly oblivious to the term “San Andreas earthquake.” I remember as a kid growing up in California I don’t think we even got out of bed for anything under a 4.0. 😉
The Cascadia fault line lies just off the coast of the Pacific Northwest as shown here (in the red):
As you can see, it pretty much runs from northern California to Vancouver Island… many hundreds of miles.
You might think “that’s not so bad” because it’s off the coast but it’s plenty close enough to cause horrible damage to the major cities of Seattle and Portland, to name two of the biggest.
The Cascadia event would, some suggest, be somewhere between magnitude 8.7 and 9.2. That's on par with the earthquake that devastated Japan and caused their never-ending Fukushima troubles.
You can read the article I mentioned at the start (and cite below), but suffice it to say that the Pacific NW is not nearly as *ready* for a major earthquake as Japan was.
Regrettably, we’re about due, on average, for the REALLY “big one” to hit the Pacific NW and it’s likely going to be far worse–both in lives lost and property damage–than any other natural disaster to hit America since its inception as a nation.
As bad as the earthquake would be scientists feel that the resulting tsunami could be even worse:
“The water will surge upward into a huge hill, then promptly collapse. One side will rush west, toward Japan. The other side will rush east, in a seven-hundred-mile liquid wall that will reach the Northwest coast, on average, fifteen minutes after the earthquake begins. By the time the shaking has ceased and the tsunami has receded, the region will be unrecognizable. Kenneth Murphy, who directs FEMA’s Region X, the division responsible for Oregon, Washington, Idaho, and Alaska, says, ‘Our operating assumption is that everything west of Interstate 5 will be toast.’”
Well, that’s not comforting, especially since I live west of I-5. And these are odds I wouldn’t want to take:
“…we now know that the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three. The odds of the very big one are roughly one in ten.”
As far as disasters go that’s pretty much like saying: “It’s going to happen and soon.”
My point ultimately is that if you live anywhere in the Pacific NW and have been ignoring this very likely threat to you and your family, please don’t do so any longer.
Yes, there are always reasons to NOT prepare yourself but this is a very good reason TO prepare yourself. At the very least be able and ready to be without systems of support for a few weeks… a month or two would be better. | https://rethinksurvival.com/have-you-read-the-earthquake-that-will-devastate-the-pacific-northwest/ |
A quantitative analysis of stem form and crown structure: the S-curve and its application.
The statical model of Oohata and Shinozaki (1979) was applied to derive a whole stem form function for Cryptomeria japonica D. Don, Chamaecyparis obtusa Endl., and Larix leptolepis Gordon. Defining stem density (weight per unit length) at a depth z from the tree top as S(z), the relationships between the total stem weight T(s)(z) from the apex to the z-horizon and S(z) were examined. A formula with two exponential functions, corresponding to the stem form above and below the crown base, was obtained. This formula was termed the "S-curve." Applying the same analysis to the total weight of a branch and the weight of a unit length (10 cm) at the branch base yielded a similar curve. This result suggests that the formula for branch form can be determined from a single branch in a forest stand. It also provides quantitative evidence of the fractal structure of trees.
| |
TECHNICAL FIELD
This invention relates to methods and systems for routing in an ATM network and, in particular, to methods and systems for determining optimal multicast routes in an ATM network.
BACKGROUND ART
Asynchronous Transfer Mode (ATM) has emerged as a very promising transport technique for supporting services of diverse bit-rate and performance requirements in future broadband networks. High-speed packet switches are essential elements for successful implementation of ATM networks. If a significant population of network users are potential broadband-service subscribers, high-capacity packet switches with a large number of input and output ports are required.
Two basic approaches in large packet switch designs emerge as a result of recent research activities. Both approaches concentrate on scalable designs that construct a large switch using smaller switch modules. The first approach strives to avoid internal buffering of packets in order to simplify traffic management. Examples in this category are the Modular switch, the generalized Knockout switch, and the 3-stage generalized dilated-banyan switch (with no buffering at the center stage).
The second approach attempts to build a large switch by simply interconnecting switch modules as nodes in a regularly-structured network, with each switch module having its own buffer for temporary storage of packets. A notable example in this category is illustrated in FIG. 1a and described in the article by H. Suzuki, H. Hagano, T. Suzuki, T. Takeuchi, and S. Iwasaki, "Output-Buffer Switch Architecture For Asynchronous Transfer Mode," CONF. RECORD, IEEE ICC '89, pp. 99-103, June 1989. As described in the article and as illustrated in FIG. 1a, output-buffered switch modules 10 are connected together as in the 3-stage Clos circuit-switch architectures. The switch modules 10 have internal buffers 12 at their outputs. Typically, a packet must pass through several queues before reaching its desired output in these switch architectures.
Because of the simplicity of switches in the second approach, they have been the potential focus of several switch vendors. However, these switches necessitate more complicated network control mechanisms, since more queues must be managed. In addition, for the Clos architecture, routing within the switching network becomes an issue because there are multiple paths from any input port to any output port as illustrated in FIG. 1b.
Things become even more complicated if multicast (point-to-multipoint) connections, an important class of future broadband services, are to be supported. For communications networks that use these switching networks for switching in their nodes, each node should be treated as a "micronetwork" rather than an abstract entity with queues at the output links only, as is done traditionally. Internal and output buffers 14 are provided as illustrated in FIG. 1b.
An open question is to what extent the internal buffers in the micronetwork would complicate traffic management and whether routing algorithms for call setups would require unacceptably long execution times. It is assumed that all switch modules in the micronetwork have multicast capability. The multicast routing problem in a 3-stage Clos switching network can be compared with the multicast routing problem in a general network. Three features associated with routing in the Clos network are:
1. Necessity for a very fast setup algorithm;
2. Large numbers of switch modules and links; and
3. Regularity and symmetry of the network topology.
It is necessary to have an algorithm that is faster and more efficient than those used in a general network because the Clos switching network is only a subnetwork within an overall communications network.
From the viewpoint of the overall network, the algorithm performed at each Clos switching network is only part of the whole routing algorithm. Adding to the complexity is the highly connected structure of the Clos network, which dictates the examination of a large number of different routing alternatives. The Clos network is stage-wise fully connected in that each switch module is connected to all other switch modules at the adjacent stage.
As an example, for a modest Clos network with 1024 input and output ports made of 32 inputs×32 outputs switch modules (with n=32, m=32, p=32 as illustrated in FIG. 1a), the numbers of nodes and links are 96 and 3072, respectively. Thus, algorithms tailored for a general network are likely to run longer than the allotted call-setup time. Both features 1 and 2 above argue for the need for a more efficient algorithm, and feature 3, regularity of the network topology, may lend itself to such an algorithm.
In the article entitled "Nonblocking Networks for Fast Packet Switching" CONF. RECORD, IEEE INFOCOM '89, pp. 548-557, April 1989, by R. Melen and J. S. Turner, the relationships between various switch parameters that guarantee an ATM Clos network to be nonblocking are derived. In the ATM setting, each input and output link in a switch contains traffic originating from different connections with varying bandwidth requirements.
An ATM switch is said to be nonblocking if a connection request can find a path from its input to its targeted output and the bandwidth required by the connection does not exceed the remaining bandwidths on both the input and output. What was not addressed in Melen and Turner is the issue of routing. Even though the switch used may be nonblocking as defined, a connection may still suffer unacceptable performance in terms of delay and packet loss if the wrong path is chosen. This is due to contention among packets for common routes in the ATM setting where packet arrivals on different inputs are not coordinated. Consequently, regardless of whether the switch is nonblocking, some routes will be preferable because they are less congested.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a method and system for multicast routing in an ATM network to ensure cost-effective, high-quality multicast services such as teleconferencing, video-on-demand, etc. for the duration of the service.
Another object of the present invention is to provide a method and apparatus for multicast routing in an ATM network to facilitate quick call setups without sacrificing the quality of service for calls.
In carrying out the above objects and other objects of the present invention, a method is provided for determining optimal routes in a network including a multicast tree. The tree includes a plurality of nodes interconnected by links. The plurality of nodes includes a source node, multiple end nodes and multiple intermediate nodes. The method finds optimal routes from the source node to the multiple end nodes. The method includes the step of assigning a weight to each link in the tree. The weight is representative of a traffic congestion level on each link. The method also includes the step of minimizing the sum-total of link weights from the source node to the multiple end nodes to determine the optimal routes.
In one embodiment, the step of minimizing is at least partially accomplished by an optimal algorithm which utilizes a "trimming" procedure.
Preferably, the step of minimizing is at least partially accomplished by two heuristic algorithms which restrict the solution to a subset of all possible solutions.
A system is also provided for carrying out each of the above method steps.
The method and system can be applied to two areas: 1) Future communications networks that use cross-connects to configure ATM networks into simple two-hop logical structures; this could be done, for instance, to facilitate network control and increase network reliability; and 2) Future ATM switching networks (i.e., micronetworks within a switching node). Several switch manufacturers have adopted the 3-stage Clos network, a two-hop network, as the way to scale their ATM switches. The proposed routing methodology ensures efficient use of the switching and transmission resources in communications networks. It also guarantees grades of service superior to those in a network without a well-thought-out routing scheme. The remainder of the specification concentrates on the second application, although the invention readily extends to any communications networks with two-hop, or a combination of one-hop and two-hop, structures.
The optimal algorithm uses a "trimming" procedure to eliminate a majority of the nonoptimal alternatives from consideration, thus saving a substantial amount of run time. The solution found is guaranteed to be the best solution. The two heuristic algorithms reduce the run time further by judiciously restricting the solution to a subset of all possible solutions. Although the solutions given are not necessarily the best, they are generally acceptable from an engineering viewpoint.
The first heuristic algorithm consists of three steps, with each step attempting to improve on the solution found by the preceding step. The second heuristic algorithm is a modification of the optimal algorithm in which the intermediate nodes in the multicast tree are restricted to a subset of all available intermediate nodes.
The above objects and other objects, features, and advantages of the present invention are readily apparent from the following detailed description of the best mode for carrying out the invention when taken in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a is a schematic block diagram view illustrating a 3-stage Clos switching network having internal buffers at outputs of switch modules;
FIG. 1b is a schematic view illustrating Clos switches as micronetworks within a communications network and having internal and output buffers;
FIGS. 2a through 2c are schematic diagrams illustrating costs for various paths wherein FIG. 2b illustrates a shortest-path solution and FIG. 2c illustrates a Steiner-tree solution;
FIG. 3 is a schematic diagram illustrating a multicast connection;
FIG. 4 is a schematic view of an enumeration tree having nodes F, G and H for listing alternative solutions;
FIGS. 5a through 5d are schematic diagrams illustrating costs for various paths of multicast trees and utilizing a heuristic algorithm;
FIGS. 6a through 6d are graphs comparing run time and heuristic-to-optimal cost ratio versus the number of end nodes for m=8, 16, 32 and 64, respectively; and
FIGS. 7a through 7d are graphs comparing heuristic-to-optimal cost ratios for two heuristic algorithms.
BEST MODE FOR CARRYING OUT THE INVENTION
In general, the remainder of the specification is organized as follows. The section entitled "Assumptions" discusses the specific problem formulation and its definition with respect to other possibilities. Appendix 1 formulates the multicast routing problem in terms of the so-called warehouse location problem, thus showing that it is unlikely that an efficient optimal algorithm can be found for the routing problem, since the warehouse location problem is known to be a hard problem.
The section entitled "Algorithms For Multicast Routing In The Clos Network" presents the designs of an optimal and two heuristic algorithms, with the details of the recursive optimal algorithm given in Appendix 2.
The section entitled "Computation Results" shows that the heuristic algorithms can have much faster response time than the optimal algorithm while achieving near-optimal routing. Implications of the results for actual real-time implementation of the routing schemes in switching networks are also discussed. Finally, the main results and conclusions are summarized in the section entitled "Conclusions."
Assumptions
Routing in any network of switch modules can be posed as a graph problem in which the switch modules correspond to nodes and the links correspond to directed arcs in the graph. A weight is assigned to each arc, and its value corresponds to the congestion level on the associated link. For instance, the weight assigned could be traffic load, packet mean delay, packet loss rate, mean buffer occupancy, or other traffic measures. Alternatively, it could be a weighted function of all these parameters. In either case, the weight of an arc corresponds to the "undesirability" of choosing the arc as part of the overall route. One may argue that more than one parameter is needed to capture the traffic characteristics on each link. Although such a "multi-objective optimization" problem is not described in detail in this application, the treatment here provides a basis for extension along this line.
It is also assumed that the undesirability of a route is the sum of all the weights of the arcs in the route. For instance, if the weights are taken to be the mean delays of the links, this approach aims to minimize the mean delay of the overall route. As far as point-to-point connections are concerned, the routing problem in this formulation becomes a shortest-path routing problem. It is well known that there are good algorithms that can solve this problem within a short time.
The situation is not as clear-cut in multicast routing, which involves multiple paths from one source to several destinations. If we aim to optimize the local performance or grade-of-service perceived by each path, then the shortest-path formulation is still valid, simply because this approach assigns the least congested path to each input-output pair.
On the other hand, if we aim to minimize the global congestion level (e.g., the total buffer occupancies of all queues) of the overall switching network, then we are faced with a Steiner-tree problem, in which the sum-total of the weights of all the arcs in the multicast connection is to be optimized. The solutions given by taking these two different perspectives are different, as is shown in the example of FIGS. 2a-c, where multicast routing from node i at the first stage to nodes 0, 1, and 2 at the third stage is considered.
FIG. 2a illustrates the arc costs in the full graph. FIG. 2b illustrates a shortest-path solution where the costs of individual paths to end nodes are minimized. FIG. 2c illustrates a Steiner-tree solution where the sum of all arc costs in the multicast tree is minimized.
The global viewpoint has the advantage that it can accommodate more connection requests and that it reserves more capacity for future connection requests. Unfortunately, the general Steiner-tree problem is a hard problem without a known fast algorithm. In Appendix 1, it is shown that the two-hop structure of the Clos Network allows one to pose the multicast routing problem as a warehouse location problem. Although this problem is simpler than the Steiner-tree problem, it is still a hard problem if one aims for the optimal solution. Therefore, a heuristic algorithm that finds a close-to-optimal solution within a short time is desirable. The next section considers optimal as well as heuristic routing algorithms.
Algorithms For Multicast Routing In The Clos Network
Suppose one labels the three sets of nodes in stage 1, stage 2, and stage 3 of a Clos network as I, J, and K, respectively. As illustrated in FIG. 3, the weight of an arc from node i ∈ I to node j ∈ J is denoted by c_ij, and that from node j ∈ J to node k ∈ K by d_jk.
Suppose that one wants to multicast from input link p (of switch module i) to the set of output links Q (of switch modules K' ⊆ K). Then, the problem is basically to select a set of second-stage switch modules J' ⊆ J to be included in the multicast tree. If we only knew the second-stage nodes J' that are used in the optimal multicast tree, then the links in the tree can be easily found using the minimal-link selection process below:
Minimal-link Selection Process based on Node Set J'
1. The links from node i to all j ∈ J' are included.
2. In addition to these links, for each third-stage node k ∈ K', the minimal link (j_k, k) is chosen for connection from the second stage to node k; i.e., the second-stage node j_k chosen for the connection is such that j_k ∈ J' and
d_{j_k k} ≦ d_{jk} for all j ∈ J'.
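As a concrete illustration of this selection step, the following Python sketch computes the arcs of the multicast tree for a given proposed set J'. The cost tables c and d and the toy values are hypothetical stand-ins for the link weights discussed above, not data from the patent.

def minimal_link_selection(i, J_prime, K_prime, c, d):
    """Build the multicast tree for proposed second-stage nodes J_prime.
    c[i][j] is the weight of arc (i, j); d[j][k] is the weight of arc (j, k)."""
    first_stage_arcs = [(i, j) for j in J_prime]          # step 1: include every arc (i, j), j in J'
    second_stage_arcs = []
    for k in K_prime:                                     # step 2: minimal arc into each end node k
        j_k = min(J_prime, key=lambda j: d[j][k])
        second_stage_arcs.append((j_k, k))
    return first_stage_arcs, second_stage_arcs

# Toy example with made-up weights.
c = {0: {0: 1.0, 1: 2.0, 2: 0.5}}
d = {0: {0: 0.3, 1: 0.9}, 1: {0: 0.4, 1: 0.2}, 2: {0: 0.8, 1: 0.7}}
print(minimal_link_selection(0, [0, 1], [0, 1], c, d))
# ([(0, 0), (0, 1)], [(0, 0), (1, 1)])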
The problem, of course, is that one does not know the second-stage nodes used in the optimal multicast tree, and finding them is not easy.
Given a subset of second-stage switch modules that is proposed for use as intermediate nodes, not necessarily those used in the optimal solution, one can easily compute the best solution based on that proposal using the minimal-link selection process described above. If m ≦ |K'|, there are 2^m - 1 possible proposals, ranging from those with only one intermediate module to that with all m intermediate modules. If m > |K'|, there are Σ_{r=1}^{|K'|} C(m, r) possible proposals, where C(m, r) denotes the number of ways of choosing r of the m modules, ranging from those with one intermediate module to those with |K'| intermediate modules; proposals with more than |K'| intermediate modules need not be considered because at most |K'| intermediate modules will be used in any multicast tree. Thus, there are Σ_{r=1}^{min(m,|K'|)} C(m, r) possible proposals. A brute-force method for the overall algorithm is to go through all the proposals, calculate the best solution associated with each proposal, and choose the one with the lowest cost. With this exhaustive enumeration method, the run time of the algorithm grows exponentially with m. Assuming m ≦ |K'| and a modest m value of 32, for instance, there are more than four billion proposals that must be considered. Fortunately, there are ways to eliminate some of the non-optimal alternatives without computing their solutions. The paper entitled "Optimal Location of Plants" by A. Alcouffe and G. Muratet in MANAGEMENT SCIENCE, Vol. 23, pp. 267-274, Nov. 1976 provides one such algorithm. However, the algorithm presented hereinbelow is more efficient because it can eliminate more non-optimal alternatives at the outset.
To understand the present algorithm, for simplicity, one considers all the 2^m subsets of intermediate nodes and attempts to devise a method for enumerating the proposals. The fact that some of the proposals need not be considered will be taken into account below to further improve the algorithm. FIG. 4 shows one possible enumeration scheme depicted as a tree in which the leaf nodes on the right are the 2^m alternative proposals. Each node in the enumeration tree, whether it is a leaf node or not, is represented by three disjoint subsets of intermediate switch modules, F, G, and H. F denotes the proposed second-stage switch modules, G denotes the excluded switch modules, and H denotes the switch modules that have neither been proposed nor excluded so far in the enumeration process. The enumeration process starts with the root node on the left with all modules being in H originally. At each node of the enumeration tree, a new module is taken from H, and the tree branches off in two directions with the chosen module being assigned to F and G, respectively. After m levels of branching, one ends up with each module either being assigned to F or G for the 2^m leaf nodes, thus completing the enumeration process.
The basis of the present multicast routing algorithm is as follows: If one can determine during the enumeration process that the best solution given by the leaf nodes of one branch is inferior to the solution given by some leaf node of the other branch, then one needs to branch in the latter direction only, since the former direction will not yield the optimal solution anyway. This can potentially save a lot of computation. In the following, a theorem is adapted from the article "Optimal Location of Plants", noted above, for such a trimming process.
Let C(F) be the cost of the particular solution with node set F being the proposed intermediate nodes. Specifically,
C(F) = Σ_{j∈F} c_ij + Σ_{k∈K'} d_{j_k k},     (1)
where
j_k = arg min_{j∈F} d_jk
can be found by the minimal-link selection process described above (i.e., node k in stage 3 will be connected to node j_k in stage 2 via the link that has the smallest cost among all possible links). The first summation of C(F) includes all links from node i to nodes in F. However, it is possible for some nodes in F not to be used, because the arcs from them to nodes in K' are not minimal. Therefore, C(F) is the unadjusted cost: the adjusted cost has c_ij deducted from the unadjusted cost for any node j that is not used. This distinction, however, is not important if one considers all 2^m subsets of intermediate nodes as candidates for F in the optimization process, since there is an optimal candidate in which all nodes in F are used. Therefore, when comparing different solutions in our optimization process, one needs to concentrate only on the unadjusted cost.
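For concreteness, the unadjusted and adjusted costs can be computed directly from this definition; the short Python sketch below is illustrative only and reuses the same hypothetical c[i][j] and d[j][k] cost tables as the earlier example.

def unadjusted_cost(i, F, K_prime, c, d):
    """C(F): all first-stage arcs into F plus, for each end node k, the cheapest arc out of F."""
    if not F:
        return float("inf")                  # an empty proposal cannot reach any end node
    return (sum(c[i][j] for j in F)
            + sum(min(d[j][k] for j in F) for k in K_prime))

def adjusted_cost(i, F, K_prime, c, d):
    """Like C(F), but c[i][j] is only charged for proposed nodes j that some end node actually uses."""
    if not F:
        return float("inf")
    used = {min(F, key=lambda j: d[j][k]) for k in K_prime}
    return (sum(c[i][j] for j in used)
            + sum(min(d[j][k] for j in F) for k in K_prime))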
Given the above definition, one has the following theorem relating the unadjusted costs of four alternative proposals:
Theorem 1
Consider two subsets of the second-stage nodes S and T, where S ⊆ T, and a node h ∉ T. Then, C(S) - C(S ∪ {h}) ≧ C(T) - C(T ∪ {h}).
The validity of the theorem is seen in simple and intuitive terms. The cost savings due to the inclusion of node h in node set S and node set T are C(S) - C(S ∪ {h}) and C(T) - C(T ∪ {h}), respectively. Since S ⊆ T, as far as the costs of the arcs from stage 2 to stage 3 are concerned, the solution with S as the proposed nodes is less optimized than the solution with T as the proposed nodes. Therefore, adding node h to S is likely to achieve more cost saving than adding node h to T.
Proof
The left-hand side of the inequality is
C(S) - C(S ∪ {h}) = -c_ih + Σ_{k∈K'} (d_{j_k k} - d_{hk})^+,     (2)
where
j_k = arg min_{j∈S} d_jk and (x)^+ = max(0, x).
Similarly, the right-hand side of the inequality is
C(T) - C(T ∪ {h}) = -c_ih + Σ_{k∈K'} (d_{j'_k k} - d_{hk})^+,     (3)
where
j'_k = arg min_{j∈T} d_jk.
Clearly,
d_{j'_k k} ≦ d_{j_k k}
since S ⊆ T. Hence, equation (2) ≧ equation (3).
The above theorem is used as the basis of the solution-trimming process.
Optimal Algorithm: Enumeration-Tree Trimming Scheme
Consider an arbitrary node in the enumeration tree in FIG. 4 where the switch modules are distributed into the three sets F, G, H defined above. If a module is selected from H and put into F and G, the enumeration process branches off in two different directions. In FIG. 4, the particular module in H that is selected is fixed at each level. For instance, modules 0 and 1 are considered to be at the first and second levels, respectively.
The enumeration process is modified slightly by letting the module chosen be a variable. The test discussed hereinbelow can be used to determine whether, given the current status of F and G, one can eliminate one of the two branches without missing the optimal solution.
Test For Trimming Enumeration Tree
1. Choose each module h ∈ H successively until all modules in H have been considered. For each h, compute C(F ∪ H) and C(F ∪ H - {h}). If C(F ∪ H - {h}) > C(F ∪ H), move h from H to F; one will not miss the optimal solution by not considering the branch with h in G.
2. Choose each module h ∈ H successively until all modules in H have been considered. For each h, compute C(F) and C(F ∪ {h}). If C(F ∪ {h}) > C(F), move h from H to G; again, one will not miss the optimal solution by not considering the branch with h in F.
If neither of the tests above succeeds in moving any module from H to F or G, then trimming is not possible, and one must branch off in two directions by moving a module from H to both F and G.
To see how the first test works, substitute T in Theorem 1 with F ∪ H - {h}. If C(F ∪ H - {h}) - C(F ∪ H) = C(T) - C(T ∪ {h}) > 0, then C(S) - C(S ∪ {h}) > 0 for all S ⊆ T according to Theorem 1. One can interpret S as the proposed modules of an arbitrary leaf node belonging to the branch of the enumeration node where h is put into G. The above result says that there is a leaf node in the other branch which achieves lower cost by having h in addition to S as the proposed nodes. Thus, given the current status of F, G, and H, one will not miss enumerating the optimal solution if one only branches in the direction where h is in F. Similar reasoning applies to the second test by substituting S in Theorem 1 with F.
Finally, as previously mentioned, if |K'| < m (i.e., there are fewer than m stage-3 switch modules in the multicast connection), at most |K'| stage-2 switch modules are used. One can incorporate another test at each node of the enumeration tree: if |F| = |K'|, branch no more; this node is taken as one of the proposals to be examined below. This test can substantially reduce the computation needed if |K'| ≪ m.
Appendix 2 outlines an algorithm that makes use of the above tests in a recursive and efficient way in Pidgin Algol.
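The same enumerate-and-trim idea can be restated compactly in Python; the sketch below is an illustrative paraphrase of the tests described above (using the unadjusted cost), not a transcription of the Pidgin Algol code in Appendix 2, and it assumes small hypothetical inputs in the same c[i][j], d[j][k] form as the earlier examples.

def optimal_multicast(i, modules, K_prime, c, d):
    """Enumeration-tree search over proposed second-stage sets F with the two trimming tests."""

    def C(F):                                        # unadjusted cost of proposal F
        if not F:
            return float("inf")
        return (sum(c[i][j] for j in F)
                + sum(min(d[j][k] for j in F) for k in K_prime))

    best = [None, float("inf")]

    def visit(F, H):
        F, H = set(F), set(H)
        moved = True
        while moved and H:                           # apply tests 1 and 2 until no module moves
            moved = False
            for h in list(H):                        # test 1: excluding h cannot be optimal
                if C(F | (H - {h})) > C(F | H):
                    H.discard(h); F.add(h); moved = True
            for h in list(H):                        # test 2: proposing h cannot be optimal
                if C(F | {h}) > C(F):
                    H.discard(h); moved = True
        if not H or len(F) >= len(K_prime):          # leaf node (or |F| = |K'|): evaluate proposal F
            if C(F) < best[1]:
                best[0], best[1] = set(F), C(F)
            return
        h = next(iter(H))                            # otherwise branch on one undecided module
        visit(F | {h}, H - {h})                      # branch with h proposed
        visit(F, H - {h})                            # branch with h excluded

    visit(set(), set(modules))
    return best[0], best[1]

In the worst case this sketch still explores exponentially many proposals; the pruning only pays off when the two tests fire early, which is consistent with the run-time behavior reported in the Computation Results section.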
Heuristic Algorithm 1: 3-Step Augmentation Scheme
The run time of the optimal algorithm can be excessive in the worst case. Considered herein is a heuristic algorithm that attempts to find a solution that is close to optimal but within a shorter time. It consists of three procedures running in sequence, each improving on the solution given by the previous procedure. FIGS. 5a through 5d give a simple illustration of the heuristic algorithm, with FIG. 5a showing the arc costs in the full graph. Also, since not all 2^m subsets of intermediate nodes are considered in this optimization process, we concentrate on the adjusted cost when comparing different solutions.
3-Step Augmentation Algorithm
1. Find the shortest-path solution. That is, for each node k ∈ K', find j_k = arg min_{j∈J} (c_ij + d_jk), and make links (i, j_k), (j_k, k) and node j_k part of the multicast tree. This would be the "optimal" solution if all individual paths in the multicast connection were to be optimized as disclosed in the previous section. FIG. 5b illustrates the multicast tree after the shortest-path tree search.
2. Find a new multicast tree as follows. Denote the intermediate switch modules used in the shortest-path solution by V. Find the set of minimal links from node set V to node set K' using the minimal-link selection process based on V. That is, for each k ∈ K', find j_k = arg min_{j∈V} d_jk, and make links (i, j_k), (j_k, k) and node j_k part of the multicast tree. Remove nodes from V that are not part of the resulting multicast tree. FIG. 5c illustrates the multicast tree after optimization based on V.
3. For each node v ∈ V, denote the set of third-stage nodes attached to it in the multicast tree by W_v. See if these nodes can be attached to other nodes in V at a net cost saving. The original cost associated with the subtree of node v is
c_iv + Σ_{w∈W_v} d_vw,
and the cost associated with attaching the nodes in W_v to other nodes in V is
Σ_{w∈W_v} min_{j∈V-{v}} d_jw.
If
c_iv + Σ_{w∈W_v} d_vw - Σ_{w∈W_v} min_{j∈V-{v}} d_jw > 0,
then saving can be achieved; remove v from V and attach the nodes in W_v to the other nodes in V. FIG. 5d illustrates the multicast tree after the "trim-and-graft" operation.
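Putting the three steps together, a minimal Python sketch of this heuristic might look as follows; it is an illustration of the procedure described above (with hypothetical cost tables c and d as before), not the implementation evaluated in the patent.

def three_step_augmentation(i, J, K_prime, c, d):
    """Heuristic algorithm 1: shortest paths, re-optimisation over V, then trim-and-graft."""
    # Step 1: shortest-path solution for each end node individually.
    attach = {k: min(J, key=lambda j: c[i][j] + d[j][k]) for k in K_prime}
    V = set(attach.values())

    # Step 2: re-select the second-stage arcs using only the nodes in V.
    attach = {k: min(V, key=lambda j: d[j][k]) for k in K_prime}
    V = set(attach.values())

    # Step 3: trim-and-graft -- drop a node v if re-attaching its end nodes W_v gives a net saving.
    for v in list(V):
        if len(V) == 1:
            break
        W_v = [k for k, j in attach.items() if j == v]
        keep_cost = c[i][v] + sum(d[v][k] for k in W_v)
        move_cost = sum(min(d[j][k] for j in V - {v}) for k in W_v)
        if keep_cost - move_cost > 0:
            V.discard(v)
            for k in W_v:
                attach[k] = min(V, key=lambda j: d[j][k])
    return V, attach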
The 3-step augmentation scheme can be very good if the shortest-path solution in the first step already yields very good results, or the intermediate modules used in the shortest-path solution overlap substantially with those used in the optimal solution.
Heuristic Algorithm 2: Intermediate-Module Limiting Scheme
The second heuristic algorithm is based on the observation that if there were only a few intermediate modules in the Clos network (i.e., m is small), the optimal algorithm would terminate within a short run time. Therefore, in cases where m is large, one can devise a heuristic algorithm by intentionally removing some intermediate modules from consideration, as long as one is willing to give up absolute optimality. That is, one considers only m' of the m modules as candidates for use in the multicast tree, and the enumeration process in the optimal algorithm is modified so that the root enumeration node has only these m' intermediate modules in H. There are C(m, m') = m!/[m'!(m-m')!] ways of choosing the m' modules. By judiciously selecting one of the choices, one can maximize the probability of finding a good solution. The algorithm below chooses the m' modules based on those used in the shortest-path solution.
Intermediate-Module Limiting Algorithm
1. Find the shortest-path solution.
2. Denote the intermediate nodes used in the shortest-path solution by V. If |V| < m', one needs another m' - |V| modules. Select from the modules not already in V those that have the least first-stage link costs c_ij's. If |V| > m', one has too many modules. Remove from V those modules that have the highest link costs c_ij's.
3. Start the enumeration-tree trimming algorithm (see the optimal algorithm) with the m' chosen modules in H, and F = G = ∅.
If |V| ≦ m', this algorithm yields a solution that is at least as good as the one given by the 3-step augmentation algorithm. Otherwise, the 3-step augmentation algorithm may give a better solution.
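In Python, this module-limiting step can be sketched as a thin wrapper around the optimal_multicast function given earlier; again this is an illustration under the same hypothetical cost tables, not the patent's implementation.

def module_limiting(i, J, K_prime, c, d, m_prime):
    """Heuristic algorithm 2: run the trimmed enumeration over only m_prime candidate modules."""
    # Step 1: intermediate modules used by the shortest-path solution.
    V = {min(J, key=lambda j: c[i][j] + d[j][k]) for k in K_prime}

    # Step 2: pad or prune V to exactly m_prime modules using the first-stage costs c[i][j].
    if len(V) < m_prime:
        extras = sorted((j for j in J if j not in V), key=lambda j: c[i][j])
        V |= set(extras[:m_prime - len(V)])
    elif len(V) > m_prime:
        V = set(sorted(V, key=lambda j: c[i][j])[:m_prime])

    # Step 3: enumeration-tree trimming restricted to the chosen candidates.
    return optimal_multicast(i, V, K_prime, c, d)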
Computation Results
The optimal and heuristic algorithms have been coded in C language and implemented on a SPARC 2 work station, a RISC (reduced-instruction-set computing) machine with 28.5 MIPS (million instructions per second) processing power. It is assumed in the discussion that a response time of no more than 0.1 second is required. From the viewpoint of the end-users, a call setup time of less than a few seconds is probably desirable. Since the algorithm resides in only one switching node, and is one of the many functions that must be performed by the overall network, it is sound engineering practice to have a more stringent requirement on the run time. A more conservative approach is also necessary to compensate for the communication overhead between different layers of functionalities and the computation of other network algorithms that share the same computing resources.
To test the algorithms, experiments have been conducted in which the problem of multicasting from node i in stage 1 to d (d ≦ p) nodes in stage 3 is considered, assuming there are m stage-2 nodes in the Clos switching network. The details are as follows:
Experimental Setup
The arc costs were created with a pseudorandom number generator which generates numbers uniformly distributed from 0 to 1. Five sets of random arc costs were generated to run five independent experiments for each multicast connection. Based on these data points, the sensitivity of the algorithms to arc costs was studied.
The run time and the cost of the solution given by each algorithm were taken. The ratio of the heuristic cost to the optimal cost was calculated to measure the "goodness" of the heuristic algorithms.
The heuristic algorithm 1 was first compared with the optimal algorithm. As illustrated in FIGS. 6a through 6d, run time (the left y-axis) and heuristic-to-optimal cost ratio (the right y-axis) are plotted versus the number of end nodes, d, for four values of m (8, 16, 32, and 64, respectively). Both the individual run times (• for the optimal algorithm and o for heuristic algorithm 1) and the average run time of five data points (solid line) are shown. Only the average cost ratio is plotted (dashed line). From the graphs, one can make the following observations and recommendations about the Clos network.
Observations and Recommendations
Run time of the optimal algorithm--For m≦8, the optimal algorithm satisfies the criterion of 0.1s response time. The optimal algorithm is very sensitive to the m value. For m≧16, the average run time of the optimal algorithm is not satisfactory, although individual run times in certain cases of m=16 fall within the limit. The run times of different data points (with different arc costs) of the same multicast parameter values can differ significantly. For instance, for m=16, the difference can be close to three orders of magnitude. This is attributed to the solution-trimming process of the algorithm. Trimming is most effective in the early stage of enumeration. If the arc costs are such that a large number of branches can be eliminated in the beginning, then a significant fraction of alternative solutions can be eliminated from consideration. On the other hand, if the arc costs do not allow for substantial trimming at the outset, even if branches are cut later, chances are the algorithm will still take a long time. The graphs of FIGS. 6a through 6d also show that for each m, run time generally increases with the number of end nodes, d, although it tends to taper off after a certain point. Overall, the run time is much more sensitive to m than to d.
Run time of heuristic algorithm 1--The run time of this heuristic algorithm in all cases satisfies our criterion of 0.1s response time. Furthermore, it is much less sensitive to m than the optimal algorithm is. Consequently, for large m≧16, the run time of the heuristic algorithm can be several orders of magnitude better than that of the optimal algorithm. In addition, the heuristic algorithm is also much less sensitive to the arc costs, and it is highly dependable as far as meeting the response-time limit is concerned.
Cost ratio--The average heuristic-to-optimal cost ratio is very close to 1.0 on the whole, and never exceeds 1.15. What makes heuristic algorithm 1 even more interesting is that for large m, when the run time of the optimal algorithm is long, the average cost ratio quite nicely becomes closer to 1.0.
Implication of parallel computing--The optimal algorithm can be parallelized quite easily. Each time the enumeration process branches off in two directions, computation on each branch can be assigned to a separate processor. Nevertheless, even with 100 processors, the reduction in run time is at most two orders of magnitude. Although parallel computing may help when m is small, it will not solve the problem for m ≧ 64.
Implication of time-limit interrupts--The optimal algorithm can easily be modified to store the best solution computed so far. With this change, the algorithm can be interrupted when the time limit of 0.1s is reached. This gives us a feasible, albeit possibly non-optimal, solution.
Implementation strategy--For small networks (say networks with less than 32 intermediate nodes) the response time of the optimal algorithm in some cases is no more than an order of magnitude larger than that of the heuristic algorithm. The use of the optimal algorithm should be considered for these cases. One can adopt a strategy in which the optimal and heuristic algorithms are run in parallel with a set time limit. When time is up, the better solution offered by the two algorithms is chosen.
Based on further experimentation, one finds that the optimal algorithm can usually meet the response time limit if m≦12. When the second heuristic algorithm is tested, assuming m'=8, its run time is comparable to the optimal algorithm's run time with m=8, since it is founded on a modification of the optimal algorithm in which the number of intermediate nodes being considered is limited to m'. However, whereas the optimal algorithm's run time grows with the m value, the heuristic algorithm's run time does not. To compare the two heuristic algorithms, FIGS. 7a through 7d plot heuristic-to-optimal cost ratio versus number of end nodes for both algorithms. Both the individual cost ratios (○ for heuristic algorithm 1 and • for heuristic algorithm 2) and the average cost ratios of five data points (dashed line for heuristic algorithm 1 and solid line for heuristic algorithm 2) are shown.
More Observations and Recommendations
Effects of m and d--Heuristic algorithm 2 is better than heuristic algorithm 1 for m≦16. For m=32, heuristic algorithm 2 is still better on the average when the number of end nodes, d, is less than 16; otherwise, heuristic algorithm 1 is better. For m=64, heuristic algorithm 1 is better. These observations are attributed to the fact that the number of intermediate modules used in the shortest-path tree solution is less than m'=8 when m and d are small, and larger than m'=8 when m and d are large. Further experimentation confirmed the expectation that increasing the m' value improves the solutions found by the second heuristic algorithm, at the expense of longer run time.
Implementation strategy--Combining these observations with the previous observations, the following strategy is suggested. For m < 32, run the optimal algorithm, heuristic algorithm 2 with m'=10, and heuristic algorithm 1 in parallel with a set time limit. For higher m values, run heuristic algorithm 2 with m'=12 and heuristic algorithm 1 in parallel with a set time limit. These quantitative recommendations assume a particular computing environment and a particular response time requirement. Perhaps the more important observation is the qualitative fact that each of the algorithms has its own regime of operation, and which one or which combination to use depends largely on the switch parameters, the response time requirement, and the computing power available.
Conclusions
One of the approaches to building a large ATM switch is to simply set up a regularly-structured network in which smaller switch modules are interconnected. To meet the grade-of-service and reliability requirements, there are typically many alternative paths from any input to any output in such a switching network. This means that routing, or the choice of routes, must be considered to achieve good performance. Multicast connections will be an important service in the future. Multicast routing in 3-stage Clos networks is examined herein to find out if routing will be a bottleneck to call setup.
The multicast routing problem can be formulated as a warehouse-location problem. This formulation achieves global optimality as opposed to local optimality obtained with the shortest-path tree formulation. One optimal and two heuristic algorithms are disclosed herein. The optimal algorithm is centered on a procedure which eliminates a large number of non-optimal solutions from consideration without computing them, thereby achieving a substantial reduction in run time.
The first heuristic algorithm is based on a three-step optimization process in which each step attempts to improve on the solutions found by the previous steps.
The second heuristic algorithm is founded on a modification of the optimal algorithm in which the second-stage switch modules being considered for use in the multicast connection are limited to a subset of all the available second-stage modules. Major observations and implications of the work are summarized below.
1. Computation experiments show that the heuristic algorithms can find multicast routes that are close to optimal within an average response time that is several orders of magnitude lower than that of the optimal algorithm. Compared with the optimal algorithm, the response times of the heuristic algorithms do not increase as much with the network size. In addition, the response time of the first heuristic algorithm is also relatively insensitive to the values of arc costs.
2. For large networks (i.e. networks with more than 32 nodes at stage 2), the response time of the optimal algorithm can exceed the targeted 0.1 s by several orders of magnitude. Even with a more powerful processor (i.e. 100 MIPS) than the one used in the experiments, the response time will still not be satisfactory. For small networks (i.e. networks with less than 32 intermediate nodes), the run time of the optimal algorithm in some cases is no more than an order of magnitude larger than that of the heuristic algorithm. The use of the optimal algorithm should be considered in these cases.
3. By modifying the optimal algorithm to store the best solution computed so far, one can have a hybrid strategy in which the optimal and heuristic algorithms are run in parallel. When a set time limit is reached, the better solution offered by the algorithms is chosen.
4. The need for a sophisticated routing procedure in itself does not rule out the Clos network as a viable choice for a switch architecture. If the network can also be designed to meet other requirements, such as grade-of-service and fault tolerance requirements, without complex control mechanisms, then it is a serious candidate for a future broadband switch.
Although motivated by the Clos switching network, the algorithms of the present invention and the discussion here also apply to large-scale communications networks with a two-hop structure. It is likely that facility cross-connects will be used to configure future ATM networks into very simple logical network structures in order to facilitate control and increase reliability. It is undesirable from a control standpoint to have too many stages of queues between two nodes. The present invention is especially relevant to logical networks in which two nodes are directly connected via a set of logical paths, and indirectly connected via another set of two-hop logical paths, with each involving only one intermediate switching node.
While the best mode for carrying out the invention has been described in detail, those familiar with the art to which this invention relates will recognize various alternative designs and embodiments for practicing the invention as defined by the following claims.
APPENDIX 1
Warehouse Location Problem Formulation
Let the three sets of nodes at stage 1, stage 2, and stage 3 be I, J, and K, respectively. Furthermore, denote the weight of the arc from node i ∈ I to node j ∈ J by c_ij, and that from node j ∈ J to node k ∈ K by d_jk. Suppose that one wants to multicast from node i ∈ I to nodes K' ⊆ K. Then, the problem can be cast as: minimize Σ_(j∈J) c_ij x_ij + Σ_(j∈J) Σ_(k∈K') d_jk y_jk, subject to Σ_(j∈J) y_jk = 1 for all k ∈ K',
x_ij ≧ y_jk for all j ∈ J, k ∈ K'
x_ij, y_jk = 0 or 1 for all j ∈ J, k ∈ K'
where x_ij (or y_jk) is 1 if arc (i,j) (or arc (j,k)) is part of the multicast tree and 0 otherwise. This is known as the warehouse location problem in the Operations Research community. The idea is to select the optimal warehouse locations (corresponding to the selected second-stage nodes) for the delivery of some commodity to a set of destinations (corresponding to the third-stage end nodes).
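To make the formulation concrete, the following brute-force sketch (an illustration only, not the disclosed algorithm) evaluates the objective above for every non-empty subset of second-stage nodes and keeps the cheapest. The arrays c and d stand in for c_ij (with the source i fixed) and d_jk, and the example numbers are made up.

```python
# Brute-force warehouse-location sketch: choose a set S of second-stage nodes
# minimising  sum_{j in S} c[j] + sum_{k in destinations} min_{j in S} d[j][k].
# Exponential in m, so only useful for small switches or for checking heuristics.
from itertools import combinations

def optimal_multicast_cost(c, d, destinations):
    m = len(c)                              # number of intermediate nodes
    best_cost, best_set = float("inf"), None
    for size in range(1, m + 1):
        for S in combinations(range(m), size):
            cost = sum(c[j] for j in S)
            cost += sum(min(d[j][k] for j in S) for k in destinations)
            if cost < best_cost:
                best_cost, best_set = cost, S
    return best_cost, best_set

# Hypothetical example: 3 intermediate nodes, 2 destination nodes.
# optimal_multicast_cost([4, 2, 5], [[1, 6], [3, 3], [2, 1]], [0, 1])
```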
______________________________________
Appendix 2:
Algorithm For Exact Solution To Multicast Routing
______________________________________
main(E)
begin
F := ∅;
H := E;
expandF(F,H);
expandFG(F,H,2);
enumerate(F,H);
end
function enumerate(F,H)
begin
(comment: the following enumerates solutions at the next level.)
if H ≠ ∅ or |F| < |K'| do
begin
choose an element h from H;
H' := H - {h};
F' := F ∪ {h};
H := H';
(comment: note that the memory space for F' and H' is allocated locally, whereas the memory space for F and H is allocated from the calling routine, although the content of H is modified by this function; each invocation of the function "enumerate" gets a fresh set of F' and H'.)
expandFG(F',H',2); enumerate(F',H');
expandFG(F,H,1); enumerate(F,H);
(comment: branch off in two directions, one with h in F, one with h in G.)
end
end
function expandFG(F,H,flag)
begin
while flag ≠ 0 and H ≠ ∅ do
begin
if flag = 1 do flag := expandF(F,H)
else flag := expandG(F,H);
end
end
function expandG(F,H)
begin
(comment: the following attempts to expand G by moving elements from H to G.)
C := cost(F);
flag := 0;
for all h ∈ H do
begin
D := cost(F ∪ {h});
if D > C do
begin
H := H - {h};
flag := 1;
end
end
return flag;
end
function expandF(F,H)
begin
(comment: the following attempts to expand F by moving elements from H to F.)
C := cost(F ∪ H);
flag := 0;
for all h ∈ H do
begin
D := cost(F ∪ H - {h});
if D > C do
begin
F := F ∪ {h};
H := H - {h};
flag := 2;
end
end
return flag;
end
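For readers who prefer an executable form, here is a Python transliteration of the above pseudocode, offered strictly as a sketch rather than the patented implementation. It assumes a user-supplied cost(F) function returning the cost of the best multicast route that uses exactly the intermediate nodes in F, a count num_dest of end nodes (|K'|), and a record() callback for storing candidate solutions; none of these appear explicitly in the pseudocode, so they are assumptions made for illustration.

```python
# Sketch translation of the Appendix 2 pseudocode (assumptions noted above).
def solve(E, cost, num_dest, record):
    F, H = set(), set(E)
    expand_F(F, H, cost)
    expand_FG(F, H, 2, cost)
    enumerate_solutions(F, H, cost, num_dest, record)

def enumerate_solutions(F, H, cost, num_dest, record):
    # The pseudocode's "or" is read as "and" here: branching needs a node in H.
    if not H or len(F) >= num_dest:
        record(set(F))                    # fully decided branch (bookkeeping is
        return                            # left implicit in the pseudocode)
    h = next(iter(H))                     # choose an element h from H
    F1, H1 = F | {h}, H - {h}             # branch 1: h forced into F
    H.discard(h)                          # branch 2: h excluded (moved to G)
    expand_FG(F1, H1, 2, cost)
    enumerate_solutions(F1, H1, cost, num_dest, record)
    expand_FG(F, H, 1, cost)
    enumerate_solutions(F, H, cost, num_dest, record)

def expand_FG(F, H, flag, cost):
    while flag != 0 and H:
        flag = expand_F(F, H, cost) if flag == 1 else expand_G(F, H, cost)

def expand_G(F, H, cost):
    # Drop from H every node whose inclusion can only raise the cost.
    C, flag = cost(F), 0
    for h in list(H):
        if cost(F | {h}) > C:
            H.discard(h)
            flag = 1
    return flag

def expand_F(F, H, cost):
    # Force into F every node whose removal can only raise the cost.
    C, flag = cost(F | H), 0
    for h in list(H):
        if cost((F | H) - {h}) > C:
            F.add(h)
            H.discard(h)
            flag = 2
    return flag
```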
______________________________________ | |
Maxim Mikhaylov, opposite from Russia.
Following the decision of the FIVB Board of Administration, the new world ranking system will be implemented from 1 February 2020 and will take into account all results from 1 January 2019. We cover all of the 2021 Nations League stages, including the regular season, to give you a complete picture of the 2021 World Nations League standings. For the 2018 World Championships, the first two letters indicate the confederation, the middle part indicates the group or playoff (PO), and the last number is the rank. The FIVB World Ranking is a ranking system for men's national teams.
Sergio Dutra Santos. Teams that participated in both continental and world Olympic qualification tournaments are awarded the points from the highest result achieved in these qualification tournaments. Volleyball continues to grow as a sport, with more and more interest from around the world. However, for the Tokyo 2020 Olympic Games grouping, the previous world ranking will be applied, based on the rankings as of 31 January 2020.
A point system is used, with points being awarded based on the latest results of the following competitions. As of March 16, 2020, teams are ranked based on their eight (8) best results at all FIVB-recognized events over the last 365 days. Punters were busy betting on volleyball throughout the tournament, as Brazil again started as the strong favourite. Rankings of top men's volleyball players are based on the positions of the players' teams in tournaments and on individual awards won by the players.
The FIVB Senior World Rankings is a ranking system for men's and women's national volleyball teams. The teams of the member nations of the Fédération Internationale de Volleyball (FIVB), volleyball's world governing body, are ranked based on their game results, with the most successful teams being ranked highest. Get the current Nations League volleyball league tables and World Nations League standings for the 2021 season at ScoresPro today. | https://adidaszxflux.com/sport/volleyball-rankings-world.html
In football, a player-on-player hit may be more likely to cause a more severe head injury than any other type of impact, a recent study suggests.
Study author Julianne Schmidt, assistant professor in the University of Georgia College of Education, looked at game videos to analyze nearly 7,000 head impacts during 13 games of a high school football team’s regular and postseason play.
Per her analysis, the players sustained more serious head injuries when they hit another player during the game than when they hit their head on another object, such as the ground, according to a media release from the University of Georgia.
According to Schmidt, the distance players ran before impact, as well as their starting stance, also affected the severity of their head impact. Players who ran more than 10 yards before impact had much more severe head injuries than those who traveled just a few feet, which is similar to earlier findings in studies of college football players. These studies resulted in a change to the kickoff line, moving it from the 30- to the 35-yard-line to reduce the distance players ran.
Players were even more likely to sustain a more severe head injury if they began running in the traditional three-point stance and ran a long time before impact, per the release.
“When you combine a three-point stance with running a long distance, it results in the most severe head impacts,” said Schmidt, who studies concussions in the college’s department of kinesiology, in the release. “So that points toward a need for rule changes that emphasize the combination of the two if we’re going to reduce head impact severity.”
“High school football players that start in a three-point stance do not typically run a long distance before collision, but some position types, like tight ends and defensive ends, might be more likely to combine the two,” she adds.
The study was published recently in the journal Pediatrics. | https://ptproductsonline.com/industry-news/research-development/player-player-hits-football-may-likely-cause-severe-concussion/ |
"Don't focus on whether you are dying. Focus on God's mission and your transformation."
Tod Bolsinger, author of Canoeing the Mountains.
It seems inevitable that any institution would rather talk endlessly about change or even instruct change when that change will not, in fact, carry any personal cost to those charged with driving and defining the future course of the organisation. And so statistics are quoted, dire predictions are shared and, when those predictions are realised, folk can rest in the knowledge that they were right.
What is much more difficult and way more costly is to BE the change that we need to be. To submit and be challenged by personal transformation in order to effect organisational change. Indeed it may even be that our personal transformation is not enough to change the institution. But that does not mean that we should avoid it.
When God called Moses out of his self-imposed exile to go and lead God's people out of Egypt, Moses recognised all too well how ill-equipped he was for the task but, in humility, he responded to the call of God. And all through the years in the wilderness, time and again, Moses was forced to return to that place of inadequacy and ineffectiveness, that place where only reliance on God allowed God's purpose to be fulfilled. The Moses we see glimpsing the Promised Land, a land he knew he would not enter, is a man who was prepared to submit to incredible personal transformation, the kind of transformation he could never have imagined. Other leaders emerged along the way and were mentored by Moses as he modelled for them the transforming power of God.
As we navigate our way through this wilderness season in the church today, our task as leaders is not to provide answers but to model transformation through which God equips us for the journey, calling us into God's mission today. At the very least, that requires humility and dependence on God. It requires us not to lead with answers but to lead with questions, the kind of questions that seek to discern God's will and purpose for our lives and that enable and empower others to discern God's will and purpose for their lives. God's preferred and promised future is writ large in Scripture. God's peaceable kingdom has been outlined by priests and prophets through the ages. That kingdom will be realised when we respond to the challenge to be transformed as we participate in the mission of God.
Romans 12:1-2 (The Message)
So here's what I want you to do, God helping you: Take your everyday, ordinary life—your sleeping, eating, going-to-work, and walking-around life—and place it before God as an offering. Embracing what God does for you is the best thing you can do for him. Don't become so well-adjusted to your culture that you fit into it without even thinking. Instead, fix your attention on God. You'll be changed from the inside out. Readily recognize what he wants from you, and quickly respond to it. Unlike the culture around you, always dragging you down to its level of immaturity, God brings the best out of you, develops well-formed maturity in you. | http://barryparish.org.uk/path-of-renewal/28th-may-2018-6946.php |
I am to solve for $r(\rho)$ given the function,
ρAsymp[r_, b_, q_] := 1/(1 - q) Gamma[1/(1 - q)]/Gamma[(q - 2)/(q - 1)] r Sqrt[1 - (b/r)^(1 - q)]
This can be solved in a straightforward way by converting it into a quadratic equation and using
Solve to find the root corresponding to $r(\rho)$ for every value of $q$. But I have difficulty applying
InverseFunction to this problem.
Can someone help me with this? Also, I want to scan $r(\rho)$ for all values of $q<1$; how can I implement this by using a
Module? | https://mathematica.stackexchange.com/questions/189510/finding-the-inverse-of-a-function
Cybersecurity and Identity Management Trends in Government
The federal government continues to rely on online and digitized services. Government agencies access and process personal and other sensitive information. However, cybersecurity challenges affecting the security of critical information and infrastructure continue to be a thorn in the flesh.
In a bid to ensure information security, the government is currently leveraging on the following trends to strengthen identity management and cybersecurity for government:
- Securing identity
With numerous cases surrounding identity theft being a daily topic for most media outlets, the government has shifted its cybersecurity focus on the user/human aspect. Most attacks are executed through stolen credentials, where adversaries use them to access government databases. Although different authentication measures are still preferred to identify users, there is a growing need for the establishment and development of strong identity access management. These are needed to ascertain the legitimacy of a user, as credentials alone are not enough.
- Cybersecurity frameworks
As information and communication technologies advance, public safety is at risk and faces challenges from increased requirements for identity, credential, and access management. Canada has established a working and learning group for identity, credential, and access management (ICAM), and policies are being developed to address the implementation of ICAM for federal agencies.
Canada maintains a National Cyber Security Strategy that features the country’s vision for security and prosperity in the digital age. In 2010, the Government of Canada launched a national effort to defend against numerous threats with Canada’s first cybersecurity strategy. The approach offers confidence to proceed with the adoption of technologies in the digital age. Canada views cybersecurity as the companion to innovation and protector of cybersecurity.
- Enforcing policy targeting specific companies
Policy development banning the importation of technological capabilities into the Canadian borders is one of the government’s latest approaches to cybersecurity. For instance, the Canadian Government has recently stated that its 5G decision will hinge on security considerations and the advice of government experts. The government, in some cases, considers the geopolitical impact in deciding on whether to ban companies such as Huawei Technologies. These measures are enforced to safeguard Canada’s national security.
Are Cybersecurity Challenges Impeding Digital Transformations?
Despite the obvious positive impacts of disruptive technologies such as 5G and IoT, many industries may wait longer to benefit from digital transformations. Security challenges, especially in government, have been derailing efforts to transition to newer technologies.
Protecting Sensitive Data is Crucial for Government Agencies
Tools for executing attacks are readily available, resulting in a rise of cybercrimes that threaten the economic well-being of Canada. They extend to all industries, where victims incur devastating financial losses while recovering and restoring impacted systems. Security challenges persist in the changing technological landscape since organizations must upgrade cybersecurity systems to adapt to emerging risks. Start-ups are increasingly vulnerable; cyber incidents have contributed to complete financial ruin for businesses of all sizes. Here's how.
The profound impacts of cyber-attacks destabilize critical services. For example, customers are less likely to trust financial institutions that were recently breached, and loss of customers results in diminishing revenues. On the other hand, the government’s inability to solve persistent security challenges leads to hesitations in adopting and using newer technologies. What if transitioning operations purely to 5G and IoT will cause insurmountable cyber risks? Such concerns discourage organizations across the divide from adopting them.
Moreover, the government is a high-value target for cyber-attacks as it is a rich source of personal and financial information such as social security numbers and credit cards. These data types have been identified to be top motivators behind most breaches. Besides, CISO’s across the country have indicated that governments are making little investments in cybersecurity, despite it being a highly targeted sector. Perhaps one of the underlying reasons why security problems are rife in government is the little effort made to attract and retain top talent. Emerging technologies require talented individuals backed with sufficient resources to identify new and existing threats so as to innovate superior solutions. Until the government adequately addresses the challenges cutting across all industries, adopting disruptive technologies may remain a far-fetched call for most enterprises.
Addressing the challenges
The following table indicates the recommended measures for addressing the aforementioned challenges:
Challenge: Establishing a strong/comprehensive cybersecurity framework
Recommended mitigations:
· Develop a comprehensive strategy for achieving national cybersecurity and a secure cyberspace
· Deploy measures for mitigating global supply chain risks
Challenge: Ensuring the security of federal information and systems
Recommended mitigations:
· Strengthen the implementation procedures for government cybersecurity initiatives
· Identify and address security flaws in information security procedures
· Enhance response plans for cyber events
Challenge: Securing critical infrastructure
Recommended mitigations:
· Strengthen the roles and abilities of the government to protect critical infrastructure
Challenge: Ensuring data privacy
Recommended mitigations:
· Apply regulations developed to ensure data privacy and security
· Restrict the acquisition and processing of personal information.
About ISA Cybersecurity
ISA is Canada’s leading pure-play cybersecurity organization committed to helping clients achieve their security needs and to stay ahead of cyber threats. With over twenty-seven years’ experience assisting diverse organizations to overcome sophisticated cybersecurity challenges, ISA is revolutionizing cybersecurity through service delivery and technology integrations to enable clients to maximize the value of their technologies while ensuring the safety of their assets.
For more information, please visit our website at https://www.isacybersecurity.com. | https://www.isacybersecurity.com/cybersecurity-and-digital-identity-management-in-government/ |
Idaho® Potato Gratin with Black Truffles
Ingredients:
Red Wine/Porcini Sauce
- 3 ounces fresh porcini mushrooms, sliced 1/4-inch thick
- 8 tablespoons butter, divided use
- 1 shallot, julienned
- 8 ounces dry red wine, such as Merlot or Syrah
- 4 ounces reduced chicken stock
- 2 ounces reduced veal stock or demi-glace
- 2 sprigs fresh thyme
- 1/2 bay leaf
- 1 ounce red-wine vinegar
- Salt and pepper as needed
Potato Gratin
- 1 ounce panko (coarse bread crumbs)
- 2 tablespoons butter, plus 2 teaspoons
- 1 ounce grated Parmigiano-Reggiano cheese
- 1 medium Idaho russet potato (70 count), peeled
- 1/2 cup heavy cream
- 3 ounces Fontina Val d'Aosta cheese, grated
- 2 tablespoons minced garlic
- 1 pinch nutmeg
- Salt and pepper as needed
- 1 medium fresh black truffle
- 1 teaspoon chopped parsley
Directions:
- In a medium saucepan, sauté porcinis in 6 tablespoons butter until soft; set aside in a bowl.
- In the same pan, soften shallot in remaining 2 tablespoons butter over medium-low heat; add wine, increase heat and bring to a simmer, reducing volume by two-thirds, 7 to 8 minutes.
- Add stocks; simmer on medium-low heat, 15 minutes, skimming as needed.
- Add thyme and bay leaf; cook 1 minute. Add reserved mushrooms. Season with vinegar, salt and pepper. Reserve.
- Melt 2 tablespoons butter in small sauté pan; add panko crumbs and cook until lightly toasted, 1 to 2 minutes. Toss with Parmigiano-Reggiano; reserve.
- Butter earthenware or glass baking dish (1 3/4 inches to 2 1/2 inches deep, 16-ounce capacity) using remaining 2 teaspoons butter.
- Slice potato into 1/8-inch-thick slices. Combine with cream, cheese, garlic and nutmeg in large mixing bowl. Season with salt and pepper.
- Slice 3/4 of the truffle paper-thin.
- Layer potatoes, spreading cheese around potato slices. Add a thin layer of truffles; top with layers of remaining potatoes and truffles. (Do not mix in truffles.)
- Cover dish with aluminum foil, place on sheet pan; bake at 325°F until just fork tender, 30 to 50 minutes. Remove foil; top with reserved Parmigiano-Reggiano/breadcrumb mixture. Place in a 550°F oven to brown the top, 3 to 4 minutes. Remove bay leaf and thyme.
- Slice remaining 1/4 of truffle paper thin.
- Garnish gratin with chopped parsley and truffle slices.
- Per portion: Place 3 to 4 ounces of gratin onto serving plate; spoon 1 1/2 to 2 ounces sauce around gratin.
Yield: 4 small-plate servings
Source: | https://idahopotato.com/recipes/idaho-potato-gratin-with-black-truffles |
---
author:
- |
R. Loll[^1] and W. Westra[^2]\
${}$\
[Institute for Theoretical Physics, Utrecht University,]{}\
[Leuvenlaan 4, NL-3584 CE Utrecht]{}
title: |
[SPIN-2003/14]{}\
\
${}$\
Sum over topologies and double-scaling limit in 2D Lorentzian quantum gravity
---
Summing over topologies?
========================
A central question that arises in the construction of a theory of quantum gravity is that of the fundamental, microscopic degrees of freedom whose dynamics the theory should describe. The idea that the information contained in the metric field tensor $g_{\mu\nu}$ may not constitute an adequate description of the geometric properties of space-time at the very shortest scales goes back all the way to Riemann himself [@riemann]. More recently this has led to the suggestion that at the Planck scale also the topological degrees of freedom of space-time should become excited. Wheeler is usually credited with coining the notion of a space-time foam [@wheeler], according to which space-time is a smooth, classical object macroscopically, well described by general relativity, but at the Planck scale presents a scenario of wildly fluctuating geometry and topology.
In the context of the gravitational path integral, this has inspired an extension of the customary integral over all metrics (modulo diffeomorphisms) by an additional sum over space-time topologies, namely, $$Z(\kappa,\lambda)=\sum_{\rm topol.} \int D[g_{\mu\nu}]{\rm e}^{iS[g_{\mu\nu}]},
\label{pi}$$ where the square brackets denote diffeomorphism equivalence classes of metrics and where $S$ is the gravitational action. We will take the action to include a cosmological term, $$S=\int d^dx \sqrt{|\det g|} (\kappa R-\lambda),
\label{action}$$ with $\kappa$ and $\lambda$ denoting the inverse gravitational coupling constant and the cosmological constant. In space-time dimension $d=4$, given the well-known difficulties of defining a path integral over the metric degrees of freedom alone, it may not come as a surprise that very little progress has been made in giving a well-defined mathematical and physical meaning to (\[pi\]). Previous semi-classical treatments of topology change, as for example in discussions of the effect of baby universes on effective coupling constants (see [@giddings] for a critical appraisal) are unlikely to be of relevance to the problem, for a variety of reasons.
First, since there is currently no direct or indirect evidence for topology changes from experiment, the phenomenon – if realized at all – must take place at the Planck scale or not too far from it, casting doubt on the applicability of semi-classical methods. Secondly, with very few exceptions, such investigations have been made within the path integral for [*Euclidean*]{} metrics. However, in the absence of a Wick rotation for theories with a dynamical metric, the Euclidean theory has no obvious relation with the physical, Lorentzian theory. Moreover, the Euclidean path integral seems to suffer from incurable divergences due to the presence of the conformal mode [@conformal]. Lastly, and most importantly, once topology change is permitted, topology-changing contributions dominate the path integral completely, since the number of distinct geometries at a fixed space-time volume $V$ grows [*super-exponentially*]{} with $V$. This entropic effect is truly non-perturbative and cannot be seen in a semi-classical treatment. It implies that arguments for a dynamical suppression of topology changes which are based on an evaluation of their semi-classical action are largely irrelevant.
Yet more worrying for the proponents of a “sum over topologies" should be the fact that the analogous problem is unsolved even in dimension $d<4$.[^3] Again, this can be traced to the super-exponential growth of the number of geometries with their volume, which renders the path integral badly divergent.
In space-time dimension two, which we will focus on in the following, the quantization of pure Euclidean (or Liouville) gravity for [*fixed*]{} topology is well understood in analytic terms [@2deuclid]. The sum over topologies is turned into a sum over a single parameter $g\geq 0$, the genus (number of handles or holes) of the two-dimensional space-time. The Euclidean analogue of the path integral (\[pi\]) for $d=2$ has been the object of intense study in the past, since it is an example of a non-perturbative sum over world sheets of a bosonic string (in a zero-dimensional target space) [@3papers]. The problem has been addressed by matrix model methods or, equivalently, a regularization of the path integral in terms of triangulated, piecewise flat two-surfaces. However, it turns out that the topological expansion of (\[pi\]) in powers of ${\rm e}^{-\kappa}$ (the integrated curvature in 2d is proportional to $g$, up to an additive constant) is not Borel-summable, because the coefficients in the series grow factorially with $g$ and are all positive. Attempts to fix the ensuing non-perturbative ambiguities of the partition function in a unique and physically motivated way have so far remained unsuccessful [@2deuclid].
Doing it the Lorentzian way
===========================
This leaves us in the rather unsatisfactory situation of not having a single instance of a quantum-gravitational theory where the sum over topologies had actually been performed. In the present work, we will suggest a possible way out of this impasse. The central idea is to take seriously the causal nature of space-time, and to perform a non-perturbative summation over [*Lorentzian*]{} geometries. As regards the sum over topologies, the Lorentzian structure will be used to quantify how badly causality is violated by individual contributions to the path integral. We will introduce and solve a model of 2d quantum gravity which at the regularized level amounts to a sum over 2d piecewise linear space-times of any genus whose causal properties are “not too bad". For the purposes of this paper, we will adopt a strictly quantum-gravitational point of view, in the sense that we will only be interested in models that do not lead to large-scale causality violations. In particular, we do not think that in this context third-quantized models, whose Hilbert spaces describe multiply-connected spatial geometries, can be interpreted in a physically meaningful way. This is different from situations where the geometries appear as imbedded quantities, as they do in the case of string theory, where moreover topology changes of the world sheet are mandatory, and not optional.
The question is then whether there are any models with topology change that produce a quantum space-time foam whose non-trivial microstructure leads to a measurable, but not necessarily large effect at a macroscopic level. The quantum gravity model we are about to construct has exactly this property. Although it is a model whose topological fluctuations are associated with the “mildest" type of causality violation imaginable in two dimensions,[^4] it is already at the limit of what is acceptable as a space-time foam. Namely, we will show that for a sufficiently large value of the renormalized gravitational coupling constant, the effects of topology change become overwhelming and the system enters a phase of “handle condensation".
In order to perform the sum (\[pi\]) non-perturbatively, we will adopt a Lorentzian version of the regularized sum over piecewise flat 2d space-times. For fixed topology $[0,1]\times S^1$, this model is exactly soluble and leads to a 2d quantum gravity theory inequivalent to Liouville quantum gravity [@lor2d], with a well-defined Wick rotation and without a $c=1$ barrier [@barrier]. The difference can be traced to the absence (in the Lorentzian case) of branching “baby universes", which are incompatible with causality.[^5] This method of “Lorentzian dynamical triangulations" has also been applied successfully in dimension three [@lor3d], leading to a well-behaved quantum ground state, which unlike in the Euclidean theory does not degenerate into a lower-dimensional polymer as a consequence of a dominance of the conformal mode.
Recall that any 1+1 dimensional Lorentzian space-time contributing to the regularized path integral is given by a sequence of strips of height $\Delta t=1$, where each strip in turn is a random sequence of $N$ Minkowskian up- and down-triangles (Fig.\[onestrip\]), each with two time-like and one space-like edge of length-squared $\pm a^2$ [@lor2d]. We will now generalize these to a class of Lorentzian geometries with holes, where the holes have minimal time duration $\Delta t=1$. Although this time interval goes to zero in the continuum limit $a\rightarrow 0$, their effect is not necessarily negligible, since the triangle density goes to infinity in the limit.
The way in which we create triangulations with holes in a strip $[t,t+1]$ is as follows. Suppose the geometry has been built up to integer time $t$, possibly with holes. The spatial geometry at time $t$ is a closed circle consisting of $l_t$ space-like edges. Now, glue on another strip with matching “in-geometry" of length $l_t$ and some “out-geometry" of length $l_{t+1}$ (giving rise to a total discrete strip volume of $N_t=l_t+l_{t+1}$ triangles). Next, glue an even number of the $N_t$ time-like edges in the strip pairwise to each other, according to an arrow diagram (Fig.\[arch\]), and then cut open the geometry at each of these edges, perpendicular to the direction in which they were glued together. This will result in a space-time geometry consisting of several cylindrical components between $t$ and $t+1$. In order to obtain back a spatial circle at time $t+1$, the cylinders must be cut open at some of their vertices at $t+1$ and their spatial boundaries be aligned in some order to form again a single $S^1$. In this way, one has constructed a strip geometry with some number $g_t$ of holes.
It is straightforward to show that if arbitrary regluings at time $t+1$ are allowed, the number of possible geometries at a given strip volume scales factorially with $N_t$, just as in the Euclidean case. However, if one looks at the causal properties of the resulting space-times, most of them turn out to be extremely ill-behaved, in the sense that even a single hole in the entire space-time will lead to a global rearrangement of parts of a light front after passing the hole, even if it exists only for an infinitesimal time $\Delta t$. Fortunately, there is a subclass of geometries for which this does not happen, which are those where the cylinders are reglued [*without*]{} any intermediate relative twisting or rearrangement of the order of the components during the time interval when they are disconnected.
The effect is most easily illustrated by the case of two components. In a “regluing without twist" the two saddle points[^6] $p_t$ and $p_{t+1}$ at $t$ and $t+1$ where the hole appears and vanishes are connected in each of the two cylinders by a time-like link, which implies that they are nearest lattice neighbours in either of the components. If this is not the case, e.g. if one of the cylinders is twisted before regluing, then $p_t$ and $p_{t+1}$ will not appear as nearest neighbours in that component, but $p_{t+1}$ will have a relative shift $\Delta l$ along the spatial direction. The resulting space-time geometry will have the property that a light beam of macroscopic width that passes by the hole will be split into two parts which will emerge with a relative separation of $\Delta l$ after the hole disappears! By comparison, the only effect of the hole in the “untwisted" case is that a small fraction of the light beam will be scattered into the far-away part of the space-time to which the hole connects during its infinitesimal life time (this effect is of course also present in the twisted case).[^7] Since we find it difficult to envisage how a quantum geometry with anything near a macroscopic causal structure could emerge from a superposition of such ill-behaved manifolds, our sum over topologies will contain only geometries with “untwisted" holes.
Discrete solution and double-scaling limit
==========================================
To illustrate that the causality constraints imposed above do lead to a well-defined and soluble model, we will now solve the combinatorics for a single strip $\Delta t=1$ and look for a scaling behaviour of the two coupling constants that leads to a non-trivial continuum limit. The partition function after Wick-rotating is $$Z(\lambda,\kappa)=\sum_{l_{in}}\sum_{l_{out}}
{\rm e}^{-\lambda (l_{in}+l_{out})}
\sum_{T|_{l_{in},l_{out}} }{\rm e}^{-\kappa g(T)},
\label{part}$$ with a sum over the initial and final boundary geometries of length $l_{in}$ and $l_{out}$, and a sum over triangulations $T$ of a strip with these boundaries. For a given triangulated strip of volume $N=l_{in}+l_{out}$, the counting of geometries with holes according to the procedure of the previous section involves a counting of diagrams like in Fig.\[arch\] with $N$ vertices and $g$ arrows. Expression (\[part\]) can be rearranged, $$Z(\lambda,\kappa)= \frac{1}{2} \sum_{N=0}^\infty\ \sum_{g=0}^{[N/2]}\
\biggl({N\atop 2g}\biggr)
\ \frac{(2g)!}{g! (g+1)!}\ {\rm e}^{-2\kappa g} {\rm e}^{-(\lambda -\log 2)N},
\label{z1}$$ after which the sums can be performed explicitly, leading to $$Z(\lambda,\kappa)=\frac{1}{2 (1-{\rm e}^{-(\lambda-\log 2)})}\ \frac{1-\sqrt{1-4 z}}{2z},
\label{partz}$$ where the second term depends only on the combination $$z:={\rm e}^{-2\kappa}({\rm e}^{\lambda -\log 2}-1)^{-2}.
\label{both}$$ An infinite-volume limit is obtained by tuning the bare cosmological coupling $\lambda$ to $\log 2$ [*from above*]{}[^8], as in standard Lorentzian quantum gravity [@lor2d], $$\lambda =\lambda^{crit}+a^2\Lambda +O(a^3)\equiv
\log 2 +a^2\Lambda +O(a^3),
\label{cosren}$$ where $\Lambda$ denotes the renormalized, dimensionful cosmological constant, as the geodesic cutoff $a\rightarrow 0$. As can be seen from eq.(\[partz\]), this is only consistent if simultaneously also the inverse Newton constant $\kappa$ is renormalized. Such a double-scaling limit is obtained by fixing $z$ to a constant, $z=c<1/4$, and defining a renormalized coupling $\rm K$ by $${\rm K} =\kappa -2 \log \frac{1}{a\sqrt{\Lambda}} +O(a),\;\;\;\;
{\rm K}:=\frac{1}{2} \log \frac{1}{c}.
\label{kapren}$$ Substituting these expansions into the expression for the strip partition function (\[partz\]), a straightforward computation yields the renormalized partition function in terms of $\Lambda$ and the gravitational coupling $G=1/$K, $$Z^R(\Lambda,G)=
\frac{ {\rm e}^{2/G}}{4\Lambda }
\biggl( 1-\sqrt{1-4\, {\rm e}^{-2/G}}
\biggr).
\label{zdone}$$ In the continuum theory, one expects $\Lambda$ to set the global scale because of $\langle V\rangle =\frac{1}{\Lambda}$ for the expectation value of the space-time volume. On the other hand, the strength of the gravitational coupling governs the average number $\langle g
\rangle$ of holes per slice, which is proportional to the fraction of a lightbeam that will be scattered by holes in a non-local and causality-violating manner [@prep]. As is illustrated by Fig.\[genplot\], for $G=0$ there are no holes at all. Their number increases for $G>0$, first slowly and then rapidly, until it diverges at the maximum value $G=2/\log 4$, at which point the system undergoes a transition to a phase of “condensed handles".
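(As an aside on the computation above, reconstructed here for the reader's convenience and not part of the original text: the step from (\[z1\]) to (\[partz\]) follows from two standard generating-function identities, $$\sum_{N\geq 2g}\biggl({N\atop 2g}\biggr)\, x^N=\frac{x^{2g}}{(1-x)^{2g+1}},\qquad\qquad
\sum_{g\geq 0}\frac{(2g)!}{g!\,(g+1)!}\, z^g=\frac{1-\sqrt{1-4z}}{2z},$$ applied with $x={\rm e}^{-(\lambda-\log 2)}$; the double sum then collapses to the prefactor $1/(2(1-x))$ times the Catalan generating function evaluated at $z={\rm e}^{-2\kappa}\, x^2/(1-x)^2$, in agreement with (\[both\]).)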
What we have found therefore is an example of a gravity-inspired statistical model with a well-defined double-scaling limit. As in previous work on non-perturbative gravitational path integrals, the Lorentzian structure of the individual geometries has played a crucial role in the construction. In forthcoming work [@lwz] we will investigate what happens when one keeps the boundaries of the space-time strip fixed instead of summing over them, as presented here. This more complicated model needs to be solved in order to determine the Hamiltonian and the full propagator of 2D Lorentzian quantum gravity with holes. Interestingly, it turns out that the inclusion of the boundaries leads to a different scaling behaviour of Newton’s constant. This also implies a different behaviour for the number of holes: unlike in the strip model, there is no condensation of handles, and the number of holes per strip stays infinitesimal. Unlike in the pure Lorentzian theory without holes therefore, the scaling of the “bulk" couplings and the bulk partition function cannot be deduced from solving the simpler strip model with summed-over boundaries. This may teach us an important lesson for higher-dimensional models, where a similar phenomenon may well be present.
[*Acknowledgements.*]{} We thank J. Ambjørn, G. ‘t Hooft and S. Zohren for enjoyable discussions. Support through the EU network on “Discrete Random Geometry”, grant HPRN-CT-1999-00161, is gratefully acknowledged.
[xx]{}
http://www.maths.tcd.ie/pub/HistMath/People/Riemann/Geom
J.A. Wheeler, Ann. Phys. [**2**]{}, 604 (1957).
S.B. Giddings, Int. J. Mod. Phys. [**A5**]{}, 3811 (1990).
A. Dasgupta and R. Loll, Nucl. Phys. B [**606**]{}, 357 (2001) \[hep-th/0103186\].
F. David, in [*Gravitation and Quantizations*]{}, ed. J. Zinn-Justin and B. Julia, North-Holland, 679 (1995) \[hep-th/9303127\]; P. Di Francesco, P. Ginsparg and J. Zinn-Justin, Phys. Rept. [**254**]{}, 1 (1995); J. Ambjørn, B. Durhuus and T. Jonsson, [*Quantum Geometry*]{}, Cambridge Monogr. Math. Phys. [**1**]{} (1997).
M.R. Douglas and S.H. Shenker, Nucl. Phys. B [**335**]{}, 635 (1990); E. Brézin and V.A. Kazakov, Phys. Lett. B [**236**]{}, 144 (1990); D.J. Gross and A.A. Migdal, Phys. Rev. Lett. [**64**]{}, 127 (1990).
F. Dowker, in [*The future of theoretical physics and cosmology*]{}, ed. G.W. Gibbons, E.P.S. Shellard and S.J. Rankin, Cambridge Univ. Press, 436 (2003) \[gr-qc/0206020\].
J. Ambjørn and R. Loll, Nucl. Phys. B [**536**]{}, 407 (1998) \[hep-th/9805108\].
J. Ambjørn, K.N. Anagnostopoulos and R. Loll, Phys. Rev. D [**61**]{}, 044010 (2000) \[hep-lat/9909129\].
J. Ambjørn, J. Jurkiewicz and R. Loll, Phys. Rev. D [**64**]{}, 044011 (2001) \[hep-th/0011276\].
J. Louko and R.D. Sorkin, Class. Quant. Grav., 179 (1997) \[gr-qc/9511023\].
R. Loll and W. Westra, Acta Phys. Polonica B [**34**]{}, 4997 (2003) \[hep-th/0309012\].
R. Loll, W. Westra and S. Zohren (to be published).
[^1]: email: [email protected]
[^2]: email: [email protected]
[^3]: Quantum gravities in dimension 2 and 3 serve as useful models for diffeomorphism-invariant theories of dynamical geometry. Their metric configuration spaces and dynamics are much simplified in comparison with the physical, four-dimensional theory.
[^4]: However, it should be kept in mind that all topology changes in 2 and 3d which do not involve universe creation or annihilation are “bad” according to the classification of Dowker and collaborators [@dowker].
[^5]: By this we mean baby universes that do not return to the “mother universe", and therefore do not change the [*space-time*]{} topology. We will not consider such configurations in the present work.
[^6]: Note in passing that it is not clear a priori how to account for the curvature singularities at the saddle points in the Lorentzian action, and how to treat them in the Wick rotation, see [@lousor] for a related discussion. We will simply use the standard Regge prescription in terms of deficit angles in the Wick-rotated action.
[^7]: A detailed geometric analysis can be found in [@prep].
[^8]: Note that this gives rise to a non-negative renormalized cosmological constant; our approach naturally leads to a de-Sitter-like behaviour.
| |
The role of relatedness in competition between prairie species
Do plants compete more strongly with closely or distantly related species? Does a non-native species that is closely related to a native species have a better chance of invading? Our group wants to determine if closely related species compete more strongly than distantly related species, based on the hypothesis that related species have similar traits and similar niches. We will use a greenhouse experiment to investigate this hypothesis. Individuals will be grown alone, with an individual of the same species, and with closely and distantly related species. The experiment will examine species pairs that are included in a larger, plot-level experiment, and ultimately inform restoration practices in the Midwest’s tallgrass prairie, one of the most endangered ecosystems in North America. This research is part of a larger project that integrates long-term observational data from remnant and restored prairies; experimental manipulations of PD in field and greenhouse settings; and tools for helping practitioners solve complex restoration problems.
Our REU intern will work on competition greenhouse experiments between native prairie species. You’ll work with postdoctoral researcher Evelyn Williams, PhD student Rebecca Barak, and Masters student Jacob Zeldin to maintain plants, measure functional traits, harvest above- and below-ground biomass, and analyze data. In addition to helping with the larger competition experiment, you'll develop an independent project based on your interests to be integrated into the overall data collection effort. You could investigate questions such as: do above or below-ground traits better predict competitive interactions? Does incorporating intra-specific trait variation improve these predictions? Do closely related species actually have similar traits? Do competitive interactions change with resource availability? At the end of the summer, we'll work to analyze your data and present it the rest of the summer interns. | https://www.cbgreu.org/view-mentor-projects/node/3381 |
Thunderstorms pounded Southern California through the weekend, causing flash flooding and smashing July rainfall records. Here’s what happened:
HURRICANE DOLORES: Atmospheric moisture flowed into the region from former Hurricane Dolores, which had formed in the Pacific Ocean off Mexico’s southwestern coast and moved northwestward, losing some strength as it moved into cooler waters but still spinning as a tropical storm off Baja California on Saturday.
DRAMATIC WEATHER: Skies darkened and thunder rumbled across normally sun-splashed summer skies, unleashing downpours punctuated by lightning strikes and gusts, especially in the mountains and deserts. Thousands of utility customers lost power. Tourists whipped out umbrellas and scattered as a sudden shower hit Hollywood Boulevard.
BRIDGE WASHOUT: On Sunday, a torrent gushing down a desert wash collapsed a bridge on Interstate 10 about 174 miles east of Los Angeles. Based on radar estimates, rain fell in the area at the rate of 1.5 inches per hour during the afternoon, and total accumulation Sunday was 6.7 inches. The washout left one vehicle dangling over the side with its driver inside. Other drivers used straps from their trucks to tie the vehicle to a guardrail to keep it from washing away. Firefighters worked to free the driver in rapidly rising waters as pieces of asphalt and debris fell about them. The driver’s injuries were described as moderate.
RECORDS: July is a notoriously dry month but many daily rainfall marks were set Sunday in the region, including .02 inch in downtown Los Angeles. The tiny amount broke the previous July 19 record of a “trace” set in 1946, 1948 and 2014. The downtown’s July rainfall total now stands at .38 inch, more rain than fell in all the months of July since 1987 combined.
| https://www.marinij.com/2015/07/20/california-thunderstorms-spawned-by-ex-hurricane/
Lake Havasu City is an oasis in the Mohave Desert and the American home of the London Bridge. The lake itself was formed by the construction of Parker Dam in the mid-1930's. The city was founded by Robert McCulloch who bought the London Bridge from the City of London, shipped it across the ocean block by block and reconstructed the antique, giving Lake Havasu City a lifetime brand. The city lies near the joining of Arizona, California and Nevada. With a beautiful desert climate, temperatures are moderate 9 months of the year and warm in the summer, beckoning winter visitors from colder climes between Oct-April and sun loving, watercraft oriented travelers from Southern California, Southern Nevada and from throughout the Southwest from Easter through the early fall.
The city's fulltime population of nearly 55,000 enjoy the feel of a small town in a small metropolitan area. There are several resorts on/near Lake Havasu and plenty of water to play on. Off-roading in the desert that surrounds the city's borders is also a popular past time. There are several elementary schools, one middle school and one public high school along with several public and private charter and parochial schools, all offering a higher than average level of education. The city is also fortunate to have the ASU @ Lake Havasu campus in the center of town, and a local branch of Mohave Community College offering outstanding higher education opportunities in many doctrines. Festivals, including a world class hot air balloon event, power boat racing, a winter street festival that attracts 35,000 each year, occur almost every weekend throughout the year. Lake Havasu City has something for everyone.
Lectures are held on a regular basis at ASU @ Lake Havasu through the university, the Lake Havasu Museum of History and other organizations. Live community theater is available year-around at Grace Arts Live in downtown Lake Havasu City. The Allied Arts Association maintains a diversified schedule of the cultural arts opportunities year-around. Several art galleries are located throughout the city, including the Spresser Gallery at the Visitor Center under London Bridge.
Lake Havasu City has a municipal transportation system with by appointment service for seniors. The nearest major airport is McCarran in Las Vegas, only 140 miles to the north. Charter flights are available at the Lake Havasu airport as well. Rental cars are available from 3 different agencies in town and several independent taxi companies are located in Lake Havasu City. Uber and Lyft also have a presence in the city. There is transportation to VA centers in Prescott and Phoenix from Lake Havasu City on a regularly scheduled basis.
Havasu Regional Medical Center is the primary healthcare provider for the region. It is a full-service hospital with 200 beds.
Month | Low (°F) | Average (°F) | High (°F) | Precipitation (in)
Jan | 40.0 | 53.0 | 66.0 | 1.0
Feb | 43.0 | 57.0 | 71.0 | 1.0
Mar | 49.0 | 63.0 | 77.0 | 0.8
Apr | 56.0 | 70.5 | 85.0 | 0.2
May | 65.0 | 80.0 | 95.0 | 0.1
Jun | 74.0 | 89.0 | 104.0 | 0.0
Jul | 80.0 | 94.0 | 108.0 | 0.4
Aug | 79.0 | 93.0 | 107.0 | 0.6
Sep | 72.0 | 86.5 | 101.0 | 0.6
Oct | 60.0 | 74.0 | 88.0 | 0.4
Nov | 47.0 | 61.0 | 75.0 | 0.4
Dec | 39.0 | 52.0 | 65.0 | 0.7
Spring temperatures are mild with averages in the low 70's and highs in the mid 80's. Nights are cool with lows in the mid 50's. Summer temperatures are hot with averages in the low 90's and highs in the mid 100's. Nights are warm with lows in the upper 70's. Fall temperatures are mild with averages in the low 70's and highs in the upper 80's. Nights are cool with lows in the upper 50's. Winter temperatures are cool with averages in the low 50's and highs in the upper 60's. Nights are cold with lows in the low 40's. | http://www.best-place-to-retire.com/retire-in-lake-havasu-area-az |
On which devices does Beat2Phone work?
The Beat2Phone application has not been tested on all mobile devices.
It has been found to work well in the following devices:
- Samsung models: Galaxy S3, S4, S5, S6, S7, S8, A3, A5, Tab A, Note 10.1 (tablet)
- LG G3, LG Nexus 5S
- Huawei models: Honor 8, Media Pad M3
- Motorola Moto G
- Sony models: Xperia M4, Xperia Z5 Compact
- Google Nexus 7 (tablet)
- Doro
- Nokia 3
- OnePlus 3T
In the following devices, Beat2Phone does not work properly (Bluetooth problem): | https://www.beat2phone.com/en/ufaqs/different-android-devices-beat2phone-works/ |
Features:
Pulled from a working machine; the CPU will be given a final double test before shipping.
Warranty Terms: 1 Year Warranty
Specifications:
Processor Brand: Intel
Processor Series: Intel Pentium
Model number: G5400
Processor Speed: 3.7GHz
Processor Socket: 1151
stepping: SR3X9
The number of CPU cores: 2
The number of threads: 4
L2 cache size (KB): 2 x 256 KB
L3 cache size (MB): 4MB
Wattage: 54 Watt
58 Watt (4-core die version)
Package Includes:
1 x G5400 Processor
1x Cooling FAN
Tips:
Please check before purchasing if your PC and parts are compatible with this processor to avoid extra shipping cost and delays.
Photos and serial numbers are for reference only; if there is any variation, please refer to the actual item. Thanks!
Product Details:
Payment & Shipping Terms:
Material: Stainless Steel | Weight: 20 kg
Control Box Size: L600 x W500 x H1280 mm | Test Machine Size: L630 x W500 x H1910 mm
Test Voltage: 0 ~ 6 kV | Power: 220 V / 50 Hz
Mining Cable Mechanical Impact Testing Equipment / Mechanical Tests For Flexible Cables
1. Overview
This test machine is suitable for testing the mechanical impact resistance of mining rubber flexible cables rated below 6 kV. It meets the standard requirements of GB12972 and MT818.
2. Matters Needing Attention
The hammer of this test machine weighs 20 kg and is dropped from a considerable height, so care must be taken during the test to avoid injury to body parts such as the hands and feet. The protective latch (the latch under the weight) and the fixing latch (the latch under the electromagnetic plate) must be inserted before the test. The fixing pin should only be pulled out when the test is about to be carried out, so that the hammer cannot fall and cause injury if the electromagnet loses power or is damaged.
3. Technical Parameters
(1) Instrument Dimensions: Test Machine Size: L630mm x W500mm x H1910mm
Control box size: L600mm x W500mm x H1280mm
(2) Weight quality: 20 kg
(3) Test voltage: 0 ~ 6 KV continuously adjustable
(4) Test stroke: 0.75m, 1.1m and 1.5m
(5) Working power supply voltage: 220V/50HZ
4. Test Preparation
(1) The test core cross-section and the corresponding number of cable impacts are specified as follows:
(16—35)mm2: 2 times;
(50—l20)mm2: 3 times.
(2) Sample preparation
One sample was taken from the finished cable and was about 2 m long.
Q: Are you trading company or manufacturer ?
A: We are a manufacturer.
Q: How long is your delivery time?
A: Generally it is 5-10 days if the goods are in stock, or 15-20 days if the goods are not in stock, depending on the quantity.
Q: Do you provide samples ? Is it free or extra ?
A: No, we don't provide the samples for testing.
Q: What are your terms of payment?
A: Payment <= 1000 USD: 100% in advance. Payment >= 1000 USD: 50% T/T in advance, balance before shipment.
If you have another question, please feel free to contact us.
Q: What is your warranty policy?
A: The warranty for the equipment is 1 year. All equipment is thoroughly tested with the customer's samples and confirmed to run well before shipment. Please contact us if the equipment does not work properly after being shipped to your company; we will guide you through fixing it, and spare parts or components will be sent if necessary. Charges will apply if the problem is caused by the customer's misoperation. | http://www.environmental-testingequipment.com/sale-11608180-mt818-mining-cable-mechanical-impact-testing-equipment-mechanical-tests-for-flexible-cables.html
Adam’s background:
- Graduate of Modern Technology School, Anaheim, CA. in X-ray Technology
- ROP, Anaheim, CA. Medical Assisting Program
- CDPH-RHB X-ray Technician (XT)
- Member of the American Society of Radiologic Technologist (ASRT)
Adam, thank you for meeting with me. Can you tell me a little bit about your role at Modern Technology School?
I make sure that all the doctors/physicians at clinical locations are aware of the state requirements and our requirements for training. I synchronize with clinical coordinators to ensure the students are getting what they need with regards to the number and types of exams. We also ensure that people are passionately training the students, not just using them as another ‘warm body’ in the clinic — we want there to be a lot of training and for the students to come out of the clinical internship portion of the program feeling confident and well-trained — ready to take on their new professions.
We also have to ensure that the clinics are compliant with regards to following state laws, and the state wants to make sure it’s a competent facility that will thoroughly train our students. As a Radiation Safety Officer, I handle the radiation protection & radiation safety-end of the x-ray program with the clinics. I have a working rapport and history with the x-ray techs at the clinical locations, so I’m able to act as a go-between. This is helpful because each clinic and office will have their own set of individual rules, so it’s important that we keep up with them in order to ensure our students’ success.
Talk to me about your experience and background within the Healthcare and X-Ray Technician Education field
Of course. I started out as a File Clerk within a family practice in Orange County, working in medical records. I really enjoyed it and saw how other people (X-ray technicians) were happy to show up to work each day in order to help people, and I’ve always had a passion for helping people… whether it was working in public safety, or in medicine. I ended up going through ROP (Regional Occupational Program) at the time, which offered an 11-month program for medical assisting. This was while I was still working as a File Clerk at the family practice, so this gave me the opportunity to learn the medical terminology for both front & back medical office. Once I graduated and I was able to become a medical assistant in California, that same clinic I was working at, gave me an opportunity to work in the back office, and I worked there for about a year before moving forward with my Healthcare career.
While working as a medical assistant / back office there, I started seeing Modern Technology School’s students begin coming through my office, as part of their clinical internship. Now this was before I worked here (at Modern Technology School), and it really intrigued me. I had the opportunity to speak to them and work with them quite a bit, and really developed an interest in the X-Ray portion of their training within the family practice. Now this was back 2005, so Modern Technology School was moving to Fountain Valley at the time. As soon as they relocated closer to me, I took that as a sign and jumped on the opportunity to attend the school and study to become an X-Ray Technician in California as soon as I could.
What was also really interesting, was that before I moved on to wanting to become an X-Ray Tech in California, I actually worked with Modern Technology School students at my clinic for over 7 years. This gave me a unique vantage-point and helped me to understand what it takes to succeed as an X-Ray Technician, as well as what skills were needed in order to successfully complete the clinical internship for X-ray techs. For instance, some students were more independent throughout their training at the clinic, while others required more patience, and slower, focused instruction. I was able to see both sides of the X-Ray Technician program this way, and it’s also how I met the staff and faculty at Modern Technology School.
What makes the X-ray program unique for students considering a career in Radiology/X-Ray?
The X-Ray Technician program at Modern Technology School is very unique. This is due primarily to the way we cross-train our x-ray students as medical assistants. It reminds me a lot of public safety — firefighters for instance — a lot of them will go out and cross-train as paramedics and EMT’s. It makes them more useful and attractive to the cities and counties hiring them. It’s the same thing here — you get that cross-training as a medical assistant, and you’re that much more attractive to employers looking to hire an x-ray technician in California. Hiring an MA/X-Ray Tech is also very cost-effective for Healthcare employers, which is another reason we do the cross-training — we’ve evolved the program to match up with what more and more Healthcare employers are looking for from XT’s. This in turn creates more value for our students and helps them become more attractive to prospective employers.
There are a lot of Healthcare corporations that are moving to California from back east as well. Now on the East Coast, they don’t really have many (or any) X-Ray Technician / Medical Assistant positions. In fact, many of them utilize an LVN/RT, which is very costly to the employer. As many of these corporations have made their way into California, they’ve realized how valuable it is to have an MA/X-Ray Tech working for them versus the LVN/RT’s they were used to working with back east.
You were a graduate first and foremost, what did you find useful about the X-ray Tech Program and what portions/modules did you enjoy most?
Everything that you get to put to work once you graduate from the X-ray Technician program and get out there in the field. I loved the osteology, learning about bones, it was fascinating. I enjoyed learning about how X-Ray Technicians use different positioning in order to take and capture different types of x-rays/images. Also, figuring out the small details and learning what to say to patients and work with them professionally — explaining things properly and helping keep them safe, providing instructions, and ensuring you’re doing right by them.
It’s very different from what may be considered a very traditional education, where you’re learning things you may not really utilize in your field of choice. Instead, we used everything we learned, and it’s fascinating coming back as a teacher now, because I get to teach all of that useful stuff that I enjoyed and use on a daily basis.
Now that you’ve been on both sides, what do we offer that other schools don’t at this level?
It’s hard to compare us to other schools in Orange County; we are very unique in how we offer both X-Ray Technician training and the included Medical Assistant certification under one program. We’ve been doing it for a long time here — since 1981 — and we’ve had the opportunity to work with so many different students throughout that time. There is a great deal of book-work involved, but it isn’t boring, and we also do an enormous amount of hands-on training. Our students get to work closely with instructors and faculty, and our classes aren’t overloaded.
We believe in quality instruction, so class sizes are kept small and our x-ray students have a better chance at absorbing the material as well as working hands-on with actual field equipment. If they have challenges, the instructors are always there to coach them through it, and of course the rest of the team that makes up the entire package is phenomenal — it’s tough to beat here in Orange County, especially considering all of the impacted radiology programs you hear about.
Where would an XT generally find work upon graduation/exam completion?
XT’s who cross-train as Medical Assistants will generally find work in clinics, urgent cares, worker’s compensation offices, industrial medicine, internal medicine, family practice, sports medicine, orthopedic surgeon offices, you name it — it’s a pretty broad spectrum of where you can find work as an X-Ray Technician in California. As far as working routines, as an x-ray tech, you have a lot of choices with how you want to build your schedule. X-ray techs can choose to work graveyard, 2nd-shift, mornings — it’s very flexible here in California.
I’ve worked “sleeper-tech” where you work 7pm to 7am on weekends, and the clinic actually has a living-room setup, where you can take naps during your shift… when a patient comes in, say at 3am and they ring the bell, you come out and you take care of the patient and you’re able to go back to sleep until the next one arrives. Obviously that’s a unique situation and shift, but I’m trying to illustrate that there are a lot of options for different types of employment for x-ray technicians in California. It really depends on what you like, and it’s nice to know that those options are out there, one to match each X-ray tech’s personality and needs.
It’s also interesting in that there are so many employment options for X-ray technicians in California, as it’s not really like that for some of the other positions within the radiology field right now. There are impacted classes at the CRT or rad-tech level, and a lot of these junior colleges have 5-6 year wait lists for their radiology programs. They’re also having a hard time placing students in clinical internship because the programs are so impacted. Furthermore, they’re not building new hospitals, like they are urgent cares and clinics, where x-ray techs work. There are also a good number of doctors/physicians that are coming out of school and creating their own practice, or taking over the family practice — these are all great opportunities for the X-ray technician.
Walk me through a standard day in the life of an X-Ray Technician in California
Once you get in, you grab your clinic schedule, and see what your day is going to consist of. You’ll probably then go on to turn on your x-ray machines and then go through your morning routine while they warm-up. I then wipe down all my equipment, as I’m not sure who was there the night before — even if it was me. I want to always protect my patients and ensure that my room is clean, that I’m making a great first impression on the patients when they enter. I go through and make sure all of the trash cans have been emptied, that my lead aprons are ready for use, and check supplies before my first patient comes in. Preparation is key in Healthcare and working as an X-ray technician. If I see that specific patients are coming in, or ones that have specific needs, I can also prepare for my day in that way as well.
As part of my daily routine, of course you’ll be taking x-rays and filling in with other physicians as necessary. Maybe the clinic will assign you to one physician, where you take x-rays for just one doctor, or you’ll work with multiple physicians. It really depends. Some clinics have multiple x-ray techs and medical assistants, others employ only one or two with a broader range of skills — it really depends on the type of medical office or clinic you choose to work for. This is really nice for the x-ray technician, because you can find a clinic that suits your needs and work-flow.
For instance, an X-ray technician’s daily routine at an urgent care will primarily involve working with patients who have a variety of non-life-threatening emergencies. People with sore throats, urinary tract infections, cuts and lacerations, that type of stuff. As far as family practice goes, the x-ray tech will probably be working more with patients receiving their diabetes checkups, people with COPD or CHF — conducting routine x-rays and follow-up x-rays… so it really depends on the field and work that you like and enjoy. If you work for an orthopedic surgeon’s office, your daily routine will involve few chest x-rays, but more casting, splinting, and follow-up x-rays for pre- and post-op surgeries, etc. Lastly, if you work as a medical assistant/x-ray tech for an industrial medicine office, your daily routine will consist mainly of urinalysis and drug screens, injuries, physicals, doing EKGs… and this is where being a great medical assistant pays off. You’ve got to be a good MA to be a great XT, which is another reason the included cross-training is important.
What is the relationship between the X-ray Technician and the Physician?
It’s a close relationship. The physician will rely on the X-ray tech to take really clear x-rays that are easily readable and identifiable. While you wouldn’t usually be reading the x-rays, you may help point things out to them in order to ensure the physician is confident in their diagnosis and what they see. And for instance, if the physician is not confident in the image, they will most likely send that image out to a radiologist, who will look at the x-ray and send your clinic back a report with their findings.
Your relationship as an X-ray technician working with the physician is important. A lot of the physicians are not supervisor/operators, so they don’t use the x-ray equipment or know how to operate it; they know how to read the x-rays you create, as well as what types of x-rays to order. If the physician is a newer doctor who may not be 100% confident in their diagnosis, they may rely even more heavily on you as an x-ray tech, so your work and that relationship are paramount to both the clinic and ultimately the patient.
Walk me through some of the highlights of working in the field as an XT – what is most enjoyable about the job to you?
I’ve worked with orthopedic surgeons, in a family practice, internal medicine, and urgent care clinics. I think, by far, my favorite is working as an X-ray technician within an urgent care. You’re able to help a large number of people, and to me that’s what it’s all about. I’m in this field because I care about people and want to help them. Because of the number of incoming patients, it’s rarely boring and you’re able to serve more people, as well as get a lot of gratitude from the people you help.
When I worked in family practice, it was different because doctors/physicians and the clinical team were helping patients in small increments throughout the course of their treatment. So it was very gradual help, and small steps towards an ultimate goal/solution. But when that patient comes to see me at an urgent care, it’s different. They need something done right away and we’re able to provide that instant gratification for that patient, and to me that’s what’s most important. I get that “thank you for helping me,” or “I feel so much better because of you,” and we tell them that if they have any other issues, to follow up with their regular physician… but we’re able to sort it out for them quickly and provide instant gratification and relief for the patient. That’s powerful and special.
How has the X-ray Technician job changed if at all, in the last few years, or when you first started?
Technology has changed a lot. About 8 or 9 years ago, when Arnold Schwarzenegger was governor, a bill was signed that allowed X-ray technicians to process images digitally. Before that, you had to be an RT or rad-tech, because it wasn’t written into law. If that bill hadn’t been passed, it would have put a lot of XT’s out of a job, because the only thing we could process was film, using chemicals. After the bill was passed and became law, we put it into the curriculum, and X-ray technicians now take digital x-rays — it’s a huge step in the right direction in California.
The change in Digital X-ray is also tremendous. When digital x-rays first showed up, back in the 1970’s, we only had about 250 shades of grey in each x-ray. We are now at over 20,000 shades of grey, providing amazing clarity, detail, contrast and a beautiful image which helps with quick and accurate patient diagnosis. The most important aspect of all of the detail you can now find in those digital x-rays is in the chest x-rays. We have certain radiologists who are B-readers, who only read chest x-rays for patients coming in with exposure to asbestos, siliceous dust from cement, black lung miners and people who come in with industrial ailments/diseases… this is where digital x-ray has changed lives. Now that technology has caught up with the film/chemical process, the detail is tough to beat, and it’s very useful.
Digital x-ray is one of the modules here at Modern Technology School. We teach strictly digital at one portion of the X-ray Technician program here and it’s integrated into the physics and science of x-ray and radiology. As things change every day, and they eventually phase out the chemical and film portions, we will follow suit and ensure that our students are prepared for whatever the industry and employers demand of them.
How does the included Medical Assistant certification/training fit into an X-ray Tech workload?
For an XT to take x-rays for 8 hours a day, would be very unusual. If you didn’t get the medical assisting training, you may be sitting around the clinic an awful lot, and so that’s why we cross-train our students… and why most employers are looking for well-rounded XT’s who have that MA certification. It keeps you busy and you’re able to work on both sides of the office, with several different physicians. Again, you’ve got to be a good medical assistant to be a great X-ray tech. It comes with professional patient care and bedside manner, critical thinking skills, and of course the terminology and a solid foundation in Healthcare. When I started, I started as a medical assistant, and that was really the meat & potatoes of the job in many cases.
Can you walk me through what students learn in Clinical Internship and how long that portion is and when it takes place?
The clinical internship portion of the X-ray technician program starts when students are ready for it. That means once you’re prepared and have learned all your anatomy, x-ray positioning, your patient instructions for each individual exam, and are fully prepared to go out there and take x-rays and work with actual patients (under supervision). Once you get to the clinical phase of our x-ray tech program, you’ll work with a seasoned x-ray technician who will work hand-in-hand with you. It’s very rare that you’ll find an x-ray tech at our clinical sites who has not worked with students from our x-ray school and program.
We’ve built a rapport with a lot of great Healthcare and medical offices, and most of the time the x-ray tech that you train with at the clinical site is also an alumnus of Modern Technology School. This is great because they understand the process, they understand what it’s like to be an x-ray technician student, and they are very patient. They’ll walk you through the workflow slowly, they’ll make sure you understand how to do the exams, they’ll walk you through them and watch you work with patients before letting you work on your own. They definitely take you under their wing and you build a trust and relationship with them, which is great.
The clinical internship phase starts at about the last 4 months of the X-ray technician program. It’s 520 hours, and you’ve got different categories of x-rays that you’ll have to take, for a total of 350. However, keep in mind that with an 8-hour day at the clinic and the 520 hours you’ll spend there, those 350 x-rays coincide perfectly with the amount of work you’re doing. It may sound like a lot of x-rays at first, but if you break it down to the weeks and individual x-rays, it really is only a handful of x-rays you’re trying to capture each week, throughout those final 4 months.
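To make that pacing concrete, here is a minimal back-of-the-envelope sketch of the numbers quoted above. The 8-hour clinic day and the roughly 17-week (about 4-month) span are illustrative assumptions drawn from the interview, not program rules; actual schedules vary by clinic.

```python
# Back-of-the-envelope pacing for the clinical internship figures quoted
# above: 520 clock hours and 350 total exams. The 8-hour clinic day and
# the ~17-week span are assumptions for illustration only.

TOTAL_HOURS = 520
TOTAL_EXAMS = 350
HOURS_PER_DAY = 8        # assumed full clinic day

clinic_days = TOTAL_HOURS / HOURS_PER_DAY        # 65 clinic days
exams_per_day = TOTAL_EXAMS / clinic_days        # ~5.4 exams per day

weeks_in_phase = 17                              # ~4 months, assumed
exams_per_week = TOTAL_EXAMS / weeks_in_phase    # ~21 exams per week

print(f"{clinic_days:.0f} clinic days, ~{exams_per_day:.1f} x-rays per day")
print(f"spread over ~{weeks_in_phase} weeks: ~{exams_per_week:.0f} x-rays per week")
```

In other words, the 350 exams work out to roughly five or six x-rays per full clinic day, which is why the total is less daunting than it first sounds.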
Why is clinical internship so important, and why is it important that we offer it as part of the program (as opposed to sending students to find it on their own?)
I think it’s important that both the x-ray techs and the doctors at the clinical sites know what is expected of the training required by the school and the state. For this reason, if the students are forced or asked to go out and find their own clinical internship as an x-ray technician, there’s no way to guarantee they are going to get the same level of education they’d get from a clinical site and office that’s been fully vetted and whom we’ve built a relationship with… an office that understands what’s expected of the student once they graduate and take their exams and begin working in the field as an X-ray technician in California.
If students are asked by other schools to find their own clinical internship, you’ll never be certain as to whether or not they are getting everything they need — not just to succeed, but to qualify based on the licensing requirements as well. That’s why we’ve extensively interviewed and worked with the same offices for our X-ray technician students’ clinical internships. We want there to be accountability and reliability, so that our students are protected and well-trained for success in the field, working as an X-ray tech. It also goes both ways — clinics are looking to us as well; they often train students especially well because they hope to keep them on based on their performance. We generally use offices that would be a typical job for MA/X-Ray techs, so when students finish their clinical internship, they feel well-prepared and confident when reading the job descriptions for the positions they are applying for.
Any advice for X-ray Techs that are in the clinical internship phase of their program?
I would say, don’t overlook the little things. Mirror the good x-ray technicians who are doing well at the office you’re training at. Do the little things that make a big difference. For instance, you may notice that the needles need to be restocked. It may not be your job, but taking care of those little things makes a big difference — your manager will notice.
Do you have advice for incoming students who are on the fence about a career as an X-ray Technician?
Becoming an x-ray technician is a great starting point for a career in radiology & Healthcare, but it’s also a great endpoint. You make a good enough living that you can choose whether you want to stay working as an x-ray tech in California or advance your career to the next level. There aren’t many positions out there like that; there are some where you’re forced to continue to move to the next level in order to make a good living. As an XT you can choose to stop at the entry level, but you can also move on to bigger opportunities as well. For instance, as an X-ray tech, you can add more modalities, go into bone densitometry or nuclear medicine, and of course make the transition to the CRT or rad tech level. I’ve also seen many XT’s advance into management positions at clinics/offices.
Now that you’re on the faculty side, what stands out most about our staff/faculty/instructors at Modern Technology School?
Everyone here has worked in the field. We’re passionate and love our careers. We enjoy what we do. I enjoy what I do. We wouldn’t be here if we didn’t. I find it very fascinating. Radiation is very fascinating to me.
Any final notes for current/prospective X-ray technician students coming from your years of experience?
Make sure you’re doing what you love. Enjoy what you do, because that’s key. I enjoy what I do — I didn’t think I’d be on the education-side of things when I started, but this is what I love. Find what you love and be good at what you do! | https://mtschool.edu/day-in-the-life-of-x-ray-technician-in-california/ |
Are southpaw boxers better?
Why left-handed people make better fighters: ‘Southpaw’ boxers win more often by catching opponents off-guard, study reveals. Left-handed people are better fighters than their right-handed counterparts because they catch them off guard, new research has found.
What is orthodox in boxing?
In combat sports such as boxing, an orthodox stance is one in which the boxer places their left foot farther in front of the right foot, thus having their weaker side closer to the opponent. It is mostly used by right-handed boxers.
Why do southpaws have an advantage?
As with many sports, left-handed athletes (known in boxing as southpaws) carry a huge advantage because everything they do comes from the opposite side of what a typical right-handed orthodox fighter is used to seeing. Simply put, southpaws have far more experience against orthodox fighters than vice versa.
What is a Northpaw?
Noun. northpaw (plural northpaws) (informal) A right-handed person.
Is Muhammad Ali southpaw?
The most famous orthodox-stance boxers include Muhammad Ali, Sugar Ray Robinson, Floyd Mayweather, Jr., Mike Tyson, George Foreman, and Wladimir Klitschko. The opposite of the orthodox stance is the southpaw stance, which involves placing your weaker right side closer to the opponent while keeping the strong left hand in the back.
What are the 4 styles of boxing?
There are four generally accepted boxing styles that are used to define fighters. These are the swarmer, out-boxer, slugger, and boxer-puncher. Many boxers do not always fit into these categories, and it’s not uncommon for a fighter to change their style over a period of time.
Why are southpaws dangerous?
This means the inner thigh doesn’t get conditioned as well to take hard hits, and when you fight a southpaw it’s the target for their power kick from the rear leg. Even experienced fighters can be in serious trouble after even 1-2 hard inner thigh strikes from a southpaw .
Is Floyd Mayweather orthodox or southpaw?
Boxer Floyd Mayweather is an orthodox fighter, which you can see here. Now let’s talk about what a southpaw is. As you might have guessed, southpaw is simply the opposite of what an orthodox fighter is. A southpaw fighter stands with their right side forward, while keeping their left side behind them.
Should I be southpaw or orthodox?
To have an orthodox stance means to stand with your left foot in the front and right foot in the back. To have a southpaw stance means to stand with your right foot in the front and left foot in the back. In general, you would always have your strongest hand in the back.
Was Bruce Lee a southpaw?
Bruce Lee was a right-hander, but also a southpaw. His belief was that the strong side should be in front for a streetfighter, as in a “real” (street) fight there is no jumping about, and the fight will be over in seconds. Bruce Lee did not believe in different fighting styles.
Can right handed person fight southpaw?
Left-handed boxers are usually taught to fight in a southpaw stance, but right-handed fighters can also fight in the southpaw stance for many reasons, such as tricking the opponent into a false sense of security.
Is Tyson left handed?
Though not known to many, Mike Tyson is also a left-hand-dominant boxer who chose the orthodox stance. If you’re left-handed and want to remain in the orthodox stance, this is an option for you.
Why do they call it southpaw?
In baseball, “southpaw” has referred to left-handed pitchers since the 19th century. One origin story notes that old ballparks were oriented with home plate to the west, so that a lefty facing west would be throwing with his “south” paw.
How do you counter southpaw?
Tips for fighting southpaws: Always expect the southpaw to counter your every move with the left cross. Be ready for it at all times. Try throwing a right hook/uppercut to his body (or elbow), giving him an easy block as you pull your head to the left. Then slip your head down outside the southpaw’s counter left.
Why are left-handed batsmen called southpaws?
The “American Heritage Dictionary of the English Language” cites the conventional wisdom that the word “southpaw” originated “from the practice in baseball of arranging the diamond with the batter facing east to avoid the afternoon sun.”
Only clinically validated genetic screening test based on next-generation sequencing can reduce the risk of transmitting genetic disorders to 1:100,000, shares Dr Rajni Khajuria, PhD, Laboratory Manager, Igenomix India with Elets News Network (ENN)
Family genes are very influential. Various genetic disorders are passed on to children through their parents or grandparents and are found in the family lineage or ethnicity. While carriers may not suffer from the disorder and can live a healthy life, if both partners are carriers the risk of an affected child is as high as 25 per cent. A study by IGENOMIX shows that out of 138 non-consanguineous couples, 6 per cent had a high risk of transmission to their offspring. This number is as high as 17 per cent in the case of consanguineous couples, especially in countries like the United Arab Emirates.
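For readers wondering where the 25 per cent figure comes from, it follows from standard Mendelian inheritance for an autosomal recessive disorder. Below is a minimal sketch of the arithmetic, under the assumption that each parent carries exactly one copy of the mutation:

```python
# Each carrier parent passes the mutated copy to a child with probability 1/2,
# independently of the other parent. For an autosomal recessive disorder the
# child is affected only if it inherits the mutated copy from BOTH parents.

p_from_mother = 0.5
p_from_father = 0.5

p_affected = p_from_mother * p_from_father            # 0.25 -> the 25% figure
p_carrier = 2 * p_from_mother * (1 - p_from_father)   # 0.50, one mutated copy
p_clear = (1 - p_from_mother) * (1 - p_from_father)   # 0.25, no mutated copy

print(f"affected: {p_affected:.0%}, carrier: {p_carrier:.0%}, "
      f"unaffected non-carrier: {p_clear:.0%}")
```

The same arithmetic applies to each pregnancy independently, which is why the risk does not decrease after an affected child is born.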
Birth Defect in India
In March 2006, research carried out by the March of Dimes Birth Defects Foundation reported the prevalence of birth defects in India as 64.4 per 1,000 live births. Rao and Ghosh (2005) state that 1 out of every 20 newborns admitted to hospital carries a genetic disease, and such diseases eventually account for nearly 1 out of 10 infant deaths. In regions with more consanguineous marriages, congenital abnormalities and genetic disorders are the third most common cause of mortality in newborns. This makes it crucial to reduce the risk of a child being born with a genetic disorder.
Top 5 most Common genetic disorders in Indian ethnicity are Beta-Thalassemia, Cystic Fibrosis, Sickle Cell Anaemia, Spinal Muscular Atrophy and Haemophilia A.
Cystic Fibrosis: Cystic fibrosis (CF) is an inherited genetic disorder that causes persistent lung infections and limits the ability to breathe over time. A study by Igenomix shows that about 1 in every 25 people carries a CF mutation. People with CF have inherited two copies of the defective gene, one copy from each parent; when both parents are carriers, each child has a 25% risk of the disease. CF symptoms include breathing problems, chronic lung infections, and digestive and reproductive issues.
Sickle Cell Anaemia: Sickle cell anaemia (SCA) is a genetic blood disorder that affects 1 in every 150 live births. It causes red blood cells to grow into a crescent shape, like a sickle. The sickle-shaped red blood cells break down easily, causing anaemia; they survive for only 10-20 days rather than the normal 120 days. This causes severe pain and permanent harm to the brain, heart, lungs, kidneys and other body organs. In the United States (US), about 2,000 babies are born with sickle cell disorder every year.
Spinal Muscular Atrophy: Spinal muscular atrophy (SMA) is a genetic disorder that strips an individual of physical strength by affecting the nerve cells in the spinal cord, taking away the ability to walk, eat, or breathe. SMA affects approximately 1 in 10,000 babies, and about 1 in every 50 people is a genetic carrier.
Haemophilia A: Haemophilia A, or factor VIII (FVIII) deficiency, is a hereditary disorder caused by a lack of, or defective, factor VIII, a clotting protein. Even though it is usually inherited, approximately one-third of cases are caused by a spontaneous mutation. According to the US Centers for Disease Control and Prevention, haemophilia occurs in approximately 1 in 5,000 live births. Individuals with haemophilia A usually bleed longer than other people. Bleeds can happen internally, into joints and muscles, or externally, from minor cuts, dental procedures or trauma.
We all have changes in our genes, and the carrier screening test allows us to find out whether they could cause a disease in our children. 20% of infant mortality in developed countries is caused by genetic disorders. Igenomix provides a key platform for screening the risk of transmitting genetic disorders to your unborn child. To date, Igenomix has analysed more than 7,500 samples and screened 6,000 mutations across 600 genetic disorders. It is the only clinically validated genetic screening test based on next-generation sequencing that can reduce the risk of transmitting genetic disorders to 1:100,000. Genetic diseases cannot be cured but can be prevented with Carrier Genetic Tests (CGT).
About the Author
Dr Rajni Khajuria completed her Masters in Human Genetics in 2003 from Guru Nanak Dev University, Amritsar, Punjab. After completing her Masters, she joined the field of research and diagnostics in All India Institute of Medical Sciences (AIIMS), New Delhi, which is the Primary tertiary care hospital of India. She completed her PhD from AIIMS in year 2011 in the field of Molecular Genetics from Department of Pediatrics, under the supervision of Dr Madhulika Kabra, one of the leading expert in the field of Pediatric Genetics in India.
Her doctoral research established the molecular diagnosis of Rett syndrome in India, which made prenatal diagnosis and genetic counselling services possible for families of children affected by this unique disorder. She has been the Founder and National Scientific Adviser of the Indian Rett Syndrome Foundation, the first national trust for the care and support of Indian families with children who have Rett syndrome, and she also initiated awareness-raising campaigns for Rett syndrome in India to educate parents, doctors, healthcare professionals, caregivers, scientists and therapists, who were largely unaware of this disorder.
While working at AIIMS for about 9.5 years, she was involved in research and diagnostics of various paediatric genetic disorders as well as activities like teaching, training and the organisation of various symposiums, conferences, workshops and awareness programs. She has published in and reviewed for various peer-reviewed journals and has presented her research at several national and international conferences. She has been the recipient of many awards and honors during her academic career. She is also a member of the Indian Society of Human Genetics and the Genetics Chapter of the Indian Academy of Pediatrics.
She joined Igenomix India in May 2013 as a biologist to learn and take the advanced techniques of preimplantation genetics like PGS and PGD further in India, and help all those Indian families who were deprived of these technologies previously and give them the option to have a healthy baby at home. | https://ehealth.eletsonline.com/2016/06/most-common-genetic-disorders-in-india/ |
By TRICIA HILL
After three years of service, the Riverdale School Board accepted the resignation of High School Principal Dan Kiel at the Board’s meeting on Monday night.
Kiel has accepted a job in Elkhorn, in southeast Wisconsin, as an Associate Principal and District Director of extracurricular activities. He accepted the offer because the position is closer to his wife and family.
Leaving Riverdale was not an easy decision for Kiel, as he is going to miss everything about the Riverdale School District.
“The students are extremely nice and I was able to work with a nice group of people,” said Kiel.
Kiel will be starting his new position in Elkhorn on Friday.
New Principal
Jonathan Schmidt started his duties as high school principal of the Riverdale School District on Tuesday, with Kiel giving him a helping hand.
Schmidt was most recently a sixth grade Math/Science teacher for Riverdale, as well as the Dean of Students.
Schmidt graduated from UW-La Crosse in 1993 with a bachelor's degree in Park and Recreation Management. In 2001, Schmidt enrolled in the Master of Education Program at UW-La Crosse and graduated in the summer of 2004.
After Schmidt was married and had two children, he decided to go back to school to earn his Master's degree in education, and he began his teaching career at Riverdale in the fall of 2004.
Elementary Class Size
The Riverdale Elementary School is considered a SAGE school. This means that the district receives funding through the SAGE program.
In order for this to happen, class sizes in 5K through third grade must be kept to one teacher per 18 students, or two teachers per 30 students; alternatively, they can use team-teaching classrooms, as they did last year with the Kindergarten class.
With Riverdale in the SAGE program, the board may have to consider hiring a new teacher, as there has been some growth in class sizes, especially in first and third grades. The first grade class has 53 students and third grade has 54. This puts the first grade at one student below the limit.
“These numbers will change throughout the next couple of weeks,” said Principal Shari Hougan.
Currently, the district has advertised for an elementary SAGE teacher, so in the event the numbers increase they are prepared should they need to hire a new teacher.
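As a rough sketch of how the "one student below the limit" figure works out under the 18:1 SAGE ratio — the three-classroom split is an assumption for illustration only:

```python
# SAGE caps class sizes at 18 students per teacher (or 30 students with
# two teachers). With 53 first graders and an assumed three single-teacher
# classrooms, the grade sits one student under the cap.

first_grade_students = 53
max_per_teacher = 18
classrooms = 3                                   # assumed for illustration

capacity = classrooms * max_per_teacher          # 54 students
headroom = capacity - first_grade_students       # 1 -> "one student below the limit"

print(f"capacity: {capacity}, headroom: {headroom} student(s)")
```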
PBIS
The Positive Behavioral Interventions and Supports (PBIS) program, the behavioral intervention program started for the 2012-13 school year, has now formed a committee for this year. It will be headed by Jon Schmidt and Jen Tarrell, who will continue to add PBIS ideas at REMS.
The committee recently visited with other districts and formed ideas specific to Riverdale. The hope is that the PBIS programs will help students and staff approach behavioral interventions in both a positive and unified way.
Starting when the students return in September, the staff will introduce the “Riverdale Way” during the first few weeks of school.
This will be a big topic, not just at the beginning of school but also at the upcoming board meetings, as the program will be introduced more in depth soon at a school board meeting.
Principal Hougan, along with other Riverdale District members, wanted to thank the St. Vincent de Paul in Muscoda for donating over $200 of school supplies to the Riverdale School District. They would also like to thank the Grant County Thrift Shop for its generous donation of $100 for school supplies.
“We appreciate this greatly and our students and families do as well,” said Hougan. | https://www.swnews4u.com/local/education/kiel-resigns-as-riverdale-principal-is-replaced-by-jonathan-schmidt/
Last week, after watching some of the youth teams in the club play, I asked to organize a coach’s meeting. My purpose for the meeting was a desire to create a club philosophy. Within our club, coaches change almost every year, as most youth teams are coached by senior players. Therefore, there is little consistency from year to year, and the growth and development of the players appears slow and fragmented.
Yesterday, however, Finnish coach Harri Mannonen posted a blog about clubs designing their feeder systems around a staple system and questioned whether this limited development rather than enhancing it. Whereas a philosophy and a staple system are not necessarily the same thing, and in many ways are different, what is the purpose or the benefit of developing within one club?
In the U.S., it is rare for a basketball player to develop within one environment for several consecutive years. Even when I coached with a very good AAU program – Hoop Masters in Los Angeles – it was rare to retain good players for more than 2-3 years. I worked with another lasting program, the NorCal Sparx, but it imploded during the high school years. Another former club, the Santa Monica Surf, kept a group together for 2-3 years, and some for a couple more, but eventually the players split.
There are advantages and disadvantages to a one club or one coach system. For instance, when I was at Hoop Masters and coached u9 boys, we emphasized several things with the belief that their coach at the u10 level could complement this development. Therefore, we focused on individual defense, ball handling, and lay-ups, and believed the next coach could build on these skills with more team defense and shooting concepts. Because the majority of the players on the u10 team had played together on the u9 team, they had similar skills and a similar playing background which could be used as a foundation to continue to build and develop skills.
Instead, oftentimes players move from coach to coach without a natural progression. For instance, if a child plays in a local Parks & Recreation league as an u9, there is no guarantee of what skills his or her coach will emphasize. When these players move to u10, there is no guarantee that players will play for the same team or that the u10 coach will know the u9 coach. Therefore, the players may or may not have a similar playing background or skill set.
As a small example, with our team, every player handled the ball. Every player had varying abilities with the ball and varying size and quickness, but every player did every ball handling drill. We did not designate positions. Whereas there was definitely a discrepancy between the best and the worst ball handlers at the end of the season, even the worst ball handler was capable with the ball against a full-court press, which several opponents noted to us. In another environment, a coach may tell the bigger players not to dribble or focus only on one to two players dribbling the ball at all times. If one of the bigger players from this environment had joined with the players who we had at u9s, he would have been far behind. There would have been multiple foundations from which the next coach would have to start.
The disadvantage of the one coach or one club system, from an individual’s perspective, is the other side. What if your skills or strengths do not mesh with the coach’s approach to the game? For instance, what if Dirk Nowitzki had played for coaches who believed that he should never leave the key, that a 6’10 player had to rebound and protect the basket, not shoot jump shots? If he was stuck in a one club or one coach situation, he may have quit the game because that approach did not fit his skills or his desires, or he may not have developed into the player that he is today.
I worry about this with the teams that I coach. What if I miss on a player? What if a player excels at something that I am not noticing and I stunt his growth? Two years ago, I asked the varsity coach to promote a freshman player to the sophomore team because I did not feel like I was maximizing his talent. I did not think that he was the best player on my team, but I thought that a new coach might be able to get more from him than I was able to. I was worried that maybe he did not fit my system or our personalities did not mesh or whatever. In a one club or one coach system, however, what if that player was stuck with me or with my ideas for his entire career? When I trained a number of players, I encouraged them to try other trainers to see if there was something else that they could learn rather than being limited only to my knowledge and teaching.
When I played, I thought the best experience was playing for other coaches, whether at summer camps or summer leagues. In truth, my game and my skills never fit with the systems that my school teams used even though my school teams were always very successful. When I played in other environments, I was able to utilize my full skill set. As an example, as a high-school freshman, as a point guard, I basically dribbled the ball into the front court, passed to the wing, cut away and became a spot-up shooter. That summer, in a summer league, I was basically Steve Nash with the Suns. The coach gave me the ball, encouraged us to run, and let me use ball screens all game long. In the summer, I was able to use and improve all my skills even though our school-team system did not require those skills. If I only played with my school team, I may never have developed those skills.
Despite my fun and learning with my summer teams, most of my skill development occurred with my school teams. From 5th-8th grade, I basically had the same coach. Our skills progressed on an annual basis, as we progressed with the same foundation from 5th grade to 6th grade. With the summer team, I was the fortunate player chosen by my coach – there were other players who rarely ever touched the ball as I was the dominant ball handler. I was the one with the higher skill level. My coach once admonished me for passing up an open shot to pass to a less-skilled teammate for a lay-up which he missed. Whereas this built my confidence as no coach had, what was it like for my teammate? Was he developing? Would that situation have been good for him had that been his only coach for his entire career?
Therefore, I don’t know if there is a correct answer. I imagine players can develop in either situation. In the U.S., there is an embarrassment of riches. With the freshman player on my team, if he did not develop, there were another five point guards in the class behind him. If he never maximizes his talent, someone else will, and the success of the program will likely not be altered; just his individual career. In small clubs, like the one where I am now, we cannot afford to lose a player. We cannot afford not to maximize each player. My first team has only one true homegrown player; that means the club has to find ways to attract an entire team’s worth of players. That means finding jobs for some, enrolling some in schools, etc. Luckily, we are the only club at our level for miles and miles, so a couple players choose to drive an hour to play for our club in a more competitive level, as opposed to playing for their home club. On my second team, I have 3-4 homegrown players. The next level is the u16s, and they have only 8 players. Because of the dearth of players, we essentially have to develop all eight players and maximize the talents of all of them. Imagine coaching a varsity high-school team in a district where there were only 8 players playing on a combined 6th-8th grade team. It’d be pretty difficult to build a program. You would feel a need to develop all eight players to maximize their skills. That is a very different situation than being in a large public school where 70-100 freshman tryout for the team every year. If one of those 8th graders quit, there are plenty of players to replace him.
In a small club, I think there needs to be greater organization governed by a club philosophy. The philosophy does not need to be restrictive, but it should help to guide the coaches and players. When I watch our clubs play, each team from year to year plays different defense and runs different offense. Is that enhancing adaptability, as Mannonen suggested, or is it simply leading to a lack of mastery?
When I studied English at UCLA, I remember a professor telling us, in a discussion of the haiku, that the greatest constraints allow for the most creativity. Because the haiku constrains the poet to three lines in a 5-7-5 pattern, the poet no longer has to think about rhyme scheme or syllables per line and can focus entirely on the creativity of the content and the idea. In the same way, whereas a philosophy could be seen as reducing variability or creativity, simplifying the process may actually serve to create more creativity.
Mannonen wrote:
“Variability is one the main principles of motor skill learning. But if the club has a staple system, it will make the players repeat the same motor patterns over and over again, season after season. So having a staple system is counter-productive when comes to motor skill learning.”
Which motor skills? If a club adopts a packline defense philosophy for all levels, does that ensure that players will repeat the same patterns over and over? Doesn’t that depend on the offenses? If a team becomes very adept at its basic defensive principles, won’t that allow for additional creativity?
I agree, for the most part, with Mannonen’s argument because he is influenced by more restrictive systems, as he wrote:
“Sometimes clubs – at least here in Finland – will put an emphasis on designing a staple system of play for all or most youth teams within the organization. This staple system may be drawn up in great detail. It may include set plays, a continuity offense, a distinctive set of defensive rules and so on.”
My goal with my club is not to create a restrictive system or to force all the coaches to run my plays, my defense, and my drills. However, I do believe that a philosophy, an objective, and a systematic progression for players will enhance our club’s player development. From year to year, this development will be shaped by the individual coaches, who will change, but some things should be absolutes for all the teams or for specific ages.
What does that mean? Based on the way our teams have performed, I think we need to emphasize speed with all our age groups: speed of play, speed of foot, speed of thought, and speed of decision making. The better teams play so much faster than us at the youth ages, and much of that starts at practice. We need more of an emphasis on shooting at every level, as only one or two homegrown players are above-average shooters for their age group. As an extension of speed, we need more attention placed on individual defense, as very few players in the club at any age group excel at moving their feet and containing dribble penetration.
My goal is not to design one specific system for the entire club so every team runs the Flex and plays the packline defense. My goal is to create some absolutes and allow each coach to use his or her creativity within those absolutes. I think that the u16 coach should have an expectation of skill levels and knowledge base when players move from u14 to u16 rather than starting over each year.
Essentially, I think the goal of a club should be to utilize the best of both approaches (one coach/one club vs. hodge-podge), whereas I think my club, due to the constant changing of coaches, is stuck with the worst of both. I believe a club should have a continuity of learning for players as they progress from level to level. The teams should speak the same language so I can pluck an u16 player for my 2nd team, and he can adjust quickly to the new team because we have the same basic principles. However, I also believe that individual coaches should have some freedom to use their own drills or their own style, as long as they achieve certain skill-related benchmarks with their players and speak the same basketball language. In this way, there is continuity in each player’s development, but there is the opportunity for players to learn from different coaches and different styles, even within one club. | https://learntocoachbasketball.com/using-the-club-system-to-develop-a-true-feeder-system
WW Freestyle: 9
Prep Time: 15 mins | Cook Time: 1 hr | Total Time: 1 hr 15 mins

Coconut Milk and Thyme Braised Chicken - Delicious and easy to make one pot chicken dinner cooked in thyme-infused coconut milk and garlic.

Course: Dinner
Cuisine: Asian
Servings: 6 people
Calories: 482
Author: Katerina | Diethood

Ingredients
- 4 pounds of chicken pieces (I used boneless chicken thighs and drumsticks)
- salt and fresh ground pepper, to taste
- 2 tablespoons olive oil
- 1 tablespoon butter
- 6 thyme sprigs
- 1 whole garlic bulb, separated into individual cloves and peeled
- 1 can coconut milk

Instructions
1. Preheat oven to 375 degrees F.
2. Season chicken pieces with salt and pepper.
3. Heat olive oil and butter in a dutch oven or any other heavy pot that is stovetop and oven safe.
4. Add chicken pieces to the pot and cook for 4 minutes; flip and continue to cook for 4 minutes or until chicken is browned on all sides.
5. Add thyme sprigs and garlic; cook and stir for 30 seconds.
6. Add coconut milk, cover and bake for 50 minutes to 1 hour, or until chicken is cooked.
7. Remove from oven and let stand several minutes.
8. Serve.

Nutrition Facts (Coconut Milk and Thyme Braised Chicken, per serving)
Calories: 482 (Calories from Fat: 180)
Fat: 20g (31% DV)
Saturated Fat: 9g (45% DV)
Cholesterol: 216mg (72% DV)
Sodium: 312mg (13% DV)
Potassium: 698mg (20% DV)
Carbohydrates: 2g (1% DV)
Protein: 64g (128% DV)
Vitamin A: 265IU (5% DV)
Vitamin C: 8.7mg (11% DV)
Calcium: 40mg (4% DV)
Iron: 2.9mg (16% DV)
* Percent Daily Values are based on a 2000 calorie diet. | https://diethood.com/wprm_print/recipe/34717
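The percent-Daily-Value column above can be reproduced from the per-serving amounts. Below is a minimal sketch, assuming the classic FDA 2,000-calorie reference daily values; the recipe plugin that generated the table may use slightly different reference amounts for the vitamins, so only the macronutrients are shown.

```python
# Reproducing the "% Daily Value" figures from the per-serving amounts in
# the nutrition table. The reference daily values below are the standard
# 2,000-calorie label values (an assumption for illustration).

daily_values = {           # reference amount per day
    "Fat": 65,             # g
    "Saturated Fat": 20,   # g
    "Cholesterol": 300,    # mg
    "Sodium": 2400,        # mg
    "Potassium": 3500,     # mg
    "Carbohydrates": 300,  # g
    "Protein": 50,         # g
}

per_serving = {
    "Fat": 20, "Saturated Fat": 9, "Cholesterol": 216, "Sodium": 312,
    "Potassium": 698, "Carbohydrates": 2, "Protein": 64,
}

for nutrient, amount in per_serving.items():
    pct = 100 * amount / daily_values[nutrient]
    print(f"{nutrient}: {pct:.0f}% DV")   # matches the 31/45/72/13/20/1/128 figures
```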