A large-scale Bulgarian-British project for exploring the underwater archaeology of Bulgaria's exclusive zone in the Black Sea has been started by the Sozopol-based Center for Underwater Archaeology at the Bulgarian Ministry of Culture and the Center for Maritime Archaeology of the University of Southampton.

The project has been approved by Bulgaria's Minister of Culture Vezhdi Rashidov, the press service of the Culture Ministry has announced, adding that the joint project with the Center for Maritime Archaeology of the University of Southampton will be Bulgaria's first research effort for Black Sea exploration to focus solely on archaeology. The project is to be realized in compliance with the 2001 UNESCO Convention on the Protection of the Underwater Cultural Heritage, of which Bulgaria is a signatory.

Bulgaria's Culture Ministry points out that the "invisible" underwater cultural heritage "is probably the most endangered part of [the country's] cultural heritage", and the archaeologists on the project team are set to adhere to the best international practices in this field. "The achievements of Bulgarian underwater archaeology in recent years are the reason the University of Southampton and its Center for Maritime Archaeology have sought cooperation with the Bulgarian Center for Underwater Archaeology, a state institute at the Ministry of Culture," the institution says.

The main goal of the Bulgarian-British project is to create a map of Bulgaria's underwater archaeological heritage, and to collect further information about the sea level and climate changes which affected the life of prehistoric and ancient societies in the Black Sea region. The researchers are going to locate and identify underwater archaeology sites off the Bulgarian Black Sea coast, and take bottom samples. They will use two sea vessels – one for the shallow and another for the deeper parts of the Black Sea section under Bulgarian jurisdiction.

The project, which will be fully funded with a grant provided to the University of Southampton by a charity foundation financing educational expeditions, will take 3 years for exploration and 1 year for analysis of the collected data. The findings will be published in international academic journals.

"The project will provide opportunities for the promotion of the underwater cultural heritage in the Bulgarian section of the Black Sea, and for the training of young experts with interests in the field of underwater archaeological explorations," concludes the statement of the Bulgarian Culture Ministry.

Artifacts found through underwater archaeology explorations are displayed in Bulgaria's only Museum of Underwater Archaeology in the southern Black Sea town of Kiten.

Photo: Todor Hristov from the Facebook group "Bulgarians", dedicated to promotion of Bulgaria's historical and archaeological heritage. He may be contacted at [email protected].
http://archaeologyinbulgaria.com/2015/05/16/bulgarian-institute-university-of-southampton-to-start-joint-exploration-of-black-sea-underwater-archaeology/
Who is MSU?

Myositis Support and Understanding Association (MSU) is an all-volunteer, patient-centered 501(c)(3) nonprofit organization founded by myositis patients for myositis patients and caregivers. MSU is the leading support network for myositis patients, their caregivers, family members, and friends. Even though people living with the forms of the rare disease myositis are diverse and spread far across the globe, MSU provides various global communities and platforms for them to come together to share information on the effects, commonalities, and complexities of myositis to help empower each other. Through individual trial and error, similar experiences, and empathy for each other, MSU is continuing to build a supportive and informational network of camaraderie in which people can better understand their disease, effectively advocate for themselves, and realize they do not have to be alone in their struggles.

MSU, founded in 2015, is a fast-growing U.S.-based myositis nonprofit that provides education, support, advocacy, awareness, clinical trial matching, access to research, and need-based financial assistance for patients. Partnering with like-minded organizations, and continuing to build relationships with researchers, MSU is helping to lead the effort to find effective, affordable, and safe therapies for all forms of myositis. Working together with the pharmaceutical industry in a straightforward and transparent approach is key to this endeavor.

Knowledge in Action

MSU's board consists of real patients with myositis, so we are able to fully understand what other patients go through. Our knowledge of myositis, along with its various treatments, complications, hardships, and struggles, gives us a powerful base from which to formulate action plans. We are a tight-knit group that supports each other and others in our community. Our drive to support and help one another motivates us to help other patients in any way we can. Through education, online support groups, video chats, and our financial assistance program, we are able to put our knowledge into action and truly do all that we can to assist and encourage those with myositis.

We work for and with you because we are you

MSU is an organization operated by volunteers who understand myositis on a very personal level. As patients with myositis, we, too, experience the frustration of living with this disease, which is rare, and treatments which are harsh. We understand that our medical team is extremely important to us, but that they have no way to truly understand what it is really like to live with our rare and complicated disease. So we lean on each other and learn from each other through our support groups, our website, and our nonprofit organization. We continue to harness enthusiasm and determination, focusing on educating each other, our friends and families, and the medical profession. We utilize the energy we create working with each other to be a valuable force for the understanding and advocacy of these diseases and to provide assistance and support to people with myositis. The future of myositis is bright, as is our future as an organization. We will post updates as we work towards our goals. We appreciate your support!

How does MSU help?

- MSU provides educational materials for patients to take to doctors who may have never encountered myositis before in their careers, other than in a medical school textbook.
- MSU provides online patient support groups in which myositis patients and caregivers are free to share their fears, symptoms, and experiences honestly and openly – with the opportunity to get feedback, advice, and encouragement – knowing that they are not alone in their struggles.
- MSU provides financial assistance to myositis patients for medical and household expenses related to the costs of living with the disease and the financial devastation caused by interruption of the ability to work. We also offer travel assistance to visit a physician who has experience diagnosing and treating myositis.
- MSU offers a clinical trial matching program to help myositis patients find clinical trials for which they may be eligible.
- MSU partners with researchers, academia, and pharmaceutical companies in clinical-stage research to help with patient recruitment, education, and more.
- MSU assists patients in advocating for themselves with insurance companies and legislators, as well as providing educational materials to policymakers about the importance of access to appropriate treatments.

MSU is instrumental in helping to improve the lives of patients fighting this rare, complicated, immune-mediated muscle, skin, and often multi-organ disease by being the very first patient-centered organization to offer live, online video patient support and education sessions that simply make sense for those living with a muscle disease that involves limited mobility and with patients spread across the world. MSU also provides a "Simply Put" education series, offers clinical trial matching, advocacy, several online support options, awareness-building programs, and need-based financial assistance for patients.
https://understandingmyositis.org/who-is-msu/
by Mike Carbone & Veronica Lin

What is a meaningful connection, and how can we foster those types of connections? Pluto is an experimental social network that focuses on fostering meaningful connections. This raises two fundamental questions: what is a meaningful connection, and how can we foster one? For this reason, Pluto must identify what features will best accomplish this goal and ultimately create an engaging platform for our users.

Thesis Statement

We believe that creating a social network that prioritizes meaningful connections, by giving users a platform with an increased sense of privacy and security, will allow them to maintain stronger online relationships than existing platforms.

Key Findings

The most meaningful online connections are formed around repeated engagement through anything that most closely resembles face-to-face interaction. Choosing who can see your content is a big factor in feeling private when posting, and would foster more sharing. Users do not like thinking that their content might be shared elsewhere or may be "owned" by the company.

Methodologies

Target Audience

Our target market is all genders, ages 16-80, who live in English-speaking countries and like to easily share personal content with those who are close to them. For this research, however, we'll be targeting potential users aged 18-25 on the presumption that young users are crucial to a platform's success.

Screener Survey

The purpose of a screener survey is to gather study participants who fit our target audience to perform further research on. The screener survey consisted of 15 high-level questions asking about social media usage habits. We also used this opportunity to validate early assumptions, asking high-level questions about users' perceptions regarding privacy and their social media connections' meaningfulness. Lasting four days, the survey garnered 250 responses with some notable results.
- 3 out of 5 users feel current networks don't value their privacy
- 80% use social media to connect with close friends and family
- 43% of people feel their connections on social media are meaningful

The second survey was meant to further investigate users' interactions with social media. With 42 responses, this survey was intended to be longer form and included many open-ended questions, hence the much smaller number of responses than the screener survey.
- 9 out of 10 users would consider using a new social network
- 74% said they don't want their family to see their interactions on social media
- 62% of respondents have multiple accounts on the same platform

I Like, I Wish, What If

The purpose of the IL/IW/WI survey was to have a discussion with our target demographic and gauge their feelings about current networks, determining what they like about social networks, what they wish existing social networks had, and "What if" statements to gather ideas. The discussion consisted of 10 potential users and provided key takeaways.
- Content is King - Most participants' favorite aspects of social media derive from the content they see and how they see it.
- Communities Count - Many participants are drawn to platforms by being a part of communities.
- Smart Moderation - Participants value free speech, but want content to be more heavily moderated.
- Connections Matter - Participants wish certain platforms made it easier for them to connect with others.

Interviews

After conducting the screener survey, 16 participants fit our target demographic perfectly. They were within our age range, were happy to be interviewed further, were concerned about privacy, and wished they could form more meaningful connections on social media. Conducted over a span of three weeks, the one-on-one interviews provided great insight into our target audience's feelings. After reading through the transcripts, we found nine categories to group the transcribed interview responses into. Using these nine categories, we grouped responses in a data sheet to find the most important takeaways.
- Usage
- Audience
- Privacy Concern
- Meaning
- Interacting
- Privacy Control
- Frequency
- Sharing
- Content

After breaking each individual interview down, information was grouped into a conceptually clustered data sheet, where the trends and significant points become clear. Our biggest takeaways from the interviews are…
- Interaction Brings Meaning - The most meaningful online connections are formed around repeated engagement through anything that most closely resembles face-to-face interaction.
- Audience Targeting - Targeting the audience when posting is a big factor in feeling private when posting, and would foster more sharing.
- Content Ownership - Users do not like thinking that their content might be shared elsewhere or may be "owned" by the company. Snapchat's "visible screenshot" and "self-destruction" features are a perceived deterrent to this notion, leading many users to view Snapchat as the most private social media platform.

Desirability Survey

To help jumpstart our design process, we created a desirability survey to guide our direction.
- Appeal
- Excitement
- Personal
- Value
- Intuitiveness
- Meaning
- Security
- Reliability
- Structurally
- Trustworthiness

We tested two different color palettes, one with a gradient and another with solid colors. We wanted to include cool colors such as the blue that symbolizes trust, and warmer colors such as the pink and orange to have more of an inviting feel. For fonts, we chose these 3 sans-serif, modern ones. These fonts are more straightforward. Of the 3, Quicksand is more decorative, creative, and fun.

These are the 3 different prototypes that we made of the home screen. We wanted to focus on the grouping functionality. During our tabletop class time, we were able to talk to many different people, walking them through these prototypes and asking them their thoughts. The one on the left shows the people in a grid and how the overlapping would be visualized. The one in the middle shows a more segmented view of the groups. The one on the very right is more free-flowing and shows the amount of interaction between the user and their friends. We found that the free-flowing prototype was overwhelmingly well received. Its playful nature drew most people to it. We realized that this survey was conducted with a very small sample size. Moving forward, we will continually validate our design decisions with different users alongside conducting more usability tests at our next tabletop event on Friday.

Recommendations

Foster Sharing
Through research, we were able to learn that meaningful connections come from repeated engagement. So, making sharing as easy as possible will make engaging easier.

Emphasize Audience Control
Users feel more private and more comfortable sharing when they know who is seeing their content.

Prioritize Privacy
By focusing on privacy through all aspects of the platform, users will feel more comfortable developing meaningful online connections, and sharing the content that matters most.
https://digm.drexel.edu/idm/2019/12/04/pluto/
Since the weather has cooled and the calendar says it's December, one of the flowers we all look forward to this time of year is the poinsettia. They are beautiful and enhance the looks of any home or office. But sadly, they only last for a short time. However, here are a few tips you might try if you would like to coax your poinsettia to bloom again for another year:
- Christmas: Pick a colorful plant with tightly clustered yellow buds. Protect it from hot or cold drafts, water when dry and place in a room with enough natural light for reading.
- New Year's: Apply fertilizer. Continue light and water. The plant should remain colorful for many weeks.
- Valentine's Day: If your plant has become long and leggy, prune to 5 inches from the soil.
- St. Patrick's Day: Remove faded and dried parts of the plant. Add more soil, preferably a commercially available sterile mix.
- Memorial Day: Trim off two or three inches from the ends of branches, to promote side branching. Repot to a larger container. Move plant outside – first to indirect, then direct light.
- Fourth of July: Trim plant again. Make sure it has full sunlight. Slightly increase the amount of fertilizer.
- Labor Day: Move the plant indoors, but make sure it has six hours of direct light from an uncurtained window. Reduce fertilizer.
- First Day of Autumn: Starting on or near Sept. 21, give the plant 13 hours of uninterrupted darkness and 11 hours of bright light per day. Keep night temperatures in the lower 60s. Continue to water and fertilize. Rotate plant each day to give all sides even light.
- Thanksgiving: Discontinue day/night treatment. Put plant in a sunny area. Reduce water and fertilizer. Then wait for those beautiful blooms to reappear.

I've tried to "save" many poinsettias, but this is the only method that works for me. Good luck!!

Favorite Plant: Ponytail Palm
By Barbara Lancaster, Somervell County Master Gardener

Common Name/Scientific Name: Ponytail Palm/Beaucarnea recurvata
Native/Adapted: Native to Mexico
Height: 10-20 feet tall if grown outside, but rarely exceeds 10 ft. It is a slow-growing palm.
Spread: Can reach 12 ft in diameter if grown outside
Light: Full sun to partial shade. It prefers full sun but can also grow in partial shade.
Evergreen/Deciduous: Evergreen.
Seasonal Interest: None.
Color/Features: Mature ponytail palms produce creamy white flowers in spring or summer. They bloom for several weeks two or three times a year. Flowers are followed by small reddish fruit, about ½ inch long.
Water Requirements: Moderate. Closely related to yuccas and thrives under the same conditions. Tolerates drought very well. Likes moist, but well-drained soil. Allow the soil to dry between waterings because it is easy to overwater this palm.
Maintenance: Easy. To prevent nutritional deficiency, apply good quality palm fertilizer twice a year during the growing season.
Wildlife: Unknown.
Deer Resistant: Unknown.
Comments/Experience: I have grown ponytail palms for over 35 years, and have one plant that is at least 35 years old. I grow these in containers. I have never tried growing one in the ground, but I do move several of my plants outside during the growing season. While they can tolerate cold down to 15º F when mature, my plants are inside during the winter. The ponytail palm is a great indoor plant as long as there is adequate lighting. My plants have never bloomed.
Source: www.floridapalmtrees.com

Autumn Joy Sedum (Sedum telephium)
by Donna Hagar, Somervell County Master Gardener

Plant Group: Perennials.
Native/Adaptive: Adaptive
Hardiness: USDA zones 3-10.
Mature size: Height 2-3 feet, Width 2 feet
Flowering period: July through fall.
Flowering attributes: Flower heads form in July and the flat corymbs look like broccoli. In August, the flowers start to color up, turning pink. Slowly the flowers turn red, and later in fall they turn a deeper rusty-red.
Leaf attributes: Succulent, dark green leaves.
Growth habit: Clump-forming.
Water: Drought tolerant once established
Light: Full sun.
Soil: Light, well-drained soil.
Propagation Methods: Stem cuttings. Division. In early spring pull a rosette off the main plant and transplant the small rosette to a new area in the garden.
Pruning Methods: No pruning is necessary except to clean up dead stems in late fall. If you leave the plant intact over winter, prune out dead stems in early spring.
Notes: Very succulent-looking foliage, always thriving no matter where I plant it. Autumn Joy is very easy to propagate, though not invasive. I started with one plant, several years and homes ago. I have left many in various landscapes over time and I still have 4 or 5 now – all from the same original plant. They are very hardy and although deer do tend to nibble them, they re-sprout and come back year after year. They look great in rock gardens or as clumping borders. Just do not over water.

ZEXMENIA
By Becky Altobelli, Somervell County Master Gardener

Driving back to Texas after a two-week vacation, my husband and I were speculating to what extent our home lawn and landscape would be damaged after the continuous 100+ degree temperatures and drought they had suffered through. Would there be total devastation or total annihilation of all plant species? We held out little hope of any survivors, much less any green color. And, yes, there was devastation and annihilation, but not total. There in my dry-creek bed was a large mound of dark green leaves with golden-yellow blooms. Called a "miracle plant" by some Texas gardeners, this Texas native's name is Zexmenia hispida or Wedelia texana. It is a "miracle plant" because it will grow in rocky, poor, or amended soil with good drainage, in sun or partial shade (leggy and lower to the ground in shade, with fewer blooms), and it is heat, cold, and drought tolerant (the proof is in my dry-creek bed this summer). Zexmenia is a 2-3 ft. semi-evergreen herbaceous shrub and hardy to Zone 7. In our zone it will freeze to the ground in winter, but return in mid-spring. The dark green leaves are rough and hairy, as are the stems, lanceolate and irregularly toothed. The daisy-like orange-yellow flowers appear at the ends of bare stalks and will continue to bloom sporadically March to November in our area. You can trim back to one-half in mid-July to encourage more growth and flowering. The mound can spread to 2-3 ft. and with supplemental watering will bloom with more regularity and more profusely. Good drainage and reflective heat encourage maximum health of the zexmenia (note my plant is a true "miracle" this summer in the dry-creek bed surrounded by gravel). Zexmenia self-seeds readily from the dried flower heads, but if you want to collect the seeds yourself to use in another location in your landscape, allow the seed heads to dry in place, and re-sow where desired as soon as possible after collection for best results. An additional benefit to this Texas native is that it is a nectar and a larval food plant for butterflies, and a food source for bees and birds.
Personally, I have not noticed any deer or rabbits nibbling on my zexmenia, but some of the scientific resource materials I researched list it as a food source for both. I bought my zexmenia many springs ago at a native plant nursery and have seen it available through the years. Once you buy your transplant and give it a good home in your landscape, I know you will also find it a welcome sight in late summer when other plants have succumbed to our hot Texas weather.

THE OBEDIENT PLANT
By Joan Orr and Nancy Hillin, Somervell County Master Gardeners

Common Name/Scientific Name: Obedient Plant/Physostegia virginiana
Native/Adaptive: Native perennial wildflower
Height: 24 to 36 inches
Spread: Aggressive by root and seed
Light: Full sun (6 hours) to partial shade
Evergreen/Deciduous: This is a deciduous plant
Seasonal Interest: Blooms in August to September
Colors/Features: Lance-shaped leaves with showy spikes of lavender-pink blooms
Water: Average to damp soil
Maintenance: Dead-head flower stalks to prevent seeding and to encourage rebloom. Prune back in early spring to minimize height and bending.
Wildlife: No serious insect problems. Butterflies love this plant.
Deer: Deer usually will not be attracted to the Obedient plant.
Comments/Experience with the plant: The Obedient plant is often mistaken for the Snapdragon plant and is also known as False Dragonhead. If you plant this showy perennial in slightly acidic soil and keep it evenly moist, it will multiply and could get out of hand. That being said, it is best to plant it where you really need some color and where you really want it. For the most part, it is very manageable. There is a new variety called "Miss Manners" that is not so aggressive. The individual flowers of this plant will stay in the position you place them in, hence the name, Obedient. It is an outstanding cut flower and lends itself to floral arrangements beautifully. Although it does not resemble some of the mint family, its relatives include the mints, salvias, lavenders and rosemary.
Source: National Home Gardening Perennials, Pictorial Guide to Perennials

Native Texas Plants
by Glenda Marsh, Somervell County Master Gardener

Watch closely and you may well see a hummingbird at your blooming red yucca, because its flowers are full of nectar! Requiring very little to no maintenance, this evergreen shrub has flowers that can last 30 weeks per year, beginning in April and continuing through October. Got deer problems? Fortunately, those pesky deer almost never eat this plant, which lives on rainwater and can be 3 to 4 feet tall. Red yucca is a perennial and hardy in our zone 7; it tolerates full sun and is considered drought tolerant – just what we need in our area! It should be spaced at least 2 to 4 feet apart. Colorful coral blooms with pale yellow on the inside are the most frequent, but solid yellow varieties are also available. Hummers will visit both! Beware of a few prickly items: the plant has spines with sharp edges, so BE CAREFUL when handling it (i.e. should you want to move it). Some people may be sensitive to the plant and handling may cause skin irritation or allergic reactions and, like any plant, the pollen also may cause allergic reactions. If you have kids or grandkids, parts of the plant are poisonous if ingested. So NO snacking on this plant! Save the seed pods and you can direct sow them outside in the fall or sow indoors before the last frost. You can also propagate them by dividing rhizomes, tubers, corms or bulbs (including offsets). Take a look at the red yuccas as you drive up and down Highway 67. We have some great examples of this hardy plant right in front of several of our businesses here in Glen Rose!
Good references: www.davesgarden.com; Easy Gardens for North Central Texas by Steve Huddleston & Pamela Crawford
Printable Version: Red Yucca (PDF)

June 2011
Texas Bluebell
By Donna Hagar, Somervell County Master Gardener

Common Name/Scientific Name: Texas Bluebell/Eustoma grandiflorum
Description: Gray-green, soft, velvety-textured foliage on upright stalks; tulip-shaped flowers solitary or in clusters; flowers last for several weeks; hybrids have compact form and double flowers.
Native/Adapted: This is a native annual or short-lived perennial wildflower
Height: 18 to 24 inches
Width: 8-12"
Light Requirement: high to medium
Flower Color: native in various shades of blue or purple, hybrids in blue, purple, pink and white
Blooming Period: summer
Foliage Texture: medium
Heat Tolerance: high
Water Requirements: high
Wildlife: Unpalatable to grazing animals, deer resistant
Comments/Experience with the plant: Bluebells are one of the most striking wildflowers we have! I've seen fields of bluebells so dense they appear like a lake or large pond. I have collected seeds from bluebells and spread them in my flower beds, but they do tend to grow where they prefer, rather than where I plant them! They definitely prefer moisture, as they grow in the lowest spots of our yard and where seeps occur in our pastures. Because of their moisture preferences, they are more prominent in wet years, but will still produce beautiful blooms, even during dry times. We have many blooming right now! The bluebell is disappearing in the wild, presumably because, given their beautiful showiness, they have been picked without being allowed to go to seed. The tiny black seeds from dried pods are the size of ground pepper.

May 2011
Phlox
By Joan Orr, Somervell County Master Gardener

Common Name/Scientific Name: Phlox/Phlox (family Polemoniaceae)
Native/Adapted: This is a native perennial wildflower
Height: Four inches to 48 inches
Spread: Will spread some but is fairly easy to control
Light: Sun to light shade
Evergreen/Deciduous: This is a deciduous plant
Seasonal Interest: Blooms in August
Colors/Features: Pink, rose, white and blue
Water: Average to moist
Maintenance: Little to none
Wildlife: Phlox attracts butterflies and hummingbirds
Deer: The deer do not seem to like phlox
Comments/Experience with the plant: Phlox is known by several common names such as Fall Phlox, Garden Phlox, and Perennial Phlox. Perennial phlox grow on long stems with large flower heads in rosy-lavender to soft pink colors. They are often found growing in the wild. When phlox first opens, its fragrance is abundant and very pleasing to humans. But even after the scent fades for us, it still is a great attractant for butterflies and hummingbirds. The heads of these flowers are three to five inches across with a cluster that stands about the same height, which is also a draw to most winged creatures. You may find that the flower heads on white phlox and also blue phlox will not produce as large a flower cluster as some of the other colors. All told, there are about ten varieties of phlox. There is a wild blue phlox that is usually found in woody areas and likes to grow in caliche rocks. P. maculata (ma-kew-LAH-ta), commonly called Wild Sweet William, is just one other variety of this favorite plant. I have some phlox that my grandmother gave me about 40 years ago.
Each and every time I have moved I have left some and taken some with me. Through the years I have shared this phlox with many gardening friends and even a few folks I just met. "When you share flowers with strangers they become your friends."
Sources: Pictorial Guide to Perennials, National Home Gardening Perennials, Native Texas Plants

March 2011
Cypress/Cardinal Vine
By Bonnah Boyd, Somervell County Master Gardener

Common Name: Cypress vine/cardinal vine
Scientific Name: Ipomoea quamoclit
Adaptable to most areas of Texas
Grow in sun to partial shade
NOTES: In spring, once temperatures stay securely above 50 degrees both day and night (April – June), plant seeds in a warm, sunny location in ordinary garden soil, 2 to 3 inches apart and 1/2 inch deep. Firm soil over seeds and keep evenly moist. Erect well-anchored supports at least 6 to 8 feet tall at planting time; strong netting, a fence or a trellis serve well to hold these climbers. This annual vine twines up effortlessly and produces feathery foliage. At midsummer, a profusion of dainty tubular flowers in shades of light pink, rose, white or scarlet open to five-pointed stars. The blossoms attract hummingbirds. It reseeds profusely.

February 2011
CANNAS (Canna Lilies)
By Joan Orr, Somervell County Master Gardener

Common name/Scientific name: Canna/Canna x generalis
Native/Adapted: Not a native plant but adapted to this area
Height: One and a half to eight feet
Spread: Will multiply and spread during the growing season
Light: Full sun but will tolerate partial shade
Evergreen/Deciduous: Perennial plant reproduced by seed or rhizomes
Seasonal Interest: Large flowers on bold up-turned leaves start blooming in early summer; a great focal point in any garden
Colors/Features: Variegated and solid leaf varieties with flowers in an array of colors including red, orange, salmon, pink and yellow
Water: Loves water; will tolerate soggy soil
Maintenance: Some pruning required; mulch during the winter
Wildlife: Hummingbirds and bees favor this plant
Deer: Deer seem to have no interest in cannas
Comments/Experience: My grandmother always liked growing cannas or, as she and some of the old-timers called them, canna lilies. They were and still are a favorite of gardeners because they adjust well to our Texas heat and can tolerate some cold. I have several cannas in my garden, one of which is a dwarf variety. Cannas are a hardy plant and tolerate most activities of children and animals. I have not had any problems with deer eating my cannas. There is one pest known as the canna leaf-roller that sometimes will come to cannas. Just pick them off as you see them or be prepared to spray. Occasionally in drought, grasshoppers can be a problem. In Latin, the word canna means "reed", and indeed cannas do have a reed-like stalk. Even though we speak of this plant as a lily, it is not in the lily family. It is just a dependable, easy-to-grow perennial plant. For best results, plant after the last frost, 3-4 inches deep in rich, moist soil. When mature plants flower, cut the spent blooms to produce new growth. Often there are new shoots just below the spent blossoms, so take care not to cut those. Cannas will not fail you. Just sit back and enjoy their blossoms and beauty.
Source: Southern Living Annuals and Perennials

January 2011
A Non-Invasive Honeysuckle – The Coral Honeysuckle
By Nancy Hillin, Somervell County Master Gardener

Common Name/Scientific Name: Coral Honeysuckle/Lonicera sempervirens
Native/Adapted: Native to Texas
Height: 3 to 20 feet runners
Spread: Can be grown as a shrub, ground cover or trellised vine
Light: Full sun to part shade
Evergreen/Deciduous: Evergreen
Seasonal Interest: Normally blooms mid spring and intermittently thereafter
Color/Features: Fiery red to orange slender trumpet-shaped flowers with yellow on the inside, followed by bright red berries in the fall
Water Requirements: Moderate watering unless summer is very dry
Maintenance: Every two years prune sparingly in the winter or after blooming in the spring to allow good air circulation; heavy on the mulch; feeding is usually not necessary
Wildlife: Flowers attract hummingbirds, bees and butterflies. Berries that follow the flowers attract cardinals, goldfinches and robins.
Deer Resistant: Deer do not seem to show an interest in honeysuckle
Comments/Experience with the plant: Coral Honeysuckle is not as aggressive as the common honeysuckle or many of the some one hundred eighty cultivars of the species. It is a great companion plant to many other plants such as Coreopsis, Shasta Daisies, and Victoria Blue Salvia. It will lend itself to any fashion you wish it to be, whether you choose to trellis it or use it as a ground cover. Hummingbirds, songbirds, bees and butterflies will rush to this honeysuckle, one of nature's perfect habitats. The slender trumpet-shaped flowers and the berries that follow in the fall are the draw for many types of birds and insects. Coral Honeysuckle is a host plant for the Spring Azure butterfly larva and the Snowberry Clearwing larva. This honeysuckle is not fragrant, but makes up for it in its showy red to orange colors and with the sweet nectar that brings in many beneficials. Coral Honeysuckle will tolerate a wide variety of soils, but will fare better if it is mulched frequently. Propagation may be done by layering at the end of spring, cuttings in the summer and by seed in autumn. Try one of the following named cultivars for your landscape.
https://txmg.org/somervell/garden-info/favorite-plants/favorite-plants-2011
Reverse Engineering Malware — A Look at How the Process Has Evolved

In 1971 the first known instance of a self-replicating "computer worm" was recorded. Creeper didn't harm computers, but it did propagate through computers over the internet's predecessor, ARPANET. By investigating how Creeper worked, a contrasting program — Reaper — was created to stop its spread. Reverse engineering has long been the leading method for understanding how malicious programs operate and what they're engineered to do. Reverse engineering as a process has evolved as malware has become more sophisticated and detection tools have improved, but it remains critical.

Overview of the Reverse Engineering Process

Reverse engineering malware involves disassembling (and sometimes decompiling) a software program. Through this process, binary instructions are converted to code mnemonics (or higher-level constructs) so that engineers can look at what the program does and what systems it impacts. Only by knowing its details are engineers then able to create solutions that can mitigate the program's intended malicious effects. A reverse engineer (aka "reverser") will use a range of tools to find out how a program is propagating through a system and what it is engineered to do. And in doing so, the reverser will then know which vulnerabilities the program was intending to exploit. Reverse engineers are able to extract hints revealing when a program was created (although malware authors are known to leave behind fake trails), what embedded resources it may be using, encryption keys, and other file, header, and metadata details. When WannaCry was reverse engineered, attempts to find a way to track its spread led to the discovery of what is today known to be its "kill switch" – a fact that proved to be incredibly important in stopping its spread.

In order to reverse malware code, engineers will often use many tools. Below is a small selection of the most important ones:
- Disassemblers (e.g. IDA Pro). A disassembler will take apart an application to produce assembly code. Decompilers are also available for converting binary code into native code, although they're not available for all architectures.
- Debuggers (e.g. x64dbg, Windbg, GDB). Reversers use debuggers to manipulate the execution of a program in order to gain insights into what it is doing when it is running. They also let the engineer control certain aspects of the program while it is running, such as areas of the program's memory. This allows for more insight into what the program is doing and how it is impacting a system or network.
- PE Viewers (e.g. CFF Explorer, PE Explorer). PE (for Windows Portable Executable file format) viewers extract important information from executables, to provide dependency viewing for example. (A small PE-triage sketch appears further below.)
- Network Analyzers (e.g. Wireshark). Network analyzers tell an engineer how a program is interacting with other machines, including what connections the program is making and what data it is attempting to send.

The Challenge with Reverse Engineering Malware Today

As malicious programs become more complex, it becomes increasingly likely that the disassembler fails somehow, or the decompiler produces obfuscated code. So reversers need more time to understand the disassembled or decompiled code. And this is time during which the malware may be wreaking havoc on a network. Because of this, there has been an increasing focus on dynamic malware analysis.
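Before turning to dynamic analysis, here is a minimal static-triage sketch illustrating the PE-viewer idea mentioned in the tool list above. It is not from the original article: it assumes the third-party Python package pefile and a hypothetical local file name, and it only prints the kinds of header, section, and import details a reverser typically checks first.

```python
# Minimal PE triage sketch (illustrative only; assumes "pip install pefile"
# and a hypothetical local sample path).
import datetime
import pefile

def quick_pe_triage(path: str) -> None:
    pe = pefile.PE(path)

    # Compile timestamp from the COFF header (easily faked, but still a useful hint).
    ts = pe.FILE_HEADER.TimeDateStamp
    print("Claimed build time:", datetime.datetime.utcfromtimestamp(ts))

    # Section names and sizes; packed samples often show odd names or tiny raw sizes.
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        print(name, hex(section.VirtualAddress), section.SizeOfRawData)

    # Imported DLLs hint at capabilities (networking, persistence, and so on).
    if hasattr(pe, "DIRECTORY_ENTRY_IMPORT"):
        for entry in pe.DIRECTORY_ENTRY_IMPORT:
            print("Imports from", entry.dll.decode(errors="replace"))

if __name__ == "__main__":
    quick_pe_triage("sample.exe")  # hypothetical file name
```

This is the same category of information a GUI tool like CFF Explorer shows interactively; scripting it simply makes the first-pass triage repeatable.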
Dynamic malware analysis relies on a closed system (known as a sandbox) to launch the malicious program in a secure environment and simply watch to see what it does. There are a lot of benefits to using a sandbox for dynamic analysis, but some downsides as well. For example, many of the more sophisticated malicious programs use evasion techniques to detect that they are in a sandbox. When a sandbox is detected, the malware will refrain from demonstrating its true malicious nature. Advanced malware programs have a suite of tools they use to outsmart sandboxes and evade detection: they can delay their malicious activities, only act when a user is active, or hide malicious code in areas where it will not be detected, along with a variety of other evasion techniques. This means that reverse engineers cannot rely solely on dynamic techniques. At the same time, reverse engineering every new malware threat is unrealistic.

The Changing Role of Reverse Engineers

By using dynamic analysis to automate as much of the malware analysis as possible, cybersecurity experts can mitigate advanced malware faster and more effectively, freeing their time for the really difficult work, such as understanding new encryption schemes, reversing communication protocols, or working on attribution. The more advanced the automated solution is, the more likely a reverser will not have to go back to the initial (and time-consuming) phase of the process, which is unpacking, deobfuscating, and understanding the malware's main behaviors. In fact, cybersecurity teams need to implement a two-pronged approach where sandbox technologies are used to automatically analyze the vast majority of threats, and reversers dedicate their time to surgically analyzing the internals of the most sophisticated ones when further threat intelligence is sought.
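As a toy illustration of the "launch it and watch what it does" idea described above (not a real sandbox, and not part of the original article), the sketch below assumes the third-party Python package psutil and a harmless placeholder command. It simply polls the target's open files, network connections, and child processes — the kinds of behavioral signals a dynamic-analysis system records automatically.

```python
# Toy process-behavior observer (illustrative only; assumes "pip install psutil"
# and a benign placeholder command - this is NOT a substitute for an isolated sandbox).
import time
import psutil

def observe(cmd: list[str], duration: float = 10.0, interval: float = 1.0) -> None:
    proc = psutil.Popen(cmd)  # launch the target and keep a handle to it
    deadline = time.time() + duration
    try:
        while time.time() < deadline and proc.is_running():
            print("open files:", [f.path for f in proc.open_files()])
            print("connections:", [(c.laddr, c.raddr) for c in proc.connections()])
            print("children:", [c.pid for c in proc.children(recursive=True)])
            time.sleep(interval)
    finally:
        if proc.is_running():
            proc.terminate()

if __name__ == "__main__":
    observe(["ping", "-c", "3", "example.com"])  # hypothetical benign target
```

Real sandboxes hook far deeper (kernel callbacks, API tracing, full network capture), which is precisely what evasive malware tries to detect.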
https://www.lastline.com/blog/reverse-engineering-malware/
Lab #3: Initial Velocity of a Projectile

Theory: How can we determine the initial velocity of a projectile?

Experimental Design: The purpose behind this experiment was to determine the initial velocity of a projectile. Projectile motion consists of kinematics of motion in the x and y directions. In two-dimensional kinematics, any given velocity has x and y components. In projectile motion, the x component has no acceleration, as no outside forces are acting on it. The y component, on the other hand, has gravity acting on it. A small ball is shot at three different angles (30°, 45°, 60°), and through the known values the initial velocity of the ball is found. As a result, the range of the projectile can be represented with the equation

R = (v0² · sin 2θ) / g,

where R represents the range, or Δx; the values of g and θ are known. However, in this experiment, one main equation was used to determine the initial velocity:

y − y0 = x·tan θ − (g·x²) / (2·(v0·cos θ)²),

which describes the trajectory of a particle in two-dimensional motion, where g is 9.81 m/s² and θ is the launch angle. x is equal to the average distance launched in the x direction. In order to determine all the components required to use the trajectory equation, a small projectile ball was launched at 3 different angles. The distance traveled was measured, as well as the initial height from which the ball was launched. 3 trials of each angle were conducted and the average of the trials was used as the x distance in order to determine the trajectory. The photo-gate was used to find how long it took the diameter of the ball to pass through the sensor. Using all the data gathered, the initial velocity should be able to be determined, as it is the only missing variable.

Materials and Methods

The materials needed were the projectile launcher, the plunger, a spherical ball, a photogate, a meter stick and white sheets of paper. To go about the experiment, first the angle of the projectile launcher to the horizontal was set to 30 degrees and the height from the ground to the bottom of the launching position was measured. Then a trial shot was fired to approximate the location of the end of the ball's trajectory. Then the white papers were placed in the area around the location from the first shot. Trial one was initiated and the ball was shot again by pushing it into the launcher with a plunger-like object and then pulling the cord. The landing spot of the ball was marked on the paper on the ground, and the distance from the point where the ball hit the ground to the location of the launcher was recorded. The time it took for the ball to pass through the photo-gate was recorded. Then two more trials were done. Then the angle was changed to 45 degrees and the procedure was repeated. Finally the angle was changed to 60 degrees and the procedure was repeated for a last time.

Uncertainty Analysis Discussion

There were measured quantities with uncertainty values in this experiment. It was impossible to tell if the ball was released at either the exact defined height or a bit above it. The meter stick measured by centimeters, so the uncertainty value was half a centimeter. It was also impossible to tell the definite diameter of the ball. The caliper measured by millimeters, so the uncertainty was half a millimeter. The most dominant sources of uncertainty were the range that the ball travelled and the initial height the ball was launched from.

Conclusion

From the data gathered we can conclude that the average initial velocity of the ball could be anywhere from 4 to 6 meters per second.
The times recorded for the complete diameter of the ball to pass through the photo-gate were part of the data used in this determination. There are sources of error that could have accounted for the percent error seen. The first source of error is air resistance. The actual value is determined in a vacuum setting where there is no air friction. The second potential source of error is the fact that the projectile launcher was not permanently positioned on the table and moved after every shot. That means that the direction in which the ball travelled was not always constant. This experiment was overall helpful in understanding the uses of the trajectory equation and how it can be used to calculate the initial velocity without the use of time. In order to improve the experiment, the ball should be shot in a vacuum in order to minimize the effects of air resistance on the projectile when it is launched.

Improvements?

There can be a few improvements to the experiment. The data could be made less skewed by performing the experiment in a vacuum, using a machine that would keep the launcher positioned in the same place and orientation when the trigger is pulled, and using a machine that would allow measuring the distances to a more precise degree. These improvements, however, may cost a lot of resources and money, so the experiment as it stands is sufficient.
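As a purely illustrative check on the trajectory-equation approach above, here is a short numeric sketch in Python. The sample numbers (a 2.0 m average range, a 0.25 m launch height, a 30° angle) are made up, not the report's actual measurements, and the function name is only for illustration.

```python
# Numeric sketch: solving  y - y0 = x*tan(theta) - g*x^2 / (2*(v0*cos(theta))^2)
# for v0, taking y = 0 at the floor and y0 = h as the launch height.
import math

def initial_velocity(x: float, h: float, theta_deg: float, g: float = 9.81) -> float:
    """Return v0 (m/s) given horizontal range x (m), launch height h (m), and angle (deg)."""
    theta = math.radians(theta_deg)
    # Rearranged: v0^2 = g*x^2 / (2*cos(theta)^2 * (x*tan(theta) + h))
    return math.sqrt(g * x**2 / (2 * math.cos(theta)**2 * (x * math.tan(theta) + h)))

if __name__ == "__main__":
    # Hypothetical measurements: 2.0 m average range, 0.25 m launch height, 30 degree angle.
    print(round(initial_velocity(x=2.0, h=0.25, theta_deg=30.0), 2), "m/s")
```

With those assumed values the sketch returns roughly 4.3 m/s, which is at least consistent with the 4 to 6 m/s range stated in the conclusion.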
https://onlinecourseaide.com/initial-velocity-of-a-projectile/
One of the most important ways we facilitate learning in a guided play setting is to ask questions that extend the child's learning and help them take their exploration to a new level. When I say to ask questions, I don't mean to quiz a child about basic facts. If a child is engaged in an exciting experiment with pouring and scooping in the sensory table, and you use it as an opportunity to test their knowledge of academic facts ("what color is that scoop" or "how many toys are there?"), then you're interrupting their learning. If they know the answer to your question, they don't learn anything by answering. If they don't know it, they feel stupid. And it doesn't show that you care about what they're experiencing. Don't ask any question you already know the answer to. That's not interesting for either of you!

If they're stuck and come to you for help, ask: "Can you describe the problem?" "Can you tell me what you've tried so far?" "Let's think of lots of things you might try next, then you can decide which thing to try." Then brainstorm with them, not giving "the right answer" but giving some helpful options and maybe some crazy not-helpful ideas. When you ask a question, wait for a response – sometimes it takes a young child several seconds to gather their thoughts, then a few more seconds to put their thoughts together into words. Listen patiently and attentively to their response.

Don't overdo the questions. Would you enjoy working on a hobby if someone was peppering you with questions the whole time? Probably not. Much of the time, you can just sit nearby quietly observing, or play alongside your child, or get chores done while your child plays. It's helpful for children to have some times when you aren't closely supervising (beyond what's needed for safety) so they aren't feeling watched or judged and can follow their own whims in unguided play.

I made up posters to hang in the classroom to give parents ideas for connecting with kids. You can get the questions-to-ask posters here.

This entry was posted in Child Development on September 10, 2015 by Janelle Durham.
https://gooddayswithkids.com/2015/09/10/questions-extend-learning/
Life is a dance. Mindfulness is witnessing that dance. – Amit Ray

Mindfulness is knowing what you are experiencing while you are experiencing it. – Guy Armstrong

Mindfulness is simply being aware of what is happening right now without wishing it were different; enjoying the pleasant without holding on when it changes (which it will); being with the unpleasant without fearing it will always be this way (which it won't). – James Baraz

Mindfulness is the aware, balanced acceptance of the present experience. It isn't more complicated than that. It is opening to or receiving the present moment, pleasant or unpleasant, just as it is, without either clinging to it or rejecting it. – Sylvia Boorstein

Feelings come and go like clouds in a windy sky. Conscious breathing is my anchor. – Thich Nhat Hanh

Mindfulness is about being fully awake in our lives. It is about perceiving the exquisite vividness of each moment. We also gain immediate access to our own powerful inner resources for insight, transformation and healing. – Jon Kabat-Zinn

Knowledge does not mean mastering a great quantity of different information, but understanding the nature of mind. This knowledge can penetrate each one of our thoughts and illuminate each one of our perceptions. – Matthieu Ricard

The moment one gives close attention to anything, even a blade of grass, it becomes a mysterious, awesome, indescribably magnificent world in itself. – Henry Miller

Always hold fast to the present. Every situation, indeed every moment, is of infinite value, for it is the representative of a whole eternity. – Johann Wolfgang von Goethe

To see a world in a grain of sand and heaven in a wild flower, hold infinity in the palm of your hand and eternity in an hour. – William Blake

The only way to live is by accepting each minute as an unrepeatable miracle. – Tara Brach

Mindfulness isn't that difficult. We just need to remember to do it.
https://lindagraham-mft.net/apropos-of-mindfulness/
3 Basic Techniques in Solving Quadratic Equation Questions

In this chapter we will learn the 3 most basic techniques on how to:
- Solve quadratic equations
- Form a quadratic equation
- Determine the conditions for the type of roots.

Generally, x² − 6x + 5 = 0 is a quadratic equation, expressed in the general form ax² + bx + c = 0, where a = 1, b = −6 and c = 5. A root is a value of x that solves the equation. A quadratic equation only has two roots.

Example 1: What are the roots of x² − 6x + 5 = 0?
Answer: The values 1 and 5 are the roots of the quadratic equation, because you will get zero when you substitute 1 or 5 into the equation. We will further discuss how to solve a quadratic equation and find its roots later.

1) Solve the quadratic equations

There are many ways we can use to solve quadratic equations, such as: 1) substitution, 2) inspection, 3) trial and improvement, 4) factorization, 5) completing the square and 6) the quadratic formula. However, we will only focus on the last three methods, as these are the most commonly used methods to solve a quadratic equation in the SPM questions. Let's move on!

Factorization

Factorization is the decomposition of a number into the product of other numbers; for example, 12 could be factored into 3 x 4, 2 x 6, and 1 x 12.

Example 2: Solve x² + 7x + 12 = 0 using factorization.
Answer: We can factor the number 12 into 4 x 3. Remember, always think of the factors which can be added up to get the middle value (3 + 4 = 7); refer to the factorization table below.
So we will get (x + 3)(x + 4) = 0,
x + 3 = 0 or x + 4 = 0
x = −3 or x = −4

Example 3: Solve 8x² − 10x + 3 = 0 using factorization.
Answer: Rearrange the equation in the form ax² + bx + c = 0.
So we will get (4x − 3)(2x − 1) = 0,
4x − 3 = 0 or 2x − 1 = 0
x = 3/4 or x = 1/2

Completing the square

Example 4: Solve the following equation by using the completing the square method.

Quadratic formula

x = (−b ± √(b² − 4ac)) / (2a)

Normally, when do you need to use this formula?
1) The exam question requested to do so!
2) The quadratic equation cannot be factorized.
3) The figures of a, b, and c in the equation are too large and hard to factorize.

Example 5: Solve using the quadratic formula.

2) Form a quadratic equation

How do you form a quadratic equation if the roots of the equation are 1 and 2? Well, we can do the working like this, using the reverse method. We can assume:
x = 1 or x = 2
x − 1 = 0 or x − 2 = 0
(x − 1)(x − 2) = 0
x² − 2x − x + 2 = 0
x² − 3x + 2 = 0
So the quadratic equation is x² − 3x + 2 = 0. This is the most basic technique to form a quadratic equation.

Let's assume we have the roots α and β: the equation is x² − (α + β)x + αβ = 0. In other words, we can form the equation using the sum of roots (SOR) and the product of roots (POR). If the roots are 1 and 2,
SOR = 1 + 2 = 3
POR = 1 x 2 = 2

Sometimes we need to determine the SOR and POR from a given quadratic equation in order to find a new equation from given new roots. In general form, for ax² + bx + c = 0, SOR = α + β = −b/a and POR = αβ = c/a.

Let's look at the example below on how the concept above can help us solve the question.

Example 6: Given that α and β are the roots of a given quadratic equation, form a quadratic equation with the roots (α − 5) and (β − 5).

3) Determine the conditions for the type of roots

Refer back to Example 2: we know that x² + 7x + 12 = 0 has two different roots (−3 and −4) by solving it using the factorization method. However, how are we going to determine the types of roots of a quadratic equation without solving it? The trick is we can use b² − 4ac, which is called the discriminant. Remember, when the value is greater than 0, we have 2 different roots; when it is 0, we have 2 equal roots; and when it is less than 0, we have no real roots.
From the quadratic equation x² + 7x + 12 = 0, b² − 4ac = 7² − 4(1)(12) = 1 > 0, so we have 2 different roots since the discriminant is greater than zero. Refer to the table below.

b² − 4ac > 0 : two different real roots
b² − 4ac = 0 : two equal roots
b² − 4ac < 0 : no real roots

Example 7: A quadratic equation has two equal roots. Find the possible values of h.

We will look at more SPM questions and examples for quadratic equations in the next topic, SPM Questions for Quadratic Equations.
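For readers who want to check the worked examples programmatically, here is a minimal sketch in Python (not part of the SPM material itself). It classifies a quadratic with the discriminant and then applies the quadratic formula; the function name and test values are just illustrative.

```python
# Minimal sketch: classify and solve ax^2 + bx + c = 0 using the discriminant
# b^2 - 4ac and the quadratic formula.
import math

def solve_quadratic(a: float, b: float, c: float):
    """Return the real roots of ax^2 + bx + c = 0 (a != 0)."""
    d = b * b - 4 * a * c              # the discriminant
    if d > 0:                          # two different real roots
        return ((-b + math.sqrt(d)) / (2 * a), (-b - math.sqrt(d)) / (2 * a))
    if d == 0:                         # two equal roots
        return (-b / (2 * a),)
    return ()                          # no real roots

if __name__ == "__main__":
    print(solve_quadratic(1, -6, 5))   # Example 1: roots 5 and 1
    print(solve_quadratic(1, 7, 12))   # Example 2: roots -3 and -4
    print(solve_quadratic(8, -10, 3))  # Example 3: roots 3/4 and 1/2
```

Running it reproduces the roots found by factorization above, which is a quick way to verify hand-worked answers.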
https://perfectmaths.com/quadratic-equations/
Is the Universe too big, or are we too small? What's for sure is that scientists still have a lot to learn about nature, and whether we like it or not, that's exactly the beauty of science. Astronomers estimate that there are about two trillion galaxies in the entire observable Universe, according to a 2016 study. The more astronomers look into the Cosmos, the more galaxies they see, not to mention that the entire Cosmos could be much larger than what astronomers call "the Observable Universe".

Thanks to SciTechDaily, we have a spectacular image of two beautiful galaxies spotted by the Hubble Telescope in the Perseus galaxy cluster, meaning approximately 350 million light-years away from Earth.

Credit: ESA/Hubble & NASA, W. Harris, Acknowledgement: L. Shatz

The galaxy on the left is known by the name of 2MASX J03193743+4137580, while the one on the right carries the moniker of UGC 2665.

Perseus is among the most massive cosmic objects in the Universe

The Perseus cluster, which is also known as Abell 426, is one of the most massive objects discovered, and it contains thousands of galaxies across a vast cloud of gas. The cluster reaches a recession speed of 5,366 km/s. The observable part of the Universe is known to have a diameter of around 96 billion light-years, and although that sounds like an unfathomably huge distance, it could pale in comparison to the size of the entire Universe. If the Cosmos is indeed much bigger than what telescopes can see, it could take billions of years for the light from the most distant regions to reach us so that astronomers would be able to see anything. Who knows how many more jaw-dropping clusters of galaxies could exist at the edges of the Universe!

While the bad news is that, most probably, most of us won't be around to find out what exists beyond the observable Universe, the good news is that it should be only a matter of time until astronomers come up with some irrefutable answers.
https://www.healththoroughfare.com/science/hubbles-picture-of-the-week-showcases-two-breathtaking-galaxies/33370
Bored and underpaid at work while colleagues seem to easily move up in their careers, jealous when a friend buys a new car, worried you cannot afford a product everyone around you is raving about… That feeling of being stuck while others around you appear successful is called status anxiety. Alain de Botton first introduced the term "status anxiety" in 2004. Status anxiety occurs when we compare ourselves to others and fear that we are not meeting society's standards for success. As a result, we feel trapped in our current economic or social status. This leads to a feeling of shame, which can be detrimental to our mental health. In the words of Anna Keshabyan and Martin Day from the Department of Psychology at Memorial University of Newfoundland: "Status anxiety is believed to be exacerbated by economic inequality and negatively affects well-being." However, it is possible to turn the noise down and to focus on what you need to feel successful in life.

Money and happiness

Status anxiety is driven in part by our cultural beliefs about wealth and self-fulfillment. In the Western world, we often think that money equals happiness. We also tend to view other people in higher social classes as more confident compared to those with less wealth. Some people use these beliefs to motivate themselves in a healthy way and to strive for educational and career success without sacrificing other priorities in their lives. But in other cases, we exhaust ourselves physically and emotionally to try to keep up with our peers, yet never seem to get to where we want to be. Even if we can move up the socioeconomic ladder, it doesn't mean our status anxiety will go away. As Alain de Botton puts it: "Even Bill Gates will suffer from status anxiety. Why? Because he compares himself to his own peer group. We all do this, and that's why we end up feeling we lack things even though we're so much better off than people ever were in the past."

A person's environment can intensify status anxiety. For instance, income inequality and social media both exacerbate status anxiety. Research has found that people who use Facebook experience lower self-esteem when viewing accounts featuring others doing activities to increase their attractiveness or fitness.

From shame to status anxiety

At a deeper level, the emotional root of status anxiety is shame. According to researchers from the Department of Psychology and Social Behavior at the University of California, shame occurs when "a core aspect of the self is judged as defective, inferior, or inadequate." Shame appears when we fear that we will be negatively judged or rejected by our peers because we are not meeting their definitions of success. It turns out, that's exactly what is happening with status anxiety. Sociologists Jan Delhey and Georgi Dragolov explain that when we experience status anxiety, we become afraid of losing respect from others in our peer group. While status anxiety may start with comparing ourselves to other people, the real problems come when we turn what can be a healthy comparison into unhealthy shame and self-judgment. When we internalize our assumptions about other people's opinions, it negatively impacts how we feel about ourselves.
In their book about wealth and peer pressure, Clive Hamilton and Richard Denniss link status anxiety to "overconsumption, luxury fever, consumer debt, overwork, waste, and harm to the environment," which can lead to "psychological disorders, alienation, and distress." In other words, while spending too much and putting oneself in debt is destructive in the short term, the shame that lies at the heart of status anxiety also has a long-term impact on our mental health. But there is hope: status anxiety can be managed so we become kinder to our wallets and our minds. Managing status anxiety When your status anxiety causes you to spend too much money or obsessively compare yourself with people on social media, you can take some steps to reduce these behaviors. If you suffer from the overspending type of status anxiety, give yourself time to think before hitting the "place order" button. A little distance can do a lot to change your perspective. When you shop online, add items you want to your cart but leave them there overnight. After a good night's rest, take a look at your cart and see if you still want the items. Often, you will realize you don't need the latest tech toy and you will empty the cart. As we have seen, social media is also linked to status anxiety. It is almost impossible to stay offline in our increasingly connected world, but many apps can help you limit your time using social media on your phone. Also, think critically about the social media accounts you follow. Are they making you feel bad about yourself and your personal success, feeding your status anxiety? Unfollow the accounts that are harming you, and if you are feeling bold, try removing social media apps from your devices altogether. In the long term, it is worth reflecting on what success means to you. Forget about how success is defined by your family, your friends, and society at large. Let go of any external expectations. Take a bit of time to think: how do you define success for yourself in different aspects of your life? What are the steps you need to take to get there? Place the focus back on yourself to reduce the noise from other people and their biased expectations. Remember, status anxiety occurs when we believe we are not meeting society's standards for success compared to our peers. Shame lies at the heart of status anxiety, as we are afraid of being judged and rejected by our peers. That means people with status anxiety may spend money on things they do not need, put themselves in debt, experience low self-esteem, and show signs of depression. Reducing social media use and taking more time to consider purchases can help decrease status anxiety right away, but understanding our own definitions of success is the best long-term remedy to status anxiety. What will you do today to understand what success means to you?
https://nesslabs.com/status-anxiety
A CLINICAL STUDY ON MORPHOMETRIC ANALYSIS OF THE FIRST CERVICAL VERTEBRA European Journal of Molecular & Clinical Medicine, 2022, Volume 9, Issue 1, Pages 584-601 Abstract BACKGROUND AND OBJECTIVE: Various decompressive and stabilising methods, such as atlas and axis vertebra (C1 and C2) transarticular screw fixation and posterior screw placement on the lateral mass, are used in craniovertebral junction (CVJ) surgery. These operations usually target C1, which is part of the CVJ's bony architecture. Thus, a complete understanding of atlas vertebra anatomy is required for surgical planning and fracture analysis. Data on the dimensions of the vertebral artery groove are rare in the Indian literature, and even rarer in the South Indian literature. This project will gather and present anatomical data on the atlas that may be useful for surgical planning and for assessing C1 fractures. The current study sought to: 1. Examine and measure atlas vertebra specimens for morphological criteria. 2. Offer anatomic data for surgical planning of the CVJ. METHODOLOGY: Over 18 months, 120 dry, complete adult human atlas vertebrae were gathered from the Department of Anatomy, Kakatiya Medical College. RESULTS: The mean transverse diameter of the spinal canal was 27.012 mm, and the mean anteroposterior diameter of the vertebral canal was 26.17 mm. For the vertebral artery groove, the distance from the midline to the lateral-most point was 23.78 mm on the right and 23.87 mm on the left, while the distance from the midline to the medial-most point was 12.98 mm on both sides. The morphology revealed four types of superior articular facet (SAF), the most common being oval (45%) and bi-lobed (26%), with the remaining types making up 13.33 percent on the right and 7 percent on the left. ANALYSIS AND SUMMARY: The current research adds to the existing knowledge of atlas vertebra anatomy. Understanding the normal structure of the atlas vertebra is critical for diagnosis and treatment.
https://ejmcm.com/article_16833.html
Karma Yoga expands the heart and breaks all barriers that stand in the way of unity or oneness. The secret is to work without any attachment and egoism. The practice of Karma Yoga allows for inner spiritual strength and power by performing motiveless and selfless actions. Mostly, we tend to do things of a good nature for others so as to be seen as kind, generous, giving, etc. "Can we do these actions without any selfish motivation or need for reward or acknowledgement?" "Do things for people not because of who they are or what they do in return, but because of who you are." -Mother Teresa Nar Seva, Narayan Seva - an inside-out approach to living. Seva definition: "Service which is given without consideration of anything in return, at the right place and time to one that is qualified, with the feeling that it is one's duty, is regarded as the nature of goodness." -Bhagavad Gita 17.20 Seva is a Sanskrit word meaning selfless service, and is perhaps considered the most important part of any spiritual practice. It lies at the heart of the path of karma yoga - selfless action - and asks us to serve others with no expectation of outcome. I chose Christmas Day because it is the time for giving, inspired by the birth of Christ, savior to the world. On 25 December 2019, I volunteered to give 5 hours to feed the homeless in the community of Lansdowne who occupy the station roads, bridge and parking areas, because I read a post that these individuals would be evicted, and I felt that of all the causes where support was greatly needed, this was where I could assist and support the most, in the hope of enriching and uplifting a group of individuals who, too, deserve the basic human essentials in life. I joined my friends, Rucita and Viveka, arrived at their house at 08:30 and, from the get-go, felt a sense of warmth because I was participating in something that could make a difference to a community in dire straits. We cooked pasta and soya mince and made Bolognese. There were many food donations and also other volunteers. From thereon, with the heaps of items that were given with such love, I felt a pure sense of community and goodness. The goodness of people. People who care and want to assist where they can to make a sincere and perhaps even small difference to one's life for the sake of enriching our surroundings. Immediately, I knew I was in the right place, doing the right thing at the right time with the right people. We drove in unison to the Lansdowne station community and set up a food station. Other volunteers from far and wide joined in. I took a long walk around the area and looked for people, rounding them up for their meal and gifts. We looked everywhere and, once the word got out, people started climbing, jumping and walking out of their homes. What a blissful occasion filled with jubilation. To see happiness. To feel the comfort of knowing that someone knows that today, on Christmas, they will have food and entertainment and even a Christmas gift. Once everyone was in the parking lot, we served food and cooldrinks. We sang Christmas carols and also danced and laughed together. The community felt the love and also felt safe. They were happy and in that moment, all was well. There is nothing, absolutely nothing that can beat that. The feeling of being safe, secure, cared for, supported and loved. I was humbled, in awe and so inspired that I shall always be a Seva practitioner.
https://www.purplemoonyoga.com/blog/selfless-service
Q. What is Narayan Seva? Ans. To serve Daridra as Narayana is Narayana Seva. See God in everyone you meet; see God in everything you handle. Do not stand still or recede. Every minute must mark a forward step. ~ BABA God has two forms: one is Lakshmi-Narayana and the other is Daridra-Narayana. Most people prefer to worship Lakshmi-Narayana for ensuring their personal prosperity and welfare, but few choose to worship Daridra-Narayana (the Lord in the form of the poor and the forlorn). Members of the Sai Organisations should think only of service to Daridra-Narayana. If the hungry are fed, they are easily satisfied. Service to Daridra-Narayana can never go to waste. It is the highest form of sadhana. Man is the product of society, and service to society is real service to God. Such service should be rendered without regard to caste, creed, race or nationality. The essence of all religions is one and the same, like the current that serves many different purposes but is the same energy. In serving society, they should bear in mind the four ideals of Sathya, Dharma, Shanthi and Prema. Service is like a bulb, which cannot shed light unless there is a wire to convey the current. Sathyam is the current. Dharma is the wire through which the current flows. When the wire of Dharma is connected to the bulb of Shanthi, then you have the light of Love. One may encounter difficulties in rendering service. But one should not be overwhelmed by them. The Pandavas have become immortal because of the sufferings they underwent for the sake of Dharma. Jesus sacrificed his life for the sake of those whom he came to serve. Prophet Mohammed had to face similar troubles in his mission. Do not aspire for comfort. Greater than all other forms of worship is Seva (service to one's fellow-men) done in an unselfish and dedicated spirit. There is an element of selfishness in forms of worship like Japa, Dhyana, etc. But when service is done spontaneously, it is its own reward. It must be done as an offering to God. – Address delivered to Active Sai Workers of 300 adopted villages of Tamil Nadu, at the Prayer Hall, Sundaram, Madras on 22-1-1985. What is true Seva? Seva is a small word but is filled with immense spiritual significance. Hanuman is the supreme exemplar of the ideal of service. When the Rakshasas asked Hanuman, during his search for Sita in Lanka, who he was, he replied simply: "Daasoham Kosalendrasya." He was content to describe himself as the humble servant of Rama. Seva must be viewed as the highest form of sadhana. Serving the poor in the villages is the best form of sadhana. In the various forms of worship of the Divine, culminating in Atma-nivedhanam (complete surrender to the Divine), Seva comes before Atma-nivedhanam. God's grace will come when Seva is done without expectation of reward or recognition. Sometimes Ahamkaram (ego) and Abhimanam (attachment) rear their heads during Seva. These should be eliminated altogether. In speech, what matters is the inner feeling. The purpose of speech should be to promote heart-to-heart understanding. Develop the love of God in your hearts. The heart is like a musical chair in which there is room for only one. Give place in your heart only for God. – Address delivered to Seva Dal workers at the workshop on Health and Hygiene at Abbotsbury, Madras on 25-1-1985.
https://www.ssssogu.org/narayan-seva/
While there have been historical accounts of the anarchist school movement, there has been no systematic work on the philosophical underpinnings of anarchist educational ideas—until now. Anarchism and Education offers a philosophical account of the neglected tradition of anarchist thought on education. Although few anarchist thinkers wrote systematically on education, this analysis is based largely on a reconstruction of their educational thought gleaned from their various ethical, philosophical and popular writings. Primarily drawing on the work of nineteenth-century anarchist theorists such as Bakunin, Kropotkin and Proudhon, the book also covers twentieth-century anarchist thinkers such as Noam Chomsky, Paul Goodman, Daniel Guerin and Colin Ward. This original work will interest philosophers of education and educational thinkers as well as those with a general interest in anarchism. What People Are Saying "This is an excellent book that deals with important issues through the lens of anarchist theories and practices of education… The book tackles a number of issues that are relevant to anybody who is trying to come to terms with the philosophy of education." —Higher Education Review About the Author A Senior Lecturer in Philosophy of Education at the Institute of Education, University of London, Judith has authored several books and articles on philosophy, anarchism and education.
https://leftwingbooks.net/en-us/products/anarchism-and-education-a-philosophical-perspective
Overall, violent crimes fell by 4.4 percent and property crimes dropped by 6.1 percent, according to the data collected by the FBI. Crime rates haven't been this low since the 1960s, and are nowhere near the peak reached in the early 1990s. The new figures show car thefts also dropped significantly, falling nearly 19 percent and continuing a sharp downward trend in that category. The figures are based on data supplied to the FBI by more than 11,700 police and law enforcement agencies. They compare reported crimes in the first six months of this year to the first six months of last year. The early 2009 data suggests the crime-dropping trend of 2008 is not just continuing but accelerating. In 2008, the same data showed a nearly 4 percent drop in murder and manslaughter, and an overall drop in violent crime of 1.9 percent from 2007 to 2008. According to the FBI figures, reports of violent crime fell about 7 percent in cities with 1 million or more people. But in towns with 10,000 to 25,000 people, violent crime ticked up slightly by 1.7 percent. Each city's data was different, but collectively the figures pointed to less crime in every major category. Nationwide, rape fell by 3.3 percent, and robbery by 6.5 percent. Arsons, which are subject to a variety of reporting standards, declined more than 8 percent. The FBI's data for New York City shows 204 reported murders in the first half of 2009, compared to 252 in the same period last year. By comparison, Oklahoma City saw reported killings increase from 26 to 32, the FBI said. Phoenix, Ariz., saw 10 fewer killings, dropping from 86 in the first half of 2008 to 76 in the first half of this year, according to the data. Separate statistics compiled by the Justice Department measure both reported and unreported crimes.
https://6abc.com/archive/7181937/
Trivia: The entire ending of the film, along with other parts, was altered at the last minute. The test preview did not get the reaction the head of the studio paying for the film wanted. The audience just didn't get what Ridley Scott was calling "the dark side of the comic strip," so the studio took control and, collaborating with Ridley Scott, changed it to what it is today. Twenty-five years on, Ridley Scott will be re-releasing the original edit in September 2007. Trivia: Blade Runner is based on a novel by Philip K. Dick entitled 'Do Androids Dream of Electric Sheep?' Trivia: Pris (Daryl Hannah) is doubled by a *male* stuntperson during her fight with Deckard. (Hence the dark room and the cuts.) (01:32:40) Trivia: Look at Pris' incept date when it is shown on the monitor. It's the 14th of February 2016. Being the "basic pleasure model", she was born on Valentine's Day, of course. (00:15:30) Trivia: The chess game between Tyrell and Sebastian uses the conclusion of a game played between Anderssen and Kieseritzky in London in 1851. It is considered one of the most brilliant games ever played, and is universally known as "The Immortal Game". (01:21:15) Trivia: Dustin Hoffman was the original choice for the role of Deckard and was briefly cast in the role but dropped out. Director Ridley Scott then cast his second choice for the part, Harrison Ford. Trivia: In the final scenes of the first theatrical version (before the Director's Cut), Deckard and Rachael fly over a landscape. Ridley Scott had run out of funds and time after bad weather ruined the shooting of the landscape, with no possibility of getting new shots. The scenes used were therefore outtakes from the intro scenes of The Shining. These shots were removed in the Director's Cut.
https://www.moviemistakes.com/film180/trivia
Introduction
============

Background
----------

New types of learning, such as e-learning, have become popular in medical education \[[@ref1],[@ref2]\] since the emergence of the internet \[[@ref2]\]. These new models allow learning to transcend boundaries of space and time; they improve collaborative and individualized learning effectiveness and are more convenient \[[@ref3]-[@ref5]\]. Nevertheless, e-learning has some disadvantages, including high-cost multimedia materials, high costs for platform maintenance, and the user training that is often required. In parallel, traditional learning presents several limitations, including requiring the physical presence of students and teachers at a specific time and place \[[@ref6]\]. Blended learning is characterized by the combination of traditional face-to-face learning and asynchronous or synchronous e-learning \[[@ref7]\]. Blended learning is a promising alternative for medical education because of its advantages over traditional learning. In academia, this learning format has grown rapidly and is now widely used \[[@ref8]\]. Increased research on blended learning has been reported since the 1990s \[[@ref9]-[@ref11]\]. Synthesis of these studies may inform students and teachers on the effectiveness of blended learning \[[@ref12]\]. Previous systematic reviews have reported that blended learning has the potential to improve clinical training among medical students \[[@ref13]\] and undergraduate nursing education \[[@ref14]\]. In parallel, many reviews have summarized the potential of blended learning in medical education \[[@ref15],[@ref16]\]. A meta-analysis \[[@ref12]\] showed that blended learning was more effective than nonblended learning, but with a high level of heterogeneity. Nevertheless, these reviews were limited to only some areas of health education, and few have used quantitative synthesis to evaluate the effectiveness of blended learning; therefore, the purpose of this study was to quantitatively synthesize the studies that evaluated the efficacy (using knowledge outcomes) of blended learning for health education (with students, postgraduate trainees, or practitioners).

Objective
---------

The objective of this review was to evaluate the effectiveness of blended learning for health education on knowledge outcomes, assessed with subjective (eg, learner self-report) or objective evaluations (eg, multiple-choice question knowledge test) of learners' factual or conceptual understanding of the course, in studies where blended learning was compared with traditional learning.

Methods
=======

Comparison Categories and Definitions
-------------------------------------

Blended learning was compared with traditional learning overall and after stratification by type of learning support; the following comparisons were made: offline blended learning versus traditional learning, online blended learning versus traditional learning, digital blended learning versus traditional learning, computer-aided instruction blended learning versus traditional learning, and virtual patient blended learning versus traditional learning. Offline learning was defined as the use of personal computers or laptops to assist in delivering stand-alone multimedia materials without the need for internet or local area network connections \[[@ref17]\]. 
These could be supplemented by videoconferences, emails, and audio-visual learning materials kept in either magnetic storage (CD-ROM, floppy disk, flash memory, multimedia cards, external hard disks) as long as the learning activities did not rely on this connection \[[@ref18]\]. Online support was defined as all online materials used in learning courses. Digital education was a broad construct describing a wide range of teaching and learning strategies that were exclusively based on the use of electronic media and devices as training, communication, and interactions tools \[[@ref19]\]. These aspects could pertain to educational approaches, concepts, methods, or technologies. Moreover, these concepts facilitated remote learning, which could help address the shortage of health professionals in settings with limited resources by reducing the time constraints and geographic barriers to training. Computer-assisted instruction was defined as the use of interactive CD-ROM, multimedia software, or audio-visual material to augment instruction including multimedia presentations, live synchronous virtual sessions offered via a web-based learning platform, presentations with audio-visuals, and synchronous or asynchronous discussion forums to enhance participation and increase engagement \[[@ref20],[@ref21]\]. Virtual patients were defined as interactive computers simulations of real-life clinical scenarios for health professional training, education, or assessment. This broad definition encompassed a variety of systems that used different technologies and addressed various learning needs \[[@ref22]\]. Traditional learning, in this paper, was used to describe all nonblended learning such as nondigital and not online, but also only online, only e-learning, or other single support educational methods (lectures, face-to-face, reading exercises, group discussion in classroom). Reporting Standards ------------------- We conducted and reported our study according to PRISMA guidelines \[[@ref23]\] and Cochrane systematic review guidelines \[[@ref24]\]. Eligibility Criteria -------------------- Inclusion criteria for studies were based on the PICOS (population, intervention, comparison, outcome, and study design) framework. Studies were included if they were conducted among health learners, used a blended learning intervention in the experimental group, involved a comparison of blended learning with traditional learning, included quantitative outcomes with respect to knowledge assessed with either subjective or objective evaluations, and were randomized controlled trials or nonrandomized studies (which are widely used in health education). Only studies published in English were included. Data Sources ------------ To identify relevant studies, we conducted a search of citations published in MEDLINE between January 1990 and July 2019. Key search terms included delivery concepts (*blended, hybrid, integrated, computer-aided, computer assisted, virtual patient, learning, training, education, instruction, teaching, course*), participant characteristics (*physician, medic\*, nurs\*, pharmac\*, dent\*, health\**), and study design concepts (*compar\*, trial\*, evaluat\*, assess\*, effect\*, pretest\*, pre-test, posttest\*, post-test, preintervention, pre-intervention, postintervention, post-intervention*). Asterisks were used as a truncation symbol for searching. [Multimedia Appendix 1](#app1){ref-type="supplementary-material"} describes the complete research strategy. 
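As a small illustration of how a search strategy like the one above might be assembled — joining the terms within each concept group with OR, joining the groups with AND, and using the asterisk truncation the authors describe — here is a hedged Python sketch. The grouping logic and the helper name are assumptions for illustration; the exact query string the authors submitted to MEDLINE is not reproduced in this excerpt.

```python
# Illustrative only: assembles a boolean search string from the concept
# groups listed in the Data Sources section. OR within a concept group,
# AND across groups; asterisks are left in place as truncation symbols.

delivery = ["blended", "hybrid", "integrated", "computer-aided",
            "computer assisted", "virtual patient", "learning", "training",
            "education", "instruction", "teaching", "course"]
participants = ["physician", "medic*", "nurs*", "pharmac*", "dent*", "health*"]
design = ["compar*", "trial*", "evaluat*", "assess*", "effect*",
          "pretest*", "pre-test", "posttest*", "post-test",
          "preintervention", "pre-intervention",
          "postintervention", "post-intervention"]

def or_group(terms):
    """Join one concept group with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(or_group(g) for g in (delivery, participants, design))
print(query)
```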
Study Selection
---------------

Using the eligibility criteria, AV and ES independently screened all articles and abstracts and reviewed the full text of potentially eligible abstracts.

Data Extraction
---------------

AV and ES independently extracted relevant characteristics related to participants, intervention, comparators, outcome measures, and results from the studies that were found to be eligible using a standard data collection form. Any disagreements were resolved through discussion with a third research team member until agreement was reached.

Risk of Bias Assessment
-----------------------

During the data extraction process, researchers independently assessed the risk of bias for each study using the Cochrane Collaboration's risk of bias tool \[[@ref25]\]. Evaluation criteria included the following: random sequence generation, allocation concealment, blinding of students and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, or other (which included publication bias). Funnel plots were used to evaluate publication bias. Risk of bias for each criterion was rated as low, high, or unclear according to the Cochrane risk of bias instructions.

Data Synthesis
--------------

Analyses were performed for knowledge outcomes using SAS software (version 9.4; SAS Institute). The standardized mean difference (Hedges g effect size), converted from the means and standard deviations of each study, was used \[[@ref15]\]. When the mean was available but the standard deviation was not, we used the mean standard deviation of all other studies. Since the overall scores of included studies were not the same and the standard mean difference could eliminate the effects of absolute values, we adjusted the mean and standard deviation so that the average standard deviation could replace the missing value of standard deviation. We employed a random-effects model for the meta-analysis (statistically significant if *P*\<.05). The I^2^ statistic was used to quantify heterogeneity across studies \[[@ref26]\]. When the estimated I^2^ was equal to or greater than 50%, this indicated a large amount of heterogeneity. As the studies were functionally different and involved different study designs, participants, interventions, and settings, a random-effects model that allowed more heterogeneity was used. Forest plots were created to display the meta-analysis findings. To explore publication bias, funnel plots were created and Begg tests were performed (statistically significant if *P*\<.05). To explore potential sources of heterogeneity, multiple meta-regression and subgroup analyses based on the study design were performed. Sensitivity analyses to test the robustness of findings were also performed.

Results
=======

Study Selection
---------------

The search strategy identified 3389 articles from MEDLINE. After scanning the titles and abstracts, 93 articles were found to be potentially eligible, and their full texts were read for further assessment. Of these, 56 articles were included \[[@ref9]-[@ref11],[@ref22],[@ref27]-[@ref78]\] ([Figure 1](#figure1){ref-type="fig"}). All articles that were included had been published in peer-reviewed journals. ![Study flow diagram.](jmir_v22i8e16504_fig1){#figure1}

Type of Participants
--------------------

In the 56 articles, 9943 participants were included. 
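The Data Synthesis steps described above (Hedges g standardized mean differences pooled under a random-effects model, with I^2^ quantifying heterogeneity) were carried out in SAS. Purely as an illustrative sketch of the same standard calculations — not the authors' code, and with made-up study numbers rather than data from the included studies — the core arithmetic can be expressed as follows.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges g) with its sampling variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)            # small-sample correction
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling with I^2 heterogeneity."""
    g = np.asarray(effects)
    v = np.asarray(variances)
    w = 1 / v                                   # fixed-effect weights
    q = np.sum(w * (g - np.sum(w * g) / np.sum(w))**2)
    df = len(g) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_star = 1 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    i2 = (max(0.0, (q - df) / q) * 100) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study summaries (mean, SD, n): blended vs traditional arm.
studies = [((78, 10, 60), (70, 11, 58)),
           ((85, 8, 45),  (80, 9, 47)),
           ((66, 12, 90), (65, 12, 88))]
gs, vs = zip(*(hedges_g(*b, *t) for b, t in studies))
smd, ci, i2 = random_effects_pool(gs, vs)
print(f"Pooled SMD {smd:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I^2 {i2:.0f}%")
```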
In 30 out of 56 participant subgroups, participants were from the field of medicine \[[@ref11],[@ref31]-[@ref34],[@ref36],[@ref38],[@ref39],[@ref41],[@ref43],[@ref44],[@ref46]-[@ref50],[@ref53],[@ref64]-[@ref67],[@ref69]-[@ref74], [@ref76],[@ref78],[@ref79]\]. The participant subgroups from fields other than medicine were as follows: 16 studies in nursing \[[@ref9],[@ref10],[@ref12],[@ref27],[@ref29],[@ref35],[@ref37],[@ref40],[@ref51],[@ref52],[@ref57],[@ref58],[@ref61]-[@ref63],[@ref75]\], 1 in pharmacy \[[@ref37]\], 3 in physiotherapy \[[@ref12],[@ref30],[@ref45]\], 5 in dentistry \[[@ref10],[@ref42], [@ref54],[@ref55],[@ref59]\], and 4 interprofessional education \[[@ref56],[@ref60],[@ref68],[@ref77]\]. Of the 56 studies, 47 were conducted in high-income countries: 14 were from the United States \[[@ref9],[@ref10],[@ref29],[@ref31],[@ref36],[@ref38],[@ref43],[@ref50], [@ref53]-[@ref55],[@ref59],[@ref61],[@ref73]\], 2 from Canada \[[@ref47],[@ref58]\], 5 from Germany \[[@ref39],[@ref41],[@ref46],[@ref57],[@ref76]\], 3 from the United Kingdom \[[@ref42],[@ref56],[@ref75]\], 3 from Spain \[[@ref30],[@ref45],[@ref48]\], 1 from France \[[@ref74]\], 1 from Greece \[[@ref34]\], 1 from Sweden \[[@ref67]\], 1 from the Netherlands \[[@ref37]\], 1 from Korea \[[@ref40]\], 1 from Poland \[[@ref79]\], 1 from Serbia \[[@ref70]\], 1 from Croatia \[[@ref64]\], 1 from Turkey \[[@ref32]\], 2 from Taiwan \[[@ref28],[@ref51]\], 1 from Japan \[[@ref69]\], and 7 from Australia \[[@ref44],[@ref49],[@ref63],[@ref65],[@ref66],[@ref68],[@ref78]\]. Of the 56 studies, 9 studies were conducted in low- or middle-income countries: 2 from Thailand \[[@ref52],[@ref62]\], 1 from China \[[@ref77]\], 1 from Malaysia \[[@ref72]\], 2 from Iran \[[@ref27],[@ref71]\], 1 from Jordan \[[@ref35]\], 1 from South Africa \[[@ref11]\], and 1 from Uruguay \[[@ref60]\]. The technical characteristics of the blended learning systems, topics of educational content, applied design methods, and other information on the validity of outcome measurements can be found in [Multimedia Appendix 1](#app1){ref-type="supplementary-material"}. Effects of Interventions ------------------------ ### Blended Learning Versus Traditional Learning The pooled effect size reflected a significantly large effect on knowledge outcome (standard mean difference 1.07, 95% CI 0.85 to 1.28, *z=*9.72, n=9943, *P\<*.001). A significant heterogeneity was observed among studies (I^2^=94.3%). [Figure 2](#figure2){ref-type="fig"} shows details of the main analysis. The test of asymmetry funnel plot ([Figure 3](#figure3){ref-type="fig"}) indicated publication bias among studies (Begg test *P=*.01). The trim and fill method indicated that the effect size changed to 0.41 (95% CI 0.16 to 0.66, *P\<*.001) after adjusting for publication bias, which suggested that blended learning was more effective than traditional learning. ![Forest plot of blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval; SMD: standard mean difference.](jmir_v22i8e16504_fig2){#figure2} ![Funnel plot of blended learning versus traditional learning.](jmir_v22i8e16504_fig3){#figure3} ### Offline Blended Learning Versus Traditional Learning Of the 3 studies \[[@ref27]-[@ref29]\] comparing offline blended learning to traditional learning, in 2 studies \[[@ref27],[@ref28]\], the groups with blended resources scored better than their corresponding control groups with significant positive standard mean differences. 
The other study did not show a statistically significant difference in knowledge outcome (standard mean difference 0.08, 95% CI --0.63 to 0.79) \[[@ref29]\]. The pooled effect for knowledge outcomes suggested no significant effects from offline blended learning over traditional education alone (standard mean difference 0.67, 95% CI --0.50 to 1.84, I^2^=87.9%, n=327) ([Figure 4](#figure4){ref-type="fig"}). ![Forest plot of offline blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval.](jmir_v22i8e16504_fig4){#figure4} ### Online Blended Learning Versus Traditional Learning In studies comparing online blended learning to traditional learning, 26 \[[@ref34]-[@ref37],[@ref39]-[@ref41],[@ref43],[@ref46],[@ref50],[@ref53]-[@ref55],[@ref58],[@ref60],[@ref64],[@ref69]-[@ref72], [@ref74],[@ref75],[@ref77],[@ref78]\] of the 41 studies \[[@ref34]-[@ref47],[@ref50],[@ref51],[@ref53]-[@ref55],[@ref57],[@ref58],[@ref60],[@ref62], [@ref64],[@ref68]-[@ref72],[@ref74],[@ref75],[@ref77],[@ref78]\] showed that groups with blended learning had better scores than those of their corresponding control groups. The pooled effect for knowledge outcomes was a standard mean difference of 0.73 (95% CI 0.60 to 0.86, n=6976) ([Figure 5](#figure5){ref-type="fig"}). There was a substantial amount of heterogeneity in the pooled analysis (I^2^=94.9%). ![Forest plot of online blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval; SMD: standard mean difference.](jmir_v22i8e16504_fig5){#figure5} ### Digital Learning Versus Traditional Learning Only 3 \[[@ref32],[@ref33],[@ref63]\] of 7 studies \[[@ref30]-[@ref33],[@ref56],[@ref63],[@ref76]\] comparing digital learning to traditional learning presented a better score than the control group. The pooled effect for knowledge outcomes suggested no significant effects between blended and traditional learning (standard mean difference 0.04, 95% CI --0.45 to 0.52, I^2^=93.4%, n=1093) ([Figure 6](#figure6){ref-type="fig"}). ![Forest plot of digital blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval.](jmir_v22i8e16504_fig6){#figure6} ### Computer-Assisted Instruction Blended Learning Versus Traditional Learning Of the studies focusing on computer-assisted instruction blended learning, 5 \[[@ref10],[@ref11],[@ref48],[@ref49],[@ref73]\] of the 8 studies \[[@ref9]-[@ref11],[@ref38],[@ref48], [@ref49],[@ref52],[@ref73]\] showed significantly higher scores than those of traditional learning. Only 1 study \[[@ref38]\] showed a significant negative effect compared to traditional learning (standard mean difference --0.68, 95% CI --1.32 to --0.04). The other studies showed no significant difference \[[@ref9],[@ref52]\]. The pooled effect for knowledge outcomes suggested a significant improvement of computer-assisted instruction blended with traditional education over traditional education alone (standard mean difference 1.13, 95% CI 0.47 to 1.79, I^2^=78.0%, n=926) ([Figure 7](#figure7){ref-type="fig"}). ![Forest plot of computer-assisted instruction blended learning to traditional learning comparison for knowledge outcomes. 
df: degree of freedom; CI: confidence interval.](jmir_v22i8e16504_fig7){#figure7} ### Virtual Patient Blended Learning Versus Traditional Learning In 4 \[[@ref59],[@ref65],[@ref66],[@ref79]\] of the 5 studies \[[@ref59],[@ref65]-[@ref67],[@ref79]\] on knowledge outcomes when using virtual patients as a supplement to traditional learning, the groups with supplementary virtual patient learning support scored better than their corresponding control groups. Only 1 study with virtual patients did not show a statistically significant difference in knowledge outcomes (standard mean difference 0.13, 95% CI --0.30 to 0.56) \[[@ref67]\]. The pooled effect for knowledge outcomes suggested significant effects for virtual patient blended learning (standard mean difference 0.62, 95% CI 0.18 to 1.06, I^2^=78.4%, n=621) ([Figure 8](#figure8){ref-type="fig"}). ![Forest plot of virtual patient blended learning to traditional learning comparison for knowledge outcomes. df: degree of freedom; CI: confidence interval.](jmir_v22i8e16504_fig8){#figure8} Sensitivity Analyses -------------------- None of the subgroup analyses that were initially planned explained the heterogeneity of the results. Among many analyzed aspects, we considered the differences regarding the efficiency of learning with blended learning between the health professions disciplines. Most of the studies involved students of medicine as participants (30/56, 54%). When analyzing knowledge outcomes in medicine, nursing, and dentistry, some differences were apparent. The pooled effect of medicine studies showed a standard mean difference of 0.91 (95% CI 0.65 to 1.17, *z=* 6.77, I^2^=95.8%, n=3418, *P\<*.001) ([Figure 9](#figure9){ref-type="fig"}), nursing studies showed a standard mean difference of 0.75 (95% CI 0.26 to 1.24, *z=*2.99, I^2^=94.9%, n=1590, *P=*.008) ([Figure 10](#figure10){ref-type="fig"}), and dentistry studies showed a standard mean difference of 0.35 (95% CI 0.17 to 0.53, *z=*3.78, I^2^=37.6%, n=1130, *P=\<*.001) ([Figure 11](#figure11){ref-type="fig"}). Dentistry studies included 3 online blended learning studies (standard mean difference 0.37, 95% CI 0.14 to 0.64, *z=*2.63, I^2^=58.3%, n=879), 1 virtual patient learning study, and 1 computer-assisted instruction learning study. Additional interest was observed for offline blended learning in nursing compared to traditional learning (standard mean difference 1.28, 95% CI 0.25 to 2.31, *z=*2.43, I^2^=86.2%, n=249), and in computer-assisted instruction (standard mean difference 0.53, 95% CI 0.17 to 0.90, *z=*2.84, I^2^=23.9%, n=174), but not for online blended learning (standard mean difference 0.68, 95% CI --0.07 to 1.45, *z=*1.76, I^2^=96.7%, n=1091). 
Additional interest was observed for digital blended learning compared to traditional learning in medicine (standard mean difference 0.26, 95% CI 0.07 to 0.45, *z=*2.71, I^2^=95.6%, n=417) \[[@ref31]-[@ref33],[@ref76]\], in virtual patient (standard mean difference 0.71, 95% CI 0.14 to 1.28, *z=*2.45, I^2^=85.8%, n=416) \[[@ref65]-[@ref67],[@ref79]\], in online (standard mean difference 1.26, 95% CI 0.81 to 1.71, *z=*5.49, I^2^=96.1%, n=1879) \[[@ref34],[@ref36],[@ref38],[@ref39],[@ref41],[@ref43],[@ref44],[@ref46],[@ref47],[@ref50],[@ref53], [@ref64],[@ref69]-[@ref72],[@ref74],[@ref78]\], and in computer-aided-instruction (standard mean difference 2.1, 95% CI 0.68 to 3.44, *z=*2.91, I^2^=97.9%, n=706) \[[@ref11],[@ref38],[@ref48],[@ref49],[@ref73]\] suggesting more positive effects of blended learning over traditional learning alone for learning in medicine. ![Forest plot of blended learning to traditional learning comparison for knowledge outcomes for medical students. df: degree of freedom; CI: confidence interval.](jmir_v22i8e16504_fig9){#figure9} ![Forest plot of blended learning to traditional learning comparison for knowledge outcomes for nurses as students. df: degree of freedom; CI: confidence interval.](jmir_v22i8e16504_fig10){#figure10} ![Forest plot of blended learning to traditional learning comparison for knowledge outcomes for dentistry students. df: degree of freedom; CI: confidence interval.](jmir_v22i8e16504_fig11){#figure11} Risk of Bias ------------ Risk of bias is shown in [Figure 12](#figure12){ref-type="fig"}. The risk of bias of evaluators was avoided in several studies by using automated assessment instruments. Thus, we rated the risk as low in 50 of the 56 studies. Nevertheless, it was still unclear whether the instruments had been correctly validated. Attribution bias was within acceptable levels in some studies (low risk in 24 of the 56 studies), but this did not exclude voluntary bias and its influence on the estimated effect. Reporting bias was considered low in 28 of the 56 studies. We cannot consider allocation bias as a significant problem in this review because, if studies described an adequate randomization method or an unclear description, it was assumed that randomization was unlikely be defective. Performance bias on traditional learning may be a problem, but it is impossible to avoid in this type of research. It is possible to blind participants in blended learning design comparisons, but these studies are still rare in the literature. We cannot reliably estimate publication bias given the high degree of heterogeneity of the included studies. ![Risk of bias summary (+ low risk of bias; - high risk of bias; ? unclear risk of bias).](jmir_v22i8e16504_fig12){#figure12} Discussion ========== Principal Findings ------------------ This meta-analysis provided several findings. First, blended learning had a large consistent positive effect (standard mean difference 1.07, 95% CI 0.85 to 1.28) on knowledge acquisition in comparison to traditional learning in health professions. A possible explanation could be that, compared to traditional learning, blended learning allowed students to review electronic materials as often as necessary and at their own pace, and this likely enhanced learning performance \[[@ref80]\]. The trim and fill method showed that the pooled effect size changed to 0.41 (95% CI 0.16 to 0.66), meaning that blended learning remained more effective than traditional learning. 
The strength of this meta-analysis was that it reinforced previous results \[[@ref12]\]; however, a large heterogeneity was observed across the studies. The participant subgroup analyses partially explained these differences. The effectiveness of blended learning is complex and dependent on how well the evaluation fits, since it occurs before the implementation of any innovation as well as allowing planners to determine the needs, considering participant characteristics, analyzing contextual matters, and gathering baseline information \[[@ref81]\]. Some interventional studies have highlighted the potential of blended learning to increase course completion rates, improve retention, and increase student satisfaction \[[@ref82]\]. Nevertheless, comparisons between blended learning environments and the disparity between academic achievement or grade dispersions have been studied; no significant differences were observed \[[@ref83]\]. The effectiveness of blended learning may be dependent on student characteristics, design features, and learning outcomes. Learner success is dependent on the ability to cope with technical difficulty, technical skills, and knowledge in computer operations and internet navigation. Thus, the success of blended learning is highly dependent on experience with the internet and computer apps. Some studies have observed that the success of blended learning was largely associated on the capability to participate in blended course. A previous study \[[@ref84]\] showed that high motivation among blended learning led to persistence in their courses. Moreover, time management is a crucial effectiveness factor for successful online learning \[[@ref85]\]. Second, offline blended learning did not show a positive pooled effect compared to traditional learning; however, 2 of the 3 studies were in nursing. These results were consistent with a previous meta-analysis on offline digital education \[[@ref86]\]. Nevertheless, potential benefits of offline education such as unrestrained knowledge transfer and enriched accessibility of health education have previously been suggested \[[@ref87]\]. These interventions could be focused on an interactive, associative, and perceptual learning experience by text, images, audio-video, or other components \[[@ref88],[@ref89]\]. Third, the effect of digital learning on knowledge outcomes presented inconsistent effects according to the group or subgroup analysis. Overall, the 8 digital blended learning studies showed a nonsignificant effect compared to traditional learning whereas in the medicine subgroup, digital learning had a positive effect (standard mean difference 0.26, 95% CI 0.07 to 0.45). Previous studies \[[@ref18],[@ref90]\] have shown similar results. Nevertheless, George et al \[[@ref18]\] showed the effectiveness of digital learning for undergraduate health professionals compared to traditional learning. Fourth, in the 10 studies related to computer-assisted instruction, we observed a significant difference in knowledge acquisition outcomes. Furthermore, the difference was higher in the medicine subgroup. This finding must be interpreted with caution because of the high level of heterogeneity (all computer-assisted instruction: I^2^=78.0%; medicine computer-assisted instruction: I^2^=97.9%). Previous studies showed that computer-assisted instruction was equally as effective as traditional learning \[[@ref91]\]. Nevertheless, the results of these studies also had high levels of heterogeneity and require cautious interpretation. 
We believe that a comparative approach focusing on the differences in intervention design, sample characteristics, and context of learning is needed to better understand the effectiveness of computer-assisted instruction. Computer-assisted instruction could be perceived negatively by some students and impact outcomes. Fifth, the participants in Al-Riyami et al's study \[[@ref92]\] reported difficulties accessing the course because of network difficulties with university's server and internet; therefore, the asynchronous features of the discussion boards were not used to their full potential in this study. Both problems could have emerged regardless of the online course. In traditional learning, students may choose not to engage in discussions, and internet connectivity issues can happen anywhere. This supports the contention above that local conditions, rather than a general effect, may render one or the other mode of instruction preferable to the other. Sixth, virtual patient blended learning had an overall positive pooled effect when compared to traditional learning on knowledge outcomes; this was also found in a similar meta-analysis \[[@ref93]\]. Our observations also supplement the evidence in previous reviews \[[@ref94],[@ref95]\] which included studies since 2010. Nevertheless, virtual patient simulations predominantly affect skill rather than knowledge outcomes. This could explain the low number of studies and the low added value of virtual patient in comparison to traditional learning. Virtual patients have greater impact in skills training, in applying problem solving, and when direct patient contact is not possible \[[@ref93]\]. As proposed by Cook and Triola \[[@ref96]\], virtual patients can be said to be a modality for learning in which learners actively use and train their clinical reasoning and critical thinking abilities before bedside learning \[[@ref96]\]. Nevertheless, some exceptions can be noted. A need for more guidance within virtual patient simulations may appear in studies with different instructional methods where narrative virtual patient design was better than more autonomous problem-oriented designs \[[@ref97]\]. Feedback given by humans in a virtual patient system was better than an animated backstory in increasing empathy \[[@ref98]\], but no feedback had no more positive result on the outcomes than learning from a virtual patient scenario \[[@ref99]\]. This reminds us that presenting realistic patient scenarios with a great degree of freedom may not be an excuse for neglecting guidance in relation to learning objectives \[[@ref100]\]. Strengths and Limitations ------------------------- This meta-analysis had numerous strengths. An evaluation of the effectiveness of blended learning for health professions is timely and very important for both health educators and learners. We intentionally kept our scope broad in terms of learning topic and included all studies with learners from health professions. The samples used in this study consisted in various health professionals (in medicine, nursing, dentistry, and others) across a wide variety of health care disciplines. Although, these observations could explain the high level of heterogeneity, we found moderate or large effects for the pooled effects sizes of almost all subgroup analyses exploring variations in participant types. Thus, these results could suggest that health care learning should use blended learning in several and various disciplines of health learning. 
However, some limitations must be considered. The systematic literature search encompassed one database (MEDLINE) with few exclusion criteria. The quality of the analyses was dependent on the quality of data from the included studies. When the standard deviation of some interventions was not available due to poor reporting, we used the average standard deviation of the other studies and imputed effect sizes, with concomitant potential for error. Results of subgroup analyses should be interpreted with caution because of the absence of a priori hypotheses in some cases, such as study design, country socioeconomic status, and outcome assessment. In addition, since the variability of study interventions, assessment instruments, and circumstances was not assessed and could be a potential source of heterogeneity, this should also be cause for cautious interpretation of results. Finally, publication bias was also found.

Conclusions
-----------

This study has implications for research on blended learning in health professions. Even though conclusions could be weakened by heterogeneity across studies, the results of this synthesis reinforced that blended learning may have a positive effect on knowledge acquisition related to health professions. Blended learning could be promising and worthwhile for further application in health professions. The difference in effects across subgroup analyses of health populations indicated that different methods of conducting blended courses could demonstrate differing effectiveness. Therefore, researchers and educators should pay attention to how to implement a blended course effectively. This question may be answered successfully through studies directly comparing different blended instructional methods. Authors' Contributions: AV performed the design and conceived the original idea. AV and ES read the articles and selected the articles. AV performed the statistical analyses. AV wrote the manuscript. ES, AC, and JB participated in the writing of the manuscript. Conflicts of Interest: None declared. Supplemental appendix.
Q: Word for calculating/adjusting a payment when I previously paid too much? As part of a percentage rent, I need to pay my landlord 10% of my monthly revenue, but no lower than $10,000. Therefore, I automatically pay $10,000 at the beginning of each month. So let's say we're dealing with January. I pay $10,000, and shortly into February, I have determined my monthly revenue, and 10% of it is $15,000. Now I need to pay an additional $5,000 for January, instead of all 10% of my revenue (because I already put down $10,000). Now, what is the word for what we did to the 10% of revenue to use only the portion needed to leave the landlord with only $15,000? Example sentence: Because I have already paid $10,000, we must _________ the $15,000. The word offset has been suggested, but I do not think it is an accurate choice. Also - please make suggestions that fit in the example sentence above, the structure is quite important to my question. A: You deduct the advance payment ($10,000 in this case) from the final/total rent payable. Because I have already paid $10,000, we must deduct it from the total ($15,000). ODO: deduct VERB [WITH OBJECT] Subtract or take away (an amount or part) from a total: ‘tax has been deducted from the payments’ ‘Any severance already paid to the workers will be deducted from that amount, the judge ruled.’ TFD(idioms): deduct (something) from (something else) to subtract an amount from another amount. Mr. Wilson deducted the discount from the bill. McGraw-Hill Dictionary of American Idioms and Phrasal Verbs. © 2002 by The McGraw-Hill Companies, Inc. Wiktionary: Verb deduct (third-person singular simple present deducts, present participle deducting, simple past and past participle deducted) To take one thing from another; remove from; make smaller by some amount. I will deduct the cost of the can of peas from the money I owe you. A: ... we must pay the balance. See also this entry. A: Offset is fine, but you don't offset the $15000. You offset the $10k against that sum. Because I have already paid $10,000, we must offset it against the $15,000. Oxford's definition isn't particularly helpful, but there is an example which matches, noting how tax liability can be reduced by charitable donations already made: offset verb [with object] Counteract (something) by having an equal and opposite force or effect: ‘donations to charities can be offset against tax’ Other examples, showing a slightly different usage: The cost would be roughly $1.5 billion, which he says would be offset with spending cuts. [online.wsj.com] The cost of the program was partly offset with a $1 million grant from Mercury Insurance. [dailynews.com] In both usages, offset against and offset with, the larger sum is as stated and the smaller sum is offset.
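As an aside on the arithmetic being described (separate from the word-choice question), the amount works out as the greater of the minimum and 10% of revenue, minus what was already advanced. A throwaway sketch using the question's numbers, purely for illustration:

```python
minimum_payment = 10_000        # paid automatically at the start of the month
revenue_share = 0.10

monthly_revenue = 150_000       # determined shortly after the month ends
rent_due = max(minimum_payment, revenue_share * monthly_revenue)
balance = rent_due - minimum_payment   # the advance is deducted from the total

print(f"Total rent due: ${rent_due:,.0f}; additional payment: ${balance:,.0f}")
```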
Objective: To evaluate whether a 2-dimensional (2D) model describes the surgical difficulty of a renal mass with accuracy comparable to that obtained using a 3D model, based on the Simplified PADUA REnal nephrometry system (SPARE). Methods: A total of 100 patients underwent robot-assisted partial nephrectomy (RAPN) in our hospital between October 2018 and May 2021. We excluded patients with CT images inappropriate for evaluation or for construction of 3D models, patients with multiple tumors, and those who underwent preoperative transcatheter arterial embolization. We conducted a retrospective analysis of the remaining patients using SPARE scores derived from CT images (2D-SPARE) and SPARE scores derived from 3D models (3D-SPARE). We evaluated the difference between the 2 nephrometry scores and compared their ability to predict achievement of the desired surgical outcome: absence of positive margins, absence of ischemia, and absence of significant complications. Results: A total of 87 patients were included in this study. The total score and risk categorization using 3D-SPARE were significantly different from those using 2D-SPARE (P <.05), but their areas under the curve (AUC) were not significantly different (score, 0.763 vs 0.742; P =.501; categorization, 0.711 vs 0.701; P =.755). Conclusion: The SPARE system can describe the surgical difficulty of renal masses with high accuracy even without the use of 3D renal models.
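The abstract compares the two scores by their AUCs for predicting the desired surgical outcome, but the statistical test used for that comparison is not stated in this excerpt. Purely as a hedged illustration, one common way to compare two correlated AUCs computed on the same patients is a paired bootstrap, sketched below with entirely synthetic data; the variable names, outcome coding, and effect sizes are assumptions, not values from the study.

```python
# Paired bootstrap comparison of two AUCs on the same cohort.
# Synthetic data only; not the authors' method or dataset.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 87                                   # cohort size taken from the abstract
y = rng.integers(0, 2, size=n)           # 1 = outcome of interest (coding assumed)
# Hypothetical nephrometry scores loosely associated with the outcome.
score_2d = y * 1.0 + rng.normal(0, 1.5, size=n)
score_3d = y * 1.2 + rng.normal(0, 1.5, size=n)

def auc_diff_ci(y_true, s1, s2, n_boot=2000):
    """Bootstrap 95% CI for AUC(s1) - AUC(s2) on paired predictions."""
    diffs = []
    idx = np.arange(len(y_true))
    for _ in range(n_boot):
        b = rng.choice(idx, size=len(y_true), replace=True)
        if len(np.unique(y_true[b])) < 2:    # resample must contain both classes
            continue
        diffs.append(roc_auc_score(y_true[b], s1[b]) -
                     roc_auc_score(y_true[b], s2[b]))
    return np.percentile(diffs, [2.5, 97.5])

print("AUC 2D:", round(roc_auc_score(y, score_2d), 3))
print("AUC 3D:", round(roc_auc_score(y, score_3d), 3))
print("95% CI for AUC difference:", auc_diff_ci(y, score_2d, score_3d))
```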
https://okayama.pure.elsevier.com/en/publications/simplified-padua-renal-spare-nephrometry-system-can-describe-the-
However, they haven’t mentioned the personal cost and whether it’s sustainable. As it appears to me, our economy has been growing while most businesses have been struggling to find and train quality staff to handle the workload. As a result of that, all of us are doing more work faster and with less support. Simply put, we are all over-tasked and under-resourced. In the midst of the hundreds of emails, the constant interruptions that we endure and the plethora of new requirements being thrown at us, we should keep in mind our core job objectives and goals. It’s so easy to get distracted as our inboxes flood with emails and as our phones buzz constantly with text messages. Time seems to evaporate as we chase each and every task, sometimes at the expense of our goals. Working on the goals and objectives related to our job descriptions and functions needs to find a place in the workday. Our job function might be derailed as we spend time satisfying small demands, which while important, fall outside of our objectives. Sometimes simply stepping away from the clutter and putting what we want to accomplish on a piece of paper, a Post-it note or a dry erase board and prioritizing can make all the difference to our company and personal satisfaction. I personally employ all three. I have a dry-erase board filled with reminders, Post-its all around my office and I’m constantly writing notes on paper to keep my objectives in mind. This helps me stay focused. We can’t escape the volume of messages that come at us, but we can sometimes prioritize the more meaningful activities over the time-consuming tasks. If possible, it is good to discuss these things with your team and try to divide and conquer. When you can delegate subjects to team members who can respond more proactively and with less effort, everyone wins. Sometimes, we just have to work late into the night and on the weekends to accomplish other necessary tasks so that we can focus on our objectives and goals throughout the workday. Most importantly, we have to keep our goals and objectives in the front of our minds if we hope to accomplish them.
http://midwestrg.com/2018/06/task-management/
At its core, CBM+ is maintenance performed based on evidence of need, integrating RCM analysis with those enabling processes, technologies, and capabilities that enhance the readiness and maintenance effectiveness of DoD systems and components. CBM+ uses a systems engineering approach to collect data, enable analysis, and support the decision-making processes for system acquisition, modernization, sustainment, and operations. CBM+ integrates all functional aspects of life cycle management processes for materiel requirements, such as systems engineering, development, acquisition, distribution, supply chain management, sustainment, and modernization. The goals are to implement an optimum mix of maintenance technologies (e.g., condition monitoring, diagnostics, and prognostics), best practices, RCM-based processes, and enablers (e.g., total asset visibility) within the integrated total life cycle framework, and to minimize mean downtime by providing timely condition information, precise failure mode identification, and accurate technical data to expedite repair and support processes. CBM+ is pursued through the examination, evaluation, and implementation of enabling technologies, tools, and process improvements, and it is used as a principal consideration in the selection of maintenance concepts, technologies, and processes for all new weapon systems, equipment, and materiel programs, based on readiness requirements, life cycle cost goals, and reliability centered maintenance (RCM)-based functions. DoD policy requires CBM+ to be implemented for maintenance and logistics support of Service weapon systems where cost effective. The scope of CBM+ includes maintenance-related processes, procedures, technological capabilities, information systems, and other logistics concepts that apply to both legacy systems and new acquisition programs. Program Managers (PMs) are required to design, develop, demonstrate, deploy, and sustain equipment in accordance with CBM+ policy and guidance to achieve required materiel readiness at best value. An architecture framework is the fundamental organization of a system or process, embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution. The architectural framework defines a common approach for architecture description development, presentation, and integration for both DoD warfighting operations and business operations and processes, and it is intended to ensure design descriptions and interfaces can be compared and related throughout the product or process life cycle across organizational and functional boundaries. CBM+ initiatives reflect universally popular objectives but can lose support when faced with competing operational priorities. Continued research into emerging technologies and business practices provides programs with the latest information for selecting optimum maintenance solutions. Sharing information between programs and Services will stimulate forward progress in CBM+ development and implementation. Regular progress reviews will ensure that new personnel and programs are brought into the CBM+ environment so that CBM+ strategic plans stay on track. 
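To make the prognostics enabler mentioned above concrete, here is a toy sketch of the kind of calculation a condition-monitoring capability supports: fitting a trend to a monitored degradation signal and projecting when it crosses a condemnation threshold, i.e., a rough Remaining Useful Life estimate. Everything here is illustrative — the signal, threshold, and linear-trend assumption are invented and are not drawn from DoD guidance; real CBM+ prognostics rest on validated failure-mode models.

```python
# Toy prognostic: project Remaining Useful Life (RUL) by fitting a linear
# trend to a monitored degradation signal and extrapolating to a threshold.
import numpy as np

hours = np.array([0, 100, 200, 300, 400, 500], dtype=float)   # operating hours
vibration = np.array([2.1, 2.3, 2.6, 2.8, 3.2, 3.5])          # hypothetical signal
failure_threshold = 5.0                                        # condemnation limit

slope, intercept = np.polyfit(hours, vibration, 1)
if slope <= 0:
    print("No degradation trend detected; RUL not estimable from this signal.")
else:
    hours_at_threshold = (failure_threshold - intercept) / slope
    rul = hours_at_threshold - hours[-1]
    print(f"Projected RUL: {rul:.0f} operating hours "
          f"(trend {slope:.4f} units/hour)")
```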
Here we provide an overall Condition-Based Maintenance (CBM) Business Case Analysis (BCA) process, a common set of cost elements, measures of effectiveness, a notional BCA framework, and factors to consider when assessing and subsequently conducting a CBM BCA. The aim is to shape an understanding of the areas in which CBM capabilities might benefit a program/system, in order to support a go/no-go decision and subsequent investment decisions with justifiable information. CBM is the application and integration of appropriate processes, technologies, and knowledge-based capabilities to improve the reliability and maintenance effectiveness of DoD systems and components. At its core, CBM is maintenance performed based on evidence of need provided by Reliability Centered Maintenance (RCM) analysis and other enabling processes and technologies. CBM uses a systems engineering approach to collect data, enable analysis, and support the decision-making processes for system acquisition, operations, and sustainment. In evaluating potential CBM capabilities, whether they are technologies, maintenance processes, or information/data knowledge applications, a BCA needs to address these areas in a comprehensive and consistent manner, particularly when an incremental acquisition or fielding strategy is being considered. Although the basic concept and purpose of BCAs are generally understood throughout DoD, many interpretations exist regarding the assessment of CBM capabilities, so care is needed to ensure appropriate and accurate consideration is given to CBM capabilities, costs, and benefits. So, what is a BCA? A BCA is a decision support approach that identifies alternatives and presents convincing business, economic, risk, and technical arguments for selection and implementation to achieve stated organizational objectives/imperatives. A BCA does not replace the judgment of a decision maker, but rather provides an analytic and uniform foundation upon which sound investment decisions can be made. The subject of a BCA may include any significant investment decision that leadership is contemplating. For example, a BCA may be used to substantiate the case to invest in a new weapons system (though not at the same level as a Capabilities Based Assessment), transform business operations, develop a web-based training curriculum, or retire an asset. In general, BCAs are designed to answer the following question: what are the likely operational/business consequences if we execute this investment decision or this action? The possibility exists that any projected savings or cost reductions identified in the BCA could be viewed as an asset available for reallocation in the budgeting process. In evaluating the potential application of a CBM capability, it is important to understand the desired end state from a CBM metrics perspective and the key assumptions that may impact the system or CBM capability. You must define the need for a BCA, understand and define the problem, and define the desired end state. This approach focuses on As-Is system trends, evaluating Measures of Effectiveness (MOEs) and their cost drivers and key CBM metrics, determining whether CBM is a viable solution and, if so, which CBM capabilities are applicable, and then defining feasible solutions. The CBM+ Guidebook outlines measurable objectives for maintenance in a CBM+ environment and five relevant CBM+ operating metrics: materiel availability, materiel reliability, ownership cost, mean down time, and logistics footprint.
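To make those five operating metrics concrete, here is a minimal, self-contained Python sketch (not taken from the CBM+ Guidebook) that computes them from sample fleet data under simplified, commonly used definitions; all field names and figures below are illustrative assumptions only.

```python
# Minimal sketch (not from the source guidebook): illustrative calculations for the
# five CBM+ operating metrics named above, using simplified textbook-style formulas.
# All field names and sample figures are hypothetical.
from dataclasses import dataclass

@dataclass
class FleetPeriodData:
    total_systems: int             # systems in the fleet
    systems_mission_capable: int   # systems available for tasking at the point of measure
    operating_hours: float         # cumulative operating hours in the period
    failures: int                  # operational mission failures in the period
    total_downtime_hours: float    # downtime accumulated across all repair events
    repair_events: int             # number of corrective maintenance events
    o_and_s_cost: float            # operations & support cost for the period ($)
    footprint_short_tons: float    # deployed logistics footprint (illustrative unit)

def cbm_operating_metrics(d: FleetPeriodData) -> dict:
    """Return the five CBM+ operating metrics under simple, commonly used definitions."""
    return {
        "materiel_availability": d.systems_mission_capable / d.total_systems,
        "materiel_reliability_mtbf_hrs": d.operating_hours / max(d.failures, 1),
        "ownership_cost_per_op_hour": d.o_and_s_cost / max(d.operating_hours, 1.0),
        "mean_down_time_hrs": d.total_downtime_hours / max(d.repair_events, 1),
        "logistics_footprint_short_tons": d.footprint_short_tons,
    }

if __name__ == "__main__":
    sample = FleetPeriodData(
        total_systems=120, systems_mission_capable=96, operating_hours=54_000,
        failures=45, total_downtime_hours=1_800, repair_events=60,
        o_and_s_cost=12_500_000, footprint_short_tons=340,
    )
    for name, value in cbm_operating_metrics(sample).items():
        print(f"{name}: {value:,.2f}")
```

Tracking these same quantities over time for the As-Is system gives the baseline against which any proposed CBM capability is later compared.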
When defining metrics for your BCA, select a set of metrics, considering Systems Operational Effectiveness (SOE) metrics, that fairly represents the potential costs and MOEs that you expect to be able to capture or that are already available. Other metrics may also be appropriate when considering predictive capabilities such as advanced diagnostics or prognostics that enable accurate and timely prediction of Remaining Useful Life (RUL). Potential MOEs for Prognostics and Health Management (PHM) systems could include advance warning of failures; increased availability through an extension of maintenance cycles and/or timely repair actions; lower life-cycle costs of equipment from reductions in inspection costs, downtime, inventory, and no-fault-found events; or improved system qualification, design, and logistical support of fielded and future systems. Key assumptions constitute a critical element of the boundaries of the CBM+ BCA. Not everything included in the analysis is known. CBM+ BCAs, like any forecasting analysis, address future periods and conditions, and as much as we utilize data to predict future conditions, any future datum or condition is subject to change from forces that could not be predicted when the analysis was conducted. Assumptions allow us to logically portray reasonable expectations of future circumstances. Tailoring the BCA to fit your case will require adjusting functional areas, weighting factors, interfacing systems, sustainment/incremental capability improvements, and service life considerations. Areas to consider in defining assumptions should include: areas of integration with other systems; operating tempo; projected useful service life/remaining useful life; expected funding levels; basis for cost estimates; MOEs and related metrics (throughout the life cycle); technology forecast; CBM+ related logistics processes; and areas not addressed in the BCA. Considering these areas will help to ensure the CBM+ BCA is well scoped and defined. It is also important to understand how incremental CBM+ capability improvements may impact ROI and to ensure adequate assumptions are defined regarding the impact of incremental improvements on ROI. These incremental improvements, as well as improvements to other systems (e.g., the Global Combat Support System (GCSS)) and maintenance processes, will make contributions to the CBM+ ROI as they come online. Assumptions regarding timing and capability impact on the overall CBM+ ROI should be clearly stated. The ROI contribution of other systems may be significantly greater than one depending on application, access, and use of CBM+ data. Define the As-Is CBM+ configuration in terms of the existing CBM+ capability itself, the platform/weapons system it supports or is integrated into, and the current system's CBM+ performance measures/metrics. To the maximum extent possible, describe the As-Is configuration in terms of the CBM+ functionality (fault detection, isolation, prediction, reporting, assessment, analysis, decision-support execution, and recovery, both on- and off-board); CBM+ business needs; and operating metrics. Where incremental improvements have been implemented, provide any prior BCA, Economic Analysis (EA), or analytical data related to CBM+ functionality, CBM+ business needs, and metrics. Measures of Effectiveness.
MOEs should clearly answer the question, "What does this investment provide the customer/organization?" It is important to understand how benefits will be measured to ensure that appropriate data and information are collected and folded into reasonable measures that support CBM+ metrics and can be tied to time-phased changes in the system being evaluated. MOEs can be defined as an advantage, profit, or gain attained. They are commonly thought of as an investment return and should describe what the investment enables an agency to accomplish and how the mission is enhanced. Focusing on improved business outcomes rather than the technology is one of the best ways to ensure the expenditure of any resource furthers the agency's mission. CBM+ BCA risk analysis should include any areas or processes that may significantly affect your program and provide an assessment of their likelihood and potential impact. For each alternative, identify risks that could adversely affect it and assess the possibility that the initiative can be successful; specify a risk-reduction strategy for each risk; and identify key parameters and conditions that impact the investment decision. Present potential contingent actions that could mitigate the uncertainty, and identify how such uncertainties impact the analysis and investment decision. Prepare a sensitivity analysis by comparing alternatives and ranking them according to net present value, risk, ROI, or primary measures/factors such as risks and areas of uncertainty, technical maturity, level of integration risk, and funding (see the sketch after the checklist below). A sensitivity analysis can answer "What if the assumptions change?" It involves evaluating the variability of an alternative's cost, benefit, and risk with respect to a change in specific factors. The objective is to determine which factors have the greatest impact (positive or negative) on the evaluation of the alternative. Here we present some general questions and guidance that may relate to your CBM+ initiative. Answers to these questions are provided as information and an approach to support CBM implementation. As you plan your CBM BCA, the questions may assist in framing your general approach and strategy and ensure your CBM+ BCA is adequately defined and scoped to address key CBM business areas.
1. Ensure CBM+ implementation enhances maintenance efficiency and effectiveness.
2. Establish integrated, predictive maintenance approaches.
3. Minimize unscheduled repairs.
4. Eliminate unnecessary maintenance.
5. Employ the most cost-effective system condition management processes.
6. Implement data collection and analysis requirements.
7. Measure equipment sustainment performance characteristics.
8. Collect measures of effectiveness throughout life cycle sustainment.
9. Enhance materiel availability and life cycle system readiness.
10. Reduce equipment failures during mission periods.
11. Identify the best time to perform required maintenance, to increase operational assets.
12. Leverage open architectures and open standards to facilitate the broad application of CBM+ enablers.
13. Improve materiel reliability through the disciplined analysis of failure data.
14. Create capacity to modify designs and operating practices.
15. Ensure equipment meets target performance standards within operational context.
16. Optimize life cycle logistics processes.
17. Reduce operation and support costs by eliminating unnecessary maintenance activities.
18. Accurately position resources for an effective logistics footprint in support of warfighting requirements.
19. Incorporate CBM+ based on failure modes, effects, and other R&M analyses.
20. Create capacity for RCM analysis in accordance with established standards.
21. Continuous administrative process improvement initiatives.
22. Serialized item management applications.
23. Predictive reliability engineering methods.
24. Technology assessments and business case analyses.
25. Analysis formulated in a comprehensive reliability and maintainability (R&M) engineering program.
26. Document in the program Systems Engineering Plan and Life Cycle Sustainment Plan.
27. Assess standards during acquisition process reviews and evaluations.
28. Include in the development of the mandatory sustainment key performance parameter (KPP) and supporting key system attributes (KSAs).
29. Develop sustainment KPP or sponsor-defined sustainment metrics.
30. Adequately resource for implementation, to include product development, procurement, and sustainment.
31. Integrate in current weapon systems, equipment, and materiel sustainment programs where it is technically feasible.
32. Incorporate as part of maintenance plans and into contracts for systems and programs.
33. Review performance-based life cycle product support sustainment arrangements.
34. Assess availability metrics of the sustainment KPP: materiel availability and operational availability.
35. Consider the KSAs of reliability and operation and support cost.
36. Monitor and review the implementation of policies to ensure CBM+.
37. Demonstrate effectiveness across maintenance, acquisition, engineering, logistics, and industry groups.
38. Ensure CBM+ technologies, processes, and enablers are integrated with program acquisition and technical planning.
39. Consider CBM+ during program support reviews and other oversight reviews.
40. Support identified critical technologies through studies and analyses.
41. Review plans and projects to eliminate unpromising or duplicative programs.
42. Guide science and technology programs, advanced component development, and prototype programs to achieve CBM+ capabilities.
43. Incorporate the requirement for CBM+ in appropriate policy and guidance.
44. Develop and establish enterprise-level requirements for implementing CBM+.
45. Provide resources for CBM+ requirements developed at enterprise and weapon systems levels.
46. Review and monitor programs for CBM+ implementation and outcomes.
47. Use CBM+ solutions to maintain the readiness of new and fielded equipment.
48. Integrate common CBM+ technologies, processes, and procedures for similar platforms and components.
49. Require implementation of RCM and other appropriate reliability and maintainability analyses.
50. Ensure logistics information systems support CBM+ objectives.
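As a concrete illustration of the alternative-comparison and sensitivity step described before the checklist, the following Python sketch ranks hypothetical CBM+ alternatives by net present value (NPV) and simple ROI, then varies the discount-rate assumption; the alternatives, cash flows, and rates are invented for illustration and are not part of the source guidance.

```python
# Minimal sketch of the alternative-comparison and sensitivity step described above.
# The alternatives, cash flows, and discount rates are hypothetical; a real CBM+ BCA
# would draw these from validated cost estimates and MOE projections.

def npv(cash_flows, rate):
    """Net present value of year-indexed cash flows (year 0 first) at a given discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def roi(cash_flows):
    """Simple (undiscounted) return on investment: net benefit divided by investment outlays."""
    invested = -sum(cf for cf in cash_flows if cf < 0)
    returned = sum(cf for cf in cash_flows if cf > 0)
    return (returned - invested) / invested if invested else float("inf")

# Year-0 investment followed by projected annual net benefits ($M), per alternative.
alternatives = {
    "Status quo (As-Is)":      [0.0, 0.0, 0.0, 0.0, 0.0],
    "Add onboard diagnostics": [-4.0, 1.5, 1.8, 1.8, 1.8],
    "Full prognostics (PHM)":  [-9.0, 2.0, 3.5, 4.0, 4.5],
}

base_rate = 0.05
print("Ranking at base discount rate:")
for name, flows in sorted(alternatives.items(), key=lambda kv: -npv(kv[1], base_rate)):
    print(f"  {name}: NPV = {npv(flows, base_rate):+.2f}  ROI = {roi(flows):+.2f}")

# Sensitivity: does the ranking change if the discount-rate assumption changes?
for rate in (0.03, 0.07, 0.10):
    best = max(alternatives, key=lambda name: npv(alternatives[name], rate))
    print(f"At {rate:.0%} the highest-NPV alternative is: {best}")
```

In a full BCA the same pattern would be repeated for other uncertain factors (benefit realization rates, fielding schedule, technical maturity) to find which assumptions actually drive the decision.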
http://www.marinemagnet.com/status-updates/top-50-questions-execute-design-approach-support-condition-based-maintenance-business-case-assessment
Two important personality theories are the biological theory and the humanistic theory. The biological theory is based on the premise that all people inherit their characteristics from their family. This theory basically contends that people do not have control over their behaviors because they are genetically pre-determined. The humanistic theory, on the other hand, is based on the premise that each person has free will to control their actions. This theory does not go along with the idea that behaviors are pre-determined by genetics, but holds that they are chosen by the individual. These two theories have created debates between psychologists for many years. Hans J. Eysenck, Ph.D., D.Sc., who developed the biological theory, is one of the world's most cited psychologists. He is a pioneer in the use of behavior therapy as well as research in personality theory and measurement. The biological theory has to do with his findings that individual differences in personality are biologically based. This was based on his theory that there are three dimensions of personality (super factors). These dimensions of personality were extraversion-introversion, neuroticism, and psychoticism. Eysenck also went a step further in pointing out the results of many studies indicating that genetics play an important role in determining how much of each of the three personality dimensions one might possess. I agree with this theory because even most psychologists will admit that it is getting increasingly harder to ignore the obvious link between our genetic makeup and certain inherited behaviors. I disagree with this theory because it is difficult to test in actual experiments. Another reason I disagree with this theory is that while genetics play a role in certain behaviors, it does not excuse or justify certain actions. Lastly, this theory offers us very little in the area of personality change. In humanistic...
https://www.writework.com/essay/biological-and-humanistic-theories
Findings of a new study suggest that mystery features observed in light released from active galactic nuclei could be due to partial obscuration by dust clouds. Large galaxies often feature a bright core region known as an Active Galactic Nucleus (AGN) that is powered by matter spiraling into a huge black hole. Gas clouds in the broad-line region then release light at characteristic wavelengths, but what had puzzled astrophysicists is the complexity and variability that characterize these emissions. A new study by researchers at UC Santa Cruz explains the puzzle of the AGN as the impact of small dust clouds that can partly obscure the AGN's innermost areas. Martin Gaskell, a research associate in astronomy, says that they have shown that numerous mysterious properties of the AGN can be explained by the small dusty clouds. The recent findings have significant implications, since researchers use optical emissions coming from the broad-line region to draw conclusions about the behavior of the gas in the inner regions around the black hole. Gaskell says that the nature of this gas is poorly understood. Peter Harrington, a UCSC graduate, explains that the spiraling of gas toward a galaxy's central black hole triggers the formation of a flat accretion disk containing superheated gas, which then produces intense thermal radiation. That light is reprocessed by hydrogen and other gases that swirl above and below the disk in the broad-line region; the region of dust lies above and beyond the accretion disk. The effect of the dust clouds on the emitted light is to make the light coming from behind them look both fainter and redder, in the same way that the Earth's atmosphere makes the sun at sunset look fainter and redder. In their publication, Harrington and Gaskell provide observational evidence that supports the existence of the said dust clouds in the inner regions of the AGN. The two researchers developed a computer code that can model the effects of dust clouds on the broad-line region. Harrington says that they wrote the computer code with the intention of adjusting parameters such as gas distribution in the broad-line region, its speed of movement, and the system orientation, then introducing dust clouds and observing their effect on the emission-line profiles. The findings suggest that including dust clouds in their model can replicate numerous emission features from the broad-line region that have long puzzled astrophysicists. Thus, the gas does not need a complex, asymmetric, and changing distribution; instead it can form a uniform, symmetric, turbulent disk around the black hole.
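The published model itself is not reproduced in the article, but a toy Python sketch can convey the qualitative idea: a symmetric, turbulent, rotating disk produces a broad emission line, and dimming the light from the half of the disk sitting behind a foreground dust cloud skews the observed profile. All numbers below are invented for illustration and are not the researchers' code or parameters.

```python
# Toy illustration only -- not the UCSC group's code. It sketches the qualitative idea:
# a symmetric, turbulent rotating disk emits a broad line, and dimming the half of the
# disk behind a foreground dust cloud makes the observed profile asymmetric.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Line-of-sight velocities (km/s) for gas parcels in a rotating, turbulent disk (toy numbers).
phi = rng.uniform(0.0, 2.0 * np.pi, n)          # azimuth of each gas parcel
v_rot, v_turb = 4000.0, 1500.0                  # rotation and turbulence scales
v_los = v_rot * np.sin(phi) + rng.normal(0.0, v_turb, n)

# Dust cloud covering the receding half of the disk: transmit only a fraction of its light.
transmission = np.where(np.sin(phi) > 0.0, 0.4, 1.0)

bins = np.linspace(-12_000, 12_000, 121)
unobscured, _ = np.histogram(v_los, bins=bins)
obscured, _ = np.histogram(v_los, bins=bins, weights=transmission)

centers = 0.5 * (bins[:-1] + bins[1:])
centroid_unobs = np.average(centers, weights=unobscured)
centroid_obs = np.average(centers, weights=obscured)
print(f"Profile centroid: {centroid_unobs:+.0f} km/s unobscured "
      f"vs {centroid_obs:+.0f} km/s with the dust cloud")
```

The unobscured profile is symmetric about zero velocity, while the obscured one is shifted and lopsided, which is the kind of asymmetric, variable line shape the study attributes to dust rather than to complicated gas motions.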
https://www.dailyscanner.com/dust-clouds-causing-the-puzzling-features-in-active-galactic-nuclei/
Scarification has been used for many reasons in many different cultures. It has been used as a rite of passage in adolescence, or to denote the emotional state of the wearer of the scars, such as times of sorrow or well-being. This is common among Australian Aboriginal and Sepik River tribes in New Guinea, amongst others. The Māori of New Zealand used a form of ink-rubbing scarification to produce facial tattoos known as "moko." Moko were considered to make the body complete, as Māori bodies were considered to be naked without these marks. Moko were unique to each person and served as a sort of signature. Some Māori chiefs even used the pattern of their moko as their signatures on land treaties with Europeans.
Tuesday marks the second edition of the International Clean Air Day for Blue Skies, an initiative to raise awareness of air quality issues and the impact of air pollution on humanity and the environment. The International Clean Air Day for Blue Skies, established by the United Nations General Assembly in 2019, is part of the 2030 Agenda for Sustainable Development. The day's purpose is to give people information and raise awareness about the dangers of air pollution and the importance of clean air to both humans and the environment. It also allows participating member nations of the UN Environment Programme to highlight and share initiatives aimed at reducing air pollution. To mark the day and to encourage cleaner air practices, the Compagnie Autobus de Monaco (CAM) made bus services free to all throughout the Principality on Tuesday. Since 1998, the Principality's Environmental Department has been monitoring air quality, alerting the public on days when pollution thresholds are exceeded and tracking changes over time. The government of Monaco has long been committed to improving air quality, and in 2019 it went the extra mile by joining the Breathe Life organisation, which sets goals for its members based on World Health Organisation recommendations, with the hope that they will be achieved by the self-imposed 2030 deadline. Atmospheric pollutants are assessed every year by the Department of the Environment so that it may respond to the several international conventions to which Monaco has bound itself, including the Convention on Long-Range Transboundary Air Pollution (CPATLD) and the Montreal Protocol for the preservation of the ozone layer. A network of five stations positioned around Monaco was reinforced in 2020, adding the analysis of very fine and fine particles from "incomplete combustion", or black carbon, and deploying micro-sensors at certain places around the country; these sense not only pollution but will, by the end of 2021, also detect other airborne irritants such as pollen. Monaco continues to push for more environmentally friendly mobility choices amongst workers and residents. It encourages purchasing electric vehicles, using alternative methods of transport such as biking and walking when possible, carpooling, the use of biofuels, and a gradual transition to total electrification of public transport. All these efforts are meant to culminate in a return to pre-1990 traffic density and cleaner air.
https://monacolife.net/what-is-monaco-doing-to-achieve-cleaner-air/
In a divorce situation, there are likely to be many child visitation issues. The first issues may arise when visitation or custody is set in the first place. A visitation order will generally be created, either by the parents or by the judge. This visitation order is a legal document that will outline the non-custodial parent's visitation rights. It will typically include information regarding the exact dates and times that the visitations may take place, the duration of the visitations, and whether the visits must be supervised or not. Such details will often be based on weekends, school vacations, and holidays. It will also be in accordance with the residences of both parents, i.e. whether they are both local or have plans to relocate. Travel time will of course be considered. This legal document is drawn up as an important step in the divorce proceeding. It may, however, be modified later on. Despite child visitation details being stated clearly in the order, several issues may still arise and cause conflict between the parents. Usual problems encountered involve antagonism between parents, disorganization, and lack of priorities. All these factors may result in the violation of the visitation order. There are non-custodial parents who do not take full advantage of their visitation rights. In some cases, especially where both parents are not on good terms, the non-custodial parent may be inconsistent with his or her visitations as a way to spite the custodial parent. There are also many other possible reasons for irregular or inconsistent visitations. Some may be genuine: perhaps the parent had work obligations or health problems. However, some reasons are purely selfish: a non-custodial parent may sometimes choose to visit only when it is most convenient. When a non-custodial parent repeatedly fails to follow the visitation schedule, one of the simplest ways to resolve this is through communication. Both parents must talk and come to an agreement prior to each visitation to confirm whether the plans will go through or not. If, after such conversations, the non-custodial parent is still inconsistent, the custodial parent has the right to seek legal intervention to alter or eliminate the visiting schedule. The custodial parent could also refuse to allow the visitation that the schedule permits. If this is the case, the non-custodial parent may have to get legal help enforcing his or her right to see the children. Seeking court intervention should be the last resort when trying to solve these issues. However, when parents have repeatedly failed to resolve the problems, the parent who is considering court action should keep a record of the dates and times the visitation order was clearly violated. The record should also include the child's reactions to such violations, as well as your attempts to resolve the issues. If your visitation schedule is not working, you should strongly consider contacting a lawyer. Your attorney can explain any legal rights you may have for getting the problems with visitation corrected so you can better protect your children's interests.
http://lawyer88.com/practice-areas/divorce-lawyer/child-custody/749-china-family-law-firm-provides-child-visitation-issues-with-non-custodial-parent.html
The biopsychosocial framework refers to three different entities: biological, psychological, and sociocultural. It is crucial to understand this model, especially when it comes to the topic of health. We must realize that biological, psychological, and sociocultural factors all contribute to an individual's health. The biological forces refer to all genetic and health-related factors. These are what humans are born with, half of our traits given to us by each parent. These factors are completely out of our control and cannot be manipulated. As a group of physiological developments that lead to maturity, including influences such as nutrition and hormonal effects, the biological force is concerned with how the body continues to develop. Psychological forces include all internal cognitive, emotional, perceptual, and personality factors which underlie human behavior, feelings, and emotions and how they may relate to early human experience. As we get older, we gain experience and understand that the world is more complex than we can imagine. Things like personality, learning abilities, and making the right choices are constituents of the psychological force. The most obvious changes occur during childhood, such as understanding how to respond to a situation and what choices should be made, but they can also occur during adulthood. As people continue to grow into older age, they learn to control their emotions better and gain wisdom. Sociocultural forces include interpersonal, societal, cultural, and ethnic factors. They are a group of values, ideas, and beliefs that influence how we grow. Examples include morals, habits, and practices formed throughout our life. There is a system involved in understanding human development. This larger system involves our family, such as parents, children, and siblings, as well as teachers, friends, and people that we work with. The sociocultural system also includes institutions such as church, school, and work. All of these values fit together to help constitute a person's culture, forming knowledge, attitudes, and behavior that can be associated with a group of people. To understand the biopsychosocial framework, certain theories are centered on various aspects of development during growth. To this end, a number of theories of child development have been studied to explain human growth. Critical thinkers such as Erikson, Skinner, Piaget, Bronfenbrenner, and Baltes have paved the way for future thinkers to continue and develop current growth and development theories and introduce new ideas as well. These theories are a way of rationalizing the world and the way living and non-living things behave. The variables that character depends on are diverse and differ from individual to individual. Erikson categorized human development into a series of eight stages that involve not only toddlers and children, but adults as well. Each phase of life comes with a corresponding problem that must be solved if happiness is to be had. B.F. Skinner believed that the consequences of a behavior determined whether that behavior was repeated in the future. His purpose was to show that consequences could be deemed influential. Positive and negative reinforcements increased the likelihood of the behavior occurring again, while punishments decreased that likelihood. Piaget documented the significance of collaborating socially within the environment during the development of children.
He assumed that the construction of human knowledge originates from within the child. Four distinct stages of cognitive development were described: sensorimotor, preoperational, concrete operational, and formal operational. This theory was based on self-discovery and innate abilities. Bronfenbrenner believed that development depended on interactive environmental systems. He acknowledged that humans do not grow and mature if they are alone or isolated, but they do begin to learn and adapt in response to the relationships and experiences involving friends and family. People within a system influence development, and once multiple systems interact, optimal development is in effect. Paul Baltes, who was an advocate for the life-span perspective, believed that "aging is a lifelong process of growing up and growing old, beginning with birth and ending with death. We must know what came before and what is likely to come afterward" (Kail & Cavanaugh, 2017). He proposed four ideas that were important to the life-span perspective: multidirectionality, plasticity, historical context, and multiple causation. When comparing and contrasting the cognitive development approach and the behaviorist theory, the cognitive approach revolves around the idea that understanding the internal processes of the mind is vital to understanding why people do certain things and make certain decisions. Piaget understood that the "core focus of his theory was on the process of people acquiring, processing, and storing information" (San Diego Figure Skating Communications). The behaviorist approach emphasizes external behaviors rather than the internal state of mental information processing, relying on techniques such as positive and negative reinforcement as well as punishment. Key concepts that Skinner believed in included reinforcement, conditioning, and punishment. Behaviorism focuses heavily on a person's behavior in terms of learning. Behaviorists may look for a change in a person's behavior after experiencing something new. Forming associations between specific stimuli and responses is very important. For example, humans are more likely to learn behaviors if they think doing so will prompt an exact response. Piaget would disagree because he sees learning as a mental change that may be reflected in an individual's behavior, whereas Skinner considered reinforcement and punishment the best way to enforce learning. Understanding human development can help provide more insight into why the patient or client is affected the way they are. When we think of development, we tend to think of it as a process that is finished once we reach adult life, but it is important to remember that development is a process that happens throughout the course of our lives. As a clinician or health care professional, knowing the ins and outs of development can help us gain a better understanding of what is normal. Although everyone is unique in their own way, growth follows a specific pattern, and being familiar with certain terms can help us spot possible signs of trouble, whether cognitive, emotional, functional, or social. The sooner developmental issues are found, the faster the person can begin therapy, which can lead to a potentially better outcome. References: Kail, R., & Cavanaugh, J. (2017). Essentials of Human Development: A Life-Span View. Boston, MA: Cengage Learning.
https://oregonsigmanu.com/the-biopsychosocial-framework-refers-to-three-different-entities/
LedgerOps, Inc. (the “Company”) is committed to protecting the privacy of its users. For purposes of this Agreement, “Service” refers to the Company’s service which can be accessed via our website at LedgerOps.com. The terms “we,” “us,” and “our” refer to the Company. “You” refers to you, as a user of Service. I. CONSENT II. INFORMATION WE COLLECT We may collect both “Non-Personal Information” and “Personal Information” about you. “Non-Personal Information” includes information that cannot be used to personally identify you, such as anonymous usage data, general demographic information we may collect, referring/exit pages and URLs, platform types, preferences you submit and preferences that are generated based on the data you submit and number of clicks. “Personal Information” includes information that can be used to personally identify you, such as your name, address and email address. In addition, we may also track information provided to us by your browser when you view or use the Service, such as the website you came from (known as the “referring URL”), the type of browser you use, the device from which you connected to the Service, the time and date of access, and other information that does not personally identify you. We use this information for, among other things, the operation of the Service, to maintain the quality of the Service, to provide general statistics regarding use of the Service and for other business purposes. We track this information using cookies, or small text files which include an anonymous unique identifier. Cookies are sent to a user’s browser from our servers and are stored on the user’s computer hard drive. Sending a cookie to a user’s browser enables us to collect Non-Personal Information about that user and keep a record of the user’s preferences when utilizing our services, both on an individual and aggregate basis. We may use both persistent and session cookies; persistent cookies remain on your computer after you close your session and until you delete them, while session cookies expire when you close your browser. Persistent cookies can be removed by following your Internet browser help file directions. If you choose to disable cookies, some areas of the Service may not work properly. III. HOW WE USE AND SHARE INFORMATION Personal Information: In general, we do not sell, trade, rent or otherwise share your Personal Information with third parties without your consent. We may share your Personal Information with partners and other third-party providers who are performing services for the Company. In general, the partners and third-party providers used by us will only collect, use and disclose your information to the extent necessary to allow them to perform the services they provide for the Company. However, certain third-party service providers have their own privacy policies in respect of the information that we are required to provide to them in order to use their services. For these third-party service providers, we recommend that you read their privacy policies so that you can understand the manner in which your Personal Information will be handled by such providers. In addition, we may disclose your Personal Information if required to do so by law. Non-Personal Information: In general, we use Non-Personal Information to help us improve the Service and customize the user experience. We also aggregate Non-Personal Information in order to track trends and analyze use patterns of the Service. IV. 
HOW WE PROTECT INFORMATION We implement reasonable precautions and follow industry best practices in order to protect your Personal Information and ensure that such Personal Information is not accessed, disclosed, altered or destroyed. These measures, however, do not guarantee that your information will not be accessed, disclosed, altered or destroyed by breach of such precautions. By using our Service, you acknowledge that you understand and agree to assume these risks. V. YOUR RIGHTS REGARDING THE USE OF YOUR PERSONAL INFORMATION You have the right at any time to prevent us from contacting you for marketing purposes. When we send a promotional communication to a user via HubSpot, Inc. ("HubSpot"), the user can opt out of further promotional communications by following the unsubscribe instructions provided in each email. VI. SQUARESPACE AND HUBSPOT Our Service is hosted by Squarespace, Inc. ("Squarespace") and HubSpot. Squarespace and HubSpot provide us with the online platform that allows us to provide the Service to you. Your information, including Personal Information, may be stored on servers belonging to Squarespace and HubSpot. By using the Service, you consent to the collection, disclosure, storage, and use of your Personal Information in accordance with the privacy policies of both Squarespace and HubSpot. You should read the privacy policies at https://www.squarespace.com/privacy and https://legal.hubspot.com/privacy-policy to understand how your personal information may be collected, stored, and used. VII. LINKS TO OTHER WEBSITES As part of the Service, we may provide links to other websites or applications. However, we are not responsible for the privacy practices employed by those websites or for the information or content they contain. VIII. AGE OF CONSENT By using the Service, you represent that you are at least 18 years of age. X. MERGER OR ACQUISITION
https://ledgerops.com/privacy-policy/
BACKGROUND Aspects of the disclosure relate to deploying machine learning systems to predict customer needs. In particular, one or more aspects of the disclosure relate to machine learning based automated pairing of customers and businesses. Enterprise organizations may utilize various computing infrastructure to provide services to their customers. Customers of the enterprise organization may include individuals and businesses. In some instances, a customer may make a purchase, and there may be business customers that have offerings that may be beneficial to a customer. Detecting a pattern of purchase activity for customers, and matching them to appropriate business customers, may be of high significance to an enterprise organization. In many instances, however, it may be difficult to ensure detection of such need, and connecting customers and businesses, while also attempting to optimize the resource utilization, bandwidth utilization, and efficient operations of the computing infrastructure involved in maintaining, accessing, and executing such purchase activities. SUMMARY Aspects of the disclosure provide effective, efficient, scalable, fast, reliable, and convenient technical solutions that address and overcome the technical problems associated with automated pairing of customers and businesses. In accordance with one or more embodiments, a computing platform having at least one processor, and memory, may determine, via a computing device and based on historical user activity of a user, a pattern of the user activity. Subsequently, the computing platform may identify, based on the pattern of the user activity, one or more anticipated purchase activities of the user. Then, the computing platform may determine, via the computing device and based on the one or more anticipated purchase activities of the user, a sales offering by a vendor. Then, the computing platform may match, via the computing device, the one or more anticipated purchase activities of the user with the sales offering by the vendor. Then, the computing platform may retrieve, from a repository of user data and for a purchase activity of the one or more anticipated purchase activities, one or more user-defined preference rules associated with the purchase activity. Then, the computing platform may determine whether the one or more preference rules apply to one or more attributes of the purchase activity. Subsequently, the computing platform may trigger, via the computing device and based on a determination that the one or more preference rules apply to the one or more attributes of the purchase activity, an action associated with the purchase activity. In some embodiments, the action associated with the purchase activity may include providing, via an intelligent virtual assistant, a recommendation to perform the purchase activity. In some embodiments, the action associated with the purchase activity may include making an automatic purchase. In some embodiments, the computing platform may retrieve, from one or more external sources of data, one or more events that may impact the purchase activity. Then, the computing platform may identify an event of the one or more events that impacts the purchase activity, where the action associated with the purchase activity comprises a recommendation that minimizes the impact of the event for the user. In some embodiments, the one or more external sources of data may be artificial intelligence based systems. 
In some embodiments, the event may include one or more of a weather related event, an employment related event, a geopolitical event, a civic unrest, and a medical event. In some embodiments, the computing platform may determine the one or more user-defined preference rules by establishing, via an intelligent virtual assistant, an interactive session with the user. Then, the computing platform may provide, to the user, one or more questions in sequential format. Then, the computing platform may receive, from the user, responses to the one or more questions. Subsequently, the computing platform may provide, to the user and based on the responses, one or more additional questions. Then, the computing platform may receive, from the user, responses to the one or more additional questions. In some embodiments, the computing platform may train the intelligent virtual assistant based on a machine learning model. In some embodiments, the one or more user-defined preference rules may include one or more of a preference associated with an automatic loan amount, a secondary funding source, automatic payment options, a designated alternate decision making authority, and preferred communication channels. In some embodiments, the computing platform may train a machine learning model to determine the pattern of the user activity. In some embodiments, the computing platform may train a machine learning model to identify the one or more anticipated purchase activities of the user. These features, along with many others, are discussed in greater detail below. BRIEF DESCRIPTION OF THE DRAWINGS The present disclosure is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which: FIGS. 1A and 1B depict an illustrative computing environment for machine learning based automated pairing of customers and businesses; FIGS. 2A and 2B depict an illustrative event sequence for machine learning based automated pairing of customers and businesses; and FIG. 3 depicts an illustrative method for machine learning based automated pairing of customers and businesses. DETAILED DESCRIPTION In the following description of various illustrative embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown, by way of illustration, various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized, and structural and functional modifications may be made, without departing from the scope of the present disclosure. It is noted that various connections between elements are discussed in the following description. It is noted that these connections are general and, unless specified otherwise, may be direct or indirect, wired or wireless, and that the specification is not intended to be limiting in this respect. Customers of an enterprise organization may perform purchase related activities. In some instances, such activities may be in response to an event, such as a natural disaster, an illness, an accident, and so forth. For example, the customer may be undergoing a medical treatment, may have suffered an accident, may be in a disaster-struck region, may have lost employment, and so forth. Generally, such events may cause a customer to make certain purchases. Also, for example, a customer may make routine purchases and may be unaware of a type of alternate resources that may be available for the purchases. 
In some instances, an enterprise organization may be better able to serve its customers by understanding customer behavior, identifying offerings provided by business customers, and tailoring resources to the customers. Also, for example, the enterprise organization may tailor resources based on user-defined preferences. Accordingly, it may be of high significance for an enterprise organization to devise ways in which to provide tailored purchase resources to an enterprise user. Also, fast and reliable responses to potential events that may impact a customer's well-being may be of high significance to the enterprise organization. Some aspects of the disclosure relate to utilizing machine learning models to detect patterns of customer activity, identify relevant resources offered by businesses, identify customer preferences, and provide timely and effective options to the customer. Fast information processing, fast data transmission rates, availability of bandwidth, and so forth may be significant factors in automatically pairing customers and businesses. FIGS. 1A and 1B FIG. 1A 100 100 110 120 130 140 150 depict an illustrative computing environment for automated pairing of individual customers and businesses. Referring to , computing environment may include one or more computer systems. For example, computing environment may include a vendor matching computing platform , enterprise computing infrastructure , an enterprise data storage platform , a user device , and a vendor platform . 110 110 As illustrated in greater detail below, vendor matching computing platform may include one or more computing devices configured to perform one or more of the functions described herein. For example, vendor matching computing platform may include one or more computers (e.g., laptop computers, desktop computers, servers, server blades, or the like) and/or other computer components (e.g., processors, memories, communication interfaces). 120 120 120 120 120 120 100 120 110 Enterprise computing infrastructure may include one or more computing devices and/or other computer components (e.g., processors, memories, communication interfaces). In addition, enterprise computing infrastructure may be configured to host, execute, and/or otherwise provide one or more enterprise applications. For example, enterprise computing infrastructure may be configured to host, execute, and/or otherwise provide one or more vendor platforms to provide goods and services, transaction processing programs, an enterprise mobile application for user devices, automated payment functions, loan processing programs, and/or other programs associated with an enterprise server. In some instances, enterprise computing infrastructure may be configured to provide various enterprise and/or back-office computing functions for an enterprise organization, such as a financial institution. For example, enterprise computing infrastructure may include various servers and/or databases that store and/or otherwise maintain account information, such as financial account information including account balances, transaction history, account owner information, and/or other information. In addition, enterprise computing infrastructure may process and/or otherwise execute tasks on specific accounts based on commands and/or other information received from other computer systems included in computing environment . 
Additionally or alternatively, enterprise computing infrastructure may receive instructions from vendor matching computing platform and execute the instructions in a timely manner. Enterprise data storage platform may include one or more computing devices and/or other computer components (e.g., processors, memories, communication interfaces). In addition, and as illustrated in greater detail below, enterprise data storage platform may be configured to store and/or otherwise maintain enterprise data. For example, enterprise data storage platform may be configured to store and/or otherwise maintain, for enterprise customers, account information, payment information, payment schedules, patterns of activity, product and service offerings, discounts, and so forth. Additionally or alternatively, enterprise computing infrastructure may load data from enterprise data storage platform, manipulate and/or otherwise process such data, and return modified data and/or other data to enterprise data storage platform and/or to other computer systems included in the computing environment. User device may be a personal computing device (e.g., desktop computer, laptop computer) or mobile computing device (e.g., smartphone, tablet, wearable device). In addition, user device may be linked to and/or used by a specific user (who may, e.g., be a customer of a financial institution or other organization operating vendor matching computing platform). Also, for example, the user of user device may use user device to perform transactions (e.g., perform banking operations, perform financial transactions, trade financial assets, and so forth) and purchase activities (e.g., buy and/or sell products and services). Vendor platform may include one or more computers (e.g., laptop computers, desktop computers, servers, server blades, or the like) and/or other computer components (e.g., processors, memories, communication interfaces). Vendor platform may generally be a platform to provide products and services, offer discounts, coupons, promotions, and so forth. For example, vendor platform may be a platform for mortgage loan payments. As another example, vendor platform may be a platform for loan applications. Also, for example, vendor platform may be a platform for a utility company, a telecommunications company, a credit card company, for healthcare services, a pharmacy, auto repair services, home improvement goods and/or services, legal services, and so forth. Although not illustrated herein, in some embodiments, vendor platform may be a component of vendor matching computing platform, or may be a standalone component connected to the private network. Also, for example, vendor platform may represent a plurality of platforms. The computing environment also may include one or more networks, which may interconnect one or more of vendor matching computing platform, enterprise computing infrastructure, enterprise data storage platform, user device, and vendor platform.
For example, the computing environment may include a private network (which may, e.g., interconnect vendor matching computing platform, enterprise computing infrastructure, enterprise data storage platform, and/or one or more other systems which may be associated with an organization, such as a financial institution) and a public network (which may, e.g., interconnect user device and vendor platform with the private network and/or one or more other systems, public networks, sub-networks, and/or the like). The public network may be a high generation cellular network, such as, for example, a 5G or higher cellular network. In some embodiments, the private network may likewise be a high generation cellular enterprise network, such as, for example, a 5G or higher cellular network. In one or more arrangements, enterprise computing infrastructure, enterprise data storage platform, user device, vendor platform, and/or the other systems included in the computing environment may be any type of computing device capable of receiving input via a user interface, and communicating the received input to one or more other computing devices. For example, enterprise computing infrastructure, enterprise data storage platform, user device, and vendor platform, and/or the other systems included in the computing environment may, in some instances, be and/or include server computers, desktop computers, laptop computers, tablet computers, smart phones, or the like that may include one or more processors, memories, communication interfaces, storage devices, and/or other components. As noted above, and as illustrated in greater detail below, any and/or all of vendor matching computing platform, enterprise computing infrastructure, enterprise data storage platform, user device, and vendor platform may, in some instances, be special-purpose computing devices configured to perform specific functions. Referring to FIG. 1B, vendor matching computing platform may include one or more processors, memory, and a communication interface. A data bus may interconnect the processor, memory, and communication interface. The communication interface may be a network interface configured to support communication between vendor matching computing platform and one or more networks (e.g., the private network, the public network, a local network, or the like). Memory may include one or more program modules having instructions that, when executed by the processor, cause vendor matching computing platform to perform one or more functions described herein, and/or one or more databases that may store and/or otherwise maintain information which may be used by such program modules and/or the processor. In some instances, the one or more program modules and/or databases may be stored by and/or maintained in different memory units of vendor matching computing platform and/or by different computing devices that may form and/or otherwise make up vendor matching computing platform.
For example, memory may have, store, and/or include a pattern detection engine, a purchase activity prediction engine, a matching engine, and an action triggering engine. The pattern detection engine may have instructions that direct and/or cause vendor matching computing platform to determine, via a computing device and based on historical user activity of a user, a pattern of the user activity, as discussed in greater detail below. The purchase activity prediction engine may have instructions that direct and/or cause vendor matching computing platform to identify, based on the pattern of the user activity, one or more anticipated purchase activities of the user. In some embodiments, the purchase activity prediction engine may have instructions that direct and/or cause vendor matching computing platform to determine, via the computing device and based on the one or more anticipated purchase activities of the user, a sales offering by a vendor. The matching engine may have instructions that direct and/or cause vendor matching computing platform to match, via the computing device, the one or more anticipated purchase activities of the user with the sales offering by the vendor. In some embodiments, the matching engine may have instructions that direct and/or cause vendor matching computing platform to retrieve, from a repository of user data and for a purchase activity of the one or more anticipated purchase activities, one or more user-defined preference rules associated with the purchase activity. In some embodiments, the matching engine may have instructions that direct and/or cause vendor matching computing platform to determine whether the one or more preference rules apply to one or more attributes of the purchase activity. The action triggering engine may have instructions that direct and/or cause vendor matching computing platform to trigger, via the computing device and based on a determination that the one or more preference rules apply to the one or more attributes of the purchase activity, an action associated with the purchase activity. FIGS. 2A and 2B depict an illustrative event sequence for automated pairing of customers and businesses. Referring to FIG. 2A, at step 201, vendor matching computing platform may determine, via a computing device and based on historical user activity of a user, a pattern of the user activity. For example, the user may perform various purchase transactions, and vendor matching computing platform may store a record of such transactions in an enterprise storage platform (e.g., enterprise data storage platform). For example, a user may purchase a certain item from a certain vendor at regular intervals. Also, for example, a user may configure one or more payments for transactions (e.g., credit card payments, utility bill payments, mortgage payments, and so forth) to be electronically paid. In some instances, such payments may be periodic, and may be paid automatically. Accordingly, vendor matching computing platform may determine a pattern of such transactions, including a vendor, relevant goods and/or services, a payee, an amount of payment, a time of recurrence, a mode of payment, a pattern of late payments, a pattern of minimum payments, and so forth. In some embodiments, vendor matching computing platform may maintain a record of a user's login activity on a website or a mobile application.
In some embodiments, vendor matching computing platform may train a machine learning model to determine the pattern of the user activity. For example, clustering algorithms may be utilized to determine patterns of behaviors for users. For example, the clustering algorithm may take, as input, one or more attributes of user behavior, and classify users based on such behaviors. The one or more attributes may include consumer behavior, income activity, spending activity, credit rating patterns, loan activity, risk appetites, and so forth. In some embodiments, unsupervised learning algorithms, such as K-means clustering, principal component analysis, and so forth, may be utilized. Accordingly, customers may be associated with clusters based on their patterns of activity.

In some instances, a customer may be incapacitated and/or otherwise unable to make their regular purchases. For example, the customer may be unwell, hospitalized, and/or incapacitated by a short-term or long-term disability, and so forth. Also, for example, the customer may have lost employment. As another example, the customer may have reduced income from a job, business, or other source of revenue. Also, for example, the customer may be in a geographical location that has experienced a natural disaster (e.g., earthquake, fire, volcanic eruption, flooding, hurricane, tornado, winter storm, and so forth), and the customer may not have access to their account. As another example, the customer may be in a location that is experiencing civil unrest, or is experiencing a medical health emergency. In such circumstances, a customer may be subjected to further hardship due to increased bills, non-payment of bills, mounting interest payments, fines for late and/or missed payments, and so forth. Accordingly, an enterprise organization may provide relief to customers by detecting such circumstances, identifying available resources that may benefit the customer, and providing such resources to the customer in a timely and effective manner.

At step 202, vendor matching computing platform may identify, based on the pattern of the user activity, one or more anticipated purchase activities of the user. Generally, vendor matching computing platform may determine behavior patterns for a user based on purchase activity, and along with information about their physical location, purchase history, spending habits, and so forth, vendor matching computing platform may detect changes indicative of a need for help. Accordingly, vendor matching computing platform may identify goods and/or services offered by business customers, such as, for example, loan programs, discounts, rebates, promotions, and so forth.

For example, vendor matching computing platform may identify that a customer purchases a medicinal product at regular intervals of time. Accordingly, vendor matching computing platform may identify the one or more anticipated purchase activities of the user as a next purchase of the medicinal product. As another example, vendor matching computing platform may identify that a customer purchases groceries from an online vendor at regular intervals of time. Accordingly, vendor matching computing platform may identify the one or more anticipated purchase activities of the user as a next purchase of the groceries.
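As a concrete, hedged illustration of the unsupervised clustering idea mentioned above (K-means, optionally preceded by principal component analysis), the following minimal scikit-learn sketch groups customers by a few behavioral attributes. The feature set, feature values, and cluster count are invented for illustration and are not drawn from the disclosure.

```python
# Minimal sketch of clustering customers by behavioral attributes, along the
# lines suggested above (K-means on spending, income, and loan activity).
# All feature values are synthetic.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Each row is one customer: [monthly_spend, monthly_income, open_loans, late_payments]
X = np.array([
    [1200.0, 4000.0, 1, 0],
    [ 300.0, 1500.0, 0, 2],
    [5000.0, 9000.0, 2, 0],
    [ 280.0, 1600.0, 0, 3],
    [1100.0, 4200.0, 1, 1],
    [4800.0, 8800.0, 3, 0],
])

# Standardize so no single attribute dominates the distance metric.
X_std = StandardScaler().fit_transform(X)

# Optional dimensionality reduction before clustering, as the passage mentions.
X_reduced = PCA(n_components=2).fit_transform(X_std)

# Group customers into clusters of similar activity patterns.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_reduced)
print("Cluster assignments:", kmeans.labels_)
```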
As another example, vendor matching computing platform may identify that a customer goes for an annual eye examination and purchases prescription eyeglasses, contact lenses, and so forth, from an online vendor at regular intervals of time. Accordingly, vendor matching computing platform may identify the one or more anticipated purchase activities of the user as a next visit to an optometrist, a next purchase of vision related products, and so forth.

In some embodiments, vendor matching computing platform may train a machine learning model to identify the one or more anticipated purchase activities of the user. For example, vendor matching computing platform may detect an increase in a number of medical bill payments, and may identify the one or more anticipated purchase activities of the user based on the medical bills, subject to the customer's pre-approval. For example, vendor matching computing platform may receive customer approval to analyze medical bills to identify a service provider, a pharmacy, a type of products, and so forth. As another example, vendor matching computing platform may detect, from data based on an email service, that there is increased activity related to auto insurance, including, for example, a rental car transaction, calendar schedules for an auto body repair shop, and so forth. Accordingly, vendor matching computing platform may determine that an automobile accident may have occurred, and may identify the one or more anticipated purchase activities of the user based on the insurance related activity.

At step 203, vendor matching computing platform may determine, via the computing device and based on the one or more anticipated purchase activities of the user, a sales offering by a vendor. The term “sales offering,” as used herein, may be an offering of a product and/or service by a business. In some embodiments, the business may be a business customer of an enterprise organization. Similarly, a “customer” may be an individual customer of the enterprise organization, and/or a business customer. For example, vendor platform may be a platform associated with a customer that offers products and/or services. In some embodiments, such products and/or services may be provided at discounted rates to enterprise customers. Accordingly, vendor matching computing platform may analyze products and/or services offered by businesses. In some embodiments, vendor matching computing platform may classify products and/or services based on customer needs, consumer habits, and so forth. In some embodiments, vendor platform may be a virtual marketplace hosted by enterprise computing infrastructure for its customers to transact business, promote sales for small businesses, optimize costs for its customers, and so forth.

In some embodiments, vendor matching computing platform may determine the sales offering based on geographic proximity to the customer. For example, a customer may be purchasing gas at a gas station that may be 3 miles away. However, a business customer of the enterprise may be offering a discount to enterprise customers, and the business customer may be operating a gas station within a block of the customer's residence. Accordingly, vendor matching computing platform may determine the sales offering as the discount offered by the business customer operating the nearby gas station.
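A minimal sketch of the proximity-based selection just described (and continued in the next example) might compute great-circle distances from the customer to each participating business and keep the closest offering within a cutoff. The vendor records, coordinates, and the 5-mile cutoff below are assumptions for illustration only.

```python
# Sketch of selecting a nearby sales offering, along the lines of the gas
# station example above. Vendor data and coordinates are hypothetical.

from math import asin, cos, radians, sin, sqrt


def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # Earth radius is roughly 3958.8 miles


def nearest_offering(customer_location, offerings, max_miles=5.0):
    """Return (distance, offering) for the closest offering within max_miles, or None."""
    lat, lon = customer_location
    candidates = [
        (haversine_miles(lat, lon, o["lat"], o["lon"]), o)
        for o in offerings
    ]
    candidates = [(d, o) for d, o in candidates if d <= max_miles]
    return min(candidates, default=None, key=lambda pair: pair[0])


offerings = [
    {"vendor": "Gas station A", "discount": 0.05, "lat": 40.012, "lon": -75.101},
    {"vendor": "Gas station B", "discount": 0.10, "lat": 40.002, "lon": -75.120},
]
print(nearest_offering((40.0, -75.12), offerings))
```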
As another example, a customer may be purchasing vision care products online, a business customer of the enterprise may be offering a discount to enterprise customers, and the business customer may be operating a vision care store within a mile of the customer's residence. Accordingly, vendor matching computing platform may determine the sales offering as the discount offered by the business customer operating the nearby vision care store. Vendor matching computing platform may identify similar local and/or proximate vendors for purchase of groceries, medical care, exercise gyms, recreation services, auto rental services, and so forth.

At step 204, vendor matching computing platform may match, via the computing device, the one or more anticipated purchase activities of the user with the sales offering by the vendor. Generally, based on identifying patterns of activity for the user, an enterprise organization may be able to analyze the user's purchase behavior and determine insights into what a consumer purchases, where they purchase it from, when they purchase, how much they may pay for the purchase, and so forth. Also, for example, the enterprise organization may be able to analyze the offerings of its business customers, and determine potentially niche markets for their offerings. Accordingly, the enterprise organization may be able to provide cost savings to customers, while also promoting business for its business customers.

For example, during a natural disaster and/or a public health emergency, supplies for one or more items may not be sufficient to meet customer demand. However, an enterprise organization may be able to identify a demand for the one or more items among its customers, and provide such information to business customers (e.g., via vendor platform), who may then tailor their products and services to the customer demand. For example, customers in an area hit by the natural disaster and/or public health emergency may need emergency food supplies, medicines, transportation, and so forth. Accordingly, vendor matching computing platform may match these customers to businesses that may be providing special discounts, products, or services for such customers. As described herein, a machine learning model may analyze customers and businesses based on factors such as consumer habits, purchase activities, geographic location, and so forth. Based on the analyzing, the machine learning model may identify patterns and/or correlations among the factors, and identify a match. In some embodiments, the machine learning model may be trained to run an optimization algorithm that may optimize an availability of resources by minimizing cost, minimizing distance from a customer to a vendor, minimizing a time taken to provide the goods and/or services, and so forth. For example, the machine learning model may determine a frequency of a medicinal purchase (e.g., based on historical activity), a gas purchase (e.g., based on a make and model of a car, amount driven, and so forth), purchase of groceries, and so forth.

Referring to FIG. 2B, at step 205, vendor matching computing platform may retrieve, from a repository of user data and for a purchase activity of the one or more anticipated purchase activities, one or more user-defined preference rules associated with the purchase activity.
For example, vendor matching computing platform may collect, from the user, preferences regarding various purchase activities, and store the preferences in a repository (e.g., enterprise data storage platform).

In some embodiments, a machine learning model may learn user preferences from the patterns of historical activity, and vendor matching computing platform may provide the user with such recommended preferences. For example, the machine learning model may be trained to learn user preferences (e.g., prefer to drive, walk, bike, and so forth), how far a user may prefer to drive to purchase certain products and/or services, and so forth. For example, the user may prefer to walk to the local grocery store, but may also prefer to drive 15 miles to shop at an outlet store. Also, for example, the user may prefer to drive one mile to a store to purchase a product, but may not prefer to drive 20 miles to purchase the same product, even at a discounted price.

In some embodiments, the user may interact with vendor matching computing platform to refine, accept, and/or reject the preferences. For example, a customer may set up parameters for various purchase activities. For example, the customer may indicate a preference to purchase a product from a store that may be further than 20 miles from the customer's residence if the product is available at a discount of, say, for example, 25%. Also, for example, the customer may indicate a preference to purchase non-perishable items online, and purchase perishable items from a local store. As another example, on their commute to work, the customer may prefer to visit a gas station that sells coffee and donuts. As another example, during weekends, the customer may prefer to travel to nearby attractions.

In some embodiments, the one or more user-defined preference rules may include one or more of a preference associated with an automatic loan amount, a secondary funding source, automatic payment options, a designated alternate decision making authority, and preferred communication channels. The one or more user-defined preferences may generally relate to aspects of a purchase activity, such as, for example, an amount of loan, a level of risk, an amount of payments, a source of funding, an authorized user, a preferred mode of communication, and so forth. In some instances, the user-defined preferences may indicate a type of purchase activity that the user may want vendor matching computing platform to perform automatically. For example, a user may allow vendor matching computing platform to make automatic periodic purchases of household items such as toilet paper, toothpaste, shampoo, soap, dishwashing detergent, laundry detergent, and so forth.

In some embodiments, the one or more user-defined preferences may include an indication by the user to allocate a source of funding for a purchase. As another example, the user may indicate a percentage of an outstanding purchase payment that may be automatically paid. In some embodiments, the one or more user-defined preferences may include an indication by the customer to add such parameters for a purchase activity. For example, the user may allow vendor matching computing platform to automatically manage all aspects of forthcoming purchase activities. Also, for example, the user may identify certain types of purchase activities that may be automatically managed by vendor matching computing platform.
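One plausible way to hold such user-defined preference rules, purely as an illustrative sketch and not the disclosure's own data model, is a small record type whose fields mirror the preferences enumerated above (automatic loan amount, secondary funding source, automatic payment options, an alternate decision maker, preferred channels, and auto-purchase categories).

```python
# One possible representation of the user-defined preference rules described
# above. The field names and defaults are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class PreferenceRules:
    auto_purchase_categories: List[str] = field(default_factory=list)  # e.g., household items
    max_auto_loan_amount: float = 0.0               # pre-authorized interest-free loan ceiling
    secondary_funding_source: Optional[str] = None  # e.g., "savings"
    auto_pay_percentage: float = 100.0              # share of an outstanding payment to pay automatically
    alternate_decision_maker: Optional[str] = None  # contacted if the user is incapacitated
    preferred_channels: List[str] = field(default_factory=lambda: ["email"])
    max_travel_miles: float = 5.0                   # how far the user will travel for a purchase


rules = PreferenceRules(
    auto_purchase_categories=["toilet paper", "laundry detergent"],
    max_auto_loan_amount=500.0,
    secondary_funding_source="savings",
    preferred_channels=["email", "sms"],
)
```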
In some instances, the user may set time limits for various purchase related activities, and may pre-authorize vendor matching computing platform to make purchases after the time limit expires. In some embodiments, the user may indicate an amount of loan that the user may pre-authorize for a purchase activity. For example, the user may set limits for an amount of credit that may be applied to a credit card for specific purchases. Also, for example, the user-defined preferences may indicate a pre-authorization of an automatic interest free loan for certain purchases (e.g., for purchase of large electronic items).

Also, for example, the user-defined preferences may indicate that the user prefers to communicate via email, and vendor matching computing platform may contact the user via telephone only after a predetermined number of attempts to communicate via email. As another example, the user may designate another individual as a responsible party who may be contacted when the user is incapacitated. For example, a college student may authorize an established process to inform a parent or a guardian of a need for the student. For example, the college student may receive periodic supplies from their parent, and vendor matching computing platform may contact their parent to facilitate purchase of the supplies, and shipment of the supplies to the student.

Generally, vendor matching computing platform may determine and store the one or more user-defined preferences prior to a time when vendor matching computing platform may make automatic purchases and so forth. Accordingly, a customer may be presented with an opportunity to take advantage of preferred and/or cost-effective options for products and/or services.

In some embodiments, vendor matching computing platform may determine the one or more user-defined preference rules by establishing, via an intelligent virtual assistant, an interactive session with the user. For example, an intelligent chat bot may utilize natural language processing to converse with the user. The intelligent virtual assistant may access an enterprise server (e.g., enterprise computing infrastructure) to identify one or more resources offered by business customers. For example, the information for available resources may be organized in a hierarchical tree structure, and the intelligent virtual assistant may traverse this tree based on an interactive question-answer (Q/A) session with the user.

Then, vendor matching computing platform may provide, to the user, one or more questions in sequential format. In some embodiments, the sequential format may be based on the hierarchical tree structure. For example, vendor matching computing platform may ask “Would you like us to purchase the household items listed below?” Also, for example, vendor matching computing platform may ask “Would you like to withdraw funds from your savings account to pay the pharmacy for your monthly purchase of medicines?”

Then, vendor matching computing platform may receive, from the user, responses to the one or more questions. For example, the user may respond “Yes” to the question, “Would you like us to purchase the household items listed below?” In some embodiments, the user may respond “Yes, but do not pay the entire balance.” Additional and/or alternate user responses may be received.

Subsequently, vendor matching computing platform may provide, to the user and based on the responses, one or more additional questions.
For example, vendor matching computing platform may ask “Would you like to explore discounted prices for the products?” As another example, vendor matching computing platform may ask “Would you like to pre-approve an interest free loan to pay for your purchase?” Then, vendor matching computing platform may receive, from the user, responses to the one or more additional questions. For example, the user may respond “Yes” to a pre-approved loan, and vendor matching computing platform may ask the user to provide a range for the pre-approved loan.

As described herein, vendor matching computing platform may train the intelligent virtual assistant based on a machine learning model to analyze the responses, determine services available, and tailor the services to a user based on detected patterns of purchase activity. For example, a user may historically purchase their medicines online. Accordingly, vendor matching computing platform may not ask the user “Would you like us to place an order for the medicines at your local pharmacy?” and may instead ask, “Would you like us to place an order for the medicines at your preferred online pharmacy?” In some embodiments, vendor matching computing platform may determine that a generic form of the medicines may be available at the local store, and may ask, “We know you prefer to purchase your medicines online, but your local store is offering a generic version at a 25% discount. This may lead to a total savings of $49.99.”

In some embodiments, the intelligent virtual assistant may integrate with one or more external artificial intelligence (AI) systems to provide recommendations to a user. Such AI systems may include, for example, a weather system, a news analysis system, a stock market analysis system, a virtual assistant associated with mobile devices, a consumer behavior analysis system, an email analysis system, and so forth. Generally, the intelligent virtual assistant may utilize an AI machine learning system to initiate automated matching of customers and vendors, provide messaging updates to the customer, receive user preferences, and so forth. In some embodiments, the AI system may proactively message the customer to provide updates about anticipated purchases, account activity, account balance, and so forth. As described herein, the messages may be sent via a natural language processing system. Such messages may be sent via an SMS service, and may assume that the customer has a minimum level of internet connectivity. In some embodiments, the virtual assistant may interact with the customer via a telephone, a personal computer, a mobile device, a video link, and so forth. Also, for example, the virtual assistant may be configured to interact with the customer via a variety of channels, operating systems, natural languages, and so forth.

At step 206, vendor matching computing platform may determine whether the one or more preference rules apply to one or more attributes of the purchase activity. For example, vendor matching computing platform may identify the anticipated purchase activity as a monthly purchase of prescription medication, and the user-defined preference may indicate that the user has approved automatic purchase from an online pharmacy. As another example, one or more attributes of the payment may relate to how payment for a purchase is to be made, a source of the funding, an amount of the payment, whether an automatic loan may apply toward payment for the purchase, and so forth. Accordingly, vendor matching computing platform may determine whether the user-defined preferences apply to the one or more attributes of the purchase.
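To make the step 206 and step 207 logic concrete, a hedged sketch of the applicability check and the resulting trigger might look like the following; the rule fields, the fallback behavior, and the example purchase are all assumptions for illustration, not the claimed implementation.

```python
# Sketch of the step 206 / step 207 logic described above: deciding whether the
# stored preference rules apply to the attributes of an anticipated purchase,
# and triggering an action when they do. All names and rules are illustrative.

def rules_apply(rules: dict, purchase: dict) -> bool:
    """Return True when the user's preference rules cover this purchase."""
    if purchase["category"] in rules.get("auto_purchase_categories", []):
        return True
    loan_needed = purchase.get("loan_needed", 0.0)
    if 0.0 < loan_needed <= rules.get("max_auto_loan_amount", 0.0):
        return True
    return False


def trigger_action(rules: dict, purchase: dict) -> str:
    """Pick an action consistent with the applicable preference rules."""
    if purchase["category"] in rules.get("auto_purchase_categories", []):
        return f"auto-purchase {purchase['category']} from {purchase['vendor']}"
    return f"apply pre-authorized loan of {purchase['loan_needed']} toward {purchase['category']}"


rules = {
    "auto_purchase_categories": ["prescription medication"],
    "max_auto_loan_amount": 500.0,
}
purchase = {"category": "prescription medication", "vendor": "online pharmacy", "loan_needed": 0.0}

if rules_apply(rules, purchase):
    print(trigger_action(rules, purchase))
else:
    print("No applicable rule; fall back to manual review")  # analogous to returning to step 310
```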
As another example, the anticipated purchase activity may be a grocery purchase, and one or more attributes of the purchase may relate to items that may be automatically ordered online, and items that may need to be purchased at a local grocery store. Accordingly, vendor matching computing platform may determine whether the user-defined preferences apply to the one or more attributes of the grocery purchase.

Also, for example, the anticipated purchase activity may occur when a user is likely to be incapacitated. For example, vendor matching computing platform may determine that the user is undergoing medical treatment and is unable to access the account to make purchases. Accordingly, vendor matching computing platform may identify one or more attributes of the purchase activity to be whether the user has approved automatic purchases, whether the user has designated an authorized individual to manage and/or approve a payment for the purchase, whether the user has pre-approved a loan for the purchase, whether the user has arranged for a source of funding for the purchase, whether the user has a preferred vendor for the purchase, and so forth.

As another example, vendor matching computing platform may determine that the user is located in an area that is undergoing civil unrest, or has been affected by a natural disaster, and vendor matching computing platform may determine that the user is unable to access a preferred mode of communication. Accordingly, the one or more attributes of an anticipated transaction may include an alternate mode of communication preferred by the user. For example, internet coverage may have been affected, and vendor matching computing platform may determine that the user is not able to connect to the internet to receive email communications, or log in to their account. Accordingly, vendor matching computing platform may determine SMS messaging as a user-defined alternate mode of communication.

At step 207, vendor matching computing platform may trigger, via the computing device and based on a determination that the one or more preference rules apply to the one or more attributes of the purchase activity, an action associated with the purchase activity. For example, vendor matching computing platform may identify the anticipated purchase activity as a monthly purchase of prescription medication, and the user-defined preference may indicate that the user has approved automatic purchase from an online pharmacy. Accordingly, vendor matching computing platform may trigger an automatic purchase of the prescription medication from the online pharmacy. Also, for example, the one or more attributes of the payment may relate to when the payment is to be made, a source of the funding, an amount of the payment, whether an automatic loan may apply toward the payment, and so forth, and vendor matching computing platform may trigger the automatic payment for the purchase in accordance with the one or more attributes of the purchase activity.

As another example, the anticipated purchase activity may be a grocery purchase, and one or more attributes of the purchase may relate to items that may be automatically ordered online, and items that may need to be purchased at a local grocery store.
Accordingly, vendor matching computing platform may trigger the automatic purchase of the items that may be automatically ordered online. Also, for example, vendor matching computing platform may, for the items that may need to be purchased at a local grocery store, identify coupons available at local stores, determine a total cost for the items, and provide the customer with a list of grocery stores, their distances from a residence of the customer, and an estimated price for the grocery items based on applicable discounts.

Also, for example, the anticipated purchase activity may occur when a user is likely to be incapacitated, and vendor matching computing platform may trigger action on the anticipated purchase activity in accordance with the one or more attributes of the anticipated purchase activity. For example, vendor matching computing platform may determine that the user has designated an authorized individual to approve a payment for the purchase activity, and vendor matching computing platform may trigger contact with the authorized individual. As another example, vendor matching computing platform may determine that the user has pre-approved a loan for the purchase activity, and vendor matching computing platform may trigger application of the loan as payment to the purchase.

As another example, vendor matching computing platform may determine that the user is located in an area that is undergoing civil unrest, or has been affected by a natural disaster, and vendor matching computing platform may determine that the user has designated a preferred mode of communication. Accordingly, vendor matching computing platform may trigger contact with the user via the designated alternate mode of communication.

Also, for example, vendor matching computing platform may determine that the user is located in an area that is undergoing civil unrest, or has been affected by a natural disaster, and vendor matching computing platform may determine that the user may need help with insurance claims, bridge loans, home repair, automobile repair, and so forth. Accordingly, vendor matching computing platform may identify businesses that may tailor provision of goods and/or services to affected customers, and provide the goods and/or services based on user-defined preferences.

In some embodiments, vendor matching computing platform may retrieve, from one or more external sources of data, one or more events that may impact the purchase activity. As described herein, the user may be located in a geographical area that may have experienced a weather related event (e.g., snowstorm, hurricane, tornado, volcanic eruption, floods, forest fires, and so forth), a public health care related event, civil unrest, a political upheaval, and so forth. Generally, such events may disrupt a customer's ability to access their account and/or make purchase activities. In some embodiments, the one or more external sources of data may be artificial intelligence based systems. In some embodiments, the event may include one or more of a weather related event, an employment related event, a geopolitical event, civil unrest, and a medical event. Vendor matching computing platform may retrieve data related to the event from an external weather system, a traffic alert system, a news agency, and so forth.
Then, vendor matching computing platform may identify an event of the one or more events that impacts the purchase activity, where the action associated with the purchase activity comprises a recommendation that minimizes the impact of the event for the user. For example, the event may be a natural disaster, such as, for example, a hurricane. The customer may have lost their house, or may have lost electrical power. Vendor matching computing platform may identify customers who may be in a zip code affected by the natural disaster, and may match the customers to available goods and/or services to proactively address the customers' potential need for such goods and/or services. For example, vendor matching computing platform may identify and/or generate new loan schemes to help affected customers. For example, vendor matching computing platform may provide loans to rebuild a house that has been damaged, and identify construction companies that may be customers that may provide discounted services. In some instances, the loan may be an interest-free loan. Also, for example, the loan may be a bridge loan configured to support a customer while an insurance payout is forthcoming. Generally, vendor matching computing platform may identify resources and provide these to customers to support them during hardships, and help them return to normalcy.

For example, bureaus of motor vehicles maintain information related to vehicles that may indicate accidents, and so forth. Also, for example, a customer may authorize access to medical related records, and vendor matching computing platform may determine whether the customer has health related expenses. As another example, an internet search history may indicate a user's anticipated purchase activity (e.g., travel expenses, medical expenses, construction expenses, home improvement expenses, educational expenses, and so forth). Generally, any information available in the public domain may be retrieved and/or analyzed to identify a potential impact from an event that has occurred.

FIG. 3 depicts an illustrative method for automated pairing of customers and businesses. Referring to FIG. 3, at step 305, vendor matching computing platform, having at least one processor and memory, may determine, via a computing device and based on historical user activity of a user, a pattern of the user activity. At step 310, vendor matching computing platform may identify, based on the pattern of the user activity, one or more anticipated purchase activities of the user. At step 315, vendor matching computing platform may determine, via the computing device and based on the one or more anticipated purchase activities of the user, a sales offering by a vendor. At step 320, vendor matching computing platform may match, via the computing device, the one or more anticipated purchase activities of the user with the sales offering by the vendor. At step 325, vendor matching computing platform may retrieve, from a repository of user data and for a purchase activity of the one or more anticipated purchase activities, one or more user-defined preference rules associated with the purchase activity. At step 330, vendor matching computing platform may determine whether the one or more preference rules apply to one or more attributes of the purchase activity. Upon a determination that the one or more preference rules apply to one or more attributes of the purchase activity, the process may proceed to step 335.
At step 335, vendor matching computing platform may trigger an action associated with the purchase activity. Upon a determination that the one or more preference rules do not apply to one or more attributes of the purchase activity, the process may return to step 310.

One or more aspects of the disclosure may be embodied in computer-usable data or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices to perform the operations described herein. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular time-sensitive tasks or implement particular abstract data types when executed by one or more processors in a computer or other data processing device. The computer-executable instructions may be stored as computer-readable instructions on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, RAM, and the like. The functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents, such as integrated circuits, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated to be within the scope of computer executable instructions and computer-usable data described herein.

Various aspects described herein may be embodied as a method, an apparatus, or as one or more computer-readable media storing computer-executable instructions. Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, an entirely firmware embodiment, or an embodiment combining software, hardware, and firmware aspects in any combination. In addition, various signals representing data or events as described herein may be transferred between a source and a destination in the form of light or electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, or wireless transmission media (e.g., air or space). In general, the one or more computer-readable media may be and/or include one or more non-transitory computer-readable media.

As described herein, the various methods and acts may be operative across one or more computing servers and one or more networks. The functionality may be distributed in any manner, or may be located in a single computing device (e.g., a server, a client computer, and the like). For example, in alternative embodiments, one or more of the computing platforms discussed above may be combined into a single computing platform, and the various functions of each computing platform may be performed by the single computing platform. In such arrangements, any and/or all of the above-discussed communications between computing platforms may correspond to data being accessed, moved, modified, updated, and/or otherwise used by the single computing platform. Additionally or alternatively, one or more of the computing platforms discussed above may be implemented in one or more virtual machines that are provided by one or more physical computing devices.
In such arrangements, the various functions of each computing platform may be performed by the one or more virtual machines, and any and/or all of the above-discussed communications between computing platforms may correspond to data being accessed, moved, modified, updated, and/or otherwise used by the one or more virtual machines. Aspects of the disclosure have been described in terms of illustrative embodiments thereof. Numerous other embodiments, modifications, and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure. For example, one or more of the steps depicted in the illustrative figures may be performed in other than the recited order, and one or more depicted steps may be optional in accordance with aspects of the disclosure.
The novel coronavirus spreads quickly with regular social contact. Self-quarantine is a method of slowing its spread through staying at home and away from other people. But what does self-quarantine involve, why is it necessary, and when does it apply?

The coronavirus travels fast

Coronaviruses are a family of viruses common in people and animals. The specific coronavirus responsible for the current global pandemic is officially known as SARS-CoV-2. The virus causes the respiratory disease COVID-19. The coronavirus typically spreads through respiratory droplets when an infected person coughs or sneezes. Direct social contact increases the spread of the virus. Therefore, the best way to prevent illness is to avoid contact with infected individuals through social distancing and self-quarantine.

“Our goal is to slow the spread of the coronavirus to limit the number of people who get very sick from COVID-19. Social distancing and self-quarantine are proven public health measures that can make a difference and protect the most vulnerable people in our communities,” says Rebekah Sensenig, D.O., physician specialist in infectious disease at Riverside Health System.

The difference between social distancing and self-quarantine

Social distancing is a lot like it sounds: keeping a safe distance away from people. Social distancing makes it physically hard for the virus to spread between people. To practice social distancing, the Centers for Disease Control and Prevention recommends that you:
- Limit all gatherings to 10 people or fewer, including family, school, work, church, etc.
- Maintain 6 feet of physical space between individuals

COVID-19 is a global health issue at present. Therefore, the World Health Organization recommends social distancing for all people right now. However, if you suspect you may have been exposed to the coronavirus or are part of a vulnerable population, more aggressive measures should be taken, such as self-quarantine. Self-quarantine includes avoiding all social contact by staying home and limiting activities in public spaces.

Who should self-quarantine?

If you’ve been exposed to the coronavirus you should self-quarantine for 14 days. Fourteen days is the amount of time between the infection and the appearance of signs or symptoms. After this time period, the risk of passing the virus to someone else is very low or nonexistent. During self-quarantine, most activities that require leaving home or interacting with others are avoided. When possible, you should only leave home to receive medical care and for no other reason.

Research from past cases of COVID-19 shows that some populations may get very sick from the disease, including individuals in the following groups:
- The elderly
- Those with compromised immune function, such as individuals with cancer
- Those with existing health conditions like heart disease, diabetes, asthma and COPD

During a COVID-19 outbreak, individuals in these high-risk categories are advised to follow precautions similar to self-quarantine to protect against the virus, including staying at home as much as possible and limiting all social interaction.

Protect yourself and others

Daily precautions should be taken whether you are sick or not to limit the spread of the disease, including:
- Wash your hands often
- Avoid close contact
- Disinfect surfaces
- Stay home if you’re sick
- Cover coughs and sneezes

Make a plan

It’s difficult for most people to stay indoors and limit social contact for two weeks.
Make a plan and take steps to be prepared to stay home should you need to self-quarantine. As part of your plan, establish: - A two-week supply of food, medication and household products - A plan to work from home - Care for pets Stay Informed The COVID-19 outbreak is evolving rapidly. Learn more about the virus, disease and available updates by visiting riversideonline.com/covid-19.
https://www.riversideonline.com/patients-and-visitors/healthy-you-blog/blog/s/self-quarantine-what-does-it-mean
A new study by researchers at the Bourns College of Engineering, University of California, Riverside (UCR) and colleagues at the Manufacturers of Emission Controls Association (MECA) has found that catalyzed gasoline particulate filters (GPF) are effective not only at reducing particulate mass, black carbon, and total and solid particle number emissions in gasoline direct injection (GDI) engines but also polycyclic aromatic hydrocarbons (PAHs) and nitrated PAHs. Their study is published in the ACS journal Environmental Science & Technology.

The researchers assessed the gaseous, particulate, and genotoxic pollutants from two current technology gasoline direct injection vehicles when tested in their original configuration and with a catalyzed gasoline particulate filter (GPF). Testing was conducted over the LA92 and US06 Supplemental Federal Test Procedure (US06) driving cycles on typical California E10 fuel.

Figure (Yang et al.): Top: Gravimetric PM mass, PM mass calculated based on the IPSD method, black carbon, and EC/OC emissions over the LA92 cycle. Bottom: Total particle-phase PAH emissions, expressed in ng/mile, for both test vehicles over the LA92 cycle.

They found that using a GPF did not impose any fuel economy or CO2 emission penalties, while the emissions of total hydrocarbons (THC), carbon monoxide (CO), and nitrogen oxides (NOx) were generally reduced. Polycyclic aromatic hydrocarbons (PAHs) and nitrated PAHs were quantified in both the vapor and particle phases of the PM, with the GPF-equipped vehicles practically eliminating most of these species in the exhaust. For the stock vehicles, 2–3 ring compounds and heavier 5–6 ring compounds were observed in the PM, whereas the vapor phase was dominated mostly by 2–3 ring aromatic compounds.

Although GDI vehicles offer the potential of improved fuel economy, less fuel pumping, and charge air cooling, they tend to produce higher particulate matter (PM) emissions when compared with the traditional port fuel injection engines. In GDI engines, fuel is sprayed directly into the combustion chamber, which leads to incomplete fuel evaporation due to the limited time available for fuel and air mixing, resulting in pockets with high temperatures but insufficient oxygen, leading to pyrolysis reactions and soot formation. Additionally, as the fuel comes directly into contact with the cold cylinder walls and piston, a small amount of fuel may impinge on the piston, which during evaporation may lead to diffusion combustion and PM formation.

The rapid market penetration of GDI vehicles has led governments to impose stricter standards to control PM emissions. California LEV III and US Tier 3 regulations will begin a four-year phase-in starting in 2015 and 2017, respectively, to a PM maximum of 3 mg/mile from the current 10 mg/mile LEV II limit. LEV III will begin a four-year phase-in of a tighter 1 mg/mile starting in 2025. In the EU, the Euro 6a particle number (PN) standard for GDI vehicles was reduced from 6×10¹² particles/km to 6×10¹¹ particles/km in September 2017. … It is important to better understand the toxicity of the particles being formed in GDI combustion. Today, the literature is scarce about the toxic properties of PM emissions from GDI vehicles, such as those of polycyclic aromatic hydrocarbons (PAHs), their oxygenated (oxy-PAHs), and nitrated derivatives (nitro-PAHs).
PAHs have long been recognized as one of the major precursors for soot particles, while they are also classified as carcinogenic and mutagenic compounds adsorbed onto the PM or partitioned in the semivolatile PM phase. This study aims to better characterize the toxicity of PM from GDI vehicles and the potential for catalyzed GPFs to reduce this toxicity. Additionally, some oxy-PAH and nitro-PAH species have been recognized as similarly or more toxic than their parent PAHs.

—Yang et al.

In the study, the team used two MY 2016 passenger cars. The first was equipped with a 2.0-liter wall-guided direct injection SI Atkinson cycle engine; the second was equipped with a 1.5-liter downsized turbocharged centrally-mounted direct injection engine. Both vehicles were operated stoichiometrically, and were equipped with three-way catalysts (TWCs). Both were certified to meet LEV III SULEV30 (PZEV) and LEV II emissions standards and had 14,780 and 24,600 miles at the start of the campaign, respectively.

After measuring the baseline emissions, the researchers retrofitted both vehicles with a catalyzed GPF installed in place of the underfloor TWC. The original close-coupled catalysts were retained in their stock location. The catalyzed GPFs were provided by MECA. The GPFs were sized based on the engine displacement of each vehicle, and they were catalyzed with precious metal loadings typical of underfloor catalysts matching the certification levels of the two vehicles.

Both vehicles were tested over duplicate LA92 and US06 cycles using California E10 fuel. The LA92 test cycle, or the California Unified Cycle (UC), is a dynamometer driving schedule for light-duty vehicles developed by the California Air Resources Board (CARB). LA92 consists of three phases (i.e., cold-start, urban, and hot-start phases) and has a three-bag structure similar to the FTP cycle. LA92 is characterized by higher speeds, higher accelerations, fewer stops per mile, and less idle time than the FTP. US06 was developed to reflect aggressive, high speed, and high acceleration driving behavior. Unlike the LA92, it is a hot-start test typically run with a prep cycle to ensure the vehicle is warmed up.

The results showed that current technology GDI vehicles could indeed be an important source of on-road ultrafine particle and black carbon emissions and ultimately a contributor to urban air pollution. This study also showed that catalyzed GPFs can improve the conversion efficiency for NOx, THC, and CO emissions and have no measurable impact on CO2 emissions and fuel economy.

This is one of the few studies revealing that GDI vehicles could significantly contribute to PAH and nitrated PAH emissions and, to our knowledge, the only one that looked at remediation of these toxics using a catalyzed gasoline particulate filter. We found that the use of catalyzed GPFs could significantly reduce the PM mass and black carbon emissions, as well as total and solid particle number emissions, without having a measurable impact on the vehicle's GHG emissions and fuel economy. The catalyzed GPF significantly reduced the particle-phase PAH and nitro-PAH emissions, especially the less volatile or highly reactive PAH species. On the other hand, the vapor-phase PAHs did not show the same filtration efficiency as the PM-bound compounds. This study showed that GDI vehicle exhaust is characterized by a diverse PAH distribution profile, ranging from 3–6 ring species.
The projected increased penetration of GDI vehicles in the US market suggests that future health studies aimed at characterizing the toxicity of GDI emissions are needed to understand the health risk associated with non-GPF-equipped GDI PM emissions. The fact that GPF adoption by US vehicle manufacturers is not as dynamic as in the EU, due to the more stringent European PN standard, especially over real-driving emissions (RDE) testing, should raise concerns about the lack of societal and air quality benefits from the GDI fleet.
Globally the majority of commercial fisheries have experienced dramatic declines in stock and catch. Likewise, projections for many subsistence fisheries in the tropics indicate a dramatic decline is looming in the coming decades. In the Pacific Islands coastal fisheries provide basic subsistence needs for millions of people. A decline in fish catch would therefore have profound impacts on the health and livelihoods of these coastal communities. Given the decrease in local catch rates reported for many coastal communities in the Pacific, it is important to understand if fishers have responded to ecological change (either by expanding their fishing range and/or increasing their fishing effort), and if so, to evaluate the costs or benefits of these responses. We compare data from fish catches in 1995 and 2011 from a rural coastal community in Solomon Islands to examine the potentially changing coastal reef fishery at these time points. In particular we found changes in preferred fishing locations, fishing methodology and catch composition between these data sets. The results indicate that despite changes in catch rates (catch per unit effort) between data collected in 2011 and 16 years previously, the study community was able to increase gross catches through visiting fishing sites further away, diversifying fishing methods and targeting pelagic species through trolling. Such insight into local-scale responses to changing resources and/or fisheries development will help scientists and policy makers throughout the Pacific region in managing the region’s fisheries in the future.

Citation: Albert S, Aswani S, Fisher PL, Albert J (2015) Keeping Food on the Table: Human Responses and Changing Coastal Fisheries in Solomon Islands. PLoS ONE 10(7): e0130800. https://doi.org/10.1371/journal.pone.0130800

Academic Editor: Giacomo Bernardi, University of California Santa Cruz, UNITED STATES

Received: February 16, 2015; Accepted: May 26, 2015; Published: July 9, 2015

Copyright: © 2015 Albert et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Due to ethical restrictions related to participant consent, all relevant data is available upon request. 2011 catch data are available from a database housed by the Solomon Island Ministry of Fisheries and Marine Resources. For access to this database, please contact James Teri ([email protected]), Director of Inshore Fisheries, Ministry of Fisheries and Marine Resources. 1995 catch data are available from Dr. Shankar Aswani ([email protected]).

Funding: This work was supported by the Consultative Group on International Agricultural Research (CGIAR) Research Programs on Aquatic Agricultural Systems (aas.cgiar.org) and an Australian Centre for International Agricultural Research (aciar.gov.au) grant (FIS/2012/074). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

A large proportion of the world’s fish stocks are believed to be either overfished, under pressure or significantly depleted from historical overfishing [1–3].
Subsequently, there is rising concern over future food and nutrition security, particularly in developing countries where population growth is high and fish are the primary animal source protein. The Pacific region has some of the highest global population growth rates, greatest consumption of fish per capita and limited access to arable lands [5,6]. This demographic process, coupled with external pressures (e.g. climate change and ineffective governance), is expected to lead to a shortfall in the ability of reef fisheries to supply the protein needs for the populations of several Pacific Island countries and territories by 2030 [5–8].

Offshore and coastal fisheries are integral for the economies and food security in the Pacific. Offshore tuna fisheries are important for economic development (contributing up to 10% of the region’s gross domestic product), while coastal fisheries are critical for food security and rural incomes. Recent assessments of key offshore tuna stocks (bigeye, yellowfin and skipjack) in the Central and Western Pacific suggest that stocks of skipjack and yellowfin tuna remain healthy, while stocks of bigeye tuna are under pressure and require management intervention.

In contrast to offshore fisheries, coastal fisheries are relatively understudied and are data deficient [9,10]. There is limited information on catch and productivity, primarily due to the difficulty in data collection for this primarily subsistence-based fishery. However, there is some information available on commercially valuable species, such as bêche-de-mer, trochus, giant clams and green snail, which have shown dramatic decline and local extinctions in some parts of the region [10,13]. To date, estimates of fishing pressure and yield assessments have generally been conducted at regional or national scales. The general consensus is that population growth will soon outpace the fisheries’ ability to provide protein for many Pacific countries [6,14]. Results from village scale studies are mixed, with one instance in Fiji revealing that catch per unit effort has increased over time as a result of urban drift. However, in other cases fishers have not observed changes in fisheries over recent decades. On the other hand, in Solomon Islands, some species of fish targeted for market have shown indications of being overfished [17–19]. Understanding village scale responses to these potential shortfalls in fish based protein and overfishing of certain species is fundamental to provide insight into possible management actions that maintain food and nutrition security for rural Pacific communities.

Despite the warning of a looming shortfall in the supply of fish for millions of people in the Pacific, there is limited information on long-term changes in fish catches and stocks. This is particularly true for Solomon Islands, where a comprehensive review of Pacific Island fisheries presented comparatively less data for Solomon Islands compared to other Pacific Island nations, which on the whole are considered to possess little data compared to other global regions [20,21]. Despite this, it has been suggested that Solomon Islands will require an additional fish catch of 18,750 tonnes p.a. by 2030 to meet the country’s fish consumption demand, which is more than double the current catch rate (11,150 tonnes p.a.).
Fisheries in Solomon Islands, as with the majority of the Pacific Islands, are largely subsistence based multi-species reef-based fisheries with as little as 20% of the fish and invertebrate catch being sold at cash markets . Due to the subsistence nature of fisheries, and widespread geographic setting of the Solomon Islands archipelago, there is little documentation of catch composition and the majority of fisheries are not subject to formal markets and record keeping. In addition, fishers return to numerous different fishing nodes making creel surveys by third party government or research organisations difficult. These limitations in record keeping are exacerbated when combined with the high diversity of species typically caught by subsistence fisheries, for example, reports of 100–200 species are not uncommon for reef fishers . In this study we analyse the spatial and temporal characteristics of fishing behaviour of Baraulu fishers in the Roviana Lagoon, Solomon Islands, and compare fish catch data from 1995 to 2011 to examine changes to the coastal reef fishery between these time points. The key questions explored through this research were, given ongoing environmental changes [23,24], have fishers responded to ecological change by expanding their fishing range and/or effort? Answering these questions is fundamental for understanding human responses to resource scarcity in a rapidly changing environment. This paper provides a unique insight into local-scale responses to resource depletion that is relevant throughout the Pacific region. Materials and Methods This paper focuses on data collected over two time periods, 1995 and 2011. In 1995 raw data were collected as part of a study on customary sea tenure and artisanal fishing within the Roviana Lagoon , which utilized voluntary self-reporting diaries and focal follows to obtain catch data. The work in 2011 was part of a broader national level study in partnership with the Solomon Islands Ministry of Fisheries and Marine Resources. As part of this national study, a self-reporting diary was developed for fishers to record information about their fishing catches. Village fishers were trained in record keeping. Using these two data sets, we analyzed changes in fish catch composition, location and methodology. Given that fishing technology and local generalized foraging methods/strategies have remained fairly constant since 1994, the methodology used in both surveys furnished data that are comparable. Ethics statement In 1995, fish catch data was obtained on a voluntary basis and consisted solely of fisheries data provided by consenting individuals. In 2011, research clearance was provided through a memorandum of understanding that WorldFish has with the Solomon Islands Government and adhered to the WorldFish Code of Ethics for working with people (2009). Fishers involved in the study were informed on the purpose of data collection prior to research activities being initiated. Fisher consent was documented through self-reporting diaries; those not willing to participate did not provide information on their fishing activities. To maintain confidentiality of fishers, the information obtained through the self-reporting diaries and focal follows did not record personal information and was limited to information about their fishing activities. Study site Baraulu is a rural coastal village of 400 people in the Roviana Lagoon, Western Province, Solomon Islands, S1 Fig. 
The lagoon covers 400 km2 and comprises extensive shallow coral reef, seagrass and mangrove habitat that supports a human population of over 15,000. While the general population has increased across the Solomon Islands since 1995, the population of Baraulu has remained fairly constant over this period as a result of rural-to-urban migration and a recent religious split that has divided the village into two different hamlets. For instance, a 1994 population census of the village registered 394 inhabitants (212 males/282 females), while a population census conducted in 2001 recorded 419 inhabitants (230 males/189 females), indicating a fairly stable population throughout a period of 7 years. The main Baraulu community is located on a raised barrier island on the southern side of the lagoon, adjacent to a passage connecting the lagoon to the open ocean. The adjacent lagoon is formed by a gradient of marine ecosystems, including mangrove forests, river mouths, mudflats, seagrass, lagoonal reefs, barrier reefs and marine lakes, amongst others, and has characteristics of both coastal and coral atoll lagoons. The lagoon passages are wide and deep, allowing for the exchange of water between the open ocean and the lagoon. The lagoon's hydrodynamics have allowed diverse ecological communities to develop, particularly coral reef communities of diverse ecological characteristics at the entrances and in the central zones of the lagoon. Beyond the passages is the open sea, which offers fishers the possibility of exploiting a number of deep-sea coral reef species as well as pelagic fish such as tuna. Baraulu people have lived a largely subsistence lifestyle of small-scale farming and fishing, yet today their livelihood activities are threatening coral reefs as well as other marine and terrestrial habitats. Damaging activities include the small-scale, unregulated exploitation of commercial species such as holothurians, trochus and various shell species, as well as increasing pressures on the subsistence fishery from small-scale commercial netting of fish, night diving for scarids (done primarily by external poachers) and for rock lobsters for the growing tourist industry, the collection of corals for building structures such as wharfs, the aquarium fish collection trade, and poor land-based practices which cause sedimentation impacts on lagoon nursery areas. This, coupled with environmental effects related to climate change, is increasingly degrading coral reefs and their future role in providing ecosystem services.

Fish catch data

Between March and August in both 1995 and 2011, fishers from Baraulu village collected information on their fishing trips. Approximately 50 individual fishers returned data in both 1995 and 2011, from a total of approximately 65 fishers in Baraulu village. In 1995 fishers were selected randomly throughout the week, while in 2011 all fisher catches were documented on one day (Saturday) of each week. The following parameters were quantified for each fishing trip in both 1995 and 2011: number of fishers, paddling time, fishing time at each spot, fishing methods used, species caught, fish weight at each location, fishing location and habitat type, and distance from the village. Diary entries were coordinated and checked for anomalous entries by a trained research assistant. Manual spring scales (±50 g) were used to weigh fish.
Fish species were recorded in diaries in the local vernacular, in which fishers have detailed taxonomic knowledge, and were identified to species or family level with fishers prior to data entry. For the purpose of data analysis, fishing methods were grouped into broad categories: trolling (nylon line, lure and hook towed behind a wooden paddle canoe), handline (nylon line (typically 10–30 lb) with baited hooks fished near the bottom, or non-baited hooks striking to attract and catch fish in mid-water), dropline (fishing with a lured hook using a stone as a weight, striking the stone off in mid-water), net (fishing with a net from either a canoe or from shore, where people chase fish into the net), poison (using Derris sp. leaf to temporarily stun fish) and spear (freediving with a wooden spear gun or sling). In 1995 two research methods were employed: focal follows and self-reporting diaries. Focal follows were used initially in 1995 to optimize the use of self-reporting diaries and to ensure catch data were being reported correctly. Once it was confirmed that the diaries were being used correctly, diaries became the predominant reporting method. Statistical analysis of catch data from focal follows and diaries showed no significant difference in catch rate between methods (CPUE, ANOVA F(1, 2378) = 2.4041, p = 0.12). Diary holders were recruited randomly to keep records of their fishing activities. The data collected during 1995 and 2011 were used to explore the effects of fishing method and fishing location/habitat type on mean net return rates and fishing event duration (see for further discussion).

Data analysis

All fishing trips involving harvesting of invertebrates were removed from the analysis due to high variability in fishing time and the inability to compare invertebrate and fish harvest weights. In total, 767 fishing trips (over 895 fisher hours) were analysed for 1995 and 259 fishing trips (over 851 fisher hours) for 2011. The statistical software Statistica Ver. 12 (StatSoft, USA) was used to test for significant differences between data recorded in 1995 and 2011 using a Kruskal-Wallis ANOVA, both for the entire data set and for each grouped fishing method. Geographic information system maps were produced using ArcMap Ver. 10.2 (Esri, USA). Data are presented as mean ± standard error throughout the manuscript unless otherwise stated.

Results

In both 1995 and 2011 fishers spent most of their time using handline, although handline use was higher in 1995, at 69.7%, compared to 46.2% in 2011 (Table 1). Likewise, the proportion of time spent net fishing was higher in 1995 (15.1%) than in 2011 (6.7%). Conversely, the proportion of time spent dropline fishing was substantially lower in 1995 (21.4%) than in 2011 (30.1%), as was trolling (10.6% and 15.3% in 1995 and 2011, respectively) (Table 1). The mean time spent fishing (excluding travel time) on each fishing trip was 71.7 (±3.76) minutes in 1995, and was significantly higher, at 197 (±0.04) minutes, in 2011 (KWH(1,988) = 358.67, p<0.001) (Fig 1A). Fishers travelled further to fish in 2011 (2.94 ±0.13 km) than in 1995 (1.75 ±0.06 km) (KWH(1,988) = 75.03, p<0.001) (Fig 1B). The average total weight of fish caught per trip was twice as high in 2011 as in 1995 (5.6 ±0.52 kg and 2.7 ±0.10 kg, respectively; KWH(1,988) = 358.67, p<0.001) (Fig 1C), as was the mean weight of individual fish (0.99 ±0.14 kg and 0.45 ±0.03 kg, respectively).
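For readers who want to reproduce this style of comparison, the sketch below shows how per-trip catch per unit effort (CPUE, kg fisher-1 hr-1) can be derived from trip records and compared between two survey years with a Kruskal-Wallis test. It is a minimal illustration only: the trip values are invented rather than taken from the Baraulu data, and it uses Python/SciPy, whereas the analysis reported here was carried out in Statistica.

```python
# Minimal sketch: computing CPUE per fishing trip and comparing years
# with a Kruskal-Wallis test. Trip values below are illustrative only,
# not the Baraulu data; the paper's analysis was run in Statistica.
from scipy.stats import kruskal

# Each trip: (total catch in kg, number of fishers, hours spent fishing)
trips_1995 = [(2.5, 1, 1.2), (3.1, 2, 0.9), (1.8, 1, 1.0), (4.0, 2, 1.5)]
trips_2011 = [(5.2, 1, 3.0), (6.1, 2, 3.5), (4.8, 1, 2.8), (7.0, 2, 3.2)]

def cpue(trips):
    # CPUE = catch weight / (fishers x hours), i.e. kg per fisher-hour
    return [kg / (fishers * hours) for kg, fishers, hours in trips]

stat, p = kruskal(cpue(trips_1995), cpue(trips_2011))
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```

A non-parametric test suits this kind of data because per-trip catches are typically strongly skewed, which is consistent with the choice of the Kruskal-Wallis ANOVA in the analysis described above.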
The difference in weights between the two years sampled was associated with an increased proportion of larger fish caught by trolling (0.19 ±0.01 vs. 2.86 ±0.65 kg) in 2011 (KWH(1,145) = 6.98, p = 0.008) (Fig 1C). In contrast, the weight of fish caught by handline showed the opposite trend (0.43 ±0.03 vs. 0.35 ±0.08 kg for 1995 and 2011, respectively). The overall fish catch rate, as measured using catch per unit effort (CPUE), was significantly higher in 1995, at 2.77 ±0.21 kg fisher-1 hr-1, compared to 1.90 ±0.16 kg fisher-1 hr-1 in 2011 (KWH(1,988) = 24.86, p<0.001) (Fig 2). Similarly, the catch rate for handline fishing alone was higher in 1995, at 2.08 ±0.09 kg fisher-1 hr-1, versus 1.23 ±0.13 kg fisher-1 hr-1 in 2011 (KWH(1,662) = 30.89, p<0.001).

Fig 1. Panels show a) mean time spent fishing in 1995 and 2011, b) mean distance to fishing site, c) mean fishing trip catch weight, d) mean fish size. Data significantly different between 1995 and 2011 using a Kruskal-Wallis test are shown (*). Statistical results for all fishing methods are shown in the results section. Statistical results for individual methods are shown within S1 Table.

Fig 2. Graph indicates catch per unit effort (CPUE) within the study site for 1995 and 2011 by fishing method. Data significantly different between 1995 and 2011 using a Kruskal-Wallis test are shown (*).

A diverse assemblage of fish was recorded in 1995 and 2011, including at least 90 species from 19 families. Species from the family Carangidae comprised the highest proportion of the catch in both 1995 (34.8%) and 2011 (31.4%). Species from the family Lutjanidae were the second most frequently caught fish in 1995 (17.6%); however, lutjanids contributed only 7.7% of fish caught in 2011. The change in catch composition was compensated by an increased contribution of Scombridae to the catch, from 1.4% in 1995 to 11.4% in 2011 (Table 2). In 1995 fishers concentrated their fishing effort close to the village, using predominantly handline angling (Fig 3). In 2011, fishing effort was spread further from the village and concentrated at fewer locations. In general, fishing methods were more diverse in 2011 than in 1995, with a shift to more trolling at reef drop-off locations such as Sagnava, Gurana and Korihokata (Fig 4). Fishers spent most of their time fishing in shallow reef habitat (51%) in both 1995 and 2011. Time spent fishing in mangrove and reef drop-off habitats was higher in 2011, whereas effort in seagrass and reef passages was lower (Fig 5).

Fig 3. Size of pie chart symbol denotes the amount of time spent at fishing locations throughout the Roviana Lagoon during the 1995 record period. The amount of time spent on each method is shown by the pie chart sections.

Fig 4. Size of pie chart symbol denotes the amount of time spent at fishing locations throughout the Roviana Lagoon during the 2011 record period. The amount of time spent on each method is shown by the pie chart sections.

Fig 5. Data for 1995 and 2011 are shown by black and white bars, respectively.

Discussion

Coastal fisheries play a fundamental role in supporting the subsistence livelihoods of millions of people globally. The remoteness of many subsistence communities on the small islands of the Pacific limits their ability to access alternative protein sources, and hence fisheries productivity is a matter of basic survival. This renders the predictions of imminent coastal fisheries decline across the Pacific [6,8] of particular concern.
Solomon Islands is considered one of the more vulnerable nations, with a high population growth rate, high fisheries dependency and low GDP limiting access to other protein options. However, experience from large-scale environmental disasters that have severely impacted fisheries in Solomon Islands suggests rural communities in the country possess substantial adaptive capacity [23,28,29]. In order to plan adequately for the predicted shortfall in protein supply for millions of people across the Pacific, an understanding of how fishers respond to changing fisheries productivity over time is essential. This study has clearly demonstrated that, for the primary fishing method of handlining, significantly lower catch rates (CPUE) were found in 2011 compared to data collected 16 years earlier, with similar results observed for other fishing methods. This lower catch rate was coupled with a decrease in average fish size. Although several factors other than declining fish stocks can result in reduced catch rates, the results from this study concur with several other studies and reinforce the notion that coastal fisheries may be in decline. However, we are not able to completely exclude other factors such as socio-economic changes, external poaching, or changes in fish catchability or aggregation. Indeed, the overall quantity of fish returned to the village on each handline fishing trip increased substantially, from 2.22 (±0.09) kg per trip in 1995 to 3.8 (±0.30) kg per trip in 2011. This counter-intuitive increase was driven by both longer fishing times and travelling to fishing grounds further away. In 1995 the majority of the handline fishing effort was focussed within 2 km of the village; by 2011 this had increased to 3 km. So whilst fishers appear to have maintained, and actually increased, the quantity of fish returned to the village using handline, this came at the cost of paddling further and spending longer at fishing sites, increasing fishing effort for individual fishers. Maintaining handline catches by paddling further may only provide a short-term solution if fisheries are indeed declining in Roviana, as the customary fishing grounds of the Baraulu community only extend 3–4 km to the east and west and the current fishing effort is already located at the extremities of these fishing grounds. In addition, the social impacts of this increased effort and time away from the village could be significant in rural villages, as the fishers also have important roles to play in the household and gardens. As mentioned previously, lower catch rates may also have resulted from an increase in population within the region between 1995 and 2011, resulting in an increase in overall fishing effort for the region. National census data show an annual growth rate of 2.6% for the Roviana Lagoon region between 1999 and 2009; however, data available for Baraulu suggest the village population has remained relatively stable over time. Changes to target species between 1995 and 2011 may also have affected catch rates if new target species have a lower catchability. However, this would appear not to be the case in Baraulu, with Lethrinidae and Lutjanidae dominating the handline catch in both years (Table 2). The higher number of fish being returned to the village in 2011 was also supported by a substantially higher proportion of fishing time spent both trolling and droplining in 2011. These methods may have offset the reduced effort spent on handlining in fishing grounds close to the village.
Despite the lower trolling catch rates recorded in 2011, there was a clear increase in fish size, distance travelled to fishing grounds, time spent fishing, and average catch weight from paddle-canoe-based trolling. The increase in fish size and catch was driven by an increase in Scombridae and Carangidae. This trend may point towards a process whereby declining handline catches of reef fish from shallow coral reefs could be supplemented by trolling for pelagics in deeper waters, providing the potential for a longer-term solution to a potential fisheries crisis. Whilst the tuna resource in the Western Pacific is under pressure from commercial operations, it is still one of the most intact fisheries globally, and diversifying access to tuna is promoted as a means to increase food security in the Pacific region. Generally speaking, these open-ocean pelagic resources are under-utilised by coastal communities in the Pacific; the use of inshore fish aggregating devices (FADs) for the sole use of subsistence communities shows great promise as a means to promote access to pelagic fisheries [5,33]. Understanding the social and biological drivers of changing fishing practices in Roviana, through the transition from inshore reef fishing to offshore pelagic fishing, may provide an important avenue for future research. Previous research in the region [23,34] suggests that the annual fluctuation of species' spatial and temporal distribution allows fishers to harvest numerous organisms at different times and places, with this variability being determined by lunar and tidal phases. Recurrent lunar aggregations are spatio-temporally predictable occurrences that can increase a fisher's catches across various periods of the year. Sometimes fishers become specialists by targeting a limited number of species, while at other times they act as generalists and exploit a wide range of species in various marine habitats. In summary, the results presented in this paper concur with regional-scale generalisations that the productivity of inshore fisheries may be declining as population growth outpaces supply, as evidenced by the fall in catch rates between 1995 and 2011 observed in this study. However, the simplistic model that this linear supply-and-demand equation will lead to a depletion of inshore fisheries, and hence severe food security issues for many Pacific nations, does not fully consider the behavioural plasticity of Pacific communities that allows them to adapt to environmental change. This study provides evidence that increasing access to, and utilisation of, offshore pelagic fish resources will likely play a critical role in food security as reef fish resources decline. In addition, the high reliance of many Melanesian communities on mangrove resources such as crabs and bivalves [35,36] provides further buffering and adaptive capacity against the projected declines in reef fisheries.

Supporting Information

S1 Fig. Map of Solomon Islands and study location. https://doi.org/10.1371/journal.pone.0130800.s001 (TIF)

S1 Table. Statistical results of Kruskal-Wallis analysis by fishing method. https://doi.org/10.1371/journal.pone.0130800.s002 (DOCX)

Acknowledgments

We wish to thank the Baraulu community, fishers and fish monitors who provided warm hospitality and support throughout the study. We are grateful for the technical advice received from the Solomon Islands Ministry of Fisheries and Marine Resources and the Secretariat of the Pacific Community. Thank you to Minnie Rafe and Faye Siota for entering and validating the 2011 catch data.
Author Contributions

Conceived and designed the experiments: S. Albert S. Aswani JA. Performed the experiments: S. Aswani JA. Analyzed the data: PLF. Wrote the paper: S. Albert PLF S. Aswani JA.

References

- 1. FAO (2014) The State of World Fisheries and Aquaculture. Rome: Food and Agriculture Organisation of the United Nations.
- 2. Pauly D (2007) The Sea Around Us Project: Documenting and Communicating Global Fisheries Impacts on Marine Ecosystems. AMBIO 36: 290–295. pmid:17626465
- 3. Pauly D (2008) Global Fisheries: A Brief Review. J Biol Res (Thessalon) 9: 3–9.
- 4. Hall SJ, Hilborn R, Andrew NL, Allison EH (2013) Innovations in capture fisheries are an imperative for nutrition security in the developing world. Proc Natl Acad Sci U S A 110: 8393–8398. pmid:23671089
- 5. Bell JD, Allain V, Allison EH, Andréfouët S, Andrew NL, Batty MJ, et al. (2015) Diversifying the use of tuna to improve food security and public health in Pacific Island countries and territories. Marine Policy 51: 584–591.
- 6. Bell JD, Kronen M, Vunisea A, Nash WJ, Keeble G, Demmke A, et al. (2009) Planning the use of fish for food security in the Pacific. Marine Policy 33: 64–76.
- 7. Bell J, Reid C, Batty M, Lehodey P, Rodwell L, Hobday A, et al. (2013) Effects of climate change on oceanic fisheries in the tropical Pacific: implications for economic development and food security. Clim Change 119: 199–212.
- 8. Bell JD, Ganachaud A, Gehrke PC, Griffiths SP, Hobday AJ, et al. (2013) Mixed responses of tropical Pacific fisheries and aquaculture to climate change. Nat Clim Chang 3: 591–599.
- 9. Gillett R, Cartwright I (2010) The future of Pacific Island fisheries. Noumea, New Caledonia: Secretariat of the Pacific Community.
- 10. SPC (2013) Status Report: Pacific Islands reef and nearshore fisheries and aquaculture. Noumea, New Caledonia: Secretariat of the Pacific Community.
- 11. Harley S, Williams P, Nicol S, Hampton J (2014) The western and central pacific tuna fishery: 2012 Overview and status of stocks. Noumea, New Caledonia: Secretariat of the Pacific Community.
- 12. Gillett R (2009) Fisheries and the economies of the Pacific Island Countries and Territories. Mandaluyong City, Philippines: Asian Development Bank.
- 13. Richards AH, Bell LJ, Bell JD (1994) Inshore fisheries resources of Solomon Islands. Mar Pollut Bull 29: 90–98.
- 14. Dalzell P, Adams TJH (1996) Sustainability and Management of Reef Fisheries in the Pacific Islands. 8th International Coral Reef Symposium, Panama City.
- 15. Kuster C, Vuki VC, Zann LP (2005) Long-term trends in subsistence fishing patterns and coral reef fisheries yield from a remote Fijian island. Fish Res 76: 221–228.
- 16. Craig P, Green A, Tuilagi F (2008) Subsistence harvest of coral reef resources in the outer islands of American Samoa: Modern, historic and prehistoric catches. Fish Res 89: 230–240.
- 17. Aswani S, Sabetian A (2010) Implications of Urbanization for Artisanal Parrotfish Fisheries in the Western Solomon Islands. Conserv Biol 24: 520–530. pmid:19961509
- 18. Brewer TD, Cinner JE, Fisher R, Green A, Wilson SK (2012) Market access, population density, and socioeconomic development explain diversity and functional group biomass of coral reef fish assemblages. Glob Environ Change 22: 399–406.
- 19. Brewer TD, Cinner JE, Green A, Pandolfi JM (2009) Thresholds and multiple scale interaction of environment, resource use, and market proximity on reef fishery resources in the Solomon Islands. Biol Conserv 142: 1797–1807.
- 20.
Adams TJH, Dalzell P, Farman R (1996) Status of Pacific Island Coral Reef Fisheries. 8th International Coral Reef Symposium Panama. - 21. Dalzell P, Adams TJH, Polunin NVC (1996) Coastal fisheries in the Pacific Islands. Oceanography and Marine Biology: An Annual Review 34: 395–531. - 22. Skewes T (1990) Marine Resource Profiles; Solomon Islands. Honiara, Solomon Islands: Pacific Islands Forum Fisheries Agency. 59 p. - 23. Aswani S, Lauer M (2014) Indigenous People's Detection of Rapid Ecological Change. Conserv Biol 28: 820–828. pmid:24528101 - 24. Lauer M, Aswani S (2010) Indigenous Knowledge and Long-term Ecological Change: Detection, Interpretation, and Responses to Changing Ecological Conditions in Pacific Island Communities. Environ Manage 45: 985–997. pmid:20336296 - 25. Aswani S., 1997. Customary sea tenure and artisanal fishing in the Roviana and Vonavona Lagoons, Solomon Islands: the evolutionary ecology of marine resource utilization, Department of Anthropology. University of Hawaii, p. 485. - 26. Halpern BS, Selkoe KA, White C, Albert S, Aswani S, Lauer M (2013) Marine protected areas and resilience to sedimentation in the Solomon Islands. Coral Reefs 32: 61–69. - 27. Aswani S (1998) Patterns of marine harvest effort in southwestern New Georgia, Solomon Islands: resource management or optimal foraging? Ocean Coast Manag 40: 207–235. - 28. Albert S, Dunbabin M, Skinner M, Moore B, Grinham A (2012). Benthic Shift in a Solomon Island's lagoon: corals to cyanobacteria. 12th International Coral Reef Symposium, Australia. - 29. Lauer M, Albert S, Aswani S, Halpern BS, Campanella L, La Rose D (2013) Globalization, Pacific Islands, and the paradox of resilience. Glob Environ Change 23: 40–50. - 30. Friedlander AM, DeMartini EE (2002) Contrasts in density, size, and biomass of reef fishes between the northwestern and the main Hawaiian islands: the effects of fishing down apex predators. Mar Ecol Prog Ser 230: 253–264. - 31. Newton K, Côté IM, Pilling GM, Jennings S, Dulvy NK (2007) Current and Future Sustainability of Island Coral Reef Fisheries. Curr Biol 17: 655–658. pmid:17382547 - 32. Anon. (2009) Solomon Island Population and Housing Census 2009. Solomon Islands Government. - 33. Albert JA, Beare D, Schwarz AM, Albert S, Warren R, Teri J, et al. (In press) The contribution of nearshore fish aggregating devices (FADs) to food security and livelihoods in Solomon Islands. PLOS One. - 34. Aswani S, Vaccaro I (2008) Lagoon Ecology and Social Strategies: Habitat Diversity and Ethnobiology. Hum Ecol 36: 325–341. - 35. Aswani S, Flores C, Broitman B. (2015) Human harvesting impacts on managed areas: Ecological effects of socially-compatible shellfish reserves. Rev of Fish Biol Fish 25: 217–230.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130800
The Values of KCIPrimary underpin everything we do. They create an environment that helps our management, our teachers, our pupils and their parents understand what the school stands for and guides the school body in everything it does. In alphabetical order, KCIPrimary School’s Values are:

ACTIVE LEARNING
We believe in using a wide range of active methods to enhance the learning experience of the children. Sitting at a desk for the whole of the school day is not healthy for the mind and body of either children or adults, so KCIPrimary uses a wide variety of activities to ensure our children are active learners. Active learning also encourages the development of a diverse range of transferable skills and life skills that pupils are able to apply to everyday life. As with life, learning at KCIPrimary is not limited to the classroom environment, and pupils are encouraged to explore their world.

COLLABORATION
In order to provide an excellent learning environment, teachers, parents and children must work together as collaborators. Our students are encouraged to work effectively and willingly in collaboration with others, promoting teamwork, listening skills, leadership skills and appreciation of others. KCIPrimary values and encourages the contributions of all our stakeholders to school life: parents are encouraged to play an active role in their child’s education at KCIPrimary.

COMMITMENT
We value the dedication and commitment of all our stakeholders; we encourage pupils to be persistent, not to give up, and to be willing to try new things with energy and enthusiasm.

CREATIVITY
Through exploration and discovery children are able to form original ideas, encouraging them to think and problem solve, skills essential for everyday life. When children are allowed to make mistakes, they then have the freedom to invent, create and find new ways to do things. KCIPrimary encourages creativity in all areas of learning.

DIVERSITY
We are an inclusive community that celebrates diversity, believing it is a positive strength within our community. Mutual respect, tolerance and awareness are key elements in constructing a community of learners and, ultimately, responsible global citizens. KCIPrimary values and celebrates diversity and actively promotes international awareness, including an understanding of different ways of life, and encourages its community to do so too.

EMPOWERMENT
Children should always be encouraged to help themselves according to their own capabilities. When children feel empowered to make decisions and problem solve, they begin to take responsibility for their own learning, their own actions and their own choices.

EQUALITY
KCIPrimary values all members of the school community equally, and encourages other school stakeholders, including pupils, to do the same. We promote and practice tolerance, respect and patience in the school community.

EXCELLENCE
KCIPrimary strives to be a place of outstanding education: we recognise and celebrate both academic and all-round achievement in both teaching and learning. Our pupils are encouraged to work to the best of their ability, developing their confidence, courage, persistence and perseverance, in the learning environment as well as in all aspects of their life. Our staff are encouraged to perform to the best of their ability, to motivate and inspire our pupils to achieve their best, and to be positive role models for the pupils and school community.
Staff are valued and regularly undertake professional development to remain at the top of their game, with all teaching and learning techniques, and staff development, informed by research and effective practice. KCIP also invests in high quality resources to support effective learning. HONESTY, INTEGRITY, & RESPONSIBILITY Honesty, Integrity and Responsibility are core values we aim to develop in all our pupils and therefore set the bar high by expecting school management and teaching staff to act with a high level of honesty, integrity and responsibility at all times. We encourage each other to be consistent with our words and actions; to follow through on what we say; to be trustworthy; and to take responsibility for our actions. KCIPrimary also endeavours to be not only a responsible and fair employer but also to promote and practise environmentally friendly policies within the context of the school. INDIVIDUALITY Every child has a unique character and an individual learning style. KCIPrimary acknowledges these and allows for differentiation and equal access to all learning opportunities. KCIPrimary actively encourages pupils to maintain their individuality and build on their interests, strengths and talents, whilst encouraging its community to give everyone a fair go. RESPECT KCIP believes in the importance of respect for all members of our community, whatever their role. This includes demonstrating self control, showing care and kindness to others, sharing, cooperating, and being friendly, polite and thoughtful in dealing with others. We encourage all stakeholders to treat others the way they would like to be treated, listen when others are speaking, take turns and use good manners.
http://kcipschool.com/about-us/values/
The term “Human Capital” has long been used in the field of Human Resources Management (HRM), and traditionally it is defined as “the knowledge, skills, competencies and other attributes embodied in individuals that are relevant to economic activity”. Within an organisation, human capital is the value that an employee brings to the organisation in the form of knowledge, skills and experience. It includes the skills and knowledge the employee has gained through formal and informal learning. Understanding human capital is important for HRM in order to outline its strategies for talent acquisition, training, and skills development, amongst many other things. Human capital is even more important when it comes to employee learning and career development in the workplace. Organisations spend a considerable amount of resources on training their employees to increase the overall human capital of the organisation. The question arises: is human capital the only capital that HRM in an organisation needs to invest in? With the growth of enterprise social networking services, the social dimensions of work are getting more attention within organisations. Beyond their use as collaboration tools, these services provide many other indirect benefits, as discussed in a previous blog post. One of the many benefits that can be derived from social interactions at the workplace is the ability to determine the “Social Capital” of each employee. Social capital is defined as ‘features of social organization such as networks, norms, and social trust that facilitate coordination and cooperation for mutual benefit’. Social capital locates value in the relationships among people, and this relational type of capital is often missing in HRM. Where traditional human capital allows HR to understand an employee’s skills and knowledge, social capital enables them to understand how some teams perform better than others, how successful employees progress their careers, how information flows within organisations, and who the information brokers or blockers are in the organisation. Social capital is as important as traditional human capital for HRM, and combining both capitals can help HR better manage employee career progression, staffing around tasks and the formation of teams. In DEVELOP, we aim to determine the social capital value for employees by analysing their workplace social relationships. By analysing enterprise social network data, we plan to focus on various factors, such as identifying the communities an employee belongs to. We then determine the factors that keep an employee bonded within that community as a means of characterising the social capital of the employee. It is expected that social capital will help in determining personalized learning interventions for employees to progress their careers in the workplace. Moreover, social capital, when combined with traditional human capital, can benefit HRM in many different ways. For example, HR can use social capital to compare the performance of teams in the organisation. In multidisciplinary teams where human capital is balanced among different teams, social relationships (social capital) can be used to explain the high-performing teams. Individuals in high-performing teams tend to have a strong bond amongst themselves. They are often better at accessing and applying the knowledge or expertise within the team in order to accomplish a task.
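As an illustration of the kind of analysis described above, the sketch below detects communities in a small, made-up enterprise interaction graph and computes a simple "bonding" score (the share of a person's ties that stay inside their own community). This is only a hedged sketch of one possible approach using the open-source NetworkX library; it is not the DEVELOP project's actual method, and the names, edge weights and choice of algorithm are invented for illustration.

```python
# Sketch: community detection on a toy enterprise interaction graph,
# plus a simple per-employee "bonding" score. Names, edges and the
# choice of algorithm are illustrative assumptions, not DEVELOP's method.
import networkx as nx
from networkx.algorithms import community

# Edges represent workplace interactions (e.g. messages exchanged).
G = nx.Graph()
G.add_weighted_edges_from([
    ("ana", "ben", 12), ("ana", "cara", 8), ("ben", "cara", 15),
    ("dan", "eve", 10), ("eve", "finn", 9), ("dan", "finn", 7),
    ("cara", "dan", 2),  # weak tie bridging the two groups
])

# Greedy modularity maximisation groups densely connected employees.
communities = community.greedy_modularity_communities(G, weight="weight")
membership = {person: i for i, group in enumerate(communities) for person in group}

def bonding(person):
    """Fraction of a person's ties that stay within their own community."""
    neighbours = list(G.neighbors(person))
    internal = sum(1 for n in neighbours if membership[n] == membership[person])
    return internal / len(neighbours) if neighbours else 0.0

for person in G.nodes:
    print(f"{person}: community {membership[person]}, bonding {bonding(person):.2f}")
```

In a real setting, an analyst might combine such bonding scores with skills data to check whether strongly bonded, skill-balanced teams are indeed the higher-performing ones, as suggested above.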
In summary, human capital is crucial for HRM to understand in order to function properly, but it is not enough to meet the challenges that businesses are facing today in talent acquisition and skills development. Social capital, combined with human capital, is the key to solving today’s HRM challenges.

REFERENCES

OECD, Centre for Educational Research and Innovation (CERI). (1998). Human capital investment: An international comparison. OECD, Paris, France.
http://develop-project.eu/news/social-capital
- Consent: We may process your data if you have given us specific consent to use your personal information for a specific purpose.
- Legitimate Interests: We may process your data when it is reasonably necessary to achieve our legitimate business interests.
- Performance of a Contract: Where we have entered into a contract with you, we may process your personal information to fulfill the terms of our contract.
- Legal Obligations: We may disclose your information where we are legally required to do so in order to comply with applicable law, governmental requests, a judicial proceeding, court order, or legal process, such as in response to a court order or a subpoena (including in response to public authorities to meet national security or law enforcement requirements).
- Vital Interests: We may disclose your information where we believe it is necessary to investigate, prevent, or take action regarding potential violations of our policies, suspected fraud, situations involving potential threats to the safety of any person and illegal activities, or as evidence in litigation in which we are involved.
- Business Transfers: We may share or transfer your information in connection with, or during negotiations of, any merger, sale of company assets, financing, or acquisition of all or a portion of our business to another company.

| Category | Examples | Collected |
| --- | --- | --- |
| A. Identifiers | Contact details, such as real name, alias, postal address, telephone or mobile contact number, unique personal identifier, online identifier, Internet Protocol address, email address and account name | NO |
| B. Personal information categories listed in the California Customer Records statute | Name, contact information, education, employment, employment history and financial information | YES |
| C. Protected classification characteristics under California or federal law | Gender and date of birth | NO |
| D. Commercial information | Transaction information, purchase history, financial details and payment information | NO |
| E. Biometric information | Fingerprints and voiceprints | NO |
| F. Internet or other similar network activity | Browsing history, search history, online behavior, interest data, and interactions with our and other websites, applications, systems and advertisements | NO |
| G. Geolocation data | Device location | NO |
| H. Audio, electronic, visual, thermal, olfactory, or similar information | Images and audio, video or call recordings created in connection with our business activities | NO |
| I. Professional or employment-related information | Business contact details in order to provide you our services at a business level, job title as well as work history and professional qualifications if you apply for a job with us | NO |
| J. Education Information | Student records and directory information | NO |
| K. Inferences drawn from other personal information | Inferences drawn from any of the collected personal information listed above to create a profile or summary about, for example, an individual’s preferences and characteristics | NO |

- Receiving help through our customer support channels;
- Participation in customer surveys or contests; and
- Facilitation in the delivery of our Services and to respond to your inquiries.
- whether we collect and use your personal information; - the categories of personal information that we collect; - the purposes for which the collected personal information is used; - whether we sell your personal information to third parties; - the categories of personal information that we sold or disclosed for a business purpose; - the categories of third parties to whom the personal information was sold or disclosed for a business purpose; and - the business or commercial purpose for collecting or selling personal information. - you may object to the processing of your personal data. - you may request correction of your personal data if it is incorrect or no longer relevant, or ask to restrict the processing of the data. - you can designate an authorized agent to make a request under the CCPA on your behalf. We may deny a request from an authorized agent that does not submit proof that they have been validly authorized to act on your behalf in accordance with the CCPA. - you may request to opt-out from future selling of your personal information to third parties. Upon receiving a request to opt-out, we will act upon the request as soon as feasibly possible, but no later than 15 days from the date of the request submission. Mediavine Programmatic Advertising (Ver 1.1) The Website works with Mediavine to manage third-party interest-based advertising appearing on the Website. Mediavine serves content and advertisements when you visit the Website, which may use first and third-party cookies. A cookie is a small text file which is sent to your computer or mobile device (referred to in this policy as a “device”) by the web server so that a website can remember some information about your browsing activity on the Website. First party cookies are created by the website that you are visiting. A third-party cookie is frequently used in behavioral advertising and analytics and is created by a domain other than the website you are visiting. Third-party cookies, tags, pixels, beacons and other similar technologies (collectively, “Tags”) may be placed on the Website to monitor interaction with advertising content and to target and optimize advertising. Each internet browser has functionality so that you can block both first and third-party cookies and clear your browser’s cache. The “help” feature of the menu bar on most browsers will tell you how to stop accepting new cookies, how to receive notification of new cookies, how to disable existing cookies and how to clear your browser’s cache. For more information about cookies and how to disable them, you can consult the information at All About Cookies. Without cookies you may not be able to take full advantage of the Website content and features. Please note that rejecting cookies does not mean that you will no longer see ads when you visit our Site. In the event you opt-out, you will still see non-personalized advertisements on the Website. The Website collects the following data using a cookie when serving personalized ads: - IP Address - Operating System type - Operating System version - Device Type - Language of the website - Web browser type - Email (in hashed form) Mediavine Partners (companies listed below with whom Mediavine shares data) may also use this data to link to other end user information the partner has independently collected to deliver targeted advertisements. 
Mediavine Partners may also separately collect data about end users from other sources, such as advertising IDs or pixels, and link that data to data collected from Mediavine publishers in order to provide interest-based advertising across your online experience, including devices, browsers and apps. This data includes usage data, cookie information, device information, information about interactions between users and advertisements and websites, geolocation data, traffic data, and information about a visitor’s referral source to a particular website. Mediavine Partners may also create unique IDs to create audience segments, which are used to provide targeted advertising. If you would like more information about this practice and to know your choices to opt in or opt out of this data collection, please visit the National Advertising Initiative opt-out page. You may also visit the Digital Advertising Alliance website and the Network Advertising Initiative website to learn more about interest-based advertising. You may download the Digital Advertising Alliance’s AppChoices app to opt out in connection with mobile apps, or use the platform controls on your mobile device to opt out. For specific information about Mediavine Partners, the data each collects and their data collection and privacy policies, please visit Mediavine Partners.
https://kidtestedrecipes.com/privacy-policy/
There are well established limits for workplace noise based on the risk of hearing damage. For example, an 8-hour noise exposure level is limited to 85 decibels (when the sound is this loud you need to shout to talk to someone near you). There are also guidelines for acceptable noise levels in workplaces that aim to ensure the noise will not be intrusive or affect the ability of the worker to do the tasks. For example, a design level for a general office may be 40 to 45 decibels (dBA), while for a ticket sales area, 45 to 50 dBA. In this range, noise should not have an adverse effect on your ability to complete a task. However, there are many work environments, particularly in the transportation industry, in which the noise levels are above 50 dBA but the employees are required to perform tasks that require a high level of concentration and attention. For pilots and bus, truck and train drivers, the noise levels in the area where they are working can at times be 65 to more than 75 dBA. These workers all need to make safety-critical decisions and operate technical equipment in the presence of continuous noise generated by their vehicle’s engine. Transport check-in staff need to communicate and process passengers in noisy check-in halls where there is both vehicle and equipment noise as well as noise from the surrounding personnel, such as “babble.” In this paper, we discuss findings from a number of studies investigating the effect of constant noise at 65 dBA on various cognitive and memory skills. Two noise sources were used: one, a wideband noise like the constant mechanical noise from an engine, and the other, a babble noise of multiple persons’ incomprehensible speech. Language background is another factor that can increase cognitive load for workers who are communicating in a language that is not their native one. The cognitive tasks aimed to test working memory with an alphabet span test and recognition memory using a cued recall task. The signal-to-noise ratios used were 0, -5 and -10 dBA. Wideband noise was found to have a greater effect on working memory and recognition memory than babble noise. Those who were not native English speakers were also more affected by the wideband noise than the babble noise. The subjective assessment, when the subjects were asked their opinion of the effect of the noise and its annoyance, also showed a greater effect for the wideband noise. These findings reinforce the limitations of basing acceptability on a simple overall dBA value alone. The reduction in performance demonstrates the importance of reducing noise levels within transportation workplaces.
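To put the quoted levels in context, the short sketch below works through two standard pieces of decibel arithmetic: combining two noise sources, and estimating the allowable exposure time for levels above the 85 dBA / 8-hour limit mentioned above. The 3 dB exchange rate is an assumption (it is used in many occupational standards, but the article does not specify which criterion applies), and the example levels are invented for illustration.

```python
# Decibel arithmetic sketch. Assumes the common 3 dB exchange rate for
# exposure time (many, but not all, occupational standards use it);
# the example levels below are illustrative, not taken from the study.
import math

def combine_levels(*levels_db):
    """Energy-sum of several noise levels in dB (e.g. engine plus babble)."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

def allowed_hours(level_dba, criterion=85.0, exchange_rate=3.0, base_hours=8.0):
    """Permissible daily exposure: halves for every `exchange_rate` dB above the criterion."""
    return base_hours / (2 ** ((level_dba - criterion) / exchange_rate))

# Example: a 65 dBA engine plus 60 dBA babble combine to only about 66.2 dBA,
# well below the hearing-damage criterion but still relevant to cognition.
print(f"Combined level: {combine_levels(65, 60):.1f} dBA")

# Example: at 94 dBA the allowed exposure drops from 8 hours to about 1 hour.
print(f"Allowed exposure at 94 dBA: {allowed_hours(94):.2f} hours")
```

Note that these formulas address hearing-damage risk only; they say nothing about the cognitive effects of moderate noise, which are the focus of the studies discussed here.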
https://acoustics.org/effects-of-noise-for-workers-in-the-transportation-industry-marion-burgess/
Earlier this week, I flew from Boston to Pittsburgh. I don't know why this flight was different, but I couldn't stop staring at the terrain far below, seeing the imposed order of tiny houses and roads cut out of hilly terrain covered with forest. I saw a wind farm dotted across a hill, each turbine spinning in its achingly slow circle. It made me think about how the concept of "neighbors" had a very different meaning when moving from staring out a window in your house to staring out a window in an airplane. Different ideas are useful in different contexts. Groupings move from people to neighborhoods to towns to counties to states to countries, each with different concerns and needs. Even language adapts based on the breadth of data being considered. "Local" can mean a ten minute walk, or an hour drive, or two hours in a plane, or even the solar system if you're thinking interstellar travel. I think our perception of language causes miscommunication. We each have different experiences and perspectives, and that colors what we mean when we use a specific word. Different assumptions cause different meanings. "Community" could mean everyone who has ever played a particular game, or all people at a convention, or the usual suspects at a game store, or your gaming group. Each meaning makes sense. We need to be sure that we're talking about the same thing, or we need to invent language that communicates the particular shade of a concept we mean. We all start from our own perspective, but where we go from there depends entirely on us. Do we want to close out the world and exist in a bubble all our own? Do we steal concepts from the anonymous well of the Internet and fear the retribution given to Prometheus? Do we dive into online conversations, brash and self-assured, coming to loggerheads with anyone with a differing opinion? Do we listen politely and try to encourage others to do the same? Do we try to understand other points of view and incorporate that perspective into our games and lives? Can we really afford to fabricate edition-based holy wars for our own amusement? Are we really that bored? I challenge you to think in a perspective that is not your own, and see what happens. Orcs do not understand power in the same way as humans who aspire to join the aristocracy. Elves and Kobolds have different understandings of "a long time". Dwarves and Merfolk both know exactly what "deep" means. A cleric and a rogue will have very different meanings for "good deeds". A con man and a God of Trickery have very different timeframes for their long cons. And we as gamers have vastly differing perspectives on what "normal" means. And every last one of those perceptions is absolutely correct.
http://gamerblog.twwombat.com/2011/07/on-perspective-and-context.html
Mainframe Software Division Strategy, Partnership Programs, Pricing Optimization Greg Lotko is SVP and General Manager for Broadcom’s Mainframe Software Division. A passionate leader, Greg has a customer-first approach that has helped countless organizations drive business and technical success, resulting in lasting customer relationships. Currently, Greg is responsible for all facets of the Mainframe division, including strategy, product management, engineering, services, support, and marketing. Under his leadership, the division helps the world’s top companies operate, automate, secure, and modernize the mainframe systems that run their businesses. Known for driving growth and innovation, Greg was selected to lead the strategic Mainframe division when Broadcom acquired CA Technologies in 2018. At CA, Greg was General Manager for Mainframe and, prior to that, SVP of Software Engineering. Before joining CA, Greg spent 29 years with IBM. He was the Business Line Executive for IBM’s Mainframe business, including strategy, architecture, operations, and overall financial performance. In addition, he held senior leadership roles across server, storage, software, services, and information technology. Greg was also a member of the CEO’s Growth and Transformation team responsible for reinventing IBM. A technologist at heart, Greg received his Computer Science degree from Clarkson University, where he serves on the Dean of Arts and Science Advisory Council. Greg is a car enthusiast and compares car racing to IT. Taking advantage of the available power with discipline and control is crucial. This is how you drive with both speed and resource efficiency to sustain top performance and strategically outmaneuver the competition. Product & Solution Strategy, Mainframe Value Assessments, Pricing Optimization Jeff Henry is Vice President of Strategy, Design and Product Management for Broadcom’s Mainframe Software Division. In this role, Jeff drives innovation across a diverse portfolio of mainframe software solutions that include DevOps, machine learning, AI, and security and compliance. Jeff joined Broadcom through the CA Technologies acquisition. At CA, he was responsible for the Intelligent Operations and Automation portfolio, including the acclaimed Mainframe Operational Intelligence and Dynamic Capacity Intelligence products. Prior to his time at CA, Jeff was an executive at IBM, where he led efforts to advance IBM Design Thinking, define and systematize cross-brand solutions, and develop the Offering Management discipline. He held positions in product management, engineering, strategy, and market management. With over 30 years of experience leading software organizations, Jeff’s forte is connecting strategy to execution: analyzing a wide range of opportunities, narrowing the field to the best few, translating ideas to goals and plans, and inspiring cross-functional teams to successfully deliver to the market. Throughout his career, Jeff has specialized in business services delivery, cloud deployments, and bringing Systems of Engagement together with Systems of Record for enterprise customers. At Broadcom, Jeff uses this expertise to deliver capabilities that help organizations integrate their mainframes across cloud and distributed environments. Open Mainframe, DevOps, Cloud, Design Thinking Workshops, Mainframe Value Assessments, Pricing Optimization Hayden Lindsey is the Vice President responsible for Worldwide Software Engineering and Architecture for Broadcom’s Mainframe Software Division.
In this role, Hayden leads development of all mainframe products developed in the Broadcom labs around the world. Hayden leads teams that employ iterative and agile development techniques and methodologies. The solutions these teams develop help enterprises build, operate, automate, optimize, and secure their current and next-generation systems to enable digital transformation. Over his career of three decades in the software industry, Hayden has held a variety of positions. Most recently, he was SVP for Worldwide Engineering & Architecture at CA Technologies. Prior to that, he was VP and Distinguished Engineer of DevOps for Enterprise Systems and Compilers at IBM. He has led teams for many different tools and platforms and has been an offering manager, product architect, and software developer. A forward-thinking technologist and holder of fourteen patents, Hayden was an early adopter of object technology and applied the approach effectively in the areas of code generation, debugging, IDEs, and performance. Hayden is also an accomplished public speaker, demonstrating a deep passion for and understanding of business and technology. He consistently paints a clear vision for Broadcom that motivates and inspires. Portfolio Rationalization, Education & Talent Development, Mainframe Value Assessments, Pricing Optimization Vikas Sinha is Vice President of Global Customer Experience and Strategic Alliances for Broadcom’s Mainframe Software Division. His mission is to help Broadcom’s mainframe customers succeed and realize maximum value from their mainframe investments. To this end, Vikas oversees all the field technical resources that directly serve customers and partners, including presales, professional services, support, engineering services, competitive conversions, education and training, and strategic alliances. With these functions aligned in a single organization under Vikas’s leadership, Broadcom partners with customers throughout their mainframe journeys. Vikas joined Broadcom through the CA Technologies acquisition. At CA, he led offering management for machine learning, operational intelligence, and enterprise data security for the mainframe portfolio. He also led software engineering for database management systems and security. Prior to CA Technologies, Vikas spent a number of years at IBM and SPSS, leading research and development for their predictive analytics portfolio. A passionate supporter of lifelong learning, Vikas recently earned a Master’s degree in Predictive Analytics and a Certificate for Advanced Graduate Studies in Data Sciences from Northwestern University. He also has Engineering degrees from Sri Venkateswara University in India and Florida Atlantic University, as well as executive management education from Kellogg School of Management. He has over two dozen publications in international conferences and journals in the fields of engineering design and expert systems. PR & Media, Customer Stories, Partnership Programs Joe Doria is Chief Marketing Officer of the Mainframe Software Division at Broadcom and a member of Broadcom’s leadership team. His unparalleled expertise in product management and product marketing is the fruit of over two decades in the technology industry focusing on mainframe and enterprise IT. Prior to joining Broadcom, Joe served as an executive within IBM for 21 years. While at IBM, he oversaw marketing for the revolutionary ThinkPad product. 
He then moved on to become head of IBM z Systems Marketing, where he developed and directed the global launches of six generations of the mainframe: z10 through z15. Joe’s mainframe and enterprise IT marketing success is the outcome of his commitment to both the art and the science of marketing. He is highly skilled at leveraging the tremendous amounts of data now available to inform marketing strategies and plans, while simultaneously encouraging collaboration and creativity to bring power to brand messages and optimize client engagement. As an industry speaker, Joe motivates decision makers to use technology to drive value and impact in their businesses. He also prioritizes time with clients, keeping his finger on the pulse of what is happening in the marketplace so that he can position the mainframe as the dynamic solution for business that it continues to be. Mainframe Division Strategy, Partnership Programs, Pricing Optimization Monique Boucher is Chief Business Operations Officer for the Mainframe Software Division at Broadcom. In this role, Monique drives creation and execution of the Mainframe division operating plan and oversees the management system, processes, and operational controls necessary to ensure the division is achieving its performance and operational objectives. Monique joined Broadcom through the CA Technologies acquisition. While at CA, she held operational roles for the Mainframe Business Unit as well as Enterprise Systems. In her three decades in the software industry, Monique has served in a variety of leadership positions in Software Development, Support, Product Management, and Business Operations and Program Management. Drawing upon the experience and expertise provided by her diverse background, Monique is able to effectively manage change across organizations to drive business. Her skills played a vital role in the initial Mainframe Agile Transformation at Concord Communications and CA Technologies Mainframe Division, which focused on developing partnerships with customers to drive more value into products and improve time to value. Her current focus is around business management and agility in the organization. Monique is an alumna of Worcester Polytechnic Institute.
https://mainframe.broadcom.com/executive-meeting
Calling for “cooperation, compromise and the common good,” President Trump used his much-anticipated 2019 State of the Union address to focus on issues intended to ease uncertainty for today’s businesses, most notably those related to trade, infrastructure and immigration. President Trump pledged in his State of the Union address to keep the U.S. economy strong – vowing to reach pending international trade deals and a bipartisan infrastructure agreement that could help ease the uncertainty among business leaders. The president’s promises to work with Congress to pass a long-overdue agreement on improving our roads, bridges and ports to keep the economy humming, and to strike trade deals that would level the international playing field, would, if successful, be welcome news to the business community.

State of the Union: Industry analysis

While infrastructure and international trade deals topped the President’s agenda to strengthen the economy, there are a number of implications for business leaders in key industries. Grant Thornton’s industry leaders weigh in with their analysis. “In his State of the Union address, the President highlighted some of the positive impacts current economic conditions are having on manufacturers. He highlighted that 600,000 new manufacturing jobs have been created. There is little doubt that some of the administration’s policies around regulatory reform and tax reform have contributed to this growth, but manufacturers are also concerned about the continuing impact of trade policy and specifically tariffs. The President addressed this issue by expressing a positive belief that the U.S. and China can make a trade deal. However, he emphasized ‘It must be real structural change to end unfair trade practices, reduce our chronic trade deficit, and protect American jobs.’ A worthy goal for sure, but the difficulty in this may not foretell a quick end to the current tariffs. Trump also emphasized to Congress the importance of pursuing the USMCA, which, beginning this spring, will be a measure of whether Republicans and Democrats can come together to authorize this agreement. This is a major priority for manufacturers, although a few adjustments might be warranted and will be lobbied hard. The President still believes both parties should come together on an infrastructure bill. He stated that he was ‘eager to work with Congress on legislation.’ Manufacturers also believe this to be a critical issue to improve their supply chain efficiency and would like Congress and the President to find the funding for such a bill. That will be the main sticking point.” “During his 2019 State of the Union address, President Trump again highlighted the need to shore up the country’s infrastructure system. We welcome forward movement on this critical need, which will bring jobs and opportunities for construction companies and increased economic opportunities for many industries. America’s construction industry is well equipped and ready to begin work on roads, bridges, rail systems, and other projects that will nurture improved commerce and facilitate growth in the U.S.” Employment – The technology industry is an engine of job growth and of digital transformation affecting almost every other industry. To keep up with demand for new job skills, we must invest in STEM for our students, and reskilling of experienced workers disrupted by the 4th Industrial Revolution. Fair trade – Technology is a global industry with customers, competitors, innovation and supply chains around the world.
Free trade, open access to markets and protection of the intellectual capital of U.S. technology companies are essential to continued growth. Infrastructure – Technology underpins our business, social, educational and political lives. Ready access to technology and the internet is an essential component for individual and business growth. Unfortunately, today our infrastructure struggles to keep pace with demand and perpetuates a growing digital divide. Technology must be a central component of a revitalization of American infrastructure. Immigration – Innovation is the lifeblood of technology. More than half of the U.S. Tech Unicorns were founded by at least one immigrant. Competition for global talent is increasing. Unfortunately, today’s immigration system cannot meet the demand for this talent. We encourage a sustained effort to increase the number of high-skilled worker visas, and the structures that allow highly skilled workers to remain. While the president’s plan as articulated in his State of the Union address was short on specifics, the proof of success will be when bipartisan legislation is passed and signed into law. The president’s calls for unity and bipartisanship were made in the shadow of the longest U.S. government shutdown in history and just days ahead of another potential shutdown — unless of course he, House Speaker Nancy Pelosi and Congress compromise on funding the government and settle their differences on border security and immigration. Business would like to see a solution that brings more skilled workers legally into this country — a win-win situation for everybody. Hamrick added, "Given the partisan start to the 116th Congress, it is too soon to tell whether our leaders will govern as one nation. Let’s start with a handshake agreement — hopefully not an eleventh-hour one — that avoids another shutdown on February 15 and leads to the type of bipartisan deals that make government more efficient and business more competitive."
https://www.grantthornton.com/library/articles/public-policy/2019/SOTU-analysis-President-vows-strengthen-economy.aspx
WOMENS: England women celebrate a double series victory against New Zealand02-Mar-2015 England women have finished their four week tour of New Zealand in a positive fashion by defeating their hosts by 5 wickets in the final One Day International (ODI) between the two sides at the Bert Sutcliffe Oval on Saturday. The result means that the team will return home tomorrow having recorded a 3-2 ODI series victory, alongside a 2-1 win in the Twenty20 International (T20I) series against the White Ferns. Following the close of play in Lincoln today, England women’s vice-captain, Heather Knight, was also presented with the T20I player of the series award, and after her run-scoring exploits in the final two matches, Sussex's Sarah Taylor received the ODI player of the series accolade. Taylor scored 93 in the final victory of the series. Reflecting on the tour, England women’s captain, Charlotte Edwards, said: “I’m really pleased with how strongly and quickly we got back into this tour after a slow start. “The manner in which the results have swung between the two teams across the eight matches shows what an exciting and enthralling tour this has been to play in, and I am delighted that we have ultimately been able to finish positively, with series victories in both the ODIs and T20Is. “There have clearly been elements of inconsistency within our performances at times during this tour, but along with today’s run-chase, the way we played to secure the nine wicket win in the fourth ODI was superb, and it is that type of collective team effort that we need to take away and replicate on a regular basis. “We are now all looking forward to the start of the English domestic season in April and resuming training at the National Cricket Performance Centre (NCPC) in Loughborough, in preparation for our Women’s Ashes defence against Australia in the summer.” England women will next be in action in July and August, when they play on home soil against Australia in the Women’s Ashes. The multi-format series gets underway with the three Royal London ODI matches on July 21 (Taunton), 23 (Bristol) and 26 (Worcester), followed by the Kia Women’s Test match from August 11-14 in Canterbury, and finishing with the three NatWest T20Is between the two sides on August 26 (Chelmsford), 28 (Hove) and 31 (Cardiff). Click HERE to seal your tickets to the game at The BrightonandHoveJobs.com County Ground!
http://www.sussexcricket.co.uk/news-1/womens-england-women-celebrate-a-double-series-victory-against-new-zealand
A new material developed by University of Colorado Boulder engineers can transform into complex, pre-programmed shapes via light and temperature stimuli, allowing a literal square peg to morph and fit into a round hole before fully reverting to its original form. The controllable shape-shifting material, described today in the journal Science Advances, could have broad applications for manufacturing, robotics, biomedical devices and artificial muscles. "The ability to form materials that can repeatedly oscillate back and forth between two independent shapes by exposing them to light will open up a wide range of new applications and approaches to areas such as additive manufacturing, robotics and biomaterials," said Christopher Bowman, senior author of the new study and a Distinguished Professor in CU Boulder's Department of Chemical and Biological Engineering (CHBE). Previous efforts have used a variety of physical mechanisms to alter an object's size, shape or texture with programmable stimuli. However, such materials have historically been limited in size or extent, and the object state changes have proven difficult to fully reverse. The new CU Boulder material achieves readily programmable two-way transformations on a macroscopic level by using liquid crystal elastomers (LCEs), the same technology underlying modern television displays. The unique molecular arrangement of LCEs makes them susceptible to dynamic change via heat and light. To control this behavior, the researchers installed a light-activated trigger in LCE networks that can set a desired molecular alignment in advance by exposing the object to particular wavelengths of light. The trigger then remains inactive until exposed to the corresponding heat stimulus. For example, a hand-folded origami swan programmed in this fashion will remain folded at room temperature. When heated to 200 degrees Fahrenheit, however, the swan relaxes into a flat sheet. Later, as it cools back to room temperature, it will gradually regain its pre-programmed swan shape. The ability to change and then change back gives this new material a wide range of possible applications, especially for future biomedical devices that could become more flexible and adaptable than ever before. "We view this as an elegant foundational system for transforming an object's properties," said Matthew McBride, lead author of the new study and a post-doctoral researcher in CHBE. "We plan to continue optimizing and exploring the possibilities of this technology."
https://www.printedelectronicsworld.com/articles/15240/shape-shifting-material-can-morph-reverse-itself-using-heat-light
Evaluation of failure characteristics and bond strength after ceramic and polycarbonate bracket debonding: effect of bracket base silanization. The objectives of this study were to evaluate the effect of silanization on the failure type and shear-peel bond strength (SBS) of ceramic and polycarbonate brackets, and to determine the type of failure when debonded with either a universal testing machine or orthodontic pliers. Silanized and non-silanized ceramic and polycarbonate brackets (N = 48, n = 24 per bracket type) were bonded to extracted caries-free human maxillary central incisors using an alignment apparatus under a weight of 750 g. All bonded specimens were thermocycled 1000 times (5-55 degrees C). Half of the specimens from each group were debonded with a universal testing machine (1 mm/minute) to determine the SBS and the other half by an operator using orthodontic debonding pliers. Failure types of the enamel surface and the bracket base were identified both from visual inspection and digital photographs using the adhesive remnant index (ARI) and base remnant index (BRI). As-received ceramic brackets showed significantly higher bond strength values (11.5 +/- 4.1 MPa) than polycarbonate brackets (6.3 +/- 2.7 MPa; P = 0.0077; analysis of variance, ANOVA). The interaction between bracket type and silanization was not significant (P = 0.4408). Silanization did not significantly improve the mean SBS results for either the ceramic or the polycarbonate brackets (12.9 +/- 3.7 and 6.3 +/- 2.7 MPa, respectively; P = 0.4044; two-way ANOVA, Tukey-Kramer adjustment). There was a significant difference between groups in ARI scores for ceramic (P = 0.0991) but not polycarbonate (P = 0.3916; Kruskal-Wallis) brackets. BRI values did not vary significantly for ceramic (P = 0.1476) or polycarbonate (P = 0.0227) brackets. Failure type was not significantly different when brackets were debonded with a universal testing machine or with orthodontic debonding pliers. No enamel damage was observed in any of the groups.
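For orientation, the sketch below shows how the core comparisons described above (a two-way ANOVA for bracket type and silanization on SBS, plus a Kruskal-Wallis test on the ordinal ARI scores) could be run in Python. The table layout, column names and file name are hypothetical illustrations; this is not the authors' code or data.

```python
# Illustrative sketch of the analyses described above, on a hypothetical
# per-specimen table. The CSV file and column names are assumptions:
#   sbs_mpa   - shear-peel bond strength (MPa)
#   bracket   - "ceramic" or "polycarbonate"
#   silanized - "yes" or "no"
#   ari       - adhesive remnant index score (ordinal)
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

df = pd.read_csv("debonding_specimens.csv")

# Two-way ANOVA: bracket type, silanization and their interaction on bond strength.
model = ols("sbs_mpa ~ C(bracket) * C(silanized)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Kruskal-Wallis test on ARI scores within one bracket type.
ceramic = df[df["bracket"] == "ceramic"]
groups = [grp["ari"].values for _, grp in ceramic.groupby("silanized")]
print(stats.kruskal(*groups))
```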
By Joanna Kyriakakis

Earlier this year I wrote a piece about the potential for Australian laws to regulate and oversee the human rights impacts of Australian mining companies operating abroad. Anvil Mining and Oceana Gold were two examples I gave where Australian mining interests had come under the spotlight for alleged human rights abuses connected in some way to those companies’ overseas operations. Sadly, this week another case has made headlines. Disturbing details have emerged of the shooting on 24 December by Indonesian police of people protesting against the mining exploration licence of Sydney-based Arc Exploration Limited (ARX) on the island of Sumbawa, Indonesia. According to media reports, protestors had been blockading Sape port on the island for some days in opposition to potential mining in the area and the impact they believe this would have on the local environment. The violent response of police resulted in three dead and others injured. In response to the events of 24 December, ARX has announced that its operations will be halted until the situation is resolved. The company is to be credited for this action, which is consistent with calls made by the chairman of the Indonesian National Human Rights Commission following the violence. While there is no suggestion that ARX was involved in the violence, the Australian Greens have called for an inquiry into the relationship between the company and the police in question. This is quite rightly information that should be on the public record. Relationships between mining companies and security forces are an issue of legitimate public interest and concern. According to the UN Special Representative for Business and Human Rights, among the worst cases of corporate-related human rights abuses in recent years, the extractive industries utterly dominate the field. Typically this is for conduct by security forces seeking to secure company assets and property. The risks associated with security and mining operations have received increasing attention in recent years. ‘Militarized commerce’, a term coined by Craig Forcese, describes the increasing phenomenon of companies acquiring services from military or para-military forces as security for firm operations, including through the provision of assistance in return for security. Risks emanating from such arrangements are most acute where mining takes place in conflict or post-conflict areas or in states with weak governance unwilling or unable to mitigate the adverse impacts of mining. Given evidence of a propensity for resource-dependent developing states towards certain negative political conditions such as civil conflict (sometimes called the resource curse), these are the kinds of environments in which resource companies can certainly find themselves operating. In this case a spokesperson for ARX has indicated that no benefits or other payments have been made by the Company to the Indonesian police. But the case does give us reason to pause and consider the vexed issue of security, mining and human rights. One international mechanism developed in order to address the specific risks arising from mining and security is the Voluntary Principles on Security and Human Rights. By joining these Principles, signatory companies, of which there are currently 19, agree to undertake risk assessments to identify security risks arising from their operations.
The assessment must consider whether a company’s actions may heighten particular risks, the human rights records of security partners, and patterns of violence in the region. Whilst the Principles have enabled an important ongoing dialogue between participating parties and have achieved success particularly where companies internalise the principles endorsed, they are ultimately aspirational, being neither legally binding nor incorporating any form of grievance mechanism where they are not met. These are limitations common to the voluntary and self-regulatory mechanisms that dominate the regulatory landscape in the field of business and human rights internationally, a summary of which can be found here. Some individual states are taking steps towards strengthening regulation. Seeking to go beyond voluntary initiatives and the current status quo of an ostensibly ‘sanction-free environment’, John McKay MP introduced Bill C-300 to the Canadian Parliament in 2009. A relatively modest proposition, the Bill would have enabled Canadian government authorities to investigate complaints against Canadian resource companies operating abroad and to withhold public funds from companies found to have breached certain environmental and human rights standards. The Bill was motivated by reports of serious human rights violations related to Canadian resource operations abroad and the impact these were having upon the reputation of Canada internationally. The Bill was narrowly defeated by 140 votes to 134 following a significant campaign against it undertaken by the mining lobby. An issue at the heart of the Bill was the use of Canadian taxpayer monies to fund and support extraterritorial Canadian resource operations, even where credible evidence exists that such operations may be linked to environmental or human rights damage. The same issue is important here in Australia. Jubilee Australia, for example, is one organisation seeking to put the spotlight on the Australian government’s export credit agency, the Export Finance and Insurance Corporation, and on a troubling lack of transparency and human rights considerations in its decision making. In this respect, one avenue that might be considered is requiring public funding to be conditional upon Australian companies seeking government support undertaking the kinds of risk assessments endorsed by the Voluntary Principles. In Australia the public debate regarding the regulation of our mining companies abroad has been largely silent since the failed attempt in 2000 by the Australian Democrats to introduce the Corporate Code of Conduct Bill (Cth). That Bill sought to impose and enforce human rights standards on the overseas conduct of Australian corporations. Exceptions to this silence are those laws that have recently come into operation with extraterritorial dimensions that attach to corporations largely as an incidence of their application to natural persons. Examples include prohibitions on bribery, sex tourism and international crimes (e.g. war crimes, crimes against humanity, genocide). As I have previously argued, the political will to use such laws with respect to corporate violations is as yet uncertain. The recent experience in Canada shows the polarisation of views likely to be encountered with any proposal to directly regulate Australian mining operations abroad. But as recent events remind us, it is an issue that cannot be ignored.
https://castancentre.com/2011/12/30/mining-security-and-human-rights/?shared=email&msg=fail&replytocom=1277
As a linguist and lover of linguistic diversity, Dr. Junker works on aboriginal language documentation and maintenance. She uses a participatory-action research framework to work with communities and individuals interested in saving their language and seeing it thrive in the 21st century. Exploring how information and communication technologies can help aboriginal languages, she has developed several websites for languages of the Algonquian family, in partnership with aboriginal organizations. She co-created the www.eastcree.org website, a resource for documenting and preserving the East Cree language, as well as online dictionaries for Cree and Innu. Dr. Junker also created the Algonquian Linguistic Atlas: www.atlas-ling.ca, which is evolving into a large collaborative project to build a digital infrastructure for Algonquian dictionaries and other resources. She regularly facilitates language-documentation workshops at Carleton or in the communities she works with. Marie-Odile has been a member of CIRCLE since its first incarnation at Carleton University in 1992 and is grateful for the support provided for her work.
https://carleton.ca/circle/people/dr-marie-odile-junker/
As you wander the Garden, head off the beaten track and explore to chance upon secluded seating areas, tranquil oases hidden from view and clifftop meadows with breath-taking views of the coastline. Discover your favourite spot to breathe in the heavenly scent of the abundant flora, enjoy a picnic with the family or simply relax with your favourite book. The young and the young at heart love the opportunity to meander, explore, run and play as they let their imaginations go, watching the wall lizards or daydreaming under Britain’s oldest palm trees. Perfect for playing hide and seek (or just seek…) – There’s no need to keep off the grass! The Garden has wellbeing at its heart and with so much space to roam free, there is always the opportunity for a mindful moment of relaxation and contemplation as you take in your serene surroundings and let that feeling of calm wash over you.
https://www.botanic.co.uk/visit-the-garden/secret-gardens/
Senate Bill 9 enables homeowners to subdivide their properties to construct up to four housing units, including two accessory dwelling units or junior accessory dwelling units. Senate Bill 10 allows cities to rezone transit-rich areas of town (blocks within a half-mile of a major transit stop) and parcels along "high-quality" bus routes for 10 housing units per parcel. Authored by Senate President Pro Tempore Toni Atkins, D-San Diego, and state Sen. Scott Wiener, D-San Francisco, respectively, the two bills were boosted by California YIMBY and other pro-housing groups. They were also opposed by cities like Palo Alto, where city leaders have consistently characterized them as an attack on local control. "Housing affordability crisis is undermining the California Dream for families across the state, and threatens our long-term growth and prosperity," Newsom said in a statement on Sept. 16. "Making a meaningful impact on this crisis will take bold investments, strong collaboration across sectors and political courage from our leaders and communities to do the right thing and build housing for all," Newsom said. While many housing advocates have long called for the loosening of rules surrounding single-family zones — a step that has already been taken in cities like Berkeley and Minneapolis — prior efforts to achieve major zoning reforms in the Legislature had struggled to advance in recent years. In January 2020, the Legislature killed Wiener's bid to increase zoning in transit-friendly and jobs-rich areas when it voted down SB 50. Both SB 9 and SB 10 advanced after lawmakers agreed to make several amendments to address criticisms. Among the changes, SB 9 added a requirement that the property owner live in one of the homes for three years after the lot subdivision is approved. For SB 10, the bill was amended to address the issue of local authority. The city of Palo Alto was among those that lambasted a provision of the bill that allowed cities to use SB 10 to override existing zoning restrictions that were put in place through voter initiatives. Palo Alto's letter of opposition argues that such legislation "echoes more of Russia than of California." Under the revision, cities that want to use the bill to override zoning restrictions imposed by local initiatives would need to secure a two-thirds majority from their legislative body. Unlike SB 9, SB 10 is not a mandatory requirement of cities but an optional law on which they can rely. Those cities that do choose to use it cannot apply it to parkland or open space. Wiener said in a statement that SB 10 provides "one important approach: making it dramatically easier and faster for cities to zone for more housing." "It shouldn't take five or 10 years for cities to rezone, and SB 10 gives cities a powerful new tool to get the job done quickly," Wiener said. In his signing message for SB 10, Newsom touted its potential to increase housing but also warned that while the benefits are promising, "certain provisions may have unintended impacts on affordable housing projects that use density bonuses, as well as possible Fair Housing implications based on how jurisdictions may choose to implement its provisions." He wrote that he is directing the Department of Housing and Community Development's newly established Housing Accountability Unit to "vigilantly monitor the implementation of this bill at the local level, and if needed, work with the Legislature to proactively address any unintended consequences, should they arise." 
A report from the Terner Center for Housing Innovation at University of California, Berkeley described SB 9 in July as "the most significant housing bill coming out of California's current legislative session," noting its potential to "expand the supply of smaller-scaled housing, particularly in higher-resourced, single-family neighborhoods." In analyzing a similar proposal, SB 1120, which faltered on the final day of the prior legislative session, the Terner Center estimated that about 6 million properties would be eligible for the bill's provisions. If 5% of those parcels created new two-unit structures, that would have resulted in 597,706 new homes, according to the report. Atkins said in a statement on Sept. 16 that SB 9 will "open up opportunities for homeowners to help ease our state's housing shortage, while still protecting tenants from displacement," referring to a clause that disallows redevelopment of a parcel if a tenant has lived there within the prior three years. "And it will help our communities welcome new families to the neighborhood and enable more folks to set foot on the path to buying their first home," Atkins said. A group known as Californians for Community Planning has already filed a ballot initiative with the state opposing the recently passed legislation.
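The arithmetic behind that estimate is simple enough to restate; the sketch below is only a back-of-the-envelope check using the figures quoted above (the 6 million eligible parcels is a rounded number, which is why the product is approximate).

```python
# Back-of-the-envelope restatement of the Terner Center estimate quoted above.
eligible_parcels = 6_000_000   # rounded figure cited from the report
uptake_rate = 0.05             # 5% of eligible parcels assumed to add units
homes_per_parcel = 2           # new two-unit structures

estimated_new_homes = eligible_parcels * uptake_rate * homes_per_parcel
print(f"~{estimated_new_homes:,.0f} new homes")  # ~600,000, vs. the report's 597,706
```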
https://paloaltoonline.com/print/story/2021/09/24/reform-of-single-family-zoning-okd-by-newsom
Saskatoon Transit altering routes to improve service

As Saskatoon Transit moves towards providing service that reflects the approved Bus Rapid Transit (BRT) lines (moving more routes onto Broadway Avenue) and to reduce congestion on the Senator Sid Buckwold Bridge during rehabilitation, changes are being made to three routes as of March 29, 2020. Affected routes:
- Route 8 inbound and the 80s outbound will move off 8th Street/over the Sid Buckwold Bridge onto Broadway Avenue.
- Routes 1 and 6 will move off Broadway Avenue onto 8th Street/over the Sid Buckwold Bridge.
These service changes will allow for more convenience and a quicker commute for east side residents traveling to and from downtown. Saskatoon Transit encourages everyone to check their route in advance to see how this will impact their commute and to plan their trip using the real-time mobile app Transit. To learn more about the new routing and schedules, visit saskatoontransit.ca or call the Transit Customer Service Centre at 306.975.3100.
https://transit.saskatoon.ca/news-releases/saskatoon-transit-altering-routes-improve-service
Introduction {#Sec1}
============

Whether plasma triglycerides, or more specifically, the lipoprotein particles in which they are transported between sites of absorption, lipolysis, remodelling and catabolism, constitute an independent risk factor for CVD has been the subject of debate for decades \[[@CR1]\]. Significant progress has been made of late in resolving this question as a result of three elements: firstly, the availability of prospective data focusing on the relationship between circulating TG levels and cardiovascular risk in large cohorts; secondly, observations made in the postprandial, non-fasting period, allowing analysis of this relationship over a substantially greater range of TG concentrations as compared to the fasting state; and thirdly, analytical approaches which allow an estimation of the cholesterol burden carried in potentially atherogenic remnant particles \[[@CR2]--[@CR5], [@CR6]•\]. Thus, accumulating evidence demonstrates a strong correlation between the risk of CVD and both non-fasting (postprandial) and fasting plasma TG levels. Furthermore, large prospective epidemiologic studies focused on non-fasting TG in response to normal food intake have demonstrated significant associations between increased CVD events and elevated concentrations of non-fasting TG \[[@CR2], [@CR3], [@CR7]\]. Indeed, a meta-analysis of 17 prospective studies with 2900 CHD endpoints revealed that an increment of 1 mmol/L in fasting TG levels was associated with a 14 % increase in CVD risk \[[@CR5]\]. However, this strong correlation is often lost or attenuated in multivariate analysis, principally as a consequence of the strong link between hypertriglyceridemia and other cardiovascular risk factors such as low high-density lipoprotein (HDL) cholesterol, obesity and insulin resistance \[[@CR8], [@CR9]•\]. Further support for a causative role of triglyceride-rich lipoproteins (TRLs) in CVD arises from genetic studies \[[@CR4], [@CR10]--[@CR12]\]; such studies equally indicate that remnant particles, which represent the partially degraded products of TRLs (i.e. chylomicrons and very-low-density lipoprotein (VLDL)), play a key role in the pathophysiology of atherosclerotic vascular disease. Hypertriglyceridemia is generally defined and diagnosed as fasting plasma TG \> 1.7 mmol/L or \>150 mg/dL and is the consequence of environmental, behavioural and genetic factors, among which lifestyle is prominent (alcohol use, smoking, a high carbohydrate diet and obesity) \[[@CR13]••\]. Severe hypertriglyceridemia with plasma TG levels \> 10 mmol/L (885 mg/dL) is typically of genetic origin, notably in the pediatric age group, and is associated with elevated risk of pancreatitis \[[@CR14]\]. Further understanding of the pathobiology which underlies the atherogenicity of TRLs and their remnants will undoubtedly enable us to identify novel therapeutic targets; the translation of such targets into innovative therapeutic agents may significantly decrease cardiovascular risk in large numbers of hypertriglyceridemic individuals who currently remain at high risk despite optimal treatment according to current guideline recommendations.

Production and Intravascular Metabolism of Triglyceride-Rich Lipoproteins and Remnants {#Sec2}
--------------------------------------------------------------------------------------

Triglycerides represent the transport module for fatty acids, which provide an essential source of energy upon oxidation in mitochondria.
A major source of TG is derived from dietary fat consumption. Dietary TGs are transported in intestinally derived apolipoprotein (apo) B48-containing chylomicrons, which enter the systemic circulation through the lymphatic system and target the heart as the first organ for delivery of fatty acids to fulfil energy requirements. The liver plays a central role in TG homeostasis and maintains a steady state between TG synthesis, secretion and oxidation. In contrast to adipose tissue, the liver does not serve as an organ for TG storage under normal physiologic conditions. The liver can take up fatty acids derived from lipolysis in adipose tissue or from circulating lipoproteins but may equally synthesise fatty acids from carbohydrates in the process of de novo lipogenesis. In the liver, fatty acids can be partly stored as TG in lipid droplets, oxidised to generate energy in mitochondria in the process of beta-oxidation, or packaged in apo B100-containing VLDL particles and secreted into the systemic circulation, where they serve as a source of energy for peripheral tissues. The molecular pathway involved in the packaging of TG into both chylomicron and VLDL particles is remarkably similar, involving microsomal triglyceride transfer protein (MTTP) as described in detail elsewhere \[[@CR15], [@CR16]\]. Triglycerides cannot pass through cell membranes freely. Consequently, intravascular lipolysis is an essential process for release of free fatty acids, which can then be taken up via specific fatty acid transporters or other as yet unknown mechanisms. The underlying mechanism is still only partly understood, but identification of some of the key proteins involved has allowed progress in our understanding. Lipoprotein lipase (LPL) is the key enzyme which drives TG hydrolysis along the luminal surface of capillaries, whereas the recently identified protein glycosylphosphatidylinositol HDL binding protein 1 (GPIHBP1) provides the platform to allow lipolysis to occur at the endothelial cell surface \[[@CR17], [@CR18]\]. Lipoprotein lipase is synthesised in macrophages, adipocytes and myocytes and must be transferred to the luminal side of the endothelial cell to become active, a process which is facilitated by GPIHBP1 \[[@CR18], [@CR19]\]. Strict regulation of LPL production and activity is critical in different tissues and organs. LPL can be regulated at the level of transcription by both peroxisome proliferator-activated receptor (PPAR)α and PPARγ through binding to a PPRE element in the 5′ regulatory region of the LPL gene \[[@CR20], [@CR21]\]. PPARα is intimately involved in lipid metabolism in the liver, whereas PPARγ is more closely involved in adipose tissue lipid homeostasis, thereby regulating LPL action in a tissue-specific manner. More recently, different microRNAs (miR29-a, 497b, 1277, 410) have been found to be involved in posttranslational regulation of LPL; the exact mechanism(s) has not as yet been elucidated \[[@CR22], [@CR23]\]. Both nutritional (fasting vs fed state) and hormonal status play central roles in the regulation of LPL expression in adipose tissue. In fed conditions, LPL activity is high due to the effect of insulin, thereby resulting in increased uptake of fatty acids. Interestingly, LPL is more strictly regulated in heart and skeletal muscle since these tissues need a continuous supply of fatty acids for energy production \[[@CR24]\].
Regulation of the Lipolytic Process {#Sec3}
-----------------------------------

In vivo LPL action is regulated by several proteins including apolipoprotein (apo) C-II, apoA-V, apoC-III and angiopoietin-like protein (ANGPTL) 3, 4 and 8 \[[@CR25]\] (Fig. [1](#Fig1){ref-type="fig"}). ApoC-II is a 79 amino acid peptide of hepatic origin containing a C-terminal domain involved in LPL activation and an N-terminal domain involved in lipid binding. Very little is known of the regulation of apoC-II synthesis. ApoC-II circulates in plasma on TG-rich lipoprotein particles as well as on HDL and is the rate-limiting protein required for normal LPL activity to occur. Patients with complete loss-of-function mutations in *APOC2* have severe hypertriglyceridemia similar to LPL deficiency \[[@CR26]\]. On the other hand, increased plasma apoC-II levels are associated with increased plasma TG levels, suggesting that a surplus may have an inhibitory effect on LPL function. ApoC-II may be important for guiding TG-rich lipoproteins to the active site of LPL at the endothelial cell surface \[[@CR27], [@CR28]\]. The proposed working model for apoC-II involves a mechanical process occurring during hydrolysis of lipoprotein TG and results in increased surface pressure with concomitant conformational change in apoC-II structure, followed by the release of an apoC-II-phosphatidylcholine complex to HDL \[[@CR29]\]. The amino acid residues tyr63, ile66, asp69 and gln70 in the C-terminal helix of apoC-II are essential for LPL activation and have now been used to create an apoC-II mimetic peptide that promotes lipolysis of TG-rich lipoproteins by LPL and may represent a new therapeutic target \[[@CR30], [@CR31]\].

Fig. 1 LPL is synthesised in parenchymal cells in muscle and adipose tissue and then transported to the endothelial cell surface. LPL-mediated TG lipolysis at this surface is the first essential step in TG homeostasis. TGs are hydrolysed by LPL bound to GPIHBP1 in a process that is dependent on apoC-II. ApoC-III and apoA-V are potential inhibitors of LPL-mediated lipolysis. Upon TG hydrolysis, free fatty acids are taken up by surrounding tissues.

ApoC-III is a potent inhibitor of LPL function and is a 99 amino acid protein containing three sialic acid residues; it is synthesised mainly in the liver and to a small extent in the intestine. Different transcription factors, such as PPARα and FoxO1, may regulate *APOC3* gene expression \[[@CR20], [@CR32]\]. ApoC-III circulates on TG-rich lipoprotein particles as well as on HDL \[[@CR33]\]. The apoC-III protein undergoes O-linked glycosylation by GALNT2, resulting in the presence of three isoforms, apoC-III 0, 1 and 2, which impacts apoC-III function \[[@CR34]\]. Genetic studies have provided insight into the function of apoC-III. With respect to the potential relationship of circulating apoC-III levels to cardiovascular risk, a null mutation, p.R19X (rs56353203), was found to be associated with low plasma TG levels and attenuated subclinical atherosclerosis in the Amish population \[[@CR35], [@CR36]\]. Additional evidence was provided in a number of epidemiological studies showing the causal relationship between genetic variants in *APOC3*, plasma TG and CVD risk \[[@CR37]--[@CR39]\]. ApoC-III is now recognised as a multifaceted protein involved in different metabolic processes related to TG homeostasis. Firstly, apoC-III may inhibit hepatic clearance of TG-rich remnant particles by interfering with receptor binding sites \[[@CR33]\].
Secondly, apoC-III has been recognised to inhibit LPL-mediated lipolysis in vitro; kinetic studies in human subjects do not, however, favour this concept \[[@CR40]\]. Apparently, the ratio of apoC-III to apoC-II molecules on the surface of VLDL particles is the main determinant for LPL inhibition to occur. In vitro studies have shown that apoC-III/apoC-II ratios \>5.0 are effective in inhibiting LPL action \[[@CR41]\], a molar ratio which does not occur in human physiology. Finally, recent clinical trials in hypertriglyceridemic LPL-deficient patients using an antisense oligonucleotide (ASO) against apoC-III are in line with the concept that apoC-III has a major role in the hepatic uptake of remnant particles \[[@CR42]\]. Thus, apoC-III emerges as an important drug target for reducing residual cardiovascular risk in hypertriglyceridemic subjects \[[@CR43]\]. ApoA-V is a 366 amino acid protein primarily of hepatic origin. Circulating plasma apoA-V concentrations are very low, which means that at most only 4 % of VLDL particles carry one apoA-V molecule \[[@CR44]\]. Despite such low abundance, evidence supporting an essential role for apoA-V in TG metabolism is accumulating. Rare variants in or close to the *APOA5* gene locus are consistently associated with plasma TG levels and risk for CVD \[[@CR45], [@CR46]\]. However, plasma apoA-V levels are positively associated with plasma TG in humans, an observation which to date has not been fully understood \[[@CR44], [@CR47], [@CR48]\]. Most studies on apoA-V function have been performed in mice overexpressing human apoA-V or in in vitro models using apoA-V liposomes \[[@CR49]\]. In all of these models, apoA-V concentration is elevated in comparison to the physiological concentrations typically seen in humans. Moreover, the physiological context in which apoA-V is functional, i.e. in the presence of apoC-II, apoC-III or apoE on the same lipoprotein particle, all of which compete for similar functions, is missing. Interestingly, injection of apoA-V rHDL into *Apoa5*^*−/−*^ mice induces a rapid decline in plasma TG levels whereas a similar injection in *Gpihbp1*^*−/−*^ mice had no effect \[[@CR50]\], suggesting that apoA-V might be involved in binding of TG-rich lipoproteins to GPIHBP1, thereby allowing LPL-mediated TG hydrolysis to proceed \[[@CR51]\]. Whether apoA-V is involved in LPL-mediated TG lipolysis in humans has not yet been established. ANGPTL3, 4 and 8 have been implicated in TG homeostasis. ANGPTL proteins contain a signal peptide, an N-terminal coiled-coil domain and a C-terminal fibrinogen-like domain \[[@CR52]\]. ANGPTL4 is produced in adipose tissue, whereas ANGPTL3 is produced in the liver and ANGPTL8 in both adipose tissue and the liver \[[@CR53]\]. ANGPTL4 expression is increased under fasting conditions, whereas ANGPTL8 levels are increased in the fed state. ANGPTL3, 4 and 8 have all been implicated in the regulation of LPL action. In fasting conditions, in which ANGPTL4 expression in adipose tissue is upregulated, LPL action may be suppressed by ANGPTL4, resulting in rerouting of fatty acids towards other organs for supply of energy. ANGPTL4 effectively inhibits LPL action by converting the active dimer into an inactive monomer \[[@CR54]\]. However, evidence has accumulated to show that ANGPTL4 could also act as a reversible, non-competitive inhibitor of LPL \[[@CR55]\]. Such inhibition occurs solely when LPL is in a complex with ANGPTL4 and leads to restoration of LPL activity upon dissociation of the complex.
Alternatively, ANGPTL4 may directly bind LPL that is bound to GPIHBP1 and in this manner inactivate the protein, whereafter dissociation from GPIHBP1 occurs \[[@CR56]\]. Although the role of ANGPTL3 in regulation of lipolysis is less well understood, mutations in the *ANGPTL3* gene in humans are associated with reduced TG and cholesterol levels and elevated LPL activity \[[@CR57]\]. ANGPTL3 is activated by proteolytic cleavage, leading to the release of the N-terminal domain, which has been shown to inhibit LPL and therefore result in reduced TG clearance. Recent data have emerged that ANGPTL8, also known as betatrophin or lipasin and a paralog of ANGPTL3, is able to interact with ANGPTL3, facilitating the cleavage of its N-terminal domain and thereby regulating its activity \[[@CR58], [@CR59]\]. In conclusion, all three ANGPTL isoforms are able to impact LPL activity and thereby lead to altered plasma TG levels. Determinants of Plasma Triglyceride Levels and Heterogeneity of Triglyceride-Rich Lipoprotein Particles {#Sec4} ------------------------------------------------------------------------------------------------------- Very-low-density lipoprotein particles of hepatic origin can be subdivided on the basis of their size and role in TG metabolism; the larger, less dense particles are defined as VLDL1 (Sf 60--400) and the smaller as VLDL2 (Sf 20--60). Elevation of levels of large TG-rich VLDL1 is the major determinant of plasma TG concentrations in both normal and insulin-resistant individuals \[[@CR60]\]. Increased plasma concentrations of VLDL1 can result either from hepatic oversecretion and/or impaired clearance of TRL remnants from the circulation \[[@CR61], [@CR62]\]. Hepatic oversecretion of VLDL1 particles is linked to increased liver fat and hyperglycemia \[[@CR61], [@CR63], [@CR64]\]. Increased liver fat is equally associated with impaired suppression of VLDL1 secretion and results in oversecretion of VLDL1 particles \[[@CR64], [@CR65]\]. Recent findings in in vivo kinetic studies have shown that kinetic indices for VLDL1-TG catabolism are stronger determinants of circulating plasma TG concentration than kinetic parameters for the increased secretion of VLDL1 \[[@CR66], [@CR67]•\]. In particular, these studies revealed strong correlations between the catabolism of VLDL1-TG levels and the plasma concentration of apoC-III \[[@CR66], [@CR68]\]. The principal mechanism underlying the impairment of the catabolism of TRLs by apoC-III remains unclear, since apoC-III impairs both intravascular lipolysis by LPL and LPL-independent clearance of TRLs \[[@CR69]\]. The significance of apoC-III in the hepatic clearance of TRL was recently illustrated by the markedly accelerated catabolism of TG-rich remnant particles in human subjects with deficiency of apoC-III \[[@CR70]\]. Remnants are generated when chylomicrons and VLDL particles are remodelled during TG hydrolysis by LPL and are concomitantly enriched in cholesteryl esters by the action of the cholesteryl ester transfer protein (CETP). Thus, as TGs are removed, remnant particles become enriched with cholesteryl esters \[[@CR71]\]. 
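As a quick reference, the short sketch below collects, in one illustrative helper, the numerical definitions used in this review: the Sf ranges quoted above for VLDL1 and VLDL2, and the fasting TG cut-points from the introduction. The conversion factor of roughly 88.5 mg/dL per mmol/L is implied by the paired values quoted earlier (1.7 mmol/L ≈ 150 mg/dL; 10 mmol/L ≈ 885 mg/dL). The code is a sketch for orientation, not part of the original article.

```python
# Illustrative helper collecting the numerical definitions used in this review:
# VLDL subfractions by Svedberg flotation rate (Sf) and the fasting TG cut-points.
MGDL_PER_MMOL = 88.5  # implied by 1.7 mmol/L ~ 150 mg/dL and 10 mmol/L ~ 885 mg/dL

def vldl_subfraction(sf: float) -> str:
    if 60 <= sf <= 400:
        return "VLDL1 (large, TG-rich, Sf 60-400)"
    if 20 <= sf < 60:
        return "VLDL2 (smaller, denser, Sf 20-60)"
    return "outside the VLDL range"

def fasting_tg_category(tg_mmol_per_l: float) -> str:
    if tg_mmol_per_l > 10.0:
        return "severe hypertriglyceridemia (often genetic; pancreatitis risk)"
    if tg_mmol_per_l > 1.7:
        return "hypertriglyceridemia"
    return "normal fasting TG"

for tg in (1.2, 2.5, 12.0):
    print(f"TG {tg:.1f} mmol/L ~ {tg * MGDL_PER_MMOL:.0f} mg/dL -> {fasting_tg_category(tg)}")
print(vldl_subfraction(150))
```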
Subendothelial Accumulation of Atherogenic Lipoproteins Induces Atherogenesis {#Sec5}
-----------------------------------------------------------------------------

Lipoproteins in the circulation normally flux into and out of the arterial wall by transcytosis, a transport system in which lipoproteins and other macromolecules are transported across the endothelial cell in specialised clathrin-coated vesicles (Fig. [2](#Fig2){ref-type="fig"}) \[[@CR72]\]. The transcytosis pathway has not previously attracted attention, but recent studies indicate that the process is responsive to LDL levels in the blood \[[@CR73]--[@CR75]\]. The transport vesicles are about 100 nm in diameter, and therefore, the transcytotic transport system is restricted to lipoproteins smaller than approximately 70 nm in diameter. Thus, larger lipoproteins, such as chylomicrons and large VLDL particles, cannot traverse the endothelium \[[@CR76], [@CR77]\]. This size limitation explains why individuals with lipoprotein disorders involving accumulation of large lipoproteins, such as chylomicrons in LPL-deficient patients, do not develop atherosclerosis. The capacity of the transcytotic transport system is very high; indeed, it has been estimated that about 2500 transport vesicles leave the plasma membrane every minute. Therefore, it is not the influx of lipoproteins into the artery wall that is rate limiting and that determines the concentration of atherogenic lipoproteins there, but rather their selective subendothelial retention \[[@CR78]\]. Such retention is mediated by ionic interactions between positively charged residues in apoB and apoE on the atherogenic lipoproteins \[[@CR79]--[@CR81]\] and negatively charged sugar and sulphate groups in the glycosaminoglycan chains of the arterial wall proteoglycans.

Fig. 2 Transcytosis enables the influx of lipoproteins across the vessel wall. This process is mediated by clathrin. The average diameter of these transport vesicles is around 100 nm, which only allows transport of lipoproteins of 70 nm or smaller, thereby excluding chylomicrons and large VLDL remnant particles. The average transport rate is around 2500 vesicles per minute. The retention of lipoproteins in the subendothelial space is mediated by the interaction between positively charged residues on apoB and apoE and the negatively charged sulphate groups in the glycosaminoglycan chains of HSPG expressed on the vessel wall.

Postprandial Hypertriglyceridemia and Atherogenicity of Remnant Particles {#Sec6}
-------------------------------------------------------------------------

For many years, accumulation of chylomicrons and chylomicron remnants in plasma was believed to be the essential cause of postprandial hypertriglyceridemia \[[@CR82], [@CR83]\]. However, it is now established that although approximately 80 % of the increase in postprandial TG is due to chylomicrons \[[@CR84]\], approximately 80 % of the increase in particle number is accounted for by VLDL particles \[[@CR85], [@CR86]\]. The underlying reason is that chylomicrons and VLDL particles are cleared from the circulation by common pathways and therefore compete for clearance \[[@CR76]\], even if chylomicrons seem to be preferentially cleared \[[@CR87]\]. Increased secretion of liver-derived VLDL is therefore causatively linked to postprandial accumulation of chylomicron remnants \[[@CR87]\].
As discussed above, remnant particles contain significant amounts of cholesteryl esters and can enter the arterial wall, even if their size results in attenuated transport across the endothelium as compared to smaller LDL particles. However, since each remnant particle contains approximately 40 times more cholesterol compared with LDL, elevated levels of remnants may lead to accelerated atherosclerosis and CVD \[[@CR77]\]. Interestingly, the importance of postprandial lipoproteins in the development of atherosclerotic vascular disease was initially proposed almost 70 years ago by Moreton, who wrote "the lipid particles must be assumed to be retained and deposited from the plasma-derived nutrient lymph stream which normally passes from the lumen through the intramural structures towards the adventitial venules and lymphatics. It may be theorised that the increased particle size of the lipids in sustained or alimentary hyperlipemia is the stimulus to the phagocytosis in the intima by macrophages and the formation of the typical foam cells" \[[@CR88], [@CR89]\]. It is now clear that Moreton's work has not, until now, received the attention it deserves.

Triglyceride-Rich Lipoproteins and Remnants as Therapeutic Targets in Hypertriglyceridemia {#Sec7}
------------------------------------------------------------------------------------------

As discussed above, recent epidemiological studies have unequivocally demonstrated that elevated levels of postprandial TG and remnant particles are clinically significant risk factors for CVD \[[@CR2], [@CR3], [@CR6]•\]. Furthermore, postprandial TG concentrations have been shown to be a better risk predictor for CVD than fasting TG \[[@CR2], [@CR3], [@CR90]\]. Epidemiological data providing insight into the frequency of mild to moderate hypertriglyceridemia (approx. 150 to 800 mg/dL) in the general population as a function of age, gender and ethnicity are lacking; nonetheless, findings in the NHANES survey suggest that at least one third of the US population can be classified as hypertriglyceridemic \[[@CR91]\]. The degree to which such hypertriglyceridemia reflects the impact of elements of dietary habits and lifestyle relative to genetic factors is indeterminate. In this context, it is however especially relevant that recent studies from several laboratories suggest that mild to moderate hypertriglyceridemia is frequently of polygenic origin, arising as a result of a cumulative burden of common and rare variants in more than 30 genes coding for proteins of the complex lipolytic system, each of these polymorphisms generating proteins with mildly attenuated biological activity \[[@CR13]••\]. A genetic score approach is therefore meaningful. Clearly then, an emerging body of evidence supports the contention that, from a therapeutic perspective, efforts to efficaciously reduce circulating concentrations of TRLs and their remnants have become critical.

Management of Hypertriglyceridemia and New Therapeutic Options {#Sec8}
--------------------------------------------------------------

Following exclusion of secondary causes, treatment of mild to moderate hypertriglyceridemia should follow guideline recommendations, the initial step involving counselling on dietary habits, smoking and exercise \[[@CR92]\]. The objective in such individuals is clearly to diminish their cardiovascular risk.
In the event that pharmacotherapy is required, statins, fibrates and omega-3 fatty acids are all effective agents for reduction of TG levels, but only the use of statins is supported by a solid evidence base derived from multiple randomised control intervention trials \[[@CR92]\]. New therapeutic options are currently under development that have been based on genetic evidence for reduced cardiovascular risk in families with phenotypes involving markedly diminished TG levels and rare causative monogenic mutations. Of the candidate proteins involved, apoC-III stands out as an elegant example. Indeed, a null mutation in human *APOC3* was discovered in 2008 in the Amish community in the USA and found to provide apparent cardioprotection \[[@CR35]\]. Abundant data now support the working hypothesis that pharmacotherapeutic reduction in circulating apoC-III levels may represent a valid target for hypotriglyceridemic therapy and ultimately for reduction of cardiovascular risk and potentially pancreatitis. Thus, loss-of-function mutations in *APOC3* are associated with low TG levels and reduced risk of ischemic heart disease in two general population studies involving more than 75,000 participants \[[@CR37]\]. Similar findings were made in the exome sequencing project \[[@CR38]\]. As discussed above, apoC-III is a key factor on the surface of TRL and is a critical modulator of the lipolytic activity of lipoprotein lipase. Circulating levels of apoC-III are elevated however and of the order of 10--20 mg/dL, suggesting that hepatic production of apoC-III may be more viable as a target compared to a monoclonal antibody approach to remove apoC-III protein, allowing maintenance of low plasma levels over extended periods of time \[[@CR93]\]. It is in this context that the development of anti-sense oligonucleotides targeted to the hepatic mRNA of apoC3 hold considerable promise, as dose-dependent reductions in TG levels of up to 80 % are attainable \[[@CR94]\]. Conclusion {#Sec9} ========== The number of patients with hypertriglyceridemia will grow significantly over the coming years, partly due to the increase in patients with diabetes mellitus type 2 and metabolic syndrome. Indeed, hypertriglyceridemia poses a major emerging challenge for public health and requires adequate targeting. As current therapies are not optimal for normalisation of elevated TG levels, the development of novel therapeutic agents is therefore warranted. This article is part of the Topical Collection on *Lipid Abnormalities and Cardiovascular Prevention* Conflict of Interest {#FPar1} ==================== M. Dallinga-Thie, J. Kroon and Jan Borén declare that they have no conflict of interest. John Chapman reports grants and personal fees from Pfizer and Kowa, grants from CSL and personal fees from Amgen, Sanofi-Regeneron and Astrazeneca. Human and Animal Rights and Informed Consent {#FPar2} ============================================ This article does not contain any studies with human or animal subjects performed by any of the authors.
ApoA-1 in Diabetes: Damaged Goods Diabetes is a major risk factor for the development of atherosclerosis. In addition to increased risk of stroke, myocardial infarction, and peripheral vascular disease, diabetics suffer from a particularly aggressive form of atherosclerosis with greater in-hospital mortality following myocardial infarction and a higher incidence of heart failure, if they survive (1–3). While diabetics often have other accompanying risk factors for atherosclerosis (e.g., hypertension hypercholesterolemia, obesity), the additional risk conferred by diabetes and the particularly aggressive vascular and myocardial disease that affects diabetics suggest that diabetes-associated atherosclerosis involves unique pathogenic mechanisms. The systemic metabolic disturbances of diabetes, including hyperglycemia and hyperlipidemia, likely play a central role in the pathogenesis of diabetes-associated atherosclerosis through the generation of oxidative stress. Hyperglycemia causes increased flux through the polyol pathway, formation of advanced glycation end products, activation of protein kinase C isoforms, and increased hexosamine pathway flux, all of which may contribute to increased oxidative stress (4–6). Excessive free fatty acids delivered to nonadipose tissues can lead to reactive oxygen species (ROS) formation through cycles of oxidative phosphorylation, activation of NADPH oxidase, and alterations in mitochondrial structure that precipitate ROS production (7–9). In addition to evidence for activation of these pathways in cultured endothelial cells, human studies support the notion of increased systemic oxidative stress in diabetic subjects in whom increased circulating levels of adhesion molecules and oxidized lipids correlate with increases in A1C and hypertriglyceridemia (10). The effects of oxidative stress in diabetes on both the vascular wall and lipoproteins in the circulation may promote atherogenesis. In this issue of Diabetes, Jaleel et al. (11) provide intriguing evidence that poor glycemic control in type 1 diabetes is associated with accelerated oxidative damage to apolipoprotein (apo) A-1. These investigators adapted a pulse-chase approach, classically used in cell culture experiments, to label newly synthesized proteins with 13C-phenylalanine in human subjects. They then analyzed various plasma apoA-1 isoforms by two-dimensional gel separation and mass spectrometry. This approach enabled quantification of isotopic enrichment in newly synthesized forms of the protein containing the propeptide and in more mature cleaved forms, which together form a charge train of five spots in two-dimensional gel analyses. As expected, isotopic enrichment hours after the stable isotope pulse was highest in the immature forms, and over the course of 10 days, “chased” into more mature forms of the protein lacking the propeptide. Importantly, the older forms of apoA-1 accumulated significantly more evidence of damage including deamidation, oxidation, and carbonylation of amino acids, post-translational modifications that likely contributed to their altered migration in isoelectric focusing. Although the apoA-1 profile of type 1 diabetics during insulin infusion was indistinguishable from that of control subjects, type 1 diabetics deprived of insulin demonstrated increased oxidative damage to newly synthesized apoA-1 (Fig. 1). These findings add to a growing body of molecular evidence for how the oxidative stress that accompanies poor metabolic control impacts physiology. 
It has long been appreciated that ROS can initiate damage to the nucleic acids, membranes, and proteins of cells. It should not be surprising then that similar damage can affect plasma proteins such as apoA-1. Transcriptional, posttranslational, and signaling mechanisms have been well described in studies of the cellular response to oxidative stress (12–14). Given that apoA-1 is a major component of HDLs, which protect against atherosclerosis by facilitating the removal of cholesterol from macrophages in the artery wall and promoting reverse cholesterol transport, obvious extensions of this work will be to determine whether the changes observed by Jaleel et al. in apoA-1 forms are due directly to oxidative stress (e.g., attenuated following anti-oxidant treatment), with which HDL subclasses the damaged apoA-1 associates, and whether the altered forms of apoA-1 affect HDL clearance or function. The former will provide mechanistic insight into the etiology of these changes. The latter two aspects have the potential to functionally link the investigators' biochemical observations to increased cardiovascular risk in diabetes. While low plasma HDL is an independent risk factor for coronary artery disease (15,16), it is increasingly clear that perturbations in HDL metabolism can alter HDL function and promote atherosclerosis independent of plasma HDL levels (17–19). In fact, HDL cholesterol levels alone are insufficient to capture the functional variation in HDL particles and the associated cardiovascular risk for individual subjects (20). Together with the failure of HDL-raising therapy in recent clinical trials to reduce cardiovascular events (21), these findings suggest that the functional competence of HDL may be as important as absolute plasma HDL levels. It is likely that an important pathway for the generation of dysfunctional HDL is through oxidative damage, such as that precipitated by hyperglycemia and hyperlipidemia (22). The damage to apoA-1 described in the accompanying original article adds to an expanding list of HDL alterations that may impair its function in vivo. HDL-associated paraoxonase-1 (PON1), which is principally responsible for the anti-oxidant properties of HDL that prevent LDL oxidation, is reduced in diabetic subjects and is associated with defective anti-oxidant capacity (23,24). HDL anti-oxidant activity is further impaired by the formation of advanced glycation end products that interfere with PON1 activity and reduce cholesterol efflux to HDL (25,26). In vitro oxidation of apoA-I has been shown to impair the ability of HDL to activate lecithin:cholesterol acyltransferase, the enzyme responsible for converting nascent HDL into mature, cholesteryl ester-rich HDL, and to interact with ATP-binding cassette transporter A1 to facilitate cholesterol export (27–29). Disruption of this critical step in the reverse cholesterol transport pathway is likely to have profound effects on the mobilization of cholesterol from vascular tissues. Beyond analyses such as these, proteomic examination of HDL is likely to identify changes in additional proteins that impact lipoprotein function in the setting of poor metabolic control in diabetes. Moreover, examination of the lipid constituents of the HDL particles, which are similarly susceptible to oxidation, is likely to provide equally important insights into HDL dysfunction and increased susceptibility to atherosclerosis in diabetes. ACKNOWLEDGMENTS No potential conflicts of interest relevant to this article were reported.
Footnotes: See accompanying original article, p. 2366. © 2010 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered. See http://creativecommons.org/licenses/by-nc-nd/3.0/ for details.
https://diabetes.diabetesjournals.org/content/59/10/2358
The Regional Directory lists approximately 1458 information technology vendors and 167 computer and computer equipment manufacturers in California. Nationwide, the directory lists approximately 181 computer software manufacturers. PC Component Inc is located in Folsom, California, in Sacramento County. The most recently known address of PC Component Inc is 404 Blue Ravine Road # 500, Folsom, California 95630. Before visiting PC Component Inc, be sure to verify its address and hours of operation; this California-based organization may have ceased operations or relocated, and hours can sometimes vary, so a quick phone call can often save you time and aggravation. Visit this Information Technology Services directory page to find information technology vendors throughout the USA.
https://www.regionaldirectory.us/ca/4238245.htm
FastAI + ResNet152 and differential learning rates. Utilizing the latest in AI deep learning techniques, such as differential learning rates and auto learning rate detection with FastAI and an ImageNet-trained ResNet152 CNN, I was able to build and train a neural network model showing up to 100% accuracy in classifying the Kimia 960 histo-pathology dataset. Previous papers have shown 91–94% peak accuracy using AI, and a range of rates for machine learning (i.e., 75–81%), so achieving up to 100% was quite exciting. This article provides some insight into the process and results!

Background: the Kimia 960 dataset consists of 960 whole-slide images of muscle, epithelial, and connective tissue. It was selected from a larger set of histo-pathology scans. It is considered a challenging dataset due to its large "intra-class variability", as shown in prior papers on the dataset. When reviewing some of the papers using AI for classification, I noticed that while they were as recent as six months ago, the architectures and techniques were not leveraging some of the newer breakthroughs that FastAI has brought to the fore, notably things like progressive resizing, cyclical restart annealing, and differential learning rates. With that, I decided this was an excellent challenge and proceeded to prep the dataset for use with FastAI.

ResNet50: could not get higher than 98%, not enough horsepower to solve the problem. Of interest, I initially used ResNet50, believing that should be sufficient. However, a few hours in, it became clear that ResNet50 could not get past 98% accuracy. The issue was consistent confusion between class A and class D in the dataset. Even with multiple changes and various learning rate adjustments, ResNet50 could not resolve the confusion; when you view some of the images from these two classes, you can see they are quite similar. Thus, I decided the root issue might be that the CNN simply needed more raw horsepower, and I restarted with ResNet152, which adds nearly 100 more layers of neural "power" (or 3x the total layers). That proved to be the right change; impressively, it never got stuck the way ResNet50 did.

Differential learning rates and auto learning rate finder, for the win: while the initial training results were pretty steady, ResNet152 began to peak around 97–98% and the risk of over-training was starting to appear in the validation results. Thus, I had to become very conservative with the learning rates, and this is where differential learning rates appeared to really excel in guiding ResNet152 to 100% accuracy. The process was to repeatedly run the FastAI learning rate finder, and then use differential learning rates so that the initial layers were only gently coaxed while the middle and finishing layers received a bit more push from the training epochs. By using differential learning rates, you don't overly disturb what's working, while still providing more feedback to the layers that need it. This is a huge improvement in terms of finesse versus the more traditional fixed learning rate for the entire network, and in this case it appeared to be a key difference.
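For readers who want to see what this looks like in practice, below is a minimal sketch of the recipe using the fastai v1-style Python API. The dataset path and folder layout, the image size, the batch size, and the epoch counts are illustrative assumptions on my part rather than values from the original experiments.

# Hedged sketch of the training recipe described above (fastai v1-style API).
# 'kimia960/' is a hypothetical folder-per-class copy of the Kimia 960 images;
# image size, batch size, and epoch counts are illustrative assumptions.
from fastai.vision import *

data = ImageDataBunch.from_folder(
    'kimia960/',
    valid_pct=0.2,               # hold out 20% of the images for validation
    ds_tfms=get_transforms(),    # default augmentations
    size=224,
    bs=16,
).normalize(imagenet_stats)

# ImageNet-pretrained ResNet152 backbone with a new classification head.
learn = cnn_learner(data, models.resnet152, metrics=accuracy)

# Train the new head first, then unfreeze the full network.
learn.fit_one_cycle(4)
learn.unfreeze()

# Auto learning rate finder, then differential (discriminative) learning rates:
# tiny updates for the earliest layers, slightly larger ones for later layers.
learn.lr_find()
learn.fit_one_cycle(8, max_lr=slice(1e-8, 1e-6))

The slice passed to max_lr is what spreads the learning rate across the layer groups, which is the "differential learning rates" idea described above; rerunning lr_find between rounds is how the rates were steadily scaled down.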
With the learning rates steadily scaling down as the learning rate finder's curves changed, the final training runs used a split learning rate between 1e-8 and 1e-6 (1e-8 for the first layers, 1e-7 for the middle layers, and 1e-6 for the final layers), and the validation results slowly climbed until the exciting moment of repeated 100% accuracy on the validation set. As the training results showed, 100% accuracy was repeatable, and with that I froze the model for future use. To be safe, I titled this "up to 100%" accuracy, as there is no guarantee that the 100% accuracy would be maintained on even larger datasets. However, it did achieve 100% on consecutive runs, and in addition, 98.7% was very stable. These results exceed previously published CNN results ranging from 91–94%, as well as other non-deep-learning approaches to this dataset (i.e., 75–82% with Support Vector Machine learning). We can thus see that deep learning (or AI), with the techniques leveraged in FastAI, continues to rapidly improve and evolve, and holds the prospect of improving health care for all in the future. References (previous papers using AI (CNN) and Kimia 960):
https://mc.ai/up-to-100-accuracy-for-cellular-histo-pathology-image-classification-with-fast-ai-and-kimia960/
Areo Gardens I, Attract
This series of animated short films depicts surreal and sublime natural geology in the solar system. Scans of natural rocks and boulders captured in many parts of the world using 3D photogrammetry are set above a Martian landscape captured by the NASA/JPL HiRISE imager. The stones depicted in Areo Gardens include boulders from Joshua Tree, California, an abandoned quarry in Syracuse, NY, and volcanic stone formations from the coast of Senegal.

Areo Gardens III, Wash

Areo Gardens V, Gale Crater
The landscape of this film is a speculative model combining a USGS LIDAR scan of Bingham Copper Mine, UT, with the terrain of Gale Crater, Mars, obtained from the MRO/HiRISE satellite program. The depiction of this fictional setting helps us to consider the ramifications of anthropogenic change and radical alterations to landscapes beyond Earth.

Gale Crater with Bingham Copper Mine
This speculative 3D model sites Bingham Copper Mine on the surface of Mars. The famed Utah copper mine is the largest open-pit mine in the world, and the site of one of Robert Smithson's proposed land interventions. The Martian site, Gale Crater, is currently being explored by the NASA Curiosity Rover. The work urges us to consider the ramifications of anthropogenic change and radical land alterations beyond Earth.

Kim Stanley Robinson, in his fictional Mars series, illustrates conflicting perspectives towards human intervention on uninhabited planets. The "Greens" in his story are those wishing to terraform Mars for human habitability, while the "Reds" take the extreme standpoint that the planet should not be altered in any way. Ostensibly the Red position is scientific: to preserve the geological record for scientific inquiry. However, his Red characters also possess an almost mystical impulse when they admit that they are willing to wage eco-terrorism, not for science, but out of a profound reverence for the sublime awe to be found in nature, wherever in the solar system it may be found.

SpaceX's Elon Musk dreams of a Martian city. He softens his intentions with vague optimism like "becoming a space-faring race is a lot more interesting," but the economic driver of his Martian colonization plan is resource extraction. What he is actually proposing is a mining colony. Industrial mining operations don't tend to have such an altruistic track record on planet Earth; typically they end with an enormous hole in the ground and a river of toxic by-products. It is tempting to turn towards Robert Smithson's strategy to reclaim the exposed terrain of mining operations as sites for radical land art. But perhaps this also implies a sort of defeatism, as if the forces of industrial capital are inevitable. Might it be even more radical to consider a future in which "art-washing" the destruction of nature is not necessary in the first place?
https://www.devharlan.com/areo-gardens/
TECHNICAL FIELD

This document relates to systems for monitoring non-account holders that seek to perform transactions with a financial institution.

BACKGROUND

A non-account holder is a person who seeks to perform a transaction (e.g., cashing a check) with a bank or other financial institution but does not have a banking relationship with the bank (e.g., does not have a bank account with the bank). Banks are highly susceptible to fraud at the hands of a non-account holder because of the difficulty in confirming the identity of the non-account holder and because of the lack of information regarding the history of transactions between the non-account holder and the bank and between the non-account holder and other banks.

DETAILED DESCRIPTION

Like reference symbols in the various drawings indicate like elements. A system for identifying non-account holders and monitoring the transaction history of non-account holders enables banks to decrease incidences of fraud performed by non-account holders. The system enables banks to decrease identity fraud by providing a comprehensive non-account holder enrollment process that includes comparing collected identification data with identification data stored in third party, non-public, identity verification data stores. Once a non-account holder is enrolled, the system enables banks to decrease fraud by tracking the transactions between banks and enrolled non-account holders and enabling a bank to review the transaction history of an enrolled non-account holder when determining whether to perform a transaction with the enrolled non-account holder (e.g., when determining whether to allow the enrolled non-account holder to open an account with the bank or whether to grant a loan to the enrolled non-account holder).

The system also provides banks with a recommendation as to whether to perform a transaction (e.g., cash a check) with an enrolled non-account holder. The recommendation is generated through an automated risk analysis. The automated risk analysis is performed by accessing the transaction history of the enrolled non-account holder and applying a set of risk rules to the accessed transaction history. The result of the risk analysis is a transaction approval or denial recommendation. The bank may automatically perform the transaction in accordance with the recommendation (e.g., when the transaction is performed at an Automated Teller Machine (ATM)) or may ignore the recommendation and approve or deny the transaction based on other factors (e.g., when the transaction is performed at a teller terminal and a bank supervisor chooses to ignore the recommendation). The risk rules and thus the risk analysis may be tailored to each bank, thereby enabling each bank to vary the rules in accordance with its particular risk sensitivity.

Referring to FIG. 1, a system 100 for identifying non-account holders and monitoring the transaction history of non-account holders includes a teller terminal 105 and an ATM 110, both of which are associated with a bank 115. The teller terminal 105 communicates with a Non-account Holder Monitoring System (NHMS) 120 across a data network 130, and the ATM 110 communicates with the NHMS 120 across a virtual private network 135 through ATM transaction switches 140. The NHMS 120 provides non-account holder identity verification information to the teller terminal 105 by accessing third party identity verification data stores 145.
While FIG. 1 only shows one bank 115, one or more other banks and associated teller terminals and ATMs also may communicate with the NHMS 120 to identify and track non-account holders.

The teller terminal 105 is a computer terminal configured to enable a teller affiliated with the bank 115 to verify the identity of a non-account holder and enroll the non-account holder into the NHMS 120. The teller terminal 105 also may be configured to process transactions with enrolled non-account holders and to generate management reports related to enrolled non-account holder transaction histories. The teller terminal 105 may include driver's license decoding software, a fingerprint scanner for biometric identification, and a check imaging device. In other implementations, the teller terminal 105 may use other types of biometric identification mechanisms. For example, the teller terminal 105 may include identification software that verifies the identity of a non-account holder based on an image of the non-account holder's face, a palmprint, DNA analysis, a retinal scan, or an analysis of the non-account holder's voice. In one implementation, the teller terminal 105 is a personal computer having peripheral components used to collect data from the non-account holder (e.g., a check imager, a card reader, and a fingerprint scanner) and with a secure connection to the NHMS 120 over the data network 130. The teller terminal 105 may additionally perform other financial service functions unrelated to non-account holder enrollment, tracking, and transaction processing. For example, the teller terminal 105 may enable the teller to assist account holders in withdrawing funds from and depositing funds to a savings and/or a checking account.

The teller terminal 105 is configured to enable a teller to enroll a non-account holder into the NHMS 120 by collecting data that identifies the non-account holder and communicating the collected data to the NHMS 120. For example, the teller terminal 105 may collect the identity data by swiping the non-account holder's driver's license, orally requesting the identity data from the non-account holder and manually entering the data, and/or enrolling a biometric template of the non-account holder (e.g., a template of the fingerprints of both index fingers of the non-account holder). The NHMS 120 uses some or all of the collected identity data to access identity verification data stored locally or stored in third party identity verification data stores 145. The NHMS 120 compares the accessed identity verification data to the collected identity data to validate the non-account holder's identity. If the identity of the non-account holder is successfully validated and the non-account holder was not previously enrolled, the NHMS 120 enrolls the non-account holder into the system.

The ATM 110 is a check cashing unit that is configured to enable an enrolled non-account holder to cash a check. In operation, an enrolled non-account holder enters his or her driver's license number, social security number (SSN), or other identification number, and the amount of the check to be cashed, and inserts the check into the machine. In some implementations, the enrolled non-account holder also may be required to provide biometric data. The ATM 110 may include biometric identification mechanisms similar to those included in the teller terminal 105.
The ATM also includes a check imaging device that produces images of the front and back of the check, validates the MICR (“magnetic ink character recognition”) code on the check, and reads designated zones of the check. In one implementation, the ATM is a Diebold Opteva 720 with an IDM operating on an Agilis 912 platform. 110 140 120 120 120 115 110 120 110 110 The ATM is configured to send transactional data, including customer identity information (e.g., biometric data and identification number) to the ATM transaction switch , which converts the received transactional data into a format understandable by the NHMS and sends the converted transactional data to the NHMS . The NHMS determines whether to approve the transaction by accessing the transaction history of the enrolled non-account holder and performing a risk analysis in accordance with risk rules established and maintained for the particular bank associated with the ATM . The NHMS returns either an approval or denial indicator to the ATM . The ATM proceeds to cash the check if an approval indicator was received or reject the check if a denial indicator was received. 115 115 115 105 110 105 110 115 115 115 The bank may be any financial institution that provides check cashing services. The bank also may enable customers to open bank accounts (e.g., checking or savings accounts) and may provide other types of financial services (e.g., loans). The bank typically includes one or more teller terminals and one or more ATMs . The teller terminals and ATMS may be local to the bank or may be remote to the bank but in communication with the bank over a public or private data network. 120 122 124 126 128 122 145 The NHMS includes a customer and transaction data store , a report data store , a transaction queue , and a risk engine . The customer and transaction data store includes one or more records for each non-account holder. These records store non-account holder identity data, non-account holder transaction history data, and non-account holder enrollment history data. The non-account holder identity data includes identity data collected directly from the non-account holder and collected from the third party identity verification data stores . The non-account holder identity data may include some or all of the following items: name, address, date of birth, driver's license number, sex, height, eye color, hair color, weight, social security number or user identity (ID) number, phone number, previous five addresses (or any previous number of addresses), an indicator as to whether the non-account holder is deceased, and biometric data (e.g., fingerprint images). 120 120 The non-account holder transaction history data is collected by the NHMS each time a transaction is completed with the non-account holder after the non-account holder has been enrolled. The transaction history data includes a transaction number, a transaction date and time, an ATM number if the transaction occurred at an ATM, an indicator that the transaction was handled automatically by an ATM or that the transaction was handled manually by an operator/teller, a log of supervisory overrides including the identity of the supervisor that submitted the override, and check information, if applicable. The check information may include the amount of the check, the payor identity (e.g., payor name, address, and/or phone number), whether the check was cashed, and the reason why the check was not cashed, if applicable. 
The transaction history data also includes a flag or setting that indicates whether the enrolled non-account holder has been placed on a negative or fraud list by a bank or by the NHMS and also includes the identity of the bank, if applicable, that placed the non-account holder on the negative or fraud list. 122 The payor identity may be used to access payor records that are also stored in the customer and transaction history data store . The payor records may include the payor name, the payor address, the payor phone number (if applicable), the payor account number(s), and the payor transaction history. The payor transaction history may include the number of returned checks (i.e., “bounced” checks), the number of collected checks (i.e., “bounced” but then funded checks), dates when checks associated with the payor were cashed, and the identity of check payees. 120 115 115 120 FIG. 3 The non-account holder enrollment history data is collected by the NHMS each time an attempt to enroll a non-account holder is performed at the bank and/or at other banks. If a non-account holder approaches the bank or another bank to perform a transaction and the enrollment process (see ) is started, a log of the enrollment is stored. The log includes time and date information when the enrollment process occurred, the name of the bank where the enrollment process was started, the name of the teller that performed the enrollment process, an indicator as to whether the enrollment was successful, and, if the enrollment was not successful, a reason why the enrollment was not successful. Reasons why an enrollment process may not be successful include that the non-account holder did not complete the enrollment process (e.g., the non-account holder walks away), that the non-account holder was recently enrolled at another bank (e.g., enrolled at another bank in the past week), that the non-account holder was placed on a fraud list or a negative list by another bank or by the NHMS , that the non-account holder did not correctly answer verification or confirmation questions (e.g., the non-account holder was not able to correctly identify his or her previous addresses), and that biometric authentication of the non-account holder failed. In one implementation, biometric authentication fails only if the non-account holder was previously enrolled in the system and the non-account holder was placed on a fraud list or a negative list. 120 120 120 120 Once a non-account holder has been successfully enrolled, the non-account holder may perform transactions (e.g., check cashing and opening a new account) with the bank. At the same time, the NHMS begins tracking the transaction history of the enrolled non-account holder. In one implementation, a non-account holder only needs to enroll once with one bank in communication with the NHMS to perform transactions with all banks in communication with the NHMS . In another implementation, a non-account holder must enroll separately with a bank in communication with the NHMS before being allowed to perform transactions with that bank. 124 120 124 122 122 The report data store is a data store configured to hold non-account holder identity data, transaction history data, and enrollment history data used to generate management reports for banks in communication with the NHMS . 
The data stored in the report data store is typically a copy of all or some of the data stored in the customer and transaction history data store and may be accessed and manipulated without the risk of changing or corrupting the original data stored in the customer and transaction history data store . 120 115 105 110 122 145 105 110 120 The NHMS is configured to enable a bank to validate the identity of a non-account holder by receiving non-account holder identity information from the teller terminal and the ATM , and comparing the identity information to that stored in the customer and transaction data store and/or in one or more third party identity verification data stores . The non-account holder identity information collected at the teller terminal or at the ATM may include, for example, an identification number, a phone number, biometric data, and/or information captured from a driver's license. The NHMS may provide the bank with an indication that validation of the identity of the non-account holder has succeeded or, alternatively, with an indication that validation of the identity of the non-account holder has failed. 120 115 120 120 105 110 115 105 115 105 110 The NHMS is also configured to monitor and track the transactions of enrolled non-account holders with the bank and with other banks in communication with the NHMS . Specifically, the NHMS is configured to receive enrolled non-account holder transaction request data from the teller terminal and from the ATM . The enrolled non-account holder transaction request data includes an identifier of the enrolled non-account holder (e.g., the social security number of the non-account holder) and a transaction request. The transaction request is a request by the enrolled non-account holder to perform a transaction with the bank . For example, the transaction request may be a request received from the teller terminal to open a new account with the bank or a request received from the teller terminal or from the ATM to cash a check. 110 105 110 105 120 120 122 Once the ATM or the teller terminal disposes of a transaction request by either performing or not performing the requested transaction, the ATM or the teller terminal sends transaction disposition data to the NHMS . The transaction disposition data includes an identifier of the enrolled non-account holder and a transaction disposition. For example, the identifier may be the social security number of the enrolled non-account holder and the transaction disposition may indicate that the request to open a new account or to cash a check was denied and may provide information as to why the request was denied. The NHMS uses the transaction disposition data to update the non-account holder's transaction history data that is stored in the customer and transaction data store . 120 122 105 115 115 115 115 The NHMS may respond to a transaction request to open a new account by accessing the customer and transaction data store and providing the teller terminal with all or a portion of the transaction history of the enrolled non-account holder. By reviewing the transaction history of the enrolled non-account holder, the teller associated with the bank may make an informed decision as to whether to allow the non-account holder to open a new account with the bank . For example, if the enrolled non-account holder has had a history of being denied check cashing services by the bank or by other banks, the bank may decide not to allow the enrolled non-account holder to open a new account. 
105 115 115 The transaction history of the enrolled non-account holder that is displayed by the teller terminal (through a user interface) may include the date, details, and disposition of previous transactions between the bank and the enrolled non-account holder. For example, the transaction history may include the dates that check cashing services were requested by the enrolled non-account holder, the amounts of the checks, the name of the payor of each check, and whether the bank approved or denied cashing each check. 120 120 The displayed transaction history of the enrolled non-account holder also may include information regarding transactions between the enrolled non-account holder and one or more other banks that also communicate with the NHMS . To preserve the privacy of the enrolled non-account holder and of the other banks, the displayed transaction history related to the other banks is typically provided anonymously (e.g., the identities of the other banks and the payor are not provided) and may be limited to transactions that reflect negative, delinquent, or fraudulent behavior by the enrolled non-account holder. For example, the displayed transaction history between the enrolled non-account holder and the other banks may include information related to the number and type of transaction requests denied by the other banks in a predetermined interval of time (e.g., in the last three months) but may not include the identity of the one or more other banks or the details of the specific transactions between the non-account holder and the other banks (e.g., the check amount, the date when the enrolled non-account holder requested to cash the check, and the identity of the payor). Additionally or alternatively, the displayed transaction history of the enrolled non-account holder with respect to other banks may be limited to an indication that the enrolled non-account holder was or was not placed on a black list or a fraud list by the NHMS or by one or more of the other banks. 120 122 128 128 105 The NHMS may respond to a transaction request to cash a check by accessing the transaction history of the non-account holder stored in the customer and transaction data store . The risk rules, which may be tailored specifically to the bank, are applied to the transaction history by the risk engine . The risk rules that are applied may vary by non-account holder identity and may take into account many factors in assessing the risk related to cashing a check with a particular non-account holder. The factors may include the amount of the check (e.g., if the check amount is greater than $500, do not cash the check), the frequency that the non-account holder requests check cashing services with the bank and/or with other banks (e.g., if the non-account holder has requested that the bank cash a check more than two times in a week, do not cash the check), and the payor transaction history (e.g., if more than 10% of the checks from that payor are returned, do not cash the check). After applying the risk rules, the risk engine provides a recommendation that the bank cash the check or, alternatively, that the bank not cash the check. If the transaction with the enrolled non-account holder takes place at the teller terminal , the teller or a supervisor may choose to ignore the recommendation and instead cash or not cash the check, accordingly. 
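To make the risk-rule idea concrete, the sketch below encodes the three example factors mentioned above (a check amount over $500, more than two check-cashing requests in a week, and a payor whose returned-check rate exceeds 10%) as simple predicates over a transaction-history record. The thresholds come from the examples in this description, but the Python structure, field names, and function names are hypothetical and not part of the described system.

# Hypothetical sketch of a per-bank risk-rule evaluation, in the spirit of the
# risk engine 128 described above. Field and function names are assumptions;
# the thresholds mirror the examples given in the text.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Callable, List

@dataclass
class CheckRequest:
    amount: float                 # amount of the presented check
    payor_returned_rate: float    # fraction of this payor's checks returned
    request_time: datetime

@dataclass
class TransactionHistory:
    check_request_times: List[datetime] = field(default_factory=list)

@dataclass
class Rule:
    reason: str
    violated: Callable[[CheckRequest, TransactionHistory], bool]

def default_rules() -> List[Rule]:
    # Example rules drawn from the factors listed in the description.
    return [
        Rule("check amount exceeds $500",
             lambda req, hist: req.amount > 500),
        Rule("more than two check-cashing requests in the past week",
             lambda req, hist: sum(
                 1 for t in hist.check_request_times
                 if req.request_time - t <= timedelta(days=7)) > 2),
        Rule("more than 10% of this payor's checks were returned",
             lambda req, hist: req.payor_returned_rate > 0.10),
    ]

def evaluate(req: CheckRequest, hist: TransactionHistory, rules: List[Rule]):
    # Returns a pass/fail recommendation plus the reasons for any denial,
    # which a teller or supervisor could review before deciding to override.
    reasons = [r.reason for r in rules if r.violated(req, hist)]
    return (len(reasons) == 0, reasons)

Because the rules are just data, each bank could supply its own list in place of default_rules(), which matches the description of tailoring the risk analysis to each bank's particular risk sensitivity.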
110 110 110 120 126 120 110 110 110 120 If the transaction with the enrolled non-account holder takes place at the ATM , the ATM automatically cashes the check if the transaction request was approved. On the other hand, if the transaction request was not approved, the ATM may inform the enrolled non-account holder that the check cannot be cashed and may instruct the enrolled non-account holder to contact a teller at the corresponding bank. Alternatively, the NHMS may place the check cashing request in the transaction queue , and an operator of the NHMS may be contacted to manually approve or deny the transaction. The operator may respond to the transaction request by taking actions that may include, among other actions, accepting the transaction, rejecting the transaction, or requesting identification of the user. The ATM may be configured to enable communications between the operator and the enrolled non-account holder (e.g., the ATM may include a telephone). The ATM also may be configured to not cash checks of non-account holders that have not been enrolled in the NHMS and instruct the non-account holders to contact a teller at the corresponding bank. 120 120 120 While the NHMS is shown as a central processing system that provides non-account holder monitoring services for one or more banks, the functions performed by the NHMS may be performed entirely by an internal system at a single bank or may be distributed across multiple internal systems at multiple different banks. Additionally, the functions of the NHMS may be limited to tracking non-account holders and transactions with non-account holders that approach a single bank, rather than tracking non-account holders and their transactions when they approach any of multiple different banks. 130 105 120 145 130 The data network is a delivery network that enables direct or indirect communications between the teller terminal , the NHMS , and the third party identity verification data stores , irrespective of physical separation. Examples of the data network include the Internet, the World Wide Web, LANs, WANs, analog or digital wired and wireless telephone networks (e.g., PSTN, ISDN, and xDSL), radio, television, cable, satellite, and/or any other delivery mechanism for carrying data. 140 110 120 135 140 The ATM transaction switches include an ATM transaction processor and an ATM terminal driver that enable exchange of transactional data between the ATM and the NHMS across the virtual private network . In one implementation, the ATM transaction switches enable communications using the 912 messaging protocol. 145 120 130 145 120 145 145 The third party identity verification data stores are data stores accessible to the NHMS across the data network . The data stores contain identity verification data that may be used to validate the identity of a non-account holder when the non-account holder enrolls into the NHMS . The identity verification data may include, but are not limited to, name, social security or other identification number, most recent five addresses, date of birth, driver's license number, sex, height, weight, eye color, hair color, phone number, whether the person is deceased, and aliases. The identity verification data is typically indexed by social security number and/or name. Some third party data stores also may include biometric data (e.g., images of fingerprints). The third party identity verification data stores may provide identity verification data not otherwise available to the public. FIG. 2 FIG. 1 FIG. 
1 200 200 shows a transaction process for responding to a check cashing request from a non-account holder at a teller terminal. For convenience, particular components described with respect to are referenced as performing process . However, similar methodologies may be applied in other implementations where different components are used to define the structure of the system, or where the functionality is distributed differently among the components shown by . 202 204 200 502 500 FIG. 5 When the non-account holder requests to cash a check, the teller asks the non-account holder to provide the following information: a social security number or an assigned ID number and, optionally, a picture identification (ID), such as a driver's license (). If the non-account holder does not have a social security number or an assigned ID number (), the transaction process proceeds to an operation of an ID assignment process (show in and discussed below) that randomly assigns a unique ID number to the non-account holder. 204 120 122 115 206 120 115 115 200 302 300 FIG. 3 If the non-account holder has a social security number or an assigned ID number (), the NHMS accesses the customer and transaction data store to determine whether the social security number or assigned ID number corresponds to a non-account holder enrolled with the bank (). For example, the NHMS may determine whether a non-account holder record corresponding to the social security number or assigned ID number exists and, if the record does exist, may determine whether it indicates that the non-account holder is enrolled with the bank . If the record does not exist or if the record exists but the non-account holder is not enrolled with the bank , process proceeds to operation of a non-account holder enrollment process (shown in and discussed below). 208 105 120 122 145 210 120 212 214 105 216 Biometric data is then collected from the enrolled non-account holder (). For example, the collection of biometric data may include collecting images of the fingerprints of the right and left index fingers of the enrolled non-account holder using the fingerprint scanner of the teller terminal . The NHMS may then authenticate the identity of the enrolled non-account holder by comparing the collected biometric data with biometric data stored in the customer and transaction data store and/or in the third party identity verification data stores (). If the bioauthentication is not successful, the NHMS updates the corresponding non-account holder record with transaction disposition data to indicate that the check-cashing transaction was not successful due to the inability to confirm the identity of the enrolled non-account holder () and the transaction ends (). If the bioauthentication is successful, the teller scans the check using the check imager of the teller terminal (). 120 120 212 214 105 216 105 216 120 122 218 120 120 200 402 400 FIG. 4 A bank sometimes may not be able to perform bioauthentication due to an inability to capture biometric data from the non-account holder. For example, a person's fingerprints may be too worn to be detected by the fingerprint scanner. When bioauthentication is not possible, the bank may choose instead to ask the non-account holder a security question that was provided to the bank by the non-account holder when the non-account holder was enrolled in the NHMS . 
If the non-account holder does not correctly answer the security question, the NHMS updates the corresponding non-account holder record with transaction disposition data to indicate that the check-cashing transaction was not successful due to the inability to confirm the identity of the enrolled non-account holder () and the transaction ends (). If the non-account holder correctly answers the security question, the teller scans the check using the check imager of the teller terminal (). In some implementations, a bank may choose not to perform the bioauthentication and, instead, may choose to only use a picture ID to authenticate the identity of the non-account holder. The teller scans the check using the check imager of the teller terminal () to obtain the following data: the payor account number, the routing number, the check number, the payor name, the payor address, and the payor phone (if applicable). The NHMS accesses the payor records in the customer and transaction data store to determine whether the payor name corresponds to an enrolled payor (). For example, if the NHMS is able to access a payor record corresponding to the payor's name, then the payor has already been enrolled in the NHMS . Otherwise, process proceeds to operation of a payor enrollment process (shown in and discussed below). 120 122 128 220 120 If the payor is enrolled, the NHMS accesses the enrolled non-account holder transaction history in the corresponding non-account holder record in the customer and transaction data store , and the risk engine applies the risk rules associated with the bank to the accessed transaction history to determine whether to pass or not pass the transaction (i.e., whether to recommend or not recommend that the bank cash the check of the enrolled non-account holder) (). In another implementation, the NHMS also accesses payor transaction history and/or the enrolled non-account holder enrollment history and applies the risk rules associated with the bank to both the accessed transaction history and either or both of the accessed enrollment history or payor transaction history to determine whether to pass the transaction. 222 128 224 120 128 212 214 If the transaction does not pass (), the teller is informed that the transaction did not pass and is provided with a reason (e.g., the amount of the check exceeds a predetermined value set for that account holder) which may be communicated to the non-account holder. The teller and/or a supervisor of the bank may choose to override the decision of the risk engine (). If the teller and/or supervisor, chooses not to override the decision, the NHMS updates the corresponding non-account holder record with transaction disposition data to include details of the check cashing transaction (e.g., check amount, date, and payor information) and indicate that the check-cashing transaction was not successful due to the reasons specified by the risk engine (), and the transaction ends (). 222 224 105 226 120 212 214 If the transaction passes () or does not pass but the teller and/or supervisor chooses to allow the transaction (), the teller terminal deducts any necessary fees from the check amount and displays payout information to the teller (). 
The teller then dispenses the funds to the enrolled non-account holder, the NHMS updates the corresponding non-account holder record with transaction disposition data to include details of the check cashing transaction and to indicate that the check-cashing transaction was successful (), and the transaction ends (). FIG. 3 FIG. 1 FIG. 1 300 300 shows a process for enrolling a non-account holder with a non-account holder monitoring system. For convenience, particular components described with respect to are referenced as performing process . However, similar methodologies may be applied in other implementations where different components are used to define the structure of the system, or where the functionality is distributed differently among the components shown by . 302 304 306 308 310 The teller requests a driver's license from the non-account holder (). If the customer does not have a driver's license on his or her person, the teller requests another ID from the non-account holder (). Examples of other IDs that a non-account holder may provide include a passport, a military ID, a work ID, and a credit card. If the non-account holder does not have a driver's license or other ID, or claims that he or she has a driver's license or other ID but does not have the driver's license or other ID on his or her person (), the teller or the teller's supervisor may decide whether to continue the enrollment process (). If the teller or the teller's supervisor decides not to continue the enrollment process, the enrollment process and the transaction end (). In this case, the non-account holder is not enrolled, and the bank does not cash the check of the non-account holder. 105 312 314 If the customer has a driver's license, the teller swipes the driver's license using the card reader of the teller terminal (). Swiping the driver's license collects the following identity information: name, address, date of birth, driver's license number, sex, height, eye color, hair color, and weight. The identity information is flagged as having been collected from a driver's license. The card reader may or may not successfully read the data on the driver's license (). 316 If the non-account holder has a driver's license that was not successfully read by the card reader, has another ID on his or her person, or does not have another ID but the teller or supervisor decides to go ahead with the enrollment process, the teller manually enters the identity information and the source of the identity information (). The identity information may include some or all of the information available from the preferred ID and/or provided orally by the non-account holder and typically includes at least the name, address, sex, and date of birth of the non-account holder. The entered identity information is flagged to reflect the source of the identity information (e.g., received orally from the non-accountholder, received from a driver's license, received from a passport, and received from a credit card). 318 320 105 The teller also enters the phone number and, if an assigned ID number has not been generated, the social security or other identification number of the non-account holder (). The teller then collects biometric data from the non-account holder (). For example, the collection of biometric data may include collecting images of the fingerprints of the right and left index fingers of the enrolled non-account holder using the finger scanner of the teller terminal . 
120 122 115 322 115 324 122 326 310 Once the non-account holder identification information, including biometric data, has been collected, the NHMS accesses the customer and transaction data store to determine whether the non-account holder has been placed on a negative/fraud list by the bank or another bank, or has recently been enrolled at another bank (). If the non-account holder has been placed on a negative/fraud list by the bank or by another bank, the teller is informed that the non-account holder has been flagged as being a problem customer (), the enrollment is denied, the enrollment denial is logged in the customer and transaction data store (), and the transaction ends (). In this case, the non-account holder is not enrolled, and the bank does not cash the check of the non-account holder. 324 122 326 310 115 115 115 If the non-account holder has recently enrolled at another bank (e.g., has enrolled at another bank in the past week), the teller is informed that the non-account holder has recently enrolled at another bank (), the enrollment is denied, the enrollment denial is logged in the customer and transaction data store (), and the transaction ends (). The non-account holder is thereby prevented from cashing a check at bank . Non-account holders that have recently enrolled at another bank are not allowed to enroll again at bank because of the increased risk of bank fraud posed by the non-account holder. Cashing fraudulent checks at multiple different banks in the same day or in a short interval of time is a common pattern among perpetrators of bank fraud. Preventing a non-account holder from enrolling at bank if the non-account holder has recently enrolled at another bank helps prevents this type of bank fraud. 120 122 328 324 122 326 310 The NHMS accesses the customer and transaction data store to check whether the collected biometric data corresponds to another non-account holder that has already been enrolled (). If the collected biometric data corresponds to another non-account holder that has already been enrolled, the teller is informed that the biometric data does not correspond to the identity offered by the non-account holder (), the enrollment is denied, the enrollment denial is logged in the customer and transaction data store (), and the transaction ends (). 120 145 330 332 120 If the non-account holder is not in a negative or fraud list, is not recently enrolled at another bank, and the collected biometric data does not correspond to an already enrolled non-account holder, the NHMS accesses third party identity verification data stores () to retrieve identity verification data corresponding to the non-account holder (). Typically, the NHMS is able to access the appropriate identity verification data using either the name or the social security or other identification number offered by the non-account holder. 120 334 324 122 326 310 The NHMS validates the identity information collected from the non-account holder by comparing the collected information to the retrieved identity verification data (). If the identity verification data does not match the identity information collected from the non-account holder, the teller is informed that the enrollment was not successful (), the enrollment is denied, the enrollment denial is logged in the customer and transaction data store (), and the transaction ends (). 
336 338 122 326 310 If the identity verification data generally matches the identity information collected from the non-account holder with the exception that the collected identity information is different in minor ways (e.g., the driver's license number, phone number, or address is different), the teller may be provided with verification questions to ask the non-account holder (). The verification questions may include, for example, a request that the non-account holder specify the last three residence addresses of the non-account holder. The teller asks the non-account holder the verification questions () and, if the non-account holder is not able to provide the correct answers to the teller's satisfaction, the enrollment is denied, the enrollment denial is logged in the customer and transaction data store (), and the transaction ends (). 340 122 326 310 342 122 344 300 216 200 FIG. 2 If the identity verification information completely matches the collected identity information, or if the non-account holder correctly answers the verification questions, the teller is provided with confirmation questions that the teller is required to answer (). The confirmation questions are questions that relate to the physical appearance of the non-account holder such as, for example, “does the customer have blue eyes?” or “is the customer 6 foot 4 inches in height?” If the teller answers no to any of the confirmation questions, the enrollment is denied, the enrollment denial is logged in the customer and transaction data store () and the transaction ends (). If the teller answers yes to each confirmation question, the teller is informed that the enrollment was successful (), the successful enrollment is logged in the customer and transaction data store (), and process proceeds to operation of transaction process (). FIG. 4 FIG. 1 FIG. 1 400 400 shows a process for enrolling a payor with a non-account holder monitoring system. For convenience, particular components described with respect to are referenced as performing process . However, similar methodologies may be applied in other implementations where different components are used to define the structure of the system, or where the functionality is distributed differently among the components shown by . 105 120 402 404 406 122 408 410 After the check routing number (RTN) is either manually entered by the teller at the teller terminal or is scanned using the check imager, the NHMS first verifies that the RTN is valid by running the RTN through a standard, well-known RTN algorithm, and then verifies that the RTN corresponds to a known bank (). If the RTN is not valid or does not correspond to a known bank, the teller is informed of this () and the teller or a supervisor may decide whether to proceed with the payor enrollment (). If the teller or supervisor decides not to proceed with the payor enrollment, the payor enrollment denial is logged in the customer and transaction data store () and the transaction ends (). 412 120 122 414 400 220 200 FIG. 2 If the teller or supervisor decides to proceed with the payor enrollment despite an invalid or unknown RTN, or if the RTN is valid and corresponds to a known bank, the teller is prompted to enter the payor information (). The payor information includes the payor name, address, account number, and phone number (if applicable). The NHMS logs the payor enrollment and stores the payor information in a payor record in the customer and transaction data store (). Process then proceeds to operation of process (). 
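The "standard, well-known RTN algorithm" mentioned in the payor-enrollment step above is the ABA routing-number checksum: the nine digits are weighted 3, 7, 1 in repeating order and the weighted sum must be divisible by 10. Below is a small sketch; the known-bank lookup is a placeholder assumption for whatever bank directory the system would actually consult.

# Routing transit number (RTN) validation sketch. The checksum weights the nine
# digits 3, 7, 1, 3, 7, 1, 3, 7, 1 and requires the total to be a multiple of 10.
def rtn_checksum_valid(rtn: str) -> bool:
    if len(rtn) != 9 or not rtn.isdigit():
        return False
    weights = (3, 7, 1, 3, 7, 1, 3, 7, 1)
    return sum(int(d) * w for d, w in zip(rtn, weights)) % 10 == 0

def rtn_known_bank(rtn: str, known_rtns: set) -> bool:
    # Placeholder for the "corresponds to a known bank" check described above.
    return rtn in known_rtns

# Example: 021000021 is a commonly published, checksum-valid routing number.
assert rtn_checksum_valid("021000021")
assert not rtn_checksum_valid("123456789")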
FIG. 5 FIG. 1 FIG. 1 500 500 shows a process for assigning a unique ID number to a non-account holder. For convenience, particular components described with respect to are referenced as performing process . However, similar methodologies may be applied in other implementations where different components are used to define the structure of the system, or where the functionality is distributed differently among the components shown by . 120 502 120 504 120 120 506 500 300 FIG. 3 If the non-account holder does not have a social security number and is not enrolled in the NHMS , the teller may select to assign a unique ID number to the non-account holder (). The NHMS generates a unique ID number in response to a request by the teller (). The unique ID number is subsequently used by the NHMS to track all interactions with the non-accountholder. The unique ID number may be, for example, a randomly generated nine digit number similar to a social security number with the exception that the nine digit number begins with the number “9.” Once a unique ID number has been generated, the NHMS creates a non-account holder record that stores the generated ID number (). As discussed previously, the enrollment history, the identity information, and the transaction history of the non-account holder are subsequently stored in the non-account holder record. After the unique ID number is generated and assigned to the non-account holder, process proceeds to customer enrollment process (). FIG. 6 FIG. 1 FIG. 1 600 600 115 602 604 105 606 105 shows a transaction process for responding to a request from a non-account holder to open a new account at a teller terminal. For convenience, particular components described with respect to are referenced as performing process . However, similar methodologies may be applied in other implementations where different components are used to define the structure of the system, or where the functionality is distributed differently among the components shown by . When the non-account holder requests to open a new account with the bank , the teller asks the non-account holder for his or her social security number (). If the non-account holder does not have a social security number, the request to open a new account is denied and the transaction ends (). If the non-account holder provides a social security number, the teller enters the social security number using the teller terminal (). In another implementation, if the non-account holder does not have a social security number, the bank does not deny the request to open a new account. Instead, the teller enters the name and/or address of the non-account holder using the teller terminal . 120 122 608 120 122 610 105 612 614 The NHMS determines whether the non-account holder is enrolled by accessing the customer and transaction data store using the entered social security number (). In implementations in which the non-account holder does not have a social security number and the bank still proceeds with the transaction, the NHMS determines whether the non-account holder is enrolled by accessing the customer and transaction data store using the entered name and/or address. If the non-account holder is not enrolled, the teller asks the non-account holder for a driver's license (). If the customer has a driver's license, the teller swipes the driver's license using the card reader of the teller terminal (). The card reader may or may not successfully read the data on the driver's license (). 
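The ID-assignment step in process 500 above amounts to generating a random nine-digit number that begins with "9" and does not collide with an identifier already stored in the system. A minimal sketch follows; the id_exists lookup is a stand-in assumption for a query against the customer and transaction data store 122.

# Sketch of unique ID assignment (process 500): a random nine-digit number
# beginning with "9", retried until it is not already in use.
import random

def assign_unique_id(id_exists) -> str:
    while True:
        candidate = "9" + "".join(str(random.randint(0, 9)) for _ in range(8))
        if not id_exists(candidate):
            return candidate

# Example usage with an in-memory stand-in for the data store.
issued = set()
new_id = assign_unique_id(lambda candidate: candidate in issued)
issued.add(new_id)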
616 618 620 If the non-account holder has a driver's license that was not successfully read by the card reader or does not have a driver's license, the teller manually enters identity information (). The identity information entered by the teller may be obtained from the driver's license of the non-account holder and/or from the non-account holder in response to teller questions. The information may include, for example, the name, address, driver's license number, date of birth, and sex of the non-account holder. The teller also enters the phone number of the non-account holder () and collects customer biometric data from the non-account holder (). Some banks may not require that the teller collect customer biometric data from the non-account holder. 120 622 120 122 624 120 636 604 120 626 120 636 604 120 628 If the non-account holder is enrolled in the NHMS , the teller collects customer biometric data from the non-account holder (). The NHMS authenticates the identity of the enrolled non-account holder by comparing the collected biometric data with that stored in the customer and transaction data store (). If the bioauthentication is not successful, the NHMS updates the corresponding non-account holder record () with transaction disposition data to indicate that the new account request was denied due to the inability to confirm the identity of the enrolled non-account holder. The transaction then ends (). If the bioauthentication is successful, the NHMS verifies that the identification number in the corresponding non-account holder record matches the identification number offered by the non-account holder requesting to open a new account (). If the identification numbers do not match, the NHMS updates the non-account holder record () with transaction disposition data to indicate that the new account request was denied due to the inability to confirm the identity of the enrolled non-accountholder. The transaction then ends (). If the bioauthentication is successful and the identification numbers match, the NHMS collects the non-account holder transaction history from the corresponding non-account holder record (). Some banks may not require bioauthentication prior to collecting the enrolled non-account holder transaction history. 120 145 630 632 120 634 120 636 604 638 640 122 636 604 The NHMS accesses third party identity verification data stores () to retrieve identity verification data corresponding to the non-account holder (). The NHMS validates the identity information collected from the non-account holder by comparing the collected information to the retrieved identity verification data (). If the identity verification data does not match the collected identification information, the NHMS denies the new account request, logs the new account request denial () and ends the transaction (). If the identity verification matches or generally matches the collected identification information, the teller is provided with verification questions to ask the non-account holder (). The teller asks the non-account holder the verification questions (), and if the non-account holder is not able to provide the correct answers to the teller's satisfaction, the new account request is denied, the new account request denial is logged in the customer and transaction data store (), and the transaction ends (). 
If the non-account holder correctly answers the verification questions, the teller is provided with confirmation questions that the teller is required to answer (642). If the teller answers no to any of the confirmation questions, the new account request is denied, the new account request denial is logged in the customer and transaction data store 122 (636), and the transaction ends (604). If the teller answers yes to each confirmation question, the NHMS 120 displays a confirmation of the non-account holder identity and, if applicable, displays the non-account holder transaction history (644). The teller then decides whether to open a new account for the non-account holder based on the displayed information (646). The new account request denial or approval is logged in the customer and transaction data store 122 (636), and the transaction ends (604).

While the transaction process 600 is directed to responding to a request from a non-account holder to open a new account, the same general process 600 may be used to respond to any transaction request from a non-account holder in which verification of the identity of the non-account holder is desirable. When generalizing process 600 to any transaction request, operation 646 is replaced by the teller deciding whether to perform the transaction based on the displayed information. An example of a transaction request that may be responded to through process 600 includes a request to grant a non-account holder a loan.

FIG. 7 shows a transaction process 700 for responding to a check cashing request from a non-account holder at an ATM. For convenience, particular components described with respect to FIG. 1 are referenced as performing process 700. However, similar methodologies may be applied in other implementations where different components are used to define the structure of the system, or where the functionality is distributed differently among the components shown by FIG. 1.

The non-account holder selects to cash a check at the ATM 110 (702). The ATM 110 prompts the non-account holder to enter his or her social security number or assigned ID number (704), his or her date of birth (706), and the amount of the check (708). Some banks may not require that the non-account holder enter a date of birth.

In some implementations, the ATM 110 also may prompt the non-account holder to input a personal identification number (PIN) for security purposes (not shown). The PIN may be assigned to the non-account holder during the enrollment process. If the ATM 110 prompts the non-account holder to enter a PIN, the ATM 110 validates the identity of the non-account holder by sending the PIN and the identity information (e.g., SSN or assigned ID) to the NHMS 120 through the ATM transaction switches 140. The NHMS 120 accesses the non-account holder record corresponding to the received SSN or assigned ID number and determines whether the received PIN matches the assigned PIN stored in the record. The NHMS 120 then sends an identity validated signal (if the PINs match) or an identity not validated signal (if the PINs do not match) to the ATM 110 through the ATM transaction switches 140. If the identity is validated, the process 700 proceeds to operation 710. If the identity is not validated, the ATM 110 informs the non-account holder that the transaction has been rejected due to an inability to confirm the non-account holder's identity, and the transaction ends.
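The PIN check just described for the ATM flow reduces to a record lookup and comparison. The sketch below is an illustration under assumed field names, not the patent's implementation:

```python
def validate_atm_pin(records, identity_number, entered_pin):
    """Look up the non-account holder record by SSN or assigned ID and
    compare PINs, returning the signal sent back to the ATM. Field names
    are illustrative assumptions."""
    record = records.get(identity_number)
    if record is not None and record.get("pin") == entered_pin:
        return "identity validated"
    return "identity not validated"
```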
After the non-account holder has inputted identity information (and a valid PIN, if applicable), the ATM 110 prompts the non-account holder to insert the check into the ATM 110 (710). The ATM 110 proceeds to scan the check with the check imager (712). If the check is not successfully read by the scan, the ATM 110 attempts to rescan the check (714). If the rescan is not successful (716), the ATM 110 returns the check to the non-account holder and displays a message informing the non-account holder that the check cannot be read by the ATM and directing the non-account holder to visit a teller at the bank 115 (718). The ATM transaction with the non-account holder ends (720). The ATM sends transaction disposition data (e.g., the SSN, the date of birth, the amount of the check, and that the transaction failed due to the inability to read the check) to the ATM transaction switches 140 (722) which, in turn, send the transaction disposition data to the NHMS 120 (724). The NHMS 120 updates the customer and transaction data store 122 with the transaction disposition data (726).

If the check is read successfully by the scan or by the rescan, a check cashing transaction request is sent to the ATM transaction switches 140 (728). The ATM transaction switches process and send the check cashing request to the NHMS 120 (730). The NHMS 120 accesses the transaction history in the non-account holder record corresponding to the entered social security number or assigned ID and applies the risk rules associated with the bank 115 using the risk engine 128 (732). In some implementations, the NHMS also accesses the enrollment history of the non-account holder and the check payor transaction history and applies the risk rules to the transaction history, enrollment history, and payor transaction history. If the check cashing transaction is approved after the risk analysis, the NHMS 120 also calculates and deducts any bank or transaction fees from the amount of the check (734).

The transaction recommendation generated by the risk analysis is sent by the NHMS 120 to the ATM transaction switches 140 (736), which process and send the transaction recommendation to the ATM 110 (738). The ATM 110 analyzes the transaction recommendation and determines whether the transaction passed the risk analysis (740). If the transaction did not pass the risk analysis, the ATM 110 returns the check to the non-account holder, informs the non-account holder that the check cannot be cashed, and instructs the non-account holder to visit a teller at the bank 115 (718). The ATM transaction ends (720) and the ATM sends transaction disposition data to the ATM transaction switches 140 (722), which process and send the transaction disposition data to the NHMS 120 (724). The NHMS 120 updates the non-account holder record in the customer and transaction data store 122 with the transaction disposition data (726).

If the transaction passed the risk analysis, the ATM 110 presents the non-account holder with payback information indicating the amount of money that the ATM will dispense if the customer chooses to accept the transaction (i.e., the amount of the check minus the calculated bank and transaction fees) (742). The non-account holder may then select whether or not to accept the transaction and corresponding transaction and bank fees (744). If the non-account holder does not accept the transaction, the check is returned to the non-account holder (746), and the ATM transaction ends (720).
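The payback figure presented at operation 742 is simply the check amount less the bank and transaction fees deducted at operation 734. A worked sketch follows, with illustrative fee values that are not taken from the patent:

```python
def compute_payback(check_amount, bank_fee=1.50, transaction_fee_rate=0.02):
    """Return the amount the ATM would dispense and the transaction fee.
    The flat fee and percentage rate are illustrative assumptions."""
    transaction_fee = round(check_amount * transaction_fee_rate, 2)
    payback = max(round(check_amount - bank_fee - transaction_fee, 2), 0.0)
    return payback, transaction_fee

# Example: a 250.00 check -> transaction fee 5.00, payback 243.50
payback, fee = compute_payback(250.00)
```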
The ATM sends transaction disposition data indicating that the non-account holder rejected the transaction to the ATM transaction switches 140 (722), which process and send the transaction disposition data to the NHMS 120 (724). The NHMS 120 updates the non-account holder record in the customer and transaction data store 122 with the transaction disposition data (726).

If the non-account holder accepts the transaction, the ATM 110 dispenses the funds and a receipt (748) and sends transaction disposition data to the ATM transaction switches 140 indicating that the check was successfully cashed (750). The ATM transaction switches 140 process and send the transaction disposition data to the NHMS 120 (752). The NHMS 120 updates the non-account holder record in the customer and transaction data store 122 with the transaction disposition data (754).

Because process 700 records transaction disposition data in the corresponding non-account holder record, a teller is able to access the transaction disposition data to immediately tell a non-account holder when and why his or her check cashing transaction failed at the ATM 110. The teller enters the non-account holder identification number and, if available, scans the rejected check using the teller terminal 105. The teller terminal 105 accesses the transaction disposition data in the non-account holder record stored in the customer and transaction data store 122 and enables the teller to perceive the transaction disposition data. The teller may then provide immediate feedback to the non-account holder as to why the transaction failed. For example, the teller may inform the non-account holder that the check was not cashed due to the inability to scan the check because of a check jam, or that the check was rejected because of violation of a risk rule (e.g., the check amount exceeded the maximum amount allowed). The transaction disposition data also may inform the teller that the non-account holder was not provided with sufficient funds by the ATM 110 (i.e., the non-account holder was shorted). The teller may then dispense the additional funds owed to the non-account holder.

Other implementations are within the scope of the following claims.

DESCRIPTION OF DRAWINGS

FIG. 1 is a block diagram of a system for identifying non-account holders and monitoring the transaction history of non-account holders.
FIG. 2 is a flow chart illustrating a process for responding to a check cashing request from a non-account holder at a teller terminal.
FIG. 3 is a flow chart illustrating a process for enrolling a non-account holder with a non-account holder monitoring system.
FIG. 4 is a flow chart illustrating a process for enrolling a payor with a non-account holder monitoring system.
FIG. 5 is a flow chart illustrating a process for assigning a unique ID number to a non-account holder.
FIG. 6 is a flow chart illustrating a process for responding to a request from a non-account holder to open a new account at a teller terminal.
FIG. 7 is a flow chart illustrating a process for responding to a check cashing request from a non-account holder at an Automated Teller Machine (ATM).
The healthcare law fight isn't over

On Thursday, an unusual Supreme Court majority of "one" — Chief Justice John G. Roberts Jr. — found that the healthcare law's individual mandate is unconstitutional under the power of Congress to regulate interstate commerce. But, surprise, the mandate is constitutional as a tax. This strange reasoning, not fully embraced even by the four concurring justices, handed judicial conservatives the most recent in a long parade of disappointments. No matter how controversial, contradictory and complex the ruling is, it represents a major legal victory for the Obama administration and other supporters of the Affordable Care Act. Roberts closed the open-ended commerce clause door to sweeping federal regulatory authority over just about anything. This is something conservatives could have embraced — had he not then opened an even wider door that expanded the taxing powers of Congress. Roberts first concluded that the individual mandate was not a tax when it came to jurisdictional issues under the Anti-Injunction Act (thereby allowing the court to decide the case). But, like the Decepticon villains in a "Transformers" movie, the mandate could convert into a tax for constitutional authority purposes. This flew in the face of the Affordable Care Act's history and language. President Obama himself once insisted the law didn't impose a tax. Logical or not, the ruling underscores the dangers of relying too heavily on the Supreme Court to solve policy problems. Conservatives should have used the time that the court was deliberating to formulate attractive legislative proposals to both repeal and replace this unpopular law. But they didn't. So where does this leave us? Millions of Americans may well opt out of the plan, given the sizable gap between the modest penalty amounts imposed for not purchasing insurance and the cost of doing so. Enforcement of the penalty (or the tax, under Roberts' reasoning) is likely to be relatively weak. Uninsured or otherwise noncompliant Americans who do not file federal income taxes or do not expect to receive year-end tax refunds will remain free from the IRS penalty box. On the other hand, the federal government may have a freer hand to regulate whatever it wants (broccoli, anyone?), as long as it just charges you a tax if you don't comply. And the government doesn't even have to openly call it a tax in advance. We have already heard cries for repealing the law in Congress, but the fact is that most of the healthcare industry is resigned to shrugging its shoulders and falling back into line with the political deals it cut with the Obama administration several years ago. The political case for repeal will become much stronger among grass-roots voters — particularly independent ones — outside the Beltway this fall if it is combined with a credible, attractive alternative that offers better solutions to chronic health policy problems. Those who think "Obamacare" doesn't serve Americans need to come up with a more decentralized and market-based alternative that improves the lives of Americans.
A better plan would include a combination of defined-contribution financing of taxpayer subsidies (for Medicare, Medicaid and private insurance, respectively), and a restructuring of the healthcare safety net to protect the most vulnerable individuals and their families (with such things as high-risk pools, protection against restrictions on coverage of pre-existing conditions for those who have maintained continuous insurance coverage). The country needs a more competitive healthcare marketplace that encourages more entry and less command-and-control regulation. New insurance purchasing vehicles such as the exchanges called for under Obama's law should remain optional, not exclusive, and should welcome all willing buyers and sellers. By providing better and more usable information about the "value" of healthcare options — including how different healthcare providers perform — but without dictating decisions, the federal and state government could empower consumers to make more responsible choices on their own. It is just about certain that the courts and politicians will continue to disappoint us, but that's all the more reason why the rest of us need to reclaim our roles and responsibilities in fixing what Washington keeps breaking. Tom Miller heads the "Beyond Repeal and Replace" project at the American Enterprise Institute, and is the coauthor of "Why ObamaCare Is Wrong for America."
https://www.latimes.com/opinion/la-xpm-2012-jun-29-la-oe-miller-health-care-decision-20120629-story.html
In the US Supreme Court Janus case, the court has taken the first step to dismantle organized labor in favor of the influence and power of corporations and Republican-leaning candidates. (https://www.supremecourt.gov/opinions/17pdf/16-1466_2b3j.pdf) The history of organized labor and the path to leveling the playing field between management and workers is one of violent resistance and bloody confrontation orchestrated by those who value profits over people. Instead of taking a cooperative approach to bargaining with those whose labor powers the engines of business, corporations look to put a stranglehold on those who seek such accommodation. In Janus, the plaintiff sought relief from 'agency fees' that covered the cost of collective bargaining. He argued these fees infringed on his First Amendment rights because he objected to the negotiation positions and proposals decided on by the union. While the majority of the court supported the First Amendment argument of the plaintiff, Justice Kagan, joined by Justices Ginsburg, Sotomayor, and Breyer, dissented based on the sound precedent of Abood v. Detroit Board of Education, 432 U.S. 209 (1977). In the dissent, Kagan wrote, "Rarely if ever has the Court overruled a decision—let alone one of this import—with so little regard for the general principles of stare decisis. There are no special justifications for reversing Abood. It has proved workable. No recent developments have eroded its underpinnings. And it is deeply entrenched, in both the law and the real world. More than 20 States have statutory schemes built on the decision. Those laws underpin thousands of ongoing contracts involving millions of employees. Reliance interests do not come any stronger than those surrounding Abood. And likewise, judicial disruption does not get any greater than what the Court does today. I respectfully dissent." The majority ignored the precedent of stare decisis, arguing Abood was faulty in its logic. With this decision, as Kagan points out in her dissent, "the Court succeeds in its 6-year campaign to reverse Abood." What we have here is the best example of the danger in pendulum swings to extremes. An agenda-driven Supreme Court is a danger to America. In Citizens United (Citizens United v. Federal Election Commission, 558 U.S. 310), the court put the election process up for sale. And despite Janus's claims to the insidious and greedy nature of the union's demands, organized labor is a pauper compared to the deep pockets of the conservative, Republican-leaning, billionaires club. If this were a soccer match, the court just blinded the labor team. Janus claims the union denied his First Amendment rights because he disagreed with the union proposals. Nonsense. It is just as likely he preferred the old patronage system once so ingrained in state and municipal government, and the union was a roadblock to such practices. Now, this is not to say unions are entirely blameless here. It may seem that unions ignore the greater public good when negotiating. This is disingenuous. Public unions consist of taxpayers. Unions are not isolated from the public. They have as vested an interest in the public good as any other citizen. The pendulum of law swings to and fro, with trends toward labor or management almost interchangeable. Except in this case. The court used this as a convenient smokescreen to weaken unions and the political philosophies they endorse.
The spread of so-called "right-to-work" statutes is an unfortunate misnomer for taking the leverage away from those who perform the work and shifting it to those who control the jobs. Public unions were the target, but private unions are next. The real enemy here is not the unions or even management but the political system that runs on power and money. In an unpublished manuscript by Professor Nicholas Easton, community activist and former President of the Providence City Council, entitled The Political Machine Reexamined, Easton writes, "To begin let me lay out a few propositions. First is the idea that, generally speaking, for at least a century or so the Democratic Party has represented the economic interests of the poor and the Republican Party has represented the economic interests of the wealthy. Second is the idea that, generally speaking, there are only two sources of power in our democracy; money and people. Finally is the idea that, given the two previous assertions, Republicans will win most political battles based on money and Democrats will win those based on superior organization and mobilization of people." (excerpt from https://joebroadmeadowblog.com/2018/07/04/guest-post-the-political-machine-reexamined-by-professor-nicholas-easton/) Organizations such as labor unions serve an essential function in balancing the power in politics. Being a member of a union does not limit one's ability to vote for any candidate, support any cause, or oppose any policy. Freedom of expression, the venerable right of the First Amendment, is not silenced by "agency fees." This decision is a microcosm of all that is wrong in our political process. We have lost the once stable balance between those with the personal or corporate wealth to support candidates they endorse, and the collective assets of organized labor. The ever-growing income gap, coupled with the disintegration of the middle class, now creates two classes in society: those who wield power through economic control and those subject to it. Until we negate the power of money in politics, this balance between unions and management is necessary. If you need more proof of the politics of this decision, look at the plethora of organizations offering ways to "opt out" of union participation. For example, https://www.mypaymysay.com/ is part of the Mackinac Center for Public Policy (https://www.mackinac.org/), which claims to be a non-partisan organization but is backed by a host of conservative Republicans. Research shows these organizations are all backed by conservative Republican anti-labor groups. Make no mistake about it. Janus is not a First Amendment case. It is another volley in the war of powerful corporate money against the once competing power of unions. And the court picked sides. This is not the end, but it may be the beginning of the end of the balance of power.
https://joebroadmeadowblog.com/2018/07/10/the-janus-decision-corporate-money-loves-labors-loss/
This is the latest report, covering the current COVID-19 impact on the Telematics Market. The pandemic of Coronavirus (COVID-19) has affected every aspect of life globally. This has brought along several changes in Telematics Market conditions. The rapidly changing Telematics Market scenario and initial and future assessment of the impact is covered in the report. It covers the entire Telematics Market with an in-depth study on revenue growth and profitability. The report also delivers on key players along with a strategic standpoint pertaining to price and promotion. The Telematics market has witnessed growth from USD XX million to USD XX million from 2014 to 2019. With the CAGR of X.X%, this market is estimated to reach USD XX million in 2026. The report mainly studies the size, recent trends and development status of the Telematics market, as well as investment opportunities, government policy, market dynamics (drivers, restraints, opportunities), supply chain and competitive landscape. Technological innovation and advancement will further optimize the performance of the product, making it more widely used in downstream applications. Moreover, Porter’s Five Forces Analysis (potential entrants, suppliers, substitutes, buyers, industry competitors) provides crucial information for knowing the Telematics market. Chapter 1 provides an overview of Telematics market, containing global revenue, global production, sales, and CAGR. The forecast and analysis of Telematics market by type, application, and region are also presented in this chapter. Chapter 2 is about the market landscape and major players. It provides competitive situation and market concentration status along with the basic information of these players. Chapter 3 provides a full-scale analysis of major players in Telematics industry. The basic information, as well as the profiles, applications and specifications of products market performance along with Business Overview are offered. Chapter 4 gives a worldwide view of Telematics market. It includes production, market share revenue, price, and the growth rate by type. Chapter 5 focuses on the application of Telematics, by analyzing the consumption and its growth rate of each application. Chapter 6 is about production, consumption, export, and import of Telematics in each region. Chapter 7 pays attention to the production, revenue, price and gross margin of Telematics in markets of different regions. The analysis on production, revenue, price and gross margin of the global market is covered in this part. Chapter 8 concentrates on manufacturing analysis, including key raw material analysis, cost structure analysis and process analysis, making up a comprehensive analysis of manufacturing cost. Chapter 9 introduces the industrial chain of Telematics. Industrial chain analysis, raw material sources and downstream buyers are analyzed in this chapter. Chapter 10 provides clear insights into market dynamics. Chapter 11 prospects the whole Telematics market, including the global production and revenue forecast, regional forecast. It also foresees the Telematics market by type and application. Chapter 12 concludes the research findings and refines all the highlights of the study. Chapter 13 introduces the research methodology and sources of research data for your understanding. About Us: ReportsnReports.com is your single source for all market research needs. 
Our database includes 500,000+ market research reports from over 95 leading global publishers & in-depth market research studies of over 5000 micro markets. We provide 24/7 online and offline support to our customers.
Meet the Brass sisters, two passionate food explorers on a mission to tackle their culinary bucket list. The women "flirt" their way into chefs' kitchens to uncover unique ethnic delights, then head home to create tantalizing cross-cultural mash-ups. Upcoming Broadcasts: Burger Meets Dosa (#101H) Duration: 26:16 STEREO TVPG (Secondary audio: DVI) Follow the Brass sisters as they tackle their burger bucket list, then "flirt" their way into an Indian kitchen to uncover the mystique of Indian dosa. After some careful experimentation in their home kitchen, they serve up a cheeseburger dosa.
https://www.kqed.org/tv/programs/index.jsp?pgmid=24493
Why aren’t more cities implementing placemaking strategies, which are proven to expand economic activity, increase mobility, protect the environment, and create more equitable places? CNU’s Project for Code Reform seeks to streamline the code reform process by providing local governments place-specific incremental coding changes that address the most problematic barriers first, build political will, and ultimately create more walkable, prosperous, and equitable places. The YIMBY Movement: Opportunities and Challenges for Planners Yes In My Backyard (YIMBY) is a relatively new movement, but YIMBYs are quickly gaining political power and numbers. This course discusses the origins, goals, and tactics of the YIMBY movement. Measuring Neighborhood Segregation and Diversity This course reviews the various ways to measure both segregation and diversity at the neighborhood scale. Introduction to City Planning 3: Midcentury Modern (1940-1979) This course explores the central role of planning in envisioning cities in the middle 20th century. World War II and the Cold War re-ordered power and politics in new ways. The tragic destruction and loss of World War II gave transformed into exciting opportunities for planners to try new things, in new ways. Transportation Planning: Effects on the Environment, Health, and Social Justice This course discusses the local and global impacts of transportation systems and the mitigation of those impacts. The course also identifies prospects for change, as achieved by technology, transportation management, and pricing. Transportation Planning: The Role of Transportation Systems in Social and Economic Life By the end of this course, you will have a strong understanding of the way in which transportation systems interact with society and the economy. The Ethics of Disruptive Transportation Technologies This course discusses the process for making ethical decisions as part of planning for disruptive technologies. Regulatory Implications of Tiny Homes In this course we will define a tiny home and explore the history and appeal of this seemingly recent movement. The course touches on challenges associated with the legal development and regulation of this alternative residential option. Introduction to New Mobility The course on "New Mobility" covers the gamut of technological advancements where planning, transportation, and infrastructure design intersect. Transit Planning: The First/Last Mile This course covers the range of elements needed to boost access to transit, with a focus on door-to-door transportation from a destination to a transit station. Beyond Complete Streets for Walking and Biking This course covers current practices in planning and implementation of infrastructure for biking and walking. Multi-Family Property Valuation Case Study for Planners This course will take planners through a case study multi-family property valuation. The course will build upon previous course topics of time discounting, internal rate of return, net operating income, lease structures, debt payments, and risk assessment. The Ethics of Office Administration, Part 2 The second course in the "Ethics of Office Administration" series discusses how to identify, evaluate, and resolve difficult scenarios that might arise in a planning office. The Ethics of Office Administration, Part 1 The administration of a planning office—whether in the private or public sector—can raise ethical questions. This course introduces these questions and presents tools for analyzing them. 
Tactical Urbanism: How It's Done From unsanctioned crosswalks to city-led "Pavement-to-Plaza" programs, instructor Mike Lydon describes the success of short-term, temporary projects in influencing long-term physical and policy changes in cities across the United States and Canada. Planning Ethics This course provides professional planners with a thorough and thoughtful discussion of ethical concerns likely to face many plannersin their careers. The work of planning for communities is rooted in values, often unexpressed, about the role of government in working for a better future. So planners should, from time to time, examine their own values and those of the American Institute of Certified Planners as they go about their work in the public or private sectors. SketchUp for Planners - An Introduction Get started using SketchUp, the popular, easy-to-learn 3D digital modeling program. This course provides an introduction to how planners and architects represent three-dimensional objects in two-dimensions, with step-by-step instructions for creating and using simple 3D models. SketchUp for Planners - Intermediate Part 3 Combining SketchUp and Adobe Creative Suite, this course demonstrates how to use SketchUp together with Adobe Photoshop, Illustrator and InDesign to create illustrative and informative photo simulations, perspective views, sections and site plans. InDesign for Planners - Advanced Adobe InDesign is widely recognized among design professionals as the premier document layout software, with a number of valuable applications for urban planning. This course builds upon the Introduction and Intermediate InDesign courses, giving you step-by-step instructions on my advanced features of InDesign CS6, all built around designing and composing a longer report/plan. Ethics: Balancing a Business Friendly Planning Environment Over the past few decades and increasingly over the past several years, the private sector, led by developers, has increasingly courted, conflicted and collaborated with planning departments amid shrinking budgets. As business interests engage and influence public agencies and planning strategy, the role of ethics is of increasing importance for the practicing planner. This is the first of a two-part series that evaluates and analyses the role of planners, from public window staff to department heads, in an increasingly business-friendly environment.
https://courses.planetizen.com/courses?f%5B0%5D=%3A141&f%5B1%5D=credit%3A423&f%5B2%5D=topic%3A306&f%5B3%5D=topic%3A315&f%5B4%5D=topic%3A321&f%5B5%5D=topic%3A322&f%5B6%5D=topic%3A350&f%5B7%5D=topic%3A356&f%5B8%5D=topic%3A400&amp%3Bf%5B1%5D=software%3A334&amp%3Bamp%3Bf%5B1%5D=topic%3A327&amp%3Bamp%3Bf%5B2%5D=topic%3A352&amp%3Bamp%3Bf%5B3%5D=topic%3A395&amp%3Bamp%3Bf%5B4%5D=topic%3A411&amp%3Bamp%3Bkeywords=
Which country is not a member of?
States Without Non-member Observer Status (States Not Recognized by the U.N.)

| Name | Recognized By |
| --- | --- |
| Western Sahara | 44 U.N. member states |
| Taiwan | 16 U.N. member states |
| South Ossetia | 5 U.N. member states |

Which country is not included in Southeast Asia? The Sunda Plate is the main plate of the region, featuring almost all Southeast Asian countries except Myanmar, northern Thailand, northern Laos, northern Vietnam, and northern Luzon of the Philippines. How many countries aren't in the UN? The United States recognizes 195 countries, 193 of which are part of the United Nations. The two countries that are not UN members are Vatican City (Holy See) and Palestine. Is Kosovo a country? The United States formally recognized Kosovo as a sovereign and independent state on February 18. To date, Kosovo has been recognized by a robust majority of European states, the United States, Japan, and Canada, and by other states from the Americas, Africa, and Asia. Is India considered South Asia? The Southern and Southeast Asian region includes South Asian countries: Nepal, India, and Pakistan, as well as Southeast Asian countries: Myanmar, Vietnam, Thailand, Indonesia, the Philippines, and Singapore. What are the 11 countries in Southeast Asia? Southeast Asia is composed of eleven countries of impressive diversity in religion, culture and history: Brunei, Burma (Myanmar), Cambodia, Timor-Leste, Indonesia, Laos, Malaysia, the Philippines, Singapore, Thailand and Vietnam. What country is in 2021? The Pacific island nation of Samoa and parts of Kiribati were the first places in the world to welcome 2021, leaving behind a year which was marked by the COVID-19 pandemic and its effect on society. It takes 26 hours for all time zones to reach the new year. Is North Korea in UN? The Republic of Korea (commonly known as South Korea) and the Democratic People's Republic of Korea (commonly known as North Korea) were simultaneously admitted to the United Nations (UN) in 1991.
https://colours-indonesia.com/where-to-go-to-sea/frequent-question-which-country-is-not-a-member-of-asean.html
Are your reflexes good enough for Enceladus? (19/05/2021 - 16:30) "Far down the road in human history, an abnormality in space-time suddenly appeared on one of Saturn's moons: Enceladus. Spaceships of unknown origin take the human fleet by surprise. A real genocide. Earth also seems to be overwhelmed by the assault, and only a small group, scouting near Enceladus, manages to avoid the attack. By approaching the icy surface of the moon, there is still one last hope of a counter-attack against the enemy from another world." Enceladus is a single-player indie shoot'em up inspired by the PlayStation & Saturn era, filled with mechanical pixel-graphics action gameplay and a modern artistic style. The player takes control of « Frost », a spaceship capable of firing energy blasts or channeling a very powerful laser. He also has the ability to trigger a powerful explosion, ideal for getting yourself out of trouble. The number one asset of « Frost » is his ability to teleport. With a simple push of a button, it is possible to create a ghost of Frost while still defending yourself. By letting go of that same button, « Frost » teleports to the ghost's location. That technology allows you to move faster, to dodge overwhelming shots, but also to gain access to otherwise inaccessible zones. Teleportation needs practice, but once mastered, « Frost » becomes much harder to destroy. Tags:
https://bs.idcgames.com/enceladus/novosti/are-your-reflexes-good-enough-for-enceladus-2021-05-19-16-30.html
A novel that many consider to be the very first science fiction story was written by Mary Shelley when she was just 18 years old. Furthermore, her book’s influence on the horror genre can’t be overstated, and has inspired numerous films, plays, and stories. However, even though it was published anonymously in 1818, the novel’s enduring popularity makes it one of literature’s true classics. Shelley’s themes in Frankenstein continue to be used by contemporary authors to this day. Check out other books like Frankenstein if you’re a fan of Victor Frankenstein and his monster. R.U.R (Rossum’s Universal Robots) by Karel Capek A century after Mary Shelley’s novel Frankenstein, R.U.R. had a massive impact on popular culture, as well. Another first in this novel’s canon is the use of the word “robot” for the first time. When humans create robots, they hope that they will one day be able to lead a utopian life. Humans eventually stop reproducing when they lose their sense of purpose as a result of this. As with Frankenstein’s monster, the robots will eventually turn on their creators. It is, however, left to the robots to learn the secret of self-duplication after their creators are killed. Man Made Boy by Jon Skovron Man Made Boy by Jon Skovron is a good choice for fans of Frankenstein who want something a little more heartwarming and less spooky. Boy, a seventeen-year-old boy, is the protagonist of this story. He lives in the catacombs beneath Times Square with a slew of other monsters and creatures as the son of Frankenstein’s monster and the Bride of Frankenstein. To earn a living, these mythical and magical creatures perform as a theater troupe, but Boy and his parents are shunned by the other members of the troupe because of their scientific nature. The boy is unable to bear this treatment any longer and decides to flee, but he encounters even more difficulties in the process. Fans of monsters will enjoy Man-Made Boy, which features memorable characters and heartwarming moments. The Island of Doctor Moreau by H.G. Wells The Island of Doctor Moreau, in contrast to Frankenstein, is about a man who attempts to create new life forms by fusing together the bodies of men and animals from various cadavers. Edward Prendick, who is shipwrecked and left on a remote island, tells the story. Soon after arriving on the island, Prendick discovers that the island is home to Dr. Moreau, who was once considered a genius for his twisted experiments until they were uncovered. Prendick’s situation worsens when he learns that Moreau intends to add him to his gruesome collection. When Prendick has no other choice, he flees into the jungle, where he finds all of Moreau’s anthropomorphic creations. Spare and Found Parts by Sarah Maria Griffin Frankenstein builds a mate for his monster out of spare parts and an artificial brain, a concept that was first introduced in the novel and then expanded upon in the film adaptation, Bride of Frankenstein. After surviving a devastating epidemic, everyone in the city of Spare and Found Parts by Sarah Maria Griffin has lost body parts. To distinguish herself from everyone else, she is the only one to have biomechanical limbs but no biomechanical heart. After finding a mannequin hand washed up on the shore, Nell, driven by loneliness, devises a plan to construct her own companion. 
Her quest also leads her to uncover some disturbing secrets about her hometown and her late father, which is unfortunate because she lives in a society that now shudders at the mere mention of modern technology. Despite the grim, dystopian setting, the story has a strong Frankensteinian feel to it. This Monstrous Thing by Mackenzi Lee In Mackenzi Lee's novel, This Monstrous Thing, the Mary Shelley novel Frankenstein not only serves as inspiration, but also plays an important role in the plot. A clockwork-powered society is the norm in this alternate fantasy world. After Oliver Finch's death, a young mechanic named Alasdair Finch takes matters into his own hands and uses clockwork pieces to bring his brother back from the dead. Even though Alasdair is able to bring Oliver back to life, he discovers that the man his brother once was has returned in monstrous form. As a result, the townspeople see the brothers as a real-life version of Dr. Frankenstein and his monstrous creation, Frankenstein's monster. The Dark Descent of Elizabeth Frankenstein by Kiersten White The Dark Descent of Elizabeth Frankenstein by Kiersten White is one of the most popular retellings of Frankenstein. Elizabeth, a character who played a more passive role in the original story, now has a story of her own, one that is rich in emotion. When Elizabeth was a child, she was taken in by the Frankenstein family and later became Victor's fiancée, as fans of the Mary Shelley novel will recall. This book chronicles her unwavering will to live in the face of the world falling apart around her due to Victor's irrationality. The Dark Descent of Elizabeth Frankenstein pays homage to the original novel while also serving as a stand-alone work of fiction. Jurassic Park by Michael Crichton Science gone awry doesn't get better than a person cloning dinosaurs and losing control of massive, ferocious animals eager to reclaim their position in the food chain. Current issues, such as the dangers of capitalism, are addressed in Michael Crichton's 1990 bestseller. The clones aren't made for research but for the amusement park that gives the book its title, which profits from their commercial exploitation. An expert in science fiction stories, who had previously dabbled in Frankenstein's monster themes in his 1972 novel The Terminal Man, penned Jurassic Park. It's true that Jurassic Park is one of Crichton's best-known books; it was made into a movie directed by Steven Spielberg in 1993 (and spawned a long film franchise that continues today). The Monster Men by Edgar Rice Burroughs Edgar Rice Burroughs, a pulp-fiction author best known for Tarzan of the Apes and John Carter of Mars, created two iconic literary characters. However, he was also a prolific sci-fi/horror writer, and one of his best known works is this adventure that sounds like a cross between Frankenstein and The Island of Dr. Moreau. On an island off the coast of Borneo, two scientists created a humanoid being known as Experiment Number Thirteen. There is a rebellion against the experimenters, and the 12 previously created monster-men go on a series of jungle adventures. This is one of the earliest examples of a story in which the creature is a hero rather than a terrifying foe. We get a good mix of action, horror, and romance in the end. Enjoyable, I think.
https://dennislehanebooks.com/books-like-frankenstein/
Europe and the rest of the in vitro diagnostic (IVD) device world know that the European Union's new In Vitro Diagnostic Regulation (IVDR) is coming. The deadline to comply with IVDR is May 26, 2022, which is not as far off as it may seem. Device manufacturers need to begin transition preparations now in order to minimize the hiccups the new regulation is likely to cause. One of the key issues all manufacturers are grappling with is the requirement for intended purpose. Intended purpose matters because it is a key input into the design of a device, risk management, performance evaluation, and classification. Everything starts with a good intended purpose statement.

The IVDR requires a clear statement that must stipulate whether your device is for screening, monitoring, diagnosis or aid to diagnosis, prognosis, prediction, or is a companion diagnostic. The intended purpose must be included in the labeling as described in the regulation's General Safety and Performance Requirements (GSPR) 20. It needs to be clear and support the classification defined by the manufacturer plus all claims as required in Article 7. The intended purpose needs to be consistent with the classification of the device. If the classification is for screening blood donations for syphilis, this should be described in the intended purpose statement. This is key because an assay for diagnosis would be in Class C, whereas a screening assay would be in Class D.

The next step is to establish the state of the art for devices with your documented intended purpose. This does not mean the device has to be best in class; however, it needs to be able to perform so that it can clinically meet its intended purpose. It is therefore important to identify the critical characteristics of the device. For example, if you consider a device for screening blood donations, then assay sensitivity and specificity may be important, whereas the accuracy and repeatability of a device may be more important for a diagnostic or monitoring assay.

The performance evaluation plan (PEP) identifies what data would be needed to support the intended purpose. Under the regulation, these data are documented in the scientific validity, analytical performance, and clinical performance reports and pulled together into the performance evaluation report. The scientific validity report (SVR) documents the association of the analyte with the clinical condition or physiological state described in the intended purpose and requires a formal literature search or studies as objective evidence. The state of the art will also require literature to support your conclusions; this may include common specifications (CS), product standards, and clinical guidelines as well as scientific papers. New or novel markers may require testing and medical opinions to support the scientific validity and state of the art. The performance evaluation plan also requires the preparation of an analytical performance report, which describes the performance necessary to meet the intended purpose and the objective evidence that it has been achieved. For example, the test should have an appropriate sensitivity and specificity to meet the intended purpose and to ensure that it is state of the art. Finally, the clinical performance should support the populations and intended users described in the intended purpose. The data from these reports is then summarized into the performance evaluation report to demonstrate compliance and confirm that the PEP has been completed.
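The document flow just described (a plan whose evidence is split across the scientific validity, analytical performance and clinical performance reports, then summarized in the performance evaluation report) can be pictured as a simple data structure. The sketch below is purely illustrative; the IVDR prescribes the content of these documents, not any particular format, and the field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceEvaluation:
    """Illustrative container mirroring the PEP-to-PER flow described above."""
    intended_purpose: str
    scientific_validity_evidence: list = field(default_factory=list)    # literature, studies
    analytical_performance_evidence: list = field(default_factory=list)  # e.g. sensitivity, specificity
    clinical_performance_evidence: list = field(default_factory=list)    # population / intended-user data

    def performance_evaluation_report(self) -> dict:
        """Pull the three strands of evidence together, as the PER does."""
        return {
            "intended_purpose": self.intended_purpose,
            "scientific_validity": self.scientific_validity_evidence,
            "analytical_performance": self.analytical_performance_evidence,
            "clinical_performance": self.clinical_performance_evidence,
        }
```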
There is some crossover between the performance evaluation plan and the GSPR checklist, as the checklist allows a manufacturer to identify and point to documentation that supports compliance with individual requirements but does not usually include a detailed summary of how that compliance is achieved. The analytical performance report and the clinical performance report should include a summary of the data and a discussion and conclusion about how it meets the specific IVDR requirement and the intended use. If we consider interfering substances as an example, these should first be identified in risk management; the PEP would then identify the necessary studies to be performed to confirm that the risks have been managed. The output of such a study will be a report and conclusions about whether the acceptance criteria were met. The analytical performance report then summarizes the data for all analytical studies and clearly documents that the IVDR requirements, as well as the claims, have been met. Residual risk should be described in the instructions for use (IFU).

The GSPR checklist is, as the name suggests, a list that enables a manufacturer to ensure they have addressed all the requirements, but it will lack detailed summaries and, most importantly, clear conclusions. In the past, the essential requirements checklist may have referenced many individual reports; however, it is now more relevant to reference the analytical performance report (APR), which in turn will reference the original reports. When compiling IVDR files based on existing devices, the APR should be used to compile data from existing sources, summarize the findings, and make clear statements of compliance with the IVDR; these should be consistent with the intended purpose and the claims. Notified Bodies (NBs) are not allowed to look at your data and conclude for you whether it is compliant with the IVDR; NBs look at your discussion and conclusions, and then look at the data to see whether they agree. In some cases, this will require a deep dive into the full data, after which they will decide whether they agree with your conclusion. You cannot supply the NB with lots of data and expect them to review the data and determine if it is compliant; you must make clear conclusions.

Reports referenced in the APR may have been created prior to the IVDR, and so they will not make clear conclusions that the requirements of the IVDR have been met. For example, the IVDR requires data from three batches of product to meet stability requirements, while the ISO standard allows some flexibility about the number of batches; as a result, a stability study compliant with the IVDD may not be compliant with the IVDR. The conclusion at the end of the stability study may be that the data supports the essential requirements. However, there needs to be a clear statement that the IVDR requirements have been met; the APR can then reference the existing report and describe how the data demonstrates compliance with the IVDR. Where there is insufficient data to make this conclusion, additional on-market data will be required. Stability is an area that seems to be a frequent problem for manufacturers, and unfortunately it can take the longest time to remediate.

When deciding how to transition IVDD files to meet the IVDR, it is important to understand the flow of documents. The PEP uses inputs from risk management, the intended purpose and the state of the art to determine what data is needed to support both the IVDR requirements and the broader claims made for the device as required by Article 7 of the IVDR.
For a new device, the PEP determines what studies have to be run to create the data to support the requirements, whereas for existing devices this data is usually repurposed from the original design verification and validation, on-market design changes, postmarket surveillance activities and other real-world data. In summary, it is important to understand how the IVDR documents fit together. Understanding intended purpose and state of the art is key to creating good performance evaluation reports. Potential gaps in stability need to be identified early because they will take the longest time to remediate. If your transition to the IVDR requires a change to the intended purpose or claims, it may result in a change to the IFU with a domino effect on global labelling. If this is required, it will have far-reaching effects on rest-of-world submissions, so evaluating gaps in intended purpose and ensuring all claims can be supported is an essential first step to IVDR compliance.
https://www.mastercontrol.com/gxp-lifeline/the-importance-of-intended-purpose-and-state-of-the-art-in-implementing-eu-s-ivdr/
The new Building Passport Practical Guidelines address the need for accessible and reliable data and information on buildings. Policymakers and market participants alike see the development and use of Building Passports as a way of overcoming current data gaps and data barriers, helping to capture, administer and manage building-related data and information across the whole life cycle. The overarching goal of these practical guidelines, which represent the collaborative effort of a global Task Force of public and private sector experts, is to illustrate the value of developing holistic, multi-dimensional Building Passports. At the same time, the guidelines reflect key aspects of past discussions about how to make them work in practice, drawing on the experiences of stakeholders and on existing and emerging similar-type initiatives. As such, these guidelines are a supporting tool that:
- explains the Building Passport approach as a more systematic and coherent way of handling building-related data and information.
- helps build capacity for improved data capture and management through practical recommendations and real-life examples of good practice.
- ensures a minimum of harmonization / standardization.
- fosters more widespread market transformation through progressive digitization of building-related data and information, thus creating greater overall sectoral transparency and opportunities for the development of new business models and tools.
The report will be available to download soon!
http://globalabc.org/news/new-report-building-passport-practical-guidelines
This collection explores current issues in the phonology and morphology of the major Iberian languages: Basque, Catalan, Galician, Portuguese, and Spanish. This volume presents essays by some of the leading figures in the vanguard of theoretical linguistics within the framework of universal grammar. Compares the sounds, phonology, and prosody of General American English and Southeastern Brazilian Portuguese. A strictly descriptive—or synchronic—approach to Romance linguistics. Agard provides an historical comparison of the major Romance languages with a reconstruction of their common source and a chronological account of their development through changes and splits.
http://press.georgetown.edu/category/subject-area/linguistics/romance-linguistics
Windham Labs: Thought Leadership in Asset Allocation and Risk Management.

Windham Insights
We are an ardent team of quant finance analysts with extensive experience in strategic asset allocation, risk management, and technology. This repository serves as our collection of articles that you may find useful to support your asset management or analytics process.

Books
Mark Kritzman is a Founding Partner of Windham Labs, Windham Capital Management, and State Street Associates, and teaches a graduate finance course at the Massachusetts Institute of Technology. He has published eight books, including, most recently, Asset Allocation: From Theory to Practice and Beyond, A Practitioner's Guide to Asset Allocation and The Portable Financial Analyst, and has received several prestigious awards. His innovative research, extensive publications, and investment acumen make him one of the foremost figures in his field.

Asset Allocation: From Theory to Practice and Beyond (2021)
In Asset Allocation: From Theory to Practice and Beyond—the newly and substantially revised Second Edition of A Practitioner's Guide to Asset Allocation—accomplished finance professionals William Kinlaw, Mark P. Kritzman, and David Turkington deliver a robust and insightful exploration of the core tenets of asset allocation. Drawing on their experience working with hundreds of the world's largest and most sophisticated investors, the authors review foundational concepts, debunk fallacies, and address cutting-edge themes like factor investing and scenario analysis. The new edition also includes references to related topics at the end of each chapter and a summary of key takeaways to help readers rapidly locate material of interest.

A Practitioner's Guide to Asset Allocation (2017)
A Practitioner's Guide to Asset Allocation also explores the innovations that address key challenges to asset allocation and presents an alternative optimization procedure to address the idea that some investors have complex preferences and returns may not be elliptically distributed. Among the challenges highlighted, the authors explain how to overcome inefficiencies that result from constraints by expanding the optimization objective function to incorporate absolute and relative goals simultaneously. The text also explores the challenge of currency risk, describes how to use shadow assets and liabilities to unify liquidity with expected return and risk, and shows how to evaluate alternative asset mixes by assessing exposure to loss throughout the investment horizon based on regime-dependent risk. This practical text contains an illustrative example of asset allocation which is used to demonstrate the impact of the innovations described throughout the book.

The Portable Financial Analyst (2007)
Financial professionals are faced with increasingly technical topics that are theoretically complicated but practically necessary in determining the trade-off between risk and return.
The Portable Financial Analyst, Second Edition is a unique collection of essays that address the heart of every analyst's and investor's dilemma: how to make decisions in the face of unknown forces and how to assert some control over the outcome. Puzzles of Finance (2000) Puzzles of Finance takes on today's most persistently challenging financial questions and, through clever examples and just plain logic, helps you move beyond those questions to arrive at a deeper understanding of finance and the daily management of money. From Siegel's Paradox ("Is it possible to profit from asymmetry of exchange rate changes?") to questions of option value ("Why is the value of an option unaffected by the underlying asset's expected return?"), Puzzles of Finance goes beyond vague theoretical suppositions to supply practical, concrete solutions that investors and money managers can benefit from every day. While the intellectually curious will be drawn to Puzzles of Finance, it is the day-to-day finance professional who will derive the most benefit from this remarkable book.
https://insights.windhamlabs.com/
With the Olympics not long past and the Paralympics still upon us, I thought that this might be an opportune moment to consider the haunted history of the UK’s capital city. There is certainly no shortage of material to draw upon when investigating the strange past of London, for just as it has been central to many of the major events in the nation’s history, it has also been equally famous as a city of vice, sin, crime and bloodletting. The grim legends of Jack the Ripper, Spring-heeled Jack and Sweeney Todd continue to cast a menacing shadow over the grimy streets of the East End. Bram Stoker may have made London Dracula’s main preying ground in his iconic horror novel, but the dark Count is seemingly very far from the only vampire – real or fictional – who has flitted through the city’s shadows. Similarly, the plot of John Landis’s An American Werewolf in London also has its terrifying basis in reality. Many of the phantoms that are said to roam the capital are an essential part of British history, folklore and legend. The Tower of London, for example, is reputedly England’s most haunted building precisely because of the many who perished within its walls. There is an old saying that ghosts only ever appear in places that have known either great happiness or great misery, and the buildings and the haunted streets of London have certainly known both in abundance. The countless numbers of people who have lived and died in London in the course of its almost two thousand years of history had known every human emotion – among them hope, joy, love and, of course, terror. In consequence, there is not one square inch of old London town that is not imbued with the memories and experiences of its former citizens. Phantoms and Faders in Old London Town (2 Sep) Whitechapel (11 Mar) One of my favourite shows on TV at the moment is Whitechapel, which explores the many dark and disturbing urban legends of one of London’s famous suburbs. The first series of the show focused on a suspected copycat killer who copied the modus operandi of the most infamous and terrifying serial murderer ever to plague old London town – Jack the Ripper. In the words of Jack himself: “Below the skin of history are London’s veins. These symbols, the mitre, the pentacle star, even the ignorant and degenerate can sense that they course with energy… and meaning. I am that meaning. I am that energy. One day, men will look back and say that I gave birth to the 20th Century.” In one sense this is true yet, in spite of the epidemic of 20th century serial killers with sobriquets like the Boston Strangler, the Buffalo Slasher, the Sunset Slayer and the Yorkshire Ripper, it is Jack who still remains by far and away the most infamous. This is not due simply to the grisly picturesqueness of the nickname but to the fact that the murders took place in the gaslit, fog-shrouded London of Sherlock Holmes and that – unlike the other criminals mentioned above – the identity of Jack the Ripper is still a total mystery. Spring-heeled Jack: The Other Ripper (25 Sep) Almost everyone knows of Jack the Ripper and his fearsome reputation as one of the most notorious (and un-caught) serial killers of all time. Fewer people have heard of a character who was equally infamous, and feared, about fifty years before the time of Jack the Ripper: Spring-heeled Jack.
This was the name given to the entity which terrorized London and later the whole country in a string of bizarre incidents which occurred with most frequency between 1837 and 1843 but were reported again every few years until the last sighting in 1904. Despite this large span of years, each incident was strikingly similar: on every occasion a young woman was the victim and Spring-heeled Jack was described as having the same characteristics – the ability to jump inhumanly long distances, the capacity to disappear without trace, and a frightening countenance variously described as bestial, demonic and even extra-terrestrial.
https://anilbalan.com/tag/jack-the-ripper/
Dehradun: Though the incessant rains that have been lashing Uttarakhand since June in the aftermath of the Kedarnath tragedy show no signs of receding, the focus of the central government, as also of the state, is now on rehabilitation and reconstruction. The biggest bottleneck in this task is how best to ensure environmental safety and at the same time deliver the goods. Though the IITs of Roorkee and Mumbai have chipped in to contribute the necessary expertise in rehabilitation and reconstruction works, experts feel that the June 16 tragedy has hit this small mountain state in its soft underbelly. Years of haphazard construction and rampant dynamiting of the fragile mountains have only added to the woes of the planners. The latest list of danger zones identified as prone to aggressive landslides, as updated by the National Remote Sensing Centre, has now gone up to 1,604, as 268 new low- and high-hazard zones have been added after the June 16 disaster. Besides, there are hundreds of villages that have also come under the highly sensitive list and need to be rehabilitated. Experts at the Wadia Institute of Himalayan Geology here feel that before the rehabilitation and reconstruction work is undertaken, there is a need for mapping the landslide-prone zones. The state government planners also feel that this exercise should be undertaken immediately so that necessary corrective measures can be adopted before any construction is undertaken in such areas. Experts of IIT Roorkee and Mumbai feel that a prosperous and planned Uttarakhand can be re-established after the tragedy, but before starting there is a need for studying its geological and topographical conditions. Besides, the aspirations of the people and the causes that led to the June 16 tragedy also have to be taken into consideration, they observed. Asserting that the two IITs have the necessary wherewithal to undertake the task, they said that houses and roads would have to be constructed in a manner that ensures there is no repeat of the tragedy in the future. “We are just waiting for the nod from the Uttarakhand government to take up the gigantic task”, they claimed. Meanwhile, the planners of the Uttarakhand government have also started preparing the road map for the restoration and rehabilitation of the devastated areas of the state, though the concentration is still on connecting the villages that remain cut off even one month after the tragedy. The rains have been the major reason for the slow progress in the relief and rehabilitation process. However, as of now the focus of the planners is that, while rehabilitating and reconstructing the state, special thought will be given to the environment, forests and wildlife, and it will be ensured that the statement given by the chief minister that no construction will be allowed within 200 metres of rivers and streams is implemented in letter and spirit. It has also been decided that, keeping in view the fact that this small mountain state is in a highly active seismic zone, all houses, buildings and other constructions will be made using appropriate technology to withstand an earthquake of high magnitude on the Richter scale, should one strike in the near future. And before any villages are rehabilitated, the Geological Survey of India (GSI) will be asked to conduct the necessary tests.
It has also been decided that the Central Building Research Institute (CBRI), the Geological Survey of India (GSI), remote sensing agencies and other allied establishments will be kept in the loop during the entire planning and construction stage, and that roads will be constructed in a planned manner with due thought given to the drainage of water from rain and snow. The planners were also of the view that the villages of the hilly areas of the state will be rehabilitated in the higher reaches only, for which the state government will make land available and the centre will also be asked for the necessary permission where forest land is involved, while the villages of the lower reaches will be rehabilitated in the plains. Special development will be undertaken in areas with religious places that are visited by lakhs of pilgrims annually. To avoid a repeat of the Kedarnath tragedy, the state government is, as of now, working on how to regulate the number of pilgrims to the important shrines of the state to ensure that there is no overcrowding and no gathering of large numbers of people at any one time.
https://hillpost.in/2013/07/iit-mumbai-roorkee-willing-to-pitch-in-for-uttarakhand-rehabilitation/94093/?replytocom=692681
There may now be three justices open to moving away from substantive due process as the basis for incorporating the rights the Constitution elsewhere spells out and, potentially, as the basis for recognizing other rights it does not. In recent months, several states have enacted laws curtailing access to abortions, ostensibly in the hopes of giving the U.S. Supreme Court a vehicle for revisiting Roe v. Wade, 410 U.S. 113 (1973), the then-and-still controversial decision recognizing a woman's constitutional right to decide whether to terminate her pregnancy. If five justices agree to author a decision overruling or modifying Roe, what might that decision look like? The court's recent decision in Timbs v. Indiana, 139 S. Ct. 682 (2019) cuts one possible path those justices might take, or at the very least leaves us some breadcrumbs from which we can speculate. Roe itself is grounded in substantive due process. The Fifth and Fourteenth Amendments, respectively, prohibit the federal and state governments from "depriv[ing]" a "person" "of life, liberty, or property, without due process of law." These clauses unquestionably assure procedural due process by requiring that covered deprivations be accompanied by "fair process." Wash. v. Glucksberg, 521 U.S. 702, 719 (1997). At various times, the U.S. Supreme Court has also read these clauses to have a substantive component that "protects individual liberty against 'certain government actions regardless of the fairness of the procedures used to implement them.'" Collins v. City of Harker Heights, 503 U.S. 115, 125 (1992). The Supreme Court's first foray into substantive due process was Lochner v. New York, 198 U.S. 45 (1905). Lochner recognized a "liberty" interest in -- or, in other words, a constitutional right to -- "free[ly] contract" for one's own labor, and held that a New York law prohibiting bakers from working more than 60 hours in a week violated that right because it could not withstand the greater judicial scrutiny applicable to laws infringing constitutional rights. Id. at 52, 57-58. But the court blew down Lochner's straw house three decades later, stating simply: "The Constitution does not speak of freedom of contract." West Coast Hotel Co. v. Parrish, 300 U.S. 379, 391 (1937). The court's second foray into substantive due process recognized a person's liberty interest in "personal privacy" -- that is, a right to make decisions regarding marriage, procreation, contraception, and child rearing and education. See Loving v. Virginia, 388 U.S. 1, 12 (1967) (interracial marriages); Skinner v. Oklahoma, 316 U.S. 535, 541-42 (1942) (sterilization); Eisenstadt v. Baird, 405 U.S. 438, 453-54 (1972) (contraceptives); Pierce v. Soc'y of Sisters, 268 U.S. 510, 535 (1925) (mandatory public school attendance). State laws infringing this constitutional right to privacy are subject to greater judicial scrutiny. Roe has been part of this lineage since 1973, although Planned Parenthood v. Casey, 505 U.S. 833 (1992), remodeled Roe's wooden structure by swapping out Roe's trimester-based test for an "undue burden" test. Id. at 870, 876. And now along comes Timbs. The question in Timbs was whether a person prosecuted for drug crimes in state court could object to the civil forfeiture of his luxury SUV under the Eighth Amendment's excessive fines clause. Because the Bill of Rights of its own force constrains only the federal government and not the states, Barron v. Balt., 32 U.S.
243, 247-50 (1833), Timbs asked whether the excessive fines clause constrained the states through some other provision of the U.S. Constitution. All nine justices said "yes," but disagreed over which provision. For an eight-justice majority, Justice Ruth Bader Ginsburg applied the so-called "selective incorporation doctrine." That doctrine asks whether a right secured by the Bill of Rights, on a right-by-right basis, is so "fundamental to our scheme of ordered liberty" or so "'deeply rooted in this Nation's history and tradition'" that it should be deemed one of the "libert[ies]" secured by the Fourteenth Amendment's due process clause. Timbs, at 687; McDonald v. Chicago, 561 U.S. 742, 765-67 (2010). Because of this textual "hook," selective incorporation is grounded in substantive due process, although the rights it recognizes -- unlike the rights of contract and privacy -- are explicitly spelled out elsewhere in the Constitution. Justice Ginsburg had no difficulty concluding that the right against excessive fines was fundamental due to its "venerable lineage back to" the Magna Carta and its presence in the constitutions of "all 50 States." Timbs at 687-89. Justice Clarence Thomas concurred only in Timbs' result. He agreed that the right against excessive fines applied to the states, but refused to rely upon substantive due process, which in his view is a "legal fiction" lacking "any textual constraints." Id. at 691-92. Instead, and consistent with his prior separate opinions in Saenz v. Roe, 526 U.S. 489 (1999) and McDonald, Justice Thomas preferred what he views as the sturdier, more-brick-like structure of the privileges and immunities clause. That clause prohibits any "state" from "mak[ing] or enforc[ing] any law which shall abridge the privileges or immunities of citizens of the United States." U.S. Const., amend. XIV, § 1. Enacted alongside the Fourteenth Amendment's due process clause, the privileges and immunities clause was effectively neutered by the U.S. Supreme Court, mere years after its adoption, in the Slaughter-House Cases, 83 U.S. 36 (1873). In those cases, the court was asked to decide whether a New Orleans law granting a 25-year monopoly to a slaughterhouse company infringed upon the privilege of butchers to pursue their vocation. The court rejected the butchers' argument that the law violated the privileges and immunities clause because, in its view, the clause only protected from state infringement the "privileges and immunities" of national citizenship, not the privileges and immunities of state citizenship. Id. at 74. In reaching this conclusion, the court reasoned that the Constitution elsewhere protects the privileges and immunities of state citizenship, see Art. IV, § 2, and that it was unwise to undertake the "serious," "far-reaching" and "pervading" task of defining the privileges and immunities of state citizenship and using them to nullify state law in the absence of "language" "clearly" mandating such an undertaking. Id. at 75, 78. Thus, the clause has for years been confined to protecting a narrow band of privileges and immunities of national citizenship. They include (1) the right to travel to, and transact with, the "seat of government," (2) "the right of free access to [each state's seaports]" and navigable waters, (3) the right to protection while "on the high seas or within the jurisdiction of a foreign government," (4) the "right to peaceably assemble and petition for redress of grievances," and (5) "the privilege of the writ of habeas corpus."
In 1999, Saenz added to this list the "right to be treated equally in [a] new State of residence." Justice Thomas would have the clause "establish a minimum baseline of federal rights" with which states could not interfere. McDonald, at 850. To avoid what he decries as the "fictional" aspects of substantive due process, Justice Thomas would define this baseline solely by reference to the privileges and immunities secured by English common law, by the constitutions of the 13 colonies, and by the laws in effect when the clause was adopted in 1868. Timbs at 693-94; McDonald, at 854-55; Saenz, at 524. To Justice Thomas, the "house" of rights built by this clause would be sturdier, but have less square footage than substantive due process. What makes Timbs interesting is that Justice Neil Gorsuch wrote separately to express his view that the privileges and immunities clause "may well be" "the appropriate vehicle for incorporation." Id. at 691. Given that Chief Justice Roberts has criticized substantive due process for "exalt[ing] judges at the expense of the People," Obergefell v. Hodges, 135 S. Ct. 2584, 2631 (2015), there may now be three justices open to moving away from substantive due process as the basis for incorporating the rights the Constitution elsewhere spells out and, potentially, as the basis for recognizing other rights it does not. The Supreme Court will have further opportunity to revisit the substantive due process doctrine next term in Ramos v. Louisiana, 18-5924, and Kahler v. Kansas, 18-6135, which deal, respectively, with whether substantive due process protects the right to a unanimous jury verdict and the right to plead insanity as a defense. However, if past is prologue, the time may still not be right for the next step in the evolution of substantive due process. After all, the Slaughter-House Cases involved butchers and Lochner involved bakers. The ideal case would, of course, involve the rights of candle-stick makers. In 2019, such a case may be as likely as eloping flatware or bovine astronauts, but justice can be surprisingly poetic.
https://dailyjournal.com/articles/353595-rub-a-dub-dub
A single-layer neural network represents the simplest form of neural network, in which there is only one layer of input nodes that send weighted inputs to a subsequent layer of receiving nodes, or in some cases, one receiving node. What is single layer? Single-layer boards have just one layer of base material, also known as a substrate, while multi-layer PCBs have multiple layers. … Double-sided PCBs, like the single-sided variation, have one substrate layer. The difference is that they have a layer of conductive metal on both sides of the substrate. What is the difference between single layer and multi-layer neural network? A Multi-Layer Perceptron (MLP) or Multi-Layer Neural Network contains one or more hidden layers (apart from one input and one output layer). While a single layer perceptron can only learn linear functions, a multi-layer perceptron can also learn non-linear functions. Is single layer neural network enough? Most of the literature suggests that a neural network with a single hidden layer and a sufficient number of hidden neurons will provide a good approximation for most problems, and that adding a second or third layer yields little benefit. What is single layer feedforward neural network? The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network. How does single layer perceptron function? The computation of a single layer perceptron is performed by calculating the sum of the input vector, with each value multiplied by the corresponding element of the weight vector. … The resulting value is then passed as the input to an activation function, which produces the output. What is the significance of multiple layer network over single layer network? With that being said, using an MLP with three layers (2 hidden + 1 output layers) in deep learning gives the network the ability to separate the filtered data using more complex shapes, compared to a single fully connected layer. What is the best deep learning framework? Top Deep Learning Frameworks - TensorFlow. Google’s open-source platform TensorFlow is perhaps the most popular tool for Machine Learning and Deep Learning. … - PyTorch. PyTorch is an open-source Deep Learning framework developed by Facebook. … - Keras. … - Sonnet. … - MXNet. … - Swift for TensorFlow. … - Gluon. … - DL4J. What is single layer Perceptron in machine learning? A single layer perceptron (SLP) is a feed-forward network based on a threshold transfer function. SLP is the simplest type of artificial neural network and can only classify linearly separable cases with a binary target (1, 0). For data that are not linearly separable, a single line will not work, so hidden layers are needed to obtain a good decision boundary. We may still choose not to use hidden layers in such a case, but this will hurt classification accuracy, so it is better to use them. What are weights in a neural network? Weight is the parameter within a neural network that transforms input data within the network’s hidden layers. A neural network is a series of nodes, or neurons. Within each node is a set of inputs, weight, and a bias value. … Often the weights of a neural network are contained within the hidden layers of the network. What are the drawbacks of single layered perceptrons? Disadvantages. This neural network can represent only a limited set of functions.
The decision boundaries, that is, the threshold boundaries, are only allowed to be hyperplanes, and the model only works for linearly separable data.
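To make the weighted-sum-and-threshold computation described above concrete, here is a minimal single-layer perceptron sketch in Python. It assumes NumPy is available; the AND-gate dataset, learning rate, and epoch count are arbitrary choices made for demonstration rather than anything prescribed by the answers above.

```python
# Minimal single-layer perceptron sketch (illustrative only).
# Assumptions: NumPy is available; the tiny AND-gate dataset, learning rate,
# and epoch count below are arbitrary choices for demonstration.
import numpy as np

def step(z):
    """Threshold activation: 1 if the weighted sum is non-negative, else 0."""
    return (z >= 0).astype(int)

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights and bias with the classic perceptron update rule."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = step(np.dot(xi, w) + b)   # weighted sum -> threshold activation
            error = target - pred            # 0 if correct, +/-1 if wrong
            w += lr * error * xi             # nudge weights toward the target
            b += lr * error
    return w, b

# AND gate: linearly separable, so a single-layer perceptron can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(step(X @ w + b))  # expected: [0 0 0 1]
```

Because AND is linearly separable, this single layer converges; swapping in an XOR target would not, which is exactly the limitation the drawbacks above describe.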
https://mostrealisticai.com/ai/quick-answer-what-is-a-single-layer-neural-network.html
TECHNICAL FIELD
This disclosure relates generally to systems to control light fixture dimming operations. More specifically, but not by way of limitation, this disclosure relates to light-emitting diode light fixture systems providing control of a dimming operation using multiple dimming modes.
BACKGROUND
Light fixture dimming operations provide a mechanism for adjusting a brightness of a light output on a gradient from a bright output to a dim output. During a dimming operation, a human eye observes a light output of an incandescent bulb transition from a bright white color to a dim yellow color. Because light-emitting diodes (LEDs) do not change color temperatures with changes to brightness of the LEDs, the dimming of an LED light fixture does not provide a color temperature transition similar to the dimming of an incandescent bulb. Dimming operations of the LED light fixtures may be performed without consideration or control of a color temperature associated with the light output of the light fixture. Existing static color dimming operations ignore the ability to alter a color temperature of a light output based on a dimming level. Because the color temperature is unchanged during the static color dimming operation, an LED light fixture equipped with a white light may generate a light output that is perceived by a human eye as unusual or unnatural during the static color dimming operation. Such a perception may be particularly evident in a residential lighting environment. Instead of creating a subdued and warm ambiance, the dimmed light fixture that includes LEDs with a white color temperature generates a dim and crisp environment.
SUMMARY
Certain aspects involve lighting systems in which the color temperature of the illumination can be selectively modified. For instance, a lighting system may include a first light source that outputs a first light color and a second light source that outputs a second light color. The lighting system also includes a switching device that couples and decouples the second light source in parallel with the first light source. The switching device includes a first setting that couples the second light source in parallel with the first light source and enables a variable color dimming operation. The switching device also includes a second setting that decouples the second light source from the first light source and enables a static color dimming operation. In one or more additional examples, a lighting system includes a first light source that emits light at a first color and a second light source that emits light at a second color different from the first color. The lighting system also includes a controller that controls application of current across a conductive path associated with the second light source based on a magnitude of current output from a driver toward the first light source and the second light source. Further, the lighting system includes a switching device having a first configuration connecting the second light source in parallel with the first light source and a second configuration removing the second light source from the conductive path associated with the second light source. In one or more additional examples, a method includes emitting a first light output of a first light source at a first color. Additionally, the method includes adding a second light source in parallel with the first light source and emitting a second light output of the second light source at a second color different from the first color.
Further, the method includes performing a variable color dimming operation on a combined light output of the first light source and the second light source. These illustrative aspects are mentioned not to limit or define the disclosure, but to provide examples to aid understanding thereof. Additional aspects are discussed in the Detailed Description, and further description is provided there.
DETAILED DESCRIPTION
The present disclosure relates to systems that control dimming operations of a light-emitting diode (LED) light fixture using multiple dimming modes. As explained above, dimming operations of LED light fixtures generally ignore a color temperature of the light output of the LED light fixture during dimming operations. As a result, the light output maintains a single color temperature throughout the dimming operation. Certain aspects described herein improve the performance of the dimming operations of LED light fixtures. For example, certain aspects involve a mechanism that can switch between a variable color dimming operation and a static color dimming operation. Selecting between the variable color dimming operation and the static color dimming operation provides the option of increasing yellow or amber colored light or maintaining a white colored light as the LED light fixture dims. To provide a mechanism that enables the switching between color dimming operations, multiple LED strings of varying colors may be positioned within the LED light fixture. In an example, one or more yellow, amber, or other colored LED strings are activated during the dimming operation to provide a warm dim effect to the LED light fixture as a light output dims. A switch in communication with the LED light fixture may enable the activation of the warm dim effect in the LED light fixture. For example, the switch in one position maintains a single light output color while the LED light fixture dims, and the switch in another position provides a dynamic light output color that varies while the LED light fixture dims.
FIG. 1 is a block diagram depicting a multiple dimming mode light system 100. The illustrated light system 100 includes an input device 102, a driver 104, and a lighting device 106. In an example, the input device 102 is a dimmer switch that controls light intensity of the lighting device 106 based on user input. The lighting device 106 may be illuminated using groups of LEDs (e.g., strings of LEDs). For example, the lighting device 106 may include one or more cool LED groups 108 and one or more warm LED groups 110. While the present disclosure describes groups of LEDs performing the illumination of the lighting device 106, other lighting devices may be used in place of the groups of LEDs. For example, the cool LED groups 108 and the warm LED groups 110 may be replaced with one or more organic LEDs (OLEDs), one or more laser diodes, or one or more of any other light sources.
The driver 104 may provide a current source to the lighting device 106. The amount of current provided to the lighting device 106 may vary based on an input provided by the input device 102 (e.g., a dimming switch). For example, less current is provided to the lighting device 106 if the input device 102 is set to a dim setting, and more current is provided to the lighting device 106 if the input device 102 is set to a bright setting.
Examples of the input device 102 may include a dimmer bar, a dimmer wheel, a dimmer button, a dimmer with wireless communication capabilities, or any other device that is capable of controlling the driver 104 to reduce an amount of current provided to the lighting device 106.
The lighting device 106 may include the cool LED group 108. The cool LED group 108 may provide a light output at a static light output color. In one example, the static light output color of the cool LED group 108 is a white light or a near white light. Because the cool LED group 108 is a group of LEDs, the light output color of the cool LED group 108 remains constant as the intensity of the light output increases and decreases. The light output color of the cool LED group 108 may be desirable for certain tasks using a relatively bright light output, such as in a kitchen or a workspace.
The lighting device 106 may also include the warm LED group 110. The warm LED group 110 may provide a light output at an additional static light output color. In one example, the static light output color of the warm LED group 110 is a yellow or amber light. The light output color of the warm LED group 110 may be desirable at relatively dim light outputs. The outputs of the cool LED group 108 and the warm LED group 110 combine to provide an overall light output of the lighting device 106.
To generate the light output of the lighting device 106 with a variable color during a dimming operation, a warm LED controller 112 may control current applied to the warm LED group 110. For example, if the input device 102 requests a bright light output, the warm LED controller 112 may limit or stop application of current to the warm LED group 110. As the input device 102 requests a dimmer light output, the warm LED controller 112 may increase an amount of current provided to the warm LED group 110 while the current provided to the cool LED group 108 decreases. The resulting overall light output of the lighting device 106 is a light output that transitions from a bright white color at full brightness to a yellow or amber color as the light output of the lighting device 106 dims.
The lighting device 106 may also include a warm LED switch 114. The warm LED switch 114 (e.g., an output configuration switch) may be a switching device that includes a setting to prevent provision of current to the warm LED group 110 regardless of an input provided by the input device 102 to the driver 104. In such a setting, the dimming operation of the lighting device 106 may be referred to as a static color dimming operation. The output of the lighting device 106 remains a color of the cool LED group 108 (e.g., a white or near white light) during the entire dimming operation. In another setting of the warm LED switch 114, the warm LED switch 114 enables provision of current to the warm LED group 110 based on an amount of current provided to the lighting device 106 by the driver 104 (e.g., based on an input by the input device 102). If the warm LED switch 114 is in such a setting, the dimming operation of the lighting device 106 may be referred to as a dynamic color dimming operation. The output color of the lighting device 106 changes as the lighting device 106 experiences the dynamic color dimming operation.
Examples of the warm LED switch 114 may include a mechanical switch, an electro-mechanical switch, a wireless switch, a solid-state switch, a transistor, or any other switch capable of changing a conductive path associated with the warm LED group 110. By using a mechanism to couple or decouple the conductive path associated with the warm LED group 110, the lighting device 106 can provide a user with the ability to transition between the static color dimming operation and the dynamic color dimming operation. The static color dimming operation may, for example, provide a crisp light output color even at lower intensity light outputs. In an example, such a light output may be used for task lighting at a lower intensity. Additionally, the dynamic color dimming operation may provide a warm dimming approach to decreasing the intensity of the lighting device 106. In an example, the warm dimming approach is used to provide natural or ambient lighting to an area at a lower light output intensity. By providing the warm LED switch 114 to the lighting device 106, dim light output is available for both task lighting and ambient lighting in the same lighting device 106. For example, a lamp or a light fixture including the lighting device 106 is capable of performing both the static color dimming operation and the dynamic color dimming operation.
FIG. 2 is a schematic diagram of an example of the multiple dimming mode light fixture 100. As discussed above with respect to FIG. 1, the input device 102 provides signals to the driver 104 to increase or decrease current supplied to the lighting device 106. In an example, the warm LED controller 112 includes a current meter that monitors a magnitude of current supplied by the driver 104 at a node 202, which is positioned between the driver 104 and the cool LED group 108 and the warm LED group 110.
By measuring the current at the node 202, the warm LED controller 112 outputs a voltage to a gate 204 of a transistor 206 (e.g., a field-effect transistor (FET), a metal-oxide-semiconductor field-effect transistor (MOSFET), a bipolar junction transistor, or any other type of transistor), which is coupled in series with the warm LED group 110, if the warm LED switch 114 is closed. As discussed above with reference to FIG. 1, the warm LED switch 114 is closed if a variable color dimming operation is desired. A pulse width modulated (PWM) voltage signal provided by the warm LED controller 112 to the gate 204 may vary based on the current detected at the node 202. For example, as the current at the node 202 decreases, a duty cycle of the PWM voltage signal provided to the gate 204 may increase. Likewise, as the current at the node 202 increases, the duty cycle of the PWM voltage signal provided to the gate 204 may decrease. While the transistor 206 is depicted in FIGS. 2-4 as a mechanism to control current across the warm LED group 110, any other device or component capable of controlling current across the warm LED group 110 is also contemplated. For example, the transistor 206 may be replaced by any electronic switching device capable of controlling the current across the warm LED group 110.
The voltage provided to the gate 204 of the transistor 206 increases conductivity between a source 208 and a drain 210 of the transistor 206. Accordingly, as the duty cycle of the PWM voltage signal provided to the gate 204 increases, less overall resistance is present between the source 208 and the drain 210. In this manner, a greater duty cycle of the PWM voltage signal at the gate 204 results in a greater overall conductivity across the transistor 206, an increase in the current provided to the warm LED group 110, and a brighter output of the warm LED group 110. If the warm LED controller 112 provides a voltage less than a gate threshold voltage to the gate 204, or if the warm LED switch 114 is open, the conductivity between the source 208 and the drain 210 of the transistor 206 is negligible, and the warm LED group 110 does not emit a light output as no current flows across the warm LED group 110.
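The relationship described above, where a falling current sensed at the node 202 drives a rising PWM duty cycle at the gate 204, can be sketched in a few lines of code. The sketch below is purely illustrative and is not taken from the disclosure: the 50% activation threshold, the full-scale current value, and the linear ramp are assumed numbers chosen only to show the shape of the control law.

```python
# Illustrative sketch of the duty-cycle behaviour described for the warm LED
# controller: as the sensed driver current falls, the PWM duty cycle applied
# to the warm string's transistor gate rises. The 50% activation threshold,
# full-scale current, and linear ramp are assumptions, not values from the
# disclosure.
FULL_SCALE_CURRENT_A = 0.700   # assumed driver output at 100% brightness
WARM_START_FRACTION = 0.5      # assumed point where warm dimming begins

def warm_duty_cycle(sensed_current_a, switch_closed=True):
    """Return the PWM duty cycle (0.0-1.0) for the warm LED string."""
    if not switch_closed:
        return 0.0                       # static color dimming: warm string off
    fraction = max(0.0, min(1.0, sensed_current_a / FULL_SCALE_CURRENT_A))
    if fraction >= WARM_START_FRACTION:
        return 0.0                       # bright output: cool white only
    # Ramp linearly from 0 at the threshold to 1.0 as the current approaches 0.
    return (WARM_START_FRACTION - fraction) / WARM_START_FRACTION

for level in (1.0, 0.6, 0.5, 0.25, 0.05):
    duty = warm_duty_cycle(level * FULL_SCALE_CURRENT_A)
    print(f"driver at {level:4.0%} -> warm duty cycle {duty:.2f}")
```

Run as written, the sketch prints a duty cycle of 0 at full brightness and a duty cycle approaching 1.0 as the sensed current approaches zero, mirroring the warm dim effect the disclosure describes.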
FIG. 3 is a schematic diagram of another example of the multiple dimming mode light fixture 100. As discussed above with respect to FIG. 1, the input device 102 provides signals to the driver 104 to increase or decrease current supplied to the lighting device 106. In an example, the warm LED controller 112 includes a current meter that detects the current supplied by the driver 104 at a node 302, which is positioned between the driver 104 and the cool LED group 108 and the warm LED group 110.
Based on the current measured at the node 302, the warm LED controller 112 outputs a PWM voltage signal to the gate 204 of the transistor 206. The duty cycle of the PWM voltage signal provided by the warm LED controller 112 to the gate 204 may vary based on the current detected at the node 302. For example, as the current at the node 302 decreases, the duty cycle of the PWM voltage signal provided to the gate 204 may increase. Likewise, as the current at the node 302 increases, the duty cycle of the PWM voltage signal provided to the gate 204 may decrease.
The voltage provided to the gate 204 of the transistor 206 increases conductivity between the source 208 and the drain 210 of the transistor 206. Accordingly, as the duty cycle of the PWM voltage signal provided to the gate 204 increases, less overall resistance is present between the source 208 and the drain 210. In this manner, a greater duty cycle of the PWM voltage signal at the gate 204 results in a greater overall conductivity across the transistor 206, an increase in current provided to the warm LED group 110, and a brighter output of the warm LED group 110. However, if the warm LED controller 112 provides a voltage less than a gate threshold voltage to the gate 204, the conductivity between the source 208 and the drain 210 of the transistor 206 is negligible, and the warm LED group 110 does not emit a light output. Further, if the warm LED switch 114 is open, current is not provided to the warm LED group 110 regardless of the voltage supplied by the warm LED controller 112 to the gate 204 of the transistor 206.
While the warm LED switch 114 is described in FIGS. 2 and 3 as being placed at the gate 204 of the transistor 206 or between the transistor 206 and a node 304, other positions of the warm LED switch 114 are also contemplated. For example, the warm LED switch 114 may be positioned between the node 302 and the warm LED controller 112, between the node 302 and the warm LED group 110, between individual LEDs of the warm LED group 110, between the warm LED group 110 and the transistor 206, or at any other location where the warm LED switch 114 is able to override control voltage signals provided by the warm LED controller 112 to the gate 204 of the transistor 206 or to override sensing signals provided to the warm LED controller 112.
Turning to FIG. 4, a schematic diagram of an example of a warm dimming mode light fixture 400 is depicted. As discussed above with respect to FIG. 1, the input device 102 provides signals to the driver 104 to increase or decrease current supplied to a lighting device 402. As illustrated, a warm LED device 404 may be added in parallel to the cool LED group 108. The warm LED device 404 may include the warm LED controller 112, the transistor 206, and the warm LED group 110. In an example, a previously assembled or manufactured lighting device including a group of LEDs with only one light output color (e.g., the cool LED group 108) may be retrofitted with the warm LED device 404 to form the warm dimming mode light fixture 400.
In an example, the warm LED device 404 includes the warm LED controller 112. The warm LED controller 112 may include a current meter that detects the current supplied by the driver 104 at a node 406, which is positioned between the driver 104 and the cool LED group 108 and the warm LED group 110 if the warm LED device 404 is coupled in parallel with the cool LED group 108. Based on the current measured at the node 406, the warm LED controller 112 outputs a PWM voltage signal to the gate 204 of the transistor 206. A duty cycle of the PWM voltage signal provided by the warm LED controller 112 to the gate 204 may vary based on the current detected at the node 406. For example, as the current at the node 406 decreases, the duty cycle of the PWM voltage signal provided to the gate 204 may increase. Likewise, as the current at the node 406 increases, the duty cycle of the PWM voltage signal provided to the gate 204 may decrease.
The voltage provided to the gate 204 of the transistor 206 results in an increase in conductivity between the source 208 and the drain 210 of the transistor 206. Accordingly, as the duty cycle of the PWM voltage signal provided to the gate 204 increases, less resistance is present between the source 208 and the drain 210. In this manner, a greater duty cycle of the PWM voltage signal at the gate 204 results in a greater conductivity across the transistor 206, resulting in a greater flow of current through the warm LED group 110 and a brighter light output of the warm LED group 110. However, if the warm LED controller 112 provides a voltage less than a gate threshold voltage to the gate 204, the conductivity between the source 208 and the drain 210 of the transistor 206 is negligible, and the warm LED group 110 does not emit a light output.
While the warm LED device 404 is depicted without the warm LED switch 114, the warm LED switch 114 may also be included in one or more examples of the warm LED device 404. Accordingly, the warm LED device 404 with the warm LED switch 114 may be provided in parallel with a cool LED group 108 to switch between a variable color dimming operation and a static color dimming operation.
FIG. 5 is a graphical representation 500 of electric currents provided during a variable color dimming operation over a range of output lighting percentages of a light fixture 100 or 400. As can be seen at point 502, which represents full power provided to lighting elements of the light fixture 100 or 400, a current 504 provided to the cool LED group 108 is at a maximum value, and a current 506 provided to the warm LED group 110 is at a minimum value. At the point 502, the warm LED controller 112 is not providing a positive voltage to the gate 204 of the transistor 206.
As the output lighting percentage decreases to point 508 on the graphical representation 500, the warm LED controller 112 can detect a change in the current provided to the lighting device 106 of the light fixture 100 or 400. For example, at point 508, the current provided to the lighting device 106 may be approximately 50% of the current provided to the lighting device 106 at point 502. In response to a decrease in the current provided to the lighting device 106, the warm LED controller 112 may begin to apply a positive voltage to the gate 204 of the transistor 206. As the current provided to the lighting device 106 decreases, a duty cycle of the PWM voltage signal applied to the gate 204 increases such that additional current 506 flows through the warm LED group 110 while less current 504 flows through the cool LED group 108. Due to the parallel path between the cool LED group 108 and the warm LED group 110, any increase in the current 506 provided to the warm LED group 110 results in a decrease in the current 504 provided to the cool LED group 108.
As the light output percentage reaches point 510, the current 504 provided to the cool LED group 108 and the current 506 provided to the warm LED group 110 may reach an equilibrium. That is, the same amount of current may be provided to the two parallel paths to the cool LED group 108 and the warm LED group 110. Upon reaching the equilibrium, the currents 504 and 506 continue to decrease toward point 514, where the lighting device 106 is no longer providing a light output.
As illustrated in the graphical representation 500, the warm LED controller 112 may enable the lighting device 106 to simulate a light color output of an incandescent lightbulb during a dimming operation. For example, the currents 504 and 506 provided to the cool LED group 108 and the warm LED group 110, respectively, provide a white light output at 100% output intensity and gradually transition the color of the light output to a yellow or amber light as the output intensity decreases. Additionally, if the warm LED switch 114 is set to the static color dimming operation, all of the current provided to the warm LED group 110 during the dynamic color dimming operation depicted in FIG. 5 remains with the cool LED group 108 throughout the course of the static color dimming operation.
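As a rough numerical companion to the FIG. 5 description, the sketch below splits an assumed total driver current between the cool and warm strings as the output level falls. The 700 mA full-scale figure, the 50% hand-off point, the 25% equal-split point, and the linear sharing rule are all assumptions made for illustration; none of these values comes from the disclosure.

```python
# Rough numerical illustration of the FIG. 5 behaviour: the driver current is
# split between the parallel cool and warm strings, with the warm share
# growing as the total output dims, reaching an equal split at a low output
# level and then decreasing together toward zero. All constants here are
# assumptions for this sketch, not values from the disclosure.
FULL_SCALE_MA = 700.0
WARM_START = 0.5     # output fraction where the warm string starts conducting
EQUAL_POINT = 0.25   # output fraction where the two strings share current equally

def current_split(output_fraction):
    """Return (cool_mA, warm_mA) for a given output level between 0 and 1."""
    total = FULL_SCALE_MA * output_fraction
    if output_fraction >= WARM_START:
        return total, 0.0                      # bright output: cool string only
    if output_fraction <= EQUAL_POINT:
        return total / 2, total / 2            # equilibrium: equal split, both falling
    # Warm share ramps linearly from 0 at the hand-off point to 0.5 at the
    # equal-split point.
    warm_share = 0.5 * (WARM_START - output_fraction) / (WARM_START - EQUAL_POINT)
    warm = total * warm_share
    return total - warm, warm

for pct in (100, 75, 50, 30, 10, 0):
    cool, warm = current_split(pct / 100)
    print(f"{pct:3d}% output: cool {cool:6.1f} mA, warm {warm:6.1f} mA")
```

The printed table traces the same shape as the description above: all current in the cool string at full output, a growing warm share below the hand-off point, and both currents falling together to zero.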
General Considerations
Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter. Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform. The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multi-purpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more aspects of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device. Aspects of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel. The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting. While the present subject matter has been described in detail with respect to specific aspects thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such aspects. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude the inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.
BRIEF DESCRIPTION OF THE DRAWINGS
Features, aspects, and advantages of the present disclosure are better understood when the following Detailed Description is read with reference to the accompanying drawings.
FIG. 1 depicts a block diagram of an example of a multiple dimming mode light system, according to certain aspects of the present disclosure.
FIG. 2 depicts a schematic diagram of an example of the multiple dimming mode light system of FIG. 1, according to certain aspects of the present disclosure.
FIG. 3 depicts a schematic diagram of an additional example of the multiple dimming mode light system of FIG. 1, according to certain aspects of the present disclosure.
FIG. 4 depicts a schematic diagram of an example of a warm dimming mode light fixture, according to certain aspects of the present disclosure.
FIG. 5 depicts a graphical representation of electric currents provided to light-emitting diode groups during a variable color dimming operation over a range of dimming percentages, according to certain aspects of the present disclosure.
Horus Security Consultancy Limited is a private limited company registered in England and Wales under company number 5788493 and has a registered office address of Beaver House, 23-38 Hythe Bridge Street, Oxford, OX1 2ET. Our VAT number is 885203123. Horus Security Consultancy was founded in April 2006, on the principles of transparency and integrity, in order to bring security best practice to not-for-profit and commercial clients. Horus has since developed to become a leading UK company with expertise in providing timely, accurate and ethical information as well as an end-to-end security service capability to a number of sectors globally, including the bio-medical and pharmaceutical, construction, governmental, energy, transport, entertainment, retail and charity sectors. Horus’ global development has been further cemented through the launch of Horus Asia-Pacific Limited, which is based in Singapore. Horus’ services are underpinned by its Core Values, which are:
- Transparency and full legal compliance
- Ethicality in our information gathering and investigations
- Proportionality providing balance and relevance
- Support providing what the client wants as well as what the client needs
- Continuous development of staff and technological resources
For its Open Source collection and Security Screening, Horus does not undertake the practices of hacking or false flagging, nor does it engage in other intrusive techniques. Therefore, Horus is able to completely protect client reputation and provide Open Source collection and Security Screening material which is entirely disclosable. The Horus understanding of current and potential risks to a variety of sectors, as well as our specialist research, analytical and investigative ability, is second to none. We pride ourselves on the depth and detail of our research and our talent for discovering the less obvious.
https://horus-security.co.uk/about
295 S.W.3d 857 (2009) In re Larry D. COLEMAN, Respondent. No. SC 89849. Supreme Court of Missouri, En Banc. September 15, 2009. As Modified on Denial of Rehearing November 17, 2009. *859 Alan D. Pratzel, Shannon L. Briesacher, Office of Chief Disciplinary Counsel, Jefferson City, MO, for appellant. Larry D. Coleman, Raytown, MO, for respondent. PATRICIA BRECKENRIDGE, Judge. The Office of Chief Disciplinary Counsel (OCDC) seeks discipline of Larry D. Coleman for multiple violations of the rules of professional conduct in his representation of a client in three cases and one violation for the improper handling of his IOLTA account. This Court finds that Mr. Coleman violated Rule 4-1.2, Scope of Representation, by accepting a settlement offer in a wrongful death case without the consent of his client and filing a motion in that case requesting that the court enforce the agreement despite his client's refusal to settle; Rule 4-1.7, Conflict of Interest, by entering into written agreements with his client purporting to give him the exclusive right to settle her three cases and taking direct, adverse action against his client's directives when he filed and proceeded on a motion to enforce the agreement; Rule 4-1.15, Safekeeping Property, by failing to keep his personal funds separate from his IOLTA account; Rule 4-1.16, Declining or Terminating Representation, by failing to take reasonable steps to protect his client's interest on termination of representation; and Rule 4-8.4, Misconduct, by violating other rules of professional conduct and by engaging in conduct that is prejudicial to the administration of justice.[1] Mr. Coleman is suspended from the practice of law, with execution of such suspension stayed, subject to Mr. Coleman's completion of a one-year term of probation in accordance with conditions imposed by this Court. Factual and Procedural Background Larry D. Coleman was licensed to practice law in Missouri in 1977. Mr. Coleman has had three prior disciplinary matters. Mr. Coleman was admonished in 1990 for failure to communicate with a client and for unreasonable fees. He was admonished in 1999 for failure to act with reasonable diligence, to expedite litigation and to communicate with a client. In April 2008, he received a public reprimand for violations regarding diligence, unreasonable fees, and conduct prejudicial to the administration of justice. The present disciplinary action arises from his representation of a client, Vera Davis, and his handling of his IOLTA account. Beginning in July 2001, Ms. Davis hired Mr. Coleman to represent her in three separate civil actions that eventually were pending simultaneously. Ms. Davis hired Mr. Coleman to represent her in a wrongful death action for medical malpractice in the death of her sister. She then hired Mr. Coleman to represent her in a wrongful termination case against her employer, Two Rivers Psychiatric Hospital. Finally, Ms. Davis hired Mr. Coleman to represent her in a discrimination claim against her employer, Western Missouri Mental Health Center. The fee agreement in each case required Ms. Davis to pay Mr. Coleman nonrefundable retainers of $5000, $2000 and $1000 respectively, and then an hourly rate of $200 per hour. In addition, the agreement required Ms. *860 Davis to pay the litigation expenses in each case. From July 2001 to September 2006, Mr. Coleman periodically sent bills to Ms. Davis for his legal fees and expenses in the three cases; as the bills arrived, Ms. Davis paid them. Ms. 
Davis did not keep a ledger or receipts for all monies she paid to Mr. Coleman, but she was able to produce receipts for payments of more than $38,000. She believes she may have paid him up to $50,000, which Mr. Coleman does not dispute. Mr. Coleman did not maintain copies of any documentation showing his billable hours, the litigation costs accrued, or her payments. Mr. Coleman maintains that his records of all bills and statements were delivered to Ms. Davis when she picked up her legal files from his office. In July 2006, the state of Missouri, on behalf of Western Missouri Mental Health Center, made an offer to settle Ms. Davis' discrimination claim for $20,000. When Mr. Coleman advised Ms. Davis of the settlement offer, she told him that she did not find the settlement offer acceptable and instructed him to reject it. In September 2006, Ms. Davis informed Mr. Coleman that she no longer had the financial ability to pay him as previously agreed. Mr. Coleman proposed that their fee agreements in her three cases be converted to contingent fee agreements. For each case, Mr. Coleman prepared a written contingent fee agreement, which he executed and mailed to Ms. Davis. Each agreement provided that Mr. Coleman was forgiving any currently owed fees in exchange for the right to one-third of any future recovery.[2] Mr. Coleman also included the following provision in each agreement: In consideration of one-third (1/3) of any recovery, I agree to forego my hourly rate, and instead, agree to accept one-third of any recovery. However, because I am taking a risk with you on this case, and because I am more familiar with the legal trends relative to judgments, settlements, and summary disposition, you agree I shall have the exclusive right to determine when and for how much to settle this case. That way, I am not held hostage to an agreement I disagree with. Mr. Coleman did not explain or otherwise discuss this clause with Ms. Davis prior to the execution of the agreements. Ms. Davis signed each of the new agreements. In October 2006, the state, on behalf of Western Missouri Mental Health Center, again offered to settle Ms. Davis' case for $20,000. When Mr. Coleman advised her of the offer, Ms. Davis again informed Mr. Coleman that the offer was unacceptable. Without Ms. Davis' consent, Mr. Coleman accepted the state's $20,000 offer. Mr. Coleman then informed Ms. Davis that he had settled her case against Western Missouri Mental Health Center for $20,000. Because it was necessary for Ms. Davis to execute the settlement documents to effectuate the settlement, Mr. Coleman repeatedly requested that Ms. Davis sign the documents. Ms. Davis refused and informed Mr. Coleman that she wished to proceed to trial. In November 2006, Mr. Coleman sent a letter to Ms. Davis stating that, if Ms. Davis refused to sign the settlement agreement, Mr. Coleman would be forced to withdraw in all three of her cases or move the court to enforce the settlement agreement against Western Missouri Mental *861 Health Center despite her explicit refusal to settle. Thereafter, Mr. Coleman filed a motion in the Western Missouri Mental Health Center case to enforce the settlement agreement against Ms. Davis. In February 2007, the court ruled against Mr. Coleman and declined to enforce the settlement agreement. The court placed the case back on the trial docket for April 2, 2007. Mr. Coleman mailed her a copy of the order overruling his motion. He mailed Ms. 
Davis another letter, dated February 16, 2007, informing her that if she did not contact him in writing within one week of his letter, he would withdraw as her attorney. After more than one week had passed, on February 28, 2007, Mr. Coleman filed a motion to withdraw as Ms. Davis' attorney in the Western Missouri Mental Health Center case, citing Ms. Davis' failure to respond to his deadline as his reason for withdrawal. Mr. Coleman did not send Ms. Davis a copy of his motion to withdraw. On March 2, 2007, Ms. Davis sent Mr. Coleman a letter requesting her files. Three days later, on March 5, 2007, Ms. Davis sent a representative to pick up her files. She took her files to one attorney and discussed her cases with another attorney. At some point, Ms. Davis received from the court a copy of Mr. Coleman's motion to withdraw, as well as a directive from the court that she respond on or before April 12, 2007, if she wished to object to Mr. Coleman's motion. Ms. Davis was uncertain as to how to proceed, so on April 5, 2007, she directed a letter to Mr. Coleman asking him a number of questions, including whether she needed to obtain another lawyer, what his fees were in her cases and whether he intended to assert attorney fee liens in her cases, the status of each of her cases, and whether he anticipated that she would have any problems in her cases. Her letter directed Mr. Coleman to respond by the next day, April 6. Mr. Coleman did not reply to Ms. Davis' letter. On April 5, 2007, the same day that Ms. Davis wrote Mr. Coleman with questions about her case, Mr. Coleman sent a letter to Ms. Davis acknowledging that her files had been retrieved from his office. Mr. Coleman closed his letter by stating, "It was a pleasure to serve you."[3] In May 2007, Ms. Davis filed an objection to Mr. Coleman's withdrawal in the Western Missouri Mental Health Center case because he had not provided the information sought in her letter. Despite her objection, the court granted Mr. Coleman leave to withdraw. Ms. Davis' case was placed on the fall 2007 accelerated trial docket. Because she was unable to obtain new counsel, her case ultimately was dismissed. During the time period described above, summary judgment was issued against Ms. Davis in her wrongful death action. In March 2007, Mr. Coleman filed an appeal of the summary judgment. Mr. Coleman then filed a motion to withdraw as Ms. Davis' counsel, which the court of appeals granted. Similarly, in June 2007, Mr. Coleman requested and was granted permission to withdraw as Ms. Davis' attorney in her wrongful termination action against Two Rivers. In July 2007, Ms. Davis filed a complaint against Mr. Coleman with OCDC. OCDC filed an information, in April 2008, charging that Mr. Coleman violated multiple rules of professional conduct in his legal representation of Ms. Davis and another client. In his answer, Mr. Coleman denied all of the alleged violations. *862 Independent of the issues arising out of his representation of Ms. Davis, other conduct by Mr. Coleman came to the attention of OCDC. In this Court's April 2008 order regarding Mr. Coleman's third disciplinary matter, Mr. Coleman was ordered to pay a $750 fee, plus costs, to the clerk of this Court. On or about June 11, 2008, Mr. Coleman wrote a check on his IOLTA account for the amount due. When it was discovered that Mr. Coleman paid his fee and costs from an IOLTA account, OCDC opened a complaint and initiated an investigation against Mr. Coleman. OCDC discovered that Mr.
Coleman regularly deposited checks for settlement proceeds into his IOLTA account. When a settlement check cleared the bank, it was Mr. Coleman's practice to pay out the client's share of the settlement proceeds to the client. Once his clients were paid, Mr. Coleman sometimes would leave his share of the settlement proceeds in the account and write checks to pay personal obligations directly out of the IOLTA account. Mr. Coleman did not keep records or ledgers identifying deposits into the account. OCDC amended its previously filed information to add an additional count, alleging that Mr. Coleman mishandled his IOLTA account by failing to hold client and third-party property separate from his own property. A disciplinary hearing panel ("the panel") heard the matter in October 2008, and found there was insufficient proof to establish any violations regarding the other client. Regarding Mr. Coleman's representation of Ms. Davis, the panel found insufficient evidence that Mr. Coleman violated Rule 4-1.5, unreasonable or excessive fees; that the clause in the contingent fee contract purporting to give Mr. Coleman the exclusive right to settle Ms. Davis' cases created a conflict of interest in violation of Rule 4-1.7; or that Mr. Coleman failed to protect Ms. Davis' interests on termination of his representation in violation of Rule 4-1.16(d). The panel also found there was no evidence Mr. Coleman commingled client funds with his own funds in violation of Rule 4-1.15, the rule requiring the safekeeping of client property. However, the panel did determine that Mr. Coleman violated Rule 4-1.2(a), by accepting the settlement offer in the Western Missouri Mental Health Center case despite Ms. Davis' expressed wishes. This conduct also violated Rule 4-8.4(d), regarding conduct prejudicial to the administration of justice. The panel recommended that Mr. Coleman be reprimanded publicly. OCDC filed its rejection of the panel's recommendation, which brings the matter before this Court.[4] OCDC argues that the evidence proves that Mr. Coleman violated the following rules of professional conduct: (1) 4-1.2, Scope of Representation, in that Mr. Coleman accepted a settlement agreement with Western Missouri Mental Health Center without consent of his client, Ms. Davis, and then moved the court to enforce the agreement despite Ms. Davis' decision not to settle; (2) 4-1.5, Fees, in that Mr. Coleman converted an hourly fee agreement into a contingent fee agreement and did not give Ms. Davis credit for the more than $30,000 in fees she had paid; (3) 4-1.7, Conflict of Interest: Current Clients, in that Mr. Coleman entered into a contingent fee agreement with Ms. Davis that purported to give him the exclusive right to settle all of her cases and subsequently took action *863 in court despite Ms. Davis' explicit refusal to settle; (4) 4-1.16, Declining or Terminating Representation, in that Mr. Coleman failed to notify Ms. Davis that he had withdrawn as her counsel and failed to provide Ms. Davis with information regarding her rights and obligations; (5) 4-1.15, Safekeeping Property, in that Mr. Coleman regularly commingled personal and client funds; and (6) 4-8.4(d), Conduct Prejudicial to the Administration of Justice, in that Mr. Coleman caused harm to the judicial system and his clients by violating multiple rules of professional conduct. OCDC requests that this Court suspend Mr. Coleman's license, with no leave to reapply for a period of one year.
Standard of Review "Professional misconduct must be proven by a preponderance of the evidence before discipline will be imposed." In re Crews, 159 S.W.3d 355, 358 (Mo. banc 2005). This Court reviews the evidence de novo, independently determines all issues pertaining to credibility of witnesses and the weight of the evidence, and draws its own conclusions of law. In re Belz, 258 S.W.3d 38, 41 (Mo. banc 2008). This Court treats the panel's findings of fact, conclusions of law, and the recommendations as advisory. In re Crews, 159 S.W.3d at 358. Moreover, this Court may reject any or all of the panel's recommendations. In re Madison, 282 S.W.3d 350, 352 (Mo. banc 2009). Discussion Many of the facts essential to resolve Mr. Coleman's alleged disciplinary rule violations are not contested. Mr. Coleman admits that he drafted three contingent fee agreements for Ms. Davis to sign that purported to give him sole authority to settle her three lawsuits. He admits that he agreed to settle her case against Western Missouri Mental Health Center for $20,000, even though she expressly told him that she did not agree that the case should be settled for that amount. He also admits that he filed a motion in federal district court seeking enforcement of the agreement and proceeded on his motion. Mr. Coleman does not dispute that, after the court overruled his motion, he filed a motion to withdraw as counsel and did not mail a copy of the motion to Ms. Davis. He admits receiving a letter on April 5, 2007, from Ms. Davis requesting detailed information about her pending lawsuits and not responding in any way. Mr. Coleman further admits that he wrote personal checks out of his IOLTA account. The bank records of that account establish that personal checks were written by Mr. Coleman at a time when a client's funds were in the account. These facts are sufficient to establish that Mr. Coleman violated Rules 4-1.2, 4-1.7, 4-1.15, 4-1.16, and 4-8.4. Violation of Rule 4-1.2 Rule 4-1.2(a) requires a lawyer to accept and adhere to the client's decision whether to accept an offer of settlement of a matter: (a) A lawyer shall abide by a client's decisions concerning the objectives of representation, subject to paragraphs (c), (d) and (e), and shall consult with the client as to the means by which they are to be pursued. A lawyer shall abide by a client's decision whether to accept an offer of settlement of a matter. This rule recognizes that fundamental to the attorney-client relationship is the concept that an attorney advocates for the client's objectives. "The client-lawyer relationship itself implies some decisions *864 [are] reserved to the client. Thus, a client and lawyer could not enter into a valid contract that only the lawyer would have the authority to decide what would benefit the client[.]" REST. OF LAW GOVERNING LAWYERS § 22 cmt. a (2000). Settlement decisions have the potential both to benefit and harm the client. Rule 4-1.2(a) requires a client to be in control of the decisions that have the capacity to affect the client profoundly, specifically referencing the decision whether to accept a settlement of the case, so an attorney may not execute a contract that gives the attorney the sole right to settle a case.[5] For the three cases in which he represented Ms. Davis, Mr. Coleman drafted contingent fee contracts that included a provision that "[he] shall have the exclusive right to determine when and for how much to settle this case." He executed the three agreements with this provision and then had Ms. 
Davis execute them as well. This expressly is disallowed by Rule 4-1.2(a). While Rule 4-1.2(c) allows an attorney to limit the scope of representation,[6] the rules of professional conduct do not allow an attorney to expand the scope of representation by a client's agreement so that the attorney may determine whether to accept or reject a settlement offer instead of the decision being made by the client. Mr. Coleman also violated Rule 4-1.2(a) when he acted under the authority of the invalid agreement with Ms. Davis and advised counsel for Western Missouri Mental Health Center that he agreed to settle Ms. Davis' case against the center for $20,000. He agreed to settle Ms. Davis' case after she expressly had told him that she did not accept the $20,000 settlement offer. He further violated Rule 4-1.2(a) when he filed a motion requesting that the court order Western Missouri Mental Health to honor the agreement he accepted despite Ms. Davis' refusal to settle the case. Mr. Coleman's attempt to compel Ms. Davis to settle her case when he knew she did not consent to that settlement expressly is disallowed by Rule 4-1.2(a). Violation of Rule 4-1.7 Rule 4-1.7(b) provides that "a lawyer shall not represent a client if the representation of that client may be materially limited ... by the lawyer's own interests, unless: (1) the lawyer reasonably believes the representation will not be adversely affected and (2) the client *865 consents after consultation." The comment to this rule recognizes that "[l]oyalty... [is] an essential element in the lawyer's relationship to a client" and that "[t]he lawyer's own interests should not be permitted to have adverse effect on representation of a client." Rule 4-1.7, Comment. Although Rule 4-1.7(b) provides that representation by an attorney is permissible despite a conflict of interest if the client consents after consultation, there is a limitation as to when a client may waive a conflict. The client should not be asked to consent to representation despite the conflict, unless it is reasonable to believe that the conflict will not adversely affect the attorney's representation of the client. It was not reasonable for Mr. Coleman to believe that his interests would not adversely affect his representation of Ms. Davis under the contingent fee agreement that gave him the sole right to settle her cases. Under the contingent fee agreement, Mr. Coleman would receive a contingent fee only if Ms. Davis settled her cases or received a favorable verdict. At the same time, he improperly contracted with her for the exclusive right to settle her cases with or without her consent. The conflict of interest the agreement created was apparent by Mr. Coleman's inclusion of a statement in the agreement indicating his intent was to protect his financial interests rather than her legal interests: "[Y]ou agree I shall have the exclusive right to determine when and for how much to settle this case. That way, I am not held hostage to an agreement I disagree with." The contingent fee contract gave Mr. Coleman a motivation to protect his financial interests. By divesting Ms. Davis of settlement authority, Mr. Coleman attempted to protect his financial interests by allowing Mr. Coleman to give his interests greater priority than Ms. Davis'. In doing so, this agreement creates a conflict between Ms. Davis' interests and Mr. Coleman's personal, financial interests. Additionally, Mr. Coleman mailed the agreements to Ms. 
Davis and never discussed with her the provisions that gave him the sole right to settle her cases. Mr. Coleman again violated Rule 4-1.7 by acting against Ms. Davis' interests when he advised counsel for Western Missouri Mental Health Center that he would settle Ms. Davis' lawsuit for $20,000, despite her explicit refusal to accept the settlement. Mr. Coleman further acted against her interest, in violation of Rule 4-1.7, by filing a motion in the federal lawsuit asking the court to enforce the agreement despite Ms. Davis' explicit refusal to settle, and by proceeding on the motion. In each of these instances, Mr. Coleman promoted his personal interests when he knew they directly conflicted with those of Ms. Davis. If a conflict of interest arises after representation is undertaken, a lawyer may be able to protect his client's interests by withdrawing from representation. Rule 4-1.7, Comment. While Mr. Coleman withdrew from his representation of Ms. Davis in her three cases, he did not seek to withdraw at the time it became apparent that his interest was adverse to hers. He withdrew only after the federal court overruled his motion to enforce the agreement against her in the Western Missouri Mental Health Center case. His withdrawal was untimely, and it did not mitigate the conflict. This Court finds that Mr. Coleman violated Rule 4-1.7. Violation of Rule 4-1.15 Rule 4-1.15(c) states: "A lawyer shall hold property of clients or third persons that is in a lawyer's possession in connection with a representation separate from the lawyer's own property. Client or *866 third party funds shall be kept in a separate account[.]"[7] Mr. Coleman violated this rule. He admits that, from January to July 2008, he regularly deposited settlement proceeds into his IOLTA account for the purpose of allowing the settlement checks to clear. Mr. Coleman acknowledges that, after each check cleared, he paid the client the client's portion of the settlement. He then regularly paid personal obligations out of his portion of settlement proceeds that remained in his IOLTA account. Mr. Coleman argues this was not a violation of Rule 4-1.15 because "there are no other funds in his IOLTA account, except [Mr. Coleman's]." Mr. Coleman misunderstands Rule 4-1.15(c). Rule 4-1.15(c) explicitly states that there must be an account for client and third-party funds that is kept separate from any account holding an attorney's own funds. While it may be true that Mr. Coleman did not misuse funds by using client funds to pay personal bills or convert any client funds, he did use his IOLTA account for personal use.[8] That is strictly prohibited. Rule 4-1.15, Comment 1. Commingling personal and client funds is only "permissible when necessary to pay bank service charges on [the IOLTA] account." Rule 4-1.15, Comment 2. Any funds owed to Mr. Coleman should have been transferred into a personal account before the money was withdrawn via a check. Contrary to Mr. Coleman's argument that there were never client funds in the account when he wrote personal checks, records from his bank show that, during the time he kept his personal funds in the IOLTA account and wrote personal checks to expend the funds, there were client funds in the account. This is a classic example of prohibited commingling of attorney and client funds. Additionally, "accurate records must be kept regarding which part of the funds is the lawyer's." Rule 4-1.15, Comment 2. When questioned about the identity of the owner of specific deposits, Mr. 
Coleman was unable to say to whom the money belonged. Violation of Rule 4-1.16 OCDC also argues the evidence proves that Mr. Coleman violated Rule 4-1.16 because he failed to protect Ms. Davis' interests at the termination of his representation. Specifically, OCDC claims Mr. Coleman did not notify Ms. Davis that he filed motions to withdraw in her cases and did not provide her with information regarding her rights and obligations even though she made a written request for information. Rule 4-1.16(d) provides that, "[u]pon termination of representation, a lawyer shall take steps to the extent reasonably practicable to protect a client's interests, such as giving reasonable notice to the client, allowing time for employment of other counsel[.]" One step necessary to protect a client's interest is giving notice to the client that the attorney has filed a motion to withdraw as counsel of record. That notice should be given unless it reasonably cannot be done, such as when the client has no known address. After Mr. Coleman was unsuccessful in enforcing the provision giving him the sole right to settle Ms. Davis' Western Missouri Mental Health Center case, Mr. Coleman filed motions to withdraw as counsel in all three of her cases. The *867 three motions to withdraw filed by Mr. Coleman included certificates of service, indicating that notice of each motion to withdraw was given by mailing a copy of the motion. The certificates of service do not show that Ms. Davis was mailed a copy. Mr. Coleman did send her a letter advising her that, in light of their conflict over whether the Western Missouri Mental Health Center case should be settled for $20,000, he needed to know whether he was still representing her in that case and her other two cases. He stated that, unless he heard from her in writing within one week, he would file a motion to withdraw in the Western Missouri Mental Health Center case. She did not respond to his inquiry as he requested, so Mr. Coleman filed a motion to withdraw as counsel in that case on February 28, 2007. It was reasonable for Mr. Coleman to believe that Ms. Davis should know that her failure to respond to his request for communication would result in him filing a motion to withdraw as counsel. Nevertheless, he had an obligation to protect her interest by giving her notice that he actually had filed a motion to withdraw as counsel of record in all three of her cases. Mr. Coleman further violated rule 4-1.16 by failing to take all reasonable steps to mitigate the consequence of his withdrawal on Ms. Davis, and to protect her interests by providing her with the information she requested. After Ms. Davis learned that Mr. Coleman had filed a motion to withdraw in the Western Missouri Mental Health Center case, she sent Mr. Coleman a letter requesting information that would help her decide whether to object to his withdrawal. In the letter, she asked him when her federal lawsuit would be set for trial, whether she would need another lawyer, what problems there would be, and whether any offer had been made other than the $20,000 offer that she declined. She asked what fees she owed and requested an itemized statement of all charges and work done to support the charges. She also asked him questions about the status of her two discrimination cases, for a copy of her fee agreement in the Two Rivers case, for itemized statements of the work done, and for Mr. Coleman's charges in both cases. 
She asked whether he believed he was entitled to a portion of any recovery in the cases and whether he intended to file liens for his fees. Ms. Davis requested that his response be in writing. At the time that Ms. Davis sent her letter requesting information from Mr. Coleman, she had picked up all of her files from his office. Although she later disputes that her act communicated a decision to terminate Mr. Coleman as her attorney, Mr. Coleman was reasonable in interpreting her action as a termination of his representation. Even so, under Rule 4-1.16, Mr. Coleman was required to protect Ms. Davis' interests on termination of his representation, if reasonably practicable. Most of the information she requested was vital to her ability to hire counsel to represent her in the three pending cases.[9] While she had possession of her case files at the time she requested information from Mr. Coleman, an attorney considering whether to represent her would be aided greatly by a statement from the prior attorney of record as to the status of her cases. Additionally, any attorney asked to undertake representation in plaintiff's *868 pending cases would be concerned about receiving payment for services rendered. Whether Mr. Coleman was claiming a percentage of any recovery and whether he intended to file attorney liens were major considerations in any attorney's decision whether to represent Ms. Davis, because she no longer had the ability to pay an hourly rate. Mr. Coleman was mistaken that he did not have an obligation to give Ms. Davis information about her cases because she had terminated their attorney-client relationship. Mr. Coleman's duty to provide the information arose because of the termination of their relationship. Mr. Coleman violated Rule 4-1.16 by failing to protect Ms. Davis' interests when he did not provide information that was readily within his knowledge and possession. Violation of Rule 4-8.4 Rule 4-8.4 defines professional misconduct for which an attorney may be disciplined. Rule 4-8.4(a) states that "[i]t is professional misconduct for a lawyer to: (a) violate or attempt to violate the Rules of Professional Conduct." Rule 4-8.4(d) also provides that it is professional misconduct for a lawyer to "engage in conduct that is prejudicial to the administration of justice." By violating rules of professional conduct, Mr. Coleman "has necessarily violated Rule 4-8.4(a)." In re Caranchini, 956 S.W.2d 910, 916 (Mo. banc 1997). OCDC asks this Court to find that Mr. Coleman also violated Rule 4-8.4(d) because his conduct in violation of the rules of professional conduct wasted judicial resources and negatively impacted the judicial process so it was prejudicial to the administration of justice. Regarding Rule 4-8.4(d), Mr. Coleman wasted judicial resources when he filed a motion and attempted to enforce a prohibited agreement purporting to give him the sole right to settle Ms. Davis' case. His conduct also negatively impacted the judicial process because his failure to give Ms. Davis information at the termination of his representation hindered her ability to obtain new counsel that was necessary to adjudicate her claims in the pending cases. Mr. Coleman, therefore, violated Rule 4-8.4(d). Insufficient Evidence of Violation of Rule 4-1.5 OCDC asserts that Mr. Coleman violated an additional rule of professional conduct, Rule 4-1.5, by making agreements with Ms. Davis for unreasonable fees. Despite the fact that Mr. 
Coleman can produce no written records or accountings to show the work he performed, OCDC does not contend the $30,000 to $50,000 in fees that he charged and that Ms. Davis paid were unreasonable. Instead, OCDC asserts that Mr. Coleman's conversion of his agreements for a $200 hourly fee to contingent fee agreements where Mr. Coleman would receive 30 percent of any recovery was per se unreasonable because Ms. Davis received no credit for the $38,000 in fees she already had paid in her three cases. Additionally, OCDC notes that, at the time Ms. Davis was asked to consent to the contingent fee agreement, she had invested five years in Mr. Coleman's representation with no resolution in any of her actions. OCDC suggests that she had little choice but to agree because her financial resources had been depleted. This Court is concerned there are no records to document the work Mr. Coleman did for Ms. Davis, the amount he billed her for his work, or the expenses he incurred on her behalf. Mr. Coleman claims that he gave all his records to Ms. Davis, except possibly tax records, and did not keep copies for himself. While not keeping copies of records is ill-advised, *869 there is nothing in the record to dispute his claim.[10] Importantly, there was evidence that Mr. Coleman spent significant time and effort on Ms. Davis' behalf. Although Ms. Davis paid Mr. Coleman $38,000, a large sum of money, his work was on three lawsuits spanning five years. All three lawsuits involved complex legal issues: they were a medical malpractice wrongful death suit and two discrimination lawsuits. He communicated with her on a regular basis, updated her about the status of the cases, and provided her copies of documents he filed and received in the court proceedings. Mr. Coleman consulted two doctors and hired one as an expert in the wrongful death case. A portion of the amount Ms. Davis paid Mr. Coleman was for litigation expenses, including fees for the two doctors. Finally, the percentage that Mr. Coleman agreed to accept under the contingent fee agreement was 30 percent, which is on the lower end of contingent fee percentages, and that percentage was applicable even if the case was tried. The disciplinary hearing panel found that the evidence was insufficient to show Mr. Coleman violated Rule 4-1.5, and this Court agrees. Appropriate Discipline Having found by a preponderance of the evidence that Mr. Coleman committed multiple acts of professional misconduct, this Court now turns to the appropriate discipline to impose. "The fundamental purpose of an attorney disciplinary proceeding is to `protect the public and maintain the integrity of the legal profession.'" Crews, 159 S.W.3d at 360 (quoting In re Waldron, 790 S.W.2d 456, 457 (Mo. banc 1990)). This Court relies on the ABA Standards when imposing sanctions to achieve the goals of attorney discipline. In re Storment, 873 S.W.2d 227, 231 (Mo. banc 1994). The goals of attorney discipline are to protect the public, ensure the administration of justice, and maintain the integrity of the profession. ABA STANDARDS FOR IMPOSING LAWYER SANCTIONS 6 (1992). Under the ABA Standards, the factors considered are the ethical duty and to whom it is owed, the attorney's mental state, the amount of injury caused by the attorney's misconduct, and, finally, any aggravating or mitigating circumstances. ABA STANDARDS at 7. Ethical duties are owed to the client, the public, the legal system, and the profession. Of the duties that are relevant to Mr. 
Coleman's violations, the duties owed to the client include the duty to preserve the client's property, Rule 4-1.15, and the duty to avoid conflicts of interest, Rule 4-1.7. The duties owed to the public are protected in part by Rule 8.4, attorney misconduct. This rule helps guarantee that the public can trust lawyers to protect their interests and property. The duties owed to the legal system and the legal profession help to ensure that the administration of justice is swift and accurate. Rules 4-8.4, attorney misconduct, and 4-1.16, declining or terminating representation, work to protect these interests. The ABA's next factor to examine when calculating sanctions is the attorney's mental state while committing the rule violation. The ABA Standards use three mental states: intent, knowledge, and negligence. *870 Intent, the most culpable mental state, is displayed when "a lawyer acts with the conscious objective or purpose to accomplish a particular result." Id. at 10. Knowledge is shown when "the lawyer acted with conscious awareness of the nature or attendant circumstances of his or her conduct but without the conscious objective or purpose to accomplish a particular result." Id. The least culpable mental state is negligence, exhibited when "a lawyer fails to be aware of a substantial risk that circumstances exist or that a result will follow, which failure is a deviation from the standard of care that a reasonable lawyer would exercise in the situation." Id. When discussing injury to the client, the ABA Standards look at the actual injury to the client as well as the potential injury to the client, public, and legal system or profession that is "reasonably foreseeable at the time of the lawyer's misconduct." Id. at 13. The level of injury ranges from "serious" to "little or no" injury. Finally, the last inquiry in the ABA Standards considers aggravating and mitigating factors. Aggravating factors, which increase the severity of the sanction, include prior discipline by the disciplinary committee or vulnerability of the client. Id. at 11. Mitigating factors, which decrease the severity of the sanction, include inexperience in the practice of law, remorse, character and reputation, and absence of a selfish motive. Id. at 11, 28. When multiple charges of misconduct are found, "the ultimate sanction imposed should at least be consistent with the sanction for the most serious instance of misconduct among the violations." Id. at 12. Applying these factors to Mr. Coleman, OCDC asserts that the most serious instances of misconduct involve Mr. Coleman failing to follow his client's directives, creating a conflict of interest, and mishandling client funds. Mr. Coleman's failure to follow Ms. Davis' directive in violation of Rule 4-1.2 is governed by ABA Standard 4.4, which provides that suspension is appropriate when a lawyer knowingly fails to perform services for a client and causes injury or potential injury. Reprimand is appropriate when a lawyer is negligent and does not act with reasonable diligence in representing a client. Despite the plain language of Rule 4-1.2 requiring that the client make any decision regarding settlement, Mr. Coleman knowingly had his client execute an agreement where he was given the sole authority to settle his client's cases. While Mr.
Coleman did not intend to violate the rules of professional conduct and is reluctant to accept that his actions are improper and prohibited, he knowingly failed to abide by his client's directive to reject the state's settlement offer by not only advising the state that the settlement offer was accepted but also filing a motion to enforce the settlement agreement. Likewise, Mr. Coleman's conduct in violating Rule 4-1.7, creating a conflict of interest, and Rule 4-1.15, safekeeping of client property, was knowing conduct. With regard to the injury to Ms. Davis from these violations, she was required to take action to protest Mr. Coleman's efforts to enforce his settlement with Western Missouri Mental Health Center. Additionally, the improper agreement contributed to the deterioration of her relationship with Mr. Coleman, which left her without counsel in all three cases and led to the dismissal of her case against Western Missouri Mental Health Center. As to aggravating circumstances, this Court admonished Mr. Coleman in 1990 and 1999, and publicly reprimanded him in 2008. The applicable *871 mitigating factor is the absence of dishonest motives. Applying the ABA standards, the nature of Mr. Coleman's conduct justifies the suspension of Mr. Coleman's license to practice law without leave to reapply for one year. The ABA Standards provide, however, for lesser discipline where the behavior was not intentional. The ABA Standards suggest that probation is the appropriate punishment when the conduct can be corrected and the attorney's right to practice law needs to be monitored or limited rather than revoked. ABA STANDARDS, Rule 2.7 Probation, Commentary. This concept is recognized by this Court's Rule 5.225, which provides: A lawyer is eligible for probation if he or she: (1) Is unlikely to harm the public during the period of probation and can be adequately supervised; (2) Is able to perform legal services and is able to practice law without causing the courts or profession to fall into disrepute; and, (3) Has not committed acts warranting disbarment. Mr. Coleman's actions arose out of ignorance of the rules of professional conduct instead of an intention to violate the rules,[11] and it is likely that his misconduct can be remedied by education and supervision. Because this Court finds that Mr. Coleman's violations make him a proper subject for probation, it orders that execution of the suspension of his license to practice law be stayed and that he be placed on probation for one year. The conditions of his probation shall include attendance at the ethics school conducted by OCDC, participation in law practice management education and mentoring, preparation of an office management plan that is approved by OCDC, filing of quarterly responsibility reports, submission to periodic financial audits, employment of a qualified consultant, maintenance of adequate trust account records, and commission of no other violations of the rules of professional conduct. Conclusion Mr. Coleman is ordered suspended from the practice of law without leave to reapply for one year. The execution of his suspension is stayed, and Mr. Coleman is placed on probation for a period of one year, with terms of his probation as imposed herein. All concur. NOTES [1] All citations are to the 2007 Missouri Rules of Professional Conduct, unless otherwise indicated. Mr.
Coleman's violations of the rules of professional conduct, except his IOLTA account violations, all occurred prior to July 1, 2007, when amendments to the rules took effect. Mr. Coleman's IOLTA account violations occurred from March to June 2008, so they are governed by the version of Rule 4-1.15 in effect from January 1 to July 1, 2008. The subsequent amendments to the rules do not change the essence of the rules. [2] The contingent fee agreements stated that Mr. Coleman forgave unpaid balances due of $4,387.69 in the wrongful death action, $3,249.89 in the Two Rivers case, and $9,647.94 in the Western Missouri Mental Health Center case. [3] It is unknown whether Mr. Coleman or Ms. Davis was aware of the other's letter at the time of drafting his or her April 5th letter. [4] OCDC does not challenge the panel's finding that Mr. Coleman did not violate the rules of professional conduct in his representation of his other client. [5] In addition to violating the rules of professional conduct, fee agreements that purport to give an attorney control over settlement of the case have not been enforced by the courts of other jurisdictions that, like Missouri, have adopted the ABA Model Rules of Professional Conduct. E.g. Parents Against Drunk Drivers v. Graystone Pines Homeowners' Ass'n, 789 P.2d 52, 55 (Utah Ct.App. 1990) (declaring that fee agreements that give attorneys control over the settlement of the case is contrary to the Utah Code of Professional Responsibility, contrary to public policy and, ultimately, void). [6] Under Rule 4-1.2(c): "A lawyer may limit the scope of representation if the client consents after consultation." The comment to Rule 4-1.2 expressly provides that the attorney may not ask the client to surrender the right to settle litigation that the attorney may want to continue: The objectives or scope of services provided by a lawyer may be limited by agreement with the client or by the terms under which the lawyer's services are made available to the client. * * * An agreement concerning the scope of representation must accord with the Rules of Professional Conduct and other law. Thus, the client may not be asked to agree to representation so limited in scope as to violate Rule 4-1.1, or to surrender the right to terminate the lawyer's services or the right to settle litigation that the lawyer might wish to continue. [7] The version of Rule 4-1.15 in the 2008 Missouri Rules of Professional Conduct governs. This version was in effect from January 1, 2008, to July 1, 2008. [8] OCDC never alleged that Mr. Coleman converted any client funds. [9] Mr. Coleman does not assert that it was not reasonably practicable to respond to Ms. Davis' inquiries or object to the information requested. Nor does he claim that he was unable to respond to her request because she stated she needed his reply within one day. While it was not practicable for him to respond in one day, he never responded at all. [10] When Mr. Coleman gave Ms. Davis her case files and billing records, he did not make and keep copies for his own records. While duplicate records are not required by the rules of professional conduct, the amended version of Rule 4-1.15(c) does require that a lawyer preserve "complete records" of a client's trust account for five years. Because a client generally requests his or her files only when problems have arisen in the attorney-client relationship, however, this case demonstrates why retaining such records is prudent. [11] It is unlikely that Mr. 
Coleman would have filed a motion in court in an attempt to enforce the settlement agreement against the Western Missouri Mental Health Center despite Ms. Davis' explicit refusal to settle, if he knew such an agreement violated the rules of professional conduct. Likewise, it is unlikely that Mr. Coleman would have written a check on his IOLTA account to pay his fees and costs in his previous disciplinary proceeding if he had known that doing so violated the rules of professional conduct.
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a radio paging receiver with a display unit and, more particularly, to a radio paging receiver with a display unit which can effectively use a limited memory area. 2. Description of the Prior Art In a conventional radio paging receiver with a display unit, a received message is stored in a memory and can be displayed upon reception. In this case, if an identical message is received, the message is not stored in the memory. Otherwise, every time a new message is received, a message display operation is performed for notification, and the message is stored in the memory. FIG. 1 is a block diagram showing an arrangement of a conventional radio paging receiver with a display unit. Referring to FIG. 1, a radio signal received through an antenna 21 is amplified and demodulated by a radio section 22 and is input, as a digital signal B, to a decoder 24. The decoder 24 collates an address contained in this digital signal, i.e., a paging number, with a self- address C stored in an ID memory 25. If they coincide with each other, a CPU 28 stores a message code D accompanying the address code in a message memory 26, and outputs a control signal (3) for controlling a notification/display means 27 to perform a notification and display a message E. Reading of the self-address C from the ID memory 25 is performed by using a control signal (1) from the decoder 24 and reading/writing of the message D from/in the message memory 26 is performed by using a control signal (2) from the CPU 28. If the received message is stored in the message memory 26, the message D or E stored in the message memory 26 can be displayed by the notification/display means 27 using control signals (2) and (3) from the CPU 28. A control section 23 consists of the decoder 24 and the CPU 28. In the conventional radio paging receiver with a display unit, however, if a large number of similar messages and messages whose contents change with time in association with specific items are received, a large number of unnecessary messages, confusing messages, and messages expressing the current states of specific contents are stored. Consequently, a cumbersome operation is required to select a useful message, and a limited message memory area cannot be effectively used, resulting in a large increase in transmitted data. SUMMARY OF THE INVENTION It is an object of the present invention to provide a radio paging receiver with a display unit, in which only one latest message associated with an item defined by a specific character and symbol train, i.e., a specific item, is always stored, and old messages with little necessity and confusing messages are automatically removed when a large number of similar messages and messages whose contents change with time in association with specific items are received, whereby selection of a useful message is facilitated in a message display operation, and the limited message memory area can be effectively used. In order to achieve the above object, according to the first aspect of the present invention, a radio paging receiver with a display unit comprises a first memory for storing a specific character and symbol train, and a second memory for storing a received message. 
Message collating means are also provided for determining whether the specific character and symbol train is contained in the received message as well as second memory control means for controlling storage of the received message in the second memory on the basis of the determination result obtained by the message collating means. According to the second aspect of the present invention, in the radio paging receiver with a display unit according to the first aspect, when the message collating means determines that a first message containing the specific character and symbol train has already been stored in the second memory, and the specific character and symbol train is also contained in a newly received second message, the second memory control means erases the first message and causes the second memory to newly store the second message. According to the third aspect of the present invention, the radio paging receiver with a display unit according to the first aspect further comprises input means for selecting an individual information item to be received, storage/display means for storing and displaying the individual information item selected by the input means and information associated with the information item, and update means for updating the storage/display means upon reception of a new message only when a specific portion of the message corresponds to the selected individual information item, and contents of information associated with the information item change. The received paging number is contained in a received information field, a specific portion of a received message corresponds to an individual information item, in the information field, which is to be received, and a remaining portion of the message corresponds to information associated with the information item. In addition, according to the fourth aspect of the present invention, the radio paging receiver with a display unit according to the third aspect further comprises input means for, when the individual information item to be received is selected, further selecting an important individual information item, and notifying means for performing a special notification when contents of information associated with the selected important individual information item change. According to the radio paging receiver with a display unit of the present invention, it is checked whether a specific character and symbol train is contained in a received message. If it is determined that a first message containing the specific character and symbol train has already been stored in the second memory, and the specific character and symbol train is also contained in a newly received second message, the first message is erased, and the second message is stored in the second memory. With this operation, when a new message associated with an item defined by a specific character and symbol train, i.e., a specific item, is received, an old message stored in the message memory is erased, and one latest message can be stored. Therefore, only one latest message associated with an item defined by a specific character and symbol train, i.e., a specific item, is always stored, and old messages with little necessity and confusing messages are automatically removed when a large number of similar messages and messages whose contents change with time in association with specific items are received, whereby selection of a useful message is facilitated in a message display operation, and the limited message memory area can be effectively used. 
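To make the first and second aspects more concrete, the following Python sketch shows one possible software rendering of the message collating means and the second memory control means. It is illustrative only: the class and method names, the example message texts, and the use of Python rather than receiver firmware are assumptions made for this example and are not part of the disclosed receiver.

```python
# Illustrative sketch only; all identifiers are invented for this example, and
# the patent itself describes hardware/firmware means, not Python objects.

class SecondMemory:
    """Message memory that keeps at most one message per specific item."""

    def __init__(self, specific_trains):
        # Contents of the first memory: the specific character and symbol
        # trains that identify "specific items" (e.g. an item prefix).
        self.specific_trains = list(specific_trains)
        self.messages = []  # stored messages, oldest first

    def _matching_train(self, message):
        # Message collating means: which specific train, if any, is contained
        # in the received message?
        for train in self.specific_trains:
            if train in message:
                return train
        return None

    def store(self, message):
        # Second memory control means (second aspect): erase any earlier
        # message containing the same train, then store the new message.
        train = self._matching_train(message)
        if train is not None:
            self.messages = [m for m in self.messages if train not in m]
        self.messages.append(message)


# Usage: only the latest message for the "$ (YEN):" item survives.
memory = SecondMemory(["$ (YEN):"])
memory.store("$ (YEN): 138")
memory.store("MEETING AT 15:00")
memory.store("$ (YEN): 140")
print(memory.messages)  # ['MEETING AT 15:00', '$ (YEN): 140']
```

The second embodiment described later (FIGS. 5 to 7B) applies exactly this keep-only-the-latest behavior to messages carrying the "$ (YEN):" train.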
In addition, according to the present invention, in the radio paging receiver with a display unit, an individual information item to be received is selected, and the selected individual information item and information associated therewith are stored and displayed. The stored contents are replaced with a newly received message only when a specific portion of the received message corresponds to a selected information item, and the contents of information associated with the information item change. After an individual information item to be received is selected, an important individual information item is further selected. If the contents of information associated with the selected important individual information item change, a special notification is performed. With this operation, when a large number of messages whose contents change with time with respect to specific items are received, messages indicating the current states of specific contents can be efficiently received. A large number of old messages with little necessity and confusing messages are not stored, and a cumbersome operation of selecting a useful message need not be performed. Therefore, a limited message memory area can be effectively used, and information which are required to be obtained in real time can be effectively ensured. The above and many other advantages, features and additional objects of the present invention will become manifest to those versed in the art upon making reference to the following detailed description and accompanying drawings in which preferred structural embodiments incorporating the principles of the present invention are shown by way of illustrative example. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram showing the arrangement of a conventional radio paging receiver with a display unit; FIG. 2 is a block diagram showing the arrangement of an embodiment of the present invention; FIGS. 3A and 3B together form a flow chart associated with an operation of the embodiment shown in FIG. 2; FIGS. 4A and 4B illustrate display samples in the embodiment shown in FIG. 2; FIG. 5 is a block diagram showing the arrangement of another embodiment of the present invention; FIG. 6 is a flow chart associated with an operation of the embodiment shown in FIG. 5; and FIGS. 7A and 7B illustrate display samples in the embodiment shown in FIG. 5. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The present invention will be described in more detail below in association with the preferred embodiments shown in the accompanying drawings. The embodiment shown in FIG. 
2 comprises an antenna 1, a radio section 2 for demodulating a received input signal a to output a digital signal b as a paging number, a decoder 4 for receiving and decoding the output signal b from the radio section 2, an ID memory 5 for storing a self-address, a first memory 6a for storing both a preset individual information item and information indicating whether the individual information item is selected as a desired reception item by a carrier of the receiver, a first message collating section 8a for collating a message code output from the decoder 4 with the individual information item in the first memory 6a, a CPU 9, a second memory 6b for storing detailed information associated with a selected individual information item sent through a message, and a second message collating section 8b for receiving the information associated with the selected individual information item from the first message collating section 8a, collating the information with the stored contents of the second memory 6b to check a change in information contents, and sending the resultant data to the CPU 9. The embodiment also comprises an LCD driver 10, an LCD 11, a loudspeaker driving section 12, and a loudspeaker 13, which are included in a notification/display means 7, and an item selecting switch 14 for setting an individual information item. As is apparent from FIG. 2, a control section 3 is constituted by the first and second message collating sections 8a and 8b, the decoder 4, and the CPU 9. Of these components, the item selecting switch 14 and its cooperative components such as the CPU 9, the LCD driver 10, the LCD 11, and the first memory 6a constitute an input means; the second memory 6b, the CPU 9, the LCD driver 10, and the LCD 11 constitute a storage/display means; the first and second message collating sections 8a and 8b, and the second memory 6b constitute an update means; and the first and second memories 6a and 6b, the first and second message collating sections 8a and 8b, the CPU 9, the LCD driver 10, the LCD 11, the loudspeaker driving section 12, and the loudspeaker 13 also serve as a notification means. An operation of the embodiment will be described next. The radio signal a received by the antenna 1 is amplified and demodulated by the radio section 2 and is input, as the digital signal b, to the decoder 4. The decoder 4 collates an address contained in the digital signal b, i.e., the paging number, with a self-address c stored in the ID memory 5. When they coincide with each other, the decoder 4 outputs a message code d accompanying the address code to the first message collating section 8a. In this case, the self-address c is read out from the ID memory 5 under the control (1) of the decoder 4. The self-address used in this case corresponds to the field of information sent through a message, e.g., "PARKING-LOT INFORMATION" or "AIRLINE TICKET VACANCY STATE" rather than an address unique to the radio paging receiver with a display unit (to be referred to as the receiver hereinafter). The first memory 6a serves to store a character and symbol train transmitted as a specific portion of a message corresponding to an individual information item preset with respect to such an address, which item is, for example, "YOKOHAMA STATION HIGASHIGUCHI DAIICHI" or "YOKOHAMA STATION NISHIGUCHI CHIKA" for "PARKING-LOT INFORMATION".
The first memory 6a also stores information indicating whether such an individual information item is selected, as an item to be received, by the carrier of the receiver. When the message code d is input to the first message collating section 8a, a selected individual information item e1 stored in the first memory 6a is referred to (control (2)), and a specific portion of the message is collated with the selected individual item to check whether they correspond to each other. A collation result f1 is output to the second message collating section 8b. Information g transmitted through the message and associated with the selected individual information item e1 is stored in the second memory 6b. Upon reception of the collation result f1 from the first message collating section 8a, the second message collating section 8b compares the information portion of the received message, which is associated with the individual information item, with the current information g by referring to the second memory 6b (control (3)) so as to check whether the contents of the information have changed. The second message collating section 8b outputs the resultant data to the CPU 9. Upon reception of an output f2 from the second message collating section 8b, which output indicates that new information associated with the individual information item is received, the CPU 9 receives a received message i from the decoder 4 and performs loudspeaker control (4) of the loudspeaker driving section 12 to cause the loudspeaker 13 to generate a notifying sound j. In addition, the CPU 9 performs driver control (4') of the LCD (Liquid Crystal Display) driver 10 to cause the LCD 11 to display a message k, and writes new information h in the second memory 6b to update the corresponding information. As is apparent, the information g or h stored in the second memory 6b can be read out by the CPU 9 (control (5)) and can be displayed on the LCD 11 upon driver control (4') of the LCD driver 10. In order to set each individual information item, the CPU 9 writes selected information n associated with each individual information item in the first memory 6a in response to a signal m input to the item selecting switch 14. When an individual information item is selected using the item selecting switch 14, the selecting operation can be easily performed while displaying items of selection on the LCD 11. When an item of special interest is selected by operating the item selecting switch 14 from those similarly selected by the operation of the item selecting switch 14, the selected information is written in the first memory 6a by the CPU 9. When it is detected that new information associated with the item is received, the CPU 9 controls the loudspeaker driving section 12 to generate a special notifying sound from the loudspeaker 13. For example, the special notifying sound may be a sound whose volume, frequency, tone, or the like is different from that of a normal notifying sound. Alternatively, in such a case, a notifying sound may be generated even in a situation wherein no notifying sound is generated upon reception of a normal item. FIG. 3 is a flow chart for explaining an operation of the embodiment. An individual information item at each address is selected by operating the item selecting switch 14 (step 101). When an item selection mode is set, an item in a wait state for selection setting is displayed (step 102).
When a switch operation for selection setting is performed (step 103), the item in the wait state for selection setting is selected (step 104). When the item is selected, whether to set the item as an item of special interest (step 106) is determined by a switch operation (step 105), and another item is set in a wait state for selection setting (step 107). When a switch operation is performed to set another item in a wait state for selection setting while the wait state for selection setting is displayed (steps 102 and 103), another item is set in a wait state for selection setting (step 108). When, however, a predetermined period of time elapses without operating the item selecting switch 14 (step 109), the item selection mode is terminated. When a switch operation is performed to display information (step 110), information associated with each item is displayed (step 111). When a message is received (step 112), a specific portion of the message is collated with the selected item to check whether the message corresponds to the selected item (step 113). A change in the information is then checked by collating it with the latest information stored in the second memory 6b (step 114). When there is a change in the information associated with the selected item, it is checked whether the item is of special interest (step 115). When the item is of special interest, a notifying sound is generated by the loudspeaker 13 (step 116). In addition, the information associated with the item is displayed (step 117), and the received information associated with the item is written in the second memory 6b to update the corresponding information (step 118). When the item is not of special interest (step 115), only display of the information (step 117) and updating of the information (step 118) are performed. FIGS. 4A and 4B illustrate message display samples in the embodiment. FIG. 4A shows a display of information associated with each item. More specifically, "PARKING-LOT INFORMATION" is an information field corresponding to an address, and "YOKOHAMA STATION HIGASHIGUCHI DAIICHI" and "YOKOHAMA STATION NISHIGUCHI CHIKA" are individual information items. Two pieces of latest information "FULL" and "VACANT" are displayed on the right side of the respective items. The frame enclosing "YOKOHAMA STATION HIGASHIGUCHI DAIICHI" indicates that the item is of special interest. FIG. 4B shows a message display in a case wherein a new message associated with the item "YOKOHAMA STATION HIGASHIGUCHI DAIICHI" is received. When a large number of messages which change with time are received in association with special items, useful messages are automatically selected in this manner, allowing effective use of a limited memory area and efficient acquisition of necessary information. Another embodiment of the present invention will be described below with reference to FIGS. 5 to 7B. FIG. 5 is a block diagram showing the arrangement of the second embodiment. The components denoted by the same reference numerals and symbols as those in FIG. 2 have the same functions as those of the corresponding components. Therefore, a repetitive description of these components will be avoided. FIG. 5 shows the minimum necessary arrangement. This arrangement is different from the arrangement of the first embodiment shown in FIG. 2 in that a message memory control unit 9a is included in a CPU 9 in place of the second message collating section 8b.
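Before turning to the operation of the second embodiment, the reception-handling portion of the first embodiment's flow chart (steps 112 to 118 of FIG. 3) can be summarized in software form. The sketch below is a rough approximation only; the dictionaries standing in for the first and second memories, the function names, and the example data are invented for illustration and are not part of the disclosed receiver.

```python
# Hypothetical rendering of FIG. 3, steps 112-118; all names and data are invented.

first_memory = {
    # individual information item -> flags set via the item selecting switch 14
    "YOKOHAMA STATION HIGASHIGUCHI DAIICHI": {"selected": True, "special": True},
    "YOKOHAMA STATION NISHIGUCHI CHIKA": {"selected": True, "special": False},
}
second_memory = {}  # item -> latest associated information (e.g. "FULL" or "VACANT")


def on_message(item_portion, info_portion):
    """Steps 112-118: collate, detect a change, notify, display, and update."""
    entry = first_memory.get(item_portion)
    if entry is None or not entry["selected"]:
        return                                    # step 113: not a selected item
    if second_memory.get(item_portion) == info_portion:
        return                                    # step 114: no change in the information
    if entry["special"]:
        print("** special notifying sound **")    # steps 115-116: item of special interest
    print(f"{item_portion}: {info_portion}")      # step 117: display (stand-in for the LCD)
    second_memory[item_portion] = info_portion    # step 118: update the second memory


on_message("YOKOHAMA STATION HIGASHIGUCHI DAIICHI", "FULL")    # first report: displayed
on_message("YOKOHAMA STATION HIGASHIGUCHI DAIICHI", "FULL")    # unchanged: ignored
on_message("YOKOHAMA STATION HIGASHIGUCHI DAIICHI", "VACANT")  # changed: special alert
```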
An operation of the second embodiment will be described below. A received message h is stored in the second memory 6b in the following manner. First, a message collating section 8 checks upon collation whether a message code d output from a decoder 4 contains a character and symbol train identical to a specific character and symbol train e read out (control (2)) from a first memory 6a for storing a character and symbol train. The message collating section 8 then outputs a collation result f and the message h to the message memory control unit 9a. If the collation result indicates that the specific character and symbol train is not contained, the message memory control unit 9a performs control (3) through the CPU 9 to store the received message h in the second memory 6b. In contrast to this, if the collation result indicates that the specific character and symbol train is contained, the message memory control unit 9a erases the message containing the same character and symbol train stored in the second memory 6b, and causes the second memory 6b to store the newly received message h. FIGS. 7A and 7B illustrate messages displayed by the radio paging receiver shown in FIG. 5. FIG. 7A shows a case wherein a message stored in the second memory 6b is displayed until a new message is received. Assume that a new message "$ (YEN): 140" is received in this state. In this case, in the receiver shown in FIG. 5, it is considered that a new message containing the selected individual information item "$ (YEN):" has been received. Therefore, the message 21 shown in FIG. 7A is erased, and the message 22 shown in FIG. 7B is stored. FIG. 6 is a flow chart showing an operation of the radio paging receiver with a display unit in FIG. 5. When a stored message is to be displayed (step 201), the latest message is displayed (step 202). When messages other than the latest message are to be displayed (step 203), the messages are sequentially displayed from the message at the next memory number indicating a message storage location assigned to the second memory 6b (step 204). When a paging number corresponding to a self-address is received (step 205), it is checked upon collation whether a character and symbol train identical to a specific character and symbol train pre-stored in the first memory 6a is contained in the received message (step 206). In this case, the specific character and symbol train is "$ (YEN):". If "$ (YEN):" is not contained in the received message, the new message is simply stored in the second memory 6b (step 207). If, however, "$ (YEN):" is contained in the received message, it is further checked whether a message containing "$ (YEN):" is stored in the second memory 6b (step 208). If a message containing "$ (YEN):" is stored, that message is erased, and the received message is stored in the second memory 6b instead (step 209). As is apparent, if a message containing "$ (YEN):" is not stored in the second memory 6b, the received message is simply stored in the second memory 6b (step 207). It should be understood that the foregoing relates to only preferred embodiments of the present invention, and the present invention is not limited to these embodiments. It will be obvious to those skilled in the art that various modifications and changes can be made without departing from the spirit and scope of the invention.
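The replacement behavior of the second embodiment can be summarized in a few lines of code. The sketch below is only an illustrative reading of steps 205 to 209, not code from the disclosure; the class name, method names, and the sample messages are assumptions made for readability.

```python
# Minimal sketch (not from the patent) of the second embodiment's storage rule:
# a message containing a pre-registered character train (e.g. "$ (YEN):")
# replaces any older stored message containing the same train, so only the
# latest value is kept in the limited message memory.

class MessageMemory:
    def __init__(self, watched_trains):
        self.watched_trains = list(watched_trains)  # contents of the first memory 6a
        self.messages = []                           # contents of the second memory 6b

    def store(self, message: str) -> None:
        # Collation corresponding to steps 206/208: does the new message
        # contain one of the registered character and symbol trains?
        train = next((t for t in self.watched_trains if t in message), None)
        if train is not None:
            # Step 209: erase any older message containing the same train.
            self.messages = [m for m in self.messages if train not in m]
        # Step 207 (or the tail of step 209): store the received message.
        self.messages.append(message)


memory = MessageMemory(["$ (YEN):"])
memory.store("$ (YEN): 138")
memory.store("WEATHER: RAIN")
memory.store("$ (YEN): 140")   # replaces the "$ (YEN): 138" entry
print(memory.messages)          # ['WEATHER: RAIN', '$ (YEN): 140']
```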
Identifying the predominant peak diameter of high-density and low-density lipoproteins by electrophoresis. Particle size distributions of high-density (HDL) and low-density (LDL) lipoproteins, obtained by polyacrylamide gradient gel electrophoresis, exhibit apparent predominant and minor peaks within characteristic subpopulation migration intervals. In the present report, we show that identification of such peaks as predominant or minor depends on whether the particle size distribution is analyzed according to migration distance or particle size. The predominant HDL peak on the migration distance scale is frequently not the predominant HDL peak when the distribution is transformed to the particle size scale. The potential physiologic importance of correct identification of the predominant HDL peak within a gradient gel electrophoresis profile is suggested by our cross-sectional study of 97 men, in which diameters associated with the predominant peak, determined using migration distance and particle size scales, were correlated with plasma lipoprotein and lipid parameters. Plasma concentrations of HDL-cholesterol, triglycerides, and apolipoproteins A-I and B correlated more strongly with the predominant peak obtained using the particle size scale than with the migration distance scale. The mathematical transformation from migration distance to particle diameter scale had less effect on the LDL distribution. The additional computational effort required to transform the HDL distribution into the particle size scale appears warranted given the substantial changes it produces in the gradient gel electrophoresis profile and the strengthening of correlations with parameters relevant to lipoprotein metabolism.
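For readers who want to reproduce the scale transformation, the following sketch outlines one plausible way to re-express a densitometric profile on the particle size scale. It is not the authors' code: the calibration points, function names, and synthetic profile are assumptions, and the key step is simply a change of variables with a Jacobian rescaling so that the area under the curve is preserved.

```python
# Illustrative sketch of converting a gradient-gel densitometric profile from
# the migration-distance scale to the particle-size scale. A monotonic
# calibration curve d(x) maps migration distance x (mm) to diameter d (nm);
# intensity is rescaled by |dx/dd| so the area under the curve is preserved.
# The calibration points and profile below are made up for illustration only.
import numpy as np

def to_particle_size_scale(x, intensity, cal_x, cal_d):
    """Return (diameter, intensity_per_nm) from a migration-distance profile."""
    d = np.interp(x, cal_x, cal_d)           # diameter at each migration distance
    dd_dx = np.gradient(d, x)                 # local slope of the calibration curve
    intensity_d = intensity / np.abs(dd_dx)   # Jacobian rescaling preserves area
    order = np.argsort(d)                     # diameter decreases with distance
    return d[order], intensity_d[order]

# Hypothetical calibration: larger particles migrate less far.
cal_x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # mm
cal_d = np.array([12.0, 10.2, 9.0, 8.2, 7.6])      # nm
x = np.linspace(10, 50, 201)
profile = np.exp(-0.5 * ((x - 28) / 4) ** 2) + 0.6 * np.exp(-0.5 * ((x - 42) / 3) ** 2)

d, profile_d = to_particle_size_scale(x, profile, cal_x, cal_d)
print("predominant peak by distance: %.1f mm" % x[np.argmax(profile)])
print("predominant peak by size:     %.2f nm" % d[np.argmax(profile_d)])
```

In this synthetic example, the peak that dominates on the migration-distance axis need not dominate once the profile is expressed per nanometre of diameter, which is the effect the abstract describes for HDL.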
For my first design, I selected an image of a girl with paint on her hands and face. I wanted to make the paint the main focus of the image, so I decided to keep only those parts in colour. Overall, I was happy with this result, as the colours of the paint, being the main focus, really stand out from the black and white background, which is effective in drawing the viewer's eye. This task helped me to understand how to use the quick selection tool and to see how I could refine my selection by clicking 'Select and Mask' to include the fine details of the hand and face.

Altering the background of a photo

For my second design, I took an image of a dull landscape, where I wanted to replace the sky with a bright blue sky. The result was successful, as the image looks realistic, as if it had a bright blue sky to begin with. This task helped me to become more familiar with the quick selection and 'Select and Mask' tools and to adjust the hue/saturation levels to match the original image. To improve, perhaps I could have moved the sky down or enlarged it, as you can see the foreground of the background image, which does not match the colours exactly.

Altering reality of a photo

From here onwards, I experimented with different ways of creating more depth in this frame, as it seemed to look too flat and simple. The results of my third design were quite successful, as the brush tool enhanced the features of the sunset as if it were really projecting into the room. However, I do think that some more detail, for example adding more shadow around the door frame, could create a more convincing altered reality.

Software tutorials

These tutorials were very useful in that they were quick and easy to follow. Even though I have used Photoshop in the past, I was not familiar with all the tools, so these tutorials helped me to get more familiar with the quick selection tool when changing the colour of a specific section or changing the background of an image.

How to Color Part of an Image in Photoshop! Partial Black and white in Photoshop! Partly B+W

How to Use Layer Masking: Photoshop | Adobe Creative Cloud

I did not use the third tutorial in my final design; however, it did help me to understand more about lighting and the direction it travels, which I incorporated into my third design. I would like to further my skills and knowledge of the drop shadow tool in Photoshop, as this could have brought out some of my designs more and added more depth to them.

Design resources and articles

https://www.befunky.com/learn/black-and-white-photos-with-color/

This page inspired me to choose an image with a variety of bold colours, making them stand out more by turning the rest of the image black and white.

https://www.makeuseof.com/tag/how-to-change-the-background-of-a-photo-in-photoshop/

I was inspired by this page to enhance the background by changing it entirely to a bright blue sky and matching it to the same tones as the original image.

https://slate.com/culture/2013/02/popel-coumou-using-photography-and-collage-to-alter-reality-photos.html

I was inspired by the first two images on this page when creating an altered reality, overlapping one scene on top of another so that it looks as though you could walk through it.

Learning across the module

Across the module, I have improved my skills in Photoshop, such as using the quick selection tool to remove or change backgrounds. I have also learnt how to use the clone stamp tool to help me remove unnecessary objects in the background.
I have also improved my Illustrator skills by creating a variety of different shapes for my text using the effects tool. Furthermore, I really enjoyed creating the GIFs in After Effects, and since I had never used that software before, the tutorials I watched were very useful and I feel confident about using this software again.
https://typography.network/author/claratsang/
Schematics 4 Free: free service manuals, schematics, documentation, programs, electronics, hobby. Download: panasonic FS470 schematic, file PanasFS470.zip (154 kB, containing PanasFS470.pdf), group Electronics > Consumer electronics > Audio, uploaded 2004-07-12 by user voivoda, MD5 3b13e01b22eed6934295289962979aa9.
https://www.eserviceinfo.com/downloadsm/5962/panasonic_FS470.html
- Tunable memristor emulator using off-the-shelf components. (Elsevier, 2020) Emerging memristor technology is as of late drawing broad consideration due to its potential for several applications. But, the non-availability of solid-state memristive devices puts a practical limitation. So, this paper ...
- Shallow and deep learning approaches for network intrusion alert prediction. (Elsevier, 2020) The ever-increasing frequency and intensity of intrusion attacks on computer networks worldwide has necessitated intense research efforts towards the design of attack detection and prediction mechanisms. While there are a ...
- Low-complexity high-performance deep learning model for real time low cost embedded fire detection system. (Elsevier, 2020) Correct and timely detection of fires has been an active area of research. Both shallow learning (with manual feature engineering), as well as deep learning (with its promise of automatically extracting meaningful ...
- Trace model for cyclic behavior in wireless LANS. (Pakistan Association for the Advancement of Science, 2018-09) For the estimation of cycle time in WLANs different parameters were used to analyze the cyclic behavior like packet loss cycle, successful packet cycle and collision cycle. This paper proposes a model to control all ...
- Property based attestation for a secure cloud monitoring system. (IEEE, 2014-02) In this paper, we consider the problem of trust in cloud monitoring systems. We design and develop a novel scheme for trust certification using property based attestation (PBA). The PBA is based on a trusted platform ...
- Single system image: a survey. (Elsevier, 2016-04) Single system image is a computing paradigm where a number of distributed computing resources are aggregated and presented via an interface that maintains the illusion of interaction with a single system. This approach ...
- SVS - a secure scheme for video streaming using SRTP AES and DH. (Euro Journals, 2010) Video streaming technologies are becoming immensely important with the growth of the multimedia technology. With streaming, the end user can start watching the file almost as soon as it begins downloading. Security becomes ...
- MIKEY and SRTP integration for multicast streaming. (Euro Journals, 2006) This paper presents the design and implementation of key management architecture for secure multimedia audio streaming to multicast receivers. It describes the methods and techniques which are used in making a multicast ...
- Effectiveness of cleaning and disinfection procedures on the removal of enterotoxigenic Bacillus cereus from infant feeding bottles. (International Association for Food Protection, 1998) Reconstituted infant milk formulas are considered a food class of high risk because of the susceptibility of the infant population to enteric bacterial pathogens, severe response to enterotoxins, and increased mortality. ...
- Vehicular cloud networks: architecture, applications and security issues. (IEEE, 2016-03-14) Vehicular Ad Hoc Networks (VANET) are the largest real life application of ad-hoc networks where nodes are represented via fast moving vehicles. This paper introduces the future emerging technology, i.e., Vehicular Cloud ...
- A container-based edge cloud PaaS architecture based on Raspberry Pi Clusters. (IEEE, 2016-08) Cloud technology is moving towards multi-cloud environments with the inclusion of various devices. Cloud and IoT integration resulting in so-called edge cloud and fog computing has started. This requires the combination ...
- Containers and clusters for edge cloud architectures - a technology review. (IEEE, 2015-10-26) Cloud technology is moving towards more distribution across multi-clouds and the inclusion of various devices, as evident through IoT and network integration in the context of edge cloud and fog computing. Generally, ...
- Audio masking effect on inter-component skews in olfaction-enhanced multimedia presentations. (ACM, 2016-08) Media-rich content plays a vital role in consumer applications today, as these applications try to find new and interesting ways to engage their users. Video, audio, and the more traditional forms of media content continue ...
- IProIoT: an in-network processing framework for IoT using information centric networking. (IEEE, 2017-07-27) The Internet of Things (IoT) network supports various network applications through billions of heterogeneous connected devices. Efficient processing of enormous amounts of IoT data collected and exchanged by these devices ...
- Transparent encryption with scalable video communication: lower-latency, CABAC-based schemes. (Elsevier, 2017-05) Selective encryption masks all of the content without completely hiding it, as full encryption would do at a cost in encryption delay and increased bandwidth. Many commercial applications of video encryption do not even ...
- A flow based architecture for efficient distribution of vehicular information in smart cities. (IEEE, 2019-12-23) The introduction of the Internet of Things has made the vision of "Smart Cities" a very reachable goal. Aggregating data from a wealth of sensors throughout a city, with the aim of improving quality of life for its ...
- A governance architecture for self-adaption & control in IoT applications. (IEEE, 2018-06-25) The "Internet of Things" has become a reality with projections of 28 billion connected devices by 2021. Much R&D is currently focused on creating methods to efficiently handle an influx of data. Flow based programming, ...
- A hybrid machine learning/policy approach to optimise video path selection. (IEEE, 2020-02-27) Services such as interactive video and real time gaming are ubiquitous on modern networks. The approaching realisation of 5G as well as the virtualisation and scalability of network functions made possible by technologies ...
- An architecture for intelligent data processing on IoT edge devices. (IEEE, 2018-05-17) As the Internet of Things edges closer to mainstream adoption, with it comes an exponential rise in data transmission across the current Internet architecture. Capturing and analyzing this data will lead to a wealth ...
- Joint crypto-stego scheme for enhanced image protection with nearest-centroid clustering. (IEEE, 2018-03-12) Owing to the exceptional growth of information exchange over open communication channels within the public Internet, confidential transmission of information has become a vital current concern for organizations and ...
https://research.thea.ie/handle/20.500.12065/2435
INTRODUCTION

The present disclosure relates to a method and system for cruise control and, more particularly, to a cruise control method and system for automatically maintaining a following distance (or following time) between a driven vehicle and a followed vehicle. Cruise control is currently calibrated to rigidly control a driver's set speed and can be aggressive and inefficient in its attempt to maintain the set speed over changes in road grade. Further, the cruise control algorithm is automatically aborted during adaptive headway control to avoid contact with another vehicle. This leads to lower fuel economy and unnatural behavior (e.g., aggressive tip-ins and downshifts while going up hills, riding the brakes down hills, etc.).

SUMMARY

To improve fuel economy, the presently disclosed cruise control method and systems consider and synthesize elevation data and forward camera (or radar) data to compute upcoming trajectories for the driven vehicle and the followed vehicle (i.e., the vehicle directly in front of the driven vehicle). These trajectories are analyzed, and the vehicle axle torque is adjusted to deliver maximum steady-state operation while not violating a minimum, safe following distance (or following time) to the vehicle in front. In an aspect of the present disclosure, a cruise control method to control a driven vehicle includes: receiving, by a controller of the driven vehicle, a set speed, a minimum allowed speed, and a maximum allowed speed; commanding, by the controller, a propulsion system to produce a commanded axle torque to maintain the set speed; monitoring a current speed of the driven vehicle; monitoring an elevation of a terrain at predetermined-upcoming locations of the driven vehicle; determining projected speeds of the driven vehicle at each of the predetermined-upcoming locations using the current speed of the driven vehicle and the elevation of the terrain at the predetermined-upcoming locations; monitoring a current speed of a followed vehicle, wherein the followed vehicle is driving directly in front of the driven vehicle; determining projected speeds of the followed vehicle at each of the predetermined-upcoming locations using the current speed of the followed vehicle and the elevation of the terrain at the predetermined-upcoming locations; determining a plurality of following times at each of the predetermined-upcoming locations using the projected speeds of the driven vehicle and the projected speeds of the followed vehicle, wherein each of the plurality of following times is a time required for the driven vehicle to reach the followed vehicle; comparing each of the plurality of following times with a predetermined-minimum time threshold; determining whether at least one of the plurality of following times is less than the predetermined-minimum time threshold; and, in response to determining that at least one of the plurality of following times is less than the predetermined-minimum time threshold, commanding, by the controller, the propulsion system of the driven vehicle to decrease the commanded axle torque by a torque adjustment in order to prevent each of the plurality of following times at each of the predetermined-upcoming locations from being less than the predetermined-minimum time threshold. In an aspect of the present disclosure, the cruise control method further comprises generating a projected-speed table using the projected speeds of the driven vehicle at each of the predetermined-upcoming locations of the driven vehicle.
The driven vehicle includes a user interface configured to allow a user to set the maximum allowed speed and the minimum allowed speed. In an aspect of the present disclosure, the cruise control method further comprises generating a table of times required for the driven vehicle to reach each of the predetermined-upcoming locations using the projected-speed table. In an aspect of the present disclosure, the cruise control method further comprises generating a table of the times required for the followed vehicle to reach each of the predetermined-upcoming locations. In an aspect of the present disclosure, the times required for the followed vehicle to reach each of the predetermined-upcoming locations are determined using the projected speeds of the followed vehicle at each of the predetermined-upcoming locations, a vehicle acceleration of the followed vehicle, and an initial distance between the driven vehicle and the followed vehicle at a first one of the predetermined-upcoming locations. In an aspect of the present disclosure, the cruise control method further comprises determining a speed of the followed vehicle, the vehicle acceleration of the followed vehicle, and the initial distance between the driven vehicle and the followed vehicle at the first one of the predetermined-upcoming locations using one or more radars of the driven vehicle. In an aspect of the present disclosure, the cruise control method further comprises determining a difference between the times required for the driven vehicle to reach each of the predetermined-upcoming locations, obtained using the projected-speed table, and the times required for the followed vehicle to reach each of the predetermined-upcoming locations, in order to determine each of the plurality of following times. In an aspect of the present disclosure, the cruise control method further comprises, in response to determining that each of the plurality of following times is equal to or greater than the predetermined-minimum time threshold, commanding, by the controller, the propulsion system of the driven vehicle to maintain the commanded axle torque. In an aspect of the present disclosure, the cruise control method further comprises determining the torque adjustment. The predetermined-minimum time threshold is equal to a minimum following time multiplied by a first safety factor. The torque adjustment is a function of a current speed of the driven vehicle, a current speed of the followed vehicle, and a vehicle acceleration of the driven vehicle at the predetermined-upcoming locations.
In an aspect of the present disclosure, the torque adjustment is calculated using the following equation:

$$\Delta\tau_{az} = r_w\, m \left[ \frac{v_c - v_{f(x_{crit})}}{t_c + t_{f(x_{crit})} - \beta\, t^{*}_{min}} \right] - \frac{1}{x_N} \sum_{i=0}^{N-1} a_i\, \Delta x_i$$

where:
Δτ_az is a change in axle torque to meet a plurality of auto-following distance constraints;
v_c is the current speed of the driven vehicle;
v_f(x_crit) is a speed of the followed vehicle at a distance x_crit;
x_crit is a distance at which a violation of the predetermined-minimum time threshold occurs;
t_c is a current following time;
t_f(x_crit) is a time required for the followed vehicle to reach the distance x_crit;
a_i is a vehicle acceleration of the driven vehicle at an index i;
Δx_i is an incremental distance between cell i and cell i+1;
r_w is a rolling radius of a wheel of the driven vehicle in millimeters;
m is a mass of the driven vehicle in kilograms;
β is a second safety factor that is greater than the first safety factor in order to avoid violating the predetermined-minimum time threshold; and
t*_min is the predetermined-minimum time threshold.

The present disclosure also describes a vehicle system including a propulsion system and a controller in communication with the propulsion system. The controller is programmed to execute the method described above. The above features and advantages, and other features and advantages, of the present teachings are readily apparent from the following detailed description of some of the best modes and other embodiments for carrying out the present teachings, as defined in the appended claims, when taken in connection with the accompanying drawings.
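A brief numerical sketch may make the bookkeeping of the torque-adjustment equation above concrete. It is illustrative only and is not part of the disclosure; all input values, the function name, and the unit choices are assumptions.

```python
# Illustrative evaluation (not from the disclosure) of the torque adjustment
# in the equation above. Numeric inputs are made-up examples.

def torque_adjustment(r_w, m, v_c, v_f_crit, t_c, t_f_crit, beta, t_min_star,
                      x_n, a, dx):
    """Delta tau_az for the first projected violation at distance x_crit."""
    headway_term = r_w * m * (v_c - v_f_crit) / (t_c + t_f_crit - beta * t_min_star)
    grade_term = sum(a_i * dx_i for a_i, dx_i in zip(a, dx)) / x_n
    return headway_term - grade_term

delta_tau = torque_adjustment(
    r_w=0.35,                  # rolling radius (metres used here; the text defines millimetres)
    m=1800.0,                  # vehicle mass, kg
    v_c=27.0,                  # current speed of the driven vehicle, m/s
    v_f_crit=24.0,             # projected speed of the followed vehicle at x_crit, m/s
    t_c=1.6,                   # current following time, s
    t_f_crit=12.0,             # time for the followed vehicle to reach x_crit, s
    beta=1.1,                  # second safety factor
    t_min_star=0.8,            # predetermined-minimum time threshold, s
    x_n=300.0,                 # distance to the last look-ahead cell, m
    a=[0.05, 0.02, -0.03],     # accelerations of the driven vehicle per look-ahead cell
    dx=[100.0, 100.0, 100.0],  # incremental distances between look-ahead cells, m
)
print(f"commanded axle torque change: {delta_tau:.1f}")
```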
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic block diagram of a vehicle.
FIG. 2 is a schematic diagram of part of a user interface of the vehicle of FIG. 1.
FIG. 3 is a schematic illustration of an elevation look-ahead table representing the elevation of the terrain at the predetermined-upcoming locations of the vehicle system.
FIG. 4 is a schematic illustration of a projected-speed table including the projected speeds of the vehicle system at each of the predetermined-upcoming locations of the vehicle system.
FIG. 5 is a schematic illustration of an updated, projected-speed table.
FIG. 6 is a schematic illustration of a following times table.
FIG. 7 is a schematic illustration of an updated following time table.
FIG. 8 is a flowchart of a method for controlling the cruise control of the vehicle system of FIG. 1 to optimize fuel economy.
FIG. 8A is a first part of an acceleration control process of the method of FIG. 8.
FIG. 8B is a second part of the acceleration control process of the method of FIG. 8.
FIG. 8C is a third part of the acceleration control process of the method of FIG. 8.
FIG. 9A is a first part of a deceleration control process of the method of FIG. 8.
FIG. 9B is a second part of the deceleration control process of the method of FIG. 8.
FIG. 9C is a third part of the deceleration control process of the method of FIG. 8.
FIG. 9D is a fourth part of the deceleration control process of the method of FIG. 8.
FIG. 10A is a first part of a headway control process of the method of FIG. 8.
FIG. 10B is a second part of a headway control process of the method of FIG. 8.

DETAILED DESCRIPTION

The following detailed description is merely exemplary in nature and is not intended to limit the application and uses. Furthermore, there is no intention to be bound by expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. As used herein, the term "module" refers to hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in a combination thereof, including without limitation: an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. Embodiments of the present disclosure may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by a number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of the present disclosure may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that embodiments of the present disclosure may be practiced in conjunction with a number of systems, and that the systems described herein are merely exemplary embodiments of the present disclosure. For the sake of brevity, techniques related to signal processing, data fusion, signaling, control, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that alternative or additional functional relationships or physical connections may be present in an embodiment of the present disclosure.
As depicted in FIG. 1, a driven vehicle 10 generally includes a chassis 12, a body 14, and front and rear wheels 17, and may be referred to as the host vehicle. The vehicle 10 may be referred to as a motor vehicle. The body 14 is arranged on the chassis 12 and substantially encloses components of the vehicle 10. The body 14 and the chassis 12 may jointly form a frame. The wheels 17 are each rotationally coupled to the chassis 12 near a respective corner of the body 14.

The driven vehicle 10 may be an autonomous vehicle, and a control system 89 is incorporated into the driven vehicle 10. The control system 89 may alternatively be referred to as a vehicle system. The driven vehicle 10 is, for example, a vehicle that is automatically controlled to carry passengers from one location to another. The driven vehicle 10 is depicted in the illustrated embodiment as a passenger car, but it should be appreciated that another vehicle including motorcycles, trucks, sport utility vehicles (SUVs), recreational vehicles (RVs), marine vessels, aircraft, etc., can also be used. The vehicle 10 may be a so-called Level Four or Level Five automation system. A Level Four system indicates "high automation", referring to the driving mode-specific performance by an automated driving system of the aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene. A Level Five system indicates "full automation", referring to the full-time performance by an automated driving system of the aspects of the dynamic driving task under different roadway and environmental conditions that can be managed by a human driver.

The driven vehicle 10 generally includes a propulsion system 20, a transmission system 22, a steering system 24, a brake system 26, a sensor system 28, an actuator system 30, at least one data storage device 32, at least one controller 34, and a communication system 36. The propulsion system 20 may include an electric machine such as a traction motor and/or a fuel cell propulsion system. The driven vehicle 10 further includes a battery (or battery pack) 21 electrically connected to the propulsion system 20. Accordingly, the battery 21 is configured to store electrical energy and to provide electrical energy to the propulsion system 20. Additionally, the propulsion system 20 may include an internal combustion engine 33 having a plurality of cylinders. When the propulsion system 20 engages active fuel management (AFM), not all of the cylinders of the internal combustion engine 33 are active. Conversely, when the propulsion system 20 disengages AFM, all of the cylinders of the internal combustion engine 33 are active. The transmission system 22 is configured to transmit power from the propulsion system 20 to the vehicle wheels 17 according to selectable speed ratios. The transmission system 22 may include a step-ratio automatic transmission, a continuously variable transmission, or other appropriate transmission. The brake system 26 is configured to provide braking torque to the vehicle wheels 17. The brake system 26 may include friction brakes, brake by wire, a regenerative braking system such as an electric machine, and/or other appropriate braking systems. The steering system 24 influences a position of the vehicle wheels 17. While depicted as including a steering wheel for illustrative purposes, the steering system 24 may not include a steering wheel.
The vehicle 10 may include an air-conditioning system with a compressor coupled to the internal combustion engine 33 of the propulsion system 20. The compressor may be driven by the internal combustion engine 33.

The sensor system 28 includes one or more sensing devices 40 that sense observable conditions of the exterior environment and/or the interior environment of the vehicle 10. The sensing devices 40 may include, but are not limited to, radars, lidars, global positioning systems, optical cameras (e.g., forward-looking cameras), thermal cameras, ultrasonic sensors, clocks for measuring time, and/or other sensors. The actuator system 30 includes one or more actuator devices 42 that control one or more vehicle features such as, but not limited to, the propulsion system 20, the transmission system 22, the steering system 24, and the brake system 26. In various embodiments, the vehicle features can further include interior and/or exterior vehicle features such as, but not limited to, doors, a trunk, and cabin features such as air, music, lighting, etc. (not numbered). The sensing system 28 includes one or more Global Positioning System (GPS) transceivers 40g configured to detect and monitor the route data (i.e., route information). The GPS transceiver 40g is configured to communicate with a GPS to locate the position of the vehicle 10 in the globe. The GPS transceiver 40g is in electronic communication with the controller 34.

The data storage device 32 stores data for use in automatically controlling the vehicle 10. In various embodiments, the data storage device 32 stores defined maps of the navigable environment. In various embodiments, the defined maps may be predefined by and obtained from a remote system. For example, the defined maps may be assembled by the remote system and communicated to the vehicle 10 (wirelessly and/or in a wired manner) and stored in the data storage device 32. As can be appreciated, the data storage device 32 may be part of the controller 34, separate from the controller 34, or part of the controller 34 and part of a separate system.

The controller 34 includes at least one processor 44 and a computer non-transitory readable storage device or media 46. The processor 44 may be a custom made processor, a central processing unit (CPU), a graphics processing unit (GPU), an auxiliary processor among several processors associated with the controller 34, a semiconductor-based microprocessor (in the form of a microchip or chip set), a macroprocessor, a combination thereof, or generally a device for executing instructions. The computer readable storage device or media 46 may include volatile and nonvolatile storage in read-only memory (ROM), random-access memory (RAM), and keep-alive memory (KAM), for example. KAM is a persistent or non-volatile memory that may be used to store various operating variables while the processor 44 is powered down. The computer-readable storage device or media 46 may be implemented using a number of memory devices such as PROMs (programmable read-only memory), EPROMs (electrically PROM), EEPROMs (electrically erasable PROM), flash memory, or another electric, magnetic, optical, or combination memory devices capable of storing data, some of which represent executable instructions, used by the controller 34 in controlling the vehicle 10. The data storage device 32 and/or the computer readable storage device or media 46 may include a map database 35.
In the present disclosure, the term "map database" means a database that stores geographical and topographical data, such as roads, streets, cities, parks, traffic signs, elevation information, and two-dimensional or three-dimensional arrangements of objects with attributes as to location and category. The map database 35 includes data about the elevation E of a terrain Trr (FIG. 3) at predetermined-upcoming locations of the vehicle 10. The data about the elevation E of the terrain Trr (FIG. 3) at the predetermined-upcoming locations of the vehicle 10 is referred to herein as upcoming elevation data ED. In the present disclosure, the terrain Trr is the terrain in which the vehicle 10 is traveling or will be traveling. The map database 35 may alternatively be referred to as the map module.

The instructions may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. The instructions, when executed by the processor 44, receive and process signals from the sensor system 28, perform logic, calculations, methods and/or algorithms for automatically controlling the components of the vehicle 10, and generate control signals to the actuator system 30 to automatically control the components of the vehicle 10 based on the logic, calculations, methods, and/or algorithms. Although a single controller 34 is shown in FIG. 1, embodiments of the vehicle 10 may include a number of controllers 34 that communicate over a suitable communication medium or a combination of communication mediums and that cooperate to process the sensor signals, perform logic, calculations, methods, and/or algorithms, and generate control signals to automatically control features of the vehicle 10.

In various embodiments, one or more instructions of the controller 34 are embodied in the control system 89. The vehicle 10 includes a user interface 23, which may be a touchscreen in the dashboard. The user interface 23 is in electronic communication with the controller 34 and is configured to receive inputs by a user (e.g., a vehicle operator). Accordingly, the controller 34 is configured to receive inputs from the user via the user interface 23. The user interface 23 includes a display configured to display information to the user (e.g., the vehicle operator or a passenger).

The communication system 36 is configured to wirelessly communicate information to and from other entities 48, such as but not limited to, other vehicles ("V2V" communication), infrastructure ("V2I" communication), remote systems, and/or personal devices (described in more detail with regard to FIG. 2). The followed vehicle 11 may communicate relevant information (i.e., a speed, acceleration, and/or location of the followed vehicle 11) to the driven vehicle 10 via V2V communication. In an exemplary embodiment, the communication system 36 is a wireless communication system configured to communicate via a wireless local area network (WLAN) using IEEE 802.11 standards or by using cellular data communication. However, additional or alternate communication methods, such as a dedicated short-range communications (DSRC) channel, are also considered within the scope of the present disclosure. DSRC channels refer to one-way or two-way short-range to medium-range wireless communication channels specifically designed for automotive use and a corresponding set of protocols and standards.
Accordingly, the communication system 36 may include one or more antennas and/or transceivers for receiving and/or transmitting signals, such as cooperative sensing messages (CSMs).

FIG. 1 is a schematic block diagram of the control system 89, which is configured to control the driven vehicle 10. The controller 34 of the control system 89 is in electronic communication with the braking system 26, the propulsion system 20, and the sensor system 28. The braking system 26 includes one or more brake actuators (e.g., brake calipers) coupled to one or more wheels 17. Upon actuation, the brake actuators of the braking system 26 apply braking pressure on one or more wheels 17 to decelerate the driven vehicle 10. The propulsion system 20 includes one or more propulsion actuators for controlling the propulsion of the driven vehicle 10. For example, as discussed above, the propulsion system 20 may include the internal combustion engine 33 and, in that case, the propulsion actuator of the propulsion system 20 may be a throttle specially configured to control the airflow in the internal combustion engine 33. The sensing devices 40 (i.e., sensors) of the sensor system 28 may include one or more accelerometers (or one or more gyroscopes) coupled to one or more wheels 17. The accelerometer is in electronic communication with the controller 34 and is configured to measure and monitor the longitudinal and lateral accelerations of the driven vehicle 10. The sensor system 28 may include one or more speed sensors 40s configured to measure and monitor the speed (or velocity) of the driven vehicle 10. The speed sensor 40s is coupled to the controller 34 and is in electronic communication with one or more wheels 17. The sensor system 28 may include one or more radars and/or forward-looking cameras to measure the distance between the driven vehicle 10 and a followed vehicle 11. Thus, the sensing devices 40 may be radars and/or forward-looking cameras. The radars and/or forward-looking cameras may also be used to determine the speed and/or acceleration of the followed vehicle 11. In the present disclosure, the followed vehicle 11 is a vehicle that is directly in front of the driven vehicle 10. As a result, no other vehicle is between the driven vehicle 10 and the followed vehicle 11. Accordingly, the controller 34 is programmed to monitor the speed and/or acceleration of the driven vehicle 10 based on the input from the sensor system 28. Further, the controller 34 is programmed to monitor and determine the speed and/or acceleration of the followed vehicle 11 based on the input from the sensor system 28. Additionally, the controller 34 is programmed to monitor and determine the distance from the driven vehicle 10 to the followed vehicle 11 based on the input from the sensor system 28.

FIG. 2 is a schematic diagram of part of the user interface 23. The vehicle 10 has cruise control, and the driver's set speed 25 (shown in the user interface 23) can be adjusted by the driver with, for example, up/down arrows on the steering wheel of the vehicle 10. Aside from the driver's set speed 25, the user interface 23 also shows the speed tolerance 27, which includes a maximum allowed speed and a minimum allowed speed. The driver may adjust the maximum allowed speed and/or the minimum allowed speed of the speed tolerance 27 using the user interface 23. The user interface 23 shows the allowed speed range 37, which is calculated as a function of the set speed, the maximum allowed speed, and the minimum allowed speed.
The maximum allowed speed and the minimum allowed speed are each a speed boundary of an allowed speed range 37.

With reference to FIG. 3, the present disclosure describes a cruise control method 100 that uses the upcoming elevation data ED in order to understand in advance when steady-state cruise control operation will lead to a speed violation (drifting outside the driver's bounds). The driven vehicle 10 can then prepare for the upcoming violation and will adjust the torque command at opportunistic moments (in efficient ways) using this understanding of the terrain Trr ahead. To do so, the controller 34 receives and monitors the elevation data ED about the upcoming terrain Trr from the map database 35 and/or vehicle sensors/cameras (e.g., the sensor system 28). As discussed above, the data about the elevation E of the terrain Trr (FIG. 3) at the predetermined-upcoming locations of the vehicle 10 is referred to herein as upcoming elevation data ED. Using this upcoming elevation data, the controller 34 then generates an elevation look-ahead table. The elevation look-ahead table EDT includes a plurality of look-ahead elevation points. The look-ahead elevation points are equidistant from each other. In other words, the look-ahead points are separated from each other by a predetermined distance, and the first look-ahead point is separated from the current location of the driven vehicle 10 by the same predetermined distance.

With reference to FIG. 4, the controller 34 determines (i.e., calculates) a projected speed of the driven vehicle 10 at each look-ahead elevation point. In other words, the controller 34 is programmed to determine the projected speeds of the driven vehicle 10 at each of the predetermined-upcoming locations of the driven vehicle 10 as a function of the current speed of the driven vehicle 10 and the elevation E of the terrain Trr at the predetermined-upcoming locations of the driven vehicle 10. Then, the controller 34 generates a projected-speed table PST using the projected speeds PS of the vehicle 10 at each of the predetermined-upcoming locations of the driven vehicle 10. Next, the controller 34 determines whether there is a speed violation V. In other words, the controller 34 determines whether one or more of the projected speeds is outside the allowed speed range 37 (FIG. 2).

With reference to FIG. 5, after identifying a speed violation V, the controller 34 computes the necessary increase in initial torque to accommodate the elevation change, resulting in the allowed speed range 37 (FIG. 2) being met. The increase in the computed axle torque results in a new projected speed profile for the same elevation, now allowing the speed deviation allowance to be met. In other words, the controller 34 generates an updated, projected-speed table UPST based on the increased, computed axle torque, as discussed in detail below.

With reference to FIG. 6, as discussed below, the cruise control method 100 (FIG. 8) entails determining the following time (in seconds) between the driven vehicle 10 and the followed vehicle 11 at each of the predetermined-upcoming locations. The controller 34 then generates a following time table FTT. In the present disclosure, the term "following time" means a time required for the driven vehicle 10 to reach and come in contact with the followed vehicle 11.
As discussed above, the followed vehicle 11 is directly in front of the driven vehicle 10 in open-road conditions, and no other vehicle is between the driven vehicle 10 and the followed vehicle 11. The following time table FTT includes the following times (in seconds) at each predetermined-upcoming location. The predetermined-upcoming locations may be referred to as distance x. The predetermined-upcoming locations (i.e., distance x) correspond to the predetermined-upcoming locations in the tables shown in FIGS. 3, 4, and 5. The following times at each predetermined-upcoming location should not be less than the predetermined-minimum time threshold in order to prevent the driven vehicle 10 from coming into contact with the followed vehicle 11.

With continued reference to FIG. 6, to generate the following time table FTT, the controller 34 generates a first distance table FDT. The first distance table FDT includes the time required for the driven vehicle 10 to reach each predetermined-upcoming location (i.e., distance x_i). The time required for the driven vehicle 10 to reach each predetermined-upcoming location may be calculated using the projected-speed table PST (FIG. 4) or the updated, projected-speed table UPST (FIG. 5) if the commanded axle torque has been adjusted. The controller 34 also generates a second distance table SDT. The second distance table SDT includes the time required for the followed vehicle 11 to reach each predetermined-upcoming location (i.e., distance x_i). To do so, the controller 34 may determine the current speed and acceleration of the followed vehicle 11 using the sensing devices 40 (e.g., a forward-looking camera and/or radar) and/or information transmitted from the followed vehicle 11 to the driven vehicle 10 via V2V communications. Using the current speed and the current acceleration of the followed vehicle 11, the controller 34 determines the projected speeds of the followed vehicle 11 at each predetermined-upcoming location (i.e., distance x_i). Then, using the projected speeds of the followed vehicle 11 at each predetermined-upcoming location (i.e., distance x_i), the acceleration of the followed vehicle 11, and the initial following distance from the driven vehicle 10 to the followed vehicle 11, the controller 34 determines the time required for the followed vehicle 11 to reach each predetermined-upcoming location (i.e., distance x_i) and generates the second distance table SDT.

With continued reference to FIG. 6, for all the points in the first distance table FDT and the second distance table SDT where the distance x_i is less than a predetermined calibratable distance, the controller 34 then calculates the difference between the time required for the driven vehicle 10 to reach each of the predetermined-upcoming locations (as shown in the first distance table FDT) and the time required for the followed vehicle 11 to reach each of the predetermined-upcoming locations, to determine each of the plurality of following times at each predetermined-upcoming location, thereby generating the following time table FTT. Next, the controller 34 reads the following time table FTT to identify a following time violation FTV. The following time violation FTV occurs solely when one of the following times at a predetermined-upcoming location is less than a predetermined-minimum time threshold.
For example, the predetermined-minimum time threshold may be 0.8 seconds, and the controller 34 may identify a following time violation FTV solely when one or more of the following times in the following time table FTT is less than 0.8 seconds.

With reference to FIG. 7, in response to determining that at least one of the plurality of following times is less than the predetermined-minimum time threshold, the controller 34 commands the propulsion system 20 of the driven vehicle 10 to decrease the commanded axle torque by a torque adjustment in order to prevent each of the plurality of following times at each of the predetermined-upcoming locations from being less than the predetermined-minimum time threshold. Then, the controller 34 generates an updated-first distance table UFDT, an updated-second distance table USDT, and an updated following time table UFTT to determine whether any of the updated following times is less than the predetermined-minimum time threshold.
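A short sketch may help illustrate how the following time table FTT and the violation check described above could be realized in software. This is an illustrative reading of the description, not code from the disclosure; the list names, the threshold value, and the sample times are assumptions.

```python
# Illustrative sketch (not from the disclosure) of building the following time
# table FTT from the two time-to-reach tables and flagging a violation.
# t_driven[i]  : time for the driven vehicle to reach look-ahead location i (first distance table FDT)
# t_followed[i]: time for the followed vehicle to reach location i (second distance table SDT)

def following_time_table(t_driven, t_followed):
    """FTT: per-location time margin between the two vehicles, in seconds."""
    return [td - tf for td, tf in zip(t_driven, t_followed)]

def first_violation(ftt, t_min=0.8):
    """Index of the first location whose following time drops below t_min, or None."""
    for i, ft in enumerate(ftt):
        if ft < t_min:
            return i
    return None

# Made-up example: the driven vehicle gradually closes in on the followed vehicle.
t_driven   = [3.5, 7.0, 10.4, 13.7, 16.9]
t_followed = [2.2, 6.0,  9.5, 13.1, 16.6]
ftt = following_time_table(t_driven, t_followed)   # margins shrink toward the horizon
print(ftt)
print(first_violation(ftt))   # 3 -> torque must be reduced before this location
```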
FIG. 8 is a flowchart of a cruise control method 100 for controlling the cruise control of the driven vehicle 10 of FIG. 1 to optimize fuel economy. The method 100 begins at block 102. At block 102, the controller 34 determines that the cruise control has been engaged by the vehicle operator of the driven vehicle 10. The vehicle operator may engage the cruise control through the user interface 23. For instance, the vehicle operator may press a button on the user interface 23 to engage the cruise control. At block 102, the vehicle operator may also set the set speed v_ss, the maximum allowed speed v_max, and the minimum allowed speed v_min through the user interface 23 by, for example, pressing up/down arrows on the steering wheel of the vehicle 10. Thus, at block 102, the controller 34 receives the set speed v_ss, the maximum allowed speed v_max, and the minimum allowed speed v_min from the user interface 23. As discussed above, each of the maximum allowed speed v_max and the minimum allowed speed v_min is a speed boundary of the allowed speed range 37. At block 102, the controller 34 also determines and monitors (in real time) the current vehicle speed v based on the inputs of the speed sensor 40s. After block 102, the method 100 proceeds to block 104.

At block 104, the controller 34 sets the commanded axle torque τ_ss to the road load torque at the set speed v_ss. To do so, the controller 34 commands the propulsion system 20 to produce the commanded axle torque τ_ss in order to maintain the set speed v_ss. Then, the method 100 continues to block 106.

At block 106, the controller 34 also determines and monitors the elevation E of the terrain Trr at the predetermined-upcoming locations of the vehicle 10 using the upcoming elevation data of the map database 35. Also, the controller 34 generates the elevation look-ahead table EDT (FIG. 3) using the elevation E of the terrain Trr at the predetermined-upcoming locations of the vehicle 10. As discussed above, the elevation look-ahead table EDT (FIG. 3) includes a plurality of look-ahead elevation points, which correspond to the predetermined-upcoming locations of the driven vehicle 10. The controller 34 uses the upcoming elevation data ED from the map database 35 to generate the elevation look-ahead table EDT. Therefore, block 106 also entails retrieving the elevation data ED from the map database 35 and then using the upcoming elevation data ED to generate the elevation look-ahead table EDT. The look-ahead elevation points of the elevation look-ahead table EDT are equidistant from each other. In other words, the look-ahead points are separated from each other by a predetermined distance, and the first look-ahead point is separated from the current location of the vehicle 10 by the same predetermined distance. Then, the method 100 proceeds to block 108.

At block 108, the controller 34 determines the projected speeds of the vehicle 10 at each of the predetermined-upcoming locations of the vehicle 10 as a function of the current speed v_0 of the driven vehicle 10 and the elevation E of the terrain Trr at the predetermined-upcoming locations of the driven vehicle 10. To do so, the controller 34 assumes that the driven vehicle 10 maintains a constant torque (i.e., the road load torque at the set speed v_ss) and calculates the projected speed at each look-ahead point in the elevation look-ahead table EDT (given the changes in elevation in the elevation table EDT) with the following equation:

$$v_i = \sqrt{2g\,(h_0 - h_i) + v_0^2}$$

where:
v_0 is the current speed of the vehicle 10;
h_0 is the current elevation of the terrain Trr at the current location of the vehicle 10;
h_i is the elevation at point i in the elevation look-ahead table EDT;
g is the gravitational acceleration; and
v_i is the projected speed at point i in the elevation look-ahead table EDT.

Using the equation above, the controller 34 calculates the projected speed at each look-ahead point and generates the projected-speed table PST (FIG. 4) using the projected speeds of the vehicle 10 at each of the predetermined-upcoming locations of the vehicle 10. After block 108, the method 100 proceeds to block 109.

At block 109, the controller 34 executes a headway control algorithm 400 (FIGS. 10A and 10B). The headway control algorithm 400 (FIGS. 10A and 10B) determines whether the following times (between the driven vehicle 10 and the followed vehicle 11) at the predetermined-upcoming locations are less than the predetermined-minimum time threshold. If any of the following times at the predetermined-upcoming locations is less than the predetermined-minimum time threshold, then the method 100 proceeds to block 111. At block 111, the controller 34 executes the headway control algorithm 400 (FIGS. 10A and 10B) until the conditions allow for the open-road algorithm to be resumed. The open-road algorithm refers to the cruise control method 100, except for block 111, and the conditions that allow for the open-road algorithm to be resumed refer to the following times at the predetermined-upcoming locations being equal to or greater than the predetermined-minimum time threshold. Thus, once the conditions allow for the open-road algorithm to be resumed, the method 100 returns to block 104. If, at block 109, the following times at the predetermined-upcoming locations are equal to or greater than the predetermined-minimum time threshold, then the method 100 proceeds to block 110.

At block 110, the controller 34 compares each of the projected speeds at each of the predetermined-upcoming locations with the allowed speed range 37 to determine whether any of the projected speeds is outside the allowed speed range 37. In other words, at block 110, the controller 34 determines whether any projected speeds in the projected-speed table PST (FIG. 4) are in violation of the maximum allowed speed v_max and/or the minimum allowed speed v_min.
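The projection and range check of blocks 106 to 110 can be illustrated with a short sketch. It is not code from the disclosure; the sample speeds, elevations, and function names are assumptions, and the constant-torque assumption of block 108 is reflected in the energy-balance formula above.

```python
# Illustrative sketch (not from the disclosure) of blocks 106-110: projecting
# speeds at equidistant look-ahead points from the elevation table under the
# constant road-load-torque assumption, then checking the allowed speed range.
import math

G = 9.81  # gravitational acceleration, m/s^2

def projected_speeds(v0, h0, elevations):
    """v_i = sqrt(2*g*(h0 - h_i) + v0^2); clamped at 0 if the radicand goes negative."""
    return [math.sqrt(max(0.0, 2.0 * G * (h0 - h_i) + v0 ** 2)) for h_i in elevations]

def range_violations(speeds, v_min, v_max):
    """Indices of look-ahead points whose projected speed leaves the allowed range."""
    return [i for i, v in enumerate(speeds) if v < v_min or v > v_max]

# Made-up example: the road climbs, so projected speeds fall below v_min.
v0, h0 = 29.0, 120.0                       # current speed (m/s) and elevation (m)
elevations = [121.0, 124.0, 129.0, 135.0]  # elevation look-ahead table EDT (m)
pst = projected_speeds(v0, h0, elevations)
print([round(v, 1) for v in pst])
print(range_violations(pst, v_min=27.0, v_max=31.0))   # points needing acceleration control
```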
If there are no projected speeds that are in violation of the maximum allowed speed vand/or the minimum allowed speed v, the method returns to block . If there are projected speeds that are in violation of the maximum allowed speed v, the controller begins the deceleration control process () at block . In the deceleration control process , the controller commands the propulsion system of the vehicle to adjust the commanded axle torque to maintain the actual speed of the vehicle within the allowed speed range at each of the predetermined-upcoming locations. After executing the deceleration control process, the method proceeds to returns to block . If there are projected speeds that are in violation of the minimum allowed speed v, the controller begins the acceleration control process () at block . In the acceleration control process , the controller commands the propulsion system of the vehicle to adjust the commanded axle torque to maintain the actual speed of the vehicle within the allowed speed range at each of the predetermined-upcoming locations. After executing the acceleration control process, the method returns to block . FIGS. 8A, 8B, and 8C 200 114 34 200 200 202 202 34 34 202 200 204 min min illustrate the acceleration control process . At block (as discussed above), the controller enters acceleration control (i.e., begins the acceleration control process ). Then, the acceleration control process proceeds to block . At block , the controller uses the projected-speed table PST to identify a first speed point in violation of the minimum allowed speed v. In other words, the controller identifies the first speed point in the projected-speed table PST that is less than the minimum allowed speed v. After block , the acceleration control process continues to block . 204 34 202 200 204 206 34 200 206 i+1 i peak peak peak At block , the controller , starting at the first speed point identified in block , moves forward in the projected-speed table PST until v>vin order to find the first local minimum of the projected speed). In the acceleration control process , this first local minimum of the projected speed is referred to as v. After block , the method proceeds to block . The first local minimum vof the projected speed may correspond to a local maximum elevation in the elevation look-ahead table EDT. Thus, controller also determines the local maximum elevation in the elevation look-ahead table EDT and its corresponding index iin the elevation look-ahead table EDT. Next, the acceleration control process continues to block . 206 34 10 10 206 200 208 peak peak At block , the controller determines and stores the distance from the current location of the vehicle to the local maximum elevation and its corresponding index iin the elevation look-ahead table EDT. The distance from the current location of the vehicle to the local maximum elevation is referred to as a peak distance d. After block , the acceleration control process continues to block . 208 34 208 200 210 peak min At block , the controller sets the desired speed at the peak distance dto be the minimum allowed speed v. After block , the acceleration control process proceeds to block . 210 34 34 FIG. 5 At block , the controller computes a scaled, projected-speed table, such as an updated, projected-speed table UPST shown in . 
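Before the scaling step whose equation follows, blocks 202 through 206 locate the first minimum-speed violation in the projected-speed table and the first local minimum of the projected speed. A possible sketch, assuming the table is a list of speeds at equidistant points with an illustrative 50 m spacing (not stated in the source):

```python
def find_min_speed_violation_and_peak(pst, v_min):
    """Block 202: find the first projected speed below v_min. Block 204:
    walk forward until v[i+1] > v[i], i.e. the first local minimum v_peak.
    Returns (violation_index, peak_index, v_peak), or None if no violation."""
    first_violation = next((i for i, v in enumerate(pst) if v < v_min), None)
    if first_violation is None:
        return None
    i = first_violation
    while i + 1 < len(pst) and pst[i + 1] <= pst[i]:
        i += 1
    return first_violation, i, pst[i]

# Illustrative projected speeds (m/s) and a 24 m/s minimum allowed speed.
pst = [25.0, 24.3, 23.6, 23.1, 22.8, 23.4]
hit = find_min_speed_violation_and_peak(pst, v_min=24.0)
if hit:
    _, i_peak, v_peak = hit
    # Blocks 206-208: the peak distance d_peak follows from the (assumed)
    # 50 m spacing of the equidistant look-ahead points.
    d_peak = (i_peak + 1) * 50.0
```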
To do so, the controller 34 may use the following equation:

v_{i,scaled} = v_0 + (v_i - v_0)\left[\frac{v_0 - v_{min}}{v_0 - v_{peak}}\right]

where:
v_0 is the current speed of the vehicle 10;
v_i is a projected speed of the vehicle 10 at an index point i;
v_min is the minimum allowed speed;
v_peak is the first local minimum of the projected speed determined in block 204; and
v_{i,scaled} is the scaled, projected speed of the vehicle 10 at an index point i.

By using the above equation, the controller 34 generates a scaled, projected-speed table. Thus, the controller 34 computes the scaled, projected-speed table as a function of the minimum allowed speed v_min and the first local minimum v_peak. After block 210, the acceleration control process 200 proceeds to block 212.

At block 212, the controller 34 calculates the required work input W to achieve the minimum allowed speed v_min at the peak distance d_peak. To do so, the controller 34 may use the following equation:

W = \frac{m\,(v_{min}^2 - v_{peak}^2)}{2} \cdot \frac{1}{\eta}

where:
m is the mass of the vehicle 10;
v_peak is the first local minimum of the projected speed determined in block 204;
η is a calibratable (and/or learned) engine-to-road efficiency factor;
v_min is the minimum allowed speed; and
W is the required work input to achieve the minimum allowed speed v_min at the peak distance d_peak.

After determining the required work input W to achieve the minimum allowed speed v_min at the peak distance d_peak, the acceleration control process 200 proceeds to block 214.
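A compact sketch of the block 210 scaling and the block 212 work calculation. The 1,800 kg mass and the 0.85 engine-to-road efficiency are placeholders, and the scaling assumes the current speed differs from v_peak, which holds whenever a minimum-speed violation was found.

```python
def scaled_speed_table(pst, v0, v_min, v_peak):
    """Block 210: rescale the projected profile so that its lowest point
    lands on v_min instead of v_peak (assumes v0 != v_peak)."""
    factor = (v0 - v_min) / (v0 - v_peak)
    return [v0 + (v_i - v0) * factor for v_i in pst]

def required_work(mass_kg, v_min, v_peak, eta=0.85):
    """Block 212: work input needed to raise the projected speed at the
    peak from v_peak to v_min, divided by an engine-to-road efficiency."""
    return mass_kg * (v_min ** 2 - v_peak ** 2) / 2.0 / eta

# Illustrative values only: 1,800 kg vehicle, speeds in m/s.
pst = [25.0, 24.3, 23.6, 23.1, 22.8, 23.4]
upst = scaled_speed_table(pst, v0=25.0, v_min=24.0, v_peak=22.8)
work_needed = required_work(1800.0, v_min=24.0, v_peak=22.8)
```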
At block 214, the controller 34 calculates the adjusted torque τ_req required (if applied constantly) to achieve the minimum allowed speed v_min at the peak distance d_peak using the following equation:

\tau_{req} = \frac{W(i_{peak})\, r_w}{2}\left[\left(\sum_{i=1}^{i_{peak}-1}\frac{v_{i+1} - v_i}{v_{i+1}^2 - v_i^2}\,(x_{i+1} - x_i)\right)\left(\sum_{i=1}^{i_{peak}} v_i\right)\right]^{-1}

where:
r_w is the radius of one of the wheels 17 (i.e., the wheel radius);
v_i is the projected speed at index point i in the scaled, projected-speed table generated in block 210;
x_i is the distance from the current location of the vehicle 10 to the index point i in the scaled, projected-speed table generated in block 210;
v_{i+1} is the projected speed at index point i+1 in the scaled, projected-speed table generated in block 210;
i_peak is the index point (i.e., location) of the first local minimum v_peak of the projected speed;
i_{peak-1} is the index point (i.e., location) immediately before the first local minimum v_peak of the projected speed; and
τ_req is the adjusted torque required (if applied constantly) to achieve the driver-defined minimum speed limit v_min.

At block 214, the efficiency will be maximized if the required work W is added to the system at a constant rate. After block 214, the acceleration control process 200 proceeds to block 216.

At block 216, the controller 34 re-computes the projected-speed table assuming that the commanded axle torque is held constant at the sum of the adjusted torque τ_req required (if applied constantly) to achieve the minimum allowed speed v_min at the peak distance d_peak and the commanded axle torque τ_ss to road load torque at the set speed v_ss. After block 216, the acceleration control process 200 proceeds to block 218.

At block 218, the controller 34 determines whether there are any speed violations prior to the peak distance d_peak. If there are minimum speed violations prior to the peak distance d_peak, then the acceleration control process 200 returns to block 202 (the remaining branches of block 218 continue after the sketch below).
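The block 214 relation can be evaluated directly from the scaled table. The sketch below uses the identity (v_{i+1} - v_i)/(v_{i+1}^2 - v_i^2) = 1/(v_i + v_{i+1}) and reads W(i_peak) as W multiplied by the peak index, the grouping that yields a result in newton-meters; both readings are assumptions, since the published equation is ambiguous after extraction. The wheel radius and point spacing are illustrative.

```python
def adjusted_torque(work_j, i_peak, wheel_radius_m, scaled_speeds, dx_m):
    """Block 214 sketch: constant axle-torque adjustment from the scaled
    table. Uses 1/(v_i + v_{i+1}) in place of the telescoping fraction and
    assumes scaled_speeds covers index points 0..i_peak."""
    time_like = sum(dx_m / (scaled_speeds[i] + scaled_speeds[i + 1])
                    for i in range(i_peak))
    speed_sum = sum(scaled_speeds[: i_peak + 1])
    return (work_j * i_peak * wheel_radius_m / 2.0) / (time_like * speed_sum)

# Illustrative inputs: scaled speeds up to the peak, 0.33 m wheel radius,
# 50 m point spacing, and a work figure in the range of the earlier sketch.
upst_to_peak = [25.0, 24.68, 24.36, 24.14, 24.0]
tau_req = adjusted_torque(59_500.0, len(upst_to_peak) - 1, 0.33,
                          upst_to_peak, 50.0)
```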
If there are maximum speed violations prior to the peak distance d, then the controller begins the deceleration control process () at block . If there are no speed violations prior to the peak distance d, then the acceleration control process proceeds to block . 220 34 34 20 200 222 req min peak ss ss req min peak ss ss At block , the controller sets the commanded engine torque to the sum of the adjusted torque τrequired (if applied constantly) to achieve the minimum allowed speed vat the peak distance dand the commanded axle torque τto road load torque at the set speed v. Also, the controller commands the propulsion system to produce an updated, commanded axle torque. This updated, commanded axle torque may be equal to the adjusted torque τrequired (if applied constantly) to achieve the minimum allowed speed vat the peak distance dplus the commanded axle torque τto road load torque at the set speed v. Then, the acceleration control process proceeds to block . 220 222 10 222 34 1 peak peak peak d =d −dx Between block and , the vehicle travels to the look-ahead point xin the elevation look-ahead table EDT. At block , the controller sets the peak distance dusing the following equation: where: peak dis the peak distance; and 0 1 dx is the distance between look-ahead point xand look-ahead point xin the elevation look-ahead table EDT. 222 200 224 After block , the acceleration control process proceeds to block . 224 34 200 216 200 226 226 34 peak peak peak At block , the controller determines whether the newly set peak distance dis less than zero. If the newly set peak distance dis not less than zero, then the acceleration control process returns to block . If the newly set peak distance dis less than zero, then the acceleration control process proceeds to block . At block , the controller exits acceleration control. FIGS. 9A, 9B, 9C and 9D 300 112 34 300 300 302 302 34 34 302 300 304 max max illustrate the deceleration control process . At block (as discussed above), the controller enters deceleration control (i.e., begins the deceleration control process ). Then, the deceleration control process proceeds to block . At block , the controller uses the projected-speed table PST to identify a first speed point in violation of the maximum allowed speed v. In other words, the controller identifies the first speed point in the projected-speed table PST that is greater than the maximum allowed speed v. After block , the deceleration control process continues to block . 304 34 302 300 304 306 34 300 306 i+1 i peak peak peak At block , the controller , starting at the first speed point identified in block , moves forward in the projected-speed table PST until v<vin order to find the first local maximum of the projected speed). In the deceleration control process , this first local maximum of the projected speed is referred to as v. After block , the method proceeds to block . The first local maximum vof the projected speed may correspond to a local minimum elevation in the elevation look-ahead table EDT. Thus, controller also determines the local maximum elevation in the elevation look-ahead table EDT and its corresponding index iin the elevation look-ahead table EDT. Next, the deceleration control process continues to block . 306 34 10 10 304 300 308 peak peak At block , the controller determines and stores the distance from the current location of the vehicle to the local minimum elevation and its corresponding index iin the elevation look-ahead table EDT. 
The distance from the current location of the driven vehicle 10 to the local minimum elevation is referred to as a peak distance d_peak. After block 306, the deceleration control process 300 continues to block 308.

At block 308, the controller 34 sets the desired speed at the peak distance d_peak to be the maximum allowed speed v_max. After block 308, the deceleration control process 300 proceeds to block 310.

At block 310, the controller 34 computes a scaled, projected-speed table, such as an updated, projected-speed table UPST shown in FIG. 5. To do so, the controller 34 may use the following equation:

v_{i,scaled} = v_0 + (v_i - v_0)\left[\frac{v_{max} - v_0}{v_{peak} - v_0}\right]

where:
v_0 is the current speed of the vehicle 10;
v_i is a projected speed of the vehicle 10 at an index point i;
v_max is the maximum allowed speed;
v_peak is the first local maximum of the projected speed determined in block 304; and
v_{i,scaled} is the scaled, projected speed of the vehicle 10 at an index point i.

By using the above equation, the controller 34 generates a scaled, projected-speed table. Thus, the controller 34 computes the scaled, projected-speed table as a function of the maximum allowed speed v_max and the first local maximum v_peak. After block 310, the deceleration control process 300 proceeds to block 312.

At block 312, the controller 34 calculates the required work input W to achieve the maximum allowed speed v_max at the peak distance d_peak. To do so, the controller 34 may use the following equation:

W = \frac{m\,(v_{peak}^2 - v_{max}^2)}{2} \cdot \frac{1}{\eta}

where:
m is the mass of the vehicle 10;
v_peak is the first local maximum of the projected speed determined in block 304;
η is a calibratable engine-to-road efficiency factor;
v_max is the maximum allowed speed; and
W is the required work input to achieve the maximum allowed speed v_max at the peak distance d_peak.

After determining the required work input W to achieve the maximum allowed speed v_max at the peak distance d_peak, the deceleration control process 300 proceeds to block 314.
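Mirroring the earlier acceleration sketches, a minimal version of the block 310 scaling and the block 312 work calculation, under the same illustrative assumptions (the efficiency factor and any numeric values are placeholders):

```python
def scaled_speed_table_decel(pst, v0, v_max, v_peak):
    """Block 310 analogue: compress the projected profile so that its
    highest point lands on v_max instead of the local maximum v_peak."""
    factor = (v_max - v0) / (v_peak - v0)
    return [v0 + (v_i - v0) * factor for v_i in pst]

def required_work_decel(mass_kg, v_max, v_peak, eta=0.85):
    """Block 312 analogue: energy that must be removed (via a torque
    reduction, regeneration, accessory load, or braking) to bring the
    projected peak speed v_peak down to v_max."""
    return mass_kg * (v_peak ** 2 - v_max ** 2) / 2.0 / eta
```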
At block 314, the controller 34 calculates the adjusted reduction in torque τ_req required (if applied constantly) to achieve the maximum allowed speed v_max at the peak distance d_peak using the following equation:

\tau_{req} = \frac{W(i_{peak})\, r_w}{2}\left[\left(\sum_{i=1}^{i_{peak}-1}\frac{v_{i+1} - v_i}{v_{i+1}^2 - v_i^2}\,(x_{i+1} - x_i)\right)\left(\sum_{i=1}^{i_{peak}} v_i\right)\right]^{-1}

where:
r_w is the radius of one of the wheels 17 (i.e., the wheel radius);
v_i is the projected speed at index point i in the scaled, projected-speed table generated in block 310;
x_i is the distance from the current location of the vehicle 10 to the index point i in the scaled, projected-speed table generated in block 310;
v_{i+1} is the projected speed at index point i+1 in the scaled, projected-speed table generated in block 310;
i_peak is the index point (i.e., location) of the first local maximum v_peak of the projected speed;
i_{peak-1} is the index point (i.e., location) immediately before the first local maximum v_peak of the projected speed; and
τ_req is the adjusted reduction in torque required (if applied constantly) to achieve the driver-defined maximum speed limit v_max.

At block 314, the efficiency will be maximized if the required work W is added to the system at a constant rate. After block 314, the deceleration control process 300 proceeds to block 316.

At block 316, the controller 34 re-computes the projected-speed table assuming that the commanded axle torque is held constant at the sum of the adjusted reduction in torque τ_req required (if applied constantly) to achieve the maximum allowed speed v_max at the peak distance d_peak and the commanded axle torque τ_ss to road load torque at the set speed v_ss. After block 316, the deceleration control process 300 proceeds to block 318.

At block 318, the controller 34 determines whether there are any speed violations prior to the peak distance d_peak.
If there are maximum speed violations prior to the peak distance d, then the deceleration control process returns to block . If there are minimum speed violations prior to the peak distance d, then the controller begins the acceleration control process () at block . If there are no speed violations prior to the peak distance d, then the deceleration control process proceeds to block . 320 34 29 29 300 322 29 300 324 req max peak req max peak req max peak At block , the controller compares the absolute value of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dwith the absolute value of the torque necessary to run the air-conditioning system (i.e., the maximum alternator torque). If the absolute value of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dis greater than the absolute value of the torque necessary to run the air-conditioning system , then the deceleration control process proceeds to block . If the absolute value of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dis not greater than the absolute value of the torque necessary to run the air-conditioning system , then the deceleration control process proceeds to block . 324 34 20 21 10 324 300 340 ss req max peak At block , the controller maintains the commanded axle torque τ. Also, the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dis provided via battery regeneration. In the battery regeneration, the propulsion system charges the battery of the vehicle . After block , the deceleration control process proceeds to block . 340 10 340 300 342 1 At block , the vehicle travels to look-ahead point xin the elevation look-ahead table EDT. After block , the deceleration control process proceeds to block . 342 34 peak peak peak d =d −dx At block , the controller sets the peak distance dusing the following equation: where: peak dis the peak distance; and 0 1 dx is the distance between look-ahead pint xand look-ahead point xin the elevation look-ahead table EDT. 342 300 344 After block , the deceleration control process proceeds to block . 344 34 300 316 300 346 346 34 peak peak peak At block , the controller determines whether the newly set peak distance dis less than zero. If the newly set peak distance dis not less than zero, then the deceleration control process returns to block . If the newly set peak distance dis less than zero, then the deceleration control process proceeds to block . At block , the controller exits deceleration control. 322 34 20 20 21 10 20 31 29 322 34 29 322 300 326 req max peak req max peak At block , the controller commands the propulsion system to engage a maximum battery regeneration. In the maximum battery regeneration, the propulsion system charges the battery of the vehicle . In the second deceleration mode, the propulsion system drives a compressor of the air conditioning system . At block , the controller sets the absolute value of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dto be equal to the absolute value of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dminus the torque necessary to run the air-conditioning system . 
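The overall arbitration across blocks 320 through 338 (the battery-regeneration steps above and the A/C compressor and braking steps that follow) can be pictured as a cascade that absorbs the requested torque reduction with the most efficient sink first. The sketch below is a simplified, illustrative reading only: the patented flow branches on comparisons rather than explicitly allocating amounts, and all names and limits are assumptions.

```python
def plan_decel_torque_split(tau_req_abs, regen_max, ac_on, ac_comp_max):
    """Simplified reading of blocks 320-338: absorb as much of the requested
    torque reduction as possible with battery regeneration, then (if the A/C
    is on) with compressor load; whatever remains is left for the engine
    torque reduction or friction braking steps. All values are absolute
    torque magnitudes in N*m."""
    plan = {"regen": 0.0, "ac_compressor": 0.0, "remainder": 0.0}
    remaining = tau_req_abs

    plan["regen"] = min(remaining, regen_max)        # blocks 320-324
    remaining -= plan["regen"]

    if remaining > 0.0 and ac_on:                    # blocks 326-334
        plan["ac_compressor"] = min(remaining, ac_comp_max)
        remaining -= plan["ac_compressor"]

    plan["remainder"] = remaining                    # handled by 336/338
    return plan

# Example: 600 N*m reduction requested, 250 N*m regen limit, A/C running.
split = plan_decel_torque_split(600.0, 250.0, True, 120.0)
```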
After block , the deceleration control process proceeds to block . 326 34 29 29 300 328 29 300 330 At block , the controller determines if the air conditioning system is on. If the air conditioning system is on, then the deceleration control process proceeds to block . If the air conditioning system is off, then the deceleration control process proceeds to block . 328 34 29 31 29 29 300 332 29 300 334 req max peak req max peak req max peak At block , the controller compares the absolute value of the newly set adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dwith the absolute value of the torque necessary to run the air-conditioning system (i.e., the maximum A/C compressor torque). The maximum A/C compressor torque is the maximum torque required to run the compressor of the air-conditioning system . If the absolute value of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dis greater than the absolute value of the torque necessary to run the air-conditioning system , then the deceleration control process proceeds to block . If the absolute value of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dis not greater than the absolute value of the torque necessary to run the air-conditioning system , then the deceleration control process proceeds to block . 332 34 332 34 31 29 332 300 330 req max peak req max peak At block , the controller sets the maximum A/C compressor load to maximum for current climate settings. At block , the controller sets the absolute value of the newly set adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dto be equal to the absolute value of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dminus the maximum torque required to run the compressor of the air-conditioning system . After block , the deceleration control process proceeds to block . 334 34 31 29 334 300 340 ss req max peak At block , the controller maintains the commanded axle torque τ. Also, the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dis provided via the A/C compressor load (i.e., the load of the compressor of the air-conditioning system . After block , the deceleration control process proceeds to block . 330 34 29 29 300 336 29 300 338 req max peak req max peak req max peak At block , the controller compares the absolute value of the newly set adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dwith the absolute value of the torque necessary to run the air-conditioning system . If the absolute value of the newly set adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dis greater than the absolute value of the torque necessary to run the air-conditioning system , then the deceleration control process proceeds to block . 
If the absolute value of the newly set adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dis not greater than the absolute value of the torque necessary to run the air-conditioning system , then the deceleration control process proceeds to block . 336 34 34 20 336 34 26 336 300 340 ss At block , the controller sets the virtual pedal input to zero (tip out completely). In other words, the controller commands the propulsion system to produce zero torque. At block , the controller commands the brake system to actuate to provide the remaining commanded axle torque τ. After block , the deceleration control process proceeds to block . 338 34 34 20 338 300 340 req max peak ss ss req max peak ss ss At block , the controller sets the commanded engine torque to the sum of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dand the commanded axle torque τto road load torque at the set speed v. In other words, the controller commands the propulsion system to reduce the commanded axle torque to the sum of the adjusted reduction in torque τrequired (if applied constantly) to achieve the maximum allowed speed vat the peak distance dand the commanded axle torque τto road load torque at the set speed v. After block , the deceleration control process proceeds to block . FIGS. 10A and 10B FIG. 8 FIG. 6 FIG. 6 400 100 400 109 400 402 402 34 10 34 10 402 400 404 34 402 404 400 0 1 2 schematically illustrate a flowchart of the headway control algorithm or process of the cruise control method (). As discussed above, the headway control process begins at block . Then, the headway control process proceeds to block . At block , the controller computes and generates a table of times required for the driven vehicle to reach each distance breakpoints x, x, x, etc. (i.e., the first distance table in ) using the open-road projected speeds (i.e., projected-speed table PST or updated, projected-speed table UPST if the commanded axle torque was adjusted). Stated differently, the controller is programmed to generate a table of times required for the driven vehicle to reach each of the predetermined-upcoming locations i.e., the first distance table FDT in ) using the projected-speed table (i.e., projected-speed table PST or updated, projected-speed table UPST if the commanded axle torque was adjusted). After block , the headway control process proceeds to block . Alternatively, the controller may execute block and simultaneously to minimize the time it takes to execute the headway control process . 404 34 11 10 11 11 11 34 11 34 10 11 11 11 40 34 400 406 0 1 2 0 FIG. 6 FIG. 6 At block , the controller computes and generates a similar table for the followed vehicle . Specifically, given the initial following distance from the driven vehicle to the followed vehicle , the speed of the followed vehicle , and the acceleration of the followed vehicle , the controller computes how long it will take the followed vehicle to reach the same distance breakpoints x, x, x, etc. (i.e., the first distance table in ). As discussed above, the controller may determine the initial following distance from the driven vehicle to the followed vehicle at the first distance breakpoint x(first predetermined-upcoming location), the speed of the followed vehicle , and the acceleration of the followed vehicle using the sensor devices , such as radars or forward-looking cameras. 
Therefore, the controller generates the second distance table SDT (), which includes the times required for the followed vehicle to reach each of the predetermined-upcoming locations. Next, the headway control process proceeds to block . 406 34 10 11 34 11 10 400 408 FIG. 6 FIG. 6 i i i i i i i i i At block , for all the predetermined-upcoming locations in the first distance table FDT () and the second distance table SDT () where the distance xis less than a predetermined calibratable distance, the controller calculates the difference between the time required for the driven vehicle to reach each of the predetermined-upcoming locations (as shown in the first distance table FDT) and the time required for the followed vehicle to reach each of the predetermined-upcoming locations to determine each of the following times at each predetermined-upcoming location, thereby generating the following time table FTT. To do so, at each predetermined-upcoming location (i.e., distance x) that is less than the predetermined calibratable distance, the controller subtracts the time it will take the followed vehicle to reach the distance xfrom the time it will take the driven vehicle to reach the same distance xto obtain the following time at each of the predetermined-upcoming location (i.e., at each distance x). The distance xmay be referred to as point x, and the time required to reach the distance xmay be expressed as a function of time t(x) or t(x). The headway control process then proceeds to block . 408 34 406 34 406 406 400 410 min At block , the controller determines whether any points in the array (i.e., table) generated in block is less than the predetermined-minimum time threshold. In other words, the controller compares the following times with the predetermined-minimum time threshold to determine whether at least one of the following times (determined in block ) is less than the predetermined-minimum time threshold. The predetermined-minimum time threshold is equal to a minimum following time ttimes a first safety factor α. If none of the following times (determined in block ) is less than the predetermined-minimum time threshold, then the headway control process proceeds to block . 410 34 20 34 20 410 400 412 At block , the controller maintains commands the propulsion system to maintain the commanded axle torque. In other words, the controller commands the propulsion system to maintain the commanded axle torque in response to determining that each of the determined following times is equal to or greater than the predetermined-minimum time threshold. After block , the headway control process proceeds to block . 412 34 400 100 104 412 400 At block , the controller exits from the headway control process , and the method proceeds to block . In other words, at block , the headway control process passes its check. 406 34 400 414 Returning to block , if the controller determines that at least one of the following times is less than the predetermined-minimum time threshold, then the headway control process proceeds to block . 
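Blocks 402 through 408 build two arrival-time tables and difference them into the following time table. A hedged sketch follows, assuming piecewise-constant speeds between equidistant breakpoints and an explicit head-start credit for the followed vehicle; layouts, spacing, and values are illustrative.

```python
def times_to_reach(speeds, dx, head_start_m=0.0):
    """Cumulative time to reach breakpoints spaced dx apart from the driven
    vehicle's position; head_start_m credits the followed vehicle for the
    initial following distance (it is already part-way to each breakpoint)."""
    t, table, credit = 0.0, [], head_start_m
    for v in speeds:
        step = dx
        if credit > 0.0:                 # part of this interval is already covered
            used = min(credit, step)
            credit -= used
            step -= used
        t += step / max(v, 0.1)          # guard against a zero speed
        table.append(t)
    return table

def following_time_table(fdt, sdt):
    """Following time at each breakpoint: driven-vehicle arrival time minus
    followed-vehicle arrival time (the FTT of the text)."""
    return [td - tf for td, tf in zip(fdt, sdt)]

# Example: followed vehicle 40 m ahead, both vehicles near 25-27 m/s.
fdt = times_to_reach([27.0, 27.0, 26.5, 26.0], dx=50.0)
sdt = times_to_reach([25.0, 25.0, 25.0, 25.0], dx=50.0, head_start_m=40.0)
ftt = following_time_table(fdt, sdt)
has_violation = any(t < 0.8 for t in ftt)   # compare against alpha * t_min
```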
At block 414, the controller 34 commands the propulsion system 20 of the driven vehicle 10 to decrease the commanded axle torque by a torque adjustment in order to prevent each of the following times at each of the predetermined-upcoming locations from being less than the predetermined-minimum time threshold in response to determining that at least one of the following times is less than the predetermined-minimum time threshold. As a result, the controller 34 prevents the driven vehicle 10 from reaching the followed vehicle 11 during cruise control. Specifically, if there is a projected violation of the predetermined-minimum time threshold before a predetermined calibratable distance D, then the axle torque command will be reduced to prevent the driven vehicle 10 from too closely following the followed vehicle 11. The controller 34 commands the propulsion system 20 to modify the commanded axle torque so that, at the distance x_crit (i.e., the distance at which the violation occurs):

V_{veh}(x_{crit}) = V_f(x_{crit}) \quad\text{and}\quad t(x_{crit}) - t_f(x_{crit}) = \beta\, t_{min}^{*}

where:
V_veh(x_crit) is a speed of the driven vehicle 10 at the distance x_crit (i.e., the distance at which the violation occurs);
V_f(x_crit) is a speed of the followed vehicle 11 at the distance x_crit (i.e., the distance at which the violation occurs);
t(x_crit) is a following time of the driven vehicle 10 at the distance x_crit (i.e., the distance at which the violation occurs);
t_f(x_crit) is a following time of the followed vehicle 11 at the distance x_crit (i.e., the distance at which the violation occurs);
t*_min is the predetermined-minimum time threshold; and
β is a second calibratable safety factor that is greater than the first safety factor α in order to avoid violating the predetermined-minimum time threshold.

This ensures that the controller 34 targets an acceptable following time β·t*_min and matches the speed of the followed vehicle 11 at this minimum following time. The axle torque change required to achieve these conditions (i.e.,
the torque adjustment) is summarized in the following relation:

\tau_{az} = r_w\, m\left[\frac{v_c - v_{f(x_{crit})}}{t_c + t_{f(x_{crit})} - \beta\, t_{min}^{*}}\right] - \frac{1}{x_N}\sum_{i=0}^{N-1} a_i\, \Delta x_i

where:
τ_az is the change in axle torque required to meet a plurality of auto-following distance constraints;
v_c is the current speed of the driven vehicle;
v_f(x_crit) is a speed of the followed vehicle at the distance x_crit;
x_crit is the distance at which a violation of the predetermined-minimum time threshold occurs;
t_c is the current following time;
t_f(x_crit) is the time required for the followed vehicle to reach the distance x_crit;
a_i is a vehicle acceleration of the driven vehicle at index i;
Δx_i is the incremental distance between cell i and cell i+1;
r_w is the rolling radius of a wheel of the driven vehicle in millimeters;
m is the mass of the driven vehicle in kilograms;
β is the second safety factor that is greater than the first safety factor in order to avoid violating the predetermined-minimum time threshold; and
t*_min is the predetermined-minimum time threshold.

After block 414, the method proceeds to block 416. At block 416, given the new axle torque command (i.e., the torque adjustment) determined in block 414, the controller 34 re-computes the projected-speed tables PST of the driven vehicle 10 and the followed vehicle 11 and the time/distance tables (i.e., the first distance table FDT, the second distance table SDT, and the following time table FTT). The headway control algorithm 400 then returns to block 406.

The detailed description and the drawings or figures are supportive and descriptive of the present teachings, but the scope of the present teachings is defined solely by the claims. While some of the best modes and other embodiments for carrying out the present teachings have been described in detail, various alternative designs and embodiments exist for practicing the present teachings defined in the appended claims.
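As a closing illustration for the block 414 relation above, the sketch below evaluates the torque adjustment numerically. The published grouping of terms is ambiguous after extraction; this follows the dimensionally consistent reading in which r_w·m multiplies both the speed-matching term and the mean look-ahead acceleration term, and it uses meters for the wheel radius. All numeric values are placeholders, not from the source.

```python
def headway_torque_adjustment(v_c, v_f_crit, t_c, t_f_crit, beta, t_min,
                              accels, dx, wheel_radius_m, mass_kg):
    """Hedged sketch of the block-414 torque adjustment. Speeds in m/s,
    times in s, wheel radius in m, mass in kg; result in N*m. The average
    look-ahead acceleration term uses equidistant cells of width dx."""
    x_n = len(accels) * dx                          # distance to the last cell
    speed_match = (v_c - v_f_crit) / (t_c + t_f_crit - beta * t_min)
    mean_accel = sum(a * dx for a in accels) / x_n
    return wheel_radius_m * mass_kg * (speed_match - mean_accel)

# Illustrative numbers only (none of these values come from the source).
delta_tau = headway_torque_adjustment(
    v_c=27.0, v_f_crit=24.0, t_c=1.6, t_f_crit=12.0,
    beta=1.2, t_min=0.8, accels=[0.1, 0.05, 0.0, -0.05], dx=50.0,
    wheel_radius_m=0.33, mass_kg=1800.0)
```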
CROSS-REFERENCE TO RELATED APPLICATION BACKGROUND SUMMARY DETAILED DESCRIPTION This application is a continuation of and claims priority to International Patent Application No. PCT/JP2013/078868 filed on Oct. 24, 2013, which claims priority to Japanese Patent Application No. 2012-241981 filed on Nov. 1, 2012, subject matter of these patent documents is incorporated by reference herein in its entirety. (i) Technical Field The present invention relates to focal-plane shutters and optical devices. (ii) Related Art Japanese Unexamined Patent Application Publication No. 09-179167 discloses a focal-plane shutter driving a blade by two arms. To improve the shutter speed, it is contemplated that two arms are made of synthetic resins for reduced weight. In this case, the rigidity of the arms are reduced, so that the arms might flap in the optical axis direction when the blade stops after moving from a state of closing an opening to a state of opening the opening. This might damage the arms. According to an aspect of the present invention, there is provided a focal-plane shutter including: first, second, and third boards each including an opening and arranged in an optical axis direction passing through the openings; a first blade arranged between the first and second boards and capable of opening and closing the openings; a second blade arranged between the second and third boards and capable of opening and closing the openings; first and second arms arranged between the second and third boards, connected to the second blade, made of synthetic resins, and capable of rotating about respective different fulcrums; a drive member connected to the first arm and driving the first arm; and a support member provided within a working region of the first arm when viewed in the optical axis direction or in a position close to the first arm and distant from the second arm in an opened state where the second blade opens the openings, the support member being provided in the first board side and supporting the second board. FIG. 1 FIG. 2 FIG. 1 FIGS. 1 and 2 1 1 1 1 10 20 20 31 32 31 32 40 40 50 50 72 72 72 72 50 50 10 11 20 20 11 72 20 72 20 a a b b a b a b a b a b a b a b is a front view of inner structure of a focal-plane shutter according to the present embodiment. is an external perspective view of the focal-plane shutter . The focal-plane shutter is employed in an optical instrument such as a digital camera or a still camera. The focal-plane shutter includes a board , a leading blade A, a trailing blade B, arms , , , and , drive members and , output members and , and rotors and . Additionally, the rotors and , and the output members and are omitted in . The board includes an opening . The leading blade A and the trailing blade B open and close the opening . The rotor is included in an actuator for driving the leading blade A. The rotor is included in an actuator for driving the trailing blade B. Each actuator includes a stator around which a coil is wound, and is omitted in . 20 21 23 20 20 20 11 11 11 11 20 11 20 11 a a FIGS. 1 and 2 The leading blade A includes plural blades to . The trailing blade B also includes plural blades. Each of the leading blade A and the trailing blade B can shift between an overlapped state where the plural blades overlap one another and an expanded state where the plural blades are expanded. These plural blades recede from the opening in the overlapped state to bring the opening into a fully opened state. 
These plural blades close the opening in the expanded state to bring the opening into a fully closed state. illustrate the state where the expanded leading blade A closes the opening and the trailing blade B recedes from the opening . 20 31 32 20 31 32 31 32 31 32 14 15 14 15 10 a a b b a a b b a a b b FIG. 1 The leading blade A is connected to the arms and . The trailing blade B is connected to the arms and . As illustrated in , the arms , , , and are rotatably supported by spindles , , , and provided in the board , respectively. 40 40 31 31 40 40 43 43 31 31 10 13 13 43 43 40 40 72 72 50 50 72 72 50 50 40 40 20 20 50 50 40 40 a b a b a b a b a b a b a b a b a b a b a b a b a b a b a b The drive members and drive the arms and , respectively. The drive members and are provided with drive pins and connected to the arms and , respectively. The board is formed with escape slots and for permitting the movement of the drive pins and , respectively. The drive members and will be described later in detail. The rotors and are respectively connected to the output members and . The rotation of the rotors and rotates the output members and , so the drive members and rotates, which drives the leading blade A and the trailing blade B, respectively. The positions of rotational axes of the output members and and the drive members and are different from one another. 50 50 40 40 55 55 45 45 55 45 55 45 50 50 40 40 31 31 31 31 31 31 31 31 a b a b a b a b a a b b a b a b a b a b The output members and and the drive members and are respectively formed with gear portions , , , and . The gear portions and mesh with each other, and the gear portions and mesh with each other, so that the rotation of the output members and respectively rotate the drive members and . The arms and are partially attached with reinforcement members A and B. The arms and and the reinforcement members A and B are made of synthetic resin, and each thereof has a thin plate shape. FIG. 2 10 19 13 19 13 13 13 40 40 10 a a b b a b a b As illustrated , the board is formed with a positioning portion near one end of the escape slot . Likewise, a positioning portion is formed near one end of the escape slot . The other ends of the escape slots and are provided with rubbers Ga and Gb for absorbing the impact of the drive members and , respectively. In addition, the board is assembled with a holder holding the above actuators not illustrated. FIG. 1 FIG. 1 10 10 10 4 4 1 2 31 32 s s b b As illustrated in , the board is secured with a support member GS. The support member GS is separate from the board . Further, the board is provided with plural support projections is to . The support member GS and the support projections is to will be described later. Furthermore, in , working regions R and R of the arms and are depicted by dotted lines. FIG. 3 FIG. 3 FIG. 1 FIG. 3 43 14 1 10 10 10 10 10 10 10 10 11 10 10 10 20 31 32 10 10 20 31 32 10 10 10 20 20 31 31 10 10 10 20 20 b b a a b b a b is a sectional view around the support member GS. Additionally, is the sectional view taken along line passing through the support member GS, the drive pin , and the spindle in . As illustrated in , the focal-plane shutter includes boards A and B besides the board . The boards to B are arranged in this order in the optical axis direction. That is, the board A is provided between the boards and B. Like the opening provided in the board , an opening is provided in each of the boards A and B. The leading blade A and the arms and are arranged between the boards and A. 
The trailing blade B and the arms and are arranged between the boards A and B. The board A prevents the interference of the leading blade A with the trailing blade B and the interference of the arm with the arm . The board is an example of a first board. The board A is an example of a second board. The board B is an example of a third board. The leading blade A is an example of a first blade. The trailing blade B is an example of a second blade. 80 10 72 80 10 84 80 42 40 40 43 31 10 10 31 20 31 32 43 20 10 43 10 13 a b b b b b b b b b b b b A holder assembled on the board holds the actuators not illustrated. The actuator includes the rotor , the stator, and the coil. The holder is assembled on the board . A spindle of the holder is fitted into a support hole of the drive member for rotation. Therefore, the drive member is rotatably supported. The drive pin extends in a predetermined direction and is connected to the arm arranged between the boards A and B. As mentioned above, the arm is connected to the trailing blade B. The arm is an example of a first arm. The arm not connected to the drive pin but to the trailing blade B is an example of a second arm. Additionally, the board A is provided with an escape slot for receiving the drive pin . The board B is provided with an escape slot B. 10 10 10 10 10 10 18 18 10 10 18 10 18 10 18 10 18 4 10 10 10 s The support member GS is secured to a surface, of the board , facing the board A, and supports the board A from the board side. The support member GS is made of a rubber. In particular, the support member GS is a foamed rubber forming bubbles, but not limited to this. For example, the support member GS may be made of a synthetic resin having no elasticity or may be a rubber, a leaf spring, a coil spring, or the like having elasticity. The support member GS may be integrally formed in the board . In an inner peripheral edge of the board , a receiving portion protrudes inwardly from the inner peripheral edge. The receiving portion supports the board A from the board B side. In other words, the receiving portion receives the board A serving as the second board between the receiving portion and the support member GS from the board B side serving as the third board. Herein, the support member GS is secured near the receiving portion , and an end portion of the board A is sandwiched between the support member GS and the receiving portion . Further, the support projections is to formed in the board also support the board A from the board side. FIG. 1 31 32 20 11 31 b b b. Next, the position of the support member GS will be described. As illustrated in , the support member GS is provided at a position close to the arm and distant from the arm in the state where the trailing blade B opens the opening . That is, it is provided near the arm 72 50 40 20 11 43 40 13 19 40 31 31 31 31 31 43 31 32 31 31 31 32 31 11 20 20 31 b b b b b b b b b b b b b b b b b b b b b For example, the rotor causes the output member and the drive member to rotate from the state where the trailing blade B closes the opening . Thus, the drive pin of the drive member moves within the escape slot and abuts the positioning portion . At this time, the impact is applied to the drive member , so the arm might flap in the optical axis direction. When the arm flaps in the optical axis direction, the load is applied to the arm , so the arm might be damaged. In particular, since the arm engages with the drive pin , the area of the arm is formed to be greater than that of the arm . 
In addition, the reinforcement member B is also attached to the arm . Thus, the arm is formed to be heavier than the arm . For this reason, the arm tends to flap. Further, in the opened state where the opening is opened, the blades of the trailing blade B are brought into the overlapped state. Therefore, the trailing blade B further tends to flap in the optical axis direction, and the arm tends to flap. 31 20 10 10 31 31 31 31 31 b b b b b b FIG. 1 In the present embodiment, the support member GS is provided near the arm in the opened state of the trailing blade B. The support member GS presses the board A, whereby the board A suppresses the arm from flapping. This can prevent the damage to the arm . Further, since the support member GS has elasticity, it is possible to absorb the impact generated from the arm . Furthermore, as illustrated in , the support member GS overlaps the arm when viewed in the optical axis direction. Accordingly, the flapping of the arm can be effectively suppressed thereby absorbing the impact. 18 31 32 20 11 31 31 20 10 18 31 b b b b b. Further, like the support member GS, the receiving portion is formed at a position close to the arm and distant from the arm in the opened state where the trailing blade B opens the opening . That is, it is formed near the arm . Since it is formed near the arm when viewed in the optical axis direction in the opened state of the trailing blade B, the board A can be stably held between the receiving portion and the support member GS. It is thus possible to suppress the flapping of the arm 31 32 32 b b b Also, since the flapping of the arm is suppressed, it is also possible to suppress the flapping of the arm . Therefore, the arm is prevented from being damaged. 1 32 31 b b In addition, the support member GS may be provided within the working region R when viewed in the optical axis direction. Moreover, the support member GS may be provided at a position closer to the arm than to the arm in the opened state. FIG. 1 3 4 10 10 1 31 31 1 2 32 s s b b b As illustrated in , the support projections and integrally formed in the board and supporting the board A are provided within the working region R of the arm when viewed in the optical axis direction. It is thus possible to press the flapping of the arm in the optical axis direction in cooperation with the support member GS. Further, the support projection may be provided not only within the working region R but also within the working region R. It is also possible to suppress the flapping of the arm in the optical axis direction. 2 20 20 10 s Furthermore, the support projections is and are formed at positions to overlap the trailing blade B in the optical axis direction in the opened state. This suppresses the trailing blade B from flapping through the board A. 31 20 b In this way, the arm is suppressed from flapping, thereby suppressing the trailing blade B from flapping. Accordingly, the operation noise of the focal-plane shutter can be reduced. While the exemplary embodiments of the present invention have been illustrated in detail, the present invention is not limited to the above-mentioned embodiments, and other embodiments, variations and modifications may be made without departing from the scope of the present invention. 20 20 31 20 10 31 31 20 10 31 b b a a In the present embodiment, the leading blade A and the trailing blade B are driven by use of the actuators. However, the present invention is not limited to this. 
For example, the operation of an electromagnet and a spring may drive the blade through the drive member. Further, in the present embodiment, in order to suppress the flapping of the arm driving the trailing blade B, the support member GS is provided in the board at the arm side. However, in order to suppress the flapping of the arm driving the leading blade A, such a support member may be provided in the board B at the arm side. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a front view of inner structure of a focal-plane shutter according to the present embodiment; FIG. 2 is an external perspective view of the focal-plane shutter; and FIG. 3 is a sectional view around a support member.
Negative and positive visual hypnotic hallucinations: attending inside and out. Hypnotic perceptual alteration affects brain function. Those hypnotic instructions that reduce perception by creating an illusory obstruction to it reduce brain response to perception in the cognate sensory cortex, as measured by event-related potential (ERP) amplitude and regional blood flow (PET). Those hypnotic instructions that affect the subject's reaction to perception activate the anterior attentional system, especially the anterior cingulate cortex in PET studies. Hypnosis involves activation without arousal and may be particularly mediated via dopaminergic pathways. Hypnotic alteration of perception is accompanied by measurable changes in both perceptual and attentional function of those specific regions of the brain that process these activities, modulated by the nature of the specific hypnotic instruction. Positive obstructive hallucinations seem to allow for a hypnotic focus inward, activating the functioning of attentional neural systems and reducing perceptual ones.
Member Since: Aug 10, 2008
Gender: Male
Goal Type: Other
Running Accomplishments: Marathon PR - 3:05 (3:06 at Boston). Completed a dozen marathons and a handful of 50-mile ultras.
Short-Term Running Goals: Diagnosed with Hashimoto's disease so just trying to stay in shape and minimize further damage to thyroid and heart.
Long-Term Running Goals: Maybe complete a 5K again someday.
Personal: Started running more regularly in 2005 after years of mostly strength training. I have cut back on endurance training since developing thyroid- and heart-related issues over the years. Reintroduced regular strength training and walking sessions for a more balanced approach to health.
Another 12 miles finishing fast. Beauty of a day. Now it's house chores to get the place ready for sale. Last mile progressively faster. Easy run mixing in some faster paces during the second half. I was going to do 20 miles, but my right leg has been complaining all week. This morning my right hamstring sent out a few bites so I slowed down and decided to shorten the run. It was feeling much better once it warmed up, but I had to be careful not to overstride. The plan was to run 15 miles, but I felt like exploring a bit more because the conditions were excellent outside. Pushed the pace during miles 13-18 and the final mile. I hardly wear a watch anymore so all I know is it was faster than normal. Now it's family time. Nothing fancy. Last mile was faster than the rest. It felt better than expected given the run yesterday. Another twelver with the last mile faster. Felt like a million bucks today with 53 and sunny conditions. I need to lose about 10 lbs to have a good race late in Oct. Ran around the neighborhood doing all kinds of goofy loops. Did the guy thing and made up some nicknames for running mileage. Came up with these while running today.
8 = Snowman
9 = Jimi 6
10 = Dime
11 = Snake Eyes
12 = Twelver
13 = Baker's Dozen
14 = Charlie Hustle
A ho-hum twelver on a bike path. Mixed in some faster stuff during the second half, but just wanted to be done today. My mind was a total blank most of the time. Long run with some faster miles mixed in during the second half trying to trick the legs into thinking it was closer to 20 miles. Everything felt really good until mile 15 when I started to get some random aches in my lower hamstrings and left achilles. At first I thought I overdid the speed, but then all the aches went away and I had a solid feeling during the last mile. Kind of strange. I took the past two days off so I thought I'd push the distance today. Kept everything at a normal pace (in the 8s) and drank water throughout. It was a slugfest at miles 19-21 since I didn't take in any fuel. Thankfully it got a bit better during the last few miles so I didn't walk it in. Pretty tired now. 12 miles at normal paces during the first half and then pushed the pace more during the second half mixing in some fartleks at various speeds. Legs were stiff during the first mile, but had good zip during the rest of the run. I think eating a solid meal last night was key to the recovery. Jogged and walked 2 miles with Claire. Intended to go my usual 12 miles, but after a bathroom stop I felt clammy and dizzy so I turned around at 4 miles. Only thing worth noting is I mixed in some speed play and I hit a bad spot during miles 16 and 17 that eventually subsided. A real struggle today. Just wasn't into running and the heat/humidity made it really tough at the end. I just wanted to sit down in the shade after this one. A hilly 15 miles.
Felt a little bit better, but was a sweaty mess at the end because of the humidity. Felt good and tired at the end. Had to stop and walk a few times because of stomach cramps. My feet were complaining about the string of 15 miles this week. Better day temp- and humidity-wise. Ran really easy and enjoyed the trip more than the past couple of days. Included a fast quarter down to 10K pace during the last 3 miles. Focused on staying relaxed and effortless during the whole run. Absolutely perfect day for running! Supposed to rain tomorrow so I did 20 today. Mixed in a faster quarter during each of the last 4 miles. I felt pretty good overall and probably could have stretched it out a bit more, but had to get back home. 4 more weeks until the marathon. An easy snowman on the TM. A little faster at the end. 12.4 easy 1.6 fast during last 4 miles 13.5 easy, but pushed the pace faster after 10 miles. 12 miles easy with a Van Aaken finish over the last mile.
https://fastrunningblog.com/u/rej/blog-09-2010.html
Dimensions of diversity, including the intersectionality of culture, gender, age, race, ability, class, ethnicity, sexual orientation, religion, spirituality, and family, are examined within the context of social service delivery. Particular attention is given to aging populations and Indigenous Peoples of Canada. Students assess the systemic contexts of identity and critically examine their own social locations. Students also review national and provincial human rights codes, and investigate systemic barriers facing vulnerable populations. Credits: 3 Course Hours: 42 Students registering for credit courses for the first time must declare a program at the point of registration. Declaring a program does not necessarily mean students must complete a program; individual courses may be taken for skill improvement and upgrading.
https://www.georgiancollege.ca/academics/part-time-studies/courses/diversity-multiculturalism-sswk-2000/
Taoism is the ancient Chinese wisdom tradition that is based upon the study of the Tao. Tao is the way of the divine source revealed in the natural world. The roots of Taoism reach back to the ancient Shamanic traditions of China 3,000 years ago. These currents were eventually expressed in poetic philosophy by the sages Lao Tzu and Chuang Tzu. They taught that the source of all manifestation was ever present but beyond the limitations of verbal expression. The way to experience Tao was to become receptive, to empty the mind of limiting preconceptions, and to harmonize with the flow of nature. Since these beginnings, various schools of Taoism developed, each stressing a different way of becoming more receptive and harmonious. Some traditions stressed ritual and philosophical contemplation, while others focused upon the purification and transformation of the body through the practice of meditation and breathing exercises. Some aspects of this knowledge were taught openly for popular consumption, while others were kept secret, a practice maintained until very recently. Both a popular and a secret tradition were taught and continue to the present time. The popular tradition took the form of a very colorful religion that served the needs of the common person. The secret tradition focused on spiritual development intended for the serious practitioner; it contained the methods of emptying and purifying that the serious student used in their spiritual development. Today, Taoism in the United States is characterized by the emergence of the “secret” traditions into the marketplace of the New Age subculture and the Kung Fu martial arts. There is now a wide variety of Taoist methods and disciplines readily available to the interested student.
http://www.taoistsanctuary.org/taoism/
Immigration has always been a part of America’s history. Recently immigration reform has become the center of heated political debates. There are some who suggest that immigration is a good thing for the Nation while others suggest that it’s not. Traub and Tayler (2009) suggested that in 2006 the population of undocumented immigrants stabilized at 11.9 million. It has been suggested that as of 2007, 12.6% of the entire US population is foreign-born. More than half of the immigrants in the US are from Latin America and the Caribbean. Approximately 1 in 6 immigrants have either entered the country illegally or have entered legally and stayed beyond their time (Palivos, 2009). Researchers, however, have questioned the legitimacy of the numbers recorded. Some suggest that the exact number of undocumented immigrants is unknown simply because the numbers are based on those who were arrested and failed to enter the US (Nadadur, 2009). There has been talk that a large number, if not the majority, of the crimes that occur in the US are committed by undocumented immigrants. This notion that undocumented immigrants commit more crimes often dominates anti-immigration legislation. However, studies have suggested that immigrants are less likely to commit crimes than natives (Butcher & Piehl, 2008). Wadsworth (2010) went on to suggest that, on the contrary, immigrants are partly responsible for one of the most noticeable declines in crime that the US has experienced. Thus, it is imperative to conduct the proper research documenting the public perception on the issue. Most important for us is the need to understand how future professionals in areas of public justice and overall human services perceive immigration issues. During the past summer, Dr. Rivera-Vázquez and two undergraduate students, Renée Sterling and Tiffany Huynh, from the State University of NY at Oswego decided to take a closer look at this issue. The research team received grant funding to conduct research on the perception of immigration reform, undocumented immigrants, and crime. The sample consisted of 97 Public Justice students. Most participants perceived that the U.S. immigration system is broken. When it comes to their perception about immigration and crime, a majority of the participants do not perceive immigrants to be more likely to commit a criminal act when compared to non-immigrants. However, there are still high percentages who believe otherwise. Over 30% of the participants agreed undocumented immigrants should be forced out of the country. Results from this study also suggest that 17% of the sample perceived that property crime has increased due to undocumented immigration. A large majority of the sample also agreed that undocumented immigrants use smugglers to enter the U.S. There is documented evidence that many immigrants enter the U.S. legally and simply overstay; thus they do not necessarily use smugglers or cross the borders to come into the country. Given these results, we need to look carefully at how we train criminal justice and/or human services professionals on issues related to sensitivity to the immigration and crime debate. Our students often express their interest in entering fields such as police, probation and parole, juvenile justice, ICE, FBI, counseling, social work, etc. Ultimately, these future professionals will pursue careers that will put them in increased contact with immigrant communities. 
Thus, having our students become aware of their own biases will only increase the quality of professionals we produce. About the authors: Omara Rivera Vázquez, Ph.D., is Puerto Rican and holds an Assistant Professor position within the Department of Public Justice at the State University of New York at Oswego. She received her doctoral degree in Family and Child Ecology as well as a dual master’s degree in Criminal Justice-Urban Affairs from Michigan State University. Renee Sterling is an undergraduate senior studying Psychology and Public Justice (minor) at the State University of New York at Oswego. She hopes to pursue a career in social work.
https://cnylatinonewspaper.com/english/political/social-november-2013/
The UNHCR said Australia has violated Torres Strait islanders' rights by ignoring their concerns about rising seas caused by climate change. Binoy Kampmark reports. Torres Strait Islands Eight Torres Strait Islander elders are taking the federal government to the United Nations Human Rights Committee over its inaction on climate change. Coral Wynter reports. Eddie Mabo did not fight for ngau lag (my land) only to lose it to climate change, argues Makiba. Mark Kabay-Saleh, from the islands of Masig and Poruma in the centre of the Torres Strait, spoke powerfully about saving his islands at the Brisbane School Strike 4 Climate. Joseph Elu, chair of the Torres Strait Regional Authority, told Radio National’s PM on January 5 that the islands that have been home to Indigenous people for thousands of years are “being inundated”, right now because of climate change. “A couple of our islands, the tide rises over the sea walls of the beachfront and it flows under the houses and out the other end ... They’re predicting that in 100 years, then they’ll go under.” The Show Will Go On Mau Power Coming soon www.maupower.com Mau Power's new album takes listeners through the big changes in his life - and the first of those came when he was jailed. "In 2001, I got incarcerated," the Torres Strait Islands rapper tells Green Left Weekly. "And that time I was in lock-up for nine months." Mau Power, also known as Patrick Mau, was put away for grievous bodily harm after a street fight in the southern Queensland country town of Toowoomba.
https://www.greenleft.org.au/tags/torres-strait-islands
If Peyton Manning hadn't decided to team up with his dad to write a book in 2001, there's a good chance the quarterback's reputation wouldn't be under fire in 2016. Back in July 2001, Peyton and his dad Archie, along with ghostwriter John Underwood, collaborated on a book called Manning: A Father, His Sons, and a Football Legacy. By the time the book was set to be released, the alleged "mooning" incident that took place at Tennessee between Peyton and former Volunteers trainer Dr. Jamie Naughright was a thing of the past. Naughright officially left Tennessee in August 1997 after receiving a $300,000 settlement from the school. The settlement wasn't exclusively tied to the incident involving Peyton; it was also tied to 32 other claims she made during her tenure at the school, according to an Associated Press story from 1997. Five years after the incident, both parties had seemingly moved on: Peyton was the quarterback of the Indianapolis Colts and Naughright was the program director of Florida Southern's athletic training program. The incident between Manning and Naughright took place in February 1996, and by 2001 it looked like it was behind everyone, never to be talked about again. However, that changed when Archie and Peyton decided to release their book on July 28, 2001. Two months before the book came out, an excerpt was sent to Naughright at Florida Southern. The piece of mail was addressed to "Dr. Vulgar Mouth Whited," according to court documents filed by Naughright's legal team. Whited was Naughright's last name while she was at Tennessee. The excerpt was mailed to Naughright in May 2001, which was when she first found out about the book. Not too long after the book was released, Naughright would lose her job at Florida Southern. In May 2002, Naughright decided to file a defamation lawsuit against Peyton Manning, Archie Manning, John Underwood, and the company that published the book, HarperCollins. That defamation suit is what led to the 74-page court document that was the subject of the recent New York Daily News story that called into question Peyton's "squeaky-clean image." Basically, if Peyton doesn't write the book in 2001, then there's no defamation suit; if there's no defamation suit, there's no 74-page "facts of the case" document; and if that document doesn't exist, Peyton's reputation is probably still intact. So what did Peyton write in that 2001 book? For one, the quarterback mentions that having a woman in a men's locker room "is one of the most misbegotten concessions to equal rights ever made." Manning claims that he "mooned" Naughright and it was "inappropriate." However, the quarterback also added that what he did wasn't "exactly a criminal offense." Here's the exact description of the incident from Manning: A Father, His Sons, and a Football Legacy, where Peyton mentions that his brother Cooper would've done the same thing. "Then one day I was in the training room and a track athlete I knew made some off-color remark that I felt deserved a colorful (i.e., Cooper-like) response. I turned my back in the athlete's direction and dropped the seat of my pants." The athlete whom Peyton allegedly "mooned" denied that the "mooning" ever happened, according to court documents (pg. 20). It's important to note that although Manning claims he mooned Naughright, she says Manning went way further than that. 
In court documents filed by her legal team (via USA Today), Naughright says Manning used his "gluteus maximus, the rectum, the testicles and the area in between the testicles. And all that was on my face when I pushed him up. ... To get leverage, I took my head out to push him up and off." In the book, Peyton adds that he thought Naughright would find the incident humorous, given "the environment" they were in. Peyton also mentions in the book that Naughright "had been accumulating a list of complaints against the university that she intended to take action on -- alleged sexist acts that, when her lawyer finally put it together, resulted in a lawsuit charging thirty-five counts of sexual harassment." Peyton closed out the Naughright chapter of the book with this sentence, "It's all past history now ... but it hurt me." Naughright was clearly offended by all of this, which led to the defamation lawsuit, and again, if Manning doesn't mention her in the book, then there's no defamation lawsuit. The "facts of the case" obtained by the New York Daily News was originally entered into court in October 2003. Two months after that in December 2003, the two parties settled the defamation case and agreed to never talk about it again, citing a confidentiality agreement. However, Peyton was sued again in January 2005. This time Naughright sued him for breaking the confidentiality agreement from the earlier lawsuit. According to the second lawsuit (via Jacksonville.com), Manning brought up the Tennessee incident in an episode of ESPN Classic Sports Century: Peyton Manning that aired on Dec. 30, 2004. That lawsuit was settled in July 2005. So why did Peyton keep talking about the case? Only Peyton knows the answer to that question. Whatever the reason was though, Peyton's inability to adhere to the confidentiality agreement about the incident is why the story never died and because the story never died, many people are reading it and finding out about it for the first time this week.
https://www.cbssports.com/nfl/news/peyton-manning-detailed-his-alleged-mooning-incident-in-this-2001-book/
I agree; that is crazy! I have been using CA for 50 years and never heard this. I did a search for “CA glue fire danger” and your post was the #4 result, after articles about California fires. Further research also said in addition to cotton, it can react with wool. I looked it up on Wikipedia and saw that it was discovered at B.F. Goodrich in 1941. Russell August 23, 2020 at 2:35 PM in reply to: Schedule 2020 #15172 I had a post ready yesterday, but apparently did not hit SUBMIT. I had proposed Group C for the next proxy race, but I understand Marty’s comment about continuing the tradition of Group C being the first race series each year. I had thought that Group C might get the greatest number of entries for a proxy race. We could quite likely still be running proxy races in January. Between the Slot.it McLaren and Group 5, I would personally prefer the McLaren. I assume the Group 5 Marty mentions is the Group 5 in 2017. Although technically not a Racer series, that was the car you needed to be competitive. Not having a Racer, I would rather not have to purchase one. I also await members’ thoughts on what series to run and which series they have cars they would be willing to enter. Russell April 9, 2020 at 12:43 AM in reply to: Adjustable Voltage #14950 There will be an LED readout at each station. Ary demonstrated it at the last video conference. The assembly was 3 or 4 inches square and a couple of inches tall. April 8, 2020 at 4:34 PM in reply to: Adjustable Voltage #14948 I would think that some experimentation would be required on this topic. It will be simple enough to try running various cars at different voltages (using the voltage adjustment on the Pyramid power supply) and comparing lap times versus drivability. This will take some work. Say with a Group C car, try it at 12 volts and see how it runs. Is it faster? Is it faster but much twitchier? Put a numerically taller gear in it and see if you can get a faster lap speed than the baseline but still have it drivable. Try it at 13 volts and test it some more. I think we will need time to have everyone do this experimentation. It looks like adjusting the voltage could be an easy fix to help a car with a weak motor, but it might make setup more complicated because it could also work with a “normal” or “hot” motor with different gearing, which would raise the bar, and the “weak” motor would be outclassed once again. If this is the case, we could require a specific gear and let everyone choose their voltage. For this testing we may need to wait until safer track access is available. It sounds like Ary’s voltage regulators would cost $30 plus for each lane, plus the time to install them. With club dues now due once again, and the possibility that some members may be short on money, I don’t think we should spend the money for the regulators right now. Except for the cost and labor I do not see a downside to installing the regulators, whether we use them, or limit their use, or not. It does give us more options. March 16, 2020 at 11:57 AM in reply to: ASCC Monthly Meeting – March 14 2020 #14874 Ditto on the quick write-up, Mark. March 9, 2020 at 4:16 PM in reply to: 2020 Thunderslot Discussion #14856 Marty, my bad for not getting Thunderslot rules posted yet. The rules will be the same as the 2019 rules except for allowing the McLaren M6A and the new, thicker wood guide. The 2019 rules said gears were open choice. If anyone has already purchased an Elva please let me know. 
Russell March 8, 2020 at 10:07 PM in reply to: 2020 Thunderslot Discussion #14853 Marty reports that the new Elva being produced is a very fast car. There is also no telling what may happen to slot car supplies with the Coronavirus outbreak, and it is possible the Elva could be in short supply anyway. Because of that, for right now I am inclined not to allow it in the series this year. The McLaren has seemed to be very similar to the Lolas, and several members have them, so they will be allowed. That should give us enough variation for this year and give us time to check out the Elva for the future. February 5, 2020 at 9:24 PM in reply to: 2020 Series Discussion #14773 I agree with completing/running Targa Florio and BRM1/24. If we are going to run Ford V Ferrari, I think we should open it up to Scalextric Ferrari P4/412 cars with 3D printed chassis. There are a number of Ferrari liveries available. Of course, they might find it tough against the Slot.it GT40s. How does the Policar Ferrari run? I would also go for Thunderslot. July 29, 2019 at 4:56 PM in reply to: Moving Out of Kings #14451 I did some checking on the rigidity of the track at King’s today. I took a floor jack and lifted one end of the track. With that end 3” off the floor, the center legs were just raised. A string line confirmed that the deflection (droop) at the center of the track was 1 ½”. I believe that the track is movable intact. I have been thinking about what kind of casters, wheels, or dollies might be used. A pair of wheels in the center of the track, with the track balancing on them, would be very maneuverable. To start with, 8’ 2 X 12’s would be bolted to the inside of the central legs. My first thought was then to attach a 5/8” axle between the 2 X 12’s with two wheelbarrow-type wheels and tires. Harbor Freight has a variety of wheels and tires, including solid rubber tires. The difficulty with these wheels would be in having to raise the track to install or remove the wheels. Another option would be boat trailer jacks bolted to the 2 X 12’s. Harbor Freight has a 1500 lb. trailer jack with dual 6” solid rubber wheels. The two trailer jacks would be swung into position and cranked down. In order to load it on a flatbed trailer, an axle would be installed on the leading end of the track with two wheels mounted. With enough manpower we could lift the rear end as the front end of the track was rolled onto the trailer. The 2 X 12’s would be permanently installed. The trailer jacks would be left in place for further use. The 1 X 4’s (if necessary) could be removed. I believe everything else on the track could be left in place including the shelving for the computer and the computer itself. I also did some checking to see if we could roll the track out of the storage area through the overhead door. The overhead door is at an angle. I used a laser pointer with a board the width of the track to see if the angled door and storage area width would give enough clearance, and I believe there is enough. It is hard to tell for sure until more of the storage area is cleaned out. If the clearance is tight to get out the door, the driver’s station and tape dispenser could be removed. July 16, 2019 at 3:58 PM in reply to: Moving Out of Kings #14425 Moving the big track intact is a possibility, but it will be very heavy. There is not a lot of structural bracing holding the three tables together, but each table is fairly rigid. 
There are 1 X 4 braces underneath the lateral 2 X 4’s, and the track sections are screwed to a 1 X 12 brace underneath each joint in the track surface. We would need additional bolts fastening the tables together. The strongest place for these bolts would be through the tops of the legs. This would necessitate cutting off the remainder of the legs once the table is on its side. (If we separate the three tables, the legs could be unbolted to remove.) I think that with these bolts the table would be rigid enough to move, but we would need a big trailer/truck. I have removed my two electric heaters, DVD player, some cables, and loose pictures. I may remove the amp and speakers later in the week. July 7, 2019 at 1:54 PM in reply to: 2019 Gumball/Christmas Run #14408 Mark, I don’t think we should use a grabber during a “hot track.” The primary concern for marshalling, especially for this race, is to pick up the de-slotted car as quickly as possible. The other cars will be coming very quickly and travelling very fast. We want to avoid an unnecessary crash. If it takes longer to re-slot the offender, that is on the driver for de-slotting in the first place. The grabber should only be used during a button caution. June 24, 2019 at 9:18 AM in reply to: BRM GROUP C DISCUSSION #14379 When the decision to go to foam tires was brought up last year, I appeared to be in the minority in opposing this change. My criticism was twofold: first, that the change was unnecessary. We had a problem with Revo Slot tires last year, but a different rubber tire could have been chosen. We ran the ScaleAuto cars with rubber tires and they ran fine. They were slower than other series cars but were controllable and we had no issue with tire wear. My other criticism with foam tires is one of principle. I did not participate in the Thingie/wing car racing of the 70’s and 80’s so I have no nostalgia for foam. My love of scale slot racing is centered on scale. Regular rubber (and urethane) slot tires correspond quite well to 1:1 tires, even to different tire compounds and grip/wear compromises mirroring their real-life counterparts. I have never seen foam rubber tires on a real car. The foam tires are dependent upon tire treatment. While the “tire cleaner” may not be as messy as other tire goop/glue, it nonetheless leaves dark streaks on the track. These streaks detract from the scale look of the track (this would be mitigated if the track surface was black). There is nothing wrong with wing cars, black streaks on the track and super-fast lap speeds, but to me this results in a more toy-like sport rather than one based on scale racing. Now, my above philosophical comments do not address the issue at hand, and I am not recommending a return to real rubber for this series. The different ScaleAuto wheel/tire may prove to work well enough. The BRM and RevoSlot foam tire series to be run later this year may well provide worthy variety for the ASCC racing series, as the rally track (and the unlimited Christmas race) already does. I am merely stating my personal preference for real rubber and true-scale racing. June 3, 2019 at 9:31 AM in reply to: Results WorkBook Template – Group A Results Issues #14293 Thanks, Mark. I have corrected the Group A results. May 27, 2019 at 9:42 PM in reply to: My 1/43 Hillclimb #14185 Wow, Marc, that is a really cool layout. I think I would buy a 1/43 car or two to run on it. 
I recognized “Invasion of the Body Snatchers” (1956) and “Them” but I don’t recognize the one with the cabin and COE truck.April 4, 2019 at 11:48 PM in reply to: 2019 Thunderslot #13824 For newer members, or anyone interested, I recommend going back through the Thunderslot discussion forum for last year: https://www.austinslotcarclub.com/forums/topic/thunder-slot-discussion/ This was a separate thread from this year’s discussion.
https://www.austinslotcarclub.com/forums/users/chapracer65/replies/
Recently I purchased a chocolate bar & I thought "I could totally make this". I was right! This cranberry orange dark chocolate bark tastes just as amazing and is super simple to make. Perfect for holiday gifting or a party. Course Dessert Prep Time 5 minutes Cook Time 25 minutes Total Time 30 minutes Servings 8 servings Author Taryn Solie Ingredients One pound gluten free dark chocolate One teaspoon orange flavoring extract or oil 2/3 cup dried cranberries Zest from one clementine or orange or satsuma Instructions Items Needed: A Sheet Pan Parchment Paper Scissors One medium pot One medium heat-safe bowl A spatula A pot holder or oven mitt A measuring spoon A microplane or zester Directions: Take a sheet pan out and cut a piece of parchment paper to fit the sheet pan, leaving enough extra to hang over the sides of the pan. Place about an inch of water in a medium pot on the stove and turn heat to medium. Heat until water is nearly boiling, but not quite. Break chocolate into small pieces (if not using chocolate chips) and place in the heat-safe medium-sized bowl. Place bowl over pot to create a double boiler. You can also place the bowl in the water; just make sure no water splashes into the chocolate. With a spatula, stir the chocolate as it melts, making sure to scrape the sides and bottom of the bowl so no chocolate burns. As the chocolate is melting, measure the orange flavoring with a measuring spoon and pour it into the bowl slowly, stirring continuously. If chocolate starts to seize or get grainy, keep stirring to incorporate. Once the flavoring is blended into the chocolate, take the bowl out of the pot using a potholder or oven mitt and scrape the chocolate onto the parchment paper using the spatula. Spread the chocolate into an even layer about one-quarter inch thick. To remove any lines from spreading, gently shake the pan back and forth to settle the chocolate. Take the cranberries and sprinkle them onto the chocolate in an even layer. Using the microplane, grate the zest of a clementine (or orange or satsuma, whatever you have) and sprinkle the zest onto the chocolate. It can help to zest a bit, then tap the fruit on the microplane to get the zest to fall off as it tends to stick together a bit. Make sure not to scrape the pith (or white part) of the peel, as that will be bitter. Once you’re finished zesting, place the pan in the freezer for 10 to 15 minutes to harden. Once it’s hard to the touch, it can be broken up and placed in a bag or container to keep in the fridge. Notes: *Melting chocolate can be tricky. It needs to be stirred nearly continuously and watched closely. *Be careful if you’re using orange extract. Adding liquid to chocolate can sometimes cause it to seize and get grainy or clump together. Your best bet is to use an extract with oil in it – or just orange oil. That tends to incorporate better. *This recipe is very versatile and can be adjusted to suit your tastes. You can use more or fewer cranberries, or add in extras like chopped nuts, coconut, or salt. *Don’t keep your bark in the freezer for too long! It will start to get freezer burn on the top if you do. I wouldn’t go longer than 30 or so minutes, just to be on the safe side.
https://www.hotpankitchen.com/wprm_print/480
- Applications close: RACS is one of Australia’s oldest and most successful dedicated community legal centres with a vision of justice and dignity for refugees and a world where those who seek Australia’s protection are able to live their lives with security, family unity and freedom. Through individual advice sessions, community education and public advocacy, RACS strives to ensure that individuals and families, at risk of persecution or other forms of significant harm, gain access to equal and fair representation before the law, are granted protection by Australia, and have opportunities to seek family unity, in accordance with Australia’s international obligations. Our work is premised on a commitment to fundamental human rights, human dignity and international protection. RACS demonstrates this commitment through its independent, impartial and professional advice; the integrity of its staff and volunteers; its belief in continuous learning, including through partnerships with other organisations; and the fair and flexible conditions it provides for staff and volunteers. RACS is committed to working together to achieve a more inclusive community. Our workplace strives to be one that embraces and celebrates diversity and the wide range of skills, expertise and experience we can all bring to strengthen our dynamic, collaborative and responsive environment. RACS encourages people from all different backgrounds to apply, including Aboriginal and Torres Strait Islander peoples, people from culturally and linguistically diverse (CALD) backgrounds, people who identify as LGBTIQ+ and people with disabilities. The Solicitor/Caseworker position is primarily focused on providing services to clients in immigration detention and in the community who are seeking asylum or have refugee status. The position is permanent full-time (35 hours per week). We are currently seeking two candidates with varying skill levels. Flexible working arrangements can also be considered. Remuneration for the position is $66,000 – $76,000 pa (commensurate with qualifications and experience), plus superannuation contribution and salary packaging. The position reports to a RACS Supervising Senior Solicitor. The Solicitor will carry out their duties in accordance with RACS policy and funding guidelines. The successful applicant will be working at RACS’ main Randwick office, but will be expected to travel to our outreach locations, to external meetings, detention centres and other locations as reasonably required. Essential and desirable criteria are set out in the full Position Description, which is attached below.
https://www.ethicaljobs.com.au/Members/RACS/solicitors--caseworkers-x2
- A national defense contractor needed enhanced ASHRAE Level I energy audits to maintain its sustainability credentials with the U.S. government and to enable continuous improvement in energy efficiency. - Company leadership selected us to perform energy audits for ten facilities in seven states. Ranging in vintage from the 1950s to the 2000s, the ten facilities included research and development sites as well as manufacturing sites. The latter consist of three to four buildings apiece, including office, assembly, and testing spaces. - Our assignment prerequisites included achieving applicable security clearances for our team and participating in mandatory local site safety training; we also had to adhere to limitations on allowable site and equipment photography. - The scope was enhanced to include more extensive analysis of capital investment opportunities that were identified during the energy audits. - We were tasked not only with delivering consistently high-quality reports, but also with establishing internal benchmarking metrics that we tracked throughout project execution. Result - Our team met or exceeded all project requirements, identifying both low-cost and capital-driven energy-saving opportunities at each of the ten facilities. - We completed financial analysis of each energy-saving opportunity, calculating simple payback, net present value (NPV), and internal rate of return (IRR); an illustrative sketch of these metrics follows this list. We also completed a separate utility rate analysis for each facility so that our calculations of the potential reduction in energy cost were based on the facility’s unique unit cost of energy. - Due to their comprehensive content, we characterized our final reports as more robust than an ASHRAE Level I energy audit but less extensive than an ASHRAE Level II energy audit. - Based on full implementation of the energy-saving measures we uncovered through our technical analysis, we projected a reduction in annual energy use ranging from 10% to 30% per facility, depending on the type and scale of equipment and lighting upgrades. Low-cost measures alone typically accounted for a reduction of 10% to 15%. - Aggregating all of our recommendations for the portfolio, we presented the contractor with a detailed plan for achieving nearly a 10% annual energy cost reduction.
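Simple payback, NPV, and IRR, as named above, are standard capital-budgeting calculations. The following is a minimal, illustrative sketch of how the three relate for a single energy-saving measure; the retrofit cost, annual savings, discount rate, and measure life are hypothetical numbers chosen for demonstration, not figures from the audits described here.

```python
# Illustrative only: hypothetical retrofit cost, savings, and discount rate.
def simple_payback(capital_cost, annual_savings):
    """Years required to recover the up-front cost from annual savings."""
    return capital_cost / annual_savings

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 (up-front) cash flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=-0.99, high=10.0, tol=1e-6):
    """Internal rate of return found by bisection (assumes a single sign change)."""
    for _ in range(200):
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid  # NPV still positive at this rate, so the IRR is higher
        else:
            high = mid
        if high - low < tol:
            break
    return (low + high) / 2

# Hypothetical measure: a $50,000 lighting retrofit saving $12,000/year over a 10-year life.
flows = [-50_000] + [12_000] * 10
print(f"Simple payback: {simple_payback(50_000, 12_000):.1f} years")
print(f"NPV at an 8% discount rate: ${npv(0.08, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```

A measure is typically attractive when its NPV is positive at the facility's discount rate, which is equivalent to its IRR exceeding that rate; simple payback ignores the time value of money but is the quickest screen for low-cost measures.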
http://siebenenergy.com/portfolio/item/multi-site-national-defense-contractor
PCB Antenna Vs External Antenna Here is a detailed comparison of PCB antennas and external antennas. When selecting antennas for your embedded systems, it’s critical to understand the differences between chip and PCB antennas. Your electronic device requires an antenna to connect via radio frequency (RF). In the electronics manufacturing sector, walkie-talkies, Bluetooth-enabled devices, and satellite communications are common examples of RF devices. Antennas are the most important component of RF devices and substantially impact their performance. The essential requirements for current RF applications are high performance, small size, and low cost. When comparing chip antennas to PCB antennas, a PCB trace antenna is worth considering when the goal is to lower overall device cost. Ceramic chip antennas, on the other hand, offer excellent performance relative to their compact size. This article examines the trade-offs between chip and PCB antennas. It covers the advantages and disadvantages of each and the design considerations you should keep in mind while selecting the proper antenna for your project. PCB Trace Antennas A PCB trace antenna is a trace drawn directly on the circuit board. The trace geometry is determined by the antenna type and the available space; options include inverted-F, straight, curved, meandering, and circular traces. In general, the trace is laminated onto the board surface during PCB manufacture, although traces can span multiple layers, especially in multi-layer PCBs. Pros Of PCB Trace Antennas Proponents of chip antennas argue that a trace antenna is difficult to install, build, and tune, especially where reliability and a small footprint are required. A trace antenna’s size, like a wire antenna’s, is determined by the frequency band it must cover. The advantages of a PCB trace antenna are listed below. Because the trace is integrated into the board manufacturing process, production costs are minimal. - When optimally tuned, a trace antenna can accommodate a wide bandwidth. - The PCB trace structure is simple, and its profile is low because the antenna sits on the board surface. - It provides good network reliability and signal strength. - It is simple to incorporate into your PCB during production. Cons Of PCB Trace Antennas The following are the drawbacks of a trace antenna: - It is difficult to design well, especially at low frequencies. - A trace antenna is particularly susceptible to PCB layout changes, necessitating retuning after each change or revision. - It requires a lot of board area, especially at low frequencies. - The need for more board space increases design costs. - It is vulnerable to both human and environmental influences. Antennas are essential components in wireless technology, ranging from small antennas incorporated in mobile devices to enormous antenna arrays used in cellular or satellite base stations. Although antennas come in various shapes and sizes, they almost all fall into one of two categories: internal or external. Internal antennas are located inside a device’s casing and are out of reach of the end user. Examples range from small chip or PCB-etched antennas incorporated onto the board to flexible printed circuit (FPC) antennas affixed to the interior of a product’s enclosure. 
External antennas, by contrast, attach to an RF connector on the exterior of a device’s casing. A rubber-duck antenna mounted on the outside of an internet router is a good example. This article aims to describe the main types of external antennas, as well as their essential performance factors and design advantages. Antennas: External Vs Internal Aside from the apparent differences in size and form factor, external antennas have several design advantages over internal antennas, starting with simplicity of integration. External antennas are essentially plug-and-play devices that connect to a transmitter via a specific connector. Internal antennas, on the other hand (like surface-mounted chips), require more design effort because antenna tuning and optimization are necessary. The performance of an internal antenna is impacted by the PCB ground plane, which acts as an antenna extension. The board area and components on the PCB have to be considered in this situation. An impedance matching network may need to be created before the antenna feed point to account for elements on the PCB that will detune the antenna. Signal loss produced by the product enclosure can also affect an internal antenna. The majority of external antennas, on the other hand, are “ground plane independent,” making them an excellent choice for customers looking for a solution that requires fewer design resources and less integration time, allowing for a faster time-to-market. In addition to being easier to integrate, external antennas have performance advantages over internal antennas. Overall, external antennas provide better range and sensitivity because of their larger size, so their rated gain (dBi) is generally higher than that of their internal equivalents. Because of this higher gain, external antennas also provide better directional behavior for applications where signal emissions must be concentrated in a certain direction. Another major consideration is that lower frequencies, with their longer wavelengths, necessitate a larger antenna (an illustrative wavelength calculation follows the FAQ at the end of this article). As a result, many high-gain external antennas can maintain a usable bandwidth in the lower sub-GHz region while still providing adequate performance. An internal antenna might not be as effective at supporting lower frequencies due to its smaller size. In the 400 MHz band, for example, it would be impossible to find an internal antenna that could compete with an external antenna in terms of performance. These inherent performance advantages (greater range, sensitivity, and ease of integration) and the fact that external antennas are located outside the enclosure, giving better signal line-of-sight, make them more appropriate for demanding applications. Customers should weigh cost when considering this option, as larger external antennas require more manufacturing steps and materials than a basic ceramic chip antenna. Types Of External Antennas Terminal-mount (whip) antennas are similar to the ones used on wireless access points. In a conventional design, the antenna element is covered in a rubber or plastic sheath with an exposed RF connector. Because these antennas are almost always ground plane independent, the only requirement for integration is a simple coupling to the transmitter. These antennas are designed to be oriented vertically to the ground due to their non-directional nature, as they tend to radiate widely in the horizontal (x-y) plane. 
This type of radiation pattern is ideal for wireless applications that need point-to-multipoint communication, for instance an office setting where a router must transmit and receive signals from many client devices such as computers, phones, or other end-node modules. Omnidirectional Antennas These puck-style antennas are designed to lie flat on a flat surface, such as a car’s ceiling or roof. Depending on the antenna model, they can be installed on a metal or non-metal surface. The form factor is a key distinguishing feature, as they often have a lower profile than terminal-mount antennas, making them perfect for clients searching for an alternative aesthetic. Many puck-style antennas are built to support integrated low-noise amplifiers (LNAs), which can drastically improve signal reception, especially for weak incoming GNSS signals. Unlike whip-style antennas, puck-style antennas are frequently designed to be oriented horizontally toward the ground or sky, as they have greater 360-degree vertical coverage. An example is a ceiling-mounted Wi-Fi antenna covering a single office level. Another benefit of the puck-style antenna is that many variants can handle several wireless protocols. This is ideal for any base station that needs to combine the antennas required for GNSS, cellular, and Wi-Fi into a single package. A combination antenna effectively has three separate antenna elements housed in a single enclosure, each with its own connection and connector for its protocol. Directional Antennas Directional antennas are aimed at applications that require long-range Point-to-Point or Point-to-Multipoint communication. Because of these antennas’ highly focused emission patterns, their datasheets will always show a high rated gain (typically above 9 dBi). These antennas are perfect for any demanding long-range application. Thanks to their high peak gain in a single direction, they can focus on an end-node device or a collection of devices in a specific area. An outdoor-rated Yagi or panel antenna on each side of a pair of office buildings sharing the same wireless network, for example, would form a Point-to-Point communication link with both antennas pointing toward one another. Conclusion I hope this clarifies the difference between PCB antennas and external antennas. The increasing demand for a wide range of antenna solutions has emerged from the ongoing expansion of IoT applications. Whether an engineer prefers an internal solution for low cost, large volume, and small size, or an external option for ease of design and guaranteed performance, the antenna will always be the most important interface in your wireless system. To achieve the best performance, the antenna should be finalized early in the design phase of a project. If you decide on your product’s specifications (PCB design, size, and enclosure) before considering the antenna, you will limit your flexibility to change the design if the chosen antenna does not fit or is incompatible. Being familiar with the many antenna types, their unique advantages, and their performance criteria (gain, bandwidth, VSWR, radiation characteristics) will help narrow down the numerous antenna designs that exist today. Frequently Asked Questions What is the quality of PCB antennas? A PCB trace antenna has a wide operating bandwidth (if optimally tuned). It has high network dependability and strength (if optimally tuned). PCB trace antennas have a slim, two-dimensional profile. 
What is a printed circuit board antenna? A PCB antenna is a transducer that converts current waves into electromagnetic (EM) waves: it converts high-frequency electricity into electromagnetic waves that travel through the air. On a high-frequency PCB, these antennas are etched copper structures built into the board itself. Which antenna is made out of PCB? Patch antennas, also known as microstrip antennas, utilize high-frequency laminate materials and ordinary printed-circuit-board (PCB) fabrication processes. Why do GPS antennas include ceramic elements? The larger the ceramic patch, which acts as a gateway for RF signals, the more bands the antenna can operate on efficiently. A surface-mountable antenna of this type can work successfully across these broad frequency bands.
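As noted in the comparison of external and internal antennas above, lower frequencies with longer wavelengths require physically larger antennas. The snippet below is a minimal, illustrative sketch of that relationship using the free-space formula λ = c/f and a rough quarter-wave sizing rule of thumb; the chosen frequencies are examples only, and real antennas are frequently shortened further by dielectric loading, meandering, and similar miniaturization techniques.

```python
# Illustrative only: example frequencies and a rough quarter-wave rule of thumb.
C = 299_792_458  # speed of light in free space, m/s

def wavelength_mm(freq_hz):
    """Free-space wavelength (lambda = c / f) in millimetres."""
    return C / freq_hz * 1000

for label, freq in [("400 MHz", 400e6), ("2.4 GHz", 2.4e9), ("5.8 GHz", 5.8e9)]:
    lam = wavelength_mm(freq)
    # A quarter-wave monopole is a common first approximation of element length.
    print(f"{label}: wavelength ~ {lam:.0f} mm, quarter-wave element ~ {lam / 4:.0f} mm")
```

The quarter-wave element at 400 MHz comes out to roughly 19 cm versus about 3 cm at 2.4 GHz, which is why sub-GHz designs so often rely on larger external antennas rather than internal chip or trace antennas.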
https://www.factsmaniya.com/pcb-antenna-vs-external-antenna/
An annual event in Kadriorg Park in Tallinn is organised to celebrate the beginning of autumn. The park turns into a magical place. Thousands of candles, torches, and lanterns are lit after sunset. Every part of the park is specially decorated with light. The Swan Pond, the Flower Garden, the Concert Square, the area in front of the palace: everywhere you go, you'll find something mystical and special. Traditional music concerts by local folk, pop, jazz, and other bands are organised on different stages. At the end of the concerts you'll be able to see a large fire show. Entry is free of charge and everyone is welcome to enjoy the magical evening.
https://rove.me/to/estonia/light-walks-in-kadriorg
The last date for submission of applications is January 30th, 2021. Post and Vacancies: Electrical Engineer – 01 Post Junior Assistant – 01 Post Job Location – Medchal Education Qualification: NALSAR University of Law, Hyderabad, a premier institution of eminence, invites applications for the following contractual positions. Application Fee: An application fee of Rs. 500/- shall be paid by way of demand draft drawn in favour of ‘Registrar, NALSAR University of Law’ payable at Hyderabad and should be attached along with the application form. The application fee is not refundable. General Information 1. The prescribed qualifications and experience are the minimum, and the mere fact that a candidate possesses them will not entitle him/her to be called for interview. The University reserves the right to restrict the candidates to be called for interview to a reasonable number on the basis of qualifications and experience higher than the minimum prescribed, or by any other condition that it may deem fit. Those possessing higher qualifications will be given preference in shortlisting. The University may constitute a Screening Committee to scrutinize the applications and short-list the candidates. Call letters for test / interview will be sent only to the short-listed candidates and no correspondence will be made with applicants who are not short-listed. 2. It would be open to the University to consider the names of suitable persons who may not have applied but who are recommended by experts in their respective fields. 3. The University will have the right to relax any of the qualifications, experience etc. 4. Canvassing in any form on behalf of any candidate will disqualify such a candidate. 5. The Selection Committee may decide its own method of evaluating the performance of the candidates in interview. The University may utilize a written test / skill test or seminar / colloquium / mock class as the method of selection. 6. Incomplete applications in any respect shall not be considered at all. 7. No interim queries regarding test / interview / selection will be entertained. 8. The University will not be responsible for any postal delay at any stage. 9. In case of any disputes / suits or legal proceedings against the University, the jurisdiction shall be restricted to the Courts in Hyderabad, which is the Headquarters of the University. Address Registrar, NALSAR University of Law, Post Box No. 1, Justice City, Shameerpet, Medchal Dist. 500 101, Telangana Selection Procedure: Selection will be based on a written exam and/or interview. How to Apply: Candidates should apply using the prescribed application form, which can be downloaded from the University website www.nalsar.ac.in, along with a detailed bio-data, and send it to the ‘Registrar, NALSAR University of Law, Post Box No. 1, Justice City, Shameerpet, Medchal Dist. 500 101, Telangana’ latest by 30-01-2021.
http://www.govtjobsmela.com/2021/01/nalsar-2021-jobs-recruitment.html
St Andrews Scottish Songbook St Andrews Voices and the University of St Andrews’ Laidlaw Music Centre invited composers born in, or living/studying in, Scotland to send proposals for contributions to the St Andrews Scottish Songbook in December 2020. Background St Andrews Voices, Scotland’s Festival of the Voice, intends to publish a new Scottish Song Book at its 2022 edition to mark VisitScotland’s Year of Scottish Stories. The songbook will consist of 24 settings for voice and piano of existing (text + melody) Scottish traditional songs. The project seeks to - Provide a new body of art song repertoire for voice and piano based on traditional Scottish songs. - Honour the legacy of former St Andrews Professor of Music Cedric Thorpe Davie and others such as George McPhee, George McVicar and Frank Spedding in bringing traditional Scottish songs to a wider public through their celebrated collections. - Present a collection of songs representing a broad survey of traditions, dialects and geographical areas, including the Gaelic tradition, in new, creative arrangements. - Provide an opportunity for composers of all ages to creatively engage with the cultural legacy of traditional Scottish songs and to have their work published, performed and recorded. - Provide selected young/student composers with mentoring during the composition process. - Promote and revitalise the public performance of art song in Scotland. The Collection The St Andrews Scottish Songbook will consist of 24 songs. The first six of these have been commissioned from a group of Scottish and/or Scottish-based composers: - Sir James MacMillan (world-renowned composer) - Ailie Robertson (traditional harpist, Composer in Residence at Glyndebourne, commissions from the BBC Proms, London Philharmonic Orchestra) - Tom David Wilson (Royal Conservatoire of Scotland, St Mary’s Music School) - Mary Ann Kennedy (celebrated performer, composer, broadcaster and champion of Gaelic song) - Alasdair Nicolson (Artistic Director, St Magnus International Festival, compositions performed by leading ensembles throughout the world) - Eddie McGuire (versatile and celebrated Scottish composer whose work encompasses classical and folk traditions) The composers of the remaining 18 songs will be chosen from the December 2020 call for proposals. What are the key dates?
https://www.standrewsvoices.com/songbook
In The North Face Apparel Corp. v. Sanyang Industry Co., Ltd., Opp. No. 91187593 (September 18, 2015), the Trademark Trial and Appeal Board (“TTAB”) handed The North Face Apparel Corp. (“The North Face”) significant victories in its battle against Sanyang Industry Co. Ltd.’s (“Sanyang”) registration of its trademark. Sanyang is a Taiwanese company that manufactures and sells a variety of motor vehicles. In the United States, its primary product line is scooters, which are generally sold through motorsports dealerships. In 2008, Sanyang filed two use-based US trademark applications for its logo. The two applications claimed a very large number of goods and services in multiple classes: Class 7 (machinery), Class 11 (environmental control apparatus), Class 12 (vehicles), Class 16 (paper goods), Class 25 (clothing), Class 35 (advertising and business), and Class 37 (building construction and repair). The North Face is an American outdoor product company that specializes in outdoor and athletic clothing, and equipment such as backpacks, tents, and sleeping bags. Its SUMMIT SERIES line employs the registered S-Design and SUMMIT SERIES S-Design trademarks. The North Face opposed registration of Sanyang’s applications, based on priority and likelihood of confusion. MOTION TO AMEND APPLICATION As a preliminary matter, the TTAB addressed Sanyang’s Motion to Amend its Answer, which itself attempted to relitigate an earlier-filed Unconsented Motion to Amend the opposed trademark applications. A settled trademark rule of practice prohibits trademark applicants of contested applications from amending the identification of goods in the applications without the consent of the opposing party. 37 CFR § 2.133. In this case, Sanyang filed an Unconsented Motion to Amend to delete certain goods and services in Classes 16 and 35, and all the goods in Class 25. That motion was denied because Sanyang had not complied with the requirements for an unconsented amendment, particularly the entry of judgment against it with respect to the goods proposed to be deleted. The opposition proceeded to trial, with nothing more being said about the proposed amendment. Then, one week before the close of The North Face’s rebuttal testimony period, Sanyang filed the Motion to Amend the Answer. Although the TTAB found a number of problems with the motion, the principal problem was that Sanyang did not list the goods sought to be deleted. Normally, likelihood of confusion is found as to the entire class if there is likelihood of confusion with respect to any item listed in that class. By moving to delete goods without specifying which ones it sought to delete, Sanyang effectively imposed on The North Face the obligation to prove likelihood of confusion with respect to each of the numerous goods listed in the applications, rather than with respect to one or more (but not necessarily all) goods or services in each class. The TTAB noted that it did not relish the idea of engaging in that exercise either. Therefore, the TTAB denied Sanyang’s Motion to Amend. LIKELIHOOD OF CONFUSION After dismissing the Motion to Amend and disposing of other procedural and evidentiary disputes, the TTAB proceeded to the heart of the matter. The North Face’s priority in its marks was not disputed, so the TTAB simply evaluated the marks for confusing similarity by analyzing the facts based on the factors set forth in In re E.I. du Pont de Nemours & Co., 476 F.2d 1357 (CCPA 1973). First, the TTAB discussed the du Pont factor of fame. 
The TTAB noted that The North Face’s evidence of fame was largely based on its sales and advertising figures, which were in the millions of dollars. However, since The North Face failed to provide any context for its figures, the TTAB could not measure those sales against other brands to draw any comparisons. Further, the sales data did not distinguish between the sale and advertising of goods branded with The North Face’s S-Design mark (either with or without the SUMMIT SERIES portion of the mark) and its other goods. Finally, the sales data did not indicate how the sales are broken down by goods (i.e., how much for S-Design apparel versus, say, S-Design tents). Thus, The North Face’s sales and advertising figures, although impressive as raw numbers, were insufficient standing alone to establish fame. As additional evidence of fame, The North Face introduced five photographs of five celebrities engaged in various activities while wearing apparel branded with the S-Design. However, the existence of the S-Design mark was not highlighted in the photos in any way. The TTAB also noted that the evidence lacked information as to where or when these photographs appeared, or what exposure they may have received. Therefore, this submission also fell short of establishing the fame of The North Face’s S-Design mark. Thereafter, the TTAB simply compared the parties’ marks for similarity in appearance and commercial impression, and similarity of the parties’ goods and services claimed in the opposed applications and cited registrations. Concerning the appearance and overall commercial impression of the marks, the TTAB acknowledged that The North Face more frequently used the SUMMIT SERIES S-Design mark than the S-Design standing alone. Inasmuch as the S-Design conveyed its own commercial impression, that impression was influenced by The North Face’s extensive advertising efforts and promotion of the design in conjunction with the words. Overall, however, the TTAB found that the similarities in the parties’ marks outweighed the differences. The TTAB then compared the parties’ goods. With respect to clothing in Class 25, the TTAB held that the items listed by Sanyang were legally equivalent to those listed in The North Face’s registrations. The opposition to registration of Sanyang’s marks with respect to clothing in Class 25 was sustained. With respect to Class 35, Sanyang’s application identified many distinct services, including “retail stores featuring clothing, textiles, and clothing accessories” in a long list that also included core services such as “online retail store services featuring automobiles, motorcycles and their accessories” and “retail motorcycle parts and accessories stores.” Since operating a retail clothing store was related to The North Face’s Class 25 goods (clothing), and since a finding of a likelihood of confusion with respect to those services was sufficient to sustain the opposition as to the entire class, the opposition was sustained with respect to all services listed in Class 35. For the other classes listed in Sanyang’s applications, the similarity with The North Face’s goods was found to be less clear. The North Face was unable to successfully assert that such goods as “vehicle reflectors” or “brake linings for machines” were related to any of its goods. 
The TTAB reasoned, "[M]erely because a part can be used on a bicycle, and an item of clothing or a backpack can be worn when riding a bicycle, does not make a vehicle reflector or a vehicle headlight and a backpack or outdoor clothing complementary goods." Therefore, the oppositions to Sanyang's applications in Classes 11, 12, 16 and 37 were dismissed.

This case reinforces a number of key points of TTAB practice. First, fame is difficult to prove. Although The North Face is a very well-known brand that sells millions of dollars of product a year, it ultimately failed to prove fame because it did not provide specific enough information concerning the reach of the particular marks in question.

Second, as we see in this case, the TTAB does not typically grant unconsented motions to amend the identification of goods in an application after the opposition has been initiated. This leads to the next crucial point: the more goods and services an applicant includes in any particular class in its application, the greater the likelihood that a challenge will "knock out" the entire class. Trademark applicants should exercise caution when drafting the identification of goods to make sure they capture the core goods and services as broadly as necessary without "over-claiming." In this case, Sanyang's imprudent inclusion of a claim of "retail stores featuring clothing, textiles, and clothing accessories" in the identification killed the application with respect to its core services of "online retail store services featuring automobiles, motorcycles and their accessories" and "retail motorcycle parts and accessories stores."

Experienced trademark practitioners will recognize Sanyang's applications as quite typical of non-US registrations, where use in commerce is not necessary and applicants can claim anything and everything in a chosen class for no additional cost and little risk of challenge. Presumably, Sanyang used its non-US registrations as a model for its US applications. That decision led to the loss of the entire application in Class 35.

___________

The North Face did not include a separate "fame" count in its Notice of Opposition. Therefore, the TTAB analyzed the possibility that The North Face's marks were famous only within the context of the du Pont analysis.
https://www.marksworksandsecrets.com/2015/09/north-face-scales-sanyang-applications-for-clothing-and-services/
As Duke continues to work toward becoming a more equitable institution, many faculty are working to reimagine aspects of their professional experiences and interactions with others. For some, this includes examining their school/departmental climates and revisiting the norms and behaviors that impact their students, staff and faculty colleagues. This short course will guide faculty through sessions to explore their current school/departmental climates and empower them with practical steps to increase equity in their local environments. Topics will include: • Examining the norms and values that currently exist in schools and departments, and identifying how they influence departmental culture and practice • Considering the differing lived and professional experiences of faculty, staff and students in academic units • Building team and collaborative relationships among colleagues • Developing practical strategies to approach climate challenges across groups • Exploring how departmental policies and structures impact retention and professional success • Identifying methods to assess and evaluate the effectiveness of departmental practices The short course will run for four consecutive days from 9:00 a.m. - 12:00 p.m. each day. Participants who complete all four sessions will receive a certificate of completion at the conclusion of the course.
https://facultyadvancement.duke.edu/departmental-climate-curriculum
During the week of March 31 to April 8, 2018, my family and I visited Toronto, Ontario. It was simply a homecoming for all of us. The primary reason for our visit was to celebrate my aunt's 80th birthday and to gather with family and family friends. The majority of the time was spent in Markham and Richmond Hill; both are suburban cities that serve as centers of the Chinese community in the Toronto area. Once again, we enjoyed the delicious Chinese food and noted changes since 2013 (our last visit).

For the remaining days that we spent in Toronto, we tried various types of Chinese cuisine. On my aunt's actual birthday, together with our extended family, we went to a Chinese restaurant and tried their set meal, consisting of winter melon soup, mustard steak, Peking duck with thin pancake slices, lettuce and crispy rice wraps with a mixed duck meat and nut filling, soy sauce chicken, stir-fried vegetables with bean curd and goji berries, and eggplant with meat claypot. The food came elaborately decorated, especially the winter melon soup. The taste was mediocre and the food was saltier than expected (especially the eggplant with meat claypot). We also felt that we were not given the entire Peking duck after they finished cutting the crispy skin from it, judging by the size of the dish that we were given. The best dishes were the winter melon soup and the mustard steak, which stood out for its flavor. Overall, the food was average at best despite the ornate atmosphere of the restaurant.

We tried two places: Congee Queen and Congee Wong (which translates to Congee King). This is a restaurant franchise that has multiple locations throughout the Toronto suburbs. It is a great place to go for every meal of the day, if you fancy something simple. My family and I have eaten at this restaurant a couple of times, but at different locations. This time we ordered fried Singapore-style rice vermicelli, wonton noodle soup with shrimp dumplings, rice noodle breadsticks, roast pork rice, Chinese donuts, house special congee, and stir-fried snow pea leaves.

We visited this restaurant with our family from Canada to celebrate my father's and my other aunt's birthdays on March 31. We tried the set meal for eight people, consisting of steamed fish, roasted duck, shark fin soup, garlic crab, stir-fried vegetables with eggplant and mushrooms, and fried yi mein (or e-fu noodles). Overall, it was not too greasy or salty. The fish and duck were very flavorful and tender.

After a tough semester, I spent a week (August 3-10, 2013) in Toronto, the city of my birth, to visit relatives and family friends, go sightseeing, and investigate various suburban areas (i.e., Markham and Richmond Hill) to assess the market for pharmacy. In this account, I will highlight some of the important must-see sights of Toronto and its exquisite cuisine.

We left San Francisco and arrived in Toronto on time. The plan was to meet my mother and brother, who would arrive at the airport from Montreal at 8 PM EST, at Payless car rentals. Unfortunately, we had trouble finding the rental place, so we waited for the rest of our family before going together. After asking the airport staff, we learned that the car rental place was off-site and one had to call for a shuttle to get there. Not as convenient as we would have liked. We rented a Hyundai Elantra that was fairly easy to drive. Since we had not returned to Toronto in years, my father had trouble getting onto the highway toward Markham/Unionville.
By the time we arrived at the Hilton Suite (formerly Markham Suite Hotel), it was already 10:30 PM. We then headed to Richmond Hill Court restaurant for a quick meal of wonton noodle soup, beef congee, ying yang fried rice, gai lan, and satay vermicelli noodles. We finally went to bed at 2 AM.
https://jinmadeluxing.com/category/travel/canada/
Custodians have a duty to protect the privacy of patients and the confidentiality of health information in their custody or control, as outlined in section 60 of the Health Information Act (HIA). The risks of communicating with patients electronically must be considered. Responsibility for safeguarding health information cannot be transferred to a patient by having the patient sign a consent form or disclaimer accepting the risks associated with electronic communications.

Electronic communications with patients can improve efficiency by:
- Sending appointment reminders
- Setting up specialist appointments
- Notifying patients about a new service offering
- Following up with patients on a treatment plan

Risks of Electronic Communications

Electronic communications are susceptible to certain risks, such as:
- Interception: If accounts or devices are shared or accessible by multiple people, the wrong recipient may read the message.
- Misdirection: Patients may have similar names or account addresses, and a message may be sent to the wrong patient.
- Alteration: Test results can be sent to a patient who may alter the document and send the changed results to another health care provider, where the altered document will appear to be trusted health information.
- Loss: If a service provider manages cloud storage of emails or other electronic records, access to health information may be lost during an outage or security breach, or if the service provider goes out of business or is taken over by another entity. Additionally, certain security incidents may result in the loss of health information entirely.
- Inference: The name and nature of a health service provider on its own may reveal health information about an individual if other individuals, such as friends or family members, have access to or can see notifications on a patient's device.

Mitigating Risks

HIA requires that custodians take reasonable steps to maintain safeguards to protect the confidentiality of health information and to protect against any threats to the security of health information, the loss of health information, and any potential breaches of health information (e.g. unauthorized use, disclosure, modification or access). In light of the various risks associated with communicating with patients electronically, consideration must be given to protecting health information, including:
- Managing electronic records: There are additional challenges for the secure storage and maintenance of electronic communications.
- Identification: Electronic communications raise questions about how a patient can verify and trust that the sender is a clinic or custodian.
- Device management: Electronic communication is often done using mobile devices. Safeguards must be considered around how a device is stored, who owns the device, whether health information is stored in a cloud or on the device itself, and appropriate uses of devices outside of a clinic or office environment.
- Encryption: Diagnostic, treatment and care information should be encrypted. The message itself, attachments, or a combination of these may require encryption. If mobile devices are used to store health information, those devices must be encrypted. Consider programs or technical advice to help in setting up processes and procedures for encrypting electronic communications and devices (an illustrative sketch appears at the end of this section).
- Limiting the amount of health information: When sending or receiving health information that does not include clinical details, limit the amount of health information sent electronically; limit the amount of health information collected using web forms or electronic templates; and tell patients exactly what will and will not be communicated electronically, in addition to how messages containing clinical information will or will not be accepted.
- Policies: There are certain policies and procedures that should be considered, such as policies that address:
  - Communicating with patients electronically and acceptable uses of mobile devices
  - Training staff on secure electronic communication (e.g. training on encryption methods)
  - Determining how to manage records sent by patients (e.g. if a patient sends unsolicited health information via email, how will it be managed?)
  - Regularly confirming patients' preferred methods of communication and contact information (e.g. ensuring email addresses are up to date and that a patient prefers to receive certain updates via email)
  - Notifying patients of risks when communicating electronically (e.g. whether another individual has access to certain accounts or electronic devices)

Policy and PIA Requirements

HIA requires that a custodian establish or adopt policies and procedures to facilitate implementation of the Act (section 63). It also requires a custodian to submit a privacy impact assessment (PIA) to the Office of the Information and Privacy Commissioner before implementing a new practice or information system, or when making changes to an existing practice or system, that collects, uses or discloses individually identifying health information (section 64). If a custodian is considering electronic communication tools to correspond with patients, they must have appropriate risk mitigation strategies and policies in place. A PIA helps to manage the privacy risks of communicating with patients electronically, and should be completed before such tools are implemented.
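The encryption safeguard noted above is normally handled by vetted tools rather than custom code, but a short sketch can make the idea concrete. The following is a minimal illustration, assuming Python and the open-source cryptography package; the file names and the key handling shown are hypothetical, and a real deployment would rely on a managed key store and a separate trusted channel for sharing keys with recipients.

```python
# Minimal sketch of encrypting an attachment before electronic transmission.
# Assumptions: Python 3, the "cryptography" package, and hypothetical file names.
# Illustrative only; key generation, storage, and exchange would be handled by a
# vetted key-management process in a real clinic environment.
from cryptography.fernet import Fernet

# In practice the key is created once, kept in a secure key store, and shared
# with the intended recipient over a separate trusted channel.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt the report before it leaves the clinic's systems.
with open("lab_report.pdf", "rb") as fh:
    plaintext = fh.read()
ciphertext = cipher.encrypt(plaintext)

with open("lab_report.pdf.enc", "wb") as fh:
    fh.write(ciphertext)

# Only someone holding the same key can recover the original document.
recovered = Fernet(key).decrypt(ciphertext)
assert recovered == plaintext
```

Even with message-level encryption like this, the other safeguards described above, such as device management, limiting the amount of health information sent, and supporting policies, still apply.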
https://oipc.ab.ca/resource/electronic-patient-communication/
By Susan Feldman on February 27, 2018, 15:33:41

Essential for pre-study work, the wiring diagram allows the cable list to be produced. While the wiring diagram is useful during the pre-study phase, it is also an important tool for designing the installation's functional scheme. Starting from the wiring diagram, the user can begin the installation as a pre-study and then carry out the detailed study while the installation is being cabled, reducing overall production time.

In the United Kingdom, wiring installations are regulated by the Institution of Engineering and Technology Requirements for Electrical Installations: IEE Wiring Regulations, BS 7671: 2008, which are harmonised with IEC 60364. The 17th edition (issued in January 2008) includes new sections for microgeneration and solar photovoltaic systems. The first edition was published in 1882.

Use wiring diagrams to assist in building or manufacturing a circuit or electronic device. They are also useful for making repairs. DIY enthusiasts use wiring diagrams, but they are also common in home building and auto repair.

The wiring diagram lets you define the different installation components and locations (cabinets, control panels, consoles, connection boxes, etc.) and gives you a synoptic, functional view of the installation. The terminal strips and cabling connections between locations can be defined at the beginning, during, or at the end of the project design. The length of a pathway can also be defined by the user and thus serves as a default value for all cables within that pathway.
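To make the last paragraph concrete, here is a minimal sketch, assuming Python, of how locations, pathways with a user-defined default length, and a cable list derived from the wiring diagram might be modelled. All names and values are hypothetical and not tied to any particular ECAD tool.

```python
# Illustrative sketch only: locations, pathways with a default length, and a
# cable list derived from a wiring diagram. Names and values are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Pathway:
    name: str
    default_length_m: float  # default length applied to cables routed through it

@dataclass
class Cable:
    ident: str
    source: str          # source location, e.g. a cabinet
    target: str          # target location, e.g. a control panel
    pathway: Pathway
    length_m: Optional[float] = None  # if None, fall back to the pathway default

    def effective_length(self) -> float:
        return self.length_m if self.length_m is not None else self.pathway.default_length_m

def cable_list(cables: List[Cable]) -> List[str]:
    """Produce a simple cable list, one row per cable, as used during pre-study."""
    return [
        f"{c.ident}: {c.source} -> {c.target}, via {c.pathway.name}, {c.effective_length():.1f} m"
        for c in cables
    ]

if __name__ == "__main__":
    tray = Pathway("Cable tray T1", default_length_m=12.5)
    cables = [
        Cable("W001", "Cabinet A1", "Control panel CP2", tray),
        Cable("W002", "Cabinet A1", "Connection box JB3", tray, length_m=8.0),
    ]
    for row in cable_list(cables):
        print(row)
```

The design choice illustrated here is simply that a pathway's length acts as a default that individual cables can override, which mirrors the behaviour described in the paragraph above.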
https://jennylares.com/air-handler-wiring-diagram/trane-wiring-diagrams-on-air-handler-diagram-and-jpg-with-for/