Trailblazing feminist author, critic and activist bell hooks has died at 69

The prolific and trailblazing author, poet, feminist, cultural critic and professor bell hooks died Wednesday at age 69. Her death was first announced by her niece, Ebony Motley, who said that she had died at home surrounded by family and friends. No cause of death was reported, but Berea College in Kentucky, where hooks had taught since 2004, said in a news release that she had died after an extended illness.

Preferring to spell her name with no capital letters as a way of de-emphasizing her individual identity, bell hooks was born Gloria Jean Watkins as the fourth of seven children in Hopkinsville, Ky., on Sept. 25, 1952. Her pen name was a tribute to her maternal great-grandmother, Bell Blair Hooks. She attended segregated schools in her native Christian County, Ky., before earning her undergraduate degree at Stanford University in California, a master's degree in English at the University of Wisconsin and a doctorate in literature at the University of California, Santa Cruz. She taught at Stanford University, Yale University, Oberlin College in Ohio and the City College of New York before returning to Kentucky to teach at Berea College, which now houses the bell hooks center.

The author of more than three dozen wide-ranging books, hooks published her first title, the poetry collection And There We Wept, in 1978. Her influential book Ain't I a Woman: Black Women and Feminism followed in 1981. Three years later, her Feminist Theory: From Margin to Center explored and criticized the feminist movement's propensity to center and privilege white women's experiences. Frequently, hooks' work addressed the deep intersections of race, gender, class, sexuality and geographic place. She wrote about her native Appalachia and growing up there as a Black girl in the critical-essay collection Belonging: A Culture of Place and in the poetry collection Appalachian Elegy: Poetry and Place.
In a 2000 interview with All Things Considered, hooks spoke about the life-changing power of love — that is, the act of loving and how love is far broader than romantic sentiment. "I'm talking about a love that is transformative, that challenges us in both our private and our civic lives," she said. "I'm so moved often when I think of the civil rights movement, because I see it as a great movement for social justice that was rooted in love and that politicized the notion of love, that said: Real love will change you."

She went on: "Everywhere I go, people want to feel more connected. They want to feel more connected to their neighbors. They want to feel more connected to the world. And when we learn that through love we can have that connection, we can see the stranger as ourselves. And I think that it would be absolutely fantastic to have that sense of 'Let's return to kind of a utopian focus on love, not unlike the sort of hippie focus on love.' Because I always say to people, you know, the '60s' focus on love had its stupid sentimental dimensions, but then it had these life-transforming dimensions. When I think of the love of justice that led three young people, two Jews and one African American Christian, to go to the South and fight for justice and give their lives — Goodman, Chaney and Schwerner — I think that's a quality of love that's awesome. ... I tell this to young people, you know, that we can love in a deep and profound way that transforms the political world in which we live in."

Additional reporting contributed by Steve Smith. Copyright 2022 NPR. To see more, visit https://www.npr.org.
What is digital exclusion in health and care?

Digital exclusion refers to the lack of access, skills and capabilities needed to engage with devices or digital services that help people participate in society. In health care, additional factors that are not relevant to other online interactions can contribute to digital exclusion; for example, privacy may be required for online health services. Digital exclusion can be a barrier when digital tools are the preferred or only way of accessing public services. As more services are delivered online through websites, apps, email and SMS, and online becomes the preferred means of contact, digitally excluded people are in danger of being left behind. Digital inclusion is the approach for overcoming exclusion by addressing the barriers to opportunity, access, knowledge and skills for using technology.

There are commonly held assumptions about who is and isn't able to use and benefit from digitally enabled health and care services. Groups commonly considered digitally excluded, or who experience poorer care through lack of digital services, include older people, people with disabilities, ethnic minorities, people who are homeless, sex workers, people from Gypsy, Roma and Traveller communities, people living in rural areas, people from low socio-economic backgrounds and those with low digital or literacy skills. Understanding who is excluded is complex and nuanced, which means exclusion can't be assumed from demographic groups alone.

Digital exclusion and its inverse, digital inclusion, are both dynamic and evolving. There are many reasons for this, including, but not limited to, changing technologies, changes in individuals' capabilities and changing public expectations. It is essential that digital services are designed and developed with this constant state of change in mind. In reality, it is possible for anyone to be digitally excluded at some point as their health or personal circumstances change.
What can be done to improve digital inclusion?

Why improve digital inclusion?

Improving digital inclusion means that everyone is equally able to engage with all public services, including health and care services. Well-designed and inclusive digitally enabled services lead to improvements in convenience for staff and users, communication between staff and users, health outcomes, and quality and experience of care. If services are not well designed or people are excluded, they are left feeling frustrated, angry and powerless to get the care they seek, as evidenced by the experiences of people we spoke to.

If policy and funding prioritise digital-first services without addressing the barriers to digital inclusion, it is highly likely to result in increasing inequalities by excluding people who are unable to benefit from digital services. This is because they will have reduced access to digital health services, resources and information, and there may be no alternative routes. Digitally enabled services and physical services need to be able to work interchangeably to provide the same quality of care, experience and outcomes.

In line with the complexities of digital exclusion, there is no silver bullet for reducing digital exclusion in health and care. Those designing services need to recognise the diversity of people's digital capabilities and access, and adjust services accordingly. As part of The King's Fund's work to understand and help leaders develop digitally inclusive health and care services, we spoke to service providers about the approaches they are taking to improve digital inclusion. Below, we outline some of the key learning from successful approaches. These approaches are taken from two workshops we ran with people from health, social care, local authorities, and voluntary, community and social enterprise (VCSE) organisations, and are also informed by our conversations with members of the public.
What we heard from workshop participants can be broadly grouped under the following three headings:
- fixing the fundamentals
- structuring services around people's needs and preferences
- improving the quality and consistency of services.

Fixing the fundamentals

In England, 27 per cent of people (14 million) have the lowest digital capability. This means they don't have regular access to a device, or the skills and confidence to turn on a device, use an app, log in and/or enter information on a digital device by themselves. Furthermore, about 30 per cent of people who are offline (ie, no online access or use) find the NHS to be one of the most difficult organisations to interact with. This matches what we heard from the people we spoke with. People told us time and again that access to devices and the associated costs were limiting factors for accessing services digitally, alongside familiarity and confidence. We heard how organisations are tackling these fundamental barriers through a combination of donating devices, providing data, and creating community assets to help build confidence and skills.

There are different approaches to making devices available to people without access. One approach is to seek funding, sponsorship or industry partnerships to source devices and give them to people. A second approach is to purchase devices and loan them to people for a fixed term, after which they can be purchased or returned. Device ownership and loaning devices both have advantages. We heard how device ownership can support individuals to feel responsible and empowered. Those using loan schemes, typically 'try-before-you-buy' schemes, said they help to reduce the barriers to using technology. For example, we heard how in Stoke-on-Trent, the adult social care team has been offering older people with the least financial resource an Echo Show (a smart speaker with a screen) to introduce them to a fairly simple digital technology tool.
The team also offers structured support, which includes set-up and some face-to-face training on how to use the technology. We heard from organisations and teams providing devices that they had felt inclined to restrict how the devices could be used, perhaps limiting them to health purposes only. However, this was found to be counter-productive. Restricting device functionality typically makes a device less useful and potentially less valued by individuals, and so increases the likelihood of it being damaged, lost or not used. It's also important to consider that non-health care activities, such as streaming music and using social networks, improve digital skills and digital confidence.

For example, we heard from a VCSE organisation, Simon Community Scotland, which in 2022 launched 'Get Connected', a digital-inclusion programme aimed at giving people experiencing homelessness access to the digital world. The programme provides homeless people with a digital device (smartphone or tablet), 12 months of unlimited internet data and support from a trained digital champion, based around a learning framework of digital skills. This helped people to use devices for many purposes, including health care, music, messaging and personal calls. Many organisations providing devices agree that it is important to ensure there is some device management to minimise the risk of cybersecurity attacks or scams.

As with devices, there are a number of approaches that can provide data, in the form of SIM cards, to people in need, through funding, sponsorship or industry and charity partnerships. We heard about some data-donation initiatives in the community. For example, some charities, such as the National Data Bank from The Good Things Foundation and SimPal, provide people in need with data.
The challenge with providing data through SIM cards is that it can feel like a cliff edge when the data runs out, particularly for people who have limited options to obtain more data and whose devices have become integral to their lives. One option is changing to another donated SIM. However, while changing SIM cards can be a fairly easy process to administer, it still requires a level of digital skill and confidence that people may not have. Changing SIMs has a number of disadvantages for the user, such as transferring existing contact details onto the new SIM, which requires more extensive digital skills. Data provision is essential to overcome this barrier to digital services. When paired with providing people with knowledge of public internet access points, for example in libraries, it can help to reduce mobile data use so it can go further. Social tariffs can also make home broadband a more affordable option, again reducing mobile data use.

Building digital skills and confidence

Even with a device and connectivity, people can still be digitally excluded if they don't have the necessary confidence or skills. We heard about several different approaches to support people, provide education and build confidence. VCSE organisations are working in the community to provide digital skills training and support to the public. They typically work within a specific community to help build confidence in tasks such as turning on devices, using email and common apps, listening to music, watching videos and using the internet.

'Tech to Community Connect', a programme developed by the Surrey Coalition of Disabled People, is a collaborative digital-inclusion project with two target outcomes: reduce digital exclusion and reduce feelings of loneliness. The coalition loans devices and data to disabled people alongside a full training and support package. Participants are matched with volunteer 'tech angels' for support.
The tech angels offer several training sessions, including how to use the device, how to shop safely online and an entertainment module, with further training depending on agreed outcomes. A medical services module includes information on how to find pharmacy services, how to book a GP appointment, how to book a video consultation and many others.

Several providers are developing partnerships between health and VCSE organisations, finding that it really helps with cross-fertilisation of ideas and support. These organisations benefit from working together to try different approaches to device, data and skills provision, target particular groups, learn which approaches work well and avoid duplication to make limited resources go further. In some areas, there are networks of volunteers with digital skills who support people by phone or within community settings such as libraries and cafes. For example, The Roxton Practice, a GP practice in north-east Lincolnshire, has developed digital care 'pathways' for its patient population and has volunteer 'care connectors' available in public libraries to help patients become more skilled and confident in accessing their health and care via digital channels. Volunteers can help people to become more familiar with digital health tools by downloading and setting up the NHS app. Volunteers can also support people to use specific NHS initiatives such as NHS@Home to measure their blood pressure, monitor their symptoms or conditions (for example, chronic obstructive pulmonary disease), collect data, and upload it.

Patient education videos

Some health professionals and organisations are creating video content to help patients navigate digital NHS services and get the best from digitally enabled care. We heard that for some people video can be more accessible and easier to understand than text-based approaches.
Images can easily convey what should be done and how, for example by showing how the NHS app should look at each stage of use or how a clinically usable picture of a rash should look. Images can also overcome literacy barriers to convey health information in a usable way. Through our workshops we heard from a GP, Dr Hussain Gandhi, who became very aware of patients being excluded from care services because of the shift to digital consultations and care during the Covid-19 pandemic. In response, Dr Gandhi developed a series of YouTube videos showing patients how to register with the NHS app, how to register with a local GP and how to send a photo to their doctor. We also heard how staff can find existing videos or other material online to use to help people navigate digital services.

Overall, providing a device, connectivity and support are all equally important for digital access; if one of those elements fails, then attempts to improve the use of digital services and make them more inclusive are likely to fail.

Structuring services around people's needs and preferences

Digital inclusion does not begin and end with providing access to devices and data and developing skills. People's needs change, and so you can't assume digital services always work for an individual. A one-size-fits-all approach or a rigid service without choice means the service is more likely to exclude people. Understanding people's changing needs and preferences can lead to the design of different digital options. This approach means services can flex around evolving expectations and changing patient needs. Here we highlight approaches that can help with designing and transforming services to be digital and inclusive.

Identifying people's capability and preferences

In north-east Lincolnshire, The Roxton Practice has undertaken extensive work to understand the digital preferences of its population.
Engaging with its 12,500 patients through face-to-face conversations, telephone, letter, SMS and the practice's website, the practice asked questions to gauge people's willingness, confidence levels and preferences for digital health care. People's preferences were logged on the electronic patient record system and used to develop a digital literacy level for each individual. Using this level, the practice suggests digital or non-digital services for the patient, but it is the patient who chooses to accept the suggestion or request an alternative approach. Workshop participants agreed it was important to set baseline preferences for patients, but also agreed that the baseline does not predict the best care at the time of need and that information needs to be kept up to date. However, pairing this approach with existing links into the community, such as social prescribing link workers, is a step forward and can help inform understanding of communities' needs and experience of health and care services.

Offering services with different levels of digitalisation

Some organisations we heard from have been redesigning services to use different levels of digital technology within a service, creating more flexible or multiple pathways to better match patients' expectation of choice. A single digital pathway is unlikely to meet the diverse needs of all patients, and so The Roxton Practice has redesigned its services so people can move between high-, low- and no-tech pathways for a single service. A high-tech pathway enables patients to use websites and apps on their personal devices to access services and collect and upload their own data for review by the primary care team. A low-tech pathway enables patients to use digital access points scattered across the whole community (in GP practices, libraries, pharmacies, workplaces, etc) with support from volunteers or staff onsite. A no-tech option offers face-to-face care.
This is just the start of the pathway transformation; The Roxton Practice is now researching how people on the low- and no-tech pathways use tech outside health care, eg, for online shopping or banking. By understanding why people are reluctant to engage with digitally enabled health care despite using digital services elsewhere, the practice is exploring whether community support can build confidence and trust to empower greater use of digital health care services.

Working with communities to develop more inclusive services

Inclusive digital services mean that the technology works as people expect it to, with information in the preferred languages, relevant cultural context and the features people need. Workshop participants shared how involving service users and members of the public in service design and transformation has helped change services so they are better structured around people's expectations, needs and preferences. Simon Community Scotland provides people experiencing homelessness with access to digital devices, data and training. The organisation regularly engages with users to ensure that its digital platforms are fully co-designed and produced by the people using them. This has resulted in information and resources that are valuable to users being front and centre. Working with women experiencing homelessness who use drugs means they now have access to evidence-based information, services and digital resources that may prevent drug-related harm or death, through an accessible and reliable app. Co-design and co-production were crucial in building trust in the app and the information it contains.

Another example we heard of working with people to develop more inclusive digital services concerned the need for reliable and up-to-date information about pregnancy – as well as the health and care support available to people who are pregnant.
The Mum and Baby app was originally developed by one North West London trust as a source of information during the birth and post-natal period. However, there was a sense the app could do more to enable more choice and personalisation of care through the whole maternity journey. The app has developed through co-production with clinicians and users across the local maternity and neonatal system in north-west London and now includes input from across England to ensure it meets the needs of service users. It gives information about maternity units across the user's area, local information about home births and continuity of carer, and personalised information about the unit selected and the services it provides. The app facilitates personalised care plans covering the entire maternity journey (including an appointment tracker).

Several different approaches have been found to reduce digital exclusion by improving accessibility. For example, the Mum and Baby app is available in languages other than English, as a website for those without a smartphone or tablet, and as a downloadable/printable version – which has undergone an accessibility review making it readable for those who have a vision impairment. The app has also been reviewed to ensure it uses plain and simple language (to ensure accessibility for those with lower literacy levels).

Improving the quality and consistency of services

People expect services to be similar, but in reality they often experience significant differences in digitally enabled services from a single provider or service and across the health and care system. Inconsistencies in whether and how digital technologies are used across providers and services create confusion and frustration among the public. There are differences in what is offered digitally, how it is provided and how well it works.
Creating a centralised group of expertise

Some organisations are choosing to tackle digital exclusion in isolation, while others are collaborating and working in partnership. 100% Digital Leeds is a model for partnership across a city. Working in partnership with health and care organisations across the city, as well as VCSE organisations, 100% Digital Leeds supports organisations to improve digital inclusion. The ambition is to increase digital inclusion for everyone, in a way that's sustainable and embedded within existing services. The team take a 'furthest first' approach, working in partnership with organisations to tackle the needs of those most digitally excluded first. The 100% Digital Leeds approach aims to ensure consistency of approach across all partner organisations and acts as a central hub for sharing learning and improvement.

Following a successful pilot programme to tackle digital exclusion, the Surrey Coalition of Disabled People was able to bring together organisations from across the health and care system in its area to collaborate on addressing exclusion. Through regular meetings they were able to develop an understanding of the services being offered and where the approaches taken increased digital exclusion. The Surrey Coalition then offered to support these services to become more digitally inclusive for all disabled people across the region. For example, the Surrey Coalition was able to support the NHS@Home initiative by assisting patients to collect and upload health data on to devices, unlocking the benefits of NHS@Home for more people.

We also heard from NHS Black Country ICS digital leads, who have been developing system-wide approaches to digital health to ensure services are developed in partnership with VCSE groups and learning is built on. A centralised group or collaboration is helpful to continually learn and apply best practice to make services more inclusive.
However, equally important is acknowledging that the centralised group needs to work in partnership to address the needs of particular population groups. For example, 100% Digital Leeds works with partners across the city to help them reduce the barriers to digital inclusion for the people they are working with. 100% Digital Leeds has broad knowledge and expertise, but by working with partners it can better understand the issues specific groups face, co-produce solutions and then support partners to embed interventions within existing services.

Patients and the public face challenges relating to fundamental requirements for digital services, such as devices and data; rigidity of services not meeting preferences or need; and inconsistency of approaches creating confusion. However, many areas are successfully overcoming these challenges to improve digital inclusion and meet public expectations. The solutions include providing devices, data and skills support for patients, working with communities to understand and meet their preferences and needs, and collaborating across organisations to create a centralised group of expertise to improve best practice and its application in all services. By exploring and applying the approaches outlined above in detail, integrated care systems and providers can significantly improve digitally enabled services and mitigate widening inequalities.
(b) exercised in a judicial manner which can withstand judicial scrutiny."

"PUBLIC ORDER" and "LAW AND ORDER"

6. In Ram Manohar Lohia v. State of Bihar, AIR 1966 SC 740, the Court emphasised the difference between a "public order" and a "law and order" situation, and held that a public order situation is graver than a law and order one. This suggests that more liberties will be curtailed in a public order situation, and hence the material facts should indicate the gravity of the situation. Thus, imposing Section 144 in anticipation of violence alone, without due inquiry, would be an anathema to individual liberties. "Urgent situations" or "emergency" are to be determined on a case-to-case basis. Even if there is material indicating a law and order situation, other means of a less restrictive nature should first be imposed. It is to be reiterated that the power under Section 144 CrPC is an extraordinary one and hence shall be used in extraordinary situations, with extraordinary material facts and extraordinary care. Though such an order is an executive one, it shall always be well reasoned and based on material facts.
The arrival of spring means a long break from icy, snow-packed roads and winter driving conditions. Although snow and ice may not be a problem during the warmer months, inclement weather can still cause hazardous driving conditions. High winds and heavy rains are common in the spring, and peak tornado season is during the spring and summer.

Weather Hazards In The Spring and Summer

During the warmer months, drivers may encounter a variety of hazardous conditions related to inclement weather, including:
- Lightning: Thunder and lightning come with summer storms. Lightning can cause fires and knock down power lines. It can also cause temporary blindness when it strikes near you, particularly at night.
- Flash floods: With a rapid rise of water in a short amount of time, flash floods are one of the great dangers associated with thunderstorms.
- High winds: During a severe storm, the wind may be strong enough to jerk your car out of its proper lane or even off the road. The wind may also toss tree branches or cause other flying debris to hit your car or fall in its path.
- Low traction: Rain on the road can cause your car to have very low traction during a storm. This means more time required to brake and less control over your vehicle.
- Poor visibility: Rain pouring down all around you makes it difficult to see when you are driving during a storm. Even with defrosters on and windshield wipers working at full speed, driving visibility will be poor.
- Hail: Spring is the time of year when hail is most frequent. If you happen to be out driving when a hailstorm hits, it can cause serious damage to your vehicle.
- Other drivers: When a storm hits and you slow down to a speed that is safe for conditions, some drivers may become impatient, tailgating, speeding or passing unsafely. Other drivers may become nervous and drive too slowly or erratically, contributing to traffic congestion. Either type of driving behavior can cause accidents.
Tips For Safe Driving In Hazardous Weather Conditions

If you are caught in a spring or summer storm, the following tips can help you and your passengers stay safe:
- Turn your lights on: Visibility is low during a storm. Turning your lights on lets other drivers see you.
- Do not tailgate: Leave plenty of room between your car and the vehicle in front of you.
- Avoid bridges: They are more likely to collapse during a severe thunderstorm or tornado. If you have to go over a bridge, wait for the storm to pass.
- Do not try to outrun a tornado: Their paths are unpredictable and can switch directions at any time. Pull over to the side of the road and seek shelter, either in the lowest level of a nearby building or at the lowest point of the ground, lying down and covering your head with your hands. Do not take shelter under a vehicle, in a tunnel, or under a bridge.
- Check your auto insurance coverage: Our experienced agency can help you find the car insurance you need at the best available rates.
A long-snouted reptile that plied the seas some 120 million years ago got into a scuffle that left it with a gouged and scratched jaw -- battle wounds that are seeing the light of day, thanks to a recent discovery.

Remains of this dolphin-like sea creature, called an ichthyosaur, were found in the remote desert near the town of Marree in northern South Australia. Ichthyosaurs were fast-swimming predators that fed on fish and squid-like animals with their 100-plus crocodile-like teeth. This individual, spanning about 16 feet (5 meters) in length, is a member of the genus Platypterygius.

The researchers found a gouge on the lower jaw that was about 0.9 inches long and 0.6 inches wide (23 mm by 16 mm), along with two jagged furrows and another gouge. "The bone itself was not broken, rather it was scored, suggesting that the bite was strong but not 'bone puncturing' like that of a predator," said study researcher Benjamin Kear of Uppsala University in Sweden. In fact, the researchers say this Platypterygius likely survived the brawl to live some time afterward, as the wounds showed advanced healing, including evidence a callus had formed.

When the ichthyosaur was alive, the Australian continent was still joined to Antarctica as part of the supercontinent Gondwana, and would have been much farther south than it is today, close to the southern polar circle. What is now arid grassland was then the bottom of a vast inland sea that experienced freezing water temperatures and even seasonal icebergs, the researchers say.

The researchers aren't sure what the fight may have been over or the opponent's identity, though they speculate the injuries came from a showdown with another ichthyosaur of the same species, possibly over mates, territory or food.
Here are the other suspects they considered:

- The gigantic pliosaurid called Kronosaurus, a marine reptile that may have exceeded 33 feet (10 m) in length, was around at the time and sported a "head the size of a small car and teeth as big as bananas," Kear told LiveScience. This beast is known to have hunted very large marine vertebrates, such as big sharks of the day; however, its giant teeth would've inflicted horrific injuries, much more so than those seen on the ichthyosaur remains.
- Large laminar sharks have teeth that could leave parallel scratch marks, though the gouge doesn't match their feeding tendencies, the researchers say.
- An accidental encounter with a small plesiosaur, whose teeth are closely spaced and conical in shape, may have left the wounds.

The finding will be detailed in a forthcoming issue of the journal Acta Palaeontologica Polonica.

Jeanna served as editor-in-chief of Live Science. Previously, she was an assistant editor at Scholastic's Science World magazine. Jeanna has an English degree from Salisbury University, a master's degree in biogeochemistry and environmental sciences from the University of Maryland, and a graduate science journalism degree from New York University. She has worked as a biologist in Florida, where she monitored wetlands and did field surveys for endangered species. She also received an ocean sciences journalism fellowship from Woods Hole Oceanographic Institution.
In 1962, Howard Lotsof, a 19-year-old New Yorker with a daily heroin dependency, tried a mysterious substance extracted from a West African shrub given to him by a chemist friend. Within 30 hours, after a long and intense trip, his desire for heroin was gone. Lotsof was astounded. A day and a half went by without using, yet he had zero withdrawal symptoms.

The experience itself was illuminating too. “Ibogaine showed him that heroin was something that emulated death,” says anthropologist and ibogaine researcher Thomas Kingsley Brown, PhD. “Before taking ibogaine, he regarded heroin as something that gave him comfort.”

Curious to see if the effects could be duplicated, Lotsof gave it to seven friends who were also addicted to heroin. After the experience, five immediately quit. “The other two said they could have stopped but they just didn’t want to. They liked using,” says Brown.

Ibogaine is the potent psychedelic compound with anti-addictive properties found in the shrub Tabernanthe iboga. It was first isolated in 1901 by J. Dybowski and Ed. Landrin, and introduced to the Western public in France in the 1930s, where a diluted preparation of it was marketed as a mental and physical stimulant. Lotsof sparked its association with treating opioid dependency, going on to advocate for further research into it and inspiring many medical practitioners to administer this treatment. But the history of its usage goes far beyond the West, to the jungles of West and Central Africa, where its steward communities use it as a ceremonial sacrament to this day.

Ibogaine’s Ritual History in West Africa

Tabernanthe iboga grows primarily in Gabon, along with surrounding areas like the Congo and Cameroon, where the Pygmy people originally resided. The ritual use of iboga can be traced back to the Pygmies, who introduced it to the Bantu people in the late 1800s, when French colonizers pushed the Bantus out of their coastal villages towards Gabon.
The Pygmies’ initial use of iboga is unknown, likely dating back hundreds, possibly thousands, of years, says Brown. From this cultural mixing emerged Bwiti, a spiritual tradition which incorporates animism and ancestor worship. Iboga plays a central role in Bwiti as a sacrament for spiritual growth and community bonding, used in healing rituals and initiation rites.

Gabon has at least forty different ethnic groups, resulting in a myriad of Bwiti branches. Most ceremonies involve music (participants play a number of traditional instruments, including percussion, harp and mouth bow) and dance to induce a prolonged trance state, lasting up to five days. Although the majority of the Gabon population is Christian, most Bwiti practitioners have not adopted Christianity into their practice, with the exception of the Fang people, whose syncretic practice incorporates Christian elements. Bwiti has been persecuted by Christians since its inception, and faces condemnation by missionaries to this day. Aside from the church, Bwiti is well-accepted in Gabon, and a number of government officials and members of the police and army can be found among its initiates.

Ibogaine’s Emergence in the West as a Treatment for Opioid Addiction

After his experience, Lotsof was single-mindedly dedicated to lobbying for ibogaine to be taken seriously as a treatment for addiction. His widow, Nora Lotsof, remembers him as “a real gentleman who believed whole-heartedly that anyone with a substance abuse problem should have the right to choose when, and by what means, to stop self-medicating.” In the 1980s, by which point ibogaine had already been added to the list of forbidden Schedule 1 substances during the War on Drugs, Lotsof founded NDA International, an organization that promoted research into ibogaine.
In 1986, Lotsof developed a patent for ibogaine as an addiction treatment, and through NDA, co-sponsored human studies on ibogaine in the Netherlands and Panama with other addiction treatment groups in the early 90s. With the results of these studies, Lotsof was able to persuade the National Institute on Drug Abuse to conduct further research into ibogaine, eventually leading to FDA approval of a Phase 1 clinical trial. Unfortunately, the trial was never completed, in part due to lack of funding and criticism from pharmaceutical companies. Despite this setback, Lotsof continued to advocate for ibogaine with other researchers and doctors, and to work with independent ibogaine clinics in Mexico, Europe, and the Caribbean.

Ibogaine in the Modern Day

The psychedelic renaissance, which is ushering in a new wave of acceptance for these substances, has raised awareness about ibogaine in psychedelic communities, among newcomers and veterans alike. However, ibogaine remains a Schedule 1 drug which, unlike other psychedelic substances such as psilocybin, has no significant research on the horizon that could bring it towards FDA approval. Still, ibogaine clinics staffed by physicians and advocates who, like Lotsof, fervently stand by ibogaine’s potential have been administering this treatment in places where it’s legal or unregulated since the 1990s. The Mexico-based ibogaine clinic Beond is staffed with physicians like Dr. Jeffrey Kamlet and Dr. Felipe Malacara, who’ve been successfully using ibogaine to treat addiction for decades in various clinics outside the U.S.

Recent developments have been made around the cultivation and exportation of ibogaine, which has been illegally harvested in Gabon for use in the West.
Since the Nagoya Protocol, an international treaty established in 2014 that calls for the fair and equitable sharing of genetic resources, a number of ibogaine organizations are teaming up with Gabonese officials to develop a legal distribution channel for ibogaine that will allow Gabonese communities to benefit. Among the groups participating is Beond, who will buy Nagoya-compliant iboga when it becomes available, and contribute a portion of their clinic’s proceeds towards projects that support Gabonese communities and indigenous leaders.
Crowns are a type of dental restoration which, when cemented into place, fully covers that portion of a tooth which lies at and above the gum line. In comparison, composite fillings/inlays/onlays are dental restorations that are used to fill in or cover over just a portion of a tooth. Since dental crowns encase the entire visible aspect of a tooth, a dental crown in effect becomes the tooth's new outer surface. Crowns are used to rebuild broken or decayed teeth, to strengthen teeth, and as a means by which to enhance the cosmetic appearance of teeth. Crowns can be made out of porcelain/ceramic, gold alloy, or a combination of both. Dental crowns are often referred to as "dental caps".
When Steven Frank, M.D., had an idea to improve the effectiveness of a common prostate cancer treatment, he didn’t know his vision would result in the creation of an innovative device that would win Food and Drug Administration (FDA) approval and subsequently be spun off into a startup company.

It all began when Frank tackled a decades-old problem plaguing the field of prostate brachytherapy, a treatment in which tiny radioactive seeds are implanted in the body to destroy cancer cells. Because the seeds are difficult to view through imaging such as computed tomography (CT) scans, which are necessary to evaluate the success of treatment, uncertainties are created that can impact the therapy’s effectiveness. Frank proposed that a highly visible marker of sorts, or implantable contrast agent, could be developed and placed between the seeds to guide treatment.

“The best analogy to describe this technology is looking at footage of machine gunners in World War I who would spray bullets everywhere and not have any idea where the bullets were going,” says Frank, associate professor in Radiation Oncology. “Finally, someone had an idea to insert a tracer every few rounds to increase accuracy. That’s essentially what we’ve created.”

In 2006, Frank collaborated with a biomolecular engineer at the University of Houston, and together they began testing compounds believed to be visible under an MRI scan, the most accurate way to monitor brachytherapy seeds. The answer arrived in a compound called cobalt chloride, which, to their surprise, lit up the scans during the investigation.

After discovery, taking on the ‘beast’

Armed with this new finding, Frank’s next steps propelled him into unfamiliar — yet necessary — territory that he had to navigate if his idea was to become reality. He needed guidance to understand the very complex process required to commercialize this technology and bring it to patients.
For that he turned to Olivier Wenker, M.D., clinical professor in Anesthesiology and Perioperative Medicine, and Tom Lee, director of Technology Commercialization’s Active Venture Development, who are responsible for unearthing innovations within MD Anderson’s walls and determining commercial potential. The laborious process, which Frank describes as “a beast,” involves raising capital and obtaining financing, patents and regulatory approval.

“The ultimate goal is not just discovery, but taking that achievement from the laboratory to the patient, which is a very complicated process in medicine,” Frank says. “I was at a crossroads because I needed to license the technology out of MD Anderson, otherwise it would die.”

With Wenker and Lee’s mentorship, and the help of outside counsel experienced in negotiating licensing agreements, Frank successfully transferred his discovery out of MD Anderson over the course of nine months. With an agreement in place, the institution would receive royalties and Frank would be free to incorporate a business of his own.

A maze of regulations

In 2009, three years after making his initial discovery, Frank established C4 Imaging. He immediately began the first of two fundraising rounds that would generate more than $3 million in total startup capital. But challenges remained. Before any new medical technology can be used clinically, it must be approved by the FDA. In anticipating the regulatory phase, Frank hired a CEO to lead the important next steps. Around that time, he also learned of an innovative program that would prove especially beneficial.

“The National Institutes of Health created a mechanism where experts with regulatory experience hand-select aspiring companies to help navigate their way through the FDA, and we were selected,” Frank says.
“During this process we were also busy conducting trials and evaluating toxicity, so it was a critical time to make sure everything happened correctly.”

After three years of discussions with the FDA, the world’s first permanently implantable MRI marker for use in prostate brachytherapy was approved. Frank, who worked tirelessly on the development of his idea while his family — including four children less than 10 years old — slept, achieved an incredible feat without any formal business training, and in an industry known for high failure rates. This past March, the first group of patients received the marker during their therapy.

While Frank credits his success to a team of advisers and supporters, the end goal was always focused on improving patient care. “We can now limit uncertainty, provide optimal quality assurance and minimize side effects,” Frank says. “This technology could change the way brachytherapy is planned and evaluated for future patients.”

Let us explain … The Office of Technology Commercialization (OTC)

The OTC identifies technologies suitable for startup company formation. These companies help create institutional value for the intellectual property resulting from inventions made by MD Anderson researchers and clinicians. Since 1987, the OTC has been involved in the creation of 11 affiliated companies that have raised more than $300 million on the strength of MD Anderson-based discoveries. Four portfolio companies listed on NASDAQ have raised more than $230 million and funded $25 million in sponsored research at the institution.
AHA stands for alpha-hydroxy acid, and BHA stands for beta-hydroxy acid. AHA and BHA are hydroxy acids that people use to treat skin conditions such as acne. The most common hydroxy acids include glycolic acid, lactic acid, and salicylic acid. People can find these ingredients in a variety of cosmetic products claiming to treat different skin conditions and improve skin features. Choosing the most appropriate product to get a specific result can be challenging. This article explores the differences between AHA and BHA, which conditions they treat, and how best to use them.

AHAs include:

- glycolic acid
- citric acid
- malic acid
- tartaric acid
- lactic acid

These are weak acids that may improve the appearance of the skin. People can find AHAs in the form of skin peels to treat:

- melasma (brown or gray patches of skin)
- hyperpigmentation (patches of darker skin)
- age spots
- seborrhea (rash with red and itchy spots and white scales)

Salicylic acid, which is a type of BHA, is a common ingredient in acne products. Different BHAs include:

- salicylic acid
- beta-hydroxybutanoic acid
- tropic acid
- trethocanic acid

One of the most common

To measure improvements in sun-damaged skin, doctors check skin roughness, changes in skin color, and collagen density. Because of the reported benefits of hydroxy acids, many skin care companies have developed cosmetic products that include them.

AHAs and BHAs both work as exfoliants, but they work in different ways. AHAs work by reducing the concentration of calcium ions in the skin. This promotes the shedding of skin cells at the surface. BHAs are also a skin peeling agent, but salicylic acid has additional antibacterial actions.

Despite the many studies on AHAs and BHAs,

Studies typically use products with different active ingredients and different instructions for use, making comparing products a challenge. Many products with hydroxy acids are exfoliants and moisturizers.
People can also find them in low concentrations in over-the-counter and prescription creams and lotions. People can find hydroxy acids in higher concentrations in chemical peels used for treating calluses, acne, photoaging, skin growths, and psoriasis.

AHAs and BHAs are both exfoliants, but each hydroxy acid has other properties that make it more appropriate for treating certain skin conditions or improving certain skin features. Compared with the AHA glycolic acid, salicylic acid causes

Another difference between AHAs and BHAs is that BHAs increase the skin’s resistance to ultraviolet skin damage and also have antibacterial effects. The antibacterial effects of BHAs make them appropriate ingredients for acne products.

AHAs provide more

They also have effects on collagen and procollagen production. These are substances that can improve the appearance of photoaged skin. Since AHAs are more aggressive, their use requires caution because of the potential for sun sensitivity.

A person should base their choice of skin product on which hydroxy acid is most suitable for their specific need. BHA seems to be more effective for treating skin conditions like acne because of its antibacterial properties. AHAs, such as glycolic and lactic acid, may be

Also, because of AHA’s more aggressive mechanism of action and its

However, it is worth noting that much of the existing research appears gathered from studies whose participants had lighter skin tones. The safety and effectiveness of these products in populations with darker skin tones requires further study.

Products that contain AHAs report several benefits, including:

- smoothing fine lines and surface wrinkles
- improving skin texture and tone
- unblocking and cleansing pores
- improving skin appearance in general

AHAs work by exfoliating the skin. Exfoliation sheds surface skin cells. How much a product with AHAs exfoliates the skin depends on the concentration of the AHA ingredient, its acidity, and other ingredients found in it.
Using AHAs comes with certain side effects. Most often, these side effects occur with skin peeling products. Side effects are local, meaning they affect the area of skin where the product was applied. They may include:

- burning sensation on the skin
- changes in skin color
- blisters or welts
- skin peeling
- skin irritation
- chemical burns
- increased risk of sunburn

To use skin products containing AHA safely, follow the directions on the product’s label carefully. Be aware of any warnings on the product label. People who use products containing AHA should regularly use sun protection. Sun protection includes wearing sunscreen, wearing protective clothing, and limiting sun exposure.

Experts generally consider over-the-counter AHA products safe when:

- The concentration of AHA is 10% or less.
- They have a pH of above 3.5.
- The product protects the skin from increased sun sensitivity, or the package recommends daily sun protection.

Before choosing a product with AHA, people should speak with a doctor or dermatologist to ensure the product will be safe and effective.

According to the

If the producer of the skin care product expects that the user may experience sun sensitivity after applying their product, there must be warnings clearly displayed on the product packaging.

The Food and Drug Administration (FDA) recommends certain precautions when using products that contain BHA. These precautions include:

- testing products that contain BHA on a small area of the skin before applying to a larger surface of the skin
- following the instructions on the product label closely
- avoiding exceeding the recommended applications
- avoiding using BHA-containing skin products on infants and children
- practicing sun protection when using BHA-containing products

Before using a product with BHA, people should speak with a doctor or dermatologist to find the safest and most effective product. Since both AHAs and BHAs are exfoliants, they can be very irritating to the skin if combined.
If a person wants to use both AHA and BHA products for different skin problems, they should consult a doctor. Excessive skin irritation may worsen skin conditions and appearance. Some types of AHA are less aggressive and may be more appropriate to combine with BHA. Products that contain hydroxy acids may also not require daily use, which can help with skin irritation if people need more than one product. Spot treating the skin with different products may also help prevent irritation to the entire skin. For example, a person can try applying an anti-aging or sun damage repair product on their entire face, but spot-treat areas of the skin with acne with a BHA-containing product. Many skin care companies add hydroxy acid ingredients to their products because of their reported benefits. AHA and BHA both exfoliate the skin. AHA seems to be more effective for treating issues with skin pigmentation. BHA is less aggressive and irritating and has additional antibacterial properties. Both AHA and BHA repair sun-damaged skin. To choose the most appropriate product, people should speak with a doctor or dermatologist who can help diagnose skin conditions and recommend the safest and most effective product.
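The over-the-counter AHA criteria listed earlier (a concentration of 10% or less, a pH above 3.5, and sun-sensitivity guidance on the label) can be sketched as a simple rule-of-thumb check. The function name and the example values here are hypothetical, and this is an illustration of the criteria, not medical advice:

```python
def meets_otc_aha_guidance(concentration_pct: float, ph: float,
                           labeled_for_sun_protection: bool) -> bool:
    """Rule-of-thumb check against the cosmetic AHA guidance described
    above: 10% AHA or less, pH above 3.5, and sun-protection guidance
    on the packaging."""
    return (concentration_pct <= 10
            and ph > 3.5
            and labeled_for_sun_protection)

# Hypothetical example products
print(meets_otc_aha_guidance(8.0, 3.8, True))   # within the guidance
print(meets_otc_aha_guidance(15.0, 3.2, True))  # too concentrated and too acidic
```

A product failing the check is not necessarily unsafe; it simply falls outside the over-the-counter guidance described above, and a dermatologist's advice applies.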
A rotary union, rotary joint, or swivel is a mechanical device that allows the transfer of fluids and/or gases to and from rotating equipment. They are utilized in machinery that requires a constant flow of lubrication, air, water or other liquids during rotation. A slip ring is a device that allows the transfer of electrical power, signals and data to and from rotating equipment. Rotary union technology contains elements such as mechanical seals, Teflon seals, hydrostatic seals and integrated or stand-alone electrical slip rings. The demands and uses of rotary unions and slip rings are vast, which is why standard and custom-built assemblies are available in numerous material, seal and passage size options.

Web Page: www.rotarysystems.com
Abstract and Introduction The isolation of Δ9-tetrahydrocannabinol (THC), the major psychoactive ingredient in cannabis, set the stage for the discovery of an endogenous cannabinoid (endocannabinoid) transmitter system. Endogenous signaling molecules for this system were subsequently isolated.[2,3] Anandamide and 2-arachidonoylglycerol (2-AG),[2,3] the best characterized endocannabinoids isolated to date, bind to and activate cannabinoid CB1 and CB2 receptors. CB1 is the primary cannabinoid receptor found in the CNS, whereas CB2 is predominantly, but not exclusively, found in the immune system.[6–8] The discovery of cannabinoid receptors allowed researchers to synthesize cannabinoids and characterize their pain-relieving properties. Anandamide and 2-AG are degraded by the enzymes fatty-acid amide hydrolase and monoacylglycerol lipase, respectively. Enzymes catalyzing endocannabinoid breakdown also represent targets for analgesic drug development. This article will briefly summarize the findings of preclinical and clinical studies evaluating the therapeutic and side-effect profile of cannabinoids as pharmacotherapies for neuropathic pain. Future Neurology. 2011;6(2):129-133. © 2011 Future Medicine Ltd. Cite this: Cannabinoids for the Treatment of Neuropathic Pain - Medscape - Mar 01, 2011.
This easy button snail craft is a wonderful way for kids to learn more about this fascinating, slow-moving creature and gives the kids the freedom to create!

Spring is upon us, and you might be noticing garden snails in your backyard, especially after rain. Snails are not only shelled gastropods that are slow-moving, but also an integral part of our natural ecosystem. Did you know that there are nearly 40,000 species of snails worldwide, living in environments ranging from seawater and freshwater to land? The shell of a snail grows with it. This means that the shell a snail is born with is the same one it will have for its entire life, expanding as the snail grows.

This button snail craft for kids is an exciting way to bring all the interesting facts about snails to life. Using a simple snail shell template and a handful of buttons, your little ones can create adorable snails that are as unique as they are. Whether you’re a parent looking for a fun weekend activity, a teacher in need of a classroom project, or a babysitter wanting to engage the kids in a creative task, this button snail craft is a fantastic choice. And since we are doing a snail craft, make sure to take your time and enjoy every moment!

How to Make a Button Snail Craft

Many of the supplies for the button snail craft, such as the popsicle sticks, buttons, and craft foam, can be found in the Dollar Store!

- Jumbo craft stick
- Googly eyes
- Acrylic paint
- Pipe cleaners
- Craft foam
- Assorted buttons
- Glue gun and glue sticks
- Permanent marker
- Printable snail shell template (scroll to the bottom of the post for download instructions)

1. Apply the green acrylic paint to both sides of the jumbo craft stick. Set aside to dry. A second coat may be necessary if you can still see the wooden stick under the green paint.

2. Make two small dots using pink acrylic paint.
We found dipping a new pencil eraser into paint and stamping the eraser on the craft stick the easiest way to achieve nice, round circles. You’ll later use a permanent marker to draw a smile line between the two dots, so make sure to leave enough space in between. Set the craft stick aside to dry.

3. Cut the pipe cleaners so that you have two 4-inch sections. Bend each in half. Glue a googly eye to the bent end of the pipe cleaners. Using a glue gun and glue stick is the fastest option, but if you don’t have them at home, you can use craft glue or tacky glue. Set the pipe cleaners and googly eyes aside to dry completely.

4. Glue the pipe cleaners to one end of the craft stick. Make sure it’s the end with the two pink dots on the other side.

5. Flip over the craft stick and draw a smile between the two pink cheek circles.

6. Download and print out the snail shell template. There are several options:

- Print the template on printer paper. Cut out the shell outline and trace the snail shell template on craft foam with a pencil. Then cut along the traced line.
- Print the template on cardstock and use it for your snail shell.
- Don’t have a printer? You can certainly use a small paper plate as a substitute, though you may need to use acrylic paint to base coat the paper plate and give the shell some color.

7. Glue the buttons on the snail shell in a spiral pattern. You can use the snail shell template as a visual guide. For younger kids, you may want to lightly draw the spiral on the craft foam with a pencil so that they can glue the buttons along the line. I also suggest using craft glue for preschool or lower-elementary kids instead of the glue gun pictured below.

8. Glue the finished shell to the craft stick body.

Click on the image below to grab your free printable button snail craft template!
The Church In Babylon Unleashing the Power of a Spirit-Filled Witness What comes to mind when you hear the name Babylon? Babylon is often used as a symbol of man’s rebellion against God. Thanks to their rebellion, the Jews were eventually carried off into Babylon, where they tried to maintain their faith in the midst of a pagan culture. These messages from the book of Jeremiah walk through what happened before the coming of the Babylonians and the destruction of Jerusalem. We will see parallels to our own nation; more importantly, we will look into our own hearts and see that we, like generations before us, must prepare for dark and difficult days. These messages are intended to motivate us to be faithful in a pagan culture that is under the judgment of God.
A User Guide

Looking for something specific? The User Guide, Formats, Reversing Topics and FAQ are indexed within the app — open Archaeology, click on the Help menu (Command-?), and type in the Search field. Results listed under Help Topics link directly to the appropriate section here.

Opening a File in Archaeology

There are several ways to open a file using Archaeology:
- In Archaeology, choose File > Open, locate the file and click Open.
- In the Finder, locate the file and drag it onto the Archaeology icon, either in your Dock or in your Applications folder.
- From the Terminal, use the “trowel” command-line tool to tell Archaeology to open a given path.

Whichever method you choose, Archaeology examines the file to identify the binary format and — assuming it is a format that Archaeology recognizes — opens a new window with the best representation possible. In this example, the binary data is in BER-encoded ASN.1 format, as shown on the right side of the window, under the toolbar. The representation underneath will vary with the actual format: see Formats for how to navigate each supported format. Or click the Help toolbar button to directly open the format-specific documentation for the current representation.

Archaeology tells macOS that it can open any kind of file, because binary formats are often used without file extensions, and might have any Kind, or a generic Document one. Archaeology does its best to decode whatever you open, but if it doesn't find a format it recognizes, it will say that the file “contains data in a format that can't be decoded.”

Decoding an Embedded Data Value

Depending on the file and its binary format, you will often find an embedded “chunk” of data that is further encoded in some way. In Archaeology, this will appear as an item whose contents are described as “X bytes of data” or, in the ASN.1 format, as an item with a tag like OCTET STRING. In this case, you can ask Archaeology to further decode this chunk of data.
Select the item and click the Decode toolbar button or use Go > Decode Item (Cmd-Down Arrow). Assuming that Archaeology recognizes the format of the data, it will decode it and show a new view, replacing the file that it came from. Each time you find another embedded chunk of data, you can continue recursively decoding in this manner.

Of course, not every chunk of data will be decodable. Some have formats that aren't known or supported by Archaeology. Some are unique identifiers or cryptographic hashes of some sort — it's not uncommon to come across SHA-256 digests or the like. (If you come across data that is exactly 16, 20 or 32 bytes, it's very likely a digest of some sort, for example.) However, you can always ask Archaeology to try. If Archaeology can't decode a specific chunk of data, you might learn more by inspecting the value info.

If, at any point, you want to go back to the previous (containing) view, click the Back toolbar button or use Go > Back (Cmd-[). To go back multiple steps at once — or just to see where you've come from — hold down the toolbar button briefly.

Most of the time, Archaeology will figure out the best way to decode a chunk of data (if it can decode it at all). But there are a few cases where data can be decoded in different ways — such as Cocoa Keyed Archives, which are also macOS Property Lists. To override Archaeology's default choice, hold down the Decode toolbar button briefly, and choose a specific format.

Getting More Info About a Data Value

In addition to asking Archaeology to decode an item containing a chunk of data, you can see some general information about that chunk. This can be especially helpful where Archaeology doesn't know how to decode the chunk in question.
To show this additional information, select the item and click the Info toolbar button or use File > Show Value Info (Cmd-I). From this data value inspector, you'll find the following information:

- Source File shows the actual file that the data value was found in, which might be different than the item you opened (especially for File TOC types that are really bundles). Click the adjacent Show in Finder button to reveal the actual file.
- Size of Value shows how large the data chunk is.
- Offset in File shows where the data chunk resides within the Source File, if Archaeology can figure this out. (This will depend on the file format, and on the format of any recursive decodes that were performed to get to this point.) If the offset is known, you can click the adjacent Copy button to copy a Terminal command that will extract this precise chunk of data from the Source File (which you can then pipe into some other command).
- Hex Encoded shows the actual data in hexadecimal encoding — or up to one line's worth. If you click on this hex representation, Archaeology will insert a space between every byte; click again to change the number of bytes per group. This may make the value more recognizable, in some cases. You can click the adjacent Copy button to copy the hex-encoded data value to the clipboard, as discussed further below.
- SHA-1 Digest and SHA-256 Digest show the result of calculating the named digest algorithm against the data value shown. Whether or not these digests have any meaning is unknown to Archaeology — it just provides them as a convenient alternative to copying out the value and using an external app to hash it. (The example above is one where the SHA-256 does have meaning: this digest of the “Code Directory” is the code signing digest or cdhash value.)

The Open in Hex Editor and Export Data Value buttons are simply shortcuts to the integration features described below.
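The Copy buttons in the value inspector produce text you can use directly in Terminal. As a rough sketch of that workflow (the file name, offset, and length below are hypothetical stand-ins, and the exact command Archaeology copies may differ):

```shell
# A demo file standing in for a real Source File (contents hypothetical):
printf 'HEADERpayload-bytesTRAILER' > sample.bin

# Extract 13 bytes starting at offset 6 -- the kind of command the
# "Offset in File" Copy button generates for the selected value:
dd if=sample.bin bs=1 skip=6 count=13 2>/dev/null > value.bin

# Hash the extracted chunk, mirroring the SHA-256 Digest field:
shasum -a 256 value.bin

# Round-trip the hex-encoded form: xxd -p encodes bytes as hex text,
# and xxd -r -p turns hex text back into raw bytes:
hex=$(xxd -p value.bin | tr -d '\n')
printf '%s' "$hex" | xxd -r -p
```

Both `shasum` and `xxd` are typically available out of the box on macOS, so nothing extra needs installing to post-process an extracted value.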
Moving a Data Value to Another App

Archaeology provides a few ways to get a specific data value into another app or Terminal command:
- To export the selected data value to a new file, click the Export toolbar button or use File > Export Value As (Cmd-Shift-E).
- To copy the selected data value to the clipboard, click the Copy toolbar button or use Edit > Copy Data (Cmd-Shift-C). Archaeology places the data value on the clipboard as a hexadecimal-encoded string. We've found this to be the most useful form for getting into other apps. (There isn't really a widely-understood clipboard type for arbitrary binary data, as far as we've found.)
- To open the selected data value in another app — presumably, one that can view or edit arbitrary binary data — use File > Export Value in Hex Editor (Cmd-Option-E). Archaeology writes the data value to a temporary location, and asks the other app to open it. Use Archaeology > Preferences > General > Open values in hex editor to change the app that Archaeology uses for this purpose.

Searching the Decoded View

For some binary formats, Archaeology allows you to search in the decoded data. For example, you can search a Cocoa Keyed Archive for specific keys, class names or string values; you can search a macOS Property List for keys or values. If searching is possible for the current view, the toolbar search field will be enabled. Click in the search field (or use Cmd-F) and enter text. Press Return to perform the search. How the search results are shown varies with the binary format: see Formats for details. To change which aspect of the decoded data is being searched — keys or values, say — click the search button inside the search field.

The “trowel” Command-Line Tool

Archaeology provides a command-line tool — called trowel — that can be useful when you're working in the Terminal. You can use it to open a file in Archaeology by path, without needing to go through the File > Open dialog.
You can also specify the expected format and other options. The trowel tool is delivered inside the Archaeology application bundle, but consistent with our philosophy that you should decide when and how to install software on your Mac, the app does not try to install the command-line tool for you.

The easiest way to use trowel is to make a symbolic link to it, inside some directory that is already in your shell's search path. You can get the path to the tool (regardless of where Archaeology itself is installed) by using Help > About the Command-Line Tool, and clicking Copy Path to “trowel” Tool. Then paste that into a Terminal command something like this:

ln -s ⌘V /usr/local/bin

Run trowel with no arguments to see usage information.

The trowel tool relies upon other resources inside the Archaeology application bundle, so don't try to copy or hard-link the tool somewhere else, or it will simply abort when you try to run it. A symbolic link works because the executable can still find the rest of Archaeology relative to itself. If you don't want to use a symbolic link, though, you can use any other mechanism that expands to the actual tool path retrieved through Copy Path to “trowel” Tool, such as a shell alias.
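The install step can be sketched end to end. The paths below are scratch stand-ins (a stub script plays the role of the real trowel executable, whose location varies per machine), but the mechanism is exactly the one described above: a single symbolic link into a directory on your PATH.

```shell
# Scratch stand-ins for the app bundle and a PATH directory:
app_dir=$(mktemp -d)
bin_dir=$(mktemp -d)

# A stub playing the role of the real trowel executable:
printf '#!/bin/sh\necho "trowel stub"\n' > "$app_dir/trowel"
chmod +x "$app_dir/trowel"

# The actual install step from the guide: a single symbolic link.
# With the real tool, paste the path copied from Help > About the
# Command-Line Tool in place of "$app_dir/trowel".
ln -s "$app_dir/trowel" "$bin_dir/trowel"

# The link resolves, and the tool runs from its linked location:
"$bin_dir/trowel"
```

Note that this only demonstrates why a symbolic link works where a copy would not: the link target still lives inside its original directory, so a real trowel can locate the rest of the application bundle relative to itself.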
Bite plate vs education for TMJ disorder in migraine treatment

Temporomandibular joint disorder (TMD), also known as TMJ disorder, is a disorder of the jaw that causes pain in the joint and surrounding muscles, often contributes to migraine pain, and impairs the jaw's movement. One way to treat this is a bite plate, also known as an occlusal splint. Depending on its design, a bite plate can stabilize and support the movement of the jaw, as well as prevent clenching of the teeth.

This study compared TMJ disorder patients treated with a bite plate to those who entered an education program about TMJ disorder. Forty-four patients were randomly assigned to the two groups, of whom forty-one completed the study. Group one was educated about TMJ disorder and what they could do about it themselves. This group had four males and nineteen females, with an average age of thirty-one years. The second group was treated with an occlusal splint, or bite plate, and was made up of five males and thirteen females, also with an average age of thirty-one years. Each patient was then assessed every three weeks for a three-month period. There was no significant difference in outcomes between these two groups of patients.

View the original study at this link: Evaluation of the short-term effectiveness of education versus an occlusal splint for the treatment of myofascial pain of the jaw muscles

- “...our findings indicate that for successful management of myofascial pain, education of patients regarding self-care as well as extensive communication between patient and doctor may be more effective than an occlusal appliance.”
Thrush is a fungal infection that is caused by the Candida species of fungus. This is normally Candida albicans. The fungus can grow in your throat, mouth, and other parts of your body. Thrush can affect different areas of the body. Although it is not normally anything to worry about in most cases, it may be uncomfortable and unpleasant. Occasionally, if you keep getting thrush infections, it may also indicate that you have another medical issue. Vaginal thrush is a common yeast infection that most women will get at some point. Common symptoms of vaginal thrush include: Oral thrush is more common in babies and older people with dentures. Typical symptoms of oral thrush include: In men, thrush usually affects the head of the penis. This can cause discharge, redness, and irritation. Other symptoms of thrush in men include: If you are a man and believe you may have thrush, you can visit our male thrush treatment service for more information and treatment options. If you have thrush, symptoms should go away within 7 to 14 days of beginning treatment. You may need to continue taking treatment (up to 6 months) if you keep getting thrush, in which case you must speak to a doctor about the best treatment. Mild cases of thrush may go away on their own, but if left untreated, the infection can worsen. It is best to speak to your doctor or pharmacist for advice.
How do WWT chemicals contribute to the value of industrial WWT?

Wastewater treatment chemicals are becoming more significant in the effluent wastewater treatment process. Because of increased operating expenses and rigorous administrative considerations, there is now a greater need than ever before to optimize operational performance. Fortunately, emerging innovations are assisting businesses in improving procedures while lowering regulatory costs. Companies that collaborate with Netsol Water achieve more consistent outcomes, allowing plants to run at higher levels of efficiency.

In primary industrial treatment, several chemicals are employed to clear wastewater. They are classified as follows:
- pH adjustment chemicals
- Coagulant chemicals
- Flocculent chemicals

Chemicals for pH correction

Chemicals used to regulate the pH of wastewater serve to change the ionic charge of the effluent. The pH is stated quantitatively, with values ranging from 0 to 14. A pH of 7.0 is neutral, a pH less than 7.0 is acidic, and a pH more than 7.0 is basic or alkaline. Changing the ionic charge of wastewater alters the solubility of specific materials and causes physical particles to attract each other. Choosing chemicals to alter wastewater pH necessitates consideration of the wastewater's chemical composition.

Basic or alkaline compounds are used to increase the pH of wastewater. Because of their availability, low cost, and high capacity, CaO (calcium oxide, or lime), MgO (magnesium oxide), Ca(OH)2 (calcium hydroxide, a hydrated form of lime), and Mg(OH)2 (magnesium hydroxide) are the most widely used compounds. Sludge bulk (volume) is a significant issue, although recovery is achievable. Lime is rich in calcium and is available as quicklime or hydrated lime. It is sold dry and must be combined with water to produce a slurry before usage. Sodium hydroxide (NaOH), also known as caustic soda, is a cost-effective, easily controlled, and widely accessible chemical.
It is often utilized for modest or infrequent applications, or where limiting sludge deposits is desired.

Sulphuric acid (H2SO4) is the cheapest and most widely accessible acidic chemical for lowering wastewater pH. It is very corrosive, thick, oily, and clear to dark brown in colour (depending on purity). It is available in a variety of grades ranging from 60 to 94 percent H2SO4. At the 93 percent grade it is noncorrosive to steel drums; however, after dilution, it becomes very corrosive. Carbon dioxide (CO2) and sulphur dioxide (SO2) in gaseous form may be used if accessible. Flue gases are readily available and cost-effective for neutralizing alkaline fluids in some sectors. In addition, waste streams with concentrated high or low pH can be used to adjust the pH of other waste fluids, rather than employing fresh chemicals to do so.

Coagulant chemicals

Coagulant chemicals are used in wastewater treatment to adjust the pH and start the coagulation of particles in the wastewater. Choosing chemicals to coagulate solids in wastewater necessitates consideration of the wastewater's chemical composition. Common compounds used to coagulate sediments in wastewater include aluminium chloride, aluminium sulfate, and ferric chloride (FeCl3).

Flocculent chemicals

Flocculent chemicals, which are mainly synthetic and have no impact on pH, are used in wastewater treatment to flocculate and remove solids from clarified wastewater. Choosing chemicals to flocculate and separate particles in wastewater necessitates consideration of the effluent's chemical composition.
Chemicals used to flocculate and separate particles in wastewater using quick mixing and mild agitation are classified as several types of polymers, which are listed below for your convenience:

1: Anionic polymers carry a negative charge in the wastewater and are used to separate solids using a quick mix and mild agitation.

2: Non-ionic polymers have a neutral charge in the wastewater and are used to separate solids using a quick mix and mild agitation.

3: Cationic polymers carry a positive charge in the wastewater and are used to separate solids using a quick mix and mild agitation.

4: Polymers are often acquired in concentrated dry or liquid form and mixed in water to a specific percent solution (e.g., a 2% solution) before usage.

How can Netsol Water help?

Netsol Water provides complete project support, from professional advice on proposals to ongoing maintenance performed by our specialized staff of experts. We also provide process assurances, which give you peace of mind that the required performance will be met on a consistent basis.
Wastes are materials that are no longer needed and are not economically recoverable through additional processing. They are derived from human activities, such as agriculture, industry, and home activities, among others. Domestic, industrial, commercial, clinical, construction, nuclear, and agricultural waste are the different types of waste. Waste is classified as inert, toxic, or flammable based on its properties. If these pollutants are not treated, they pollute the air, water, and land. As a result, the pollution caused by solid waste has serious consequences. Thus, solid waste management is critical.

Sources of Solid Waste Pollution

Municipal waste, industrial waste, and hazardous waste are the three types of solid waste. Municipal waste is generated by human domestic activities. Industrial waste is generated by industrial activity, and hazardous wastes are pollutants that endanger plants, animals, and humans. Radioactive substances, chemicals, biological wastes, flammable wastes, and explosives are some examples of common hazardous waste.

Factors contributing to solid waste generation

· Population growth

It is a significant factor influencing the rise of pollution. Solid waste is an urban issue in which people practice using a range of commodities and then abandoning them. In rich societies, per capita consumption is relatively high, and people discard many products on a regular basis, which contributes significantly to solid waste.

· Changing technology

It has altered the way people use things. It is clearly visible in the packaging business for the majority of economic commodities. The technology is shifting away from returnable packaging and towards non-returnable packaging. Returnable glass containers or bottles, for example, are being replaced with non-returnable cans, plastic containers, plastic bottles, and so on. Packaging materials, such as those made of plastic, are mostly responsible for solid waste pollution, because they are non-biodegradable.
What Pollution Is Caused by Solid Waste?

It is caused by poor solid waste management on the part of human activities, waste collectors, and waste disposal firms. This sort of pollution causes the spread of hazardous microorganisms in the environment, as well as disagreeable odours, which result in air pollution. The water also becomes contaminated, transmitting parasites and bacteria to humans.

· Air Pollution

The contamination of air by smoke particles and hazardous substances is known as air pollution.

· Land Pollution

This type of pollution is the degradation of the Earth's surface. It is mostly caused by improper waste disposal and resource mismanagement. Examples of land pollution include:
- Litter on every street corner and roadside
- Spills of oil
- Illegal dumping in natural environments
- Unsustainable logging operations

· Beach Pollution

Beach pollution is caused by wastes such as plastic bags, nets, or cigarette filters that are discarded on the beach. This waste is harmful to aquatic life and has an impact on the marine environment. It is sometimes caused by inconsiderate beach visitors who leave their waste behind after a picnic or get-together.

· Plastic Pollution

Plastic pollution is caused by non-biodegradable hard and soft plastics, which remain on the earth for thousands of years. Plastic can remain in the soil indefinitely, causing harm to the soil's health and composition.

Waste, whether solid, liquid, or gaseous, is damaging the environment, and solid waste contamination is mostly generated by urbanization and industrial waste. As a result, solid waste management is critical, as it aids in the reduction of solid waste pollution and the creation of a pollution-free, clean environment. Composting, recycling, incineration, pyrolysis, disposal, and other procedures are used to control solid waste pollution. In this way, solid waste management lowers or controls solid waste contamination and its dangerous consequences.

How can we assist?
Netsol Water provides solid waste converters, organic waste recyclers, green waste recyclers, and other solid waste treatment products to prevent pollution caused by solid waste. This is a sustainable way of treating solid waste: it does not cause any pollution, and it produces fertilizer that can be used as a soil conditioner.
Compared to some of the largest refugee crises in Europe, the genocide and exile of the Circassian people in Alexander II’s conquest of the Caucasus in the mid-1800s is all but forgotten. However, the Circassians were, in fact, the first stateless people in modern history. By 1864, the state of Circassia had lost the northeast shore of the Black Sea, which now forms a part of the Russian South, and after 600,000 of its native people had been slaughtered, the surviving villages were uprooted. While in the West the Circassians were an “extinct race”, in the Ottoman Empire, where the remaining population had been displaced, “Circassian” became a term for “highwayman”. Today, the Circassian diaspora is found in Israel, Turkey, Syria, Jordan and across the Middle East, while a small part of the population actually resides in the Caucasus, in Nalchik and Sochi (the last capital of the independent Circassian state). British photographer James Arthur Allen travelled to Israel’s Kfar Kama, which is among the only remaining villages in the world where Circassian continues to be taught and spoken. “The history of genocide is ingrained in their psyche” Allen’s interest in the Circassians arose while the photographer was shooting his ongoing series on Georgia, entitled The Land of Wolves. “I realised that in many ways the Georgians aren’t always Georgians, the entire country is made up of different tribes,” he says. Apart from Circassians, the other tribes that once composed the Caucasus include the Balkar, the Karachais and the Nogais. Researching Circassian history, Allen had found that not only were British maps of the 1800s amongst the few to recognise Circassia, but the Circassian national flag itself was designed by a Scotsman, David Urquhart, in an attempt to unite the Circassian tribes into one military force — hence the star-and-arrow design, each star representing a different tribe.
Today, the flag can be spotted on many of Kfar Kama’s restaurants, community centres and schools, while “Adiga” (Circassia’s native name) or “Adiga forever” is emblazoned on its streets and walls. “Everyone in the village is fiercely proud of being Circassian. If you ask them where they are from, they are not from Israel, they are from Circassia. The history of genocide is ingrained in their psyche, which is why they don’t marry outside of the Circassian community. It’s common for guys from Nalchik to travel here to find a Circassian bride,” the photographer explains. Kfar Kama has established itself as a second home for Circassian culture: each year children from Jordan and Turkey travel here as part of a summer camp to learn more about their heritage. One of the reasons for this is that schools in Kfar Kama are allowed to teach their own curricula: “Up until the age of 13-14 the kids attend only Circassian schools, where they are taught Circassian, Hebrew, English, Arabic, among other Circassian customs, such as traditional dance and accordion music. Then they go to Israeli school.” One of the most significant parts of Circassian tradition is habze, a cultural code that is followed by all generations. A true Circassian must never interrupt their elders, must respect women and strangers, and must even invite an enemy to stay in their home. “As I’d learned in Georgia, it’s all about trust. Before anything I had to get a blessing from the mayor. By the end of the week, people would invite me to stay with their family. The fact that I’d been to the Caucasus helped because they knew I’d made the effort and could understand the tensions in the region,” Allen recalls of his arrival in Kfar Kama. “There are direct flights to Nalchik. A lot of the Israeli Circassians go back frequently. They have one big meeting in Nalchik every year.” Outside of events and family visits, it is difficult to return there permanently. Would they go back tomorrow if they could? “Some say, yes.
Others say they are part of the fabric of Israel. They hold a social standing this way,” Allen reveals. Loyalty is another Circassian trait: Circassians fought for the Ottoman Turks up until the First World War. In Israel, which is entangled in its own internal conflict, the Circassians are known for bringing people together. “They have their football team which has the Circassian flag on it. In that team, you have Jewish, Circassian and Arab players. They really pride themselves on that,” Allen says. Among the portraits in the series is a photo of Kfar Kama FC. Talking about the locals’ love for football, the photographer explains the similarity between the younger generations and their Western counterparts. “They play FIFA, drink soft drinks, talk about motorbikes. I played football with them and got called Harry Kane and Steven Gerrard. They support whatever team is doing well in the Champions League, whether that’s Chelsea or Manchester City.” In spite of this, they are very traditional. “I think it’s a fear. They are so terrified of it all vanishing.” Living in the UK, the photographer had always accepted the idea that national identity could be inclusive. “I understand that I’m the sum of different cultures. When you go to the Caucasus, it’s an entirely different attitude. They say, ‘this is what we are’ and ‘this is the way it has been’. The fact that these people have been removed from it, but they are still so fiercely guarding that culture is quite an eye opener. It’s why I find it so fascinating,” Allen says. The series, Adiga, was funded by the Rebecca Vassie Trust as winner of the inaugural Rebecca Vassie Memorial Award. The photographer plans to go back to Kfar Kama and build a bigger body of work devoted to the Circassian population.
Personal trainers working with young athletes can strongly influence whether or not the child’s future gets monopolized by training and competing, literally defining his or her formative years. The knowledge base of the coach, as well as his or her approach and experiences, are of utmost importance to ensure a young athlete receives the training necessary, while still preserving the purity of childhood experience.

Starting with the End in Mind

A good athlete exhibits a combination of athleticism and sport-specific skills. When parents seek out a coach to cultivate an innate talent for a chosen sport, understanding the differences between these two factors is critical. Are they looking to recruit a skill-specific coach–a specialist in building the tools required for the child’s single interest? Alternatively, they may wish to begin with a strength and conditioning coach–one who excels in making an athlete faster, stronger, more flexible, etc. Often well-meaning parents want a single individual who, naturally, can optimize all of these factors for their child. However, this may be an untenable and unrealistic goal.

Budding young athletes perform best when participating in training that begins with general physical skill development. “Physically fit” athletes demonstrate basic levels of aerobic conditioning, coordination, mobility, flexibility, and balance–a strong foundation upon which to build. By introducing these young athletes to cross-training that runs the gamut from general athleticism to highly sport-specific, young clients may develop aspects of fitness that they may not have considered relevant. This involves combining components such as balance, flexibility, speed, strength, and stamina with the skills that will then propel them towards a more advanced level. A dedicated youth coach prioritizes patience over a demanding, punitive training style.
They focus rather on the positive skills the client exhibits, encourage young athletes to have fun, and teach how to minimize the importance of winning as well as how to graciously accept defeat. When embarking upon an intensive training protocol for a new athletically-inclined client, one who may choose to engage in competitive sports for several years, coaches face many unique challenges. Here we present some of the most critical aspects for trainers/coaches to consider before taking on a serious young athlete:

Not every top athlete ought to coach children

While many professional athletes-turned-coaches certainly have “walked the walk”, they may not have encountered an opportunity to train a budding athlete, and therefore may lack the necessary skills. Being sensitive to a developing personality may be an innate instinct, but many who went through a “tough love” athletic beginning may be unwilling to acknowledge that their experience was not the most ideal to pass along.

Not every dedicated parent can excel at coaching children

Frequently, parents who show up to every practice and competition end up getting recruited as coaches. While proceeding with the best intentions, they often make crucial mistakes in evaluating and executing appropriate training protocols. Dedication simply cannot take the place of experience, technical drills, nutritional planning, and sports psychology.

Overly ambitious coaches often cause athlete burnout

This type of coach focuses on immediate performance, often neglecting to acknowledge and therefore respect the age of the child. Once a young athlete shows real promise and seeks to specialize her focus (pitching versus overall baseball skills, for example; breaststroke in place of general lap-swimming), this approach is valuable; but up until that point, purely supportive, safe and respectful training paves the way for an ideal sports experience.
Manipulating the child as a vehicle for fulfilling parents' dreams

A frequently observed phenomenon involves parents pushing a child toward achieving what they themselves missed out on during their younger athletic years. When such parents find themselves "participating in the competition," young athletes feel pressured to perform, quite possibly beyond their abilities or desires. A quality coach knows how to spot this and intervene appropriately so as to avoid injury, whether physical or emotional, to his charge.

The coach demonstrates awareness of a sport's potential for harm

Over the course of childhood and pre-teen development, bodies often meet specific challenges due to excessive training loads. Parents can keep an eye out for scoliosis, fatigue, stress fractures, and nutritional deficiencies. On an emotional level, setbacks and championship losses may lead to burnout and/or depression, particularly in a driven, over-achieving young athlete. Coaches, therefore, must either have undergone training specifically geared toward addressing these issues or have a list of sports psychologists to whom they can refer families.

The coach forges a healthy relationship between the young athlete and their sport

The reality of competitive sports is that not every child reaches an elite level, regardless of his determination and hours of training. Armed with this truth, a quality coach must attempt to create a lifelong love of physical activity first and foremost, helping to cultivate a more realistic and wholesome outlook and habit as the child proceeds into adulthood.

The coach strives to create stable foundations upon which the young athlete can build as he or she readies to advance

Sports training must focus on mastering basic tenets as a springboard for successful competition.
This requires teaching technique, rules, standards of behavior, and the tactical procedures necessary to excel, while keeping in mind age- and level-appropriate motor skill development.

Safety, Health, and Biomechanics of Young Athletes

Athletes as young as seven years old can safely embark upon a strength training routine. A well-prepared coach will stress the importance of first consulting with the child's physician, especially if there is a family history or current evidence of conditions such as a heart condition, high blood pressure, or seizures. Most highly sought-after gyms use specialized youth-sized strength training equipment. A trainer/coach who demonstrates proficiency in utilizing such machines, as well as the patience and knowledge required to properly teach young athletes, will always stand apart from the masses.

Unfortunately, a majority of parents fail to take into consideration the principles of sports biomechanics as they apply to young, active bodies. They simply want the child to engage in training that looks like their sport of choice. The demands of any competitive sport place considerable stressors on the body. A quality personal trainer recognizes the need to prepare the body for such demands and eases into training and skill-building on an appropriate timeline.

Personality, Perspective, and Group Dynamics

Just as we acknowledge that no two adult clients possess the same goals and abilities, the same holds true when coaching young athletic teams. Getting to know the players, both as a collective and as individuals, can foster a deeper understanding of how each learns and processes new tasks and skills. The process takes time, patience, and trust, but it is well worth the effort. Only in this manner can a true leader understand what best motivates his team, thereby allowing each athlete to reach his full potential. Team-building opportunities help players bond; this, in turn, fosters an internal structure of support for each individual.
Successful youth coaches/trainers find ways to promote confidence and unity, all while enabling each young person to find his "physical voice" and rise to the level of his strengths. Coaches can cultivate parents' trust by setting aside time to meet, listening with a non-judgmental ear, and always pointing them in the direction best suited to the abilities of each child.

The personal trainer who chooses the path of youth coaching can find personal fulfillment in shaping a budding athlete's competitive sports success. Attending competitions, and witnessing those moments of glory firsthand, is a lifelong win-win for both parties.
Tragedy in Ethiopia: zero waste could have saved recyclers' lives

GAIA, March 15th, 2017. More than 70 recyclers were killed and others are still missing after tons of waste collapsed at the Koshe landfill of Addis Ababa, Ethiopia, last Saturday. The landfill has been receiving waste from the Ethiopian capital for more than 50 years — though for more than 7 years authorities have been aware of the landfill's inability to continue operating. This tragedy is the latest in a long list of accidents caused by the operation of landfills and incinerators, and a clear signal that something drastic needs to change.

Currently, construction is underway for a waste-burning incinerator. Yet like landfills, incinerators are highly prone to fires, accidents, and pollution hazardous to human health. If authorities proceed with the construction of an incinerator or any other technology that tries to handle an ever-increasing amount of waste, they will have missed an important lesson of this tragedy: the only way to protect life and health is to reduce the waste we generate and invest in zero waste strategies.

In the Global South, recyclers are working to expand their materials recovery activities, and there are hundreds of successful stories of collaboration between recyclers' cooperatives and local institutions. Unfortunately, this is not the case in Addis Ababa. Since the city identified its waste problem, valuable years were lost during which zero waste systems could have been implemented, along with programs that would have dignified recyclers' work and improved their safety. The pressure of local authorities to close the 50-year-old landfill and build a multi-million-dollar waste-to-energy facility came at the expense of the livelihood of waste pickers, who lost their only income source when construction of the incinerator began.
The negotiations ended in the approval of an incinerator that has taken years to build, is not yet in operation, and aims to burn 80% of the city's waste, at an investment cost of millions of dollars. Instead of these technologies — plagued by failures around the world — the city could be investing in education and outreach programs for recycling and composting, incorporating the recyclers who, left to their fate, today are buried under the waste the city tried to hide.

While the operation of advanced material recovery systems managed by municipalities is common in industrialized countries, in the Global South most recyclers are self-employed, mainly in the informal economy, recovering reusable and recyclable items. In this way, recycling provides livelihoods to 15 million people worldwide – 1% of the population in the Global South.
Do you have a child that seems to get jealous easily? Does it seem like they can't be happy for others' accomplishments or successes, and are constantly wanting what others have? If so, then this blog post is perfect for you! Avoiding jealousy within your children can be difficult, but with the right strategies in place and consistent communication from you as a parent, managing these feelings of envy can become more manageable. We will discuss the various forms of jealousy in children and provide actionable tips on how to effectively help them deal with those jealous emotions. All parents want their child to grow up without the stain of jealousy – let's work together towards finding harmony rather than strife in our homes!

Identify the Signs of Jealousy in Your Child

The signs of jealousy in a child can vary, but they usually include outbursts of anger or tears, aggression towards siblings or peers, making comparisons between themselves and others, or withholding affection. If not addressed properly, jealousy can cause negative outcomes for your child's psychological development. If you recognize any of these signs in your child, it is important to address the root of the situation and come up with solutions to help them cope with their feelings. Taking appropriate action from the start will ensure that your child doesn't experience more serious issues as they grow older. Here are 5 signs of jealousy in children:

- Unwanted Attention: Children who exhibit jealousy often become clingy and demanding of attention. They may demand more hugs or not want you to leave the room.
- Insecurity: Jealous children often feel insecure and will express this through negative behavior such as name-calling, bullying, or being spiteful towards others.
- Poor Concentration: If a child is feeling jealous, they might have difficulty focusing on tasks or school work.
- Possessiveness: A jealous child might display possessive behavior over their toys or belongings, becoming upset if someone else wants something that belongs to them, even if it is regularly shared with others in the family.
- Aggression: Some jealous children will display aggression towards siblings or peers over seemingly trivial matters due to their underlying feelings of jealousy and insecurity about losing attention from adults or peers alike.

Help Your Child Understand That It's Normal to Experience Jealousy

Jealousy is a normal feeling among children that should not be ignored or condemned. As parents, it's important to recognize this emotion and help your child understand it. Have an honest conversation with your child about jealousy and explain why they feel it when they see other people around them get attention. Be open-minded and raise awareness of how to cope when these difficult moments arise. Remind them that they are not alone in feeling jealous, especially as young minds are often overwhelmed by emotions. You can also try activities like journaling to help express their feelings and encourage a healthy dialogue between the two of you. The key takeaway is for your child to remember that experiencing jealousy does not make them a bad person – it simply means that they care deeply about something, which is ultimately a sign of growth and maturity.

Ask Your Child Open-Ended Questions About Their Feelings

When dealing with feelings of jealousy in children, it is important to have an open and honest dialogue. One way to facilitate such a discussion is to ask your child open-ended questions about their emotions. This helps them put words to the strong emotions they may be experiencing, allowing for further understanding and acceptance. Open-ended questions can also help explore the underlying causes of jealousy, which could be anything from low self-confidence to fear of missing out.
In the end, clear communication and healthy conversations are the keys to reducing feelings of jealousy among children.

Encourage Self-Compassion and Self-Care Strategies

When dealing with jealousy in children, it is important to encourage self-compassion and self-care strategies. Being able to think positively about oneself is key to helping children develop tools that can help them manage their feelings of jealousy. This can include activities such as positive affirmations, journaling, and recognizing the strengths and abilities the child may possess. Parents can also promote self-care habits such as regular exercise, healthy sleep patterns, and balanced eating through role modeling and by providing a safe space for the child to explore different hobbies and activities. Self-care strategies, with parents' support, can make a real difference in helping children cope with feelings of jealousy.

Provide Opportunities for Children to Connect With Others

Providing opportunities for children to build meaningful relationships with other people is a great way of helping them manage jealous feelings. It's important for parents to prioritize connecting their children with friends and family, both in person and virtually, as this sets the foundation for their social development. It also allows their young minds to understand the concept of different types of relationships, such as friendships, siblings, and family. Furthermore, it provides an outlet where they can express themselves without judgment or retaliation and consequently make healthier decisions when facing jealous outbursts. Allowing children to regularly engage in activities that bring them joy, while also having the opportunity to interact with others, will help them develop healthy coping strategies when confronting issues related to envy.
Model Healthy Ways of Interacting With Others

Parenting is a balancing act, and when it comes to teaching our children healthy ways of interacting with others, modeling the behavior we want to see is key. It's important to explain why jealous behaviors are unhealthy, as well as how they can be damaging in relationships. As adults, we should strive to show our children how to handle jealousy the way we would want them to manage their own: without resorting to hurtful words or aggressive behavior. Teaching values of empathy and understanding, for oneself and others, through role-play and open conversations about feelings can help children learn to feel secure in their relationships with family and friends, instead of resorting to jealousy as a means of coping.

Ultimately, dealing with your child's jealous behavior is about helping them find their own balance with emotions, other people, and strategies for responding to situations. By identifying the signs of jealousy early, using open-ended questions to motivate curiosity and growth, helping your child understand that it's normal to experience jealous feelings, teaching them self-compassion strategies, providing opportunities for connection with others, and modeling healthy ways of interacting in relationships, you can empower your child to develop a sense of self-confidence while learning how to manage not only their inner life but also their outer world. Through these actions you can set a strong foundation that encourages healthy social interactions, which will support all aspects of personal development in the long run.

This article is for informational purposes and does not take the place of your or your children's doctors. Please always consult with your doctor.
Behavioral health environments play an essential but often overlooked role in our healthcare system. These specialized medical facilities provide care and treatment for individuals struggling with anxiety disorders, depression, bipolar disorder, substance abuse, and mental health crises. The critical issue today is that many patients lack access, while facilities lack enough beds to keep up with those who do. As behavioral and mental health cases continue to increase, it's important to take a deeper look at the environments built to support patients. Can design impact patient recovery? And how can designers, contractors, architects, and suppliers work together to positively impact comfort and well-being? To find out, our colleagues at OpenSquare in Seattle hosted a panel of behavioral health providers and design experts for an in-depth discussion of this critical issue. Here's what we learned:

Balancing Welcome and Safety

When designing for behavioral health environments, safety isn't just important – it's critical. That's because patients going through mental health treatment often feel vulnerable. Designers should look for new ways to make spaces feel welcoming while offering patients more choice and control to minimize anxiety. Putting a little more welcome into environments can go a long way toward enhancing wellbeing.

Safety also extends to each patient's physical wellbeing. For example, our panelists agreed that behavioral health facilities should always be designed to reduce the risk of self-harm or harm to staff, family, and visitors. Provider protocols, training, and wellness are critical components for success, along with design interventions. Design features should enhance transparency and visibility for safety: clear sightlines allow staff to supervise patients and proactively ensure safety.
Access control tools (card readers and keypads) and safety protocols help manage who can enter and exit specific areas of the facility, and anti-ligature measures are universally required. In the past, these features might have made monitoring easier without further consideration of the patient experience. Today, design also incorporates open gathering areas that encourage better socialization for accelerated recovery.

Comfortable, Welcoming Spaces

Behavioral health treatment is often stressful. To increase patient comfort and provider wellbeing, designers should consider creating welcoming, home-like environments that support activities known to reduce stress, like yoga, meditation, and art making. Introducing design elements that enhance comfort, like warm, bright, earthy tones and soft furniture, can make historically clinical environments feel more hospitable. Non-threatening environments where patients can feel relaxed, comfortable, and "at home" contribute to recovery; features such as rooms with en suite bathrooms and access to nature can all positively impact patients' mental and emotional wellbeing.

Consider the Senses

The more designers can infuse emotional and mental stability into each space, the better the healing process becomes. This can be achieved by engaging the senses: quiet spaces for meditation or relaxation; variation in lighting sources and qualities to create calming settings that help balance the patient's mood and promote better sleep; and artwork and music to encourage a greater sense of wellbeing. Flexibility in spaces that promote social interaction, such as communal areas where patients can interact with each other and participate in the group activities mentioned earlier, can be achieved with furniture pieces that can be moved, albeit with concerted effort. When safety, comfort, and healing are prioritized, behavioral health environments can become catalysts for healing.
These spaces give patients more than the tools and space they need to feel their best. They also offer caregivers an environment where they can provide the best care possible. However, this takes a specialized team with experience and expertise in designing behavioral health environments to ensure that the spaces you create are both functional and conducive to the patient's recovery journey. The primary objective is to create healing spaces that are welcoming, engaging, and safe.

More to Come

Our conversation concluded where we began: recognizing the lack of adequate behavioral health resources to support the State of Washington and the country, with positive encouragement to be aware of pending legislation and to support additional resources whenever possible. One Workplace Healthcare team members are available for behavioral health design consults and conversations. Please reach out with questions any time. And stay tuned: we will continue the conversation.

Hero image: Civil Center for Behavioral Health at Maple Lane by BCRA
The equality of all humans should be one of the most fundamental principles embedded in the moral frameworks and legal systems of civilised societies. It rightly forms the basis of Article 1 of the Universal Declaration of Human Rights. Unfortunately, such a fundamental principle has not been properly established in many countries.

Equality is denied when discrimination occurs. Discrimination is relatively commonplace, and particularly firmly entrenched in many religious organisations. Widespread discrimination can lead to intolerance and conflict, because, unsurprisingly, those who are discriminated against object to being treated as second-class citizens.

Australia's recently drafted Human Rights and Anti-Discrimination Bill 2012 is commendable in its objectives but does little to reduce discrimination. It claims aspirationally 'to eliminate discrimination, sexual harassment and racial vilification, consistently [sic] with Australia's obligations under the human rights instruments and the ILO instruments'. However, this proposed legislation offers special measures, including exemptions, to religious organisations so they can continue to discriminate on attributes such as religion, sexual orientation, gender identity, pregnancy, etc. It would seem that most governments lack the courage to stop religious organisations from discriminating. Some religions discriminate against people if they are not of the requisite religion (and religion is often related to race and culture), or of the preferred sex, sexual orientation or marital status. Australia's proposed legislation does not remove this inequity.

A distinction should first be made between invidious discrimination, which should be eliminated, and appropriate differentiation of individuals or groups. Invidious discrimination occurs when a person or organisation treats others unfavourably because of their particular attributes, whether that be a person's sex, sexual orientation, marital status, race, etc.
In contrast, appropriate differentiation would allow, for example, segregated sporting events for men and women, or an age limit for learner drivers, without claims of discrimination, because a reasonable and objective explanation can be given in these cases.

Within this framework, it is apparent there is no reasonable and objective explanation why a mathematics teacher at any school could not be an unmarried, pregnant, multi-coloured lesbian of no religion (or of another religion). The ability to teach mathematics is independent of the aforementioned attributes. To be denied a job because of a person's particular attributes is a denial of equality that ought not be tolerated in a civilised society.

The discriminatory and bigoted values of some of the mainstream churches are no more ethically 'right' than the racist values that were relatively commonplace in the middle of the twentieth century. How can racial discrimination be ethically wrong but sexual discrimination be permitted? How can there be a moral basis for an Islamic black man who discriminates against women complaining that he is being discriminated against? How can a religious male politician who denies lesbians the right to marriage, or to be a leader in his church, claim that he treats people equally? There is no justification for any of these situations, because there is no moral distinction between these types of invidious discrimination. Intolerance of, and discrimination against, people with particular attributes is bigotry.

Many religions try to justify their religious discrimination as a right: the freedom to practise one's religion. But such a right impacts adversely on others. So what happens when there is a conflict between religious freedoms and the rights of an individual, such as an individual's right to be treated equally and not to be subjected to invidious discrimination?
Many religions preach some variant of the ethical golden rule: do unto others as you would have them do unto you. Members of one religion would not like members of other religions to exercise their religious freedom if that involved the imposition of the other religion on them, or allowed the other religion to kill them (if that were a 'view' of the other religion). Even if it were something more trivial, such as having another religion's eating rituals imposed on them, this would be a cause of stern objection.

That people do not want their individual rights to be violated by another religion (or any other person, organisation or government, for that matter) is the key. It is then straightforward to conclude that freedom of religion should only extend so far as it does not impinge on the rights of other individuals. People can believe in and practise what they wish, no matter how profound, or silly and deluded, that might be, but not if it denies other people's equality or human rights, causes discrimination, or otherwise adversely affects other individuals.

A regime of religious discrimination juxtaposed with a professed golden rule is hypocrisy. To avoid claims of hypocritical, bigoted views, one would think that religious organisations would reject their current discriminatory positions and advocate legislative change that condemned and prohibited all invidious discrimination. Unfortunately, enlightened change is not the way of the bigot.

To explore further the nature of religious discrimination, consider the following scenario. What if a new religion were to be established tomorrow, and an inspired person drafted a religious text that reflects the views of the newly conceived and perfect God? The newly drafted religious text includes the following verses, attributable to the new God.

· A black person should learn in quietness and full submission.
I do not permit a black person to teach or to have authority over a non-black person; the black person must be silent.
· Any black person who is arrogant enough to reject the verdict of the priest who represents your God must die.
· A black person who works on God's holy day will be put to death.
· If a person has sex with a black person, both of them have done what is detestable. They must be put to death; their blood will be on their own hands.

The above verses are racist and abhorrent. They deny black people equality. Such a religious text must be treated with the contempt that any racially discriminatory text deserves. The proponents of the new religion would say that their God moves in mysterious ways, or that the text is not meant to be taken literally. Neither explanation conceals the underlying racism and discrimination.

The astute observer would realise that these verses have been extracted from the Christian Bible and reworked to substitute the phrase 'black person' in biblical verses that condemn women, non-believers, a person who works contrary to God's laws, and homosexuals. It is clear that the terms 'woman', 'gay, lesbian, bisexual, transgender or intersex person', 'Caucasian male' or 'pregnant person' could have been similarly substituted.

If the newly drafted religious text is abhorrent, discriminatory and unacceptable in modern society, then so too is the Christian religion. Other discriminatory religions and organisations should be condemned with equal vigour. It follows that public funding or support of any discriminatory religious organisation should be handled in the same manner as that for a body that discriminates on the basis of race: governments should condemn them and never support or finance them, directly or indirectly. It is absurd in modern society that governments give massive tax exemptions, and exemptions from discrimination legislation, to religious organisations.
Many religions teach that only people of their religion are worthy of reward in a speculated afterlife. They discriminate in churches and hospitals, educational institutions and nursing homes. In recent times, a horrendous record of child sexual abuse in religious institutions has become public. Furthermore, many churches indoctrinate children to worship a god or gods that, according to their own scriptures, are guilty of indiscriminately killing humans: the most warped of moral messages. Religions peddling discrimination and perverse moral messages deserve condemnation.

It would seem that the Universal Declaration of Human Rights is no more than an aspirational piece of paper. People must work hard to secure the most fundamental of rights because, while governments continue to allow people, organisations and religions to invidiously discriminate, there can be no equality.
The problem with working on computers is that one experiences a certain type of physical strain from constantly sitting in front of the keyboard, while the eyes suffer due to excess exposure to the monitor's light. This is worse when using a laptop, especially at night when working in bed with the lights off. This article looks at the latter scenario and recommends the use of Redshift to protect your eyes.

The position one sits in can be adjusted for comfort, and eye strain from the monitor can be reduced by using computer glasses or by following the common 20-20-20 rule, which states that you need to rest your eyes every 20 minutes for a period of 20 seconds by looking at objects that are at least 20 feet (about six metres) away. If you do not take enough care, you will suffer from red eyes, blurred vision, headaches, etc. If this continues over a long period of time, you might even suffer from computer vision syndrome (CVS).

The disadvantage of computer glasses is that they are costly (the basic versions start from about ₹5,000 (US$ 85) in India). There may be other problems associated with these glasses, like not being comfortable, leaving marks on the face and negatively altering a person's appearance. I suggest that you use them even if you are using Redshift, and maybe you can then increase the limit for working continuously at a screen to 30 minutes or an hour from the earlier twenty minutes.

How Redshift works

Jon Lund Steffensen (the creator of Redshift) says that he created his own tool because f.lux didn't work well for him. He made it open source (GPLv3) and it is under constant development. You can contribute to its development at https://github.com/jonls/redshift. The working of Redshift is simple: it adjusts the colour temperature of the monitor based on the time and the surroundings.
In the background, it makes use of the RandR (resize and rotate) extension to the X11 display protocol, or of Wayland (which is intended to replace X11), to ask the display server to adjust the colours. An alternative to RandR is the VidMode extension. Redshift can also be configured to change the colours individually and to reduce the brightness in a different manner. It doesn't actually reduce the light from the monitor; instead, it applies a black filter to the screen, which creates a similar effect. Adjusting both the temperature and the brightness makes the monitor very eye friendly. Figures 1, 2 and 3 give three screenshots of opensourceforu.com with Redshift in various configurations.

Ubuntu/Debian: Installing Redshift in Debian/Ubuntu can be done by issuing the following command in a terminal:

sudo apt-get install redshift redshift-gtk

After installing redshift-gtk, you get a status icon in the tray with which you can control Redshift very easily. You can toggle it on and off, and pause Redshift for certain periods if you are doing colour-sensitive work such as editing pictures or videos.

Fedora/Red Hat: In these systems, you can use the Yum package manager to install Redshift, as follows:

sudo yum install redshift redshift-gtk

Note: In the recent Fedora 22 release, Yum has been replaced by dnf, which is better. So dnf should be used in place of yum in the above command.
SUSE-based systems: If you are using a SUSE-based system, you can use zypper, as below:

sudo zypper install redshift redshift-gtk

Arch-based systems: The command for Arch Linux will be:

sudo pacman -S redshift redshift-gtk

If you are using some other Linux distro whose package manager doesn't carry the Redshift package, you can install it from source using the following commands:

wget -c https://github.com/jonls/redshift/releases/download/v1.10/redshift-1.10.tar.xz
tar xf redshift-1.10.tar.xz
cd redshift-1.10/
./configure --prefix=$HOME/redshift/root \
--with-systemduserunitdir=$HOME/.config/systemd/user
make
make install

Note: Before version 1.8, redshift-gtk was known as gtk-redshift, so you might need to change the name while installing. There is also redshift-qt for the KDE desktop.

By default, Redshift configures itself based on the user's location. The location is obtained using GeoClue, which may or may not work out of the box in all cases. It can also be configured manually: uninstall redshift-gtk (but not redshift) and follow the steps below to use Redshift without the GUI.

Location: The location for Redshift can be set manually by using the -l (location) flag:

redshift -l 22.5:88.3

The above is the latitude and longitude of Kolkata, which works for most of India. Feel free to change the value to that of the city nearest to you.

Temperature: The temperature of the screen can be set by using the -t (temperature) flag:

redshift -t 4500:3000

where 4500 is the daytime temperature and 3000 is the night temperature.

Brightness: The brightness of the screen can be set by using the -b (brightness) flag; its value can be between 0.1 and 1.0:

redshift -b 0.8

Note: As mentioned above, the brightness setting doesn't reduce the light from the screen; it applies a grey filter, which creates a similar effect. You can combine the above flags according to your needs.
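If GeoClue is unavailable, the day/night switch can also be driven by hand from a script or cron job using Redshift's one-shot mode (redshift -O TEMP, which sets the temperature once and exits). The sketch below picks a temperature purely by the hour and prints the command it would run; the 06:00 to 18:00 daytime window and the function name are my own assumptions, not Redshift defaults.

```shell
#!/bin/sh
# Pick a colour temperature for a given hour (0-23):
# daytime (06:00-17:59) -> 4500K, otherwise -> 3000K.
temp_for_hour() {
    hour=$1
    if [ "$hour" -ge 6 ] && [ "$hour" -lt 18 ]; then
        echo 4500
    else
        echo 3000
    fi
}

# Print (rather than execute) the one-shot command for the current hour.
echo "redshift -O $(temp_for_hour "$(date +%H)")"
```

Swapping echo for direct execution turns this into a tiny cron-driven replacement for the automatic location lookup.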
Configuration file: You can also use a configuration file instead of the above commands. Here is the example configuration file given by Jon Lund Steffensen on the Redshift website. You can create the file in your favourite editor and save it at ~/.config/redshift.conf:

; Global settings for redshift
[redshift]
; Set the day and night screen temperatures
temp-day=5700
temp-night=3500

; Enable/Disable a smooth transition between day and night
; 0 will cause a direct change from day to night screen temperature.
; 1 will gradually increase or decrease the screen temperature.
transition=1

; Set the screen brightness. Default is 1.0.
;brightness=0.9
; It is also possible to use different settings for day and night
; since version 1.8.
;brightness-day=0.7
;brightness-night=0.4

; Set the screen gamma (for all colors, or each color channel
; individually)
gamma=0.8
;gamma=0.8:0.7:0.8
; This can also be set individually for day and night since
; version 1.10.
;gamma-day=0.8:0.7:0.8
;gamma-night=0.6

; Set the location-provider: geoclue, geoclue2, manual
; type 'redshift -l list' to see possible values.
; The location provider settings are in a different section.
location-provider=manual

; Set the adjustment-method: randr, vidmode
; type 'redshift -m list' to see all possible values.
; 'randr' is the preferred method, 'vidmode' is an older API
; but works in some cases when 'randr' does not.
; The adjustment method settings are in a different section.
adjustment-method=randr

; Configuration of the location-provider:
; type 'redshift -l PROVIDER:help' to see the settings.
; ex: 'redshift -l manual:help'
; Keep in mind that longitudes west of Greenwich (e.g. the Americas)
; are negative numbers.
[manual]
lat=22.5
lon=88.3

; Configuration of the adjustment-method
; type 'redshift -m METHOD:help' to see the settings.
; ex: 'redshift -m randr:help'
; In this example, 'randr' is configured to adjust screen 1.
; Note that the numbering starts from 0, so this is actually the
; second screen.
; If this option is not specified, Redshift will try
; to adjust _all_ screens.
[randr]
screen=0

Note: I suggest omitting the screen option from the configuration file, so that Redshift adjusts all screens, including external monitors.

Making Redshift a start-up application: You can make Redshift start at boot using the status icon in Ubuntu. If you have any difficulty with the status icon, you can add Redshift to the start-up applications manually. If you can't find the start-up applications tool in your Linux distribution's desktop environment, you can instead create the file ~/.config/autostart/redshift.desktop with the following contents:

[Desktop Entry]
Type=Application
Exec=redshift -O 3000
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name[en_IN]=Redshift
Name=Redshift

Redshift vs f.lux

The most important reason to use Redshift rather than f.lux is the freedom that comes with it. It is licensed under GPLv3 and works very well with GNU/Linux (it can be used on Windows too). The only disadvantage is that it can't be installed on OS X. If you face any difficulties with Redshift, you can report them at https://github.com/jonls/redshift/issues/new.
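To tie the configuration steps together, here is a short script that writes a trimmed-down version of the config file discussed above. The CONF override variable and the reduced key set are my own choices for illustration; the values are the article's Kolkata example.

```shell
#!/bin/sh
# Write a minimal redshift.conf. CONF defaults to the standard path
# but can be overridden (e.g. for testing in a scratch directory).
CONF="${CONF:-$HOME/.config/redshift.conf}"
mkdir -p "$(dirname "$CONF")"
cat > "$CONF" <<'EOF'
[redshift]
temp-day=5700
temp-night=3500
transition=1
location-provider=manual
adjustment-method=randr

[manual]
lat=22.5
lon=88.3
EOF
echo "wrote $CONF"
```

Running it once gives you a working manual-location setup that Redshift picks up on its next start.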
<urn:uuid:8d064fb5-850d-4fbc-8f80-64184c985bdd>
CC-MAIN-2023-50
https://www.opensourceforu.com/2015/08/protect-your-eyes-with-redshift/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.873471
2,017
2.5625
3
Winemaking is a delicate and millennia-old craft. To achieve a desirable product, vintners have to pay close attention to soil, rain, heat, and sunlight. Mice and gophers are another problem, one that vintners often turn to rodenticides to solve. In a bid to provide a more natural solution, a team of students at Humboldt State University in California is testing the efficacy of a centuries-old practice: using owls to hunt pesky rodents. As part of the long-term study, the researchers placed around 300 owl nest boxes at random locations throughout vineyards in Napa Valley. The aim of the study is to test how effective owls are at removing pests and whether they can offer a feasible natural alternative to pesticides. The students surveyed a total of 75 wineries in Napa Valley, 80 percent of which have reported a difference in rodent control since they started using the owl nest boxes. During the nesting season — which lasts around four months — barn owls spend about one-third of their time hunting in the fields. On average, a family of barn owls eats about 1,000 rodents during the nesting season. So far, the study has revealed that the owls are doing a pretty good job of reducing the number of gophers in vineyards, while the number of mice hasn't been affected. That said, the key part of the study is whether the owls are leading to a decrease in rodenticide use in Napa Valley. According to the researchers, that is indeed the case, with most of the participating vintners reporting using no poison since they introduced the owl nest boxes.
<urn:uuid:3f8284fc-5b1b-4480-964e-4a2398b08468>
CC-MAIN-2023-50
https://www.optimistdaily.com/2022/03/owls-are-helping-winemakers-stay-away-from-pesticides/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.960585
338
3.171875
3
Early orthodontic treatment, also known as interceptive orthodontic treatment, is used to prevent future orthodontic issues. Between the ages of seven and 14, the teeth and jaw are still developing, making them more malleable for effective interceptive orthodontic treatment. Sometimes patients as young as seven years old are able to begin orthodontic treatment, though their candidacy depends on specific conditions of their mouths.

Is your child a candidate for early orthodontic treatment?

To determine if your child is eligible for early orthodontic treatment, we recommend visiting the orthodontist at age seven. The earlier problems are caught, the better your child's oral health will be in the future. Your child must have at least one of the following conditions in order to be considered for early orthodontic treatment:
- Crowded teeth: When a child's jaw is too small, the result is severe dental crowding, where some or all of their teeth overlap.
- Gapped teeth: Excessively spaced teeth. A gap in the upper front teeth is the most common type an orthodontist sees.
- Underbite: The lower front teeth overlap the upper teeth as a result of the lower jaw being pushed forward.
- Jaw irregularities: An unusual jaw size or narrow dental arch.
- Crossbites: When the jaw shifts to one side.
- Thumb or pacifier sucking: Long-term thumb or pacifier sucking has affected the teeth or jaw growth.
- Mouth breathing: If your child only breathes through their mouth, it can cause crooked teeth, facial deformities, or poor growth.
- Extra or missing teeth: Orthodontic treatment will need to be adjusted accordingly. For extra teeth, extractions may be necessary.

Types of Early Orthodontic Treatment

Palatal expander: When the jaw has formed abnormally and is not wide enough for permanent teeth to erupt, this device expands the jaw over time to create more space.
A wider jaw allows your child to receive more effective and quicker orthodontic treatment, by means of clear aligners or traditional metal braces. This device is also useful for children with a narrow palate; it helps align the upper teeth and jaw. Other common conditions where a palatal expander may be needed include impacted teeth, crossbites, dental crowding, and breathing problems. Palatal expanders can only be used on younger children since their jaws are still developing.

Traditional braces: An orthodontist's go-to for orthodontic treatment, traditional braces are commonly used in instances of misaligned teeth, crooked teeth, or other bite problems. While this equipment is usually reserved for patients between the ages of 10 and 14, some children younger than ten might need traditional braces if they have severely overcrowded teeth, an underbite, gapped teeth, or missing teeth.

Headgear: This device may be necessary if your child's jaw is severely misaligned. It is used together with braces and is secured around the head and face with a neck strap. It's important to note that braces are only capable of correcting teeth positioning; headgear encourages proper jaw growth. There are three common types of headgear:
- Cervical pull for overbites and underbites
- Reverse-pull for underbites and crossbites
- High pull for open bites

Clear aligners: Also known as invisible braces, clear aligners have steadily grown in popularity over the years because they are removable, hard to see, and comfortable to wear. While usually meant for adults, some younger children can use them if they still have baby teeth.
<urn:uuid:94e6516f-0b42-470d-b35f-9a282702af52>
CC-MAIN-2023-50
https://www.paulafillakdmd.com/articles/premium_education/915177-early-orthodontic-treatment
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.925197
749
3.3125
3
October is Breast Cancer Awareness month, so I wanted to take some time and highlight the topic a bit more. Breast cancer is something that runs deep in my family, so it hits a little closer to home. Both of my grandmothers and one great-grandmother all suffered with the diagnosis. Thankfully, due to the amazing medical community and those who have dedicated their careers to improving the lives of patients, I am lucky to still have both grandmothers here to this day. Since breast cancer is the second most common type of cancer among American women (behind skin cancer), it's important to remain educated on the topic so that we can remain as vigilant as possible through continuing education, screenings, and early detection. Below I've highlighted the different types of breast cancer, some things to keep an eye out for, and the strides being made within the field.

Breast Cancer 101:

Ductal Carcinoma: It is the most common type of breast cancer and begins in the lining of the milk ducts. Ductal carcinoma may be either invasive (IDC) or non-invasive (ductal carcinoma in situ, DCIS).
- Invasive vs. Non-Invasive: Non-invasive ductal carcinoma (also called ductal carcinoma in situ, or DCIS) is an early cancer stage that has not spread beyond the ducts. It is usually caught during a routine breast exam or mammogram. If ductal carcinoma spreads to the surrounding tissue, it is considered invasive. Invasive ductal carcinoma is the most commonly diagnosed form of breast cancer.

Lobular Carcinoma: This cancer begins in the lobules (milk glands) of the breast and may be either lobular carcinoma in situ (LCIS, non-invasive) or invasive lobular carcinoma (ILC). LCIS rarely becomes invasive but having it in one breast increases the risk of developing invasive cancer in either breast.

Inflammatory Breast Cancer: A type of breast cancer in which the breast looks red, swollen and feels warm. The redness and warmth occur because the cancer cells block the lymph vessels in the skin.
Molecular Receptor Status/Subtypes: Receptors are molecules that cancer cells produce on their surface, and there are 3 main subtypes that are commonly identified in diagnosis.
- HER2-Positive: Breast cancer is HER2-positive when it has much higher levels of the HER2 protein than normal.
- Hormone Receptor-Positive/ER-Positive: This is the most common type of breast cancer. This form's receptors bind with either estrogen or progesterone, which are naturally occurring hormones in the body.
- Triple-Negative Breast Cancer: This describes breast cancer cells that do not have estrogen receptors, progesterone receptors or large numbers of HER2/neu protein. It is also called ER-negative PR-negative HER2/neu-negative breast cancer.

Recurrent Breast Cancer: Cancer that comes back when treatment doesn't fully remove or destroy all the cancer cells.

What To Look For:

There are different schools of thought now on self-breast exams, and while they shouldn't be your sole tool, they can be a part of your screening. You can find a pretty thorough step-by-step self-exam guide here. According to MD Anderson, "Breast cancer symptoms vary from person to person and there is no exact definition of what a lump or mass feels like. The best thing to do is to be familiar with your breasts so you know how "normal" feels and looks". It's also important to note that just because you might experience some of the symptoms doesn't guarantee you have breast cancer, so it's critical to see your doctor to discuss any concerns. Some common symptoms can include:
- Lump or mass in the breast or underarm
- Localized, persistent breast pain
- Swelling of all or part of the breast
- Changes in the breast skin (irritation, redness, thickening, scaliness)
- Changes in the nipple (discharge, dimpling, puckering, changing direction)
- Any changes in breast size or shape

It's Not All Bad News!
We’re lucky there are so many doctors, researchers, and advocates dedicated to improving the lives of breast cancer patients with the goal of eradicating the disease. Due to hard work and continued research, major advancements within breast cancer diagnosis and treatment are being made regularly. There are many circumstances that are out of our control, but there are things you can do to help reduce your risk. In addition to remaining observant, here’s an article from MDACC for a few ways you can help reduce your risk for breast cancer. While there are more than 275,000 new breast cancer cases diagnosed in our country each year, new therapies have increased the five-year survival rate to 90%. And there are now nearly 3 million breast cancer survivors in the US! Please know we’ll be thinking about you, but there’s also no shortage of support for those impacted by breast cancer. One resource is https://community.breastcancer.org/ which has a number of forums, engagement opportunities, and webinars.
<urn:uuid:5665ff59-41db-4a54-8215-d95805e3ea4e>
CC-MAIN-2023-50
https://www.pharmafinders.com/2021/10/20/breast-cancer-awareness-month/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.938636
1,117
3.140625
3
Calcium phosphate in the form of di- and tricalcium phosphate is a natural calcium-phosphorus compound. Calcium phosphate is, among other things, a constituent of bones and teeth in humans and animals. The substance is used as a filler and anti-caking agent to achieve a stable and even distribution of a tablet's ingredients. In addition, the phosphorus content enhances the effect of the antioxidants present. Both forms of calcium phosphate have E-number 341. The difference between the two forms is that tricalcium phosphate contains more calcium than dicalcium phosphate. The rules prescribe that phosphate is spelled with "ph". Dicalcium phosphate and tricalcium phosphate are substances that can be found both as excipients and as active ingredients. The amount of calcium phosphate in a preparation is, of course, significantly less when it acts as an excipient than when it is included as an active substance, that is, as a calcium supplement. As an excipient, a maximum of 70 mg phosphate/kg body weight is permitted. Pharma Nord uses di- and tricalcium phosphate in which the calcium part is extracted from limestone that has been chemically treated. We do not use animal sources for these substances.
<urn:uuid:9a9484f5-b390-4ae2-be4a-3a8c64299a07>
CC-MAIN-2023-50
https://www.pharmanord.ie/ingredients/articles/calcium-phosphate
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.942403
248
2.65625
3
How to Plant, Grow and Care for Yoshino Cherry Tree – Full Guide

The Yoshino Cherry Tree, scientifically known as Prunus × yedoensis, is a species of cherry tree that hails from Japan. Its most mesmerizing feature is undoubtedly its blossoms, which emerge in early spring. The foundation for a thriving Yoshino Cherry Tree lies in the quality of its soil. These elegant trees flourish best in well-drained soil that doesn't retain excessive water, which can lead to root rot. These trees, belonging to hardiness zones 5 to 8, are resilient within a certain temperature range. While they can withstand cold winters typical of Zone 5, they also thrive in milder climates found in Zone 8. Preparing your Yoshino Cherry Tree for the winter months is vital to ensure its resilience and vitality come spring. As the temperatures drop and frost sets in, these trees, like many deciduous plants, enter a state of dormancy.
<urn:uuid:24f38949-fcb7-4f7d-b750-5a912041896c>
CC-MAIN-2023-50
https://www.planetnatural.com/web-stories/how-to-plant-grow-and-care-for-yoshino-cherry-tree-full-guide/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.93269
200
2.953125
3
Agriculture is the backbone of our civilization. From feeding ourselves to providing for our communities, it's a crucial part of human life. Solar pumps for irrigation offer an innovative solution. They allow us to efficiently use sunlight to power systems that transport water from its natural source into areas needed for agriculture.

Benefits Of Solar Pump Systems For Irrigation

Irrigation systems powered by solar pumps provide a more reliable water supply, reduce operational costs, and are better for the environment than other options. Solar pumps use photovoltaic cells to convert sunlight into electricity, which powers the pump motor. There are no moving parts inside the solar panel, so they require little maintenance and have long lifespans. Additionally, as they don't need any additional fuel or resources to operate, they can help save on operating costs compared to diesel-powered models. These features make them attractive for farmers looking to increase their yields with efficient water management.

How A Solar Pump System Works

A solar pump system consists of three main components: a photovoltaic (PV) panel array, an inverter, and one or more submersible electric pumps. The PV panels collect sunlight and generate electricity, which is then fed into the inverter. The inverter converts this power into alternating current that can be used by the pump motor in order to move water from its source up to the designated area for irrigation purposes.

Components Of A Solar Pump System

At the heart of every solar-powered water pump system are its photovoltaic cells. These cells capture energy from sunlight and convert it into electricity that powers your watering solution. You will also need additional components such as controllers, power converters, cables, batteries and pumps. Depending on whether you want to use surface or submersible pumps, you may require other parts too.
Advantages Over Other Irrigation Methods

Solar pumps boast an impressive sustainability record: they produce zero emissions during operation and require no maintenance beyond occasional cleaning, making them an eco-friendly choice. This has made them increasingly popular among farmers looking for ways to reduce their carbon footprint without sacrificing crop yields and output. Finally, solar pumps can be installed quickly and easily thanks to their simple design and minimal components – something traditional diesel or electric pumps cannot compete with.

Factors To Consider When Choosing A Solar Pump System

It is important to note that 40% of farms in the US are now using solar-powered pumps for their water needs – making them an increasingly popular choice amongst farmers. Look at the amount of power each panel or battery generates compared to how much energy is required to run your pump. This will help you decide if investing in more panels or batteries and a backup system makes sense from a financial standpoint, and give you peace of mind knowing your pump can handle whatever load you need it to take on. If there's a chance of flooding, check out the types of pumps available that can withstand high levels of water pressure – this could save you money over time and reduce unwanted downtime due to maintenance issues.

Case Studies On Solar Pumps In Agriculture

Smallholder farmers in Ethiopia are using solar irrigation systems to increase yields from existing water resources. The system is designed to reduce labor costs associated with manual pumping and power consumption by up to 80%. This has resulted in increased incomes for farmers and improved nutrition due to better crop production. Another example is India's Kisan Solar Pump Program, which provides rural households access to reliable off-grid electricity through the use of solar-powered pumps.
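The panel-versus-pump comparison mentioned under the factors above can be sketched as simple arithmetic. All the wattage and margin figures below are made-up placeholders for illustration, not recommendations for any real system.

```shell
#!/bin/sh
# Toy sizing: how many panels are needed to cover a pump's draw?
# All figures are illustrative assumptions.
PUMP_WATTS=750        # power the pump motor draws
PANEL_WATTS=300       # rated output of one panel
MARGIN_PCT=25         # headroom for clouds, wiring losses, etc.

required=$(( PUMP_WATTS * (100 + MARGIN_PCT) / 100 ))
panels=$(( (required + PANEL_WATTS - 1) / PANEL_WATTS ))   # ceiling division
echo "need $panels panels to cover ${required}W"           # prints: need 4 panels to cover 937W
```

Plugging in your own pump and panel ratings gives a first rough estimate before pricing a backup battery bank.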
It has enabled over 6 million households to pump water for agricultural purposes without relying on grid electricity or diesel generators.

Frequently Asked Questions

How Much Does A Solar Pump System For Irrigation Cost?

The cost of a solar pump varies greatly depending on whether it is purchased or leased, as well as the size and type of the system. Generally speaking, small systems can range from $4,000-$10,000 USD, while larger ones can reach up to $20,000 USD or more. Installation fees are typically extra too.

What Kind Of Maintenance Is Required For A Solar Pump System?

Fortunately, most solar pumps don't require too much manual labor or technical knowledge – just necessary checks every now and then – so you won't have to worry about spending hours tinkering around with each component of your system. In fact, some manufacturers even offer annual check-up packages for easy maintenance plans tailored specifically for their customers' needs – so why not take advantage of them?

Are There Any Government Incentives For Using Solar Pumps For Irrigation?

Incentives vary from state to state, but most governments offer tax credits or rebates when businesses or individuals install a solar pump system for agriculture purposes. In addition, many states offer additional benefits such as grant funding or subsidies to cover installation costs. These incentives make investing in renewable energy solutions like solar pumps easier than ever.

In conclusion, a solar pump system for irrigation can be an effective and cost-efficient choice. On average, these systems range from $1,200 to $4,000 depending on the size of the farm or garden being irrigated. Maintenance is minimal, though it does require periodic cleaning and inspection. Solar pumps are perfect for most soil types but may not work as well in extremely rocky areas. Installation typically takes two days, making them incredibly convenient compared to other irrigation systems.
Lastly, some governments offer incentives for using solar pumps for irrigation purposes – in Canada alone, over 10 million dollars has been allocated towards such initiatives!
<urn:uuid:40501707-0367-4d68-9363-cdf702a48421>
CC-MAIN-2023-50
https://www.planetresource.net/pumps/solar-pumps-for-irrigation/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.930618
1,137
3.3125
3
The Northwest Territory was the first organized territory of the United States, created by the Northwest Ordinance in 1787. The territory was defined as east of the Mississippi River, northwest of the Ohio River, south of the Great Lakes, and west of Pennsylvania. The Northwest Territory included what is now Indiana, Illinois, Michigan, Ohio, Wisconsin, and the northeast part of Minnesota. History of the Territory: After France's defeat in the French and Indian War, the territory of New France was relinquished to Great Britain by the Treaty of Paris in 1763. The 1783 Treaty of Paris ceded the land west of the Appalachian Mountains and north of the Ohio River from Great Britain to the United States. Connecticut, Massachusetts, New York, and Virginia made claims to Congress over parts of the territory. Between 1780 and 1802 the states surrendered their claims, and the western territory eventually came under the public domain of the United States. Congress developed a national policy and enacted three ordinances: the 1784 Land Ordinance, the 1785 Land Ordinance, and the 1787 Northwest Ordinance. The 1784 Land Ordinance was the first ordinance, introduced by Thomas Jefferson, and it provided a procedure for dividing the territory into multiple individual states. The 1785 Land Ordinance established a standardized method of surveying and subdividing the land. The Northwest Ordinance was passed on July 13th, 1787; it established a territorial government and made General Arthur St. Clair the first Governor. The ordinance also proposed that the territory should be divided into no fewer than five states. It was also decreed that when the adult male population reached 5,000, the residents could elect their own legislature, and that when the population reached 60,000, a territory would be granted statehood. On July 15th, 1788, General St. Clair formed a government and established Marietta as the territorial capital.
In 1790 however, the administrative capital and military center was shifted to Fort Washington in Cincinnati. The Northwest Territory was divided into two parts in 1800. The larger portion was known as Indiana Territory and included states now known as Wisconsin, Illinois, parts of Michigan, Minnesota, and a major part of Indiana. The smaller portion retained the name Northwest Territory and included parts of Ohio, Michigan, and a small portion of Indiana. On March 1st, 1803, Ohio became the first US state formed out of the Northwest Territory. This was followed by the creation of Indiana, Illinois, Michigan, Wisconsin, and Minnesota. Ghosh, Diptarka. “Northwest Territory.” WorldAtlas. WorldAtlas, April 21, 2021. https://www.worldatlas.com/geography/northwest-territory.html.
<urn:uuid:30eab4c9-c5c9-4c24-bbfe-a4d2978cde39>
CC-MAIN-2023-50
https://www.potawatomiheritage.com/encyclopedia/northwest-territory/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.954624
552
4.03125
4
Why vaccinations for children are important

Benjamin Franklin once said, “An ounce of prevention is worth a pound of cure.” Nearly 300 years later, our Plano pediatricians agree. Vaccinations for children are important because they allow kids to avoid the serious diseases that once routinely sickened or killed children.

Types of vaccinations for children

Vaccines are administered to children according to the American Academy of Pediatrics (AAP) vaccination schedule. Common vaccinations for children are listed below. Our Plano pediatricians may recommend additional vaccines that are not on this list.
- The hepatitis B vaccine protects against hepatitis B, which can cause serious liver problems.
- The rotavirus vaccine can prevent rotavirus, which can cause severe diarrhea and vomiting.
- The DTaP vaccine protects against diphtheria, which can cause heart swelling; tetanus, which causes severe muscle spasms; and whooping cough. All of these conditions can be deadly.
- The Hib vaccine can protect against Haemophilus influenzae type b, which can cause deadly illnesses like meningitis and epiglottitis.
- The PCV13 vaccine protects against pneumococcus bacteria, which can cause severe and deadly infections of the blood and spinal covering.
- The IPV vaccine protects against polio, which can cause paralysis and result in death.
- The annual flu vaccine protects against the dominant strains of flu, which can cause serious illness and discomfort. Flu complications can lead to death.
- The MMR vaccine protects against measles and mumps, which can cause mild to life-threatening illnesses or death, and rubella, which in pregnant women can lead to stillbirth or serious pregnancy complications.
- The varicella vaccine prevents chickenpox, which causes a rash and discomfort that can progress to serious and deadly infections of the skin, lungs or brain.
- The hepatitis A vaccine protects against hepatitis A, which can cause illness, liver problems, and diseases of the pancreas and kidneys.
- The MenACWY vaccine protects against meningococcal disease, which can cause sometimes-fatal meningitis and blood infections.
- The HPV vaccine protects against human papillomavirus, which can cause a variety of cancers in men and women.

Our Plano pediatricians can help ensure that your child is up-to-date on the recommended vaccination schedule and is receiving maximum protection.

Vaccinations for children are safe and effective

Our Plano pediatricians want you to feel confident in vaccinating your child. Research has demonstrated that vaccines are safe and effective ways to prevent diseases. While some vaccines may cause side effects, these are usually mild. If you become concerned about your child's reaction to his or her vaccination, our office can help. To learn more about vaccinations for children, or to schedule your child's next appointment, contact our office today.
<urn:uuid:739f3d14-c667-4a0c-8146-6981cb6b059e>
CC-MAIN-2023-50
https://www.psopkids.com/pediatric-services/vaccinations-for-children/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.897287
752
3.375
3
A recent 4-year study linked apathy to a hastened decline in persons with Alzheimer disease (AD).1 Another recent study found that persons with mild cognitive impairment (MCI) were more likely to convert to AD a year later if they also had apathy.2 "People are getting excited about apathy now because it may be a behavioral marker for a more rapidly progressing dementia," said Prasad Padala, MD, assistant professor of psychiatry at the University of Nebraska Medical Center in Omaha. Another factor that has increased interest in apathy is the growing understanding of physical changes in the brain. For example, an autopsy study found that among persons with AD, those who had chronic apathy tended to have more neurofibrillary tangles than those without apathy.3 Clinically apathetic persons with dementia may even have a different genetic makeup than persons with dementia who are not apathetic. A recent study showed that persons with AD were more likely to be carriers of the ApoE e4 allele if they also had apathy.4 Apathy traditionally has received less attention than other neuropsychiatric states in dementia, such as depression, agitation, aggression, and psychosis. This is slowly changing. Although researchers are becoming more knowledgeable about the condition, treating it remains difficult. "There aren't many medications or other treatments that have shown efficacy in treating apathy," said Tiffany Chow, MD, assistant professor of neurology and geriatric psychiatry at the University of Toronto. But given the new interest in the condition, researchers expect full-scale randomized controlled trials to follow.
COMMON BEHAVIORAL PROBLEM
Apathy, which refers to a loss of motivation, is marked by such characteristics as diminished initiation, poor persistence, lack of interest, indifference, low social engagement, blunted emotional response, and lack of insight.5 It is the most common behavioral disturbance in dementia. Prevalence rates are as high as 80% in clinic samples of patients with primary dementia and range from 27% to 36% in community samples.6 An analysis of 3 European studies produced a mean of about 56%.7 The incidence of apathy increases with the severity of the dementia. "In the beginning stages of dementia, people may withdraw from activities because they're aware of what's going on and want to avoid making themselves look bad," said Chow. "Then in the moderate to severe stages, they do shut down and are not able to initiate their own activities." Apathy has a dramatic effect on persons with dementia and their families. First, the condition leads to decreased function. In one study, persons with apathy were nearly 3 times more likely than those without apathy to be impaired in dressing, bathing, transferring from bed to chair, using the toilet, walking, or eating and more than 3 times more likely to be impaired in all 6 activities.8 Second, apathy is linked to executive cognitive dysfunction. For example, a study of 184 patients with probable AD found that apathetic patients had significantly poorer performance in naming, word-list learning, verbal fluency, and set-shifting than those without apathy.9 "Without interest or initiative, it's difficult for these patients to use their remaining cognitive function," said Philippe Robert, MD, director of the Memory Center for Care and Research at Nice University Hospital in France. Third, apathy makes persons less likely to comply with treatment. This can lead to setbacks in treatment not only for dementia but also for concomitant health conditions.
Finally, the extreme burden on caregivers created by these deficits tends to increase caregiver distress. "People with apathy tend to depend a lot more on caregivers, even for things they can do on their own," said Padala. Although it has not been proved in studies, patients with apathy may require hired home care and institutional care earlier than other patients with dementia. Apathy may pack an additional punch for caregivers who are family members because it chips away at the patient's personality. "The caregiver sees the person withdrawing and shutting down, and it seems like they're fading into the distance right before their eyes," said Chow. "A lot of family members say that they prefer someone doing obsessive-compulsive things to just being apathetic," she added. The situation may be different in a nursing home, where apathetic patients who sit quietly and do not cause disturbances might be viewed as easy to care for. "If I were working in a nursing home, I would probably not be perturbed by apathetic patients as much as by someone who is having active hallucinations," said Padala. This may be one reason why apathy research has traditionally received less funding than research on problems such as agitation and psychosis. Conducting research on apathy requires an accurate measurement scale. The most often used scales for measuring apathy in dementia are the Apathy Evaluation Scale (AES), the apathy subscale of the Neuropsychiatric Inventory (NPI), and the Apathy Inventory (IA). The AES is an 18-item scale that has separate versions for the patient, informant, and clinician. It measures 3 clinical dimensions of apathy: emotional blunting, lack of interest, and lack of initiative. The IA also has 3 versions and measures 3 clinical dimensions but is shorter than the AES. The NPI, which is a long test that is used widely in drug trials, measures 10 common behavioral disturbances in persons with dementia and other neurological disorders. 
The informant supplies answers to 4 questions related to apathy; a positive response to 1 triggers a subset of 8 more specific questions on apathy. Other scales include Starkstein's 14-item scale, the Irritability-Apathy Scale, the Cambridge Behavioral Inventory (CBI), and the Lille Apathy Rating Scale (LARS). The Starkstein scale is similar to the AES but shorter. The Irritability-Apathy Scale is a brief, informant-based scale with 5 items related to apathy that is designed for use in persons with AD or Huntington disease. The CBI is an informant-based scale that measures neuropsychiatric symptoms and functional ability in dementia. The LARS is long, with 33 items, but is designed for simplicity because it requests "yes" and "no" answers instead of ratings on a numbered scale. Diana E. Clarke, PhD, psychiatric epidemiologist in the Department of Mental Health at the Johns Hopkins Bloomberg School of Public Health in Baltimore, said that based on her assessment of the AES, "informant- and clinician-rated scales are better than self-rated scales."10 "As a screening for whether apathy is present, I think that any of the scales are fine," said Chow. However, comparative studies are needed to see which scales are best at measuring apathy, which is important for treatment studies. Chow pointed out that it can be difficult to detect apathy in office visits because they are brief, lasting only about 15 or 20 minutes, and many patients are more alert in a medical office. "It drives the families crazy, because they say that the person doesn't respond all day at home and then he's fine in the office." She said this is why it is important for the neurologist or neuropsychiatrist to ask family members whether the person is keeping up with usual activities. Apathy also correlates with physical changes in the brain. 
In addition to the recent autopsy study that found a link between chronic apathy and neurofibrillary tangles in persons with AD, another study found that among 31 persons with AD, those with low initiative and interest scores had significantly reduced perfusion in the right anterior cingulate cortex on a single photon emission CT scan compared with those who had normal scores.11
DISTINGUISHING APATHY FROM DEPRESSION
Depression, although not as prevalent, is another condition that is commonly seen in patients with dementia. Apathy and depression have some overlapping characteristics--namely, loss of interest or pleasure in activities. Both are associated with functional and cognitive decline. As a result, the conditions can be confused with each other. To complicate matters, many patients have both apathy and depression. In the Cache County Study on Memory, Health, and Aging, which looked at more than 5000 persons aged 65 years and older, about 42% of persons with apathy had depression and about 37% of those with depression had apathy.6 In the European Alzheimer's Disease Consortium study, which analyzed more than 3000 persons with AD, 22% of the participants had apathy alone, 10% had depression alone, and 15% had apathy and depression.12 There are many important differences between apathy and depression, however. "Apathy relates more to indifference, whereas depression is more hopelessness about the future," said Padala. Apathy and depression also look very different in the brain. For example, a recently published study of 84 older persons found that those who were depressed had smaller gray matter volumes in the orbitofrontal areas as measured by MRI, whereas those with apathy had decreased gray matter volume in the right anterior cingulate cortex.13 The more severe the apathy, the smaller the gray matter volume. Differentiating between apathy and depression is important because the treatment approaches are different.
For example, Padala pointed out that selective serotonin reuptake inhibitors used for treatment of depression, such as fluoxetine (Prozac) and sertraline (Zoloft), may actually contribute to apathy. "One of my patients explained to me that after he started the medication, he no longer felt depressed, but he also didn't feel anything else," Padala commented. Conversely, the stimulant methylphenidate (Ritalin), which appears to be helpful in treating apathy, is not effective in depression.
LINK BETWEEN APATHY AND MCI
Investigators are now finding links between apathy and MCI. One study found that among persons with MCI, those in whom AD developed at 1-year follow-up were significantly more likely to have symptoms of apathy at baseline (92%) than those in whom AD did not develop (27%).2 Robert told Applied Neurology that his 3-year results, which have been submitted for publication, continue to show that the presence of apathy increases the risk of conversion to AD for persons with MCI. "The presence of lack of interest--a soft behavioral sign that's easy to detect during a clinical interview--could indicate potential decline in patients with MCI," said Robert. "It's important to carefully check the cognitive status of these patients." In addition, the Cache County Study of Memory and Aging found that clinical apathy was more common in persons with a mild cognitive syndrome (3.1%) than in those who were cognitively normal (1.4%), although not as common as in persons with dementia (17.3%).
TREATMENT IN ITS INFANCY
The treatment of apathy in dementia is still in its infancy, and it is unclear which treatments are effective. "Our knowledge has been limited by a lack of clinical trials," said Clarke. Some recent research has focused on cholinesterase inhibitors, but results have been mixed. "Cholinesterase inhibitors help in some cases, but not in the majority," said Chiadi Onyike, MD, assistant professor of psychiatry and behavioral sciences at Johns Hopkins University.
Several studies have analyzed the cholinesterase inhibitor donepezil (Aricept) for treatment of apathy in AD, with varying results. One 6-month, randomized, double-blind, placebo-controlled trial in 290 outpatients with AD found that donepezil significantly reduced apathy, depression, and anxiety.14 By contrast, a placebo-controlled study in 208 nursing home patients reported that although donepezil significantly reduced agitation, it did not reduce apathy.15 It is possible that donepezil may be more effective for treating apathy in mild to moderate AD than in late-stage AD, but further studies are needed. Two 6-month, open-label prospective studies in nursing home patients have looked at the drug rivastigmine (Exelon). The first study, which involved 669 patients with moderate to severe AD, found an improvement in apathy at 3 months but only a trend toward reduced apathy at 6 months. The second study of 173 patients with moderate to severe AD found an improvement in neuropsychiatric symptoms overall but no improvement in apathy.16,17 Galantamine (Razadyne) improved the condition of patients with apathy and anxiety in a 6-month, placebo-controlled, double-blind study of nearly 600 patients with mild to moderate AD.18 However, a pooled sample of more than 2000 patients with mild to moderate AD found that galantamine did not improve a combined end point that included effects on hallucinations, anxiety, apathy, and aberrant motor behaviors.19 A small case series of 3 patients with frontotemporal dementia found that their apathy improved with memantine (Namenda).20 Researchers have also tried stimulants such as methylphenidate, dextroamphetamine (Adderall), and modafinil (Provigil) in persons with apathy and dementia. Padala has reported positive results with the use of methylphenidate in a small case series21 and with modafinil in a case study.22 Onyike said that amantadine (Symmetrel), bromocriptine (Parlodel), and bupropion (Wellbutrin) may be useful in some cases. 
The research on nonpharmacological interventions for apathy in dementia is even sparser than that for pharmacological interventions. One study found that persons with dementia and apathy were engaged by live music.23 An Italian study of elderly persons, most of whom had dementia, suggested that validation therapy improved apathy.24 Other treatments that have been used to treat apathy include Snoezelen (multisensory stimulation) and aromatherapy.25 Onyike said that structured activity programs that socialize the person who is living in a residential community could be useful; another approach is to have the caregiver oversee a regimen of brief focused activities and interpersonal interactions. "Neurologists and neuropsychiatrists are becoming increasingly aware of apathy in dementia and its consequences on the course of the illness and its impact on caregivers," said Clarke. However, she added that "more research is needed . . . to improve our knowledge of the biology and genetics of apathy and dementia and enhance treatment strategies." Onyike agreed that researchers need to develop a better understanding of apathy. "How does a person develop drive, become aware of it, and initiate actions based on the drive?" he asked. He said that researchers could attempt to answer this question and then work to define the brain circuitry that underlies the process. Robert van Reekum, MD, assistant professor of psychiatry at the University of Toronto, agreed that researchers "need a comprehensive biopsychosocial understanding of motivational behavior." van Reekum said that future research also should involve testing the validity of the upcoming DSM-V diagnosis of apathy, developing and testing measurement tools, conducting randomized controlled trials of promising interventions, and investigating the possibility of various subtypes of apathy. 
Chow agreed, saying that a problem with treatment approaches is that "we haven't spent enough time parsing the type of apathy that we're trying to treat." It is possible, for example, that persons with affective apathy might be more likely to respond to antidepressants; those with cognitive apathy might be more responsive to cholinesterase inhibitors. "Until we separate patients out into these different groups, then maybe the drug trials haven't been properly conducted," she said.
REFERENCES
1. Starkstein SE, Jorge R, Mizrahi R, Robinson RG. A prospective longitudinal study of apathy in Alzheimer's disease. J Neurol Neurosurg Psychiatry.
2. Robert PH, Berr C, Volteau M, et al; the PreAL study. Apathy in patients with mild cognitive impairment and the risk of developing dementia of Alzheimer's disease: a one-year follow-up study. Clin Neurol Neurosurg.
3. Marshall GA, Fairbanks LA, Tekin S, et al. Neuropathologic correlates of apathy in Alzheimer's disease. Dement Geriatr Cogn Disord.
4. Monastero R, Mariani E, Camarda C, et al. Association between apolipoprotein E epsilon4 allele and apathy in probable Alzheimer's disease. Acta Psychiatr Scand.
5. Landes AM, Sperry SD, Strauss ME, Geldmacher DS. Apathy in Alzheimer's disease. J Am Geriatr Soc.
6. Onyike CU, Sheppard JM, Tschanz JT, et al. Epidemiology of apathy in older adults: the Cache County Study. Am J Geriatr Psychiatry.
7. Robert PH, Verhey FR, Byrne EJ, et al. Grouping for behavioral and psychological symptoms in dementia: clinical and biological aspects. Consensus paper of the European Alzheimer disease consortium.
8. Freels S, Cohen D, Eisdorfer C, et al. Functional status and clinical findings in patients with Alzheimer's disease.
9. Kuzis G, Sabe L, Tiberti C, et al. Neuropsychological correlates of apathy and depression in patients with dementia.
10. Clarke DE, van Reekum R, Simard M, et al. Apathy in dementia: an examination of the psychometric properties of the apathy evaluation scale. J Neuropsychiatry Clin Neurosci.
11. Robert PH, Darcourt G, Koulibaly MP, et al. Lack of initiative and interest in Alzheimer's disease: a single photon emission computed tomography study. Eur J Neurol.
12. Robert PH, Byrne J, Aalten P, et al. Apathy and depressive symptoms in Alzheimer's disease: results from the European Alzheimer's Disease Consortium. Alzheimers Dement.
13. Lavretsky H, Ballmaier M, Pham D, et al. Neuroanatomical characteristics of geriatric apathy and depression: a magnetic resonance imaging study. Am J Geriatr Psychiatry.
14. Gauthier S, Feldman H, Hecker J, et al; Donepezil MSAD Study Investigators Group. Efficacy of donepezil on behavioral symptoms in patients with moderate to severe Alzheimer's disease.
15. Tariot PN, Cummings JL, Katz IR, et al. A randomized, double-blind, placebo-controlled study of the efficacy and safety of donepezil in patients with Alzheimer's disease in the nursing home setting. J Am Geriatr Soc.
16. Dartigues JF, Goulley F, Bourdeix I, et al. Rivastigmine in current clinical practice in patients with mild to moderate Alzheimer's disease. Rev Neurol (Paris).
17. Cummings JL, Koumaras B, Chen M, Mirski D; Rivastigmine Nursing Home Study Team. Effects of rivastigmine treatment on the neuropsychiatric and behavioral disturbances of nursing home residents with moderate to severe probable Alzheimer's disease: a 26-week, multicenter, open-label study. Am J Geriatr Pharmacother.
18. Erkinjuntti T, Kurz A, Gauthier S, et al. Efficacy of galantamine in probable vascular dementia and Alzheimer's disease combined with cerebrovascular disease: a randomised trial.
19. Herrmann N, Rabheru K, Wang J, Binder C. Galantamine treatment of problematic behavior in Alzheimer disease: post-hoc analysis of pooled data from three large trials. Am J Geriatr Psychiatry.
20. Swanberg MM. Memantine for behavioral disturbances in frontotemporal dementia: a case series. Alzheimer Dis Assoc Disord.
21. Padala PR, Burke WJ, Bhatia SC, Petty F. Treatment of apathy with methylphenidate. J Neuropsychiatry Clin Neurosci.
22. Padala PR, Burke WJ, Bhatia SC. Modafinil therapy for apathy in an elderly patient.
23. Holmes C, Knights A, Dean C, et al. Keep music live: music and the alleviation of apathy in dementia subjects.
24. Tondi L, Ribani L, Bottazzi M, et al. Validation therapy (VT) in nursing home: a case-control study. Arch Gerontol Geriatr.
25. Overshott R, Byrne J, Burns A. Nonpharmacological and pharmacological interventions for symptoms in Alzheimer's disease. Expert Rev Neurother.
Light carries energy and momentum, laying the physical foundation of optical manipulation that has facilitated advances in myriad scientific disciplines, ranging from biochemistry and robotics to quantum physics. Utilizing the momentum of light, optical tweezers have exemplified elegant light–matter interactions in which mechanical and optical momenta can be interchanged, whose effects are most pronounced on micro and nano objects in fluid suspensions. In solid domains, the same momentum transfer becomes futile in the face of dramatically increased adhesion forces. Effective implementation of optical manipulation should thereupon switch to the "energy" channel by involving auxiliary physical fields, which also coincides with the irresistible trend of enriching actuation mechanisms beyond sole reliance on light-momentum-based optical force. From this perspective, this review covers the developments of optical manipulation in schemes of both momentum and energy transfer, and we have correspondingly selected representative techniques to present. Theoretical analyses are provided at the beginning of this review followed by experimental embodiments, with special emphasis on the contrast between mechanisms and the practical realization of optical manipulation in fluid and solid domains. Light can exert forces (torques) on objects during light–matter interactions and therefore is used as an optical manipulation tool for micro/nano-objects. As early as 1619, the concept of the "force of light" was first proposed by Johannes Kepler in an attempt to explain the phenomenon that when a comet enters the solar system, its tail is always deflected away from the sun. The underlying mechanism was later summarized by Maxwell's electromagnetic theory, which states that light, as electromagnetic waves, carries momentum.
Accounting for forces that stem from the momentum exchange between the radiation field and the interactive matter, the force of light belongs to a general phenomenon known as the "radiation force"[3–5]. For the sake of brevity, the electromagnetic radiation force has now been more frequently addressed as the "optical force." Specifically, the most vivid picture of optical force should be the case that when a beam of light is fired at a reflecting mirror, a pushing force is generated as the consequence of the momentum transfer from photons to the mirror, as the direction of light momentum is reversed upon reflection. However, due to the "extreme minuteness" of the optical force, John Henry Poynting deemed its application untenable in driving mechanical locomotion in terrestrial scenarios. The potential of light momentum, or rather, the optical force, was not truly appreciated until the advent of the laser and the landmark invention of optical tweezers by Arthur Ashkin, who demonstrated optical trapping and manipulation of micro/nano particles, living cells, and molecules using optical force in fluidic environments[7,8]. By virtue of his remarkable work, Ashkin was awarded the Nobel Prize in Physics in 2018. His experiments also formed the basis of another Nobel Prize in Physics in 1997 for Steven Chu's work on the optical cooling of atoms, showing that optical manipulation is a fascinating field in fostering scientific explorations at the "bottom" (quoting Richard Feynman's speech). In the macro regime, a semi-quantitative estimation of the optical force exerted on a reflective surface is F = 2RP/c, where P is the optical power, R the reflectance, and c the speed of light[7,10]. The expression of optical force shares the general traits of radiation force, proportional to the incident power divided by the wave velocity[3,4]. Correspondingly, for a light beam of power 150 mW, the optical force is 1 nN, and that is still under the assumption of perfect reflection (R = 1).
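As a quick numerical sanity check, the relation for the force on a mirror at normal incidence, F = 2RP/c, can be evaluated directly; the 150 mW beam power below is an illustrative choice that reproduces the ~1 nN scale discussed in the text:

```python
# Optical force on a reflecting surface at normal incidence, F = 2*R*P/c.
# The 150 mW input power is an illustrative assumption; with perfect
# reflection (R = 1) it yields the ~1 nN force quoted in the text.
C_LIGHT = 3.0e8  # speed of light, m/s

def optical_force(power_w: float, reflectance: float = 1.0) -> float:
    """Radiation force (N) exerted by a beam of given power on a mirror."""
    return 2.0 * reflectance * power_w / C_LIGHT

force = optical_force(0.15)  # 150 mW, R = 1
print(f"{force:.1e} N")      # 1.0e-09 N, i.e., ~1 nN
```

Even at watt-level powers the force stays in the nanonewton range, which is why Poynting dismissed it for macroscale locomotion.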
A force of such order of magnitude can easily be eclipsed by gravitation or even environmental perturbations at the macroscale, which somehow confirms the assertion of Poynting[10,11]. In the micro regime, considering that not all incident photons would fall within the target volume, it is the local light intensity (i.e., the Poynting power density) rather than the optical power that is of more immediate concern. Hence, the notion of "pressure" should be adopted instead, as in "radiation pressure" in early literatures[7,11,12], given by p = 2RI/c, where I denotes the light field intensity[11,13]. For micrometer or sub-wavelength objects, a pN-scale optical force would become relevant in that it generates roughly 10^2–10^5 times the gravitational acceleration, which thus sets it apart from that at the macroscale. To ensure that the optical force reaches piconewtons, which, from an empirical standpoint, has become the basic standard to implement stable trapping in optical tweezers, coherent light sources and high-numerical-aperture (NA) objectives should be employed that condense the incident light power within diffraction-limited spots and maintain an extremely high local light intensity, and that is where the role of lasers and Ashkin's design of the optical trap apparatus should be appreciated. Over the past few decades, continuous efforts have been made in enhancing the optical force attainable at micro/nano scales, successful examples including the incorporation of plasmonic[14,15] and resonant dielectric structures[16,17]. Yet the diffraction and speed of light (or the dispersion relation of photons) have fundamentally set the limit of optical force, which typically cannot surpass the nanonewton level even with rather strong field localization. Because of its magnitude, optical force has mainly been exploited in fluidic environments, where the significance of optical force is still prominent against countering effects such as Brownian diffusion and viscous drag.
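The claim that a pN-scale force produces roughly 10^2–10^5 times the gravitational acceleration can be checked with back-of-envelope numbers; the silica density and 1 µm sphere size below are illustrative assumptions, not values from the review:

```python
import math

# Acceleration imparted by a 1 pN force on a 1-um-diameter silica sphere,
# compared with gravity. Density and radius are illustrative assumptions.
rho = 2.2e3                                      # silica density, kg/m^3
radius = 0.5e-6                                  # sphere radius, m
mass = rho * (4.0 / 3.0) * math.pi * radius**3   # ~1.2e-15 kg

accel = 1e-12 / mass        # m/s^2 produced by a 1 pN force
g_ratio = accel / 9.81
print(round(g_ratio))       # -> 88, i.e., ~10^2 g for this particle size
```

Smaller particles push the ratio toward the 10^5 end, since mass falls with the cube of the radius.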
To implement optical manipulation in the fluidic domain, aside from acquiring mechanical momentum directly from light momentum, an indirect route can be taken, which requires making use of the hydrodynamic surroundings. A representative example is the photophoretic force, denoting the migration of light-absorptive particles in gaseous suspensions[18,19]. Specifically, the generation of photophoretic force demands the existence of gas molecules, the collisions between which and the particle create a net force pointing opposite to the particle's surface temperature gradient. Assuming a particle with zero thermal conductivity, the relevance of the gas pressure greatly diminishes, and the photophoretic force can be estimated as F ≈ ηP/v, where η denotes the conversion efficiency from light energy to the thermal energy of surrounding air molecules, P the accessible optical power in the target region, and v the average gas molecular velocity[20–22]. Maxwell's law of velocity distribution puts v at several hundred meters per second at room temperature, considerably smaller than the traveling speed of light. Consequently, for strongly absorbing and poorly thermally conducting particles, the effects of optical force could be overwhelmed by photophoretic force, rendering the latter a destructive factor in optical-force-based manipulation; while from another perspective, by interfacing the energy channel of light, the photophoretic force, together with other heat-mediated and fluidic-assisted effects, e.g., the Ludwig–Soret effects and electrothermoplasmonic flow[24,25], provides alternative options to enable robust, long-distance and multi-mode optical manipulation with relaxed requirements upon local light intensity.
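The scale argument can be sketched numerically: comparing F_ph ~ ηP/v with F_opt ~ P/c at equal power, the ratio is ηc/v, so even a modest conversion efficiency lets photophoresis dominate. The efficiency and gas speed below are illustrative assumptions:

```python
# Ratio of photophoretic to optical force at equal optical power,
# F_ph / F_opt ~ eta * c / v. Efficiency and gas speed are assumptions.
C_LIGHT = 3.0e8     # speed of light, m/s
v_gas = 500.0       # typical mean gas molecular speed at room T, m/s
eta = 0.1           # assumed light-to-heat conversion efficiency

ratio = eta * C_LIGHT / v_gas
print(f"{ratio:.0e}")   # 6e+04: photophoresis can dwarf the optical force
```

The enhancement comes entirely from swapping the speed of light for the far smaller molecular speed in the denominator.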
Notably, in these schemes, instead of imparting forces directly on target objects, light, through photothermal effects, will first induce flows of hydrodynamic environments by imposing specific temperature profiles, which then give rise to the concomitant locomotion of suspended particles. Fluidic environments have proved to be golden testing grounds for optical manipulation, while an inevitable trend is to further extend its capability to the solid domain, the exploration of which is doomed to be difficult because of two major challenges: (1) the adhesive and friction forces reach the order of μN, causing a tremendous scale gap with the optical force, and (2) the fluid-assisted effects are inaccessible on dry surfaces due to the lack of fluidity and the no-slip condition along solid boundaries. To meet the challenges, a tactful resolution is to "inflict fluidity" in solid environments through photothermal effects, which bypasses the adhesion and additionally creates auxiliary hydrodynamic flows for mass transportation. In the meantime, it is vital that proper drives should be found that are both adequate in magnitude and compatible with solid environments. Still interfacing the energy channel of light, researchers in the early 1990s demonstrated the detachment and propulsion of adhering particulates via pulsed light illumination on absorbing substrates, the technique of which has been widely exploited in semiconductor surface cleaning[28,29]. As the result of opto-thermo-mechanical multiphysics coupling, the impulsive thermal expansion/compression of the substrate translates into surface acoustic waves, and the particulates immersed in the acoustic momentum field experience the acoustic radiation force calculated as F = ηP/c_s, where η denotes the photoacoustic conversion efficiency, P the transient optical power of the pulsed incident light, and c_s the sound speed in elastic media (e.g., the longitudinal sound speed of bulk silicon is about 8400 m/s)[30,31].
Denoting the momentum exchange between the acoustic field and the particle, the acoustic radiation force is the acoustic counterpart of the optical force (i.e., electromagnetic radiation force) and possesses the same fundamental expression, which, owing to the moderate sound speed as opposed to that of light, is of considerably larger magnitude at the same input power. Typically, acoustic waves are excited by applying electric signals on piezoelectric substrates and have been utilized for the construction of acoustic tweezers, enabling acoustic sorting and assembly of particles[3,32,33]. The fact that acoustic waves can also be excited via pulsed light illumination provides an opportunity for optical manipulation to extend to the solid domain. Specifically, apart from its large photoacoustic conversion efficiency, the photothermal process of pulsed light features enormous transient energy deposition (large transient power P). At moderate pulse energies, it is capable of generating transient accelerations of adhering particulates sufficient to escape the adhesive force[28,29]. Inspired by the working principles of machineries, a series of intriguing works has reported multi-degree-of-freedom locomotion of micrometer-sized actuators in dry adhesive environments based on an internal force-driven mechanism[5,35–38]. Interestingly, instead of endeavoring to find proper driving forces comparable to the μN-scale adhesive/friction forces, which is admittedly difficult when the force of interest belongs to the "external force," researchers took a different approach by inducing the impulsive deformation of actuators via pulsed light illumination. The deformation, though also in the form of acoustic waves, functions effectively as the internal force in facilitating the locomotion on frictional substrates, in analogy to the operation of machineries or the crawling of earthworms via internal coordination rather than external assistance.
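The magnitude advantage can be sketched in the same way as before: at equal input power, F_ac ~ ηP/c_s exceeds F_opt ~ P/c by roughly c/c_s. The silicon sound speed and unit efficiency below are simplifying assumptions:

```python
# At equal input power, the acoustic radiation force F ~ eta*P/c_s
# exceeds the optical force P/c by roughly c/c_s. The silicon sound
# speed and idealized unit efficiency are assumptions for scale only.
C_LIGHT = 3.0e8   # speed of light, m/s
c_sound = 8.4e3   # approx. longitudinal sound speed in bulk silicon, m/s
eta = 1.0         # idealized photoacoustic conversion efficiency

gain = eta * C_LIGHT / c_sound   # ~3.6e4
print(round(gain))
```

Four orders of magnitude of leverage, plus the enormous transient power of a pulse, is what closes the gap to the µN-scale adhesive forces.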
Owing to its internal force nature, light-induced deformation effects may not be explicitly expressed in the form of optical force, photophoretic force, or acoustic radiation force, which are all external forces (see Table 1 for comparison). Following the same mechanism, flexible polymers or hydrogels can be constructed into light-addressable soft robotics with delicate design and assembly, which take on more familiar modalities outside the framework of acoustic waves. Comparatively, their deformations are more profound and are completed within considerably longer time windows in a semi-steady state, typically driven by modulated CW light.
Table 1. Properties of Different Light-Induced Forces/Effects
- Optical force: light momentum channel; momentum transfer between light and objects; fluid domain.
- Photophoretic force: light energy channel; photothermal effects and nonuniform collisions between the particle and air molecules; fluid domain.
- Light-induced acoustic radiation force: light energy channel; opto-thermo-mechanical coupling and momentum transfer between acoustic waves and objects; solid domain.
- Light-induced deformation effects: light energy channel; opto-thermo-mechanical coupling with an internal force-driven mechanism; solid domain.
Approximate expressions for intuitive understanding of the scale of the corresponding forces are given in the main text. When applied to micro/nano objects, their magnitudes range from ∼pN–nN (fluid-domain forces) to μN or beyond (solid-domain actuation).
As the opening salvo of this review, the above discussion is aimed to introduce the topic of optical manipulation and provide some general ideas about its actual implementation in fluidic and solid domains from the perspective of different light-induced forces/effects (Table 1). A more comprehensive overview will be provided in the following content, which is also summarized in Fig. 1. The organization of this review is as follows. In Sec.
2, we introduce the physical mechanisms and theories of several light-induced forces involved in optical manipulation processes, including the optical force/torque and the thermophoretic force in fluidic environments and light-induced deformation effects in solid environments. Next, revolving around the fluid as the operational environment, we revisit representative optical manipulation techniques in Sec. 3 and categorize them by the locomotion degree of freedom. In Sec. 4, techniques adapted to solid-domain optical manipulation are presented, which are classified with respect to the working mechanisms. In Sec. 5, we selectively introduce several applications regarding historically important or emerging topics. Finally, we conclude the main contribution of this review and envision future directions in the field of optical manipulation.
Figure 1. Overview of optical manipulations in fluid domains and solid domains. Optical manipulations generally include optical trapping, pulling and pushing, lateral manipulation, spinning and orbital rotating, and multi-degree-of-freedom manipulation. Optical manipulations in fluid domains are based on forces directly induced by light (i.e., optical gradient forces and optical scattering forces) and indirectly induced forces (e.g., photophoretic force and thermal-electric mediated forces), whose amplitudes are typically of the order of pN to nN. In contrast, optical manipulations on solid surfaces need driving forces larger than μN to overcome the tremendous adhesion/friction forces at micro/nano scales. Examples include opto-thermal-elastic forces, pulsed light-induced forces, light-induced forces generated from photoactive polymers, and photothermal deformation-based actuations. In Sec.
1, we introduce four types of light-induced forces (effects) that can be exploited in optical manipulation, namely, the optical force, the photophoretic force, the light-induced acoustic radiation force, and deformation effects, the latter two both originating from opto-thermo-mechanical coupling. For the sake of clarity, we note that in what follows in this review, the optical force (also optical torque) refers exclusively to the force (or torque) arising from momentum transfer between the light field and matter, i.e., the electromagnetic radiation force (torque), and is not to be confused with a general term for all light-induced forces. Moreover, the photophoretic force is a sub-branch of a larger category termed the “thermophoretic force,” which describes the transport of small particles in both air (i.e., photophoretic force) and liquid (i.e., Ludwig–Soret effects) media; the differentiation between the two will be made clear in Sec. 2.2. Acoustic-wave-related forces, on the other hand, comprise both an external force, the acoustic radiation force, which stems from acoustic waves excited in substrates, and an internal force in the form of light-induced deformation effects (acoustic waves) in actuators. In this section, physical mechanisms and theories are presented for three representative forces (effects). In Sec. 2.1, the origin and theoretical derivation of the optical force and torque are first provided (Secs. 2.1.1–2.1.4), followed by a brief introduction to the measurement methods for the optical force (Sec. 2.1.5), which are of great practical significance in optical tweezer experiments. Section 2.2 is devoted to introducing the thermophoretic force in air (Sec. 2.2.1) and in liquid suspensions (Sec. 2.2.2), which are associated with different interpretations and analytical treatments. Given that the acoustic radiation force is adequately illustrated in Sec. 1, Sec.
2.3 mainly focuses on the internal-force part, that is, specifically, the light-induced deformation effects. Note that we have left out “force” in addressing these effects to avoid their being misread as external forces.

2.1 Optical Force and Optical Torque

2.1.1 Physics origin

Due to the fundamental homogeneity and isotropy of space, closed physical systems carry two conserved quantities, termed linear and angular momenta. Unsurprisingly, light also has linear and angular momenta; thus, it can exert force and torque on physical objects via light–matter interactions, such as reflection, refraction, scattering, and absorption processes. The earliest realization of the existence of the linear momentum of light can be traced back to 1619, when Kepler speculated that the pressure of sunlight pushes comet tails away from the Sun. Two centuries later, after the establishment of his famous electromagnetic theory, Maxwell correctly calculated the pressure of solar radiation on the Earth’s surface, the same physics that confirms Kepler’s speculation. By analogy with a point particle in classical mechanics, it is straightforward to argue that a light beam with a linear momentum $\mathbf{P}$ should carry an angular momentum $\mathbf{r}\times\mathbf{P}$, the so-called external angular momentum, which depends on the choice of the origin of the coordinate system. However, light goes beyond this expectation. Besides the external angular momentum, light may carry intrinsic angular momentum independent of the choice of the coordinate system. In 1909, Poynting first pointed out that circularly polarized light of angular frequency $\omega$ carries angular momentum whose ratio to the light energy is $\sigma/\omega$, where $\sigma = +1$ or $-1$ for left- and right-circular polarization, respectively. This polarization-associated angular momentum is nowadays called the spin angular momentum (SAM) of light.
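To put concrete numbers on these momenta, the following sketch evaluates Maxwell-style radiation pressure and per-photon linear and spin angular momenta. The solar-irradiance figure and the wavelength are illustrative assumptions, not values taken from this review.

```python
import math

# Back-of-the-envelope scales of light's linear and angular momentum.
c = 299_792_458.0          # speed of light in vacuum (m/s)
hbar = 1.054_571_817e-34   # reduced Planck constant (J s)

# Radiation pressure of sunlight on a perfectly absorbing surface, p = I/c,
# and on a perfect mirror, p = 2I/c (assumed irradiance ~1361 W/m^2).
I_sun = 1361.0
p_absorb = I_sun / c             # of order microPascals
p_reflect = 2.0 * I_sun / c      # doubled momentum transfer on reflection

# Per-photon linear momentum (hbar*k) and SAM (sigma*hbar) at 1064 nm
lam = 1064e-9
p_photon = hbar * 2.0 * math.pi / lam
s_photon = hbar                  # circularly polarized photon, sigma = +/-1

print(f"{p_absorb:.3e} Pa  {p_reflect:.3e} Pa  {p_photon:.3e} kg m/s")
```

The tiny pressure values explain why measurable mechanical effects of light require either focused laser beams or micro/nano-scale targets.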
In addition to SAM, light can also carry orbital angular momentum (OAM), discovered in 1992 by Allen et al., who recognized that a Laguerre–Gaussian mode with a twisted phase surface has an OAM of $l\hbar$ per photon. At the fundamental micro-level, the force and the torque exerted by light can be computed by summing the Lorentz force ($\mathbf{F} = q(\mathbf{E} + \mathbf{v}\times\mathbf{B})$) on individual atoms without referring to the concept of light momentum. However, this approach is formally inconvenient because electromagnetic fields at atoms can be precisely obtained only by using microscopic electromagnetism, in which bound atomic charge densities and convective free-carrier currents are treated as elementary sources in free-space Maxwell equations. Instead, a convenient, widely accepted way is to apply macroscopic electromagnetism, which neglects atomic features and considers average electromagnetic fields at scales well beyond atomic sizes. The linear and angular momenta of light are then prioritized as the basic concepts, whose dynamic evolutions directly give the force and the torque, just as in classical mechanics.

2.1.2 Electromagnetic energy–momentum tensor

In macroscopic electromagnetism, the electromagnetic energy–momentum tensor, a matrix concerning the densities and fluxes of the electromagnetic energy and momentum, is a tool to characterize the energy–momentum dynamics and derive the optical force and torque. However, even though macroscopic electromagnetism is generally considered well developed, rival expressions of the electromagnetic energy–momentum tensor surprisingly coexist, each supported with compelling evidence and arguments. Among the various expressions, two of the most famous are arguably the so-called Minkowski and Abraham tensors, which were both introduced in the first decade of the 20th century.
Specifically, the Minkowski tensor, denoted by $T_M$, is given by Eq. (1), assembled from the electromagnetic (EM) energy density $W$ and energy flux density $\mathbf{S}$ (entering as $\mathbf{S}/c$), together with the linear momentum density $\mathbf{g}_M$ (entering as $c\,\mathbf{g}_M$) and the momentum flux density tensor $\overleftrightarrow{T}$; here $c$ is the velocity of light in vacuum. Note that the negative of the momentum flux density tensor, $-\overleftrightarrow{T}$, is also called the Maxwell stress tensor. In a lossless, non-dispersive, and reciprocal medium, the expressions of $W$, $\mathbf{S}$, $\mathbf{g}_M$, and $\overleftrightarrow{T}$ are given by $W = \tfrac{1}{2}(\mathbf{E}\cdot\mathbf{D} + \mathbf{H}\cdot\mathbf{B})$, (2.a) $\mathbf{S} = \mathbf{E}\times\mathbf{H}$, (2.b) $\mathbf{g}_M = \mathbf{D}\times\mathbf{B}$, (2.c) and $\overleftrightarrow{T} = \tfrac{1}{2}(\mathbf{E}\cdot\mathbf{D} + \mathbf{H}\cdot\mathbf{B})\overleftrightarrow{I} - \mathbf{E}\otimes\mathbf{D} - \mathbf{H}\otimes\mathbf{B}$, (2.d) where $\otimes$ denotes the outer product, and $\overleftrightarrow{I}$ is the identity tensor. The momentum flux density tensor $\overleftrightarrow{T}$ is symmetric due to reciprocity, with $\overleftrightarrow{\varepsilon} = \overleftrightarrow{\varepsilon}^{T}$ and $\overleftrightarrow{\mu} = \overleftrightarrow{\mu}^{T}$ ($\overleftrightarrow{\varepsilon}$ and $\overleftrightarrow{\mu}$ are the material permittivity and permeability tensors, respectively, and the superscript $T$ denotes the transpose operation applied to a tensor). The energy–momentum tensor is, however, asymmetric, since $\mathbf{g}_M \neq \mathbf{S}/c^2$ unless in free space. The asymmetry of the Minkowski tensor was criticized considerably, since it violates the conservation of angular momentum. To fix this issue, the Abraham tensor was proposed with the momentum density modified to $\mathbf{g}_A = \mathbf{E}\times\mathbf{H}/c^2$, (3) which equals $\mathbf{S}/c^2$, thus recovering the symmetry of the energy–momentum tensor. The contradiction between the Minkowski and Abraham energy–momentum tensors [or, more precisely, the Minkowski and Abraham momentum densities, cf. Eqs. (2.c) and (3)] is confusing, since it leads to an indefiniteness of the light momentum, which should, however, have a unique expression due to its physical reality. To explicitly illustrate the predictive difference between the Minkowski and Abraham formulations, we consider a light wave packet with volume $V$ and linear momentum $p_0$ that transmits from free space into a dielectric medium with refractive index $n$. Further, we assume that the dielectric medium is transparent, so that the reflection is negligible, and, thus, the power fluxes of the incident and transmitted light are equal.
As a result, the Minkowski and Abraham momentum densities in the dielectric medium are $n^2 g_0$ and $g_0$, respectively, where $g_0$ denotes the light momentum density in free space. By integrating the momentum density over the wave packet, which occupies a reduced volume of $V/n$ in the dielectric medium, the linear momentum of the transmitted light is $np_0$ or $p_0/n$. Therefore, the Minkowski expression predicts that, on entering a dielectric medium ($n > 1$) from free space, light increases its momentum, while the Abraham one claims the opposite. With this dramatic contrast, one naturally expects that the correct formulation could in principle be identified by measuring the mechanical deformation of the dielectric interface induced by the recoil force due to the increase or decrease of the light momentum. In 1973, Ashkin and Dziedzic performed such a measurement by sending a laser beam from air into water, and observed that the water surface rises. The observation at first sight seems to support Minkowski’s prediction. However, later, more careful calculations, taking into account the nonuniformity of the laser illumination, revealed that it is the Abraham momentum that acts in this type of experiment. Moreover, before Ashkin and Dziedzic’s experiment, Jones and Richards measured the light pressure exerted on a mirror suspended in water in the 1950s[49,50]. They observed that the light pressure increases linearly with the refractive index of water, thus supporting the Minkowski momentum. The experimental evidence did not settle the argument[51–53]. This Minkowski–Abraham dilemma has attracted considerable theoretical effort since the late 1960s. The solution that emerged is a bit unexpected: both the Minkowski and Abraham energy–momentum tensors are physically “acceptable,” yet “flawed,” because each alone is incomplete in describing a closed light–matter system. A complete energy–momentum tensor should include both electromagnetic and material parts.
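The wave-packet bookkeeping above can be checked numerically. The following sketch (with an arbitrary packet momentum and water's refractive index as assumed inputs) reproduces the two predictions $np_0$ and $p_0/n$ from the respective momentum densities and the compressed packet volume.

```python
# Numeric illustration of the Minkowski vs. Abraham predictions for a
# wave packet entering a transparent dielectric of refractive index n.
n = 1.33                 # e.g., water
p0 = 1.0                 # packet momentum in free space (arbitrary units)
V = 1.0                  # packet volume in free space (arbitrary units)

g0 = p0 / V              # momentum density in free space
V_medium = V / n         # packet compressed: group speed drops to c/n

g_minkowski = n**2 * g0  # D x B scales as n^2 at equal power flux
g_abraham = g0           # E x H / c^2 is unchanged

p_minkowski = g_minkowski * V_medium   # = n * p0   (momentum increases)
p_abraham = g_abraham * V_medium       # = p0 / n   (momentum decreases)

print(p_minkowski, p_abraham)
```

The two outputs bracket the free-space value from above and below, which is exactly the dilemma that the interface-deformation experiments tried to resolve.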
Therefore, if appropriate material counterparts are supplemented, the total energy–momentum tensor should always be unique. However, as pointed out by Brevik in 1979 in his comprehensive paper, there unfortunately exists no unique way to partition the energy–momentum tensor into electromagnetic and material parts, a fact that partly relates to the ambiguous definition of momentum densities. Notably, as derived by Barnett in 2010, the Abraham and Minkowski momentum densities [cf. Eqs. (2.c) and (3)] correspond to kinetic and canonical momentum densities, respectively. The difference between these two types of momentum densities can be understood by considering a charged particle in electromagnetic fields. In this case, the kinetic momentum density of the particle is simply the product of the mass density and velocity, $\rho_m\mathbf{v}$, which describes the motion of the particle. Differently, the canonical momentum density is $\rho_m\mathbf{v} + \rho_q\mathbf{A}$ ($\rho_q$, particle charge density; $\mathbf{A}$, electromagnetic vector potential), which is the conjugate variable of the position and is the translation generator in quantum theory. Similarly, light also has its kinetic and canonical momenta. In this sense, the Minkowski and Abraham momentum densities are both meaningful, but have different physical meanings. The preferred choice of one over the other is simply a matter of convenience for interpreting the physics without referring to the full light–matter expression. As argued by Barnett, in most optical experiments, which mainly focus on measuring displacements of micro-objects in host fluidic media, the Minkowski form is preferable, since the canonical momentum intimately relates to the translation operation. For the Minkowski tensor [Eq. (1)], we denote its accompanying material tensor by $T_{\mathrm{mat}}$, and, thus, the complete light–matter tensor is $T_{\mathrm{tot}} = T_M + T_{\mathrm{mat}}$. $T_{\mathrm{mat}}$ is material dependent.
For a non-viscous, non-dispersive, isotropic fluid, Mikura in 1976 derived the corresponding expression [Eq. (4)], where $\rho$ is the material density, $u$ is the internal energy of non-electromagnetic nature, and $p$ is the fluidic pressure. With the unique light–matter tensor $T_{\mathrm{tot}}$, the energy and momentum conservation laws are given by $\partial_\nu T^{\mathrm{tot}}_{\mu\nu} = 0$, where $x_\nu$ ($\nu = 0, 1, 2, 3$) correspond to the 4D space–time coordinates. Specifically, the momentum conservation law is expressed as Eq. (5), whose right-hand side gives the force density exerted on the fluid, the optical part of which is $\mathbf{f} = -\nabla\cdot\overleftrightarrow{T} - \partial\mathbf{g}_M/\partial t$. Generally, optical manipulations with the optical force are performed with continuous laser light; that is, the electromagnetic fields have a harmonic time dependence $e^{-i\omega t}$. In this case, one is concerned with the time-averaged optical force density, denoted by $\langle\mathbf{f}\rangle$, which is given by $\langle\mathbf{f}\rangle = -\nabla\cdot\langle\overleftrightarrow{T}\rangle$, (6) with $\langle\cdot\rangle$ denoting the time average over one optical cycle. Accordingly, the time-averaged optical force exerted on a closed domain $\Omega$ enclosed by the boundary $\partial\Omega$ is $\langle\mathbf{F}\rangle = -\oint_{\partial\Omega}\langle\overleftrightarrow{T}\rangle\cdot\hat{\mathbf{n}}\,\mathrm{d}S$, (7) where $\hat{\mathbf{n}}$ denotes the outward unit normal vector on $\partial\Omega$. Note that in some literature, the optical force is expressed differently in terms of the Maxwell stress tensor, which is the negative of the momentum flux density, so that there exists a sign difference compared with Eq. (7).

2.1.3 Optical force

The time-averaged optical force exerted on a solid particle embedded in a fluid, a prototype problem in optical manipulation, can be calculated by performing the surface integral of the momentum flux density with Eq. (7). Note that, even though this formulation is derived for the fluidic case, it also extends to other materials as long as electrostriction and magnetostriction effects are negligible. Departing from Eq. (7), further analysis of the optical force can be conducted either by examining the linear momentum transfer of light and representing the optical force in terms of scattering and absorption quantifiers (e.g., the time-averaged scattering and absorption power), or by expressing the optical force in terms of the particle polarizabilities and the electromagnetic fields acting on the particle.
The former approach adopts a light wave perspective, while the latter is more oriented to a particle perspective. Together they provide complementary insights.

Light wave perspective

An elementary plane wave with wave vector $\mathbf{k}$ carries a time-averaged momentum flux density equal to $n\langle S\rangle/c$, where $n$ is the refractive index of the background medium, $\langle S\rangle$ is the time-averaged power flux density, and $k = n\omega/c$. A generalized incident light beam can be expressed as a linear superposition of plane waves with different $\mathbf{k}$ [Eq. (8.a)], where $\hat{\mathbf{e}}_1(\mathbf{k})$ and $\hat{\mathbf{e}}_2(\mathbf{k})$ are two orthogonal polarization vectors with $\hat{\mathbf{e}}_1\cdot\mathbf{k} = 0$ and $\hat{\mathbf{e}}_2\cdot\mathbf{k} = 0$, the spherical angular coordinates $\theta$ and $\varphi$ specify the plane wave vector $\mathbf{k}$, and $a_{1,2}(\mathbf{k})$ is the amplitude of each plane wave component. Consider that a light beam interacts with a micro-object. Generally speaking, part of the incident wave is absorbed and part is scattered. The scattered electric fields are expressed by Eq. (8.b), where $b_{1,2}(\mathbf{k})$ denotes the amplitude of a scattered plane wave. Note that in Eq. (8.b), at positions far away from the object, integration is restricted to the appropriate angular half-ranges to respect the Sommerfeld outgoing-wave conditions. The total electric field is the summation of the incident and scattered fields. Then, by integrating the momentum flux density on the boundary with Eq. (7), the time-averaged optical force is obtained as $\langle\mathbf{F}\rangle = \frac{n}{c}\left(P_{\mathrm{ext}}\,\bar{\mathbf{n}}_{\mathrm{inc}} - P_{\mathrm{sca}}\,\bar{\mathbf{n}}_{\mathrm{sca}}\right)$. (9) Here, $P_{\mathrm{sca}}$ denotes the time-averaged optical scattering power, while $P_{\mathrm{ext}}$ denotes the time-averaged extinction power, which sums the scattering and absorption powers. Also, $\bar{\mathbf{n}}_{\mathrm{inc}}$ and $\bar{\mathbf{n}}_{\mathrm{sca}}$ are the power-weighted average unit vectors of the incident and scattered plane wave components, respectively. The physical meaning of $\bar{\mathbf{n}}_{\mathrm{sca}}$ is straightforward from its definition: its direction is interpreted as the average direction of the scattered waves. However, the physical meaning of $\bar{\mathbf{n}}_{\mathrm{inc}}$ seems ambiguous. Nevertheless, by referring to the specific case in which the incident wave has only a single plane wave component with wave vector $\mathbf{k}_0$, we notice $\bar{\mathbf{n}}_{\mathrm{inc}} = \hat{\mathbf{k}}_0$.
In this sense, we interpret $\bar{\mathbf{n}}_{\mathrm{inc}}$ as the average direction of the incident waves sensed by the micro-object. Note that in Ref. , an expression similar to Eq. (9) is presented, which, however, is limited to propagation-invariant incident waves (such as a single plane wave or a Bessel beam) and to lossless particles, while Eq. (9) is free of these limitations. Equation (9) well interprets the origin of the optical force from the perspective of light scattering and absorption. First, this equation can be reformulated as $\langle\mathbf{F}\rangle = \frac{n}{c}\left[P_{\mathrm{abs}}\,\bar{\mathbf{n}}_{\mathrm{inc}} + P_{\mathrm{sca}}\left(\bar{\mathbf{n}}_{\mathrm{inc}} - \bar{\mathbf{n}}_{\mathrm{sca}}\right)\right]$, where $P_{\mathrm{abs}} = P_{\mathrm{ext}} - P_{\mathrm{sca}}$ is the time-averaged absorption power. Then, we notice that the first term on the right-hand side of the above expression gives the optical force due to absorption of the light beam with linear momentum in the direction of $\bar{\mathbf{n}}_{\mathrm{inc}}$, while the second term describes the recoil force due to the change of the direction of the light momentum. Thus, Eq. (9) well characterizes that the optical force is generated by linear momentum transfer from light to matter. Moreover, with Eq. (9), the order of the optical force can be immediately estimated. For instance, when $P_{\mathrm{ext}} \sim 1\ \mathrm{mW}$, which is often the case in typical optical manipulation experiments, we have $\langle F\rangle \sim \mathrm{pN}$. When the incident wave is a single plane wave with wave vector $\mathbf{k}_0$, Eq. (9) simplifies to $\langle\mathbf{F}\rangle = \frac{n}{c}\left(P_{\mathrm{ext}}\,\hat{\mathbf{k}}_0 - P_{\mathrm{sca}}\,\bar{\mathbf{n}}_{\mathrm{sca}}\right)$. Especially, when the target object happens to be a reflecting mirror with a planar surface and the incident light is directed perpendicular to it, from a geometrical optics point of view, $P_{\mathrm{sca}} = P_{\mathrm{ext}} = P$ and $\bar{\mathbf{n}}_{\mathrm{sca}} = -\hat{\mathbf{k}}_0$, and Eq. (9) reduces to $F = 2nP/c$, the intuitive expression provided in Sec. 1 (Table 1). In a more general sense, for a passive particle, we have $P_{\mathrm{ext}} \geq P_{\mathrm{sca}} \geq P_{\mathrm{sca}}\,\hat{\mathbf{k}}_0\cdot\bar{\mathbf{n}}_{\mathrm{sca}}$, so that $\langle\mathbf{F}\rangle\cdot\hat{\mathbf{k}}_0$ is positive; that is, the optical force exerted by a plane wave always pushes the particle along its propagation direction. Therefore, to enable a pulling force on the particle, a non-plane wave, such as a Bessel beam as suggested by Chen et al., is indispensable.
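The order-of-magnitude estimates above can be reproduced in a few lines. This sketch assumes a 1 mW beam in water and the two limiting cases discussed in the text (full absorption of a plane wave, and an ideal mirror at normal incidence).

```python
# Order-of-magnitude estimate of the optical force from the Eq. (9)-type
# momentum budget; the power and refractive index are assumed values.
c = 299_792_458.0
n = 1.33                       # water as the background medium
P_ext = 1e-3                   # 1 mW extinction power

F_absorb = n * P_ext / c       # fully absorbed plane wave
F_mirror = 2.0 * n * P_ext / c # perfect planar mirror, normal incidence

print(f"absorbing particle: {F_absorb*1e12:.2f} pN, mirror: {F_mirror*1e12:.2f} pN")
```

Both values land in the pN range, consistent with the magnitudes quoted in Table 1 for fluid-domain optical manipulation.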
The light wave perspective is advantageous in providing a clear physical picture of the optical force in terms of linear momentum exchange through scattering and absorption processes. However, it does not bring concrete insights into how one could engineer light beam profiles or the electromagnetic responses of micro-objects to generate a desired optical force. Moreover, the conventional classification of the optical force into scattering, gradient, and other force types does not follow from the light wave perspective. Herein lies the worth of the particle perspective, in which the electromagnetic responses of the micro-object are parameterized by a series of induced electric and magnetic multipoles. In particular, for an object size much smaller than the light wavelength, i.e., in the so-called Rayleigh-limit regime, it suffices to consider only the electric and magnetic dipole moments, denoted by $\mathbf{p}$ and $\mathbf{m}$, respectively, which relate to the incident electromagnetic fields by $\mathbf{p} = \alpha_e\mathbf{E}$ and $\mathbf{m} = \alpha_m\mathbf{H}$, where $\alpha_e$ and $\alpha_m$ are the electric and magnetic polarizabilities of the object, respectively. The scattered electromagnetic fields can be expressed in terms of $\mathbf{p}$ and $\mathbf{m}$ by employing the Green’s function technique. Then, knowing the fields in terms of $\mathbf{p}$ and $\mathbf{m}$, the optical force can be calculated with Eq. (9), and its expression is given by[4,57] $\langle\mathbf{F}\rangle = \mathbf{F}_{\mathrm{grad}} + \mathbf{F}_{\mathrm{scat}} + \mathbf{F}_{e\text{-}m}$. (10) Here, $\mathbf{F}_{\mathrm{grad}}$ is the gradient force, which describes a type of optical force that points toward the hotspot of the light beam. $\mathbf{F}_{\mathrm{scat}}$ is usually called the scattering force in the existing literature. This is because, under the incidence of a plane wave with wave vector $\mathbf{k}_0$, $\mathbf{F}_{\mathrm{grad}}$ vanishes, while $\mathbf{F}_{\mathrm{scat}} \propto P_{\mathrm{ext}}\hat{\mathbf{k}}_0$, which, besides the optical absorption, intimately relates to wave scattering. Therefore, in a rigorous sense, it is more proper to call $\mathbf{F}_{\mathrm{scat}}$ the extinction force. Both $\mathbf{F}_{\mathrm{grad}}$ and $\mathbf{F}_{\mathrm{scat}}$ are contained in the first term on the right-hand side of Eq. (9), i.e., $(n/c)P_{\mathrm{ext}}\bar{\mathbf{n}}_{\mathrm{inc}}$. $\mathbf{F}_{e\text{-}m}$ describes the optical force due to the joint contribution of the electric and magnetic dipoles, corresponding to the second term in Eq.
(9) that relates only to the scattered fields, i.e., $-(n/c)P_{\mathrm{sca}}\bar{\mathbf{n}}_{\mathrm{sca}}$. A non-vanishing $\mathbf{F}_{e\text{-}m}$ necessarily requires that both electric and magnetic dipoles exist. Otherwise, if only the electric or the magnetic dipole exists, the scattered fields distribute symmetrically around the axis of the dipole orientation, thus leading to $\bar{\mathbf{n}}_{\mathrm{sca}} = 0$, which makes the second term in Eq. (9) vanish. The optical gradient and scattering forces are widely explored for various types of optical manipulation, a detailed review of which is provided in Sec. 3. Specifically, the most straightforward way to generate the gradient force is to use a focused Gaussian beam, such that the force points toward the beam center. This gradient force makes it possible to trap a particle against Brownian motion, of which the “optical tweezer” is perhaps the most famous application. Nowadays, owing to new insights emerging from nanophotonics, the use of a focused beam is no longer a mandatory condition for generating the gradient force: steep hotspots can be induced with an unfocused beam by utilizing plasmonic near-field effects and dielectric resonances. On the other hand, as long as a propagating beam is scattered or absorbed by a micro-object, the scattering force always exists, which can be used to drive the motion of the object. In this regard, to maximize the scattering force, the electric and magnetic resonances, manifesting as spectral peaks of $\alpha_e$ and $\alpha_m$, can be utilized. Even though the gradient and scattering forces are derived in the Rayleigh-limit regime, their existence is independent of the particle size. In particular, when the particle size is much larger than the light wavelength, an intuitive way to understand the gradient and scattering forces is to use ray optics, where optical forces can be regarded as recoil forces originating from the momentum direction change of the rays due to refraction, as shown in Fig. 2, which intentionally highlights the application of optical tweezers.
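The Rayleigh-regime gradient force can be made concrete with a short numerical sketch. It uses the textbook Clausius–Mossotti polarizability and a Gaussian intensity profile; the bead, beam, and medium parameters are illustrative assumptions, and the formulas are standard Rayleigh-limit results rather than the full Eq. (10).

```python
import math

# Sketch: gradient force on a small dielectric sphere in a focused beam.
eps0 = 8.8541878128e-12
c = 299_792_458.0

a = 50e-9                    # particle radius (m), well below the wavelength
n_p, n_m = 1.59, 1.33        # polystyrene bead in water (assumed)
eps_p, eps_m = n_p**2, n_m**2

# Clausius-Mossotti polarizability of a small sphere in a host medium
alpha = 4.0 * math.pi * eps0 * eps_m * a**3 * (eps_p - eps_m) / (eps_p + 2.0 * eps_m)

# Focused Gaussian beam intensity profile I(r) = I0 * exp(-2 r^2 / w0^2)
P, w0 = 10e-3, 0.5e-6        # 10 mW power, 0.5 um waist (assumed)
I0 = 2.0 * P / (math.pi * w0**2)

def grad_force(r):
    """|F_grad| = alpha * |dI/dr| / (2 n_m eps0 c), from F = (alpha/4) grad|E|^2."""
    dI_dr = I0 * math.exp(-2.0 * r**2 / w0**2) * (4.0 * r / w0**2)
    return alpha * dI_dr / (2.0 * n_m * eps0 * c)

F_max = grad_force(w0 / 2.0)   # steepest point of the Gaussian profile
print(f"peak gradient force ~ {F_max*1e15:.1f} fN")
```

For a 50 nm bead the result sits in the tens-of-fN range, which is why larger beads or resonant enhancement are needed to reach the pN stiffnesses typical of optical tweezers.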
Figure 2. Illustration of the basic principles of optical force in optical tweezers using ray optics. (a) A trapping beam is focused with the help of a high-NA objective into the sample plane, and a particle can then be trapped at the focal point of the beam due to the large intensity gradients created. The trapping laser is reflected and refracted through the particle and imparts momentum to the particle. (b) The scattering force produced by laser reflection pushes the particle along the laser propagation direction. (c) The gradient force caused by the light intensity gradient pulls the particle toward the intensity maximum of the laser. (d) Similar arguments apply along the transverse direction. (e) For Rayleigh particles, the electric field of the light produces an induced dipole in the particles, which is subject to the optical gradient force pointing toward regions of high field gradients. The validity of ray optics requires that the particle size be much larger than the wavelength, roughly speaking by at least one order of magnitude.

2.1.4 Optical torque

The transfer of the angular momentum of light generates an optical torque on the object. Under the Minkowski energy–momentum tensor, the angular momentum density and its accompanying flux density tensor are expressed as $\mathbf{j} = \mathbf{r}\times\mathbf{g}_M$ (12.a) and $\overleftrightarrow{M} = \mathbf{r}\times\overleftrightarrow{T}$ (12.b), respectively. Here, $\mathbf{j}$ directly follows from the definition of the angular momentum in classical mechanics, while $\overleftrightarrow{M}$ is derived from the conservation law of the angular momentum. Then, considering a light beam with angular frequency $\omega$ interacting with a micro-object, the induced time-averaged optical torque, denoted by $\langle\boldsymbol{\Gamma}\rangle$, is given by the surface integral of the time-averaged angular momentum flux density over a closed boundary enclosing the object [Eq. (13)], similar to the formulation of the optical force in Eq. (7). The angular momentum of light includes OAM and SAM.
In quantum mechanics, the angular momentum operators are generators of rotations, which here rotate both the amplitudes and the polarization orientations of electromagnetic fields. Intuitively, the rotation of the field amplitudes is associated with the OAM, while the rotation of the polarization orientations relates to the SAM. Despite this clear physical picture, neither OAM nor SAM is a true angular momentum, as pointed out by van Enk and Nienhuis, since their respective rotation operators violate the transversality of electromagnetic waves[59,60]. Nevertheless, both OAM and SAM are physically meaningful, as has been confirmed by a number of experiments. The angular momentum density can be decomposed into OAM and SAM components. Specifically, for harmonic electromagnetic fields, the time-averaged OAM and SAM densities, denoted by $\langle\mathbf{l}\rangle$ and $\langle\mathbf{s}\rangle$, respectively, are given by Eqs. (14.a) and (14.b). Using Maxwell’s equations, it can be directly checked that $\langle\mathbf{j}\rangle = \langle\mathbf{l}\rangle + \langle\mathbf{s}\rangle$. Note that the term $\mathbf{r}\times\nabla$ appearing in $\langle\mathbf{l}\rangle$ resembles the OAM operator in quantum mechanics. Moreover, a non-vanishing $\langle\mathbf{s}\rangle$ places specific requirements on the light polarization; for instance, for linearly polarized light, $\langle\mathbf{s}\rangle = 0$. A similar separation of the angular momentum flux density tensor is mainly discussed in the context of paraxial optics or cylindrically symmetric light beams. Later, by referring to a concrete light beam that carries both OAM and SAM, we will provide more discussion of the OAM and SAM flux densities. Historically, intensive study of the angular momentum of light appeared in the 1990s, when Allen explicitly showed that a Laguerre–Gaussian beam carries a well-defined angular momentum, which can be decomposed into an OAM part associated with the azimuthal phase $e^{il\varphi}$ ($l$ is an integer) and an SAM part relating to the light polarization.
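Allen's decomposition has a simple per-photon bookkeeping: SAM of $\sigma\hbar$ and OAM of $l\hbar$, so the angular momentum per unit energy is $(l+\sigma)/\omega$. The sketch below checks this consistency for assumed, illustrative mode numbers and wavelength.

```python
import math

# SAM/OAM bookkeeping for a vortex beam: per photon, SAM = sigma*hbar and
# OAM = l*hbar, so the total J_z per unit energy is (l + sigma)/omega.
hbar = 1.054_571_817e-34
c = 299_792_458.0

lam = 532e-9                          # assumed wavelength
omega = 2.0 * math.pi * c / lam

l, sigma = 2, -1                      # illustrative vortex charge and polarization
Jz_per_energy = (l + sigma) / omega   # angular momentum per joule (s)
Jz_per_photon = (l + sigma) * hbar    # kg m^2 / s

# Consistency check: one photon carries energy hbar*omega
assert abs(Jz_per_photon - Jz_per_energy * hbar * omega) < 1e-60
print(Jz_per_energy, Jz_per_photon)
```

With $l = 2$ and $\sigma = -1$ the beam carries one quantum of angular momentum per photon, illustrating how OAM and SAM contributions can partially cancel.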
After Allen’s milestone discovery, researchers found that, besides the Laguerre–Gaussian beam, myriad beams, such as Bessel beams, perfect vortex beams, and higher-order Poincaré sphere beams, all carry angular momentum. To pedagogically elucidate Allen’s discovery, we here consider a light beam propagating in the $z$ direction, whose electric fields are expressed in the cylindrical coordinate system by Eq. (16). For simplicity of the mathematical derivations, the $z$ component of the electric field is set to zero without loss of generality. The complex numbers $c_L$ and $c_R$ are normalized coefficients for the left- and right-circularly polarized components, respectively, and satisfy $|c_L|^2 + |c_R|^2 = 1$. The overall polarization state of the light beam is quantified by $\sigma = |c_L|^2 - |c_R|^2$. Apparently, $-1 \leq \sigma \leq 1$; $\sigma = 1$ and $\sigma = -1$ correspond to left- and right-circular polarizations, respectively, while $\sigma = 0$ characterizes linear polarization. Since the considered light beam is cylindrically symmetric, only the $z$-component angular momentum exists, the evaluation of which involves the $z$-component magnetic field $H_z$, as indicated in Eq. (12.a). From Maxwell’s equations, $H_z$ is derived from the transverse electric fields. Then, applying Eq. (13), the $z$ component of the time-averaged angular momentum density is obtained. Therefore, the time-averaged $z$-component angular momentum is derived to be $J_z = S_z + L_z$, with $S_z = \sigma W/\omega$ and $L_z = lW/\omega$. Here, $W$ is the total electromagnetic energy of the incident fields; $S_z$ and $L_z$ characterize the time-averaged $z$-component SAM and OAM, respectively, which can also be computed by performing volume integrations of the SAM and OAM densities expressed in Eqs. (14.a) and (14.b). Concerning the angular momentum flux density of the cylindrically symmetric beam discussed above, Barnett derived expressions for the $z$-component flux density that explicitly separate the OAM and SAM contributions, where $M_L$ and $M_S$ denote the time-averaged OAM and SAM flux densities through the x−y plane in the $z$ direction, respectively.
The time-averaged OAM and SAM fluxes through the x−y plane, denoted by $\Phi_L$ and $\Phi_S$, respectively, are contributed only from the corresponding flux densities, and their expressions are $\Phi_L = lP/\omega$ and $\Phi_S = \sigma P/\omega$, where $P$ denotes the time-averaged power flux through the plane, which for the light beam expressed in Eq. (16) is along the $z$ direction. Apparently, if the light beam is absorbed by a micro-object, part of its OAM and SAM can be transferred to the object. The generated optical torque can then be calculated by integrating the flux density over a closed surface enclosing the object with Eq. (13). In this way, there is $\langle\Gamma_z\rangle = (l + \sigma)P_{\mathrm{abs}}/\omega$, where $P_{\mathrm{abs}}$ denotes the time-averaged optical power absorbed by the object, which will induce rotation of the micro-object. It is interesting to point out that the absorption also leads to an optical force $\langle F\rangle = nP_{\mathrm{abs}}/c$, as suggested by Eq. (9). As a result, in this case, the magnitude of $\langle\Gamma_z\rangle$ is related to that of $\langle F\rangle$ by a simple relationship: $\langle\Gamma_z\rangle = \frac{(l+\sigma)\lambda_n}{2\pi}\langle F\rangle$, (20) where $\lambda_n$ is the wavelength in the background medium. As discussed above, $\langle F\rangle$ is typically of the order of pN when $P_{\mathrm{abs}} \sim 1\ \mathrm{mW}$. Assuming further that the light wavelength is of the order of µm and that the quantum number of the exchanged angular momentum is of the order of one, we estimate that the magnitude of $\langle\Gamma_z\rangle$ is of the order of $10^{-19}\ \mathrm{N\cdot m}$. In Eq. (20), the explicit value of the quantum number of the exchanged angular momentum is unknown, and it is assumed that the momentum transfer occurs through absorption processes. The relation is proposed mainly to gain a concrete appreciation of the order of magnitude of the optical torque in certain physical scenarios. Consequently, to precisely estimate the optical torque, electromagnetic simulations are indispensable. If the object is transparent and birefringent and does not alter the wavefront of the light beam, only the SAM can be transferred to the object. More precisely, denoting the time-averaged optical power associated with the polarization change due to the birefringent effects by $P_{\mathrm{pol}}$, the torque is calculated to be $\langle\Gamma_z\rangle = \Delta\sigma P_{\mathrm{pol}}/\omega$ from Eq.
(13), where $\Delta\sigma$ quantifies the polarization change of the light beam. This polarization-conversion-induced torque was first demonstrated in 1936 by Beth, who used a birefringent wave plate to convert between left- and right-circularly polarized light, thus inducing rotation of the wave plate. The use of light beams carrying angular momentum and of birefringent effects in target objects are not the only approaches to induce optical torque. Many other approaches exist, e.g., taking advantage of anisotropic electric responses in non-spherical particles[66,67] or special beam shapes[68,69]. Their underlying physical mechanisms are very diverse, and they are reviewed in Sec. 3 by referring to concrete experimental demonstrations.

2.1.5 Measurement methods for optical force (based on optical tweezers)

Optical force can be measured or calibrated in optical tweezer systems. With the directly accessible data being the recorded particle displacements, the force status of the trapped particle is obtained indirectly by correlating the two sets of data, namely, “x” and “F,” through the Langevin equation (Newtonian laws of motion) in fluidic environments. Hence, deduction of the optical force is a two-step process: (1) collecting the temporal trajectories of the trapped particle, and (2) converting “x” data to “F” data. The corresponding experimental setup and data processing methods are introduced below. On the other hand, optical torques are always associated with a change of the light beam polarization, the measurement of which is relatively simple compared to that of the optical force; the relevant content is covered in Sec. 3.3.1. Typically, an optical tweezer is built on top of a commercial confocal microscope. At the heart of the trapping system is a high-NA objective lens that produces diffraction-limited focal spots.
To avoid absorption by water, trapping lasers with wavelengths in the visible and near-infrared regimes are favored, which reduces heating effects and mitigates photodamage to fragile biological samples. Figure 3 displays an instance of an ordinary optical tweezer configuration. Using a beam expander, the trapping laser is expanded to either slightly underfill or overfill the objective, with the aim of optimizing the trapping efficiency by weighing the relative effects of the size of the laser focus against the light power truncation. The optical path of the trapping laser is then coordinated by two dichroic mirrors, while a white light source is aligned with it for sample illumination and direct observation through the charge-coupled device (CCD) camera. Figure 3. Experimental schematic of conventional optical tweezers. A simple telescope is used to expand the laser beam to fill the back aperture of the objective. The expanded laser beam, reflected by a dichroic mirror, is coupled into the high-NA objective (lower objective in the sketch) and focused into the chamber. The QPD is placed in a conjugate plane of the condenser objective, collecting the interference signal between the incident light and the forward-scattered light from the sample. LED light is used to illuminate the sample, which is imaged with a CCD camera. In situations that require high-precision measurements of the optical force and torque, or, alternatively, when optical tweezers are calibrated for accurately sensing external stimuli, simply visualizing trapped particles with imaging devices is no longer sufficient. Instead, it is imperative to track the trajectories of the particles at high sampling rates, from which the trap stiffness can be deduced with the particles’ Brownian motion as the reference signal[72,73]. Video recordings can encode the temporal positions of the trapped particle.
However, even for high-speed cameras, the sampling rate is ultimately limited by the exposure time and image processing technique, which compromises the measurement accuracy considerably. In this context, quadrant photodetectors (QPDs), bearing the advantage of high-bandwidth recording, enter the picture, and they have now been widely adopted in state-of-the-art research for position detection. Specifically, the QPD should be placed at the conjugate plane to the back focal plane of the condenser, where the collected light pattern unveils the interference between incident light and scattered light from the sample (see Fig. 3). Each of the four quadrants of the QPD produces a voltage signal, denoted by V1, V2, V3, and V4, and the lateral displacement of a spherical object (the origin is chosen to coincide with the trap center) can be calculated from the normalized differential outputs of the four quadrants. The Langevin equation describes the stochastic behaviors in fluidic suspensions, and it bridges the two sets of information (x and F) with the following formula[10,74]: m(d²x/dt²) + γ(dx/dt) + κx = F_th(t). In the 1D Langevin equation (readily extended to other dimensions), the three terms on the left denote the inertial force, friction force, and optical restoring force in the optical trap, respectively, and the right-hand side is the contribution from random thermal fluctuations that arise from the particle colliding with the surrounding fluid. Note that for spherical particles, the Stokes drag coefficient is a known quantity, γ = 6πηa (η is the viscosity of the fluid and a the radius of the particle), and so is the Brownian diffusion constant derived from the Stokes–Einstein relation D = k_B T/γ, where k_B T stands for the product of the Boltzmann constant and the absolute temperature.
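As a minimal illustration of the QPD readout step, the sketch below converts four hypothetical quadrant voltages into normalized lateral displacement signals. The quadrant layout and the calibration factor beta are assumptions for illustration only, since the exact sign convention depends on the detector wiring.

```python
def qpd_displacement(V1, V2, V3, V4, beta=1.0):
    """Normalized differential QPD outputs (assumed quadrant layout:
    V1 top-left, V2 top-right, V3 bottom-left, V4 bottom-right);
    beta is a hypothetical volts-to-meters calibration factor."""
    total = V1 + V2 + V3 + V4
    x = beta * ((V2 + V4) - (V1 + V3)) / total  # left-right imbalance
    y = beta * ((V1 + V2) - (V3 + V4)) / total  # top-bottom imbalance
    return x, y

# Beam displaced to the right: right-hand quadrants collect more light
x, y = qpd_displacement(1.0, 1.2, 1.0, 1.2)
print(x, y)  # x > 0, y = 0
```

Normalizing by the total intensity makes the signal insensitive to slow laser power drifts, which is why the differential outputs rather than raw voltages are used in practice.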
Moreover, to account for the Gaussian randomness of the collision events, the thermal force satisfies ⟨F_th(t)F_th(t′)⟩ = 2γk_B T δ(t − t′). In an overdamped system, as is always the case in fluidic environments (except in high vacuum), the inertia term is dwarfed by both the viscous drag and the optical force, and can therefore be dropped for simplicity. Consequently, taking the Fourier transform of the time series x(t), the as-obtained power spectrum possesses the feature that its expectation value corresponds to a Lorentzian[73,75]: S(f) = k_B T/[γπ²(f_c² + f²)]. In the above formula, f_c = κ/(2πγ) is the cutoff frequency of the damped oscillator, from which the optical force, or rather, the trap stiffness, can be determined by optimizing the fitting parameters of the Lorentzian. Apparently, faithfully recording the power spectrum is key to precise measurements, which is possible only when the bandwidth of the detector is adequate to avoid aliasing and the loss of high-frequency signals. Another method for optical force deduction is based on the energy equipartition theorem, which statistically relates the thermal fluctuation to the averaged energy as (1/2)κ⟨x²⟩ = (1/2)k_B T. The equipartition theorem applies to all three dimensions. For 3D optical trapping using a single Gaussian beam, the lateral and axial trapping stiffnesses differ, which leads to unequal mean square displacements (different levels of Brownian diffusion) of the trapped particle along the two directions. Though not explicitly dependent on the viscous drag, this method is intrinsically related to the former approach in that the mean square displacement of the particle corresponds to the integral of the power spectrum S(f). In addition to the requirements regarding the bandwidth of position detection, extra care should be taken to calibrate the origin of particle displacement (trap center).
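The two-step calibration described above can be sketched numerically: simulate an overdamped bead in a harmonic trap and then recover the stiffness from the equipartition relation. This is a toy model with assumed, illustrative parameters (bead size, viscosity, stiffness), not the procedure of any specific instrument.

```python
import numpy as np

# Toy equipartition calibration: simulate an overdamped Langevin
# trajectory in a harmonic trap, then estimate kappa = kBT / <x^2>.
rng = np.random.default_rng(0)
kBT = 4.11e-21            # J, room temperature
eta, a = 1.0e-3, 0.5e-6   # water viscosity (Pa s), bead radius (m) - assumed
gamma = 6 * np.pi * eta * a          # Stokes drag, gamma = 6 pi eta a
kappa_true = 1e-6                    # trap stiffness, N/m (1 pN/um) - assumed
dt, n = 1e-5, 200_000                # time step (s) and number of samples
x = np.zeros(n)
noise = np.sqrt(2 * kBT * dt / gamma) * rng.standard_normal(n)
for i in range(1, n):
    # overdamped Langevin: gamma dx = -kappa x dt + thermal kick
    x[i] = x[i-1] - (kappa_true / gamma) * x[i-1] * dt + noise[i]
kappa_est = kBT / np.var(x)          # equipartition: kappa <x^2> = kBT
print(f"true {kappa_true:.2e} N/m, estimated {kappa_est:.2e} N/m")
```

The same trajectory could instead be Fourier-transformed and fitted to the Lorentzian S(f), which additionally checks the detector bandwidth; the equipartition route shown here is the simplest but is more sensitive to drifts of the trap center.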
2.2 Light-Induced Thermophoretic Force Thermophoresis is particle motion in fluidic suspensions driven by a temperature gradient, and the ultimate energy source, as far as this section is concerned, is light absorption. Intrinsically, thermophoresis in air and in liquid environments has the same mechanism, namely, modification of the particle–medium interface by the spatially varying temperature field. Nonetheless, for historical reasons, scientists took different routes in pursuit of proper descriptions of the thermophoretic forces in the two fluidic environments, i.e., the photophoretic force in air and the Soret effects in liquids, with the kinetic model specialized for the former and the hydrodynamic treatment favored for the latter, which we address concretely and separately below. 2.2.1 Photophoretic force The photophoretic force originates from nonuniform absorption and the thermal processes of particles suspended in gaseous environments (aerosols), and it can push, pull, or drive complex motions of light-absorbing particles, depending on the specific physical conditions. Different from the optical force, the photophoretic force is based on momentum transfer between gas molecules and the target particle, where light functions as the energy pump instead of the momentum carrier. Specifically, when an absorbing particle is subject to light irradiation, momentum transfer occurs via nonelastic collisions of gas molecules that are unbalanced between the hot and cool sides. Obeying Maxwell’s law of velocity distribution, gas molecules bounced off the particle acquire thermal energy related to a statistical mean velocity proportional to √T_r, where T_r denotes the temperature of the leaving molecules after collision, as opposed to the initial temperature T_i of the gas surroundings and the surface temperature T_s of the particle (T_s > T_i in this discussion). Note that T_r does not necessarily coincide with T_s.
Indeed, particles differ in their capability of endowing colliding gas molecules with thermal energy, which is quantified by the thermal accommodation coefficient, calculated as α = (T_r − T_i)/(T_s − T_i) [see Fig. 4(c)]. When α = 0, T_r = T_i, gas molecules gain no additional thermal energy from the particle, while when α = 1, T_r = T_s, full accommodation occurs, and gas molecules grab the largest share of thermal energy possible, which is manifested in their collective bounce-off velocities. The accommodation coefficient is affected by both the composition and the surface topology of particles. For instance, glazed platinum has an α of 0.315, whereas platinum black, with a very structured surface, has an α of 0.72. This difference can be understood by the multiple reflections of the molecules in platinum black, which have a greater chance to pick up the surface temperature. Figure 4. Photophoretic force, which is divided into the ΔT-force and the Δα-force. Schematics of the ΔT-force for (a) strongly absorbing and (b) weakly absorbing particles. (c) Schematic of the Δα-force exerted on a particle with a nonuniform thermal accommodation coefficient (α). White, black, and blue arrows in (a)–(c) indicate the propagation of incident light, the vectorial representation of molecular velocity before and after nonelastic collisions, and the direction of the resultant photophoretic force, respectively. Adapted from Ref. . In the spirit of momentum conservation, the particle receives a recoil kick from each individual gas molecule through nonelastic collisions, or rather, a local pressure is exerted on the particle pointing opposite to the molecule’s bounce-off direction. The net impulsive force is then obtained by integrating the local pressure over the entire surface, and it vanishes unless the particle is asymmetrically heated, featuring separate “hot” and “cold” regions, or alternatively, the particle is intrinsically nonuniform in its thermal accommodation coefficient α[19,81].
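The limiting behaviors of the accommodation coefficient can be made concrete in a few lines by inverting its definition, α = (T_r − T_i)/(T_s − T_i); the temperatures below are hypothetical, while the two α values are the glazed-platinum and platinum-black figures quoted above.

```python
def leaving_temperature(T_i, T_s, alpha):
    """Temperature of gas molecules leaving the surface, inverting
    alpha = (T_r - T_i) / (T_s - T_i)."""
    return T_i + alpha * (T_s - T_i)

T_i, T_s = 300.0, 400.0   # hypothetical gas and surface temperatures, K
for alpha in (0.0, 0.315, 0.72, 1.0):  # incl. glazed Pt and Pt black
    T_r = leaving_temperature(T_i, T_s, alpha)
    print(f"alpha = {alpha}: T_r = {T_r:.1f} K")
```

Since the mean bounce-off speed scales as √T_r, a rougher surface (larger α) returns measurably faster molecules from a hot particle, which is exactly the asymmetry that the Δα-force exploits.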
In the two schemes, which result in the ΔT- and Δα-forces, respectively, gas molecules preferentially acquire more thermal energy (higher T_r) at the side of the particle featuring a higher temperature or a larger accommodation coefficient. Though intertwined in experiments, the ΔT- and Δα-forces are often treated independently for numerical and theoretical clarity. Assuming a homogeneous distribution of the thermal accommodation coefficient across the particle, the photophoretic motion of the particle is purely driven by the ΔT-force. Given that the ΔT-force is directed from the hot to the cool side, in the geometric-optics regime it generally acts as a repulsive force for strongly absorptive particles, where the optical near side (illuminated side) is the thermally hot side; conversely, for weakly absorptive particles it acts as an attractive force towards the light source, since the optical rear side (unilluminated side) generates more heat owing to the focusing effect of the particles’ convex surfaces or their action as ball lenses [see Figs. 4(a) and 4(b)][19,22,82,83]. For particles in the Mie or Rayleigh regime, the situation becomes more complex in that multipole interference and resonances might be involved, causing significant directionality of the internal light field not necessarily aligned with light propagation. More rigorous wave-optics analyses are needed in these situations. Quantitatively, the expression of the ΔT-photophoretic force differs for different values of the Knudsen number, the ratio of the mean free path of the surrounding gas molecules to the radius of the target particle, Kn = l/a. In the continuum and slip flow regimes of fluid mechanics, that is, Kn ≪ 1, Yalamov et al.
constructed a theoretical framework that couples the electromagnetic field, heat transfer, and hydrodynamic governing equations and deduced the ΔT-force[77,84], which involves the thermal slip factor, the radiation intensity, the particle thermal conductivity, the viscosity, mass density, and temperature of the surrounding medium, and, most importantly, an asymmetry factor that accounts for the electromagnetic field distribution in the target particle: a positive value of this factor signifies photophoretic pulling, and a negative value suggests pushing. Note that the asymmetry factor is a highly packaged quantity depending on myriad factors, the determination of which requires meticulously solving the electromagnetic equations with assigned boundary conditions, material and geometric parameters, etc., and can be intricate when dealing with Mie or Rayleigh particles, as stated before. Being case dependent, the asymmetry factor may alter considerably with a slight change of parameters, the engineering of which adds a degree of freedom to optical manipulation in determining the direction of the ΔT-force (detailed discussion provided in Secs. 3.1.5 and 3.2.2). Contrastingly, in the free molecule regime (Kn ≫ 1), Hidy et al. obtained the expression of the ΔT-force in the case of a completely opaque sphere, and the resultant formula is more explicit in its physics in that the involved physical quantities carry more direct meanings compared to Eq. (26); it is expressed in terms of the particle radius, the radiation intensity, the gas pressure, the average speed of the gas molecules, the particle thermal conductivity, and the ambient gas temperature. Note that Eq. (27) keeps only the linear term of the original solution, which is expressed in Legendre polynomials.
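The Knudsen-number bookkeeping that selects between the two theoretical treatments can be sketched numerically. The mean free path of air (about 68 nm at 1 atm, scaling inversely with pressure) is a standard textbook value, and the regime boundaries Kn < 0.1 and Kn > 10 used below are illustrative conventions rather than values taken from this review.

```python
# Classify the photophoretic regime from particle radius and gas pressure.
def knudsen(radius_m, pressure_pa, mfp_1atm=68e-9):
    """Kn = mean free path / particle radius; mfp scales as 1/p."""
    mfp = mfp_1atm * 101325.0 / pressure_pa
    return mfp / radius_m

def regime(kn):
    if kn > 10:
        return "free molecule (Hidy et al. treatment)"
    if kn < 0.1:
        return "continuum/slip flow (Yalamov et al. treatment)"
    return "transition"

cases = [(5e-6, 101325.0),   # 5 um particle at 1 atm
         (5e-6, 100.0),      # same particle at ~1 mbar
         (50e-9, 101325.0)]  # 50 nm particle at 1 atm
for a, p in cases:
    kn = knudsen(a, p)
    print(f"a = {a:.0e} m, p = {p:.0f} Pa: Kn = {kn:.3g}, {regime(kn)}")
```

The middle case shows why vacuum experiments on micro-particles are naturally analyzed in the free molecule picture: lowering the pressure by three orders of magnitude moves a micrometer-sized particle from the continuum regime deep into Kn ≫ 1.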
Despite this approximation, it is rather instructive in manifesting the evolution of the ΔT-force in relation to the dominant physical quantities, such as the ambient gas pressure and the thermal conductivity of the particle of interest. Notably, under a further simplification, Eq. (27) reduces to the intuitive expression in Sec. 1 (Table 1). A nonuniformly distributed thermal accommodation coefficient leads to asymmetry in the momentum transfer between the particle and gas molecules, and hence results in an unbalanced Δα-force, as shown in Fig. 4(c). The α-variation inside a single particle might be caused by a difference in surface roughness or material composition, and the latter relates to the use of Janus particles for asymmetry-induced optical manipulation (see Secs. 3.2.2 and 3.3.2). Since a larger accommodation coefficient corresponds to a faster mean velocity of the reflected gas molecules, the law of momentum conservation dictates that the Δα-force, as a recoil force, points in the descending direction of α; it is thus directly tied to the particle orientation while irrespective of (or less affected by) the illumination direction [see the lower panel in Fig. 4(c)]. A concrete deduction of the Δα-force is still lacking; readers can refer to Refs. [19,85] for more insight. 2.2.2 Soret effects in liquids Thermophoresis in liquid environments, or rather, the Soret effect, may not be easily embodied as a concrete “force,” since the kinetic theory loses its validity, and what proves convincing instead is the hydrodynamic picture. In general, researchers preferably use the concepts of “thermal diffusion” and “mass flux” to describe and quantify, respectively, the Soret effects in liquid suspensions. Specifically, a temperature gradient must exist before the Soret effects can take place.
Different from photophoresis, where this thermal nonuniformity is assured by the particle absorption, in liquids the long-range temperature field is more often established through substrate absorption (except for Janus particles), whereas the absorptivity of the particles is not particularly relevant. For instance, even transparent particles can be manipulated via Soret effects, provided that the substrate is light-absorptive and the particles possess a nonzero Soret coefficient (introduced below). In the presence of a spatially varying temperature field, an extra drift of particles can occur on top of Brownian diffusion, typically from the hot to the cold side, and the total mass flow can be expressed as[23,86] J = −cD_T∇T − D∇c, where c is the concentration of the target objects (e.g., particles, molecules), D_T is the thermophoretic mobility (or thermal diffusion coefficient, for historical reasons), ∇T is the temperature gradient, and D is the Brownian diffusion coefficient. In particular, the first term represents the mass flow due to the “extra drift” (drift velocity v = −D_T∇T), which is proportional to the temperature gradient. In the steady state, with the net mass flow vanishing, a nonuniform concentration profile is established, written as c = c_0 exp[−S_T(T − T_0)], where the Soret coefficient is defined as S_T = D_T/D. The magnitude of S_T measures the strength of thermophoretic flow relative to the tendency of random Brownian diffusion, and its sign specifies the direction of thermophoretic flow. For S_T > 0, which is the most common situation, objects move from the hot to the cold region and exhibit a thermophobic property, and vice versa for S_T < 0. Note that the Soret coefficient is essentially related to the detailed configuration of the particle–medium interface.
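The steady-state balance above gives an exponential depletion (or accumulation) profile, which a few lines make quantitative; the Soret coefficient and temperature excursion below are assumed values chosen only to illustrate the thermophobic case.

```python
import numpy as np

# Steady-state Soret profile: c = c0 * exp(-S_T * (T - T0)).
S_T = 0.1            # Soret coefficient, 1/K (hypothetical, S_T > 0)
T0, c0 = 300.0, 1.0  # reference temperature (K) and concentration
T = np.linspace(300.0, 310.0, 6)   # local temperatures across a hot spot, K
c = c0 * np.exp(-S_T * (T - T0))
for Ti, ci in zip(T, c):
    print(f"T = {Ti:.0f} K -> c/c0 = {ci:.3f}")
```

With these numbers a 10 K hot spot depletes the local concentration to exp(−1) ≈ 37% of its bulk value, showing why even modest laser heating produces strong, optically addressable concentration contrast for thermophobic species.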
Hence, the thermal diffusive behavior of different particles varies significantly, and it can also be tuned by adding surfactants that effectively modify the interfacial properties, by adjusting the pH, or by setting the temperature range (S_T is a function of T) of the solvents[87,88]. Recently, by utilizing the Soret effects in electrolytic solutions, researchers have introduced a real “force” upon suspended particles, which is based on an opto-thermally induced electric field[87,89]. To go from the light signal to the electric field, light–thermal conversion is again the necessary intermediate to imprint the temperature field in the liquid suspension. Afterwards, the trick is to first decorate the target particle with charged micelles, and then trigger spatial segregation between positive and negative ionic species, fostered by their difference in thermophoretic mobilities. Indeed, this is a sub-branch of the Seebeck effect in the liquid domain. More details on the opto-thermoelectric force are provided in Secs. 3.1.6 and 3.2.3. 2.3 Light-Induced Deformation Effects 2.3.1 Optical manipulation on solid interfaces The aforementioned optical and photophoretic forces are generally applied to manipulate micro-objects in low-adhesive fluidic environments, where adhesion/friction forces are tiny, typically of the order of pN or even smaller. Their use on dry solid interfaces is, however, expected to fail, because the adhesion/friction forces therein are far too strong, easily reaching the µN scale, which is several orders of magnitude greater than optical and photophoretic forces. Consequently, other physical mechanisms/effects are required to achieve optical manipulation on solid interfaces.
Among various proposals for optical manipulation on solid interfaces, one group of explorations based on light-induced deformation effects attracts significant interest, owing to its rich physics at the intersection of nanophotonics, nanomaterials, and solid mechanics, and to its promising technological applications. Roughly speaking, “light-induced deformation effects” is an umbrella term covering phenomena in which an object deforms its shape under the irradiance of light. Such effects necessarily convert light energy into mechanical energy. The energy conversion can be mediated by thermal effects, which change the temperature of the object and then induce lattice oscillations (i.e., elastic waves). Alternatively, it can take advantage of phase-transition effects, so that lattice reconfigurations generate strong stresses enabling shape deformation. During the shape deformation of the object, the adhesion/friction forces act as resistance, which, however, generally cannot suppress the deformation, as long as the latter is sufficiently intense. Light-induced deformation effects by themselves do not automatically deliver the desired optical manipulation. Realizing optical manipulation and motion control additionally demands elaborate structural designs and material choices. Research on this topic is diverse and multidisciplinary. Notable examples include the use of bimorph structures composed of two stacked thin films with strongly contrasting coefficients of thermal expansion, liquid crystal elastomers and networks, hydrogels, and so on, a detailed review of which is provided in Sec. 4.2. Even though these examples demonstrate versatile motions, such as vibrations, translations, and rotations, the target objects are largely limited to macroscopic dimensions (mm to cm scale). Recently, a new solution for manipulating micro-objects on solid interfaces, based on elastic waves induced by pulsed light, was reported.
Compared with the conventional approaches that are similarly based on light-induced elastic deformations, the essential technical ingredient of this new solution lies in the use of nanosecond pulsed light rather than continuous light, thereby transforming the physical picture from “quasi-static” elastic deformation, conceptually based on, e.g., thermal expansion/contraction, to a dynamic deformation picture that requires taking the temporal evolution of elastic waves into account. Using this technique, researchers have successfully achieved actuation of micro-sized objects on micro-fibers with nearly full degrees of freedom, as reviewed in Sec. 4.1.2. Below, we intentionally highlight the physical mechanisms behind this technique, since they have not been comprehensively reviewed before and, moreover, the authors of this review have been working intensively on the technique for the last five years. Highlighting it here is a matter of personal taste, and we hope that readers will allow this choice. 2.3.2 Optical manipulation with elastic waves induced by pulsed light To pedagogically clarify the principle of optical manipulation on solid interfaces driven by pulsed-light-induced elastic waves, we here refer to a concrete 2D physical model, as shown in Fig. 5. A thin microplate is placed on a substrate. The friction force, which is simplified to be a point force, is exerted on the plate when the latter moves on the substrate. Under irradiance of pulsed light, elastic waves are excited owing to the temperature rise caused by optical absorption. Note that in the existing literature on this topic, some specific names for the excited mechanical waves, such as acoustic waves, Lamb waves, and Rayleigh waves, can be found; nevertheless, they can all be grouped under the same name, elastic waves, for clarity. Along with the excitation of elastic waves, the plate deforms its shape and induces the friction force.
In such physical processes, to better guide our discussion, we raise two questions for the reader: (i) can the plate be driven, in the sense that the whole plate translates a distance on the substrate, and (ii) if it can, what are the essential physical mechanisms? Figure 5. Sketch of a micro-object on a substrate driven by elastic waves induced by pulsed light. A point friction force is exerted on the micro-object when the latter moves on the substrate. Adapted from Ref. . To approach the proposed questions, we first write the linear elastic equation that describes the elastic deformation of the plate due to the temperature variation and the induced friction force [see Fig. 5], in which the displacement field of the excited elastic waves enters together with Poisson’s ratio, Young’s modulus, mass density, and the linear coefficient of thermal expansion of the plate. Then, to explicitly reveal the contribution from the friction force, we decompose the displacement field into two parts: one is the solution of Eq. (30) with zero friction, i.e., quantifying the sole contribution from the temperature variation, and the friction contribution is left in the other. We first point out that, in the absence of the friction force, the excited elastic waves can deform only the shape of the plate, but are unable to drive the plate translation. This is because, without the friction force, the external force in the translation direction vanishes, thus leaving the center of mass of the plate unchanged; that is, the spatial average of the temperature-induced displacement is zero. Consequently, to enable the translational motion, an external friction force is indispensable. This deduction is not surprising; actually, the same principle is unconsciously exploited by all of us every day when walking. We analyze the friction-induced elastic displacement by expanding it in elastic modes.
Assuming that the plate thickness is much smaller than the wavelength of the elastic waves, the fundamental longitudinal elastic modes are dominantly excited, with the wavenumber and velocity at a given angular frequency fixed by the plate dispersion relation. The longitudinal modes have displacement fields almost parallel to their propagation direction (i.e., the x direction) and are uniformly distributed over the plate thickness. The dominant x-component of the friction-induced displacement is approximated by a mode sum whose ingredients are as follows: the one-round-trip travel time for the elastic waves propagating in the plate, set by the plate length in the x direction; the travel times for the elastic waves traveling from the friction point to the observation point four times within one round trip; the Heaviside step function Θ(t), with Θ(t) = 1 for t ≥ 0 and 0 otherwise; an integer index labeling the round trips of the elastic waves bouncing back and forth inside the plate; and the mass of the plate. The friction force is the parallel component of the surface adhesive force. It includes contributions mainly from van der Waals forces, and can also be affected by a variety of surface factors, such as roughness and possibly accumulated surface charges. Therefore, a precise estimation of the friction force from first principles seems impractical. To bypass this difficulty, Tang and co-authors in Ref.  introduced a phenomenological model to determine the friction force. In their model, the key quantity is the sliding resistance, which is the maximum allowable static friction; the dynamic friction force (when the plate is in motion) is set equal to it for analytical simplicity. Their strategy to determine the friction force is based on an intuitive physical argument: the friction force is induced so as to mitigate the elastic deformation, keeping the magnitude of the deformation velocity at the friction point as small as possible. Along this line, it is found that the friction force depends not only on the instantaneous deformation velocity, but also on the friction force itself at previous times.
Here, for an intuitive demonstration of the essential physics, we sacrifice exactness and neglect the less important latter dependence; more precisely, this approximation amounts to retaining the leading-order term in Eq. (32.b). As a result, the friction force is approximately given as follows: when the friction point is still, which requires that the deformation velocity of the friction point due to the temperature variation alone, denoted v_T, does not exceed a threshold in magnitude, the friction force statically balances the elastic driving; otherwise, when the friction point is in motion, which occurs when |v_T| exceeds the threshold, the friction force takes the magnitude of the sliding resistance and is directed against the motion. In Eqs. (33.a) and (33.b), the threshold velocity v_th, whose relationship to v_T defines the motion state of the friction point, is given by Eq. (33.c). Summarizing the two cases, the x component of the displacement of the friction point can be approximately formulated as Eq. (34), in which Θ is the Heaviside step function defined below Eq. (32.b). We note that the x-component displacement of the friction point is generally different from the displacement of the center of mass due to inhomogeneous deformation. Nevertheless, it still offers a proper reference to infer the motion state of the plate. This is similar to deducing someone’s movement by observing the positions of his/her feet. Moreover, as t → ∞, the plate returns to its initial non-deformed shape, where the displacement of the friction point equals the true displacement of the whole plate. Therefore, it is meaningful to use Eq. (34) to clarify the motion physics of the plate, which is summarized as follows. First, as straightforwardly indicated by the presence of the Heaviside step function in Eq. (34), the friction point moves only in the periods when the deformation velocity due to the temperature variation is sufficiently large, with |v_T| > v_th. In this regard, the friction force, which determines v_th in Eq. (33.c), plays a negative role in preventing the friction point from moving.
However, this picture is incomplete: if the friction force completely vanishes, the displacement approaches zero as t → ∞ due to the absence of the external force. In this sense, the friction force is also indispensable for enabling the translation of the plate. Second, Eq. (34) implicitly suggests the use of pulsed light to achieve a large displacement distance (as t → ∞). To elucidate the benefits of pulsed light, we recall that, under injection of a light pulse, the plate undergoes thermal heating and cooling phases successively, during which the temperature rises and falls, respectively. Specifically, the elastic deformations in the heating and cooling phases expand and contract the plate volume, respectively. As a result, in the two thermal phases, v_T shows opposite signs, and the contributions tend to cancel each other in Eq. (34) and reduce the displacement value. To realize a large displacement, it is thus favorable to enlarge the asymmetry of the heating and cooling time scales, so that the magnitude of v_T in one thermal phase significantly exceeds that in the other. For instance, considering that the heating phase occurs much faster than the cooling phase, an ideal scenario is that |v_T| > v_th in the heating phase, while |v_T| < v_th in the cooling phase. In this way, the friction point moves only in the heating phase, while being kept still by the friction force in the cooling phase, which helps accumulate a large displacement. Apparently, this suggested fast heating and slow cooling can be conveniently implemented by using pulsed light with a temporal duration much smaller than the cooling time. Last, given that the energy of a light pulse remains unchanged, the magnitude of v_T in the heating phase increases as the temporal duration of the pulse decreases. Therefore, using a shorter pulse, it is easier to make |v_T| exceed v_th, accordingly enabling the motion of the friction point. For a micro-sized plate (e.g., composed of gold) with a µN-scale sliding resistance, the corresponding threshold velocity v_th follows from Eq. (33.c).
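The heating–cooling asymmetry argument can be illustrated with a deliberately crude stick–slip toy model: the friction point is assumed to advance with velocity v_T only while |v_T| > v_th, and to be pinned otherwise. All numbers below (time scales, peak velocity, threshold) are hypothetical and are not taken from the referenced simulations.

```python
import numpy as np

# Toy stick-slip ratchet: fast heating, slow cooling, velocity threshold.
dt = 1e-11
t = np.arange(0.0, 500e-9, dt)
tau_h, tau_c = 3e-9, 100e-9          # heating vs cooling time scales, s
v0 = 0.5                             # peak deformation velocity, m/s
# Expansion (heating) and contraction (cooling) velocities with equal
# impulse, so a frictionless plate ends where it started:
v_T = v0 * np.exp(-t / tau_h) - v0 * (tau_h / tau_c) * np.exp(-t / tau_c)
v_th = 0.1                           # threshold set by sliding resistance, m/s
moving = np.abs(v_T) > v_th          # point slips only above threshold
x = np.cumsum(np.where(moving, v_T, 0.0)) * dt
x_free = np.cumsum(v_T) * dt         # no friction: net displacement ~ 0
print(f"with friction: {x[-1]*1e9:.2f} nm, frictionless: {x_free[-1]*1e9:.3f} nm")
```

Because the slow contraction velocity (of order v0·tau_h/tau_c) never crosses the threshold, only the heating phase contributes, and a net nm-scale step accumulates per pulse, while the frictionless trace returns essentially to zero, mirroring the role of friction discussed above.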
Numerically, it has been verified in Ref.  that an absorbed laser pulse with ns-scale duration and nJ-scale energy can generate v_T with a peak value on the m/s scale, well above the threshold. Experimentally, it has been demonstrated that nanosecond pulsed laser light can drive the motion of gold microplates on microfibers against µN-scale friction, while continuous light cannot fulfill this task. Echoing the two questions raised at the beginning of this subsection, we now answer that: (i) the plate can be driven by elastic waves induced by pulsed light; (ii) the motion necessarily requires both rapid thermal deformations to overcome the friction resistance and considerable asymmetry between the thermal heating and cooling phases to accumulate a net displacement distance. To concretize these answers, numerical evidence, adopted from Ref. , is plotted in Fig. 6. A 2D gold plate with length and height of 10 µm and 50 nm, respectively, sits on a substrate, as shown in Fig. 6(a). A light pulse with a temporal width of 3 ns is injected into the 2D plate and results in a prescribed absorption energy. A point friction force is placed at a 1 µm distance from the left edge of the plate. The optical absorption is set to have a Gaussian distribution in the x direction, centered at the friction point, with a 1/e width of 1 µm, and a uniform distribution over the plate thickness. The shaded region in Fig. 6(b) demonstrates the temporal evolution of the thermal energy, featuring asymmetric heating–cooling time scales. Particularly, heating occurs rapidly within the period of pulse injection, while cooling takes place slowly, requiring over hundreds of nanoseconds due to the short thermal contact, which is set to 1 µm in the x direction. The temporal evolution of the x-component elastic displacement of the friction point [dark solid line in Fig. 6(b)] shows that the friction point mainly moves in the heating period, when thermal deformation is intense, i.e., when |v_T| > v_th.
As a result, as t → ∞, the plate accumulates a negative sliding distance of several nanometers, with the displacement of the friction point approaching the same value as that of the plate centroid (dashed line). Contrastingly, without friction, the gray solid line in Fig. 6(b) shows that the friction point returns to its original position due to the absence of the external friction force. Further, as plotted in Fig. 6(c), the profiles of the x-component elastic displacements of the plate at different times demonstrate that the left and right sides of the plate are initially stretched in opposite directions by the thermal deformation, and then the two sides gradually crawl toward the friction point, which is anchored by the friction force. Figure 6. Numerical exemplification of the motion dynamics of a micro-object driven by elastic waves induced by pulsed light. (a) Sketch of the studied problem. A gold microplate with length and height of 10 µm and 50 nm, respectively, and an out-of-plane extension of 10 µm sits on a substrate. A light pulse with a temporal width of 3 ns is injected into the plate and results in a prescribed total optical absorption energy. A friction point is placed at a 1 µm distance to the left edge of the plate, which provides a given sliding resistance. (b) Temporal evolutions of the thermal energy (shaded area) and the x-component elastic displacements of the friction point (dark solid line) and plate centroid (dashed line). For better comparison, the x-component elastic displacement of the friction point without friction force is additionally plotted (gray solid line), which approaches zero as t → ∞. (c) Profiles of the x-component elastic displacements at different times. Adapted from Ref. . In the pioneering works of Arthur Ashkin, it was first demonstrated that a focused light beam was capable of “trapping” micro- and nanoparticles against Brownian motion via the exertion of radiation pressure[7,8].
The proposed experimental setups earned the name “optical tweezers” for their tweezer-like ability to seize tiny objects and stably confine their motion within a diffraction-limited region. With continuous developments in this research area, various light-induced effects other than the optical force have been explored and have become fast-growing branches of optical trapping technology. Sec. 3.1 is dedicated to reviewing the state of the art of these branches and comparing their diversity in both operational principles and conditions. In Secs. 3.1.1 to 3.1.4, the light gradient force and scattering force are utilized to trap and levitate/propel micro-nano objects, respectively, with the mechanism largely resting upon direct momentum exchange between the incident photons and the manipulated object. Sharing the same principle, the four subsections each specialize in their respective schemes of light-field localization/focusing and device configuration. The heat effect in these first four schemes, in particular, is regarded as an inevitable yet obstructive byproduct of using strongly focused and highly intense laser beams, leading to unwanted heat damage of manipulated objects and compromised trapping stability. In contrast, in Secs. 3.1.5 to 3.1.7, the heat effect from either absorptive particles or the surrounding media instead plays a constructive role, being delicately harnessed to drive particle motion or to enhance the trapping capability of established tweezer systems. Surprisingly and counterintuitively, in the presence of the heat effect, the required light intensity for stable trapping can be reduced by several orders of magnitude, depending on the specific tweezer system[87,92].
In addition, to establish potential wells for particle trapping in liquid environments, stagnation zones (near-zero flow velocity) can be formed by exploiting various hydrodynamic forces, which cancel each other out at specific spots in the light-induced temperature field. This extra method undoubtedly enriches the degrees of freedom in light-enabled particle manipulation and will be discussed in detail in Secs. 3.1.6 and 3.1.7.
3.1.1 Conventional optical tweezers
In general, conventional optical tweezers exploit the optical gradient force of a focused light beam to trap particles at its beam center. The optical scattering force, on the other hand, is either harnessed to counterbalance the gravitational force or considered a destabilizing factor, setting particles into motion along the light propagation direction[7,10]. As the light beam gradually diverges, a transversely trapped particle slips from the trapping site due to stochastic diffusion or radiometric forces. Indeed, the very first prototype of optical tweezers consisted of two counterpropagating, loosely focused light beams to ensure the nullification of the opposite scattering forces, and 3D particle trapping was realized by both the transverse gradient force and the balanced axial force. A single-beam optical tweezer was later developed, where a highly focused laser beam was implemented to strengthen the axial intensity gradient [Fig. 7(a)][8,10]. As a result, a backward radiation force arose and worked synergistically with the transverse gradient force, providing restoring actions that pull the particle towards the trapping center. The harmonic approximation of the as-established potential well assigns a Hookean nature to both transverse and longitudinal gradient forces, which scale proportionally to the particle displacement as F = −κx (F denotes the optical gradient force, κ stands for the trapping stiffness, and x is the particle displacement).
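To make the Hookean picture concrete, the following sketch (all parameter values are assumed for illustration, not taken from the reviewed works) integrates the overdamped relaxation of a displaced bead in such a harmonic trap, γ·dx/dt = −κx, and checks it against the analytic exponential decay:

```python
import math

# Overdamped bead in a Hookean optical trap, F = -kappa * x, with Stokes drag.
# Without thermal noise, a displaced bead relaxes as x(t) = x0 * exp(-kappa*t/gamma).
kappa = 1e-6      # trap stiffness, N/m (assumed, typical pN/um scale)
radius = 0.5e-6   # bead radius, m (assumed)
eta = 1e-3        # viscosity of water, Pa*s
gamma = 6 * math.pi * eta * radius  # Stokes drag coefficient

def relax(x0, dt, steps):
    """Euler integration of gamma * dx/dt = -kappa * x."""
    x = x0
    for _ in range(steps):
        x += (-kappa * x / gamma) * dt
    return x

x0 = 100e-9              # 100 nm initial displacement
tau = gamma / kappa      # relaxation time of the trap
dt = tau / 1000
x_num = relax(x0, dt, 1000)      # integrate over one relaxation time
x_ana = x0 * math.exp(-1.0)      # analytic value after one relaxation time
```

For the values assumed here, the relaxation time γ/κ lands in the 10 ms range, which is why kilohertz-bandwidth detection suffices for routine trap calibration.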
In this way, external perturbations imparted on the particle can be immediately canceled out, and the trapped particle is dynamically confined within a small region defined by the concrete shape of the potential well. Figure 7. Conventional optical tweezers. (a) Schematic showing the origin of the optical gradient and scattering forces in the Mie regime. Note that when the laser is tightly focused (right panel), the particle is subject to an axially backward radiation pressure. (b) Illustration of the potential well of the optical tweezer and its stability criterion. FWHM denotes the full width at half-maximum of the potential well. (c) Experimental setup of the optical tweezer with 3D feedback cooling. (d) Diagram of the feedback mechanism along one direction in (c). The derivative circuit is used to deduce the particle velocity from the detected position signals. (e) Parallel manipulation of gold nanoparticles via HOT. Insets are dark field (top) and scanning microscope (bottom) images of the fabricated periodic patterns. (f) Consecutive images exhibiting dynamic manipulation of semiconductor nanowires by a holographic optical trap system. Post-processing such as cutting and welding of nanowires is included. (a) Adapted from Ref. . (b) Adapted from Ref. . (c), (d) Adapted from Ref. . (e) Adapted from Ref. . (f) Adapted from Ref. . There are two criteria concerning the trapping stability of an optical tweezer system, namely, the “depth” and “steepness” of the potential well. As generalized by Ashkin, the potential well should be sufficiently deep that the particle can hardly escape from the trapping site via thermal fluctuations [Fig. 7(b)][8,93]. Mathematically, this criterion can be expressed as |U| > k_B·T (U is the trapping potential and k_B the Boltzmann constant), and |U| ≥ 10 k_B·T is empirically regarded as a safe condition, considering the occasional high-energy bursts predicted by the Maxwell velocity distribution[8,14,94].
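As a quick numerical illustration of the depth criterion (harmonic well with an assumed stiffness and effective trap radius, not values from the cited experiments):

```python
# Estimate the depth of a harmonic trapping potential in units of kB*T
# and compare it against the empirical 10 kB*T stability threshold.
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K
kappa = 1e-6        # trap stiffness, N/m (assumed)
r_edge = 300e-9     # effective trap radius, m (assumed, FWHM-scale)

U = 0.5 * kappa * r_edge**2     # harmonic estimate of the well depth
depth_in_kT = U / (kB * T)      # depth expressed in thermal units
stable = depth_in_kT >= 10      # empirical safety criterion
```

For these assumed numbers the well is roughly 11 kB·T deep, i.e., just past the empirical safety margin.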
The other figure of merit is often quantified by the second derivative of the parabolic trapping potential, the trapping stiffness κ. Based on the Langevin equation and the law of energy equipartition, κ is related to the deviation of the particle position by κ_i = k_B·T/⟨x_i²⟩ (κ_i and ⟨x_i²⟩ denote the ith component of the trapping stiffness and the mean-squared position variance, respectively)[72,95]. In brief, a larger value of κ_i leads to more rigidly confined particle trajectories along the ith direction. Though proposed more than 50 years ago, research on optical tweezers is far from reaching a plateau. On the contrary, it is marching towards higher levels of efficiency and versatility by incorporating other advanced technologies. For instance, dynamic feedback control of the particle position and the trapping stiffness can be achieved, the prerequisite of which is ultra-precise particle tracking[72,96]. Quadrant photodiodes are most often adopted to extract the position signal from the interference pattern between the undiffracted light beam and the light scattered by the trapped particle (refer to Sec. 2.1.5)[72,97]. Needless to say, precise and instantaneous particle tracking directly benefits research on stochastic effects (e.g., Brownian motion) and reveals their mechanisms at shorter time scales, as discussed in Sec. 5.2.1[98,99]. On top of that, the Brownian motion of the trapped particle can be efficiently cooled to the sub-kelvin regime by applying a feedback force opposed to the instantaneous moving direction of the particle[100,101]. As illustrated by the diagrams in Figs. 7(c) and 7(d), the measured velocity of the trapped particle is instantly fed back to modulate the power of the output cooling lasers, and the scattering forces generated by the cooling lasers (i.e., the feedback force) are adjusted accordingly to nullify the particle’s net motion.
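The equipartition calibration described at the start of this passage can be sketched with synthetic tracking data: positions are drawn from the equilibrium Gaussian distribution and the stiffness is recovered from their variance (seeded random numbers; all values assumed):

```python
import random, math

# Recover trap stiffness from tracked positions via equipartition:
# kappa_i = kB*T / <x_i^2>. The "tracked" positions here are synthetic.
kB, T = 1.380649e-23, 300.0
kappa_true = 2e-6                        # N/m (assumed)
sigma = math.sqrt(kB * T / kappa_true)   # equilibrium position spread

rng = random.Random(0)                   # seeded for reproducibility
xs = [rng.gauss(0.0, sigma) for _ in range(100_000)]
var = sum(x * x for x in xs) / len(xs)   # mean-squared position variance
kappa_est = kB * T / var                 # equipartition estimate
```

With 10^5 samples the estimate typically lands within a fraction of a percent of the true stiffness, which is why equipartition is a standard calibration route once the position detector itself is calibrated.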
In recent years, holographic optical tweezers (HOTs) have been trending, owing to their capability of parallel and dynamic manipulation of micro-nano objects using a single light source. Naturally extended from single-beam optical tweezers, HOTs utilize dynamic diffractive elements such as spatial light modulators (SLMs) to shape the input light field into arbitrarily distributed outputs, and multiple particles can be simultaneously trapped within individually separated 3D trap arrays by optical gradient forces [Fig. 7(e)][93,94,102]. Moreover, by dynamically updating the field pattern in a step-by-step manner, nontrivial structures with large aspect ratios (e.g., nanowires) initially free-floating in liquid suspensions can be transported and assembled with the use of HOTs [Fig. 7(f)]. To stably trap and drag the nanowires, the profile of the trapping potential should be spatially extended along the length direction of each individual nanowire to maintain its orientation and prevent it from drifting in the liquid suspension, highlighting the advantages of adopting HOTs. More intricate functions can be integrated into the HOT platform. For instance, after assembly of the nanowires, post-processing techniques such as cutting and nanowelding, both of which are manifested in Fig. 7(f), can be implemented by incorporating high-power or pulsed lasers on top of the existing trapping beam, permanently transforming the initially separable nanowires into complex, monolithic structures[104–107]. Later, when deposited on solid substrates, these can be constructed into functional electronic or nanophotonic devices.
3.1.2 Plasmonic tweezers
Using optical tweezers to trap particles in the deep sub-wavelength regime (r ≪ λ) encounters multiple problems, the first being the drastic weakening of the optical gradient force along with the reduced polarizability given by the Clausius–Mossotti form α = 4πε₀r³(ε_r − 1)/(ε_r + 2)[2,108], where r is the particle radius, and ε_r is the ratio of the permittivity of the particle to that of the surrounding medium.
The Clausius–Mossotti relation thus reveals an r³ downscaling of the magnitude of the dipole induced in a Rayleigh particle as its size shrinks, which is likewise experienced by the optical gradient force calculated by F_grad = (1/4)Re(α)∇|E|²[8,10], where α denotes the frequency-dependent complex polarizability. To make the situation even worse, the degradation of the restoring force is accompanied by a decreased level of damping in the harmonic system, due to the reduction of viscous drag in the Langevin equation (see Sec. 2.1.5). With the above two factors combined, the potential well becomes both “shallower” and more “slippery” as the dimension of the Rayleigh particle goes down, thus making the trapping less stable, as indicated by Ref. and Fig. 8(a). Figure 8. Plasmonic tweezers. (a) Schematic of particles trapped in potential wells with radius of 1R (left) and 0.8R (right). Lower panels are calculated probability distributions based on the force status of the corresponding particles. (b) Focused SPP trapping of Mie metallic particles via a generated plasmonic virtual probe. The glass substrate satisfies the Kretschmann coupling condition. (c) Patterned SPPs for parallel trapping of colloids. (d) Detected particle displacements in an optical tweezer on a glass substrate (left) and upon a plasmonic nanogap (right). (a) Adapted from Ref. . (b) Adapted from Ref. . (c) Adapted from Ref. . (d) Adapted from Ref. . However, to constantly push the boundaries at the “bottom” (in the spirit of Feynman’s famous speech), the demand is bound to increase for optical manipulation of ever-smaller objects such as biological molecules and single atoms. The plausible solutions using conventional optical tweezers require either increasing the laser power or tightening the laser focus, which are detrimental to the target samples, hard to implement, and ultimately limited by diffraction.
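The r³ scaling of the Clausius–Mossotti polarizability can be verified in a few lines (the permittivity ratio below is an assumed, illustrative value):

```python
import math

def polarizability(radius, eps_ratio):
    """Quasi-static Clausius-Mossotti polarizability (up to a constant eps0
    factor): alpha = 4*pi*r^3 * (eps_r - 1) / (eps_r + 2)."""
    return 4 * math.pi * radius**3 * (eps_ratio - 1) / (eps_ratio + 2)

a_100 = polarizability(100e-9, 2.5)  # 100 nm sphere, assumed eps ratio ~2.5
a_50 = polarizability(50e-9, 2.5)    # halving the radius
ratio = a_100 / a_50                 # expect (100/50)^3 = 8
```

Halving the particle radius cuts the induced dipole, and hence the gradient force at fixed intensity gradient, by a factor of eight.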
In recent years, researchers have turned to a robust and more cost-efficient optical trapping scheme, that is, combining plasmonics with optical tweezers. Surface plasmon polaritons (SPPs) are surface waves supported by planar metal–dielectric interfaces. The large effective index, or k-vector, of SPPs, deduced directly from Maxwell’s equations, is key to the sub-wavelength localization of the light field intensity, as the diffraction limit scales inversely with the effective index[108,109]. As a result, the local gradient of light intensity can be so strong that a 40-fold enhancement of the optical radiation pressure has been measured experimentally in the SPP near field. Though the intensity gradient along the surface normal can be far beyond the nominal diffraction limit, on a planar metal–dielectric interface the in-plane gradient force is supported merely by the propagation attenuation, and the resultant trapping stiffness appears impotent in the transverse plane. To solve this issue, SPPs should be spatially focused, and one of the approaches is to generate plasmonic virtual probes[110,111]. As depicted in Fig. 8(b), by coupling a radially polarized beam to a structureless metal film, a novel SPP mode with a probe-like intensity profile can be excited, where both Rayleigh particles and metallic particles in the Mie regime experience strong restoring forces in all three dimensions and can be stably trapped at the central peak of light intensity near the substrate surface. Note that, typically, mesoscale and Mie-sized metallic objects tend to escape from optical tweezers due to the intensified scattering-plus-absorption/extinction force[113–115].
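A minimal sketch of the flat-interface SPP dispersion, k_spp = k₀[ε_m·ε_d/(ε_m + ε_d)]^(1/2), with assumed near-infrared permittivities, confirms that the mode's effective index exceeds that of the bare dielectric:

```python
import math

# SPP dispersion on a flat metal-dielectric interface. The square-root factor
# is the mode's effective index, which exceeds that of the bare dielectric.
eps_m = -25.0   # real part of the metal permittivity (assumed, gold-like near-IR)
eps_d = 1.77    # dielectric (water) permittivity

n_spp = math.sqrt(eps_m * eps_d / (eps_m + eps_d))  # SPP effective index
n_d = math.sqrt(eps_d)                              # bare dielectric index
```

The excess of n_spp over n_d is what compresses the SPP field below the free-space diffraction limit along the surface normal.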
With both the deep sub-wavelength light localization and the inward-directed power flux of the SPP virtual probe field, the strong optical gradient force and the transversely attractive scattering force work synergistically to immobilize the hard-to-trap particles at the trapping center, which further extends the capability of plasmonic tweezers to cover the metallic Mie regime. Compared to direct light field modulation on a structureless metal surface, substrate patterning is more widely adopted to implement 3D plasmonic trapping. Figure 8(c) shows that discretized SPP fields can be supported by gold micro-discs (fabricated on the glass substrate) and are coupled from non-focused incident light in the Kretschmann configuration[116,117]. Indeed, the plasmonic structures function as micro-nano objectives that compress the incident light field into sub-wavelength volumes more effectively than their bulky counterparts, with transverse and vertical dimensions restrained by the structure boundaries and the evanescent-wave nature, respectively. The generated potential wells coincide with the prescribed plasmonic patterns, where colloidal particles can be trapped with a slight forward displacement due to the in-plane scattering force. Alternatively, localized surface plasmons (LSPs) supported by sub-wavelength metallic structures can also be harnessed for particle trapping; they naturally feature strong field localization at plasmonic hot spots in all three dimensions and possess the extra advantage of direct light coupling[108,109,118]. Among myriad LSP configurations, gap antennas appear to be the most promising candidates, since they can enhance a local electric field by up to four orders of magnitude. Figure 8(d) shows a plasmonic tweezer built upon two closely placed gold nanopillars.
Compared to the case with the glass substrate (left panel), the stochastic motion of the Rayleigh particle is significantly suppressed when the gap plasmons are excited, since they provide a strong enhancement of the near-field optical force. Besides the increased trapping strength, plasmonic tweezers can be modulated with polarization (s or p incidence), incident angle, and even the status of the trapped object, establishing a versatile and feedback-compatible optical trapping scheme[112,116,121,122]. Interested readers are encouraged to refer to the corresponding publications.
3.1.3 Resonance dielectric tweezers
While plasmonic tweezers provide powerful tools to immobilize sub-wavelength particles by exploiting strong light confinement beyond the diffraction limit, they generally suffer from Joule heating associated with the large ohmic (imaginary-permittivity) losses of metallic structures[123,124]. The resultant thermal effects such as thermophoresis or ablation would cause undesired convection of the fluidic medium or damage to the trapped samples, which dims the merits of plasmonic platforms as nanotweezers. Alternatively, dielectric structures exhibiting comparable light enhancement capabilities at resonance hold potential for lossless particle trapping at the nanoscale. Photonic crystal (PhC) cavities feature both high quality factors (Q factors) and small mode volumes, naturally fulfilling the requirements of optical tweezers with regard to high light intensity (for deeper potential wells) and spatial localization (for larger trapping stiffness). Indeed, the level of local field enhancement of PhC cavities can surpass that of their plasmonic counterparts, owing to ultrahigh Q factors that exceed those of typical plasmonic resonators by several orders of magnitude[126,127]. Moreover, the standing-wave nature of the cavity field ensures truly static trapping of particles with nullified propagation components, similar to the working principle of dual-beam optical tweezers [Figs. 9(a) and 9(b)].
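To put the Q-factor contrast in physical terms, the cavity photon lifetime τ = Q/ω can be compared for assumed, illustrative Q values of a PhC cavity and a plasmonic resonator:

```python
import math

c = 2.998e8      # speed of light, m/s
lam = 1.55e-6    # operating wavelength, m (assumed, telecom band)

def photon_lifetime(Q, wavelength):
    """Cavity photon lifetime tau = Q / omega, with omega = 2*pi*c/lambda."""
    omega = 2 * math.pi * c / wavelength
    return Q / omega

tau_phc = photon_lifetime(1e6, lam)      # high-Q PhC cavity (assumed Q ~ 1e6)
tau_plasmon = photon_lifetime(10, lam)   # lossy plasmonic resonator (assumed Q ~ 10)
```

For these assumed values the PhC cavity stores light for nearly a nanosecond, five orders of magnitude longer than the femtosecond-scale lifetime of the plasmonic mode, which is the root of its stronger attainable field buildup per unit input power.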
On top of a 1D photonic resonator, a calculated optical force of 700 pN could be realized for a 100 nm particle, and stable trapping of a polystyrene nanosphere was experimentally demonstrated, suggesting the capacity of dielectric resonators to enable deep sub-wavelength particle manipulation with negligible heating. Figure 9. Resonance dielectric tweezers. (a) Schematic of a 1D silicon photonic crystal resonator used for optical trapping. (b) Simulated mode profile of the photonic crystal resonator in (a) at resonance. The electric field magnification and localization are characteristic of the cavity mode of a dielectric resonator. Black arrows denote the magnitude and direction of the local optical force. (c) Schematic of multiplexed optical trapping based on an all-dielectric metasurface supporting quasi-bound states in the continuum in each of its unit cells. Nanoparticles are trapped at the gaps of the elliptical nanoantenna pairs (the unit cell), where the local electric fields are strongly enhanced due to the lack of out-coupling channels. (d) SEM image of a nanocuboid array fabricated from amorphous silicon supporting anapole modes. (e) Calculated profile of the optical force upon a 100 nm bead in the plane above a unit cell of the device surface. The local light intensity is . (a), (b) Adapted from Ref. . (c) Adapted from Ref. . (d), (e) Adapted from Ref. . Recently, researchers have been paying increasing attention to lossless nanoresonators so as to further reduce device footprints with respect to micro-scale PhC cavities. In 2021, Yang et al. proposed an all-dielectric metasurface-based nanotweezer, where elliptical silicon resonators pair up to form symmetry-protected quasi-bound states in the continuum with nearly vanishing outgoing radiation. By adjusting the tilt angle between the resonator pairs in each unit cell, the Q factor of the paired nanoresonators can be tuned accordingly.
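For symmetry-protected quasi-BICs, the radiative Q factor is commonly reported to scale inversely with the square of the asymmetry parameter (here taken as the sine of the tilt angle); treating that scaling and all numbers as assumptions, the tilt-angle tuning can be sketched as:

```python
import math

# Assumed quasi-BIC scaling: Q proportional to 1/sin(theta)^2, with theta the
# tilt angle acting as the asymmetry parameter of the resonator pair.
def q_relative(theta_deg, q_ref, theta_ref_deg):
    a = math.sin(math.radians(theta_deg))
    a_ref = math.sin(math.radians(theta_ref_deg))
    return q_ref * (a_ref / a) ** 2

q5 = 1.0e4                       # assumed Q at a 5 degree tilt
q10 = q_relative(10.0, q5, 5.0)  # doubling the tilt roughly quarters Q
```

Small tilt angles thus trade radiative access (needed to couple light in) against resonance sharpness, which is exactly the knob exploited to set the field enhancement at the trapping gaps.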
Specifically, with the tilt angle kept as small as 5°, a more-than-100-fold local field enhancement could be achieved at the trapping sites, owing to the suppressed out-coupling of the near-field dipole mode to the radiation channel[129,130]. Given that each paired element occupies only a few hundred nanometers in all three dimensions, a multiplexed trapping scheme can be readily established by arranging the elements into an arrayed metasurface, which solves the scalability issue [Fig. 9(c)]. Leveraging a similar approach, the anapole mode, another scattering dark state arising from the destructive interference between electric and toroidal dipole moments in the far field[131–133], was harnessed to implement strong near-field light concentration at resonance, and optical capture of sub-100 nm particles was reported [Figs. 9(d) and 9(e)], with a relaxed incident field requirement in comparison to conventional optical tweezers plus the advantage of minimized heating compared with plasmonic tweezers[124,134].
3.1.4 Integrated optical tweezers
In the previous two sections, plasmonic and dielectric metasurfaces functioning as optical tweezers were given adequate attention; to some extent, these can be regarded as having achieved a certain level of integration on planar architectures. Following the more standard definition used in integrated optics, and also from a practical perspective, this section mainly focuses on optical tweezers established on waveguide or optical fiber platforms. Though most waveguides do not possess open spaces for direct light–matter interaction, evanescent fields, with light tunneling through the high-index sidewalls into the low-index surrounding medium, can be utilized for particle trapping.
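The reach of such evanescent-field traps is set by the field penetration depth; a sketch with assumed indices and incidence angle (total internal reflection at a glass–water-like interface):

```python
import math

# 1/e field penetration depth of an evanescent wave under total internal
# reflection: kz = k0 * sqrt((n1*sin(theta))^2 - n2^2), field ~ exp(-z/depth).
n1, n2 = 1.5, 1.33        # core (glass-like) and cladding (water) indices, assumed
theta = math.radians(70)  # incidence angle beyond the critical angle, assumed
lam = 1.064e-6            # trapping wavelength, m (assumed)

kz = (2 * math.pi / lam) * math.sqrt((n1 * math.sin(theta))**2 - n2**2)
depth = 1.0 / kz          # field decays as exp(-z/depth)
```

For these assumed values the depth comes out to a few hundred nanometers, so only particles hovering within that skin above the waveguide feel an appreciable gradient force.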
In recent years, parallel and dynamic particle manipulations have been extensively reported on various waveguide platforms including slot waveguides, PhC waveguides, plasmonic waveguides, etc., where optical gradient forces are imposed via evanescent fields featuring exponential decay[16,135–137]. Moreover, light waves transmitted in guided modes or whispering gallery modes additionally provide optical scattering forces along the light propagation direction, given their traveling-wave nature[127,138,139]. As schematically displayed in Fig. 10(a), apart from being transversely trapped by the optical gradient force, the particle also experiences a longitudinal push and consequently circulates around the ring resonator at a constant speed. The temporal evolutions of the x and y coordinates of a 500 nm bead trapped on top of a micro-ring take on sinusoidal forms, corresponding to a constant linear velocity [Fig. 10(b)]. Aside from evanescent fields, freeform optics can also be reproduced at chip-scale dimensions, where a dual-beam trapping architecture is formed by planarly interfacing waveguides with reflective or refractive elements [Fig. 10(c)]. Far above the waveguide near-field region, suspended particles can be immobilized at the trapping sites due to the effectively focused light field, as shown in Fig. 10(d). Though adopting free-space optics, the proposed device was integrated on-chip and possessed a drastically reduced footprint in contrast to conventional free-space optical tweezers. One thing worth noting is that, instead of relying on complex light paths or bulky focusing lenses, waveguide-based optical tweezers can access input light via the fiber-coupling technique; their all-planar configuration makes them readily transferable to lab-on-chip applications once combined with microfluidics. Figure 10. Integrated optical tweezers. (a) Schematic of a micro-ring system with a trapped dielectric particle moving around on top of it.
Incident light is coupled from the left port into the bus waveguide. (b) Recorded time-dependent x and y displacements of a trapped particle on a 10 µm radius micro-ring, corresponding to a rotation frequency of 2.5 Hz. (c) Cross-section sketch of the on-chip optical tweezer based on freeform optics. (d) Simulated light intensity distribution of the transverse-electric (TE) mode. Inset illustrates the formation of a standing wave along the axis. (e) Simulated electric field distribution at a fiber end face terminated with polystyrene micro-spheres. (f) Dark field optical image showing parallel trapping of 190 nm fluorescent nanoparticles on the shadowy side of the microlens array. (g) Holographic parallel trapping of nine particles. The trapping sites are projected through a homemade high-NA multimode fiber. (h) Optical images and schematics of two particles being delivered into a cavity by the fiber-integrated HOT. Scale bars are 5 µm in (g) and 10 µm in (h). (a), (b) Adapted from Ref. . (c), (d) Adapted from Ref. . (e), (f) Adapted from Ref. . (g), (h) Adapted from Ref. . Alternatively, integrated optical tweezers on fiber platforms are another trend, leading towards in vivo technologies. The extremely slender and flexible structure of optical fibers is tailor-made for reaching into hard-to-access environments in the human body such as blood vessels and living tissues. More importantly, optical fibers serve as “pipelines” that transmit both the incident trapping light and the detected sample signals independently, which is key to the function of endoscopes[142–144]. Owing to the relatively small NA of optical fibers, dual-beam optical tweezers can be more easily implemented with the use of two oppositely oriented fibers[145,146]. However, these suffer from tedious alignment procedures and extra encapsulation.
The demand for “monolithic” fiber integration therefore persists, and to improve the light confinement capability, distal facet modification is a workable solution[146,147]. Figure 10(e) depicts such an example, where polystyrene spheres were assembled onto the end face of a fiber probe to serve as a micro-lens array. As a result, near-field light beams termed photonic nanojets can be formed on the shadowy side[149,150], facilitating the establishment of sub-wavelength nanotraps for multiplexed particle trapping [Fig. 10(f)]. In 2018, Leite et al. successfully synthesized multimode fibers with NA comparable to that of PhC fibers. Instead of engineering the terminations, they chose fiber core and cladding materials with high-contrast indices and compensated for mode-dependent power loss by rearranging the input light profile using an SLM. Based on this configuration, researchers demonstrated 3D and holographic manipulation of multiple particles in a pre-defined square-grid manner [Fig. 10(g)]. Since the holographic tweezers are integrated on a single fiber end face, the small footprint and mechanical flexibility of the device ensure operation in vessel-like structures, as shown in Fig. 10(h).
3.1.5 Thermophoretic tweezers
In Sec. 2.2, phenomena of thermophoresis are categorized into photophoresis and Ludwig–Soret effects, based on the nature of the fluidic environment. Following the same classification, here we introduce light-induced thermophoretic tweezers in the two schemes separately. Thermophoretic tweezers in an air environment, or rather, photophoretic tweezers, trap light-absorbing particles in local minima of light intensity. As discussed in Sec. 2.2.1, the illuminated particle receives a net kick pointing from its hot side to its cold side.
Utilizing this thermophobic feature, a potential well can be established by creating an asymmetry of irradiation, where light-absorbing particles are trapped in the intensity-dark region surrounded by “repelling bright walls”. In 1982, Lewittes et al. first reported radiometric levitation of micro-particles, in which a Gaussian beam was coherently superimposed with a doughnut beam to form a lateral intensity minimum. In that experiment, dye-filled particles were both axially held up against gravity and laterally trapped at the doughnut center by the photophoretic force. The lateral trapping can be intuitively understood in this way: once the particle deviates from the doughnut center, asymmetric heating occurs and the particle is “bounced back” by the bright wall, since the bright region becomes the hot side and the dark center the cold side in the dynamic balance. A similar but transverse configuration can also be adopted, where the propagation direction of the vortex beam is aligned horizontally and the particle trapping spot shifts slightly downward to balance the gravitational force with asymmetric heating [Fig. 11(a)]. Fueled by axially asymmetric laser heating, laterally trapped aerosol particles are continuously propelled along the direction of laser propagation. In this way, directed particle transportation can be achieved and was demonstrated to function over meter-scale distances with high positioning accuracy [Fig. 11(b)]. Figure 11. Thermophoretic tweezers. (a) Experimental setup of the optical vortex pipeline for long-range particle delivery. Inset shows a photograph of the transverse trapping of an absorbing particle that is slightly displaced from the vortex center due to gravitational drag. (b) Schematic of remote particle manipulation. (c) Calculated and measured light intensity profile along the axial direction within a paraxial aberrated focus.
The reference position denotes the Gaussian focus without aberration. (d) Opposite side views of the 3D light intensity profile of a bottle beam. Absorptive aerosol particles can be trapped within the annular bright walls that “cup” the trapping site. (e) Sketch and measured trajectory points of a 200 nm PS particle showing successful thermophoretic trapping within the region surrounded by laser-heated hexagonal gold patches. The laser illuminates one patch at a time with a rotation frequency of 18.9 Hz and 5 mW light power. (f) Bar graph of the radial position distribution of the trapped particle in (e), which can be fitted with a Rayleigh distribution function. (g) Schematic of the opto-refrigerative tweezer exploiting laser cooling. (h) Measured temperature profile under laser cooling with an illumination intensity of . White arrows point along the direction of temperature increase, while the pink arrow indicates the thermodiffusive velocity of the particle. (i) Comparison of the time-resolved fluorescence of dye particles trapped by an opto-refrigerative tweezer (ORT) and a conventional optical tweezer. (a), (b) Adapted from Ref. . (c), (d) Adapted from Ref. . (e), (f) Adapted from Ref. . (g)–(i) Adapted from Ref. . Indeed, apart from doughnut beams, light fields with alternating dark and bright regions of light intensity can also be found in the focal volume of an aberrated lens. By carefully arranging the input Gaussian beam and the receiving plano–convex lens, near the theoretical Gaussian focus, the light field along the axial direction exhibits scattered dark regions surrounded by local intensity maxima [Fig. 11(c)]. Considering the cylindrical symmetry of the paraxial focal volume, the dark regions from the 3D perspective are essentially dark traps “cupped” by annular bright walls, a structure that thus earns the name “bottle beam”.
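The doughnut-shaped profile underlying these dark traps can be sketched with an LG01-like radial intensity, I(r) ∝ (r²/w²)·exp(−2r²/w²), whose dark center and bright ring at r = w/√2 are easy to verify numerically:

```python
import math

# Radial intensity of a doughnut-like (LG01) beam: zero on axis, bright ring
# around it. w is the beam waist (arbitrary units).
w = 1.0

def intensity(r):
    return (r**2 / w**2) * math.exp(-2 * r**2 / w**2)

# Numerically locate the bright ring on a fine radial grid.
rs = [i * 0.001 for i in range(3001)]
r_peak = max(rs, key=intensity)
r_theory = w / math.sqrt(2)   # analytic ring radius
```

An absorbing particle sitting in the dark core is heated only on whichever side strays into the ring, producing the restoring photophoretic "bounce" described above.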
Based on this specific intensity profile, researchers have demonstrated simple yet robust particle trapping using bottle beams, where agglomerates of carbon nanoparticles can be immobilized inside the aberrated focus, as displayed in Fig. 11(d).
Thermophoretic tweezers in liquids
In liquid environments, Soret effects come into play by counteracting Brownian motion and introducing extra thermal diffusion[23,76]. With a nonuniform temperature distribution, the steady-state particle concentration follows c/c₀ = exp[−S_T(T − T₀)] (Sec. 2.2). When S_T > 0, as is the most common case, the thermophobic property still stands, and particles drift from hot regions towards cold regions. In principle, absorptive particles can still be trapped within the low-light-intensity region, and a potential well can be readily formed with the use of Laguerre–Gaussian modes. In contrast, for transparent particles, the nonuniform heating of the environment equivalently creates a temperature gradient, and the key to stable trapping is to establish a similar doughnut-like temperature field with a central cold zone surrounded by ring-shaped hot regions. Using this mechanism, Braun et al. proposed a thermophoretic tweezer platform integrated on top of a hexagonal gold patch array fabricated by microsphere lithography. Figure 11(e) suggests that, with the laser spot dynamically steered to move along the peripheral plasmonic ring at frequencies beyond a certain threshold, a dynamic thermophoretic trap can be implemented, and the trajectory of liquid-suspended nanoparticles is rigidly confined within the less-heated open zone [Fig. 11(f)]. The threshold frequency corresponds to a velocity of the varying temperature field that should be considerably larger than the thermophoretic drift velocity of the target particles, so that a nonzero net inward drift is guaranteed on top of the Brownian diffusion, ensuring effective radial confinement of the particles according to the Langevin equation[157,158].
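The exponential Soret depletion above translates directly into numbers; with an assumed (typical-colloid-scale) Soret coefficient and temperature rise:

```python
import math

# Steady-state Soret depletion, c/c0 = exp(-S_T * dT), for thermophobic
# particles (S_T > 0) that drift away from the heated region.
S_T = 0.1    # Soret coefficient, 1/K (assumed)
dT = 10.0    # local temperature rise, K (assumed)

depletion = math.exp(-S_T * dT)   # relative concentration at the hot spot
```

A modest 10 K rise thus depletes the hot spot to roughly a third of the bulk concentration, illustrating why even weak laser heating reshapes colloid distributions so effectively.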
Note that the particle positions follow a Gaussian distribution, so the radial position distribution should be fitted by a Rayleigh distribution function, which is not centered at r = 0. In contrast, for situations where the velocity of the thermal field and that of the thermophoretic drift become comparable, target particles primarily pick up the tangential speed correlated with the rotation of the thermal field, while becoming randomly distributed radially due to the lack of an inward drift that would function like the restoring force in a stationary optical trap. A more delicate scheme, termed the opto-refrigerative tweezer, was reported in 2021 by Li et al. [Fig. 11(g)]. Instead of using a laser to inject heat, this work exploits laser cooling to take away phonons and create a cold region right at the laser focus [Fig. 11(h)]. The localized laser cooling was realized through anti-Stokes fluorescence of ytterbium-doped yttrium lithium fluoride (Yb:YLF) crystals, which were dispersed on a glass substrate and submerged in heavy water to minimize laser absorption. The concept of opto-refrigerative tweezers is inspiring in that it directly offers a solution for the optical manipulation and in situ study of fragile and heat-sensitive objects. For example, Fig. 11(i) shows that the quenching of fluorescent polystyrene (PS) nanoparticles is greatly subdued in the opto-refrigerative tweezer compared with the conventional optical tweezer. Since the Soret effect is intrinsically an interfacial effect, the magnitude and sign of the Soret coefficient can be tuned by engineering the particle–solvent interface[160,161]. For the case when S_T < 0, suspended particles are repelled from the cold regions and become thermophilic. This effect enables thermophoretic trapping at laser-induced plasmonic hot spots[162,163].
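The Rayleigh-distributed radial statistics noted above for a 2D Gaussian-confined particle can be checked with seeded synthetic positions:

```python
import random, math

# For a 2D isotropic Gaussian position distribution (variance sigma^2 per axis),
# the radial coordinate r = sqrt(x^2 + y^2) follows a Rayleigh distribution
# with mean sigma*sqrt(pi/2) -- i.e., peaked away from r = 0.
sigma = 1.0
rng = random.Random(42)   # seeded for reproducibility
rs = [math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma))
      for _ in range(100_000)]

mean_r = sum(rs) / len(rs)
mean_theory = sigma * math.sqrt(math.pi / 2)
```

This is why a histogram of radial positions from an isotropic trap, like the one in Fig. 11(f), peaks at a finite radius even though the most probable 2D position is the trap center itself.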
Similar to photophoresis in air, the Ludwig–Soret effect in liquids can significantly relax the requirements of tight focusing and high light intensity; the contribution of the optical gradient force in the corresponding studies is usually negligible. Specifically, under loose focusing and moderate light intensity, the thermophoretic trapping force can dwarf the optical force by two to three orders of magnitude[162,163].

3.1.6 Opto-thermoelectric tweezers

Inspired by thermophoretic effects in liquids, the concept of opto-thermoelectric tweezers was first brought up in 2018, where ionic species are introduced to migrate under the temperature gradient and establish an electrostatic field. In this work, colloidal particles were chemically decorated with charged surfactants, and the electric force could be readily utilized as the trapping force for the proposed tweezer system. In related works, cetyltrimethylammonium chloride (CTAC) is the most commonly used surfactant. When added to colloidal suspensions, CTAC molecules form positively charged micelles and free ions. Owing to the hydrophobicity of their nonpolar carbon chains, CTAC surfactants are adsorbed onto the surface of colloidal particles with specific orientations dictated by the interfacial energy, as depicted in Fig. 12(a). As a result, the introduced surfactants regulate the solution components into three major categories: positive micelles, negative counter ions, and positively charged colloids. With the temperature gradient generated upon light irradiation of the absorptive substrate, spatial segregation of the former two species occurs first, mainly owing to their large difference in Soret coefficients, with S_T(micelle) ≫ S_T(ion).
Subsequently, an electrostatic field can be obtained pointing in the direction of the temperature gradient, the magnitude of which can be calculated as

E_T = (k_B T / e) · (Σ_i Z_i n_i S_T,i / Σ_i Z_i² n_i) · ∇T,

where k_B is the Boltzmann constant, e the elemental charge, i indexes the ionic species in the solvent, Z_i is the charge number, and n_i and S_T,i are the ionic concentration and Soret coefficient of the corresponding species, respectively. Indeed, this Seebeck effect in the liquid phase was reported and exploited to stimulate “electrophoretic migration of charged particles” well before the proposal of opto-thermoelectric tweezers, only without the exclusive use of light for the generation of the temperature gradient[164–166]. Given that the colloidal particles are decorated with positive charges, they can be propelled towards the localized hot zone where positive micelles are depleted, and be trapped there by the radially balanced electrostatic forces [Fig. 12(b)].

Figure 12. Opto-thermoelectric tweezers. (a) Schematic of the solution components when CTAC is added. Left to right: colloidal particles decorated with CTAC, CTAC micelles, ions. (b) Mechanism of opto-thermoelectric trapping. The two panels on the right illustrate the establishment of the temperature and electrostatic fields and the subsequent particle trapping by the electrostatic force. The electric field is induced by the thermophoresis of the charged solution components. (c) Hybrid and multi-dimensional assembly of colloidal particles via opto-thermoelectric manipulation. Scale bars: 5 µm (left four panels) and 2 µm (right four panels). (d) Pattern transfer from the graphene substrate (left) to the opto-thermoelectrically trapped PS particles (right). The substrate is patterned by direct laser writing. (a), (b) Adapted from Ref. . (c) Adapted from Ref. . (d) Adapted from Ref. .
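The Seebeck-type field described above can be evaluated numerically. The sketch below assumes the standard form E_T = (k_B T/e)·(Σ Z_i n_i S_T,i / Σ Z_i² n_i)·∇T with purely illustrative ionic parameters; neither the values nor the function name come from the cited works.

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def seebeck_field(T, grad_T, Z, n, S_T):
    """Thermoelectric (Seebeck) field of an electrolyte under a temperature
    gradient: E_T = (k_B*T/e) * sum(Z_i*n_i*S_Ti) / sum(Z_i^2*n_i) * grad_T."""
    Z, n, S_T = map(np.asarray, (Z, n, S_T))
    return (K_B * T / E_CHARGE) * (Z * n * S_T).sum() / (Z**2 * n).sum() * grad_T

# Two-species CTAC-like solution (all values assumed for illustration):
# positive micelles (Z = +1, large S_T) and negative counter ions (small S_T).
E_T = seebeck_field(T=300.0, grad_T=1e6,
                    Z=[+1, -1], n=[1e22, 1e22], S_T=[2e-2, 1e-3])
print(f"E_T ~ {E_T:.0f} V/m (positive: directed along the temperature gradient)")
```

The sign of the field tracks the charge-weighted imbalance of the Soret coefficients: if both species drifted identically, no net field would build up.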
In general, the versatility of opto-thermoelectric trapping is demonstrated through either the configuration of the light field or that of the thermoplasmonic substrate, two independent factors determining the profile of the temperature field. In the first approach, digital micromirror devices and SLMs have been employed to create arbitrarily shaped light fields, achieving parallel and dynamic particle manipulation [Fig. 12(c)][167,168]. On the other hand, special arrangement of the thermoplasmonic substrate can also achieve holographic particle manipulation, with the substrate geometry being transferred to pattern the colloidal assemblies, as shown in Fig. 12(d)[89,169]. Also, by modulating both the light field and the absorptive substrate, directed particle transportation and recapture are possible among different trapping sites[89,169]. It should be noted that, although colloidal particles are typically trapped at plasmonic hot spots, the effect of the optical gradient force is negligible for loosely focused, low-intensity light beams, as mentioned before. Indeed, when transferring the proposed scheme from plasmonic substrates to transparent substrates (e.g., glass), or conducting the same experiments without adding ionic surfactants, no stable trapping could be observed[167,169].

3.1.7 Opto-thermoelectrohydrodynamic tweezers

In the previous sections, force analyses were mainly conducted on suspended particles as they are trapped or propelled through interaction with external fields (light, temperature, electric, etc.). In fact, the solvent, which constitutes the hydrodynamic environment of the suspended particles, can also be influenced and set in motion by the applied fields, following the Navier–Stokes equations. The resultant hydrodynamic flow perturbs the suspended particles just as the thermal environment induces their Brownian motion and thermophoretic drift[170,171].
It can be expected that once the flow field is well organized and oriented, the stochastic motion of the particles surrenders to more directed and predictable motion patterns, which is the foundation of particle trapping and manipulation. In this case, the force balance of the liquid medium becomes the first concern. In 2016, Ndukaife et al. developed a hybrid electrothermoplasmonic tweezer system that integrates the functions of both long-range particle delivery and near-field particle trapping. While the latter function relies on the enhanced optical gradient force at plasmonic hot spots, which has been discussed earlier, the well-directed particle delivery towards the trapping sites is realized by engineering the flow field. As sketched in Fig. 13(a), upon nonuniform laser illumination, a microfluidic flow termed the electrothermoplasmonic (ETP) flow is induced by the synergetic action of the temperature gradient and the alternating (a.c.) electric field. According to Refs. and , a non-isothermal fluid is embedded with non-zero gradients of free charge, permittivity, and conductivity [∇ε and ∇σ]. In the presence of an electric field, a body force is exerted on the fluid pointing towards the plasmonic hot spots, which can be calculated in the a.c. regime as

⟨f⟩ = ½ Re{[(σ∇ε − ε∇σ)·E₀] E₀* / (σ + iωε)} − ¼ |E₀|² ∇ε,

where the applied field is E = Re[E₀ e^{iωt}]. The experimentally measured radial velocity of the fluidic flow is shown in Fig. 13(b), verifying the above force analysis in both flow direction and magnitude. Note that purely thermal convection (without the a.c. field switched on) induces only a weak flow; the exploitation of the ETP flow not only guarantees well-directed particle delivery, but also shortens the time scale of particle capture down to a few seconds.

Figure 13. Opto-thermoelectrohydrodynamic tweezers. (a) Optical setup of the hybrid electrothermoplasmonic tweezer. The arrows indicate the direction of the ETP flow.
(b) Mapping and vectorial plot of the measured flow velocity. The maximum flow velocity exceeds . (c) Schematic of particle trapping with balanced ETP flow and electro-osmotic flow above a gold nanohole array. While the external a.c. field is applied perpendicular to the substrate, a tangential electric component exists owing to the imperfectly planar electrode. (d) Dynamic manipulation of an individual protein molecule. The protein (framed gray dot) follows the motion of the laser spot (red dot) and is re-trapped at the new stagnation site. (a), (b) Adapted from Ref. . (c), (d) Adapted from Ref. . Electro-osmotic flow is another hydrodynamic effect boosted by the application of an a.c. (or d.c.) electric field, denoting the slip of ions in the electric double layer adjacent to the charged electrodes. One of the prerequisites of electro-osmosis is the existence of a tangential electric component, which can be produced in a vertically applied field by introducing defects into the planar electrodes (e.g., virtual electrodes created by light-patterning of photoconductive layers)[175,176]. Hong et al. proposed that, on top of a plasmonic nanohole array, the tangential component of the a.c. electric field could drive an electro-osmotic flow directed away from the nanohole array, which counterbalances the inward ETP flow (the laser-illuminated area is inside the nanohole array) and forms stagnation zones [Fig. 13(c)]. Suspended particles can be trapped in the stagnation zones by the balanced counterflows[178,179]. Moreover, by translating the laser spot, the trapping sites evolve accordingly, always located several micrometers away from the plasmonic hot region, as exhibited in Fig. 13(d). Therefore, trapped particles are free from both optical and thermal damage, and the possible influence of the optical gradient force can be ruled out.
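The a.c.-frequency dependence of the ETP body force can be sketched via the charge relaxation time of the medium. The rolloff factor below follows standard electrothermal theory; the conductivity and permittivity are water-like values assumed purely for illustration.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def etp_dispersion_factor(freq_hz, sigma, eps_r):
    """Frequency rolloff 1/(1 + (omega*tau)^2) of the Coulombic term of the
    electrothermal body force, with charge relaxation time tau = eps/sigma."""
    tau = eps_r * EPS0 / sigma
    omega = 2 * math.pi * freq_hz
    return 1.0 / (1.0 + (omega * tau) ** 2)

# Illustrative water-like medium (assumed): sigma = 10 mS/m, eps_r = 80
sigma, eps_r = 1e-2, 80.0
f_c = sigma / (2 * math.pi * eps_r * EPS0)   # charge relaxation frequency
print(f"charge relaxation frequency ~ {f_c / 1e6:.1f} MHz")
print(f"rolloff at 0.1*f_c: {etp_dispersion_factor(0.1 * f_c, sigma, eps_r):.3f}")
print(f"rolloff at 10*f_c:  {etp_dispersion_factor(10 * f_c, sigma, eps_r):.4f}")
```

Below the relaxation frequency the Coulombic term dominates; well above it, only the dielectric term survives, which is one reason the a.c. frequency acts as a control knob for the trapping dynamics.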
An extra degree of freedom for particle manipulation lies in the dependence of the electrohydrodynamic force on the applied electric field. By tuning either the magnitude or the frequency of the a.c. field, the trapping dynamics change correspondingly, and so do the positions of the trapping sites[25,177].

3.2 Optical Axial Manipulation: Pulling Using Light

While optical radiation pressure has long been used to push objects along the direction of light propagation, the reversed case, that is, pulling the object all the way towards the light source, is rather counterintuitive. The following sections are devoted to this extraordinary phenomenon of pulling using light: optical tractor beams based on the optical force and on the photophoretic force are covered in Secs. 3.2.1 and 3.2.2, respectively, and in Sec. 3.2.3 a novel scheme of light-assisted pulling is discussed, which utilizes opto-thermoelectric effects in micro-fluidic systems.

3.2.1 Optical pulling

In single-beam optical tweezers, the trapping force in the axial direction, which relies on a strong gradient force overcoming the radiation pressure, functions to “pull” the particle backward towards the axial intensity maximum[8,10]. However, this pulling scheme works only within a rather short range, featuring a single static equilibrium point that stops the particle from moving further upstream towards the light source. To achieve optical traction over longer distances, one can in general proceed from three perspectives: (1) structuring the incident electromagnetic field, (2) modifying the objects, or (3) modifying the surrounding media[180,181]. Research on structured light has benefited greatly from the development of SLMs, and so has the field of optical manipulation. For instance, optical conveyor beams can be constructed by superimposing coherent Bessel beams generated by an SLM.
The resultant light beam possesses periodic intensity variations along the propagation direction, and particles can be delivered either downstream or upstream by imposing time-dependent phase offsets among the constituent Bessel beams, which modulate the axial intensity of the conveyor beam accordingly. In this scheme, retrograde particle delivery was realized by “retreating” the conveyor beam together with its axial intensity maxima towards the light source, whereby the particle follows the same retreating pace and moves upstream under the influence of the axial gradient force. Alternatively, the optical scattering force, though counterintuitively, can also be directed to implement optical pulling. In 2010, Lee et al. demonstrated the holographic construction of optical solenoid beams with spirally evolving intensity maxima, the wavefronts of which can be inclined independently in a retrograde direction relative to beam propagation, thus leading to negative radiation forces enabled by the reversed phase gradients [Fig. 14(a)]. To construct a more generalized picture of the optical pulling force, Chen et al. considered the case of a single Bessel beam (with a vanishing intensity gradient along the optic axis) interacting with individual particles. The diagram in Fig. 14(b) shows that particles experience backward radiation pressure only when the projected axial momentum of the re-emitted irradiance surpasses that of the incident beam: in the spirit of linear momentum conservation, the illuminated particle is then subject to a backward recoil force. However, under multipole interference, the situation favoring forward scattering is rare, which poses strict constraints on the particle dimension (relative to the wavelength), permittivity, permeability, and the k-vector distribution of the incident Bessel beam. This hard-to-fulfill condition explains the difficulty of achieving negative scattering forces in experimental practice.
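The conveyor-beam mechanism can be illustrated with a minimal two-component on-axis interference model (a stand-in for the full Bessel superposition; the fringe period is an arbitrary choice of ours):

```python
import numpy as np

def conveyor_axial_intensity(z, dk, phi):
    """On-axis intensity of two superposed coaxial beams whose axial
    wavenumbers differ by dk, with relative phase offset phi:
    I(z) = |1 + exp(i*(dk*z + phi))|^2 = 2 + 2*cos(dk*z + phi)."""
    return np.abs(1 + np.exp(1j * (dk * z + phi)))**2

dk = 2 * np.pi / 5e-6               # 5 um axial fringe period (illustrative)
z = np.linspace(0.0, 20e-6, 40001)
window = (z > 1e-6) & (z < 6e-6)    # follow the fringe that starts at z = 5 um

# Intensity maxima satisfy dk*z + phi = 2*pi*m, so ramping phi up slides each
# fringe upstream (towards the source) by dz = -dphi/dk, dragging the particle.
for phi in (0.0, np.pi / 4, np.pi / 2):
    zmax = z[window][np.argmax(conveyor_axial_intensity(z[window], dk, phi))]
    print(f"phi = {phi:.2f} rad -> fringe maximum at z = {zmax * 1e6:.3f} um")
```

The tracked maximum retreats monotonically as the phase offset is ramped, which is exactly the "retreating conveyor" picture described above.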
Figure 14. Optical axial manipulation: pulling using light. (a) Optical solenoid beams with tilted wavefronts. From left to right are three circumstances where the local k-vectors are directed along, perpendicular to, or opposite to the spiral intensity profile. The spiral pitches are kept the same for comparison. (b) Angular distribution of the scattered irradiances of two particles relative to the incident Bessel beam (green dotted line); 0° denotes the forward direction. (c) Schematic showing the gain of a backward recoil force owing to directional SPP excitation. Inset shows the composition of the system. (d) Dependence of the J-factor on the geometry of the hybridized particle and the laser beam polarization. (e) Illustration of polarization-controlled particle delivery. Inset shows the geometry of the hybridized particle, which is composed of a glass shell and a thin Au coating. (f) Schematic showing the temperature profile of an illuminated silicon particle, the distribution of ionic species, and the resultant opto-thermoelectric field. CTAC surfactants are added to the solution to regulate the charged species. (g) Sequential optical images of long-distance particle pulling towards the fiber tip. Scale bar: 5 µm. (a) Adapted from Ref. . (b) Adapted from Ref. . (c) Adapted from Ref. . (d), (e) Adapted from Ref. . (f), (g) Adapted from Ref. . For the second approach, chiral particles have been widely exploited to couple the angular momentum of light to mechanical linear momentum. With a delicate arrangement of the particle chirality as well as the light polarization, this angular-to-linear momentum cross-coupling can give rise to an optical pulling force[184–186]. Particles with negative polarizability can likewise be propelled against the light propagation direction by either the optical gradient force or the scattering force[187,188].
In contrast to the former two approaches, the last approach places the emphasis on the surrounding media, so that the incident light and particles can be chosen more freely. Metamaterials with hyperbolic dispersion support cross-shaped volumetric modes, whose high densities of states open up scattering channels and implement steep light intensity gradients in the underlying substrates, ready to be harnessed for optical pulling[189,190]. Moreover, leveraging the same principle as in Ref. , researchers have shown that asymmetric excitation of SPPs can give rise to a backward recoil force, this time without redundant prerequisites imposed on the manipulable particles or the incoming light field. In brief, when a Rayleigh particle is placed in the proximity of a metallic surface, a rotating dipole is induced that favorably couples into the forward-propagating SPPs [Fig. 14(c)], leaving the rest of the story interpretable by the law of momentum conservation. Owing to their extraordinary guided modes, tunable band structures, and momentum topology, PhCs possess a large parameter space exploitable for the realization of optical pulling. For instance, by utilizing a PhC with an unusual concave-shaped topology of light momentum, a structureless incident plane wave, upon scattering off the target particle, can couple to off-axis modes that nevertheless correspond to a larger net axial component of light momentum, thus generating a backward recoil force in the form of a scattering force. Alternatively, sustainable and long-range optical pulling can be provided by the gradient force in a PhC supporting self-collimating Bloch modes, whose interplay with the particle locally generates a negative-gradient region to pull the particle in a self-adaptive fashion.
Given the broadness of its parameter space, plenty of research works are dedicated to this last approach, where optical pulling forces are provided by waveguide mode conversion[193,194], backpropagating beams[195,196], or multi-body coupling. For detailed information, readers are referred to more specialized reviews such as Refs. and .

3.2.2 Photophoretic pulling

The phenomenological interpretation of photophoresis typically depicts an aerosol particle being pushed from its hot side to its cold side in the presence of uneven heating in a gaseous environment[199,200]. The convention is to assume the side of the particle near the light source to be the hot side and the shaded side to be the cold side, so that the photophoretic force is directed along the electromagnetic energy flux, i.e., photophoretic pushing. However, the reversed case, termed photophoretic pulling, is also possible under certain circumstances. As discussed in Sec. 2.2.1, photophoretic effects can be divided into two categories: the ΔT type and the Δα type. The first, better-known type deals with situations where a particle with a uniform accommodation coefficient α is subject to nonuniform heating, leading to temperature variations within the particle that are especially pronounced along the direction of light propagation. In contrast, the second type takes into consideration an inhomogeneous distribution of α; the corresponding photophoretic force at a constant temperature points from the high-α side to the low-α side. In the first scheme, harnessing the ΔT-type photophoretic force, the optical near side does not necessarily coincide with the thermally hot side. As pointed out in Refs.
and , when the light penetration depth is significantly shorter than the particle dimension, i.e., λ/(4πκ) ≪ a (λ is the light wavelength, κ the imaginary part of the particle refractive index, and a the particle dimension), the light–thermal effect concentrates mainly at the optical near side and consequently results in photophoretic pushing[77,84]. For weakly absorbing particles, however, the heat favorably localizes at the rear side, further away from the light source, given that the particle can effectively function as a focusing element[22,77]. As a result, photophoretic pulling can occur, which has been verified both theoretically and experimentally[21,77,84,201]. Considering particles and the atmosphere in the Mie regime (a ≳ λ) and the slip-flow regime (Knudsen number Kn ≪ 1), respectively, a factor J₁ accounting for the heat-source asymmetry was first developed by Yalamov et al. in 1976 [see also Eq. (26)], expressed as a normalized integral of the heat source over the particle volume, in which m is the complex refractive index of the particle, ζ is the reduced spatial coordinate, and B stands for the electric field distribution inside the particle. As the quantifier of the heat-source distribution, the sign of J₁ determines the direction of the photophoretic force: positive J₁ (heat predominantly absorbed at the non-illuminated side) leads to photophoretic pulling, and vice versa. In the most extreme cases, J₁ can be approximated in terms of the absorption cross section[22,84]. By using azimuthally and radially polarized doughnut beams, the same research group that performed the work in Ref. and Figs. 11(a) and 11(b) again demonstrated long-range light-driven transportation of micro-particles, this time extending it to the realization of photophoretic pulling. Figure 14(d) shows that the asymmetry factor flips its sign upon tuning of the geometry of the corresponding hybridized particle [inset in Fig. 14(e)].
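The penetration-depth criterion can be turned into a crude push-or-pull heuristic. This is only a sketch of the stated skin-depth argument, ignoring the full asymmetry-factor calculation; the wavelength, extinction coefficients, and radius below are illustrative assumptions.

```python
import math

def penetration_depth(wavelength, kappa):
    """Optical skin depth l = lambda / (4*pi*kappa), with kappa the imaginary
    part of the particle refractive index."""
    return wavelength / (4 * math.pi * kappa)

def delta_t_photophoresis_direction(wavelength, kappa, radius):
    """Crude heuristic: when the skin depth is much smaller than the particle
    radius, heat deposits on the illuminated front (pushing); weak absorbers
    can focus light onto their rear side, opening the door to pulling."""
    if penetration_depth(wavelength, kappa) < radius:
        return "push"
    return "pull (possible)"

# Illustrative cases at 532 nm for a 5-um-radius particle:
print(delta_t_photophoresis_direction(532e-9, 0.5, 5e-6))   # strong absorber
print(delta_t_photophoresis_direction(532e-9, 1e-4, 5e-6))  # weak absorber
```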
On top of that, particles with the same configuration respond differently (marked by the shaded area) as the polarization of the incident light changes, which provides novel opportunities for particle delivery switchable from downstream to upstream by polarization transformation, as sketched in Fig. 14(e). For the second scheme, the accommodation coefficient is defined as α = (T_r − T_i)/(T_s − T_i), where T_s, T_i, and T_r are the temperatures of the particle surface, the incident gas, and the gas molecules leaving the particle after collision, respectively (refer to Sec. 2.2.1). As a parameter evaluating the efficiency of momentum transfer via molecule–particle collisions, an asymmetric distribution of α can also be leveraged to achieve negative photophoresis[85,203]. The employment of Janus particles with heterogeneous absorption properties, for example, is promising but barely explored so far, possibly owing to the difficulty of controlling the particle orientation in fluidic suspensions[198,204].

3.2.3 Opto-thermoelectric pulling

For strongly light-absorbing particles, both the optical gradient force and the photophoretic force imparted on them tend to be repulsive: the former on account of the enlarged scattering-plus-absorption cross section[113,114], the latter as a result of heat generation predominantly on the illuminated side. To inflict a pulling force on such particles through light irradiation, a counter effect that directs the particle motion opposite to the energy flux, or along the temperature gradient, should be exploited. Recently, such a scheme has been realized using the self-induced opto-thermoelectric force of silicon nanoparticles. As depicted in Fig. 14(f), under laser irradiation, a considerable temperature gradient can be generated inside a silicon particle, pointing from the rear pole to the illuminated front, since amorphous silicon features both strong light absorption and relatively low heat conductivity (1.8 W/mK).
Subsequently, in an aqueous solution, a thermoelectric field can be established in the presence of ionic CTAC surfactants, through the same process as discussed in Sec. 3.1.6. Note that the direction of the generated electric field is opposite to the k-vector of the incident light, and so is the direction of the electrostatic force exerted on the (positively decorated) silicon particle. Hence, by utilizing a collimated optical beam from a single-mode fiber taper, suspended silicon particles were observed to be drawn over a long distance to the fiber tip and finally trapped there [Fig. 14(g)]. Even longer-range pulling has been demonstrated with a multimode fiber. The mechanism behind this phenomenon is interpreted as a self-induced opto-thermoelectric effect, where the illuminated object itself gives rise to the local temperature field without the assistance of plasmonic substrates (which distinguishes this scheme from that of Sec. 3.1.6). In the proposed scheme, the electrostatic force is sustained even during the dynamic process, meaning that the migration of the silicon particle, the positive micelles, and the negative counter ions keep pace with one another without disturbing the spatial separation between the latter two species. Apart from the opto-thermoelectric force, other effects such as the electro-thermoplasmonic force[177,175] and electro-osmosis, both introduced in Sec. 3.1.7, can also direct particle motion towards hot regions; in published works, however, they are mainly used to foster directed transportation parallel to the substrate plane rather than along the optical path. Still, they hold potential for achieving light-induced pulling in more delicate microfluidic systems.

3.3 Optical Lateral Manipulation

In general, lateral actuation requires in-plane symmetry breaking in the light–matter system, as opposed to the concept of trapping.
For instance, in a well-established optical tweezer, a dielectric particle isotropic in both its geometry and its refractive properties will be trapped stably at the laser focus. Asymmetric factors in the light field, such as polarization and wavefront chirality, or in the particle, in the form of elongation, handedness, or birefringence, will disequilibrate this light–matter balance, which, in the context of optical trapping aimed at particle immobilization, would be detrimental. However, the same factors can be highly exploitable for the purpose of object actuation. In the following two sections, we focus on asymmetry in the light field and in the interacting object as the key enablers for in-plane optical manipulation.

3.3.1 Torsional optomechanics

Apart from the linear momentum that is best defined for plane waves (eigenmodes of the momentum operator −iħ∇), light can also carry SAM and OAM, determined by the dynamic rotation of the light field in the polarization or wavefront sense [see Fig. 15(a)]. For the former case, modern quantum theory establishes that collimated circularly polarized beams are eigenmodes of the spin operator, corresponding to eigenvalues of σħ per photon, where σ = +1 stands for left-handedness and σ = −1 for right-handedness (refer to Sec. 2.1.4). This coincides with the classical deduction of Poynting, suggesting that circularly polarized light carries an angular momentum of W/ω (W denoting the light energy and ω the angular frequency). In contrast, OAM describes the wavefront helicity of optical vortex beams, which are associated with the more complex Laguerre–Gaussian modes as a set of solutions to the paraxial Helmholtz equation[42,70]. Vortex beams with an azimuthal phase factor exp(ilφ) (φ denotes the azimuthal angle) contribute an OAM of lħ per photon, in which l takes only integer values.
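For the idealized case of complete absorption, the transferred torque follows directly from the per-photon angular momentum budget; the sketch below assumes full absorption of the beam and uses illustrative beam parameters.

```python
import math

H_BAR = 1.054571817e-34  # reduced Planck constant, J s
C = 299792458.0          # speed of light, m/s

def absorption_torque(power, wavelength, l, sigma):
    """Torque on a fully absorbing particle in a circularly polarized vortex
    beam: each photon delivers (l + sigma)*hbar of angular momentum, and the
    photon arrival rate is P/(hbar*omega), so tau = (l + sigma)*P/omega."""
    omega = 2 * math.pi * C / wavelength
    return (l + sigma) * power / omega

# 1 mW at 1064 nm, l = 2 vortex with left-handed circular polarization:
tau = absorption_torque(1e-3, 1064e-9, l=2, sigma=+1)
print(f"tau ~ {tau:.3e} N m")
```

Flipping the circular polarization (sigma = -1) reduces the total angular momentum channel from (l + 1)ħ to (l − 1)ħ per photon, and the torque drops accordingly.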
Upon light–matter interaction, the two forms of angular momentum carried by light fields can be coupled to their mechanical counterparts, typically resulting in spinning or orbiting of micro-nano objects in the transverse plane.

Figure 15. Optical lateral manipulation enabled by light field and structural asymmetry. (a) Schematic of light carrying SAM (left) and OAM (right). (b) Rotation of a nanodumbbell levitated in a circularly polarized beam. (c) Power spectrum density of the rotational motion in (b). The peak at 2.2 GHz corresponds to a rotation frequency of 1.1 GHz. (d) Experimental setup for producing holographic optical traps carrying transverse phase gradients. (e) Relationship between the traverse speed of a captured colloidal particle and the linear phase gradient in the transverse plane. The sign and the vector norm of the phase gradient represent the direction and the magnitude of the lateral phase gradient, respectively. (f) Calculated electric field distribution (pseudo-color image) and Poynting flux profiles (red arrows) of an illuminated plasmonic gammadion. (g) Sequential dark-field images showing the rotation of individual motors powered by a single gammadion engine or by multiplexed gammadion engines. The illumination wavelengths are 810 nm in the top two panels and 1700 nm in the bottom panel. (h) Generation of the opto-thermoelectric field near an illuminated Janus particle in a defocused light field. (i) Schematics illustrating the self-propelled rotation of a Janus particle trapped in a focused Gaussian beam, showing the in-plane (x and y) components of the optical force and of the opto-thermoelectric force, together with the Stokes drag force. (b), (c) Adapted from Ref. . (d), (e) Adapted from Ref. . (f), (g) Adapted from Ref. . (h), (i) Adapted from Ref. . In the late 1990s, researchers verified that circularly polarized vortex beams, carrying a total angular momentum of (l ± 1)ħ per photon, transfer their SAM and OAM to particles in equivalent manners via absorption[208,209].
However, in other situations where diffraction and scattering dominate, light manifests its spin and orbital angular momenta differently. For instance, circularly polarized light beams have been used to drive birefringent particles to spin, where the particles impose phase retardations between the ordinary and extraordinary components and thereby alter the polarization state of the light field[210–212]. The recoil torque received by the particle can be calculated as τ = Δσ P/ω, where Δσ measures the extent of the change in polarization induced by the particle and P is the beam power. The resultant torque reaches its maximum for Δσ = 2, when the birefringent particle acts as a micro half-wave plate, following the same principle as the experiment conducted by Beth in 1936 using an actual quartz wave plate[65,210]. Non-spherical particles experience torques in both linearly and circularly polarized light fields. Given that their polarizabilities are tensors in nature, the generated dipoles are not aligned parallel to the electric field, hence leading to a torque τ = p × E (p denotes the induced electric dipole)[213,214]. According to the reasoning in Refs. and , radiation torques on dipoles are more about energy transfer between light and the object than about the flow of angular momentum. For a light field with fixed linear polarization, the torque is restoring and depends on the angle θ between the long axis of the non-spherical object (typically rod-shaped or ellipsoidal) and the electric field as

τ(θ) = −(1/4)(α∥ − α⊥)|E|² sin 2θ,

where α∥ and α⊥ denote the polarizability components parallel and perpendicular to the long axis, respectively. As a result, such an alignment torque plays the same role in the rotational sense as the Hookean force in the Langevin equation, and can cause torsional vibrations in weakly damped systems (the Stokes drag torque should be sufficiently small for the alignment torque to manifest). Comparatively, circularly polarized light beams generate constant torques that are balanced by the viscous drag in a steady-state regime[212,214].
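One common small-particle form of this alignment torque can be checked numerically; the sin 2θ prefactor convention and the polarizability values below are illustrative assumptions for the sketch.

```python
import numpy as np

def alignment_torque(theta, alpha_par, alpha_perp, E0):
    """Time-averaged alignment torque on a rod-like dipole at angle theta to a
    linearly polarized field: tau = -(1/4)*(a_par - a_perp)*E0^2*sin(2*theta).
    The minus sign makes the torque restoring towards theta = 0 when
    alpha_par > alpha_perp (long axis aligns with the field)."""
    return -0.25 * (alpha_par - alpha_perp) * E0**2 * np.sin(2 * theta)

theta = np.linspace(0.0, np.pi / 2, 9001)
tau = alignment_torque(theta, alpha_par=2.0, alpha_perp=1.0, E0=1.0)
# The restoring torque is strongest midway between alignment and
# anti-alignment, where sin(2*theta) peaks:
print(f"strongest restoring torque at theta = {theta[np.argmin(tau)]:.4f} rad")
```

The Hookean analogy in the text corresponds to the small-angle limit, where sin 2θ ≈ 2θ and the torque becomes linear in the angular displacement.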
As shown in Fig. 15(b), a silica nanodumbbell is simultaneously trapped optically and driven to rotate in a circularly polarized Gaussian beam. Since the experiment was conducted in high vacuum, the largely reduced Stokes drag coefficient led to an unprecedented rotation frequency of 1.2 GHz, reached when the friction torque finally equalizes the driving torque, together with a record-high rotational Q factor [Fig. 15(c)]. Recently, the concept of negative torque has been proposed, where micro-nano objects rotate opposite to the handedness of the light polarization, a phenomenon very much resembling the optical pulling force. Complying with the law of momentum conservation, the essential requirement for negative optical torque is that the scatterer scatters photons into higher angular momentum channels, generating a recoil torque that overcompensates the extinction torque. To produce such abnormal behaviors, researchers have demonstrated mechanisms including scattering retardation, plasmonic effects, discrete rotational symmetry, and induced dipole interactions among neighboring particles in optical matter arrays[216–220]. Helically phased light beams are endowed with OAM. In Sec. 3.1.5, we introduced (linearly polarized) vortex beams used for photophoretic trapping, where the absorbing particles are confined within the enclosed dark regions and exhibit no torsional movement. Indeed, the capability of optical vortices to transfer OAM is largely suppressed in those cases, given that the manipulated particles are held tightly on the beam axis and are too small to “sense” the whole beam profile[103,222,223]. In off-axis cases, by contrast, transparent particles illuminated by vortex beams orbit around the optical axis while being trapped within the bright annulus[222,223]. Moreover, exploiting both the SAM and OAM of a circularly polarized vortex beam, researchers have observed simultaneous spinning and orbiting of individual particles around their own axes and around the beam axis.
It is worth noting that the torque imparted on the particles mainly originates from the transverse phase gradient of the Laguerre–Gaussian beam, or rather, the lateral scattering force associated with the linear momentum flow in the azimuthal direction, p_φ = lħ/r per photon[70,224]. When multiplied by the radial lever arm, the resultant out-of-plane angular momentum takes on a more familiar look, L_z = r·p_φ = lħ, whereas this term is trivial for the fundamental mode, whose k-vector points mainly in the axial direction. A more generalized theory was provided in Ref. , which claims that transverse phase gradients, imposed either by the skewed wavefronts of vortex beams or by any arbitrarily configured phase profile from an SLM, can give rise to a transversely directed radiation pressure. An instance is given in Fig. 15(d), showcasing the generation of ring-shaped and line-shaped optical traps with lateral phase gradients inflicted on them. By tuning the direction and magnitude of the lateral phase gradient, e.g., ∇φ = q x̂ for the line-shaped trap (x̂ is the unit vector of the transverse x coordinate, perpendicular to the unit vector ẑ along the direction of light propagation), the particle captured by the trap is pushed by the associated lateral scattering force and traverses with a speed proportional to q, that is, proportional to the linear momentum of the light field in the transverse plane [Fig. 15(e)]. Therefore, optical lateral manipulation based on momentum transfer can go beyond the spinning and rotational degrees of freedom and extend to higher levels of customization.

3.3.2 Meta-vehicles: actuating via structural asymmetries

In the previous section, the dynamics of a dipole rotating in a light field with linear polarization were attributed to symmetry breaking in the polarizability tensor, as in τ = p × E.
Indeed, to enable in-plane actuation, there is much more space to be explored simply by introducing asymmetries in the interacting object, and a large body of research is dedicated to further adding variety to this field. Here, we introduce only a few distinctive works. In 2010, Liu et al. developed a light-driven motor, the building blocks of which are gammadion-shaped plasmonic structures embedded in silica microdiscs. Upon illumination by a linearly polarized beam, the gammadion strongly scatters light in directions determined by the plasmonic mode profiles excited at the corresponding incident wavelengths [Fig. 15(f)]. Owing to the electron inertia, relative phase retardations are induced among the currents at different arms of the gammadion as source terms, which are imprinted onto the re-emitted light. Consequently, the scattered light field is endowed with extra angular momentum by the helically distributed phase profile. As compensation, the gammadion receives a recoil torque to maintain the conservation of momentum, which functions as an engine (either individually or collectively) to fuel the rotational motion of the whole structure [Fig. 15(g)]. Likewise, the linear momentum of incident light can be transformed into mechanical angular momentum through crossed momentum transfer, provided that the interacting object features a chiroptical response to the electromagnetic field[227–230], and the sign of the lateral force can be switched by simply reversing the particle handedness. At a more fundamental level, this feature stems from the concomitant excitation of electric and magnetic dipoles, which is shared by structures possessing neither a center of inversion nor mirror symmetries (the definition of chirality)[228,231]. In this way, the lack of helicity in the incident light can be compensated for by the helicity of the structures.
Similarly, by utilizing the asymmetry, or rather, the chirality of microstructures, micromotors can be constructed under uniform illumination by incoherent light. Rather than momentum transfer between the light field and matter, the driving torque in this case is provided by surface tension forces at liquid–air/solid interfaces that are not directed through the centroid (non-zero moment arm), which demonstrates the generality of symmetry breaking in achieving rotational motions. Janus particles possess two distinct properties across their surfaces, and their synthesis is one of the prevailing methods to create structural asymmetry. Very often they take the form of dielectric particles half-coated with thin metal films so as to maximize the contrast between the two opposite surfaces. For an individual Janus particle, typically a few micrometers in diameter, captured by an optical tweezer, the dielectric hemisphere is attracted to the beam center by the optical gradient force. On the other hand, the metallic hemisphere is repelled from the light intensity maxima by the dominant scattering-plus-absorption force. Consequently, the dynamic interplay between the attractive and repulsive forces forcibly adjusts the orientation of the Janus particle, and, with extra perturbations to break the symmetry along the dielectric–metallic boundary face, self-navigation and propulsion of the Janus particle occur in the plane transverse to the light propagation[233,234]. Moreover, the two hemispheres of Janus particles also differ in opto-thermal efficiency, which induces a well-directed temperature gradient pointing from the transparent side to the absorptive side; various thermally driven processes, such as the Soret effect, the thermocapillary effect, and thermoelectric drift, occur thereafter, ready to be harnessed for directed particle delivery[204,235–237].
Figures 15(h) and 15(i) illustrate such an example of an opto-thermoelectric microswimmer. In a defocused laser beam and in the presence of CTAC surfactants, a local electric field forms near the illuminated Janus particle (positively decorated), which is propelled in the direction of the temperature gradient [Fig. 15(h)]. The self-propelled circulation is further demonstrated in Fig. 15(i), where a focused laser beam is used instead. As a result of the specific temperature distribution, the radial and azimuthal components of the electric field provide the centripetal () and peripheral forces (), respectively, counteracting the optical force as both the repulsive force () and the resistance (). Stable rotation and precise navigation of Janus particles have been reported in multiple studies[204,233,236], and smarter manipulation exploring higher degrees of freedom is expected from more delicate structural and light-field designs. Besides Janus particles and other kinds of micro-nano objects with a high degree of asymmetry intentionally introduced into their geometry or composition, minor asymmetries that function as perturbations can translate into evident and regular rotations of particles by utilizing the criticality of the surrounding fluids. Specifically, light-induced absorption can trigger demixing of a critical liquid mixture, producing a diffusiophoretic force that counters the restoring force and pulls the particle out of its trapping center in an optical tweezer, and, in the azimuthal direction, provides a bias for rotation in the presence of minor structural asymmetries. For particles that possess perfect structural symmetry, the asymmetrical bias necessary to trigger the lateral motion should be provided by the light intensity profile or the derivative physical fields through, for instance, the deviation of the light beam from the particle center[239,240].
4 Optical Manipulation in Solid Environments
As the antithesis of Sec.
3, which discusses optical manipulation in the fluidic domain, this section concentrates on the implementation of optical manipulation in the solid domain. Two major challenges come along with the change of the working scenario. First, in solid environments, the resisting force (e.g., van der Waals force) exerted upon micro-nano objects increases dramatically compared to that in fluids, typically reaching the scale of , in the face of which the optical force () pales into insignificance. Second, since the objects are in direct contact with the substrates rather than being “suspended” in fluids, derivative forces relying on light-induced fluidic motions (e.g., the thermophoretic force) become ineffective due to the lack of “flow” along the solid boundaries (the no-slip condition for viscous fluids). The orders-of-magnitude gap between the accessible driving force and the resistance thus prohibits directly transferring the light actuation scheme from the fluidic to the solid domain. Instead, to enable optical manipulation of micro-nano objects in solid environments, modifications and even new mechanisms are required, which could be achieved by modulating CW light into pulsed forms (Sec. 4.1) and utilizing the associated impulsive physical effects, exploiting light-induced photothermal deformation (Sec. 4.2), or introducing fluidity into the actuating systems (Sec. 4.3). Apart from the pulsed optical force (introduced in Sec. 4.1.1), the rest of the mechanisms all exploit optically induced/associated effects by interfacing the “energy channel” of light.
4.1 Driving Using Pulsed Light
In Sec. 4.1, we introduce four different actuating mechanisms induced by pulsed light irradiation, which involve the pulsed optical force (Sec. 4.1.1), elastic waves excited in actuators (Sec. 4.1.2) or the substrates (Sec. 4.1.3), and transient light–thermal effects (Sec. 4.1.4).
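The scale gap described above can be illustrated with an order-of-magnitude sketch (my own, using assumed textbook values: 100 mW of CW power, a momentum-transfer efficiency of 0.1, and micronewton-scale adhesion; none of these numbers are taken from the cited experiments):

```python
# Order-of-magnitude comparison: CW radiation-pressure force F ~ Q*P/c
# versus a typical solid-surface van der Waals adhesion force.
c = 3.0e8           # speed of light (m/s)
P = 0.1             # CW laser power: 100 mW (assumed)
Q = 0.1             # momentum-transfer efficiency (assumed)

F_opt = Q * P / c   # ~3.3e-11 N, i.e. tens of piconewtons
F_adh = 1e-6        # micronewton-scale adhesion (assumed)

ratio = F_adh / F_opt
print(ratio)        # gap of roughly four orders of magnitude
```

Even with generous assumptions about the optical force, the resistance wins by about four orders of magnitude, which is the quantitative core of the argument for pulsed and thermoelastic schemes.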
By virtue of the pulsed nature of the light source, the pivotal physical processes involved in the four scenarios all exhibit transient dynamics and impulsive characteristics. The first scheme, namely, the pulsed optical force, is an immediate extension of the conventional optical force typically discussed in the CW light framework. The last scheme is novel and entails intense light–matter interaction, yet a general theory accounting for the experimental phenomena is still lacking, and the particles experience thermal ablation before the actuation takes place. In comparison, opto-thermoelastic wave manipulation, the theory of which is introduced in Sec. 2.3, establishes a distinct picture that connects multiple physical fields with rigid and unambiguous coupling relations, and is capable of inducing multi-degree-of-freedom locomotion in the presence of -scale resistance forces while maintaining the integrity of the actuators. Though at the current stage it has been demonstrated on only a few platforms (e.g., micro-nano fibers), we believe that the elastic-wave-assisted scheme will act as the main force in the “march of optical manipulation towards highly adhesive regimes.”
4.1.1 Pulsed optical force for stuck particle ejection
Compressing electromagnetic energy into pulsed forms brings about a tremendously high peak power, which can be several orders of magnitude larger than the average power, depending on the pulse repetition rate and the extent of “compression” in the time domain. In the meantime, the optical force of a pulsed laser inherits the temporal evolution of the impulsive power flux, exhibiting peak values that are significantly elevated compared with the CW counterpart. Hence, it is possible that at some point on the pulse’s rising edge, the transient optical force could surpass the strength of the van der Waals adhesion.
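The inverse-duty-cycle argument behind this peak-power enhancement can be made concrete with a back-of-envelope sketch (the 1 mW / 10 kHz / 10 ns numbers are illustrative assumptions, not values from the cited works):

```python
# For a pulsed laser, the peak power exceeds the average power by the
# inverse duty cycle 1/(f_rep * tau).
P_avg = 1e-3     # average power: 1 mW (assumed)
f_rep = 10e3     # repetition rate: 10 kHz (assumed)
tau = 10e-9      # pulse width: 10 ns (assumed)

duty_cycle = f_rep * tau        # fraction of time the laser is "on": 1e-4
P_peak = P_avg / duty_cycle     # 10 W

print(P_peak)    # 10.0 -> four orders of magnitude above P_avg
```

Since the transient optical force scales with the instantaneous power, the same four-order-of-magnitude boost applies to the force at the pulse peak.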
Inspired by this deduction, researchers have utilized pulsed lasers to eject particles initially attached to a glass substrate, which could then be captured and levitated by a conventional CW optical tweezer after detachment[241,242]. In the corresponding works, the axial gradient force of the pulsed laser “kicks” the attached particle in a pulsed fashion, and detachment does not occur until the transient kick surmounts the strength of the van der Waals force, which is estimated to be at the nanonewton level in the experimental scenarios (the situation here differs from those where actuators locomote “along” the substrate surface and experience stronger adhesive forces at scale)[242,243]. Considering the transparency of both the particles and the substrate to the incident light, and also the fact that such a large force is beyond the reach of a CW optical tweezer with power output, the axial optical force (which also includes the scattering force) of the pulsed laser, whose peak power is at scale, is the sole cause of particle ejection. Further shrinking the pulse width could in principle further increase the peak value of the pulsed optical force, while the average value remains basically the same as long as the single-pulse energy is kept constant. This very feature has made pulsed light sources competitive in optical actuation, and also comparable to CW light with regard to optical trapping, as long as the repetition rate is high enough to prevent the particle from drifting during the pulse intervals[245,246]. Nevertheless, relevant studies concerning pulsed light are still scarce (in contrast to those of CW light), in the fields of both optical trapping and light actuation.
To fully exploit the transient optical force beyond enabling particle detachment, for instance, in driving the locomotion of micro-objects against the in-plane resistance force, extra care should be taken to trade off further “compressing” the laser pulses against unwanted nonlinear absorption.
4.1.2 Actuator-supported elastic waves for multi-mode manipulation
Despite the -scale adhesion force in the solid domain, a series of studies focusing on light-induced multi-mode actuation has recently been reported, including out-of-plane and in-plane rotations, translation[36,247], and composite locomotion combining both rotational and translational degrees of freedom driven by nanosecond pulsed light[37,38]. These works share the same basic experimental setup based on microfiber–plate/nanowire systems, in which the microfiber functions as both the evanescent waveguide in the optical part and the stator in the mechanical part, while the plasmonic microplate/nanowire plays the triple role of light absorber, acoustic waveguide, and actuator.
General actuation principles
A general picture of the driving mechanism is depicted in Fig. 16(a). In brief, upon pulsed light irradiation through the microfiber, the plasmonic actuator absorbs the evanescent light tunneling through the fiber sidewall and converts it to heat, which subsequently couples to the guided elastic waves propagating in the actuator. The essence of the actuation lies in the interplay between the surface friction (external force) and elastic waves (internal force) during the impulsive heating and cooling cycles, as summarized in Ref. . In the fiber–plate system shown in Fig. 16(b), assuming that elastic waves mainly propagate along the axis, a rectangular microplate, as the acoustic waveguide, supports longitudinal and transverse modes [Fig.
16(c)], which, respectively, give rise to its locomotion along the azimuthal (i.e., rotation) and axial (i.e., translation) directions of the cylindrical microfiber.
Figure 16. Spiral, rotational, and translational motions induced by actuator-supported elastic waves. (a) Illustration of the driving mechanism in the opto-thermoelastic scheme, which centers around the interplay between surface friction and the thermally induced elastic waves, with the enabling elements being pulsed light, absorption, and the heating and cooling cycles. (b) Schematic showing the zoomed-in configuration of a fiber–microplate system. The shaded region denotes the contact surface at which the friction force functions as a “fence,” blocking the transmission of thermally excited elastic waves. The inset table links the motion states with the relation between the effectively absorbed power and the threshold power , and with the relation among the friction force , maximum static friction , and transmittance of the thermally excited elastic waves . (c) Calculated band structure of a rectangular gold plate as an elastic waveguide. L and T modes denote longitudinal and transverse modes, respectively. (d) Sequential optical images showing the spiral motion of a hexagonal gold plate around a static microfiber during one rotation period. Scale bar: 15 µm. (e) Illustration of the rotation of gold microplates with opposite lateral asymmetries. The solid and dashed lines in purple denote the propagation of excited and reflected elastic waves, respectively. (f) Single-pulse locomotion of the gold plate showing its simultaneous crawling towards the SW and turning relative to the stator. The contact point O should remain unchanged after a complete motion step. (g) Translation of plasmonic nanowires on microfibers driven by pulsed light of different wavelengths.
(h) Schematic of the nanowire exhibiting earthworm-like crawling motion in a heating–cooling cycle induced by a single light pulse at 1064 nm. The top two panels are within the heating period, and the lower two panels correspond to the cooling period. (i) Temporal evolution of the displacement of the nanowire’s frontend in the direction [coordinates are denoted in (h)]. (a) Adapted from Ref. . (b)–(d) Adapted from Ref. . (e), (f) Adapted from Ref. . (g)–(i) Adapted from Ref. .
Notably, from the perspective of the elastic wave equation, since the friction force and transient light absorption are both source terms contributing to the net displacement fields (refer to Sec. 2.3), a threshold light power exists, at which point the counter effect of the maximum static friction is perfectly cancelled out by that induced by light absorption , marking the initiation of the actuation [see the inset table in Fig. 16(b)]. A more vivid picture is to depict the friction force as a “fence” at the contact surface [shaded region in Fig. 16(b)] resisting the transmission of the absorption-induced elastic waves, and the onset of transmission, that is to say, , signifies the initiation of the microplate locomotion. An empirical estimation of the threshold power absorption for gold microplates experiencing -level friction forces is at scale, written as Eq. (41), whose parameters are the specific heat capacity and thermal expansion coefficient of gold, the sound velocity of the specified acoustic mode, the time for the thermally induced elastic waves to be launched from the absorption center and reflected back to the contact surface (several back-and-forth reflections might be involved), and the elastic wave lifetime. It is noteworthy that for CW light, the effectively absorbed power descends to zero, given that power absorption and power leakage nullify each other upon reaching the steady state.
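One timescale entering the threshold estimate, the elastic-wave round-trip time, can be estimated directly. The sketch below (my own, with an assumed 10 µm path from the absorption center to the contact line) uses the longitudinal sound speed of gold:

```python
# Round-trip time of a longitudinal elastic wave across a gold microplate,
# i.e. the reflection-time parameter entering the threshold-power estimate.
v_L = 3240.0     # longitudinal sound speed in gold (m/s)
L = 10e-6        # absorption center to contact line: 10 um (assumed)

tau_rt = 2 * L / v_L
print(tau_rt)    # ~6.2e-9 s, commensurate with nanosecond pulse widths
```

The fact that this round trip is itself a few nanoseconds is what makes nanosecond pulses, rather than CW light, the natural drive for this mechanism.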
Experimentally, spiral motions of the gold plate revolving around the microfiber (stator) have been observed through the optical microscope [Fig. 16(d)], which can be decomposed into two constituent locomotions, namely, rotation and translation, exactly corresponding to the two fundamental acoustic modes in Fig. 16(c). The same behavior can be reproduced using hexagonal, triangular, circular, and rectangular microplates, suggesting the generality of the proposed mechanism. Delving further into the experimental observations, researchers have found that the “asymmetry” in the fiber–plate system is the necessary bias required to activate the actuation. Specifically, for rotational locomotion, the lateral asymmetry demarcated by the fiber–plate contact line determines the sense of rotation of the actuator, in that its short side, or more vividly, the short wing (SW), always “drags” the long wing (LW) to advance along the fiber’s circumference, regardless of the relative pose of the microplate [Fig. 16(e)]. A phenomenological interpretation was first given by Lu et al., stating that the geometric asymmetry is accompanied by unequal propagation lengths of the elastic waves on the two wings, and the longitudinal oscillation in the SW should dominate that in the LW due to less attenuation. The effect of asymmetry is also implicitly embedded in the term in Eq. (41): given that the elastic waves take less time to complete a round trip (marked by the solid and dashed lines in purple) on the SW, they are associated with a smaller threshold power , meaning that locomotion induced by SW elastic waves is relatively easier to enable than that induced by LW waves. Conforming to the tendency of thermal expansion at the heating edge, the prevailing displacement carried by SW longitudinal waves points from the contact line to the short side, and vice versa for the LW.
Hence, the microplate crawls preferentially towards the short side, essentially irrespective of the light launching direction in the microfiber. Moreover, during the cooling period that follows, the tendency of contraction does not annihilate the displacement built up at the heating edge, since it is resisted by the adhesion force, which counterintuitively serves as a facilitator preserving the previously attained locomotion of the contact surface. The above heating–cooling cycle repeats with each individual light pulse, which prompts pulse-wise locomotion of the microplate with sub-nanometer resolution. Additionally, in the presence of the radial component of the adhesion force, the microplate crawls tangentially and is pulled centripetally towards the fiber at the same time, its total locomotion thereupon taking the form of rotation around the microfiber driven by consecutive light pulses [Fig. 16(f)]. Following the same deduction, the translational degree of freedom is unlocked by the synergetic effects of the pulsed-light-excited transverse acoustic modes and the bias caused by the axial asymmetry. In 2021, Linghu et al. demonstrated the actuation of plasmonic nanowires on microfiber platforms, as shown in Fig. 16(g). Owing to the small width of nanowires (a few hundred nanometers), the bias required by the rotational degree of freedom is missing, thus making the translation of the actuator more explicit in the “purified” composite locomotion, as opposed to the hybrid motion observed in the plate–fiber configuration. In the nanowire–fiber system, an intriguing feature of the leftover eigenmode (i.e., translation) is that the movement direction of the actuator flips upon a change of the light source wavelength, while the direction of light propagation is kept constant [Fig. 16(g)].
This phenomenon can be accounted for by electromagnetic theory: the interference patterns between the excited mode in the plasmonic nanowire and the guided mode in the microfiber have different spatial distributions at different wavelengths. Specifically, at 1064 nm, the electric field intensity peaks at the frontend of the nanowire (the far end relative to the incident light), whereas at 532 nm, the electric field mainly concentrates at the backend. Thereupon, at the heating edge in the former case, the frontend of the nanowire exhibits stronger photothermal effects, associated with more intense thermal expansion both along and perpendicular to the fiber–nanowire interface, leading to a net forward motion of the nanowire centroid and a gradient shrinkage of the interfacial gap, which is most pronounced at the nanowire’s frontend [top two panels in Fig. 16(h)]. As the cooling process sets in, earthworm-like translation of the nanowire is expected, in which the frontend, possessing the smallest interfacial gap, is anchored as the most adhesive region while the backend crawls forward, conforming to the general tendency of contraction [lower two panels in Fig. 16(h)]. In consequence, the nanowire locomotes translationally in such a way that the more heated end drags the less heated end to advance along the fiber axis in a pulse-wise manner, indeed following the same rule as in Fig. 16(f). The underlying mechanism is the asymmetric excitation of the transverse elastic waves along the nanowire length, further assisted by the adhesion force, manifesting the delicate duality of the latter in both resisting and facilitating solid-domain locomotion in the opto-thermoelastic wave coupling scheme. The elastic wave nature of this mechanism is unveiled by probing the local displacement of the nanowire at nanosecond resolution, as shown in Fig. 16(i).
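The pulse-wise inchworm stepping described above can be caricatured with a toy model (my own sketch, not the authors' quantitative simulation): in each heating–cooling cycle, the more heated free end expands forward by dx while the adhered end holds; the subsequent contraction, anchored at the newly adhered front, drags the back end forward by the same dx, so the centroid advances dx per pulse.

```python
# Toy inchworm model of pulse-wise crawling (illustrative only).
def step_cycle(back, front, dx):
    front += dx      # heating: thermal expansion pushes the front forward
    back += dx       # cooling: contraction, pinned at the front, drags the back
    return back, front

back, front = 0.0, 1.0   # initial end positions (arbitrary units)
for _ in range(100):     # 100 light pulses
    back, front = step_cycle(back, front, 0.01)

print(back, front)       # both ends advanced by 100 * 0.01 = 1.0
```

The model makes plain why the net displacement is strictly proportional to the pulse count, which is the origin of the sub-nanometer stepping resolution noted above.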
During a single heating–cooling cycle, an initial impulsive thermal expansion is followed by fluctuating contractions in the cooling period, which indicates the back-and-forth oscillation of the transverse elastic waves and should be accompanied by a similarly fluctuating friction force that constantly flips its sign. The gradual weakening of the oscillation marks the elastic attenuation. Apart from the asymmetric distribution of light absorption, nonuniform contact between the actuator and the fiber appears as a second source of the axial bias needed for translational locomotion, and correspondingly, it is the contact side that drags the non-contact side to crawl forward.
Other motion patterns
Rotational and translational locomotion of the microplate/nanowire is induced by the longitudinal and transverse acoustic modes, respectively. Other than the two fundamental locomotion modes and hybrid spiral motions, several other motion patterns have been reported. Recently, Lyu et al. demonstrated the in-plane rotation of gold microplates on microfibers, whereby the microplate turns by a certain angle around an axis perpendicular to its base plane upon illumination by pulsed light, as shown in Fig. 17(a). The blue dot denotes the rotation center, which, in a quantitative sense, essentially stays still, while the rest of the structure picks up nonuniform in-plane displacements proportional to the distance between the rotation center and the local volume element. This time, two sources of asymmetry provide the bias that guides the locomotion, namely, the geometric asymmetry of the two wings and the absorption asymmetry along the fiber–plate contact line [Fig. 17(b)]. The combined effect of the two asymmetric factors leads to a gradient distribution of azimuthal displacement along the contact line, which causes a general motion of the actuator towards the SW and a simultaneous turning of the microplate.
The same effect has also been discussed for a fiber–nanowire system in Ref. , where it manifests in the self-parallel parking of the nanowire. Remarkably, as suggested in Fig. 17(c), once the nonuniformity in the absorption profile is erased (lower panel), the in-plane locomotion no longer occurs, as the excited longitudinal waves along the contact line oscillate with the same magnitude.
Figure 17. Other motion patterns observed on the fiber–plate system. (a) Sequential optical images showing the in-plane rotation of a gold microplate on a microfiber under continuous light pulse injection. The base plane of the gold plate coincides with the plane in the sketch on the left. (b) Mechanism of the in-plane rotation of gold plates on optical fibers, which incorporates electromagnetic interference, light absorption, and the asymmetrically excited longitudinal elastic waves. (c) Comparison between the two cases with a linear absorption profile along the fiber–plate contact line (upper panel) and a uniform distribution (lower panel). The displacements of the two highlighted points at the extremities of the contact line are recorded, showing the close relation between the asymmetry in the displacement field and the asymmetry in the optical absorption. (d) Dynamic recordings of the back-and-forth oscillating motion of a gold plate on a tapered fiber probe. Supercontinuum light was adopted as the light source and delivered into the tapered fiber. (e) Proposed mechanism accounting for the bidirectional locomotion of the gold plate. The oscillation was believed to be the result of competition between the optical pushing force and the photophoretic pulling force. (f) Sequential SEM images showing the spiral motion of an antimony telluride plate on a microfiber. The repetition rate of the pulsed light is kept low (230 Hz), and the average power used is 0.1 mW. Scale bar: 5 µm. (g) Ablated microplate exhibiting liquid-like motions with a micro-bump contacting the underlying microfiber.
The repetition rate of the pulsed laser is 11.5 kHz, and the average light power is 5.4 mW. (a)–(c) Adapted from Ref. . (d), (e) Adapted from Ref. . (f), (g) Adapted from Ref. .
A novel scheme of back-and-forth locomotion of gold plates on a tapered optical fiber probe was reported in 2017 [Fig. 17(d)]. As sketched in Fig. 17(e), the initial explanation of this observation is the synergetic action of the optical force and the photophoretic force, with the former pointing along the light propagation and the latter directed against it. Hence, once the microplate is close to the end of the fiber probe, it experiences a stronger photophoretic force, given that the evanescent-wave-induced photothermal effects are highly enhanced at the tip region, and so is the temperature gradient on the gold plate; when the microplate is pulled far away from the tapered fiber end, the optical force becomes dominant and pushes it back to complete the oscillation cycle. Despite the alluring dynamics depicted in this explanation, the calculated optical force and photophoretic force are both at scale, which, referring to previous analyses in this review, should be overwhelmed in the face of the scale gap with the level friction force. An alternative interpretation based on the opto-thermoelastic mechanism might resolve this confusion, which is further supported by the fact that pulsed light was adopted as the light source in this work, albeit more information is needed to account for the bidirectionality of the reported motion patterns. Alternatively, assuming that the authors’ initial deduction still stands, one might consider the temporal evolution of the optical and photophoretic forces in their nanosecond pulsed form rather than their time averages, which directly determines whether the magnitude gap can be bridged to enable the locomotion.
Besides metallic materials, pulsed-light-driven actuation has been tested on 2D topological insulators, a group of materials hosting unique optical and electronic properties that has received special attention due to the existence of topologically protected boundary states. Figure 17(f) shows the spiral motion of an antimony telluride microplate around a microfiber recorded in situ in an SEM chamber. In effect, none of the featured properties of topological insulators is relevant to qualifying as a suitable actuator for opto-thermoelastic actuation. Instead, the general requirements are that the material of concern be efficient in light–thermal conversion, large in thermal expansion, and relatively small in heat capacity and mass density. From this perspective, antimony telluride might be superior to gold regarding actuation efficiency, and experimentally, the single-pulse step size of the actuator could be more than 10 times that of the gold plate. Moreover, owing to the poor thermal conductivity of antimony telluride, the heat on the plate cannot be completely diffused within the finite cooling window at a high pulse repetition rate (), thereby leading to multi-pulse heat accumulation and the local phase transition of the material from solid to liquid. The phenomenon of liquid-like motion uncovered in the opto-thermoelastic scheme is displayed in Fig. 17(g). Briefly, a micro-bump in the viscoelastic state forms at the fiber–plate contact region as a result of Marangoni effects. Asymmetric contact angles at the two edges of the micro-bump give rise to an unbalanced Young’s interfacial force that drives the whole plate towards the side with the larger contact angle. Unstable spiral motions can be observed in the high-repetition-rate regime, possibly because of continuous thermal ablation of the contacting material and the superposition of two sets of motion patterns: elastic-wave-induced locomotion and liquid-like motion.
4.1.3 Substrate-supported elastic waves for particle detachment
Particles adsorbed on substrates are anchored by van der Waals adhesion and remain motionless in the presence of -scale optical forces, yet they can be driven to detach from substrates by excited surface elastic waves. Following the same principle of light–thermal–mechanical coupling as in the previous section, upon irradiation by laser pulses, the absorptive substrates undergo impulsive thermal expansions and contractions as the deposited light energy is converted to heat, subsequently endowing the attached particles with sufficient acceleration to escape beyond the acting range of the van der Waals force; the particles then continue to move upwards as a result of inertia[28,29]. The transient force imparted on the particles can be estimated as , where denotes the photoacoustic conversion efficiency, the transient optical power of the pulsed light, and the sound speed in the substrate (see also Sec. 1). Notably, instead of the anchored particles (the actuators in this context), it is the substrates that generate the acoustic waves, considering that they possess the necessary geometric dimensions to qualify as acoustic waveguides, similar to the metallic plates and wires introduced above. Indeed, this scheme of particle manipulation has found application in the semiconductor industry for pulsed laser cleaning, which we discuss exclusively in Sec. 5.6. Recently, Alam et al. proposed a nanoprinting method in which stuck particles are transferred from the donor substrate, whose top thin layer is made of polydimethylsiloxane (PDMS), to the receiver by virtue of fast substrate expansion [Fig. 18(a)]; surprisingly, only a CW laser was adopted, whose switch-on moment carries an impulsive feature and triggers abrupt and intense surface deformation of the flexible substrate, thereby ejecting the stuck particles [Fig. 18(b)].
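Dimensional analysis of the symbols quoted in the transient-force estimate earlier in this subsection (a conversion efficiency η, the transient optical power P, and the substrate sound speed c_s) suggests the form F ≈ ηP/c_s. A numeric sketch with assumed values follows (η, P, and c_s are all illustrative guesses, not data from the cited works):

```python
# Hedged sketch of the transient photoacoustic force estimate F ~ eta*P/c_s.
eta = 1e-3        # photoacoustic conversion efficiency (assumed)
P = 1e3           # transient optical power at the pulse peak: 1 kW (assumed)
c_s = 5.0e3       # sound speed in a glass-like substrate (m/s, assumed)

F = eta * P / c_s # transient force imparted on the particle (N)
print(F)          # 2e-4 N, far above nanonewton-scale van der Waals adhesion
```

Replacing the speed of light (radiation pressure, F = P/c) with the much smaller sound speed is what lifts this estimate so far above the purely optical force.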
After detachment, while the inertial force maintains the particle motion in the vertical direction, the focused light beam binds the released particle transversely via the optical gradient force, which guarantees pinpoint printing accuracy on the receiver. In contrast, the same scheme fails on hard substrates, where the stuck particles cannot gain sufficient propulsion and essentially remain still, tightly bound to the substrate, suggesting the limited applicability of CW light in coupling to elastic waves. Figure 18.Particle propulsion via light illumination. (a) Illustration of the nanoprinting process where particles are released from the flexible donor substrate and transported to the receiver plate. Insets are SEM images of particles deposited on the receiver plate. (b) Simulated temperature profile and thermal expansion of the PDMS layer via plasmonic absorption of a gold particle. The force for escaping the van der Waals adhesion is provided by the thermal expansion of the PDMS layer on the donor substrate. Scale bar: 500 nm. (c) Laser modification of gold nanoprisms deposited on a nonwettable substrate. The laser fluence increases from top to bottom in the left panel and from bottom left to top right in the right panel. Beyond a certain threshold, the deposited particles are propelled from the substrate. (d) Schematic of the laser-induced forward transfer of nanopatterned particles from a donor to a receiver substrate. (e) Dark-field microscopic image of arrays of transferred particles on the receiver substrate. The adopted laser beam has a square profile. (f) SEM image showing the sub-features contained in a single square pixel shown in (e). The initial patterned geometry on the donor substrate was obtained via nanosphere lithography, which explains the hexagonal alignment of the particles transferred onto the receiver substrate.
(g) SEM images showing the propulsion of deposited gold materials with minor ablation upon femtosecond light illumination. The laser pulse intensity is . (a), (b) Adapted from Ref. . (c) Adapted from Ref. . (d)–(f) Adapted from Ref. . (g) Adapted from Ref. . 4.1.4 Transient light–thermal effects for ablative propulsion Nanopatterned particles are most often fabricated through lithography and thin-film deposition on solid substrates. In general, the binding between the deposited material and the substrate goes beyond the van der Waals regime and involves stronger physical and chemical interactions, given that the deposited material "grows" on and bonds with the top atomic layer of the substrate instead of being adsorbed as a separable individual. Therefore, the manipulation, or detachment, of such particles has to rely on correspondingly more intense processes such as dewetting, phase transition, and plasma formation, which can be generally categorized as "laser ablation" and which render the description "noninvasive," frequently associated with optical manipulation, inapplicable to these schemes. In 2005, Habenicht et al. experimentally demonstrated that nanofabricated gold structures were propelled from the substrate at high speed upon nanosecond laser irradiation [Fig. 18(c)]. The nanostructures transform their shape from flat (wetting) to spherical (non-wetting) before they are ejected[251,252], which has led researchers to conclude that it is the release of surface energy that fuels the jumping particles. The key to particle ejection is the fast energy deposition that enables the phase transition and the fast shape transformation along with the lift of the structure's center of mass, assisted by inertia and reduced adhesion (from solid–solid to solid–liquid interfacial interactions).
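The surface-energy argument can be made semi-quantitative: the energy released when a flat patch dewets into a sphere of equal volume, ΔE = γ(A_flat − A_sphere), partly converts into kinetic energy, giving a take-off speed v ≈ sqrt(2fΔE/m). A minimal sketch, with the pattern dimensions, surface tension, and conversion fraction all assumed for illustration:

```python
import math

gamma = 1.1      # surface tension of liquid gold, N/m (approximate)
rho = 19300.0    # density of gold, kg/m^3
R, t = 500e-9, 30e-9   # flat-disk radius and thickness (assumed)
f = 0.5          # fraction of released energy converted to kinetic energy (assumed)

V = math.pi * R**2 * t                            # conserved volume
r = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)    # radius of the resulting sphere
dA = 2.0 * math.pi * R**2 - 4.0 * math.pi * r**2  # crude area change (disk faces vs sphere)
dE = gamma * dA                                   # released surface energy, J
m = rho * V                                       # particle mass, kg

v = math.sqrt(2.0 * f * dE / m)                   # take-off speed, m/s
print(f"take-off speed ~ {v:.0f} m/s")
```

Under these assumptions the result lands in the tens-of-m/s range, an order-of-magnitude figure only, since the actual contact-area geometry and dissipation are neglected.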
Later investigations have verified this mechanism in multiple initially nonwetting systems using either nanosecond or femtosecond pulsed light sources, and the propelled particles can even be collected by an arbitrary second substrate placed in close proximity to the "donating" substrate, effectively realizing inter-substrate transfer of nanopatterned metallic particles [Fig. 18(d)], a technique summarized as laser-induced forward transfer (LIFT)[253–257]. Owing to the dewetting process prior to particle propulsion, the receiver substrate can receive particles only in spherical, drop-like shapes regardless of their initial geometries, while the fidelity of the particle transfer can be maintained on other fronts such as size and distribution, with the former guaranteed by the law of mass conservation (e.g., by controlling the size and thickness of the nanopatterned particle on the donor substrate)[255,257] and the latter by suppressing air fluctuations (e.g., operating in vacuum chambers)[258,259]. Complex and hierarchical patterns can be created using LIFT by additionally scanning the light source or the donor substrate in horizontal directions. As displayed in Fig. 18(e), the ejected material can be imprinted on the receiver in a pixel-by-pixel manner, and user-defined geometries are acquired by controlling parameters including the scanning trajectory, the shape of the laser spot, and the timing of the light shots. Moreover, the structural hierarchy manifests in that each pixel can host sub-features when the light spot encircles multiple nanopatterned particles on the donor substrate [Fig. 18(f)].
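The mass-conservation constraint mentioned above pins down the size of the transferred droplet: a nanopatterned disk of diameter D and thickness t dewets into a sphere of equal volume. A short sketch (the pattern dimensions are assumed for illustration):

```python
def droplet_diameter(D, t):
    """Sphere diameter after a disk (diameter D, thickness t) dewets
    at constant volume: pi*D**2*t/4 = pi*d**3/6  =>  d = (1.5*D**2*t)**(1/3)."""
    return (1.5 * D**2 * t) ** (1.0 / 3.0)

# A 1-um-wide, 50-nm-thick disk on the donor (assumed values)
d = droplet_diameter(1.0e-6, 50e-9)
print(f"transferred droplet diameter ~ {d * 1e9:.0f} nm")  # ~422 nm
```

Thus, even though the transferred particle loses its original shape, its size is fully determined by the patterned footprint and film thickness on the donor.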
Using plain metallic thin films as the donor layer, more systematic studies have revealed that complementary processes take place in LIFT: etching of the donor substrate, which removes the local material in heat-affected regions, and deposition on the receiver, which appends extra material to areas lying in the path of the ablative propulsion. Hence, LIFT leaves complementary traces in the donor and receiver substrates, both of which, if optically well designed, can be employed as plasmonic devices, metal in-diffused waveguides, diffractive elements (e.g., holographic plates), or photomasks of opposite tones. Considering that the key process of material transfer poses no special limitations with respect to the substrate geometry, non-planar and high-curvature structures such as optical fibers can be patterned by LIFT with sub-micrometer metallic features to form gratings for sensing and filtering applications. Despite researchers' efforts to improve the diversity and versatility of LIFT, the fundamental mechanism behind the ablative propulsion is still not fully understood. Indeed, besides obtaining momentum from the elevation of the center of mass during fast dewetting, the nanopatterned particles can be propelled by the explosive pressure that builds up at the particle–substrate interface upon impulsive ablation, and the latter explanation has more often been adopted in situations involving femtosecond lasers[256,260]. Another interpretation draws an analogy between LIFT and pulsed laser deposition, a standard physical vapor deposition technique in which high-energy laser pulses are involved and the ejected species take the form of plasma plumes[261,262]. Interestingly, when both the pulse width and pulse energy are at appropriate levels, the transient light–thermal effects can give rise to standing up, jumping, flipping, and even rotation of patterned geometries in more intact forms with minor ablation [Fig. 18(g)][263,264].
Ultrafast dynamics should be taken into account to interpret these results. It is likely that the ordinary channels of nonradiative relaxation leading to phonon excitation were blocked, and what occurred instead were more impulsive and localized phenomena such as ionization and material sublimation[265,266]. Both the confined heat generation and the limited time for heat transfer (meaning small heat-affected regions) might have maintained the integrity of the large proportion of propelled particles. 4.2 Photothermal-Deformation-Based Actuation Direct conversion of various environmental stimuli into mechanical work provides opportunities for designing actuators. Photothermal actuation, which links the light signal to material deformation via light-to-heat conversion, emerges as an appealing approach since it usually offers simple design, controllable reconfiguration, and the capability of realizing multi-degree-of-freedom locomotion in solid-state machinery. Indeed, thermal-deformation-based actuation has been widely exploited in micro–electro–mechanical systems (MEMS), where heat-responsive materials are configured into the moving parts of the machinery and driven by electrical resistive heating[267,268]. Following the same principle, the electric part in heat-mediated MEMS can readily be substituted by light components so as to construct micro–opto–mechanical system (MOMS) counterparts. A variety of photothermal effects can be exploited in MOMS devices, or even to actuate objects at the macroscale, such as light-induced volume expansion, molecule desorption, and material phase transition, which are not restricted to certain working environments and are widely applied in solid domains. The basic mechanism of photothermal actuation is a two-step process, successively comprising light-to-heat and heat-to-work conversion.
To begin with, light carrying electromagnetic energy is directed to illuminate the target machinery, whose key components are photothermal materials (e.g., carbon-based materials, plasmonic structures). Upon light–matter interaction, the photoexcited electrons relax via electron–phonon or electron–electron scattering, which, from the perspective of quantum statistics, leads to heat generation. Next, the photothermal materials undergo various changes in their shape, phase composition, surface energy, etc., stimulated by the temperature increase. Note that to build moving parts in the actuator, apart from stimulating the above changes in the materials' physical properties, external constraints or mechanical connections should be implemented in certain configurations (a commonly seen example is the two-layer cantilever with the interface forced to extend to the same level) before the deformation or mechanical work can be manifested and collected. In the following, three photothermal effects are discussed together with their applications in enabling optical manipulation in solid environments. 4.2.1 Photothermal-induced expansion Thermal expansion is a common phenomenon in which materials change their shape and volume with increasing temperature; it is quantified by the (linear) coefficient of thermal expansion (CTE), defined as the relative elongation per unit temperature increase, α = (1/L)(dL/dT), where L and T denote the material linear dimension and temperature, respectively. Apparently, in response to light-to-heat conversion, asymmetric deformation occurs in hybridized structures that incorporate materials with a CTE mismatch. Following this basic principle, Javey and co-workers constructed hybrid films composed of polycarbonate (PC) and single-walled carbon nanotube (SWNT) layers.
The intrinsic light absorbance of the SWNT layer can convert visible to near-infrared light into thermal energy, while the PC membrane, though essentially transparent, is responsible for providing the large CTE contrast relative to the absorptive SWNT layer. When heated, both constituent layers undergo thermal expansion but with considerably different magnitudes, given that the CTE of PC (65 ppm/K) is dozens of times larger than that of SWNT. As a consequence, in the in-plane direction, the PC layer accumulates more extension than the SWNT layer, causing the hybrid film to roll up towards the latter, with the PC layer forming the convex side of the resultant "cantilever plate" [see Fig. 19(a)]. Figure 19.Photothermal deformation-based manipulation. (a) Curling of the SWNT-PC dual-layer structure induced by the CTE mismatch upon light–thermal effects. (b) Schematic showing selective activation of the elementary building block of the artificial muscle. The initial GO-PMMA bilayer structure can be laser-modified into rGO-PMMA (indicated by the shaded areas) to form "joints" of the artificial muscle. Au nanorods are embedded in the bilayer matrix to enhance the light–thermal effects, which also exhibit wavelength selectivity. (c) Sketch of the cantilever beam with micrometer footprint. The other layer that provides the CTE contrast is . The whole device can be prepared using CMOS fabrication procedures. (d) Actuation mechanism of photothermal-induced moisture change based on an rGO/GO-PDA dual-layer structure. (e) Schematic of the assembly-free light-addressable hand. (f) Light-manipulable arm integrated from pre-deformed dual-layer components. (g) Worm-like crawling of a dual-layer machinery based on the phase transition of thermotropic LCs. (a) Adapted from Ref. . (b), (e) Adapted from Ref. . (c) Adapted from Ref. . (d) Adapted from Ref. . (f) Adapted from Ref. . (g) Adapted from Ref. .
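The bending of such CTE-mismatched bilayers can be estimated with the classical Timoshenko bimorph formula for the curvature of a heated two-layer beam; the layer thicknesses, moduli, and temperature rise below are illustrative assumptions rather than the parameters of the actual SWNT-PC device.

```python
def bimorph_curvature(d_alpha, dT, t1, t2, E1, E2):
    """Timoshenko curvature (1/m) of a bilayer heated by dT with CTE
    mismatch d_alpha; t_i and E_i are the layer thicknesses and Young's moduli."""
    h = t1 + t2
    m = t1 / t2
    n = E1 / E2
    num = 6.0 * d_alpha * dT * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m**2 + 1.0 / (m * n)))
    return num / den

# Equal layers (5 um each, same modulus), d_alpha = 60 ppm/K, dT = 20 K:
# the formula then reduces to (3/2) * d_alpha * dT / h.
kappa = bimorph_curvature(60e-6, 20.0, 5e-6, 5e-6, 2e9, 2e9)
print(f"curvature ~ {kappa:.0f} 1/m  (bending radius ~ {1e3 / kappa:.1f} mm)")
```

With these numbers the bending radius is only a few millimeters, illustrating how a modest temperature rise produces deformation visible to the naked eye.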
Based on this dual-layer configuration, grippers, smart curtains, rollers, and other machinery can be obtained, whose locomotion relies on the bending of the elementary building blocks, i.e., cantilever plates, and can be modulated by the direction of illumination, the incident wavelength, and the on–off state of the light source[272–277]. An example of a light-manipulated arm is displayed in Fig. 19(f). The integrated movement is maneuvered by photothermal effects and can be decomposed into twisting/untwisting of the helix structure, which functions as the limb, and folding/unfolding of the stripes attached to the helix, which function collectively as the claw. Both the limb and the claw were pre-shaped so that they could perform complex tasks, with the cantilever-like deformation superimposed on the prescribed configuration in each volume domain; this is a general methodology for obtaining multi-degree-of-freedom locomotion beyond bending/unbending. Inspired by the resemblance between the bending/relaxation of dual-layer structures and the contraction/expansion of muscles, Sun et al. developed monolithic, assembly-free artificial muscles that can reproduce complex locomotion patterns of human limbs or the jointed legs of arthropods. The elementary building block of the artificial muscle is displayed in Fig. 19(b). In their work, the bilayer structure is constituted by a layer of polymethyl methacrylate (PMMA), which features a large positive CTE, and gold nanorod-embedded graphene oxide (GO) cast upon it, providing both light absorption (enhanced by the plasmonic effects of the gold nanorods) and the necessary CTE contrast (the CTE of GO is either small or negative) to the PMMA.
Through one-step laser scribing, the bilayer structure is patterned such that GO in the illuminated area is transformed to reduced GO (rGO), which, compared to unmodified GO regions, possesses significantly increased light-to-heat conversion efficiency. Therefore, an rGO pattern laid in between GO regions can effectively function as the "joints" or "nodes" that coordinate the connected "muscle pieces" via light manipulation, which lays the foundation for building assembly-free and light-addressable robots, as illustrated in Fig. 19(e). 4.2.2 Photothermal-induced phase transition Materials undergo phase transitions with their structures reconfigured at the molecular or crystalline level, which, when accumulated in bulk objects, can induce considerable deformation. The most ubiquitous approach to trigger a phase transition is via temperature change, and this is where photothermal effects come in handy. Unlike the thermal-expansion-based scheme, in which object deformation is proportional to temperature change, only a small temperature window is needed to obtain large deformations through phase transition, since it occurs more abruptly, temperature-wise. Three schemes of photothermal-induced phase transition are introduced below, each represented by a group of specialized materials. Shape memory material (SMM) can spontaneously transition from its pre-deformed state to a permanent, "memorized" shape when heated beyond the transition temperature. This very phenomenon can be utilized for optical manipulation, where the stimulus is exerted through photothermal effects that lead to the release of the strain energy stored in the temporary state[278,279]. Considering the limited capability of light in inducing temperature variations, the pedagogically best-known case of shape memory alloys cannot be easily reproduced using light as the energy source.
Instead, shape memory polymers (SMPs), possessing relatively low transition temperatures (typically equal to the glass transition temperature), are largely addressable by light illumination through either their intrinsic absorption or heat transfer from photothermal inclusions to the polymeric matrix, with the latter further endowing wavelength selectivity to the device. Owing to the mechanical flexibility of SMPs in their elastomeric state, multi-degree-of-freedom locomotion can be achieved by delicately designing the pre-deformed structure[281,282] or, alternatively, when the polymer matrix is transparent, by patterning it with light-absorptive materials, which results in spatially varying light opacity. Indeed, it is possible to bypass the light-to-heat conversion and induce shape recovery of pre-strained SMPs directly by light. In that situation, the SMPs are required to host photoresponsive groups, and the temporary state is frozen by light-initiated crosslinking of the polymeric network rather than through glass transition or crystallization[284,285]. Liquid crystals (LCs) are known for exhibiting phase transitions under external stimuli, among which the thermotropic type responds mainly to temperature change. The nematic–isotropic phase transition of thermotropic LCs involves mesogenic units rearranging from being highly oriented along the long axis to randomly distributed, accompanied by a contraction along the original long-axis direction and an expansion perpendicular to it. When adopted to realize light actuation with assistance from photothermal agents, LCs might be superior to SMPs since reversible deformation of LCs comes naturally with repetitive heating–cooling cycles, while extra strain or stress must be applied to preset SMPs in the temporary state, which renders the "reversibility" in the latter case not as easily attainable.
To harness the deformation of thermotropic LCs, methods such as a double-layer configuration or patterning of the illuminated regions should be employed[286,287]. As illustrated in Fig. 19(g), worm-like crawling movements can be obtained by scanning the laser spot back and forth along a dual-layer stripe that consists of an LC layer and a passive layer. Given that the orientation of nematic LCs can be adjusted by polarized light, LC films can even be pre-patterned with custom-defined alignments (e.g., azimuthal, radial) using photomasks, which adds to the degrees of freedom in optical manipulation since the films deform according to the encoded pattern via phase transition. Moreover, the extensively researched trans-cis isomerization in azobenzene-functionalized LCs can also be employed in building light-addressable soft robots, which are driven by photochemical instead of photothermal processes. Interested readers could refer to Refs. and . The crystalline structure transition of the inorganic compound vanadium dioxide (VO2) is of great interest in photothermal actuation, given its relatively low trigger temperature. Upon heating and cooling cycles, VO2 exhibits a reversible transition between insulating/monoclinic and metallic/tetragonal states, accompanied by a reordering of the unit cell and, subsequently, remarkable deformation at the bulk scale. Hence, by depositing monoclinic VO2 thin films on cantilever beams and applying a heat source, the insulator-to-metal transition of VO2 will generate strains at the film–cantilever interface and result in bending of the structure. Note that there still exists the basic "dual-layer" geometry to implement the mechanical constraints. Moreover, incorporating photothermal materials, e.g., SWNT, with VO2 can enhance the light-to-heat conversion efficiency and reduce the thermal response time, thereby optimizing both the power consumption and the dynamic performance of photothermal actuators [see Fig.
19(c)]. Most importantly, since VO2 (unlike polymeric materials) and its film deposition techniques are highly compatible with the modern CMOS platform, VO2-based actuators can be scaled down to the micrometer regime similar to MEMS devices using nanofabrication techniques, which feature suspended cantilever beams patterned via lithography and released from the substrates by etching. Promisingly, more complex locomotion can be obtained in the optical counterpart of MEMS, or rather MOMS, by utilizing micro-nano fabrication methods to sculpt and decorate the devices. 4.2.3 Photothermal-induced moisture response A volumetric change can be induced via adsorption/desorption of water molecules, during which the material matrix swells or shrinks accordingly. This phenomenon is especially pronounced in hydrophilic materials, and the dynamic moisture response can be controlled by either environmental humidity or photothermal effects[292,293]. An exemplary demonstration has been made by Mu et al., where a quasi-dual-layer structure is adopted, composed of one layer of rGO, which is hydrophobic, and another layer of polydopamine (PDA)-decorated GO, which is hydrophilic. The photothermal capability of both rGO and GO-PDA ensures heat generation when subjected to light illumination over a broad wavelength range. Consequently, following the on and off states of the light source, dynamic heating and cooling cycles imposed on the matrix cause the GO-PDA layer to desorb or adsorb water molecules, while the rGO layer, owing to its hydrophobicity, is largely unaffected by light irradiation. Thereupon, as suggested by Fig. 19(d), the photothermally driven volume change of the GO-PDA stands in stark contrast to that of the rGO, whose volume exhibits little variance, giving rise to bending of the dual-layer film with the largest bending angle reaching 180°.
On top of the dual-layer structure, by imposing an additional composition gradient in the lateral plane (with the assistance of reductant filtration masks), the as-patterned all-graphene papers can perform origami-like self-assembly or even be controlled to walk or swerve by light. Similar actuation schemes have been reported using different water-sensitive and water-inert layers or, to enhance the actuating efficiency, using two active layers that respond oppositely to the trigger signals, all of which retain the classic dual-layer structure as the elementary building block[293–295]. Reversible twisting and rotational motions can also be realized through a photothermally induced moisture response. In brief, the GO-saturated matrix is pre-molded into a twisted fiber. In response to modulated light signals, the fiber experiences an assisting torque and is overtwisted when the light is "on"/upon water desorption, and receives a counter torque that unwinds it to the original state when the light is "off"/upon water re-adsorption. Following the same mechanism, omnidirectional oscillation and a self-sustained swimmer can be achieved through the alternating shrinkage (water expulsion) and reswelling (water re-adsorption) of hydrogel components immersed in water baths. Instead of relying on switching between the on–off states of light, a constant photothermal stimulus was used, and the self-sustained oscillation was mediated by the built-in negative feedback loop of self-shadowing in each oscillation period. To date, this scheme of optical manipulation has mainly centered on large-scale objects, and correspondingly, the response time is usually unsatisfactory, considering the time for heat transfer over long dimensions and the intrinsically retarded desorption/adsorption dynamics following Arrhenius theory.
However, it provides an intriguing alternative for realizing macroscale optical manipulation with moisture-gated capability, and it demonstrates a unique mechanism for materials to "shrink" upon temperature increase (cf. Sec. 4.2.1). 4.3 Tailoring Interactions with Environments In solid states, the pronounced adhesive force hampers the motion of subjects, since they essentially remain rigid bodies anchored either by van der Waals interactions or chemical bonds. The deficiency of fluidity renders the diffusion-based processes that work well in liquid domains highly ineffective in solid environments, where the mass transfer flux is negligible. By inflicting fluidity on either the substrates or the supported cargoes, as discussed in the following two sections, the obstacle of adhesive forces can be bypassed with the exertion of piconewton-scale optical forces or by virtue of the viscous flow of masses. Specifically, the tailored interactions between substrates and cargoes should last only a finite duration to guarantee that the system is in the solid state to start with and finally returns to the solid state after the light source is withdrawn. Hence, alternating fluidization and resolidification are to be expected, stimulated by light–thermal effects and heat dissipation, respectively. 4.3.1 Inflicting fluidity on substrates For solid–solid interactions, the van der Waals force gradually becomes dominant when downscaling towards the micro and nano regimes, dwarfing the light force and even the photophoretic force by several orders of magnitude. Under this circumstance, Li et al. have proposed an approach that bypasses direct confrontation with the adhesive force by introducing a solid-to-liquid phase transition of the substrates.
Figure 20(a) illustrates that, upon CW light illumination, heat can be deposited on the substrate via photon–phonon conversion through plasmonic particles or, alternatively, when the particles are transparent, through an absorptive substrate. A surfactant layer (CTAC) is spin-coated on a solid planar material to form the top layer of the substrate, ready to undergo a reversible first-order phase transition at relatively low temperatures. Consequently, the interfacial condition can be switched between highly adhesive and fluidic by simply turning the incident light on and off, establishing an opto-thermal gate for on-demand photon nudging. More specifically, the light input triggers the "on" state of the opto-thermal gate, allowing objects to be nudged laterally by the optical scattering force or the thermocapillary force; switching off the light triggers the "off" state, which immediately halts the particle locomotion, as the surfactant layer transitions back to the solid state and restores van der Waals adhesion insurmountable by the accessible driving forces. Using this technique, diverse locomotion patterns of particles with different geometries and compositions have been demonstrated [Fig. 20(b)], a capability particularly appreciated in situations requiring reconfigurable solid-state assembly, such as electronic and photonic device construction[298,300]. Figure 20.Tailoring interactions with environments. (a) Schematic of the in-plane photon nudging in the "on" state of the optothermal gate. (b) Rotation, translation, and versatile particle assembly achieved by optothermally gated photon nudging. Scale bar: 3 µm. (c) Generation and transportation of germanium particles within a laser-liquidized region of a silica fiber. (d) Construction of in-fiber p-n homojunctions in a dual-core fiber. The originally separated p- and n-type particles are both drawn to the laser spot and brought into contact.
(e) Schematic illustrating the self-assembly of a liquid filament upon nanosecond laser illumination with a prescribed periodic perturbation. The light-powered dewetting process leads to the breakup of liquid filaments into periodically arranged hierarchical nanoparticles. In comparison, without preassigned perturbations, the multimode Rayleigh–Plateau instability results in randomly distributed particles (left top inset). (f) Time evolution of the filament geometry with the prescribed perturbation. The fine lines connecting neighboring first-order particles subsequently dewet into the second-order particles shown in (e). (a), (b) Adapted from Ref. . (c), (d) Adapted from Ref. . (e), (f) Adapted from Ref. . The phase-transition mechanism also applies to light manipulation of substances embedded in solid media. In 2019, Zhang et al. realized in-fiber particle manipulation through a solid-to-liquid phase transition and the Marangoni effect that follows, where particles are precipitated from the fiber core as a result of Rayleigh–Plateau instability empowered by light–thermal effects. Displayed in Fig. 20(c) is a schematic showing the formation and directed migration of germanium particles induced by high-power CW laser irradiation. First, the fiber in the illuminated region is fluidized owing to the photothermal effects of silica materials at infrared wavelengths. Then, the resultant temperature field induces thermocapillary/Marangoni convection in the unevenly heated fiber matrix, which transports the embedded particles to the laser spot; the whole in-fiber delivery scheme thus depends only on the fluidic field and can be generalized to particles with various sizes, shapes, and materials. To demonstrate this versatility, the authors successfully fabricated p-n homo- and hetero-junctions out of dual-core silica fibers [Fig. 20(d)].
Despite the differences in thermocapillary properties and the location mismatches between particles precipitated from each core, they can be brought together in pairs by the well-directed Marangoni flow towards the laser spot [see insets in Fig. 20(d)]. Indeed, compared to CW light, ultrafast lasers are more often adopted to initiate phase transitions and direct mass flows during the impulsive liquid period, which is followed by quenching at pulse intervals. An additional advantage associated with pulsed light, especially femtosecond sources, lies in the smallness of the heat-affected zone, which enables direct laser writing of chemically or physically modified structures with unprecedented resolution[302,303]. Situations that entail light-induced mass transfer should be inspected at molecular or even atomic scales. Recently, Sun et al. demonstrated such a scheme in which ultrafast laser pulses were used to reorganize the chemical compositions in halide-doped borophosphate glasses; the migration of halide ions, powered by local temperature and pressure surges, has enabled the direct lithography of perovskite nanocrystals with bandgaps tunable by laser parameters. 4.3.2 Inflicting fluidity on deposited materials Deposited materials, usually in the form of thin films, interact with underlying substrates via physical and chemical bonds. In Sec. 4.1.4, we introduced the detachment of deposited materials powered by ultrafast laser pulses, motion that should be categorized as out-of-plane. Indeed, in-plane modulation of the deposited materials is also possible by first increasing the mobility of the materials and then leveraging interfacially directed stresses. Metallic thin-film dewetting is such an example, which works at elevated temperatures in both solid and liquid states with relaxed limitations on atomic diffusion.
Before light illumination (or another heat source) is applied, the metallic films are forced into stripe-like, cylindrical forms in nonequilibrium states, given that they intrinsically cannot wet the substrates. Once light-induced heat is generated within the absorptive films, the dewetting process sets in, during which the sharp-cornered edges retreat and are replaced by more obtuse ones. The driving force for this scheme of mass transfer is the minimization of surface energy and the restoration of the equilibrium state that carries the feature of nonwetting interfaces (which favors deposited material in the form of droplets rather than thin films). Given sufficient time, the resultant in-plane mass transfer and modulation of the geometry can be substantial yet random, owing to the Rayleigh–Plateau instabilities of stripe-like fluids[305,306]; whereas by presetting regular perturbations, the multimode evolution of the instability leading to the breakup of the fluidic stripes can be suppressed, with only a single mode prevailing, as shown in Fig. 20(e). The temporal development of the surface geometry is further visualized in Fig. 20(f), where the prescribed perturbation patterns become increasingly manifest through the ongoing mass transfer in the liquidized stage, enabled by continuous injection of laser pulses[307,308]. Capillary forces that exist at interfaces between different phases can also be exploited in the general picture of optical manipulation, the prerequisite of which is still the fluidization of the deposited substances.
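The prevailing mode can be anticipated from the classical Rayleigh–Plateau result: for a free inviscid liquid cylinder of radius R, the fastest-growing perturbation has wavelength λ ≈ 9.02 R, which sets the droplet spacing. Substrate-supported filaments deviate from the free-cylinder case, so the sketch below is an order-of-magnitude estimate only:

```python
def plateau_spacing(R):
    """Fastest-growing Rayleigh-Plateau wavelength for a free inviscid
    liquid cylinder of radius R (classical result, lambda_max ~ 9.02 * R)."""
    return 9.02 * R

R = 100e-9  # filament radius, assumed 100 nm
lam = plateau_spacing(R)
print(f"expected droplet spacing ~ {lam * 1e9:.0f} nm")  # ~902 nm
```

Prescribing a perturbation near this wavelength reinforces the naturally dominant mode, which is why the single-mode breakup in Fig. 20(e) can outcompete the random multimode evolution.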
Taking the thermocapillary force (the Marangoni effect) for instance, by creating specific temperature profiles using either focused light (a concentrated hot spot) or light-field interference (a periodic temperature distribution), the liquid–gas interfacial tension can be tuned according to the (typically linear) relation $\gamma(T)=\gamma_0+(\partial\gamma/\partial T)(T-T_0)$, which leads to mass transfer of the liquefied material along the gradient of the modified surface tension (typically directed from the cold to the hot region). Utilizing light-induced Marangoni effects, numerous studies have reported the transformation of planar surfaces into complex structures such as periodic gratings and protruding antennas[309–311], where the deposited materials (initially in solid states) first liquefy, then experience in-plane mass flow under the influence of surface tension, and finally resolidify to maintain their geometries after the laser pulses recede.

5 Applications of Optical Manipulation

After half a century’s development, optical manipulation has been applied to myriad scenarios. In Sec. 5, we introduce several representative applications of optical manipulation to provide some insight into where and how this technology can be of practical use and which disciplines can benefit from it.

5.1 Optical Tools for Analyzing Biochemical Molecules and Cells

Ever since the concept of the optical tweezer was first put forward, researchers have been pondering its application in the fields of biochemistry and cellular biology, where single molecules or bioactive cells can be studied in situ while being optically trapped. In 1987, one year after the invention of single-beam optical tweezers, Ashkin switched from dielectric particles to motile Escherichia coli bacteria as the targets to be captured by the focused laser beam, which marked the destined encounter between light manipulation and biological investigations at the micro-nano scale.
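The forces such tweezers exert on biological specimens are on the piconewton scale, which can be seen from the common efficiency parameterization $F = QnP/c$; a back-of-the-envelope sketch (the trapping efficiency Q = 0.1 here is an assumed illustrative value, not a number from this review):

```python
def trap_force_pN(P_watt: float, Q: float = 0.1,
                  n_medium: float = 1.33, c: float = 2.998e8) -> float:
    """Order-of-magnitude optical trapping force, F = Q * n * P / c,
    returned in piconewtons. Q is the dimensionless trapping
    efficiency (assumed illustrative value), n_medium the refractive
    index of water, P the incident power in watts."""
    return Q * n_medium * P_watt / c * 1e12

print(f"{trap_force_pN(0.1):.0f} pN")  # 100 mW at Q = 0.1 -> ~44 pN
```

Tens of piconewtons at 100 mW is comfortably in the range needed to stall molecular motors or stretch DNA, which is why optical tweezers map so naturally onto single-molecule biophysics.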
A straightforward use of optical traps in biological assays is to single out and immobilize individual samples against Brownian drift in liquid environments, which allows precise in vitro or in vivo detection of single-molecule signals not smeared out by the bulk average[66,158,313–315]. With continuous advances in this field, more delicate functionalities have been incorporated into optical tweezers, enabling versatile manipulation of captured molecules and the simultaneous performance of force spectroscopy, fluorescence measurements, etc. Complex molecular activities of biological samples (e.g., protein folding and unfolding, DNA supercoiling and unzipping) involve non-rigid-body movements, the study of which typically requires that the sample molecule be tethered to dielectric particles through handlers, as shown in Fig. 21(a). Controlling the sample motion can therefore be translated into applying either a linear force or a torque to the handlers held in the optical tweezers, using essentially the same techniques as those in Sec. 3. For instance, in the dual-beam scheme displayed in Fig. 21(a), a DNA molecule can be stretched by moving the two optical tweezers apart, which strains the handlers, during which both the level of extension and the linear force (the restoring force in the optical tweezer) are recorded in a calibrated system. Alternatively, the structural evolution of the sample molecule can be studied in a force clamp, where the pulling force is kept constant while extension fluctuations are allowed. The latter arrangement was adopted by Abbondanzieri et al., who successfully detected the stepwise translocation behavior of RNA polymerase during transcription [see Figs. 21(a) and 21(b)] and, more importantly, found that the stepping increment corresponds to the dimension of a single base pair.
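The force-extension data recorded in such stretching experiments are commonly fitted with the Marko–Siggia worm-like-chain interpolation; a minimal sketch (the ~50 nm dsDNA persistence length and the chosen extension are textbook-scale assumptions, not values from this review):

```python
KBT = 4.114e-21  # thermal energy at ~298 K, in joules

def wlc_force(x: float, L: float, P: float = 50e-9,
              kBT: float = KBT) -> float:
    """Marko-Siggia worm-like-chain interpolation: force (N) needed
    to hold a polymer of contour length L (m) at end-to-end
    extension x (m); P is the persistence length (~50 nm for dsDNA,
    a textbook value)."""
    r = x / L
    return (kBT / P) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# Stretching a 1-um dsDNA segment to 90% of its contour length:
f = wlc_force(0.9e-6, 1e-6)
print(f"{f * 1e12:.1f} pN")  # -> 2.1 pN
```

The steep divergence of the force as the extension approaches the contour length is what makes the extension a sensitive readout for the base-pair-scale steps discussed above.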
In 2007, an angular optical tweezer was first utilized to control and measure the torque of biological samples (DNA molecules), intermediated by a birefringent quartz cylinder captured in a linearly polarized light beam. Since the extraordinary axis of the quartz cylinder lies in the transverse plane relative to light propagation, an alignment torque (or, from the angular momentum perspective, an SAM torque) is exerted upon the cylinder and drives it to rotate; this torque is then transferred to the tethered molecule (the other end of the molecule should be attached to a stationary object, e.g., the cover glass in the sampling chamber) and can be calculated from the difference in spin angular momentum flux between the input and output light, $\tau=(\sigma_{\rm in}-\sigma_{\rm out})P/\omega$ (refer to Secs. 2.1.4 and 3.3.1). Inspired by this technique, researchers further unveiled the role of the torque generated by RNA polymerase in regulating the transcription process, with the experimental setup shown in Fig. 21(c). During transcription, RNA polymerase rotates DNA as a molecular motor, and the applied torque is reflected in the rotation of the quartz cylinder, which then couples to the helicity of the transmitted light. Synchronized measurement of the torque and extension has been carried out in either the equilibrium state or a pulsed form [Fig. 21(d)], so that transcription stalling and resumption, encoded in the magnitude, orientation, and transient evolution of the torque exerted on the DNA, can be deciphered. The conformational behavior of proteins can also be analyzed on optical tweezer platforms, which basically entails the two inverse processes of folding and unfolding. The complexity lies in the concrete reconstruction of the configuration trajectory, given the hyperdimensional energy landscape that involves multiple intermediate states.
Despite the difficulties, optical tweezers have proved to be powerful tools in unraveling the structural changes of macromolecules, and ultimately the folding and unfolding kinetics along the predefined reaction coordinate (the extension direction)[313,320,321].

Figure 21. Optical tools for analyzing biochemical molecules and cells. (a) Schematic of a dumbbell geometry formed by a DNA molecule tethered to dielectric particles held in two separate optical tweezers. While the stiffer trap (left) is responsible for stretching the DNA molecule by steering away (via an acousto-optic deflector) from the optical trap on the right, the weak trap establishes a force clamp in which the particle is held at a zero-stiffness zone offset from the trap center. (b) Recorded temporal evolution of the DNA extension exhibiting stepwise behavior. The experiment is conducted in a constant-force modality with an 18 pN assisting load. The system noise is controlled below 1 Å for high spatial resolution. (c) Schematic of an angular optical tweezer for controlling and measuring the torque in the transcription process against the upstream supercoiling (–) and downstream supercoiling (+). The quartz cylinder is aligned with its extraordinary axis parallel to the transverse plane, so that an alignment torque is exerted on it in the linearly polarized light field. (d) Torque-extension relation in a transient pulsed form. The tested RNA polymerase receives a pulsed resisting torque while transcribing. When the resisting torque is too large or lasts sufficiently long (pulse duration 5 s versus 0.5 s), transcription is deactivated, which is manifested in the extension traces. (e) Experimental setup of a plasmonic nanopore designed for optically trapping and sequencing DNA molecules. The strongly enhanced near fields at the tips of bowtie antennas provide both the anchoring sites for the molecule and the excitation signals for Raman spectroscopy of the exposed nucleotide.
Consecutive on and off states of the incident light enable stepwise translocation of the DNA molecule under the electric bias across the nanopore. (f) Illustration of a “fleezer” system in a confocal configuration. The trapping beams and the excitation beam are spatially separated, with the former capturing the particle handlers and the latter focusing on the fluorescently labeled samples. (g) Signals recorded in both the optical tweezer and the fluorescence channels. The jumps shown in the upper panel correspond to the opening of the mRNA hairpin in one-codon steps. The spikes in the lower panel indicate the binding of fluorescently labeled EF-G. (a), (b) Adapted from Ref. . (c), (d) Adapted from Ref. . (e) Adapted from Ref. . (f), (g) Adapted from Ref. .

Combining optical spectroscopy with optical tweezers is a natural idea, since their experimental setups are mutually compatible (including the light source, signal detection elements, sampling stage, etc.)[322,323]. In 2015, Belkin et al. demonstrated a hybridized platform that enables trapping, displacing, and optically characterizing DNA molecules in plasmonic nanopores, as illustrated in Fig. 21(e). While the “on” state of the light anchors the DNA to the plasmonic hot spots, surface-enhanced Raman signals, as the fingerprints of nucleotides, can be excited by the same incident trapping light and collected by a detector. The “off” state, in contrast, releases the DNA and allows it to translocate through the nanopore driven by the transmembrane electric bias. Hence, periodic modulation of the plasmonic field results in stepwise displacement of the DNA molecule with the currently exposed nucleotide being excited for Raman spectroscopy, and the sequence of the DNA can be determined after the whole molecule passes through the nanopore.
Alternatively, the trapping beam, which typically requires high power fluence for particle immobilization, can be separated from the excitation beam, so that the wavelength of the trapping light can be selected outside the absorption band of the sample molecule to avoid undesired thermal damage. Moreover, combined force-and-fluorescence measurements are possible in a “fleezer” system (an optical tweezer with fluorescence capability), which allows the mechanical and material properties to be probed simultaneously and complementarily. Figure 21(f) shows such an example, where an mRNA hairpin is tethered to polystyrene beads held in two optical traps with a ribosome attached to its end. To uncover the catalytic dynamics of the translocation factor EF-G, the factors were fluorescently labeled so that their arrival at or release from the target site generates spikes in the fluorescence channel, which, compared with the extension steps (corresponding to unwinding of the mRNA by one codon) captured in the optical tweezer channel, provides insightful information on how EF-G binding synchronizes the translation process [Fig. 21(g)].

5.2 Investigation and Test of Fundamental Physics with Optical Tweezers

5.2.1 Brownian particle velocity measurement

One of the major characteristics of Brownian motion is randomness, which originates from particles colliding with the surrounding fluid. In 1905, Albert Einstein proposed that, though random, the Brownian movement of particles follows a diffusive pattern such that the mean squared displacement (MSD) of free particles scales proportionally with time, $\langle \Delta x^2(t)\rangle = 2Dt$ (in one dimension), where $D$ denotes the diffusion constant, the same as that introduced in Sec. 2.1.5. Due to the high requirements on temporal and spatial resolution, Einstein deemed it impossible to directly measure the transient Brownian movements of particles.
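Einstein's diffusive scaling can be reproduced with a short Monte-Carlo sketch (the diffusion constant, time step, and ensemble size below are illustrative assumptions, not values from this review):

```python
import random

def msd_after(n_steps: int, dt: float, D: float,
              n_particles: int = 5000, seed: int = 0) -> float:
    """Monte-Carlo estimate of the 1D mean squared displacement of
    free Brownian particles after n_steps * dt seconds, for
    diffusion constant D (m^2/s). Each step is a Gaussian increment
    of variance 2*D*dt."""
    rng = random.Random(seed)
    sigma = (2.0 * D * dt) ** 0.5
    total = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
        total += x * x
    return total / n_particles

D = 4.0e-13          # roughly a micrometer-scale bead in water, m^2/s
t = 100 * 1e-4       # 100 steps of 0.1 ms
ratio = msd_after(100, 1e-4, D) / (2.0 * D * t)
print(ratio)         # close to 1, recovering <x^2> = 2*D*t
```

The ensemble average converges to $2Dt$ regardless of the step size, which is exactly the scale-free diffusive behavior that short-time ballistic measurements (discussed next) deviate from.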
However, this statement has been proved untrue, with the detection limits of concern being broken by increasingly mature optical tweezer platforms and high-speed photodiodes. For instance, the instantaneous velocity of a single Brownian particle can be measured by trapping it in an optical tweezer and recording the pattern of interference between the incident and scattered light[98,329]. Here, the function of the optical tweezer should be appreciated at three levels: (i) establishing a 3D harmonic potential well that confines the particle within the detection range; (ii) isolating an individual particle from particle ensembles, which ensures direct observation of the Brownian motion without averaging effects; (iii) providing signals to deduce the temporal particle displacement (refer to Sec. 2.1.5). Under these conditions, the velocity data can be acquired from the measured particle displacement, whose distribution coincides with the Maxwell–Boltzmann distribution and thus verifies the energy equipartition theorem. Deviation of the Brownian motion from the diffusive behavior has also been reported below the millisecond time scale. As suggested by the Langevin equation $m\ddot{x} = -\gamma\dot{x} + F_{\rm th}(t) + F_{\rm ext}$ ($m$, particle mass; $-\gamma\dot{x}$, friction force imposed by the surrounding fluid as a damping factor; $F_{\rm th}$, force originating from random thermal fluctuations; $F_{\rm ext}$, other external forces), there exists a ballistic regime, where the MSD of the particle is proportional to the square of time, before it transits into the diffusive regime[74,330]. In an under-damped optical trap (e.g., in a thin-air environment), since the autocorrelation time of the trap is several orders of magnitude larger than the momentum relaxation time of the particle ($\tau_p = m/\gamma$), the influence of the potential well is negligible on short time scales, allowing researchers to adopt a semi-free particle approximation. Figures 22(a) and 22(b) show the time evolution of the measured MSD of a trapped particle.
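For a free particle, the Langevin equation above yields the textbook MSD interpolation $2D[t-\tau_p(1-e^{-t/\tau_p})]$; a small numerical check of its two limiting behaviors (the values of D and τp are illustrative assumptions):

```python
import math

def msd_free(t: float, D: float, tau_p: float) -> float:
    """Free-particle MSD from the Langevin equation:
    2*D*(t - tau_p*(1 - exp(-t/tau_p))); tau_p = m/gamma is the
    momentum relaxation time. Reduces to (kBT/m)*t^2 for t << tau_p
    and to 2*D*t for t >> tau_p."""
    return 2.0 * D * (t - tau_p * (1.0 - math.exp(-t / tau_p)))

D, tau = 1.0e-12, 1.0e-4  # illustrative diffusion constant and tau_p
# Ballistic limit (t << tau_p): MSD grows as t^2, so doubling t
# quadruples the MSD (ratio -> 4):
print(msd_free(2e-7, D, tau) / msd_free(1e-7, D, tau))
# Diffusive limit (t >> tau_p): MSD grows linearly (ratio -> 2):
print(msd_free(2e-1, D, tau) / msd_free(1e-1, D, tau))
```

The crossover at $t \sim \tau_p$ is precisely the inertia-induced deviation from Einstein's theory that the under-damped optical trap makes observable.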
At early stages, i.e., when $t \ll \tau_p$, the curve deviates from the diffusive pattern and takes on a ballistic behavior due to the particle inertia neglected in Einstein’s theory [see Fig. 22(b), where the MSD exhibits a squared relation with time]. When $t \gg \tau_p$, the particle gradually enters the diffusive regime and is finally seized by the optical trap as the MSD approaches a constant value. By altering the incident beam power, the change of the trap stiffness is reflected in the converged value of the MSD, following the rule that the stiffer the trap, the smaller the allowed displacement range. Further modification of the mathematical model of Brownian motion was realized by taking the fluid inertia into consideration. Moreover, the detection of the Brownian movement can be used in reverse to calibrate the optical trap, which establishes a positive feedback loop between the two research fields.

Figure 22. Measurement of the instantaneous velocity of Brownian particles on an optical tweezer platform. (a) Measured MSD (symbols) of a trapped particle compared to Einstein’s deduction (dashed lines) and the prediction of the Langevin equation (solid lines) at different air pressures. (b) MSD of a Brownian particle at short time scales ($t \ll \tau_p$) showing ballistic behaviors. The dashed-dotted line is deduced from the ballistic assumption obeying the equation referred to by the arrow. The measurements were conducted in a vacuum chamber, where the air pressure can be adjusted, to attain the under-damped condition. Adapted from Ref. .

As a classical phenomenon of general relativity, gravitational waves are described as disturbances in the curvature of spacetime, which are generated by accelerating masses through gravitational radiation.
When a gravitational wave passes through objects, the local curvature of spacetime is modulated, causing the relative positions of and distances between objects to change at a rate corresponding to the frequency of the incoming gravitational wave. Precise detection of gravitational waves is invaluable in cosmology and astronomy, since they carry information about the early universe and the unexplored deep space that is otherwise untraceable by conventional techniques (e.g., space telescopes). However, given the astronomical distances involved, gravitational waves that reach the Earth are significantly attenuated, with the resultant strain of spacetime being vanishingly small (on the order of $10^{-21}$ or less). As a consequence, direct detection of gravitational waves requires ultrahigh force sensitivity, which was made possible only recently by the kilometer-scale Laser Interferometer Gravitational-Wave Observatory (LIGO) built exclusively for this purpose. Despite the success of LIGO, a cavity-based optical tweezer also emerges as an ideal candidate for gravitational wave detection. In 2013, Arvanitaki et al. proposed an experimental setup in which a nanoparticle could be optically levitated and trapped at the antinode of a cavity and serve as a force sensor. Upon impingement of gravitational waves, fluctuating displacements of the cavity mirrors and the trapped particle are to be expected, which result in minor deviations of the particle from its trap minimum. In this way, the effect of gravitational waves is equivalent to the introduction of an additional oscillatory driving force, whose amplitude and frequency are reflected in the phase of the detection light via optomechanical coupling. The ultrahigh sensitivity of the proposed detector relies on several factors. First, to reduce noise, the thermal contact of the particle with the environment is minimized through optical levitation and high vacuum.
Under this working condition, the trapped particle, as an oscillator (with the optical trapping force as the restoring force), could possess an extremely high mechanical Q factor, unprecedented for conventional clamped oscillators[334,335]. Second, the effective temperature of the trapped particle can be properly adjusted with feedback cooling[100,334,336]. The minimum detectable force in a harmonic oscillator is expressed as $F_{\min}=\sqrt{4kk_BT_{\rm eff}b/(\omega_0Q_{\rm eff})}$, where $k$ is the spring constant of the center-of-mass motion, $k_B$ the Boltzmann constant, $T_{\rm eff}$ the effective temperature, $b$ the measurement bandwidth, $\omega_0$ the natural frequency of the oscillator, and $Q_{\rm eff}$ the effective Q factor. Though the reduction of $T_{\rm eff}$ would degrade the Q factor by introducing additional damping, there exists a certain configuration that achieves optimum sensing precision. A cavity-based optical tweezer is a compact device compared to LIGO (with its kilometer-scale arms), while its sensitivity could appreciably surpass the latter (especially for high-frequency gravitational waves over 10 kHz), being limited mainly by thermal noise instead of photon shot noise. Moreover, recent studies show that by substituting a stacked structure for the spherical particle and adopting the Michelson interferometer configuration, the precision of the optically levitated sensor can be further improved[337,338].

5.3 Particle Assembly and Nanoprinting

In Secs. 3 and 4, we introduced basic optical manipulation techniques in both fluidic and solid environments. A straightforward application scenario for those techniques is to dynamically assemble particles into customized configurations, which entails trapping and transporting particles and finally anchoring them at predefined sites to form patterns. In fluidic domains, particles tend to drift as a result of stochastic effects. Hence, the final step, namely the particle anchoring, requires either permanent trapping in fluidic suspension or the assistance of substrates via adhesive force.
In both cases, SLMs are frequently employed to imprint time-varying holograms to establish arrays of trapping sites, where particles can be dragged along with the updated light profiles or immobilized in customized assemblies [see Figs. 7(e) and 12(c)][93,167]. The use of SLMs is compatible with multiple mechanisms of optical manipulation, including basic light–matter momentum transfer, thermophoretic effects, and the opto-thermoelectric hybridized scheme (see Sec. 3), all of which rely on the temporal–spatial modulation of the light field. Indeed, the “trapping” and “transportation” operations need not be separated. Instead, they can be integrated into one single step, where the particles are laterally localized while being subject to longitudinal propulsion as “missiles” guided by light propagation [see Figs. 11(b) and 18(a)][92,249]. The 2D trapping modality ensures pinpoint positioning, in that the launched particles do not diffuse out of the diffraction-limited light spot, and the propulsion (via the light scattering force, photophoretic force, or the opto-thermoelectric field) is usually responsible for transferring the particles onto adhesive platforms for permanent anchoring, which spares the need for a persistent input of light for particle immobilization in fluidic environments. In solid domains, the “anchoring” step is not required. The most formidable challenge is instead to overcome the particles' inclination to remain anchored, so that they can be released and transported at will. This problem can be bypassed by introducing a solid-to-liquid phase transition in the contact layer of the substrate, as previously demonstrated in Fig. 20(a). The phase transition temporarily induced by light–thermal effects effectively creates a fluidic environment, which lowers the threshold for the in-plane driving force to the same level as the optical force and allows the substrate to reversibly transition back to the adhesive state.
Alternatively, out-of-plane ejection of particles, through either intensive thermal expansion of flexible substrates [Fig. 18(b)] or excitation of surface elastic waves, provides promising scenarios for launching particles; subsequent to their detachment from the substrates, the light gradient force can control the transportation trajectory and ensure spatial accuracy. A remote and contact-free technique for nanoprinting can be realized thereafter, where particles are launched from one substrate and deposited on another with sub-100 nm accuracy. Notably, a plethora of studies have demonstrated controllable deposition of particles in solid domains via surface elastic waves. Nevertheless, they are not within the scope of this review, since the stimulus is most often an a.c. signal applied to piezoelectric materials[339–341]. More efforts should be made to uncover the relation between light pulses and acoustic mode profiles before the opto-thermoelastic force can be utilized in versatile particle assembly and nanoprinting.

5.4 Photophoretic Trapping for Volumetric Display

The aberrated focal volume of bottle beams contains alternating dark and bright regions, which can be exploited to build photophoretic traps for absorptive aerosol particles [see Sec. 3.1.5 and Fig. 11(d)]. Combined with RGB illumination and fast laser-beam scanning techniques, the aberration-based photophoretic trap can be readily converted into a 3D volumetric display, as exhibited in Fig. 23(a). Once a particle is held in the trap, it scatters the visible radiation from the RGB lasers and effectively constructs a single full-color pixel. Subsequently, scanning the trapping beam translocates the pixel in a 3D volume and can therefore produce images perceivable by human eyes, as long as the pixels can be traced at speeds beyond the requirement of the persistence of vision.
More specifically, to produce flicker-free images, the scanning speed in the demonstrative experiments reaches up to 164 mm/s, with the extra and intrinsic requirement of not losing the trapped particle. Since the pixel trajectories, or alternatively the trajectories of the beam's trapping site, can in principle cover all angles in a specified volumetric region, the proposed volumetric display is set apart from conventional holographic techniques, which inevitably exhibit image clipping by the bounding apertures. For this reason, complex images unattainable by holograms can be constructed using the photophoretic volumetric display, such as those viewable from the side [Fig. 23(b)] or interacting with obstructing objects [Fig. 23(c)]. The concept of autostereoscopy is thereby fully embodied in the optical trap display, in that 3D imaging is perceivable by the naked eye and, moreover, unrestrained by the observation angle or obstructive surroundings. Nevertheless, at the current stage, this technique is still limited by the relatively slow scanning speed and its sensitivity to air flow, especially when high-resolution or high-frame-rate video imaging is required in more versatile operational environments (e.g., outdoor display). The exploitation of parallel traps, and of light beams associated with stiffer trap profiles, could promisingly tackle the above issues and stimulate the advance of this novel technique.

Figure 23. Photophoretic trapping for volumetric display. (a) Schematic illustration of the photophoretic trap display. Individual absorptive particles can be levitated in the dark region of a bottle beam (trapping beam) and scanned to form images at a speed beyond what is required by the persistence of vision. RGB lasers are collinearly aligned with the trapping beam to illuminate the trapped particle. (b), (c) Three-dimensional images exemplifying the capability of the proposed volumetric display.
(b) The as-produced images can be viewed from arbitrary angles free of clipping. (c) “Wrap-around” images can be created surrounding a 3D-printed arm model, whose imaging effect is not affected by the obstruction of real physical objects, as it would be in conventional holograms. Adapted from Ref. .

In Sec. 4.1.2, the capability of nanosecond pulsed light to couple to the surface acoustic modes of thin absorptive materials was unfolded in great detail. One important feature of the resultant actuation is the pulse-wise locomotion in either the longitudinal or azimuthal direction, with the spatial resolution of each single step reaching the sub-nanometer scale[35,37]. Based on these observations, ultrahigh-precision machinery can be built on the already-explored fiber–plate or fiber–nanowire systems, where plasmonic structures with micrometer footprints are chosen as actuators that locomote relative to silica fibers serving as both the light waveguide and the mechanical stator[35–37,343]. By controlling the initial actuator–stator configurations (to filter certain motional degrees of freedom) and the number of pulses, the translational direction and distance (translational motion is driven by the longitudinal asymmetry of the impulsive thermal expansion), the rotation angle (rotational motion is driven by the asymmetry between the two wings of the gold plate segmented by the contact line), and the stabilized pose of the actuator (influenced by both the wing asymmetry and the gradient of thermal expansion along the contact line) can all be well adjusted, with accumulative contributions from individual pulses at sub-nanometer accuracy. Nevertheless, to date, this technique is still at the development stage, and only a few application scenarios have been put forward that fit the currently certified fiber–plate/nanowire systems. Figures 24(a)–24(c) display a possible application proposed by Lu et al., where the rotating gold plate serves as a micromirror for laser scanning.
Since the signal was collected in the far field, the advantage of high-precision mechanics was not fully realized.

Figure 24. (a) Schematic of the fiber–plate system used for laser scanning. The gold plate, as the rotor in the machinery, exhibits high reflectivity at the incident wavelength, thus functioning as a micromirror that reflects the light beam as it rotates. (b) Sequential optical images showing the laser beam deflected over time. The rotation speed of the micromirror is 0.1 rad/s. (c) Comparison between the experimentally measured and calculated beam deflection over time. The stepwise feature can be clearly seen in the experimental data. (d) Schematic of a fiber-based photonic integrated circuit with plasmonic nanowires functioning as the moving elements. (e) Basic setup of the on-chip realization of opto-thermo-mechanical actuation based on waveguide platforms. (f) Schematic of on-chip multiplexed actuation of plasmonic vehicles. (a)–(c) Adapted from Ref. . (d) Adapted from Ref. .

Needless to say, given the particularity and limitations of silica fibers, transferring the opto-thermo-mechanical mechanism from the fiber platform to arbitrary solid substrates, especially to on-chip photonic integrated circuits, is an irresistible trend. Figure 24(d) demonstrates semi-on-chip manipulation of plasmonic micro-vehicles on microfibers, which are fixed on low-index substrates. In principle, this scheme may also be reproduced on silicon-based waveguide platforms [Fig. 24(e)], which are readily compatible with nanofabrication techniques and offer a higher level of integration compared to silica fiber networks. In doing so, not only can this actuation scheme be theoretically generalized, but it would also be endowed with great practical value in building mobile and reconfigurable elements for light modulation [Fig. 24(f)], thus creating a closed loop in the form of light→thermo→mechanics→light.
Nevertheless, the challenges would be harder to tackle, considering the adhesive force associated with the significantly increased contact area, the surface roughness induced in device fabrication (note that the surface roughness of a silica fiber is at the angstrom scale), and the fiber–waveguide coupling loss.

5.6 Pulsed Laser Cleaning

Particulates of sub-micrometer dimensions account for a major source of contaminants in the semiconductor industry, deteriorating fabrication precision and introducing considerable loss to fabricated devices under working conditions[29,345]. Compared with conventional cleaning techniques (including ultrasonics, solution rinsing, high-pressure jet purging, plasma etching, etc.), pulsed laser cleaning is capable of generating sufficient particle acceleration and features a contact-free operation process, which makes it widely applied in scenarios requiring a high level of cleanliness. When illuminated with a pulsed laser, the substrate absorbs light power and experiences abrupt expansion, resulting in the excitation of surface acoustic waves that detach and propel the adhered particulates. In contrast, a CW light source cannot be used for the same purpose due to the absence of such “abruptness”. Upon irradiation with pulsed light, the generated transient acceleration of sub-micrometer-sized particulates can be enormous, sufficient to overcome the van der Waals force and sustain particle motion through the viscous atmosphere over several millimeters for collection. Note that the adhesion force, predominantly contributed by the van der Waals force, exceeds the gravitational force by more than seven orders of magnitude in the sub-micrometer regime, which is hard to surmount using other cleaning techniques. By exploiting pulsed-laser-induced phase explosion, even smaller particulates can be removed with reduced light power when a thin film of liquid (of micrometer thickness, within the heat diffusion length) is deposited onto the substrate.
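The seven-orders-of-magnitude adhesion-to-gravity ratio can be checked with a rough sphere-on-flat van der Waals estimate (the Hamaker constant, contact separation, and particle density below are assumed textbook-scale values, not numbers from this review):

```python
import math

def vdw_force(R: float, A: float = 1e-19, h: float = 0.4e-9) -> float:
    """Sphere-on-flat van der Waals adhesion, F = A*R/(6*h^2), in
    newtons. A (Hamaker constant) and h (contact separation) are
    assumed textbook-scale values."""
    return A * R / (6.0 * h ** 2)

def gravity_force(R: float, rho: float = 2200.0,
                  g: float = 9.81) -> float:
    """Weight (N) of a silica-density sphere of radius R (m)."""
    return (4.0 / 3.0) * math.pi * R ** 3 * rho * g

R = 0.1e-6  # a 100-nm-radius particulate
ratio = vdw_force(R) / gravity_force(R)
print(f"{ratio:.1e}")  # roughly 1e8: adhesion dwarfs gravity
```

Because adhesion scales as $R$ while weight scales as $R^3$, the ratio grows as the particulate shrinks, which is why sub-micrometer contaminants are so resistant to conventional cleaning and demand the enormous transient accelerations that pulsed lasers provide.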
Studies have shown that optimal cleaning efficiency is achieved when the light absorption is localized at the liquid–substrate interface. With the synergistic effect of surface acoustic waves and the transient pressure increase resulting from the explosive evaporation of the liquid film, sub-micrometer-sized particulates can be ejected from the substrate surface at even greater accelerations[29,347,348]. Improvements in cleaning effectiveness can be realized through fine tuning of the laser parameters and liquid compositions. Moreover, the plasma ejected upon pulsed laser irradiation can also be used to clean surfaces contaminated by particles, where the generated shockwaves effectively propel the contaminants, leaving behind intact and cleaned surfaces. Apart from particulate removal, pulsed laser cleaning is also applied in removing surface oxide layers on metallic workpieces and spatters from hole drilling[345,350,351].

5.7 Particle Acceleration with Pulsed Lasers

Compared to CW light, pulsed light condenses energy within short pulse durations and features high peak power, thereby possessing advantages in reaching the threshold conditions for exciting fast and intensive dynamic processes. For instance, upon pulsed light illumination, electrically neutral particles can acquire transient acceleration through laser-induced surface acoustic waves or plasma shockwaves[352,353]. The generated propulsion relies on the interaction between the pulsed laser and the target media (fluidic atmosphere, illuminated particles, substrates, etc.), and the threshold condition, which is either to overcome the adhesion force in the solid domain or to ionize the media via avalanche processes[354,355], requires that the input light energy be sufficiently condensed in both time and space. Typically, both schemes demand high peak powers, and the corresponding transient velocities of the particles can reach considerable scales[29,353,355].
Accelerating particles to the relativistic regime, which is of great significance in the field of high-energy physics, can also be realized with pulsed lasers. First proposed in 1979, the concept of the laser plasma accelerator (LPA) elegantly utilizes the plasma wake generated by a sudden burst of electromagnetic energy, establishing ultrahigh acceleration gradients () to accelerate fundamental charged particles (e.g., electrons) to near light speed[356,357]. Instead of harnessing light-induced transient heat, this method exploits the ultrahigh electric field embedded in the longitudinal plasma wake to drive the motion of non-neutral particles. Note that while conventional particle accelerators exploit radio-frequency (RF) electric fields in metallic chambers, they inevitably suffer from the metal breakdown threshold, which restrains the acceptable accelerating gradient to the level. The acceleration structure in an LPA, however, is composed of already-broken-down plasma capable of supporting an accelerating gradient at the scale with a plasma density of [358,359]. Given that the energy gain of accelerated particles is the product of the accelerating gradient and the traveling distance, conventional particle accelerators require characteristic dimensions of several tens of kilometers in circumference (e.g., the Large Hadron Collider), while an LPA, in comparison, can realize its acceleration energy gain within a channel of only a few millimeters in length[357,360]. Though questions still linger regarding the operation of LPAs, such as electron injection, energy spread, and the upper limits of feasible acceleration, we can envision the prospects of tabletop particle accelerators, compact X-ray free-electron lasers (X-FELs), portable radiotherapy stages, etc., whose application would deepen our understanding of fundamental physics and bring convenience to people's lives at lower cost[357,358].
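The length argument follows directly from energy gain = gradient x distance. The sketch below uses gradient figures that are typical order-of-magnitude values from the accelerator literature, assumed here for illustration rather than quoted from this review:

```python
# Back-of-envelope comparison of accelerator lengths for a fixed energy
# gain: required length scales inversely with the accelerating gradient.
rf_gradient = 100e6      # ~100 MV/m, RF cavities limited by metal breakdown
lpa_gradient = 100e9     # ~100 GV/m sustained by a plasma wake

target_gain_ev = 10e9    # illustrative 10 GeV energy gain target

rf_length = target_gain_ev / rf_gradient     # metres of RF structure
lpa_length = target_gain_ev / lpa_gradient   # metres of plasma channel

print(f"RF structure length: {rf_length:.0f} m")
print(f"LPA channel length:  {lpa_length * 100:.0f} cm")
```

The three-orders-of-magnitude gradient advantage translates one-to-one into a three-orders-of-magnitude reduction in accelerator footprint, which is the core promise of the LPA concept.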
As previously stated, to achieve substantial particle acceleration, extremely high energy must be condensed into ultrashort pulse durations. Thanks to chirped pulse amplification (CPA) technology, compact fs laser sources with intense power output up to the petawatt (10^15 W) level have drastically boosted research on LPAs. Indeed, in 2018, the Nobel Prize in Physics was awarded jointly for the invention of optical tweezers and for the technological breakthrough of CPA, which to a certain extent demonstrates the significance of using light as a tool for more precise and more robust optical manipulation.

6 Conclusions and Perspectives

Possessing the advantages of remote, contact-free operation and high spatial resolution, optical manipulation has gained tremendous attention ever since the initial proposition of optical tweezers. Over the past few decades, optical approaches, represented especially by optical tweezer platforms, have provided powerful tools that satisfy the growing demand for exploration of the micro/nano world, driving advances in various disciplines from fundamental physics to real-life technologies. In this review, we have demonstrated a wide range of optical manipulation techniques adopting different mechanisms, specified for various operational scenarios. Humbled by the voluminous literature in this field, we have selected a particular perspective rarely reviewed before: comparing the implementation of optical manipulation in the fluid and solid domains. In fluidic environments, the main task is to counter the Brownian diffusion of tiny objects and impose regular and programmable motion patterns on them.
Considering both the scale of the dominant forces and the characteristics of the operational environments, two approaches can be taken: (1) directly exerting optical force/torque upon the target objects by interfacing with the momentum channel of light; (2) indirectly coercing the target objects into motion via hydrodynamic effects by interfacing with the energy channel of light. In extending optical manipulation from the fluid to the solid domain, the major challenge becomes the adhesive force, which stifles stochastic behaviors while also overwhelming the optical force/torque. Aside from the pulsed optical force, the scheme of direct momentum transfer from light to matter loses its effectiveness. Resorting instead to the energy channel, the transient light–thermal effects and the associated light-induced acoustic waves or solid-to-liquid phase transitions provide alternatives to overcome or bypass the adhesion forces. In particular, the internal-force-driven mechanism, whether in a semi-steady state or in the form of acoustic waves, bears great significance in enabling more versatile and multimode manipulation in highly adhesive regimes. Despite our attempt to collect as much of the historically important and emerging research as possible, we could not cover the relevant work exhaustively. Still, we have seized upon several directions in this field that we evaluate as burgeoning or as bearing the potential to become significant in the future, and as a complement to the main text, we summarize them below.

6.1 Optical Manipulation Using Pulsed Light

Ever since the first successful trials of optical tweezers, CW light has been chosen as the light source for optical manipulation, which, despite the neatness of the physics it entails, has excluded myriad interesting effects associated only with pulsed lasers.
With temporally compressed energy within an ultrashort time span, pulsed lasers as the optical source can bring about high-peak-value optical forces, nonlinear optical effects, impulsive physical dynamics, etc., extending the capability of optical manipulation to realms hardly accessible with CW light alone.

Pulsed optical force: as discussed in Sec. 4.1.1, pulsed lasers can generate tremendous peak optical forces, which can surpass the adhesive force with moderate single-pulse energy and be used to release stuck particles from a solid substrate[241,363]. In addition, the giant magnitude of the transient scattering force can be harnessed for pulse-wise propulsion of suspended particles. In fluidic environments, high-repetition-rate ultrafast lasers can establish stable particle trapping using the same apparatus as CW optical tweezers.

Nonlinear effects: due to the high peak power of pulsed light, micro/nano particles in ultrafast light fields can easily enter the nonlinear regime. For instance, by incorporating femtosecond lasers into optical tweezer platforms, in situ study of two-photon photoluminescence or second-harmonic generation becomes possible, where the ultrafast light source assumes the dual responsibility of both trapping the samples and exciting the fluorescence/high-order harmonics[246,322,365]. In some situations, the nonlinear terms in the polarizability of target objects can induce unconventional phenomena: the use of femtosecond lasers can change the landscape of the potential well by splitting the initial single minimum into multiple equivalent trapping sites[366,367], or induce abnormal ejection of particles along directions related to the beam polarization[245,368].
Moreover, at near-resonance conditions, the associated surge in polarizability can remarkably enhance the trapping stability severalfold, which provides a feasible route to further improve the spatial resolution of optical tweezers to the deep-sub-wavelength or even atomic level[246,369].

Impulsive physical dynamics: pulsed light induces transient light–thermal effects in light-absorptive media, thereby endowing other auxiliary physical fields with this impulsive character. Nanosecond lasers can effectively couple to the heat and acoustic channels, the latter capable of countering the adhesive forces. Ultrafast lasers, on the other hand, can initiate intense physical dynamics with minimized heating and cooling windows, localized heat-affected regions, and sometimes non-thermal transient ablation of materials when the pulse duration is shorter than the electron–phonon coupling time. This enables material modification, high-precision nano machining[311,371], well-directed mass diffusion[265,304], and elaborate ablation of skin-layer atoms through non-thermal unbonding[265,266], which might be classified as optical manipulation in a broad sense and has been studied extensively in solid systems.

6.2 Optical Manipulation via Multiphysics Coupling

Optical manipulation relies on harnessing either the momentum or the energy of light. Considering that photons are adequate energy carriers but poor momentum carriers, as determined by the dispersion relation (the large speed of light), interfacing with the energy channel of light is promising for inducing derivative forces that are several orders of magnitude larger than the optical force, in which multiphysics coupling is indispensable. Typically, auxiliary physical fields, such as the flow field in the fluidic domain or the acoustic field in the solid domain, are byproducts of light illumination mediated through light–thermal effects, i.e., the heat field.
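The "poor momentum carrier" point can be made quantitative in one line: a beam of power P delivers energy at rate P but momentum at rate only P/c. The sketch below uses an illustrative 1 W beam:

```python
# Maximum direct radiation-pressure force from a light beam: F = P/c
# for a fully absorbing target (2P/c for a perfect mirror). Values
# here are illustrative, not taken from the review.
c = 3.0e8           # speed of light (m/s)
power = 1.0         # beam power (W)

f_direct = power / c   # upper bound on the direct optical force (N)
print(f"{power:.0f} W of light exerts at most ~{f_direct:.1e} N directly")
```

The same watt, absorbed as heat, can drive convective flows, thermophoresis, or acoustic waves whose derivative forces far exceed this nanonewton-scale bound, which is the quantitative motivation for interfacing the energy channel.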
Alternatively, physical fields such as the electric field can be pre-assigned in the operational environment. This brings about dielectrophoresis in nonuniform electric fields (induced by light patterning of photoconductive layers), frequently exploited as an electrokinetic manipulation method in optoelectronic tweezers or machinery, or, when coexisting with the heat field, generates directed mass flow of the fluid (e.g., ETP flow) and ultimately induces motion of the suspended particles. Thermophoretic forces, opto-thermoelectric forces, and opto-thermoelastic deformation effects are all synergistic results of multiple physical fields, possessing merits not shared by the light-momentum-based optical force: large magnitude (compared to the optical force), long working distances, free choice of actuated particles, and the capability of inducing solid-domain locomotion. To enrich optical manipulation techniques as well as the achievable actuation modes, more complex and exotic cross-disciplinary schemes should be considered, such as electron acceleration by laser plasma wakefields (Sec. 5.7), particle ejection via laser-induced shockwaves[349,372], aquatic robotics powered by bubble expansion and photoacoustic streaming[3,373–376], migration of fluidic species due to light-induced electro-osmosis or light-induced Marangoni flow, cell concentration based on the synergy between the optical generation and acoustic activation of bubbles, out-of-plane rotation and combined multimode manipulation of spherically symmetric particles by delicately managing the interplay among the multiphysics-induced forces and torques[379,380], or the propulsion of graphene sponges through electron emission. Care should be taken when dealing with such complex situations so as to unmistakably recognize the truly dominant mechanism in the multiphysics scenario[382,383].
6.3 Optical Manipulation in Highly Adhesive Environments

Compared to optical manipulation in fluidic environments, studies in the solid domain are relatively poor in variety and versatility, and the theoretical framework is far from well established. An urgent need, which also bears great significance, is to acquire a higher level of controllability and realize multi-degree-of-freedom light-induced locomotion in adhesive environments, especially at the microscale. For one thing, free from Brownian diffusion, the assembled patterns of micro/nanoparticles can be self-sustained even after withdrawal of the light source, enabling versatile micro/nano fabrication with high precision[249,298]. For another, from a more general perspective, the all-optical approach is trending versus its all-electric counterpart, and one promising direction is to construct MOMS as opposed to MEMS, which has progressively matured in recent years. In effect, the on-chip platforms of MEMS can be readily adapted for MOMS, except that the latter must find proper driving mechanisms to underpin solid-domain mechanical locomotion. Among the existing techniques discussed in this review, the photothermally driven cantilever beam or the origami-inspired structures could be manufactured at the microscale and function as the building elements of the desired machinery. Another feasible route is to reproduce electro-acoustic coupling in the optical regime. On piezoelectric substrates, the acoustic fields can be patterned by applying AC signals to opposing arrays of interdigital transducers, with the electric signal adjustable independently on each electrode[339,385]. In contrast, to induce user-desired acoustic fields in the opto-thermoelastic coupling scheme, multiplexed pulsed laser beams should be employed to illuminate one absorptive substrate (or micrometer-sized actuators) in parallel, with the laser pulse parameters dynamically tunable.
In addition, the substrate or the actuators can be patterned beforehand to support various modes of elastic waves. Still, more effort is needed to solidify the theoretical framework, i.e., to map the relation between the optical fields and the subsequent thermal and acoustic fields, before the above practices can be carried out.

6.4 Optical Manipulation on Integrated Platforms

A growing trend in optical manipulation is to improve the integration level of devices, that is, to transfer the experimental setup from bulk free-space optics to planar platforms with minimized footprints, such as metasurfaces and on-chip waveguides, or, for in vivo practice, to optical fibers. Instead of refractive optics, these newly developed techniques, when applied to optical-force-based manipulation, rely largely on evanescent fields, wavefront shaping, or nanophotonics (e.g., plasmonics or high-Q dielectric nanoresonators) to condense the incident light within diffraction-limited or even sub-wavelength dimensions, thus eliminating the need for high-NA objectives and minimizing the device footprint. Benefiting from modern nanofabrication techniques, these devices are portable, autonomous, integrable, and able to interface with other existing technologies, including microfluidics and endoscopes[151,387], which coincides with the general quest for higher versatility and practicality. On the other hand, for optical manipulation that involves multiphysics coupling, considering that the auxiliary physical fields are mediated by photothermal effects, the use of metasurfaces is advantageous since they offer huge parameter spaces for optimizing the light–thermal conversion efficiency of the substrate.
For instance, nearly perfect light absorptance can be achieved with metamaterial absorbers by delicately engineering the geometry and material composition along both the thickness and transverse dimensions[130,388–392], which could in principle improve the power efficiency of optical manipulation techniques that require heat generation, e.g., opto-thermoelectric and opto-thermoelectrohydrodynamic tweezers. Solid-domain optical manipulation, though a latecomer, should follow the same trend toward becoming more compact in volume and better adapted to concrete application scenarios. Specifically, all-optical modulation could be established on-chip based on the opto-thermoelastic wave actuation mechanism, where microplates or plasmonic nanowires could function as mobile and reconfigurable mechanical elements controlled by input light pulses delivered through evanescent-field waveguides[36,37]. All in all, optical manipulation has provided powerful tools for scientific investigation of the micro world. We envision that the corresponding research will continue to gain momentum at the intersections of electromagnetism with fundamental physics and biology. Meanwhile, we should use our imagination and enrich our knowledge base to venture out of the comfort zone and extend the capability of optical manipulation beyond conventional scenarios.

Acknowledgment. This work was supported by the National Natural Science Foundation of China (Nos. 61927820, 61905201, and 62275221). The authors declare no competing financial interest.

J. Kepler, Ad Vitellionem Paralipomena (1968).
J. D. Jackson, Classical Electrodynamics (1999).
Drama in Literacy

For our English learning, we are focusing on a whole-school book called Journey. This is a wordless picture book, and a lot happens in it. We used our drama skills to sequence the events that happen in the book. We had to make sure we used strong voices and expression to convey what the character was feeling. We also had to improvise who played whom in each scene.
There are many reasons why nails can turn yellow. Some of the most common causes include: - Fungal infection: This is the most common cause of yellow nails, especially in toenails. Fungal infections can cause the nails to become thickened, brittle, and yellow. - Nail polish: Dark-colored nail polish can stain the nails yellow over time, especially if it is not removed properly. - Smoking: Smoking can stain the nails yellow and also damage the nail bed, making them more susceptible to infection. - Trauma: Injury to the nail, such as hitting it on something, can cause it to turn yellow. - Certain medications: Some medications, such as certain antibiotics, cancer drugs, and antidepressants, can cause yellow nails as a side effect. - Medical conditions: Certain medical conditions, such as psoriasis, thyroid disease, and kidney disease, can also cause yellow nails. If you have yellow nails, it is important to see a doctor or dermatologist to determine the underlying cause and get the appropriate treatment. If the cause is a fungal infection, your doctor may prescribe antifungal medication. If the cause is nail polish, you can try using a base coat to protect your nails from staining. If the cause is smoking, quitting is the best way to prevent your nails from turning yellow. Here are some tips to help prevent yellow nails: - Avoid wearing dark-colored nail polish for long periods of time. - Remove nail polish thoroughly with a good-quality nail polish remover. - Keep your nails clean and dry. - Wear gloves when using harsh chemicals or cleaning products. - Avoid biting your nails. - See a doctor or dermatologist for any underlying medical conditions that may be causing your nails to turn yellow.
The European Union's General Data Protection Regulation (GDPR) was first drafted before blockchain became a widely adopted technology utilised across nearly every sector of the economy. One of the attractions of a blockchain platform is the immutability of data recorded on it, which seems to conflict with the right to erasure under the GDPR, which allows an individual to have their personal information deleted. How can this tension be resolved?

The GDPR – and the right to erasure

Although EU law, the GDPR applies to New Zealand businesses that process personal data with an office in the EU and, more broadly, to New Zealand businesses that process personal data of individuals residing in the EU in certain circumstances. The GDPR's right to erasure provides that individuals have the right to have their personal data erased in certain circumstances, for example, where the individual has withdrawn their consent to have their data processed. In contrast to the GDPR, the New Zealand Privacy Act 1993 (Privacy Act) does not currently contain an express right for users to require the deletion of their personal data. However, this may change. On 20 March 2018 the Minister of Justice introduced a Bill amending the Privacy Act, which is expected to come into force on 1 July 2019. The current form of the Bill includes a number of additional privacy requirements, such as mandatory breach reporting, but does not currently contain many of the additional requirements set out in the GDPR, including a right to erasure. The Bill may yet undergo significant changes before enactment, and the Privacy Commissioner has advocated for a right to erasure to be included. So, how might this right to erasure be reconciled with the use of blockchains, which store personal data, given that once stored, information cannot be deleted?

Is encryption the answer?

Cryptography enables you to store information so that it cannot be read by anyone except the intended recipient.
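The "erasure by destroying the key" idea that this article goes on to discuss can be sketched in toy form: encrypt the personal data, store only the ciphertext on the immutable ledger, and later "erase" it by discarding the key. The hash-based XOR keystream below is a teaching sketch only, not production cryptography (a real system would use a vetted scheme such as AES-GCM from an audited library), and the record contents are hypothetical:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key+nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
record = b"name=Jane Doe; email=jane@example.com"   # hypothetical personal data
ciphertext = encrypt(key, nonce, record)            # only this goes on-chain

assert encrypt(key, nonce, ciphertext) == record    # key holder can still read it
key = None  # "crypto-shredding": without the key, the ciphertext is unreadable
```

Once the key is destroyed, the on-chain ciphertext persists but, assuming the cipher remains unbroken, the personal data it encodes is no longer recoverable, which is the behaviour the article argues may satisfy the right to erasure.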
Public key cryptography uses a pair of keys: a public key, which encrypts data, and a corresponding private (or secret) key for decryption. You publish your public key to the world while keeping your private key secret, so anyone with a copy of your public key can encrypt information that only you can read. Just like door locks, there are different forms of encryption, of differing strengths, but the strongest forms are, for all practical purposes, unbreakable. Could personal data stored on a blockchain be effectively 'deleted' by encrypting it and then destroying the private key so it can never be read? There is currently no firmly established answer, but there are reasons to be hopeful that this would be an acceptable solution:
- It would be ironic for the promise of blockchain platforms to be thwarted by the GDPR. One of the GDPR's central aims is to protect individuals in relation to the processing of their data. That is also the attraction of blockchain platforms. It is consistent with the policies behind the GDPR to find solutions that allow the development of blockchain platforms which store personal data.
- The GDPR is vague as to when exactly personal data is "erased". The wording is sufficiently wide for destroying a private key to meet the specified requirements, provided the level of encryption is sufficiently robust to ensure the personal data, once encrypted, cannot subsequently be rendered intelligible without reference to the destroyed private key.
- The concept of encryption is already recognised in the GDPR, which recommends it as a security and personal data protection method. It is a natural extension for it to be accepted in the context of the right to erasure.

These, then, are some reasons to expect the use of encryption techniques to allow blockchain platforms to both store personal data and comply with any right to erasure. However, we should sound a few notes of caution:
- Much will depend on the details.
To allow for any right to erasure, personal data stored on a blockchain would have to be strongly encrypted, and the private key would have to be permanently deleted or otherwise made inaccessible to others. These are practical issues which will have to be carefully worked out.
- As strong cryptography makes the job of intelligence agencies more difficult, some countries have enacted laws or regulations restricting or simply banning the non-official use of strong cryptography. Encryption may therefore not be a viable solution in all jurisdictions.
- In theory, any type of encryption can be broken given enough time, energy and processing power. What is considered secure today may not be secure in the future. Merely encrypted data is therefore at risk, and working out the nature and extent of that risk will be an important part of the discussion.

This article was first published by CIO New Zealand.

This article is intended only to provide a summary of the subject covered. It does not purport to be comprehensive or to provide legal advice. No person should act in reliance on any statement contained in this publication without first obtaining specific professional advice. If you require any advice or further information on the subject matter of this newsletter, please contact the partner/solicitor in the firm who normally advises you, or alternatively contact one of the partners listed below.
This paper attempts to reconstruct the possible reasoning process that led the great Indian mathematician Brahmagupta in 628 A.D. to the formulation of two controversial rules for arithmetic involving the number zero; rules which contradict modern arithmetic principles. Is it possible to explain these rules in some logical manner? This paper outlines a possible explanation of the issue based on similar reasoning. One may ask, why is the concept of zero so important? "From counting to calculating, from estimating the odds to knowing exactly . . . all of their parts swing on the smallest of pivots, zero" (Kaplan). Today's technology would simply be impossible without it, from the smallest electronic device to space technology, engineering, mathematics and physics. If it were possible to erase the existence of zero from the annals of human achievement, we would be thrown back into ancient times. Humanity owes a great debt of gratitude to the original inventor of zero, Brahmagupta, as well as to Indian culture.
Clinical research is the scientific study of the effects, risks, efficacy and benefits of a medicinal product. These studies are carried out prior to the release of the medicine on the market; trials are undertaken at various stages, and studies are conducted after the launch of a new product to monitor safety and side effects during large-scale use. A Clinical Research Associate (CRA) is a professional who monitors the administration and progress of a clinical trial (pharmaceuticals, biologics, or devices) on behalf of a sponsor. A clinical trial is a scientific study of the effects, risks and benefits of a medicinal product, including new drug substances and currently marketed drugs. A CRA might also be called a clinical research (or trials) monitor, executive, scientist or coordinator, depending on the company.

Nature of Work

To begin with, the individual works for the investigator or a university/academic institution. He/she handles most of the administrative responsibilities of a clinical trial, is the liaison between the clinical trial investigative site and the sponsor of the clinical trial, and reviews all data and records before a CRA's visit. The title can be research nurse, trial coordinator, study coordinator, research coordinator, or clinical coordinator. Ideally speaking, clinical research branches off into various categories at the entry level. The most common entry-level position is that of a Clinical Research Associate (CRA). The role of a CRA is varied:
- They are key participants in the design, implementation and monitoring of clinical trials.
- They shoulder the responsibility of planning and implementing all activities required to conduct and monitor complex clinical trials and ensure that Good Clinical Practices are followed.
- They assist in the preparation of presentations and manuscripts for scientific meetings and technical journals.
- They attend scientific/professional meetings and training courses as appropriate.
A clinical researcher studies issues and concerns related to mental health and wellness. Clinical researchers have a wide array of goals: understanding the etiology of disease and the disease process, identifying and evaluating potential treatments and their outcomes, and studying policies related to the delivery of services. The field combines medical practice and surgery on one hand with pharmacy and research on the other, so the candidate should be mentally prepared to put in long working hours with compassion and a humane attitude. A B.Sc. degree is a must to enter this field. Ideally, the industry looks for science graduates from pharmacy, medicine, life science and bioscience; within bioscience there are further fields such as botany, zoology, biochemistry and genetics. Anybody who fulfils these criteria can join the industry. Those with nursing degrees or nursing experience, life science degrees, clinical research coordinators, laboratory technicians, clinical research associates, medical research associates, clinical scientists, members of an Institutional Review Board, investigators, investigator clinical trial site staff, study coordinators and pharmacists can benefit from this course, as the online training program gives an overview of conducting clinical trials, the drug development process, and human safety protection. No age is a barrier to pursuing clinical research, though the age range allowed for training in clinical research depends on a number of factors, with safety always first in mind. Almost all institutes imparting training in clinical research conduct courses such as diplomas and postgraduate programs; PhD facilities exist at a select few. The CRCDM six-month weekend course at Pune University is useful for:
Doctors – Principal Investigator, Co-Investigator, Medical Advisor, Drug Developer, Regulatory Affairs Manager, Clinical Research Physician.
Pharmacists, life science graduates, science postgraduates in biochemistry – Medical Writer, Clinical Research Associate, Site Coordinator, Clinical Research Manager, Drug Development Associate, Biostatistician, Quality Assurance.
Management graduates (MBA) – Business Development, Clinical Project Management, Clinical Research Management, Regulatory Affairs Management.
IT professionals, biostatisticians, engineers, postgraduates in maths, applied maths, operational research, statistics – Clinical Data Manager, Drug Development Associate.

After completion of this course, participants should be able to:
- Have a better understanding of Good Clinical Practice and Standard Operating Procedures for clinical research and clinical data management.
- Contribute more effectively in their profession, whether in pharmaceutical/biopharmaceutical companies in drug development, in CROs offering clinical research and clinical data management services, in the bio-IT life science industry, in academic research institutions, or as clinical investigators at hospitals/medical sites conducting trials on subjects.
- Support the overall clinical trial process electronically by implementing Electronic Data Capture (EDC) systems and project monitoring.
- Participate in the design, conduct and management of global clinical trials conducted at multicentric sites in India and overseas.
- Have an understanding of evolving regulatory process standards and ICH GCP practices in the conduct of different therapeutic trials, for preparing the submissions made to regulatory authorities when seeking market authorization in India, the US, the EU, Japan, etc.

Selection of Institute and Courses

It is an individual's call to select a course depending on socio-economic factors and personal vision. There are a number of institutes in India. The right choice would be one backed by pharmaceutical organizations for clinical trials, sponsorships in various forms, and funds to support a long-term commitment.
Colleges, Institutions and Universities

Well-known institutes include Bombay College of Pharmacy, the Academy for Clinical Excellence (Mumbai) and the Institute of Clinical Research (Delhi). For a more focused search across the various topics within clinical research, an online directory of institutes in India and abroad is available. This is a vast field; once you have chosen a precise area of research, disease or disorder, you can find a suitable organization.

A series of workshops was designed by the Bioinformatics Centre, University of Pune, together with Synergy Network (I) Pvt. Ltd. Renowned experts from industry and academia were invited for workshop and curriculum development meetings. Dr R. A. Mashelkar (DG, CSIR), Dr N. K. Ganguly (DG, ICMR), Dr Vasantha Muthuswamy (SDDG, ICMR) and Dr M. K. Bhan (DBT) have appreciated the past three workshops on clinical research and clinical data management and the syllabus for certificate courses developed by the Bioinformatics Centre, University of Pune.

The following institutes charge around Rs 60,000–Rs 70,000 for the course:
- Department of Bioinformatics, University of Pune, Pune
- H.V. Desai Eye Hospital, Mohammad Wadi, Hadapsar, Pune-28
- Bioinnovat Research Services Pvt. Ltd., The Elements, 465 Phase V, Udyog Vihar Ind. Area, Gurgaon 122015
- Sankara Nethralaya, 18, College Road, Chennai-600006
- T John College, 88/1 Gottigere, Bannerghatta Road, Bangalore 560083
- L.V. Prasad Eye Hospital, Hyderabad
- L.V. Prasad Eye Institute, Patia, Bhubaneswar

There are various growth opportunities for those who are willing to learn. For those interested in serious research work, a PhD is the ideal path. Postgraduate degrees and diplomas offered by various colleges also come in handy for enhancing career prospects, as does specialization in branches of pharmacy, life sciences or biochemistry.

The CRA (Clinical Research Associate) will work and interact with internal/sponsor company personnel working in clinical trials, e.g.
Clinical Study Managers, other CRAs, Drug Safety, Regulatory Affairs, Quality Assurance, Medical Writers, Statisticians, Data Management, etc. External interactions can include people from the FDA, contract research organizations, hospital pharmacy departments, and study site staff, particularly the Investigator and Study Coordinator. This means a good amount of exposure to diverse authorities.

Career prospects include a professional career in the clinical research industry as a clinical investigator or site coordinator at a hospital conducting clinical investigations, or at a CRO (Clinical Research Organization). Jobs are also available in the pharmaceutical industry, in drug development, medical writing and biostatistics, or as a manager of clinical projects, clinical research business development, clinical operations, data management, regulatory affairs or auditing of clinical trials.

There is high demand for trained professionals in this field, and the pay package is impressive even at the entry level. Freshers can expect a pay packet of around three lakhs or more per annum; with a master's degree backing your qualifications, the amount almost doubles. Clinical research is an industry where experience counts: the longer you are in the field, the higher the salary you can expect.

India is the second largest pharmaceutical market in Asia, growing by more than nine per cent annually. According to one report, there are more than 50,000 jobs in clinical research, and pharmaceutical and biotechnology companies and contract research organizations are always seeking new clinical investigators to help study new drugs and devices. These organizations (called sponsors) provide grant funds to physicians and institutions that more than cover the cost of conducting the clinical studies. In addition to financial incentives, physicians participate in clinical studies so that they can publish their study results and become better known in their field.
Because only one in five drugs tested ever becomes available to the public, it is important to conduct and support many different trials at the same time, and specialized manpower is in constant demand for this. A well-known research organization recently reviewed the job positions in this avenue, which include: Business Development in Genomics, Product Manager, Quality Control, Research Scientist in Bioinformatics, Clinical Data/Archives Manager, Project Manager, Regulatory Affairs, Sales, Biostatistician, Clinical Research Associate, Marketing/PR, Proteomics Research Associate, Technician, Skilled Trades, Medical Writing, Technologist, Patents/IP positions, Clinical Research Coordinator, Technology Transfer and Clinical Research Investigator. The IBPA (International Biopharmaceutical Association) newsletters deliver up-to-date information on upcoming events in the biopharmaceutical industry around the world.

Information compiled from the websites of Bioinformatics Pune, Clinical Research Online, the Institute of Clinical Research, and Clinical Research Training in India – Institute of Clinical Research (India), http://www.icriindia.com/
Trout for All Seasons

Trout may be the most highly regarded of all gamefish. Brook trout, rainbows and lake trout are the major species found in the Boundary Waters and Quetico, and anglers travel from all over the country to fish them in the area's cold, deep waters. Brookies and rainbows are often found in the same general locations. Smaller lakes with rocky bottoms are the best choice; these fish will not be found in warmer, shallower dark-bottomed lakes or bays. Rainbows are also often found in connecting streams with swift, cold waters. Lake trout grow larger than either brookies or rainbow trout, and spend much of the year in the deeper waters of large lakes that have populations of cisco and whitefish for forage.

Both brook trout and lake trout have light spots on a dark background, while rainbow trout have dark spots on a light background. It's easy to distinguish between brookies and lakers, however. Brook trout are generally smaller than lake trout, and also more colorful; their spots are cream- to tan-colored on an olive body, and they also have worm-like markings on their sides. Lake trout have deeply forked tails, while the tail of a brook trout is squared. Lakers tend to be more greenish-gray, with white spots, so they appear less colorful than brook trout.

More than one Boundary Waters angler has hooked a small lake trout near shore and thought at first that it was a northern pike. In the water, small lakers and northern pike look similar because both are dark with light spots. But the resemblance ends there, and once the fish is in hand, it is easy to distinguish between the two. A pike's head features a duck-billed shape and rows of razor-sharp teeth; the lake trout's mouth is more blunt in front. In addition, the trout's dorsal fin is midway between head and tail, while on a pike, it is much closer to the tail.

Rainbow trout get their name from the iridescent pinkish-to-blueish rainbow band along their flank.
Like brown trout, they have black spots; however, brown trout are brownish in color, not silvery like rainbows. The spots on brown trout have white rings around them, and browns also have orange spots on occasion. Tails on rainbow trout are covered with black spots; tails on brown trout have few or no spots.

Fly fishing is particularly effective for brook and rainbow trout, and can also take lake trout in spring and fall, when the fish are up in the shallows. Streamers in white or yellow are effective; so are dry flies in neutral colors. During a hatch, however, most fly anglers switch to nymphs and emergers; down-wing patterns are also effective during a hatch. Many fly anglers spend years learning to identify every bug in the area, and attempt to match them precisely; others feel that it is more important to simply present a fly in the appropriate color and general size than to painstakingly match the hatch.

Spinners and Spoons

For anglers using spinning gear, the straight-shaft spinner is probably the lure of choice for brookies and rainbows. Try casting a small Mepps or Panther Martin spinner in orange, brown, or yellow from shore or from the canoe; if the trout are holding deeper than a foot below the surface, count the spinner down until it reaches a suitable depth before beginning the retrieve. Small spoons and small minnow plugs in silver, silver-blue, or gold-fluorescent orange also work well.

Lake trout are, as they say, a "whole different kettle of fish." In early spring and fall, lakers are in shallow water and can be caught with the same tactics and baits used for brookies and rainbows; a white jig tipped with a minnow or nightcrawler is also productive for shallow lake trout. But as the water warms in the early summer, lake trout head for the depths. Midsummer lake trout anglers drop one- to two-ounce jigs tipped with dead ciscoes into waters as deep as 100 feet . . . some report going even deeper.
At that depth, light-colored lures are the best choice: yellow, white, bright chartreuse, or even phosphorescent colors. Typically, lake trout hang right on the bottom, although they will also suspend underneath schools of ciscoes. If you're not catching lakers on the bottom, try moving the bait up until you have success. A portable depth finder is extremely helpful in locating schools of ciscoes.

The Sutton spoon (or any other ultralight "flutter spoon") is a time-honored lure for lake trout. Most anglers use a heavy bottom-walking sinker or bead-chain weight well ahead of the lure to get down to the appropriate depth. A gentle breeze helps push the canoe or boat across productive waters. Another proven technique is vertical jigging with a jigging spoon, or with a vibrating blade like a Sonar. Drop the lure all the way to the bottom and jig it a number of times, with pauses between jigging; then reel in line and jig again at that depth. Continue until you have covered the entire column of water from bottom to top. Lake trout are very mobile, and will follow a lure up if they are interested; many anglers are surprised by a strike almost at the surface when they are reeling in for another cast.

Landing a Lake Trout

Fighting a lake trout caught in deep waters is one of freshwater angling's biggest thrills. At first, the trout may sulk on the bottom, refusing to be moved; you may think you have a snag, but keep in mind that a snag in 80 feet is unlikely! Eventually, the trout decides to move, and sometimes swims so rapidly toward the surface that it removes the tension on the line and throws the hook. If the trout remains hooked, it will usually make a big run for the bottom once it sees the surface (and you); get ready for some fast back-reeling if this happens! Back-reeling, by the way, is a better technique than allowing the fish to pull out drag; your line will not get all kinked up as it does when the fish pulls out drag. It's also fast-paced and very exciting.
Unlike many fish, trout do not need to be scaled or skinned before cooking, although you may wish to skin your fillets before panfrying if you are using a batter or coating. Trout make excellent table fare, and are particularly suited to cooking in foil packets over a campfire. Small cleaned trout can be skewered lengthwise with a stick and roasted over the campfire; enjoy them right off the stick, rather like corn-on-the-cob! Some trout taken in the BWCAW and Quetico have white meat, while others, even of the same species and in the same water, have deep orange meat.

Article copyright Teresa Marrone; all rights reserved. May not be reproduced without permission.
Recently updated on November 20th, 2023 at 09:52 am

In this pandemic, it has become crucial to wear a face mask to protect yourself and others. Masks are one of the most powerful tools for controlling the spread of the coronavirus: covering your nose and mouth protects you from the spread of the virus, making the face mask one of the most effective products to consider in this pandemic. Face masks come in different types, and several are well suited to this situation.

Benefits of using Face Masks:

There are several reasons to wear a mask in this pandemic, wherever you go.

Masks protect other people from getting infected: The coronavirus spreads through salivary droplets. When an infected person coughs or sneezes, the virus can spread to those nearby. A mask lowers the risk of spreading the coronavirus by blocking infectious droplets; it works as a barrier that helps keep virus-containing particles away from healthy people.

You may not feel that you are contagious: Sometimes people are infected without realizing that they are contagious. Masks are highly recommended for people who know that they have COVID-19, as this protects the people around them from further infection. This virus is spreading at a high rate and can be transmitted by people before they show any symptoms, so make sure to wear a face mask covering your mouth and nose to protect yourself and others from infection.
Face masks will protect you as well: Recent studies suggest that face masks also provide substantial protection for the wearer, and the protective benefits are greatest when every person covers their nose and mouth, reducing the spread of the virus from one person to another. Face masks greatly reduce the risk of infection, so make sure to wear one whenever you are in public or wherever you go.

Face masks also help the economy to recover: Face masks offer an economic benefit as well. Before vaccination was available, they were one of the most effective ways to protect lives and reduce the spread of the coronavirus. They are also very affordable, so anyone can easily buy masks to use in this pandemic.

Best Disposable Face Masks in India

1: ASGARD 3 Layer Protective Face Mask with NOSE CLIP:
● Make sure to clean your hands before using a face mask. Cover your mouth and nose with this disposable mask and ensure that there are no gaps between your face and the mask.
● This mask has three layers. The outer layer is water-resistant and helps block outside droplets from reaching your nose or mouth. The middle layer provides filtration, blocking germs, bacteria and pollution. The inner layer is skin-friendly and absorbs moisture.
● It is comfortable, with ultra-stretchable elastic that can be worn for a long time without hurting your ears.

2: We Cool KORRUN 3 ply Disposable Face Mask 3-Layer Medical Masks:
● This disposable face mask is lightweight and allows smooth breathing.
● Sizes are available for everyone, adults and children alike.
● The elastic bands are soft and stretchy.
● This disposable mask is very effective and breathable.
● It protects users from dust, infections, germs and more.
● This disposable face mask is for one-time use only.
What to expect with these masks? You can expect comfort from these masks. They are disposable, comfortable to use, and provide a high level of safety.

What types of masks are on the market? There are several types of masks available, including cloth masks, disposable surgical masks, cone-style masks, and several others.

Who is this for? These masks are for everyone, as all of us need to wear a face mask to protect ourselves from infection, bacteria, germs and pollutants.

Are these comfortable to wear? Yes, these masks are very comfortable to wear. The disposable masks have three layers that prevent particles and infectious droplets from entering your mouth or nose, and the elastic bands are made of soft fabric for extra comfort.
ASL Sign Language Dictionary

Search and compare thousands of words and phrases in American Sign Language (ASL). The largest collection online.

How to sign: ask for or request earnestly. "The prophet bid all people to become good persons." Similar / Same: beseech, entreat, conjure, bid, press.

How to sign: command solemnly. Categories: burden, charge, saddle.
Cacti have numerous adaptations that enable them to survive in arid climates; these adaptations let the plant efficiently gather water, store it for a long time, and conserve it (minimizing water loss from evaporation). Cacti have thick, succulent stems with rigid walls that store water when it rains. The stems are fleshy, green, and photosynthetic, and the inside of the stem is either spongy or hollow (depending on the cactus). A thick, waxy layer prevents the water inside the cactus from evaporating.

Long, fibrous roots are common in cacti, and these roots take moisture from the earth. Some cacti, such as ball cacti, have smaller, more compact roots that can capture dew that falls from the cactus. Most cacti feature scales or spines in place of leaves (these are modified leaves). The scales and spines do not lose water through evaporation (unlike regular leaves, which lose a lot of water). The spines also keep predators at bay (animals that would like to consume the cactus for food and/or water). Areoles are circular collections of spines on a cactus. Flowers bud from areoles, which are also where new stems branch.

Why do cacti have tiny spines and a big meaty stem?

Why do cacti differ from other plants in having thorns instead of leaves and a thick, meaty stem? The appropriate choice is (A): it allows the cacti to hold more water, since broad leaves would lose water. Thorny shrubs and cacti are found in the desert. Their thick, fleshy stems allow them to hold more water for a longer period of time. In addition, desert plants have thorns rather than leaves because big, broad leaves would let water evaporate quickly. During transpiration, water moves through a plant and evaporates from aerial parts including leaves, stems, and flowers.

How can a cactus benefit from a fleshy stem?

Hint: Cactus plants are desert residents that thrive in arid environments.
These plants have adapted to stop water from evaporating from their surface.

Complete response: Cladophylls are modified stems with a leaf-like appearance and a green color that are specialized for photosynthesis; they are typically flattened. Although a cladophyll looks like a leaf, it is anatomically a branch, because it has nodes from which new stems, leaves, flowers, and even roots can grow. When it rains, cacti's thick, tough-walled, succulent stems can hold water. The stem is typically either hollow or spongy on the inside, and a thick, waxy layer prevents the stem from losing water, keeping it inside the cactus and preventing evaporation.

Informational note: Cacti have lost their true leaves. The leaves are modified into spines, which help to lower transpiration; the spines also offer some cover and protection from animals. These specialized structures arise from areoles (highly reduced branches), which are a distinguishing characteristic of cactus plants. Cacti also produce tubular, multi-petaled flowers. Therefore, "stems engineered to generate food using photosynthesis" is the right response.

Note: Many succulents have cladophylls. Cladophylls are also called cladodes, or prickly pear pads. Cacti produce flowers that are typically showy, delicate, and very attractive. The pad cacti, sometimes known as prickly pear cacti, belong to the sizable genus Opuntia; the prickly pears are the fruit.
As the debate escalates over how we publicly remember the Civil War following the tragic events in Charlottesville, Virginia, the passionate and contentious disputes have centered on symbols like monuments, street names and flags. According to a study by the Southern Poverty Law Center, at least 1,503 symbols to the Confederacy are displayed in public spaces, mostly in the South and the Border States, but even in decidedly Yankee locales like Massachusetts. Most of these monuments sprang from the Lost Cause tradition that developed in the wake of the war, during the establishment of white supremacist Jim Crow laws around 1900, and as a response to the Civil Rights Movement of the 1950s and 1960s. Those artifacts are not the only way we legitimize and honor the deadly and racist 19th-century rebellion against the United States. Much of the language used in reference to the Civil War glorifies the rebel cause. The language we turn to in describing the war, from speaking of compromise and plantations, to characterizing the struggle as the North versus the South, or referring to Robert E. Lee as a General, can lend legitimacy to the violent, hateful and treasonous southern rebellion that tore the nation apart from 1861 to 1865; and from which we still have not recovered. Why do we often describe the struggle as between two equal entities? Why have we shown acceptance of the military rank given by an illegitimate rebellion and unrecognized political entity? In recent years, historians in academia and in the public sphere have been considering these issues. Historian Michael Landis suggests professional scholars should seek to change the language we use in interpreting and teaching history. He agrees with people like legal scholar Paul Finkelman and historian Edward Baptist when they suggest the Compromise of 1850 be more accurately referred to as an Appeasement. The latter word precisely reflects the sway that Southern slaveholders held in the bargain. 
Landis goes on to suggest that we call plantations what they really were—slave labor camps—and drop the use of the term "the Union." A common usage in the 19th century to be sure, but now we only use "the Union" in reference to the Civil War and on the day of the State of the Union address. A better way to speak of the nation during the war, he argues, is to use its name, the United States. In the same way, we could change the way we refer to secessionist states. When we talk of the Union versus the Confederacy, or especially when we present the strife as the North versus the South, we set up a parallel dichotomy in which the United States is cast as equal to the Confederate States of America. But was the Confederacy really a nation, and should we refer to it as such? When historian Steven Hahn participated in the 2015 History Film Forum at the Smithsonian's National Museum of American History, he noted that using these customary terms to tell the story of the Civil War—Hahn suggests we use "War of the Rebellion"—lends legitimacy to the Confederacy. "If you think about it," Hahn said, "nobody in the world recognized the Confederacy. The question is can you be a state if no one says you are a state?" Of course, international recognition and support for the rebellion was intensely important to secessionist leaders, not just because Jefferson Davis desired the military backing of Great Britain and other European nations, but because they sought the legitimacy that came with it. Hahn says that President Abraham Lincoln and his administration believed that its leaders didn't have the right to leave the United States or the authority to take their states with them. Looking at leaders like Lincoln during the war and Frederick Douglass in its aftermath, it is apparent that the concept of being careful about the terms we use to describe the period is not a new challenge.
In his writings, Lincoln referred to the group he was fighting as the “so-called Confederacy” and Jefferson Davis never as president, only as the “insurgent leader.” And if the so-called Confederacy wasn’t a country, but rather what political scientists would call a proto-state, because not a single foreign government in the entire world recognized it as a nation-state, then could Jefferson Davis legitimately be a president? Could Robert E. Lee be a General? The highest rank Lee achieved in the United States Army was colonel, so given his role as general in service to a failed revolution by a group of rebels, how should we now refer to him? It would be just as accurate to refer to Lee, who led an armed group against national sovereignty, as an insurgent or a warlord, if not a terrorist. Imagine how different it would be for a school-age child to learn about the War of the Rebellion if we altered the language we use. When news reports about the debate over monuments say “Today the City Council met to consider whether to remove a statue commemorating General Robert E. Lee, commander of the Confederate Army,” what if they instead were written in this way: “Today the City Council debated removing a statue of slaveholder and former American army colonel Robert E. Lee, who took up arms in the rebellion against the United States by the so-called Confederacy?” Yale historian David Blight, whose book Race and Reunion called for a reexamination of how we remember the war, says our memorializing language and ideology about the Confederacy became a potent revisionist force in how we understand our history. The Lost Cause tradition, which Blight said he always calls “a set of beliefs in search of a history, more than actually a history,” revolves around an “idea that there was one Confederacy, and there was this noble struggle to the end to defend their sovereignty, and to defend their land and to defend their system, until they could defend it no more. 
And that image has been reinforced over the intervening years in popular literature and in films like Birth of a Nation, and Gone with the Wind, and the many monuments as well as the use of the Confederate flag." Frederick Douglass was, Blight says, "acutely aware that the postwar era might ultimately be controlled by those who could best shape interpretations of the war itself." Just a few years after the war, Douglass had already begun to see that the losers of the war were winning the peace, because he felt that the American people were "destitute of political memory." Douglass often referred to the war as a "rebellion," was careful not to speak of the rebels in any honorific way, and pledged himself to never forgive the South and to never forget the meaning of the war. On Memorial Day in 1871 at the Civil War Unknown Monument at Arlington National Cemetery, Douglass' speech was resolute:

We are sometimes asked in the name of patriotism to forget the merits of this fearful struggle, and to remember with equal admiration those who struck at the nation's life, and those who struck to save it—those who fought for slavery and those who fought for liberty and justice. I am no minister of malice . . . I would not repel the repentant, but . . . may my tongue cleave to the roof of my mouth if I forget the difference between the parties to that . . . bloody conflict . . . I may say if this war is to be forgotten, I ask in the name of all things sacred what shall men remember?

As Douglass was already concerned that the victors were losing the war of historical memory to the supposedly vanquished, I am not sure that he would have been surprised that, not far from where he stood at the national cemetery—often considered the nation's most hallowed ground—a Confederate memorial would be built in the early 20th century to the insurgents he felt "struck at the nation's life." Douglass knew, day by day after the shooting stopped, that a history war was playing out.
It is clearly not over yet. Words, though they do not stand as marble and bronze memorials in parks and in front of buildings or fly on flagpoles, are perhaps even more powerful and pernicious. The monuments we've built with language may, in fact, be even more difficult to tear down.

UPDATE: 9/18/2017: A previous version of this article misidentified the location of the 1871 Frederick Douglass speech, which took place at the Civil War Unknown Monument, not the Tomb of the Unknown Soldier.
Recently at the Climate Summit, US President Joe Biden promised to reduce US greenhouse gas emissions by at least 50 percent by 2030, and Indian Prime Minister Shri Narendra Modi reiterated India's resolve to contribute its bit to combating climate change. The summit has brought into focus the world's emergent and immediate need to address global warming. As the world moves ahead with rapid industrialization and development, the role of sustainable or green architecture takes center stage.

Since people spend most of their time indoors at workplaces, building designs have to encompass a holistic approach to the construction process, including materials, ventilation and air-conditioning, waste handling, synergy with the environment, etc. Going forward, therefore, building designs need to reflect sustainability and achieve resource efficiency. This brings us to the concept of "green buildings" and the positive effect they can have on the health and productivity of occupants. The US Environmental Protection Agency defines green building as "the practice of creating structures and using processes that are environmentally responsible and resource-efficient throughout a building's life cycle." The World Green Building Council lists certain features that make a building green.
These include:
- Efficient use of energy, water, and other resources
- Use of renewable energy, such as solar energy
- Measures to reduce pollution and waste, and enable reuse and recycling
- Good indoor environmental air quality
- Use of non-toxic, sustainable, and ethical materials
- Consideration of the environment in design, construction and operation
- Consideration of the quality of life of occupants in design, construction and operation
- A design that allows adaptation to a changing environment

The efficient use of resources, ensuring worker health, increasing productivity, and reducing harmful impact on the environment require a holistic approach that takes into consideration construction materials, building architecture, and the need for sustainability. One of the major areas of focus in workplaces these days is the health and wellness of employees. Designing buildings with green features can have a substantial effect on employee health, thereby fostering a more productive environment. All this ultimately leads to sustainable value creation in the long run.

Green Buildings: Features and their benefits

There is suggestive evidence that "green" buildings have positive effects on their occupants compared to conventional buildings. At workplaces, green buildings reduce sick time and improve employee well-being; a spinoff is increased productivity. Green buildings tend to focus on the basics like lighting, ventilation and biophilia. Though many of these features overlap and work in conjunction, their synergy has a positive impact on the inhabitants.

Some of the features of green buildings that have an effect on employee productivity:
- Improved Air Quality. Green buildings circulate fresh air inside the building through ventilation systems, reduce pollutants, and follow maintenance practices to keep these systems in proper shape. Improved ventilation leads to a substantial increase in the cognitive performance of employees.
- Lighting. Artificial light and glowing computer screens disrupt the body's processes and thus have a detrimental effect on health. A green building's interior has natural lighting and outdoor views, which reduces energy needs besides enhancing user comfort. Green buildings are designed to maximize the penetration of natural light and create an ambient level of lighting for employees. Natural light creates a harmonious, productive work environment, thereby boosting productivity.
- Acoustics. Green buildings use interior designs and materials to achieve noise control and speech privacy. Noise causes distractions, affects short-term memory, and impairs concentration. Reducing noise in workplaces, both external and internal, plays a major role in enhancing employee concentration and productivity.
- Thermal Comfort. This refers to the use of strategies, designs, and systems to maintain indoor temperature and humidity for the comfort of employees. Green buildings use natural ventilation, wind circulation, and building orientation to maintain ambient temperatures and reduce the need for air conditioning. This has a positive effect on the health and wellness of workers.
- Natural Surroundings. Green buildings connect occupants with nature through sensory perception of natural elements like water, air, light, and greenery. This helps create a more productive and healthy environment for employees.
- Building Aesthetics. Though sometimes perceived as superficial, a beautiful and harmonious work environment is known to benefit the mental state of employees. A work environment that is well designed and has social spaces and recreational facilities for employees is certainly helpful in boosting employee productivity.

Productivity through Well-Being

The well-being of employees is being recognized as one of the major drivers of productivity within an organization.
Providing a harmonious work environment cuts down absenteeism, reduces sick downtime, and improves worker engagement. The result is a quantifiable improvement in productivity. Green buildings are now moving beyond energy efficiency and integrating health and wellness features in an effort to focus on employee harmony and prosperity. The goal is to boost productivity, foster innovation, and reach a stage where employees are satisfied with their jobs.

Pioneering by example: Sehgal Foundation's headquarters "Green" building

The World Day for Safety and Health at Work is an important tool to raise awareness about making workplaces safe and to promote worker well-being. This day is promoted as part of the Global Strategy on Occupational Safety and Health. One of the ways this initiative is being realized is through holistic work environments made possible by green buildings. S M Sehgal Foundation (Sehgal Foundation), a rural development NGO, has constructed its headquarters building in Gurugram, Haryana, according to the Leadership in Energy and Environmental Design (LEED) Platinum standards set by the Indian Green Building Council and the U.S. Green Building Council. Sehgal Foundation's green building includes features like photo-voltaic solar panels on the rooftop generating 35 kW of electricity; solar water heaters; shading devices; a rainwater harvesting storage tank of 800,000 liters; onsite recycling of gray and black water; groundwater recharging (zero-runoff site); courtyards maximizing natural light and ventilation; recycled wood; various endangered plant species; use of in-situ bricks; maintenance-free exteriors; insulated walls; use of rapidly renewable rubber wood and bamboo; double-glazed glass; and a highly reflective roof finish, among others. The founders, Dr. Suri Sehgal and Mrs.
Edda Sehgal, envisioned the "green" design, construction, operation, and maintenance of the building as being in keeping with the organization's mission to promote sustainable development and to reduce the building's impact on human health and the environment. Most of the technology used in the building is about common sense; only a small part involves sophisticated technology. With intelligent designs such as that of the Sehgal Foundation building, electricity costs can be cut by almost 50%. The notion that green buildings must be expensive is a myth: Sehgal Foundation's green building clearly shows that a sustainable project requires little or no extra expenditure, and it aligns with the organization's mission to achieve positive social, economic, and environmental change.
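As a back-of-envelope illustration of the rooftop array's contribution, the annual yield of a 35 kW system can be estimated as below. The daily peak-sun-hours and performance ratio are assumed generic values, not figures from the foundation:

```python
# Rough annual-yield estimate for a 35 kW rooftop PV array.
# ASSUMPTIONS (not from the article): ~5 equivalent peak-sun hours/day
# for Gurugram, and an overall system performance ratio of 0.75.
capacity_kw = 35.0
peak_sun_hours_per_day = 5.0
performance_ratio = 0.75

annual_kwh = capacity_kw * peak_sun_hours_per_day * performance_ratio * 365
print(f"Estimated annual generation: {annual_kwh:,.0f} kWh")
```

Under these assumptions the array would offset roughly 48,000 kWh of grid electricity per year, consistent with the article's claim of large electricity-cost reductions.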
Brightness of galaxies measured using New Horizons' data; probe enters hibernation

A team of astrophysicists is using images captured by NASA's New Horizons probe during its nine-and-a-half-year journey to Pluto to measure the brightness of all the galaxies in the universe. In a study published in the journal Nature Communications, the researchers, led by Michael Zemcov of the Rochester Institute of Technology (RIT), used archival data taken by the probe's Long Range Reconnaissance Imager (LORRI) to study the cosmic optical background – the light beyond the Milky Way galaxy. The photos, all facing away from the Solar System and outward into the Milky Way, were taken at several times and locations, including New Horizons' 2006 launch, its 2007 Jupiter flyby, and four separate positions between Jupiter and Uranus captured in 2007, 2008, and 2010. Because these images were taken in the outer Solar System, they enabled the researchers to place an accurate upper limit on the light coming from the cosmic optical background. This background is difficult to see and measure from Earth because sunlight reflected off interplanetary dust in the inner Solar System interferes with the view, making the sky near Earth significantly brighter than the faint light of distant galaxies. "Determining how much light comes from all the galaxies beyond our own Milky Way galaxy has been a stubborn challenge in observational astrophysics," explained Zemcov, who is both a professor at RIT's School of Physics and Astronomy and a member of its Center for Detectors and Future Photon Initiative. Once scientists have an accurate measurement of the cosmic optical background's brightness, they can determine the number and locations of stars, better understand the inner workings of galaxies, and even gain insight into the activity of dark matter within those galaxies.
NASA's first outer Solar System missions, Pioneer 10 and Pioneer 11, took the first measurements of brightness beyond the Milky Way, providing scientists with a benchmark for the cosmic optical background's brightness. Although such missions are sent to explore planets, their instruments provide a secondary benefit to astrophysics and could potentially be designed to maximize this use. "This result shows some of the promise of doing astronomy from the outer Solar System. What we're seeing is that the optical background is completely consistent with the light from galaxies, and we don't see a need for a lot of extra brightness; whereas previous measurements from near the Earth need a lot of extra brightness. The study is proof that this kind of measurement is possible from the outer Solar System, and LORRI is capable of doing it," Zemcov said. For the research team, the study of LORRI images confirms the instrument is capable of providing accurate measurements of light coming from distant galaxies. The New Horizons mission has been extended to 2021, with the spacecraft traveling farther into the Kuiper Belt. Zemcov hopes to use LORRI data captured in this phase of the mission to refine estimates of the cosmic optical background. "With a carefully designed survey, we should be able to produce a definitive measurement of the diffuse light in the local universe and a tight constraint on the light from galaxies in the optical wavelengths," he emphasized.

New Horizons enters hibernation

While its data is being used for purposes beyond those of the mission, the New Horizons probe is taking a long-deserved rest, having been put into hibernation on April 7 for the first time since December 2014, when it was awakened to prepare for the July 2015 Pluto encounter.
The spacecraft was awake for two-and-a-half years, or 852 days, which is the longest period since launch, beginning with its approach phase six months before the flyby and continuing through the 16 months it took to return all the data captured during the flyby back to Earth. It has already conducted distant observations of several Kuiper Belt Objects and dwarf planets and also studied the space environment of the Kuiper Belt. This time, the probe will remain in hibernation for 157 days before being awakened again on September 11, 2017. During the hibernation period, mission scientists will develop commands for its January 1, 2019, flyby of KBO 2014 MU69, which will be conducted over a nine-day period. Two potential flyby altitudes over MU69 are being considered. As scientists learn more about the object’s properties and orbit, one of these will be selected. “We’re looking forward to taking advantage of the reduced mission operations workload during this hibernation, as well as one early next year, to plan much of the MU69 flyby,” said mission operations manager Alice Bowman. Laurel Kornfeld is an amateur astronomer and freelance writer from Highland Park, NJ, who enjoys writing about astronomy and planetary science. She studied journalism at Douglass College, Rutgers University, and earned a Graduate Certificate of Science from Swinburne University’s Astronomy Online program. Her writings have been published online in The Atlantic, Astronomy magazine’s guest blog section, the UK Space Conference, the 2009 IAU General Assembly newspaper, The Space Reporter, and newsletters of various astronomy clubs. She is a member of the Cranford, NJ-based Amateur Astronomers, Inc. Especially interested in the outer solar system, Laurel gave a brief presentation at the 2008 Great Planet Debate held at the Johns Hopkins University Applied Physics Lab in Laurel, MD.
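As a quick sanity check on the hibernation timeline reported above, a few lines of Python confirm that a 157-day hibernation beginning April 7, 2017 ends on the stated wake date:

```python
from datetime import date, timedelta

# New Horizons entered hibernation on 2017-04-07 for 157 days;
# verify that this lands on the wake date given in the article.
hibernation_start = date(2017, 4, 7)
wake_date = hibernation_start + timedelta(days=157)
print(wake_date)  # 2017-09-11
```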
Emerging Treatments for Compartment Syndrome

Chronic exertional compartment syndrome (CECS) is an underdiagnosed condition that causes lower and upper extremity pain in certain at-risk populations. Lower-extremity CECS is most often observed in running athletes and marching military members. Upper-extremity CECS is most commonly seen in rowers and professional motorcyclists. The proposed pathophysiology is increased pressure in muscle compartments during exercise that can result in muscle tightness and pain, which can then advance to paresthesia, muscle weakness, and exercise intolerance. CECS predominantly involves the lower extremities, primarily affects active young adults, and negatively impacts running and other endurance activities. Needle manometry can be used to confirm the diagnosis of CECS by measuring intracompartmental pressure. According to a 2016 systematic review, surgical intervention for CECS was successful in only 66% of those affected, with 13% reporting complications from surgery and 6% requiring a repeat surgical procedure. Similarly, a 2013 retrospective analysis of military members showed fasciotomy for CECS was successful in 55% of patients. Additionally, 28% of these patients were unable to return to full activity, 16% suffered surgical complications, and 6% required repeat fasciotomy. Nonoperative management of CECS is more commonly described in the literature and consists of cessation of activities, altering foot-strike pattern, physical therapy, taping, and injections of botulinum toxin A. Larger samples and a more diverse population are needed to better understand the outcomes of nonoperative management.

Image 1: Anterior and lateral compartments. Adapted from .

This post will be an evaluation and update of the current evidence for the treatment of CECS with botulinum toxin A.
One of the first reports involved 16 patients and 41 total compartments and showed a significant decrease in intracompartmental pressures, with elimination of exertional pain in 94 percent of patients, for up to 9 months. Eleven of the patients (69%) did, however, have a decrease in strength. An unpublished retrospective review of CECS patients treated with botulinum toxin A injections at the Ft. Belvoir Military Sports Medicine Clinic shows that 66% (19/29) of the patients returned to their desired activity level. Additionally, 20 patients were satisfied or somewhat satisfied with their treatment, and 12 patients continued to have sustained relief at the time of follow-up. However, seven patients experienced a recurrence of their symptoms, with a mean duration of improvement of 7.8 months. Ninety-one percent (10/11) of patients who underwent both BoNT-A injections and fasciotomy reported a favorable response to botulinum toxin A before their surgery, suggesting that intramuscular botulinum toxin A injections for CECS might predict fasciotomy success. Baria and Sellon had similar results when treating a patient diagnosed with CECS with botulinum toxin: pain following aggravating activity reduced from 9/10 to 1/10 on the visual analogue scale; paresthesia resolved completely one week post-injection; and no atrophy, weakness, or other adverse effects were identified at 14-month follow-up. Hutto et al. conducted a similar case study in a patient with CECS who showed total pain resolution within two weeks of injection and no recurrence of symptoms at 10-month follow-up. A retrospective study has recently been performed on sixteen patients with involvement of the upper and lower extremities. The median age was 25.5 years, and initial efficacy was reported at 69%, with four patients having partial relief and seven having complete relief of symptoms. However, eight of the eleven patients showing relief had recurrence of symptoms within five months. Only minor adverse effects were observed.
Initial management is almost always nonsurgical. This normally consists of rest, avoidance of the aggravating activity, NSAIDs, and possibly anesthetic or corticosteroid injections. A diagnostic anesthetic can also confirm the diagnosis. Mouhsine et al. reported on 19 athletes with os trigonum syndrome in whom first-line nonsurgical treatment failed. To confirm the diagnosis, the patients received an anesthetic injection administered under fluoroscopic control. Ten patients had complete resolution of symptoms after one injection, 6 patients after two injections, and the remaining 3 patients underwent open excision of the os trigonum. Some clinicians will immobilize in a short leg cast or CAM walking boot for one to four weeks, which has been shown to be more beneficial for acute injuries. Another case report evaluated a 30-year-old manual worker with a prior forearm fasciotomy for compartment syndrome of the hand. His symptoms included pain during fine hand motions and pain with minimal effort. After treatment with botulinum toxin in his first dorsal interosseous muscle and adductor pollicis, he described difficulty with fine hand motions for ten days and an estimated ten percent strength loss. He did, however, report complete relief of pain and fatigue at a 15-month follow-up. There is one ongoing study in which plantarflexion and dorsiflexion strength are measured before and 2 months following a botulinum toxin injection of 50 units into the tibialis anterior; symptoms and strength are followed at 2, 4, and 6 months. Currently, based on these limited studies, botulinum toxin appears to be a safe and cost-effective alternative to fasciotomy for the treatment of CECS. In many sports medicine practices, 25 units of onabotulinumtoxinA are delivered with ultrasound guidance into the muscle belly at two locations, in the proximal and distal thirds of the affected compartment(s), in the anterior, lateral, and deep posterior compartments, for a total of 50 U in each compartment.
In the superficial posterior compartment, 50 U of BoNT are injected into the medial and lateral heads of the gastrocnemius and 50 U into the soleus muscle. Gait retraining can also be started, along with physical therapy, after a few weeks. Ultrasound-guided fasciotomy for CECS is also a newer technique of particular interest to nonoperative sports medicine providers. The ultrasound allows for visualization of the superficial peroneal nerve, vessels, and fascia. A cadaveric study was performed on ten anterior and lateral compartments for a total of twenty fasciotomies. No neurovascular injuries were reported, and all achieved target length. This led to the proposal of ultrasound-guided fasciotomy for CECS. Another, similar study performed ultrasound-guided fasciotomies in the superficial and deep posterior compartments, achieving 90 percent of the target length without tendinous or neurovascular damage. Over a 3-year period, seven patients underwent ultrasound-guided anterior-compartment release. All patients had a decrease in pain, and six of seven returned to presymptomatic exercise levels in about 35 days without any hemorrhage or peroneal nerve injury. One other case report describes an anterior leg ultrasound-guided release and a return to running in one week. The most successful technique involved using a small incision to dissect down to the fascial plane, which is then pierced using a small surgical blade, such that a blunt tunneling device can be inserted immediately deep to the tissue plane to be divided. Using real-time ultrasound guidance, the blunt tunneling device is then passed adjacent to the fascia for the required longitudinal distance, keeping the tip visualized throughout. When the tip of the blunt tunneler has passed the required distance under the tissue plane, a second small incision is made above the tip so that it may be brought to the skin surface.
A 2–0-diameter braided silk suture is then tied to the end of the tunneler before being pulled through, leaving the silk thread deep to the tissue plane. The process is then repeated in the opposite direction, with the blunt tunneler passing through the same incisions, again under direct ultrasound guidance, superficial to the tissue plane to be divided. The silk thread is once again tied to the end of the blunt tunneling device and pulled through so that it now passes immediately deep and superficial to the tissue plane to be divided. The ends of the suture are then oscillated under tension, creating a "cheese wire" effect, thus dividing the fascia. Evidence of fascial division could be obtained using ultrasound and was confirmed following open dissection.

Figures 2, 3: Real-time ultrasound images of ultrasound-guided fasciotomy, and images of the tunneling device. Adapted from .

In summary, chronic exertional compartment syndrome (CECS) is often seen in sports medicine clinics due to the population affected. Providers should be comfortable with the nonoperative treatment options. Botulinum toxin A has shown positive results in multiple small studies and may be an option, but further research is needed in this area. Ultrasound-guided fasciotomy has been proposed as an additional treatment option for CECS, and initial reports have shown positive results and a quicker recovery time when compared to traditional open fasciotomy.

- Campano D, Robaina JA, Kusnezov N, et al. Surgical management for chronic exertional compartment syndrome of the leg: a systematic review of the literature. Arthroscopy. 2016;32:1478–86.
- Waterman BR, Laughlin M, Kilcoyne K, et al. Surgical treatment of chronic exertional compartment syndrome of the leg: failure rates and postoperative disability in an active patient population. J Bone Joint Surg Am. 2013;95:592–6.
- Isner-Horobeti ME, Dufour SP, Blaes C, Lecocq J.
Intramuscular pressure before and after botulinum toxin in chronic exertional compartment syndrome of the leg. Am J Sports Med. 2013;41(11):2558–2566. doi: 10.1177/0363546513499183 - Moore, Clint DO; Hulsopple, Chad DO; Boyce, Brett MD. Utilization of Botulinum Toxin for Musculoskeletal Disorders. Current Sports Medicine Reports: June 2020 – Volume 19 – Issue 6 – p 217-222 - Baria MR, Sellon JL. Botulinum toxin for chronic exertional compartment syndrome. Clin J Sport Med. 2016;26(6):e111–e113 - Hutto WM, Schroeder PB, Leggit JC. Botulinum toxin as a novel treatment for chronic exertional compartment syndrome in the U.S. military. Mil Med 2019;184 (5–6):e458–e461. - Charvin M, Orta C, Davy L, Raumel MA, Petit J, Casillas JM, Gremeaux V, Gouteron A. Botulinum Toxin A for Chronic Exertional Compartment Syndrome: A Retrospective Study of 16 Upper- and Lower-Limb Cases. Clin J Sport Med. 2022 Jul 1;32(4):e436-e440. - Orta, C., Petit, J., & Gremeaux, V. (2018). Chronic exertional compartment syndrome in hands successfully treated with botulinum toxin-A: a case. Annals of physical and rehabilitation medicine, 61(3), 183-185. - Suer, Michael, MD. Botox for the Treatment of Chronic Exertional Compartment Syndrome. https://clinicaltrials.gov/ct2/show/NCT03922139 - Lueders, Daniel R., et al. “Ultrasound-guided fasciotomy for chronic exertional compartment syndrome: a cadaveric investigation.” PM&R 9.7 (2017): 683-690. - Balius R, Bong DA, Ardèvol J, Pedret C, Codina D, Dalmau A. Ultrasound-guided fasciotomy for anterior chronic exertional compartment syndrome of the leg. J Ultrasound Med. 2016;35(4):823–829. - Reisner, Jacob H., et al. “Ultrasound‐Guided Fasciotomies of the Deep and Superficial Posterior Leg Compartments for Chronic Exertional Compartment Syndrome: A Cadaveric Investigation.” PM&R 13.8 (2021): 862-869. - Finnoff, Jonathan T., and Wade Johnson. 
“Ultrasound-Guided fasciotomy for chronic exertional compartment syndrome: a case report.” Clinical Journal of Sport Medicine 30.6 (2020): e231-e233. - Davies, Joseph, Valerie Fallon, and Jimmy Kyaw Tun. “Ultrasound-guided percutaneous compartment release: a novel technique, proof of concept, and clinical relevance.” Skeletal Radiology 48.6 (2019): 959-963.
3. MALE REPRODUCTIVE SYSTEM: THE IMMUNE SYSTEM AND ITS ENDOCRINE CONTROL

The cells of the immune system are the leukocytes, or white blood cells. These cells and their products circulate constantly through the blood, lymph, and tissues in both surveillance and effector modes. The innate immune system is comprised principally of the mononuclear phagocytes (monocytes and macrophages) and granulocytes or polymorphonuclear cells (neutrophils, eosinophils, basophils, and mast cells), but it also includes cells more closely aligned with the adaptive responses (NK cells and dendritic cells). The cellular components of the adaptive immune system are the lymphocytes (T cells, B cells, and NK cells), along with the "professional" antigen-presenting cells (dendritic cells and macrophages). In modern immunology, the cells of the immune system and their various functional subsets are primarily identified and even defined by expression of certain antigens, referred to as cluster designation (CD) markers, recognized by well-characterized monoclonal antibodies (Table 19.1).

The Innate Immune Response

The innate immune system provides the first line of defense against external threats through an inherent ability to recognize and swiftly respond to a broad range of pathogens and other immunogens, and by promoting the process of inflammation. Innate immunity plays a fundamental role in the response of the male reproductive tract to infections, but it also has a much wider role in male reproduction because many regulatory mechanisms are shared by the innate immune and reproductive systems.

Pattern Recognition Receptors and Activation of Innate Immunity

Activation of the innate immune response involves pattern-recognition receptors, which recognize distinct motifs, or pathogen-associated molecular patterns (PAMPs), produced by bacterial, viral, fungal, and protozoan pathogens.105 In contrast to classical ligand receptors, these receptors are able to respond to multiple ligands that possess related, rather than identical, structures. The canonical pattern-recognition receptors are a family of transmembrane receptors referred to as the Toll-like receptors (TLRs), which are expressed on the cell surface and on intracellular endosomes.106 There are several families of intracytoplasmic pattern-recognition receptors: the nucleotide binding and oligomerization domain (NOD)-like receptors (NLRs), the retinoic acid-inducible gene (RIG)-like receptors (RLRs), and the C-type lectin receptors (CLRs).107 Importantly, many of these receptors can also interact with endogenous molecules released by cell damage, termed danger-associated molecular patterns (DAMPs), which include high-mobility group box 1 protein (HMGB1), heat shock proteins, extracellular matrix components, and nucleic acids.108 The TLRs are highly expressed by myeloid-lineage cells (monocytes, macrophages, and dendritic cells), but are also found on other leukocytes, epithelial cells, and stromal cells. There are ten TLRs (numbered TLR1–10) in the human, but laboratory rodents (rats and mice) possess an additional three TLRs (TLR11–13).106,109 These receptors detect distinct ligands of bacterial, viral, and fungal origin, such as bacterial and viral nucleic acids, bacterial lipopeptides, peptidoglycans, and lipopolysaccharides (LPS). LPS is a component of the cell wall of gram-negative bacteria, such as Escherichia coli; the receptor for LPS is TLR4, which requires a co-receptor called MD2 (myeloid differentiation 2 protein), as well as the LPS-binding protein CD14.
Shrinking And Enlarging Work : HARDENING CARBON STEEL FOR TOOLS : The Working Of Steel

Steel can be shrunk or enlarged by proper heating and cooling. Pins for forced fits can be enlarged several thousandths of an inch by rapid heating to a dull red and quenching in water. The theory is that the metal is expanded in heating and that the sudden cooling sets the outer portion before the core can contract. In dipping, the piece is not held under water till cold but is dipped, held a moment, and removed; then dipped again and again until cold. Rings and drawing dies are also shrunk in a similar way. The rings are slowly heated to a cherry red, slipped on a rod, and rolled in a shallow pan of water which cools only the outer edge. This holds the outside while the inner heated portion is forced inward, reducing the hole. This operation can be repeated a number of times with considerable success.
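The "several thousandths of an inch" figure is consistent with ordinary thermal expansion. As a rough illustration, the expansion coefficient and "dull red" temperature rise below are assumed textbook values, not figures from this text:

```python
# Free thermal growth of a steel pin's diameter on heating.
# ASSUMED values: linear expansion coefficient ~12e-6 per degC for
# carbon steel, and a "dull red" heat roughly 650 degC above room temp.
alpha = 12e-6          # 1/degC, assumed for carbon steel
diameter_in = 1.0      # example pin diameter, inches
delta_t_c = 650.0      # assumed temperature rise to dull red, degC

growth_in = alpha * diameter_in * delta_t_c
print(f"Free thermal growth: {growth_in:.4f} in")
```

A one-inch pin thus grows almost eight thousandths of an inch while hot; the quench described above "sets" part of that growth permanently.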
We can read texts differently. I noticed the connection between our questions and how we read texts: different questions reveal richness in a text. Plain Vanilla has enormous possibilities, just as vanilla ice cream does. I believe p4c has great power when we make teaching materials. Often materials are constructed by teachers, and teachers want to teach as they wish. But in p4c, students can make questions, and they can even change the whole class dynamics. So teaching materials become common wealth in the classroom. Facilitation reflects who the facilitator is. It's true. Therefore, in p4c sessions, students might notice the teacher's true self. This is why, as a result of p4c, the class becomes more mindful, friendly, and gentle. I have a new question from this reflection. Writing reflections in English (not my native language) shows me a new perspective every time. Thumbs up and thumbs down is a good way to evaluate a p4c inquiry. But how do we evaluate p4c through written reflections? And how can we assess the p4c learning experience from written articles? Perhaps these are my new research questions.
Transference and Countertransference Working with Children

Specific skills and knowledge are essential for a social worker working with children. Understanding transference and countertransference is crucial to a healthy therapeutic relationship. Both transference and countertransference can be evident in any client–therapist relationship, but are especially important in working with children because of a common instinct among adults to protect and nurture the young. The projection or relocation of one’s feelings about one person onto another, otherwise known as transference, is a common response by children (Gil, 1991). Countertransference, a practitioner’s own emotional response to a child, is also common. For this Discussion, review the Malawista (2004) article. ATTACHED Post your explanation why transference and countertransference are so common when working with children. Then, identify some strategies you might use to address both transference and countertransference in your work with children. Support your posts with specific references to the Learning Resources. Be sure to provide full APA citations for your references.
New implementation of SysAgria in Arad

Type of culture: hazel orchard

Monitored parameters: relative air humidity, air temperature, leaf evapotranspiration, leaf temperature, solar radiation, precipitation amount, soil pH, soil electroconductivity, soil temperature, soil moisture.

Based on these parameters, on the one hand, the development of the hazelnut crop can be carefully monitored; on the other hand, predictions can be made and possible losses or yields can be evaluated, enabling preventive measures. Given that hazelnut is a shrub that usually prefers less shady areas with moist, more basic soil, the information provided by SysAgria is highly relevant for the intelligent monitoring and management of such orchards.
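A minimal sketch of how such readings might be represented and screened in software follows. The field names and agronomic thresholds are illustrative assumptions, not part of the SysAgria product:

```python
from dataclasses import dataclass

@dataclass
class OrchardReading:
    """One snapshot of a subset of the monitored parameters listed above."""
    air_temp_c: float
    air_humidity_pct: float
    soil_ph: float
    soil_moisture_pct: float
    soil_temp_c: float

def alerts(r: OrchardReading) -> list:
    # Illustrative thresholds only: hazel is described above as
    # preferring moist, more basic soil.
    out = []
    if r.soil_ph < 6.5:
        out.append("soil more acidic than hazel prefers")
    if r.soil_moisture_pct < 25.0:
        out.append("soil moisture low; consider irrigation")
    return out

reading = OrchardReading(air_temp_c=24.0, air_humidity_pct=55.0,
                         soil_ph=6.1, soil_moisture_pct=20.0, soil_temp_c=18.5)
print(alerts(reading))
```

A real deployment would feed readings from the sensor network into rules like these to trigger the preventive measures mentioned above.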
Improved advanced composites manufacturing technologies developed by IACMI aid the integration of innovative practices and methods in manufacturing.

By Dr. Uday Vaidya

Transitioning the United States into a clean energy economy will require the widespread adoption of transformative technologies that save energy and reduce emissions. Regulatory actions such as the Corporate Average Fuel Economy (CAFE) standards aim to increase automobile fuel economy significantly by 2025. Fiber-reinforced polymer composites are a key enabler of energy efficiency gains and emissions reductions. High strength-to-weight ratios, exceptional durability and directional properties are some of the key benefits that make composite materials a valued choice for high-performance products across multiple markets and industries. The Institute for Advanced Composites Manufacturing Innovation (IACMI), Knoxville, Tenn., is accelerating the transition of advanced composites manufacturing technologies into the marketplace to facilitate the integration of innovative methodologies and practices across supply chains. The low-cost, energy-efficient production of advanced fiber-reinforced polymer composites for vehicles, wind turbines and compressed gas storage applications is expected to revitalize U.S. manufacturing and innovation and yield substantial economic and environmental benefits. IACMI contributes to this vision through high-value research, development and demonstration programs that reduce technical risk for manufacturers while training the next-generation composites workforce.
IACMI has five focus areas within advanced composites:
- Materials and Processes;
- Modeling and Simulation;
- Compressed Gas Storage;
- Wind Technologies; and
- Vehicle Technologies.

Composite Materials and Processes (M&P) technology focuses on material intermediates such as pellets, tapes, fabrics, low-cost carbon fibers (LCCF), recycling of carbon and glass fibers, nondestructive evaluation (NDE), materials characterization, novel manufacturing methods, and more efficient precursors and conversion processes. The M&P area is led out of Oak Ridge National Laboratory and the University of Tennessee, with partnerships from Vanderbilt University and the University of Kentucky. Modeling and Simulation (M&S) technology enables digital product definition through the use of modeling and simulation tools as a foundational methodology for designing, manufacturing and sustaining composite products; education and training of the next-generation workforce in design tools and methodologies; and the incorporation of multi-physics phenomena for manufacturing polymer composite materials and structures into simulation tools. The M&S area is led out of Purdue University, Indiana. Compressed Gas Storage (CGS) technology is advancing conformal tank designs, braided composite preform designs, and methods that enable reductions in safety factors to reduce the amount of carbon fiber required in tank designs. Composite materials help meet the growing demand for compressed natural gas (CNG) vessels and, eventually, hydrogen storage tanks as a low-emissions alternative to gasoline and diesel. The CGS area is led out of the University of Dayton Research Institute (UDRI), Ohio. Wind Turbine technology explores thermoplastic resins, segmented wind turbine designs, automation to reduce cost and labor content, and joinable pultruded wind turbine components.
Today’s composite wind turbine blades, ordinarily made with thermosetting resins, are time-consuming to produce, economically challenging to recycle, and increasingly difficult to transport as blade lengths grow to capture more energy. The wind technology area is led out of the National Renewable Energy Laboratory (NREL), Golden, Colo. Vehicle Technology seeks to reduce manufacturing costs and improve recyclability through innovative design concepts, low-cost tooling, robust modeling and simulation tools, effective joining technologies, and reliable defect detection methods. Rising fuel economy standards, which aim to reduce emissions and improve energy security, are compelling automakers to seek vehicle mass reduction opportunities through the integration of lightweight materials. The vehicles area is led out of the Corktown facility in Detroit and Michigan State University, East Lansing, Mich. IACMI’s technical activities are organized by key subtopics that cut across the above five Technology Areas (See Figure 1). These subtopics capture the full range of enabling technologies needed to maximize progress against 5- and 10-year IACMI technical targets of cost, energy and waste reduction for composites manufacturing technologies. Advances in carbon fiber technologies via alternative precursors, efficient processes and interface engineering are critical to cost reduction at improved performance. Alternative precursors such as textile-grade polyacrylonitrile (PAN) and new processing approaches are being adopted to engineer carbon fiber materials that yield superior final part properties at reduced production energy levels. Recent advances at Oak Ridge National Laboratory have enabled a low-cost carbon fiber (LCCF) that meets property and cost metrics for automotive, wind and CGS applications (See Figure 2). Innovative reinforcements, resins, additives and intermediates are enabling fast cycle times, reduced scrap, integrated features and reduction of embodied energy.
Integrated fabrics, braids, preforms and pre-pregs are used in the rapid fabrication of door inner, floor, seat backrest, roof, trunk and under-the-hood auto components, wind turbine blades and composite tanks (See Figure 3). Advanced manufacturing techniques such as injection overmolding, stampable preforms, locally stitched preforms and high-pressure resin transfer molding are some examples that reduce composites manufacturing costs and energy consumption and improve component performance and recyclability. Figure 4 illustrates a locally reinforced preform that provides directional properties. IACMI has a partnership with the Long Island, N.Y.-based Composites Prototyping Center (CPC) for prototyping and fabrication. Composite recycling is of growing interest to the composites community. Next-generation technologies feature novel and increasingly complex combinations and formulations of fiber-reinforced composites, but these are difficult to recycle using current practices. Since recycled chopped carbon fiber costs 70 percent less to produce and uses up to 98 percent less energy to manufacture than virgin carbon fiber, recycling technologies are creating new markets from the estimated 29 million pounds of composite scrap sent to landfills annually. Advances in recycling technologies including pyrolysis, solvolysis, mechanical shredding and cement kiln incineration are enabling the recycling, reuse and remanufacture of products. IACMI has strategic partnerships in recycling technologies with the American Composites Manufacturers Association (ACMA) and the Composites Recycling Technology Center (CRTC), Port Angeles, Wash. Additive technologies in composites manufacturing offer a high-rate, low-cost alternative to traditional tool-making approaches, and show promise as an effective processing method for printing composite structures from reclaimed structural fibers.
Additive approaches have the potential to significantly reduce composite tool-making lead times and increase the recovery and reuse of structural carbon fibers. Integrating advanced thermoplastic resins into current production processes is another priority: thermoplastics have shorter cycle times and are more suitable for recycling. Increasing the use of thermoplastics requires a variety of activities, including developing novel in situ polymerization methods to improve thermoplastic fatigue performance and establishing design-for-recyclability methods. Design, Prototyping, and Validation (DPV) are integral steps in turning conceptual designs into high-performance components and verifying that these components meet their intended product requirements. These product development steps rely on a robust understanding of material limits, processing capabilities, principles of mechanical design, and best manufacturing practices to optimize the safety, reliability and performance of a system. IACMI is advancing innovative vehicle design concepts through activities such as facilitating round-robin studies that compare composite joint and interface designs for various assembly methods, establishing design optimization approaches for manufacturability and recyclability, validating composite crash simulation models, and creating techno-economic analyses of automotive composite parts to provide manufacturers with design, prototyping and validation examples. Modeling and simulation tools for automotive applications require a range of activities including assessing variability in end-to-end simulated manufacturing processes, conducting accelerated tests and validating models with experimental data, incorporating composite joint designs in crashworthiness models, and sharing key materials properties to inform simulation efforts. The integration of these efforts within IACMI is reducing product development time.
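As a back-of-the-envelope illustration of the recycling economics cited earlier (recycled chopped carbon fiber at roughly 70 percent lower cost and up to 98 percent less production energy than virgin fiber), the savings can be computed directly. The baseline cost and energy figures below are illustrative assumptions, not published values.

```python
# Rough savings calculation using the percentage reductions cited in the
# article. Baseline cost/energy numbers for virgin carbon fiber are
# assumptions chosen only to make the arithmetic concrete.

def recycled_savings(virgin_cost_per_kg, virgin_energy_mj_per_kg,
                     cost_reduction=0.70, energy_reduction=0.98):
    """Return (cost, energy) per kg for recycled fiber given virgin baselines."""
    recycled_cost = virgin_cost_per_kg * (1 - cost_reduction)
    recycled_energy = virgin_energy_mj_per_kg * (1 - energy_reduction)
    return recycled_cost, recycled_energy

# Illustrative baseline: $25/kg and 600 MJ/kg for virgin carbon fiber.
cost, energy = recycled_savings(25.0, 600.0)
print(f"recycled fiber: ${cost:.2f}/kg, {energy:.1f} MJ/kg")
```

Even under conservative baselines, the energy term dominates, which is why recycling is attractive against the 29 million pounds of annual composite scrap mentioned above.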
Commercializing technologies for the low-cost, energy-efficient manufacturing of advanced fiber-reinforced polymer composites for vehicles, wind turbines and CGS applications will unleash significant economic and environmental benefits and help revitalize U.S. manufacturing and innovation. IACMI-The Composites Institute is playing a pivotal role in shaping future competitiveness and job growth in the United States and is coordinating the technical activities needed to accelerate progress toward this vision. Editor’s Note: Dr. Uday Vaidya is the UT/ORNL Governor’s Chair in Advanced Composites Manufacturing at the University of Tennessee, Knoxville, and chief technology officer at the Institute for Advanced Composites Manufacturing Innovation (IACMI), Knoxville.
From JFK to Biden: How the Catholic Presidency Changed When the United States elected John F. Kennedy as its first Roman Catholic president in 1960, my father was 12 years old. He spent the weeks before the election calling random numbers from the local phone book and impersonating a political pollster. Then, if the suckers bought it, he’d ask if they were scared JFK would tell national security secrets to the pope. A few weeks later, 80 percent of Catholic voters cast their ballots for Kennedy. Joe Biden, our second Catholic president, didn’t even come close to that number. Before Kennedy, Catholics did not vote so monolithically. True, FDR had equaled JFK’s share of the Catholic vote in 1936, but this anomaly is explained by his landslide victory, with over 60 percent of the vote, and his “demagogic” appeal to the ethnic working classes. Here’s a more typical example: Eisenhower’s 1952 and 1956 campaigns netted him 46 and 52 percent of the Catholic vote, respectively, according to data compiled by the Center for Applied Research in the Apostolate. Right down the middle. Elections after Kennedy (and Johnson) produced similar results. Reagan got 58 percent of the Catholic vote in 1984, and that’s the closest any candidate has come to replicating Kennedy’s success. Even the Catholic John Kerry only got 52 percent. In 2020, Catholic voters got another chance to cast a ballot for a member of their own church. But Biden’s Catholicism, like Kerry’s and unlike JFK’s, didn’t even move the needle. Biden and Trump split the Catholic vote 51/49. Why? What changed between 1960 and 2020 (or 2004)? I would propose two answers that are really the same answer: First, politics has become America’s religion. Plenty has been written on that subject, but it’s worth noting that in 1958, almost three-quarters of those polled didn’t care which political party their daughter’s future spouse belonged to. That number was down to 45 percent by 2016. 
In roughly the same timeframe, the rate of interfaith marriages tripled. Second, Catholics have lost the distinctive identity that helped them resist the temptation to subordinate faith to politics. I grew up near Pittsburgh, where “Catholic” was nearly synonymous with the working-class immigrants who, despite their poverty, built grand churches, closed ranks against religious bigotry, and anchored their identities in faith. No more. As Catholic journalist Ross Douthat writes, their “Catholic exceptionalism… dissolved into an unexceptional Americanism.” As Italians and Poles moved from the margins into the American mainstream, they were swallowed up by America’s all-consuming partisan divides. The days when my adolescent dad’s victims thought John XXIII could guilt Kennedy into coughing up the nuclear codes were a simpler time. Back then, Catholics loved the pope and Protestants were wary of him. Easy. In 21st-century America, however, someone’s opinion of the man in the Vatican is a far less surefire indicator of religion. Godless progressives applaud when Pope Francis criticizes capitalism, while right-wing Catholic conspiracy theorist John Zmirak has his readers half-convinced that the pontifex is a card-carrying Communist. Conservative Catholics now have more in common with conservative evangelicals than with their liberal co-religionists. This explains why “the Catholic vote” no longer exists in any meaningful sense, but it doesn’t explain why Catholics split just as evenly for Adlai Stevenson and Ike as for Trump and Hillary. A recent New York Times article may have the answer to that. The author, Elizabeth Bruenig, argues that the old breed of Catholic voters rejected many of the unstated premises that the American right and left shared. Some Catholics would vote for one party, some for the other, but because of their Church’s deep roots in the pre-Enlightenment world they never felt at home in either.
They were Catholics first and Democrats or Republicans second. That’s why the Catholic vote swung 28 points when Kennedy came along. Neither Biden nor Kennedy articulated a Catholic political vision that transcended American partisanship. Both were, essentially, garden-variety Democrats for their time, but Kennedy had a twofold advantage: He didn’t have to weigh in on a polarizing abortion debate, and he benefitted from a still-vibrant sense of Catholic solidarity. In other words, Catholic voters in the 1950s split their vote because they didn’t fit into the American two-party system. Catholic voters in 2020 split their vote because they fit into that system far too well. The result may be the same, but the reason matters. All of us should be alarmed by how we’ve reduced every aspect of life to partisan politics. The Church’s social teaching should empower American Catholics to think outside that box and breathe new life into our politics. They have the opportunity to mingle anti-capitalism with anti-Marxism, care for immigrants and refugees with a rejection of shallow multiculturalism, climate activism with contempt for the eugenics into which that movement so often degenerates. The platform of the American Solidarity Party lays out such a program in all its glorious contradictions. I’m not saying I’d be glad if 80 percent of Catholics had voted for Biden, but the fact that they didn’t reveals that a Catholic solidarity has been lost, and with it, any chance of a uniquely Catholic vision for American politics. Another lump of particularism has liquified in the American melting pot, and we are all poorer for it. This piece initially neglected to mention that John Kerry was a Catholic presidential candidate. The language has been changed to reflect that. Grayson Quay is a Young Voices contributor based in Arlington, Virginia. His work has been published in The American Conservative, the National Interest, and the Spectator US.
Today, entire neighborhoods are caring for each other. In the wake of the COVID-19 pandemic, mutual aid has proven an effective response, even if it is not ideal. While these efforts are often contextualized within a radical heritage that includes the Black Panther Party’s community programs of the 1960s and 1970s, these genealogies of mutual aid tend to overlook queer care and volunteer responses to AIDS in the 1980s. Looking deeper at that response can shed light on commonalities between the HIV/AIDS and COVID-19 crises, but it also points to a much deeper tradition of the human capacity to care for one another, independent of the state (especially when its response has been negligent). In San Francisco, the HIV and cancer agency Shanti Project has lately reimagined its services to include programs like helping seniors and severely disabled people with their survival needs (and ballots) in the face of COVID. This is not the first time Shanti has altered its service model in the face of crisis. Originally a peer-counseling service for people with terminal diseases, Shanti was one of the first organizations in the U.S. to provide services to people living with AIDS (PWAs), starting in the 1980s. Shanti’s origin is steeped in the tenets of mutual aid, mirroring the way the anarchist Peter Kropotkin conceived it in the nineteenth century. Most established HIV/AIDS organizations are now nonprofits, thus marking them as not “pure” mutual aid, according to academics and activists who hold to the “solidarity not charity” definition of mutual aid.
SMSC at St. John's

At St. John’s CE we believe children live up to their potential if they are happy, have fun, feel nurtured and grow from a foundation of strong Christian Values. We value spirituality and the holistic development of the child. We fully embrace the National Curriculum by “promoting pupils’ spiritual, moral, social and cultural development.” Our role as educators is to guide the children in our care along the path of life. Children’s spiritual, moral, social and cultural development and, within this, the promotion of fundamental British Values, are at the heart of the school's work. The following overview focuses on the spiritual, moral, social and cultural aspects which are promoted in daily life at St. John’s, enabling children and adults to thrive in a supportive, highly cohesive learning community.

Spiritual: Explore beliefs and experience; respect faiths, feelings and values; enjoy learning about oneself, others and the surrounding world; use imagination and creativity; reflect.

Moral: Recognise right and wrong; respect the law; understand consequences; investigate moral and ethical issues; offer reasoned views and have an appreciation of British Values.

Social: Investigate moral issues; appreciate diverse viewpoints; participate, volunteer and cooperate; resolve conflict; engage with the fundamental values of British democracy.

Cultural: Appreciate cultural influences; appreciate the role of Britain's parliamentary system; participate in cultural opportunities; understand, accept, respect and celebrate diversity.

We have created an additional Spirituality overview which identifies how Spirituality is further nurtured and developed at St. John’s; this was created in a workshop with all stakeholders from St. John’s represented.
(Please see the spirituality section in the curriculum area of the website.) We use the Think Equal programme in EYFS, followed by the SCARF programme in Years 1-6, to support PSHCE development in school; this also supports and promotes the teaching of SMSC. Weekly Picture News sessions for our KS2 children help the children discuss and keep up to date with the fast-changing world around them, challenging their ideas and preconceptions and broadening their horizons so they are able to engage with the modern world. Picture News focuses on world issues and British Values, which we also use as a focus in our Collective Worship.
You’ve heard of the gorgeous, rare all-white Cane Corso, and now you’re wondering how to get one. Unfortunately, this illustrious color is scarce and nearly impossible to get hold of. That said, white (straw) Cane Corsos are beautiful creatures. If your heart is set on one, take a peek at our guide below and keep your eyes peeled for local, reputable breeders.

What is a White Cane Corso?

A Cane Corso is an Italian mastiff breed, descending from a Roman war dog. They’re skilled in hunting and make excellent companion dogs. White is the rarest color for this breed, so they aren’t seen as often as the other colors. Commonly called ‘straw,’ the coloring is more of a cream hue than white. Straw coloring dates back to crossbreeding between the Cane Corso and the Abruzzese Sheepdog years ago. While Italian farmers historically valued this color, it isn’t recognized today by the American Kennel Club as an acceptable color for this breed.

How Rare Are White Cane Corsos?

White Cane Corsos are a non-standard color and the rarest seen in the breed. The most common color is black, while other colors, such as chocolate and Isabella, are also somewhat rare. These colors result from recessive genes, which are more challenging to breed. These dogs are rare because:
- The gene for the coloring is recessive
- The color is a result of crossbreeding
- Many breeders choose not to breed colors not acknowledged or approved by the AKC

These factors contribute to the white Cane Corso’s continued rarity.

Are White Cane Corsos Albino?

White Cane Corsos are more straw-colored than white. Like any other dog breed, some can be albino, but the straw color doesn’t automatically mean they are; that is rarely the case. Straw-colored Cane Corsos have a light-colored pigment to their coat, whereas albino ones don’t have any pigment. Albino Cane Corsos have other distinctions, such as a lighter nose, lighter eyes, lighter masking around the mouth, and more visible blood vessels.
How to Tell the Difference

How do you know you’ve got a straw Cane Corso versus a light fawn? In DNA tests of these dogs, a mutation at the E locus is evident. This mutation overrides any other colors, guaranteeing a straw coat. The genotype also prevents masking, which you can judge visually: a fawn coat will typically present with masking around the face, whereas a straw coat cannot produce any black pigment, so masking will be absent.

Breeding White Cane Corso Puppies

Breeding white Cane Corso puppies is difficult because the genes for this color are recessive, meaning each parent must pass on a copy. This can occur in a few different ways when breeding.

Situation #1: Two White Parents
With two white parents, each parent can only pass on the recessive gene, so all of the offspring are expected to be white.

Situation #2: One White Parent and One of a Different Color Who Carries the Gene
Genetic testing would have to be done to verify that the non-white parent carries the recessive gene for a straw coat. If the gene is verified, about half the offspring are expected to be white.

Situation #3: Two Parents of a Different Color Who Both Carry the Gene
This is the most common way for a breeder to end up with white offspring, although the odds are modest: on average, only one of every four offspring will be white.

Some breeders welcome the surprise of a straw-colored pup, and some are against it because the color is not recognized by the AKC. Those who are against it typically sell such pups as pets, not for competition. Beware of breeders marketing ‘rare’ white Cane Corsos and making that color the center of their business model.
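Under the simple one-gene model implied by the E-locus discussion above (straw coat = two copies of the recessive allele, written 'ee'), the expected fractions for the three breeding situations follow from a basic Punnett square. This is a sketch of idealized Mendelian ratios, not a full model of Cane Corso coat genetics.

```python
from fractions import Fraction

def offspring_fraction(parent1, parent2, target="ee"):
    """Fraction of offspring with the target genotype under a simple
    one-locus Punnett square. Genotypes are two-letter strings such as
    'Ee'; here 'ee' stands in for the recessive straw coat."""
    total = Fraction(0)
    for a in parent1:          # allele from parent 1
        for b in parent2:      # allele from parent 2
            geno = "".join(sorted(a + b))  # normalize: 'eE' -> 'Ee'
            if geno == "".join(sorted(target)):
                total += Fraction(1, 4)    # each allele pairing is 1/4
    return total

print(offspring_fraction("ee", "ee"))  # situation 1: two straw parents
print(offspring_fraction("ee", "Ee"))  # situation 2: straw x carrier
print(offspring_fraction("Ee", "Ee"))  # situation 3: carrier x carrier
```

The three results (1, 1/2, 1/4) match the expected outcomes of the three situations described above; real litters, of course, vary around these averages.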
These breeders are likely focusing less on the dogs’ health than on how to produce that color in the offspring, and even then, the results are still tough to achieve. You’ll be much better off with a higher-quality breeder.

Do White Cane Corsos Have Health Problems?

They don’t have health problems associated with albinism, but they are susceptible to the same health issues as other colors of Cane Corso. These include hip and elbow dysplasia, gastric dilatation-volvulus (GDV), cherry eye, and epilepsy. Skin problems can also occur with lighter-colored coats such as straw. These conditions can largely be mitigated with proper nutrition and care, although some, such as hip dysplasia, are genetic. Life expectancy is 10-11 years on average.

Purchasing a Straw Cane Corso

If they’re so rare, how do you get hold of one? A white or straw Cane Corso costs a pretty penny, since they’re a rare color for this breed. For a regular Cane Corso, you can expect to pay an average of $1,500 for a purebred, although that number can vary; the range is typically $1,000-4,000. Superior lineages cost much more, even nearing $10,000 in some cases. The straw color is so rare, though, that the odds of finding one to purchase are extremely low. There are only about 10-20 of these straw-colored dogs in the world, only one of which is in the United States. This is, of course, the ‘official’ number; there are probably some out there that breeders have sold as pets without reporting them to the governing bodies that track these figures. Some even say this color doesn’t exist and that it’s a myth. That’s how rare it is.

Choosing a Cane Corso

When choosing a dog, you usually decide on a breed based on personality traits and how well they fit your lifestyle. Color is rarely a factor. Remember this when looking for a white version of this breed, because the odds are pretty low that you’ll find one.
Most often, puppies are claimed before they’re even born, when deposits are put down. This is, of course, before the coat colors are even known. Then, once they’re born, customers and breeders typically work together to make matches based more on personality than color. This process doesn’t leave much room to find and request a straw or white coat specifically.

Matching by Personality vs. Color

The color of these dogs’ coats does not influence their behavior, so color doesn’t need to be factored into your decision when choosing a pup. The coat color can influence their lifespan and health, though. Some health issues are expected when these dogs are bred explicitly for color instead of for personality. Be sure to choose a reputable breeder to avoid these issues. Here are some examples:
- Eye deformities
- Color dilution alopecia (skin rashes and hair loss)
- Pyotraumatic dermatitis (lesions on the skin)
- Black hair follicular dysplasia (hair loss, skin infection, reduced coat quality and dry skin)
- Otitis externa (inflamed ears)

Other Colors to Consider

If you are set on choosing a pup based on color, there are many more to choose from than just straw for Cane Corsos: a beautiful palette of 12 other colors. Black is the most common color, and black brindle is the longest-living coat color.
Here are all of the options besides white/straw:
- Black – solid black coat
- Gray – gray coat; no masking
- Fawn – most common color variant; black or gray masking
- Red – coats vary from pale red to deep mahogany; black or gray masking
- Formentino – gray mask with a light fawn overlay; rare color
- Black brindle – red/fawn base with black stripes
- Reverse black brindle – a lighter color overlay that lightens the overlay region instead of darkening it as with regular brindle coats
- Gray brindle – fawn/red base color with gray overlay
- Chestnut brindle – red/brown base with a reddish overlay
- Blue – gray coat that’s diluted to a dark, almost bluish color
- Chocolate/liver – brown coat similar to a red coat, but no black masking
- Isabella – diluted shade of chocolate; no masking, but a purplish tint to the eyes and nose

Costs to Consider

As mentioned above, the purchase cost typically ranges between $1,000-4,000 for a purebred Cane Corso when you purchase from a respected breeder. Ask about receiving a health certificate before purchasing your pup or even providing a down payment. This can provide peace of mind that your pup has no severe genetic conditions that could cost significant money down the road. If you’ve got your heart set on a Cane Corso but don’t have the funds to purchase a puppy, you could search for a rescue to adopt. You’ll pay less upfront, only about $300-500, though a rescue will often have more training needs than a puppy. Here are some other costs to consider:
- Supplies – this includes a dog bed, a good leash, collar, tag, food and water bowls, poop bags, possibly a kennel, etc.
- Vaccines and checkups – these will be more expensive right off the bat, but are also a regular expense to factor in for when they’re due again.
- Heartworm and flea prevention – this is usually a monthly chewable purchased from your vet’s office.
- Food – Cane Corsos are large dogs, so their food cost is higher than it would be for a smaller breed.
- Grooming – this includes professional grooming every couple of months and a brush to use at home between grooming sessions, possibly a bath if they get dirty often, some dog shampoo, and a pair of nail clippers.
- Entertainment – this includes solo toys such as chew toys, antlers, ropes, etc., as well as toys to use together, such as frisbees, fetching balls, etc.
- Dog license – the cost of this varies based on where you live.
- Doggie daycare – this is only applicable if needed or occasionally.

If you’ve had your heart set on a white Cane Corso, it may be a bit of a heartbreaking revelation that they’re so rarely seen and extremely hard to find. It may be a bright spot, though, to learn there are so many gorgeous colors available for this loyal breed you know and love. Just because you don’t have much chance of snagging a straw-colored Cane Corso doesn’t mean you can’t find a more common one to love and join your family.

You will also like:
- Are Cane Corsos Hypoallergenic?
- Interesting Facts About The Brindle Cane Corso
- Have A Scaredy Cane Corso?

For more on the types of Cane Corsos, check out the video below:
Lasers play a key role in the manufacturing of the future. Whether for the tailored 3D printing of lightweight construction components or for precise bores made with an ultra-short pulse laser, the range of applications stretches across all sectors of manufacturing. Focused light is a core element of Industry 4.0, in which the virtual, digital world is connected to real manufacturing.

Localised welding in glass by way of ultrashort laser pulses © Fraunhofer IOF

The laser is the universal tool in production: it cuts, hardens, welds, polishes, measures, produces microstructures, traces errors and removes material. In the process, lasers impress with their high precision and speed. In contrast to mechanical tools, focused light works contact-free and does not wear out, even when processing high-strength steels or hardened glasses for smartphones.

The fact that lasers are so widely used in production technology today is thanks in part to Fraunhofer. In the last few decades, scientists, particularly from the Fraunhofer Light & Surfaces group (see box on page 8 and interview on page 14), have provided crucial impetus both in the development of new lasers and in their integration into production. Through research and development on behalf of laser manufacturers and innovative users, they contributed to Germany's current leading position in this market. According to the industry report of the associations Spectaris, VDMA and ZVEI as well as the Federal Ministry of Education and Research (BMBF), approximately 35% of the beam sources sold worldwide and 20% of laser systems for material processing come from Germany.

© Fraunhofer ILT

However, the potential of lasers is far from exhausted. Fraunhofer researchers work on next-generation lasers, readying them for use in production. An example is the high-performance ultra-short pulse laser (USP laser).
It produces light pulses that last only a few picoseconds or femtoseconds (trillionths or quadrillionths of a second) but are very rich in energy. For the sake of comparison: while a ray of light needs roughly 1.3 seconds to travel from the Earth to the Moon, in a picosecond it covers only about 0.3 millimetres.

Important foundations for the development and use of ultra-short pulse lasers were laid by experts of the Fraunhofer Institute for Applied Optics and Precision Engineering (IOF) in Jena and of the Fraunhofer Institute for Laser Technology (ILT) in Aachen. For instance, in 2009 scientists of the IOF demonstrated an ultra-short pulse laser with an output power of 830 watts. In 2010, the ILT experts in Aachen surpassed the magic mark of 1 kW with a femtosecond laser. Since then, 1.5 kW has even been reached at the ILT with a scaled version of the femtosecond laser. However, the researchers at the ILT do not work just on performance enhancement; they also develop tailored beam sources and new applications for ultra-short laser pulses.

But what distinguishes ultra-short pulse lasers from traditional systems? "Thanks to the expert selection of the pulse duration, the pulse energy and the right focus, the material can be heated so quickly and so strongly that it evaporates without melting," explains Professor Andreas Tünnermann, Chairman of the Fraunhofer Light & Surfaces group and Head of IOF in Jena. Material removal takes place precisely and only where it should, micrometre by micrometre. Such "cold processing" is not possible with conventional lasers, which produce heat-affected zones. For example, if a laser beam comes into contact with metal, the material melts partially and unevenness can form. The material must then be elaborately post-processed. This costs time and money.

Producing with light flashes

For a few years, experts have already been using ultra-short laser pulses to process even highly sensitive materials precisely and gently.
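The time and length scales quoted above are easy to verify with a few lines of Python. This is a back-of-envelope check using the standard value for the speed of light; the Earth–Moon distance is an approximate mean value:

```python
# Sanity check of the picosecond comparison: how far does light travel
# in one picosecond, and how long does it take to reach the Moon?
C = 299_792_458            # speed of light in vacuum, m/s
EARTH_MOON_M = 384_400e3   # mean Earth-Moon distance in metres (approx.)

# Time for light to travel from the Earth to the Moon
t_moon = EARTH_MOON_M / C          # ≈ 1.28 s

# Distance light covers in one picosecond (1e-12 s)
d_picosecond = C * 1e-12           # ≈ 3e-4 m, i.e. about 0.3 mm

print(f"Earth-Moon travel time: {t_moon:.2f} s")
print(f"Distance in 1 ps: {d_picosecond * 1000:.2f} mm")
```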
However, for a long time the process was used mostly just in research laboratories. The first industrial applications have emerged only in the last few years. Thus, in cooperation with Bosch, Trumpf and Friedrich Schiller University Jena, IOF researchers managed to turn ultra-short laser pulses into a successful series-production tool. The work of Prof. Stefan Nolte, who works at the Friedrich Schiller University and at the IOF, was an important pillar. The physicist researched the interaction between laser radiation and material, thus creating the scientific basis for processing almost all materials with the energy-rich, ultra-short laser pulses. The two industrial companies developed the technology further, making it possible to integrate it into manufacturing and system technology for industrial series production. For this, the experts received the Federal President's Future Award in 2013.

Now, USP laser systems with outputs of up to 1 kW are available on the market. For many industries, they open the way to new products that previously were extremely difficult or impossible to manufacture. The technology is used particularly where materials must be processed especially gently and precisely. For instance, extremely fine nozzles for direct petrol injection valves as well as better-tolerated stents are produced with the new lasers, and hardened glass is cut for smartphone displays. The key challenge now is to combine the available laser pulses with suitable process technology and thus to develop further applications.

A possible new area of use for USP lasers is structuring lightweight construction materials such as plastics or carbon-fibre-reinforced plastics (CFP). The modified surfaces absorb metal powder better. Thus, even lightweight construction materials can be coated using the highly efficient cold gas spraying process (cold spray technology). In this process, the material is applied to the base material in powder form at very high speed.
The coated plastics or CFP are interesting particularly for the aerospace industry as well as the automotive sector. However, they also enable a large number of applications in the electronics industry. With the cold spray technology, a heat-dissipating copper layer can be applied, free of air gaps, to non-conductive housings. In the joint EU project "Efficient Manufacturing of Laser-Assisted Cold-Sprayed Components" (EMLACS), researchers of the ILT are working with French, Dutch and German partners on the development of a corresponding process.

Ultra-short pulse laser parallel processing with multi-beam technology © Fraunhofer ILT

Ultra-short pulse lasers are of particular interest for the processing of glass, as they minimise stresses and thus possible damage such as crack formation. However, a sufficient understanding of the interaction between ultra-short laser pulses and the absorption effects in transparent materials has not yet been acquired. The "Femto Photonic Production" project aims to close this gap. The objective is to lay the foundations for the material processing of glass, sapphire and diamond. Based on these results, the optimal performance parameters for the different laser classes, adapted optics and system solutions are then to be derived for all relevant material classes and subsequently evaluated together with the industrial partners in experimental studies. The results are of particular interest for the manufacture of displays, modern LEDs or power transistors for handling large voltages or currents. In the research project, which was launched in October 2014, experts from the Fraunhofer Institute for Laser Technology (ILT) and the Chair for Laser Technology at RWTH Aachen University collaborate with the beam source manufacturers Trumpf, Edgewave and Amphos as well as the system providers 4Jet, LightFab and Pulsar Photonics.

Completely new manufacturing opportunities are opened up by selective laser melting (SLM).
Crucial foundations for this generative manufacturing process were laid by researchers at the ILT as early as the mid-1990s. Since then, they have continuously developed the process, which was patented in 1996. In SLM, the component is built up layer by layer from powder, directly on the basis of the computer-generated construction data (CAD) of the planned workpiece – without using binding welding fillers. The starting material is mostly a metal powder that is melted selectively by the laser beam, through local heat input, according to the calculated cross-sections of the CAD model. The whole process basically works like a printer, but in three dimensions. The process is now used in manufacturing – for example in tool construction, medical engineering, the automotive industry and the aviation industry.

Generative manufacturing offers numerous advantages. Neither special tools nor moulds are required. In addition, hardly any waste is produced – the excess powder can generally be reused. The extent to which generative laser manufacturing conserves resources in comparison to traditional processes is shown by the example of blade integrated disk (BLISK) turbine manufacturing. Previously, these high-quality parts were cut out of a huge material block, and a great quantity of the expensive material was lost in the process. Moreover, layer-by-layer production with laser cladding – in which a laser beam is aimed at the focus of a powder jet on the surface of the component to be processed – offers almost unrestricted scope for design and construction. The engineers can design a component in such a way that it fulfils its function optimally, without regard to the restrictions of conventional manufacturing. "With generative manufacturing, almost any complex geometries whatsoever can be realised, including with internal structures.
Thus components can be designed in a functionally optimised manner, without having to take the restrictions of previous manufacturing processes into account," emphasises Dr.-Ing. Wilhelm Meiners from the ILT. This makes the process of particular interest for lightweight construction. For instance, ILT researchers used the SLM process to develop parts including a very lightweight transverse control arm for a sports car with individually suspended wheels. Thanks to a hollow structure in the interior, it is simultaneously lighter and more stable than cast or machined components.

At this year's LASER World of Photonics trade fair, Fraunhofer, in collaboration with Materialise, is demonstrating how efficient the 3D technology is with regard to plastics. There, under the umbrella of the UNESCO Year of Light, they are exhibiting the word "LIGHT" in two-metre-high letters. What is special is that the letters consist of a complexly formed, airy grid structure that was manufactured by means of 3D printing on Materialise's patented mammoth stereolithography unit.

© Fraunhofer IWS Dresden / Jürgen Jeibmann

Previously, companies have used generative manufacturing with SLM above all for small metallic components. To enable large components to be printed by means of selective laser melting as well, researchers at the ILT developed a new system concept. "Instead of relying on scanner systems in the SLM process, in our system we use multi-spot processing – i.e. a processing head from which five individual laser beams emerge," explains Florian Eibl, a scientist at the ILT. The advantage: the melting process is parallelised, meaning that even large parts can be produced quickly and without additional effort. The new system concept was developed, designed and constructed in the excellence cluster "Integrative Production Technology for High-Wage Countries".
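The layer-by-layer SLM principle described above – slice the CAD model into cross-sections, spread a fresh powder layer, and melt only where the model is solid – can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical illustration; the data representation and all function names are invented here, not a real SLM control interface:

```python
# Highly simplified sketch of the selective laser melting (SLM) build loop.
# A part is modelled as a stack of 2D boolean cross-sections (True = solid).

def build_part(layers):
    built = []
    for cross_section in layers:            # one CAD slice per powder layer
        # 1. The recoater spreads a fresh, full layer of metal powder.
        powder = [[True] * len(row) for row in cross_section]
        # 2. The laser selectively melts powder only where the slice is
        #    solid; unmelted powder stays loose and can be reused.
        melted = [[p and solid for p, solid in zip(p_row, s_row)]
                  for p_row, s_row in zip(powder, cross_section)]
        built.append(melted)
    return built

# Toy 3-layer part with an internal cavity in the middle layer - the kind
# of geometry that is easy for SLM but hard to machine from a solid block.
part = [
    [[True, True], [True, True]],
    [[True, False], [False, True]],
    [[True, True], [True, True]],
]
result = build_part(part)
```

The multi-spot concept mentioned by Florian Eibl would correspond to processing several regions of each cross-section in parallel, one per laser beam, rather than scanning the slice with a single spot.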
With the generative manufacturing process, even components under a high thermal load can be produced from nickel superalloys. So that such hard-to-weld or even non-weldable high-performance materials can be processed with focused light, researchers of the Fraunhofer Institute for Material and Beam Technology (IWS) in Dresden are combining laser powder build-up welding with induction heating. "Through additional heat brought into the component locally and precise process control, the formation of hot and cold cracks can be suppressed," explains Dr.-Ing. Frank Brückner from the IWS. Nickel superalloys are used mainly in stationary gas turbines and jet engines, where they enable service temperatures above 700°C. With the new technology, other novel high-performance materials, such as intermetallic compounds of titanium and aluminium, can also be processed.

Focused light for Industry 4.0

For a few years, researchers from IWS Dresden have been developing processes and the necessary system technology to produce components directly from metallic materials in virtualised process chains. In the project "Additive-generative manufacturing – AGENT 3D", they are working on first designing products on the computer and then manufacturing them directly in an automated process, without further intermediate steps, as products ready for installation. The aim is to develop additive-generative manufacturing into the key technology of Industry 4.0. To this end, a consortium has been formed with 75 partners from business and science. The research project is part of the programme "Zwanzig20 – Partnerschaft für Innovation" (Twenty20 – Partnership for Innovation) supported by the Federal Ministry of Education and Research (BMBF).

How light can be used as a tool in increasingly digitalised production is being investigated in Aachen at the "Digital Photonic Production" research campus.
Behind the term "Digital Photonic Production" (DPP) is the concept of controlling laser photons with bits (computer data) and using them to combine atoms into materials – at any level of complexity, at unit numbers as low as desired, and at permanently low unit costs. "The laser is the only tool that works as quickly as a computer thinks," explains Christian Hinke, who leads the group for integrative production at the Chair for Laser Technology of RWTH Aachen University and coordinates the DPP Initiative, which is being strategically promoted by the BMBF over the next 15 years. One of the initiators and spokespersons of the DPP research campus is Professor Reinhart Poprawe, Head of the ILT.

Coated and generatively structured components made of different metal alloys © Fraunhofer IWS/Frank Höhler

The following areas of focus are being worked on at the DPP research campus: selective laser melting, the use of ultra-short pulse lasers, and selective surface processing with innovative semiconductor beam sources in which the light is emitted perpendicular to the plane of the semiconductor chip. With such vertical-cavity surface-emitting lasers (VCSELs), surfaces can be refined selectively – i.e. in a spatially resolved manner – very efficiently.

The ILT brings already existing activities into the research campus – for instance the EUR 10 million Fraunhofer innovation cluster AdaM. In the cluster, the ILT works with organisations such as the Fraunhofer Institute for Production Technology (IPT) on generative manufacturing processes with which components for aircraft engines and gas turbines for energy generation can be manufactured.

A key objective of the DPP research campus is to connect basic research, applied research and industry more closely with each other. For this reason, the parties involved are testing new forms of cooperation, such as the enrolment model.
Here, companies take up residence on the university campus and, together with the scientists from RWTH Aachen University and Fraunhofer, research topics that go beyond the short-term interest in new products. The companies do not just maintain small offices on the campus; their experts are also actively involved in research and education. This facilitates the knowledge transfer between science and business: the researchers find out what industry is interested in, and the companies can convert current research results more quickly into new products.

Industrial groups such as BMW, MTU, Philips, Siemens and Trumpf as well as small and medium-sized companies such as Amphos, Innolite, ModuleWorks and SLM Solutions are involved at the research campus. The Federal Ministry of Education and Research (BMBF) is supporting the project for a total of 15 years with up to EUR 2 million per year. Additionally, by the end of the year an innovation centre – financed by private investors with more than EUR 11 million – will be completed, in which interested cooperation partners from industry can rent offices and laboratories in direct proximity to the ILT.

With their work, Fraunhofer researchers contribute to making production with light as a tool fit for the challenges of the future.
18 October 2017

Refugee and migrant children in Europe

People have always migrated to flee from trouble or to find better opportunities. Today, more people are on the move than ever, trying to escape from climate change, poverty and conflict, aided as never before by digital technologies. Children make up one-third of the world’s population, but almost half of the world’s refugees: nearly 50 million children have migrated or been displaced across borders.

We work to prevent the causes that uproot children from their homes

While working to safeguard refugee and migrant children in Europe, UNICEF is also working on the ground in their countries of origin to ease the impact of the poverty, lack of education, conflict and insecurity that fuel global refugee and migrant movements. In every country, from Morocco to Afghanistan, and from Nigeria to Iraq, we strive to ensure all children are safe, healthy, educated and protected. This work accelerates and expands when countries descend into crisis.

In Syria, for example, UNICEF has been working to ease the impact of the country’s conflict on children since it began in 2011. We are committed to delivering essential services for Syrian families and to preventing Syria’s children from becoming a ‘lost generation’. We support life-saving work in health, nutrition, immunization, and water and sanitation, as well as education and child protection. We also work in neighbouring countries to support Syrian refugee families and the host communities in which they have settled.
Republican lawmakers in the United States have recently introduced new legislation at the state level regarding protests and demonstrations. The bills collectively place greater restrictions on individual protest rights and increase the penalties for those charged under such provisions. The majority of the laws are currently pending, but they send a dangerous message even if they do not pass. This movement indicates a shift in perception as to what constitutes an acceptable level of freedom of opinion and expression, freedom of assembly, and freedom of association. This worrying trend is prevalent in, but not exclusive to, the US, as similar measures have been enacted in other liberal democracies. When leaders repress civil rights in response to movements they are uncomfortable with, they risk the further erosion of foundational democratic values.

As of 21 April, G.O.P. lawmakers in 34 states have introduced 81 anti-protest bills during the 2021 legislative session, more than twice as many proposals as in any other year. A vast majority of the bills share common provisions, like expanding the definition of a ‘riot’, creating vague and ill-defined new crimes, or increasing penalties on already illegal conduct. Some expand the permissible levels of use of force by police officers, and others reduce the penalties for civilians who strike protestors with their cars. Bills in a number of states would also bar people from public employment, public benefits and public office.

A continuing trend

The recent push for more restrictive protest and demonstration laws in the US is not new. The movement began in early 2017 following the election of then President Donald Trump and was led exclusively by Republican lawmakers. The bills attracted the attention of the United Nations, particularly the Office of the High Commissioner for Human Rights.
Special Rapporteurs David Kaye and Maina Kiai – on the promotion and protection of the right to freedom of opinion and expression, and on the rights to freedom of peaceful assembly and of association, respectively – wrote a letter to US authorities in March 2017 expressing their concern over the proposed laws, which they referred to as ‘alarming’ and ‘undemocratic’. They noted that the bills, ‘with their criminalization of assemblies, enhanced penalties and general stigmatization of protesters, are designed to discourage the exercise of… fundamental rights.’

The letter further highlighted the ‘chilling effect’ the bills would have on the most marginalized communities, who often rely on the right to assemble to ensure that their voices are being heard. The Special Rapporteurs also expressed concern over the language in the bills, which refer to protests as ‘unlawful’ or ‘violent’. There can be no such thing in law as ‘violent protests’, the rapporteurs argue. There may be violent protesters, but they should be dealt with appropriately and individually, as ‘one person’s decision to resort to violence does not strip other protestors of their right to freedom of peaceful assembly.’

Language is a concern in some of the 2021 bills as well. The anti-protest bill in Florida, signed into law by Republican Governor Ron DeSantis on 19 April, uses language that conflates the ‘right to peaceful protest with the rioting and looting’ that sometimes results from protests.

Just as in 2017, the bills introduced in 2021 are a response to nationwide social justice movements. The Black Lives Matter movement surged last May after the killing of George Floyd by former police officer Derek Chauvin in Minneapolis, and lasted throughout the summer. This G.O.P.-led effort to restrict protests and demonstrations represents a growing trend among lawmakers to silence rather than engage with the message of protesters.
Republican lawmakers argue that Democrats are pro-crime and violence, while Republicans are the party of ‘law and order’, despite continued evidence that the 2020 summer protests, for example, were peaceful, including in a new study by The Washington Post which found that 96 percent involved no property damage or police injuries.

International law and human rights obligations

Freedom of opinion and expression, freedom of peaceful assembly and freedom of association are fundamental rights, not privileges to be determined state by state. Restrictions on these rights cannot be tightened or loosened in response to the subject matter of protests. Freedom of association is guaranteed primarily in the First Amendment to the US Constitution. Numerous provisions under international law also protect these rights. The right to freedom of peaceful assembly is guaranteed in article 21 of the International Covenant on Civil and Political Rights (ICCPR) and the right to freedom of opinion and expression in article 19. The right to protest is also reflected in article 8 of the International Covenant on Economic, Social and Cultural Rights (ICESCR). The right to freedom of opinion and expression and the right to assemble are guaranteed under articles 19 and 20 respectively of the Universal Declaration of Human Rights (UDHR).

The UN has reaffirmed the importance of protecting these fundamental rights by establishing and extending the mandates of a special rapporteur on the rights to freedom of peaceful assembly and of association (resolution 15/21) and on the promotion and protection of freedom of opinion and expression (resolution 7/36). The most recent extensions came in 2019 and 2020. At its forty-fourth session, 30 June – 17 July 2020, the Human Rights Council adopted resolution 44/12 on freedom of opinion and expression.
The resolution reaffirms that ‘the right to freedom of expression… is a human right guaranteed to all… [and] it constitutes one of the essential foundations of democratic societies and development.’ It further recognizes that ‘the effective exercise of the right to freedom of opinion and expression is an important indicator of the level of protection of other human rights and freedoms.’ Finally, the resolution highlights the importance of combating misinformation and disinformation, which pose serious risks to the exercise of these rights.

Resolution 44/20 on the promotion and protection of human rights in the context of peaceful protests similarly affirms that the right to peaceful protest and assembly is protected under international law. The resolution recognizes that ‘peaceful protests can make a positive contribution to the development, strengthening and effectiveness of democratic systems and to democratic processes, including elections and referendums.’ While certain contexts may necessitate restrictions on protests, like the COVID-19 pandemic, they must be ‘necessary, proportionate to the evaluated risk and applied in a non-discriminatory way.’

Protest rights at risk around the world

The worrying trend of increased restrictions on these fundamental rights is not exclusive to the US. In 2019, ‘mass movements demanding social, economic and climate justice swept the streets of Paris, London, Brussels and scores of other cities.’ Police forces in Paris responded with particularly harsh measures, and French courts convicted more than 21,000 people in 2019 alone for offenses such as contempt of public officials and organization of a protest without complying with notification requirements. In the UK, the recently tabled Police, Crime, Sentencing and Courts Bill gives police the power to impose severe restrictions on protests.
Opponents of the bill argue that it threatens ‘fundamental freedoms that the British public hold dear […] and by giving the police the discretion to use these powers some of the time, it takes away our freedom all of the time.’

2020 was a particularly pivotal year for protest and demonstration rights, as the risks posed by the COVID-19 pandemic coincided with the global movement against social injustice. But the international reaction to suppress protests has been concerning, particularly as it is currently playing out in the US. Restrictions on protests and demonstrations not only threaten fundamental rights like freedom of opinion and expression and freedom of assembly, but also risk initiating a domino effect and the subsequent violation of other fundamental rights. Crackdowns on the right to protest, demonstrate or assemble are often followed by further repression of free speech and access to information and, in the most severe cases, can lead to unlawful arrests.

The Carter Center identified freedom of association, freedom of assembly and freedom of opinion and expression as three of 21 fundamental obligations and rights related to democracy and elections. Should leaders continue to repress these fundamental rights, they risk further eroding the institutions and values that uphold democratic governance.

A man holds up his fist while hundreds of demonstrators march to protest against police brutality and the death of George Floyd, on June 2, 2020 in Washington, DC. Win McNamee | Getty Images
The fear of gaining weight can happen for many reasons; being overweight is often not the cause. The fear of weight gain can be a common factor in eating disorders like anorexia nervosa and bulimia nervosa.

Where does the Fear of Gaining Weight come from?

Fear of any sort is a very powerful emotion that can come from events around us that trigger our inner world to respond in a way where we may panic, feel nauseous, and feel the need to run away. Fear can be triggered by other powerful emotions such as a threat to our safety, humiliation, embarrassment or disgust. These negative emotions may come from many different sources; when it comes to the fear of gaining weight, they might include:

- Peers who are critical of weight;
- Chronic health conditions that prioritise dieting;
- Comments by others about appearance;
- Low self-worth and the emotional response of assuming others feel negatively;
- Family attitudes or habits that are focused around selective eating;
- Traumatic events in childhood or adolescence where the person has been shamed by others over their appearance;
- Media that focuses on “ideal” airbrushed images which the person is “sold” as their goal;
- Current anxiety and/or depressive symptoms, which can increase the fear and create a vicious cycle.

What does Fear of Weight Gain feel like?

Fear is the emotion we use when we are experiencing very strong negative feelings. It causes us to want to respond: you will have heard of the fight, flight or freeze response that happens when we are scared. Our minds and bodies respond to fear strongly, often with powerful changes in our psyche and in our actions to contain the fear. We feel less fear when our “plans” to manage the fear are activated. Often such plans don’t remove the fear; rather, they contain it.
When someone is experiencing fear, it is hard for others to say or do things that are helpful, as self-protection mechanisms around the fear often cause a person to disconnect from relationships, health choices, and long-term future plans. The fear itself is terrible, and deep down the person doesn’t want it controlling their life. The fear will often cause a person to identify food and eating as the source of pain, rather than the social situations around them which might have created the environment to become afraid in the first place. It is common for someone to feel angry or hypersensitive, and even to deny or minimise the fear around the triggers.

What can I do to Help Someone?

When someone is gripped by fear, it is not a useful strategy to simply dismiss or disregard that fear. Emotional support is very important, though it can be difficult to give: when someone has a fear, the last thing they want to do is face it. There are times when we have more courage than before, and times when the fear becomes all-controlling and unavoidable. Often it is at this point of feeling overwhelmed that a person will seek help, or ask others to support them. Ideally, the person will seek help earlier, if they can bring themselves to see someone. Being kind and supportive can help. Of course, if the situation is dire, the person is very unwell and you are worried, then seeking urgent medical treatment is warranted. The important things are to be supportive, and to help the person seek help if the situation is getting worse.

How can Fear be Treated?

Fear is treated with a range of approaches that can help both immediately and in the long run.
Some common strategies include:

- Building a supportive therapeutic alliance;
- Strategies to manage negative emotions;
- Building distress tolerance skills;
- Learning the story of how the fear came to be;
- Learning what the fear of food looks like in day-to-day living;
- Learning new ways of approaching fears so that control can be taken back;
- Changing the way thoughts or emotions may become exaggerated or out of control;
- Addressing social relationships so that they can become more supportive;
- Considering a diagnosis, and accepting that in some conditions the fear may be related to an eating disorder;
- Moving the focus to managing the fear or emotions, rather than managing the triggers.

If you or someone you care about has a fear of food or of gaining weight, and you are wondering about healthy approaches to managing the core negative emotions, then counselling might be a useful option. The first step is always to identify whether help is needed; if so, there are many options for managing it. Seeing your GP and sharing your concerns is always a useful idea.

Author: Vivian Jarrett, MAAPI, MAPS, MAICD, B Psych (Hons), GCert (ResCom).

Vivian Jarrett is the Clinic Director at Vision Psychology in Wishart and now M1 Psychology at Loganholme. She is passionate about providing high quality psychology services to Australians from all walks of life. To make an appointment try Online Booking. Alternatively, you can call Vision Psychology Brisbane on (07) 3088 5422.
Lesson on Solving Polynomials

How to Solve for a Polynomial Variable

When solving for a variable within a polynomial equation, we work algebraically to isolate it. To isolate a variable, we use the reverse order of operations to move all other terms and numbers to the side of the equation opposite the variable. Once the target variable is alone on one side of the equation, it is solved.

Solving Variables in Special Polynomials

Some polynomial equation variables cannot be solved via basic isolation techniques. For these special polynomials, we may use a variety of other solving techniques. Commonly used techniques are factoring and the quadratic formula. Factoring may be used when the polynomial can be rewritten as a product of simpler expressions. The quadratic formula may be used for second-degree polynomials. Sometimes a polynomial does not have any rational solutions, or even any real solutions at all. When this happens, we may employ a computer that solves using numerical computation. The calculator on this page uses numerical computation for these special case polynomials.

How to Factor Polynomials

Factoring a polynomial is effectively the reverse of simplifying terms grouped by parentheses. For quadratic polynomials, we may use a method called completing the square (see our lesson for a full tutorial). A polynomial must be in an equation to complete the square. If we are simply factoring a polynomial for the sake of reaching factored form, we are finished once the square is completed. However, completing the square also enables us to determine the zeroes or “roots” of an equation by converting it to factored form, if we desire a solution to a variable.

How the Calculator Works

The CAS is fed your polynomial and whether you are solving for x or factoring. The CAS treats the computation symbolically, preserving exact values of variables and numbers.
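The quadratic formula mentioned above can be sketched in a few lines. This is an illustrative Python snippet, not the calculator's actual implementation:

```python
import math

def solve_quadratic(a, b, c):
    """Solve a*x**2 + b*x + c = 0 via the quadratic formula.

    Returns a tuple of real roots; empty when the discriminant
    is negative (no real solutions).
    """
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                       # no real solutions
    if disc == 0:
        return (-b / (2 * a),)          # one repeated root
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x**2 - 5x + 6 = 0 factors as (x - 2)(x - 3)
print(solve_quadratic(1, -5, 6))  # → (3.0, 2.0)
```

The same three-way split on the discriminant is what tells a solver whether to report two roots, one root, or fall back to complex or numerical methods.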
In special cases where there are no rational or real number solutions, the CAS uses numerical methods to achieve a highly accurate approximate solution. Once your answer is calculated, it is converted to LaTeX code. LaTeX is a math markup and rendering language that allows equations to be printed graphically on webpages. This page’s local LaTeX script renders that code in the answer area as the solution you see.
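The numerical fallback described above can be illustrated with a simple bisection search. The real calculator's CAS presumably uses more sophisticated root-finding, so treat this as a sketch of the idea only:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by repeated halving.

    Assumes f(lo) and f(hi) have opposite signs, so a root
    is bracketed somewhere in between.
    """
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if flo * f(mid) <= 0:
            hi = mid                    # root is in the left half
        else:
            lo, flo = mid, f(mid)       # root is in the right half
    return (lo + hi) / 2.0

# x**3 - 2x - 5 = 0 has no rational roots; its single real
# root lies between 2 and 3.
root = bisect_root(lambda x: x**3 - 2 * x - 5, 2.0, 3.0)
print(round(root, 6))  # → 2.094551
```

Bisection is slow but unconditionally reliable once a sign change is bracketed, which is why it is a common fallback when symbolic techniques fail.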
Gifted & Talented Education

Gifted and Talented Program Mission, Vision, and Values

West Fargo Public Schools’ GATE program supports and encourages the growth, interests, and social-emotional needs of gifted and talented learners through rigorous curriculum and instruction focused on critical and complex thinking, high level questioning, and collaboration. West Fargo Public Schools’ GATE program employs equitable identification practices that support a diverse group of learners who are engaged in high-level instruction and social-emotional development through highly skilled professionals committed to continuous growth and inspiring life-long learners.

- Equitable Identification Practices
- Proportional Student Representation
- Social Emotional Development
- High Level Questioning
- Exploration of Interests and Ideas
- Real World Analysis
- Critical and Complex Thinking and Reasoning

GATE and Enrichment Curriculum

GATE and enrichment services provide systematic programming that emphasizes depth and complexity while making meaningful connections to the general classroom curriculum. Through this work, learners are exposed to tasks that require high levels of critical thinking and engagement. Services are provided to K-2 students on an informal basis, with formal identification occurring in the spring of 2nd grade. All K-2 learners receive whole-group thinking skills lessons from the GATE teacher throughout the year using the Primary Education Thinking Skills (PETS) curriculum, which provides opportunities for the GATE teacher to encourage critical thinking for all. Throughout the lessons, the GATE teacher and classroom teacher observe learner engagement and responses to guide their selection for additional small-group enrichment opportunities led by the GATE teacher. Thinking skills enrichment services focus on the higher levels of Bloom’s Taxonomy.
They offer opportunities for learners with various cognitive strengths as they engage in tasks and activities that support convergent analysis, divergent synthesis, visual/spatial thinking, and evaluative thinking.

K-2 learners performing well above the class average may be considered for math enrichment services. Teacher observation, classroom performance, and assessment data are used to match learner needs with the services and units provided. Small-group math enrichment is led by the GATE teacher during a learner’s math block and is centered around their ability to reason with numbers and engage in algebraic thinking. Learners are exposed to problems and activities that deepen their understanding of numbers as they work through big ideas such as proportional reasoning, computation, and variables. West Fargo Public Schools uses the Groundworks mathematics series for their K-1 small-group enrichment lessons, with the addition of a unit from the Project M2: Mentoring Young Mathematicians curriculum for 2nd grade. These experiences are correlated to a learner’s grade-level standards, as well as standards one to two grade levels above.

3-5 GATE Services

3-5 learners who have qualified through the verbal portion of the CogAT, are at or above the 97th percentile on STAR Reading, or are 30 percentile points above their class average may receive reading GATE services, which focus on literary analysis with an application to various writing genres. GATE services occur during a learner’s reading block and use a variety of evidence-based curricula to merge reading and social studies concepts. Learners use specific thinking skills and engage in analysis of literary works and primary source documents to gain a broad understanding of perspectives, big ideas, patterns, and systems. 3rd graders utilize the PETS curriculum as well as the CLEAR Curriculum from the University of Virginia to analyze figurative language, abstract ideas, and cultural context in poetry and fairy tales.
4th and 5th graders utilize the CLEAR Curriculum as well as the William and Mary Gifted Social Studies Curriculum units. 4th graders focus on analyzing and understanding critical literacy skills in fiction and nonfiction through reading and writing, with a social studies emphasis on the systems of Colonial America. 5th graders focus on learning and using advanced research skills, with a social studies emphasis on the cause-and-effect relationships of the American Revolution.

3-5 learners who have qualified through the quantitative or spatial portion of the CogAT, are at or above the 97th percentile on STAR Math, or are 30 percentile points above their class average may receive math GATE services, which focus on fostering inquiry and engagement by thinking deeply about mathematics through rich discussion and in-depth written explanations to justify their thinking. Students are engaged in mathematical investigations, games, and activities that challenge them to dive deeper into their conceptual understanding of mathematics and work to further develop their reasoning abilities to solve problems in a way that mirrors what practicing mathematicians do. West Fargo Public Schools uses Project A3: Awesome Advanced Activities for Mentoring Mathematical Minds with their 3rd-5th grade GATE students. These experiences are correlated to a learner’s grade-level standards, as well as standards one to two grade levels above.

The CogAT is a series of three assessments that give insight into a learner’s problem-solving abilities through their verbal, quantitative, and spatial reasoning. A screener is given to all 2nd grade learners in the spring. Those in 3rd-5th grade who are new to the district will be screened in the fall or spring, depending on their date of enrollment.

- Learners who obtain the necessary score on the screener, are at or above the 97th percentile on STAR, or are recommended by either the classroom or GATE teacher will take the full CogAT.
The full CogAT contains a series of six additional assessments, two in each of the verbal, quantitative, and spatial subsections.

- Learners who obtain the necessary score on the verbal portion of the CogAT may receive reading GATE services.
- Learners who obtain the necessary score on the quantitative or spatial portion of the CogAT may receive math GATE services.
- Learners at or above the 97th percentile on STAR may receive GATE services.
- Learners 30 percentile points or more above their class average may receive enrichment services from the GATE teacher.
- Learners performing well above the class average may be considered for enrichment services from the GATE teacher with supporting data.

Determination for GATE Services

GATE teachers will look at the full CogAT score, STAR scores, and teacher recommendations to match student needs with the services and units provided. GATE teachers will communicate with classroom teachers to ensure classroom work continues to support the need for GATE services. Families may elect to have a student “opt out” of the GATE program as desired. Learners in the GATE program are expected to maintain proficient or advanced scores in academics. Learners whose academic scores are novice or approaching may no longer participate in GATE, as deemed appropriate through the collaboration of the GATE and classroom teachers. Learners who do not qualify for GATE services may participate in enrichment opportunities and may be reassessed with the CogAT in subsequent years. Gifted education teachers work with learners during their math and reading blocks. These students go to the GATE classroom for services during the guided or independent practice portions of their math and reading blocks and will not miss classroom core instruction.
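The 3-5 eligibility pathways described above amount to an "any one criterion qualifies" rule. As an illustrative sketch only (the function name and inputs are hypothetical, and real determinations also weigh teacher recommendation and classroom data):

```python
def may_receive_services(cogat_pass, star_percentile, points_above_class_avg):
    """Hypothetical simplification of the per-subject GATE rule:
    a learner qualifies by passing the relevant CogAT portion,
    scoring at or above the 97th percentile on STAR, or scoring
    30 or more percentile points above the class average.
    """
    return (cogat_pass
            or star_percentile >= 97
            or points_above_class_avg >= 30)

# Reading pathway for a learner at the 98th percentile on STAR Reading:
print(may_receive_services(cogat_pass=False,
                           star_percentile=98,
                           points_above_class_avg=0))  # → True
```

The same function would be called once with reading inputs (verbal CogAT, STAR Reading) and once with math inputs (quantitative/spatial CogAT, STAR Math), since the thresholds are identical across subjects.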
General Curriculum Enrichment Enrichment opportunities are available for all learners in the core curriculum with targeted opportunities for high-achieving learners to take learning deeper and advance their skills through higher-level thinking. For more information, please contact the Director of Curriculum & Instruction, Heather Sand, at [email protected]. - Aurora Elementary, Legacy Elementary, Osgood Elementary - Brooks Harbor Elementary - Freedom Elementary, Independence Elementary - Harwood Elementary - Horace Elementary - South Elementary, L.E. Berger Elementary - Westside Elementary, Eastwood Elementary - Lisa Aune - Steven Brown - Meaghan Kirsch - Lisa Mercil - Erica Pollert
High School Students Build Tiny Houses For Flood Victims Stand in the center of this house and you'll find yourself in the living room and the dining room. And the bedroom. Oh, and also the kitchen. At 500 square feet and designed to hold as many as six people, the house makes for quite a tiny home. But for many, it's just enough for now. Since flooding in West Virginia last June killed at least 23 people and destroyed more than 5,000 homes, residents have been struggling to find adequate housing. These small homes, built by high school students in nearby vocational schools, may be the solution. Dakota Carte, a student working on the building project at Carver Career Center in Charleston, stands inside one of the houses, gesturing to different areas in the house. There's a loft for sleeping up top. Then there's a hot water tank, a fridge and a stove, all in close proximity. "This is a tiny house, so everything is a little compact," Carte says. The entire structure is a little bigger than a generous walk-in closet. Because so many West Virginia families are still struggling, the school board decided students would build tiny homes for flood victims rather than working on bookshelves or birdhouses. "Folks in West Virginia were still suffering even though all the press had gone away," said Kathy D'Antoni, who oversees the state's vocational schools. For the project, the schools received $20,000 from the Board of Education, along with significant contributions from neighboring communities. So far, 15 homes have been built. By participating, students can learn practical skills like carpentry, electrical work and plumbing. Emily Glover, a student at Marion County Technical Center, worked on construction with classmates after school. "You learn everything from laying it out to actually building it," she said. One of these homes will belong to Brenda Rivers, who lost her house to the flood last June. For months, she lived in a camper on the back of her daughter's property. 
She had partial flood insurance and received assistance from the federal government to help pay off her mortgage. Even then, she couldn't afford the down payment for another home. Her son offered her a mobile home, but she said she couldn't find anyone to move it. "The weather was getting bad, and I said 'Just let me have my tiny house until spring or summer,'" she said. Rivers said she can't imagine living in the tiny home long-term. (It is pretty tiny, after all.) But for her and other families benefiting from the project, the houses are a tiny but altogether significant step in regaining a home. Copyright 2020 West Virginia Public Broadcasting. To see more, visit West Virginia Public Broadcasting.
Apart from forest loss, poaching is one of the major threats to the rare Sumatran tiger and its prey. David Macdonald reports on the remarkable dedication, and bravery, of practical men working to protect tigers in the tropical forest of West-Central Sumatra.

A remarkable new paper, published as a result of fruitful collaboration between Fauna & Flora International’s Indonesia Programme, the Indonesia Kerinci Seblat NP Management Authority and WildCRU, and made possible by a grant to WildCRU from World Animal Protection, has explored the effectiveness of a longstanding Tiger Protection and Conservation Programme in Kerinci Seblat NP. Analysing a remarkable ten-year log (2005-2014) of records on the activities of Tiger Protection and Conservation Units (TPCUs), the first thing that became clear was the gruelling nature of the work: six units have intensively patrolled nearly half of the 1.4 million hectares of national park forest, with a total of 757 anti-poaching patrols conducted on foot, totalling 3,713 patrol-team days and 13,947 km walked. This huge effort paid off: the teams detected and destroyed 231 tiger snare traps and 3,652 prey snare traps. What is more, 619 active investigations into tiger poaching and trade were conducted, an average of 62 investigation reports per year. These led to 24 law enforcement operations in which 40 suspects were arrested: 19 suspected tiger poachers and 21 suspected traders. Of these, 37 suspects were found guilty and prosecuted. This project puts boots on the ground, and it pays dividends.

Alarmingly, in under half of the park, the teams documented information on the snaring of 24 tigers (not to mention sun bears and other carnivores). Furthermore, in the quest for sociological insight into the poachers’ minds, a detailed analysis revealed a peak in poaching in the two months before a major religious festival – when the demand for meat increases and hunting deer is an available option.
Also alarming was the leniency of the punishments given to the criminals these dedicated patrolmen brought to book: while Indonesian wildlife law allows a maximum five-year sentence and a fine equivalent to USD 7,600, our study showed that the average sentence was closer to one year, with an average fine of USD 107. An important conclusion, therefore, is that investment must continue not only in the TPCUs but also in the training of prosecutors and judges.
Collaborative writing is typically a process by which more than one person is responsible for the creation and revision of a particular piece of writing. This usually surpasses the use of an editor, as two or more people are involved in the planning stages of such writing as well as the execution and revision of the writing. Such writing introduces a great deal of complexity into the writing process, since collaboration can be difficult, but can also result in a much richer and more rewarding end result. Some people also consider collaborative writing to include any writing in which more than one person’s work is used, which could include written works that cite the research or works of other writers. Also called collaborative authoring, collaborative writing typically begins with two or more people coming together to plan out a piece of writing. This is where such work surpasses the use of an editor for a work written by a single person, since such editing usually occurs only during revision. The planning stages for collaborative writing often include the potential for debate and argument over the subject, how this subject is explored, and how work may be divided between collaborators. Depending on how the collaborative writing is performed, the group may elect a single person to write the actual document or may divide the work among the group members. When a single writer is selected, he or she then writes the document and the group may work together to edit or revise afterward. Collaborative writing in which the entire group writes usually involves splitting the work into pieces and giving each group member an assignment. They may then work together to assemble the individual pieces together and they may all revise and edit as necessary once complete. The process of collaborative writing can add both difficulty and greater reward to the writing process. 
Some collaborators may find it difficult to work together on a project, and many writers prefer to maintain greater control over their work than such collaborations afford. Multiple perspectives and voices within a written work can ultimately improve a project greatly, however, and such collaboration is common for research projects that include information beyond the scope of an individual researcher. Creative works may also benefit from more than one voice in the writing, though this often requires just the right combination of writers. There are some people who consider any work containing the content of more than a single individual to be collaborative writing. By this definition, then, any work that includes data, research, quotations, and ideas from another writer should be categorized as collaborative work. This can be a source of some debate, however, since the ideas utilized or cited in a work may be so radically challenged or reinterpreted that they no longer resemble the context in which they were initially presented.
As lockdown eased, I was able to take my two grandchildren to the local park. Stopping at our local café en route, I navigated the one-in-one-out policy and secured a coffee. As I pushed the children on the swings, I reflected that these little ones were just months old when lockdown, masks and social distancing began – they have never known anything different. Later that afternoon, I called in on my mother, who is in her 80s. She lives alone and for her, as for so many, the last year has been marked by loneliness and isolation. The truth is that whether we are two or 82, a furloughed 20-something or a parent who has managed the impossible task of home-schooling while home-working, Covid-19 has impacted all of our lives. It’s put our emotional wellbeing to the test and left many of us, including our children and young people, feeling worried and anxious. We use the term ‘emotional wellbeing’ to refer to the quality of our emotional experience – the barometer of how we think, feel and relate to others, and also to ourselves. Our emotional wellbeing affects how we see and understand the world, and it is an important part of our overall health.

Everyday activities to boost wellbeing

People who are emotionally healthy are better able to cope with life’s challenges, keep problems in perspective, and bounce back from setbacks. Of course, there are no guarantees: mental ill health can occur for all kinds of reasons. But during this last year many of us have discovered the value of slowing down and putting habits in place that will help nurture our wellbeing. While there are many important factors that affect it, there are some simple and practical things that will make a difference if we weave them into the fabric of our everyday lives:

- Taking time to relax and recharge – for example, soaking in a deep bath or going for a run.
- Breathing deeply and other relaxation techniques.
- Taking a break from our normal routine or getting a change of scenery.
- Making sure we get enough sleep and eat healthily.
- Talking to trusted friends and family, and seeking professional support if needed.
- Cultivating ‘an attitude of gratitude’ – intentionally noticing things to be grateful for.
- Looking for opportunities to help others or performing random acts of kindness.

Finding the positives

The phrase ‘liminal space’ describes a waiting area between one point in time and the next – the threshold between the old season and the new – and Covid-19 has undoubtedly catapulted us all into such a space over the last year. Family routines have been unsettled and we’ve been forced to realign our priorities and recalibrate our lives. But while it has been an unbelievably difficult and uncertain time, there have been some positives. In particular, the pandemic has given many of us an opportunity to pause and think about the things that are important to us. And we can take heart by recognising that this year has also been an incredible opportunity for growth. Research shows that the important quality of resilience – the ability to bounce back from setbacks – can really only be learnt in challenging times. That is certainly something the Apostle Paul would agree with, as he said in Romans 5:3–4: “Even in times of trouble we have a joyful confidence, knowing that our pressures will develop in us patient endurance. And patient endurance will refine our character, and proven character leads us back to hope” (TPT). Katherine Hill is the UK director of Care for the Family. She has recently written a book looking at how parents can boost the emotional wellbeing of their children: A Mind of Their Own (Muddy Pearl).
A friend of mine often jokes that he likes to read the biblical text because it sheds such interesting light on the commentaries. The joke raises a real issue. How do we keep the Word itself at the center of our preparation and proclamation? A powerful method for doing this is biblical storytelling. Instead of reading through a text once or twice and then setting it aside in the quest for what will preach, the pastor takes the Word to heart during the whole week. On Sunday, she proclaims the story itself which she has learned in small but steady steps each day. The sermon arises out of a living encounter with the Word and it comes about as close to writing itself as one could imagine. Having lived with the text during a seven day creative process, the week’s activities−both hopes and sorrows−so fill the story that it overflows into a sermon. Here is a week of preparation for a sermon on the Syrophoenician woman text that comes up at the beginning of September. On Sunday night, type out the story on a page so that its rhythms and breaks stand out: “From there he set out and went away to the region of Tyre. He entered a house and did not want anyone to know he was there. Yet he could not escape notice.” On Monday, enjoy your Sabbath rest. Shift this instruction to another day if you must, but listen to God’s command and do rest! On Tuesday, write the whole pericope out by hand three times speaking out loud as you write. You will be amazed at how much the story begins to sink into your memory simply by doing this. You may notice connections to the text and your life already. You are making your children sandwiches and you think of the “children’s bread.” On Wednesday, set a copy of the story on your car seat and tape one up in your bathroom mirror. When you see it, take a moment to read it out loud. Begin to try out different possible tones as you do this. Is Jesus tired because his day off was interrupted? Can you relate? 
Tuck that experience and emotion into your sermon. Go to your local text study and let that conversation impact the story. By the time you go to bed, hopefully you have gone through the story out loud several times.

On Thursday, see how much you can remember without having the text in front of you. Always do it aloud, by now incorporating gestures. Ask: how close do Jesus and the woman get to each other? As you head to the hospital for a visit, you run through the text, checking your cheat sheet at the red light. When you walk into the hospital room to visit the young girl awaiting surgery, her mother’s panicked look slips its way into the text. To have such a great need and know of your own helplessness! Tell yourself the text as you go to sleep, enjoying the rest that Jesus had been seeking.

On Friday, start refining your telling. Drag a friend into the sanctuary with you and fill up the worship space with the story, asking your partner to prompt you only when you are stuck. Stop to answer the knock at the door. Someone needs $20 for bus fare. Do what seems right to you and then tuck the experience into the story. If you can find the time, close your eyes and imagine the sights, smells, and sounds of the story as it happened. Bring in the sense perceptions from your own life. Let these aromas waft through the story. By Friday afternoon you will be filled with the story and the story will be filled with life. Sit down in the afternoon and write your sermon. Often the movements of the biblical text give form to your own sermon; your preaching itself becomes narrative. The Bible story is not abstract and distant from you. The Word has entered into your heart and has become the heart of your preparation.

On Saturday, do that wedding and then relax. Finally, on Sunday, proclaim the Word that has become a part of your life. If it is the first time introducing storytelling in a congregation, invite them to watch the story and take their eyes off any bulletin insert or Bible.
The first time you might want to proclaim the text in the children’s sermon when people’s expectation for text-like accuracy is much lower and when people tend to pay attention. In fact, the proclamation could be your children’s sermon. Proclaim the story as accurately as possible without obsessing about one hundred percent accuracy. I guarantee you that more of the text will be heard than was heard in last week’s reading. So if you drop a phrase the Word will live and still do its work. I hope you will give this method a try. For me, it has been an amazing practice that has fed me spiritually, shaped me as an interpreter, and led me into faithful proclamation.
Stand in the place where you live
Now face North
Think about direction
Wonder why you haven’t before

Why an obscure geodetic expression? It’s not all that obscure—it represents a foundational element requisite to nearly all human endeavor. It touches your life explicitly and in the abstract. All of the ways that humans have measured and gauged their world, each of the conventions they have used to visualize and convey representations of this physical world (posit), could be boiled down and expressed via the simple variants of “xyHt”: fore-aft (x), left-right (y), up-down (H), and epoch/time (t). This “DNA” of positioning is expressed as latitude/longitude, coordinates, orthometric elevation or even height above sea level—all expressions of location and geophysical shape originating in the root language of surveying and geodesy, fundamental to our physical existence, yet understood by few and spoken by even fewer. Every home, building, road, bridge, dam, tilled field, navigation system, location-based app, wireless network, Google Earth plug-in, GPS, GIS, or any map throughout history shares some of this common “DNA” and owes its viability to the work and legacy of surveyors and geodesists.

It is no accident that humans have chosen to express their world in terms of “xyHt”—this is how the human body and our senses view and navigate our physical surroundings. We stand erect for a high vantage, and we plumb to the gravitational center of our home orb. We orient ourselves to the horizons via the observed passage of our life-giving sun; our lives move to the rhythms of the sun, moon, seasons, and changes to our geophysical surroundings brought by the years. We are effectively walking surveying instruments, with level vials in our ears, gauging angles via the geometry of our limbs and the movement of the axis of our head and eyes.
We judge distance and shape via stereovision, observing lines of perspective, echoes, resonance, touch, and heuristics borne of lifetimes of conscious and unconscious observation. The devices, systems, and instrumentation we have developed often merely mimic this "organic surveying instrument," and even the most sophisticated technologies used to store the data of the geosciences take cues from the workings of the human brain. Indeed, modern science has identified the workings of "place" and "border" cells in the hippocampus region of the brain and how we make literal "mental maps" of our surroundings. An example is the booming field of automated feature recognition, as when a camera recognizes and focuses on a human face, or a vehicle-mounted mobile laser scanner picks out and identifies a specific type of road sign from a digital library. The human brain does not reserve dedicated synapses for the face of every person we know; instead, the basic geometry of a human face is stored, and we merely reserve data markers for what is unique about each of our acquaintances' faces. The marvels of image and data compression that bring an entire virtual world to our smartphones are another example of this principle of stored root patterns. So many of the geospatial sciences and the positioning, navigation, and location-based disciplines have sprung from these common roots, or are reaching new heights fueled by the power of precise measurement and data management. Civil engineering, construction, transportation, precision navigation, machine guidance, robotics, resource and asset management, location-based services—these professional and industrial beneficiaries of enhancements in geoscience technologies depend on the language of xyHt in much the same manner as the digital world breathes binary code.
A revolution in precise positioning has reached into agriculture with GPS-guided tractors; it now guides autonomous robots on land, in the air, and on and under the sea; it helps us find a nearby restaurant to our liking, feeds tsunami warning systems, warns us of vehicles in our blind spots, and populates the displays of smartphones and the new wave of "wearables." xyHt is the "paint" of the physical and virtual worlds of scientists, gamers, planners, consumers, and our classrooms. Seemingly disparate disciplines in the consumer, "prosumer," and professional realms share this "DNA"—the legacy and language of xyHt—and, increasingly, a promising future. Join team xyHt in exploring your world and discovering new opportunities enabled by the power of precise location, positioning, and measurement. Be prepared to venture out of your comfort zone or industry silo, and be prepared to be pleasantly surprised: you might find technologies and methods being used in fields and industries unfamiliar to you that might just be the solutions you are looking for—or never thought to look for.
History of Islam (Part 6) Harun al-Rashid died in 809 CE after a twenty-three-year rule, having become an icon of adventure, chivalry and piety. A patron of learning and the arts, he is also cited as an impartial judge who had the knack of finding the real culprit behind a crime. Mamun is remembered more for his academic benevolence. He established an academy, Dar-ul-Hikmah (the House of Wisdom), a pioneering institution across several continents. He established libraries and observatories and promoted various arts, philosophy and the sciences, writes DR ABDUL GHAFFAR KHAN. The Barmakids: Barmak was an Iranian nobleman. His son Khalid converted to Islam, and when the Abbasids launched a campaign on Khurasan, Khalid joined the expedition. Saffah made him a minister when he became victorious, a post he held through Mansur's tenure; later on he was appointed governor of Mosul. Yahya, Khalid's son, was appointed Harun's tutor, and Hadi had lodged him in prison. When Harun ascended the throne the Barmakids rose in power; Yahya, as Chief Minister, ruled the nation along with his two sons, al-Fadl and Ja'afar. They had their palace in eastern Baghdad and lived in grand style. They amassed fabulous fortunes, and their generosity became proverbial: even present-day Arabic regards Barmaki as a synonym for generosity, and 'as munificent as Ja'afar' has become a popular simile. They deserve credit for the developmental projects they undertook, which include the construction of mosques and canals. Al-Fadl was the first in Islam to introduce the use of lamps in mosques during the month of Ramadhan. Ja'afar's intimacy with Harun was disliked by his father Yahya, who regarded it with suspicion as immoral. As they had become too powerful, they posed a threat to the crown, and it became necessary for Harun to rid himself of the caucus. First, Ja'afar was slain on the charge of having an illicit relation with Harun's sister al-Abbasah, who is accused of secretly giving birth to a son sired by him.
The aged Yahya, with his distinguished son Fadl and other sons, was imprisoned. They died in confinement. All the property of the family, amounting to 30,676,000 dinars in addition to farms, palaces and furniture, was confiscated (Hitti: 294–96). Not only was Harun munificent in his charities, his cousin-wife Zubaydah also had her share of glory. In addition to being a model of fashion and beauty (in rivalry with al-Mahdi's daughter and Harun's half-sister, Ulayyah), she would tolerate at her table no vessels not made of gold or silver and studded with gems. She was the first to ornament her shoes with precious stones. On one holy pilgrimage she is reported to have spent three million dinars, which included the expense of supplying Makkah with water from a spring twenty-five miles away (Hitti: p. 302). The canal still exists around Makkah (in ruins). Harun died in 809 AD after a twenty-three-year rule, having become an icon of adventure, chivalry and piety. A patron of learning and the arts, he is also cited as an impartial judge who had the knack of finding the real culprit behind a crime. During his lifetime Harun tried to avert a possible war of succession among his sons. He made a will according to which he was to be succeeded by Amin, followed one by one by the other two brothers, Mamun and Mustasim; the two brothers, meanwhile, were made viceroys of two regions. The will was hung in the Kaa'ba to demonstrate its finality and the consensus among the brothers. However, because of his temperament and also at the instigation of his Chief Minister Fadl bin Rabe'a, Amin appointed his own son Musa as his successor, had the documents brought from the Kaa'ba, and tore them into shreds. The two brothers were also deprived of their viceroyalties. Amin was the son of an Arab mother, Zubaida, while Mamun was of a Persian one; hence the dispute between the two brothers took on a racial overtone. The Persians jumped into the fray to demonstrate their allegiance to the "son of our sister".
Mamun's minister Fadl bin Sahal sent a force under Tahir, while Fadl bin Rabe'a dispatched Amin's royal force under Ali bin Isa. The two forces clashed at Rye, where Ali bin Isa was killed. Tahir reported the news of victory; Fadl bin Sahal conveyed it to Mamun and congratulated him as Caliph. In several confrontations between Amin's Baghdadi force and Mamun's, Tahir emerged victorious every time. Ultimately, a three-pronged attack on Baghdad was launched, led by Zubayr on one side and Harshama on the other. Amin became utterly hopeless and sought refuge with Harshama, who was ready to oblige; but Tahir's men arrested him before he could reach Harshama. He was killed and his head was sent to Mamun. He was the first Abbasid Caliph to be killed by his own people (813 AD). Amin's death brought the entire land under Mamun, who ascended the throne in 813 AD. Fadl bin Sahal had been his great mentor and hence became the Chief Minister. He made the Persians dominate all offices of power, causing the Arabs to turn hostile. Having been in the company of Yahya Barmaki, Fadl bin Sahal had a soft corner for the Alids, and he created such an impression that many were surprised and shocked by actions taken in Mamun's name. Instead of the black costumes worn by the Abbasids, Mamun began wearing green (Alid) costumes; he married his daughter to Imam Ali Raza and declared him his crown prince. The Abbasids became highly enraged and made Mamun's uncle their caliph. Meanwhile the Umayyads also tried to exploit the opportunity to their advantage and tilt power in their favour. Disgruntled Arabs espoused their cause, and it took the Abbasid forces five years to suppress them. Ibn Taba Taba, a descendant of Hassan, led another uprising in Kufa. Along with Abu Saraya he defeated the Abbasid forces and became master of southern Iraq; Ibn Taba Taba was declared caliph in Kufa. To Mamun's good fortune, the alliance between the two did not last long, and Abu Saraya had Ibn Taba Taba poisoned.
With some more luck, even Abu Saraya's menace came to an end in a battle led by another Alid. The Makkans were also not far behind in choosing a new caliph, Muhammad bin Jaafar Sadiq; Abbasid forces besieged the holy city and brought the insurrection to an end. During all these upheavals Mamun was away from the capital Baghdad, preferring to live at Merv on the advice of his Chief Minister Fadl bin Sahal. One of the architects of the Abbasid victory, Harshama, went to Merv and bluntly asked Mamun to take stock of the situation and return to Baghdad. Mamun had been blissfully unaware of all these developments, as such facts were never brought to his notice on Fadl's instructions. Ali Raza apprised him of the developments and convinced him to be vigilant of Fadl's strategies. An angry Fadl had Harshama killed. While Mamun marched leisurely towards Baghdad, the two rivals Ali Raza and Fadl were killed in separate incidents. With the contenders gone, there was no longer any cause for protest. Returning to Baghdad, Mamun assumed all power with a totally different orientation. He placated the Abbasids by restoring the black Abbasid costume. A brigand by the name of Babek made himself master of Mazendaran. A follower of the Khurramiya sect, he believed in the transmigration of souls and other heretical doctrines. Adopting guerilla tactics, he took the Byzantines as allies and caused great havoc to the Abbasids. In 827 AD Mamun published an edict by which the Mu'tazilite doctrine was declared the state religion. Islamic scholars like Imam Ahmad bin Hanbal opposed it strongly, for which he was lodged in prison. Mamun is remembered more for his academic benevolence. He established an academy, Dar-ul-Hikmah (the House of Wisdom), a pioneering institution across several continents. He established libraries and observatories and promoted various arts, philosophy and the sciences.
For such distinction Hitti lavishes praise on the Abbasids, in words that might make Europe angry: "All this took place while Europe was almost totally ignorant of Greek thought and science. While al-Rashid and al-Mamun were delving into Greek and Persian philosophy, their contemporaries in the West, Charlemagne and his lords, were reportedly dabbling in the art of writing their names" (Hitti: p. 315). During Mamun's regime, Africa, Yaman and Khurasan saw the establishment of three new dynasties, the Aghlabids, Ziyadids and Tahirids, who paid annual tribute and had his name recited in the Friday Khutba. As desired by his father, Harun al-Rashid, Mamun nominated his brother Mustasim to the throne against the wishes of his son Abbas's followers; Abbas showed the magnanimity of taking the oath of allegiance to his uncle. Till then the Muslim force had relied solely on the Arabs; under the Abbasids the tilt turned in favour of the Persians, and thus the army and court were divided into two hostile camps. In order to be free from the dominance of either of the two, Mustasim made the fatal mistake of turning towards the Turks, who in course of time became so powerful that they began making and unmaking caliphs at their sweet will, ultimately reducing the Abbasid caliphs to stipendiary servants. Later on Mustasim himself realized his monstrous folly, but it was too late to retract. Though totally illiterate, Mustasim was an able administrator, and peace prevailed throughout the land. There were, however, several confrontations with the Romans, who used to invade Muslim settlements and blind the Muslims they captured. On one occasion they raided a town and captured Muslim women. A female member of Mustasim's family was among those captured, and she cried out for help, asking Mustasim to rush to her aid. When he was apprised of her pathetic tale he was greatly shocked. At the head of a great force he attacked the Romans and set them right. He died in 842 AD.
He was a man of great physical prowess: he could deface the inscriptions on coins by rubbing his fingers on them, and he could lift a fully loaded beast of burden in his hands. The Turks residing in Baghdad created a great menace. Being in favour with the caliphs, they had become too arrogant and would knock down pedestrians while galloping on their horses, which made them extremely unpopular among the residents of Baghdad. To solve this problem Mustasim decided to build a new city. Samarra was built on a lavish scale, with palaces and gardens, on the banks of the river Tigris sixty miles from Baghdad, and he made it his capital. Babek, who had caused havoc during Mamun's reign, had still not been contained; several campaigns were undertaken to kill him, but in vain. A large force under his Turkish general Haider Afshin finally assaulted his fortress. Taken captive, he was sent in chains to Samarra, where he was executed. The victorious general was given a royal welcome and loaded with honours. Soon after, however, he was found guilty of instigating the Magian prince of Tabaristan, Maziar, to revolt, and of hatching a conspiracy. Maziar was brought in chains and executed; Afshin was imprisoned and died in captivity. During Afshin's trial it was revealed that many of the caliph's troops were not even Muslims, and Mustasim purged his army of all such people. Wathiq, son of a Greek slave mother, succeeded his father in 842. Continuing his father's policy, he further aggrandized the Turks. He appointed a Turkish general, Ashnas, as Vicegerent of the Sultan, enjoying more power than the chief minister, and decorated him with a jeweled girdle and sword. This fatal mistake inflamed the Arabs, who rose in revolt at several places, resorting to pillage and sparing not even Makkah and Madinah. It was during his regime that Muslims were able to continue occupying a strip on the southern Italian coast. He also managed an exchange of 1,600 prisoners of war with the Byzantines.
However, when the Byzantines became aggressive by invading Muslim lands (Damietta in Egypt), he was claimed by death in 847 AD before any campaign could be launched. Like Mamun, he patronized literature and science and encouraged commerce and industry. He was a poet and philosopher, well versed in music; he was the author of many melodies and could play the lyre skillfully. He acted more as a constitutional head than as an administrator. Mutawakkil, a staunch Sunni and conservative, made many people unhappy. His intolerance not only made Jews and Christians reel but also made the Shias uneasy. He was hostile towards the Alids to such an extreme that he had Imam Hussain's grave exhumed. Because of his animosity towards non-Muslims he imposed several restrictions: Jews and Christians were not only required to dress distinctly to declare their identity but were also deprived of permission to ride a horse. Reacting sharply to the Mu'tazilite doctrines endorsed by his predecessors Mamun and Wathiq, he expelled the rationalists from public office and banned discussions on the sciences and philosophy. Fundamentalist scholars imprisoned by the earlier rulers, including Imam Ahmad Ibn Hanbal, were set free, while Qadhi Daud, his son and other prominent Mu'tazilites were imprisoned and their properties confiscated. He banned pilgrimage to Najaf and Kerbala and confiscated the Alid property at Fidak. Ibn-us-Sikkit, a great scholar, was tutor to his sons. One day Mutawakkil asked him who was dearer to him, the two princes or Hassan and Hussain. The scholar replied that even a freed slave of Ali was dearer to him than the princes, and the enraged caliph had him put to death. Ibn Zayyat, the Chief Minister during Wathiq's regime, had invented a machine of torture, which had been used to kill several fundamentalist scholars; he had also humiliated Mutawakkil in those days. After assuming power, Mutawakkil ordered Ibn Zayyat's execution in the very machine he had invented for others.
The Sufi Dhun Nun propounded the doctrine of gnosis (the communion of man with God), which was heresy from the fundamentalist perspective. He was summoned to Samarra, where Mutawakkil posed him several questions; finding nothing objectionable, the caliph set him free. His regime witnessed earthquakes and other natural calamities. He provided large-scale relief measures and led special prayers for protection against such calamities. Replacing Wathiq's Turk Vicegerent, General Ashnas, he elevated General Wasif to the post. He had cordial relations with the Turk generals Itakh and Bugha. When he developed differences with his boon companion Itakh, he had him killed; later on, General Wasif's property was confiscated. Bugha too had become too stubborn to manage, and Mutawakkil began contemplating a plot to have him killed. However, it was Bugha whose conspiracy with Mutawakkil's son brought the caliph to an abrupt end. He was the first caliph to be killed by his own army; later on, this became a routine affair. While Sunnis hold him in high esteem, the Shias condemn him as the "Nero of the Arabs" (Ameer Ali). The Turk generals raised Muntasir, the dead caliph's son and their ally in the conspiracy, to the throne. He reversed his father's anti-Shia policy: not only did he have the mausoleums of Ali and Hussain rebuilt, he also allowed people to go to Najaf and Kerbala on pilgrimage, and he restored the Fidak property to the Alids. He also withdrew the restrictions imposed on Jews and Christians and allowed them to build new temples. However, he did not enjoy a single day of peace; the ghost of his slain father haunted his nights. At one state function, an inscription on a Persian carpet caught his attention: "I am Shiruyah, the son of Khusro; I slew my father and did not enjoy the sovereignty for more than six months." An unnerved Muntasir became fully convinced that, like Shiruyah, he too would die within six months. And this is what actually happened!
Muntasir denounced the Turkish generals for their role in Mutawakkil's murder. This enraged them, and they conspired with the state physician, Ibn Tayfur, to poison him. On the pretext of bleeding him, the physician poisoned Muntasir to death. He died in 862, within six months of his succession. Mustain (862–866 AD) was a non-entity with no will of his own, a mere puppet in the hands of the Turk generals. The caliphate lost its prestige, and he contributed largely to the disintegration of Abbasid power. A popular doggerel describes his plight: "A caliph in a cage, / Between Wasif and Bugha; / He says what they tell him, / And speaks as a parrot." After Mustain's abdication, Mutazz (866–869) was elevated to the throne, but his plight was similar. Within three years his Turk generals had become his greatest foes. The army killed Wasif, in charge of finance, as he could not pay them their arrears of salary. Bugha wanted to eliminate Mutazz, but before Bugha could act, Mutazz had him killed. The murders of Wasif and Bugha, however, did not ease the situation. The army besieged the fort and demanded payment of arrears. Mutazz went to the Queen Mother for a loan of 50,000 dinars, which she declined, though she had double that amount with her that day. He was forced to abdicate. He was then led to a hammam, where he was forced to bathe in hot water. When he felt thirsty, water was denied him; when his thirst grew intense he was given ice-cold water. Drinking it, he dropped down dead. What an affectionate mother, and how compliant an army! Abu Abdullah, a son of Wathiq, was offered the throne after Mutazz's abdication. Pious by disposition, he refused to accept, as he had been under an oath of allegiance to Mutazz. The Turk generals produced Mutazz, who absolved him of the oath. Thereafter he assumed office as Muhtadi in 869 AD. A chaste and firm ruler, he tried to emulate Umar bin Abdul Aziz, following an austere way of life and forbidding all extravagant practices.
Putting down all wanton practices, he brought singing to an end. The harem intrigues, in which the Turk generals were divided into hostile camps, continued unabated; each group tried to overpower the other, competing in hostilities. They wanted Muhtadi to abdicate, which he declined to do, and he died fighting valiantly. The eight years after Mutawakkil thus saw the ascension and departure of three caliphs, each done to death by the Turk generals. Such anarchy plunged the country into chaos. Roman atrocities on the borders mounted, and they began occupying territories wherever they could. This era also saw the establishment of an independent Zaidi state in Tabaristan (250 AH) and a Saffarid state in Sejistan (253 AH) under the leadership of Yaqub bin Laith, while the Tulunids gained power under Ahmad bin Tulun. Abul Abbas Ahmad, a son of Mutawakkil, was released from prison in 870. Though colourless, he reigned as Mutamid for a very long tenure of twenty-two years (870–892). The past ten years had seen not only the growth of the Turk generals to enormous power but also their internecine quarrels and rivalries; they themselves asked the caliph to appoint a strong man as their commander-in-chief. He appointed his brother Muwaffaq, who dealt with them very firmly and reduced them to non-entities. This, however, made Muwaffaq so powerful that he became a great menace to the caliph himself. Seeing such affairs of state, the Samanids established themselves in Transoxiana under the leadership of Nasr bin Ahmad. Mutamid's period was rocked by two great uprisings which substantially shook the state's authority. The first was the rising of Negro slaves known as the Zanj revolt. The slaves were pseudo-communists, sharing not only property but also women, and were fanatical. They were led by a Persian, Bihbud, who claimed to be a divine emancipator. The Zanj practiced heretical doctrines: they praised Abu Bakr and Umar but abused Uthman, Ali, Talha, A`isha and Zubayr.
For their heresies they were condemned as Khabeeth. Together they captured several cities and let loose a reign of terror; in Basrah they killed 500,000 Muslims in a single day, and Muslim women and children were sold as slaves. It took the Abbasids fourteen to fifteen years to crush them; Bihbud was killed in action. The second menace, which persisted for years after Mutamid, was the rise of Qaramat, who posed as a pious, devout Muslim while introducing several heretical practices. He held that ceremonial uncleanliness did not require ablution, that only two days of fasting in Ramadhan would suffice, and he even changed the Qibla back to Jerusalem. Qaramat was arrested at his Sawad headquarters and lodged in jail, from where he escaped to Syria, and his followers let loose a reign of terror. Mutamid smarted under the oppression of his brother Muwaffaq, the commander-in-chief. He tried to escape to Egypt to seek asylum under the Tulunids, but he was pursued and brought back. Though Muwaffaq died in 891, his son Abul Abbas proved stronger than his father and coerced Mutamid into changing his will. Thus he was the first Abbasid caliph to change his will from his son's favour to his nephew's. He died suddenly in 892; it was suspected that he had been poisoned. He had brought the capital back to Baghdad from Samarra. Mutamid's nephew Mutazid succeeded him, having forcefully secured the revocation of the earlier nomination. He was a powerful person with a firm grip on administration; however, the Qaramatian excesses made his successes pale into insignificance. For his ruthlessness he was called Saffah the Second. His marriage with the Tulunid princess Qatr-un-Nada, daughter of Khumarawayh, was a re-enactment of Mamun's marriage with Buran; the dowry, among other things, included 4,000 jewelled waistbands. He died after a ten-year rule in 902. He was a man of great personal courage, with the reputation of engaging a lion single-handed.
On his deathbed he kicked his physician, who fell down dead. His son succeeded him, assuming the throne as Muktafi. Muktafi reformed the administration and pursued liberal policies. Destroying the underground prisons built by his father, he converted the sites into places of worship. He restored to the people their confiscated properties and used to personally redress their grievances. Such measures made him immensely popular. He conducted regular annual campaigns during the summers to keep the Byzantines in check; in one such campaign the city of Antaliyah was captured, and in another the city of Thessalonica was sacked. The Qaramatian menace grew in magnitude. In one of the campaigns Yahya bin Zakariya, their leader, was killed, and his brother Hussain became their chief. Hussain had a mole on his face which he described as proof of his Mahdi-hood, and he took the title al-Muddathir, declaring that the Holy Qur'an's Surah Muddathir (LXXIV) referred to him. Muktafi himself took the field and had Hussain killed. However, the Qaramatian fury remained unabated; in 906 they captured Kufa and began threatening Baghdad. Muktafi died in 907 after a five-year rule.
By Dimitrios Zavos

Rule of Thirds grid

The ability to produce aesthetically pleasing images in photography, as in any other form of visual art, extends far beyond the technical abilities of the photographer and delves into the mysterious realm of visual perception. There is no way to accurately define how individual viewers "see" an image and what they get out of it. There are, however, long-established guidelines on what the human eye tends to find visually harmonious and thus more aesthetically pleasing. These guidelines form a separate chapter in the book of photography techniques, called "Composition", and the purpose of this article is to examine and explain the principles behind the most common of its "rules", known as the "Rule of Thirds". The Rule of Thirds is a guideline that proposes that an image should be divided into nine equal sections by two equally spaced horizontal and two equally spaced vertical imaginary lines. The principle behind it is that when important compositional elements of a photograph are placed along these lines or on their intersections, the resulting image is better balanced and thus more visually appealing. Keep in mind, though, that ignoring this guideline doesn't automatically make your images unbalanced or uninteresting. On the contrary, there are numerous examples of beautiful compositions that break this rule. But to break a rule one needs to be aware of it first, right? When taking a photo there are two ways to implement the rule of thirds: by imagining the existence of a grid, dividing the frame into three vertical and three horizontal sections in your mind; or by setting up your DSLR to show the relevant grid lines on its Live View LCD screen. Not all cameras offer this function, but in most it can be enabled from the settings menu.
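As a quick illustrative sketch, independent of any particular camera or editing tool, the grid-line positions and their four intersections (often called "power points") can be computed directly from the image dimensions. The function name and example dimensions below are my own, not from the article:

```python
def thirds_grid(width, height):
    """Rule-of-thirds grid lines and their four intersections
    ("power points") for an image of the given pixel dimensions."""
    verticals = (width / 3, 2 * width / 3)      # x positions of the two vertical lines
    horizontals = (height / 3, 2 * height / 3)  # y positions of the two horizontal lines
    power_points = [(x, y) for x in verticals for y in horizontals]
    return verticals, horizontals, power_points

# For a 3000x3000 frame the grid lines fall at 1000 and 2000 pixels on each axis.
v, h, points = thirds_grid(3000, 3000)
```

Placing a subject on or near one of the four returned intersections is the usual way the rule is applied in practice.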
When shooting landscapes, most people tend to place the horizon at the center of the frame; however, landscapes tend to look more natural when the horizon is nearer the top or the bottom third of the frame. Obviously, this depends on which element of your composition you wish to highlight. If the most interesting part is the foreground, the image will work better with the horizon close to the top third of the frame. On the other hand, if you have an impressive sky, try placing the horizon in the lower third of the image to give the sky more real estate.

Rule of Thirds - Landscape

In the example above, the horizon sits low, in the bottom third, to maximize the contrast between the blue of the sky and the weathered stone of Broadway Tower, which is positioned at the right third of the frame and provides a natural focal point. It also gives the viewer the impression that the Tower is overlooking the surrounding landscape. When capturing moving subjects, try to leave space in front of them into which they can move, considering the direction of movement. This gives your image a sense of "flow" and continuity of motion, while subliminally suggesting the "next step" to the viewer. In the following image, for example, the frog has been framed with enough space in his direction of movement, into which he could leap. This empty space is also known as "breathing space" in a composition.

Composition following the Rule of Thirds and allowing space into which the frog can move

But, as mentioned above, you might wish to break this rule if you want to convey a different message through your photo, as in the following example, where the frog has been placed close to the edge of the frame, regardless of his direction of movement. I just thought the image worked better with the subject in the foreground and the impression that it's about to leap out of the frame. It's simply a matter of judgement and of what "works best" in the photographer's eyes.
Composition following the Rule of Thirds, but breaking the "Rule of Moving Space"

When shooting portraits there are a couple of ways to apply the Rule of Thirds. If you wish to fill the frame with the face and upper torso of your subject, the eye line should be placed along the top horizontal grid line and the end of the chin along the bottom one. But if you want your portrait to look less like a mug shot, try placing your subject off-centre, along one of the vertical grid lines, and experiment with the background.

Off-center portrait, following the Rule of Thirds

Finally, you can always apply the Rule of Thirds in post-production. Try revisiting some of your old, badly framed photos and using the Crop Tool of your editing program to re-frame your subject. You will surely discover some diamonds in the rough this way...! The following example shows the "Before" and "After" stages of a photo that has been re-framed:

Original photo without any re-framing

Re-framed to comply with the Rule of Thirds

I'll leave it to you to decide which version makes the strongest impact... This completes our guide to understanding the impact the Rule of Thirds can have on your photography. I hope you'll experiment with it on your next photographic session, and who knows... the results may put a smile on your face...! As always, feel free to post any comments or questions you may have in the "Comments" section below. Until next time, keep feeding your creativity!
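The re-framing step above can also be sketched as plain arithmetic. The helper below is hypothetical (not part of any editing program's API): given the subject's position, it picks the crop origin that lands the subject closest to one of the four rule-of-thirds intersections while keeping the crop box inside the original image:

```python
def thirds_crop(img_w, img_h, subject_x, subject_y, crop_w, crop_h):
    """Choose a crop origin (left, top) of size crop_w x crop_h that places
    the subject as close as possible to one of the four rule-of-thirds
    intersections of the cropped frame, without leaving the original image."""
    best = None
    for fx in (1 / 3, 2 / 3):
        for fy in (1 / 3, 2 / 3):
            left = subject_x - fx * crop_w
            top = subject_y - fy * crop_h
            # Clamp the crop box so it stays inside the original image.
            left = min(max(left, 0), img_w - crop_w)
            top = min(max(top, 0), img_h - crop_h)
            # How far the subject ends up from the intended power point.
            err = (abs(subject_x - (left + fx * crop_w))
                   + abs(subject_y - (top + fy * crop_h)))
            if best is None or err < best[0]:
                best = (err, int(left), int(top))
    return best[1], best[2]

# e.g. a 3000x2000 crop from a 6000x4000 frame with the subject at (900, 700):
# the box is pinned to the left edge and the subject sits near the upper-left
# power point, giving an origin of (0, 33)
```

A real workflow would hand the returned origin and crop size to an image library's crop function.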
The book, published in 2016, chronicles the lives and achievements of three Black women (Dorothy Vaughan, Katherine Johnson, and Mary Jackson) and the racism and gender discrimination they overcame during and after World War II. Hidden Figures by Margot Lee Shetterly is a nonfiction account of the Black women who worked as human "computers" at NACA and NASA from the 1930s to the 1960s; more than 50 years ago, they confronted employers who hired them only reluctantly. Hidden Figures is a clever title: it alludes to the film's protagonists, known as computers due to their ability to calculate numbers accurately and rapidly by hand in support of the U.S. space program. But Hidden Figures isn't a documentary. The 2016 film adaptation dramatizes the journey of the three mathematicians, played by Taraji P. Henson (Katherine Johnson), Octavia Spencer (Dorothy Vaughan), and Janelle Monáe (Mary Jackson). (You may know Henson as the acid-tongued, fur-wearing matriarch Cookie on Fox's Empire.) Memorable moments include John Glenn calling to have Katherine Johnson check his math, and the judge who allowed Mary Jackson to attend classes at the all-white Hampton High School. In that scene, Mary takes her case to court, persuading the judge to relax the rules with a masterful example of effective influencing; she is the hero of the scene, working the judge until he gives her what she wants. Dorothy Vaughan earned a degree in Mathematics from Wilberforce University in 1929, was hired by NACA in December 1943, retired from NASA in 1971, and died on November 10, 2008.
<urn:uuid:af293cc1-7587-40ce-ae25-4003fafe59ba>
CC-MAIN-2023-50
https://xmtsx.wagenhuber-raumdesign.de/eil.miomia.de/ortofon-red-vs-grado-green.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.954535
558
2.90625
3
What Size Generator Will Run A Computer? May 16, 2023 | YESGENERATOR When it comes to powering your computer during unexpected outages or in remote locations, having a reliable generator is essential. However, choosing the right generator size to meet your computer's wattage requirements can be confusing. This comprehensive guide will help you navigate through the process, providing professional insights and unique viewpoints to ensure uninterrupted power for your computer and sensitive electronics. 1. Understanding Wattage Requirements: To determine the appropriate generator size for your computer, you need to consider both the running watts and starting watts of the devices you wish to power. Understanding the power requirements of your computer and associated peripherals is crucial for selecting the right generator. 2. Factors to Consider When Choosing a Generator: Several factors come into play when selecting a generator to power your computer. These include the specific power requirements of your computer components, the total wattage of connected devices, the availability of backup power, and the desired runtime during outages. 3. Sizing a Generator for Your Computer: Calculating the generator size involves adding up the wattage requirements of your computer, monitor, peripherals, and any other devices you want to power simultaneously. By understanding the power consumption of each component, you can choose a generator that meets or exceeds the total wattage. 4. Considerations for Sensitive Electronics: Computers and other sensitive electronics require stable and clean power to avoid potential damage. Inverter generators are particularly well-suited for powering sensitive devices, as they provide clean power and minimize voltage fluctuations. 5. Exploring Inverter Generators: Inverter generators offer numerous advantages for powering computers. They provide precise voltage control, reduce noise levels, and are more fuel-efficient. 
Additionally, their compact and portable design makes them ideal for both outdoor and indoor use. 6. Optimizing Backup Power for Your Computer: To ensure seamless backup power for your computer, consider installing an automatic transfer switch (ATS). This device automatically switches the power source from the utility to the generator during an outage, eliminating the need for manual intervention. 7. Ensuring Safety and Efficiency: Proper generator installation, grounding, and ventilation are crucial for safety and efficiency. Follow manufacturer guidelines and consult with professionals to ensure your generator operates optimally and adheres to all safety requirements. Selecting the right generator size is essential to ensure your computer and sensitive electronics receive reliable power during outages or in off-grid situations. By understanding the wattage requirements, considering factors like backup power duration and inverter technology, and implementing safety measures, you can make an informed decision that guarantees uninterrupted operation for your computer.
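The sizing arithmetic in steps 1–3 can be sketched as follows. Every device name and wattage figure below is an illustrative assumption, not a measurement; check the labels or manuals of your actual equipment for real running and starting watts.

```python
# Rough generator sizing for a computer workstation.
# All wattage figures are illustrative assumptions.

devices = {
    # name: (running_watts, starting_watts)
    "desktop PC":   (300, 350),
    "monitor":      (40, 40),
    "router/modem": (15, 15),
    "desk lamp":    (10, 10),
}

# Continuous load: everything running at once.
total_running = sum(r for r, _ in devices.values())

# Surge: the single largest starting-watt excess, on top of the
# rest of the load already running.
largest_surge_extra = max(s - r for r, s in devices.values())
peak_demand = total_running + largest_surge_extra

# A common rule of thumb is ~20% headroom above peak demand.
recommended_size = peak_demand * 1.2

print(f"Running load: {total_running} W")
print(f"Peak demand:  {peak_demand} W")
print(f"Recommended generator size: about {recommended_size:.0f} W")
```

With these sample numbers the running load is 365 W, peak demand 415 W, and the recommended generator roughly 500 W; substitute your own device list to size a real setup.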
<urn:uuid:74ed951b-9658-455b-ab90-be74839f8b77>
CC-MAIN-2023-50
https://yesgenerator.com/fr/blogs/knowledge-of-portable-gas-generators/what-size-generator-will-run-a-computer
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.896934
548
2.765625
3
No matter whether it’s exams around the corner or a rigorous classroom schedule awaits you after the holidays, there’s a lot that yoga can help you with. These situations typically see most students discarding recreational activities from their daily schedule, which results in increased stress, anxiety, digestive disorders and insomnia. In order to deal with such issues and beat the exam blues, students can opt for yoga and meditation instead. Experts opine that yogic techniques can not only help refresh you but also aid in boosting confidence. However, students should understand the basics of yoga and practice appropriate postures for positive results. Shavasana, popular among all yogis, requires you to lie flat on the floor, place your legs apart and close your eyes. Arms should be placed along your body, with palms open and facing upwards. Hold this position for at least 10 minutes at a stretch. This posture helps to quickly refresh your body and brain when you are extremely tired. Balasana, or the child pose, when practiced every day, can be beneficial in releasing stress. It can be easily practiced at home in your free time. Knees and hips are bent with the shins on the floor such that the body faces the floor. The chest should rest on the knees and the head stretched towards the ground. Arms are stretched forward in front of the head. Sahaj Pranayam is a simple exercise to help reduce anxiety and de-stress your brain. One needs to be seated in the Padmasana with the back straight. After taking a deep breath, rest your chin down on the neck and hold your breath for around 30 seconds. Raise your chin and exhale through the mouth. Repeat at least 15 times in order to benefit the most. Sarvangasana is a yoga posture that brings a huge amount of benefits. Though a little difficult for newbies to perform, you need to concentrate hard on getting the back, waist and legs in a straight line vertically above the ground. 
The body should be balanced with hands placed on the back of the rib cage while the weight comes down on the neck, shoulders and the back of the head. While practicing trikonasana, the practitioner needs to stand upright with feet spread wide apart. The right foot should be turned 90 degrees outward and the left foot 15 degrees inward. Stretch your left hand upward and your right hand towards the floor while keeping the body straight. Hold this position for around two minutes while taking deep breaths to relax the body. Repeat the movement on the other side, in a set of 15 every day. Makarasana is a yoga posture that resembles a crocodile resting on the water, with its face and neck just above the surface. One needs to lie down on the stomach with the head held up. Now, fold your hands backwards and rest them on your head. Straighten your legs and raise them up a bit. It is well suited for easing shoulder problems and is best repeated 10 times. The above listed asanas will help relax your body and mind and can be practiced anytime during the day.
<urn:uuid:0c9a8dc7-fb91-4c6d-949c-a95c83eb021d>
CC-MAIN-2023-50
https://yogvit.yoga/yoga-poses-can-help-studies/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.939365
653
2.59375
3
Table of Contents
- 1 What city is Mission Santa Ines located in?
- 2 Why was Mission Santa Ines built in Solvang?
- 3 When did Santa Ines close?
- 4 Who built Santa Ines?
- 5 What’s the oldest mission in California?
- 6 Who lived in Santa Ines Mission?
- 7 What is Old Mission Santa Inés known for?
- 8 How much does it cost to visit Mission Santa Inés?
What city is Mission Santa Ines located in? Mission Santa Inés (sometimes spelled Santa Ynez) was a Spanish mission in the present-day city of Solvang, California, named after St. Agnes of Rome.
Patron: Saint Agnes of Rome
Nickname: “Hidden Gem of the Missions”
Founding date: September 17, 1804
Listed on the U.S. National Register of Historic Places
Why was Mission Santa Ines built in Solvang? Its purpose was to relieve overcrowding at those two missions and to serve the Indians living east of the Coast Range. Construction on Mission Santa Inés began in 1804 with one row of buildings. How far apart are the missions in California? The missions were built approximately 30 miles apart—about a day’s journey by horseback—covering 650 miles total. All 21 of them are open to visitors and feature a gift shop and museum, and most of them hold mass on Sundays (or even daily). What does Santa Ines mean in English? Old Mission Santa Ines in Solvang, California was founded on September 17, 1804 by Father Estevan Tapis. It was named in honor of Saint Agnes, an early Christian martyr of the fourth century. The Spanish word for Agnes is Inés. The nearby town of Santa Ynez got its name from this same heritage. When did Santa Ines close? An Indian revolt in 1824 burned much of the original mission to the ground, and the Mexican government’s secularization of mission lands in 1834 nearly spelled the end for Santa Ines. Who built Santa Ines? 
Father Estevan Tapis. Old Mission Santa Ines was the nineteenth of the 21 missions built in California from 1769 to 1836 by Spanish Franciscan priests led by Father Junipero Serra. The Mission was founded on September 17, 1804 by Father Estevan Tapis; it was the first European settlement in the Santa Ynez Valley. What special features are seen at Santa Ines? A popular exhibit at Mission Santa Inés is its collection of vestments (priests’ robes). The museum houses the largest collection of early California vestments. In fact, many of them date back to the 1400s, making them much older than the missions themselves. Mission Santa Inés is a National Historic Landmark. What is the smallest mission in California? Espada is the smallest and southernmost of the missions, which seems to mean less tourist traffic. What’s the oldest mission in California? Mission San Diego de Alcalá. Franciscan priest Father Junipero Serra founded this first mission in 1769 in present-day San Diego. Who lived in Santa Ines Mission? The Chumash were the Native Americans who lived in the area. The Spanish called them Inézeño. The Chumash built an aqueduct, raised livestock, and helped grow crops on the mission land. In 1824 Mission Santa Inés was the site of a Chumash revolt against Spanish soldiers. How long did it take to build the Santa Ines Mission? Old Mission Santa Ines was the nineteenth of the 21 missions built in California from 1769 to 1836 by Spanish Franciscan priests led by Father Junipero Serra. What is Old Mission Santa Inés known for? Old Mission Santa Inés remains rich in tradition, legend, and history. The Mission was founded on September 17, 1804 by Father Estevan Tapis. It was named in honor of Saint Agnes, an early Christian martyr of the fourth century. How much does it cost to visit Mission Santa Inés? 
The mission offers self-guided and audio tours of the museum and gardens daily from 9:00am to 4:30pm, and is closed on Easter, Thanksgiving, Christmas, and New Year’s Day. Admission is $5.00 for adults and free for children under 11. For more information, visit the Mission Santa Inés website or call 805-688-4815. Why did Santa Inés build a temporary church in 1816? Santa Inés built a temporary church to sustain the mission during its reconstruction. Despite the natural disaster, Mission Santa Inés reached its peak in 1816 with a population of 786 baptized Native Americans. When did the Santa Inés Mission become secularized by the government? In 1833 the missions in California began to be secularized; however, it wasn’t until 1835 that the Santa Inés Mission was secularized by the Mexican government.
<urn:uuid:31ac4f23-c177-456c-85ce-b24cabf43be5>
CC-MAIN-2023-50
https://yourwiseadvices.com/what-city-is-mission-santa-ines-located/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00100.warc.gz
en
0.944111
1,119
3.046875
3
A WWF-led research team, a Canon photographer, and crew traveled to Siberia’s Arctic coast on the Laptev Sea, to help solve a scientific mystery. The Laptev Linkages expedition was sponsored by Canon. I must say, we have really succeeded in our goal for the Laptev Canon-WWF Expedition: collecting DNA samples from walruses in this remote area. Now, we can work on settling a debate that’s over 50 years old – Where do the Laptev Sea walruses fit in the big story? These big smelly creatures live in between the Atlantic and Pacific walrus populations, and they have access to open waters in the winter. So the walruses living in the Laptev Sea might be a separate subspecies. Genetic analyses from old bones place these guys in the Pacific population, but there are some uncertainties. Our main goal on this voyage was to collect DNA samples from the Laptev walruses for analysis. After days of crawling carefully up to walrus herds, culminating in a five-hour, all night sampling marathon, we reached the magic number: 32 small samples of walrus hide. The walrus DNA samples are now safe in a lab in Moscow, ready for analysis. We hope to have an answer early next year, through the collaboration of walrus scientists in many different countries. Some interesting observations from the trip: - As a birder, I had an incredible trip, with more than 50 Arctic bird species. We found two breeding pairs of Sabine’s gull, and almost 20 species of waders. - Walruses everywhere! We may have observed almost 25 percent of the entire Laptev Sea walrus population in one single day. - Several interactions between polar bears and walruses, but nothing lethal. Most of the bears were in good shape despite being on land for relatively long time. Our guess is that in this specific area, the polar bears can handle being stranded without sea ice while they have walruses to feed on.
<urn:uuid:5691df48-8e0d-471a-98df-5cf478aff64d>
CC-MAIN-2023-50
http://arctic.blogs.panda.org/field/solving-the-walrus-mystery/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00000.warc.gz
en
0.944497
424
2.84375
3
What is a solstice? The word solstice comes from the Latin word for sun (sol) and the word meaning to ‘stand still’ because it appears that the Sun stops and changes direction. Since we can remember we have looked at the Sun, Moon and stars and watched these celestial bodies move across the sky. We have observed, documented and calculated these movements. Some have even built monuments to help predict positions and time, such as Stonehenge in England, the Great Pyramids in Egypt and the temple in Machu Picchu in Peru. In more recent times science has helped to explain the causation of astronomical events, although it is easy to see why myths and legends were shared to make sense of regularly occurring and puzzling phenomena. The solstice – occurring twice in a year – is one such event caused by the Earth’s 23.5 degree tilt on its axis, and its elliptical orbital motion (it doesn’t orbit in a true circle but an oval) around the Sun. The Earth is split into two hemispheres with the equator being the centre line. Due to the Earth’s tilt, the Northern and Southern Hemispheres trade places over the year (the time it takes the Earth to orbit the Sun) in receiving the Sun’s direct light. This is why we have the heat of summer and the colder temperatures of winter. It is the tilt and not our distance from the Sun that causes winter and summer. For us in the Northern Hemisphere, the shortest day (hours of light) comes on the winter solstice. The winter solstice occurs on or around the 21st December and after the winter solstice, the days start to get longer and the nights start to get shorter. Living near the North Pole means that you won’t get any daylight hours around the winter solstice, with the daylight hours increasing as you move further south. Mid winter is therefore the best time for astronomy with lovely long nights and the chance to see many more of the constellations and planets throughout a night’s observing. 
At this time of year you will notice the late dawns and early sunsets and the low arc of the Sun across the sky each day. Because we photograph the sunset most nights here at Astrofarm we can track the changes in the time and place for sunset with our wide open west view. The winter solstice will mark the furthest point south for our sunset and then we will see it gradually move west during the spring. You can also see how low the Sun appears in the sky at local noon – be sure to look at your noontime shadow. The winter solstice will be the longest noontime shadow of the year. After the 21st December this noon shadow will start to get shorter. Of course, the winter solstice effect is completely reversed for the southern hemisphere who are now enjoying the longest day and shortest night and the lovely summer temperatures! The South Pole is experiencing 24 hour daylight and their summer season. The summer solstice happens when the Northern Hemisphere is tilted most directly towards the Sun (the Earth also happens to be near the furthest point of its elliptical orbit then, but as noted above it is the tilt, not the distance, that makes the seasons). Occurring on or around the 21st of June – exactly half a year from the winter solstice – we have more direct sunlight and subsequently the summer heat. The North Pole has 24 hour daylight at the point of the summer solstice and those living nearest to the pole enjoy more hours of light than those nearer the equator. During the day we notice that the Sun is now high overhead at noon and will cast a very short shadow or none at all. The Sun will set in the west if it sets at all – depending where you are on the Earth, rapidly reappearing in the east. This may be a good thing for some as it gives us the wonderful summer heat and warm evenings; for astronomers it means very limited observing during the summer months. Because we are much further south here at Astrofarm than those of you in the UK, we benefit from 4 hours of astronomical darkness even on the summer solstice – a much better deal than Aberdeen! 
And again, the summer solstice for those of us in the Northern Hemisphere means winter and shorter days in the Southern Hemisphere. The two solstice events, along with the two equinoxes in spring and autumn (the midpoints between the solstices, when days and nights are of equal length), mark important points in the astronomical calendar. They are like road signs through the year mapping where we are in our journey around the Sun. And whilst we now have the knowledge to explain why these events occur, there is still something very human about sharing and enjoying the myths, legends and celebrations passed on from the distant past that helped our ancestors to make sense of their world. Happy Solstice wherever you are!
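The tilt-driven swing in day length described above can be approximated with the standard sunrise equation. The sketch below is a simplified model (it ignores atmospheric refraction and the Sun's angular size, so real day lengths run a few minutes longer), and the latitudes and dates are illustrative:

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    """Approximate daylight hours from latitude and date.

    Simplified model: no atmospheric refraction, point-like Sun.
    """
    # Solar declination: the Sun's latitude, swinging about +/-23.44
    # degrees over the year because of Earth's axial tilt.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

    lat = math.radians(latitude_deg)
    dec = math.radians(decl)

    # Hour angle of sunrise/sunset; clamping handles polar day/night.
    cos_omega = -math.tan(lat) * math.tan(dec)
    cos_omega = max(-1.0, min(1.0, cos_omega))
    omega = math.degrees(math.acos(cos_omega))

    # The Sun covers 15 degrees of hour angle per hour.
    return 2.0 * omega / 15.0

# Around the December solstice (day ~355), days are short at northern
# latitudes and the North Pole gets no daylight at all:
print(round(day_length_hours(57.1, 355), 1))  # roughly 6.4 h in this model
print(round(day_length_hours(90.0, 355), 1))  # 0.0 h (polar night)
```

At the equator the same function returns close to 12 hours year-round, which matches the text's point that day-length extremes grow as you move towards the poles.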
<urn:uuid:ce0dc2a1-072e-41b7-87b6-f5ae219b8ec3>
CC-MAIN-2023-50
http://astrofarmfrance.com/solstice-special/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00000.warc.gz
en
0.945229
978
4.25
4
This October SFUSD is celebrating Filipino American History Month, a month to celebrate the contributions of Filipinx Americans to the American political, economic, and cultural landscape. At SFUSD, our schools have offered lessons and other activities to honor Filipino history and culture. For example, Hazeline Mandapat and JustineRay Madarang from Sunset Elementary School put together this presentation on Filipino role models, cultural icons, and common Tagalog phrases. SFUSD celebrates the Filipinx community all year round through Kababayan SFUSD, an organization of SFUSD Filipinx employees who organize community events and help support the success and well-being of Filipinx students. This May, they held a virtual districtwide graduation ceremony for Filipinx seniors and families, following up on the inaugural ceremony last year. They also partner closely with the Filipino Mental Health Initiative of San Francisco. We’ve put together a resource guide with content to learn more about Filipinx American history and ways to engage with the local Filipinx community. Note: The adoption of the “x” in “Filipinx” by members in the Filipino American community is an attempt at inclusivity and breaking past the binary of gendered markers imposed by colonization. Filipinx should be seen as synonymous with Filipina or Filipino, without the gendered prescription.
<urn:uuid:f693f78f-a56a-449b-9d3f-66815ae37cef>
CC-MAIN-2023-50
http://blog.sfusd.edu/2020/10/celebrating-filipino-american-history.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00000.warc.gz
en
0.898838
270
3.203125
3
WHAT IS COPYRIGHT? Copyright is a legal term describing rights given to creators of a broad range of literary and artistic works. It is a bundle of rights given to the author of works to make sure that only he can use and reproduce what he has created for his own purposes. It would enable him to control the commercial exploitation of his works. The works covered by copyright include: literary works such as novels, poems, computer programs, newspapers; dramatic works such as plays or choreography; musical works such as music; artistic works such as sculpture, architecture, maps, technical drawings, paintings, photographs, wedding dais and so on. Related Rights is a term that is associated with copyrighted works and provides similar rights. Works covered under related rights are: sound recordings, films, broadcasting, cable programs and published editions. Copyright and related rights protection is obtained automatically without any need for registration or other formalities. Unlike other types of intellectual property such as trademarks, patents or industrial designs, which must be registered for protection, copyright and related rights are unique: the moment you create a work, it is automatically protected. Therefore, everything that you write (or draw or paint or whatever), regardless of whether it is an email, a recording, an image, a thesis, a web page, or anything else, is automatically copyright protected. Not only that, works created in Brunei Darussalam will have protection in other countries that are members of the World Trade Organization (WTO) or the Berne Convention for the Protection of Literary and Artistic Works, usually known as the Berne Convention. This would mean that your copyright or related rights are already protected in most countries without having to 'register' in, or travel to, those countries. 
*Brunei Darussalam is a party to the WTO, the Berne Convention, the WIPO Copyright Treaty (WCT) and the WIPO Performances and Phonograms Treaty (WPPT). IMPORTANCE OF © SYMBOL The use of the © symbol indicates an assertion of copyright. The symbol © is usually followed by the name of the copyright owner plus the year when copies of the work were first made available. Example: © Brunei Intellectual Property Office 2020. Not using the © symbol does not imply a waiver or loss of copyright. It may, however, be a relevant fact in infringement proceedings. The basic rule is that the person who creates the work is the author and thus the first owner of the copyright in the work. However, if a work is created in the course of employment or under a contract of service, then the employer would be the first owner. Nevertheless, this would depend on your employment contract. For government servants, His Majesty the Sultan and Yang Di-Pertuan will be the first owner of the copyright in a work made by a government servant in the course of his duties. DURATION FOR PROTECTION Copyright protection begins the moment you create the work and lasts for 50 years after your death, i.e. life plus 50 years. Copyright protection also includes moral rights, which involve the right to claim authorship of a work and the right to oppose changes to it that could harm the creator's reputation. COPYRIGHT MATERIALS MUST BE USED WITH PERMISSION Ask permission directly from the copyright owner for a licence to use or copy copyright material, or go through agencies or administrative bodies that licence certain uses on behalf of their members (copyright owners). Copyright materials can be used without permission but only in specific instances of fair dealing. Fair dealing is when a fair and reasonable portion of the work is copied for research/private study, criticism/review, and reporting current affairs. 
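The life-plus-50 rule above amounts to simple arithmetic. As a sketch (the death year is hypothetical, and real statutes add details this ignores, such as whether the term runs to the end of the calendar year of expiry):

```python
def protection_expiry_year(death_year: int, term_years: int = 50) -> int:
    """Rough last year of copyright protection under a life-plus-N rule.

    N defaults to 50, matching the duration described in the text above.
    Simplified: statutory end-of-calendar-year rules are not modelled.
    """
    return death_year + term_years

# Hypothetical author who died in 1990:
print(protection_expiry_year(1990))  # 2040
```

Swapping `term_years` lets the same sketch cover jurisdictions with life-plus-70 terms.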
COLLECTIVE MANAGEMENT ORGANISATION (CMO) OR COLLECTING SOCIETIES Many creative works protected by copyright require mass distribution, communication, and financial investment for their dissemination (for example, publications, sound recordings, and films). It is in the copyright owners' interest that their works are enjoyed by the widest audience, provided that they are rewarded for their work. In some sectors, copyright can be managed through individual contracts between the authors and users. However, in many cases it is impossible to negotiate individual licences or permissions for dissemination of works. Think of playing songs on a radio station, showing a movie on a cable network, or performing a play in theatres around the world: there is no way each user could remunerate each individual creator or rights holder every time a work is accessed or enjoyed. In many of these cases rights are managed through the system of collective management organisations or collecting societies. These CMOs or societies can provide their members the benefits of the organisation's administrative and legal expertise and efficiency in, for example, collecting, managing, and disbursing royalties gained from the national and international uses of a member's work or performance. Certain rights of producers of sound recordings and broadcasting organisations are sometimes managed collectively as well. There are two local collecting societies in Brunei that represent copyright holders in the music industry: •BeAt Berhad: This CMO represents music authors and composers. It was first established in 2010 and is affiliated with the International Confederation of Societies of Authors and Composers (CISAC). •BruMusic Sdn Bhd: BruMusic represents the record producers and is affiliated with the International Federation of the Phonographic Industry (IFPI). 
On 15th September 2017, both BeAt Berhad and BruMusic signed a Memorandum of Understanding to jointly license their rights to make it convenient and cost effective for users of musical works and sound recordings. BruMusic will issue the joint licences on behalf of both CMOs and collect the royalties from users of music in Brunei Darussalam.
Related legislation: The Copyright Order 1999; The Copyright (Amendment) Order 2013
<urn:uuid:f519ed94-c684-433a-9a85-29af24bdedaf>
CC-MAIN-2023-50
http://bruipo.gov.bn/SitePages/copyright.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00000.warc.gz
en
0.941694
1,207
2.96875
3
How to Write a Topic Proposal for an Essay
A topic proposal is a brief overview of the main focus and purpose of an essay. It is an essential part of the writing process, as it helps you to narrow down your ideas and organize your thoughts. A well-written topic proposal will not only provide a clear roadmap for your essay, but it will also help you to convince your reader of the importance and relevance of your topic. There are several key steps to follow when writing a topic proposal for an essay:
Choose a topic: The first step in writing a topic proposal is to choose a topic that interests you and that is relevant to the assignment or class. Consider your own interests and the goals of the assignment when selecting a topic.
Narrow your focus: Once you have chosen a general topic, it is important to narrow your focus and select a specific angle or aspect of the topic to explore in your essay. This will help you to avoid trying to cover too much ground and will make your essay more focused and cohesive.
Define your purpose: Clearly state the purpose of your essay in your topic proposal. This will help your reader to understand the main focus of your essay and will guide your writing as you develop your argument.
Outline your main points: In your topic proposal, outline the main points or arguments that you plan to explore in your essay. This will help you to organize your thoughts and will provide a structure for your essay.
Explain the significance of your topic: In your topic proposal, it is important to explain why your topic is important and relevant to your reader. This will help to engage your reader and convince them of the value of your essay.
Review and revise: Finally, review and revise your topic proposal to ensure that it is clear, concise, and well-written. 
Make sure that your proposal accurately reflects the focus and purpose of your essay and that it is free of errors. By following these steps, you can write a clear and concise topic proposal that will set the stage for a well-organized and well-written essay. What will you choose? Browsing the website, you will find a collection of sample papers that will provide you with the most appealing ideas and writing instruments indispensable for flawless essay writing. Implement a few elements in your work to make it a blast. How to Write an Informative Proposal Essay? Some elements of the title page can differ, depending on the school and the selected formatting style. Why is it relevant and important? So let's get started! This means that you need to make sure your text is clear and easy to follow. Once you have chosen a topic, do some research so that you can better understand the issue. 100 Top Proposal Essay Topics for Students Who Feel Stuck: What Is a Proposal Argument Essay? Tip from SpeedyPaper: even the most interesting topic can be presented in the wrong way. Look through the list of effective topics mentioned below to get a better idea of what an influential theme should look like. How do you write a proposal essay thesis? By coming up with an effective essay proposal you can increase the likelihood that your professor will allow you to write on the topic that interests you the most. 
This is the opening part of the proposal essay, as it is the first thing the readers will see when they start reading the paper. In the three body paragraphs, you will need to provide three reasons why your solution is the best one. A proposal essay is an essay that suggests a particular idea on any subject. Tips for Proposal Argument Essay Writing: Now that you know how to structure your work, let's talk about some tips. For writing a proposal essay outline, note down all relevant info, look for the best reference texts, and come up with a solid topic to talk about. On the other hand, all proposal paper ideas can be presented in the right way if you know how to do it! We have a diverse team capable of coming up with the most amazing topics a proposal could have. Have you already stated the problem? Required Resources: Lay out what you actually need to complete your proposal. Select a topic that you care about, as you will likely enjoy the essay process more if you are writing about something that is important to you. If you manage to create a well-structured plan, you are likely to come up with an impeccable essay. 
Use a real-life example, statistics, or other instruments that will help you clarify the problem to be discussed in the paper. For example, if you are interested in environmental issues, you might choose to research the problems caused by plastic pollution.

186 Creative Proposal Essay Topics and Ideas for Students

Take advantage of the editing services offered at SpeedyPaper if you do not have the time, energy, or skills to succeed at this stage. When you craft a proposal essay, you should first define a problem. For gathering information or data related to your essay topic, rely only on trusted websites, books, magazines, and published research papers. Additionally, do not forget that anticipating the proposal's outcomes is another objective of the paragraph, so make sure the readers know what to expect. Timely submission, top-tier quality, affordability, and anonymity are guaranteed. You can avoid plagiarism issues by adding a list of references. In the conclusion section, summarize all your major points and restate the thesis statement. Make sure the outline you create supports a logical flow of ideas. Analyze a few sample essays, paying attention to the structure of the text, formatting, and other specifications that can make a critical difference. For example, you could set out to research the harmful effects of plastic use and explain why this research is needed.

What Is a Proposal Essay?

There is no such thing as naming the issue too many times. Add as much relevant information as you can to clarify your ideas. This will make it more likely that your reader takes you seriously. After you have offered influential ideas, changes, and solutions whose implementation can improve the situation, you should present an action plan. A strong introduction also presents the matter to the readers, making them eager for the material that follows. Read the article and follow the instructions to create an impeccable paper worth the highest grade.
For this, the subject has to be well enough known to generate a lot of research. The simplest way to sound persuasive is to learn the needs, requirements, preferences, level of knowledge, and competence of your readers. After a profound analysis of the topic and the collection of critical information, you are ready to single out the most prominent aspects to present and explain. Finally, you are ready to get started on the proposal essay itself. And that means there are all sorts of fascinating themes you can write about, with so many different topics to choose from! The title page is important, but it contains little information about the content of the paper. Once you know the target audience, you have a better chance of improving the quality and clarity of your message. On top of that, make sure you look for top proposal essay examples in your field of study to get some great suggestions. Additionally, a clear thesis statement should appear in the essay, offering an effective solution to the problem.
The enemies of storytelling down through the ages

Storytelling was an honored practice in primordial times as we huddled around campfires in little villages and shared stories about the wolves that were attacking the village, the crops that had failed, or the weather that had changed. Storytelling was the only means by which the village could assemble its knowledge and survive. In the last couple of thousand years, however, storytelling has been under a cloud of disapproval. Understanding the source of that disapproval is a key to recovering the power and benefits of this incredibly powerful technology.

Plato: It is hard not to blame Plato as the original source of the disfavor into which storytelling has fallen, since a literal reading of The Republic shows that he urged that storytellers be censored or banned from the cerebral republic he was describing. In his masterly explanation of The Republic in Preface to Plato (1963), Havelock demonstrates that fully half of The Republic is aimed at attacking the power of poetry and narrative. Plato's attack was so devastatingly successful that a modern writer like Anthony Gottlieb can barely understand what Plato was trying to prove (see The Dream of Reason, chapter 11). However, Plato himself was one of the master poets and storytellers of all time (e.g. the Symposium, a dinner party to end all dinner parties) and was obviously aware of the power of storytelling. There has been a tendency for his followers to adopt what Plato appeared to be preaching rather than what he himself practiced. There is the possibility of an ironic interpretation of The Republic: Plato was really showing why a fully rational society was impossible. But this is not entirely compatible with Plato's efforts to realize the vision in Sicily.
Aristotle: Aristotle also played a role in the denigration of storytelling. By placing a huge emphasis on the taxonomy and classification of what we know, he created a model for science which left storytelling in the peripheral role of illustrating abstract propositions. Abstract knowledge moved on to center stage.

Francis Bacon: The arrival of the scientific method and experiment provided a route for verifying which stories were true, or more strictly, which stories were false. Scientists were thus able to distinguish fact from fiction, truth from myth. It was, however, less obvious to the scientists that the accounts of the experiments that they provided were also stories of a kind.

Descartes: The separation of the self from the world meant the supposed abolition of feeling and emotions from rational discourse. It was only recently, with the findings of science, that the impossibility of separating thought from emotion was discovered. Feeding on their success in using experiment to separate fact from fiction, scientists began to claim that their experimental method was the sole guide to discovering the truth, and began to claim a monopoly on all forms of truth. The antagonism towards storytelling may have reached a peak in the twentieth century with the determined effort to reduce all knowledge to analytic propositions, and ultimately physics or mathematics. In the process, we discovered the limits of analytic thinking. We learnt of Gödel's proof of the incompleteness of arithmetic, and began to absorb the implications of the indeterminacy of quantum physics and complexity theory, but many years of schooling had instilled in us a continuing itch for reductionist simplicity. This itch reflects what Freeman Dyson calls the Napoleonic approach, and leads to hierarchy, procedures, rules and a distinctive form of myopia.

Teachers: Education systems succumbed to the prevailing fashions and abstract syllabuses proliferated.
Exceptional teachers used storytelling, but the average teacher stuck to the syllabus.

Professionals: Abstract knowledge buttressed the power of the professions, as professional jargon, known only to the initiated and communicated in abstract fashion, enabled cadres of professionals to protect territory and maintain control.

The limitations of abstract thinking: The result of all these efforts over several thousand years is a huge cultural, social, intellectual, political and financial superstructure that has a vested interest in favoring abstract thinking and communication and that is hostile to narrative thinking and narrative modes of communication. In many ways, abstract thinking has served us well. But it is an approach with diminishing returns in a period of massive turbulence. As we enter the 21st century, there has been a growing recognition that abstract thinking alone doesn't help us much in coping with a rapidly changing world, where innovation is the key to success. Innovation – what Dyson calls the creative chaos and freedom of the Tolstoyan approach – swims in the richness and complexity of living. It breeds on the connections between things. As participants, we can grasp the inter-relatedness of things in the world – and so are able to connect them in new ways – much more readily than when we are seeing them as external observers through the window of rigid analytic propositions. The storyteller needs to be aware of the immense superstructure supporting the enemies of storytelling and should not be surprised to encounter a huge wave of prejudice against the practice of storytelling. The best approach: don't argue (which would implicitly accept the primacy of abstract thinking). Instead: tell stories!
Books and videos on storytelling

*** In Good Company: How Social Capital Makes Organizations Work by Don Cohen and Laurence Prusak (February 2001), Harvard Business School Press
*** The Social Life of Information by John Seely Brown and Paul Duguid (February 2000), Harvard Business School Press
*** The Springboard: How Storytelling Ignites Action in Knowledge-Era Organizations by Stephen Denning (October 2000), Butterworth-Heinemann
*** The Art of Possibility, a video with Ben and Ros Zander, Groh Publications (February 2001)
On 19 December 2016, the Centre for Scottish and Celtic Studies, jointly with the Scottish Centre for War Studies, welcomed Daniel Szechi (Manchester) to discuss ‘The Long Shadow of 1715. The Great Jacobite Rebellion in Jacobite Politics and Memory – A Preliminary Analysis’. Below is this listener’s brief summary of the lecture. Daniel explained that his work has focused on determining how the Jacobite loss in 1715 was perceived by the subsequent generation of Jacobites, those who took part in the uprising in 1745. He has studied the Jacobite mind through a close reading of texts from the period, especially those recorded by veterans of the 1715 uprising. These sources lamented the failure of the 1715 uprising and blamed the defeat on various factors, which usually fell into three categories: secular (blaming human actions or luck), conspiratorial (blaming betrayal by certain parties), or cosmic plan (God’s will, punishment for driving out Mary Queen of Scots, and so on). Daniel’s talk focused on the secular scapegoats. Much blame was placed on John Erskine, the Earl of Mar. While his accomplishments in administration kept the Jacobite army paid and fed throughout the 1715 uprising, he lacked military experience. Many of the sources declare that he was unfit to be the leader of the rebellion. Blame was also placed on the English Tories and Jacobites. The original plan had been for the rebellion to start in England first, but the plot had been discovered. Thirdly, the French did not provide the supplies that they had promised. Several battles were considered to have been key and could have been turning points for the Jacobite army. Many authors claimed that the Battle of Sheriffmuir should have been won, but that several opportunities were not taken by the commanders. The Battle of Preston was heavily criticized, especially by Reverend Patten, who wrote a memoir about what took place.
He mentions how the bridge over the Ribble could have been barricaded in advance, or that the Jacobites could have fought outside the town of Preston. Thomas Foster and his actions were also criticized in most accounts. Finally, the disunity prevalent throughout the army was blamed. Although no particular incidents are singled out, all accounts recall the officers arguing and senior officers failing to bring the army to discipline. Understandably, listening to their veteran fathers rue these various factors affected the Jacobites of 1745. Many were hesitant to follow a leader without military experience, wanted assistance from the English Jacobites and European powers, and felt particular apprehension when traveling through Preston. Overall, the Jacobites who took part in the uprising of 1745 learned from the mistakes made in the past, although there was still division within the army. As a result, they were more effective militarily than the army of 1715, and it appears that this improvement may have been due to a member of the royal family being present, which lent additional authority to the decisions made. While the Jacobite interpretation of what went wrong in 1715 was selective, and they did not appear to consider factors beyond their control, the lessons they learned from their defeat allowed them to be more successful in later rebellions, even if they were not successful in the end. Summary by Megan Kasten (PhD researcher). Our seminar series for this semester continues on the 26th of January with Chris Whatley (Dundee), ‘Manufacturing Robert Burns, 1859-1896: George Square to Irvine Moor’ [jointly with the Centre for Robert Burns Studies]. This will be held in Room 412, Lecture Theatre B in the Boyd Orr Building at 5:30 pm.
1. DEVELOPMENT OF THE CURRICULUM OF THE "EUROJOINER" EUROPEAN QUALIFICATION
- Compilation of the information on the job profile of joiner/carpenter in each country, related to: professional activities performed, and training related to this job profile.
- Elaboration of a comparative mapping that contains information about the joiner/carpenter in each of the participating countries, and definition of the skills of the EUROPEAN JOINER.
- Development of the curriculum, which will contain information on the different competences, the theoretical and practical knowledge that a EUROJOINER should have, and the list of activities that this professional profile should know how to perform.

2. E-LEARNING OPEN EDUCATION TRAINING COURSE
- Elaboration of the learning objectives.
- Development of the training material to cover the competences defined in the qualification for the EUROPEAN JOINER.
- Translation of the training material. Each partner will translate the learning material into their own official language.
- Insertion of the defined and translated training modules into the Moodle platform.
- Pilot test of the training course among all partners.

3. EUROJOINER ICT TOOL: SELF-EVALUATION SYSTEM OF SKILLS
- Definition of the theoretical and practical questions to test whether a student or a worker has the knowledge and skills related to the defined EUROJOINER competences.
- Definition of the possible answers related to the questions defined.
- Elaboration of the ICT self-evaluation tool.
Posted on: 3 March 2020

Pallets are mostly used in the shipping, transport and logistics industries. They form a platform where items or commodities can be placed and secured. The platform mainly helps keep the commodities off the ground or a particular surface and also prevents movement during transit. Here's what you need to know about recyclable pallets:

Different Material Compositions for Pallets

You may come across wooden, plastic or metal pallets. Recyclable wooden pallets are the most common, followed by plastic and, lastly, metal. This is because shipping companies usually do not recycle wooden pallets, which are considered cheap and replaceable. Shipping companies may, however, frequently recycle plastic and metal pallets.

What Can You Use Recyclable Pallets For?

Recyclable pallets, whether wood, plastic or metal, can be used for many applications. All you have to do is use your creativity. You might be surprised at how useful they can be at an affordable price. Here are some uses:

Completely damaged wooden pallets can be used as firewood, which helps you conserve forests. They are also cheaper than the firewood you buy at the store.

Many landscaping projects require timber. Instead of purchasing expensive timber, you can purchase cheaper wooden pallets and use them as they are or dismantle them to access the wood planks. Depending on what you want to build with the pallet wood, make sure to check the wood quality. Companies that sell recyclable wooden pallets usually grade them by quality, so it is important to know the quality you want.

Garden and hydroponic needs

Wooden and plastic pallets are also used for gardening. Plastic, of course, is recommended for hydroponic plants because it cannot be damaged by water. Wooden pallets are also used for outdoor furniture (patio, park and yard furniture).
All you have to do is hire a carpenter or woodwork designer who can turn the pallets into a visually appealing, functional and durable piece of outdoor furniture.

Buying Recyclable Pallets

The first thing you might notice is that metal pallets are the most expensive, followed by plastic and then wooden pallets. This is the price based on material. You may also come across a range of prices within each material category; this is the price based on quality. Different people have different recycling needs, which leads to the variety in materials and quality.
When it comes to order as opposed to chaos, that is, to holding things together, physicists speak of four fundamental forces of the universe. There is gravity, the electromagnetic force, and the so-called "strong" and "weak" forces that hold particles together and govern their relations. These four forces supposedly explain everything. But what about life? And what about meaning? Do not living organisms have their own "life" force that holds the cells and parts of cells together and regulates their interactions? As for meaning, what holds the words of a language together so that they make sentences? Why can't just any word be combined with just any other? There must be something that makes meaning happen. Can these forces not also be considered "fundamental" forces of the universe? This question is important, at least if we want to avoid "physicalism," that is, reducing everything to matter. Let us call the force that turns inanimate matter into living organisms "negentropy," and let us call the force that holds words together to make meaningful sentences and thoughts "power." In 1944 the Nobel Prize-winning physicist Erwin Schrödinger published a book entitled What Is Life?. The question arises because living systems do not follow the Second Law of Thermodynamics, that is, the law of entropy. In living systems, order increases rather than decreases. This goes against the law of entropy. Life, therefore, is a fundamentally different form of order than matter. Life is a so-called "emergent" phenomenon, which means that we don't know where it comes from or how it comes into being, but we know that it did and that it is very different from the purely physical organization of matter which the law of entropy regulates. In distinction to merely physical organization, which does not negate entropy, life seems to do this. Negentropy means the negation of entropy. Entropy is the tendency of energy to dissipate toward equilibrium, that is, the equal probability of all states.
For Schrödinger, this was a paradox. How can entropy be negated, so that systems move from being less organized to being more organized? Another Nobel Prize winner, Ilya Prigogine, spoke of "dissipative systems," which run energy through their structures much like water running through a mill or food going through the metabolism of organisms. Such systems use entropy to negate entropy.
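The relationship sketched in this passage can be written compactly. The notation below is one standard convention for formalizing Schrödinger's "negative entropy" and Prigogine's entropy balance; it is a sketch, not a formula taken from the original text:

```latex
% Second Law of Thermodynamics for an isolated system:
% entropy S never decreases.
\frac{dS}{dt} \geq 0
% Negentropy J, one common formalization: the gap between the
% maximum possible entropy of the system and its actual entropy.
J = S_{\max} - S
% Prigogine's entropy balance for an open (dissipative) system:
% internally produced entropy d_i S is always non-negative, but the
% entropy exchanged with the surroundings d_e S can be negative.
\frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt},
\qquad \frac{d_i S}{dt} \geq 0
% The system becomes more ordered (dS/dt < 0) exactly when it
% exports entropy faster than it produces it:
\frac{d_e S}{dt} < -\,\frac{d_i S}{dt}
```

This makes precise the sense in which a dissipative system "uses entropy to negate entropy": its internal entropy production stays positive, as the Second Law demands, while the throughput of energy carries entropy out.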
The Big Bang Theory is the standard model for the formation of the universe and is widely accepted among today’s physicists. So my question for you is this: “What does this standard model and its acceptance mean for those of us who believe that God created the heavens and the earth?” In my opinion, it means something quite spectacular. Here is the standard model in a nutshell: The universe sprang into existence as a “singularity” around 13.8 billion years ago. This “singularity” was infinitesimally small, infinitely hot, and infinitely dense. (How do you even wrap your head around infinite as a quantity? Not sure, but let’s continue on.) This “singularity” was not a tiny fireball in space. Space did not exist. Time did not exist. Matter did not exist. Energy did not exist. They were all wrapped up inside the “singularity”. Then the “singularity” suddenly inflated. This sudden inflation was so rapid and so large that we have come to refer to it as the “Big Bang”, and its result is the universe that we now inhabit. The fascinating piece of the puzzle to me is that this inflation was not a constant and linear path through 13.8 billion years. No, when we say “suddenly inflated”, we mean “suddenly inflated.” How sudden? We measure time associated with the major events of the Big Bang in units of 10^-43 seconds. That is a decimal point followed by 42 zeroes and a 1. That is a pretty tiny part of a second. Important events at the beginning of the expansion, such as the separation of the four forces (gravity, electromagnetic, the strong and weak nuclear forces), the creation of matter and antimatter, the formation of quarks, gluons, and other elementary particles, rapid cooling, and much more, are all measured in very, very, very tiny fractions of a second. So when we say “sudden”, we mean a sudden that is almost impossible to imagine. And when we say “inflated”, the numbers are just as incredible.
The current theory has the universe increasing by a factor of 10^26 in the first fraction of a second. That means going from the subatomic (smaller than the particles of an atom) to the cosmic (think huge galaxies) during these incredibly small time frames of the first second of the universe’s existence. Again, by “sudden” we are talking about time measured in 10^-43 pieces of a second, and by “inflated” we are talking about the building blocks for every star and every planet in the cosmos created within that first second. Pretty incredible. Can you see where I am headed? Even as a math and physics guy used to working with outrageous numbers, the reality of the situation is that “suddenly” might as well be “instantaneous”. Because realistically, that is what it is. As a believer and a scientist, this gives me goose bumps. The prevailing theory for the formation of the universe suggests that all that we see, no matter how far we peer into deep space and time, was literally created instantaneously out of nothing. Does that sound like a creation account you are familiar with? It should, because it fits one of the main tenets of our faith. God created the world ex nihilo; out of nothing. The scientific steps that brought us to this point is a fascinating story in its own right, and I do not think we arrived at this understanding by accident. Dr. Lawrence Krauss, one of the science popularizers of the Big Bang Theory, recently said, “We are fascinatingly lucky at this point in time to be able to see the evidence of the Big Bang.” Are we “fascinatingly lucky” or is God revealing the wonder of Him instantly creating the world as we know it from nothing as the Bible teaches? It all depends upon your point of view. It all depends on your presuppositions regarding religion and the supernatural. The connection between the “Big Bang” and God’s instantaneous creation of the world is stunningly obvious to me. To Dr. Krauss, not so much. It is a comparison we will take up next time.
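The exponents quoted above are easier to feel with a little arithmetic. Here is a quick sketch using Python's `decimal` module; the one-millimetre starting size in the last step is my own illustrative choice, not a figure from the text, and the light-year conversion is the standard value (1 ly is about 9.46e15 m):

```python
from decimal import Decimal

# Writing out the two headline numbers so their scales become visible.

interval = Decimal(10) ** -43          # 10^-43 of a second
inflation_factor = Decimal(10) ** 26   # growth factor of ~10^26

# "A decimal point followed by 42 zeroes and a 1":
longhand = f"{interval:.43f}"
assert longhand == "0." + "0" * 42 + "1"
print(longhand)

# Number of 10^-43-second intervals that fit into one single second:
print(Decimal(1) / interval)           # 1E+43

# What a factor of 10^26 does to an (assumed) one-millimetre dot:
light_year_m = Decimal("9.46e15")      # metres in one light-year
final_size_m = Decimal("1e-3") * inflation_factor   # 1e23 metres
print(final_size_m / light_year_m)     # roughly 1e7 light-years
```

In other words, scaling a millimetre by the quoted inflation factor yields a span of roughly ten million light-years, which is why "inflated" is such an understatement.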
Frequently Asked Questions (FAQs)

What is a window?
Answer: A window is an opening constructed in the wall or roof of a building. Most windows are fitted with a frame containing panes of glass.

What is the purpose of a window?
Answer: A window is used to let in light. Some windows can be opened to also let in air.
Write a one-page essay answering the following: Why does a blue sky have white clouds? Why do the clouds appear red at sunset? What is the “Green Flash” seen in some sunsets in Hawaii? (There is no limit on the number of sources; you can use any number of sources.)
Microbial contamination of our water supply occurs for a variety of reasons and is most often not discovered in time to prevent illness. While centralized water treatment can address these issues at the treatment plant, the infrastructure is the weak link. The risk posed by distribution and transmission failures should be reason enough for every home or business on a potable water supply to preventively install treatment that protects its premise plumbing from dirt/sediment and microbial contamination as water enters the building. This is especially true before, during and after a public water system (PWS) has issued a boil order advisory.

Whether it is the water used to clean fresh fruits and vegetables, the drinking and ingredient water for our food and beverages, or the aerosols generated by shower heads and aerator-style faucets (which can cause respiratory illness), one must consider proactively treating these waters for microbial reduction. Once your potable premise plumbing is exposed to microbial contamination, it builds up biofilm and colonies of organisms that can cause serious illness from bacteria such as E. coli, Legionella spp., and many other pathogenic organisms. Proactively filtering the water you use to drink, shower, swim and wash vegetables, along with the water supplying other appliances, can drastically reduce your risk of illness or even death. Homes, businesses, schools and industry are all negatively impacted when these pathogens enter from private and public water supplies.

Statistics show that 20% of disease outbreaks in the U.S. were associated with distribution system deficiencies. With an average age of roughly 200 years and an expected life of 75-120 years, the infrastructure in the U.S. continues to challenge water utilities to provide safe drinking water to the end user. The FDA site reports over 49,000,000 foodborne illnesses per year.
Daily we can read about boil order alerts issued by public water utilities due to the risk of microbial contaminants in our drinking water. However, these public notices always come after the contamination, or possible contamination, has already exposed our homes and businesses.

Waterline offers filtration solutions for homes, businesses, schools and industry at the point of use (POU) and point of entry (POE) using our CMF technology (Charged Membrane Filtration). CMF technology uses electro-absorptive filtration to capture organisms and retain them within the filtering media throughout the life of the system. Since almost all organisms and proteins carry a net negative charge on their surface, this technology uses a strong positive charge to attract and retain them without relying on small-pore-size filters. The result is a filter with a low pressure drop that can be installed as the final barrier prior to use of the water, producing water virtually free of pathogenic organisms.

Waterline offers several options for your home, school or business to safeguard your family, customers and employees. If possible, we suggest installing microbial filtration at the point of entry for all of the cold water supplying your facility, as well as after the hot water system. These systems are designed for hot or cold water, with a stainless steel outer vessel housing our microbial filters. Another option is to install point-of-use microbial filtration at high-risk areas within the facility. Systems are available for cold-water-only filtration and for hot/cold water filtration. A partial list of high-risk areas within homes, schools and business facilities that can be impacted by microbial contamination follows.
Point of Use Product Selection Guide
- Select the system based upon the flow rate of the POU device.
- Use Model DWS-CMF-HFC-1000 for protection before, during and after a boil order advisory.
- Built-in pre-filter (0.5 micron) using our carbon-based system, which is also certified for chlorine taste/odor, lead, PFOA/PFOS, VOC and THM chemicals.
- A stainless steel filter vessel must be used for water above 100 degrees F. Select the stainless steel filter vessel based upon the flow rate of the device.

The funding gap to repair and replace the aging infrastructure used by public water utilities to distribute drinking water continues to grow. Reports showing that the gap will exceed $500 billion over the next 20 years are alarming. This leads to problems for every user who relies on a public water system for drinking water. The result is drinking water that passes the US EPA Safe Drinking Water regulations at the treatment plant but fails the smell test at the end of the pipe. Customers are then left with the question: "Is their water safe?"

Greater than 99% of boil order alerts issued are for concerns about microbial safety. The underlying problems can go undetected for years before they are identified and repaired; prior to the alert, the facility is subjected to unknown microbial contamination entering its plumbing systems. One can wait for the problems to be fixed, or one can take the necessary action to safeguard the facility and provide secondary treatment against microbial intrusion into the potable premise plumbing system. From simple particulate filters to complex multi-barrier disinfection systems, we can design a solution to protect your employees, customers and product. These microbial contaminants can result in serious illness and even death from breathing in aerosols, drinking and eating. Waterline Technology can recommend treatment solutions to solve these issues. Please review the products below and give us a call to discuss your concerns.
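The selection guide above boils down to two rules: pick the unit by flow rate, and require a stainless steel vessel for hot water. The sketch below is illustrative only; DWS-CMF-HFC-1000 appears in the guide, but every other model name and every numeric threshold here is a hypothetical placeholder, not a Waterline specification:

```python
# Hypothetical sketch of the point-of-use selection rules described in
# the guide. DWS-CMF-HFC-1000 is named in the text; all other model
# names and the flow-rate tiers are made-up placeholders.

def select_pou_system(flow_gpm: float, water_temp_f: float,
                      boil_order_protection: bool = False) -> dict:
    """Pick a filter configuration from flow rate and temperature."""
    choice = {}

    # Guide rule: use the DWS-CMF-HFC-1000 for boil-order protection.
    if boil_order_protection:
        choice["model"] = "DWS-CMF-HFC-1000"
    else:
        # Placeholder sizing tiers keyed on flow rate (hypothetical).
        choice["model"] = "CMF-SMALL" if flow_gpm <= 1.0 else "CMF-LARGE"

    # Guide rule: water above 100 °F requires a stainless steel vessel.
    choice["vessel"] = ("stainless steel" if water_temp_f > 100
                        else "standard")
    return choice

print(select_pou_system(0.5, 70, boil_order_protection=True))
print(select_pou_system(2.0, 140))
```

The point of the sketch is simply that the guide's prose encodes a small decision table; an installer would substitute the real flow-rate tiers from the manufacturer's sizing chart.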
Demonstrative communication is an organized way of expressing feelings or conveying a message to others: a sender transmits a message through a reliable medium, and the receiver interprets it. It is the process of understanding and expressing your thoughts and feelings, and of appreciating others' points of view and emotions, through active listening and verbal and non-verbal communication skills. Verbal communication expresses your feelings or thoughts through words; if you are giving a speech, you need strong verbal communication skills to persuade others. Non-verbal communication works through gestures and body language, and it is just as important as verbal communication for expressing your feelings.

How to Develop Demonstrative Communication Skills
Demonstrative communication skills can be positive or negative, good or bad, effective or ineffective; the outcome depends on how the sender frames the message and how it is communicated. Any relationship, in personal or professional life, can turn positive or negative because it is built on this communication process. For communication to be effective, your message must be clear, precise and accurate so that the receiver understands it. Whether you are communicating with family and friends or presenting the core concept of a project, convey all of your ideas, thoughts and feelings in your message.

Demonstrative Communication Speech
If you are going to make a speech, you must possess demonstrative communication skills to achieve better results.
Eye contact is an important component of demonstrative communication: it shows honesty and attentiveness, and a lack of proper eye contact results in ineffective communication. In personal relationships, your eye contact also reflects your good or bad moods.

Demonstrative Communication in Relationships
Every relationship is based on demonstrative communication skills: how you express your feelings and communicate with others, and how the receiver perceives it. Tone of voice is also important for conveying your message or expressing feelings, and it can make communication effective or ineffective, so adopt a tone that matches your feelings or thoughts. A sender who speaks in a loud voice shows anger, while one who adopts a polite, low tone shows goodwill. Demonstrative communication is usually used to persuade your audience and give them the information you want them to understand. If you have strong communication skills, your audience will grasp your feelings and thoughts, making demonstrative communication one of the best ways to express your feelings or convey a message to others effectively.
Silicosis is a disease of the lungs caused by breathing dust that contains silica particles. Silica dust can cause fibrous scar tissue to form in the lungs, reducing the lungs' ability to extract oxygen from the air. There is no cure for this disease; prevention is the only answer. MSHA published a final rule on dust control for surface highwall drills on April 19, 1994. The rule is designed to protect miners, working on and around surface highwall drills, from exposure to harmful amounts of dust containing crystalline silica. The most common exposures occur during the drilling of rock and the crushing and loading of mine material. Miners operating equipment such as highwall drills, end loaders, dozers and trucks on mine property have a high probability of exposure. Furthermore, all miners working at surface and underground mines are at risk of being exposed to silica-containing dust. Mine operators are required to provide, and assure the use of, appropriate dust controls while drilling in rock. Miners should be sure to use all available engineering controls, such as dust collectors, wet drilling, drill platform skirts and enclosed cabs, and should adjust their work procedures so that they do not stand in dust clouds. While not accepted as a primary control, miners should make use of respirators made available by the mine operator, to provide the maximum protection possible, especially when it is necessary to work in dust for short periods. If a respirator is used, the miner and mine operator should assure that it is approved for use in silica-containing dust, that it is maintained as approved, worn as designed (not altered in any way), equipped with new filters at least every shift, and fitted to provide a tight seal to the face. Miners wearing a respirator cannot have beards or mustaches that interfere with the respirator's seal to the face. The earliest recorded cases of silicosis date back to the first century A.D.
In the mid-1930s, Labor Secretary Frances Perkins launched a nationwide effort to tackle the problem of silicosis. For more information on silicosis and its prevention, go to the MSHA silica webpage at http://www.msha.gov/illness_preventio... . This clip is from a press video for the 1996 national public education campaign, If It's Silica, It's Not Just Dust, to prevent silicosis. The Labor Department launched the silicosis prevention effort jointly with the American Lung Association and the National Institute for Occupational Safety and Health (NIOSH) in the U.S. Department of Health and Human Services. The entire video is available at the National Archives in College Park, Maryland.
Top 5 Tips On How To Attract Bees To Your Garden
For centuries people have sought to entice bees to their gardens. The reason? Bees are among nature's most efficient pollinators. In fact, the honey bee accounts for approximately 80% of all the pollination carried out by insects. Consequently, by inviting bees to nest in your back yard you can substantially improve the production of your home-grown flowers, plants, herbs and crops, as well as gain a sustainable source of delicious honey! So if you have been considering keeping bees, or simply wish to boost the pollinating power of your plants, here are several simple yet highly effective tips on how to attract bees to your garden.
Plant bee-friendly wildflowers
Local bees will be far more likely to flock to your garden if you have planted flowers they can pollinate! It is therefore recommended that you plant a broad spectrum of wildflowers that are indigenous to your local area; these will attract a diverse range of bee and insect species to your garden. If you are unsure which wildflowers are native to your area, contact your local garden centre, DIY store or beekeeping association for targeted advice and support.
Provide your local bees with a water source
Bees are unable to land in deep water and so are drawn to shallow areas where they can bathe and quench their thirst without placing themselves in unnecessary danger. You should therefore create a bee-friendly water source in your garden. From installing a bird bath that you only fill half-way to placing a shallow water dish among your flowerbeds, providing local bees with a safe and plentiful water source will ensure they return to your garden and bring along their friends!
Plant single petal flowers
Did you know that flowers with a single row of petals are more attractive to bees than any other type? These single-petal flowers generate more pollen than others, and their pollen is far easier for bees to reach because they only have to crawl across one row of petals. You should therefore plant as many single-petal flowers in your garden as possible to increase your local bee population. Some of the most popular single-petal flowers include roses, sunflowers, dahlias, snowdrops, geraniums, hyacinths, marigolds and poppies, as well as commonplace weeds such as dandelions and clovers.
Refrain from using pesticides and other chemical products
Neonicotinoid pesticides are present in the majority of plant and lawn treatment products you can buy at your local DIY store or garden centre. In recent years a wave of scientific research has linked these pesticides to colony collapse disorder, a condition that has drastically reduced the global population of both domestic and wild bee colonies. It is suggested that these pesticides can disrupt the navigation instincts of bees and even heighten their vulnerability to a wealth of different illnesses. To protect the welfare of local bees and attract them to your garden, avoid using pesticide treatments on your plants, flowers, trees and allotment crops, and invest only in organically produced, chemical-free products.
Create a bee shelter
Everyone needs a roof over their heads in the evenings and during poor weather conditions, even bees! You should therefore install a bee house in your garden. Whether you invest in an extensive beekeeping structure, a modest bee hotel or simply layer a small wooden box with nesting tubes, these shelters will invite local bees to establish their hive in your back garden and even produce honey while there!
Interactive maps can use data to create physical manifestations of something that might seem abstract at first. The New York Times took data gathered by NASA's Airborne Snow Observatory to create interactive maps that show the current state of the drought in the Sierra Nevada mountains. They used NASA's data to compare the snow that accumulated during the winter of 2017 to the snow accumulation during 2015, which was a much drier year. Then, they superimposed the data onto aerial maps of two different areas in the Sierra Nevadas. In a single click, a user can switch back and forth to compare the snowfall between 2017 and 2015. The snowfall in the Sierra Nevadas has a huge impact on the ongoing drought in California. One of California's main water supplies is melted snow, so when there is very little snow, as in 2015, the drought intensifies. With the increased snowfall in 2017, it is possible that the drought is lessening. Unlike a satellite image, these interactive maps use data gathered throughout the entire winter to show overall amounts of snowpack. The legend is very simple: if an area is white, then there was a meter or more of snow in that area. With a single click, a user can see a visceral image that shows just how much more snow fell in 2017. It's one thing to read that 10 times the amount of snow fell in 2017 than in 2015, and it's another thing entirely to see what impact that has on a geographical area. In this particular instance, data visualization gives a physical face to data that has ramifications across the country.
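The legend logic described above — render an area white when it held a meter or more of snow — amounts to thresholding a gridded snow-depth array. A minimal sketch with NumPy, using made-up depth values in place of the real observatory data:

```python
import numpy as np

# Hypothetical snow-depth grids (meters) for the same area in two winters.
depth_2015 = np.array([[0.1, 0.4],
                       [0.9, 1.2]])
depth_2017 = np.array([[1.5, 2.0],
                       [1.1, 0.8]])

def snow_mask(depth_m, threshold_m=1.0):
    """Boolean mask: True (rendered white) where depth >= threshold."""
    return depth_m >= threshold_m

mask_2015 = snow_mask(depth_2015)
mask_2017 = snow_mask(depth_2017)

# Fraction of the area covered by at least 1 m of snow in each year.
cover_2015 = mask_2015.mean()  # 0.25
cover_2017 = mask_2017.mean()  # 0.75
```

In an interactive map, toggling between the two years is just swapping which mask is drawn over the aerial imagery; the comparison the reader "clicks through" is the difference between the two boolean layers.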
Clay pipe bowls, moulded as a tulip (left) and an acorn (right). Note the smaller acorn which forms the part of the bowl called the ‘spur’ or ‘stud’. This was partly decorative and partly for knocking loose the congealed burnt tobacco. Kaolin... Clay pipe moulded in the shape of a wicker fishing basket with a herring swimming in through a hole at the base, where the bowl joins the stem. Two clay tobacco pipes, one embossed with buffalo horns and RAOB (Royal Antediluvian Order of Buffaloes), the other with leaf decoration up the seam and a heart, wreathed, beneath the sun, on one side, and a hand, wreathed, beneath the sun, on the other. The first pipe... Two clay tobacco pipes, possibly made by a Yarmouth pipemaker. One is decorated with fish scales going into a wicker basket. The other depicts fish or ripples swimming into a wicker basket (less crisply moulded). Designs of this sort may relate to the Yarmouth herring... Buffalo clay pipe, with the acronym RAOB (Royal Antediluvian Order of Buffaloes) and a buffalo’s face, with evidence, in the form of burn-marks, that the pipe has been smoked. Clay pipe with the acronym RAOB (Royal Antediluvian Order of Buffaloes) and a buffalo face embossed. Clay pipe bowls excavated from a ditch that was filled in 1883. Ally Sloper was a popular Victorian cartoon character. Here his head adorns a clay pipe, with his nose serving as the spur. The pipe has been in a destructor. Clay pipes: an animal (top); an Irish pipe with harp and shamrock on the other side (middle); and the maker Parnell (bottom). Kaolin pipe bowl showing Edward, crowned as Prince of Wales. The other side shows Princess Alexandra. The second bowl (right) bears the symbol of the Crossed Keys. Originally a papal emblem, it attached to many pubs, and this pipe may have been purchased at a pub of... One ribbed, without spur. Plain clay pipe bowls, one with maker’s initials on spur. Kaolin pipe bowls, one with a horse’s hoof for a heel/spur.
Made in Holland, smoked in London, dumped in Essex. Dutch clay pipes are distinctive because the bowl sits at about 45 degrees to the stem. This one is in the shape of a tulip – another Dutch import to Victorian London. Clay pipe in the shape of a bird’s claw clutching an egg. Clay pipe in the form of a bird’s claw clutching an egg. Kaolin pipes, smoked in London and dumped in Essex. Top right, the ‘spur’ or ‘heel’ of the pipe is Ally Sloper, a Victorian cartoon character. Centre left: a basketwork design.
Lev Semyonovich Vygotsky, a Soviet psychologist known for his work on psychological development in children, believed that dramatic play for toddlers helps promote social, cognitive, and emotional development (1). When toddlers reach the age of two or three, they develop a strong interest in the activities of the adult world and want to be a part of it. This desire is fulfilled through imitation and exploration of social relationships in the form of dramatic play. Read this post to learn about the importance of dramatic play for toddlers and a few interesting dramatic play ideas you may try. Why Is Dramatic Play Important For Toddlers? How To Encourage Dramatic Play In Toddlers? - Observe your child’s interests. - Sit face-to-face so they can see you and follow your actions. - Take the lead in case your toddler does not know how to play. - Avoid giving too many options at once, which may confuse them. - Take an active part in the play with your child to encourage them. - Expose your toddler to various new experiences, as they can become material for pretend play wherever they go. 26 Dramatic Play Ideas For Toddlers and Preschoolers 1. Grocery store Pretending to manage a grocery store will help develop communication skills in your toddler. You can start by clearing some shelves in your toddler’s playroom to replicate the shelves of a store. For the groceries, you can use empty cereal boxes or plastic play food available in the market. You will also need some grocery bags and a cash register. Let your little one be the storekeeper, and you can be the customer coming into their store to shop. Later, you may reverse these roles. 2. Flower market Gather a few artificial flowers in and around your home and place them in different containers. You can also create some signs on paper and hang them to give it the authentic feel of a flower shop. 3. Science lab A pretend science lab means mixing a buffet of things that are safe to mix.
The most common thing you will need for the science lab is water, plus a few safe-to-use kitchen tools and beakers. You may also put a few drops of food coloring in the water to make it authentic. A lab coat and safety goggles are essential in the science lab. 4. Coffee shop Grab a few cups and some play coins, and doodle a menu card for your toddler’s coffee shop. You can give them some foam to be used as coffee. If you plan to have a foamed cappuccino on the menu, you can cut some brown paper into small pieces and use it as foam. Get your aprons, and the coffee shop is open for service. Your child can have a wonderful time if you are fully involved in playing with them. 5. Pizza parlor You will need a few paper plates, crayons, sticky-back foam, scissors, and pizza boxes for a pizza parlor. Cut the foam into different vegetable shapes for use as toppings. The paper plates can be used as the base for the pizza and the crayons as the sauce. You can also use cardboard boxes to make an oven. Then, call in and place an order so they can make pizzas for you. 6. Airplane voyage Traveling is something that may excite anyone, so when traveling is not possible, you can pretend-play an airplane voyage. For this, you will need some chairs, a few pieces of paper, and a backpack. Let your toddler make boarding passes from the paper while you mark seat numbers on the chairs and create a ticket counter. Grab an atlas, select your travel destination, and write it on your boarding passes. Get to the counter, show your boarding pass and your passport, and find your seats. You can turn their toys into fellow passengers and be the flight attendant yourself. Bon voyage! 7. Tea party You can have tea parties with your children on special occasions or weekends for a wonderful time with each other. Set up the table with mini ceramic cups, play pastries, and cake. You can also arrange a mixer for baking to have lots of fun.
Do not forget to keep some aprons ready for the bakers. Toddlers can also invite their toy friends over for a tea party. 8. Santa’s elf workshop Who doesn’t love Santa, and the chance to become an elf working in Santa’s workshop? Clear out a table and set up different stations for the workshop, including a toy-testing station, a wrapping station, and a delivery station. You can also use some of their toys as gifts and put them in the stations to be wrapped and sent to Santa so that he can deliver them to the children on Christmas. 9. Library play Make a cute reading area in a cozy corner of your house. Set up a table near the reading area and make a register to note the books borrowed from and returned to the library. You can also make an index card to place inside the library’s books and mark them when your child takes them out for reading. After a few rounds, you can swap the librarian role with your toddler. 10. Ice cream shop Working at an ice cream shop is fun. First, clear the table and set up an ice cream counter for your child. Next, use colored pom-poms for ice cream scoops and make cones using brown paper. Finally, to add variety to the play, you can add small paper cups and plastic spoons to serve delicious ice creams. 11. Pirate ship Grab an empty laundry basket, a few party streamers, a paper bag, sticky-back foam, a wrapping paper tube, and scissors. On the inside of the paper bag, draw a sail and cut it out with the scissors. In the meantime, let your toddler decorate the mast (the wrapping paper tube). You can also use some party streamers to decorate the sides of the boat. Finally, tape the sail to the mast and attach the mast to the boat. 12. Cupcake bakery We all love cupcakes with pretty frosting and colorful sprinkles. This activity will help toddlers learn about colors and shapes. For it, you will need colored dough, cupcake liners, cookie cutters, cutting boards, and colorful sprinkles.
Your toddler can mix and match dough of different colors to make the cupcakes look interesting. After the dough is ready, cut it into different shapes and sizes using a cookie cutter and decorate it with sprinkles. 13. Vet’s office You can turn your toddler’s playroom into a vet’s clinic by placing some stuffed animals and a doctor’s kit in it. Pretend to be a pet owner and visit the clinic with your stuffed pet. Describe your pet’s illness to the little doctor and let them examine it. Sounds fun? 14. Junior detective All you will need for your toddler’s detective play are some household items and a few pieces of paper. Write clues on the paper and hide them in different places around the house, leading to a hidden prize at the end. Let your toddler put on their detective glasses and search the entire house. This activity helps improve their cognitive and emotional skills and also involves physical movement. 15. Perfume factory Making perfumes may help your toddler develop fine motor skills. All you need for this activity are some bottles, water, and flowers. Pour water into the bottles and head out to the garden. Pluck various flower petals, put them into the water, and leave them to infuse; the perfumes are ready. 16. Doctor’s chamber Turn your toddler’s playroom into a doctor’s chamber by placing a table, a chair, and an apron. Use their toys as patients, or pretend to be one yourself. Then, let the little doctor use a handy doctor play kit to examine you and prescribe your medicines. 17. Garden You will only require two things for this activity: Styrofoam and decorative flowers. Give these to your toddler and let them plant a beautiful garden by sticking the flowers into the Styrofoam. You can also perform this activity with your toddler so that they can learn from you. There can also be multiple teachable moments during the play where you can guide your toddler.
18. Camping Camping lets children engage in cooperative play and learn problem-solving skills. You will need a wrapping paper roll, some orange and red colored paper, double-stick tape, scissors, and a brown marker. Color the wrapping paper roll pieces brown with the marker; these will be the logs for the campfire. Cut the colored paper to form the flames of the fire. Place double-stick tape on the logs and let your child stick the flames to it. Stick all the logs together, and your campfire is ready. Also, don’t forget to pack some snacks for the camping trip. 19. Making a gingerbread man Making gingerbread men is a fun activity that can keep your little one engaged for a long time. You will need foam gingerbread, googly eyes, gingerbread cookie cutters, and cookie sheets. You can turn the kitchen counter into a gingerbread-making station and enjoy this activity with your child. 20. Toy store owner For this one, you can bring out all the toys. Clear out some shelves and arrange different toys in the store. You can also write prices for the toys on small pieces of paper, and the toy store is open for business. You can visit the toy store as a customer to buy some toys with fake money. 21. Managing a restaurant You can sometimes turn dinner time into playtime. Become the chef and let your toddler be your assistant. Plan the menu, cook the meal, set the table, and your at-home restaurant is ready for service. 22. Art gallery Hand out a few sheets of paper and some crayons to your little one and let them draw some masterpieces. Later on, hang these drawings in a room and turn it into an art gallery. Get tickets for you and your family members to this art gallery, because it is not something you would want to miss. And when others meet the artist in person, it will be exciting for your little one. 23. Ruling a kingdom Does your child aspire to be a ruler and rule a kingdom? Then why not make it happen.
You can make a crown using chart paper and decorate it with beads and stones. Turn their playroom into a kingdom and their stuffed toys into ministers. All hail the new ruler! 24. Post office You do not need any extravagant space to set up a pretend post office for your little one. Just set up a table and a chair in the corner of a room, and the post office is ready. Let all the family members write letters to each other and submit them to the post office. A little mail carrier will soon be arriving at your place with these sweet letters. 25. Go fishing You can use the couch as a boat and take your toddler fishing with the help of a string tied to a long wooden stick. First, cut a few colored pieces of paper into fish of different sizes and spread them on the mat in front of the couch. Then, with the help of the fishing rod, catch as many fish as you can. 26. Teddy bear hunt Hide some of your toddler’s teddy bears in different places around the house and leave a few clues for them to find. Then, after they have found all the hidden teddy bears, leave a clue leading to an interesting surprise gift (it can be their favorite candy or cookie) at the end of the hunt. Dramatic pretend play is an essential preschool activity for toddlers, as it helps develop essential social and intellectual skills. It also improves gross and fine motor skills. Take ideas from the list of dramatic play activities we have put together and have a wonderful time with your little one. MomJunction’s articles are written after analyzing the research works of expert authors and institutions. Our references consist of resources established by authorities in their respective fields. You can learn more about the authenticity of the information we present in our editorial policy. Sanjana did her post-graduation in Applied Microbiology from Vellore Institute of Technology, India.
Her interest in science and health, combined with her passion for writing, made her convert from a scientist to a writer. She believes her role at MomJunction combines the best of both worlds, as she writes health-based content grounded in scientific evidence. Sanjana is trained in classical…
In ACIA’s discussions, many threads led to one theme: communication. The need for information is ageless. “What’s going on?” has always been an instinctive human response in any critical situation. So have “Are you OK?” and “I’m all right” — reaching out to people who are important to us to learn or tell that someone has survived a violent or dangerous event. But if the questions haven’t changed, the means of asking and answering have been changing with dizzying speed. It is already hard to remember a world without cell phones, instant messaging and social networks, but that world is actually not far back in time. Only a handful of years before the Virginia Tech shootings we didn’t yet live in the confident expectation that we could contact anyone anytime from anywhere. That assumption is now universal, or very close to it, and among many other consequences it has had a profound effect on how people experience and respond emotionally in a critical incident. A striking indication of that effect was a survey finding that the largest single reason for post-traumatic stress symptoms among Virginia Tech students after April 16 was, as Professor Michael Hughes reported, “not being able to contact friends to confirm their safety.” In a different survey, described by Steven D. Sheetz, one of the authors, students were asked how important they felt it had been to have a cell phone. The response was unsurprising: Having your cell phone on that day, well it was really important. People who had cell phones really felt it was an important thing that they had that ability to communicate available to them on that day. In Sheetz’s sample, nearly two-thirds said having a cell phone was “extremely important” and another 13 percent called it “quite important.” Other means of instant communication were important too, though there were notable differences between students and others in their use of communication channels other than cell phones and e-mail. 
Texting, instant messaging and Facebook were used by large majorities of students (73 percent, 75 percent and 66 percent, respectively) but by much smaller numbers of faculty and staff respondents (less than 5 percent of faculty and 10 percent of staff reported using Facebook on April 16, for example, compared with two-thirds of students). It can be guessed, though, that only two years later, those generational gaps would be narrower. The revolution in communication technology has fascinating implications for incident management and critical incident analysis. One question is how use of that technology affects the emotional experience of a crisis. Does it reduce stress when people can tell others more quickly that they are safe, or learn that friends are OK? Or can it make the experience more stressful because unlike people in past crises, we have learned to expect instantaneous communication and become stressed much more quickly when we cannot reach someone? Steven Sheetz pointed out that it can be hypothesized that communicating will ease stress, but that such questions await further analysis: We know that that happened in Facebook. In Facebook there were groups that were headed “I’m OK at Virginia Tech” and by 2 o’clock that day many people could look at the list of names for that group and know everyone they knew was okay.* So from a computer guy’s perspective, it’s like hey, technology matters. It probably reduced stress. Now the question is how do we figure out how to measure that. Another set of questions has to do with how new communication channels can be used by institutions and authorities to inform, direct and reassure their communities in an emergency. Ned Benton, chair of the ACIA council, noted that one lesson from Virginia Tech for other universities is that they have to adapt communication strategies for the new technological environment. 
He cited responses to a 2008 survey conducted by the National Campus Safety & Security Project:

All kinds of universities and colleges answered what they do differently now, and one of the areas had to do with communication. How to broadcast, how to e-mail, how to make sure that if you communicated, whichever way you were communicating you could get the message and whichever way you wanted to get the message out there you could do it.

It is not just communication among people involved in a crisis, or among incident managers, that has been revolutionized in recent years. Communication to the larger public has also undergone profound change. It has been a cliche for some time to say that news, in a headline event like the Virginia Tech shootings, is now instantaneous and nonstop. It is rapidly becoming a cliche to add that news from conventional broadcast and print media is now accompanied by — or frantically trying to catch up with — a flood of information and images from spectators and participants, which reaches the public through the Internet and social networking sites without ever passing through any traditional news media structures.

Bruce Shapiro, executive director of the Dart Center for Journalism and Trauma, recalled that when the July 7 terrorist bombings in London occurred in 2005:

A number of people sent their video to the BBC. A year later the head of online news for BBC said to me that if that happened this year those videos would have gone straight to the internet, they wouldn’t have gone to the BBC.

If that transformation is now widely recognized, though, it is less clear just how it has changed the way we experience critical incidents, and how it shapes the public’s response. Jeffrey Stern posed the question this way:

We just had the 40th anniversary of the moon landing, the first globally witnessed televised event. Now it’s become commonplace, everything from shock and awe in Baghdad to the O.J.
Simpson chase to Michael Jackson to coverage of Virginia Tech and Columbine, Hurricane Katrina, the 9-11 attacks. We used to have an incident and it would affect the people right there, and everybody else would get the news on the radio later or the next day. Now we’re all a part of the incident on a global scale. That has to have a huge sociological, psychological impact. I was working in the White House the day of the Virginia Tech incident, and one thing that struck me was that within 24 hours we went from this bloodbath, this tragedy, to mourner-in-chief President Bush playing out this role before the blood had dried — this script of what we know we’re supposed to do because we’ve watched it so much. What is the psychological impact of the fact that whenever something happens everyone is a witness, globally and instantaneously? What is the impact for the future?

James Hawdon: There is a literature that doesn’t particularly look at mass media coverage, but the whole notion of communal bereavement, where people who have no attachment to the victims still suffer some aspect of trauma. The classic study was done in Sweden after the sinking of a ferry there, which was a traumatic event for the country. One could hypothesize that the more coverage we have, the more widespread this effect is going to be.

Danny Axsom: People desire information in order to reduce uncertainty. And that includes about typical norms for bereavement. We don’t know how to act and we look for a lot of comparison information from other people and from the media. Whether that’s constructive is another question. People here were being advised to step away from the computer or the TV. But with the lack of information about what’s going on, there’s also a lack of information about what’s normative as far as adjusting. How should I be adjusting, how are other people adjusting? So you look and look.
That’s not necessarily saying it’s good, but there’s a motivation for the audience to seek that information out.

In the new information era, we are flooded not just with instant facts but with instant speculation, opinion, unreliable early reports, spin, and guesswork. Arnold R. Isaacs commented:

While information spreads farther and faster than ever before, so does misinformation. Every reporter and every cop, everybody who deals with emergency situations, knows how chaotic and confusing and fragmented the story is during the initial stages. Even if individual reports are cautious and qualified — and obviously that’s not always the case — the way the process works today means that volume, intensity and repetition can make a whole that is much less careful than any of the individual parts. Think about the white van frenzy during the Washington sniper case. It was a flimsy story to begin with, there really wasn’t much there. Yet it had police stopping hundreds of white vans all over the mid-Atlantic states for days. And it obliterated the reports that did exist about the blue Chevy that was the actual vehicle used in the shootings. That’s an example of the echo effect, when something is repeated enough times that it starts to bounce back and forth: witnesses tell investigators and journalists things that are not new information or from their experience or observation, but just repeating what they’ve heard. Dave Cullen describes this process brilliantly in his book on Columbine. This doesn’t only complicate life for incident managers while something is happening. It can clutter up historical understanding as well. I’ll bet there are still people who think the Washington snipers were driving around in a white van.

Steven M.
Gorelick, professor of media studies at Hunter College, City University of New York, calls the phenomenon “cultural noise”:

A nonstop onslaught of rumors, partial knowledge, misinformation, self-proclaimed expert comment, nonsense and all sorts of craziness. This noise surrounds catastrophic and other incidents and complicates incident management and understanding. It comes from everyone from the evangelists to the scholars to the bloggers. They complicate the lives of people involved with this, they can offend, but they can’t be ignored. I’m not suggesting there’s anything that incident managers or sociologists can do about this. Much of this is protected speech. Catastrophic events now can be safely said to occur almost 98 percent in real time. Even in less developed countries, virtually everyone who comes to witness a catastrophe is carrying the equivalent broadcasting power of a television station. They carry it with them. They broadcast from an event. It used to be that an event would occur and it would be some moments before society could get a sense of what was going on and start building an account of it. But now noise occurs during an event. Last point, all this stuff now sticks around. It used to be ephemeral. You’d see it then it’s gone. Today’s cyberspace is an infinitely expanding area where all the noise is still there. Noise that you all from Virginia Tech have moved on from, rumors, crazy stuff, it’s all still there, and even if it’s been ripped off the internet, you know it’s been mirrored or cached and it’s still really there.

Why does this matter? More than at any other time, anyone in the midst of a calamity with a clear, serious message to communicate — perhaps immediate enough to involve actual danger — now sends that message into a confused and noisy environment packed with obstacles that can completely stymie the message’s reception.
Finding paths of communication amid this confounding labyrinth, especially when the stakes are high and the danger great, is a serious challenge of the digital age. Bruce Shapiro noted that underlying the noise is a fierce competition to define what an event means, and to use that meaning to influence political or policy or other consequences:

Charlotte Ryan, who teaches at the University of Massachusetts, says that power is the ability to control rules, resources or meaning.* Maybe one defining factor of critical incidents is that they overwhelm or undermine the ability to control rules, resources and meaning. I think where these collisions happen in the aftermath of critical incidents, what really matters more often than not is who will have control over the meaning. Whose story is it becomes critical in the aftermath. Who are some of these stakeholders who have a role, a stake, in shaping the meaning of an event? There are disaster response agencies, there are mental health professionals, there are educators, religious leaders, political leaders, organized victims. And institutions; we’ve heard about the university attempting to control how the story gets told. These are all stakeholders. Then there are hijackers of meaning. In the walk on campus we were talking about religious proselytizers who came to the campus trying to manipulate the meaning of this event to get people into their sects. We have talked about pundits who are hijackers of meaning, who try to graft a misleadingly simple narrative line onto something that’s really quite complex. There are politicians who seize upon critical incidents and then hijack them. I think the question of roles and responsibilities in shaping the aftermath has something to do with recognizing that meaning needs to be put together through the combined efforts of all the stakeholders rather than competitive storytelling. It’s a big challenge.
* More information on the use of Facebook after the April 16 events can be found in S. Vieweg, L. Palen, S. Liu, A. Hughes and J. Sutton, “Collective Intelligence in Disaster: Examination of the Phenomenon in the Aftermath of the 2007 Virginia Tech Shooting,” Proceedings of the 5th International ISCRAM Conference, Washington, D.C. (May 2008).

* Charlotte Ryan and Samuel Alexander, “‘Reframing’ the presentation of environmental law and policy,”
Muay Thai originated in Thailand. It is often known as the art of eight limbs, utilizing the fists, elbows, knees, and shins. “Muay” means combat; it derives from the Sanskrit word “Mavya,” meaning “to unite together,” while “Thai” refers to the country of Thailand. Arts considered cousins of Muay Thai include Muay Lao (originating in Laos) and Lethwei (originating in Burma), the art of nine limbs, which also utilizes the head. The word “Muay” serves as an umbrella term for the martial arts introduced to Thailand, Laos, and Cambodia. The close-combat art of Muay Thai was once used to defend Thailand from invasions by bordering countries such as Burma. For centuries, Muay Thai was a staple of Thai life and a mandatory close-combat art in the Thai army. In modern life, the martial art has evolved into a sport and a way of life.

The Wai Kru Ram Muay: Spiritual Dance

The Wai Kru Ram Muay is a spiritual ritual usually performed before a Muay Thai fight. Fighters wear the Mongkhon and Pra Jiad while dancing to the Sarama (traditional Thai music). The rhythmic music is played on an oboe along with Thai drums or cymbals. During a fight, the tempo of the Sarama increases to set the pace and atmosphere of the bout. The dance pays respect to the fighter’s ancestors, teachers, and family.

The Mongkhon: Headwear

The Mongkhon (headwear) is worn during the Wai Kru Ram Muay. In Thailand, the Mongkhon is said to be a sacred representation of each academy. In ancient times, the Mongkhon was worn by Thai soldiers and made from pieces of fabric from a loved one or family member. The fabric was often blessed by a monk before it was worn, and served as a good luck charm with spiritual powers to protect the soldier. Today, the Mongkhon is made from ropes, ribbons, and other silk materials.

The Pra Jiad: Armband

The Pra Jiad is an armband worn to give the fighter good luck. Some fighters wear it as a token of good luck, or to show their rank and status in the sport of Muay Thai. The Pra Jiad is usually made by the fighter’s family or teachers. Sometimes prayers are said by a Buddhist monk or a close family member as the Pra Jiad is tied to the arm.

Muay Thai is a highly effective martial art built on the science of eight limbs. It has been passed down through years of war and has evolved into a sport and a way of life. Whether you’re training to become a professional fighter or mixed martial artist, or simply to live a healthy lifestyle, Muay Thai is a discipline worth considering.

“What Is Muay Thai, Muay Thai History of Training and Fighting.” Tiger Muay Thai & MMA Training Camp, Phuket, Thailand, www.tigermuaythai.com/about-muay-thai/history.

Temps, Dietmar. “Muay Lao: the Kick Boxing Scene in Vientiane, Laos.” Dietmar Temps, Photography, dietmartemps.com/travel-blog/muay-lao-the-kick-boxing-scene-in-vientiane-laos_582/.
Pertussis (Whooping Cough)

People of all ages can become ill with pertussis, and some can become very sick. Children younger than 6 months of age are the most vulnerable to serious illness, and even hospitalization, if they develop pertussis. The most effective prevention against pertussis is vaccination. Vaccination of household members and other close family and friends helps protect infants. Pregnant women should receive Tdap, the pertussis booster vaccine, during each pregnancy to help decrease the chances of the baby being exposed to pertussis. Babies and young infants are further protected when parents, caregivers, siblings and healthcare workers stay up to date on pertussis vaccinations. California schools require that all students entering 7th grade provide proof of receiving Tdap.

Pertussis begins with the symptoms of a cough and runny nose for 1-2 weeks, followed by weeks of coughing fits. Unlike in most other respiratory illnesses, fever is not usually seen. People with symptoms should see their health care provider for testing, diagnosis and potential treatment.

- Fact Sheet | Spanish
- Whooping Cough factsheet (CDPH): English | Spanish
- Information for pregnant women: English | Spanish
- Is it Just a Cough? Poster (CDPH): English | Spanish
- Protect Babies from Whooping Cough (CDC): English | Spanish
- Pertussis Poster for Parents and Grandparents (CDC): English | Spanish

The best tool for prevention of pertussis is vaccination. There are vaccines for infants, children, preteens, teens and adults. The childhood vaccine is called DTaP, and the pertussis booster vaccine for adolescents and adults is called Tdap. Both vaccines also protect against two other diseases: tetanus and diphtheria.

- Children should get 5 doses of DTaP vaccine at the following ages: 2 months, 4 months, 6 months, 15-18 months and 4-6 years.
- The Tdap booster vaccine is recommended at 11-12 years of age and for adults who have not previously received a dose.
- IMPORTANT: Pregnant women should receive an additional Tdap booster vaccine during each pregnancy, regardless of vaccination history.

The best place to get vaccinated is at your regular health care provider or clinic. Other locations in Contra Costa County also offer pertussis-containing vaccine.

- Pertussis Vaccination: Where to Get Vaccinated
- California School Tdap Vaccination Requirement
- Find out where to get vaccinated

Information about Pertussis Vaccines

- Vaccine Information Statements (CDC)
- Personal Stories of Pertussis Disease (Shot by Shot)
- Pertussis Videos (Immunization Action Coalition)
- Preventing the spread of whooping cough (pertussis): English | Spanish Protejiendo de la Tos Ferina
- Pertussis Radio PSA (CDPH): English | Spanish Tos ferina (Pertussis) PSA
- Pertussis PSA (CDC): English | Spanish Tos ferina (Pertussis) PSA
- Recognizing and Preventing Whooping Cough (Pertussis) (CDC): English | Spanish Reconocimiento y prevención de la tos ferina (Pertussis)
- Contra Costa Pertussis Press Release – Jul 16, 2013
- Protect Our Babies from Pertussis – Healthy Outlook – May 05, 2010
- Shots Not Just for Pre-Schoolers – Healthy Outlook – May 30, 2004
- Report any suspect or laboratory-confirmed case of pertussis; and
- Immediately report outbreaks by phone to Contra Costa Public Health Communicable Disease Programs at 925-313-6740.

Schools & Child Care Settings
Art: Drawing for Advanced

This is a broad drawing course suitable for those who have a good knowledge and experience of drawing using a range of processes. The course focuses on learning correct drawing techniques and teaches the art of drawing through practical demonstrations and one-to-one assistance throughout each session. The course has a progressive structure, with a new technique and a particular area of drawing covered each week.

NB: This course will be taught at Woodley Hill House.

You will learn:
- shading techniques
- observational techniques
- light and shadow, and proportion
- drawing from life
- drawing from photographs
- drawing with graphite
- drawing with colour.

For the first few sessions the tutor will bring in objects and images for the class to draw, after which you will be encouraged to bring in your own objects and images, as suggested by the tutor. This course is not formally assessed, but you will be taught via group activities, practical demonstrations and one-to-one tutorials. Each week, a class handout will be provided giving further information on the class and the topics covered that week.

For the first session, you will need to come prepared with the following materials:
- graphite pencils
- a sketchbook of your choice