May 4, 2021
https://www.sciencedaily.com/releases/2021/05/210504112554.htm
Air pollution linked to high blood pressure in children; other studies address air quality and the heart
A meta-analysis of 14 air pollution studies from around the world found that exposure to high levels of air pollutants during childhood increases the likelihood of high blood pressure in children and adolescents, and their risk for high blood pressure as adults. The study is published in a special issue on air pollution in the
Other studies look at: the effects of diesel exhaust on the muscle sympathetic nerve; the impact of pollutants on high blood pressure; rates of hospital readmission for heart failure among those exposed to high levels of ambient air pollution; and risk of stroke and heart attack after long-term exposure to high levels of particulate matter. The studies include health outcomes of people who were exposed to pollutants in the United States, China and Europe.

High blood pressure during childhood and adolescence is a risk factor for hypertension and heart disease in adulthood. Studies on air pollution and blood pressure in adolescents and children, however, have produced inconsistent conclusions. This systematic review and meta-analysis pooled information from 14 studies focused on the association between air pollution and blood pressure in youth. The large analysis included data for more than 350,000 children and adolescents (mean ages 5.4 to 12.7 years).

"Our analysis is the first to closely examine previous research to assess both the quality and magnitude of the associations between air pollution and blood pressure values among children and adolescents," said lead study author Yao Lu, M.D., Ph.D., professor of the Clinical Research Center at the Third Xiangya Hospital at Central South University in Changsha, China, and professor in the department of life science and medicine at King's College London.
"The findings provide evidence of a positive association between short- and long-term exposure to certain environmental air pollutants and blood pressure in children and adolescents."The analysis included 14 studies published through September 6, 2020, exploring the impact of long-term exposure (?30 days) and/or short-term exposure (<30 days) of ambient air pollution on blood pressure levels of adolescents and/or children in China and/or countries in Europe.The studies were divided into groups based upon length of exposure to air pollution and by composition of air pollutants, specifically nitrogen dioxide and particulate matter with diameter ?10 ?m or ?2.5 ?m. (The majority of research linking heart disease with particulate matter focuses on particle matter mass, which is categorized by aerodynamic diameter -- ?m or PM.) Fine particles are defined as PM2.5 and larger; coarse particles are defined at PM10; and the concentrations of particulate matter are typically measured in their mass per volume of air (?g/m3).The meta-analysis concluded:"To reduce the impact of environmental pollution on blood pressure in children and adolescents, efforts should be made to reduce their exposure to environmental pollutants," said Lu. "Additionally, it is also very important to routinely measure blood pressure in children and adolescents, which can help us identify individuals with elevated blood pressure early."The results of the analysis are limited to the studies included, and they did not include data on possible interactions between different pollutants, therefore, the results are not generalizable to all populations. Additionally, the analysis included the most common and more widely studied pollutants vs. 
air pollutants confirmed to have heart health impact, of which there are fewer studies.The study was funded by the National Natural Science Foundation of China; Hunan Youth Talent Project; the Natural Science Foundation of Hunan Province; and the Fundamental Research Funds for Central Universities of Central South University.
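The pooling step at the heart of a meta-analysis like this one can be illustrated with a short sketch. The effect sizes and variances below are hypothetical, and the inverse-variance fixed-effect estimator shown is a simplification of what published pooled analyses actually use (which often also fit random-effects models):

```python
import numpy as np

def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted pooling of per-study effect sizes
    (the core calculation of a fixed-effect meta-analysis)."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Hypothetical per-study blood-pressure differences (mmHg) per unit
# increase in pollutant exposure, with their sampling variances.
effects = [0.5, 1.2, 0.8, 0.3]
variances = [0.04, 0.09, 0.05, 0.02]
pooled, se = pool_fixed_effect(effects, variances)
print(f"pooled effect = {pooled:.3f} mmHg (SE {se:.3f})")
```

Studies with smaller variance (more precise estimates) receive larger weights, which is why the pooled value sits closest to the most precise studies.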
Environment
2021
May 4, 2021
https://www.sciencedaily.com/releases/2021/05/210504112516.htm
Mangroves and seagrasses absorb microplastics
Mangroves and seagrasses grow in many places along the coasts of the world, and these 'blue forests' constitute an important environment for a large number of animals. Here, juvenile fish can hide until they are big enough to take care of themselves; crabs and mussels live on the bottom; and birds come to feed on the plants.
However, the plant-covered coastal zones do not only attract animals but also microplastics, a new study shows.

"The denser the vegetation, the more plastic is captured," says Marianne Holmer, professor and expert in coastal ecology at the University of Southern Denmark. She is concerned about how the accumulated microplastics affect animal and plant life.

"We know from other studies that animals can ingest microplastics and that this may affect their organisms."

Animals ingest microplastics with the food they seek in the blue forests. They may suffocate, die of starvation, or the small plastic particles can get stuck in different places in the body and do damage. Another problem with microplastics is that they may be covered with microorganisms, environmental toxins or other hazardous or disease-promoting substances that are transferred to the animal or plant that absorbs the microplastics.

"When microplastics are concentrated in an ecosystem, the animals are exposed to very high concentrations," Marianne Holmer explains. She points out that microplastics concentrated in, for example, a seagrass bed are impossible to remove again.

The study is based on examinations of three coastal areas in China, where mangroves, Japanese eelgrass (Z. japonica) and the paddle weed Halophila ovalis grow. All samples taken in blue forests had more microplastics than samples from control sites without vegetation. The concentrations were up to 17.6 times higher, and they were highest in the mangrove forest; in the seagrass beds, they were up to 4.1 times higher. Mangrove forests probably capture more microplastics because particle capture is greater there than in seagrass beds.

Researchers also believe that microplastics bind in these ecosystems in the same way as carbon: the particles are captured between leaves and roots, and the microplastics are buried in the seabed.

"Carbon capture binds carbon dioxide in the seabed, and the blue forests are really good at that, but it's worrying if the same thing happens to microplastics," says Marianne Holmer.

Although the study was conducted along Chinese coasts, it may be relevant to similar ecosystems in the rest of the world, including Denmark, where eelgrass beds are widespread.

"It's my expectation that we will also find higher concentrations of microplastics in Danish and global seagrasses," she says.

The study was conducted in collaboration with colleagues from Zhejiang University in China, among others, and is published in the journal

The blue forests: Lots of plants grow in or below sea level: mangroves, seaweed, seagrass and marsh plants. Mangroves and seagrasses in particular absorb and store carbon like plants on land and are thus extremely important for the planet's carbon balance.
Environment
2021
May 4, 2021
https://www.sciencedaily.com/releases/2021/05/210504112514.htm
Artificial intelligence to monitor water quality more effectively
Artificial intelligence that enhances remote monitoring of water bodies -- highlighting quality shifts due to climate change or pollution -- has been developed by researchers at the University of Stirling.
A new algorithm -- known as the 'meta-learning' method -- analyses data directly from satellite sensors, making it easier for coastal zone, environmental and industry managers to monitor issues such as harmful algal blooms (HABs) and possible toxicity in shellfish and finfish.

Environmental protection agencies and industry bodies currently monitor the 'trophic state' of water -- its biological productivity -- as an indicator of ecosystem health. An overabundance of microscopic algae, or phytoplankton, is known as eutrophication; it can develop into HABs, which are an indicator of pollution and pose a risk to human and animal health. HABs are estimated to cost the Scottish shellfish industry £1.4 million per year, and a single HAB event in Norway killed eight million salmon in 2019, with a direct value of over £74 million.

Lead author Mortimer Werther, a PhD Researcher in Biological and Environmental Sciences at Stirling's Faculty of Natural Sciences, said: "Currently, satellite-mounted sensors, such as the Ocean and Land Colour Instrument (OLCI), measure phytoplankton concentrations using an optical pigment called chlorophyll-a. However, retrieving chlorophyll-a across the diverse nature of global waters is methodologically challenging.

"We have developed a method that bypasses the chlorophyll-a retrieval and enables us to estimate water health status directly from the signal measured at the remote sensor."

Eutrophication and hypereutrophication are often caused by excessive nutrient input, for example from agricultural practices, waste discharge, or food and energy production. In impacted waters, HABs are common, and cyanobacteria may produce cyanotoxins, which affect human and animal health.
In many locations, these blooms are of concern to the finfish and shellfish aquaculture industries.

Mr Werther said: "To understand the impact of climate change on freshwater aquatic environments such as lakes, many of which serve as drinking water resources, it is essential that we monitor and assess key environmental indicators, such as trophic status, on a global scale with high spatial and temporal frequency.

"This research, funded by the European Union's Horizon 2020 programme, is the first demonstration that trophic status of complex inland and nearshore waters can be learnt directly by machine learning algorithms from OLCI reflectance measurements. Our algorithm can produce estimates for all trophic states on imagery acquired by OLCI over global water bodies.

"Our method outperforms a comparable state-of-the-art approach by 5-12% on average across the entire spectrum of trophic states, as it also eliminates the need to choose the right algorithm for water observation. It estimates trophic status with over 90% accuracy for highly affected eutrophic and hypereutrophic waters."

The collaborative study was carried out with five external partners from research and industry: Dr Stefan G.H. Simis from Plymouth Marine Laboratory; Harald Krawczyk from the German Aerospace Center; Dr Daniel Odermatt from the Swiss Federal Institute of Aquatic Science and Technology; Kerstin Stelzer from Brockmann Consult; and Oberon Berlage from Appjection (Amsterdam).
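The study's actual meta-learning algorithm is not reproduced here, but the general idea of learning trophic state directly from satellite reflectance can be sketched with a generic supervised classifier on synthetic data. Everything below (the band count, the labeling rule, the model choice) is an invented stand-in, not the Stirling method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for OLCI reflectance spectra: 500 samples x 12 bands.
# Labels 0-3 represent oligo-, meso-, eu- and hypereutrophic classes,
# derived here from a toy rule tying one band to trophic state.
X = rng.random((500, 12))
y = np.clip((X[:, 3] * 4).astype(int), 0, 3)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train a classifier to predict trophic state directly from reflectance,
# bypassing any intermediate chlorophyll-a retrieval.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

Because the synthetic labels depend on a single band, the classifier recovers the rule easily; real water spectra are far messier, which is what motivates the meta-learning approach described above.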
Environment
2021
May 4, 2021
https://www.sciencedaily.com/releases/2021/05/210503151309.htm
Northern Red Sea corals pass heat stress test with flying colors
Even under the most optimistic scenarios, most of the coral reef ecosystems on our planet -- whether in Australia, the Maldives or the Caribbean -- will have disappeared or be in very bad shape by the end of this century. That's because global warming is pushing ocean temperatures above the limit that single-cell algae, which are corals' main allies, can withstand. These algae live inside coral tissue for protection and, in exchange, provide corals with essential nutrients produced through photosynthesis. Because the algae contain a variety of pigments and therefore give coral reefs their famous colors, if they are lost the corals turn white, which is known as coral bleaching. But in spite of the real threat caused by global warming, corals in the Red Sea look set to keep their vibrant color.
"We already knew that corals in the Gulf of Aqaba, at the northern tip of the Red Sea, were particularly resistant to higher temperatures. But we wanted to study the full molecular mechanism behind this resistance," says Romain Savary, a postdoc at EPFL's Laboratory for Biological Geochemistry (LGB) and lead author of the study, which appears today in To conduct their study, the scientists subjected Gulf of Aqaba corals to a range of heat stresses including the higher temperatures likely to occur in the coming decades. The average maximum monthly temperature in these waters is currently around 27°C, so the scientists exposed coral samples to temperatures of 29.5°C, 32°C and 34.5°C, over both a short time period (three hours) and a longer one (one week). The scientists measured the corals' and symbiotic algae's gene expression both during and after the heat stress test, and determined the composition of the microbiome residing in the corals."The main thing we found is that these corals currently live in temperatures well below the maximum they can withstand with their molecular machinery, which means they're naturally shielded against the temperature increases that will probably occur over the next 100 or even 200 years," says Savary. "Our measurements showed that at temperatures of up to 32°C, the corals and their symbiotic organisms were able to molecularly recover and acclimate to both short-term and long-term heat stress without any major consequences." This offers genuine hope to scientists -- although warmer waters are not the only threat facing this exceptional natural heritage.This is the first time scientists have conducted a genetic analysis of coral samples on such a broad scale, and their findings reveal how these heat-resistant corals respond at the most fundamental level -- gene expression. They can also be used as a basis for identifying 'super corals.' 
According to Meibom, "Romain's research gives us insight into the specific genetic factors that allow corals to survive. His study also indicates that an entire symphony of genetic expression is at work to give corals this extraordinary power." This sets a standard for what "super coral" gene expression looks like during a heat stress and a recovery. But could Red Sea corals be used to one day repopulate the Great Barrier Reef? "Corals are highly dependent on their surroundings," says Meibom. "They can adapt to new environments only after a long, natural colonization process. What's more, the Great Barrier Reef is the size of Italy -- it would be impossible to repopulate it artificially."The scientists' work was made possible thanks to two unique research instruments: the Red Sea Simulator (RSS), developed by the Interuniversity Institute for Marine Sciences in Eilat, Israel; and the Coral Bleaching Automated Stress System (CBASS), developed by a team of researchers in the US. Their findings have laid the groundwork for a much more ambitious project that will be led by the Transnational Red Sea Center (TRSC, And what does that glimpse into the future tell us? Some corals in the southern Red Sea are already starting to bleach. Savary believes there's just one solution: "We have to protect these corals and shield them from local stressors, which are mainly sources of pollution and physical destruction. That way we can keep a stock of 'natural super corals' for potentially recolonizing areas that have been hit particularly hard by climate-change-induced heat waves."
Environment
2021
May 3, 2021
https://www.sciencedaily.com/releases/2021/05/210503172832.htm
Local impacts from fracking the Eagle Ford
Hydraulic fracturing to extract trapped fossil fuels can trigger earthquakes. Most are so small or far from homes and infrastructure that they may go unnoticed; others can rattle windows, sway light fixtures and jolt people from sleep; some have damaged buildings.
Stanford University geophysicists have simulated and mapped the risk of noticeable shaking and possible building damage from earthquakes caused by hydraulic fracturing at all potential fracking sites across the Eagle Ford shale formation in Texas, which has hosted some of the largest fracking-triggered earthquakes in the United States. The research was published April 29 in

Tens of thousands of wells drilled in the vast formation over the past decade helped to fuel the U.S. shale boom and contributed to a dramatic increase in earthquakes in the central and eastern U.S. starting around 2009. Although damaging earthquakes are rare, the authors write, "the perceived risks of hydraulic fracturing have both caused public concern and impeded industry development."

In sparsely populated areas within the southwestern portion of the Eagle Ford, the researchers found damage is unlikely even if fracking causes earthquakes as large as magnitude 5.0. Allowing such powerful quakes, however, could jeopardize the "social license to operate," they write. The phrase, which emerged within the mining industry in the 1990s and has since been adopted by climate activists, refers to the unofficial acceptance by local community members and broader civil society that oil, gas and mining operations need in order to do business without costly conflicts.

"Seismicity is part of the social license for hydraulic fracturing, but far from the only issue," said study co-author Bill Ellsworth, a geophysics research professor at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "Eliminating hydraulic fracturing seismicity altogether wouldn't change any of the other concerns." Among those concerns are health threats from living near oil and gas wells, and greenhouse gas emissions from fossil fuel production and use.
California's recent announcement of plans to stop issuing new permits for hydraulic fracturing by 2024, for example, comes as part of an effort to phase out oil extraction and reduce greenhouse gas emissions.

The researchers say their goal is to make it easier for operators, regulators, local residents and property owners to discuss the risks that are important to them without technical expertise. "The approach we've developed provides the risk of nuisance or damage as a shared frame of reference and tools to evaluate it," said study co-author and geophysics professor Greg Beroza, co-director of the Stanford Center for Induced and Triggered Seismicity (SCITS).

The new risk analysis applies a technique first published last year for considering where people and structures are located, as well as forecasts for maximum earthquake magnitude and geological factors that can amplify or dampen tremors as they travel underground. The approach makes it possible to start out with some level of risk -- such as a 50 percent chance of 30 households experiencing shaking that feels exciting but not frightening, based on community questionnaires -- and calculate the largest earthquake magnitude that would keep risk at or below that level.

The authors propose using this type of analysis as a starting point for managing earthquake risk caused by fracking using a system known as a traffic-light protocol. Adopted in states including Ohio and Oklahoma to manage seismic hazards related to oil, gas and some geothermal energy development, traffic-light protocols give operators a green light to proceed as long as quakes remain relatively small.
Larger earthquakes may require an operator to adjust or halt fluid injections, knowing that shaking may continue and even intensify after the pumps shut down.

"If the goal is to treat everyone equally in terms of risk, our analysis shows action should be taken at lower magnitudes for drill sites near the cities in the north of the Eagle Ford than for those in rural areas in the south," explained Ellsworth, who is also a co-director of SCITS.

According to the researchers, it's "unfair" to set a uniform threshold for the amount of shaking allowed across a large formation like the Eagle Ford. "Single-valued thresholds can allow for thresholds that are too permissive in urban regions or too restrictive in rural regions," said Beroza, the Wayne Loel Professor at Stanford Earth. "Instead, if you start with a tolerance to risk, you can set thresholds that vary according to changes in the risk."
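The traffic-light logic described above can be sketched in a few lines. The magnitude thresholds here are illustrative placeholders, not values from the study; the point, as the Stanford authors argue, is that the thresholds should vary with local risk rather than being uniform:

```python
def traffic_light(magnitude, green_max=2.0, amber_max=3.0):
    """Toy traffic-light protocol: map an observed event magnitude to
    an operator action. Thresholds are hypothetical and adjustable,
    reflecting the idea that risk tolerance sets the limits."""
    if magnitude < green_max:
        return "green: continue operations"
    if magnitude < amber_max:
        return "amber: reduce injection, increase monitoring"
    return "red: halt injection"

# Uniform thresholds in a rural area vs. tighter, risk-based thresholds
# for a hypothetical site near a city.
print(traffic_light(1.4))
print(traffic_light(2.6))
print(traffic_light(2.6, green_max=1.5, amber_max=2.5))
```

The same magnitude-2.6 event yields "amber" under the default thresholds but "red" under the tighter near-city thresholds, which is the risk-equalizing behavior the researchers advocate.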
Environment
2021
May 3, 2021
https://www.sciencedaily.com/releases/2021/05/210503144727.htm
Cell atlas of stony corals is boost for coral reef conservation efforts
Researchers at the University of Haifa, the Weizmann Institute and the Centre for Genomic Regulation (CRG) have built the first atlas of all of the different types of cells in Stylophora pistillata, a reef-building stony coral native to the Indo-Pacific oceans. Published today in the journal
The findings provide new insights into the molecular biology and evolution of corals and will aid present and future conservation efforts to protect coral reef ecosystems threatened by rising temperatures and ocean acidification.

The map reveals that Stylophora pistillata has 40 different cell types across the three main stages of its life cycle. The researchers found molecular mechanisms responsible for vital biological processes such as the formation of the coral's skeleton, which serves as the habitat for a large number of marine species. The team also uncovered how corals establish a symbiotic relationship with the photosynthetic algae that reside within their cells.

The researchers were also surprised to discover the presence of specialized immune cells that employ many genes typically associated with immune cell function in vertebrates. It had previously been thought that innate immunity plays a role in preserving the health of algal symbionts, as well as in resilience to rising temperatures and acidification, but until now no specialized immune cells had been reported in corals.

According to Dr. Tali Mass, one of the authors of the study and a researcher at the University of Haifa, "Coral reefs play a critical role in the ecosystem of oceans and seas, since they provide a habitat for around 25% of animals in the sea and build the largest biogenic structures in the world. The warming of the seawater and rising acidity pose a threat to the future of coral reefs, and accordingly, the genetic sequencing we have completed is extremely important for the survival of coral reefs and the future of the oceans."

According to Arnau Sebe Pedrós, co-author of the study and Group Leader at the CRG, "Our work systematically defines the molecular biology of coral cells. This cell atlas will help to better understand the responses of corals to rising temperatures and ocean acidification, and may even eventually help design interventions that boost the resilience of the coral reefs we still have left. This work is also a good example of how single-cell genomics technologies are revolutionizing our understanding of animal biodiversity and evolution, bridging the gap between genomes and organisms."

The researchers built the cell atlas by using a method called single-cell RNA sequencing to measure the gene expression of each individual cell. In research, single-cell RNA sequencing is almost exclusively limited to species that can be grown in laboratory conditions. As stony corals are difficult to grow in the lab, researchers in Israel collected corals at different stages of their life cycle in the Gulf of Eilat and then transported them to the Weizmann Institute and to the CRG in Barcelona for sequencing and analysis. The study is one of the few to carry out single-cell analysis on species sampled from the wild.

Stony corals are the foundation species for many coral reefs. They begin their life as a swimming larva that disperses and settles as a polyp. Polyps rapidly build a protein-rich matrix that forms a calcium carbonate skeleton, eventually developing into a colonial adult composed of many individual polyps. Stony coral colonies are the main habitat for a huge diversity of marine species, which is why coral reefs are considered the rainforests of the sea.

Stony corals live in tropical seas by forming a symbiotic relationship with photosynthetic algae that live within their cells. The algae provide photosynthetic products to the coral, which in turn provides the algae with carbon. This symbiotic relationship sustains the high energy demands of coral growth and reproduction, including the production of the skeleton.

In the last few decades, coral reefs have declined worldwide.
The main drivers of this decline are rising ocean temperatures and acidification, which directly impact coral symbiosis by leading to coral bleaching, where corals expel the algae living in their tissues, as well as affecting skeleton formation through reduced calcification rates.
Environment
2021
May 3, 2021
https://www.sciencedaily.com/releases/2021/05/210503144514.htm
GM grass cleanses soil of toxic pollutants left by military explosives, new study shows
A grass commonly used to fight soil erosion has been genetically modified to successfully remove toxic chemicals left in the ground from munitions that are dangerous to human health, new research shows.
The study -- led by the University of York -- demonstrates that genetically modified switchgrass (Panicum virgatum) can detoxify residues of the military explosive RDX left behind on live-fire training ranges, munitions dumps and minefields.

RDX has been a major component of munitions since WW2 and is still used extensively on military training grounds. This use has resulted in widespread pollution of groundwater.

Researchers generated the plants by inserting two genes from bacteria able to break down RDX. The plants were then grown in RDX-contaminated soil on a US military site. The genetically modified grass grew well and successfully degraded RDX to non-detectable levels in its tissues.

The study's authors, Professor Neil Bruce, from the Department of Biology and Director of the Centre for Novel Agricultural Products (CNAP), and Dr Liz Rylott, also from CNAP, believe it is the first successful example of the use of a GM plant in the field to remove organic pollutants that are resistant to environmental degradation.

Dr Rylott said: "The removal of the toxic RDX from training ranges is logistically challenging and there is currently a lack of cost-effective and sustainable solutions.

"Our research demonstrates how the expression, in switchgrass, of two bacterial genes that have evolved specifically to degrade RDX gives the plants the ability to remove and metabolise RDX in the field at concentrations relevant to live-fire military ranges.

"We demonstrated that by inserting these genes into switchgrass, the plant then had the ability to degrade RDX to non-detectable levels in the plant tissue."

The study confirmed that the plants were able to remove and degrade RDX at a rate of 27 kg RDX per hectare. RDX is designated as a priority pollutant by the US Environmental Protection Agency and is of significant and increasing public concern. In the US, over 10 million hectares of military land are contaminated with munitions components, of which RDX is a major one.

Professor Bruce added: "The recalcitrance of RDX to degradation in the environment, combined with its high mobility through soil and groundwater, means that plumes of toxic RDX continue to spread below these military sites, threatening drinking water supplies."

The paper cites an example from 1997, when plumes of RDX pollution were discovered in both the groundwater and the aquifer beneath the training range at the Massachusetts Military Reservation in Cape Cod. The aquifer is the sole source of drinking water for half a million people, and the discovery led the Environmental Protection Agency to prohibit the use of all live munitions during training at the site.

The study says that the continuing demand for vast amounts of military explosives means that RDX will continue to be manufactured and used globally on a massive scale for the foreseeable future.
Environment
2021
May 3, 2021
https://www.sciencedaily.com/releases/2021/05/210503135621.htm
New research shows long-term recovery possible for areas impacted by seagrass die-off
Nearly 10,000 acres of lush seagrass vanished from Florida Bay between 1987 and 1991, leading to massive ecological changes in the region near the Florida Keys. Abundance of the seagrass,
Researchers from the University of South Florida, the Florida Fish and Wildlife Conservation Commission (FWC) and the University of North Carolina Wilmington documented the response of seagrasses after the die-off. Their detailed data collection over more than 20 years across the large area of impact has provided unique insight into seagrass resiliency, or the ability of a coastal ecosystem to recover after extensive loss. This study is published in

Seagrass plays an important role across much of the Gulf of Mexico and Caribbean Sea, providing critical habitat and feeding grounds for many species of fish, turtles and other wildlife. Seagrasses are considered to be among the most productive ecosystems in the world, and in Florida Bay they contribute to a sport fishing industry worth hundreds of millions of dollars per year.

USF Distinguished University Professor Susan Bell first learned of the 1987 large-scale seagrass die-off in Florida when she got a call from a long-time fisherman friend who noticed the seagrass disappearing and large amounts of dead seagrass. Bell notified colleagues at FWC, who began to document what was happening across a roughly 15-square-mile stretch of the bay.

For more than 10 years, researchers saw little to no change in seagrass, especially in the levels of turtlegrass. After another decade of monitoring, however, researchers reported a return to pre-die-off levels of turtlegrass in the region. The study shows that the entire sequence of die-off, algal blooms and recovery took 17-23 years. Both the long duration of the study and the large area over which the data were systematically collected are unique among reports of seagrass recovery. Also, most studies of marine populations that recover from a disturbance link the recovery to human intervention, such as removing a source of pollution; in this case, the recovery required no human activities.

"While the fact this system recovered after the 1980s die-off is fantastic, we really wanted to figure out the mechanisms that allowed recovery to happen," said Bell, a faculty member in the USF Department of Integrative Biology. "What we discuss are a number of features that underlie the seagrass recovery: the system was remote, remnants of seagrass left over after the die-off served as a catalyst for repopulation, and having multiple species of seagrass present increases the likelihood of recovery." In the last case, two opportunistic seagrass species were the first to increase in abundance after the die-off and likely facilitated the return of turtlegrass.

Bell believes this study can serve as a framework for other regions experiencing seagrass die-off, including, once again, Florida Bay, which is still in the midst of a die-off that began in 2015. The work warns that evaluating ecosystem resiliency may take decades, mandating long-term studies. Researchers are continuing to study the changes in Florida Bay but are hopeful that, with the right conditions, the region can once again return to normal.

"Today, this monitoring program provides some of our best information on the status of the system," said Brad Furman, a co-author of the study and a research scientist at FWC's Fish and Wildlife Research Institute. "Studies like this one allow us to set expectations for recovery, something we did not have in the 1990s, which is extremely important as we watch the Bay respond to the most recent die-off event."
Environment
2021
May 3, 2021
https://www.sciencedaily.com/releases/2021/05/210503135611.htm
Short-term exposure to air pollution may impede cognition; Aspirin could help
Exposure to air pollution, even over the course of just a few weeks, can impede mental performance, according to a new study led by researchers at Columbia University Mailman School of Public Health. However, these adverse effects were lessened in people taking nonsteroidal anti-inflammatory drugs (NSAIDs) like aspirin. The study is among the first to explore short-term air pollution exposures and the use of NSAIDs to mitigate their effects. The results are published in the journal
Examples of events that would increase someone's exposure to air pollution over the short term could include forest fires, smog, second-hand cigarette smoke, charcoal grills, and gridlock traffic. The researchers examined the relationship between exposures to fine particulate matter (PM2.5) and black carbon, a component of PM, and cognitive performance in 954 older white males from the Greater Boston Area enrolled in the Normative Aging Study. They also explored whether taking NSAIDs could modify these relationships. Cognitive performance was assessed using the Global Cognitive Function (GCF) and Mini-Mental State Examination (MMSE) scales. Air pollution levels were obtained from a site in Boston. Elevated average PM2.5 exposure over 28 days was associated with declines in GCF and MMSE scores. Men who took NSAIDs experienced fewer adverse short-term impacts of air pollution exposures on cognitive health than non-users, though there were no direct associations between recent NSAID use and cognitive performance. The researchers postulate that NSAIDs, especially aspirin, may moderate neuroinflammation or changes in blood flow to the brain triggered by inhaling pollution. "Despite regulations on emissions, short-term spikes in air pollution remain frequent and have the potential to impair health, including at levels below those usually considered hazardous," says senior author Andrea Baccarelli, MD, PhD, chair of the Department of Environmental Health Sciences. "Taking aspirin or other anti-inflammatory drugs appears to mitigate these effects, although policy changes to further restrict air pollution are still warranted." The link between long-term PM exposure and impaired cognitive performance in the aging population is well-established. Reported effects include reduced brain volume, cognitive decrements, and dementia development. Air pollution has also been associated with poorer cognition in children and adults.
Until now, however, little was known about the effects of short-term exposure to air pollution. The researchers say future studies should investigate the specific effects of chemical components of air pollution on cognitive performance, exposure sources in the environment, and whether cognitive impairments due to short-term air pollution exposures are transient or persistent. Randomized clinical trials of NSAID use are needed to validate their protective effects.
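The study's central question — whether NSAID use modifies the association between PM2.5 exposure and cognitive scores — is the kind of effect-modification analysis typically handled with an interaction term in a regression model. The sketch below illustrates the idea on simulated data; the variable ranges, coefficients, and model form are invented for illustration and are not the study's actual data or specification.

```python
import numpy as np

# Hypothetical illustration (not the study's data or model):
#   score ~ b0 + b1*pm25 + b2*nsaid + b3*(pm25*nsaid)
# A negative b1 reflects PM2.5-linked cognitive decline; a positive b3
# would indicate NSAID use attenuating that decline.
rng = np.random.default_rng(0)
n = 500
pm25 = rng.uniform(2, 20, n)        # 28-day mean PM2.5 (ug/m3), assumed range
nsaid = rng.integers(0, 2, n)       # 1 = NSAID user
score = 28 - 0.15 * pm25 + 0.10 * pm25 * nsaid + rng.normal(0, 0.5, n)

# Ordinary least squares with an interaction column
X = np.column_stack([np.ones(n), pm25, nsaid, pm25 * nsaid])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
b0, b1, b2, b3 = beta
print(f"PM2.5 slope among non-users: {b1:.2f}")
print(f"interaction (mitigation) term: {b3:.2f}")
```

With a large enough sample, the fitted `b1` recovers the simulated decline and `b3` the simulated mitigation, which is the signature the study reports qualitatively: adverse PM2.5 effects present, but smaller among NSAID users.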
Environment
2021
May 3, 2021
https://www.sciencedaily.com/releases/2021/05/210503093514.htm
Lead found in rural drinking water supplies in West Africa
Scientists are warning that drinking water supplies in parts of rural West Africa are being contaminated by lead-containing materials used in small community water systems such as boreholes with handpumps and public taps.
They analysed scrapings taken from the plumbing of 61 community water supply systems in Ghana, Mali and Niger. Eighty percent of the tested systems had at least one component that contained lead in excess of international guidance. Lead is released into the water when the components corrode. The study, by a research team from the University of Leeds, University of North Carolina at Chapel Hill and Boston University, also took samples of the water from those 61 water distribution systems, and from a further 200 taps and boreholes with handpumps. Sixty percent of the samples contained lead -- nine percent were at a level that exceeded World Health Organisation guidelines. The researchers found that lead contamination was significantly associated with the use of lead-containing components in the water systems. There is no known safe level of exposure to lead. It accumulates in the body, crosses the blood-brain barrier, and can cause irreversible damage to cognitive and neurological development, particularly in children and babies in the womb. Lead contamination in plumbing systems has been a recognised problem for decades. In urban areas served by large piped water systems it has been controlled by implementing corrosion control and using lead-free or low-lead components, enforced through testing and monitoring, building codes and regulations. Evidence shows there is still a problem in higher-income countries with lead contamination in water from private wells and small, piped systems.
The picture in low- and middle-income countries has been less well studied, although the problem is believed to be widespread -- and the potential implications for public health are much greater because of the global scale and the number of people who rely on small community water-supply systems. In Sub-Saharan Africa alone, it is estimated that 184 million people use boreholes with handpumps to access water and 112 million people use rural piped supplies. Jamie Bartram, Professor of Public Health and Environment at the University of Leeds, who supervised the latest research, said the evidence demonstrated the need for coordinated and urgent remedial action. "We have an opportunity for effective prevention and improved water supply practice world-wide. The required actions are overdue and unquestionably beneficial. The cost of ensuring that components are lead-safe is negligible," he said. "Using certified-safe components has multiple benefits, minimizing the risk from other hazardous contaminants. In contrast, delay carries further disease burden, increases the ultimate cost of protecting populations, and accumulates remediation burdens for future generations." The study, Occurrence of Lead and other Toxic Metals Derived from Drinking-Water Systems in three West African Countries, has now been published. The International Plumbing Code (IPC), from the International Code Council, recommends that lead in a plumbing component should not exceed 0.25 percent, based on weight. Of the 130 plumbing components tested by the research team, 82 percent had a lead level that exceeded the IPC recommended maximum. Brass components were the most problematic. The researchers say the use of brass in a water system increased the expected lead concentrations in drinking-water samples by a factor of 3.8. Where drinking water was contaminated, the mean value of lead in the water was approximately 8 micrograms in a litre of water -- where a microgram is one-millionth of a gram.
The individual values (the 95 percent confidence interval) ranged from 0.5 micrograms/litre of water to 15 micrograms/litre of water. The World Health Organisation guideline value is 10 micrograms/litre of water. Dr Michael Fisher, Assistant Professor at the University of North Carolina at Chapel Hill, led the study. He said: "It is clear that lead is present in most tested systems in this study and finds its way into drinking water at levels of concern." "These findings suggest several affordable, feasible, no-regrets opportunities to reduce or prevent this contamination from occurring. Collaboration among multiple stakeholders will be required to achieve rapid success." "Lead exposure from other sources like paint and petrol has been successfully phased out and lead can be successfully eliminated from drinking water systems through concerted and collaborative responses." The scientists say manufacturers could discourage the use of unsuitable components, for example through explicit labelling and engagement in professional networks. They write in the paper: "This contamination may be readily addressed through cost-effective preventive action, such as consistent use of components and materials compliant with IPC codes. Supply chain improvements with verification of compliance would reduce the availability and use of unsuitable components, such as leaded brass parts, in drinking-water systems." "Governments may develop or update regulations related to lead-free water system components and their implementation, including verification schemes." The research team say importers and wholesalers should ensure that product suitability and specifications are conspicuous and intelligible. Professional associations should disseminate knowledge and foster understanding and good practices throughout their memberships.
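As a quick illustration of the two thresholds quoted above — the WHO guideline of 10 micrograms of lead per litre of drinking water and the IPC limit of 0.25 percent lead by weight in a plumbing component — the snippet below screens a set of invented values against them. The sample numbers are hypothetical, not the study's measurements.

```python
# Screen hypothetical water samples and plumbing components against the
# two thresholds cited in the article. All sample values are invented.
WHO_LIMIT_UG_PER_L = 10.0    # WHO guideline for lead in drinking water
IPC_LIMIT_WT_PCT = 0.25      # IPC limit on lead content of a component

water_samples_ug_l = [0.5, 3.2, 8.0, 12.4, 15.0]
component_lead_wt_pct = [0.1, 0.3, 1.8, 2.5]   # e.g. leaded brass parts

over_who = [s for s in water_samples_ug_l if s > WHO_LIMIT_UG_PER_L]
over_ipc = [c for c in component_lead_wt_pct if c > IPC_LIMIT_WT_PCT]
print(f"{len(over_who)} of {len(water_samples_ug_l)} samples exceed the WHO guideline")
print(f"{len(over_ipc)} of {len(component_lead_wt_pct)} components exceed the IPC limit")
```

The study applied exactly this kind of screening at scale: 9 percent of water samples exceeded the WHO value, and 82 percent of the 130 tested components exceeded the IPC limit.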
Several governments are already taking action. The United Nations Sustainable Development Goal SDG 6 states that everybody should have access to safe and affordable drinking water. The research was funded by World Vision, an NGO that works to reduce poverty affecting children.
Environment
2021
April 30, 2021
https://www.sciencedaily.com/releases/2021/04/210430135438.htm
Wildfire smoke trends worsening for Western US
From the Pacific Northwest to the Rocky Mountains, summers in the West are marked by wildfires and smoke. New research from the University of Utah ties the worsening trend of extreme poor air quality events in Western regions to wildfire activity, with growing trends of smoke impacting air quality clear into September. The work is published in
"In a big picture sense, we can expect it to get worse," says Kai Wilmot, lead author of the study and doctoral student in the Department of Atmospheric Sciences. "We're going to see more fire area burned in the Western U.S. between now and 2050. If we extrapolate our trends forward, it seems to indicate that a lot of urban centers are going to have trouble in meeting air quality standards in as little as 15 years." Many of the West's inhabitants have seen smoky summer skies in recent years. Last year, dramatic images of an orange-tinted San Francisco Bay Area called attention to the far-reaching problem of wildfire smoke. Wilmot, a native of the Pacific Northwest, has seen the smoke as well and, with his colleagues, looked at trends of extreme air quality events in the West from 2000 to 2019 to see if they correlated with summer wildfires. Using air measurements of PM2.5 over the years studied, the researchers noticed that the mean air quality was worsening in the Pacific Northwest in the average August when sensors indicated wildfire smoke events. "That's pretty dramatic," Wilmot says, "that extreme events are strong enough to pull the mean up so that we're seeing an overall increase in particulate matter during August across much of the Pacific Northwest and portions of California. The Pacific Northwest seems like it's just really getting the brunt of it." The reason for that, he says, is that the regions around the Pacific Northwest, in British Columbia and Northern California, both experience wildfires around August. The mountainous Pacific Northwest, Wilmot says, sits in the middle. But by September, the researchers found, wildfire activity slows in British Columbia and shifts to the Rocky Mountains. The smoke shifts too -- the researchers saw emerging trends correlating wildfire smoke with declines in September air quality in Wyoming and Montana. What about Utah?
The study findings show that the magnitude and significance of air quality trends increases as you go from the southern states of Arizona and New Mexico toward the Pacific Northwest. In Utah, Wilmot says, air quality trends are near the edge of statistical significance, with evidence for impact from wildfires, but evidence that's less robust than in the Pacific Northwest and California. "Thinking about events like the smoke transport from fires in the Bay Area this past summer," Wilmot says, "I would not be surprised to see trends in Utah become increasingly convincing with additional data." Other researchers in other studies have suggested that the future will bring more fire areas burned in the Western U.S., with an accompanying increase in wildfire smoke exposure throughout the West and the impacts of that smoke on human health. Wilmot notes that the trends the researchers see in the Pacific Northwest in August are "pretty robust," while the September trends in Montana and Wyoming are still "emerging." "I think the concern is that, given more time, those emerging trends are going to start looking a lot more like what we're seeing in August," he says. "I hope that's not the case, but it seems entirely within the realm of possibility." His next step is to develop simulation models to more precisely link wildfire emissions in urban centers to smoke source regions. "The big picture," he says, "is aiming to help forest management in terms of identifying wildfire emissions hotspots that are particularly relevant to air quality in the Western U.S., such that if we had funding to spend on some sort of intervention to limit wildfire emissions, we would know where to allocate those funds first to get the most out of it."
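Wilmot's "extrapolate our trends forward" remark describes a simple linear projection: fit a trend to yearly August-mean PM2.5 and extend it into the future. A minimal sketch of that idea, with invented numbers rather than the study's data:

```python
import numpy as np

# Fit a linear trend to yearly August-mean PM2.5 and project it forward.
# The series below is an assumed worsening trend for illustration only.
years = np.arange(2000, 2020)
aug_pm25 = 8.0 + 0.25 * (years - 2000)   # invented August means (ug/m3)

slope, intercept = np.polyfit(years, aug_pm25, 1)
projected_2035 = slope * 2035 + intercept
print(f"trend: {slope:.2f} ug/m3 per year; 2035 projection: {projected_2035:.1f} ug/m3")
```

A real analysis would fit noisy station data and report confidence bounds on the slope, which is why the article distinguishes "pretty robust" trends (Pacific Northwest in August) from "emerging" ones (Montana and Wyoming in September).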
Environment
2021
April 30, 2021
https://www.sciencedaily.com/releases/2021/04/210430165906.htm
Poorer communities hardest hit by toxic pollution incidents
Toxic pollution hits poorer populations hardest as firms experience more pollutant releases and spend less money on waste management in areas with lower average incomes.
Research from Lancaster University Management School and Texas Tech University studied potentially polluting firms across Texas, and found a correlation between lower income locations and the probability of potentially polluting firms choosing to locate there. The team's data, from the US Environmental Protection Agency's Toxic Release Inventory, also showed the relative frequency of toxic releases decreased as household income rose. "We looked both at whether firms made decisions on their location based on demographics -- particularly income -- of the local areas, and also whether firms made different choices on limiting the possibilities of toxic release through waste management based on those same statistics," said co-author Professor Dakshina De Silva, of Lancaster University's Department of Economics. "Firms reduced their releases and increased spending on waste management in higher income areas -- evidenced by a greater number of waste management services -- while lower income areas were disproportionately subject to toxic releases." "The patterns we observed lead us to conclude that, at least partially, potentially polluting firms seek to maximise their expected profits and recognise the financial risk associated with a release in different areas." "Releasing toxic chemicals in the environment is costly for firms because they will have to implement clean-up programmes, pay penalties and compensate local residents for damages. Higher incomes -- and associated property values -- increase the costs as damages are linked to reduced property values and lost income due to limitations on working." "Potentially polluting firms seeking to maximise profits will be concerned with the liability of toxic releases and the threat such releases pose to their financial results," said co-author Dr Anita Schiller, of Lancaster University.
"They will therefore take into account the demographics of an area when valuing the legal costs and compensation they would have to pay out in the event of a toxic release, and balance this with the cost of reducing the likelihood of such a release." "In areas where there is a higher income and higher property prices, compensation levels will go up in the event of an incident, with the possibility of collective action by residents and businesses also increasing. Potentially polluting firms must also consider the financial risk of a release and the costs of managing such an incident's likelihood." The researchers found that levels of toxic release had declined steadily since 2000 -- by 34% between 2000 and 2006 and by a further 21% since 2006 -- but suggested this drop was not uniform, with potentially polluting firms reducing releases through waste management where local opposition to their presence was the highest. Co-author Dr Aurelie Slechten added: "Combined with our finding that economic activity and local income are linked with spending on waste management and with levels of toxic releases, this implies that the group most affected by releases will be poorer populations in industrial areas." "Without further action, the disparity in exposure to toxic releases faced by certain groups will not be reduced by simply requiring that firms report their releases. Serious thought needs to be given to regulation on compensation schemes."
Environment
2021
April 30, 2021
https://www.sciencedaily.com/releases/2021/04/210430093149.htm
Defect that limits hybrid perovskites solar-cell performance identified
Researchers in the materials department in UC Santa Barbara's College of Engineering have uncovered a major cause of limitations to efficiency in a new generation of solar cells.
Various possible defects in the lattice of what are known as hybrid perovskites had previously been considered as the potential cause of such limitations, but it was assumed that the organic molecules (the components responsible for the "hybrid" moniker) would remain intact. Cutting-edge computations have now revealed that missing hydrogen atoms in these molecules can cause massive efficiency losses. The findings are published in a paper titled "Minimizing hydrogen vacancies to enable highly efficient hybrid perovskites," in the April 29 issue of the journal. The remarkable photovoltaic performance of hybrid perovskites has created a great deal of excitement, given their potential to advance solar-cell technology. "Hybrid" refers to the embedding of organic molecules in an inorganic perovskite lattice, which has a crystal structure similar to that of the perovskite mineral (calcium titanium oxide). The materials exhibit power-conversion efficiencies rivaling that of silicon, but are much cheaper to produce. Defects in the perovskite crystalline lattice, however, are known to create unwanted energy dissipation in the form of heat, which limits efficiency. A number of research teams have been studying such defects, among them the group of UCSB materials professor Chris Van de Walle, which recently achieved a breakthrough by discovering a detrimental defect in a place no one had looked before: on the organic molecule. "Methylammonium lead iodide is the prototypical hybrid perovskite," explained Xie Zhang, lead researcher on the project. "We found that it is surprisingly easy to break one of the bonds and remove a hydrogen atom on the methylammonium molecule. The resulting 'hydrogen vacancy' then acts as a sink for the electric charges that move through the crystal after being generated by light falling on the solar cell.
When these charges get caught at the vacancy, they can no longer do useful work, such as charging a battery or powering a motor, hence the loss in efficiency." The research was enabled by advanced computational techniques developed by the Van de Walle group. Such state-of-the-art calculations provide detailed information about the quantum-mechanical behavior of electrons in the material. Mark Turiansky, a senior graduate student in Van de Walle's group who was involved in the research, helped build sophisticated approaches for turning this information into quantitative values for rates of charge carrier trapping. "Our group has created powerful methods for determining which processes cause efficiency loss," Turiansky said, "and it is gratifying to see the approach provide such valuable insights for an important class of materials." "The computations act as a theoretical microscope that allows us to peer into the material with much higher resolution than can be achieved experimentally," Van de Walle explained. "They also form a basis for rational materials design. Through trial and error, it has been found that perovskites in which the methylammonium molecule is replaced by formamidinium exhibit better performance. We are now able to attribute this improvement to the fact that hydrogen defects form less readily in the formamidinium compound." "This insight provides a clear rationale for the empirically established wisdom that formamidinium is essential for realizing high-efficiency solar cells," he added. "Based on these fundamental insights, the scientists who fabricate the materials can develop strategies to suppress the harmful defects, enabling additional efficiency enhancements in solar cells." Funding for this research was provided by the Department of Energy's Office of Science and Office of Basic Energy Sciences. The computations were performed at the National Energy Research Scientific Computing Center.
Environment
2021
April 29, 2021
https://www.sciencedaily.com/releases/2021/04/210429141943.htm
Exploiting plants' ability to 'tell the time' to make food production more sustainable
Cambridge plant scientists say circadian clock genes, which enable plants to measure daily and seasonal rhythms, should be targeted in agriculture and crop breeding for higher yields and more sustainable farming.
Like humans, plants have an 'internal clock' that monitors the rhythms of their environment. The authors of a study published today say that now that the genetic basis of this circadian system is well understood and there are improved genetic tools to modify it, the clock should be exploited in agriculture -- a process they describe as 'chronoculture' -- to contribute to global food security. "We live on a rotating planet, and that has a huge impact on our biology -- and on the biology of plants. We've discovered that plants grow much better when their internal clock is matched to the environment they grow in," said Professor Alex Webb, Chair of Cell Signalling in the University of Cambridge's Department of Plant Sciences and senior author of the report. A plant's circadian clock plays an important role in regulating many of the functions that affect yield, including flowering time, photosynthesis, and water use. The genes controlling the circadian rhythm are similar in all major crop plants -- making them a potential target for crop breeders wishing to gain more control over these functions. Chronoculture could also be applied by adapting crop growing practices to the optimal time of day, to reduce the resources required. The simplest and easiest approach, say the scientists, would be to use knowledge of a crop's internal clock to apply water, herbicides or pesticides at the most effective time of day or night. Low-cost technologies including drones and sensors could collect round-the-clock information about plant crop growth and health. Farmers could then receive advice about the best time to apply treatments to their specific crop, for their precise location and weather conditions. "We know from lab experiments that watering plants or applying pesticides can be more effective at certain times of day, meaning farmers could use less of these resources.
This is a simple win that could save money and contribute to sustainability," says Webb. He added: "Using water more efficiently is an important sustainability goal for agriculture." Webb says that indoor 'vertical farming' could also be improved using chronoculture. The approach, mostly used for leafy greens at present, grows crops under highly controlled light and temperature conditions but can also be very energy intensive. With knowledge of the plants' internal clock and the ability to change it through genetic modification, the lighting and heating cycles could be matched to the plant for highly efficient growth. "In vertical farming, chronoculture could give total control over the crop. We could breed specific crop plants with internal clocks suited to growing indoors, and optimise the light and temperature cycles for them," says Webb. A third potential application of chronoculture is post-harvest, when plants slowly deteriorate and continue to be eaten by pests. There is good evidence that pest damage can be reduced by maintaining the internal rhythms of the harvested plants. "Plants' responses to pests are optimised -- they're most resistant to pests at the time of day the pests are active," says Webb. "So just a simple light in the refrigerated lorry going on and off to mimic the day/night cycle would use the plants' internal clock to help improve storage and reduce waste." The researchers say that in selecting plants with particular traits such as late flowering time for higher yield, crop breeders have already been unwittingly selecting for the plants with the most suitable internal clock. New understanding of the genes involved in the clock could make this type of breeding much more targeted and effective. Webb says there are many opportunities for chronoculture to make food production more sustainable. The specifics would be different for every location and crop, and this is where more research is now needed.
He is confident that the approach can form part of the solution to feeding our growing population sustainably. It has been estimated that we will need to produce more food in the next 35 years than has ever been produced in human history, given the projected increases in global population and the change in diets as incomes rise. A similar idea is now being applied in human medicine: 'chronomedicine' is finding that drugs are more effective when taken at a specific time of day.
Environment
2,021
April 29, 2021
https://www.sciencedaily.com/releases/2021/04/210429123407.htm
New optical hydrogen sensors eliminate risk of sparking
Hydrogen as a clean, renewable alternative to fossil fuels is part of a sustainable-energy future, and very much already here. However, lingering concerns about flammability have limited widespread use of hydrogen as a power source for electric vehicles. Previous advances have minimized the risk, but new research from the University of Georgia now puts that risk in the rearview mirror.
Hydrogen vehicles can refuel much more quickly and go farther without refueling than today's electric vehicles, which use battery power. But one of the final hurdles to hydrogen power is securing a safe method for detecting hydrogen leaks. A new study describes an optical alternative. "Right now, most commercial hydrogen sensors detect the change of an electronic signal in active materials upon interaction with hydrogen gas, which can potentially induce hydrogen gas ignition by electrical sparking," said Tho Nguyen, associate professor of physics in the Franklin College of Arts and Sciences, a co-principal investigator on the project. "Our spark-free optical-based hydrogen sensors detect the presence of hydrogen without electronics, making the process much safer." Hydrogen power has many more applications than powering electric vehicles, and flammability-mitigating technologies are critical. Robust sensors for hydrogen leak detection and concentration control are important in all stages of the hydrogen-based economy, including production, distribution, storage and utilization in petroleum processing and production, fertilizer, metallurgical applications, electronics, environmental sciences, and in health and safety-related fields. The three key problems associated with hydrogen sensors are response time, sensitivity, and cost. Current mainstream technology for H2 optical sensors requires an expensive monochromator to record a spectrum, followed by analyzing a spectral shift comparison. "With our intensity-based optical nano sensors, we go from detection of hydrogen at around 100 parts-per-million to 2 parts-per-million, at a cost of a few dollars for a sensing chip," Tho said. "Our response time of 0.8 seconds is 20% faster than the best available optical device reported in the literature right now." The new optical device relies on the nanofabrication of a nanosphere template covered with a palladium-cobalt alloy layer. Any hydrogen present is quickly absorbed, then detected by an LED.
A silicon detector records the intensity of the light transmitted. "All metals tend to absorb hydrogen, but by finding the suitable elements with a right balance in the alloy and engineering the nanostructure to amplify subtle changes in light transmission after hydrogen absorption, we were able to set a new benchmark for how fast and sensitive these sensors can be," said George Larsen, a senior scientist at Savannah River National Laboratory and co-principal investigator on the project. "All while keeping the sensor platform as simple as possible." The research is primarily supported by the U.S. Department of Energy and the SRNL's Laboratory Directed Research and Development Program.
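The device infers hydrogen concentration from how much light the palladium-cobalt film transmits. The sketch below shows the general shape of such an intensity-to-concentration conversion; the calibration model and the constant `k` are invented for illustration, since a real sensor's curve comes from calibration against reference gas mixtures.

```python
import math

# Toy intensity-based readout: map relative transmitted intensity I/I0
# to an H2 concentration using an assumed empirical calibration curve.
# The model form and k are hypothetical, not the device's actual curve.
def h2_ppm(i_rel, k=0.05):
    """Invert the assumed calibration I/I0 = exp(-k * sqrt(ppm))."""
    return (math.log(1.0 / i_rel) / k) ** 2

# Lower transmitted intensity implies more absorbed hydrogen:
print(round(h2_ppm(0.93), 1), "ppm")
```

The practical point the article makes is that this kind of intensity readout needs only an LED and a silicon photodetector, rather than the expensive monochromator and spectral-shift analysis used by mainstream optical H2 sensors.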
Environment
2,021
April 29, 2021
https://www.sciencedaily.com/releases/2021/04/210429123402.htm
Combining solar panels and lamb grazing increases land productivity, study finds
Land productivity could be greatly increased by combining sheep grazing and solar energy production on the same land, according to new research by Oregon State University scientists.
This is believed to be the first study to investigate livestock production under agrivoltaic systems, where solar energy production is combined with agricultural production, such as planting agricultural crops or grazing animals. The researchers compared lamb growth and pasture production in pastures with solar panels and traditional open pastures. They found less overall but higher quality forage in the solar pastures and that lambs raised in each pasture type gained similar amounts of weight. The solar panels, of course, provide value in terms of energy production, which increases the overall productivity of the land. Solar panels also benefit the welfare of the lambs by providing shade, which allows the animals to preserve energy. Also, lamb grazing alleviates the need to manage plant growth under the solar panels through herbicides or regular mowing, which require extra labor and costs. "The results from the study support the benefits of agrivoltaics as a sustainable agricultural system," said Alyssa Andrew, a master's student at Oregon State who is the lead author of the paper published in Frontiers in Sustainable Food Systems. Solar photovoltaic installation in the U.S. has increased by an average of 48% per year over the past decade, and current capacity is expected to double again over the next five years, the researchers say. Past research has found that grasslands and croplands in temperate regions are the best places to install solar panels for maximum energy production. However, energy production in photovoltaic systems requires large areas of land, potentially putting it in competition with agricultural uses. Agrivoltaics looks to defuse that competition by measuring the economic value of energy production and agricultural use of the same land.
Past research has focused on crops and solar panels and found that some crops, particularly types that like shade, can be more productive in combination with solar panels. Another recent Oregon State study found that shade provided by solar panels increased the abundance of flowers under the panels and delayed the timing of their bloom, both findings that could aid the agricultural community. The just-published study with lambs and solar panels was carried out in 2019 and 2020 at Oregon State's campus in Corvallis. "The overall return is about the same, and that doesn't take into account the energy the solar panels are producing," said Serkan Ates, an assistant professor in Oregon State's Department of Animal and Rangeland Sciences and a co-author of the paper. "And if we designed the system to maximize production we would likely get even better numbers." Andrew is now working on a follow-up to this study in which she is quantifying the forage and lamb production from three different pasture types under solar panels. In addition to Andrew and Ates, several colleagues from the Oregon State College of Agricultural Sciences co-authored the paper: Mary Smallman of the Department of Animal and Rangeland Sciences and Chad Higgins and Maggie Graham of the Department of Biological and Ecological Engineering. The Agricultural Research Foundation at Oregon State funded the research.
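A common way to quantify the "increased overall productivity" claim in agrivoltaics is the land equivalent ratio (LER): how much separately farmed land would be needed to match the combined system's two outputs. The ratios below are illustrative assumptions, not figures from the study.

```python
# Land Equivalent Ratio (LER) for an agrivoltaic system: the sum of each
# output in the combined system relative to growing it alone on dedicated
# land. LER > 1 means the dual-use land out-produces separate parcels.
def ler(agri_combined, agri_alone, pv_combined, pv_alone):
    return agri_combined / agri_alone + pv_combined / pv_alone

# Assumed example: lambs gain about equal weight under panels (ratio ~1.0)
# while the panels still deliver 90% of a dedicated solar farm's output.
print(round(ler(1.0, 1.0, 0.9, 1.0), 2))
```

Under those assumed ratios the combined system yields an LER near 1.9, i.e. almost twice the productivity per unit of land, which is the logic behind the article's claim that panel energy output raises overall land productivity even when forage quantity drops.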
Environment
2021
April 29, 2021
https://www.sciencedaily.com/releases/2021/04/210429095151.htm
Battery parts can be recycled without crushing or melting
The proliferation of electric cars, smartphones, and portable devices is leading to an estimated 25 percent increase globally in the manufacturing of rechargeable batteries each year. Many raw materials used in the batteries, such as cobalt, may soon be in short supply. The European Commission is preparing a new battery decree, which would require the recycling of 95 percent of the cobalt in batteries. Yet existing battery recycling methods are far from perfect.
Researchers at Aalto University have now discovered that electrodes in lithium batteries containing cobalt can be reused as is after being newly saturated with lithium. In comparison to traditional recycling, which typically extracts metals from crushed batteries by melting or dissolving them, the new process saves valuable raw materials, and likely also energy. 'In our earlier study of how lithium cobalt oxide batteries age, we noticed that one of the main causes of battery deterioration is the depletion of lithium in the electrode material. The structures can nevertheless remain relatively stable, so we wanted to see if they can be reused,' explains Professor Tanja Kallio at Aalto University. Rechargeable lithium-ion batteries have two electrodes between which electrically charged particles move. Lithium cobalt oxide is used in one electrode and, in most of the batteries, the other is made of carbon and copper. In traditional battery recycling methods, some of the batteries' raw materials are lost and lithium cobalt oxide turns into other cobalt compounds, which require a lengthy chemical refinement process to turn them back into electrode material. The new method sidesteps this painstaking process: by replenishing the spent lithium in the electrode through an electrolysis process commonly used in industry, the cobalt compound can be directly reused. The results show that the performance of electrodes newly saturated with lithium is almost as good as that of electrodes made of new material. Kallio believes that with further development the method could also work on an industrial scale. 'By reusing the structures of batteries we can avoid a lot of the labour that is common in recycling and potentially save energy at the same time. We believe that the method could help companies that are developing industrial recycling,' Kallio says. The researchers next aim to see if the same method could also be used with the nickel-based batteries of electric cars.
Environment
2021
April 29, 2021
https://www.sciencedaily.com/releases/2021/04/210429090212.htm
Social media and science show how ship's plastic cargo dispersed from Florida to Norway
A ship's container lost overboard in the North Atlantic has resulted in printer cartridges washing up everywhere from the coast of Florida to northern Norway, a new study has shown.
It has also resulted in the items weathering to form microplastics that are contaminated with a range of metals such as titanium, iron and copper. The spillage is thought to have happened around 1,500 km east of New York, in January 2014, with the first beached cartridges reported along the coastline of the Azores in September the same year. Since then, around 1,500 more have been reported on social media, with the greatest quantities along the coastlines of the UK and Ireland but also as far south as Cape Verde and north to the edge of the Arctic Circle. The study was conducted by the University of Plymouth and the Lost at Sea Project, who have previously worked together on research suggesting LEGO bricks could survive in the ocean for up to 1,300 years. For this new research, they combined sightings data reported by members of the public with oceanographic modelling tools to show how the cartridges reached their resting places. Some were carried by the Azores and Canary currents around the North Atlantic Gyre, while others were transported northwards with the North Atlantic and Norwegian currents. Through microscopic and X-ray fluorescence analyses, the researchers also revealed a high degree of exterior weathering that resulted in the cartridge surfaces becoming chalky and brittle. This has resulted in the formation of microplastics rich in titanium, the chemical fouling of interior ink foams by iron oxides, and, in some cases, the presence of an electronic chip containing copper, gold and brominated compounds. Significantly, the study's authors say, the latter characteristic renders the cartridges electrical and electronic waste and means the finds are not governed by current, conventional regulations on plastic cargo lost at sea. Lead author Dr Andrew Turner, Associate Professor (Reader) in Environmental Sciences at the University of Plymouth, said: "Cargo spillages are not common, but estimates suggest there could be several thousand containers lost at sea every year. They can cause harm to the seabed but, once ruptured, their contents can have an impact both where they are lost and -- as shown in this study -- much more widely. This research has also shown once again how plastics not designed to be exposed to nature can break down and become a source of microplastics in the environment. It also calls into question the relevance and robustness of current instruments and conventions that deal with plastic waste and its accidental loss at sea." Tracey Williams, founder of the Cornwall-based Lost at Sea Project, added: "This study also highlights the potential usefulness of social media-led citizen science to marine research. Over many years, members of the public have helped us to show the amount of plastic in our seas and on our beaches. It is something people care passionately about and are committed to trying to solve."
Environment
2021
April 29, 2021
https://www.sciencedaily.com/releases/2021/04/210428135531.htm
In wild soil, predatory bacteria grow faster than their prey
Predatory bacteria -- bacteria that eat other bacteria -- grow faster and consume more resources than non-predators in the same soil, according to a new study out this week from Northern Arizona University. These active predators, which use wolfpack-like behavior, enzymes, and cytoskeletal 'fangs' to hunt and feast on other bacteria, wield important power in determining where soil nutrients go. The results of the study were published this week.
Like every other life form on earth, bacteria belong to intricate food webs in which organisms are connected to one another by whom they consume and how. In macro webs, ecologists have long understood that when resources like grass and shrubs are added to lower levels of the web, predators at the top, such as wolves, often benefit. The research team, led by Bruce Hungate and researchers from Northern Arizona University and Lawrence Livermore National Laboratory, wanted to test whether the same was true in the microbial food webs found in wild soil. "We've known predation plays a role in maintaining soil health, but we didn't appreciate how significant predator bacteria are to these ecosystems before now," said Hungate, who directs the Center for Ecosystem Science and Society at Northern Arizona University. To understand whom and how much predator bacteria were consuming, the research team assembled a big picture using dozens of smaller data "snapshots": 82 sets of data from 15 sites in a range of ecosystems. The team used information about how bacteria behave in culture to categorize bacteria as obligate or facultative predators. About seven percent of all bacteria in the meta-analysis were identified as predators, and the majority of those were facultative, or omnivorous. Obligate predator bacteria like Bdellovibrionales and Vampirovibrionales grew 36 percent faster and took up carbon 211 percent faster than non-predators did. When the soil received a boost of carbon, predator bacteria used it to grow faster than other types. Researchers saw these effects in the omnivorous bacteria as well, though the differences were less profound. All the experiments were conducted using a state-of-the-art technique called quantitative Stable Isotope Probing, or qSIP. Researchers used labeled isotopes, which act a little like molecular hashtags, to track who is active and taking up nutrients in the soil.
By sequencing the DNA in a soil sample and looking for these labels, the team could see who was growing and eating whom at the level of bacterial taxa. Soil ecosystems contain more carbon than is stored in all the plants on Earth, so understanding how carbon and other elements move among soil organisms is crucial to predicting future climate change. And because bacteria are so abundant in soil, they have an enormous role in how nutrients are stored there or lost. And learning more about how predator bacteria act as 'antibiotics' could have therapeutic implications down the road. "Until now, predatory bacteria have not been a part of that soil story," said Hungate. "But this study suggests that they are important characters who have a significant role determining the fate of carbon and other elements. These findings motivate us to take a deeper look at predation as a process." In addition to Hungate, the other NAU authors are Jane Marks, professor in the Department of Biological Sciences; Egbert Schwartz, professor in the Department of Biological Sciences; graduate research assistant Pete Chuckran; Paul Dijkstra, research professor in the Department of Biological Sciences; graduate student Megan Foley; Michaela Hayer, research associate for Ecoss; Ben Koch, senior research scientist for Ecoss; Michelle Mack, professor in the Department of Biological Sciences; Rebecca Mau, research associate, Pathogen & Microbiome Institute; Samantha Miller, research associate for Ecoss; Jeff Propster, research assistant for Ecoss; graduate research assistant Alicia Purcell; and former NAU researcher Bram Stone. The research was supported by the Department of Energy's Office of Biological and Environmental Research Genomic Sciences Program, and a Lawrence Fellow award from the Lawrence Livermore National Laboratory.
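The density-shift logic behind qSIP can be sketched in a few lines. This is an illustrative simplification of the idea, not the authors' pipeline: the calibration constant, buoyant densities, and incubation time below are all hypothetical values chosen for the example.

```python
import math

def excess_atom_fraction(density_labeled, density_unlabeled,
                         max_density_shift=0.0245):
    """Fraction of heavy isotope a taxon's DNA incorporated, inferred from
    the shift in buoyant density (g/mL). max_density_shift is a hypothetical
    calibration constant corresponding to fully labeled DNA."""
    shift = density_labeled - density_unlabeled
    return max(0.0, min(1.0, shift / max_density_shift))

def growth_rate(eaf, incubation_days):
    """Per-day growth estimate, assuming exponential labeling kinetics."""
    if eaf >= 1.0:
        return float("inf")
    return -math.log(1.0 - eaf) / incubation_days

# A taxon whose DNA density shifted from 1.700 to 1.710 g/mL over 7 days
# would be scored as actively growing:
eaf = excess_atom_fraction(1.710, 1.700)
rate = growth_rate(eaf, 7)
```

Comparing such per-taxon rates between predators and non-predators is, in spirit, how one arrives at statements like "grew 36 percent faster."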
Environment
2021
April 28, 2021
https://www.sciencedaily.com/releases/2021/04/210428140909.htm
How to get salt out of water: Make it self-eject
About a quarter of a percent of the entire gross domestic product of industrialized countries is estimated to be lost through a single technical issue: the fouling of heat exchanger surfaces by salts and other dissolved minerals. This fouling lowers the efficiency of multiple industrial processes and often requires expensive countermeasures such as water pretreatment. Now, findings from MIT could lead to a new way of reducing such fouling, and potentially even enable turning that deleterious process into a productive one that can yield saleable products.
The findings are the result of years of work by recent MIT graduates Samantha McBride PhD '20 and Henri-Louis Girard PhD '20 with professor of mechanical engineering Kripa Varanasi, reported in a recently published paper. When the researchers began studying the way salts crystallize on such surfaces, they found that the precipitating salt would initially form a partial spherical shell around a droplet. Unexpectedly, this shell would then suddenly rise on a set of spindly leg-like extensions grown during evaporation. The process repeatedly produced multilegged shapes, resembling elephants and other animals, and even sci-fi droids. The researchers dubbed these formations "crystal critters" in the title of their paper. After many experiments and detailed analysis, the team determined the mechanism that was producing these leg-like protrusions. They also showed how the protrusions varied depending on temperature and the nature of the hydrophobic surface, which was produced by creating a nanoscale pattern of low ridges. They found that the narrow legs holding up these critter-like forms continue to grow upward from the bottom, as the salty water flows downward through the straw-like legs and precipitates out at the bottom, somewhat like a growing icicle, only balanced on its tip. Eventually the legs become so long they are unable to support the critter's weight, and the blob of salt crystal breaks off and falls or is swept away. The work was motivated by the desire to limit or prevent the formation of scaling on surfaces, including inside pipes where such scaling can lead to blockages, Varanasi says. "Samantha's experiment showed this interesting effect where the scale pretty much just pops off by itself," he says. "These legs are hollow tubes, and the liquid is funneled down through these tubes. Once it hits the bottom and evaporates, it forms new crystals that continuously increase the length of the tube," McBride says.
"In the end, you have very, very limited contact between the substrate and the crystal, to the point where these are going to just roll away on their own." McBride recalls that in doing the initial experiments as part of her doctoral thesis work, "we definitely suspected that this particular surface would work well for eliminating sodium chloride adhesion, but we didn't know that a consequence of preventing that adhesion would be the ejection of the entire thing" from the surface. One key, she found, was the exact scale of the patterns on the surface. While many different length scales of patterning can yield hydrophobic surfaces, only patterns at the nanometer scale achieve this self-ejecting effect. "When you evaporate a drop of salt water on a superhydrophobic surface, usually what happens is those crystals start getting inside of the texture and just form a globe, and they don't end up lifting off," McBride says. "So it's something very specific about the texture and the length scale that we're looking at here that allows this effect to occur." This self-ejecting process, based simply on evaporation from a surface whose texture can be easily produced by etching, abrasion, or coating, could be a boon for a wide variety of processes. All kinds of metal structures in a marine environment or exposed to seawater suffer from scaling and corrosion. The findings may also enable new methods for investigating the mechanisms of scaling and corrosion, the researchers say. By varying the amount of heat along the surface, it's even possible to get the crystal formations to roll along in a specific direction, the researchers found. The higher the temperature, the faster the growth and liftoff of these forms takes place, minimizing the amount of time the crystals block the surface. Heat exchangers are used in a wide variety of different processes, and their efficiency is strongly affected by any surface fouling.
Those losses alone, Varanasi says, equal a quarter of a percent of the GDP of the U.S. and other industrialized nations. But fouling is also a major factor in many other areas. It affects pipes in water distribution systems, geothermal wells, agricultural settings, desalination plants, and a variety of renewable energy systems and carbon dioxide conversion methods. This method, Varanasi says, might even enable the use of untreated salty water in some processes where that would not be practical otherwise, such as in some industrial cooling systems. Further, in some situations the recovered salts and other minerals could be salable products. While the initial experiments were done with ordinary sodium chloride, other kinds of salts or minerals are expected to produce similar effects, and the researchers are continuing to explore the extension of this process to other kinds of solutions. Because the methods for making the textures to produce a hydrophobic surface are already well-developed, Varanasi says, implementing this process at large industrial scale should be relatively rapid, and could enable the use of salty or brackish water for cooling systems that would otherwise require the use of valuable and often limited fresh water. For example, in the U.S. alone, a trillion gallons of fresh water are used per year for cooling. A typical 600-megawatt power plant consumes about a billion gallons of water per year, which could be enough to serve 100,000 people. That means that using sea water for cooling where possible could help to alleviate a fresh-water scarcity problem. The work was supported by Equinor through the MIT Energy Initiative, the MIT Martin Fellowship Program, and the National Science Foundation.
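The cooling-water figures above can be sanity-checked with simple arithmetic. All input numbers come from the text; the per-person daily figure is derived here rather than stated in the article.

```python
# Figures from the article:
plant_gal_per_year = 1_000_000_000        # one typical 600 MW power plant
people_served = 100_000                   # population that water could serve
national_cooling_gal = 1_000_000_000_000  # ~1 trillion gal/yr for US cooling

# Implied per-capita use: about 27 gallons per person per day,
# a plausible domestic-consumption figure.
gal_per_person_per_day = plant_gal_per_year / people_served / 365

# The national cooling total corresponds to roughly 1,000 such plants.
equivalent_plants = national_cooling_gal / plant_gal_per_year
```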
Environment
2021
April 28, 2021
https://www.sciencedaily.com/releases/2021/04/210428140906.htm
People of color hardest hit by air pollution from nearly all sources
Various studies show that people of color are disproportionately exposed to air pollution in the United States. However, it was unclear whether this unequal exposure is due mainly to a few types of emission sources or whether the causes are more systemic. A new study that models people's exposure to air pollution -- resolved by race-ethnicity and income level -- shows that exposure disparities among people of color and white people are driven by nearly all, rather than only a few, emission source types.
The study was led by University of Illinois Urbana-Champaign civil and environmental engineering professor Christopher Tessum. "Community organizations have been experiencing and advocating against environmental injustice for decades," Tessum said. "Our study contributes to an already extensive body of evidence with the new finding that there is no single air pollution source, or a small number of sources, that account for this disparity. Instead, the disparity is caused by almost all of the sources." The team used an air quality model to analyze Environmental Protection Agency data for more than 5,000 emission source types, including industry, agriculture, coal electric utilities, light- and heavy-duty gasoline vehicles, diesel vehicles, off-road vehicles and equipment, construction, residential sources, road dust and other miscellaneous small emissions sources. Each source type studied contributes to fine particle air pollution, defined as particles 2.5 micrometers or less in diameter, the study reports. To identify patterns of air pollution exposure associated with race-ethnicity and income, the researchers combined the spatial air pollution patterns predicted in their air quality model with residential population counts from the U.S. Census Bureau to identify differences in exposure by race-ethnicity and income. The researchers found that for the 2014 U.S. total population average, fine particle air pollution exposures from the majority of source types are higher than average for people of color and lower than average for white people. The data indicate that white people are exposed to lower-than-average concentrations from emissions source types that, when combined, cause 60% of their total exposure, the study reports. Conversely, people of color experience greater-than-average exposures from source types that, when combined, cause 75% of their total exposure.
This disparity exists at the country, state and city level and for people within all income levels. "We find that nearly all emission sectors cause disproportionate exposures for people of color on average," said co-author Julian Marshall, a professor of civil and environmental engineering at the University of Washington. "The inequities we report are a result of systemic racism: Over time, people of color and pollution have been pushed together, not just in a few cases but for nearly all types of emissions." The researchers found that air pollution disparities arise from a more systemic set of causes than previously understood. "We were struck by how these systemic disparities exist for people of color not only in certain neighborhoods but at every spatial scale in the U.S.," said co-author Joshua Apte, a professor of civil and environmental engineering at the University of California, Berkeley. "The problem exists within urban and rural areas, many distinct U.S. regions, and for people living within almost all American cities." "This new study adds context to our previous work, which showed that a disproportionate consumption of goods and services -- which is an underlying cause of pollution -- compounds the exposure of people of color to air pollution," said co-author Jason Hill, a professor of bioproducts and biosystems engineering at the University of Minnesota. The study results come with caveats, the researchers said. The emissions data, air quality modeling and population counts all contain previously quantified uncertainty. However, because the team's findings are consistent across states, urban and rural areas, and concentration levels, they are unlikely to be an artifact of model or measurement bias.
This study focuses on outdoor air pollution concentrations in places where people reside and does not account for variability in mobility, access to health care and baseline mortality and morbidity rates, among other factors. "Some assume that when there is a systematic racial-ethnic disparity, such as the one we see here, that the underlying cause is a difference in income," Tessum said. "Because the data shows that the disparity cross-cuts all income levels, our study reinforces previous findings that race, rather than income, is what truly drives air pollution-exposure disparities." The researchers say they hope these findings will highlight potential opportunities for addressing this persistent environmental inequity.
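The population-weighted comparison described above (combining modeled concentrations with census population counts, then comparing each group's weighted mean exposure to the overall mean) can be sketched in a few lines. The grid-cell concentrations and group populations below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical modeled PM2.5 concentration (ug/m^3) in four grid cells,
# and the number of residents of two demographic groups in each cell.
pm25 = np.array([12.0, 8.0, 15.0, 6.0])
pop_group_a = np.array([500, 100, 800, 50])
pop_group_b = np.array([100, 900, 50, 700])

def weighted_exposure(conc, pop):
    """Population-weighted mean exposure for one group."""
    return float(np.average(conc, weights=pop))

total_pop = pop_group_a + pop_group_b
overall = weighted_exposure(pm25, total_pop)

# A positive disparity means the group breathes more pollution than the
# population-wide average; a negative one means less.
disparity_a = weighted_exposure(pm25, pop_group_a) - overall
disparity_b = weighted_exposure(pm25, pop_group_b) - overall
```

In the study, the same weighted averages are computed per emission source type, which is what lets the authors say the disparity holds for "nearly all" sources rather than a few.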
Environment
2021
April 28, 2021
https://www.sciencedaily.com/releases/2021/04/210428132952.htm
Research gives trees an edge in landfill clean-up
A research team from the USDA Forest Service and the University of Missouri has developed a new contaminant prioritization tool that has the potential to increase the effectiveness of environmental approaches to landfill clean-up.
Phytoremediation -- an environmental approach in which trees and other plants are used to control leachate and treat polluted water and soil -- hinges on matching the capability of different tree species with the types of contaminants present in soil and water. Identifying the worst contaminants within the dynamic conditions of a landfill has been challenging. "Thousands of contaminants can be present in landfill leachate, and contamination can vary by location and over time, so it can be difficult to determine what needs to be, or even can be, targeted with environmental remediation," said Elizabeth Rogers, a USDA Forest Service Pathways intern and the lead author of the study. "This tool allows site managers to prioritize the most hazardous contaminants or customize the tool to address local concerns." Rogers and co-authors Ron Zalesny, Jr., a supervisory research plant geneticist with the Northern Research Station, and Chung-Ho Lin, a research associate professor at the University of Missouri's Center for Agroforestry, combined multiple sources of data to develop a pollutant prioritization tool that systematically prioritizes contaminants according to reported toxicity values. Knowing which contaminants are the most hazardous allows scientists like Zalesny to better match trees and tree placement in landfills. "Phytoremediation research has focused on discovering which trees work best in particular soils and sites," Zalesny said. "The ability to home in on specific contaminants will enhance phytoremediation outcomes." The pollutant prioritization tool allows for greater transparency on the benefits of phytoremediation. "When you know what you are targeting, you can provide better information to your community on how long remediation will take and how effective it is," Lin said.
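A common way to prioritize contaminants "according to reported toxicity values" is a hazard-quotient ranking: measured concentration divided by a toxicity reference value. This hedged sketch assumes that approach; the contaminant names, concentrations, and reference values are hypothetical and are not taken from the Forest Service tool.

```python
# Hypothetical leachate measurements:
# name -> (measured concentration, toxicity reference value), both in ug/L.
contaminants = {
    "benzene":     (50.0, 5.0),
    "naphthalene": (20.0, 100.0),
    "arsenic":     (30.0, 10.0),
}

def hazard_quotient(conc, ref):
    """Ratio > 1 suggests the contaminant exceeds its screening level."""
    return conc / ref

# Rank from most to least hazardous.
ranked = sorted(contaminants.items(),
                key=lambda kv: hazard_quotient(*kv[1]),
                reverse=True)
priority = [name for name, _ in ranked]
```

With these invented numbers, benzene (HQ 10) outranks arsenic (HQ 3) and naphthalene (HQ 0.2), so a site manager would match tree species against benzene first.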
Environment
2021
April 28, 2021
https://www.sciencedaily.com/releases/2021/04/210428113756.htm
Childhood air pollution exposure linked to poor mental health at age 18
A multidecade study of young adults living in the United Kingdom has found higher rates of mental illness symptoms among those exposed to higher levels of traffic-related air pollutants, particularly nitrogen oxides, during childhood and adolescence.
Previous studies have identified a link between air pollution and the risk of specific mental disorders, including depression and anxiety, but this study looked at changes in mental health that span all forms of disorder and psychological distress associated with exposure to traffic-related air pollutants. The findings will appear April 28. The link between air pollution exposure and young adult mental illness symptoms is modest, according to the study's first author, Aaron Reuben, a graduate student in clinical psychology at Duke University. But "because harmful exposures are so widespread around the world, outdoor air pollutants could be a significant contributor to the global burden of psychiatric disease," he said. The World Health Organization (WHO) currently estimates that 9 out of 10 people worldwide are exposed to high levels of outdoor air pollutants, which are emitted during fossil fuel combustion in cars, trucks, and powerplants, and by many manufacturing, waste-disposal, and industrial processes. In this study, air pollution, a neurotoxicant, was found to be a weaker risk factor for mental illness than other better-known risks, such as family history of mental illness, but was of equal strength to other neurotoxicants known to harm mental health, particularly childhood exposure to lead. In a previous study in the same cohort, Helen Fisher of King's College London's Institute of Psychiatry, Psychology & Neuroscience, coauthor and principal investigator for this study, linked childhood air pollution exposure to the risk of psychotic experiences in young adulthood, raising concern that air pollutants may exacerbate risk for psychosis later in life. When combined with studies showing increased hospital admissions for many psychiatric illnesses during "poor" air quality days in countries like China and India, the current study builds on past findings to reveal that "air pollution is likely a non-specific risk factor for mental illness writ large,"
said Fisher, who noted that exacerbations of mental illness risk may show up differently in different children. The subjects of this study are a cohort of 2,000 twins born in England and Wales in 1994-1995 and followed to young adulthood. They have regularly participated in physical and mental health evaluations and have provided information about the larger communities in which they live. Researchers measured exposure to air pollutants -- particularly nitrogen oxides (NOx), a regulated gaseous pollutant, and fine particulate matter (PM2.5), a regulated aerosol pollutant with suspended particles below 2.5 microns in diameter -- by modeling air quality around study members' homes at ages 10 and 18 years using high-quality air dispersion models and data provided by the UK National Atmospheric Emissions Inventory and Imperial College's UK road-traffic emissions inventory. Twenty-two percent of the study members were found to have had exposure to NOx that exceeded WHO guidelines, and 84% had exposure to PM2.5 that exceeded guidelines. The research team, based at Duke and King's IoPPN, also assessed participant mental health at age 18. Symptoms associated with ten different psychiatric disorders -- dependence on alcohol, cannabis, or tobacco; conduct disorder and attention-deficit/hyperactivity disorder; major depression, generalized anxiety disorder, post-traumatic stress disorder, and eating disorder; and thought disorder symptoms related to psychosis -- were used to calculate a single measure of mental health, called the psychopathology factor, or "p-factor" for short. The higher an individual's p-factor score, the greater the number and severity of psychiatric symptoms identified.
Individuals can also differ in their mental health across sub-domains of psychopathology, which group together symptoms of distress or dysfunction that are manifested in outwardly visible ways (externalizing problems, like conduct disorder), experienced largely internally (internalizing problems, like anxiety), or via delusions or hallucinations (thought disorder symptoms). Air pollution effects on mental health were observed across these subdomains of psychopathology, with the strongest links to thought disorder symptoms. Unique to this study, the researchers also assessed characteristics of children's neighborhoods to account for disadvantageous neighborhood conditions that associate with higher air pollution levels and greater risk of mental illness, including socioeconomic deprivation, physical dilapidation, social disconnection, and dangerousness. While air pollution levels were greater in neighborhoods with worse economic, physical, and social conditions, adjusting the study results for neighborhood characteristics did not alter the results, nor did adjustment for individual and family factors, such as childhood emotional and behavioral problems or family socioeconomic status and history of mental illness. "We have confirmed the identification of what is essentially a novel risk factor for most major forms of mental illness," said Reuben, "one that is modifiable and that we can intervene on at the level of whole communities, cities, or even countries." In the future, the study team is interested in learning more about the biological mechanisms that link early life air pollution exposure to greater risk for mental illness at the transition to adulthood.
Previous evidence suggests that air pollutant exposures can lead to inflammation in the brain, which may lead to difficulty regulating thoughts and emotions. While the findings are most relevant to high-income countries with only moderate levels of outdoor air pollutants, like the US and the UK, there are also implications for low-income, developing countries with higher air pollution exposures, like China and India. "We don't know what the mental health consequences are of very high air pollution exposures, but that is an important empirical question we are investigating further," said Fisher. Support for the study came from the UK Medical Research Council (MRC) [grant G1002190]; the US National Institute of Child Health and Human Development [grant HD077482]; the US National Institute of Environmental Health Science [grant F31ES029358]; Google; the Jacobs Foundation; a joint Natural Environment Research Council, UK MRC and Chief Scientist Office grant [NE/P010687/1]; and the King's Together Multi and Interdisciplinary Research Scheme (Wellcome Trust Institutional Strategic Support Fund; grant 204823/Z/16/Z).
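The study estimates the p-factor with formal latent-variable models. As a crude stand-in that conveys the idea, one can z-score each disorder's symptom count across people and average the z-scores into a single score per person: higher scores mean more numerous and severe symptoms. The symptom counts below are invented for illustration.

```python
import numpy as np

# Rows = people, columns = symptom counts for ten disorder categories
# (hypothetical data; the real study uses diagnostic interview measures).
symptoms = np.array([
    [2, 0, 1, 3, 1, 0, 2, 1, 0, 1],   # person with moderate symptoms
    [5, 4, 6, 2, 7, 3, 4, 5, 2, 6],   # person with many symptoms
    [0, 1, 0, 0, 1, 0, 0, 0, 1, 0],   # person with few symptoms
], dtype=float)

# Standardize each disorder's counts, then average across disorders.
z = (symptoms - symptoms.mean(axis=0)) / symptoms.std(axis=0)
p_factor = z.mean(axis=1)   # one general-psychopathology score per person
```

A formal analysis would fit a bifactor or one-factor model instead of a plain average, but the ordering of people is the same idea: the second person scores highest, the third lowest.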
Environment
2021
April 28, 2021
https://www.sciencedaily.com/releases/2021/04/210428113745.htm
Is forest harvesting increasing in Europe?
Is forest harvesting increasing in Europe? Yes, but not as much as reported last July in a controversial study.
The study, "Abrupt increase in harvested forest area over Europe after 2015," used satellite data to assess forest cover and claimed an abrupt increase of 69% in harvested forest in Europe from 2016. The authors, from the European Commission's Joint Research Centre (JRC), suggested that this increase resulted from expanding wood markets encouraged by EU bioeconomy and bioenergy policies. The publication triggered a heated debate, both scientific and political, as the EU Parliament and Council were discussing the Post-2020 EU Forest Strategy. In a published response, researchers identified errors in the JRC analysis. These errors relate to satellite sensitivity improving markedly over the period of assessment, as well as to changes in forests due to natural disturbances -- for example drought- and storm-related die-back and tree-falls -- being often wrongly attributed to timber harvests. Dr Marc Palahí, Director of the European Forest Institute (EFI), who led the response, said: "In the future forest information should be more carefully assessed, taking into account a wide variety of methodological issues and factors, before drawing hasty conclusions. This requires enhanced collaboration as well as scientifically robust and common approaches between the European Commission and Member States to enable better informed forest-related policies in the context of the EU Green Deal." "Over the years, we are becoming better and better at detecting forest loss," said Dr Ruben Valbuena from Bangor University, who co-led the study. One of the errors in the JRC study was to underestimate how satellite images, and the methods used to analyse them, have improved over the periods they compared.
"Satellite products can only be employed under strict protocols assessing errors, and with better distinction between deforestation and other causes of forest loss," he said. Professor Gert-Jan Nabuurs from Wageningen University, an IPCC lead author who participated in the study, commented that "the harvest across Europe's forests has increased in recent years, but by just 6%, not the 69% claimed by the JRC study. This is due primarily to a moderate economic recovery after the 2008-2012 recession. What is really striking is the unprecedented levels of natural disturbances affecting our forests in many parts of the continent in recent years." The implications of the errors found by Palahí and colleagues are of global relevance, as many studies to inform policy-makers and society at large on the state of the world's forests are nowadays based on remote sensing. The analysis of products based on satellite imagery is becoming key, for instance, to understanding the extent of global deforestation, and thus we need scientifically robust remote sensing methods for sound policy-making.
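The scale of the correction can be put in perspective with simple arithmetic. A minimal sketch (our own illustration of the figures quoted above, not a computation from the study):

```python
# The JRC study reported a 69% jump in harvested forest area after 2015;
# the response estimates the real increase at roughly 6%. The ratio below
# shows what share of the reported signal would then be an artefact of
# improved satellite sensitivity and misattributed natural disturbances.
reported_increase = 0.69   # JRC study's claimed increase
estimated_real = 0.06      # increase estimated by the response

artefact_share = (reported_increase - estimated_real) / reported_increase
print(f"Share of reported increase attributable to artefacts: {artefact_share:.0%}")
```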
Environment
2021
April 28, 2021
https://www.sciencedaily.com/releases/2021/04/210428090241.htm
Spring forest flowers likely key to bumblebee survival
For more than a decade, ecologists have been warning of a downward trend in bumble bee populations across North America, with habitat destruction a primary culprit in those losses. While efforts to preserve wild bees in the Midwest often focus on restoring native flowers to prairies, a new Illinois-based study finds evidence of a steady decline in the availability of springtime flowers in wooded landscapes.
The scarcity of early season flowers in forests -- a primary food source for bumble bees at this time of year -- likely endangers the queen bees' ability to start their nesting season and survive until other floral resources become available, researchers say. "We went through long-term vegetation data from 262 random sites across Illinois, most of them privately owned," said study co-author David Zaya, a plant ecologist in the Illinois Natural History Survey at the University of Illinois Urbana-Champaign. These data were collected through the Critical Trends Assessment Program, which began in 1997 at the INHS. "We filtered our data to look at two subsets of plants: those used by bumble bees generally, and those thought to be preferred by the endangered rusty patched bumble bee, Bombus affinis," said study lead author John Mola, a research ecologist at the U.S. Geological Survey Fort Collins Science Center. "We then looked at how the abundance and richness of these bumble bee food sources either increased, decreased or stayed the same since 1997." The team also looked at timing -- scouring records on flowering dates to map the availability of floral resources throughout the year for each habitat type: forest, grassland and wetland. They also analyzed satellite data to track trends in forest, grassland, wetland and agricultural acreage since 1997. The research yielded some good news. "We found that bumble bee food-plant cover has increased within Illinois grasslands by about 7% since 1997," Mola said.
"This is great because it suggests that restoration or management actions by landowners is succeeding." However, satellite data revealed that total acreage devoted to grassland in the state shrank by about 7.5% over the same time period. "It may not matter much that the typical grassland is better if there's less of it," Mola said. While the richness of floral resources in grasslands increased, the researchers saw an opposite trend in forested areas of the state, with food plants for bees in forests declining by 3-4% since 1997. "The biggest finding of this study was that forest plants that bloom in spring appear to be declining, and the timing of those flowers matches up almost perfectly with when queens are out and foraging," Zaya said. "We can't say for sure, but if declining food is contributing to bumble bee declines, it is most likely related to when the queens are trying to establish nests." Previous studies have shown that browsing by deer and invasive shrubs reduce the abundance of flowering plants in forests. Climate change also shifts the flowering times for many plants, potentially causing a mismatch between the bees' needs and the availability of food. "The forest is a really important habitat for bees early in the season that often gets overlooked in pollinator conservation planning," Mola said. "This has me thinking very carefully about the role of forests in bumble bee conservation." The Illinois Department of Natural Resources supports the Critical Trends Assessment Program. The INHS is a division of the Prairie Research Institute at the U. of I.
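The two opposing grassland trends above (better per-acre floral cover, less total grassland) can be combined in a rough back-of-envelope estimate. The multiplicative model is our own simplifying assumption, not part of the study's analysis:

```python
# Combine the reported +7% food-plant cover gain within grasslands with
# the ~7.5% loss of grassland acreage to estimate the net statewide change
# in total grassland food-plant cover (assuming the two effects multiply).
cover_change = 0.07    # +7% food-plant cover within remaining grassland
area_change = -0.075   # -7.5% total grassland acreage since 1997

net = (1 + cover_change) * (1 + area_change) - 1
print(f"Estimated net change in total grassland food-plant cover: {net:+.1%}")
```

The sketch illustrates Mola's point: a small net loss despite improving habitat quality.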
Environment
2021
April 28, 2021
https://www.sciencedaily.com/releases/2021/04/210428080933.htm
Researchers find how tiny plastics slip through the environment
Washington State University researchers have shown the fundamental mechanisms that allow tiny pieces of plastic bags and foam packaging at the nanoscale to move through the environment.
The researchers found that a silica surface such as sand has little effect on slowing down the movement of the plastics, but that natural organic matter resulting from decomposition of plant and animal remains can either temporarily or permanently trap the nanoscale plastic particles, depending on the type of plastic. "We're looking at developing a filter that can be more efficient at removing these plastics," Chowdhury said. "People have seen these plastics escaping into our drinking water, and our current drinking water system is not adequate enough to remove these micro and nanoscale plastics. This work is the first fundamental way to look at those mechanisms." Around since the 1950s, plastics have properties that make them useful for modern society. They are water resistant, cheap, easy to manufacture and useful for a huge variety of purposes. However, plastics accumulation is becoming a growing concern around the world, with giant patches of plastic garbage floating in the oceans and plastic waste showing up in the most remote areas of the world. "Plastics are a great invention and so easy to use, but they are so persistent in the environment," Chowdhury said. After they're used, plastics degrade through chemical, mechanical and biological processes to micro- and then nano-sized particles less than 100 nanometers in size. Despite their removal in some wastewater treatment plants, large amounts of micro and nanoscale plastics still end up in the environment. More than 90% of tap water in the U.S. contains nanoscale plastics, Chowdhury said, and a 2019 study found that people eat about five grams of plastic a week -- roughly the amount of plastic in a credit card.
The health effects of such environmental pollution are not well understood. "We don't know the health effects, and the toxicity is still unknown, but we continue to drink these plastics every day," said Chowdhury. As part of the new study, the researchers studied the interactions with the environment of the tiniest particles of the two most common types of plastics, polyethylene and polystyrene, to learn what might impede their movement. Polyethylene is used in plastic bags, milk cartons and food packaging, while polystyrene is a foamed plastic that is used in foam drinking cups and packaging materials. In their work, the researchers found that the polyethylene particles from plastic bags move easily through the environment -- whether through a silica surface like sand or natural organic matter. Sand and the plastic particles repel each other similarly to like poles of a magnet, so the plastic won't stick to the sand particles. The plastic particles do glom onto natural organic material that is ubiquitous in natural aquatic environments, but only temporarily. They can be easily washed off with a change in chemistry in the water. "That's bad news for polyethylene in the environment," said Chowdhury. "It doesn't stick to the silica surface that much and if it sticks to the natural organic matter surface, it can be re-mobilized. Based on these findings, it indicates that nanoscale polyethylene plastics may escape from our drinking water treatment processes, particularly filtration." In the case of polystyrene particles, the researchers found better news. While a silica surface was not able to stop its movement, organic matter did. Once the polystyrene particles stuck to the organic matter, they stayed in place. The researchers hope that the research will eventually help them develop filtration systems for water treatment facilities to remove nanoscale particles of plastics. The work was funded by the State of Washington Water Research Center.
Environment
2021
April 28, 2021
https://www.sciencedaily.com/releases/2021/04/210428090248.htm
Lactic acid bacteria can extend the shelf life of foods
Researchers at the National Food Institute have come up with a solution that can help combat both food loss and food waste: They have generated a natural lactic acid bacterium, which secretes the antimicrobial peptide nisin, when grown on dairy waste.
Nisin is a food-grade preservative, which can extend the shelf life of foods and thus be used to reduce food waste. The discovery also makes it possible to better utilize the large quantities of whey generated when cheese is made. Nisin is approved for use in a number of foods, where it can prevent the growth of certain spoilage microorganisms as well as microorganisms that make consumers sick. It can for instance inhibit spore germination in canned soups and prevent late blowing in cheeses -- without affecting their flavour. In theory, nisin could be added to fresh milk to extend its shelf life. However, different countries have different rules stating what types of products nisin may be added to and in which amounts. Many dairies are already turning a profit by extracting protein and lactose from the many tons of whey they generate, which they use in e.g. infant formula and sports nutrition. What is left behind can still be used to produce nisin. In addition to ensuring better resource utilization, there may be a financial gain from producing nisin: Most commercially available nisin products contain 2.5% nisin and cost approximately 40 euros per kilogram.
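The closing figures imply a price for the nisin itself; a quick illustrative calculation (assuming the product price scales linearly with nisin content):

```python
# Most commercial nisin products contain 2.5% nisin and cost ~40 euros/kg,
# so the implied price of the pure peptide is product price / fraction.
product_price_eur_per_kg = 40.0   # typical commercial nisin product
nisin_fraction = 0.025            # 2.5% nisin content

pure_nisin_price = product_price_eur_per_kg / nisin_fraction
print(f"Implied price of pure nisin: ~{pure_nisin_price:.0f} euros/kg")
```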
Environment
2021
April 27, 2021
https://www.sciencedaily.com/releases/2021/04/210427195141.htm
Drones provide bird's eye view of how turbulent tidal flows affect seabird foraging habits
The foraging behaviour of seabirds is dramatically affected by turbulence caused by natural coastal features and human-made ocean structures, new research has shown.
In a first-of-its-kind study, scientists from the UK and Germany used drones to provide a synchronised bird's eye view of what seabirds see and how their behaviour changes depending on the movement of tidal flows beneath them. The research focused on the wake of a tidal turbine structure set in a tidal channel -- Strangford Lough in Northern Ireland -- that has previously been identified as a foraging hotspot for terns. Through a combination of drone tracking and advanced statistical modelling, it showed that terns were more likely to actively forage over vortices (swirling patches of water). However, eruptions of upwelling water (boils) ahead of the terns' flight path prompted them to stay on course as they approached. Writing in the Royal Society's flagship biological research journal, the researchers also say the approach potentially gives them the ability to predict how species might respond to environmental changes such as the increased future development of ocean renewable energy sites and climate change. The study was conducted by researchers from Queen's University Belfast and the University of Plymouth (UK), and Bielefeld University (Germany). Dr Lilian Lieber, Bryden Centre Research Fellow at Queen's and the study's lead investigator, said: "Our research highlights the importance of identifying changes in local flow conditions due to ocean energy structures, which can change the occurrence, scale and intensity of localised turbulence in the water. Through a fantastic interdisciplinary collaboration, we were able to track prevalent flow features and seabirds on thus far unobtainable scales, shedding new light on tern foraging associations with turbulence. We found that terns were more likely to actively forage over vortices, while conspicuous upwellings provided a strong physical cue even at some distance, leading them to investigate such features.
This research can help us predict seabird responses to coastal change." Co-investigator Professor Roland Langrock, Professor in Statistics and Data Analysis at Bielefeld, said: "It is extremely exciting that we now have these incredibly detailed animal movement data, which allows us to investigate behavioural processes at effectively arbitrarily fine scales of animal decision-making. While it presented some new statistical challenges, the interdisciplinary nature of our project presents a valuable contribution to the emerging field of high-throughput movement ecology." Co-investigator Dr Alex Nimmo-Smith, Associate Professor in Marine Physics at Plymouth, led the computational development of automatically and reliably tracking the terns using machine learning, as well as mapping the underlying turbulent features. He added: "The drone provided a real bird's eye view, allowing us to track the highly localised foraging behaviour of the terns and the close association they have with particular flow features. Upwelling boils and swirling vortices, characteristic of strong tidal flows, can bring potential prey items (such as small fish) to the water surface and trap them there. Therefore, these physical processes provide foraging opportunities for the terns."
Environment
2021
April 27, 2021
https://www.sciencedaily.com/releases/2021/04/210427163246.htm
Fishing in African waters
African waters have been contributing to the global supply of fish for years, with three of the four most productive marine ecosystems in the world near the continent. African countries' Exclusive Economic Zones (EEZs) contributed over 6 million metric tons of fish to the world's food supply, supporting food security and livelihoods in the continent, while generating $15 billion for the African gross domestic product in 2011. Every coastal state has an EEZ, an area of ocean adjacent to its shores in which it has special rights regarding the exploration and use of marine resources.
Industrial fleets from countries around the world have been increasingly fishing in African waters, but with climate change and increasing pollution threatening Africa's fish stocks, there is growing concern about the sustainability of these marine fisheries if they continue to be exploited by foreign countries. A new study used Automatic Identification System (AIS) satellite data from Global Fishing Watch to describe and characterize the spatial characteristics of African and foreign industrial fishing activities within these African EEZs. Mi-Ling Li, assistant professor in the University of Delaware's School of Marine Science and Policy in the College of Earth, Ocean and Environment (CEOE), served as the lead author on the paper. Countries in Africa have a short-term economic incentive to grant foreign countries access to fish in their waters. Those foreign countries have to make direct payments to acquire permits to fish in a country's EEZ. "There has been controversy over foreign fishing in African waters, but there hasn't been a quantitative assessment of how they act," said Li. "It's difficult because a lot of the African countries do not have good surveillance of their fisheries." The study described spatial and temporal characteristics for both African and foreign industrial fishing activities -- examining boats that were large enough to carry AIS trackers. "African fisheries desperately need better information and data for management," said David Kroodsma, Director of Research and Innovation at the Global Fishing Watch and a co-author of the paper.
"It is exciting to be able to use vessel GPS data to help solve this challenge and reveal fishing activity across the continent." The paper highlights where the boats spent most of their time, how long they fished, and what fish they reported catching in those locations. The EEZs fished by a large number of countries were generally located in West Africa, with the EEZs of Western Sahara and Mauritania fished by the highest number of foreign countries. The resources of specific fish stocks could determine where vessels would fish. Vessels from Japan, for instance, spent most of their time fishing in eastern Africa for tuna, with an estimated 75% of total reported Japanese catches coming from the waters of Madagascar, Mauritius, Mozambique, and the Seychelles. "This paper shows that fisheries and their management in Africa are globally interconnected, highlighting the need for international cooperation to address the challenges that fisheries in the continent are facing," said William Cheung, professor at the Institute for the Oceans and Fisheries at the University of British Columbia, who is a co-author of the study. "We demonstrate the importance of having accessible data, including those from new technology, to generate knowledge that is necessary to address these challenges." While the AIS data can show where and how long the vessels were fishing, there is a reliance on the reporting data from the vessels themselves to confirm what they are catching. Sometimes the data do not correlate, pointing to the possibility of illegal, unreported or unregulated (IUU) fishing. The study used Namibia, an African country in that region, as a case study. Unlike some other African countries, Namibia requires fleets in its EEZ to land their catches in its domestic ports. Not all fishing fleets followed that regulation, however.
While 20 fishing entities were identified by AIS as being in Namibian waters, not all of the vessels recorded having caught fish in those waters. "Namibia has a relatively good surveillance system, and they require every fleet who fishes there to land in their docks," said Li. "But even with those regulations, we find a big discrepancy in who reported fishing and catch there and who we detected by AIS. This is a big issue with regards to illegal fishing in African waters." The authors said the AIS system can be utilized to help detect and characterize unreported activities in these EEZs, which can help with the response to IUU fishing.
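The Namibia cross-check described above is essentially a set comparison between fleets detected by AIS and fleets that reported catch. An illustrative sketch with hypothetical fleet names (the study's actual vessel data are not reproduced here):

```python
# Flag fleets that AIS places in an EEZ but that reported no catch there --
# the kind of discrepancy the authors highlight as a possible IUU signal.
detected_by_ais = {"FleetA", "FleetB", "FleetC", "FleetD"}  # hypothetical
reported_catch = {"FleetA", "FleetC"}                       # hypothetical

unreported = sorted(detected_by_ais - reported_catch)
print("Detected by AIS but no reported catch:", unreported)
```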
Environment
2021
April 27, 2021
https://www.sciencedaily.com/releases/2021/04/210427135521.htm
Incentives could turn costs of biofuel mandates into environmental benefits
New studies from the Center for Advanced Bioenergy and Bioproducts Innovation (CABBI) shed more light on the economic and environmental costs of mandates in the Renewable Fuels Standard (RFS), a federal program to expand the nation's biofuels sector.
Researchers said the studies indicate the need to adopt more targeted policies that value the environmental and ecosystem benefits of perennial bioenergy crops over cheaper options -- and provide financial incentives for farmers to grow them. The RFS was issued in 2005 and updated through the Energy Independence and Security Act of 2007 to enhance U.S. energy security, reduce greenhouse gas (GHG) emissions, and promote rural development. The 2007 standards mandated blending 36 billion gallons of first-generation biofuels (such as ethanol, made from food crops like corn) and second-generation biofuels (made from the biomass of miscanthus or other energy feedstocks) with fossil fuels by 2022, to replace petroleum-based heating oil and fuel. The corn ethanol mandate has been met, with 15 billion gallons produced annually, but production of cellulosic biofuels has been negligible. Targets beyond 2022 are yet to be determined. The biofuel mandates impact the environment in multiple ways -- affecting land use, GHG emissions, nitrogen (N) application, and leakage of harmful nitrogen compounds into the soil, air, and water. Those impacts vary by feedstock, as do the economic costs and benefits for consumers buying food and fuel and for producers, depending on cultivation costs and the competition for cropland for alternative uses. The first study calculated the net economic and environmental costs of the RFS mandates and found that maintaining the corn ethanol mandate would lead to a cumulative net cost to society of nearly $200 billion from 2016 to 2030 compared to having no RFS.
The social cost of nitrogen damage from corn ethanol production substantially offsets the social benefits from GHG savings. On the other hand, implementation of the additional cellulosic mandate could provide substantial economic and environmental benefits, given technological innovations that lower the costs of converting biomass to cellulosic ethanol and policies that place a high monetized value on GHG mitigation benefits. The second study examined how full implementation of the RFS mandates will affect water quality in the Mississippi/Atchafalaya River Basin (MARB) and Gulf of Mexico, which are plagued by nitrogen runoff from corn and soybean fields. Rising N levels have depleted oxygen and created a hypoxic dead zone in the gulf. Specifically, this study looked at whether diversifying cropland with perennial energy crops -- such as CABBI grows -- could reduce N loss associated with corn production and thus improve water quality while meeting RFS goals. It found that the most economical place to grow perennial bioenergy crops, which typically require less nitrogen fertilizer and produce lower N runoff, was on idle cropland. This limited their potential to reduce N runoff, which would be highest if they replaced N-intensive row crops on cropland. The N reduction benefits of bioenergy crops would also be more than offset by the increase in runoff generated by the harvesting of low-cost crop residues such as corn stover -- leaves and stalks of corn left after the grain is harvested -- for cellulosic biomass. The findings suggest that targeted incentives for reducing N loss are needed to persuade growers to replace N-intensive row crops, as well as biomass from corn stover, with bioenergy crops. Together, the studies showed that maintaining the corn ethanol mandate pushes more land into corn production, which increases the market price of other agricultural commodities.
While producers might benefit from higher market prices, consumers who buy fuel or agricultural products pay the cost. And although the corn ethanol mandate can help mitigate GHG emissions by displacing fossil fuels with biofuels, it increases nitrogen leaching because of increased fertilizer use with expanded corn production. That worsens water quality in the MARB and Gulf of Mexico and leads to a huge environmental and social cost. In contrast, the cellulosic ethanol mandate could provide an overall benefit with the right policies. Supporting research and development to lower the cost of converting biomass to cellulosic ethanol would substantially reduce production costs and increase social benefits, and a high monetized value for GHG mitigation could offset all other costs. These findings should lead policymakers to question the effectiveness of technology mandates like the RFS that treat all cellulosic feedstocks as identical. The RFS incentivizes cheaper options like corn stover and limits incentives to grow high-yielding perennial energy crops that have lower carbon intensity and N-leakage but are more costly under current technology. CABBI researchers hope performance-based policies -- including the low carbon fuel standard, carbon and nitrogen leakage taxes, or limits on crop-residue harvest and N application -- can be implemented to supplement the RFS mandates after 2022. The complexity of biofuel policies requires expertise from both agronomists and economists, as in these studies. Both research teams developed integrated economic and biophysical models incorporating a broad range of factors into their analyses. "CABBI provides a great opportunity for this kind of research, inspiring collaborations from different disciplines," Khanna said.
Environment
2021
April 27, 2021
https://www.sciencedaily.com/releases/2021/04/210427122425.htm
'Dominating' fungus could be solution to producing more biofuels and chemicals
The discovery of a novel enzyme that releases a valuable chemical from agricultural waste could provide an important breakthrough in the upscaling of renewable fuels and chemicals, a new study shows.
Researchers -- led by the University of York -- have discovered an enzyme in a fungus which can act as a catalyst to bring about a biochemical reaction that breaks down lignocellulose. Lignocellulose is found in forestry and agricultural waste like wheat straw, which was used in this research. Scientists have long considered that this dry matter could be used as a sustainable resource for the production of fuels and chemicals if a way to break it down could be found so that it can be processed effectively. Professor Neil Bruce from the Department of Biology and Director of the Centre for Novel Agricultural Products (CNAP) said: "We believe this discovery is important as there is much interest in using lignocellulose as a renewable and sustainable resource for the production of liquid fuels and chemicals. "Although lignocellulose is one of the most abundant forms of fixed carbon in the biosphere, the use of lignocellulose as a material to supply bioindustry has been hampered by its composition and structure, which renders it highly obstinate to degradation. "This is, in part, due to the presence of lignin, a complex aromatic polymer that encases the structure to block enzyme accessibility." There are currently no industrial biocatalytic processes for breaking down lignin. But researchers found that an enzyme produced by a fungus called Parascedosporium putredinis NO1 can break through the lignin to begin the essential process of degradation needed to ultimately produce biofuels. Professor Bruce added: "P.
putredinis NO1 is able to dominate cultures in the latter stages of wheat straw degradation in a mixed microbial community when easily accessible polysaccharides have been exhausted. "We demonstrate that treatments with this enzyme can increase the digestibility of lignocellulosic biomass, offering the possibility of producing a valuable product from lignin while decreasing processing costs." The research was in collaboration with the Department of Energy's Great Lakes Bioenergy Research Center at the Wisconsin Energy Institute, and the University of Wisconsin, USA.
Environment
2021
April 27, 2021
https://www.sciencedaily.com/releases/2021/04/210427122414.htm
Solar-powered desalination unit shows great promise
Despite the vast amount of water on Earth, most of it is nonpotable seawater. Freshwater accounts for only about 2.5% of the total, so much of the world experiences serious water shortages.
When sunlight strikes the TiNO layer, it heats rapidly and vaporizes the water. By placing the unit in a transparent container with a sloped quartz roof, the water vapor can be condensed and collected, producing a copious amount of freshwater. "In the solar energy field, TiNO is a common commercial solar absorbing coating, widely used in solar hot water systems and in photovoltaic units," author Chao Chang said. "It has a high solar absorption rate and a low thermal emittance and can effectively convert solar energy into thermal energy." The investigators developed a method for depositing a layer of TiNO using a technique known as magnetron sputtering. They used a special type of highly porous paper known as airlaid paper that acts as a wicking material to supply water from the seawater reservoir. Airlaid paper is made from wood fibers and is commonly used in disposable diapers. The evaporation unit included three parts: the TiNO layer on top, a thermal insulator, and the airlaid paper on the bottom. The insulation layer is polyethylene foam, which has many air-filled pores that trap heat and allow the multi-layer unit to float on top of a reservoir of seawater, minimizing heat loss to the surroundings. "The porous airlaid paper used as the substrate for the TiNO solar absorber can be reused and recycled more than 30 times," said Chang. Salt precipitation on the TiNO surface could interfere with efficiency, but the investigators found that even after a long time, no salt layer formed on the surface. They suggest the porous nature of the paper wicks away any salt that might form on the surface, returning it to the seawater reservoir. The salinity of ordinary seawater is over 75,000 milligrams of salt per liter. Ordinary drinking water has a salinity of about 200 milligrams per liter.
The desalination unit was able to decrease the seawater salinity to less than 2 milligrams per liter.The combination of low cost, high efficiency, and lack of fouling for this desalination technology shows it has the potential to help solve the world's freshwater shortage.
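The reported numbers translate into a striking removal fraction. A minimal sketch using the figures as stated in the article (note that typical open-ocean seawater is nearer 35,000 mg/L than the 75,000 mg/L quoted here):

```python
# Salinity figures as given in the article.
feed_salinity = 75_000.0   # mg salt per liter, seawater as stated above
product_salinity = 2.0     # mg/L after desalination
drinking_water = 200.0     # mg/L, typical drinking water salinity

removal = 1 - product_salinity / feed_salinity
print(f"Salt removal: {removal:.4%}")
print("Below drinking-water salinity:", product_salinity < drinking_water)
```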
Environment
2021
April 27, 2021
https://www.sciencedaily.com/releases/2021/04/210427094803.htm
Ship traffic dropped during first months of COVID pandemic
Ship movements on the world's oceans dropped in the first half of 2020 as Covid-19 restrictions came into force, a new study shows.
Researchers used a satellite vessel-tracking system to compare ship and boat traffic in January to June 2020 with the same period in 2019. The study, led by the University of Exeter (UK) and involving the Balearic Islands Coastal Observing and Forecasting System and the Mediterranean Institute for Advanced Studies (both in Spain), found decreased movements in the waters off more than 70 per cent of countries. Global declines peaked in April 2020, but by June -- as Covid restrictions were eased in many countries -- ship movements began to increase. "As lockdowns came into force, we heard stories and began to see early research findings that suggested reduced boat movements had allowed some marine ecosystems to recover," said lead author Dr David March of the Centre for Ecology and Conservation on Exeter's Penryn Campus in Cornwall. "There were reports of clearer water in Venice's canals, and a study showed a reduction in underwater noise at Vancouver." Professor Brendan Godley, who leads the Exeter Marine research group, added: "The effects of ships and boats -- from noise and pollution to fishing and collisions with animals -- have a major impact on marine ecosystems across the world. "Our study aimed to measure the impact of Covid on this traffic, and we are continuing to monitor this as the restrictions on human activity continue to change. "Quantifying the changes in human activities at sea paves the way to research the impacts of Covid-19 on the blue economy and ocean health." The study found: "The long-term trend is for increased global ship movements, so a modest decrease may represent a more significant reduction compared to the amount of traffic we would otherwise have seen," Dr March concluded. Near real-time vessel traffic data was provided by exactEarth and Marine Traffic. Funding for the study came from a Marie Skłodowska-Curie grant under the European Union's Horizon 2020 programme, the Natural Environment Research Council, the Waterloo Foundation, and the Darwin
Initiative. The paper, published in the journal
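The study's core measurement is a year-on-year comparison of vessel activity per country's waters. An illustrative sketch with hypothetical counts (the real analysis used AIS positions for January-June 2019 vs 2020):

```python
# Hypothetical monthly-averaged vessel counts per EEZ; the study found
# declines in the waters off more than 70 per cent of countries.
traffic_2019 = {"ES": 1200, "UK": 980, "IT": 1500}  # hypothetical
traffic_2020 = {"ES": 950, "UK": 900, "IT": 1100}   # hypothetical

changes = {c: (traffic_2020[c] - traffic_2019[c]) / traffic_2019[c]
           for c in traffic_2019}
declined = sum(1 for v in changes.values() if v < 0)
print(f"EEZs with declining traffic: {declined} of {len(changes)}")
```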
Environment
2021
April 27, 2021
https://www.sciencedaily.com/releases/2021/04/210427094744.htm
Value from sewage? New technology makes pig farming more environmentally friendly
Anyone who lives in Okinawa, a subtropical island in Japan, has an appreciation of the intensity of its pig farming industry. The farms have a large effect on the island's economy and culture. According to Japan's Cabinet Office, as of 2018, there were over 225,000 pigs in Okinawa. Pork is a staple in the local diet and is found in many dishes in traditional restaurants. But the presence of the pig farms has another, less welcome, impact -- the odor-y kind. Drive through some particularly farm-filled areas with the car's windows wound down and you're sure to be filled with regret.
This smell is, at least in part, caused by a byproduct of the pig farming. Across Okinawa, large amounts of wastewater are produced by the farms. Now, researchers from the Biological Systems Unit at the Okinawa Institute of Science and Technology Graduate University (OIST) have created a new system for treating this wastewater, which they've successfully tested on a local swine farm in Okinawa.

"Our new system uses two different chambers," explained Dr. Anna Prokhorova, lead author of a recently published paper on the work.

This is a stark contrast to the traditional aeration system currently utilized by farmers, which mainly treats organic matter in the wastewater and converts the ammonium present to nitrate, but does not treat the nitrate further. In Japan, the nitrate discharge limit for the livestock industry will soon be lowered to one fifth of the current level (which today sits at 500 milligrams of nitrate-nitrogen per liter) to bring it in line with other industries. More than 35% of farms in Okinawa are likely to exceed this impending limit.

"This is of huge concern because nitrate contamination can have disastrous impacts on both human health and the environment," said Dr. Mami Kainuma, group leader in the Biological Systems Unit. "When nitrate is ingested by people, it is converted to nitrite, which impacts the blood's ability to carry oxygen and can lead to methemoglobinemia, or blue baby syndrome."

The new system relies on a rich community of bacteria to begin the process. In the first chamber -- the anode chamber -- the bacteria react with the organic molecules present, releasing electrons in the process. These electrons are then transferred to the second chamber -- the cathode chamber -- via the electrodes. The cathode chamber contains wastewater that has already gone through the aeration process and thus has high levels of nitrate. Bacteria on the surface of the cathode accept these electrons and use them to power the conversion of nitrate to nitrogen gas. The advantage of this system is that nitrate removal can happen in wastewater with low organic matter content, such as the already-aerated water.

After successfully trialing this system in the lab, the researchers set up an initial pilot experiment at a pig farm at the Okinawa Prefecture Livestock and Grassland Research Center, working with the Okinawa Prefecture Environment Science Center and Okidoyaku. There they had access to both the aeration tank and raw wastewater. The project was funded by the Okinawa Prefectural Government and monitored for over a year. Because of the integral role the bacterial communities played, the researchers also analyzed which species were present, how the composition of the community changed over time, and which species were responsible for each step.

The long-term experiment showed that the dominant nitrate-removing bacteria were those that can receive electrons to grow. During the treatment, their activity was stimulated by an applied electrode potential in the range of -0.4 V to -0.6 V, which led to more efficient treatment of the wastewater. These bacterial communities grew by over 60% in total in the cathode chamber and continued to exhibit strong activity, leading to a high rate of nitrate reduction. Another big advantage was that, as the organic matter -- in particular, the volatile fatty acids -- was degraded in the raw wastewater, the smell was lessened and the number of pathogens reduced.

"We're very happy with the results so far. It's much more efficient than we expected," said Dr. Prokhorova. "This system is scalable, low cost, easy to assemble, and low maintenance.
We're hopeful that, within the next few years, it will be utilized by farmers in Okinawa and other locations with similar issues, such as rural communities in mainland Japan and Southeast Asia."

The work will continue as a proof-of-concept program at OIST.
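The impending discharge limit mentioned above is easy to make concrete. A minimal sketch, assuming only the figures quoted in the article (a current limit of 500 mg of nitrate-nitrogen per liter, soon cut to one fifth); the farm names and discharge values are hypothetical:

```python
# Sketch of the compliance arithmetic quoted above: Japan's nitrate-
# nitrogen discharge limit for livestock (currently 500 mg/L) will be
# lowered to one fifth of that level. Farm names/values are hypothetical.

CURRENT_LIMIT_MG_L = 500.0
NEW_LIMIT_MG_L = CURRENT_LIMIT_MG_L / 5.0  # 100 mg nitrate-N per liter

def compliant(discharge_mg_l, limit_mg_l=NEW_LIMIT_MG_L):
    """True if a farm's discharge meets the (assumed) limit."""
    return discharge_mg_l <= limit_mg_l

farms = {"farm_a": 80.0, "farm_b": 140.0, "farm_c": 480.0}
over = [name for name, d in farms.items() if not compliant(d)]
print(f"new limit: {NEW_LIMIT_MG_L:.0f} mg/L; over the limit: {over}")
```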
Environment
2021
April 27, 2021
https://www.sciencedaily.com/releases/2021/04/210427085752.htm
Vertical turbines could be the future for wind farms
The now-familiar sight of traditional propeller wind turbines could be replaced in the future with wind farms containing more compact and efficient vertical turbines. New research from Oxford Brookes University has found that the vertical turbine design is far more efficient than traditional turbines in large scale wind farms, and when set in pairs the vertical turbines increase each other's performance by up to 15%.
A research team from the School of Engineering, Computing and Mathematics (ECM) at Oxford Brookes, led by Professor Iakovos Tzanakis, conducted an in-depth study using more than 11,500 hours of computer simulation to show that wind farms can perform more efficiently by substituting the traditional propeller-type Horizontal Axis Wind Turbines (HAWTs) for compact Vertical Axis Wind Turbines (VAWTs). The research demonstrates, for the first time at a realistic scale, the potential of large-scale VAWTs to outcompete current HAWT wind farm turbines.

VAWTs spin around an axis vertical to the ground, and they exhibit the opposite behaviour of the well-known propeller design (HAWTs). The research found that VAWTs increase each other's performance when arranged in grid formations. Positioning wind turbines to maximise outputs is critical to the design of wind farms.

Professor Tzanakis comments: "This study evidences that the future of wind farms should be vertical. Vertical axis wind farm turbines can be designed to be much closer together, increasing their efficiency and ultimately lowering the prices of electricity. In the long run, VAWTs can help accelerate the green transition of our energy systems, so that more clean and sustainable energy comes from renewable sources."

With the UK's wind energy capacity expected to almost double by 2030, the findings are a stepping stone towards designing more efficient wind farms, understanding large-scale wind energy harvesting techniques and ultimately improving the renewable energy technology to more quickly replace fossil fuels as sources of energy. According to the Global Wind Report 2021, the world needs to be installing wind power three times faster over the next decade in order to meet net-zero targets and avoid the worst impacts of climate change.

Lead author of the report and Bachelor of Engineering graduate Joachim Toftegaard Hansen commented: "Modern wind farms are one of the most efficient ways to generate green energy; however, they have one major flaw: as the wind approaches the front row of turbines, turbulence will be generated downstream. The turbulence is detrimental to the performance of the subsequent rows. In other words, the front row will convert about half the kinetic energy of the wind into electricity, whereas for the back row, that number is down to 25-30%. Each turbine costs more than £2 million/MW. As an engineer, it naturally occurred to me that there must be a more cost-effective way."

The study is the first to comprehensively analyse many aspects of wind turbine performance with regard to array angle, direction of rotation, turbine spacing, and number of rotors. It is also the first research to investigate whether the performance improvements hold true for three VAWT turbines set in a series.

Dr Mahak, co-author of the article and Senior Lecturer in ECM, comments: "The importance of using computational methods in understanding flow physics can't be underestimated. These types of design and enhancement studies are a fraction of the cost compared to the huge experimental test facilities.
This is particularly important at the initial design phase and is extremely useful for the industries trying to achieve maximum design efficiency and power output."
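The row-by-row losses Hansen describes can be put into a crude toy model. This is not the study's CFD: it simply assumes the front row converts ~50% of the wind's kinetic energy, wake-affected rows ~27.5% (the midpoint of the quoted 25-30%), and applies the up-to-15% pairing gain reported for VAWTs as a uniform scaling.

```python
# Crude toy model (not the study's simulations): the article says a
# front row of HAWTs converts ~50% of the wind's kinetic energy while
# wake turbulence drops later rows to roughly 25-30%, and that paired
# VAWTs boost each other's performance by up to 15%.

def farm_output(rows, front_eff=0.50, back_eff=0.275, boost=0.0):
    """Relative farm output: per-row conversion efficiency, optionally
    scaled by a uniform pairing gain (the assumed `boost`)."""
    effs = [front_eff] + [back_eff] * (rows - 1)
    return sum(e * (1.0 + boost) for e in effs)

hawt = farm_output(rows=3)              # conventional layout
vawt = farm_output(rows=3, boost=0.15)  # hypothetical 15% pairing gain
print(f"relative output: HAWT={hawt:.3f}, boosted={vawt:.3f}")
```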
Environment
2021
April 26, 2021
https://www.sciencedaily.com/releases/2021/04/210426140908.htm
Fully recyclable printed electronics developed
Engineers at Duke University have developed the world's first fully recyclable printed electronics. By demonstrating a crucial and relatively complex computer component -- the transistor -- created with three carbon-based inks, the researchers hope to inspire a new generation of recyclable electronics to help fight the growing global epidemic of electronic waste.
The work appears online April 26. "Silicon-based computer components are probably never going away, and we don't expect easily recyclable electronics like ours to replace the technology and devices that are already widely used," said Aaron Franklin, the Addy Professor of Electrical and Computer Engineering at Duke. "But we hope that by creating new, fully recyclable, easily printed electronics and showing what they can do, they might become widely used in future applications."

As people worldwide adopt more electronics into their lives, there's an ever-growing pile of discarded devices that either don't work anymore or have been cast away in favor of a newer model. According to a United Nations estimate, less than a quarter of the millions of pounds of electronics thrown away each year is recycled. And the problem is only going to get worse as the world upgrades to 5G devices and the Internet of Things (IoT) continues to expand.

Part of the problem is that electronic devices are difficult to recycle. Large plants employ hundreds of workers who hack at bulky devices. But while scraps of copper, aluminum and steel can be recycled, the silicon chips at the heart of the devices cannot.

In the new study, Franklin and his laboratory demonstrate a completely recyclable, fully functional transistor made out of three carbon-based inks that can be easily printed onto paper or other flexible, environmentally friendly surfaces. Carbon nanotube and graphene inks are used for the semiconductors and conductors, respectively. While these materials are not new to the world of printed electronics, Franklin says, the path to recyclability was opened with the development of a wood-derived insulating dielectric ink called nanocellulose.

"Nanocellulose is biodegradable and has been used in applications like packaging for years," said Franklin. "And while people have long known about its potential applications as an insulator in electronics, nobody has figured out how to use it in a printable ink before. That's one of the keys to making these fully recyclable devices functional."

The researchers developed a method for suspending crystals of nanocellulose extracted from wood fibers that -- with the sprinkling of a little table salt -- yields an ink that performs admirably as an insulator in their printed transistors. Using the three inks in an aerosol jet printer at room temperature, the team shows that their all-carbon transistors perform well enough for use in a wide variety of applications, even six months after the initial printing.

The team then demonstrates just how recyclable their design is. By submerging their devices in a series of baths, gently vibrating them with sound waves and centrifuging the resulting solution, the carbon nanotubes and graphene are sequentially recovered with an average yield of nearly 100%. Both materials can then be reused in the same printing process while losing very little of their performance viability. And because the nanocellulose is made from wood, it can simply be recycled along with the paper it was printed on.

Compared to a resistor or capacitor, a transistor is a relatively complex computer component used in devices such as power control or logic circuits and various sensors. By demonstrating a fully recyclable, multifunctional printed transistor first, Franklin hopes to make a first step toward the technology being commercially pursued for simple devices. For example, Franklin says he could imagine the technology being used in a large building needing thousands of simple environmental sensors to monitor its energy use, or in customized biosensing patches for tracking medical conditions.

"Recyclable electronics like this aren't going to go out and replace an entire half-trillion-dollar industry by any means, and we're certainly nowhere near printing recyclable computer processors," said Franklin. "But demonstrating these types of new materials and their functionality is hopefully a stepping stone in the right direction for a new type of electronics lifecycle."

This work was supported by the Department of Defense Congressionally Directed Medical Research Program (W81XWH-17-2-0045), the National Institutes of Health (1R01HL146849) and the Air Force Office of Scientific Research (FA9550-18-1-0222).
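One way to see why a near-100% recovery yield matters: material recovery compounds across reuse cycles, so even a few percent of loss per pass adds up quickly. A minimal sketch; the per-cycle yields and cycle counts below are illustrative, not figures from the paper:

```python
# Toy calculation: if each recycling pass recovers a fraction `y` of
# the nanotubes/graphene (the paper reports nearly 100% average yield),
# the material still in circulation after n reuse cycles is y**n.

def fraction_remaining(yield_per_cycle, cycles):
    """Fraction of the original material left after repeated recovery."""
    return yield_per_cycle ** cycles

for y in (0.99, 0.95):
    left = fraction_remaining(y, 10)
    print(f"per-cycle yield {y:.0%}: {left:.1%} remains after 10 cycles")
```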
Environment
2021
April 26, 2021
https://www.sciencedaily.com/releases/2021/04/210426140855.htm
Airports could generate enough solar energy to power a city
A new study has found Australia's government-owned airports could produce enough electricity to power 136,000 homes, if they had large-scale rooftop solar systems installed.
Researchers at RMIT University compared electricity generated by residential solar panels in a regional Australian city to the potential green energy production of 21 leased federal airports. They found that if large-scale solar panels were installed at the airports, they would generate 10 times more electricity than the city's 17,000 residential panels, while offsetting 151.6 kilotons of greenhouse gases annually.

Researcher Dr Chayn Sun said the analysis showed the value of focusing renewable energy efforts on large, centralised rooftop solar systems. "We can't rely on small residential solar panels to get us to a zero-emission economy, but installing large panels at locations like airports would get us a lot closer," she said. "We hope our results will help guide energy policy, while informing future research in solar deployment for large buildings. There's so much potential to facilitate national economic development while contributing towards greenhouse gas emission reduction targets."

Sun, a geospatial scientist in RMIT's School of Science, said airports were ideal for solar panels but were not currently being used to their full potential -- many Australian airports are without adequate solar systems. "Airports get good sun exposure because they're not shaded by tall buildings or trees, making them a perfect spot to harness the sun's energy," she said. "Australia is facing an energy crisis, yet our solar energy resources -- such as airport rooftops -- are being wasted. Harnessing this power source would avoid 63 kilotons of coal being burned in Australia each year, an important step towards a zero-carbon future."

For the study, lead author Athenee Teofilo, a Master of Geospatial Science student, mapped the buildings in every leased federal airport -- excluding unsuitable structures like dome and blister-type hangars -- and identified 2.61 km² of usable rooftop space. The researchers then determined the optimum tilt angle for the solar arrays at each airport to maximise efficiency.

Perth Airport had the most energy-generating potential; placing solar panels there could produce almost twice the solar output of Bendigo, equal to the combined production from Adelaide, Sydney, Moorabbin and Townsville airports. Even Melbourne Airport alone would outperform Bendigo's annual solar electricity production by almost 12 gigawatt hours a year. Airport buildings less suited to solar panels could still be useful for ground-mounted solar systems, the study found.

Sun said the research underlined the necessity for energy policies to include a plan for installing solar panels at airports. "Based on our solar radiation analysis, we know airports with decent solar systems could not only be self-sufficient but would generate enough electricity to send the excess back into the grid," she said. "We mapped airports owned by the federal government, but Australia has more than 150 privately-owned airfields, which could also have panels installed. Australia receives so much solar radiation, so every airport in the country would benefit from having the right type of solar panels installed. The same could be said for many airports and large buildings located around the world."

Sun said reflections from the panels would not be a problem, as modern solar arrays absorb rather than reflect sunlight. Previous studies have deemed airports great solar generators, but the RMIT research goes further by precisely modelling the use of large-scale systems. The findings could also be extended to assess the solar potential of other sites, such as large commercial buildings, warehouses or distribution centres.
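As a sanity check on the scale involved, annual output from rooftop area can be estimated from first principles: area × annual insolation × module efficiency × performance ratio. Only the 2.61 km² figure comes from the study; the insolation, efficiency and performance-ratio values below are assumed round numbers for illustration.

```python
# Order-of-magnitude estimate of annual rooftop solar generation.
# Only the 2.61 km² of usable roof area is from the study; the other
# inputs are assumed illustrative values, not the study's model.

AREA_M2 = 2.61e6               # usable rooftop area (2.61 km²)
INSOLATION_KWH_M2_YR = 1800.0  # assumed annual insolation, kWh/m²/yr
PANEL_EFF = 0.20               # assumed module efficiency
PERFORMANCE_RATIO = 0.80       # assumed system losses (inverter, heat)

annual_kwh = AREA_M2 * INSOLATION_KWH_M2_YR * PANEL_EFF * PERFORMANCE_RATIO
print(f"~{annual_kwh / 1e6:.0f} GWh/year from {AREA_M2 / 1e6:.2f} km² of roof")
```

Under these assumptions the total lands in the hundreds of gigawatt-hours per year, which is the same order of magnitude as powering the 136,000 homes cited in the summary.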
Environment
2,021
April 26, 2021
https://www.sciencedaily.com/releases/2021/04/210426140717.htm
Hydrocracking our way to recycling plastic waste
Millions of tons of plastic end up in landfills every year. It's a big societal problem and an even larger environmental threat.
In the United States, less than 9% of plastic waste is recycled. Instead, more than 75% of plastic waste ends up in landfills and up to 16% is burned, a process that releases toxic gases into the atmosphere.

Researchers from the University of Delaware's Center for Plastics Innovation (CPI) have developed a direct method to convert single-use plastic waste -- plastic bags, yogurt containers, plastic bottles and bottle caps, packaging and more -- to ready-to-use molecules for jet fuels, diesel and lubricants. The work is reported in a newly published paper.

The UD-developed process requires approximately 50% less energy than other technologies, and it doesn't involve adding carbon dioxide to the atmosphere, an emissions savings over other commonly used techniques. It can be done in just a couple of hours at low temperature, around 250 degrees Celsius. This is slightly higher than the 450-degree Fahrenheit oven temperature you might use to roast vegetables or bake a puff pastry at home. Importantly, the UD team's method can treat a variety of plastics, even when they are mixed together, a plus considering the way recyclables are managed.

"Chemical conversion is the most versatile and robust approach to combat plastics waste," said Dion Vlachos, the project principal investigator and the Unidel Dan Rich Chair in Energy Professor of Chemical and Biomolecular Engineering at UD. Co-authors on the paper include Sibao Liu, a former UD postdoctoral researcher, now an associate professor of chemical engineering and technology at Tianjin University; and CPI researchers Pavel Kots, a UD postdoctoral fellow; Brandon Vance, a UD graduate student; and Andrew Danielson, a senior majoring in chemical engineering.

The UD research team used a chemical process called hydrocracking to break down the plastic solids into smaller carbon molecules, then added hydrogen molecules on either end to stabilize the material for use. Catalytic cracking is not new: refineries have used it to convert heavy crude oil into gasoline for years. The research team's method, however, does more than just break the plastic down. It also converts the material into branched molecules that allow them to be more directly translated into an end product. "This makes them ready-to-use molecules for high-value lubricant or fuel applications," said Vlachos, who also directs the Delaware Energy Institute and the Catalysis Center for Energy Innovation at UD.

The catalyst itself is actually a hybrid material, a combination of zeolites and mixed metal oxides. Zeolites are known to have properties that make them good at creating branched molecules. Zeolites are found in things like water purification or softener systems and home detergents, where they counteract minerals like calcium and magnesium, making hard water softer and improving the laundry process. Mixed metal oxides, meanwhile, are known for their ability to break down large molecules just the right amount without overdoing it. The antacid in your medicine cabinet, for example, is a metal oxide used to break down, or neutralize, the acid causing your upset stomach.

"Alone these two catalysts do poorly. Together, the combination does magic, melting the plastics down and leaving no plastic behind," Vlachos said. This gives the CPI-developed method an advantage over current techniques used today, although Vlachos stressed that more work is needed to translate these scientific methods to industry. Another plus: the team's catalyst materials are commonly used and, therefore, fairly inexpensive and abundant. "These are not exotic materials, so we can quickly start thinking about how to use the technology," he said. He and Liu have filed a provisional patent on the novel bi-catalyst and unique method through UD's Office of Economic Innovation and Partnerships.

Reducing plastic waste by chemically converting it to fuels can play a powerful role in driving a circular economy, where materials are recycled into something new at the end of their useful lifespan, instead of being thrown away. The recycled components can be used to make the same thing again or, in the case of fuels, upcycled into higher-value products -- creating both economic and environmental gains.

"This innovative catalytic approach is a significant advance in our quest for depolymerization processes that involve less energy-intensive pathways and generate highly specific breakdown targets," said CPI Director LaShanda Korley, Distinguished Professor of Materials Science and Engineering and Chemical and Biomolecular Engineering. "This fundamental understanding opens up a new route toward plastics waste valorization."

For Andrew Danielson, a UD senior chemical engineering major involved in the project, the potential environmental benefits of plastic conversion are exciting. "Plastic waste is a serious environmental issue. I believe that this research can help lead to better methods of plastic repurposing," said Danielson, whose contributions to the work included verifying the data collected during the project by reproducing the experiments. Following graduation in May, Danielson will put this research experience to work in the chemical industry. He's already landed a job in process controls, a part of the manufacturing process that involves controlling variables such as temperature, pressure and conductivity, among other things.

Next steps in the CPI research include exploring what other plastics the team's method can treat and what products it can make. To begin, Vlachos said, the team hopes to expand collaborations with colleagues across campus and in the Center for Plastics Innovation to explore other avenues for making valuable products by eliminating waste. "As this circular economy gets going, the world will need to make fewer original plastics because we will be reusing materials made today into the future," he said. Another goal is to develop methods to improve the recycling process itself. "We want to use green electricity to drive the chemical processing involved in making new things. We are very far away at the moment from seeing this, but that's where we are headed over the next 10 to 20 years," Vlachos said.
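The temperature comparison above is a straight unit conversion; a quick check using the standard Celsius-to-Fahrenheit formula (nothing here is from the paper itself):

```python
# Quick unit check on the temperatures quoted above: the process runs
# at about 250 °C, which the article compares with a 450 °F home oven.

def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

print(f"250 °C = {c_to_f(250.0):.0f} °F (vs. a 450 °F home oven)")
```

So 250 °C is 482 °F, consistent with the article's "slightly higher than a 450 °F oven" framing.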
Environment
2021
April 26, 2021
https://www.sciencedaily.com/releases/2021/04/210426085909.htm
Stable coral cell lines cultured
Researchers in Japan have established sustainable cell lines in a coral, according to a study published today.
Seven out of eight cell cultures seeded from a stony coral continued to grow. "Establishing stable cell lines for marine organisms, especially coral, has proven very difficult in the past," said Professor Satoh, senior author of the study and head of the Marine Genomics Unit at the Okinawa Institute of Science and Technology Graduate University (OIST). "This success could prove to be a pivotal moment for gaining a deeper understanding of the biology of these vitally important animals."

In the study, Professor Satoh worked closely with Professor Kaz Kawamura from Kochi University -- an expert in developing and maintaining cell cultures of marine organisms. Since adult coral host a wide variety of microscopic marine organisms, the group chose to try creating the cell lines from coral larvae to reduce the chances of cross-contamination. Another benefit of using larval cells was that they divide more easily than adult cells, potentially making them easier to culture.

The researchers used coral specimens in the lab to isolate both eggs and sperm and fertilize the eggs. Once the coral larvae developed, they separated the larvae into individual cells and grew them in petri dishes. Initially, the culture attempts ended in failure. "Small bubble bodies appeared and then occupied most of the petri dish," said Professor Kaz Kawamura. "We later found that these were the fragments of dying stony coral cells."

In the second year, the group discovered that by adding a protease called plasmin to the cell culture medium, right at the beginning of the culture, they could stop the stony coral cells from dying and keep them growing. Two to three weeks later, the larval cells developed into eight different cell types, which varied in color, form and gene activity. Seven out of the eight continued to divide indefinitely to form new coral cells.

One of the most exciting advancements of this study was that some of the cell lines were similar in form and gene activity to endodermal cells. The endoderm is the inner layer of cells formed about a day after the coral eggs are fertilized. Importantly, it is the cells in the endoderm that incorporate the symbiotic algae, which photosynthesize and provide nutrients to sustain the coral.

"At this point in time, the most urgent need in coral biology is to understand the interaction between the coral animal and its photosynthetic symbiont at the cellular level, and how this relationship collapses under stress, leading to coral bleaching and death," said Professor David Miller, a leading coral biologist from James Cook University, Australia, who was not involved in the study. He continued: "Subject to confirmation that these cells in culture represent coral endoderm, detailed molecular analyses of the coral/photosymbiont interaction would then be possible -- and from this, real advances in understanding and perhaps preventing coral bleaching could be expected to flow."

For Professor Satoh, his interest is in how the photosymbiotic algae cells, which are almost as big as the larval cells, initially enter the coral. "The algae are incorporated into the coral cells around a week after the larvae first develop," said Prof. Satoh. "But no one has yet observed this endosymbiotic event on a single-cell level before."

The scientists also found that the coral cell lines were still viable after being frozen with liquid nitrogen and then thawed. "This is crucial for being able to successfully supply the coral cell lines to research laboratories across the globe," said Professor Satoh.

The implications for future research using these cell lines are far-reaching, ranging from research on how single coral cells respond to pollution or higher temperatures, to studying how corals produce the calcium carbonate that builds their skeleton. Research could also provide further insight into how corals develop, which could improve our ability to farm coral.

In future research, the team hopes to establish cell lines that are clonal, meaning every cell in the culture is genetically identical. "This will give us a much clearer idea of exactly which coral cell types we are growing, for example gut-like cells or nerve-like cells, by looking at which genes are switched on and off in the cells," said Professor Satoh.
Environment
2021
April 23, 2021
https://www.sciencedaily.com/releases/2021/04/210423210744.htm
From toxic ions to single-atom copper
Copper remains one of the most ubiquitous metals in everyday life. As a conductor of heat and electricity, it is utilized in wires, roofing and plumbing, as well as a catalyst for petrochemical plants, solar and electrical conductors, and a wide range of energy-related applications. Consequently, any method to harvest more of the valuable commodity proves a useful endeavor.
Debora Rodrigues, Ezekiel Cullen Professor of Engineering at the University of Houston Cullen College of Engineering, in collaboration with Francisco C. Robles Hernandez, professor at the UH College of Technology, and Ellen Aquino Perpetuo, professor at the University of Sao Paulo, Brazil, offered conclusive research for understanding how bacteria found in copper mines convert toxic copper ions to stable single-atom copper.

In their co-authored paper, "Copper Mining Bacteria: Converting toxic copper ions into a stable single atom copper," the researchers demonstrate how a copper-resistant bacterium from a copper mine in Brazil converts copper sulfate ions into zero-valent metallic copper.

"The idea of having bacteria in mines is not new, but the unanswered question was: what are they doing in the mines?" Robles said. "By putting the bacteria inside an electronic microscope, we were able to figure out the physics and analyze it. We found out the bacteria were isolating single-atom copper. In terms of chemistry, this is extremely difficult to derive. Typically, harsh chemicals are used in order to produce single atoms of any element. This bacterium is creating it naturally, which is very impressive."

As useful as copper is, the process of mining the metal often leads to toxic exposures and challenges in drawing out substantial volume for commercial use. Approximately one billion tons of copper are estimated in global reserves, according to the Copper Development Association Inc., with roughly 12.5 million metric tons per year mined. This aggregates to roughly 65 years of remaining reserves.

Part of the supply challenge comes from the limited availability of copper in high concentration in the earth's crust, but the other challenge is the exposure to sulfur dioxide and nitrogen dioxide in the copper smelting and production process used to concentrate the metal into useful quantities.

"The novelty of this discovery is that microbes in the environment can easily transform copper sulfate into zero-valent single-atom copper. This is a breakthrough because the current synthetic process of single-atom zero-valent copper is typically not clean, it is labor intensive and expensive," Rodrigues said. "The microbes utilize a unique biological pathway with an array of proteins that can extract copper and convert it into single-atom zero-valent copper. The aim of the microbes is to create a less toxic environment for themselves by converting the ionic copper into single-atom copper, but at the same time they make something that is beneficial for us too."

With a focus in electron microscopy, Robles examined samples from Rodrigues' findings in Brazilian copper mines and determined the single-atom nature of the copper. Rodrigues' and Aquino's groups further identified the bacterial process for converting copper sulfate to elemental copper -- a rare find.

The research results demonstrate that this new conversion process is a safer and more efficient alternative for producing single atoms of metallic copper compared with current methods (i.e., chemical vapor deposition, sputtering and femtosecond laser ablation).

"We have only worked with one bacterium, but that may not be the only one out there that performs a similar function," Rodrigues concluded. "The next step for this particular research is harvesting the copper from these cells and using it for practical applications."
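The "years of remaining reserves" figure above is a reserves-to-production ratio. Using the article's round numbers, the static ratio actually comes out nearer 80 years; the quoted ~65 years presumably builds in assumptions (such as growing demand or tighter reserve estimates) beyond this simple division.

```python
# Reserves-to-production ratio using the article's round figures.
# A static division of the quoted numbers gives ~80 years; the
# article's ~65-year figure likely reflects further assumptions.

RESERVES_T = 1.0e9            # estimated global reserves, tons
PRODUCTION_T_PER_YR = 12.5e6  # tons mined per year

years = RESERVES_T / PRODUCTION_T_PER_YR
print(f"static reserves-to-production ratio: {years:.0f} years")
```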
Environment
2021
April 23, 2021
https://www.sciencedaily.com/releases/2021/04/210423141311.htm
Flexible diet may help leaf-eating lemurs survive deforestation
Fruits and veggies are good for you and if you are a lemur, they may even help mitigate the effects of habitat loss.
A new study sequencing the genome of four species of sifakas, a genus of lemurs found only in Madagascar's forests, reveals that these animals' taste for leaves runs all the way to their genes, which are also more diverse than expected for an endangered species.Sifakas are folivores, meaning that the bulk of their diet is composed of leaves. Leaves can be difficult to digest and full of toxic compounds meant to prevent them from being eaten. Unlike our carefully selected spinach, tree leaves also don't taste great, and are not very nutritious.Because of that, leaf-eaters typically have all sorts of adaptations, such as a longer digestive tract with special pouches where bacteria help break down the food.In a new study appearing April 23 in These four species are found in different habitats in Madagascar, ranging from arid deciduous forests to rainforests, but share a similar diet.The genomes showed molecular evidence for adaptations to neutralize and eliminate leaves' toxic compounds, optimize the absorption of nutrients, and detect bitter tastes. Their genome shows patterns of molecular evolution similar to those found in other distantly related herbivores, such as the colobus monkeys from Central Africa, and domestic cattle.Yet despite being such fine-tuned leaf-eating machines, sifakas can eat more than just leaves. 
They eat lots of fruits when those are in season and will also happily munch on flowers. "Sifakas can take advantage of foods that are higher energy and more nutrient dense, and can fall back and subsist on leaves in times of scarcity," said Elaine Guevara, assistant research professor of Evolutionary Anthropology at Duke University and lead author of the study. This dietary flexibility may have given them an advantage over their strictly leaves-only or fruit-only cousins in the face of threats such as forest fragmentation and disturbance. Indeed, the analysis also showed that sifakas are genetically more diverse than would be expected for a critically endangered species on an island of shrinking habitats. "These animals do seem to have very healthy levels of genetic diversity, which is very surprising," said Guevara. Guevara and her team gauged genome heterozygosity, which is a measure of genetic diversity and an indicator of population size. Species at high risk for extinction tend to have only small populations left, and very low heterozygosity. Sifakas do not follow this pattern and show far higher heterozygosity than other primates or other species of critically endangered mammals. Heterozygous populations tend to be more resilient to threats such as climate change, habitat loss, and new pathogens. However, sifakas have very long generation times, averaging 17 years, so the loss of genetic diversity may take decades to become obvious. Guevara says that the genetic diversity found in this study may actually reflect how healthy populations were 50 years ago, prior to a drastic increase in deforestation rates in Madagascar. "Sifakas are still critically endangered, their population numbers are decreasing, and habitat loss is accelerating drastically," said Guevara. There is still room for optimism.
By not being picky eaters, sifakas may be less sensitive to deforestation and habitat fragmentation than primates with more restricted diets, allowing them to survive in areas with less-than-pristine forests."I've seen sifakas at the Lemur Center eat dead pine needles," said Guevara. "Their diet is really flexible."Their greater genetic diversity may therefore mean that there is still hope for sifakas, if their habitats receive and maintain protection and strategic management."Sifakas still have a good chance if we act. Our results are all the more reason to do everything we can to help them," said Guevara.This work was funded by the Center for the Advanced Study of Human Paleobiology at The George Washington University, Duke University, and the Wenner-Gren Foundation. Genome sequencing and assembly were funded by National Human Genome Research Institute grant U54 HG003273 to Richard Gibbs (HGSC, Baylor College of Medicine).
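The heterozygosity measure Guevara's team gauged can be illustrated with a minimal sketch. The genotype data and the function below are hypothetical placeholders for illustration, not material from the study, which worked with whole-genome sequence data:

```python
# Illustrative only: observed heterozygosity is the fraction of genotyped
# sites at which an individual carries two different alleles. Higher values
# indicate more genetic diversity, which tends to track larger populations.

def observed_heterozygosity(genotypes):
    """Fraction of sites where the two alleles differ (heterozygous)."""
    if not genotypes:
        raise ValueError("no genotyped sites")
    het = sum(1 for a1, a2 in genotypes if a1 != a2)
    return het / len(genotypes)

# Hypothetical genotype calls at six sites for one individual
sifaka_sites = [("A", "G"), ("C", "C"), ("T", "A"),
                ("G", "G"), ("A", "A"), ("C", "T")]
print(observed_heterozygosity(sifaka_sites))  # 3 heterozygous of 6 sites -> 0.5
```

In practice such estimates are computed genome-wide and compared across species; the surprising finding here was that sifakas score far higher than expected for a critically endangered mammal.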
Environment
2021
April 23, 2021
https://www.sciencedaily.com/releases/2021/04/210423130055.htm
Seismicity on Mars full of surprises, in first continuous year of data
The SEIS seismometer package from the Mars InSight lander has collected its first continuous Martian year of data, revealing some surprises among the more than 500 marsquakes detected so far.
At the Seismological Society of America (SSA)'s 2021 Annual Meeting, Savas Ceylan of ETH Zürich discussed some of the findings from The Marsquake Service, the part of the InSight ground team that detects marsquakes and curates the planet's seismicity catalog.Marsquakes differ from earthquakes in a number of ways, Ceylan explained. To begin with, they are much smaller than earthquakes, with the largest event recorded at teleseismic distances around magnitude 3.6. SEIS is able to detect these small events because the background seismic noise on Mars can be much lower than on Earth, without the constant tremor produced by ocean waves."For much of a Martian year, from around sunset until early hours, the Martian atmosphere becomes very quiet, so there is no local noise either," he said. "Additionally, our sensors are optimized and shielded for operating under severe Martian conditions, such as extremely low temperatures and the extreme diurnal temperature fluctuations on the red planet."Marsquakes also come in two distinct varieties: low-frequency events with seismic waves propagating at various depths in the planet's mantle, and high-frequency events with waves that appear to propagate through the crust. "In terms of how the seismic energy decays over time, the low-frequency events appear to be more like earthquakes" in which the shaking dies away relatively quickly, Ceylan said, "while the high-frequency events are resembling moonquakes" in persisting for longer periods.The vast majority of the events are high-frequency and occur at hundreds of kilometers of distance from the lander. "It is not quite clear to us how these events could be confined to only high frequency energy while they occur at such large distances," he said. 
"On top of that, the frequency of those events seems to vary over the Martian year, which is a pattern that we do not know at all from Earth."Only a handful of marsquakes have clear seismic phase arrivals -- the order in which the different types of seismic waves arrive at a location -- which allows researchers to calculate the direction and distance the waves come from. All these marsquakes originate from a sunken area of the surface called Cerberus Fossae, about 1800 kilometers away from the InSight Lander.Cerberus Fossae is one of the youngest geological structures on Mars, and may have formed from extensional faulting or subsidence due to dike emplacement. Recent studies suggest extension mechanism may be the source of the Cerberus Fossae quakes, Ceylan noted, "however, we have a long way in front of us to be able to explain the main tectonic mechanisms behind these quakes."The biggest challenge for The Marsquake Service and InSight science team has been "adapting to unexpected signals in the data from a new planet," Ceylan said.Although there were significant efforts to shield SEIS from non-seismic noise by covering it and placing it directly on the Martian surface, its data are still contaminated by weather and lander noise."We needed to understand the noise on Mars from scratch, discover how our seismometers behave, how the atmosphere of Mars affects seismic recordings, and find alternative methods to interpret the data properly," said Ceylan.It took the Service a while to be "confident in identifying the different event types," he added, "discriminating these weak signals from the rich and varied background noise, and being able to characterize these novel signals in a systematic manner to provide a self-consistent catalog."The InSight seismicity catalog and data are released to the public via IPG Paris, IRIS, and PDS on a three month schedule, with three month data delay.
Environment
2021
April 22, 2021
https://www.sciencedaily.com/releases/2021/04/210422181902.htm
Ancient Indigenous forest gardens promote a healthy ecosystem
A new study by Simon Fraser University historical ecologists finds that Indigenous-managed forests -- cared for as "forest gardens" -- contain more biologically and functionally diverse species than surrounding conifer-dominated forests and create important habitat for animals and pollinators. The findings are published today in
According to researchers, ancient forests were once tended by Ts'msyen and Coast Salish peoples living along the north and south Pacific coast. These forest gardens continue to grow at remote archaeological villages on Canada's northwest coast and are composed of native fruit and nut trees and shrubs such as crabapple, hazelnut, cranberry, wild plum, and wild cherries. Important medicinal plants and root foods like wild ginger and wild rice root grow in the understory layers."These plants never grow together in the wild," says Chelsey Geralda Armstrong, an SFU Indigenous Studies assistant professor and the study lead researcher. "It seemed obvious that people put them there to grow all in one spot -- like a garden. Elders and knowledge holders talk about perennial management all the time.""It's no surprise these forest gardens continue to grow at archaeological village sites that haven't yet been too severely disrupted by settler-colonial land-use."Ts'msyen and Coast Salish peoples' management practices challenge the assumption that humans tend to overturn or exhaust the ecosystems they inhabit. This research highlights how Indigenous peoples not only improved the inhabited landscape, but were also keystone builders, facilitating the creation of habitat in some cases. The findings provide strong evidence that Indigenous management practices are tied to ecosystem health and resilience."Human activities are often considered detrimental to biodiversity, and indeed, industrial land management has had devastating consequences for biodiversity," says Jesse Miller, study co-author, ecologist and lecturer at Stanford University. "Our research, however, shows that human activities can also have substantial benefits for biodiversity and ecosystem function. 
Our findings highlight that there continues to be an important role for human activities in restoring and managing ecosystems in the present and future."Forest gardens are a common management regime identified in Indigenous communities around the world, especially in tropical regions. Armstrong says the study is the first time forest gardens have been studied in North America -- showing how important Indigenous peoples are in the maintenance and defence of some of the most functionally diverse ecosystems on the Northwest Coast."The forest gardens of Kitselas Canyon are a testament to the long-standing practice of Kitselas people shaping the landscape through stewardship and management," says Chris Apps, director, Kitselas Lands & Resources Department. "Studies such as this reconnect the community with historic resources and support integration of traditional approaches with contemporary land-use management while promoting exciting initiatives for food sovereignty and cultural reflection."
Environment
2021
April 22, 2021
https://www.sciencedaily.com/releases/2021/04/210422150435.htm
Genetic effects of Chernobyl radiation
In two landmark studies, researchers have used cutting-edge genomic tools to investigate the potential health effects of exposure to ionizing radiation, a known carcinogen, from the 1986 accident at the Chernobyl nuclear power plant in northern Ukraine. One study found no evidence that radiation exposure to parents resulted in new genetic changes being passed from parent to child. The second study documented the genetic changes in the tumors of people who developed thyroid cancer after being exposed as children or fetuses to the radiation released by the accident.
The findings, published around the 35th anniversary of the disaster, are from international teams of investigators led by researchers at the National Cancer Institute (NCI), part of the National Institutes of Health. The studies were published online in "Scientific questions about the effects of radiation on human health have been investigated since the atomic bombings of Hiroshima and Nagasaki and have been raised again by Chernobyl and by the nuclear accident that followed the tsunami in Fukushima, Japan," said Stephen J. Chanock, M.D., director of NCI's Division of Cancer Epidemiology and Genetics (DCEG). "In recent years, advances in DNA sequencing technology have enabled us to begin to address some of the important questions, in part through comprehensive genomic analyses carried out in well-designed epidemiological studies."The Chernobyl accident exposed millions of people in the surrounding region to radioactive contaminants. Studies have provided much of today's knowledge about cancers caused by radiation exposures from nuclear power plant accidents. The new research builds on this foundation using next-generation DNA sequencing and other genomic characterization tools to analyze biospecimens from people in Ukraine who were affected by the disaster.The first study investigated the long-standing question of whether radiation exposure results in genetic changes that can be passed from parent to offspring, as has been suggested by some studies in animals. To answer this question, Dr. Chanock and his colleagues analyzed the complete genomes of 130 people born between 1987 and 2002 and their 105 mother-father pairs.One or both of the parents had been workers who helped clean up from the accident or had been evacuated because they lived in close proximity to the accident site. 
Each parent was evaluated for protracted exposure to ionizing radiation, which may have occurred through the consumption of contaminated milk (that is, milk from cows that grazed on pastures that had been contaminated by radioactive fallout). The mothers and fathers experienced a range of radiation doses.The researchers analyzed the genomes of adult children for an increase in a particular type of inherited genetic change known as de novo mutations. De novo mutations are genetic changes that arise randomly in a person's gametes (sperm and eggs) and can be transmitted to their offspring but are not observed in the parents.For the range of radiation exposures experienced by the parents in the study, there was no evidence from the whole-genome sequencing data of an increase in the number or types of de novo mutations in their children born between 46 weeks and 15 years after the accident. The number of de novo mutations observed in these children were highly similar to those of the general population with comparable characteristics. As a result, the findings suggest that the ionizing radiation exposure from the accident had a minimal, if any, impact on the health of the subsequent generation."We view these results as very reassuring for people who were living in Fukushima at the time of the accident in 2011," said Dr. Chanock. "The radiation doses in Japan are known to have been lower than those recorded at Chernobyl."In the second study, researchers used next-generation sequencing to profile the genetic changes in thyroid cancers that developed in 359 people exposed as children or in utero to ionizing radiation from radioactive iodine (I-131) released by the Chernobyl nuclear accident and in 81 unexposed individuals born more than nine months after the accident. 
Increased risk of thyroid cancer has been one of the most important adverse health effects observed after the accident.The energy from ionizing radiation breaks the chemical bonds in DNA, resulting in a number of different types of damage. The new study highlights the importance of a particular kind of DNA damage that involves breaks in both DNA strands in the thyroid tumors. The association between DNA double-strand breaks and radiation exposure was stronger for children exposed at younger ages.Next, the researchers identified the candidate "drivers" of the cancer in each tumor -- the key genes in which alterations enabled the cancers to grow and survive. They identified the drivers in more than 95% of the tumors. Nearly all the alterations involved genes in the same signaling pathway, called the mitogen-activated protein kinase (MAPK) pathway, including the genes BRAF, RAS, and RET.The set of affected genes is similar to what has been reported in previous studies of thyroid cancer. However, the researchers observed a shift in the distribution of the types of mutations in the genes. Specifically, in the Chernobyl study, thyroid cancers that occurred in people exposed to higher radiation doses as children were more likely to result from gene fusions (when both strands of DNA are broken and then the wrong pieces are joined back together), whereas those in unexposed people or those exposed to low levels of radiation were more likely to result from point mutations (single base-pair changes in a key part of a gene).The results suggest that DNA double-strand breaks may be an early genetic change following exposure to radiation in the environment that subsequently enables the growth of thyroid cancers. 
Their findings provide a foundation for further studies of radiation-induced cancers, particularly those that involve differences in risk as a function of both dose and age, the researchers added."An exciting aspect of this research was the opportunity to link the genomic characteristics of the tumor with information about the radiation dose -- the risk factor that potentially caused the cancer," said Lindsay M. Morton, Ph.D., deputy chief of the Radiation Epidemiology Branch in DCEG, who led the study."The Cancer Genome Atlas set the standard for how to comprehensively profile tumor characteristics," Dr. Morton continued. "We extended that approach to complete the first large genomic landscape study in which the potential carcinogenic exposure was well-characterized, enabling us to investigate the relationship between specific tumor characteristics and radiation dose."She noted that the study was made possible by the creation of the Chernobyl Tissue Bank about two decades ago -- long before the technology had been developed to conduct the kind of genomic and molecular studies that are common today."These studies represent the first time our group has done molecular studies using the biospecimens that were collected by our colleagues in Ukraine," Dr. Morton said. "The tissue bank was set up by visionary scientists to collect tumor samples from residents in highly contaminated regions who developed thyroid cancer. These scientists recognized that there would be substantial advances in technology in the future, and the research community is now benefiting from their foresight."
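The first study's core comparison, whether children of exposed parents carry more de novo mutations than a comparable general population, can be sketched as a simple rate comparison. All counts and the baseline below are invented for illustration; the paper's actual analysis used whole-genome sequencing of trios and far more careful statistics:

```python
# Illustrative sketch: compare de novo mutation (DNM) counts in children of
# exposed parents against a general-population expectation. People typically
# carry on the order of tens of DNMs; the numbers below are hypothetical.
from statistics import mean

exposed_children_dnms = [62, 58, 71, 65, 60, 68]  # hypothetical per-child counts
population_mean_dnms = 64.0                       # hypothetical baseline

observed_mean = mean(exposed_children_dnms)
excess = observed_mean - population_mean_dnms
print(f"mean DNMs in exposed group: {observed_mean:.1f}")
print(f"excess over baseline: {excess:+.1f}")
# A negligible excess is the pattern consistent with the study's finding of
# no detectable transgenerational effect at these radiation doses.
```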
Environment
2021
April 22, 2021
https://www.sciencedaily.com/releases/2021/04/210422150425.htm
Scientists uncover structure of light-driven enzyme with potential biofuel applications
Although many organisms capture and respond to sunlight, enzymes -- proteins that catalyze biochemical reactions -- are rarely driven by light. Scientists have identified only three types of natural photoenzymes so far. The newest one, discovered in 2017, is fatty acid photodecarboxylase (FAP). Derived from microscopic algae, it uses blue light to catalyze the conversion of fatty acids, found in fats and oils, into alkanes and alkenes.
"A growing number of labs envision using FAPs for green chemistry applications, because alkanes and alkenes are important components of solvents and fuels, including gasoline and jet fuels. And the transformation of fatty acids into alkanes or alkenes happens in a single step within the enzyme," says Martin Weik, the leader of a research group at the Institute of Biologie Structurale at the Universite Grenoble Alpes.Weik is a primary investigator of a new study that has captured the complex sequence of structural changes FAP undergoes in response to light, called a photocycle, which drives this fatty acid transformation. Although researchers previously proposed a FAP photocycle, the fundamental mechanism was not understood. The scientists didn't know how long it took a fatty acid to lose its carboxylate, the chemical group attached to the end of its long chain of hydrocarbons, a critical step in forming alkenes or alkanes.In collaboration with SLAC scientists, experiments at the Linac Coherent Light Source (LCLS) at the Department of Energy's SLAC National Accelerator Laboratory helped answer many of these outstanding questions. The researchers describe their results in To understand a light-sensitive enzyme like FAP, scientists use many different techniques to study processes that take place over a broad range of time scales -- because photon absorption happens in femtoseconds, or millionths of a billionth of a second, while biological responses on the molecular level often happen in thousandths of a second."Our international, interdisciplinary consortium, led by Frederic Beisson at the Universite Aix-Marseille, used a wealth of techniques, including spectroscopy, crystallography and computational approaches," Weik says. 
"It's the sum of these different results that enabled us to get a first glimpse of how this unique enzyme works as a function of time and in space."The consortium first studied the complex steps of the catalytic process at their home labs using optical spectroscopy methods, which investigate the electronic and geometric structure of atoms in the samples, including chemical bonding and charge. Spectroscopic experiments identified the enzyme's intermediate states accompanying each step, measured their lifetimes and provided information on their chemical nature. These results motivated the need for the ultrafast capabilities of the LCLS.Next, a structural view of the catalytic process was provided by serial femtosecond crystallography (SFX) with the LCLS X-ray free-electron laser (XFEL). During these experiments, a jet of tiny FAP microcrystals was hit with optical laser pulses to kick off the catalytic reaction, followed by extremely short, ultrabright X-ray pulses to measure the resulting changes in the enzyme's structure.By integrating thousands of these measurements -- acquired using various time delays between the optical and X-ray pulses -- the researchers were able to follow structural changes in the enzyme over time. They also determined the structure of the enzyme's resting state by probing without the optical laser.Surprisingly, the researchers found that in the resting state, the enzyme's light-sensing part, called the FAD cofactor, has a bent shape. "This cofactor acts like an antenna to capture photons. It absorbs blue light and initiates the catalytic process," Weik says. 
"We thought the starting point of the FAD cofactor was planar, so this bent configuration was unexpected."The bent shape of the FAD cofactor was actually first discovered by X-ray crystallography at the European Synchrotron Radiation Facility, but the scientists suspected this bend was an artifact of radiation damage, a common problem for crystallographic data collected at synchrotron light sources. Only SFX experiments could confirm this unusual configuration because of their unique ability to capture structural information before damaging the sample, Weik says."These experiments were complemented by computations," he adds, "Without the high-level quantum calculations performed by Tatiana Domratcheva of Moscow State University, we wouldn't have understood our experimental results."Despite the improved understanding of FAP's photocycle, unanswered questions remain. For example, researchers know carbon dioxide is formed during a certain step of the catalytic process at a specific time and location, but they don't know its state as it leaves the enzyme."In future XFEL work, we want to identify the nature of the products and to take pictures of the process with a much smaller step size so as to resolve the process in much finer detail," says Weik. "This is important for fundamental research, but it can also help scientists modify the enzyme to do a task for a specific application."
Environment
2021
April 22, 2021
https://www.sciencedaily.com/releases/2021/04/210422150413.htm
Mars has right ingredients for present-day microbial life beneath its surface, study finds
As NASA's Perseverance rover begins its search for ancient life on the surface of Mars, a new study suggests that the Martian subsurface might be a good place to look for possible present-day life on the Red Planet.
The study, published in the journal "The big implication here for subsurface exploration science is that wherever you have groundwater on Mars, there's a good chance that you have enough chemical energy to support subsurface microbial life," said Jesse Tarnas, a postdoctoral researcher at NASA's Jet Propulsion Laboratory who led the study while completing his Ph.D. at Brown University. "We don't know whether life ever got started beneath the surface of Mars, but if it did, we think there would be ample energy there to sustain it right up to today."In recent decades, scientists have discovered that Earth's depths are home to a vast biome that exists largely separated from the world above. Lacking sunlight, these creatures survive using the byproducts of chemical reactions produced when rocks come into contact with water.One of those reactions is radiolysis, which occurs when radioactive elements within rocks react with water trapped in pore and fracture space. The reaction breaks water molecules into their constituent elements, hydrogen and oxygen. The liberated hydrogen is dissolved in the remaining groundwater, while minerals like pyrite (fool's gold) soak up free oxygen to form sulfate minerals. Microbes can ingest the dissolved hydrogen as fuel and use the oxygen preserved in the sulfates to "burn" that fuel.In places like Canada's Kidd Creek Mine, these "sulfate-reducing" microbes have been found living more than a mile underground, in water that hasn't seen the light of day in more than a billion years. Tarnas has been working with a team co-led by Brown University professor Jack Mustard and Professor Barbara Sherwood Lollar of the University of Toronto to better understand these underground systems, with an eye toward looking for similar habitats on Mars and elsewhere in the solar system. 
The project, called Earth 4-D: Subsurface Science and Exploration, is supported by the Canadian Institute for Advanced Research. For this new study, the researchers wanted to see if the ingredients for radiolysis-driven habitats could exist on Mars. They drew on data from NASA's Curiosity rover and other orbiting spacecraft, as well as compositional data from a suite of Martian meteorites, which are representative of different parts of the planet's crust. The researchers were looking for the ingredients for radiolysis: radioactive elements like thorium, uranium and potassium; sulfide minerals that could be converted to sulfate; and rock units with adequate pore space to trap water. The study found that in several different types of Martian meteorites, all the ingredients are present in adequate abundances to support Earth-like habitats. This was particularly true for regolith breccias -- meteorites sourced from crustal rocks more than 3.6 billion years old -- which were found to have the highest potential for life support. Unlike Earth, Mars lacks a plate tectonics system that constantly recycles crustal rocks, so these ancient terrains remain largely undisturbed. The researchers say the findings help make the case for an exploration program that looks for signs of present-day life in the Martian subsurface. Prior research has found evidence of an active groundwater system on Mars in the past, the researchers say, and there's reason to believe that groundwater exists today. One recent study, for example, raised the possibility of an underground lake lurking under the planet's southern ice cap. This new research suggests that wherever there's groundwater, there's energy for life. Tarnas and Mustard say that while there are certainly technical challenges involved in subsurface exploration, they aren't as insurmountable as people may think.
A drilling operation wouldn't require "a Texas-sized oil rig," Mustard said, and recent advances in small drill probes could soon put the Martian depths within reach."The subsurface is one of the frontiers in Mars exploration," Mustard said. "We've investigated the atmosphere, mapped the surface with different wavelengths of light and landed on the surface in half-a-dozen places, and that work continues to tell us so much about the planet's past. But if we want to think about the possibility of present-day life, the subsurface is absolutely going to be where the action is."The research was supported by the Canadian Institute for Advanced Research.
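The screening logic the study applies, checking whether a rock carries all three radiolysis ingredients in adequate abundance, can be caricatured in a few lines. Every threshold and sample value below is a hypothetical placeholder; the actual study used detailed compositional modeling rather than simple cutoffs:

```python
# Illustrative sketch: flag whether a sample has all three ingredients for
# radiolysis-driven habitability -- radioactive elements to split water,
# sulfides to soak up the freed oxygen, and pore space to hold groundwater.
# All threshold values are invented for illustration.

THRESHOLDS = {
    "radionuclides_ppm": 1.0,  # Th + U + K proxy (hypothetical cutoff)
    "sulfide_wt_pct": 0.5,     # sulfide minerals convertible to sulfates
    "porosity_pct": 5.0,       # pore space to trap water
}

def supports_radiolysis(sample):
    return all(sample[key] >= cutoff for key, cutoff in THRESHOLDS.items())

# Hypothetical regolith breccia measurements
breccia = {"radionuclides_ppm": 2.3, "sulfide_wt_pct": 1.1, "porosity_pct": 8.0}
print(supports_radiolysis(breccia))  # True: all three ingredients present
```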
Environment
2021
April 22, 2021
https://www.sciencedaily.com/releases/2021/04/210422102851.htm
The future looks bright for infinitely recyclable plastic
Plastics are a part of nearly every product we use on a daily basis. The average person in the U.S. generates about 100 kg of plastic waste per year, most of which goes straight to a landfill. A team led by Corinne Scown, Brett Helms, Jay Keasling, and Kristin Persson at Lawrence Berkeley National Laboratory (Berkeley Lab) set out to change that.
Less than two years ago, Helms announced the invention of a new plastic that could tackle the waste crisis head on. Called poly(diketoenamine), or PDK, the material has all the convenient properties of traditional plastics while avoiding the environmental pitfalls, because unlike traditional plastics, PDKs can be recycled indefinitely with no loss in quality.Now, the team has released a study that shows what can be accomplished if manufacturers began using PDKs on a large scale. The bottom line? PDK-based plastic could quickly become commercially competitive with conventional plastics, and the products will get less expensive and more sustainable as time goes on."Plastics were never designed to be recycled. The need to do so was recognized long afterward," explained Nemi Vora, first author on the report and a former postdoctoral fellow who worked with senior author Corinne Scown. "But driving sustainability is the heart of this project. PDKs were designed to be recycled from the get-go, and since the beginning, the team has been working to refine the production and recycling processes for PDK so that the material could be inexpensive and easy enough to be deployed at commercial scales in anything from packaging to cars."The study presents a simulation for a 20,000-metric-ton-per-year facility that puts out new PDKs and takes in used PDK waste for recycling. The authors calculated the chemical inputs and technology needed, as well as the costs and greenhouse gas emissions, then compared their findings to the equivalent figures for production of conventional plastics."These days, there is a huge push for adopting circular economy practices in the industry. Everyone is trying to recycle whatever they're putting out in the market," said Vora. 
"We started talking to industry about deploying 100% percent infinitely recycled plastics and have received a lot of interest.""The questions are how much it will cost, what the impact on energy use and emissions will be, and how to get there from where we are today," added Helms, a staff scientist at Berkeley Lab's Molecular Foundry. "The next phase of our collaboration is to answer these questions."To date, more than 8.3 billion metric tons of plastic material have been produced, and the vast majority of this has ended up in landfills or waste incineration plants. A small proportion of plastics are sent to be recycled "mechanically," meaning they are melted down and then re-shaped into new products. However, this technique has limited benefit. Plastic resin itself is made of many identical molecules (called monomers) bound together into long chains (called polymers). Yet to give plastic its many textures, colors, and capabilities, additives like pigments, heat stabilizers, and flame retardants are added to the resin. When many plastics are melted down together, the polymers become mixed with a slew of potentially incompatible additives, resulting in a new material with much lower quality than newly produced virgin resin from raw materials. As such, less than 10% of plastic is mechanically recycled more than once, and recycled plastic usually also contains virgin resin to make up for the dip in quality.PDK plastics sidestep this problem entirely -- the resin polymers are engineered to easily break down into individual monomers when mixed with an acid. The monomers can then be separated from any additives and gathered to make new plastics without any loss of quality. 
The team's earlier research shows that this "chemical recycling" process is light on energy and carbon dioxide emissions, and it can be repeated indefinitely, creating a completely circular material lifecycle where there is currently a one-way ticket to waste.Yet despite these incredible properties, to truly beat plastics at their own game, PDKs also need to be convenient. Recycling traditional petroleum-based plastic might be hard, but making new plastic is very easy."We're talking about materials that are basically not recycled," said Scown. "So, in terms of appealing to manufacturers, PDKs aren't competing with recycled plastic -- they have to compete with virgin resin. And we were really pleased to see how cheap and how efficient it will be to recycle the material."Scown, who is a staff scientist in Berkeley Lab's Energy Technologies and Biosciences Areas, specializes in modeling future environmental and financial impacts of emerging technologies. Scown and her team have been working on the PDK project since the outset, helping Helms' group of chemists and fabrication scientists to choose the raw materials, solvents, equipment, and techniques that will lead to the most affordable and eco-friendly product."We're taking early stage technology and designing what it would look like at commercial-scale operations" using different inputs and technology, she said. This unique, collaborative modeling process allows Berkeley Lab scientists to identify potential scale-up challenges and make process improvements without costly cycles of trial and error.The team's report, published in Thanks to optimization from process modeling, recycled PDKs are already drawing interest from companies needing to source plastic. Always looking to the future, Helms and his colleagues have been conducting market research and meeting with people from industry since the project's early days. 
Their legwork shows that the best initial applications for PDKs are markets where the manufacturer will receive their product back at the end of its lifespan, such as the automobile industry (through trade-ins and take-backs) and consumer electronics (through e-waste programs). These companies will then be able to reap the benefits of 100% recyclable PDKs in their products: sustainable branding and long-term savings. "With PDKs, now people in industry have a choice," said Helms. "We're bringing in partners who are building circularity into their product lines and manufacturing capabilities, and giving them an option that is in line with future best practices." Added Scown: "We know there's interest at that level. Some countries have plans to charge hefty fees on plastic products that rely on non-recycled material. That shift will provide a strong financial incentive to move away from utilizing virgin resins and should drive a lot of demand for recycled plastics." After infiltrating the market for durable products like cars and electronics, the team hopes to expand PDKs into shorter-lived, single-use goods such as packaging. As they forge plans for a commercial launch, the scientists are also continuing their techno-economic collaboration on the PDK production process. Although the cost of recycled PDK is already projected to be competitively low, the scientists are working on additional refinements to lower the cost of virgin PDK, so that companies are not deterred by the initial investment price. And true to form, the scientists are working two steps ahead at the same time. Scown, who is also vice president for Life-cycle, Economics & Agronomy at the Joint BioEnergy Institute (JBEI), and Helms are collaborating with Jay Keasling, a leading synthetic biologist at Berkeley Lab and UC Berkeley and CEO of JBEI, to design a process for producing PDK polymers using microbe-made precursor ingredients.
The process currently uses industrial chemicals, but was initially designed with Keasling's microbes in mind, thanks to a serendipitous cross-disciplinary seminar."Shortly before we started the PDK project, I was in a seminar where Jay was describing all the molecules that they could make at JBEI with their engineered microbes," said Helms. "And I got very excited because I saw that some of those molecules were things that we put in PDKs. Jay and I had a few chats and, we realized that nearly the entire polymer could be made using plant material fermented by engineered microbes.""In the future, we're going to bring in that biological component, meaning that we can begin to understand the impacts of transitioning from conventional feedstocks to unique and possibly advantaged bio-based feedstocks that might be more sustainable long term on the basis of energy, carbon, or water intensity of production and recycling," Helms continued. "So, where we are now, this is the first step of many, and I think we have a really long runway in front of us, which is exciting."The Molecular Foundry is a Department of Energy (DOE) Office of Science user facility that specializes in nanoscale science. JBEI is a Bioenergy Research Center funded by DOE's Office of Science.This work was supported by the DOE's Bioenergy Technologies Office and Berkeley Lab's Laboratory Directed Research and Development (LDRD) program.PDK technology is available for licensing and collaboration.
Environment
2021
April 22, 2021
https://www.sciencedaily.com/releases/2021/04/210422093939.htm
How is a molecular machine assembled?
The conversion of light into chemical energy by plants and photosynthetic microorganisms is one of the most important processes in nature, removing climate-damaging CO2 from the atmosphere. Protein complexes, so-called photosystems, play the key role in this process. An international research team shed light for the first time on the structure and function of a transition state in the synthesis of photosystem II.
The study was published by the team from Ruhr-Universität Bochum (RUB), the Max Planck Institutes of Biochemistry and Biophysics, the Center for Synthetic Microbiology (SYNMIKRO) and the Chemistry Department at Philipps Universität Marburg, the University of Illinois Urbana-Champaign, USA, and Université Paris-Saclay, France, online on 12 April 2021 in the journal Photosystem II (PS II) is of fundamental importance for life, as it is able to catalyse the splitting of water. The oxygen released in this reaction allows us to breathe. In addition, PS II converts light energy in such a way that atmospheric CO2 can be used to synthesise organic molecules. PS II thus represents the molecular beginning of all food chains. Its structure and function have already been researched in detail, but little has been known so far about the molecular processes that lead to the orderly assembly of the complex.PS II consists of more than 100 individual parts that have to come together in a well-orchestrated process in order to ultimately create a fully functional machine. Helper proteins, so-called assembly factors, which are responsible for the sub-steps, play a crucial role in this process. "Picture them as robots on an assembly line, for example making a car," explains Professor Marc Nowaczyk from the RUB Chair for Plant Biochemistry. "Each robot adds a part or assembles prefabricated modules to end up with a perfect machine."When figuring out how this is done, the difficulty was to isolate an intermediate product, including its molecular helpers, because such transition states are very unstable compared to the finished product and are only present in very small quantities. 
Only by using tricks, such as removing a part of the assembly line production, was it possible to isolate an intermediate stage with the associated helper proteins for the first time. Thanks to cryo-electron microscopy, sensitive protein structures, which include PS II transition states, and even the smallest virus particles can be imaged. The data, published in
Environment
2021
April 21, 2021
https://www.sciencedaily.com/releases/2021/04/210421160028.htm
The wave beneath their wings
It's a common sight: pelicans gliding along the waves, right by the shore. These birds make this kind of surfing look effortless, but the physics that gives them such a big boost is far from simple.
Researchers at the University of California San Diego have recently developed a theoretical model that describes how the ocean, the wind and birds in flight interact. In a recent paper, UC San Diego mechanical engineering Ph.D. student Ian Stokes and adviser Professor Drew Lucas, of UC San Diego's Department of Mechanical and Aerospace Engineering and Scripps Institution of Oceanography, found that pelicans can completely offset the energy they expend in flight by exploiting wind updrafts generated by waves through what is known as wave-slope soaring. In short, by practicing this behavior, sea-birds take advantage of winds generated by breaking waves to stay aloft. The model could be used to develop better algorithms to control drones that need to fly over water for long periods of time, the researchers said. Potential uses do not stop there. "There's a community of biologists and ornithologists that studies the metabolic cost of flight in birds that can use this and see how their research connects to our estimates from theory. Likewise, our model generates a basic prediction for the winds generated by passing swell, which is important to physicists who study how the ocean and atmosphere interact in order to improve weather forecasting," Stokes said. "This is an interesting project because it shows how the waves are actually moving the air around, making wind. If you're a savvy bird, you can optimize how you move to track waves and to take advantage of these updrafts. Since seabirds travel long distances to find food, the benefits may be significant," Lucas said. Stokes and Lucas are, of course, not the first scientists to study the physics of the atmosphere that pelicans and other birds are hardwired to intuit so they can conserve energy for other activities.
For centuries, humans have been inspired by the sight of birds harnessing the power and patterns of the winds for soaring flight. That's how it started with Stokes, who is now in the second year of his PhD at UC San Diego. As a UC Santa Barbara undergraduate, Stokes, a surfer and windsurfer in his off hours, needed a project for his senior physics class and thought of the birds that would accompany him on the waves. When he looked closer, he appreciated the connection between their flight dynamics and the study of environmental fluid dynamics, a speciality of scientists at UC San Diego. The project ultimately turned into a master's thesis with Lucas, drawing inspiration from oceanographers at Scripps who seek to understand the interactions between the ocean and atmosphere. Wave-slope soaring is just one of the many behaviors in sea-birds that take advantage of the energy in their environment. By tapping into these predictable patterns, the birds are able to forage, travel, and find mates more effectively. "As we appreciate their mastery of the fluid, ever-changing ocean environment, we gain insight into the fundamental physics that shape our world," said Lucas.
Environment
2021
April 21, 2021
https://www.sciencedaily.com/releases/2021/04/210421124659.htm
New evidence shows important seabird nutrients reach coral reefs after rat eradication
Scientists have provided the first evidence to show that eradicating rats from tropical islands affects not just the biodiversity on the islands, but also the fragile coral seas that surround them.
The new study led by scientists at Lancaster University and published in the journal The findings offer encouragement that rat eradication can benefit coral reefs, because these nutrient flows can bolster the health of delicate coral reef ecosystems and may improve their chances of rebounding between climate disturbance events.Seabirds are a critically important distributor of nutrients for island and marine environments. They feed on fish often in the open ocean far from islands, and then return to islands to roost -- depositing nitrogen-rich nutrients on the island in the form of guano -- or poo. Some of the guano is then leached off the islands by rain and into the surrounding seas where the nitrogen fertilises corals and other marine species such as algae and sponges, boosting the food-chain.However, over the last several centuries people introduced rats to many tropical islands through settlement, sailing between islands and shipwrecks. Rats are a very damaging invasive species, consuming the seeds of many plants, and devastating bird populations as they eat eggs, chicks, and even adults of the smallest seabird species.Previous studies by scientists from the same research team revealed that islands with rat infestations had much smaller seabird populations than islands without rats, and as a result, there was much less nitrogen on the islands, and in the surrounding marine environments. This results in significant knock-on negative effects with less reef fish biomass, reduced ecosystem functioning, and slower coral growth on adjacent coral reefs.In this latest study the researchers looked at 20 islands in the central and western Indian Ocean, including the Chagos Archipelago and the Scattered islands. 
These remote and protected islands provided an ideal testbed as they contain islands with varying rat histories.By comparing rat infested islands with islands where invasive rat populations have been eradicated, and islands that have never had rats, they found that the greatest number of seabirds, and the nutrients they provide to island and marine ecosystems were at the islands where rats have never been introduced. Those islands that have existing rat infestations had the fewest seabirds, and seabird-derived nutrients. Importantly, they found islands where rats had been eradicated were between the two.Seabird populations are steadily increasing on two islands within the Scattered Islands -- Île du Lys and Tromelin."Rats were eradicated on Tromelin in 2005," said Matthieu Le Corre of the Universite de La Reunion who was part of the research team. "Since then there has been an eight-fold increase in seabirds, and six species that were locally extinct because of the rats have restarted to breed after rat eradication. On Île du Lys, where rats were eradicated in 2003, surveys show there has been a ten-fold increase in seabirds since."The researchers, testing fish and algae, also found increased measurements of seabird-derived nitrogen in the marine ecosystems surrounding these islands.These results show that even after hundreds of years of rat infestations, the islands and the surrounding marine ecosystems can start to recover within 16 years of rat eradication.Dr Casey Benkwitt, of Lancaster University and lead author of the research, said: "We know that rats are devastating for island seabird populations, and that the loss of these birds from islands is harmful to the surrounding coral reef ecosystems. 
However, we needed to test whether removing the rats from islands could help these important nutrient cycles return, and if so, how quickly and how far the benefits would spread onto fragile coral reefs."Our study shows the first evidence that rat eradication programmes can indeed be an important tool in helping restore these vital seabird-led nutrient cycles not just on tropical islands themselves, but also in their surrounding seas -- and that they can recover within relatively short timeframes, which is very encouraging."However, not all of the marine ecosystem functions around islands that have had rats eradicated have improved. One important measure, that of the growth and size of coral reef fish, was not higher around these islands. Fish do grow faster adjacent to islands that have never had rats present, so these benefits may take longer to emerge following rat eradication.The findings will help guide management and restoration efforts for islands and coral reef environments. The researchers believe that by combining rat eradication programmes with other restoration strategies it could be possible to speed-up the recovery of seabird populations, and their benefits to surrounding marine environments.Professor Nick Graham, of Lancaster University and Principal Investigator of the research, said: "This study adds to the weight of evidence suggesting rat eradication can have substantial conservation benefits to tropical island and adjacent marine ecosystems. The nutrient cycles that returning seabirds bring can bolster coral and fish assemblages. With climate impacts severely impacting coral reefs, management actions to boost the ecosystem are incredibly important."
Environment
2021
April 21, 2021
https://www.sciencedaily.com/releases/2021/04/210421124637.htm
Dating in a jungle: Female praying mantises jut out weird pheromone gland to attract mates
It isn't only myriads of currently unknown species that await discovery in the Amazon rainforests. As a new study by German scientists at the Ruhr-University (Bochum) and the Bavarian State Collection of Zoology (Munich), published in the open-access peer-reviewed scientific
"When I saw the maggot-like structures peeking out from the back of the praying mantis and then withdrew, I immediately thought of parasites that eat the animal from the inside, because that is not really uncommon in insects," says Frank Glaw, a reptile and amphibian expert from the Bavarian State Collection of Zoology, who discovered the unusual phenomenon.However, it took specialists in this particular animal group to solve the riddle. Although the experts had seen nothing like this in praying mantises before either, they pointed out that there are other species of mantises, in which mostly unfertilised females release pheromones from a gland in the same part of the body (between the 6th and 7th tergite), in order to attract mates. The Y-shaped organ, which can stretch up to 6 mm in length, is in fact an advanced pheromone gland, which the insect controls with the help of hemolymph."We suspect that Stenophylla lobivertex can release the pheromones with the protrusible organ more efficiently and in a more targeted manner than other praying mantises," says Christian J. Schwarz, entomologist at the Ruhr-University."This can be very important, especially for rare species with a low population density, so that males can reliably find their females."Stenophylla lobivertex is a very rare species and lives hidden in the Amazon rainforests. Discovered only 20 years ago, the bizarre-looking and well-camouflaged animal has only been spotted a few times, and apparently only mates at night in the darkness.
Environment
2021
April 21, 2021
https://www.sciencedaily.com/releases/2021/04/210421124624.htm
To design truly compostable plastic, scientists take cues from nature
Despite our efforts to sort and recycle, less than 9% of plastic gets recycled in the U.S., and most ends up in landfill or the environment.
Biodegradable plastic bags and containers could help, but if they're not properly sorted, they can contaminate otherwise recyclable #1 and #2 plastics. What's worse, most biodegradable plastics take months to break down, and when they finally do, they form microplastics -- tiny bits of plastic that can end up in oceans and animals' bodies -- including our own. Now, as reported in the journal "In the wild, enzymes are what nature uses to break things down -- and even when we die, enzymes cause our bodies to decompose naturally. So for this study, we asked ourselves, 'How can enzymes biodegrade plastic so it's part of nature?'" said senior author Ting Xu, who holds titles of faculty senior scientist in Berkeley Lab's Materials Sciences Division, and professor of chemistry and materials science and engineering at UC Berkeley. At Berkeley Lab, Xu -- who for nearly 15 years has dedicated her career to the development of functional polymer materials inspired by nature -- is leading an interdisciplinary team of scientists and engineers from universities and national labs around the country to tackle the mounting problem of plastic landfill posed by both single-use and so-called biodegradable plastics. Most biodegradable plastics in use today are usually made of polylactic acid (PLA), a vegetable-based plastic material blended with cornstarch. There is also polycaprolactone (PCL), a biodegradable polyester that is widely used for biomedical applications such as tissue engineering. But the problem with conventional biodegradable plastics is that they're indistinguishable from single-use plastics such as plastic film -- so a good chunk of these materials ends up in landfills.
And even if a biodegradable plastic container gets deposited at an organic waste facility, it can't break down as fast as the lunch salad it once contained, so it ends up contaminating organic waste, said co-author Corinne Scown, a staff scientist and deputy director for the Research, Energy Analysis & Environmental Impacts Division in Berkeley Lab's Energy Technologies Area. Another problem with biodegradable plastics is that they aren't as strong as regular plastic -- that's why you can't carry heavy items in a standard green compost bag. The tradeoff is that biodegradable plastics can break down over time -- but still, Xu said, they only break down into microplastics, which are still plastic, just a lot smaller. So Xu and her team decided to take a different approach -- by "nanoconfining" enzymes into plastics. Because enzymes are part of living systems, the trick would be carving out a safe place in the plastic for enzymes to lie dormant until they're called to action. In a series of experiments, Xu and co-authors embedded trace amounts of the commercial enzymes Burkholderia cepacia lipase (BC-lipase) and proteinase K within the PLA and PCL plastic materials. The scientists also added an enzyme protectant called four-monomer random heteropolymer, or RHP, to help disperse the enzymes a few nanometers (billionths of a meter) apart. In a stunning result, the scientists discovered that ordinary household tap water or standard soil composts converted the enzyme-embedded plastic material into its small-molecule building blocks called monomers, and eliminated microplastics in just a few days or weeks. They also learned that BC-lipase is something of a finicky "eater." Before a lipase can convert a polymer chain into monomers, it must first catch the end of a polymer chain.
By controlling when the lipase finds the chain end, it is possible to ensure the materials don't degrade until being triggered by hot water or compost soil, Xu explained.In addition, they found that this strategy only works when BC-lipase is nanodispersed -- in this case, just 0.02 percent by weight in the PCL block -- rather than randomly tossed in and blended."Nanodispersion puts each enzyme molecule to work -- nothing goes to waste," Xu said.And that matters when factoring in costs. Industrial enzymes can cost around $10 per kilogram, but this new approach would only add a few cents to the production cost of a kilogram of resin because the amount of enzymes required is so low -- and the material has a shelf life of more than 7 months, Scown added.X-ray scattering studies performed at Berkeley Lab's Advanced Light Source characterized the nanodispersion of enzymes in the PCL and PLA plastic materials.Interfacial-tension experiments conducted by co-author Tom Russell revealed in real time how the size and shape of droplets changed as the plastic material decomposed into distinct molecules. The lab results also differentiated between enzyme and RHP molecules."The interfacial test gives you information about how the degradation is proceeding," he said. 
"But the proof is in the composting -- Ting and her team successfully recovered plastic monomers from biodegradable plastic simply by using RHPs, water, and compost soil."Russell is a visiting faculty scientist and professor of polymer science and engineering from the University of Massachusetts who leads the Adaptive Interfacial Assemblies Towards Structuring Liquids program in Berkeley Lab's Materials Sciences Division.Developing a very affordable and easily compostable plastic film could incentivize produce manufacturers to package fresh fruits and vegetables with compostable plastic instead of single-use plastic wrap -- and as a result, save organic waste facilities the extra expense of investing in expensive plastic-depackaging machines when they want to accept food waste for anaerobic digestion or composting, Scown said.Since their approach could potentially work well with both hard, rigid plastics and soft, flexible plastics, Xu would like to broaden their study to polyolefins, a ubiquitous family of plastics commonly used to manufacture toys and electronic parts.The team's truly compostable plastic could be on the shelves soon. They recently filed a patent application through UC Berkeley's patent office. And co-author Aaron Hall, who was a Ph.D. student in materials science and engineering at UC Berkeley at the time of the study, founded UC Berkeley startup Intropic Materials to further develop the new technology. He was recently selected to participate in Cyclotron Road, an entrepreneurial fellowship program in partnership with Activate."When it comes to solving the plastics problem, it's our environmental responsibility to take up nature on its path. By prescribing a molecular map with enzymes behind the wheel, our study is a good start," Xu said.
Environment
2021
April 21, 2021
https://www.sciencedaily.com/releases/2021/04/210421082910.htm
Energy unleashed by submarine volcanoes could power a continent
Volcanic eruptions deep in our oceans are capable of extremely powerful releases of energy, at a rate high enough to power the whole of the United States, according to research published today.
Eruptions from deep-sea volcanoes were long thought to be relatively uninteresting compared with those on land. While terrestrial volcanoes often produce spectacular eruptions, dispersing volcanic ash into the environment, it was thought that deep marine eruptions only produced slow-moving lava flows. But data gathered by remotely operated vehicles deep in the North East Pacific and analysed by scientists at the University of Leeds has revealed a link between the way ash is dispersed during submarine eruptions and the creation of large and powerful columns of heated water rising from the ocean floor, known as megaplumes. These megaplumes contain hot chemical-rich water and act in the same way as the atmospheric plumes seen from land-based volcanoes, spreading first upwards and then outwards, carrying volcanic ash with them. The size of megaplumes is immense, with volumes of water equivalent to forty million Olympic-sized swimming pools. They have been detected above various submarine volcanoes but their origin has remained unknown. The results of this new research show that they form rapidly during the eruption of lava. The research was carried out by Sam Pegler, from the School of Mathematics, and David Ferguson, from the School of Earth and Environment, and is being published today in the journal Together they developed a mathematical model which shows how ash from these submarine eruptions spreads several kilometres from the volcano. They used the ash pattern deposited by a historic submarine eruption to reconstruct its dynamics. This showed that the rate of energy released and required to carry ash to the observed distances is extremely high -- equivalent to the power used by the whole of the USA. David Ferguson said: "The majority of Earth's volcanic activity occurs underwater, mostly at depths of several kilometres in the deep ocean but, in contrast to terrestrial volcanoes, even detecting that an eruption has occurred on the seafloor is extremely challenging.
Consequently, there remains much for scientists to learn about submarine volcanism and its effects on the marine environment." The research shows that submarine eruptions cause megaplumes to form, but the release of energy is so rapid that it cannot be supplied from the erupted molten lava alone. Instead, the research concludes that submarine volcanic eruptions lead to the rapid emptying of reservoirs of hot fluids within the earth's crust. As the magma forces its way upwards towards the seafloor, it drives this hot fluid with it. Sam Pegler added: "Our work provides evidence that megaplumes are directly linked to the eruption of lava and are responsible for transporting volcanic ash in the deep ocean. It also shows that plumes must have formed in a matter of hours, creating an immense rate of energy release." David Ferguson added: "Observing a submarine eruption in person remains extremely difficult, but the development of instruments based on the seafloor means data can be streamed live as the activity occurs. Efforts like these, in concert with continued mapping and sampling of the ocean floor, mean the volcanic character of our oceans is slowly being revealed."
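The "forty million Olympic-sized swimming pools" comparison above can be turned into an order-of-magnitude volume. A minimal sketch, where the pool volume (a 50 m x 25 m x 2 m pool, roughly 2,500 cubic metres) is an assumed figure, not from the article:

```python
# Rough scale of a megaplume from the article's swimming-pool comparison.
# The per-pool volume is an assumption (50 m x 25 m x 2 m Olympic pool).
pool_volume_m3 = 50 * 25 * 2          # ~2,500 m^3 per pool (assumed)
n_pools = 40_000_000                   # "forty million" (from the article)

megaplume_volume_m3 = pool_volume_m3 * n_pools
print(f"Megaplume volume: {megaplume_volume_m3:.1e} m^3")   # ~1.0e+11 m^3
print(f"... roughly {megaplume_volume_m3 / 1e9:.0f} km^3")  # ~100 km^3
```

That is on the order of a hundred cubic kilometres of heated water, which conveys why the energy release rate inferred by the model is so large.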
Environment
2021
April 20, 2021
https://www.sciencedaily.com/releases/2021/04/210420183146.htm
Using engineering methods to track the imperceptible movements of stony corals
Coral reefs around the world are under threat from rising sea temperatures, ocean acidification, disease and overfishing, among other reasons.
Tracking signs of stress and ill health is difficult because corals -- an animal host coexisting with algae, bacteria, viruses and fungi -- are dynamic organisms that behave differently depending on what's happening in their environment. Some scientists wonder if recording changes in coral movements over time could help with monitoring a coral reef's health.This is not always a straightforward task. Some coral species wave and pulse in the current, but others have rock-like skeletons and may have movements that are not visible to the human eye. A new study led by University of Washington researchers borrowed image-analysis methods from engineering to spot the minute movements of a stony coral.The team published these results April 8 in "In mechanics, we have to be able to measure imperceptible deformations in materials and structures to understand how much load these systems are experiencing and to predict potential failures," said co-senior author Jinkyu Yang, a UW associate professor of aeronautics and astronautics. "We thought we could use these same analysis methods to study living systems, such as corals."First the researchers needed to find the right coral species to test."Our analysis method easily captures surface deformation when whatever we are imaging has texture on its surface. Smooth surfaces without textures, like polished metal and glass, don't work as well," said lead author Shuaifeng Li, a UW doctoral student of aeronautics and astronautics. "Luckily, stony corals, such as Montipora capricornis, have unique patterns on their surfaces."To get started, the researchers set up a coral photo shoot. They took 200 images of the M. capricornis specimen in a tank at a rate of 30 photos per hour in both daytime and nighttime conditions, which were controlled using different lights."It was challenging to keep a sharp focus on the coral due to the way the light refracted off the glass tank," Li said. 
"Also, we needed to pay particular attention to make sure the lighting conditions were consistent throughout the test."Once they had acquired the pictures, the researchers used two analysis methods to search for movement. Both methods compare subsequent images in a series to the first image, playing them like a flipbook to extract changes. From here, the team could measure parameters such as pixel velocity, what parts of the coral are moving, and whether something is being compressed or stretched. The researchers also further processed the photos to be able to pull out the different types of movements occurring across the coral.Across all measurements, the researchers saw more activities happening under the nighttime conditions. The team also saw movement for both the tissue growing on the coral's stony skeleton as well as the coral polyps, though the polyps had larger movements."Corals often feed more at night by expanding their polyps and using their tentacles to catch zooplankton prey, and here we are able to quantify these nocturnal movements," said co-senior author Hollie Putnam, assistant professor of biological sciences at the University of Rhode Island. "This application of engineering techniques and analyses to assess subtle and dynamic movements can transform our understanding of coral behavior and physiology, which is critical as corals are under threat from multiple stressors."The team plans to expand this method to work on more coral species, including soft corals, which have much larger movements. Ultimately, the goal is to make this technique useful for determining potential changes in coral health under different circumstances."One investigation that should be considered is looking at how coral tissue motion changes upon exposure to pollutants generated by anthropogenic activities, such as chemical dispersants and oil," Yang said. 
"Also this method could be used to monitor coral reefs by using satellite images or pictures taken by citizen scientists."
Environment
2021
April 20, 2021
https://www.sciencedaily.com/releases/2021/04/210420160906.htm
Restoration efforts can brighten an ecosystem's future, but cannot erase its past
An expansive project led by Michigan State University's Lars Brudvig is examining the benefits, and limits, of environmental restoration on developed land after humans are done with it.
Experts estimate there are up to 17 million square miles of land worldwide that have been altered by humans -- through cultivation, say -- and then abandoned. That's more than four times the size of the continental United States. Once humans change a landscape, their impacts linger long after they've moved on. However, humans can heal some of that damage by working to restore the land to its natural state. But questions remain about how far restoration can go in overcoming a land's past, how much it can move the needle back toward normal. Brudvig and his collaborators now have some answers that they've published April 19 online in the "Restoration benefited sites regardless of their land-use history. The benefits are clear," said Brudvig, an associate professor of plant biology in MSU's College of Natural Science. For this project, researchers compared land that had been used for farming with land without a history of agriculture. By working to restore habitats on both types of plots, the team could paint a clearer picture of how a habitat's history affects restoration efforts. The researchers found that the effects of restoration outweighed the detriments from a plot's previous land use two-to-one. Despite the benefits, however, restoration could not erase all of farming's lasting effects. "Agricultural sites were different to begin with and they remained different after restoration," Brudvig said. "It does beg the question of what else we should be doing." Though this project does not answer that question, it does offer many insights that can help ecologists decide where and how to target their restoration efforts. In the study, the team observed dozens of different ecological properties across more than 300 acres for several years following a restoration treatment developed for longleaf pine savannas in the Southeast U.S. "The longleaf pine is the tree of the South. It's this charismatic, beautiful, really, really cool species of tree," Brudvig said.
"There's also incredible biodiversity in this region. There's on the order of 900 different species of plants that are found here and nowhere else."

This work required a large experimental site with a well-known land-use history. Fortunately, Brudvig was part of a multiuniversity collaboration that had been working at such a site for years: the Savannah River Site in South Carolina. The site is a U.S. Department of Energy complex, and its natural ecosystems are managed by the U.S. Department of Agriculture's Forest Service.

"I don't know of another place on Earth where we could have set up this research project and pulled this off," Brudvig said.

The site's history is well documented, but also complicated and painful, Brudvig said. The site has a long history of agriculture, with farmers replacing open, grassy savannas with fields to grow corn, cotton and other crops. But as the Cold War raged in the mid-20th century, the U.S. government commandeered the land and shut down those farms. In the time since, people have turned the farmland into tree plantations, densely packed with longleaf and other pines. The few remaining natural savannas also transitioned into thick forest because people began suppressing fires in the region, too.

Longleaf pines, which thrive in the savanna setting, have evolved to be resilient to fires caused by lightning strikes, for example. Suppressing fires allowed tree species that are better acclimated to more crowded forest conditions to fill in the open spaces. Counterintuitively, then, restoring the savanna meant removing trees in areas with and without histories of agriculture.

"I get that question a lot: If you're trying to restore an ecosystem, shouldn't you be planting trees?" Brudvig said. "But by removing trees, you keep dense canopies from growing, giving opportunities to other plants on the ground.
There's a whole suite of plants and animals adapted to the conditions of savannas."

And thinning trees also created valuable lumber, so the U.S. Forest Service was able to take bids from contractors to carefully thin the trees, meaning this restoration effort was also a revenue-generating one.

To compare the effects of restoration and past land use, the team used vetted statistical tools to put different factors on the same mathematical footing. For example, they could assign comparable values to soil quality, plant diversity and how different species were interacting with each other, such as how effective bees were at pollinating plants. The researchers could then analyze how each category was affected by land use and restoration in a quantitative way.

(Image: A black and yellow carpenter bee collects pollen from little purple flowers. Interactions between pollinators and plants were one of the dozens of ecological properties researchers monitored in this study. Credit: Nash Turley)

"Past studies have looked at more narrow sets of characteristics -- such as plant properties or animal diversity," Brudvig said. "And we kind of did that, too, but collectively as a group, we have 45 different ways we've queried the ecosystem."

Researchers on this project came from seven different universities, including the University of Wisconsin-Madison.

"For me, the most important takeaway from this project is that the past matters for present-day restoration," said John Orrock, a collaborator on the project and a professor of integrative biology at UW-Madison. "The success of current restoration is haunted by the ghost of land-use past."

Knowing this helps ecologists make the most effective use of limited resources, he said, adding that teamwork was critical to performing a study of this magnitude.

"Conducting experiments at landscape scales is incredibly challenging," said Ellen Damschen, a co-investigator on the project from UW-Madison, where she is a professor of integrative biology.
"We have had the great fortune of partnering with the U.S. Forest Service and Department of Energy at Savannah River Site to test key restoration questions at scales relevant to management."

"What makes this work possible is that partnership and the trust built over decades of collaborating together, as well as welcoming new members," Damschen said. "It is wonderful when students and collaborators can ask new and different questions that can shed light on how the system works."

One of those contributors was Nash Turley, now a postdoctoral researcher at Pennsylvania State University. Back in 2014, he was joining MSU as a postdoc and helping create the project before setting foot on campus.

"I turned in my Ph.D. thesis at the University of Toronto and started driving down to work in the field with Lars the next day," he said. "On the way, I wrote in my notepad that we should look at all these factors associated with land-use legacy."

Because Brudvig, Orrock, Damschen and their colleagues had been working at the Savannah River Site for years, they had an abundance of data already available.

"But we needed more," Turley said.

The team's study area consisted of 126 plots, each larger than a football field. Researchers measured each of the 45 ecological variables -- say, the number of plant species -- across the study area.

"It's easy to say we measured 45 variables, but it was just an immense, immense effort," Turley said. "It takes one crew of people an entire summer, making measurements all day long, to monitor one of those properties."

But the payoff was worth it. Although restoration didn't undo land-use legacies, its benefits were clear no matter a plot's history. And the restoration itself was relatively simple: the team was able to return land to a more natural state by merely thinning trees. Granted, this particular restoration won't work for every ecosystem.
But understanding an ecosystem's history and performing similar studies in the future will help identify ways to improve those 17 million square miles of abandoned land, Turley said, especially in the 2020s, which the United Nations has declared the Decade on Ecosystem Restoration.

"There's a great opportunity to do a lot of restoration and do a lot of good," Turley said. "But because our human interactions have done so much and last so long, we'll have to put in even more effort to undo those."

And those opportunities don't just live in the South or in other places around the globe. Motivated Spartans can find them in their own backyard.

"In Michigan and the Midwest, there's a ton of abandoned land, mostly left to do its own thing," Turley said. "If we had the motivation to do better with it, we could. It just takes the will."
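The "same mathematical footing" idea described above can be sketched with standardized effect sizes: raw measurements in different units are converted to z-scores, and group differences are expressed in pooled standard-deviation units. All numbers below are invented for illustration, and this simple Cohen's-d-style comparison only gestures at the study's far more sophisticated analysis.

```python
import statistics

def standardize(values):
    """Convert raw measurements to z-scores so variables measured in
    different units (soil pH, species counts, pollinator visit rates)
    sit on the same dimensionless scale."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def mean_effect(a, b):
    """Difference of group means, in pooled standard-deviation units
    (a Cohen's-d-style effect size)."""
    pooled_sd = statistics.stdev(a + b)
    return (statistics.mean(b) - statistics.mean(a)) / pooled_sd

# Hypothetical plot-level counts of one ecological property
# (e.g. number of plant species) across four treatment groups.
plots = {
    "agri_untreated":    [12, 14, 13, 15],
    "agri_restored":     [18, 20, 19, 21],
    "nonagri_untreated": [16, 17, 18, 17],
    "nonagri_restored":  [22, 24, 23, 25],
}

# Effect of restoration on formerly agricultural plots ...
restoration_effect = mean_effect(plots["agri_untreated"], plots["agri_restored"])
# ... versus the lingering land-use legacy among restored plots.
land_use_legacy = mean_effect(plots["agri_restored"], plots["nonagri_restored"])

print(f"restoration effect: {restoration_effect:.2f} SD")
print(f"land-use legacy:    {land_use_legacy:.2f} SD")
```

Comparing many such standardized effects across the 45 measured properties is one way a two-to-one ratio of restoration benefit to land-use detriment could be tallied.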
Environment
2021
April 20, 2021
https://www.sciencedaily.com/releases/2021/04/210420121509.htm
Review summarizes known links between endocrine disruptors and breast cancer risk
Exposure to certain endocrine-disrupting chemicals could elevate the risk of breast cancer, according to a new comprehensive systematic review of epidemiological research. However, for many chemicals, evidence is inconsistent or still limited. The review was carried out by researchers at the universities of Hong Kong and Eastern Finland and published in
Endocrine-disrupting chemicals (EDCs) can interfere with the body's hormonal system, also called the endocrine system, and are widely present in the environment. They originate from a variety of sources, including pesticides, plasticisers and other industrial and pharmaceutical chemicals, as well as natural sources. Humans are often exposed to EDCs through food, but other possible exposure routes include drinking water, skin contact and air.

Breast cancer accounts for the majority of women's cancers. There has been increasing interest in the role of estrogen-mimicking EDCs, so-called xenoestrogens, in the development of breast cancer. They comprise a broad range of pesticides, synthetic chemicals, phytoestrogens and certain mycotoxins. The researchers reviewed 131 epidemiological studies evaluating the link between xenoestrogen exposure and breast cancer. Most studies assessed exposures by measuring the EDCs and their metabolites in urine, serum, plasma or adipose tissue.

According to the review, the now widely banned pesticide DDT is one of the most studied EDCs in relation to breast cancer risk. Out of 43 epidemiological studies, eleven reported positive associations between DDT or its metabolites in lipid, serum or plasma and breast cancer incidence. Nine reported higher DDT levels among women with breast cancer than among controls. In a few studies, DDT was linked to estrogen-positive breast cancer, or the association with breast cancer risk depended on genotype.

Polychlorinated biphenyls, PCBs, are a large group of compounds formerly widely used in electrical devices, surface coatings and other applications. The review of 50 studies found the association between total PCBs and breast cancer risk to be inconsistent. However, 19 studies linked certain PCBs to a higher breast cancer incidence.
Similar to DDT, PCBs accumulate in adipose tissue and in the food chain and can be excreted in breast milk.

Perfluorooctanoic acid (PFOA), found in some food packaging and cookware, was linked to breast cancer risk in three out of five epidemiological studies. Some studies found an association between cancer risk and certain genotypes for both PCBs and PFOA.

DDT, PCBs and PFOA are persistent organic pollutants (POPs), the use of which is strictly regulated. DDT and PCBs are older POPs, and their levels in the environment are decreasing. PFOA is a newer POP.

Phytoestrogens are natural plant estrogens that have been suggested to prevent breast cancer. Genistein is a phytoestrogen found in soy products. The review included 29 epidemiological studies focusing on genistein, 18 of which linked it to a lower breast cancer risk, although some only in certain age groups or populations.

For most EDCs included in the review, the link to breast cancer has been investigated in only a few epidemiological studies. Phthalates and bisphenol A (BPA), for example, are used in plastic packaging and can transfer to food. According to the review, four out of six studies linked phthalates to increased breast cancer risk. BPA was linked to more aggressive tumours in one study, but two other epidemiological studies found no link to breast cancer.

Parabens are common preservatives in foods and cosmetic products and are considered possible endocrine disruptors.
The only epidemiological study on the topic reported a link between paraben exposure, breast cancer risk and mortality following breast cancer.

Oral contraceptive use was linked to an increased breast cancer risk in seven out of eight epidemiological studies, but findings were inconsistent on how the duration or discontinuation of oral contraceptive use affected the risk.

The review also included the herbicide atrazine, the industrial by-product dioxin, mycotoxins produced by food and crop molds, and PBDEs found in household furniture coatings and appliances, but epidemiological studies on their links to breast cancer risk were still scarce and often inconsistent.

The authors point out that for EDCs to disrupt endocrine functions, dose, timing, duration and age at exposure all matter. In addition, as multiple EDCs coexist in the environment, more research is needed to evaluate their interactive effects on breast cancer risk.

The review also suggests that genotypes could determine whether EDC exposure affects breast cancer risk, and more research is needed on this topic. "One example is the polymorphism of the CYP1A1 gene, which is responsible for estrogen metabolism."

According to the authors, next-generation technologies, such as genome sequencing, proteomics or epigenomics, can help identify new exposure biomarkers with better sensitivity and specificity. "These technologies will also pave the way to better assessment of past exposure and prediction of future risks, by taking into account an individual's genetic profile."
Environment
2021
April 19, 2021
https://www.sciencedaily.com/releases/2021/04/210419182105.htm
People have shaped Earth's ecology for at least 12,000 years, mostly sustainably
New research published today in the
The new data overturn earlier reconstructions of global land use history, some of which indicated that most of Earth's land was uninhabited even as recently as 1500 CE. Further, this new

"Our work shows that most areas depicted as 'untouched,' 'wild,' and 'natural' are actually areas with long histories of human inhabitation and use," says UMBC's Erle Ellis, professor of geography and environmental systems and lead author. He notes that they might be interpreted like this because in these areas, "societies used their landscapes in ways that sustained most of their native biodiversity and even increased their biodiversity, productivity, and resilience."

The interdisciplinary research team includes geographers, archaeologists, anthropologists, ecologists, and conservation scientists. They represent the U.S., the Netherlands, China, Germany, Australia, and Argentina, pooling their knowledge and expertise into a large-scale study that required a highly collaborative approach. They tested the degree to which global patterns of land use and population over 12,000 years were associated statistically with contemporary global patterns of high biodiversity value within areas prioritized for conservation.

"Our global maps show that even 12,000 years ago, nearly three-quarters of terrestrial nature was inhabited, used, and shaped by people," says Ellis. "Areas untouched by people were almost as rare 12,000 years ago as they are today."

The cultural practices of early land users did have some impact on extinctions. However, by and large, land use by Indigenous and traditional communities sustained the vast majority of Earth's biodiversity for millennia. This finding comes at a critical time of heightened need to develop long-term, sustainable answers to our biggest environmental problems.

"The problem is not human use per se," explains professor and co-author Nicole Boivin, of the Max Planck Institute for the Science of Human History in Jena, Germany.
"The problem is the kind of land use we see in industrialized societies -- characterized by unsustainable agricultural practices and unmitigated extraction and appropriation."

To truly understand terrestrial nature today, it is necessary to understand the deep human history of that nature. Outside of a few remote areas, "nature as we know it was shaped by human societies over thousands of years," says Ellis. He believes that efforts to conserve and restore "won't be successful without empowering the Indigenous, traditional, and local people who know their natures in ways that scientists are only beginning to understand."

The authors argue that their findings confirm that biodiversity conservation and restoration will benefit by shifting focus from preserving land in a form imagined as "untouched" to supporting traditional and Indigenous peoples whose land use practices have helped sustain biodiversity over the long term.

"This study confirms on a scale not previously understood that Indigenous peoples have managed and impacted ecosystems for thousands of years, primarily in positive ways," says Darren J. Ranco, associate professor of anthropology and coordinator of Native American research at the University of Maine. "These findings have particular salience for contemporary Indigenous rights and self-determination."

Ranco, a citizen of the Penobscot Indian Nation, notes that Indigenous people currently exercise some level of management of about 5% of the world's lands, upon which 80% of the world's biodiversity exists. Even so, Indigenous people have been excluded from management, access, and habitation of protected lands in places such as the U.S. National Parks.

"We must also assure that new attempts to protect lands and biodiversity are not just a green-grab of Indigenous lands," says Ranco.
"We cannot re-create the worst of colonial policies meant to exclude Indigenous people, which would undoubtedly make the situation much worse for the environment and humanity."

"Our research demonstrates the connections between people and nature that span thousands of years," says Torben Rick, study co-author and curator of North American Archaeology at the Smithsonian National Museum of Natural History. "These connections are essential for understanding how we arrived at the present and how to achieve a more sustainable future."

This research represents a new form of collaboration across archaeology, global change science, conservation, and scholars of Indigenous knowledge. The co-authors hope this work will open the door to increasing the use of global land use history data by natural scientists, policymakers, activists, and others. Leaders in a range of fields can use these data, they note, to better understand and collaborate with Indigenous, traditional, and local peoples to conserve biodiversity and ecosystems over the long term.

"It is clear that the perspectives of Indigenous and local peoples should be at the forefront of global negotiations to reduce biodiversity loss," says Rebecca Shaw, chief scientist at World Wildlife Fund and another study co-author. "There is a global crisis in the way traditionally used land has been transformed by the scale and magnitude of intensive human development. We have to change course if we are to sustain humanity over the next 12,000 years."
Environment
2021
April 19, 2021
https://www.sciencedaily.com/releases/2021/04/210419135703.htm
Green hydrogen: 'Rust' as a photoanode and its limits
Hydrogen will be needed in large quantities as an energy carrier and raw material in the energy system of the future. To achieve this, however, hydrogen must be produced in a climate-neutral way, for example through so-called photoelectrolysis, by using sunlight to split water into hydrogen and oxygen. As photoelectrodes, semiconducting materials are needed that convert sunlight into electricity and remain stable in water. Metal oxides are among the best candidates for stable and inexpensive photoelectrodes. Some of these metal oxides also have catalytically active surfaces that accelerate the formation of hydrogen at the cathode or oxygen at the anode.
Research has long focused on haematite (α-Fe2O3), the mineral form of rust, as a photoanode material. Yet the conversion efficiencies achieved with haematite have remained well below what was thought theoretically possible. Scientists have puzzled over this for a long time. What exactly has been overlooked? What is the reason that only modest increases in efficiency have been achieved?

In a recent study published in

By combining their results, the researchers succeeded in extracting a fundamental physical property of the material that had generally been neglected when considering inorganic solar absorbers: the photogeneration yield spectrum. "Roughly speaking, this means that only part of the energy of the light absorbed by haematite generates mobile charge carriers; the rest generates rather localised excited states and is thus lost," Grave explains.

"This new approach provides experimental insight into light-matter interaction in haematite and allows distinguishing its optical absorption spectrum into productive absorption and non-productive absorption," Rothschild explains.

"We could show that the effective upper limit for the conversion efficiency of haematite photoanodes is significantly lower than that expected based on above-band-gap absorption," says Grave. According to the new calculation, today's "champion" haematite photoanodes have already come quite close to the theoretically possible maximum. So it doesn't get much better than that.

The approach has also been successfully applied to TiO2, a model material, and BiVO4, which is currently the best-performing metal oxide photoanode material. "With this new approach, we have added a powerful tool to our arsenal that enables us to identify the realizable potential of photoelectrode materials. Implementing this to novel materials will hopefully expedite the discovery and development of the ideal photoelectrode for solar water splitting. It would also allow us to 'fail quickly,' which is arguably just as important when developing new absorber materials," says Friedrich.
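The distinction between productive and non-productive absorption can be illustrated numerically: weight the absorbed photon flux at each wavelength by a photogeneration yield, and compare the result with total absorption. Every number below (wavelength grid, absorptance, yield, flux) is a hypothetical placeholder, not data from the study; only the bookkeeping mirrors the idea described.

```python
# Hypothetical spectra for a haematite-like absorber. In the study,
# such spectra come from photoelectrochemical measurements; these
# values are illustrative only.
wavelengths = [350, 400, 450, 500, 550, 600]   # nm
absorptance = [0.90, 0.85, 0.80, 0.70, 0.50, 0.20]  # fraction of photons absorbed
yield_spec  = [0.80, 0.70, 0.45, 0.25, 0.10, 0.02]  # absorbed photons -> mobile carriers
photon_flux = [1.0, 1.3, 1.5, 1.6, 1.6, 1.5]        # relative solar photon flux

# Total absorbed flux vs. the part that actually yields mobile carriers.
total = sum(f * a for f, a in zip(photon_flux, absorptance))
productive = sum(f * a * y for f, a, y in zip(photon_flux, absorptance, yield_spec))

print(f"fraction of absorbed light that is productive: {productive / total:.2f}")
```

With these placeholder numbers, less than half of the absorbed light is productive, which is the qualitative point: an efficiency limit computed from absorption alone would overestimate the achievable photocurrent.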
Environment
2021
April 19, 2021
https://www.sciencedaily.com/releases/2021/04/210419094010.htm
New model describes the (scaling) laws of the jungle
A forest looks like a hotbed of randomness, with trees and plants scattered in wild and capricious diversity. But appearances can be deceiving, say a trio of complexity researchers at the Santa Fe Institute (SFI). Underneath that apparent messiness lurk extraordinary regularities, governed by the biological mechanisms that drive universal forces of growth, death, and competition.
In a paper published April 9 in the journal

"This paper goes a long way in showing how things that look arbitrary and capricious can in fact be understood within a mathematical framework," says SFI Distinguished Shannan Professor and former President Geoffrey West, who collaborated with Lee and Chris Kempes, SFI Professor, on the model.

Scientists have long sought mathematical laws that connect the similar patterns that emerge at large and small scales of existence. "If you look at the microscopic structure of multicellular life, you see a lot of the same patterns playing out," says Lee. The metabolic rate of an organism follows a power scaling law with its mass, for example. Previous attempts at establishing such mathematical laws for the assemblage of plants in a forest have been a source of vociferous debate.

In previous work, West and others have developed models that start with the metabolic constraints on a single, optimized tree to make predictions about patterns that might emerge in a community of such trees. The model accurately showed how features like growth rate or canopy size might change with plant size -- and how those features might affect competition with other organisms or change the structure of the entire forest.

Kempes says that this idealized model paved the way for connecting biological principles like metabolism to mathematical, macro-level patterns, but over time researchers began to focus on how real-world situations differ in detail from that model. Not every tree or population follows the optimal rules, though, leading researchers like Lee to investigate new ways to generalize the core tenets.

"What happens when that law for scaling deviates for individual species, or for different contexts? How does that work?" says Kempes.
"How do all those fit together?" says Kempes.

The new model extends essential ideas from earlier works for how to set up a model informed by the biological principles of growth, death, and resource competition, but it also allows a user to generalize those ideas to a wide range of species and situations, says Kempes. A user might relax certain assumptions about tree allometries -- relationships between size and shape -- or incorporate ideas about how trees interact with other organisms, like termites.

By turning these "knobs" on the simulation, Lee says, researchers can more closely reproduce the diverse ways that forests diverge from the idealized model. They can also clearly connect biological principles at the level of the organism to how forest structure plays out on larger scales.

West says the new approach will not only reveal scaling laws that have previously gone unnoticed but also shine a light on new areas of investigation. "One of the great things about having an analytical model of this kind is that it points to where data is missing, or where data is poor," he says, "and the kinds of things people should be measuring."

The model also shows how a physics-inspired approach -- which often focuses on idealized situations -- can contribute to advances in understanding biological complexity. "There is this marvelous interplay between the fields," West says.
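The power scaling law mentioned above (metabolic rate versus body mass) is typically estimated by fitting a straight line in log-log space, since B = c * m^k becomes log B = log c + k * log m. The sketch below generates synthetic organisms under a Kleiber-style 3/4 exponent and recovers it by ordinary least squares; the data are invented for illustration and are not from the SFI model.

```python
import math
import random

# Allometric scaling: metabolic rate B = c * m**k.
random.seed(42)
c, k = 3.0, 0.75
masses = [10 ** random.uniform(0, 6) for _ in range(200)]   # 1 g to 1 tonne
# Multiplicative noise: real measurements scatter around the power law.
rates = [c * m ** k * math.exp(random.gauss(0, 0.1)) for m in masses]

# Least-squares slope in log-log space recovers the exponent k.
xs = [math.log(m) for m in masses]
ys = [math.log(b) for b in rates]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(f"fitted scaling exponent: {slope:.3f}")  # close to 0.75
```

The same log-log fitting procedure is how deviations from an idealized exponent, of the kind the new model accommodates, would show up in field data.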
Environment
2021
April 15, 2021
https://www.sciencedaily.com/releases/2021/04/210415142800.htm
Process simultaneously removes toxic metals and salt to produce clean water
University of California, Berkeley, chemists have discovered a way to simplify the removal of toxic metals, like mercury and boron, during desalination to produce clean water, while at the same time potentially capturing valuable metals, such as gold.
Desalination -- the removal of salt -- is only one step in the process of producing drinkable water, or water for agriculture or industry, from ocean or waste water. Either before or after the removal of salt, the water often has to be treated to remove boron, which is toxic to plants, and heavy metals like arsenic and mercury, which are toxic to humans. Often, the process leaves behind a toxic brine that can be difficult to dispose of.

The new technique, which can easily be added to current membrane-based electrodialysis desalination processes, removes nearly 100% of these toxic metals, producing a pure brine along with pure water and isolating the valuable metals for later use or disposal.

"Desalination or water treatment plants typically require a long series of high-cost, pre- and post-treatment systems that all the water has to go through, one by one," said Adam Uliana, a UC Berkeley graduate student who is first author of a paper describing the technology. "But here, we have the ability to do several of these steps all in one, which is a more efficient process. Basically, you could implement it in existing setups."

The UC Berkeley chemists synthesized flexible polymer membranes, like those currently used in membrane separation processes, but embedded nanoparticles that can be tuned to absorb specific metal ions -- gold or uranium ions, for example.
The membrane can incorporate a single type of tuned nanoparticle, if the metal is to be recovered, or several different types, each tuned to absorb a different metal or ionic compound, if multiple contaminants need to be removed in one step.

The polymer membrane laced with nanoparticles is very stable in water and at high heat, which is not true of many other types of absorbers, including most metal-organic frameworks (MOFs), when embedded in membranes.

The researchers hope to be able to tune the nanoparticles to remove other types of toxic chemicals, including a common groundwater contaminant: PFAS, or polyfluoroalkyl substances, which are found in plastics. The new process, which they call ion-capture electrodialysis, also could potentially remove radioactive isotopes from nuclear power plant effluent. Their study is to be published this week in the journal

"Electrodialysis is a known method for doing desalination, and here we are doing it in a way that incorporates these new particles in the membrane material and captures targeted toxic ions or neutral solutes, like boron," Long said. "So, while you are driving ions through this membrane, you are also decontaminating the water for, say, mercury. But these membranes can also be highly selective for removing other metals, like copper and iron, at high capacity."

Water shortages are becoming commonplace around the world, including in California and the American West, exacerbated by climate change and population growth.
Coastal communities are increasingly installing plants to desalinate ocean water, but inland communities, too, are looking for ways to turn contaminated sources -- groundwater, agricultural runoff and industrial waste -- into clean, safe water for crops, homes and factories.

While reverse osmosis and electrodialysis work well for removing salt from high-salinity water sources, such as seawater, the concentrated brine left behind can have high levels of metals, including cadmium, chromium, mercury, lead, copper, zinc, gold and uranium. But the ocean is becoming increasingly polluted by industry and agricultural runoff, and inland sources even more so.

"This would be especially useful for those areas that have low levels of contaminants that are still toxic at these low levels, as well as different wastewater sites that have lots of types of toxic ions in their streams," Long said.

Most desalination processes remove salt -- which exists largely as sodium and chlorine ions in water -- using a reverse osmosis membrane, which allows water through, but not ions, or an ion exchange polymer, which allows ions through, but not water. The new technology merely adds porous nanoparticles, each about 200 nanometers in diameter, that capture specific ions while allowing the sodium, chlorine and other non-targeted charged molecules to pass through.

Long designs and studies porous materials that can be decorated with unique molecules that capture targeted compounds from liquid or gas streams: carbon dioxide from power plant emissions, for example. The nanoparticles used in these polymer membranes are called porous aromatic frameworks, or PAFs, which are three-dimensional networks of carbon atoms linked by compounds made up of multiple ring-shaped molecules -- chemical groups referred to as aromatic compounds. The internal structure is related to that of a diamond, but with the link between carbon atoms lengthened by the aromatic linker to create lots of internal space.
Various molecules can be attached to the aromatic linkers to capture specific chemicals. To capture mercury, for example, sulfur compounds called thiols, which are known to tightly bind mercury, are attached. Added methylated sulfur groups enable capture of copper, and groups containing oxygen and sulfur capture iron. The altered nanoparticles make up about 20% of the weight of the membrane but, because they are very porous, account for about 45% of the volume.

Calculations suggest that a kilogram of the polymer membrane could strip essentially all of the mercury from 35,000 liters of water containing 5 parts per million (ppm) of the metal before requiring regeneration of the membrane.

Uliana showed in his experiments that boric acid, a compound of boron that is toxic to crops, can be removed by these membranes, though with diffusion dialysis, which relies on a concentration gradient to drive the chemical -- which is not ionic, like metals -- through the membrane to be captured by the PAF nanoparticles.

"We tried different types of high-salinity water -- for example, groundwater, industrial wastewater and also brackish water -- and the method works for each of them," he said. "It seems to be versatile for different water sources; that was one of the design principles we wanted to put into this."

Uliana also demonstrated that the membranes can be reused many times -- at least 10, but likely more -- without losing their ability to absorb ionic metals. And membranes containing PAFs tuned to absorb metals easily release their absorbed metals for capture and reuse.

"It is a technology where, depending on what your toxic impurities are, you could customize the membrane to deal with that type of water," Long added. "You may have problems with lead, say, in Michigan, or iron and arsenic in Bangladesh. So, you target the membranes for specific contaminated water sources. These materials really knock it down to often immeasurable levels."
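The stated capacity figure (5 ppm of mercury across 35,000 liters per kilogram of membrane) can be checked with back-of-the-envelope arithmetic, using the common approximation that 1 ppm in dilute aqueous solution is about 1 mg/L:

```python
# Quick sanity check of the quoted mercury capacity per kg of membrane.
volume_l = 35_000          # liters treated per kg of membrane
conc_mg_per_l = 5          # 5 ppm in dilute water ~ 5 mg/L
mercury_g = volume_l * conc_mg_per_l / 1000  # mg -> g

print(f"mercury captured per kg of membrane: {mercury_g:.0f} g")
# -> 175 g of Hg per kg, i.e. roughly 17.5% of the membrane's own mass
```

That implied capacity of about 175 g of mercury per kilogram is consistent with the thiol-decorated PAFs making up a large fraction of the membrane's volume.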
Environment
2021
April 15, 2021
https://www.sciencedaily.com/releases/2021/04/210415141834.htm
The whitest paint is here -- and it's the coolest. Literally.
In an effort to curb global warming, Purdue University engineers have created the whitest paint yet. Coating buildings with this paint may one day cool them off enough to reduce the need for air conditioning, the researchers say.
In October, the team created an ultra-white paint that pushed limits on how white paint can be. Now they've outdone that. The newer paint not only is whiter but also can keep surfaces cooler than the formulation that the researchers had previously demonstrated.

"If you were to use this paint to cover a roof area of about 1,000 square feet, we estimate that you could get a cooling power of 10 kilowatts. That's more powerful than the central air conditioners used by most houses," said Xiulin Ruan, a Purdue professor of mechanical engineering.

The researchers believe that this white may be the closest equivalent of the blackest black, "Vantablack," which absorbs up to 99.9% of visible light. The new whitest paint formulation reflects up to 98.1% of sunlight -- compared with the 95.5% of sunlight reflected by the researchers' previous ultra-white paint -- and sends infrared heat away from a surface at the same time.

Typical commercial white paint gets warmer rather than cooler. Paints on the market that are designed to reject heat reflect only 80%-90% of sunlight and can't make surfaces cooler than their surroundings.

The team's research paper showing how the paint works publishes Thursday (April 15) as the cover of the journal ACS Applied Materials & Interfaces.

Two features give the paint its extreme whiteness. One is the paint's very high concentration of a chemical compound called barium sulfate, which is also used to make photo paper and cosmetics white.

"We looked at various commercial products, basically anything that's white," said Xiangyu Li, a postdoctoral researcher at the Massachusetts Institute of Technology, who worked on this project as a Purdue Ph.D. student in Ruan's lab. "We found that using barium sulfate, you can theoretically make things really, really reflective, which means that they're really, really white."

The second feature is that the barium sulfate particles are all different sizes in the paint.
How much each particle scatters light depends on its size, so a wider range of particle sizes allows the paint to scatter more of the light spectrum from the sun.

"A high concentration of particles that are also different sizes gives the paint the broadest spectral scattering, which contributes to the highest reflectance," said Joseph Peoples, a Purdue Ph.D. student in mechanical engineering.

There is a little bit of room to make the paint whiter, but not much without compromising the paint. "Although a higher particle concentration is better for making something white, you can't increase the concentration too much. The higher the concentration, the easier it is for the paint to break or peel off," Li said.

The paint's whiteness also means that the paint is the coolest on record. Using high-accuracy temperature-reading equipment called thermocouples, the researchers demonstrated outdoors that the paint can keep surfaces 19 degrees Fahrenheit cooler than their ambient surroundings at night. It can also cool surfaces 8 degrees Fahrenheit below their surroundings under strong sunlight during noon hours. The paint's solar reflectance is so effective, it even worked in the middle of winter. During an outdoor test with an ambient temperature of 43 degrees Fahrenheit, the paint still managed to lower the sample temperature by 18 degrees Fahrenheit.

This white paint is the result of six years of research building on attempts going back to the 1970s to develop radiative cooling paint as a feasible alternative to traditional air conditioners. Ruan's lab had considered over 100 different materials, narrowed them down to 10 and tested about 50 different formulations for each material. Their previous whitest paint was a formulation made of calcium carbonate, an earth-abundant compound commonly found in rocks and seashells.

The researchers showed in their study that, like commercial paint, their barium sulfate-based paint can potentially handle outdoor conditions.
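The article's headline numbers lend themselves to a quick back-of-envelope check. The sketch below is not from the Purdue paper; the peak solar irradiance and the net thermal-emission figure are assumptions typical of the radiative-cooling literature, used only to show that 98.1% reflectance plausibly yields the quoted ~10 kilowatts for a 1,000-square-foot roof.

```python
# Back-of-envelope check of the "10 kW from 1,000 sq ft" claim.
# Assumed (not from the paper): 1000 W/m^2 peak solar irradiance and a
# net sky-window thermal emission of ~120 W/m^2, a mid-range literature value.
AREA_M2 = 1000 * 0.0929          # 1,000 sq ft in m^2 (~92.9 m^2)
SOLAR = 1000.0                   # W/m^2, assumed peak solar irradiance
REFLECTANCE = 0.981              # from the article

absorbed_solar = (1 - REFLECTANCE) * SOLAR           # ~19 W/m^2 absorbed
net_thermal_emission = 120.0                         # W/m^2, assumed
net_cooling_kw = (net_thermal_emission - absorbed_solar) * AREA_M2 / 1000

print(f"Absorbed solar: {absorbed_solar:.0f} W/m^2")
print(f"Estimated net cooling power: {net_cooling_kw:.1f} kW")  # ~9-10 kW
```

Under these assumptions the estimate lands within about 10% of the article's figure; the point is only that a sub-2% solar absorption leaves the sky-window emission as a net cooling surplus.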
The technique that the researchers used to create the paint also is compatible with the commercial paint fabrication process.

Patent applications for this paint formulation have been filed through the Purdue Research Foundation Office of Technology Commercialization. This research was supported by the Cooling Technologies Research Center at Purdue University and the Air Force Office of Scientific Research through the Defense University Research Instrumentation Program (Grant No. FA9550-17-1-0368). The research was performed at Purdue's FLEX Lab and Ray W. Herrick Laboratories and the Birck Nanotechnology Center of Purdue's Discovery Park.
Environment
2021
April 15, 2021
https://www.sciencedaily.com/releases/2021/04/210415114139.htm
Experts' predictions for future wind energy costs drop significantly
Technology and commercial advancements are expected to continue to drive down the cost of wind energy, according to a survey led by Lawrence Berkeley National Laboratory (Berkeley Lab) of the world's foremost wind power experts. Experts anticipate cost reductions of 17%-35% by 2035 and 37%-49% by 2050, driven by bigger and more efficient turbines, lower capital and operating costs, and other advancements. The findings are described in a journal article.
The study summarizes a global survey of 140 wind experts on three wind applications -- onshore (land-based) wind, fixed-bottom offshore wind, and floating offshore wind. The anticipated future costs for all three types of wind energy are half what experts predicted in a similar Berkeley Lab study in 2015. The study also uncovered insights on the possible magnitude of and drivers for cost reductions, anticipated technology trends, and grid-system value-enhancement measures.

"Wind has experienced accelerated cost reductions in recent years, both onshore and offshore, making previous cost forecasts obsolete. The energy sector needs a current assessment," said Ryan Wiser, senior scientist at Berkeley Lab. "Our 'expert elicitation' survey complements other methods for evaluating cost-reduction potential by shedding light on how cost reductions might be realized and by clarifying the important uncertainties in these estimates."

President Biden signed an Executive Order in January aiming to maximize offshore wind potential and has identified wind power as a key component of the nation's renewed efforts to combat climate change. Renewable energy sources such as wind and solar will play an important role in efforts to reach net zero carbon emissions by mid-century.

Significant opportunities for, but uncertainty in, cost reductions: Under a "best guess" (or median) scenario, experts anticipate 17%-35% reductions in the levelized cost of energy by 2035 and 37%-49% reductions by 2050 across the three wind applications studied, relative to 2019 baseline values. Levelized costs reflect the average cost of energy per unit of electricity output over the lifetime of an electricity plant and are useful for evaluating technology progress.
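The levelized-cost definition above can be made concrete with a short calculation: discounted lifetime costs divided by discounted lifetime generation. This is a standard textbook formula, not the Berkeley Lab survey's methodology, and the plant figures below are hypothetical round numbers chosen only for illustration.

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of energy ($/MWh): discounted lifetime costs
    divided by discounted lifetime electricity output."""
    disc = sum((1 + discount_rate) ** -t for t in range(1, lifetime_years + 1))
    total_costs = capex + annual_opex * disc     # capex paid up front
    total_energy = annual_mwh * disc
    return total_costs / total_energy

# Hypothetical onshore-wind plant: 100 MW, $1,300/kW capex,
# $40/kW-yr operating cost, 40% capacity factor, 25-yr life, 5% discount rate.
capex = 1300 * 100_000            # $130 million
opex = 40 * 100_000               # $4 million per year
annual_mwh = 100 * 8760 * 0.40    # ~350,400 MWh per year

print(f"LCOE: ${lcoe(capex, opex, annual_mwh, 25, 0.05):.0f}/MWh")
```

With these assumed inputs the formula yields a cost in the tens of dollars per MWh; the survey's projected percentage reductions would apply against a baseline computed the same way.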
There are greater absolute reductions (and more uncertainty) in the levelized cost of energy for offshore wind compared with onshore wind, and a narrowing gap between fixed-bottom and floating offshore wind. Notwithstanding the maturation of both onshore and offshore wind technology, there is substantial room for continued improvement, and costs could be even lower: experts predict a 10% chance that reductions will be 38%-53% by 2035 and 54%-64% by 2050. At the same time, there is uncertainty in these projections, illustrated by the range in expert views and by the "high cost" scenario in which cost reductions are relatively modest.

Multiple drivers for cost reduction -- larger turbines are on the horizon: There are five key factors that impact the levelized cost of energy: upfront capital cost, ongoing operating costs, capacity factor, project design life, and cost of financing. Experts anticipate continued improvements across all dimensions, with the relative contribution varying by wind application. "Forecasts that consider only improvements in capital cost will, at best, capture about 45% of the cost reduction opportunity," noted study co-author Joe Rand, also of Berkeley Lab.

A key driver in these improvements is turbine size, according to experts. For onshore wind, growth is expected not only in generator ratings (to 5.5 megawatts [MW] on average in 2035, up from 2.5 MW in 2019) but also in two other factors that increase capacity -- rotor diameters and hub heights. Offshore wind turbines are expected to get even bigger, to 17 MW on average in 2035, up from 6 MW in 2019. Floating offshore wind is anticipated to gain market share, growing from its current pre-commercial state and accounting for up to 25% of new offshore wind projects by 2035.

Implications for the future of wind energy: Wind energy has grown rapidly, but its long-term contribution to energy supply depends, in part, on future costs and value.
The new study finds that cost reductions have accelerated in recent years: faster than previously predicted by most forecasters, and faster than historical rates of decline. The experts surveyed anticipate future reductions and growing use of value-enhancement measures, both for onshore wind and offshore wind.

"All else being equal, these trends will enable wind to play a larger role in global energy supply than previously thought while facilitating energy-sector decarbonization," concluded co-author Joachim Seel, also with Berkeley Lab. "Analysts, investors, planners, and policymakers should avoid outdated assumptions and forecasts." At the same time, as documented in the study, uncertainties in the magnitude of future cost reduction are significant, illustrating the importance of embedding uncertainty considerations in modeling and in policy, planning, investment, and research decisions.
Environment
2021
April 15, 2021
https://www.sciencedaily.com/releases/2021/04/210415114125.htm
Transparent nanolayers for more solar power
There is no cheaper way to generate electricity today than with the sun. Power plants are currently being built in sunny locations that will supply solar electricity for less than two cents per kilowatt hour. Solar cells available on the market based on crystalline silicon make this possible with efficiencies of up to 23 percent; they therefore hold a global market share of around 95 percent. With even higher efficiencies of more than 26 percent, costs could fall further. An international working group led by photovoltaics researchers from Forschungszentrum Jülich now plans to reach this goal with a nanostructured, transparent material for the front of solar cells and a sophisticated design. The scientists report on the results of many years of research in a scientific journal article.
Silicon solar cells have been steadily improved over the past decades and have already reached a very high level of development. However, the disturbing effect of recombination still occurs after the absorption of sunlight and the photovoltaic generation of electrical charge carriers. In this process, negative and positive charge carriers that have already been generated combine and cancel each other out before they can be used for the flow of solar electricity. This effect can be countered by special materials that have a special property -- passivation.

"Our nanostructured layers offer precisely this desired passivation," says Malte Köhler, former PhD student and first author from the Jülich Institute for Energy and Climate Research (IEK-5), who has since received his doctorate. In addition, the ultra-thin layers are transparent -- so the incidence of light is hardly reduced -- and exhibit high electrical conductivity.

"No other approach so far combines these three properties -- passivation, transparency, conductivity -- as well as our new design," says Dr. Kaining Ding, head of the Jülich working group. A first prototype of the Jülich TPC solar cell achieved a high efficiency of 23.99 percent (±0.29 percent) in the laboratory. This value was also confirmed by the independent CalTeC laboratory of the Institute for Solar Energy Research in Hamelin (ISFH). This means that the Jülich TPC solar cell still ranks slightly below the best crystalline silicon cells made in laboratories to date. But simulations carried out in parallel have shown that efficiencies of more than 26 percent are possible with TPC technology.

"In addition, we have only used processes in manufacturing that can be integrated relatively quickly into series production," Ding emphasizes the advantage over other research approaches.
With this strategy, the Jülich scientists pave the way for their development from the laboratory to large-scale industrial solar cell production without too much effort.

Several process steps were necessary to produce the layers of the TPC solar cell. On a thin layer of silicon dioxide, the researchers deposited a double layer of tiny pyramid-shaped nanocrystals of silicon carbide -- applied at two different temperatures. Finally, a transparent layer of indium tin oxide followed. Ding and colleagues used wet chemical processes, chemical vapor deposition (CVD) and a sputtering process.

For their success, the Jülich researchers from IEK-5 and the Jülich Ernst Ruska Center for Electron Microscopy worked closely together with several institutes in the Netherlands, China, Russia and Ecuador. The partners include researchers from RWTH Aachen University, the University of Duisburg-Essen, the Technical Universities of Delft and Eindhoven, the Universidad San Francisco de Quito, the University and Kutateladze Institute of Thermophysics in Novosibirsk and Sun Yat-Sen University in Guangzhou.

In further steps, Kaining Ding's research group plans to further optimize the power yield of its TPC solar cells. "We expect solar cell manufacturers to show great interest in our technology," Ding says.
Environment
2021
April 14, 2021
https://www.sciencedaily.com/releases/2021/04/210414202438.htm
Satellite map of human pressure on land provides insight on sustainable development
The coronavirus pandemic has led researchers to switch gears or temporarily abandon projects due to health protocols or not being able to travel. But for Patrick Keys and Elizabeth Barnes, husband and wife scientists at Colorado State University, this past year led to a productive research collaboration.
They teamed up with Neil Carter, assistant professor at the University of Michigan, on a newly published paper. Keys, lead author and a research scientist in CSU's School of Global Environmental Sustainability, said the team used machine learning to produce the map, which reveals where abrupt changes in the landscape have taken place around the world. The map shows a near-present snapshot of effects from deforestation, mining, expanding road networks, urbanization and increasing agriculture.

"The map we've developed can help people understand important challenges in biodiversity conservation and sustainability in general," said Keys. This type of a map could be used to monitor progress for the United Nations Sustainable Development Goal 15 (SDG15), "Life on Land," which aims to foster sustainable development while conserving biodiversity.

Barnes, an associate professor in CSU's Department of Atmospheric Science, did the heavy lifting on the data side of the project. While staggering parenting duties with Keys, she wrote code like never before, working with trillions of data points and training up to eight separate algorithms to cover different parts of the world. She then merged the algorithms to provide a seamless classification for the whole planet.

At first, the two researchers had to learn to speak the other's work language. "Pat initially had an idea for this research, and I said, 'Machine learning doesn't work that way,'" said Barnes. She then sketched out the components with him: The input is something we want to be able to see from space, like a satellite image; and the output is some measure of what humans are doing on Earth. The middle part of the equation was machine learning.

Keys said what Barnes designed is a convolutional neural network, which is commonly used for interpreting images.
It's similar to how Facebook works when the site suggests tagging friends in a photo. "It's like our eyes and our brains," he said.

In developing the algorithm, they used existing data that classified human impacts on the planet, factors like roads and buildings, and grazing lands for livestock and deforestation. Then, the convolutional neural network learned how to accurately interpret satellite imagery, based on this existing data.

The researchers started with Indonesia, a country that has experienced rapid change over the last 20 years. By the end of the summer, after they were confident about what they identified in Indonesia using machine learning, Keys suggested that they look at the entire globe. "I remember telling him it's not possible," said Barnes. "He knows whenever I say that, I will go back and try and make it work. A week later, we had the whole globe figured out."

Barnes said using machine learning is not fool-proof, and it requires some follow-up to ensure that data are accurate. "Machine learning will always provide an answer, whether it's garbage or not," she explained. "Our job as scientists is to determine if it is useful."

Keys spent many nights on Google Earth reviewing over 2,000 places on the globe in the year 2000 and then compared those sites with 2019. He noted changes and confirmed the data with Barnes. The research team also did a deeper dive into three countries -- Guyana, Morocco and Gambia -- to better understand what they found.

In the future, when new satellite data is available, Keys said the team can quickly generate a new map. "We can plug that data into this now-trained neural network and generate a new map," he said. "If we do that every year, we'll have this sequential data that shows how human pressure on the landscape is changing."

Keys said the research project helped lift his spirits over the last year. "Honestly, I have had a tough time during the pandemic," he said.
"Looking back, I was able to work on this project that was exciting, fun, interesting and open-ended, and with great people. It brightened the pandemic considerably."
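The convolution operation at the heart of the network described above can be shown in a few lines. This is a generic illustration, not the researchers' code: a hand-rolled 2-D convolution applied to a hypothetical 5x5 "satellite patch," where an edge-detecting kernel responds strongly at an abrupt land-cover boundary such as a deforestation front. Real convolutional neural networks stack many such filters with learned weights.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a list-of-lists image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Dot product of the kernel with the image window at (i, j)
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Hypothetical patch: "forest" (1s) on the left, "cleared land" (0s) on the right.
patch = [[1, 1, 1, 0, 0],
         [1, 1, 1, 0, 0],
         [1, 1, 1, 0, 0],
         [1, 1, 1, 0, 0]]
edge_kernel = [[1, -1],
               [1, -1]]   # responds where values change left-to-right

print(conv2d(patch, edge_kernel))
# → [[0, 0, 2, 0], [0, 0, 2, 0], [0, 0, 2, 0]]
```

The nonzero column marks exactly where the land cover changes, which is the kind of local feature a trained network learns to associate with human pressure.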
Environment
2021
April 14, 2021
https://www.sciencedaily.com/releases/2021/04/210414132032.htm
Channel migration plays leading role in river network evolution
A new study by former University of Illinois Urbana-Champaign graduate student Jeffrey Kwang, now at the University of Massachusetts, Amherst; Abigail Langston, of Kansas State University; and Illinois civil and environmental engineering professor Gary Parker takes a closer look at the vertical and lateral -- or depth and width -- components of river erosion and drainage patterns. The study is newly published.
"A tree's dendritic structure exists to provide fresh ends for leaves to grow and collect as much light as possible," Parker said. "If you chop off some branches, they will regrow in a dendritic pattern. In our study, we wanted to find out if river systems behave similarly when their paths are altered, even though existing numerical models cannot replicate this."

In a previous study conducted at Illinois, Parker and Kwang experimented with sandbox and waterflow landscape models with meandering, S-shaped streams imprinted onto them. With the water left running, the systems eventually rerouted the S-shaped channel into the ever-familiar dendritic pattern over time -- something that the numerical models do not predict.

"That told us there was some key element missing in the numerical models," Kwang said. "One thing I observed was that the channels in the model sandbox streams were migrating laterally, or across the landscape, to reorganize the drainage network. This lateral migration has been observed in other researchers' experiments but is not captured by the numerical models. I thought that this has to be where the numerical and physical models differ."

Soon after Parker and Kwang's realization, Kwang and Langston met at the Summer Institute on Earth-Surface Dynamics at the St. Anthony Falls Laboratory at the University of Minnesota. They discovered a mutual interest in lateral stream erosion.

"Working through the existing river drainage models, Jeffrey and I found that the initial conditions in landscape evolution models have been overlooked," Langston said. "Usually, they started with a flat, featureless surface with very small, randomized bumps.
We wondered if introducing more complex initial conditions, like the meandering stream imprint Jeffrey used in his earlier experiment, would make a big difference in the numerical models."

Changing the initial modeling conditions also had the researchers rethinking the importance of lateral versus vertical erosion, challenging the traditional modeling practice, which concentrates on vertical erosion. They decided to run numerical simulations using vertical erosion and another set using both vertical and lateral erosion.

Incorporating these new ideas, the team created a landscape evolution model that simulates a river network with an initial S-shaped channel and vertical erosion. After running the model 5 million years into the future, the river carves a deep canyon that retains the S-shaped pattern, which is not very realistic, Kwang said.

At the 5 million-year mark, the team introduced to the model a vertical and lateral erosion scenario, developed by Langston and University of Colorado, Boulder professor Gregory Tucker. The channels begin to migrate and reorganize the river network into a more realistic dendritic pattern. At the 10 million-year mark, the model starts to resemble a tree and continues to become more dendritic through to the end of the model at 15 million years.

"Our work shows that lateral migration of channels, a mechanism commonly ignored in landscape evolution models, has potential to restructure river networks drastically," Kwang said.

The team plans to examine the role that sediment type and geologic features such as mountains, faults and fractures play in this process. There are places where the underlying geology has an enormous influence on drainage patterns, the researchers said, and accounting for them in the models is needed.

Understanding how rivers evolve naturally or rebound after engineering measures have rerouted them will help decision-makers plan future land use, the researchers said.
"It is important to look ahead tens to hundreds or even thousands of years when planning the storage of toxic waste, for example," Kwang said. "Lateral migration of nearby rivers could pose a significant threat over time."

"We've known about lateral migration of mountain rivers for years, but nobody thought to put it into a model and run it at hundreds to thousands to millions of years," Parker said. "This was the first time that anyone has attempted this, and we are very excited about the results."

The National Science Foundation supported this research. Parker also is affiliated with the Center for Advanced Study and geology and geography and geographic information sciences at Illinois. Kwang is a postdoctoral researcher in geosciences at the University of Massachusetts, Amherst. Langston is a professor of geography and geospatial sciences at Kansas State University.
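The vertical-erosion component these models share can be sketched in a toy form. The following is not the authors' model: it is a minimal stream-power incision law, dz/dt = U - K A^m S^n, applied to a single 1-D channel profile, with all parameter values assumed for illustration. (The paper's key addition, lateral channel migration, requires a 2-D grid and is omitted here.)

```python
# Toy 1-D stream-power incision model: uplift U versus incision K * A^m * S^n.
# All parameters are illustrative assumptions, not values from the study.
K, m, n = 1e-5, 0.5, 1.0        # erodibility and stream-power exponents
U = 1e-3                        # uplift rate, m/yr
dx, dt, steps = 100.0, 100.0, 2000   # grid spacing (m), timestep (yr), steps

# Initial ramp profile; the outlet (index 0) is held at base level.
z = [float(i) for i in range(50)]

for _ in range(steps):
    for i in range(1, len(z)):
        area = i * dx * 1000.0                    # crude drainage-area proxy, m^2
        slope = max(z[i] - z[i - 1], 0.0) / dx    # downstream slope
        z[i] += dt * (U - K * area**m * slope**n) # uplift minus incision

relief = z[-1] - z[0]
print(f"Relief after {steps * dt:.0f} years: {relief:.1f} m")
```

Run long enough, the profile approaches a steady state where slope adjusts so incision balances uplift everywhere; adding lateral migration, as the study does, lets entire channels move sideways rather than only cutting down.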
Environment
2021
April 13, 2021
https://www.sciencedaily.com/releases/2021/04/210413170705.htm
World's protected areas need more than a 'do not disturb' sign
Lessons learned from the world's protected forests: Just declaring a plot of land protected isn't enough -- conservation needs thoughtful selection and enforcement.
A group of scientists, many tied to Michigan State University, examined nearly 55,000 protected areas across the world to understand what it took to effectively protect their forests -- a key benchmark to protecting habitat and preserving natural resources. They conclude that it's important to protect the forests exposed to the most threats in areas close to cities and to be prepared to be strict in enforcing rules intended to stop deforestation.

Preserving forests means more trees to suck up greenhouse gasses, as well as prevent erosion, mitigate flooding, purify water and quell sandstorms. The paper notes some high-profile protected areas have suffered a loss of wildlife meant to be protected.

"Protecting forests is crucial to achieve sustainability," said Jianguo "Jack" Liu, MSU Rachel Carson Chair in Sustainability and director of the Center for Systems Integration and Sustainability. "To ensure we are directing our efforts to the right places, it's important to scrutinize protected areas across the globe."

In this comprehensive analysis, the group concluded that declaring an area as protected is not enough and that more attention needs to be given to improving the quality of forest protection and protecting the right forests. Currently, a global pattern of declaring remote areas as protected is missing the target, which instead is better focused on natural areas more in danger of being exploited.
Environment
2021
April 13, 2021
https://www.sciencedaily.com/releases/2021/04/210413144912.htm
Life expectancy lower near superfund sites
Living near a hazardous waste or Superfund site could cut your life short by about a year, reports Hanadi S. Rifai, John and Rebecca Moores Professor of Civil and Environmental Engineering at the University of Houston. The findings appear in a newly published study.
The analysis shows a decrease of more than two months in life expectancy for those living near a Superfund site. When coupled with high disadvantage of sociodemographic factors like age, sex, marital status and income, the decrease could be nearly 15 months, according to the analysis. Prior studies confirmed that those living near hazardous waste sites generally have greater sociodemographic disadvantage and, as a result, poorer health. The average life expectancy in the U.S. is 78.7 years, and millions of children have been raised within less than a one-mile radius from a federally designated Superfund site.

"We have ample evidence that contaminant releases from anthropogenic sources (e.g., petrochemicals or hazardous waste sites) could increase the mortality rate in fence-line communities," reports Rifai. "Results showed a significant difference in life expectancy among census tracts with at least one Superfund site and their neighboring tracts with no sites."

Nationally there are thousands of so-called Superfund, or contaminated, sites that pose a risk to human health and the environment. These sites include manufacturing facilities, processing plants, landfills and mining sites where hazardous waste was dumped, left out in the open or poorly managed.

The study presents a nationwide geocoded statistical modeling analysis of the presence of Superfund sites, their flood potential, and the impact on life expectancy independently and in context of other sociodemographic determinants. Life expectancy is one of the most basic indicators of public health. Studies show a 1% increase in life expectancy could lead to a 1.7% to 2% increase in population.

Analysis revealed that out of 12,717 census tracts with at least one Superfund site, the adverse effect of this presence was more severe on the ones with higher sociodemographic disadvantage.
For instance, the presence of a Superfund site in a census tract with smaller than median income ($52,580) could reduce life expectancy by as much as seven months. While many studies have broken down mortality rates associated with different diseases, only a few have paid attention to hazardous waste and Superfund sites and their potential impact on mortality rates.

Other recent national studies showed a significant correlation between the residential proximity to Superfund sites and the occurrence of Non-Hodgkin's lymphoma, especially among males. In Texas, the Texas Department of State Health Services recently examined a cancer cluster in downtown Houston around a former railroad creosote treatment facility, finding the observed number of childhood acute lymphoblastic leukemia cases was greater than expected based on cancer rates in Texas.

Rifai also examined the impact of flooding, which could cause the transport of contaminants from Superfund sites and potentially affect neighborhoods farther than the nearby fence-line communities. "When you add in flooding, there will be ancillary or secondary impacts that can potentially be exacerbated by a changing future climate," said Rifai. "The long-term effect of the flooding and repetitive exposure has an effect that can transcend generations."

Joining Rifai on this project are Amin Kiaghadi, University of Houston, and Clint N. Dawson, University of Texas at Austin.
Environment
2021
April 13, 2021
https://www.sciencedaily.com/releases/2021/04/210413134735.htm
Northern star coral study could help protect tropical corals
As the Rhode Island legislature considers designating the Northern Star Coral an official state emblem, researchers are finding that studying this local creature's recovery from a laboratory-induced stressor could help better understand how to protect endangered tropical corals.
A new study published today examines that question. The stony Northern Star Coral naturally occurs off the coast of Rhode Island and other New England states in brown colonies with high (symbiotic) densities and in white colonies with low (aposymbiotic) densities of a symbiotic dinoflagellate alga. The study found that those corals with algal symbionts -- organisms that are embedded within the coral's tissue and are required by tropical corals to survive -- recovered their mucus microbiomes more consistently and more quickly.

The study also identified six bacterial taxa that played a prominent role in reassembling the coral back to its healthy microbiome. This is the first microbiome manipulation study on this coral.

"The work is important because it suggests that this coral may be able to recover its mucus microbiome following disturbance, it identifies specific microbes that may be important to assembly, and it demonstrates that algal symbionts may play a previously undocumented role in the microbial recovery and resilience to environmental change," the paper states.

With thermal bleaching and disease posing major threats to tropical corals, this research, along with other work on tropical corals, "provides a major step toward identifying the microbiome's roles in maintaining coral resilience," the paper notes.

"We think that the algae are helping the coral select the microbes that live with it, and this suggestion of symbiont-microbe coordination following disturbance is a new concept for corals," said paper co-author Amy Apprill, associate scientist at the Woods Hole Oceanographic Institution.

"Worldwide, coral reefs are in crisis. Any time we see corals recover, that's always good news. It shows that they can combat a stressor and figure out how to become healthy again," said Apprill. "What we found here is translatable to tropical corals which are faced with different stressors, such as warming water, disease, and pollution.
This paper suggests that the symbiotic algae play a major role in providing consistency and resilience to the coral microbiome."

"When we think about corals, it's usually assumed that we're thinking about the tropics and the bright blue water and where it's warm, sunny, and sandy. However, the Northern Star Coral lives in murkier and much colder waters, yet it can still teach us a lot about expanding our understanding of corals," said lead author Shavonna Bent, a student in the MIT-WHOI Joint Program in Oceanography/Applied Ocean Science and Engineering.

The Northern Star Coral is an ideal emblem for Rhode Island, said paper co-author Koty Sharp, an associate professor at Roger Williams University who is leading the effort for official designation of the coral. The coral is small like the state; it's New England-tough in dealing with large temperature fluctuations; and it's a local, offering plenty of insight that can help address global problems.

Committees from both the Rhode Island House and Senate have held hearings on the proposed legislation. The Senate has approved the bill, and the House could vote on it in the coming month. Assuming the House also approves the bill, it will be sent to Rhode Island Gov. Daniel McKee for signing into law.

Sharp said the designation effort has a big educational component. "If designating this as a state emblem allows us to teach more people about the power of basic research to support conservation, or if this allows us to teach a generation of school children about the local animals that live around them, then this state coral will have a great deal of value," she said.
Environment
2021
April 13, 2021
https://www.sciencedaily.com/releases/2021/04/210413124342.htm
COVID-19 in our dust may help predict outbreaks, study finds
A study done in rooms where COVID-19 patients were isolated shows that the virus's RNA -- part of the genetic material inside a virus -- can persist up to a month in dust.
The study did not evaluate whether dust can transmit the virus to humans. It could, however, offer another option for monitoring COVID-19 outbreaks in specific buildings, including nursing homes, offices or schools.

Karen Dannemiller, senior author of the study, has experience studying dust and its relationship to potential hazards like mold and microbes. "When the pandemic started, we really wanted to find a way that we could help contribute knowledge that might help mitigate this crisis," said Dannemiller, assistant professor of civil, environmental and geodetic engineering and environmental health sciences at The Ohio State University. "And we've spent so much time studying dust and flooring that we knew how to test it."

The study, published today (April 13, 2021), offers another non-invasive avenue for monitoring buildings for COVID-19 outbreaks, especially as more people are vaccinated and return to communal spaces.

Municipalities and others have tested wastewater to evaluate the prevalence of COVID-19 in a given community -- gene copies and fragments of the virus live in human waste, and by testing wastewater, local governments and others can determine how widespread the virus might be, even if people are asymptomatic. Dust monitoring could offer similar understanding on a smaller scale -- say, a specific nursing home, hospital or school.

"In nursing homes, for example, you're still going to need to know how COVID is spreading inside the building," said Nicole Renninger, lead author of the paper and an engineering graduate student in Dannemiller's lab. "For surveillance purposes, you need to know if you are picking up an outbreak that's going on right now."

For this study, the research team worked with the crews responsible for cleaning the rooms at Ohio State where students who tested positive for COVID-19 were isolated. They also collected samples from two homes where people who tested positive for COVID-19 lived.
They gathered vacuum bags of dust from the cleaning crews and from the homes. The researchers also tested swabs collected from surfaces in the rooms. They found genetic material from SARS-CoV-2 -- the virus that causes COVID-19 -- in 97% of the bulk dust samples and in 55% of the surface swabs. The cleaning crews sprayed a chlorine-based disinfectant in the rooms prior to cleaning; the researchers believe the disinfectant destroyed the envelope and/or capsid -- the outer coat surrounding the virus -- likely rendering it incapable of transmission. The research team tested the samples when they arrived at the lab, shortly after the rooms were cleaned, then tested the samples again weekly. After four weeks, the virus's RNA in the vacuum bags had not significantly decayed. "We weren't sure that the genetic material would survive -- there are many different organisms in dust, and we weren't sure we'd see any viral RNA at all," Renninger said. "And we were surprised when we found that the actual RNA itself seems to be lasting a pretty long time." Testing dust to monitor for COVID-19 outbreaks would likely be most useful for smaller-scale communities with a high-risk population -- a nursing home, for example, Dannemiller said. Testing indoor dust is also likely less expensive at that scale than testing wastewater or testing all individuals directly on a routine basis. "We wanted to demonstrate that dust could be complementary to wastewater for surveillance," Dannemiller said. "Wastewater is great for a large population, but not everybody sheds the virus in feces, and you have to collect wastewater samples, which not everyone wants to do. 
People are already vacuuming these rooms, so dust may be a good option for some groups." Even before this study was published, Dannemiller said the researchers presented their findings to an industry group that represents maintenance and cleaning staff, with a recommendation: "If they can wait at least an hour or more after a person leaves the room to clean, they should," she said, citing previous studies of viral viability on other materials and in aerosols. Other Ohio State researchers who worked on this study include Nick Nastasi, Ashleigh Bope, Samuel Cochran, Sarah Haines, Neeraja Balasubrahmaniam, Katelyn Stuart and Natalie Hull. Kyle Bibby, an associate professor of engineering at the University of Notre Dame, and Aaron Bivins, a postdoc in his group, also contributed.
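The weekly re-testing described above is, in effect, a decay-rate measurement: if the RNA were degrading, log-transformed copy counts would fall roughly linearly over time. A minimal sketch of that fit, using hypothetical qPCR copy numbers rather than the study's actual data:

```python
import math

def fit_decay_rate(weeks, copies):
    """Least-squares slope of ln(copies) vs. time, i.e. the per-week
    exponential decay rate. A slope near zero means a stable signal,
    as the study reported for RNA in vacuum-bag dust; a strongly
    negative slope means rapid degradation."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(math.log(c) for c in copies) / n
    num = sum((x - mean_x) * (math.log(c) - mean_y)
              for x, c in zip(weeks, copies))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

# Hypothetical weekly qPCR copy counts (NOT the study's data):
weeks = [0, 1, 2, 3, 4]
copies = [1000.0, 980.0, 1010.0, 960.0, 990.0]

rate = fit_decay_rate(weeks, copies)
print(f"decay rate: {rate:+.4f} per week")  # near zero -> no significant decay
```

A series that halved every week would instead give a slope of about -0.69 (that is, -ln 2) per week, so the fitted slope directly distinguishes "stable for a month" from rapid loss.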
Environment
2021
April 12, 2021
https://www.sciencedaily.com/releases/2021/04/210412142717.htm
Bottom-up is the way forward for nitrogen reduction at institutions
Nitrogen is an element basic for life -- plants need it, animals need it, it's in our DNA -- but when there's too much nitrogen in the environment, things can go haywire. On Cape Cod, excess nitrogen in estuaries and salt marshes can lead to algal blooms, fish kills, and degradation of the environment.
"This bottom-up approach is all about balancing the needs of various stakeholders to come up with the best, most feasible solution for the institution," says Sarah Messenger, lead author and research assistant in the MBL Ecosystems Center. At the MBL, as at many institutions like it, the biggest sources of nitrogen are food production and consumption. That's because nitrogen is used as fertilizer to grow our food and produce the feed that supports livestock animals. Researchers found that switching to an all-vegan diet (no animal products) in dining halls would make a huge dent in the institution's nitrogen footprint, but MBL's dining team called it a non-starter. The kitchen staff isn't trained for that, nor would the customers be open to it, they said. It would also mean big changes for the department's budget. "The problem here is not just at an institutional level, but at a family level, at the community level, at regional, state, and national levels. The problem is that these things cost a lot of money and they are difficult to implement," says Javier Lloret, MBL research scientist. The scientists also found that upgrading the institution's HVAC systems, fitting new, more efficient windows, or going 100% solar could cut the institution's nitrogen footprint. But all of those solutions come with a large price tag. "The MBL is really small, so we don't have the personnel resources of larger institutions," says Messenger. "We don't have a sustainability department. We have limited manpower, limited resources, and limited money that we can put towards making these big changes." But what the MBL and many smaller institutions do have are people who are experts at what they do. 
So instead of searching for a solution to the institution's nitrogen footprint that could require a large budget commitment from the director or the board of trustees, the scientists identified on-the-ground stakeholders in the departments that were responsible for the biggest portion of MBL's nitrogen footprint -- food and facilities -- and modeled "low-effort" solutions for those departments. They focused on ideas that didn't require new personnel, a large investment or a change in normal operations, and could be accomplished over the short term (three to five years). "The goal was to only model changes that these department leaders could implement in their departments without needing sweeping changes and institution approval," says Messenger, adding that the team "completely disregarded any reduction strategies that weren't acceptable to the people that know their department best." For dining services, an all-vegan or all-vegetarian menu was a no-go, but replacing 20% of MBL's beef use with chicken or fish resulted in a 2.6% reduction in the institution's nitrogen footprint, according to the researchers' model. Swapping out 10% of meat meals with vegetarian options would reduce the MBL's nitrogen footprint by 5.7%. They found that serving less meat could also reduce costs for the catering department. The MBL's facilities team was already implementing an upgrade of the institution's lighting, so the research team added those upgrades to their model -- calculating a 7.3% drop in the nitrogen footprint of the utilities sector and a 2% drop in the MBL's overall nitrogen footprint. Purchasing extra renewable energy credits to power the MBL with solar energy was also modeled. Combining all the approaches, the scientists calculated a 7.7% drop in the MBL's nitrogen footprint with no major disruption to operations and no additional monetary support from the institution. "It might not seem like a lot, but 8% is a lot of nitrogen," said Lloret. 
"These small changes do make a difference." While no measures have been implemented yet, the team said they were encouraged by the response from the department stakeholders and hoped that the MBL, and other small institutions like it, could use this paper to examine ways to minimize their nitrogen and carbon footprints. "We all want the same thing," says Messenger. "We would all love the MBL to reduce the impact on our environment as much as possible, but exactly how to get there is challenging. We're hoping that this paper can be a map forward."
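The modeling described above reduces to simple weighted arithmetic: each strategy cuts a fraction of one sector's footprint, and the institution-wide reduction is the footprint-weighted sum of those cuts. A minimal sketch with hypothetical numbers (not the MBL's published figures):

```python
# Sector-weighted footprint arithmetic. All numbers below are
# hypothetical placeholders, not the MBL's actual data.

def overall_reduction(sector_footprints, sector_reductions):
    """Institution-wide fractional reduction: each strategy removes a
    fraction of one sector's nitrogen, weighted by that sector's share
    of the total footprint."""
    total = sum(sector_footprints.values())
    saved = sum(footprint * sector_reductions.get(sector, 0.0)
                for sector, footprint in sector_footprints.items())
    return saved / total

# Hypothetical split (kg N/year): food dominates, as the article notes.
footprints = {"food": 6000.0, "utilities": 3000.0, "transport": 1000.0}
# Hypothetical strategies: a 10% cut in the food sector, 7.3% in utilities.
reductions = {"food": 0.10, "utilities": 0.073}

print(f"overall drop: {overall_reduction(footprints, reductions):.1%}")
```

This weighting is why a large percentage cut in a small sector moves the institutional total far less than a modest cut in the dominant food sector.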
Environment
2021
April 9, 2021
https://www.sciencedaily.com/releases/2021/04/210409104450.htm
Glass injection molding
Glass is ubiquitous, from high-tech products in the fields of optics, telecommunications, chemistry and medicine to everyday objects such as bottles and windows. However, shaping glass is mainly based on processes such as melting, grinding or etching. These processes are decades old, technologically demanding, energy-intensive and severely limited in terms of the shapes that can be realized. For the first time, a team led by Prof. Dr. Bastian E. Rapp from the Laboratory of Process Technology at the Department of Microsystems Engineering at the University of Freiburg, in collaboration with the Freiburg-based start-up Glassomer, has developed a process that makes it possible to form glass easily, quickly and in almost any shape using injection molding. The researchers presented their results in the journal
"For decades, glass has often been the second choice when it comes to materials in manufacturing processes because its formation is too complicated, energy-intensive and unsuitable for producing high-resolution structures," explains Rapp. "Polymers, on the other hand, allow all of this, but their physical, optical, chemical and thermal properties are inferior to glass. As a result, we have combined polymer and glass processing. Our process will allow us to quickly and cost-effectively replace both mass-produced products and complex polymer structures and components with glass." Injection molding is the most important process in the plastics industry and enables the fast and cost-effective production of components at high throughput in almost any shape and size. Transparent glass could not be molded in this process until now. With the newly developed Glassomer injection molding technology, based on a special granulate designed in-house, it is now possible to mold glass at high throughput at just 130 °C. The injection-molded components are then converted into glass in a heat treatment process: The result is pure quartz glass. This process requires less energy than conventional glass melting, making it more energy-efficient. The formed glass components have a high surface quality, so that post-treatment steps such as polishing are not required. The novel designs made possible by Glassomer's glass injection molding technology have a wide range of applications, from data technology, optics and solar technology to lab-on-a-chip devices and medical technology. "We see great potential especially for small high-tech glass components with complicated geometries. In addition to transparency, the very low coefficient of expansion of quartz glass also makes the technology interesting. Sensors and optics work reliably at any temperature if the key components are made of glass," explains Dr. 
Frederik Kotz, group leader at the Laboratory of Process Technology and Chief Scientific Officer (CSO) at Glassomer. "We have also been able to show that micro-optical glass coatings can increase the efficiency of solar cells. This technology can now be used to produce cost-effective high-tech coatings with high thermal stability. There are a number of commercial opportunities for it." The team around Frederik Kotz and Markus Mader, a doctoral student at the Laboratory of Process Technology, solved long-standing problems in the injection molding of glass, such as porosity and particle abrasion. In addition, key process steps in the new method were designed to use water as the base material, making the technology more environmentally friendly and sustainable. Bastian Rapp is executive director of the Freiburg Materials Research Center FMF and a member of the Cluster of Excellence Living, Adaptive and Energy-autonomous Materials Systems (livMatS) at the University of Freiburg, which develops novel, bio-inspired material systems. Rapp is also co-founder and Chief Technical Officer (CTO) of Glassomer GmbH, which develops high-resolution 3D printing technologies for glass. His research has earned him a Consolidator Grant from the European Research Council (ERC), among other awards.
Environment
2021
April 8, 2021
https://www.sciencedaily.com/releases/2021/04/210408152301.htm
Pulp mill waste hits the road instead of the landfill
Waste materials from the pulp and paper industry have long been seen as possible fillers for building products like cement, but for years these materials have ended up in the landfill. Now, researchers at UBC Okanagan are developing guidelines to use this waste for road construction in an environmentally friendly manner.
The researchers were particularly interested in wood-based pulp mill fly ash (PFA), which is a non-hazardous commercial waste product. The North American pulp and paper industry generates more than one million tons of ash annually by burning wood in power boiler units for energy production. When the ash is sent to a landfill, the producer shoulders a cost of about $25 to $50 per ton, so mills are looking for alternative uses for these by-products. "Anytime we can redirect waste to a sustainable alternative, we are heading in the right direction," says Dr. Sumi Siddiqua, associate professor at UBC Okanagan's School of Engineering. Dr. Siddiqua leads the Advanced Geomaterials Testing Lab, where researchers uncover different reuse options for industry byproducts. This new research, co-published with Postdoctoral Research Fellow Dr. Chinchu Cherian, investigated using untreated PFA as an economically sustainable low-carbon binder for road construction. "The porous nature of PFA acts like a gateway for the adhesiveness of the other materials in the cement that enables the overall structure to be stronger and more resilient than materials not made with PFA," says Dr. Cherian. "Through our material characterization and toxicology analysis, we found further environmental and societal benefits: producing this new material was more energy efficient and produced low-carbon emissions." But Dr. Siddiqua notes the construction industry is concerned that toxins used in pulp and paper mills may leach out of the reused material. "Our findings indicate that because the cementation bonds developed through the use of the untreated PFA are so strong, little to no release of chemicals is apparent. Therefore, it can be considered a safe raw material for environmental applications." While Dr. 
Cherian explains that further research is required to establish guidelines for PFA modifications to ensure its consistency, she is confident their research is on the right track. "Overall, our research affirms the use of recycled wood ash from pulp mills for construction activities such as making sustainable roads and cost-neutral buildings can derive enormous environmental and economic benefits," she says. "And not just benefits for the industry, but to society as a whole by reducing waste going to landfills and reducing our ecological footprints." In the meantime, while cement producers can start incorporating PFA into their products, Dr. Cherian says they should be continually testing and evaluating the PFA properties to ensure overall quality.
Environment
2021
April 8, 2021
https://www.sciencedaily.com/releases/2021/04/210408152258.htm
Bacteria help plants grow better
Every third-grader knows that plants absorb nutrients from the soil through their roots. The fact that they also release substances into the soil is probably less well known. And this seems to make the lives of plants a lot easier.
That is at least the conclusion of the current study. The participating researchers studied several maize varieties that differ significantly in their yield. In their search for the cause, they came across an enzyme, flavone synthase 2. "The high-yield inbred line 787 we studied contains large amounts of this enzyme in its roots," explains Dr. Peng Yu of the Institute of Crop Science and Resource Conservation (INRES) at the University of Bonn. "It uses this enzyme to make certain molecules from the flavonoid group and releases them into the soil." Flavonoids give flowers and fruits their color. In the soil, however, they perform a different function: They ensure that very specific bacteria accumulate around the roots. And these microbes, in turn, cause the formation of more lateral branches on these roots, called lateral roots. "This allows the maize plant to absorb more nitrogen from the environment," explains Prof. Dr. Frank Hochholdinger of the Institute of Crop Science and Resource Conservation (INRES). "This means the plant grows faster, especially when nitrogen supplies are scarce." The researchers were able to demonstrate in experiments how well this works. They did this using a maize variety with the abbreviation LH93, which normally produces rather puny plants. However, that changed when they planted this variety in soil where the high-performance line 787 had previously grown: LH93 then grew significantly better. The effect disappeared when the botanists sterilized the soil before repotting. This shows that the enriched bacteria are indeed responsible for the turbo growth, because they were killed during sterilization. The researchers were able to demonstrate in another experiment that the microorganisms really do promote the growth of lateral roots. Here, they used a maize variety that cannot form lateral roots due to a mutation. However, when they supplemented the soil with the bacterium, the roots of the mutant started to branch out. 
It is not yet clear how this effect comes about. Additionally, with microbial support the maize coped far better with nitrogen deficiency. Nitrogen is extremely important for plant growth -- so much so that farmers artificially increase its amount in the soil by applying fertilizer. However, some of the fertilizer is washed off the fields into streams with the rain or enters the groundwater. It can also enter the atmosphere in the form of nitrogen oxides or as ammonia gas, where it contributes to the greenhouse effect. The production of nitrogenous fertilizers furthermore requires a great deal of energy. "If we breed crops that can improve their nitrogen usage with the help of bacteria, we might be able to significantly reduce environmental pollution," Yu hopes. The study shows that plants help to shape the conditions of the soil in which they grow, in ways that ultimately benefit them. However, this aspect has been neglected in breeding until now. Dr. Peng Yu adds that, in general, many interactions of the root system with soil organisms are not yet well enough understood. He wants to help change that: He has just taken over the leadership of an Emmy Noether junior research group at the University of Bonn, which is dedicated to precisely this topic. With its Emmy Noether Program, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) offers young researchers an opportunity to qualify for a university professorship within six years.
Environment
2021
April 8, 2021
https://www.sciencedaily.com/releases/2021/04/210408152256.htm
Scant evidence that 'wood overuse' at Cahokia caused collapse
Whatever ultimately caused inhabitants to abandon Cahokia, it was not because they cut down too many trees, according to new research from Washington University in St. Louis.
Archaeologists from Arts & Sciences excavated around earthen mounds and analyzed sediment cores to test a persistent theory about the collapse of Cahokia, the pre-Columbian Native American city in southwestern Illinois that was once home to more than 15,000 people. No one knows for sure why people left Cahokia, though many environmental and social explanations have been proposed. One oft-repeated theory is tied to resource exploitation: specifically, that Native Americans from densely populated Cahokia deforested the area, an environmental misstep that could have resulted in erosion and localized flooding. But such musings about self-inflicted disaster are outdated -- and they're not supported by physical evidence of flooding issues, Washington University scientists said. "There's a really common narrative about land use practices that lead to erosion and sedimentation and contribute to all of these environmental consequences," said Caitlin Rankin, an assistant research scientist at the University of Illinois at Urbana-Champaign, who conducted this work as part of her graduate studies at Washington University. "When we actually revisit this, we're not seeing evidence of the flooding," Rankin said. "The notion of looming ecocide is embedded in a lot of thinking about current and future environmental trajectories," said Tristram R. "T.R." Kidder, the Edward S. and Tedi Macias Professor of Anthropology in Arts & Sciences at Washington University. "With a growing population and more mouths to feed, overconsumption of all resources is a real risk." "Inevitably, people turn to the past for models of what has happened. If we are to understand what caused changes at sites like Cahokia, and if we are to use these as models for understanding current possibilities, we need to do the hard slogging that critically evaluates different ideas," added Kidder, who leads an ongoing archaeological research program at the Cahokia Mounds State Historic Site. 
"Such work allows us to sift through possibilities so we can aim for those variables that do help us to explain what happened in the past -- and explore if this has a lesson to tell us about the future." Their new archaeological work, completed while Rankin was at Washington University, shows that the ground surface on which the mound was constructed remained stable until industrial development. The presence of a stable ground surface from Mississippian occupation to the mid-1800s does not support the expectations of the so-called "wood-overuse" hypothesis, the researchers said. This hypothesis, first proposed in 1993, suggests that tree clearance in the uplands surrounding Cahokia led to erosion, causing increasingly frequent and unpredictable floods of the local creek drainages in the floodplain where Cahokia was constructed. Rankin noted that archaeologists have broadly applied narratives of ecocide -- the idea that societies fail because people overuse or irrevocably damage the natural resources they rely on -- to help explain the collapse of past civilizations around the world. Although many researchers have moved beyond classic narratives of ecocide made popular in the 1990s and early 2000s, Cahokia is one such major archaeological site where untested hypotheses have persisted. "We need to be careful about the assumptions that we build into these narratives," Rankin said. "In this case, there was evidence of heavy wood use," she said. "But that doesn't factor in the fact that people can reuse materials -- much as you might recycle. We should not automatically assume that deforestation was happening, or that deforestation caused this event." Kidder said: "This research demonstrates conclusively that the over-exploitation hypothesis simply isn't tenable. This conclusion is important because the hypothesis at Cahokia -- and elsewhere -- is sensible on its face. 
The people who constructed this remarkable site had an effect on their environment. We know they cut down tens of thousands of trees to make the palisades -- and this isn't a wild estimate, because we can count the number of trees used to build and re-build this feature. Wood depletion could have been an issue." "The hypothesis came to be accepted as truth without any testing," Kidder said. "Caitlin's study is important because she did the hard work -- and I do mean hard, and I do mean work -- to test the hypothesis, and in doing so has falsified the claim. I'd argue that this is the exciting part; it's basic and fundamental science. By eliminating this possibility, it moves us toward other explanations and requires we pursue other avenues of research."
Environment
2021
April 8, 2021
https://www.sciencedaily.com/releases/2021/04/210408131454.htm
Solar and wind power could mitigate conflict in northeast Africa
A new study shows that several disagreements between Ethiopia, Sudan and Egypt around Africa's largest hydropower plant, the new Grand Ethiopian Renaissance Dam (GERD), could be alleviated by massively expanding solar and wind power across the region. Adapting GERD operation to support grid integration of solar and wind power would provide tangible energy and water benefits to all involved countries, creating regional win-win situations. "Our results call for integrated hydro-solar-wind planning to be taken up in the GERD negotiations," says Sebastian Sterl, energy planning expert at Vrije Universiteit Brussel (VUB) and KU Leuven in Belgium and lead author of the study, published in
For several years, political tensions between Egypt, Sudan and Ethiopia have been escalating in a conflict surrounding Africa's largest hydropower plant: the nearly complete Grand Ethiopian Renaissance Dam (GERD) on the Blue Nile. Ethiopia, which started filling GERD's massive reservoir in 2020, says it needs GERD's electricity to lift millions of its citizens out of poverty. But Egypt is deeply concerned by the mega-dam's consequences for the Nile river, since its agriculture depends completely on Nile water; Egypt raised the issue with the UN Security Council earlier in 2020. Sudan, meanwhile, appears caught between both sides. Ongoing African Union-led mediation talks to agree on long-term operation of the dam have so far yielded little fruit. Some have even invoked the looming threat of a "water war" between Cairo and Addis Ababa. Sebastian Sterl, energy planning expert at VUB and KU Leuven and lead author of the study, explains: "The Blue Nile is a highly seasonal river. The GERD's reservoir is so large that it can store the river's full peak flow and deliver hydropower at a stable rate throughout the year, removing the flow seasonality. This makes a lot of sense from the Ethiopian perspective, but it overhauls the natural timing of the water reaching Sudan and Egypt. Behind many disagreements around GERD lies the question of who, if anyone, should be allowed to exert such control over the Nile river." A group of researchers based in Belgium and Germany, led by Sterl, have now identified a surprising method that could solve multiple disagreements around the dam at once and benefit all three countries. The idea boils down to massively deploying modern, clean solar and wind power to serve as a complement to GERD's hydropower. 
More concretely: the researchers propose that Ethiopia and its neighbours deploy large-scale solar and wind farms, work towards a regionally integrated power grid, and then agree on Ethiopia operating GERD in synergy with solar and wind power. This would mean turbining less water on sunny and windy days, and more water during cloudy, windless spells and at nighttime, to "firm up" the always-fluctuating solar and wind power. The researchers realised that sunshine and wind in many regions of Ethiopia, Sudan and their eastern African neighbours have opposite seasonal profiles to the Blue Nile flow. In these places, the sun shines brightest and the winds blow strongest during the dry season. This "seasonal synergy" between water, sun and wind lies at the heart of the researchers' findings. The study found that, if GERD were operated to back up solar and wind power throughout the year -- both hourly and seasonally -- this would automatically mean producing less hydropower during the dry season, and more during the wet season, without negatively affecting GERD's yearly average power output. The water flowing out of the dam would then have a seasonality somewhat resembling the natural river flow, with a clear peak in the wet season. According to Sterl, if GERD were operated in this way, "Essentially, Ethiopia would have all the expected benefits of a big dam -- but for Sudan and Egypt, it would look as if the Ethiopians only built a modest, relatively small reservoir. There are many such reservoirs already on the Nile, so no country downstream of Ethiopia could really object to this." By reconciling parties around common energy and water objectives, the researchers identified at least five concrete benefits of such integrated hydro-solar-wind planning. First, Ethiopia could become Africa's largest power exporter while reducing its dependence on hydropower and lowering its electricity generation costs in the long term. 
Second, consumption of polluting fossil fuels in Sudan and other eastern African countries could be displaced by solar and wind power, backed up by GERD. Third, thanks to the proposed operation scheme of GERD, Egypt could receive more water during dry years than before and would not need to change the operation of its own High Aswan Dam. Fourth, Ethiopia would make more efficient use of its mega-dam's more than a dozen turbines by frequently producing at peak power whenever solar and wind would be unavailable. And fifth, Nile river ecology across Sudan would be less affected by the new dam, as flow seasonality is an important component of rivers' ecological sustainability. According to the authors, the entire eastern African region stands to contribute. "Ethiopia could theoretically go alone, using GERD to back up its own solar and wind power," says Sterl. "But it would work much better if, say, Sudan were to join in -- it has better solar and wind resources than Ethiopia, allowing for better hydro-solar-wind synergies and reducing the overall costs of renewable power generation. Egypt has great solar and wind resources too, as do Djibouti, South Sudan and other eastern African countries. Regional cooperation in a common, Eastern African Power Pool could be key." The results of the study suggest that integrated hydro-solar-wind planning could be a highly interesting option to discuss in the ongoing GERD negotiations between Ethiopia, Sudan and Egypt. "You could call it a win-win situation," says Prof. Wim Thiery, climate researcher at VUB and co-author of the study. "The entire region would benefit." The researchers obtained their results by using a dedicated, highly detailed computer model (REVUB) conceived to simulate the operation of hydropower dams alongside other renewables, like solar and wind power. The model was originally created by the same VUB researchers in 2019 to study renewable electricity scenarios for West Africa. 
Later, as the GERD negotiations became more and more present in the media, the researchers realised they could directly apply the same tool to study solar and wind power as potential solutions to the GERD conflict.
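The operating rule the researchers describe -- turbining less water when solar and wind are abundant and more when they are not -- amounts to a residual-load dispatch. A minimal sketch of that rule follows; all values are hypothetical illustrations, not actual GERD parameters or REVUB model code:

```python
# Residual-load dispatch: hydro fills the gap between demand and
# variable solar/wind output, clamped to the plant's turbine limits.
# All numbers here are hypothetical, not actual GERD figures.

def dispatch_hydro(demand, solar, wind, p_min, p_max):
    """Hourly hydro output = demand minus solar and wind, clamped to
    [p_min, p_max]. Less water is turbined in sunny/windy hours, more
    in calm or dark ones."""
    return [min(p_max, max(p_min, d - s - w))
            for d, s, w in zip(demand, solar, wind)]

# Four illustrative hours (MW): sunny noon, windy evening, calm night,
# cloudy day.
demand = [100, 100, 100, 100]
solar = [60, 10, 0, 20]
wind = [20, 50, 10, 5]

print(dispatch_hydro(demand, solar, wind, p_min=0, p_max=90))
```

Applied seasonally, the same rule turbines less during the sunny, windy dry season and more during the cloudy wet season, which is what gives the outflow its quasi-natural seasonal peak.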
Environment
2021
April 8, 2021
https://www.sciencedaily.com/releases/2021/04/210408131423.htm
Living fossils: Microbe discovered in evolutionary stasis for millions of years
It's like something out of science fiction. Research led by Bigelow Laboratory for Ocean Sciences has revealed that a group of microbes, which feed off chemical reactions triggered by radioactivity, have been at an evolutionary standstill for millions of years. The discovery could have significant implications for biotechnology applications and scientific understanding of microbial evolution.
"This discovery shows that we must be careful when making assumptions about the speed of evolution and how we interpret the tree of life," said Eric Becraft, the lead author on the paper. "It is possible that some organisms go into an evolutionary full-sprint, while others slow to a crawl, challenging the establishment of reliable molecular timelines." Becraft, now an assistant professor of biology at the University of Northern Alabama, completed the research as part of his postdoctoral work at Bigelow Laboratory and recently published it in a Nature publishing group journal. Because of their unique biology and isolation, the authors of the new study wanted to understand how the microbes evolved. They searched other environmental samples from deep underground. "We wanted to use that information to understand how they evolved and what kind of environmental conditions lead to what kind of genetic adaptations," said Bigelow Laboratory Senior Research Scientist Ramunas Stepanauskas, the corresponding author on the paper and Becraft's postdoctoral advisor. "We thought of the microbes as though they were inhabitants of isolated islands, like the finches that Darwin studied in the Galapagos." Using advanced tools that allow scientists to read the genetic blueprints of individual cells, the researchers examined the genomes of 126 microbes obtained from three continents. Surprisingly, they all turned out to be almost identical. "It was shocking," Stepanauskas said. "They had the same makeup, and so we started scratching our heads." Scientists found no evidence that the microbes can travel long distances, survive on the surface, or live long in the presence of oxygen. 
So, once researchers determined that there was no possibility the samples were cross-contaminated during research, plausible explanations dwindled.

"The best explanation we have at the moment is that these microbes did not change much since their physical locations separated during the breakup of supercontinent Pangaea, about 175 million years ago," Stepanauskas said. "They appear to be living fossils from those days. That sounds quite crazy and goes against the contemporary understanding of microbial evolution."

The finding is surprising given the pace of microbial evolution, which often happens at a much more accelerated rate; many well-studied bacteria can evolve measurably within just a few years.

Stepanauskas and his colleagues hypothesize the standstill evolution they discovered is due to the microbes' powerful protections against mutation, which have essentially locked their genetic code. If the researchers are correct, this would be a rare feature with potentially valuable benefits.

Microbial enzymes that create copies of DNA molecules, called DNA polymerases, are widely used in biotechnology. Enzymes with high fidelity, or the ability to produce copies with few differences between the copy and the original, are especially valuable.

"There's a high demand for DNA polymerases that don't make many mistakes," Stepanauskas said. "Such enzymes may be useful for DNA sequencing, diagnostic tests, and gene therapy."

Beyond potential applications, the results of this study could have far-reaching implications and change the way scientists think about microbial genetics and the pace of their evolution.

"These findings are a powerful reminder that the various microbial branches we observe on the tree of life may differ vastly in the time since their last common ancestor," Becraft said. "Understanding this is critical to understanding the history of life on Earth."
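The scale of the surprise can be illustrated with back-of-the-envelope molecular-clock arithmetic. The sketch below uses generic textbook values for mutation rate, genome size, and generation time; these are assumptions for illustration, not figures from the study:

```python
# Rough molecular-clock estimate: how many substitutions would a typical
# bacterial lineage be expected to accumulate since Pangaea broke apart?
# All parameter values are illustrative assumptions, not data from the study.
MUTATION_RATE = 1e-9        # substitutions per site per generation (typical bacterium)
GENOME_SIZE = 3_000_000     # sites; order of magnitude for a bacterial genome
GENERATIONS_PER_YEAR = 1    # deep-subsurface microbes assumed to divide very slowly
YEARS = 175_000_000         # approximate time since the breakup of Pangaea began

expected_substitutions = MUTATION_RATE * GENOME_SIZE * GENERATIONS_PER_YEAR * YEARS
print(f"Expected substitutions per lineage: {expected_substitutions:,.0f}")
```

Even at a glacial one division per year, the estimate is on the order of half a million substitutions per lineage. Finding nearly identical genomes on three continents therefore implies either far fewer divisions or unusually error-proof DNA replication, which is the hypothesis the authors favor.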
Environment
2021
April 8, 2021
https://www.sciencedaily.com/releases/2021/04/210408112401.htm
Carbon dots from human hair boost solar cells
QUT researchers have used carbon dots, created from human hair waste sourced from a Brisbane barbershop, to create a kind of "armour" to improve the performance of cutting-edge solar technology.
Perovskite solar cells, a relatively new photovoltaic technology, are seen as the best PV candidate to deliver low-cost, highly efficient solar electricity in coming years. They have proven to be as effective in power conversion efficiency as the current commercially available monocrystalline silicon solar cells, but the hurdle for researchers in this area is to make the technology cheaper and more stable.

Unlike silicon cells, they are created with a compound that is easily manufactured, and as they are flexible they could be used in scenarios such as solar-powered clothing, backpacks that charge your devices on the go and even tents that could serve as standalone power sources.

This is the second major piece of research to use human-hair-derived carbon dots as a multifunctional material.

Last year, Associate Professor Prashant Sonar led a research team, including Centre for Materials Science research fellow Amandeep Singh Pannu, that turned hair scraps into carbon nanodots by breaking down the hairs and then burning them at 240 degrees Celsius. In that study, the researchers showed the carbon dots could be turned into flexible displays that could be used in future smart devices.

In this new study, Professor Wang's research team, including Dr Ngoc Duy Pham and Mr Pannu, working with Professor Prashant Sonar's group, used the carbon nanodots on perovskite solar cells out of curiosity. 
Professor Wang's team had previously found that nanostructured carbon materials could be used to improve a cell's performance.

After adding a solution of carbon dots into the process of making the perovskites, Professor Wang's team found that the carbon dots formed a wave-like perovskite layer in which the perovskite crystals are surrounded by the carbon dots.

"It creates a kind of protective layer, a kind of armour," Professor Wang said. "It protects the perovskite material from moisture or other environmental factors, which can cause damage to the materials."

The study found that perovskite solar cells covered with the carbon dots had a higher power conversion efficiency and a greater stability than perovskite cells without the carbon dots.

Professor Wang has been researching advanced solar cells for about 20 years, and working with perovskite cells since they were invented about a decade ago, with the primary objective of developing cost-effective, stable photovoltaic materials and devices to help solve the world's energy problem.

"Our final target is to make solar electricity cheaper, easier to access, longer lasting and to make PV devices lightweight because current solar cells are very heavy," Professor Wang said.

"The big challenges in the area of perovskite solar cells are solving the stability of the device so it can operate for 20 years or longer, and developing a manufacturing method that is suitable for large-scale production.

"Currently, all the reported high-performance perovskite solar cells have been made in a controlled environment with extremely low levels of moisture and oxygen, and with very small cell areas that are practically unfeasible for commercialisation.

"To make the technology commercially viable, the challenges of fabricating efficient, large-area, stable, flexible perovskite solar panels at low cost need to be overcome.

"This can only be achieved through a deep understanding of the material properties in large-scale production and under 
industrially compatible conditions."

Professor Wang is particularly interested in how perovskite cells could be used in the future to power spacecraft.

The International Space Station is powered by four solar arrays, which can generate up to 120 kW of electricity. But one disadvantage of the current technology of space PVs is the weight of the payload to get them there.

While perovskite would be much lighter, one of the challenges for researchers is to develop perovskite cells able to cope with the extreme radiation and broad range of temperature variation in space -- from minus 185 degrees to more than 150 degrees Celsius.

Professor Wang said the solution could be ten years off, but researchers were continuing to gain greater insights in the area. Currently, Professor Wang's research team is collaborating with Professor Dmitri Golberg in the QUT Centre for Materials Science to understand the properties of perovskite materials under extreme environmental conditions such as strong irradiation by an electron beam and drastic temperature change.

"I'm quite optimistic given how much this technology has improved so far," Professor Wang said.
Environment
2021
April 7, 2021
https://www.sciencedaily.com/releases/2021/04/210407135735.htm
Adoption of green infrastructure tracked
In a new paper, researchers track how green infrastructure has been adopted in Tucson, Arizona, and what other cities can learn from it.
"This work came out of a long-term collaboration in Arizona trying to understand a lot of aspects of how green infrastructure (GI) is used there," says Mitchell Pavao-Zuckerman, assistant professor in Environmental Science and Technology at UMD. "We are looking at the functionality of GI, its practical benefits, but also how governance and learning around GI changes, inhibits, or helps adoption. Looking at evolution and adoption, we can see different types of players that are key, like policy entrepreneurs who are early adopters or innovators in either practice or policy and how they help diffuse knowledge around the city. Learning these lessons, we gain a lot of insight into how policy is changing, and how other areas could adapt going forward."

Funded by the National Science Foundation's (NSF) Coupled Human and Natural Systems program, Pavao-Zuckerman collaborated with the University of Arizona, the Udall Center for Public Policy in Tucson, and the University of Virginia to examine these GI trends. The researchers took a mixed-methods approach to the work, examining policy, documentation, and newspaper reports to create a timeline of GI developments in the history of the city. The timeline was then used as a starting point when interviewing stakeholders and GI players in Tucson, providing a richer context and backdrop to the interview data.

"The timeline and our approach to gathering this information is innovative; it puts a method behind anecdotal stories," explains Pavao-Zuckerman. "Studying this kind of process in an urban setting around GI is a new thing, so that is one of the unique pieces of this paper. 
In lots of places, you have this knowledge and history of how things have come about, but using the timeline and interviews to document how things have changed, and putting it within theories of adaptation and governance -- these are new frontiers for working with GI and urban environments."

As Pavao-Zuckerman describes it, Tucson provides a compelling look at how GI emerges in places that don't necessarily have water quality mandates, which are prominent in Maryland and the area surrounding the Chesapeake Bay. In Tucson and much of the Southwest, water sustainability and conservation is often the driver.

"In Maryland with the Bay, a lot of GI is implemented as a way to meet water quality standards and meet pollution reduction targets," says Pavao-Zuckerman. "But in this case, there aren't water quality mandates, and the focus is really on harvested water. A lot of water consumption in the Southwest goes to domestic irrigation for lawns and gardens, which can sometimes be up to 50% of potable water usage, so the demand is huge. You also see integration with urban tree canopy and stormwater basins that can help mitigate heat islands and buffer for future climate change when things get even hotter out there. So you see the same types of things there as in the Bay area, like curb cuts to redirect stormwater and urban tree cover, but it is coming from a different place. So it is interesting to see how you get to the same place from a different start point."

One thing that Pavao-Zuckerman and the team found in Tucson that the rest of the country can learn from is an overall culture of what is known as water ethics. Similar to the concept of One Health (the intersection and interconnectedness of animal, human, and environmental health), Tucson water municipalities call it One Water.

"Part of what we see going forward is a more holistic way of thinking about water," says Pavao-Zuckerman. 
"Stormwater is usually thought of as a waste stream that we want to get rid of as quickly as possible, but people are starting to see it as a resource and not a waste. The water municipality there calls it One Water, thinking about the integration of the whole water system. Instead of thinking of stormwater and drinking water as two separate things, we think about water collectively, and that gives you a different perspective for management. That mindset will hopefully also start to happen in other places."

Other key findings from the paper include the need to think about GI across all scales, from individual and neighborhood adoption to the city level. Additionally, there is a need for more equitable dispersion of GI to ensure environmental and social justice.

"A lot of this practice is done effectively voluntarily," explains Pavao-Zuckerman. "Neighborhoods and the city will promote it, but the city isn't necessarily going out and implementing most of these structures -- that is up to the home or property owner. Because implementation has started from policy entrepreneurs and individuals in Tucson, it didn't happen randomly and also didn't happen necessarily in communities where it is most needed. Most cities are like this, with more affluent communities having more implementation, and places that have less money or more people of color tend to have less implementation, so they are bearing the brunt of the environmental harms that they are trying to solve. So that needs to be part of the trajectory going forward, thinking about how to shift practice to reflect that and encourage cooperation at all levels and scales to be more equitable."

Overall, this paper provides a landscape of GI implementation and gives researchers, policy makers, and advocates alike a chance to understand where things are coming from so they can think more strategically about where things are headed.

"It lets us do backcasting and forwardcasting," emphasizes Pavao-Zuckerman. 
"We can see where things came from and new threads that are starting to emerge. GI is important because it adds different aspects of resilience to an environment. It helps to buffer environmental extremes, and it adds more flexibility throughout the landscape to withstand and respond to extreme events. We think of climate change as this thing that is going to be hotter, wetter, or drier, but it is the extreme ends of weather events that really hit cities and people hard, and GI is something that we think is going to really help. We are paying particular attention to the role of people and organizations in driving GI change in this work to understand it as a way for how people can shape urban transformations to make a more sustainable and resilient community."

This work is funded by the National Science Foundation, Grant Number Award #1518376, Linking Ecosystem Services and Governance of Water Resources in Urbanized Landscapes.
Environment
2021
April 6, 2021
https://www.sciencedaily.com/releases/2021/04/210406092653.htm
Aquatic biodiversity key to sustainable, nutrient-rich diets
Seafood is a pillar of global food security -- long recognized for its protein content. But research is highlighting a critical new link between the biodiversity of aquatic ecosystems and the micronutrient-rich seafood diets that help combat micronutrient deficiencies, or 'hidden hunger', in vulnerable populations.
"Getting the most nutritional value per gram of seafood is crucial in fighting hidden hunger and meeting United Nations Sustainable Development Goals," says Dr. Joey Bernhardt, an ecologist from the University of British Columbia (UBC) who led the newly published study.

"We've found that aquatic species contain distinct and complementary sets of micronutrients, so the most efficient way to fulfill our nutritional requirements is to fill our diets with small amounts of a variety of species. In order to be able to do that, we need to preserve the biodiversity of our aquatic ecosystems locally and globally."

Dr. Bernhardt and UBC biodiversity researcher Dr. Mary O'Connor analyzed nutrient concentrations in the edible portions of 547 aquatic finfish and shellfish species. While different animals offered similar amounts of protein, they varied greatly in concentrations of micronutrients like iron, zinc, calcium and two fatty acids (docosahexaenoic and eicosapentaenoic acid, known as DHA and EPA). This variation is critical to the value of biodiversity to human well-being.

Most animals did not meet a single micronutrient recommended dietary allowance (RDA) in a 100 g portion -- fewer than half reached a target of 10% RDA for calcium, iron and EPA. Increasing biodiversity from one to 10 species in seafood diets was correlated with reaching more nutrient targets established by the US Institute of Medicine -- and the nutritional value increased even when the seafood portion size remained constant.

"We know that biodiversity is a critical component of the many economic, cultural and ecological benefits humans enjoy from healthy natural ecosystems -- from elevated forest production to water quality to nutrient cycling. Our analysis proves that biodiversity also enhances nutritional metrics in aquatic systems, and this benefit is at least as great as the biodiversity benefits we've seen in other sectors," says Dr. 
O'Connor. "And this study further demonstrates the importance of biodiversity, measured as the number of different kinds of animals out there, for human well-being in wild ecosystems -- showing that protecting biodiversity in nature is as important as maintaining agro-biodiversity."

The finding is of particular importance to coastal communities, including many Indigenous communities, who eat on average 15 times more seafood than other groups -- and tend to rely more on locally available seafood.

"Aquatic ecosystems are under threat from human activities, and we're observing major changes in biodiversity patterns worldwide," adds Dr. Bernhardt. "Until now, we didn't understand the consequences of these aquatic biodiversity changes for human nutrition and health. With this new work, we have bridged the gap between biodiversity science and human nutrition science, demonstrating that aquatic biodiversity change can have direct and immediate impact on human nutrition and well-being."
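The complementarity argument can be sketched numerically. The species names and nutrient profiles below are invented for illustration (they are not the study's data); each value is the fraction of a 10%-RDA target supplied by 100 g of that species:

```python
# Illustrative sketch: species strong in *different* micronutrients mean that
# splitting a fixed 100 g portion across more species meets more targets.
# Profiles are hypothetical, chosen only to show the complementarity effect.
profiles = {
    "sardine": {"calcium": 3.2, "iron": 0.2, "zinc": 0.1, "EPA": 0.4},
    "oyster":  {"calcium": 0.1, "iron": 3.0, "zinc": 3.4, "EPA": 0.2},
    "salmon":  {"calcium": 0.1, "iron": 0.1, "zinc": 0.2, "EPA": 3.5},
}

def targets_met(species, portion_g=100):
    """Count nutrient targets met by an equal split of a fixed portion."""
    share = portion_g / len(species) / 100  # fraction of 100 g per species
    totals = {}
    for s in species:
        for nutrient, frac in profiles[s].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + frac * share
    return sum(1 for v in totals.values() if v >= 1.0)

print(targets_met(["salmon"]))                       # -> 1 (EPA only)
print(targets_met(["salmon", "oyster", "sardine"]))  # -> 4, same 100 g portion
```

The portion size never changes; only the number of species does, which mirrors the study's finding that nutritional value rose with biodiversity even at constant portion size.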
Environment
2021
April 6, 2021
https://www.sciencedaily.com/releases/2021/04/210406084051.htm
Separating beer waste into proteins for foods, and fiber for biofuels
Home brewing enthusiasts and major manufacturers alike experience the same result of the beer-making process: mounds of leftover grain. Once all the flavor has been extracted from barley and other grains, what's left is a protein- and fiber-rich powder that is typically used in cattle feed or put in landfills. Today, scientists report a new way to extract the protein and fiber from brewer's spent grain and use it to create new types of protein sources, biofuels and more.
The researchers will present their results today at the spring meeting of the American Chemical Society (ACS).

"There is a critical need in the brewing industry to reduce waste," says Haibo Huang, Ph.D., the project's principal investigator. His team partnered with local breweries to find a way to transform leftover grain into value-added products.

"Spent grain has a very high percentage of protein compared to other agricultural waste, so our goal was to find a novel way to extract and use it," says Yanhong He, a graduate student who is presenting the work at the meeting. Both Huang and He are at Virginia Polytechnic Institute and State University (Virginia Tech).

Craft brewing has become more popular than ever in the U.S. This increased demand has led to an increase in production, generating a major uptick in waste material from breweries, 85% of which is spent grain. This byproduct comprises up to 30% protein and up to 70% fiber, and while cows and other animals may be able to digest spent grain, it is difficult for humans to digest because of its high fiber content.

In order to transform this waste into something more functional, Huang and He developed a novel wet milling fractionation process to separate the protein from the fiber. Compared to other techniques, the new process is more efficient because the researchers do not have to dry the grain first. They tested three commercially available enzymes -- alcalase, neutrase and pepsin -- in this process and found that alcalase treatment provided the best separation without losing large amounts of either component. After a sieving step, the result was a protein concentrate and a fiber-rich product.

Up to 83% of the protein in the spent grain was recaptured in the protein concentrate. Initially the researchers proposed using the extracted protein as a cheaper, more sustainable replacement for fishmeal to feed farmed shrimp. 
But more recently, Huang and He have started to explore using the protein as an ingredient in food products, catering to the consumer demand for alternate protein sources. However, that still left the remaining fiber-rich product without a specific use. Last year, Huang's postdoctoral researcher Joshua O'Hair, Ph.D., reported finding a new species of bacteria that could help put the leftover fiber to use as a feedstock for biofuel.

Next, the team plans to work on scaling up the process of separating the protein and fiber components in order to keep up with the volume of spent grain generated at breweries. They are also working with colleagues to determine the economic feasibility of the separation process, as the enzymes currently used to separate the protein and fiber components are expensive. Huang and He hope to find suitable enzymes and green chemicals to make this process even more sustainable, scalable and affordable.
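The reported figures imply a simple mass balance for the fractionation step. A minimal sketch, using the article's percentages and an assumed 100 kg dry batch:

```python
# Mass balance for the wet-milling fractionation of brewer's spent grain.
# Percentages are from the article; the batch size is an assumed example.
spent_grain_kg = 100        # assumed batch, dry basis
protein_fraction = 0.30     # spent grain comprises up to 30% protein
recovery = 0.83             # up to 83% of that protein reaches the concentrate

protein_in = spent_grain_kg * protein_fraction
protein_recovered = protein_in * recovery
fiber_and_rest = spent_grain_kg - protein_recovered
print(f"Protein recovered: {protein_recovered:.1f} kg")   # ~24.9 kg
print(f"Fiber-rich residue: {fiber_and_rest:.1f} kg")     # ~75.1 kg
```

So a 100 kg dry batch would yield roughly 25 kg of protein concentrate, with the remaining three quarters available as the fiber-rich stream discussed above.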
Environment
2021
April 6, 2021
https://www.sciencedaily.com/releases/2021/04/210406092656.htm
Beef industry can cut emissions with land management, production efficiency
A comprehensive assessment of 12 different strategies for reducing beef production emissions worldwide found that industry can reduce greenhouse gas (GHG) emissions by as much as 50% in certain regions, with the most potential in the United States and Brazil. The study, "Reducing Climate Impacts of Beef Production: A synthesis of life cycle assessments across management systems and global regions," was published April 5.
A research team led by Colorado State University (CSU) and funded by the Climate and Land Use Alliance found that widespread use of improved ranching management practices in two distinct areas of beef production would lead to substantial emissions reductions. This includes increased efficiency to produce more beef per unit of GHG emitted -- growing bigger cows at a faster rate -- and enhanced land management strategies to increase soil and plant carbon sequestration on grazed lands.

Globally, cattle produce about 78% of total livestock GHG emissions. Yet, there are many known management solutions that, if adopted broadly, can reduce, but not totally eliminate, the beef industry's climate change footprint, according to lead author Daniela Cusack, an assistant professor in the Department of Ecosystem Science and Sustainability at CSU.

Overall, the research team found a 46% reduction in net GHG emissions per unit of beef was achieved at sites using carbon sequestration management strategies on grazed lands, including using organic soil amendments and restoring trees and perennial vegetation to areas of degraded forests, woodlands and riverbanks. Additionally, researchers found an overall 8% reduction in net GHGs was achieved at sites using growth efficiency strategies. Net-zero emissions, however, were only achieved in 2% of studies.

"Our analysis shows that we can improve the efficiency and sustainability of beef production, which would significantly reduce the industry's climate impact," said Cusack, also a research associate at the Smithsonian Tropical Research Institute in Panama. "But at the same time, we will never reach net-zero emissions without further innovation and strategies beyond land management and increased growth efficiency. There's a lot of room, globally, for improvement."

Researchers analyzed 292 comparisons of "improved" versus "conventional" beef production systems across Asia, Australia, Brazil, Canada, Latin America and the U.S. 
The analysis revealed that Brazilian beef production holds the most potential for emissions reductions. In the studies analyzed, researchers found a 57% GHG emission reduction through improved management strategies for both carbon sequestration and production efficiency in Brazil. Specific strategies include improved feed quality, better breed selections and enhanced fertilizer management. The biggest impact was found in integrated field management, including intensive rotational grazing schemes, adding soil compost, reforestation of degraded areas and selectively planting forage plants bred for sequestering carbon in soils.

"My home country of Brazil has more than 52 million hectares of degraded pastureland -- larger than the state of California," said Amanda Cordeiro, co-author and a graduate student at CSU. "If we can aim for a large-scale regeneration of degraded pastures, implementation of silvo-agro-forestry systems and adoption of other diversified local management strategies to cattle production, Brazil can drastically decrease carbon emissions."

In the U.S., researchers found that carbon sequestration strategies such as integrated field management and intensive rotational grazing reduced beef GHG emissions by more than 100% -- or net-zero emissions -- in a few grazing systems. But efficiency strategies were not as successful in the U.S. studies, possibly because of a high use of the strategies in the region already.

"Our research shows the important role that ranchers can play in combatting the global climate crisis, while ensuring their livelihoods and way of life," said Clare Kazanski, co-author and North America region scientist with The Nature Conservancy. "By analyzing management strategies in the U.S. 
and around the world, our research reinforces that ranchers are in a key position to reduce emissions in beef production through various management strategies tailored to their local conditions."

Darrell Wood, a northern California rancher, is an example of a producer leading the way on climate-friendly practices. Wood's family participates in the California Healthy Soils program, which incentivizes practices with a demonstrated climate benefit.

"As a sixth-generation cattle rancher, I see nothing but upside potential from using our cattle as a tool for reducing greenhouse gas emissions," Wood said. "Taking good care of our grasslands not only benefits climate, but also wildlife and the whole ecosystem that generates clean air and water. It'll help the next generation continue our business, too."

Although the research shows a significant reduction in the GHG footprints of beef production using improved management strategies, scientists don't yet know the full potential of shifting to these emission-reducing practices because there are very few data on practice adoption levels around the world.

"Asia, for example, is one of the most rapidly growing beef markets, but there is an imbalance between the amount of research focus on improving beef production and the growing demand for beef," Cusack said. "We know with the right land management and efficiency strategies in place, it's possible to have large reductions in emissions across geographic regions, but we need to keep pushing for additional innovations to create a truly transformational shift in the way the global beef system operates to ensure a secure food supply and a healthy environment."

Additional co-authors on the paper include Alexandra Hedgpeth, Kenyon Chow and Jason Karpman (University of California, Los Angeles); and Rebecca Ryals (University of California, Merced).
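The ">100% reduction" result is less paradoxical than it sounds: if grazed land sequesters more carbon than the system emits, net emissions go negative. A minimal sketch of the arithmetic, with illustrative per-kilogram figures that are assumptions rather than the study's data:

```python
# How a greater-than-100% reduction in *net* GHG emissions is possible.
# All kg-CO2e-per-kg-beef figures below are illustrative assumptions.
def net_reduction_pct(gross, sequestered, baseline_net):
    """Percent reduction of net emissions relative to a conventional baseline."""
    net = gross - sequestered          # sequestration offsets gross emissions
    return (baseline_net - net) / baseline_net * 100

baseline = 30.0  # assumed conventional system: no sequestration credit

# Sequestration partially offsets emissions -> large but <100% reduction:
print(net_reduction_pct(gross=28.0, sequestered=14.0, baseline_net=baseline))

# Sequestration exceeds gross emissions -> net-negative, i.e. >100% reduction:
print(net_reduction_pct(gross=28.0, sequestered=31.0, baseline_net=baseline))  # 110.0
```

The second case is what the U.S. grazing systems in the study achieved: the land-based carbon sink outweighed the herd's gross emissions.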
Environment
2021
April 5, 2021
https://www.sciencedaily.com/releases/2021/04/210405143400.htm
New batteries give jolt to renewables, energy storage
The cost of harvesting solar energy has dropped so much in recent years that it's giving traditional energy sources a run for their money. However, the challenges of energy storage -- which require the capacity to bank an intermittent and seasonally variable supply of solar energy -- have kept the technology from being economically competitive.
Cornell University researchers led by Lynden Archer, Dean and Professor of Engineering, have been exploring the use of low-cost materials to create rechargeable batteries that will make energy storage more affordable. Now, they have shown that a new technique incorporating aluminum results in rechargeable batteries that offer up to 10,000 error-free cycles.

This new kind of battery could provide a safer and more environmentally friendly alternative to lithium-ion batteries, which currently dominate the market but are slow to charge and have a knack for catching fire.

The team describes the technique in their paper, "Regulating Electrodeposition Morphology in High-Capacity Aluminium and Zinc Battery Anodes Using Interfacial Metal-Substrate Bonding."

Among the advantages of aluminum is that it is abundant in the earth's crust, it is trivalent and light, and it therefore has a high capacity to store more energy than many other metals. However, aluminum can be tricky to integrate into a battery's electrodes. It reacts chemically with the glass fiber separator, which physically divides the anode and the cathode, causing the battery to short circuit and fail.

The researchers' solution was to design a substrate of interwoven carbon fibers that forms an even stronger chemical bond with aluminum. When the battery is charged, the aluminum is deposited into the carbon structure via covalent bonding, i.e., the sharing of electron pairs between aluminum and carbon atoms.

While electrodes in conventional rechargeable batteries are only two-dimensional, this technique uses a three-dimensional -- or nonplanar -- architecture and creates a deeper, more consistent layering of aluminum that can be finely controlled. The aluminum-anode batteries can be reversibly charged and discharged one or more orders of magnitude more times than other aluminum rechargeable batteries under practical conditions.
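The "trivalent and light" point can be made quantitative: aluminum's theoretical gravimetric capacity follows directly from Faraday's law, since each atom gives up three electrons from a low molar mass. A short calculation with standard physical constants (not figures from the paper):

```python
# Theoretical gravimetric capacity of an aluminium anode.
F = 96485.33   # Faraday constant, C/mol of electrons
n = 3          # electrons transferred per Al atom (Al is trivalent)
M = 26.98      # molar mass of aluminium, g/mol

charge_per_gram = n * F / M            # coulombs of charge per gram of Al
capacity_mAh_g = charge_per_gram / 3.6  # 1 mAh = 3.6 C
print(f"Theoretical capacity: {capacity_mAh_g:.0f} mAh/g")  # ~2980 mAh/g
```

For comparison, this is several times the theoretical capacity of common intercalation anodes, which is why aluminum is attractive despite the electrode-integration problems the article describes.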
Environment
2021
April 5, 2021
https://www.sciencedaily.com/releases/2021/04/210405123313.htm
This hydrogen fuel machine could be the ultimate guide to self-improvement
Three years ago, scientists at the University of Michigan discovered an artificial photosynthesis device made of silicon and gallium nitride (Si/GaN) that harnesses sunlight into carbon-free hydrogen for fuel cells with twice the efficiency and stability of some previous technologies.
Now, scientists at the Department of Energy's (DOE's) Lawrence Berkeley National Laboratory (Berkeley Lab) -- in collaboration with the University of Michigan and Lawrence Livermore National Laboratory (LLNL) -- have uncovered a surprising, self-improving property in Si/GaN that contributes to the material's highly efficient and stable performance in converting light and water into carbon-free hydrogen. Their findings were recently reported in a peer-reviewed journal.

"Our discovery is a real game-changer," said senior author Francesca Toma, a staff scientist in the Chemical Sciences Division at Berkeley Lab. Usually, materials in solar fuels systems degrade, become less stable and thus produce hydrogen less efficiently, she said. "But we discovered an unusual property in Si/GaN that somehow enables it to become more efficient and stable. I've never seen such stability."

Previous artificial photosynthesis materials are either excellent light absorbers that lack durability, or durable materials that lack light-absorption efficiency. But silicon and gallium nitride are abundant and cheap materials that are widely used as semiconductors in everyday electronics such as LEDs (light-emitting diodes) and solar cells, said co-author Zetian Mi, a professor of electrical and computer engineering at the University of Michigan who invented Si/GaN artificial photosynthesis devices a decade ago.

When Mi's Si/GaN device achieved a record-breaking 3 percent solar-to-hydrogen efficiency, he wondered how such ordinary materials could perform so extraordinarily well in an exotic artificial photosynthesis device -- so he turned to Toma for help. Mi had learned of Toma's expertise in advanced microscopy techniques for probing the nanoscale (billionths of a meter) properties of artificial photosynthesis materials through HydroGEN, a five-national-lab consortium supported by the DOE's Hydrogen and Fuel Cell Technologies Office, and led by the 
National Renewable Energy Laboratory to facilitate collaborations between National Labs, academia, and industry for the development of advanced water-splitting materials.

"These interactions of supporting industry and academia on advanced water-splitting materials with the capabilities of the National Labs are precisely why HydroGEN was formed -- so that we can move the needle on clean hydrogen production technology," said Adam Weber, Berkeley Lab's Hydrogen and Fuel Cell Technologies Lab Program Manager and Co-Deputy Director of HydroGEN.

Toma and lead author Guosong Zeng, a postdoctoral scholar in Berkeley Lab's Chemical Sciences Division, suspected that GaN might be playing a role in the device's unusual potential for hydrogen production efficiency and stability. To find out, Zeng carried out a photoconductive atomic force microscopy experiment at Toma's lab to test how GaN photocathodes could efficiently convert absorbed photons into electrons, and then recruit those free electrons to split water into hydrogen, before the material started to degrade and become less stable and efficient.

They expected to see a steep decline in the material's photon absorption efficiency and stability after just a few hours. To their astonishment, they observed a 2-3 orders of magnitude improvement in the material's photocurrent coming from tiny facets along the "sidewall" of the GaN grain, Zeng said. Even more perplexing was that the material had increased its efficiency over time, even though the overall surface of the material didn't change that much, Zeng said. 
"In other words, instead of getting worse, the material got better," he said. To gather more clues, the researchers recruited scanning transmission electron microscopy (STEM) at the National Center for Electron Microscopy in Berkeley Lab's Molecular Foundry, and angle-dependent X-ray photoelectron spectroscopy (XPS). Those experiments revealed that a 1-nanometer layer mixed with gallium, nitrogen, and oxygen -- or gallium oxynitride -- had formed along some of the sidewalls. A chemical reaction had taken place, adding "active catalytic sites for hydrogen production reactions," Toma said. Density functional theory (DFT) simulations carried out by co-authors Tadashi Ogitsu and Tuan Anh Pham at LLNL confirmed their observations. "By calculating the change of distribution of chemical species at specific parts of the material's surface, we successfully found a surface structure that correlates with the development of gallium oxynitride as a hydrogen evolution reaction site," Ogitsu said. "We hope that our findings and approach -- a tightly integrated theory-experiments collaboration enabled by the HydroGEN consortium -- will be used to further improve the renewable hydrogen production technologies." Mi added: "We've been working on this material for over 10 years -- we know it's stable and efficient. But this collaboration helped to identify the fundamental mechanisms behind why it gets more robust and efficient instead of degrading. The findings from this work will help us build more efficient artificial photosynthesis devices at a lower cost." Looking ahead, Toma said that she and her team would like to test the Si/GaN photocathode in a water-splitting photoelectrochemical cell, and that Zeng will experiment with similar materials to get a better understanding of how nitrides contribute to stability in artificial photosynthesis devices -- which is something they never thought would be possible. "It was totally surprising," said Zeng. 
"It didn't make sense -- but Pham's DFT calculations gave us the explanation we needed to validate our observations. Our findings will help us design even better artificial photosynthesis devices." "This was an unprecedented network of collaboration between National Labs and a research university," said Toma. "The HydroGEN consortium brought us together -- our work demonstrates how the National Labs' Team Science approach can help solve big problems that affect the entire world."
Environment
2021
April 5, 2021
https://www.sciencedaily.com/releases/2021/04/210405113622.htm
Lightning strikes will more than double in Arctic as climate warms
In 2019, the National Weather Service in Alaska reported spotting the first-known lightning strikes within 300 miles of the North Pole. Lightning strikes are almost unheard of above the Arctic Circle, but scientists led by researchers at the University of California, Irvine have published new research in the journal
"We projected how lightning in high-latitude boreal forests and Arctic tundra regions will change across North America and Eurasia," said Yang Chen, a research scientist in the UCI Department of Earth System Science who led the new work. "The size of the lightning response surprised us because expected changes at mid-latitudes are much smaller." The finding offers a glimpse into the changes that are in store for the Arctic as the planet continues warming; it suggests Arctic weather reports during summertime will be closer to those seen today far to the south, where lightning storms are more common. James Randerson, a professor in UCI's Department of Earth System Science who co-authored the study, was part of a NASA-led field campaign that studied wildfire occurrence in Alaska during 2015, which was an extreme year for wildfires in the state. "2015 was an exceptional fire year because of a record number of fire starts," Randerson said. "One thing that got us thinking was that lightning was responsible for the record-breaking number of fires." This led Chen to look at more than two decades of NASA satellite data on lightning strikes in northern regions and to construct a relationship between the flash rate and climatic factors. By using future climate projections from multiple models used by the United Nations, the team estimated a significant increase in lightning strikes as a result of increases in atmospheric convection and more intense thunderstorms. A lightning strike bump could open a Pandora's box of related troubles. Fires, Randerson explained, burn away short grasses, mosses, and shrubs that are important components of Arctic tundra ecosystems. Such plants cover much of the landscape, and one thing they do is keep the seeds of trees from taking root in the soil. After a fire burns away low-lying plants, however, seeds from trees can more easily grow on bare soil, allowing forest stands to expand north. 
Evergreen forests will replace what's typically a snow-covered landscape; snow's white hue reflects sunlight back out into space, but darker forests absorb solar energy, helping warm the region even further. And there's more trouble: more fires mean more permafrost -- perennially frozen soil that defines much of the Arctic landscape -- will thaw as the fires strip away the protective insulating layers of moss and dead organic matter that keep soils cool. Permafrost stores a lot of organic carbon that, once thawed, will convert to the greenhouse gases carbon dioxide and methane, which, when released, will drive even more warming. The lightning finding comes on the heels of another study, led by Randerson and published in the Journal of Geophysical Research on Monday, April 5, that describes how amplified Arctic warming and the melting of the Greenland ice sheet will scramble food webs in the surrounding oceans. Now, Chen and Randerson say, scientists need to start paying more attention to the frequency of Arctic lightning strikes so they can gauge how the story unfolds in the coming decades. "This phenomenon is very sporadic, and it's very difficult to measure accurately over long time periods," said Randerson. "It's so rare to have lightning above the Arctic Circle." Their results, he hopes, will galvanize calls for new satellite missions that can monitor Arctic and boreal latitudes for lightning strikes and the fires they might ignite. Back in 2019, the National Weather Service in Alaska released a special announcement about the North Pole lightning strikes. Such announcements, however, may struggle to make headlines by the end of the century.
Environment
2021
April 5, 2021
https://www.sciencedaily.com/releases/2021/04/210405075901.htm
Making cleaner, greener plastics from waste fish parts
Polyurethanes, a type of plastic, are nearly everywhere -- in shoes, clothes, refrigerators and construction materials. But these highly versatile materials can have a major downside. Derived from crude oil, toxic to synthesize, and slow to break down, conventional polyurethanes are not environmentally friendly. Today, researchers discuss devising what they say should be a safer, biodegradable alternative derived from fish waste -- heads, bones, skin and guts -- that would otherwise likely be discarded.
The researchers will present their results today at the spring meeting of the American Chemical Society (ACS). If developed successfully, a fish oil-based polyurethane could help meet the immense need for more sustainable plastics, says Francesca Kerton, Ph.D., the project's principal investigator. "It is important that we start designing plastics with an end-of-life plan, whether it's chemical degradation that turns the material into carbon dioxide and water, or recycling and repurposing." To make the new material, Kerton's team started out with oil extracted from the remains of Atlantic salmon, after the fish were prepared for sale to consumers. "I find it interesting how we can make something useful, something that could even change the way plastics are made, from the garbage that people just throw out," says Mikhailey Wheeler, a graduate student who is presenting the work at the meeting. Both Kerton and Wheeler are at Memorial University of Newfoundland (Canada). The conventional method for producing polyurethanes presents a number of environmental and safety problems. It requires crude oil, a non-renewable resource, and phosgene, a colorless and highly toxic gas. The synthesis generates isocyanates, potent respiratory irritants, and the final product does not readily break down in the environment. The limited biodegradation that does occur can release carcinogenic compounds. Meanwhile, demand for greener alternatives is growing. Previously, others have developed new polyurethanes using plant-derived oils to replace petroleum. However, these too come with a drawback: the crops, often soybeans, that produce the oil require land that could otherwise be used to grow food. Leftover fish struck Kerton as a promising alternative. Salmon farming is a major industry for coastal Newfoundland, where her university is located. After the fish are processed, leftover parts are often discarded, but sometimes oil is extracted from them. 
Kerton and her colleagues developed a process for converting this fish oil into a polyurethane-like polymer. First, they add oxygen to the unsaturated oil in a controlled way to form epoxides, molecules similar to those in epoxy resin. After reacting these epoxides with carbon dioxide, they link the resulting molecules together with nitrogen-containing amines to form the new material. But does the plastic smell fishy? "When we start the process with the fish oil, there is a faint kind of fish smell, but as we go through the steps, that smell disappears," Kerton says. Kerton and her team described this method in a paper last August, and since then, Wheeler has been tweaking it. She has recently had some success swapping out the amine for amino acids, which simplifies the chemistry involved. And while the amine they used previously had to be derived from cashew nut shells, the amino acids already exist in nature. Wheeler's preliminary results suggest that histidine and asparagine could fill in for the amine by linking together the polymer's components. In other experiments, they have begun examining how readily the new material would likely break down once its useful life is over. Wheeler soaked pieces of it in water, and to speed up the degradation for some pieces, she added lipase, an enzyme capable of breaking down fats like those in the fish oil. Under a microscope, she later saw microbial growth on all of the samples, even those that had been in plain water, an encouraging sign that the new material might biodegrade readily, Wheeler says. Kerton and Wheeler plan to continue testing the effects of using an amino acid in the synthesis and studying how amenable the material is to the microbial growth that could hasten its breakdown. They also intend to study its physical properties to see how it might potentially be used in real-world applications, such as in packaging or fibers for clothing.
Environment
2021
April 5, 2021
https://www.sciencedaily.com/releases/2021/04/210405075853.htm
New study ties solar variability to the onset of decadal La Nina events
A new study shows a correlation between the end of solar cycles and a switch from El Nino to La Nina conditions in the Pacific Ocean, suggesting that solar variability can drive seasonal weather variability on Earth.
If the connection outlined in the study holds up, it could improve scientists' ability to predict El Nino and La Nina events. "Energy from the Sun is the major driver of our entire Earth system and makes life on Earth possible," said Scott McIntosh, a scientist at the National Center for Atmospheric Research (NCAR) and co-author of the paper. "Even so, the scientific community has been unclear on the role that solar variability plays in influencing weather and climate events here on Earth. This study shows there's reason to believe it absolutely does and why the connection may have been missed in the past." The study was led by Robert Leamon at the University of Maryland-Baltimore County, and it is also co-authored by Daniel Marsh at NCAR. The research was funded by the National Science Foundation, which is NCAR's sponsor, and the NASA Living With a Star program. The appearance (and disappearance) of spots on the Sun -- the outwardly visible signs of solar variability -- has been observed by humans for hundreds of years. The waxing and waning of the number of sunspots takes place over approximately 11-year cycles, but these cycles do not have distinct beginnings and endings. This fuzziness in the length of any particular cycle has made it challenging for scientists to match up the 11-year cycle with changes happening on Earth. In the new study, the researchers rely on a more precise 22-year "clock" for solar activity derived from the Sun's magnetic polarity cycle, which they outlined as a more regular alternative to the 11-year solar cycle in several companion studies published recently in peer-reviewed journals. The 22-year cycle begins when oppositely charged magnetic bands that wrap the Sun appear near the star's polar latitudes, according to their recent studies. Over the cycle, these bands migrate toward the equator -- causing sunspots to appear as they travel across the mid-latitudes. The cycle ends when the bands meet in the middle, mutually annihilating one another in what the research team calls a terminator event. 
These terminators provide precise guideposts for the end of one cycle and the beginning of the next. The researchers superimposed these terminator events on sea surface temperatures in the tropical Pacific stretching back to 1960. They found that the five terminator events that occurred between that time and 2010-11 all coincided with a flip from an El Nino (when sea surface temperatures are warmer than average) to a La Nina (when sea surface temperatures are cooler than average). The end of the most recent solar cycle -- which is unfolding now -- is also coinciding with the beginning of a La Nina event. "We are not the first scientists to study how solar variability may drive changes to the Earth system," Leamon said. "But we are the first to apply the 22-year solar clock. The result -- five consecutive terminators lining up with a switch in the El Nino oscillation -- is not likely to be a coincidence." In fact, the researchers did a number of statistical analyses to determine the likelihood that the correlation was just a fluke. They found there was only a 1 in 5,000 chance or less (depending on the statistical test) that all five terminator events included in the study would randomly coincide with the flip in ocean temperatures. Now that a sixth terminator event -- and the corresponding start of a new solar cycle in 2020 -- has also coincided with a La Nina event, the chance of a random occurrence is even more remote, the authors said. The paper does not delve into what physical connection between the Sun and Earth could be responsible for the correlation, but the authors note that there are several possibilities that warrant further study, including the influence of the Sun's magnetic field on the amount of cosmic rays that escape into the solar system and ultimately bombard Earth. 
However, a robust physical link between cosmic-ray variations and climate has yet to be determined. "If further research can establish that there is a physical connection and that changes on the Sun are truly causing variability in the oceans, then we may be able to improve our ability to predict El Nino and La Nina events," McIntosh said.
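As a rough illustration of the kind of odds reported in the article (not the authors' actual statistical test), suppose each terminator event had some fixed chance p of randomly landing on an El Nino-to-La Nina flip; five independent coincidences would then occur with probability p**5. The value of p below is purely an assumption chosen for illustration:

```python
# Hedged back-of-envelope sketch, not the study's analysis: probability that
# five independent terminator events all coincide with an ocean-temperature
# flip purely by chance, given an assumed per-event coincidence probability p.
p = 0.18  # illustrative assumption, not a figure from the paper
prob_all_five = p ** 5
print(f"P(5 chance coincidences) ~ {prob_all_five:.2e} (~1 in {1 / prob_all_five:,.0f})")
```

Even a fairly generous per-event coincidence chance like this drives the joint probability below the 1-in-5,000 level the article quotes, which is why five consecutive alignments are hard to dismiss as a fluke.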
Environment
2,021
April 2, 2021
https://www.sciencedaily.com/releases/2021/04/210402095937.htm
Mapping policy for how the EU can reduce its impact on tropical deforestation
EU imports of certain products contribute significantly to deforestation in other parts of the world.
In a new study, researchers from Chalmers University of Technology, Sweden, and University of Louvain, Belgium, evaluated thousands of policy proposals for how the EU could reduce this impact, to assess which would have the largest potential to reduce deforestation -- while also being politically feasible. "Unsurprisingly, there is weaker support for tougher regulations, such as import restrictions on certain goods. But our study shows that there is broad support in general, including for certain policies that have real potential to reduce imported deforestation," says Martin Persson, Associate Professor of Physical Resource Theory at Chalmers University of Technology. Previous research has already shown the EU's great impact in this area. More than half of tropical deforestation is linked to production of food and animal feed, such as palm oil, soybeans, wood products, cocoa and coffee -- goods which the EU imports in vast quantities. The question is, what can the EU do to reduce its contribution to deforestation? "This issue is particularly interesting now, as this year the EU is planning to present legislative proposals for reducing deforestation caused by European consumption. The question has been discussed by the EU since 2008, but now something political is actually happening," says Simon Bager, a doctoral student at the University of Louvain (UCLouvain), and lead author of the study. The authors of the article mapped 1,141 different proposals, originating from open consultations and workshops, where the EU has collected ideas from companies, interest groups and think tanks. The researchers also compiled proposals from a large number of research reports, policy briefs and other publications, where different stakeholders have put forward various policy proposals. 
After grouping together similar proposals, they arrived at 86 unique suggestions. Finding proposals for measures that would have the desired effect, but that are also possible to implement in practice and enjoy the necessary political support, is no easy task. But after their extensive survey, the researchers identified two policy options that show particular promise:
- One possibility is to support multi-stakeholder forums, where companies, civil society organisations, and politicians come together to agree on possible measures for ridding a supply chain, commodity, or area of deforestation. There are positive examples here too, the most notable being the Amazon Soy Moratorium from 2006, when actors including Greenpeace and the World Wide Fund for Nature gathered with soy producers and exporters and agreed to end soy exports from deforested areas in the Amazon rainforest. "Examples such as these demonstrate the effect that multi-stakeholder forums can have. And in our opinion, it is a measure that is easier to get acceptance for, because it is an opportunity for the affected parties to be directly involved in helping design the measures themselves," says Martin Persson.
The researchers also investigated how to deal with the trade-off between policy impacts and feasibility. An important part of this is combining different complementary measures. Trade regulations on their own, for example, risk hitting poorer producing countries harder, and should therefore be combined with targeted aid to help introduce more sustainable production methods, increasing yields without having to resort to deforestation. This would also reduce the risk of goods that are produced on deforested land simply being sold in markets other than the EU. "If the EU now focuses on its contribution to deforestation, the effect may be that what is produced on newly deforested land is sold to other countries, while the EU gets the 'good' products. 
Therefore, our assessment is that the EU should ensure that the measures introduced are combined with those which contribute to an overall transition to sustainable land use in producing countries," says Simon Bager. In conclusion, the researchers summarise three essential principles needed for new measures, if the EU is serious about reducing its impact on tropical deforestation. "First, enact measures that actually are able to bring about change. Second, use a range of measures, combining different tools and instruments to contribute to reduced deforestation. Finally, ensure the direct involvement of supply chain actors within particularly important regions, expanding and broadening the measures over time," concludes Simon Bager. The authors hope that the research and identified policy options can serve as inspiration for policy makers, NGOs, industries, and other stakeholders working to address the EU's deforestation footprint. With at least 86 unique alternatives, there is a wide range of opportunities to focus on the problem -- very few of these are political 'non-starters' or proposals which would have no effect on the issue.
Environment
2021
April 2, 2021
https://www.sciencedaily.com/releases/2021/04/210402095949.htm
How the Chicxulub impactor gave rise to modern rainforests
About 66 million years ago, a huge asteroid crashed into what is now the Yucatan, plunging the Earth into darkness. The impact transformed tropical rainforests, giving rise to the reign of flowers.
Tropical rainforests today are biodiversity hotspots and play an important role in the world's climate systems. A new study published today, led by researchers at the Smithsonian Tropical Research Institute (STRI), shows that the asteroid impact that ended the reign of dinosaurs 66 million years ago also caused 45% of plants in what is now Colombia to go extinct, and it made way for the reign of flowering plants in modern tropical rainforests. "We wondered how tropical rainforests changed after a drastic ecological perturbation such as the Chicxulub impact, so we looked for tropical plant fossils," said Mónica Carvalho, first author and joint postdoctoral fellow at STRI and at the Universidad del Rosario in Colombia. "Our team examined over 50,000 fossil pollen records and more than 6,000 leaf fossils from before and after the impact." In Central and South America, geologists hustle to find fossils exposed by road cuts and mines before heavy rains wash them away and the jungle hides them again. Before this study, little was known about the effect of this extinction on the evolution of flowering plants that now dominate the American tropics. Carlos Jaramillo, staff paleontologist at STRI, and his team, mostly STRI fellows -- many of them from Colombia -- studied pollen grains from 39 sites that include rock outcrops and cores drilled for oil exploration in Colombia, to paint a big, regional picture of forests before and after the impact. Pollen and spores obtained from rocks older than the impact show that rainforests were equally dominated by ferns and flowering plants. Conifers, such as relatives of the Kauri pine and Norfolk Island pine, sold in supermarkets at Christmas time (Araucariaceae), were common and cast their shadows over dinosaur trails. After the impact, conifers disappeared almost completely from the New World tropics, and flowering plants took over. 
Plant diversity did not recover for around 10 million years after the impact. Leaf fossils told the team much about the past climate and local environment. Carvalho and Fabiany Herrera, postdoctoral research associate at the Negaunee Institute for Conservation Science and Action at the Chicago Botanic Garden, led the study of over 6,000 specimens. Working with Scott Wing at the Smithsonian's National Museum of Natural History and others, the team found evidence that pre-impact tropical forest trees were spaced far apart, allowing light to reach the forest floor. Within 10 million years post-impact, some tropical forests were dense, like those of today, where leaves of trees and vines cast deep shade on the smaller trees, bushes and herbaceous plants below. The sparser canopies of the pre-impact forests, with fewer flowering plants, would have moved less soil water into the atmosphere than did those that grew up in the millions of years afterward. "It was just as rainy back in the Cretaceous, but the forests worked differently," Carvalho said. The team found no evidence of legume trees before the extinction event, but afterward there was a great diversity and abundance of legume leaves and pods. Today, legumes are a dominant family in tropical rainforests, and through associations with bacteria, take nitrogen from the air and turn it into fertilizer for the soil. The rise of legumes would have dramatically affected the nitrogen cycle. Carvalho also worked with Conrad Labandeira at the Smithsonian's National Museum of Natural History to study insect damage on the leaf fossils. "Insect damage on plants can reveal, in the microcosm of a single leaf or the expanse of a plant community, the base of the trophic structure in a tropical forest," Labandeira said. 
"The energy residing in the mass of plant tissues that is transmitted up the food chain -- ultimately to the boas, eagles and jaguars -- starts with the insects that skeletonize, chew, pierce and suck, mine, gall and bore through plant tissues. The evidence for this consumer food chain begins with all the diverse, intensive and fascinating ways that insects consume plants." "Before the impact, we see that different types of plants have different damage: feeding was host-specific," Carvalho said. "After the impact, we find the same kinds of damage on almost every plant, meaning that feeding was much more generalistic." How did the aftereffects of the impact transform sparse, conifer-rich tropical forests of the dinosaur age into the rainforests of today -- towering trees dotted with yellow, purple and pink blossoms, dripping with orchids? Based on evidence from both pollen and leaves, the team proposes three explanations for the change, all of which may be correct. One idea is that dinosaurs kept pre-impact forests open by feeding and moving through the landscape. A second explanation is that falling ash from the impact enriched soils throughout the tropics, giving an advantage to the faster-growing flowering plants. The third explanation is that preferential extinction of conifer species created an opportunity for flowering plants to take over the tropics. "Our study follows a simple question: How do tropical rainforests evolve?" Carvalho said. "The lesson learned here is that under rapid disturbances -- geologically speaking -- tropical ecosystems do not just bounce back; they are replaced, and the process takes a really long time." The Smithsonian Tropical Research Institute, headquartered in Panama City, Panama, is a unit of the Smithsonian Institution. 
The institute furthers the understanding of tropical biodiversity and its importance to human welfare, trains students to conduct research in the tropics and promotes conservation by increasing public awareness of the beauty and importance of tropical ecosystems.
Environment
2021
April 1, 2021
https://www.sciencedaily.com/releases/2021/04/210401151250.htm
Global assessment of cumulative human impacts to at-risk marine species over time
Despite the fact that our planet is mostly ocean and human maritime activity is more intense than it has ever been, we know remarkably little about the state of the ocean's biodiversity -- the variety and balance of species that support healthy and productive ecosystems. And it's no surprise -- marine biodiversity is complex, human impacts are uneven, and species respond differently to different stressors.
"It is really hard to know how a species is doing by just looking out from your local coast, or dipping underwater on SCUBA," said Ben Halpern, a marine ecologist at the Bren School of Environmental Science & Management at UC Santa Barbara and Director of the National Center for Ecological Analysis and Synthesis. "You only see a small patch of where a species lives and what it is experiencing, and only the few species you happen to see on that day." Though valuable, these snapshots are only part of a much larger picture of cumulative human impacts on at-risk marine species. Even less obvious are changes in impact over time and assessments of vulnerability to these impacts, which differs across species. However, the picture of marine biodiversity is about to get a lot clearer, thanks to a first-of-its-kind study. "This is the first study of its kind looking at the effects of human activity on marine species, and the first looking at changes over time," said O'Hara, a doctoral student in the Bren School. Taking data on 1,271 threatened and near-threatened marine species from the International Union for Conservation of Nature and Natural Resources' (IUCN) Red List, the researchers mapped the at-risk species' ranges against anthropogenic stressors from 2003-2013. "We focused on those species known to be at a higher risk of extinction because from a conservation perspective, it's especially important to understand where and how our activities continue to jeopardize those species," O'Hara said. "Not every species is affected the same way by various human activities -- some species are more sensitive to fishing pressures while others are more vulnerable to rising sea surface temperatures or ocean acidification." Mapping over a series of 11 years also gave the researchers a sense of cumulative human impact, a method they first employed in a previous study that focused on representative marine habitats. It's not a shock. 
Human impacts on marine biodiversity are increasing, dominated by fishing, direct human disturbance from land and ocean acidification. But there were some unexpected discoveries for the authors. The extent to which at-risk species are facing these pressures from human activities, and the pace at which the pressures are expanding and intensifying, is worrisome. Corals are the most widely impacted marine organism on Earth. "I was surprised at the extent to which corals were impacted -- coral species are facing impacts across essentially their entire ranges and those impacts are only getting more intense, particularly climate-related impacts," O'Hara said. "We hear stories of coral bleaching and the like, but our results really highlight the impact we are having." The species of the Coral Triangle -- the tropical waters connecting Indonesia, the Philippines, Papua New Guinea and the Solomon Islands -- are among the most affected by human impacts, as are species in the North Atlantic, North Sea and Baltic Sea. The information from this approach could give decision-makers a deeper understanding of where and how human activity is affecting marine biodiversity, which could lead to effective solutions. For instance, addressing areas of human impact overlap can maximize the benefits of conservation for several species in the area. Effective conservation measures can help ease the pressures of climate change phenomena such as ocean acidification or rising ocean temperatures. The team might get the chance to put their findings to work later this year, at the U.N. Convention on Biological Diversity's 15th Conference of Parties, where 197 participating nations and territories will convene on a framework to protect and conserve global biodiversity. "That framework will include targets for protecting land and ocean areas globally, along the lines of President Biden's executive order to protect 30% of U.S. lands and coastal waters by 2030," O'Hara said. 
"With our study we hope to highlight those areas where such protection can do the greatest good for those species and ecosystems at greatest risk."
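The cumulative-impact idea described above -- combining species ranges, stressor intensities, and species-specific vulnerabilities across a grid of ocean cells -- can be sketched in a few lines. This is a hedged toy illustration in the spirit of the method, not the authors' code; all array sizes and values below are made up:

```python
import numpy as np

# Toy cumulative-impact sketch (illustrative only): impact on a species in a
# grid cell = sum over stressors of (stressor intensity x species vulnerability),
# masked by where the species actually occurs.
rng = np.random.default_rng(1)
n_cells, n_species, n_stressors = 6, 3, 4
intensity = rng.uniform(0, 1, (n_cells, n_stressors))        # e.g. fishing, warming...
vulnerability = rng.uniform(0, 1, (n_species, n_stressors))  # species-specific weights
presence = rng.integers(0, 2, (n_cells, n_species))          # 1 where the species occurs

# cumulative impact per cell and species, zeroed outside each species' range
impact = (intensity @ vulnerability.T) * presence
print(impact.shape)
```

Summing `impact` over species per cell would then highlight hotspots where conservation action benefits the most species at once, which is the kind of overlap analysis the article describes.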
Environment
2021
April 1, 2021
https://www.sciencedaily.com/releases/2021/04/210401112554.htm
Climate change cut global farming productivity 21% since 1960s
Despite important agricultural advancements to feed the world in the last 60 years, a Cornell-led study shows that global farming productivity is 21% lower than it could have been without climate change. This is the equivalent of losing about seven years of farm productivity increases since the 1960s.
The future potential impacts of climate change on global crop production have been quantified in many scientific reports, but the historic influence of anthropogenic climate change on the agricultural sector had yet to be modeled. A new study, "Anthropogenic Climate Change Has Slowed Global Agricultural Productivity Growth," now provides these insights. "We find that climate change has basically wiped out about seven years of improvements in agricultural productivity over the past 60 years," Ortiz-Bobea said. "It is equivalent to pressing the pause button on productivity growth back in 2013 and experiencing no improvements since then. Anthropogenic climate change is already slowing us down." The scientists and economists developed an all-encompassing econometric model linking year-to-year changes in weather and productivity measures with output from the latest climate models over six decades to quantify the effect of recent human-caused climate change on what economists call "total factor productivity," a measure capturing the overall productivity of the agricultural sector. Ortiz-Bobea said they considered more than 200 systematic variations of the econometric model, and the results remained largely consistent. "When we zoom into different parts of the world, we find that the historical impacts of climate change have been larger in areas already warmer, including parts of Africa, Latin America and Asia," he said. Humans have already altered the climate system, Ortiz-Bobea said, as climate science indicates the globe is about 1 degree Celsius warmer than it would be without atmospheric greenhouse gases. "Most people perceive climate change as a distant problem," Ortiz-Bobea said. "But this is something that is already having an effect. We have to address climate change now so that we can avoid further damage for future generations." Cornell funding was provided by the USDA National Institute of Food and Agriculture and the National Science Foundation.
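The two headline numbers -- productivity 21% below its no-climate-change counterfactual, equivalent to about seven years of lost growth -- imply an average annual growth rate, which a quick back-of-envelope check can recover. This is an illustrative calculation from the article's figures, not the paper's econometric model:

```python
# Hedged sanity check of the article's figures (not the study's model):
# if actual productivity is 21% below the counterfactual, the ratio
# counterfactual/actual is 1 / (1 - 0.21). If that gap equals ~7 years of
# compound growth, the implied average annual growth rate g satisfies
# (1 + g) ** 7 == ratio.
ratio = 1 / (1 - 0.21)
g = ratio ** (1 / 7) - 1
print(f"implied average growth rate ~ {g:.1%} per year")
```

An implied rate of a few percent per year is plausible for agricultural total factor productivity, so the "21% lower" and "seven years" figures are mutually consistent.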
Environment
2021
April 1, 2021
https://www.sciencedaily.com/releases/2021/04/210401112538.htm
African elephants' range is just 17 percent of what it could be
A study reports that African elephants' current range is just a fraction of what the continent could support.
"We looked at every square kilometer of the continent," says lead author Jake Wall of the Mara Elephant Project in Kenya. "We found that 62% of those 29.2 million square kilometers is suitable habitat."The findings suggest that, if released from human pressures, including the threat of being killed for their ivory, elephants still have great potential for recovery into areas where the human footprint is light. They note that those 18 million square kilometers include many areas where there is still room for peaceful coexistence between humans and elephants as well as others where that prospect is clearly not realistic.Like many wildlife species, it's long been clear that African elephant populations and their geographic range were shrinking due to killing for ivory, habitat loss, and the growth of human populations. But African savannah and forest elephants can live in many environments, from semi-deserts to tropical swamp forests. Wall's team wanted to better understand how elephants are using the space that's available to them and what's driving their ranging patterns.To analyze the suitability of habitats over the entire continent at a kilometer-level scale, Wall and his colleagues drew on data from GPS-tracking collars fitted to 229 elephants across Africa by Save the Elephants and its partners over a 15-year period. Using Google Earth Engine, a satellite imagery computing platform, they looked at the vegetation, tree cover, surface temperature, rainfall, water, slope, aggregate human influence, and protected areas in the areas the elephants traversed. 
This allowed them to determine which habitats can support elephants and the extremes of conditions that they currently can tolerate. "Combining three powerful tools -- GPS telemetry, continent-wide remote sensing at a fine resolution, and a suite of analytical techniques -- has allowed us to see what factors now control the movements and lives of these two hugely ecologically important species -- and where, if circumstances change, they could range more widely across their historical African home," said Samantha Strindberg of the Wildlife Conservation Society. The researchers uncovered vast areas of potentially suitable habitat for elephants in the Central African Republic and the Democratic Republic of Congo. The researchers note that forests in those areas recently held hundreds of thousands of elephants but today hold only about 5,000 to 10,000. The study also highlighted the extreme habitats that African elephants do not visit. "The major no-go areas include the Sahara, Danakil, and Kalahari deserts, as well as urban centers and high mountaintops," said Iain Douglas-Hamilton, the founder of Save the Elephants. "That gives us an idea of what the former range of elephants might have been. However, there's a dearth of information about the status of African elephants between the end of Roman times and the arrival of the first European colonizers." The tracking data also show that elephants living in protected areas tend to have smaller home ranges. The researchers suggest that's probably because they feel unsafe ranging into unprotected lands. The study notes that approximately 57% of the current elephant range is outside of protected areas, highlighting the limited space presently reserved for their safety.
To secure the long-term survival of elephants, the researchers say that habitat protection, protection of elephants themselves from illegal killing, and an ethic of human-elephant coexistence will be essential. "Elephants are generalist mega-herbivores that can occupy fringe habitats," Wall says. "Their range may have shrunk, but if we gave them the chance, they could spread back to former parts of it." Unfortunately, trends are headed in the wrong direction. "The human footprint is increasing at an accelerated rate and expected to double by 2050, with between 50% and 70% of the planet already experiencing anthropogenic disturbance," the researchers write. "Fragmentation of wildlife habitats by humans has resulted in only 7% of wildlife habitat patches being larger than 100 km²." This work was supported by the European Commission and a Canadian Natural Sciences and Engineering Research Council (NSERC) award.
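The article's area figures reduce to simple arithmetic, which is worth making explicit. The sketch below is our own restatement: the "implied current range" line combines the suitable-habitat estimate with the headline "17 percent" figure from the title, a combination we make for illustration rather than a number the study states directly.

```python
ASSESSED_AREA_KM2 = 29.2e6   # area analyzed across the continent (from the article)
SUITABLE_FRACTION = 0.62     # fraction found to be suitable elephant habitat
RANGE_VS_POTENTIAL = 0.17    # current range as a share of potential (headline)

suitable_km2 = ASSESSED_AREA_KM2 * SUITABLE_FRACTION       # ~18.1 million km2
current_range_km2 = suitable_km2 * RANGE_VS_POTENTIAL      # implied current range

print(f"suitable habitat ~{suitable_km2 / 1e6:.1f} million km2")
print(f"implied current range ~{current_range_km2 / 1e6:.1f} million km2")
```

The first line reproduces the "18 million square kilometers" quoted in the article, a useful sanity check that the published percentages and areas are mutually consistent.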
Environment
2021
March 31, 2021
https://www.sciencedaily.com/releases/2021/03/210331153733.htm
First images of freshwater plumes at sea
The first imaging of substantial freshwater plumes west of Hawai'i Island may help water planners to optimize sustainable yields and aquifer storage calculations. University of Hawai'i at Manoa researchers demonstrated a new method to detect freshwater plumes between the seafloor and the ocean surface in a recently published study.
The research, supported by the Hawai'i EPSCoR 'Ike Wai project, is the first to demonstrate that surface-towed marine controlled-source electromagnetic (CSEM) imaging can be used to map oceanic freshwater plumes in high-resolution. It is an extension of the groundbreaking discovery of freshwater beneath the seafloor in 2020. Both are important findings in a world facing climate change, where freshwater is vital for preserving public health, agricultural yields, economic strategies, and ecosystem functions. While the CSEM method has been used to detect the presence of resistive targets such as oil, gas and freshwater beneath the seafloor, this study is the first time CSEM was applied to image freshwater in the ocean water column, according to 'Ike Wai research affiliate faculty Eric Attias, who led the study. "This study has profound implications for oceanography, hydrogeology and ocean processes that affect biogeochemical cycles in coastal waters worldwide," said Attias. "Using CSEM, we now can estimate the volumes of freshwater emanating to the water column. This is indicative of the renewability of Hawai'i's submarine freshwater system." Submarine groundwater discharge (SGD), the leaking of groundwater from a coastal aquifer into the ocean, is a key process, providing a water source for people, and supporting sea life such as fish and algae. According to UH Manoa Department of Earth Sciences Associate Professor and study co-author Henrietta Dulai, the location of offshore springs is extremely hard to predict because of the unknown underlying geology and groundwater conduits. "The flux of such high volumes of nutrient-rich, low salinity groundwater to the ocean has great significance for chemical budgets and providing nutrients for offshore food webs," said Dulai.
"It is great to have a method that can pinpoint discharge locations and plumes as it opens up new opportunities to sample and identify the age of the water, its origin, chemical composition, and its significance for marine ecosystems in this otherwise oligotrophic (relatively low in plant nutrients and containing abundant oxygen in the deeper parts) ocean."This study included electromagnetic data driven 2D CSEM inversion, resistivity-to-salinity calculation, and freshwater plume volumetric estimation. Through the use of CSEM, the research team was able to image surface freshwater bodies and multiple large-scale freshwater plumes that contained up to 87% freshwater offshore Hawai'i Island. The results imply that at the study site substantial volumes of freshwater are present in the area between the seafloor and the ocean's surface. A conservative estimate for one of the plumes suggests 10,720 cubic meters or approximately the volume of four Olympic-sized swimming pools.The methodology used in this study can be applied to coastal areas worldwide, thus improving future hydrogeological models by incorporating offshore SGD and optimizing sustainable yields and storage calculations. Attias plans to extend the novel use of CSEM to further prove its application in imaging freshwater at other volcanic islands around the globe.Attias will present his work at the International Tropical Island Water Conference taking place April 12-15, 2021. Hosted by the UH Water Resources Research Center and Hawai'i EPSCoR, this conference brings together water scientists, water managers and community members from around the world to share cutting-edge research and learn from each other's experiences managing and understanding water resources across a broad range of tropical island settings.
Environment
2021
March 31, 2021
https://www.sciencedaily.com/releases/2021/03/210331153724.htm
Low-cost solar-powered water filter removes lead, other contaminants
A new invention that uses sunlight to drive water purification could help solve the problem of providing clean water off the grid.
The device resembles a large sponge that soaks up water but leaves contaminants -- like lead, oil and pathogens -- behind. To collect the purified water from the sponge, one simply places it in sunlight. The researchers described the device in a paper published this week. The inspiration for the device came from the pufferfish, a species that takes in water to swell its body when threatened, and then releases water when danger passes, said the device's co-inventor Rodney Priestley, the Pomeroy and Betty Perry Smith Professor of Chemical and Biological Engineering, and Princeton's vice dean for innovation. "To me, the most exciting thing about this work is it can operate completely off-grid, at both large and small scales," Priestley said. "It could also work in the developed world at sites where low-cost, non-powered water purification is needed." Xiaohui Xu, a Princeton Presidential Postdoctoral Research Fellow in the Department of Chemical and Biological Engineering and co-inventor, helped develop the gel material at the heart of the device. "Sunlight is free," Xu said, "and the materials to make this device are low-cost and non-toxic, so this is a cost-effective and environmentally friendly way to generate pure water." The authors noted that the technology delivers the highest passive solar water-purification rate of any competing technology. One way to use the gel would be to place it in a water source in the evening and the next day place it in the sunlight to generate the day's drinking water, Xu said. The gel can purify water contaminated with petroleum and other oils, heavy metals such as lead, small molecules, and pathogens such as yeast. The team showed that the gel maintains its ability to filter water for at least ten cycles of soaking and discharge with no detectable reduction in performance.
The results suggest that the gel can be used repeatedly. To demonstrate the device in real-world conditions, Xu took the device to Lake Carnegie on the Princeton University campus. Xu placed the gel into the cool water (25 degrees Celsius, or 77 degrees Fahrenheit) of the lake, which contains microorganisms that make it unsafe to drink, and let it soak up the lake water for an hour. At the end of the hour, Xu lifted the gel out of the water and set it on top of a container. As the sun warmed the gel, pure water trickled into the container over the next hour. The device filters water much more quickly than existing passive solar-powered water purification methods, the researchers said. Most other solar-powered approaches use sunlight to evaporate water, which takes much longer than absorption and release by the new gel. Other water filtration methods require electricity or another source of power to pump water through a membrane. Passive filtration via gravity, as with typical household countertop filters, requires regular replacement of filters. At the heart of the new device is a gel that changes depending on temperature. At room temperature, the gel can act as a sponge, soaking up water. When heated to 33 degrees Celsius (91 degrees Fahrenheit), the gel does the opposite -- it pushes the water out of its pores. The gel consists of a honeycomb-like structure that is highly porous. Closer inspection reveals that the honeycomb consists of long chains of repeating molecules, known as poly(N-isopropylacrylamide), that are cross-linked to form a mesh. Within the mesh, some regions contain molecules that like to have water nearby, or are hydrophilic, while other regions are hydrophobic or water-repelling. At room temperature, the chains are long and flexible, and water can easily flow via capillary action into the material to reach the water-loving regions.
But when the sun warms the material, the hydrophobic chains clump together and force the water out of the gel. This gel sits inside two other layers that stop contaminants from reaching the inner gel. The middle layer is a dark-colored material called polydopamine (PDA) that transforms sunlight into heat and also keeps out heavy metals and organic molecules. With PDA in place, the sun's light can heat up the inner material even if the actual outdoor temperature is not very warm. The final external layer is a filtering layer of alginate, which blocks pathogens and other materials from entering the gel. Xu said that one of the challenges to making the device was to formulate the inner gel to have the correct properties for water absorption. Initially the gel was brittle, so she altered the composition until it was flexible. Xu synthesized the materials and conducted studies to assess the device's ability to purify water, aided by coauthors Sehmus Ozden and Navid Bizmark, postdoctoral research associates in the Princeton Institute for the Science and Technology of Materials. Sujit Datta, assistant professor of chemical and biological engineering, and Craig Arnold, the Susan Dod Brown Professor of Mechanical and Aerospace Engineering and director of the Princeton Institute for the Science and Technology of Materials, collaborated on the development of the technology. The team is exploring ways to make the technology widely available with the help of Princeton Innovation, which supports University researchers in the translation of discoveries into technologies and services for the benefit of society.
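The absorb-below, release-above behavior around the gel's 33-degree transition can be caricatured with a toy water-content model. Only the transition temperature comes from the article; the capacity and rate constants below are arbitrary illustration values, not measured properties of the material.

```python
TRANSITION_C = 33.0  # gel expels water above this temperature (from the article)

def simulate(temps, capacity=1.0, rate=0.25, content=0.0):
    """Track the gel's fractional water content as temperature varies.
    Below the transition the hydrophilic mesh soaks up water; above it the
    hydrophobic chains clump together and squeeze the stored water out."""
    history = []
    for t in temps:
        if t < TRANSITION_C:
            content = min(capacity, content + rate)   # soaking in lake water
        else:
            content = max(0.0, content - rate)        # sun-warmed: releasing
        history.append(round(content, 2))
    return history

# overnight soak at 25 C, then morning sun warms the gel past 33 C
print(simulate([25, 25, 25, 25, 40, 40, 40, 40]))
# → [0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25, 0.0]
```

The step-up/step-down trace mirrors the intended usage cycle described above: place the gel in a water source in the evening, then set it in sunlight the next day to release the stored water.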
Environment
2021
March 31, 2021
https://www.sciencedaily.com/releases/2021/03/210331130926.htm
Seagrasses turn back the clock on ocean acidification
Spanning six years and seven seagrass meadows along the California coast, a paper published today from the University of California, Davis, is the most extensive study yet of how seagrasses can buffer ocean acidification.
The study was published today. "This buffering temporarily brings seagrass environments back to preindustrial pH conditions, like what the ocean might have experienced around the year 1750," said co-author Tessa Hill, a UC Davis professor in the Department of Earth and Planetary Sciences and Bodega Marine Laboratory. When picturing seagrasses, you might think of slimy grasses that touch your feet as you walk along the shoreline. But a closer look into these underwater meadows reveals an active, vibrant ecosystem full of surprises. Sea turtles, bat rays, leopard sharks, fishes, harbor seals, seahorses and colorful sea slugs are just some of the creatures that visit seagrass ecosystems for the food and habitat they provide. They are nursery grounds for species like Dungeness crab and spiny lobster, and many birds visit seagrass meadows specifically to dine on what's beneath their swaying blades of grass. "It's a marine forest without trees," said lead author Aurora M. Ricart, who conducted the study as a postdoctoral scholar at UC Davis Bodega Marine Laboratory and is currently at Bigelow Laboratory for Ocean Sciences in Maine. "The scale of the forest is smaller, but all of the biodiversity and life that is in that forest is comparable to what we have in terrestrial forests." For the study, the scientists deployed sensors between 2014 and 2019, collecting millions of data points from seven seagrass meadows of eelgrass stretching from Northern to Southern California. These include Bodega Harbor, three locations in Tomales Bay, plus Elkhorn Slough, Newport Bay and Mission Bay. Buffering occurred on average 65 percent of the time across these locations, which ranged from nearly pristine reserves to working ports, marinas and urban areas. Despite being the same species, eelgrass behavior and patterns changed from north to south, with some sites increasing pH better than others.
Time of year was also an important factor, with more buffering occurring during the springtime when grasses were highly productive. Seagrasses naturally absorb carbon as they photosynthesize when the sun is out, which drives this buffering ability. Yet the researchers wondered, would seagrasses just re-release this carbon when the sun went down, cancelling out that day's buffering? They tested that question and found a welcome and unique finding: "What is shocking to everyone that has seen this result is that we see effects of amelioration during the night as well as during the day, even when there's no photosynthesis," Ricart said. "We also see periods of high pH lasting longer than 24 hours and sometimes longer than weeks, which is very exciting." Northern California's Bodega Harbor and Tom's Point within Tomales Bay stood out as being particularly good at buffering ocean acidification. Pinpointing why and under what conditions that happens across varied seascapes remains among the questions for further study. The study carries implications for aquaculture management, as well as for climate change mitigation and conservation and restoration efforts. Globally, ocean acidification is on the rise while seagrass ecosystems are in decline. As more carbon dioxide is emitted on the planet, about a third is absorbed by the ocean. This changes the pH balance of the water and can directly impede the shell formation of species like oysters, abalone and crab. "We already knew that seagrasses are valuable for so many reasons -- from climate mitigation to erosion control and wildlife habitat," said co-author Melissa Ward, a UC Davis graduate student researcher at the time of the study and currently a postdoctoral researcher at San Diego State University. "This study shows yet another reason why their conservation is so important.
We now have a piece of evidence to say the state's directive to explore these ideas for ameliorating ocean acidification is a valuable thread to follow and merits more work." The study was funded by California Sea Grant and the California Ocean Protection Council.
Environment
2021
March 31, 2021
https://www.sciencedaily.com/releases/2021/03/210331130923.htm
Plants play leading role in cycling toxic mercury through the environment
Researchers studying mercury gas in the atmosphere with the aim of reducing the pollutant worldwide have determined that a vast amount of the toxic element is absorbed by plants and later deposited into soils.
Hundreds of tons of mercury each year are emitted into the atmosphere as a gas by burning coal, mining and other industrial and natural processes. These emissions are absorbed by plants in a process similar to how they take up carbon dioxide. When the plants shed leaves or die, the mercury is transferred to soils, where large amounts also make their way into watersheds, threatening wildlife and people who eat contaminated fish. Exposure to high levels of mercury over long periods can lead to neurological and cardiovascular problems in humans, according to UMass Lowell's Daniel Obrist, professor and chair of the Department of Environmental, Earth and Atmospheric Sciences, who is leading the research group. Obrist is an expert on the cycling of mercury in the environment. In his latest project, he and UMass Lowell Research Associate Jun Zhou collected more than 200 published studies with data on mercury levels in vegetation from more than 400 locations around the world. In evaluating this data, they determined about 88 percent of the mercury found in plants originates from plants' leaves absorbing gaseous mercury from the atmosphere. Globally, vegetation can take up more than 1,300 tons of mercury each year, accounting for 60 to 90 percent of it being deposited over land, according to Zhou. The team's findings were published this month. "When I walk outside here in New England, I am always amazed at the greenness of our forest, grasslands and salt marshes. One goal of my research is to determine how strongly vegetation controls the cycling of elements -- some of which can be toxic pollutants -- so we can better mitigate damaging effects," Obrist said. The work moves scientists toward a greater understanding of how mercury cycling works, according to Zhou. "Researchers have worked on the role that vegetation plays on cycling of mercury for over 30 years now, but the full extent of these impacts is still not fully realized.
It was timely to write this comprehensive review and communicate to colleagues and the public about the current state of knowledge in this area," Zhou said. Other contributors to the study include scientists from Environment and Climate Change Canada's Air Quality Research Division in Quebec, and the University of Basel in Switzerland. Support for the research was provided by the U.S. National Science Foundation and the Swiss National Science Foundation. In a separate but related project led by Obrist, researchers continue to measure how vegetation affects mercury cycling in New England forests, focusing on those in Maine and Massachusetts. Obrist's team is using a variety of instruments and sensors to measure the forests' uptake of mercury in the atmosphere at various heights from above the tree canopy down to near the forest floor, allowing for daily tracking of how mercury deposition may be different in each forest and may change with the seasons.
Environment
2021
March 31, 2021
https://www.sciencedaily.com/releases/2021/03/210331130916.htm
Estimating lifetime microplastic exposure
Every day, people are exposed to microplastics from food, water, beverages and air. But it's unclear just how many of these particles accumulate in the human body, and whether they pose health risks. Now, researchers reporting in an ACS journal have developed a model to estimate lifetime exposure to microplastics and their associated chemicals.
Microplastics, which are tiny pieces of plastic ranging in size from 1 µm to 5 mm (about the width of a pencil eraser), are ingested from a variety of sources, such as bottled water, salt and seafood. Their fate and transport in the human body are largely unknown, although the particles have been detected in human stool. In addition to possibly causing tissue damage and inflammation, microplastics could be a source of carcinogens and other harmful compounds that leach from plastic into the body. Previous studies have tried to estimate human exposure to the particles and their leached chemicals, but they have limitations, including discrepancies in the databases used, a failure to consider the entire microplastic size range and the use of average exposure rates that do not reflect global intakes. Nur Hazimah Mohamed Nor, Albert Koelmans and colleagues wanted to develop a comprehensive model to estimate the lifetime exposure of adults and children to microplastics and their associated chemicals. To make their model, the researchers identified 134 studies that reported microplastic concentrations in fish, mollusks, crustaceans, tap or bottled water, beer, milk, salt and air. They performed corrections to the data so that they could be accurately compared among the different studies. Then, the team used data on food consumption in different countries for various age groups to estimate ranges of microplastic ingestion. This information, combined with rates of microplastic absorption from the gastrointestinal tract and excretion by the liver, was used to estimate microplastic distribution in the gut and tissues. The model predicted that, by the age of 18, children could accumulate an average of 8,300 particles (6.4 ng) of microplastics in their tissues, whereas by the age of 70, adults could accrue an average of 50,100 microplastic particles (40.7 ng).
The estimated amounts of four chemicals leaching from the plastics were small compared with a person's total intake of these compounds, the researchers concluded. These data suggest that prior studies might have overestimated microplastic exposure and possible health risks, but it will be important to assess the contributions of other food types to ingestion and accumulation, the researchers say.
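The two reported endpoints, roughly 8,300 particles by age 18 and 50,100 by age 70, imply average retention rates that can be interpolated between. The piecewise-linear model below is our own simplification for illustration; the study's actual model is probabilistic and accounts for food types, absorption and excretion in far more detail.

```python
# Published endpoints: ~8,300 particles retained by age 18, ~50,100 by age 70.
child_rate = 8_300 / 18                      # ~461 particles retained per year
adult_rate = (50_100 - 8_300) / (70 - 18)    # ~804 particles retained per year

def accumulated(age: float) -> float:
    """Piecewise-linear interpolation between the study's reported endpoints
    (a simplification; the real model is not linear in age)."""
    if age <= 18:
        return child_rate * age
    return 8_300 + adult_rate * (age - 18)

print(f"~{accumulated(40):,.0f} particles by age 40 (interpolated)")
```

Note the implied adult retention rate is nearly double the childhood rate, consistent with the larger food intake of adults described in the study's consumption-based approach.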
Environment
2021
March 31, 2021
https://www.sciencedaily.com/releases/2021/03/210331103645.htm
Scientists design 'smart' device to harvest daylight
A team of Nanyang Technological University, Singapore (NTU Singapore) researchers has designed a 'smart' device to harvest daylight and relay it to underground spaces, reducing the need to draw on traditional energy sources for lighting.
In Singapore, authorities are looking at the feasibility of digging deeper underground to create new space for infrastructure, storage, and utilities. Demand for round-the-clock underground lighting is therefore expected to rise in the future. To develop a daylight harvesting device that can sustainably meet this need, the NTU team drew inspiration from the magnifying glass, which can be used to focus sunlight into one point. They used an off-the-shelf acrylic ball, a single plastic optical fibre -- a type of cable that carries a beam of light from one end to another -- and computer chip-assisted motors. The device sits above ground and, just like the lens of a magnifying glass, the acrylic ball acts as the solar concentrator, enabling parallel rays of sunlight to form a sharp focus at its opposite side. The focused sunlight is then collected into one end of a fibre cable and transported along it to the end that is deployed underground. Light is then emitted via the end of the fibre cable directly. At the same time, small motors -- assisted by computer chips -- automatically adjust the position of the fibre's collecting end, to optimise the amount of sunlight that can be received and transported as the sun moves across the sky. Developed by Assistant Professor Yoo Seongwoo from the School of Electrical and Electronics Engineering and Dr Charu Goel, Principal Research Fellow at NTU's The Photonics Institute, the innovation was reported in a peer-reviewed scientific journal. The device overcomes several limitations of current solar harvesting technology. In conventional solar concentrators, large, curved mirrors are moved by heavy-duty motors to align the mirror dish to the sun.
The components in those systems are also exposed to environmental factors like moisture, increasing maintenance requirements. The NTU device, however, is designed to use the round shape of the acrylic ball, ridding the system of heavy-duty motors to align with the sun, and making it compact. The prototype designed by the researchers weighs 10 kg and has a total height of 50 cm. To protect the acrylic ball from environmental conditions (ultraviolet light, dust etc.), the researchers also built a 3 mm thick, transparent dome-shaped cover using polycarbonate. Asst Prof Yoo, lead author of the study, said, "Our innovation comprises commercially available off-the-shelf materials, making it potentially very easy to fabricate at scale. Due to space constraints in densely populated cities, we have intentionally designed the daylight harvesting system to be lightweight and compact. This would make it convenient for our device to be incorporated into existing infrastructure in the urban environment." The NTU team believes the device is ideally suited to be mounted as a conventional lamp post above ground. This would enable the innovation to be used in two ways: a device to harvest sunlight in the day to light up underground spaces, and a streetlamp to illuminate above ground at night using electricity. The research by the NTU scientists is an example of NTU's Smart Campus vision that aims to develop technologically advanced solutions for a sustainable future. As the sun moves across the sky throughout the day, so will the position of the focused sunlight inside the acrylic ball. To ensure that maximum sunlight is being collected and transported down the fibre cable throughout the day, the system uses a computer chip-based mechanism to track the sun rays. The Global Positioning System (GPS) coordinates of the device location are pre-loaded into the system, allowing it to determine the spot where maximum sunlight should be focused at any given time.
Two small motors are then used to automatically adjust the position of the fibre to catch and transport sunlight from the focused spot at one-minute intervals. To guarantee the device's automatic positioning capability, pairs of sensors that measure light brightness are also placed around the sunlight collecting end of the fibre cable. Whenever the sensors detect inconsistencies in the light measurements, the small motors automatically activate to adjust the cable's position until the values on the sensors are the same. This indicates that the fibre is catching the maximum amount of sunlight possible. During rain or overcast skies, when there is inadequate sunlight to be collected and transported underground, an LED bulb powered by electricity, installed right next to the emitting end of the fibre cable, will automatically light up. This ensures that the device can illuminate underground spaces throughout the day without interruption. In experiments in a pitch-black storeroom (to simulate an underground environment), the NTU researchers found the device's luminous efficacy -- the measure of how well a light source produces visible light using 1 Watt of electrical power -- to be 230 lumens/Watt. This far exceeds those recorded by commercially available LED bulbs, which have a typical output of 90 lumens/Watt. The quality of the light output of the NTU smart device is also comparable with current commercially available daylight harvesting systems, which are far more costly. Dr Charu, who is the first author of the study, said, "The luminous efficacy of our low-cost device proves that it is well-suited for low-level lighting applications, like car parks, lifts, and underground walkways in dense cities. It is also easily scalable.
Since the light capturing capacity of the ball lens is proportional to its size, we can customise the device to a desired output optical power by replacing it with a bigger or smaller ball." Michael Chia, Managing Director at Technolite, a Singapore-based design focused agency specialising in lighting, and the industry collaborator of the research study, said, "It is our privilege and honour to take this innovation journey with NTU. While we have the commercial and application knowledge, NTU's in-depth knowhow from a technical perspective has taken the execution of the project to the next level that is beyond our expectations." Moving forward, the lighting company is exploring ways to potentially incorporate the smart device or its related concepts into its industrial projects for improved efficiency and sustainability.
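Two quantitative details in this article lend themselves to a sketch: the efficacy comparison (230 versus 90 lumens/Watt) and the sensor-balancing alignment loop. The control loop below is a hypothetical toy version of the mechanism described, in which paired brightness sensors disagree until the focused spot is centred on the fibre tip; the brightness field, step size and tolerance are all invented for illustration.

```python
LED_EFFICACY_LM_PER_W = 90      # typical commercial LED bulb (from the article)
DEVICE_EFFICACY_LM_PER_W = 230  # measured for the NTU prototype (from the article)

def align_fibre(left_sensor, right_sensor, position=0.0, step=0.5, tol=1.0):
    """Nudge the fibre tip until the paired brightness sensors read (nearly)
    the same value, meaning the focused spot sits centred between them."""
    while abs(left_sensor(position) - right_sensor(position)) > tol:
        if right_sensor(position) > left_sensor(position):
            position += step   # brighter on the right: move right
        else:
            position -= step   # brighter on the left: move left
    return position

# Hypothetical brightness field whose peak (the focus) currently sits at x = 3.
def brightness(x):
    return -(x - 3.0) ** 2

def left(pos):    # sensor offset to one side of the fibre tip
    return brightness(pos - 1.0)

def right(pos):   # sensor offset to the other side
    return brightness(pos + 1.0)

print(f"gain over LED: {DEVICE_EFFICACY_LM_PER_W / LED_EFFICACY_LM_PER_W:.1f}x")
print(f"fibre settles at x = {align_fibre(left, right)}")
```

Starting from x = 0, the loop walks the tip toward x = 3, where both sensors read the same value; the real device performs an analogous balancing act with physical motors at one-minute intervals.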
Environment
2021
March 31, 2021
https://www.sciencedaily.com/releases/2021/03/210331103612.htm
Carbon-neutral 'biofuel' from lakes
Lakes store huge amounts of methane. In a new study, environmental scientists at the University of Basel offer suggestions for how it can be extracted and used as an energy source in the form of methanol.
Discussion about the current climate crisis usually focuses on carbon dioxide (CO2). More than half the methane caused by human activities comes from oil production and agricultural fertilizers. But the gas is also created by the natural decomposition of biomass by microbes, for example in lakes. In their most recent publication, researchers at the University of Basel in Switzerland outline the potential and theoretical possibilities for using methane from lakes and other freshwater bodies for sustainable energy production. Methane from lakes and water reservoirs makes up about 20% of global natural methane emissions. "That would theoretically be enough to meet the world's energy needs," says Maciej Bartosiewicz, a postdoc in the Department of Environmental Sciences of the University of Basel. Lakes continuously absorb CO2, part of which ends up as methane through the microbial breakdown of biomass. The idea described in the article isn't completely new: since 2016, methane in Lake Kivu between Rwanda and the Democratic Republic of Congo has been extracted from a depth of 260 meters, cleaned and used for energy supply directly via generators. "Methane occurs in high concentrations in large quantities on the lake bed there," explains Bartosiewicz. "The methane concentration is about 100 times higher than in ordinary lakes." Low concentrations made extracting methane from conventional lakes seem too technically difficult until a few years ago. But new microporous membranes made of polymeric materials now allow the gas to be separated from the water much more efficiently. The researchers have made the first concrete proposals in this regard: using a hydrophobic gas-liquid membrane contactor, a methane-containing gas mixture can be separated from water and the methane concentrated. Zeolite minerals are particularly suitable for enrichment, since hydrophobic crystalline substances can adsorb and release gases. "With our idea, we wanted to start a broad discussion about the potential, feasibility and risks of a technology like this," says Bartosiewicz.
"Until now, no studies have addressed the effects of methane removal on lake ecosystem functioning, but no immediate negative effects can be foreseen with our current understanding." However, removing excess carbon could even help curb excessive phytoplankton bloom formation and reduce natural greenhouse gas emissions from lakes. More work is needed before any practical implementation of this initial theoretical idea, says Bartosiewicz. But he's convinced: "This concept could one day make an important contribution to reaching our climate goals."
Environment
2021
March 31, 2021
https://www.sciencedaily.com/releases/2021/03/210331084907.htm
Impacts of sunscreen on coral reefs needs urgent attention
More research is needed on the environmental impact of sunscreen on the world's coral reefs, scientists at the University of York say.
Concern over the number of cancer cases resulting from overexposure to UV solar radiation has led to extensive production and use of skin-protection products. The chemical compounds used in these products, however, can enter the environment at the points of manufacture as well as through use by consumers. It is already understood that UV-filter compounds have toxic effects on marine organisms, but research in this area is limited and does not account for certain variables, such as differences in environmental conditions. Dr Brett Sallach, from the University of York's Department of Environment and Geography, said: "Given the declining status of coral reef ecosystems and the many stressors they already face, it is important to identify the potential occurrence and toxicological risks associated with UV-filter exposure to reef ecosystems. "Our research aimed to identify what research was out there and what gaps we had in our knowledge. Importantly, we needed to understand which areas could be considered priorities for future attention in order to understand the impacts of these products, and hopefully prevent any further damage to the environment. "Undoubtedly, products that can help protect against the harmful effects of UV radiation on human health are hugely important, and therefore we need reliable and extensive evidence to suggest any changes or scaling back of these products." Researchers consulted experts and industry representatives in the field of marine UV-filter exposure to understand the limitations of current research and which areas needed urgent attention. They found that the majority of research on UV-filter compounds focuses on freshwater organisms and ecosystems, and that environmental conditions can either increase or decrease the response to toxic elements, making the true risk of these compounds difficult to establish. This research does not translate easily to the unique ecology of coral reefs, so long-term environmental monitoring in tropical and subtropical climates would be needed to understand the toxic effects there. Yasmine Watkins, who led the work as part of her Masters degree in the Department of Environment and Geography, said: "We make four recommendations for priority research areas going forward, based on our consultation with experts. We need more work in the area of understanding UV-filter toxicity under different climate conditions, and long-term study into exposure and recovery of coral reefs. "We also need to know realistic exposure to these compounds and how long they persist in the marine environment, to determine what the 'safe' limits are." The researchers aim to highlight these priority areas to better inform regulators and policy makers, improving conservation and management of coral reefs while ensuring that human health can continue to be protected by UV-filter products.
Environment
2021
March 30, 2021
https://www.sciencedaily.com/releases/2021/03/210330171009.htm
Architecture of Eolian successions under icehouse and greenhouse conditions
Anthropogenic climate change is one of the foremost scientific and societal challenges. In part, our response to this global challenge requires an enhanced understanding of how the Earth's surface responds to episodes of climatic heating and cooling. As historical records extend back only a few hundred years, we must look back into the ancient rock record to see how the surface of the Earth has responded to shifts between icehouse (presence of ice at the Earth's poles) and greenhouse (no substantial ice at Earth's poles) climates in the past.
In their study, published last week, the researchers demonstrate statistically that preserved sedimentary architectures developed under icehouse and greenhouse conditions are fundamentally different. These differences can be tied to the contrasting environmental conditions existing at Earth's surface. During icehouse climates, alternations between glacial and interglacial episodes (caused by changes in the Earth's orbit -- the so-called Milankovitch cyclicity) resulted in cycles of glacial-episode accumulation and interglacial deflation. Greenhouse conditions instead promoted the preservation of eolian elements in the geological record due to elevated water tables and the widespread action of biogenic and chemical stabilizing agents, which protected deposits from wind-driven deflation. In the context of a rapidly changing climate, the results presented in this work can help predict the potential long-term impact of climate change on Earth-surface processes.
Environment
2021