Date | Link | Title | Summary | Body | Category | Year
---|---|---|---|---|---|---|
March 2, 2021
|
https://www.sciencedaily.com/releases/2021/03/210302154234.htm
|
Scientists use forest color to gauge permafrost depth
|
Scientists regularly use remote sensing drones and satellites to record how climate change affects permafrost thaw rates -- methods that work well in barren tundra landscapes where there's nothing to obstruct the view.
|
But in boreal regions, which harbor a significant portion of the world's permafrost, obscuring vegetation can stymie even the most advanced remote sensing technology. In a study published in January, researchers in Germany and at the University of Alaska Fairbanks' Geophysical Institute developed a method of using satellite imagery to measure the depth of thaw directly above permafrost in boreal ecosystems. Rather than trying to peer past vegetation, they propose a unique solution that uses variations in forest color to infer the depth of thaw beneath. Permafrost deposits in the northern hemisphere have been continuously frozen for hundreds of thousands of years. The soil layer directly above the permafrost, however, is much more dynamic -- freezing and thawing with the seasons and growing or shrinking as it interacts with different types of vegetation at the surface. Because permafrost in boreal regions is often overlaid with thick forest cover, typical methods of measuring permafrost and the active layer that work well in tundra regions -- like using pulsed lasers or radar that penetrate the soil -- are ineffective and can give spurious results. "The canopies get in the way in forested regions," said Christine Waigl, a researcher in the Geophysical Institute at the University of Alaska Fairbanks and co-author on the study. "Some remote sensing instruments can penetrate the vegetation cover, but the interpretation requires specialized knowledge." Instead of looking past the forest cover, scientists have turned to a variety of indirect methods. One approach is to assign categories to vegetation in order to obtain broad estimates of the size of the active layer underneath. The results can be imprecise -- similar to the difference between a sketch of a landscape and a high-resolution photo of it. Instead, lead author Veronika Döpper, a researcher at the Berlin Institute of Technology, took a different approach, one in which she viewed the vegetative landscape as a continuum. "In natural forests, the plants around you don't just fall into one category or the other with no gradient in between," said Döpper. "So in our study, rather than saying we had a birch-dominated forest or a black spruce-dominated forest with corresponding deep or shallow permafrost, we used our satellite imagery to see the gradient between the two." To achieve this goal, Döpper obtained satellite imagery of the forests surrounding the city of Fairbanks, Alaska, taken during the summer of 2018. To know what she was looking at, Döpper set up more than 65 plots, each 10 by 10 meters, outside Fairbanks that same summer, in which she identified all the trees and shrubs, directly measured the depth of the active layer and recorded the location with GPS. By using the GPS coordinates to locate each plot on the satellite maps, Döpper could then say how the mix of species in a given plot contributed to the color of that plot as seen in her remote sensing imagery. "Different types of vegetation will have completely different tones in their color spectra and reflectances, which we can use to map vegetation composition," said Döpper. With her plots as color guides, the reflectances could then be used to determine the type of vegetation growing over large swaths of forested and unforested areas outside of Fairbanks and, by proxy, to estimate thaw depths for the same area. Not only does this new approach promise to provide more accurate and abundant estimates of permafrost depth for use in climate modeling, it is also a valuable tool for communities in boreal regions. When permafrost melts, the runoff bores channels through the soil, destabilizing the ground above. This can result in ground subsidence and landslides, endangering lives and posing risks to infrastructure. Over 80% of Alaska is covered in permafrost. As these deposits continue to melt, innovative methods of monitoring their disappearance will be essential in more ways than one.
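The plot-to-pixel workflow described above can be illustrated with a short sketch: extract reflectance at each plot's GPS location, relate it to the measured active-layer depth, and apply the fitted relation across the rest of the imagery. The snippet below is only a hedged illustration of that idea; the column names, the use of NDVI and the simple linear model are assumptions for demonstration, not the study's published method.

```python
# Illustrative sketch (not the study's actual pipeline): relate per-plot
# satellite reflectance to measured active-layer (thaw) depth, then reuse
# the fitted relation to map thaw depth across unvisited forest.
# All values and column names below are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

# One row per 10-by-10 m field plot near Fairbanks.
plots = pd.DataFrame({
    "red": [0.05, 0.07, 0.04, 0.09, 0.06],   # reflectance at the plot location
    "nir": [0.35, 0.28, 0.40, 0.22, 0.31],
    "thaw_depth_cm": [62, 48, 75, 35, 55],   # measured in the field
})

# NDVI as a crude proxy for the forest-colour gradient (e.g. birch vs. black spruce).
plots["ndvi"] = (plots["nir"] - plots["red"]) / (plots["nir"] + plots["red"])

model = LinearRegression().fit(plots[["ndvi"]], plots["thaw_depth_cm"])
print(f"thaw depth [cm] ~ {model.intercept_:.1f} + {model.coef_[0]:.1f} * NDVI")
```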
|
Environment
| 2,021 |
March 2, 2021
|
https://www.sciencedaily.com/releases/2021/03/210302130705.htm
|
'Space hurricane' in Earth's upper atmosphere discovered
|
The first observations of a space hurricane have been revealed in Earth's upper atmosphere, confirming their existence and shedding new light on the relationship between planets and space.
|
Hurricanes in the Earth's lower atmosphere are well known, but they had never before been detected in the upper atmosphere. An international team of scientists led by Shandong University in China analysed observations made by satellites in 2014 to reveal a long-lasting hurricane, resembling those in the lower atmosphere, in the polar ionosphere and magnetosphere, with surprisingly large energy and momentum deposition despite otherwise extremely quiet geomagnetic conditions. The analysis allowed a 3D image to be created of the 1,000 km-wide swirling mass of plasma several hundred kilometres above the North Pole, raining electrons instead of water. Professor Qing-He Zhang, lead author of the research at Shandong University, said: "These features also indicate that the space hurricane leads to large and rapid deposition of energy and flux into the polar ionosphere during an otherwise extremely quiet geomagnetic condition, suggesting that current geomagnetic activity indicators do not properly represent the dramatic activity within space hurricanes, which are located further poleward than geomagnetic index observatories." Professor Mike Lockwood, space scientist at the University of Reading, said: "Until now, it was uncertain that space plasma hurricanes even existed, so to prove this with such a striking observation is incredible. Tropical storms are associated with huge amounts of energy, and these space hurricanes must be created by unusually large and rapid transfer of solar wind energy and charged particles into the Earth's upper atmosphere. Plasma and magnetic fields in the atmospheres of planets exist throughout the universe, so the findings suggest space hurricanes should be a widespread phenomenon." Hurricanes often cause loss of life and property through high winds and flooding resulting from the coastal storm surge of the ocean and the torrential rains. They are characterised by a low-pressure centre (hurricane eye), strong winds and flow shears, and a spiral arrangement of towering clouds with heavy rains. In space, astronomers have spotted hurricanes on Mars, Saturn and Jupiter, which are similar to terrestrial hurricanes in the low atmosphere. There are also solar gases swirling in monstrous formations deep within the sun's atmosphere, called solar tornadoes. However, hurricanes had not been reported in the upper atmosphere of the planets in our heliosphere. The space hurricane analysed by the team in Earth's ionosphere was spinning in an anticlockwise direction, had multiple spiral arms, and lasted almost eight hours before gradually breaking down. The team of scientists from China, the USA, Norway and the UK used observations made by four DMSP (Defense Meteorological Satellite Program) satellites and 3D magnetosphere modelling to produce the image. Their findings have now been published. Professor Zhang added: "This study suggests that there are still existing local intense geomagnetic disturbance and energy depositions which is comparable to that during super storms. This will update our understanding of the solar wind-magnetosphere-ionosphere coupling process under extremely quiet geomagnetic conditions. In addition, the space hurricane will lead to important space weather effects like increased satellite drag, disturbances in High Frequency (HF) radio communications, and increased errors in over-the-horizon radar location, satellite navigation and communication systems."
|
Environment
| 2,021 |
March 2, 2021
|
https://www.sciencedaily.com/releases/2021/03/210302130702.htm
|
Energy switching decisions could widen social inequalities
|
New energy tariffs could leave people on bad deals even worse off despite the potential benefits for everyone, research has found.
|
The study, led by the University of Leeds, found new types of energy contracts designed for a low carbon future could benefit all types of customer, with opportunities to sell excess energy from solar panels or incentives for using energy at off-peak times. However, many people were unlikely to choose them because they were disengaged from the energy market, didn't trust energy companies, or already felt satisfied with their current tariffs. Those likely to adopt them first are younger, with higher incomes and higher education. Energy companies are already starting to offer these contracts, but there has been little understanding of how much consumer demand there is for these new models, and how consumers may be affected by them. The study, carried out by a team that included researchers from UCL and the University of Waikato, New Zealand, shows consumers who already trust the energy market, with higher incomes and positive attitudes towards technology, are likely to do well out of contracts that help energy system decarbonisation. However, consumers in lower-income and lower-education groups may be too cautious to gain the benefits of early adoption, too uninterested in switching supplier, or find the market too untrustworthy to engage with. This could lead to them defaulting to more expensive, less tailored, or even more risky contracts. Principal investigator Dr Stephen Hall, from Leeds' School of Earth and Environment, said: "These new energy contracts are really important for low-carbon energy systems, and are already appearing on price comparison sites. Our work shows only some consumers find these new types of energy supply attractive, and others cannot access them because they rent their home or might not be able to afford cutting-edge technologies like electric cars and home batteries. This means some consumers could get left behind because they cannot or will not engage with new tariffs. The energy market tends to preference affluent and active consumers, while often exploiting inactive consumers who are usually in lower-income groups. The findings of this research suggest that gap is likely to widen without intervention, because smarter and more flexible tariffs worsen the divide between who benefits from the market, and who loses out." Researchers set out to explore how likely customers were to choose new types of energy contracts when presented with a range of offers. Some 2,024 customers were presented with five new business models and asked a series of questions about them, including their likelihood to sign up to them if they were available today. The models included options such as longer 10-year contracts with energy efficiency measures, allowing utilities to control some energy services in the home, making the switching decision automatic, and trading their own excess green electricity. From the responses they identified four consumer segments based on how engaged they were with their current electricity and gas suppliers, their appetite for choosing new business models and their reasons for wanting certain types of products. The segment with the highest appetite for new models was also the smallest, suggesting these models may have a limited consumer base to expand into. This group was made up of people who had the highest income, tended to be younger and keen on adopting new technologies; they also had the highest level of education. The other three segments were less likely to choose new business models because they were cautious in their adoption of new technology, didn't think the new tariffs would offer a better deal, had significant trust issues with the market, or were too disengaged to switch supplier regularly. The report, Matching consumer segments to innovative utility business models, has now been published. The findings suggest there is potential for further innovation in the energy market but, at the same time, the customer base may be more limited than is generally expected. There is also the risk of a further loss of trust in the energy market as tariffs become more complex and winners and losers more obvious. Professor Jillian Anable, who led the statistical analysis, said: "Our analysis has revealed a rich set of segments which shows that there is no average domestic energy consumer. Energy companies are going to have to work hard to tailor their products and their marketing in ways that cut through the disengagement, complexity and mistrust experienced by people with respect to their home energy consumption." Dr Hall said: "Our research shows there is some demand for innovative energy contracts, but there is a disconnect between the groups that find them attractive and their ability to sign up to them. Energy companies should target them directly and give them the means to act on their preferences. As the market diversifies and contracts become more complex, consumers may decide to stick with what they know, introducing more complex consumer risks. The challenge for regulatory institutions is to recognise these risks and evolve the regulatory model of the retail market."
|
Environment
| 2,021 |
March 2, 2021
|
https://www.sciencedaily.com/releases/2021/03/210302130645.htm
|
Groundbreaking research into white-rot fungi proves its value in carbon sequestration
|
A foundational study conducted by scientists at the National Renewable Energy Laboratory (NREL) shows for the first time that white-rot fungi are able to use carbon captured from lignin as a carbon source.
|
The research confirms a hypothesis from Davinia Salvachúa Rodriguez, the senior author of a newly published paper. Until now, scientists were unsure whether white-rot fungi -- the most efficient lignin-degrading organisms in nature -- actually consume the products generated from breaking down lignin. "What we have demonstrated here is that white-rot fungi can actually utilize lignin-derived aromatic compounds as a carbon source, which means they can eat them and utilize them to grow," Salvachúa said. "That is another strategy for carbon sequestration in nature and has not been reported before." The paper, "Intracellular pathways for lignin catabolism in white-rot fungi," has now been published. Salvachúa, a research scientist in NREL's Renewable Resources and Enabling Sciences Center, has spent more than a decade studying white-rot fungi. Last year, the Department of Energy's Office of Science awarded her a $2.5 million grant as part of the Early Career Research Program to further her work. White-rot fungi evolved to degrade lignin, which Salvachúa calls "the most recalcitrant biopolymer on Earth." Lignin helps make the plant's cell walls more rigid. Other parts of the plant, such as cellulose, are also recalcitrant but can be fully depolymerized to single monomeric species for use as biofuel and biochemical precursors, for example. But the intractability of lignin, and the lack of an efficient method to deconstruct and convert it to monomeric compounds, hampers the viability of plant-based biorefineries. Salvachúa's work forms the foundation of a new research area based on lignin being broken down by white-rot fungi, which could be further exploited to simultaneously convert the biopolymer into value-added compounds. The researchers examined two species of white-rot fungi: Trametes versicolor and Gelatoporia subvermispora. Through the use of genomic analysis, isotopic labeling, and systems biology approaches, the researchers determined the ability of these organisms to incorporate carbon from lignin-derived aromatic compounds into central metabolism and were able to map out the potential aromatic catabolic pathways for that conversion process. Further, in vitro enzyme analyses enabled validation of some of the proposed steps. The researchers also highlight that this work is just the beginning of a broad research area aimed at discovering new enzymes and pathways and at better understanding carbon flux in these organisms. Lignin accounts for about 30% of the organic carbon in the biosphere. Concerns about the changing climate have sparked a growing interest in the issue of carbon cycling, in which carbon is absorbed by natural reservoirs -- such as plants -- from the atmosphere and later decomposed and returned to the atmosphere or other natural reservoirs. Because more carbon is stored in the soil than in the atmosphere or plants, white-rot fungi are now positioned as key players in the sequestration of lignin-derived carbon in soils. Scientists have demonstrated the ability of some bacterial strains to break down lignin as well, but not as effectively as white-rot fungi. Salvachúa said bacteria are easier to work with than fungi because they grow more quickly, and many are genetically tractable, unlike white-rot fungi. "With fungi, one experiment can be up to two months," she said. "We try to be very careful when we plan an experiment because that's a long time. That's six experiments a year if you need results to move forward. With bacteria, you can do one per week." The Department of Energy's Office of Science, Biological and Environmental Research program, funded a portion of the research, with other funding coming from the Laboratory Directed Research and Development program at NREL.
|
Environment
| 2,021 |
March 2, 2021
|
https://www.sciencedaily.com/releases/2021/03/210302130642.htm
|
Fuel efficiency of one car may be cancelled by your next car purchase
|
In a recent collaborative study led by the University of Maryland (UMD), researchers find that consumers tend to buy something less fuel efficient than they normally would for their second car after springing for an eco-friendly vehicle. While this sounds like an all-too-logical conclusion, the study reports a 57% reduction in the benefits of driving your fuel efficient car for carbon emissions purely based on the purchase of your second vehicle. Since about three-quarters of cars are purchased into multi-car households, these findings could have major implications for carbon emissions, and especially for the design of carbon mitigation programs like Cash-for-Clunkers and Corporate Average Fuel Economy (CAFE) standards that aren't taking into account the decisions of consumers with multiple vehicles.
|
"What we really wanted to do is see how households are making decisions when they purchase and own more than one vehicle," says James Archsmith, assistant professor in Agricultural & Resource Economics at UMD and lead author on this study. "We have a lot of energy policy out there trying to get people to buy more fuel efficient cars, but we really think of every car as this separate purchase that doesn't rely on any other things going on in the household, and that's just not the case. Other vehicles, priorities, and how those purchases and the intended uses of the vehicles interact are all important to understand how effective our policies are."Published in "It's not likely that people are actually thinking about fuel economy that way, that they can splurge on a less fuel efficient vehicle," explains Archsmith. "It is probably operating through other attributes of the car that are associated with fuel economy. So I have a car that is small and fuel efficient, but it isn't as comfortable and can't fit the kids. Then, I tend to buy a bigger second car. It is more likely utility based in some way, but it is correlated with fuel economy nonetheless."The study also found that consumers who buy fuel efficient vehicles tend to end up driving them more and farther distances than they might otherwise, further chipping away at the emissions benefits. These decisions and consumer behaviors need to have a place in policy making that is designed to reduce carbon emissions and incentivize more fuel efficient cars, the researchers say."If people buy a more fuel-efficient car, down the road when they replace one of their other cars, the car they buy is going to be less fuel-efficient," says Rapson. "So the effect of fuel economy standards is reduced. There is a strong force that we didn't know about before that is going to erode the benefit of [policy] forcing people to buy more fuel-efficient cars."Since California's fuel economy standards are models for the rest of the country, they should be adapted to fit actual human behavior. "Unintended consequences like this need to be taken into account when making policy," adds Rapson. "On average, fuel economy standards are putting more fuel-efficient cars in households. That can be good if it reduces gasoline use. But if it causes people to buy a bigger, less fuel-efficient second car to compensate, this unintended effect will erode the intended goals of the policy."Archsmith and the team hope to expand this research out beyond California and to other aspects of driver behavior research that plays an important role on fuel economy, and ultimately on environmental health and climate change."We want to focus more on driving behavior and how multi-car households drive their vehicles and respond to changes in gasoline prices in the future," says Archsmith. "We want to refine them and then extend beyond the state of California, using California as a model in this paper and doing the same kind of analyses in other states as well. The pandemic has also changed the way people drive, and we expect to see more purchases of fuel inefficient cars coming off of lockdowns and lower gas prices."
|
Environment
| 2,021 |
March 2, 2021
|
https://www.sciencedaily.com/releases/2021/03/210302094053.htm
|
Unlocking kelp’s potential as a major biofuel source
|
For several years now, the biofuels that power cars, jet airplanes, ships and big trucks have come primarily from corn and other mass-produced farm crops. Researchers at USC, though, have looked to the ocean for what could be an even better biofuel crop: seaweed.
|
Scientists at the USC Wrigley Institute for Environmental Studies on Santa Catalina Island, working with private industry, report that a new aquaculture technique on the California coast dramatically increases kelp growth, yielding four times more biomass than natural processes. The technique employs a contraption called the "kelp elevator" that optimizes growth for the bronze-colored floating algae by raising and lowering it to different depths. The team's newly published findings suggest it may be possible to use the open ocean to grow kelp crops for low-carbon biofuel, similar to how land is used to harvest fuel feedstocks such as corn and sugarcane -- and with potentially fewer adverse environmental impacts. The National Research Council has indicated that generating biofuels from feedstocks like corn and soybeans can increase water pollution. Farmers use pesticides and fertilizers on the crops that can end up polluting streams, rivers and lakes. Despite those well-evidenced drawbacks, 7% of the nation's transportation fuel still comes from major food crops, and nearly all of it is corn-based ethanol. "Forging new pathways to make biofuel requires proving that new methods and feedstocks work. This experiment on the Southern California coast is an important step because it demonstrates kelp can be managed to maximize growth," said Diane Young Kim, corresponding author of the study, associate director of special projects at the USC Wrigley Institute and a professor of environmental studies at the USC Dornsife College of Letters, Arts and Sciences. The study was published on Feb. 19. Government and industry see promise in a new generation of climate-friendly biofuels to reduce net carbon dioxide emissions and dependence on foreign oil. New biofuels could either supplement or replace gasoline, diesel, jet fuel and natural gas. If it lives up to its potential, kelp is a more attractive option than the usual biofuel crops -- corn, canola, soybeans and switchgrass -- for two very important reasons. For one, ocean crops do not compete for fresh water, agricultural land or artificial fertilizers. And secondly, ocean farming does not threaten important habitats, as can happen when marginal land is brought into cultivation. The scientists focused on giant kelp, Macrocystis pyrifera, the seaweed that forms majestic underwater forests along the California coast and elsewhere and washes onto beaches in dense mats. Kelp is one of nature's fastest-growing plants and its life cycle is well understood, making it amenable to cultivation. But farming kelp requires overcoming a few obstacles. To thrive, kelp has to be anchored to a substrate and only grows in sun-soaked waters down to about 60 feet deep. But in open oceans, the sunlit surface layer lacks the nutrients available in deeper water. To maximize growth in this ecosystem, the scientists had to figure out how to give kelp a foothold to hang onto, lots of sunlight and access to abundant nutrients. And they had to see if kelp could survive deeper below the surface. So, Marine BioEnergy invented the concept of depth-cycling the kelp, and USC Wrigley scientists conducted the biological and oceanographic trial. The kelp elevator consists of fiberglass tubes and stainless-steel cables that support the kelp in the open ocean. Juvenile kelp is affixed to a horizontal beam, and the entire structure is raised and lowered in the water column using an automated winch. Beginning in 2019, research divers collected kelp from the wild, affixed it to the kelp elevator and then deployed it off the northwest shore of Catalina Island, near Wrigley's marine field station. Every day for about 100 days, the elevator would raise the kelp to near the surface during the day so it could soak up sunlight, then lower it to about 260 feet at night so it could absorb nitrate and phosphate in the deeper water. Meantime, the researchers continually checked water conditions and temperature while comparing their kelp to control groups raised in natural conditions. "We found that depth-cycled kelp grew much faster than the control group of kelp, producing four times the biomass production," Kim said. Prior to the experiment, it was unclear whether kelp could effectively absorb the nutrients in the deep, cold and dark environment. Nitrate is a big limiting factor for plants and algae, but the study suggests that the kelp found all it needed to thrive when lowered into deep water at night. Equally important, the kelp was able to withstand the greater underwater pressure. Brian Wilcox, co-founder and chief engineer of Marine BioEnergy, said: "The good news is the farm system can be assembled from off-the-shelf products without new technology. Once implemented, depth-cycling farms could lead to a new way to produce affordable, carbon-neutral fuel year-round." Cindy Wilcox, co-founder and president of Marine BioEnergy, estimates that it would take a Utah-sized patch of ocean to make enough kelp biofuel to replace 10% of the liquid petroleum consumed annually in the United States. An area the size of Utah would take up only about 0.13% of the total Pacific Ocean. Developing a new generation of biofuels has been a priority for California and the federal government. The U.S. Department of Energy's Advanced Research Projects Agency-Energy invested $22 million in efforts to increase marine feedstocks for biofuel production, including $2 million to conduct the kelp elevator study. The Department of Energy has a study underway to locate a billion tons of feedstock per year for biofuels; Cindy Wilcox of Marine BioEnergy said the ocean between California, Hawaii and Alaska could contribute to that goal, helping make the U.S. a leader in this new energy technology.
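Cindy Wilcox's closing comparison is easy to sanity-check. The areas used below are approximate reference values, not figures taken from the study itself.

```python
# Rough check of the claim that a Utah-sized patch of ocean is about 0.13%
# of the Pacific. Both areas are approximate reference values.
UTAH_AREA_KM2 = 220_000
PACIFIC_AREA_KM2 = 165_000_000

fraction = UTAH_AREA_KM2 / PACIFIC_AREA_KM2
print(f"Utah-sized patch ≈ {fraction:.2%} of the Pacific Ocean")  # ≈ 0.13%
```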
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301211620.htm
|
The time is ripe! An innovative contactless method for the timely harvest of soft fruits
|
Most people are probably familiar with the unpleasant feeling of eating overripe or underripe fruit. Those who work in agriculture are tasked with ensuring a timely harvest so that ripeness is at an optimal point when the fruit is sold, both to minimize the amount of fruit that goes to waste and maximize the quality of the final product. To this end, a number of techniques to assess fruit ripeness have been developed, each with their respective advantages and disadvantages depending on the type of produce.
|
Although biochemical and optical methods exist, mechanical techniques are the most widely used. They indirectly assess ripeness based on the fruit's firmness. In turn, firmness is quantified by observing the vibrations that occur in the fruit when mechanical energy is accurately delivered through devices such as hammers, pendulums, or speakers. Unfortunately, these approaches fall short for softer fruits, which are more readily damaged by the contact devices used. In a recent study, a team at the Shibaura Institute of Technology (SIT) in Japan proposed a contactless alternative based on laser-induced plasma (LIP). But what is LIP and how is it used? Plasma is a state of matter similar to the gaseous state but in which most particles have an electric charge. This energetic state can be produced in normal air by focusing a high-intensity laser beam onto a small volume. Because the generated plasma "bubble" is unstable, it immediately expands, sending out shockwaves at ultrasonic speeds. Professor Naoki Hosoya and colleagues at SIT had successfully used LIP shockwaves generated close to the surface of apples to excite the characteristic resonant vibrations used to gauge firmness in harder fruit. However, soft fruits do not exhibit such vibrations, so the team instead measured Rayleigh waves, which travel along the fruit's surface. The team went further and looked for the best position on the mangoes' surface to determine the velocity of Rayleigh waves. Mangoes, as well as other soft fruits, have large seeds inside, which can alter the propagation of surface waves in ways that are detrimental to measurements. "The results of our experiments indicate that Rayleigh waves along the 'equator' of the mango are better for firmness assessment compared to those along the 'prime meridian'," explains Hosoya. The experiments also revealed that cavities within the fruit's flesh or decay can greatly affect the results of the measurements. Thus, as Hosoya adds, they will keep investigating which is the best area to measure firmness in mangoes using their novel approach. In short, the team at SIT has engineered an innovative strategy to assess the ripeness of soft fruits from the outside. "Our system," remarks Hosoya, "is suitable for non-contact and non-destructive firmness assessment in mangoes and potentially other soft fruits that do not exhibit the usual resonant vibrations."
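To make the measurement idea concrete, the sketch below turns a surface-wave travel time into a stiffness proxy using the standard elasticity relation between Rayleigh wave speed and shear modulus. The spacing, timing and density values, and the use of that textbook relation, are assumptions for illustration; this is not the published procedure or calibration from the SIT study.

```python
# Hedged sketch: convert a measured Rayleigh-wave travel time between two
# points on the fruit's surface into a firmness (shear-stiffness) proxy.
# Values and the v_R ≈ 0.9 * sqrt(G / rho) approximation are illustrative only.
SENSOR_SPACING_M = 0.03    # distance between the two measurement points
ARRIVAL_DELAY_S = 0.0012   # wave travel time between them
DENSITY_KG_M3 = 1050.0     # approximate density of mango flesh

v_rayleigh = SENSOR_SPACING_M / ARRIVAL_DELAY_S          # m/s
shear_modulus = DENSITY_KG_M3 * (v_rayleigh / 0.9) ** 2  # Pa; lower = softer, riper fruit

print(f"Rayleigh wave speed: {v_rayleigh:.1f} m/s")
print(f"Stiffness proxy:     {shear_modulus / 1000:.0f} kPa")
```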
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301151550.htm
|
Plant clock could be the key to producing more food for the world
|
A University of Melbourne-led study has established how plants use their metabolism to tell time and know when to grow -- a discovery that could help with growing crops in different environments, including different seasons, different latitudes or even artificial environments and vertical gardens.
|
The study has now been published. Lead researcher Dr Mike Haydon, from the School of BioSciences, said that while plants don't sleep as humans do, their metabolism is adjusted during the night to conserve energy for the big day ahead of making their own food using energy from sunlight, or photosynthesis. "Getting the timing of this daily cycle of metabolism right is really important, because getting it wrong is detrimental to growth and survival," Dr Haydon said. "Plants can't stumble to the fridge in the middle of the night if they get hungry, so they have to predict the length of the night so there's enough energy to last until sunrise; a bit like setting an alarm clock." Dr Haydon and collaborators had earlier shown that the accumulation of sugars produced from photosynthesis gives the plant important information about the amount of sugar generated in the morning and sends signals to what's known as the circadian clock to adjust its pace. "We have now found that a different metabolic signal, called superoxide, acts at dusk and changes the activity of circadian clock genes in the evening," said Dr Haydon. "We also found that this signal affects plant growth. We think this signal could be providing information to the plant about metabolic activity as the sun sets." Researchers hope the study will prove invaluable in helping the world produce more food, more reliably. "As we strive to produce more food for the increasing global population in the face of changing climate, we may need to grow crops in different environments such as different seasons, different latitudes or even in artificial environments like vertical gardens," Dr Haydon said. "Understanding how plants optimise rhythms of metabolism could be useful information to allow us to fine-tune their circadian clocks to suit these conditions and maximise future yields."
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301133633.htm
|
Metal whispering: Finding a better way to recover precious metals from electronic waste
|
Inspired by nature's work to build spiky structures in caves, engineers at Iowa State University have developed technology capable of recovering pure and precious metals from the alloys in our old phones and other electrical waste.
|
Using controlled applications of oxygen and relatively low temperatures, the engineers say they can dealloy a metal by slowly moving the most reactive components to the surface, where they form stalagmite-like spikes of metal oxides. That leaves the least-reactive components in a purified, liquid core surrounded by brittle metal-oxide spikes, "to create a so-called 'ship-in-a-bottle structure,'" said Martin Thuo, the leader of the research project and an associate professor of materials science and engineering at Iowa State University. "The structure formed when the metal is molten is analogous to filled cave structures such as stalactites or stalagmites," Thuo said. "But instead of water, we are using oxidation to create these structures." A paper describing the new technology, "Passivation-driven speciation, dealloying and purification," has recently been published. University startup funds and part of a U.S. Department of Energy Small Business Innovation Research grant supported development of the technology. Thuo noted this project is the exact opposite of his research group's previous work to develop heat-free solder. "With heat-free solder, we wanted to put things together," he said. "With this, we want to make things fall apart." But not just fall apart any which way. Thuo and the engineers in his research group want to control exactly how and where alloy components fall apart, or dealloy. "It's like being a metal whisperer," he said. "We make things go the way we want." The engineers offered a more precise description in their paper: "This work demonstrates the controlled behavior of surface oxidation in metals and its potential in design of new particle structures or purification/dealloying. By tuning oxidation via temperature, oxidant partial pressure, time and composition, a balance between reactivity and thermal deformation enables unprecedented morphologies." Those unprecedented forms and structures could be very useful. "We need new methods to recover precious metals from e-waste or mixed metal materials," Thuo said. "What we demonstrate here is that the traditional electrochemical or high-temperature methods (above 1,832 degrees Fahrenheit) may not be necessary in metal purification, as the metal's reactivity can be used to drive separation." Thuo said the oxidation technology works well at temperatures of 500 to 700 degrees Fahrenheit. ("This is set in an oven and getting metals to separate," he said.) Besides metal purification and recovery, this new idea could also be applied to metal speciation -- the ability to dictate creation and distribution of certain metal components. One use could be production of complex catalysts to drive multi-stage reactions. Let's say chemists need a tin oxide catalyst followed by a bismuth oxide catalyst. They'll start with an alloy with the bismuth oxide buried beneath the tin oxide. They'll run the reaction with the tin oxide catalyst. Then they'll raise the temperature to the point that the bismuth oxide comes to the surface as spikes. And then they'll run the reaction with the bismuth oxide catalyst. Thuo credits development of the new technology to working with talented students and two collaborators. "We built on this big idea very slowly," he said. "And working together, we were able to break into this knowledge gap."
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301112406.htm
|
Health risks to babies on the front line of climate change
|
Extreme rainfall associated with climate change is causing harm to babies in some of the most forgotten places on the planet, setting in motion a chain of disadvantage down the generations, according to new research.
|
Researchers from Lancaster University and the FIOCRUZ health research institute in Brazil found babies born to mothers exposed to extreme rainfall shocks were smaller, due to restricted fetal growth and premature birth. Low birth weight has life-long consequences for health and development, and the researchers say their findings are evidence of climate extremes causing intergenerational disadvantage, especially for socially marginalized Amazonians in forgotten places. Climate extremes can affect the health of mothers and their unborn babies in many ways -- for example causing crops to fail, reducing access to nutritious affordable food, and increasing the prevalence of infectious diseases. Extremely intense rainfall in Amazonia causes river flooding, exposing poorer households to water-borne diseases and creating ideal breeding conditions for mosquitos, leading to outbreaks of malaria or dengue fever. Major floods and droughts are extremely disruptive to people's lives; related stress and anxiety can contribute to premature birth and impair normal childhood development. The team focused on all the live births over an 11-year period in 43 highly river-dependent municipalities in Amazonas State, Brazil. For these 291,479 births they analysed how birth weight, fetal growth and pregnancy duration were affected by local rainfall variability during pregnancy. Extremely intense rainfall in Amazonia was associated with severely reduced mean birth weight due to pre-term birth or restricted growth -- the average birth-weight reduction was nearly 200 grams. Under climate change in Amazonia, sustained periods of exceptionally heavy rain are becoming more common, and subsequent floods are five times more common than just a few decades ago. Using satellite data, researchers calculated weeks of prenatal exposure -- including the pre-pregnancy trimester -- to each kind of rainfall variability and compared that to birth weight and pregnancy duration. Dr Luke Parry of Lancaster University's Environment Centre, one of the authors of the report, said: "Our study found climate extremes were adding another layer of disadvantage onto babies already facing a poor start in life. Due to the deep social inequalities in the Brazilian Amazon, children born to adolescent indigenous mothers with no formal or little education were over 600 grams smaller than those born into more privileged households. Extreme weather placed a further penalty onto these newborns. Reducing the health risks found by the team will require much greater investment into poverty alleviation and better healthcare if we are to help Amazonia's river-dwelling populations adapt to changing rainfall patterns and increasingly frequent and severe floods and droughts." Dr Erick Chacon-Montalvan of Lancaster University, lead author of the study, said: "We used publicly available data on birth records to go 'back in time' to look at the relationship between climate extremes and birth weight. Our study showed that even intense but non-extreme rainfall was harmful. Increasing climatic variability in Amazonia is concerning, in part because subsequent disadvantages associated with low birth weight include lower educational attainment, poorer health, reduced income in adulthood, and mortality risks." Jesem Orellana from FIOCRUZ in Brazil said: "Rainfall shocks confer inter-generational disadvantage for river-dependent populations living in neglected areas of Amazonia. These marginalized populations experience injustice because, despite contributing little to climate change, they are responsible for safeguarding most remaining forest and are highly susceptible to climatic shocks."
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301112403.htm
|
Global warming poses threat to food chains
|
Rising temperatures could reduce the efficiency of food chains and threaten the survival of larger animals, new research shows.
|
Scientists measured the transfer of energy from single-celled algae (phytoplankton) to small animals that eat them (zooplankton). The study -- by the University of Exeter and Queen Mary University of London -- found that warmer conditions increase the metabolic cost of growth, leading to less efficient energy flow through the food chain and ultimately to a reduction in overall biomass. "These findings shine a light on an under-appreciated consequence of global warming," said Professor Gabriel Yvon-Durocher, of the Environment and Sustainability Institute on Exeter's Penryn Campus in Cornwall. "Phytoplankton and zooplankton are the foundation of food webs that support freshwater and marine ecosystems that humans depend on. Our study is the first direct evidence that the cost of growth increases in higher temperatures, limiting the transfer of energy up a food chain." Professor Mark Trimmer, of Queen Mary University of London, said: "If the effects we find in this experiment are evident in natural ecosystems, the consequences could be profound. The impact on larger animals at the top of food chains -- which depend on energy passed up from lower down the food chain -- could be severe. More research is needed." "In general, about 10% of energy produced on one level of a food web makes it up to the next level," said Dr Diego Barneche, of the Australian Institute of Marine Science and the Oceans Institute at the University of Western Australia. "This happens because organisms expend a lot of energy on a variety of functions over a lifetime, and only a small fraction of the energy they consume is retained in biomass that ends up being eaten by predators. Warmer temperatures can cause metabolic rates to accelerate faster than growth rates, which reduces the energy available to predators in the next level up the food web." The study measured nitrogen transfer efficiency (a proxy for overall energy transfer) in freshwater plankton that had been exposed to a seven-year-long outdoor warming experiment in the UK.
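Dr Barneche's 10% figure compounds quickly over several trophic levels, which is why even a small warming-driven drop in transfer efficiency matters for top predators. The short sketch below illustrates that compounding; the 10% value is from the article, while the "warmed" efficiency is an arbitrary example, not a measured result from the study.

```python
# Compounding of trophic transfer efficiency up a food chain. The ~10% rule
# is quoted in the article; the reduced "warmed" efficiency is a made-up
# example used only to show how the effect compounds.
def energy_at_level(base_production, efficiency, transfers):
    """Energy reaching a consumer after a given number of trophic transfers."""
    return base_production * efficiency ** transfers

BASE = 1000.0  # arbitrary units of phytoplankton production

for efficiency, label in [(0.10, "ambient (~10%)"), (0.07, "warmed (example)")]:
    top = energy_at_level(BASE, efficiency, 3)
    print(f"{label:16s}: {top:6.2f} units reach the third level up")
```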
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301112331.htm
|
Hotter, drier, CRISPR: editing for climate change
|
Gene editing technology will play a vital role in climate-proofing future crops to protect global food supplies, according to scientists at The University of Queensland.
|
Biotechnologist Dr Karen Massel from UQ's Centre for Crop Science has published a review of gene editing technologies such as CRISPR-Cas9 to safeguard food security in farming systems under stress from extreme and variable climate conditions. "Farmers have been manipulating the DNA of plants using conventional breeding technologies for millennia, and now with new gene-editing technologies, we can do this with unprecedented safety, precision and speed," Dr Massel said. "This type of gene editing mimics the way cells repair in nature." Her review recommended integrating CRISPR-Cas9 genome editing into modern breeding programs for crop improvement in cereals. Energy-rich cereal crops such as wheat, rice, maize and sorghum provide two-thirds of the world's food energy intake. "Just 15 plant crops provide 90 per cent of the world's food calories," Dr Massel said. "It's a race between a changing climate and plant breeders' ability to produce crops with genetic resilience that grow well in adverse conditions and have enriched nutritional qualities. The problem is that it takes too long for breeders to detect and make that genetic diversity available to farmers, with a breeding cycle averaging about 15 years for cereal crops. Plus CRISPR allows us to do things we can't do through conventional breeding in terms of generating novel diversity and improving breeding for desirable traits." In proof-of-concept studies, Dr Massel and colleagues at the Queensland Alliance for Agriculture and Food Innovation (QAAFI) applied gene editing technology to sorghum and barley pre-breeding programs. "In sorghum, we edited the plant's genes to unlock the digestibility level of the available protein and to boost its nutritional value for humans and livestock," she said. "We've also used gene-editing to modify the canopy architecture and root architecture of both sorghum and barley, to improve water use efficiency." Dr Massel's research also compared the different genome sequences of cereals -- including wild variants and ancestors of modern cereals -- to differences in crop performance in different climates and under different kinds of stresses. "Wild varieties of production crops serve as a reservoir of genetic diversity, which is especially valuable when it comes to climate resilience," she said. "We are looking for genes or gene networks that will improve resilience in adverse growing climates. Once a viable gene variant is identified, the trick is to re-create it directly in high-performing cultivated crops without disrupting the delicate balance of genetics related to production traits. These kinds of changes can be so subtle that they are indistinguishable from the naturally occurring variants that inspired them." In 2019, Australia's Office of the Gene Technology Regulator deregulated gene-editing, differentiating it from genetically modified organism (GMO) technology. Gene-edited crops are not yet grown in Australia, but biosecurity and safety risk assessments of the technology are currently being undertaken. This research is funded by an Australian Research Council Discovery grant with support from the Queensland Department of Agriculture and Fisheries and The University of Queensland.
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301091144.htm
|
Low-level thinning can help restore redwood forests without affecting stream temperatures
|
Selectively cutting trees in riparian zones to aid forest restoration can be done without adversely affecting streams' water temperature as long as the thinning isn't too intensive, new research by Oregon State University shows.
|
The study has now been published. "We don't know much about what happens with the more subtle changes in shade and light that come with thinning," Roon said. "Most of the research so far has looked at the effects of clearcutting with no stream-side buffer at all, or harvests outside of an untouched buffer area. And regulatory requirements tend to look at single descriptors of stream temperature -- the warmest it gets in the middle of summer, for example -- and those descriptors possibly don't do a thorough job of explaining thermal influences on ecological processes." Riparian zones -- lands near streams, lakes, ponds, etc. -- have unique soil and vegetation characteristics that are influenced strongly by the presence of water. Riparian zones make up less than 1% of the total area of the American West, contrast significantly with the West's arid uplands and provide habitat for a range of endangered and threatened species. In riparian restoration, conservation managers look at an area's functional and structural elements -- climate, soils, weather patterns, hydrology, plants, wildlife and socioeconomic use patterns -- and actively or passively try to set in motion processes that enable natural ecological conditions to return. Riparian forests in the Pacific Northwest, Roon explains, were "extensively altered" by previous timber harvesting practices that persisted for much of the 20th century, including clearcutting trees right up to the water's edge. Leaving a buffer zone of vegetation alongside streams is critical for wildlife, especially salmon and trout. A riparian buffer also provides a wide array of ecosystem services for riparian forests and streams, including filtering sediment and excessive nutrients and providing shade to keep water temperatures cool, as well as storing carbon. In addition, mature trees next to the stream will eventually fall in, creating important habitat for fish. Roon and collaborators Jason Dunham and Jeremiah Groom examined the effects of riparian thinning on shade, light and stream temperature in three small watersheds in second-growth redwood forests in northern California. Dunham is an aquatic ecologist with the U.S. Geological Survey who has a courtesy appointment in OSU's Department of Fisheries and Wildlife. Groom, who holds two graduate degrees from Oregon State, operates Groom Analytics LLC, a data analysis consulting firm, and is an expert on the effects of forest harvest on stream temperatures. Northern California is well known for its groves of large, iconic redwood trees. However, intensive logging removed most of the old-growth forests from this region and less than 10% of those forests remain. Foresters are interested in whether thinning can be applied to young forests to help speed up the recovery of older redwood forests. Most forest restoration efforts so far have been focused on upland forests -- those where soils do not stay saturated for extended periods -- and have been generally successful; now attention is turning to young forests in riparian zones. The OSU research took place both on private timber land and in nearby Redwood National Park, bringing together land managers with different natural resource management requirements with the common goal of understanding the effects of thinning on aquatic ecosystems. In this study the focus was on stream temperatures -- an important consideration for the sensitive fish and amphibians that live in the watersheds of the region. The large-scale field study enabled the scientists to measure conditions before and after experimental thinning treatments. These types of field experiments are rare in riparian forests, which are carefully protected. "The power of these types of field experiments is that they can help us more directly attribute changes to the thinning treatment itself and remove other factors that often confound these types of studies," Roon said. "Responses to the thinning differed greatly depending on the intensity. In the watersheds where thinning treatments were more intensive, the reductions in shade and increases in light were sufficient to change the stream thermal regimes both locally and downstream. However, where the thinning treatments were less intensive, smaller reductions in shade and light resulted in minimal changes in stream temperatures." That means, Roon said, at lower intensity levels thinning within riparian zones in second-growth redwood forests looks like a feasible restoration strategy. This study is an important step toward making robust decisions regarding whether thinning, and how much of it, can be done without having adverse effects on streams. Future research, he added, could examine thinning treatments with a wider range of intensities. Roon also pointed out that the results were specific to the cool, coastal climates of the redwoods and would not necessarily apply to locations further inland. "We also need to understand the effects of thinning in other locations under a range of different contexts," he said. "Until we know more about the effects of thinning on stream temperatures across a broader range of conditions, it should still be approached with caution." Oregon State University, Green Diamond Resource Company, and Save the Redwoods League supported this research.
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301084521.htm
|
Visiting water bodies worth billions to economies
|
Europeans spend more than £700 billion (€800bn) a year on recreational visits to water bodies -- but perceived poor water quality costs almost £90 billion (€100bn) in lost visits, a new study has found.
|
The new research -- led by a European collaboration involving the University of Exeter and the University of Stirling -- used data from 11,000 visits in 14 different countries to analyse the economic value of water bodies, such as rivers, lakes, waterfalls, beaches and seaside promenades. The research team estimated that people spend an average of £35 (€40) travelling to and from these sites, with a typical family making 45 such trips each year. The team also found that people were much less likely to visit sites if the perceived water quality fell, at a cost of well over €100 billion per year. The finding highlights the importance of maintaining and improving high bathing water quality standards. The study has now been published. Professor Tobias Börger, of the Berlin School of Economics and Law, used data collected as part of the European Union-funded BlueHealth project, which surveyed more than 18,000 people on their use of water bodies and their health and wellbeing. He explained: "The COVID-19 crisis has taught us all how important access to natural green and blue spaces is for people's mental health and wellbeing. Our research highlights that it's also critical for the economy to maintain high standards of water quality, as the pandemic crisis begins to ease." Following a Directive adopted by the European Commission, across the EU member states more than 15,000 coastal and almost 7,000 inland designated bathing water sites must now prominently display signs stating water quality over the past four years. Around 95 per cent of sites meet minimum quality standards and are considered safe for bathing, while 85 per cent are rated as having excellent water quality. Professor Danny Campbell, from the University of Stirling, a co-author on the study, added: "While the study reveals that changes in water quality matter to people, we found that household income and educational attainment are not related to visiting water bodies. This shows that all parts of society can and do enjoy the benefits of such visits in terms of recreation, health and wellbeing." The findings fit well with a growing body of work looking at people's experiences of inland and coastal waters and health across Europe. Co-author of the study, Dr Mathew White at the University of Exeter, said: "Blue spaces benefit people in a variety of ways. They encourage physical activity, they help de-stress and relax people, and they are important places for spending quality time with family and friends, all things which help people's mental and physical health. This research finds that good water quality is key in encouraging people to take up these benefits." The team hopes their study will help planners and regulators justify the costs of building and maintaining the infrastructure needed to keep bathing water quality high.
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301095948.htm
|
To sustain a thriving café culture, we must ditch the disposable cup.
|
Takeaway coffees -- they're a convenient start for millions of people each day, but while the caffeine perks us up, the disposable cups drag us down, with nearly 300 billion ending up in landfill each year.
|
While most coffee drinkers are happy to make a switch to sustainable practices, new research from the University of South Australia shows that an absence of infrastructure and a general 'throwaway' culture is severely delaying sustainable change. It's a timely finding, particularly given the new bans on single-use plastics coming into effect in South Australia today, and the likelihood of takeaway coffee cups taking the hit by 2022. Lead researcher, UniSA's Dr Sukhbir Sandhu, says the current level of coffee cup waste is unsustainable and requires a commitment from individuals, retailers, and government agencies alike to initiate change. "There's no doubt we live in a disposable society -- so much of our lives is about convenient, on-the-run transactions. But such a speedy pace encourages the 'takeaway and throwaway' culture that we so desperately need to change," Dr Sandhu says. "Educating and informing people about the issues of single-use coffee cups is effective -- people generally want to do the right thing -- but knowing what's right and acting upon it are two different things, and at the moment, there are several barriers that are impeding potential progress. For example, if your favourite coffee shop doesn't offer recyclable or compostable cups, it's unlikely to stop you from getting a coffee; we need that coffee hit and we need it now. So, strike one. Then, with the popularity of arty, patterned paper cups on the rise, you may think you're buying a recyclable option. But no -- most takeaway coffee cups are in fact lined with a waterproof plastic, which is not only non-recyclable, but also a contaminant. Strike two. Finally, if you happen upon a coffee shop that does offer recyclable coffee cups, once you're finished, where do you put it? A lack of appropriate waste disposal infrastructure means that even compostable cups are ending up in landfill. Strike three. As it happens, compostable cups need to go into a green organics bin, but these bins might not be easily accessible in public settings like the standard shopping precincts." While the South Australian government is moving in the right direction with its Replace the Waste campaign, changing our 'grab and go' culture is challenging. "It's important to drive home clear, strong messages about single-use plastics and their impact on the environment," Dr Sandhu says. "The more we can drive people to choose reusable cups, the more uptake we'll see. People like to mimic what their colleagues, friends and peers do, especially when it is the right thing."
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301091153.htm
|
Climate change threatens European forests
|
In recent years, European forests have suffered greatly from extreme climate conditions and their impacts. More than half of Europe's forests are potentially at risk from windthrow, forest fire, insect attacks or a combination of these. This is the main result of a study by an international team of scientists with the participation of Henrik Hartmann from the Max Planck Institute for Biogeochemistry in Jena, Germany. Using satellite data and artificial intelligence, the scientists studied vulnerability to disturbances in the period between 1979 and 2018. In the light of ongoing climate change, their findings are very important for improving mitigation and adaptation strategies as well as forest management to make European forests more resilient for the future.
|
Forests cover a good third of Europe's land mass, play an important role in regulating the climate, and provide a wide range of ecosystem services to humans. However, in recent decades climate change has made forests increasingly vulnerable to disturbances. Forest structure and prevailing climate largely determine how vulnerable forests are to perturbations and vulnerability to insect infestations has increased notably in recent decades; especially in northern Europe. The boreal coniferous forests in cold regions and the dry forests of Iberian Peninsula are among the most fragile ecosystems.Henrik Hartmann, research group leader at the Max Planck Institute for Biogeochemistry, observes forest responses from an ecophysiological perspective and sums up "The experience of recent years, especially since 2018, has clearly shown that the threat to forests posed by insect pests has particularly increased with ongoing climate change. There is a risk that further climate warming will increase this trend."Extreme weather conditions such as heat waves and drought weaken trees and make them vulnerable to insect pests. "This finding is not new, and forests are normally well adapted to deal with occasional climate extremes. The fact that these extremes are now occurring so frequently and repeatedly makes the exception the norm, and forests cannot cope with that situation," the expert explains.The study also shows that large and old trees are particularly vulnerable to climatic extremes. In recent drought years, this has also been observed in Central European beech forests, where an increasing number of old trees suddenly died. "This is because their water transportation system has to work under greater stress to transport water from the soil through the roots and up into the high-up crown. As a result, large trees suffer more from drought and are then more susceptible to disease."Large and older trees are also preferred hosts for harmful insects. For example, the European spruce bark beetle, which mainly attacks adult spruce trees, prefers to fly to larger individuals. In addition, large trees also provide a greater area for wind attacks during storm events. "The results of the study are conclusive from both an ecological and an ecophysiological perspective," summarizes Henrik Hartmann.Existing European forests will not necessarily disappear, but many of them could be severely damaged by anticipated climate change-induced disturbances and important ecosystem services could be impaired by the loss of especially large and old trees.
|
Environment
| 2,021 |
March 1, 2021
|
https://www.sciencedaily.com/releases/2021/03/210301091156.htm
|
Rocket launches reveal water vapor effect in upper atmosphere
|
Results of a 2018 multirocket launch at Poker Flat Research Range north of Fairbanks, Alaska, will help scientists better understand the impact of more water vapor accumulating near the fringe of the Earth's atmosphere.
|
"This is the first time anyone has experimentally demonstrated that cloud formation in the mesosphere is directly linked to cooling by water vapor itself," said Irfan Azeem, space physicist at Astra LLC in Louisville, Colorado, and principal investigator of the Super Soaker mission.The NASA-funded project, named Super Soaker, involved launching a canister containing about 50 gallons of water skyward to create an artificial polar mesospheric cloud on the night of Jan. 25-26, 2018. A ground-based laser radar (lidar) detected the cloud that formed 18 seconds after the water release 50 miles overhead.A time-lapse photograph captures the Super Soaker launches on Jan. 25-26, 2018. Three rockets launched with the mission, two using vapor tracers to track wind movement and one releasing a water canister to seed a polar mesospheric cloud. The green laser beam visible at the top left is the LIDAR beam used to measure the artificial cloud.Findings of the Super Soaker experiment were published this month in the The Super Soaker experiment showed that water vapor contributes to cloud formation in the upper atmosphere in two ways: by making the air more humid and by cooling the air. This moisture-driven cooling is distinct from the "greenhouse effect" warming that water vapor causes in the lowest level of the atmosphere -- the troposphere.Increased water vapor comes from methane, which is being produced by humans in rising quantities. Methane rises into the mesosphere, where it oxidizes in sunlight to form water vapor and carbon dioxide.Space traffic also adds to the amount of water vapor collecting in this region of the atmosphere, as water vapor is common in rocket engine exhaust, the paper notes. Increases in space traffic will result in more water being deposited in the upper atmosphere."We expect that a more humid atmosphere should be cloudier, just as we see fog form over ponds on cold mornings. What we found that is new is that the water vapor appears to actively cool the atmosphere to promote cloud formation," Collins said."We don't believe it's going to cause a radical effect at the ground, but it helps us understand long-term climate trends in the atmosphere and the role of water vapor in the climate system," he said. "And it also allows us to understand better our weather and climate models, where a lot of times cloud formation is the acid test of whether all the parts of your weather and climate model are working."Polar mesospheric clouds exist at the edge of space at an altitude of 47 to 53 miles. Water vapor at these altitudes freezes into ice crystals, forming the clouds. These clouds can be seen as they glow brightly after sunset when they are lit from below by sunlight against a dark sky. Polar mesospheric clouds are also called night-glowing or noctilucent clouds.The clouds appear naturally in the Arctic or Antarctic during summer. Researchers chose to create the artificial cloud in the winter to have a controlled setting.These clouds have long been used as indicators of climate change. First reported in the late 1800s, the clouds have been seen more often in the 20th and 21st centuries.The Super Soaker experiment consisted of three rocket launches from Poker Flat Research Range within about 40 minutes.The first two rockets dispersed trimethyl aluminum, which, when it reacts with oxygen, produces the harmless products of aluminum oxide, carbon dioxide and water vapor, as well as a bluish white glow. 
By tracking this glow, researchers can determine the winds over a wide range of altitudes, letting them see the weather conditions into which the water was released.The third rocket, launched 90 seconds after the second rocket, carried the water canister. The canister was detonated at an altitude of 53 miles to create a small polar mesospheric cloud that was detectable only by the lidar.Researchers are eager to conduct a second Super Soaker, one that will eject much more water."What we saw was very, very fine, and we'd like to get more water up there to get a thicker cloud and see if we can directly measure the cooling effect rather than inferring it from the fact that the cloud formed," Collins said, referring to data acquired from the ground-based lidar.Other co-authors of the paper represented the Space Science Division at the Naval Research Laboratory in Washington, D.C.; Astra LLC of Boulder, Colorado; the Department of Physics, Center for Atmospheric and Space Sciences at Utah State University; the Department of Physics at Clemson University; GATS Inc. of Boulder, Colorado; and the Applied Physics Laboratory at Johns Hopkins University.Poker Flat Research Range is owned by the UAF Geophysical Institute and is operated under a contract with NASA's Wallops Flight Facility, which is part of the Goddard Space Flight Center.
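The link between cooling and cloud formation can be pictured with a standard saturation-vapour-pressure calculation. The sketch below uses the Murphy and Koop (2005) fit for saturation vapour pressure over ice, a published approximation rather than a formula or dataset from the Super Soaker paper, to show how strongly a few degrees of cooling lowers the amount of water vapour needed for ice to form.

```python
import math

def p_sat_ice(T_kelvin):
    """Saturation vapour pressure over ice in Pa (Murphy & Koop 2005 fit, valid T > 110 K)."""
    return math.exp(9.550426 - 5723.265 / T_kelvin
                    + 3.53068 * math.log(T_kelvin)
                    - 0.00728332 * T_kelvin)

# Temperatures in the range where polar mesospheric ice can form.
for T in (150.0, 145.0, 140.0):
    print(f"T = {T:5.1f} K  ->  saturation pressure over ice = {p_sat_ice(T):.2e} Pa")
```

Because the saturation pressure falls steeply with temperature, cooling the air (as the released water vapour appears to do) makes the existing vapour far easier to freeze into cloud particles.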
|
Environment
| 2,021 |
February 27, 2021
|
https://www.sciencedaily.com/releases/2021/02/210227083311.htm
|
When using pyrite to understand Earth's ocean and atmosphere: Think local, not global
|
The ocean floor is vast and varied, making up more than 70% of the Earth's surface. Scientists have long used information from sediments at the bottom of the ocean -- layers of rock and microbial muck -- to reconstruct the conditions in oceans of the past.
|
These reconstructions are important for understanding how and when oxygen became available in Earth's atmosphere and ultimately increased to the levels that support life as we know it today.Yet reconstructions that rely on signals from sedimentary rocks but ignore the impact of local sedimentary processes do so at their own peril, according to geoscientists including David Fike in Arts & Sciences at Washington University in St. Louis.Their new study published Feb. 26 in The researchers compared pyrite in sediments collected in a borehole drilled in the shelf just off the eastern coast of New Zealand with sediments drilled from the same ocean basin but hundreds of kilometers out into the Pacific."We were able to get a gradient of shallow to deep sediments and compare the differences between those isotopic compositions in pyrite between those sections," said Fike, professor of Earth and planetary sciences and director of environmental studies at Washington University."We demonstrate that, for this one basin in the open ocean, you get very different signals between shallow and deep water, which is prima facie evidence to argue that these signals aren't the global fingerprint of oxygen in the atmosphere," said Fike, who also serves as director of Washington University's International Center for Energy, Environment and Sustainability (InCEES).Instead of pointing directly to oxygen, the same signals from pyrite could be reinterpreted as they relate to other important factors, Fike said, such as sea level change and plate tectonics.Fike and first author Virgil Pasquier, a postdoctoral fellow at the Weizmann Institute of Sciences in Israel, first questioned the way that pyrite has been used as a proxy in a study published in PNAS in 2017 using Mediterranean Sea sediments. For his postdoctoral research, Pasquier has been working with professor Itay Halevy at the Weizmann Institute to understand the various controls on the isotopic composition of pyrite. Their results raise concerns about the common use of pyrite sulfur isotopes to reconstruct Earth's evolving oxidation state."Strictly speaking, we are investigating the coupled cycles of carbon, oxygen and sulfur, and the controls on the oxidation state of the atmosphere," Pasquier said."It's much more sexy for a paper to reconstruct past changes in ocean chemistry than to focus on the burial of rocks or what happened during the burial," he said. "But I find this part even more interesting. Because most microbial life -- especially back when oxygen was initially accumulating in the atmosphere -- occurred in sediments. And if our ultimate goal is to understand oxygenation of the oceans, then we have to understand this."For this study, the team conducted 185 sulfur isotope analyses of pyrite along the two boreholes. They determined that changes in the signals in pyrite from the nearshore borehole were more controlled by sea level-driven changes in local sedimentation, rather than any other factor.In contrast, sediments in the deeper borehole were immune to the sea-level changes. Instead, they recorded a signal associated with the long-term reorganization of ocean currents."There is a water depth threshold," said Roger Bryant, a co-author and PhD graduate of Fike's laboratory at Washington University, now a postdoctoral fellow at the University of Chicago. 
"Once you go below that water depth, sulfur isotopes apparently are not sensitive to things like climate and environmental conditions in the surface environment."Fike added: "The Earth is a complicated place, and we need to remember that when we try to reconstruct how it has changed in the past. There are a number of different processes that impact the kinds of signals that get preserved. As we try to better understand Earth's long-term evolution, we need to have a more nuanced view about how to extract information from those signals."
|
Environment
| 2,021 |
February 26, 2021
|
https://www.sciencedaily.com/releases/2021/02/210226103802.htm
|
Microbes deep beneath seafloor survive on byproducts of radioactive process
|
A team of researchers from the University of Rhode Island's Graduate School of Oceanography and their collaborators have revealed that the abundant microbes living in ancient sediment below the seafloor are sustained primarily by chemicals created by the natural irradiation of water molecules.
|
The team discovered that the creation of these chemicals is amplified significantly by minerals in marine sediment. In contrast to the conventional view that life in sediment is fueled by products of photosynthesis, an ecosystem fueled by irradiation of water begins just meters below the seafloor in much of the open ocean. This radiation-fueled world is one of Earth's volumetrically largest ecosystems.The research was published today in the journal "This work provides an important new perspective on the availability of resources that subsurface microbial communities can use to sustain themselves. This is fundamental to understand life on Earth and to constrain the habitability of other planetary bodies, such as Mars," said Justine Sauvage, the study's lead author and a postdoctoral fellow at the University of Gothenburg who conducted the research as a doctoral student at URI.The process driving the research team's findings is radiolysis of water -- the splitting of water molecules into hydrogen and oxidants as a result of being exposed to naturally occurring radiation. Steven D'Hondt, URI professor of oceanography and a co-author of the study, said the resulting molecules become the primary source of food and energy for the microbes living in the sediment."The marine sediment actually amplifies the production of these usable chemicals," he said. "If you have the same amount of irradiation in pure water and in wet sediment, you get a lot more hydrogen from wet sediment. The sediment makes the production of hydrogen much more effective."Why the process is amplified in wet sediment is unclear, but D'Hondt speculates that minerals in the sediment may "behave like a semiconductor, making the process more efficient."The discoveries resulted from a series of laboratory experiments conducted in the Rhode Island Nuclear Science Center. Sauvage irradiated vials of wet sediment from various locations in the Pacific and Atlantic Oceans, collected by the Integrated Ocean Drilling Program and by U.S. research vessels. She compared the production of hydrogen to similarly irradiated vials of seawater and distilled water. The sediment amplified the results by as much as a factor of 30."This study is a unique combination of sophisticated laboratory experiments integrated into a global biological context," said co-author Arthur Spivack, URI professor of oceanography.The implications of the findings are significant."If you can support life in subsurface marine sediment and other subsurface environments from natural radioactive splitting of water, then maybe you can support life the same way in other worlds," said D'Hondt. "Some of the same minerals are present on Mars, and as long as you have those wet catalytic minerals, you're going to have this process. If you can catalyze production of radiolytic chemicals at high rates in the wet Martian subsurface, you could potentially sustain life at the same levels that it's sustained in marine sediment."Sauvage added, "This is especially relevant given that the Perseverance Rover has just landed on Mars, with its mission to collect Martian rocks and to characterize its habitable environments."D'Hondt said the research team's findings also have implications for the nuclear industry, including for how nuclear waste is stored and how nuclear accidents are managed. "If you store nuclear waste in sediment or rock, it may generate hydrogen and oxidants faster than in pure water. 
That natural catalysis may make those storage systems more corrosive than is generally realized," he said.The next steps for the research team will be to explore the effect of hydrogen production through radiolysis in other environments on Earth and beyond, including oceanic crust, continental crust and subsurface Mars. They also will seek to advance the understanding of how subsurface microbial communities live, interact and evolve when their primary energy source is derived from the natural radiolytic splitting of water.
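A minimal sketch of the "amplification factor" comparison described above is given below. The hydrogen yields are invented placeholders rather than measured values from the irradiation experiments; only the ratio calculation mirrors the reported sediment-versus-pure-water comparison.

```python
# Toy comparison of radiolytic H2 yields. The yields below are invented
# placeholders, not values from the paper; only the ratio mirrors the
# reported sediment-vs-pure-water comparison.
h2_yield_pure_water  = 0.45    # arbitrary units of H2 per radiation dose
h2_yield_wet_sediment = 13.5   # same dose and water volume, sediment present

amplification = h2_yield_wet_sediment / h2_yield_pure_water
print(f"Sediment amplifies radiolytic H2 production by a factor of {amplification:.0f}")
# The study reports amplification by as much as a factor of ~30 for some sediments.
```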
|
Environment
| 2,021 |
February 26, 2021
|
https://www.sciencedaily.com/releases/2021/02/210226103746.htm
|
Stark warning: Combating ecosystem collapse from the tropics to the Antarctic
|
Eminent scientists warn that key ecosystems around Australia and Antarctica are collapsing, and propose a three-step framework to combat irreversible global damage.
|
Their report, authored by 38 Australian, UK and US scientists from universities and government agencies, is published today in the international journal Lead author, Dr Dana Bergstrom from the Australian Antarctic Division, said that the project emerged from a conference inspired by her ecological research in polar environments."I was seeing unbelievably rapid, widespread dieback in the alpine tundra of World Heritage-listed Macquarie Island and started wondering if this was happening elsewhere," Dr Bergstrom said."With my colleagues from the Australian Antarctic Division and the University of Queensland we organised a national conference and workshop on 'Ecological Surprises and Rapid Collapse of Ecosystems in a Changing World', with support from the Australian Academy of Sciences."The resulting paper and extensive case studies examine the current state and recent trajectories of 19 marine and terrestrial ecosystems across all Australian states, spanning 58° of latitude from coral reefs to Antarctica. Findings include:Michael Depledge CBE, Emeritus Professor at the University of Exeter and former Chief Scientific Advisor to the Environment Agency of England and Wales, said the research had particular significance following the UK Government commissioned Dasgupta Review , which recently highlighted the catastrophic economic damage associated with biodiversity loss.Professor Depledge said: "Our paper is a further wake-up call that shows ecosystems are in varying states of collapse from the tropics to Antarctica. These findings from Australia are a stark warning of what is happening everywhere, and will continue without urgent action. The implications for human health and wellbeing are serious. Fortunately, as we show, by raising awareness, and anticipating risks there is still time to take action to address these changes."Our paper will hopefully increase awareness that our ecosystems are collapsing around us. We can already observe the damaging consequences for the health and wellbeing of some communities and anticipate threats to others. Taking stronger action now will avoid heaping further misery on a global population that is already bearing the scars of the global pandemic."The paper recommends a new '3As' framework to guide decision-making about actions to combat irreversible damage:Protecting pencil pines from fire in the Southwest Tasmanian Wilderness World Heritage Area: by mapping vegetation values against fire sensitivity (to identify fire-prone Gondwanan conifer communities), maintaining an area specific awareness of the shifting causation of bushfires (increasing frequency of dry lightning strikes), and developing new action strategies to lessen the pressure of unregulated fire (installing sprinkler systems), conservation managers established and used Awareness and Anticipation to formulate positive Action.The scientific team concluded that in the near future, even apparently resilient ecosystems are likely to suffer collapse as the intensity and frequency of pressures increase."Anticipating and preparing for future change is necessary for most ecosystems, unless we are willing to accept a high risk of loss," Dr Bergstrom said."Protecting the iconic ecosystems we have highlighted is not just for the animals and plants that live there. Our economic livelihoods, and therefore ultimately our survival, are intimately connected to the natural world."
|
Environment
| 2,021 |
February 25, 2021
|
https://www.sciencedaily.com/releases/2021/02/210225171641.htm
|
New sustainable building simulation method points to the future of design
|
A team from Cornell University's Environmental Systems Lab, led by recent graduate Allison Bernett, has put forth a new framework for injecting as much information as possible into the pre-design and early design phases of a project, potentially saving architects and design teams time and money down the road.
|
"(Our framework) allows designers to understand the full environmental impact of their building," said Bernett, corresponding author of "Sustainability Evaluation for Early Design (SEED) Framework for Energy Use, Embodied Carbon, Cost, and Daylighting Assessment" which published Jan. 10 in the Principle investigators are Timur Dogan, assistant professor of architecture in the College of Architecture, Art and Planning; and Katharina Kral, a licensed architect and lecturer in the Department of Architecture."How we look at this is, there's the cost of change in the design process, and then the opportunity of impact," Dogan said. "In the very beginning, changing something doesn't cost anything, but if you're a month into the project, changing something is really expensive, because now you have to rehire consultants and redesign things."And then the other thing is the potential of impact," he said. "In the very beginning, just with a simple nudge in the right direction, you can change a project from being an energy hog to something that's very sustainable, and integrates well into the environment."In 2018, according to the International Energy Agency, the construction sector accounted for 39% of energy and process-related greenhouse gas emissions. That included 11% originating from the manufacturing of building materials and products.The Sustainability Evaluation for Early Design (SEED) Framework is a decision-making tool that can dynamically and concurrently simulate several variables: building energy performance; embodied carbon (carbon emissions generated by construction and materials); construction cost; and daylighting (the use of natural light to illuminate indoor spaces).The framework will allow architects and design teams to rapidly trial and rank tens of thousands of design iterations, using as few as four inputs.Using publicly available data and a suite of available design simulation programs -- including Rhino/Grasshopper (a CAD program); ClimateStudio, developed by Dogan, for daylight simulation and building energy modeling; and engineering software Karamba3D -- Bernett and the team tested SEED in a case study of a hypothetical mid-sized office building modeled in Boston, Washington, D.C., and Phoenix.The SEED Framework generated thousands of design options based on variables specific to the three cities in the case study, offering designers the flexibility of many options early in the process, before changing course would get too expensive."The idea is, you run this analysis," Dogan said, "and you get a few options that already make a lot of sense, and some options that you can completely forget about. ... [It] always comes down to this lack of information in the decision-making process."In that sense, the construction industry is super inefficient," he said. "There's too many players who don't know the full picture and then make decisions that are not always rational. This framework that Allison worked on is geared to help bring the information to the table. Every stakeholder in the design process can then form their own opinion about design goal priorities."SEED's greatest asset, Bernett said, is amassing a tranche of data on multiple factors in one place, and involving architects early in the design and pre-design phases."It takes a lot of time to gather all that data, and we have that prepackaged. 
So there's definitely a hunger for that," said Bernett, who presented the SEED Framework in September 2019 at the International Building Performance Simulation Conference, in Rome."Right now, we rely heavily on energy modelers and consultants to do this work," she said. "And if we can involve architects more readily and more early on, I think that we're going to see a lot of improvement and cost-effectiveness to these early design decisions."In addition to the publicly available design simulations, the team used AutoFrame, a new procedure developed by Kral for automatically computing structural systems. AutoFrame helps improve the precision of embodied carbon assessments and daylight simulations.The Cornell Atkinson Center for Sustainability's Small Grants Program provided pivotal support for this work, Bernett said."That funding really gave it the push it needed," she said. "It allowed me to present a first iteration [of SEED] at the conference in Rome, and then to really flesh out the research more after that."
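The generate-and-rank workflow described here can be sketched in a few lines. The example below is a minimal stand-in, not the SEED Framework itself: the four inputs, the performance formulas, the embodied-carbon figures and the weights are all invented for illustration, and real evaluations would come from simulation tools such as ClimateStudio and Karamba3D.

```python
from itertools import product

# Minimal stand-in for "generate many design variants, then rank them".
# All formulas and numbers below are invented placeholders.
window_to_wall = [0.2, 0.4, 0.6, 0.8]        # glazing fraction of the facade
insulation_r   = [2.0, 4.0, 6.0]             # wall R-value, m2K/W
orientation    = [0, 90, 180, 270]           # building rotation, degrees
structure      = ["timber", "steel", "concrete"]

embodied = {"timber": 150.0, "steel": 350.0, "concrete": 300.0}   # kgCO2e/m2, placeholder

def evaluate(wwr, r_value, azimuth, frame):
    energy   = 120 - 8 * r_value + 25 * wwr + (5 if azimuth in (90, 270) else 0)  # kWh/m2/yr
    carbon   = embodied[frame] + 2.5 * energy                                     # kgCO2e/m2
    cost     = 1200 + 60 * r_value + 400 * wwr                                    # USD/m2
    daylight = min(1.0, 0.3 + wwr)                                                # 0..1
    # Weighted score: lower is better for energy, carbon and cost; higher for daylight.
    score = 0.4 * energy + 0.03 * carbon + 0.002 * cost - 10 * daylight
    return score, energy, carbon, cost, daylight

variants = [(evaluate(w, r, a, s), (w, r, a, s))
            for w, r, a, s in product(window_to_wall, insulation_r, orientation, structure)]
variants.sort(key=lambda v: v[0][0])          # best composite score first

for (score, energy, carbon, cost, daylight), params in variants[:3]:
    print(f"{params}  score={score:6.1f}  energy={energy:5.1f}  "
          f"carbon={carbon:5.1f}  cost={cost:6.0f}  daylight={daylight:.2f}")
```

This toy grid covers only 144 combinations; the point is simply that, with cheap enough evaluation functions, tens of thousands of early-design variants can be screened and ranked before any expensive detailed modelling begins.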
|
Environment
| 2,021 |
February 25, 2021
|
https://www.sciencedaily.com/releases/2021/02/210225163224.htm
|
A tangled food web
|
Born in food web ecology, the concept of trophic levels -- the hierarchy of who eats who in the natural world -- is an elegant way to understand how biomass and energy move through a natural system. It's only natural that the idea found its way into the realm of aquaculture, where marine and freshwater farmers try to maximize their product with efficient inputs.
|
"It's often used as a measure of how sustainable it is to harvest or consume that species," said Rich Cottrell(link is external), a postdoctoral researcher at UC Santa Barbara's National Center for Ecological Analysis & Synthesis (NCEAS). As plants (level 1) become food to plant eaters (level 2), who in turn are consumed by carnivores (level 3) and so on, the amount of energy required to support the same weight of organisms increases, he explained. As a result, species at levels 4 or 5, such as tuna, require far more energy per pound than would species in the lower trophic levels. It's the same reason vegetarian diets are often considered to be more sustainable than meat-eating ones."In the same manner, trophic level measures are now being recommended in policy settings for use as an indicator of the sustainability of fish farming, or aquaculture," Cottrell said. The lower the trophic level, the more sustainable the species is considered to be, and so policy often calls for more farming of low-trophic species.However, argue Cottrell and fellow aquaculture experts in a paper(link is external) published in the journal The causes for that have largely to do with how today's farmed fish are fed."Most of the fish and invertebrates that we farm for food are produced using human-made feeds," Cottrell explained. "But these feeds are constantly changing, and so the meaning of farmed trophic levels is changing through time." For instance, he pointed out, salmon are considered to be at a higher trophic level because their naturally carnivorous diets would require large amounts of fishmeal and oil, but advances in feed and manufacturing have reduced the proportion of fish-based ingredients to 10-15% in modern salmon diets. Meanwhile, herbivorous species such as carp and tilapia have been found to respond favorably to small amounts of fishmeal in their feed."In reality, they're now farmed at similar trophic levels," Cottrell said. "The line between 'low' and 'high' trophic levels will continue to blur with innovation."The trophic level concept misses still another important aspect of aquaculture sustainability in the realm of feed and resource efficiency, or how efficiently the farmed animals convert what they are fed into edible food."This is not well explained by trophic level," Cottrell said, adding that despite their high trophic placement, many carnivorous farmed fish could be more feed-efficient than their naturally carnivorous counterparts. And because aquaculture is increasingly turning to agriculture to provide replacements for fishmeal and oil, the promise of sustainability might be an empty one."Replacing fish-based ingredients with crops has led to a dramatic reduction in the trophic level of fed aquaculture species, but we know very little about how sustainable it is to increase pressure on global agricultural systems," he said.As the global aquaculture sector strives to meet the growing demand for farmed seafood, the researchers say it's time to rethink the use of trophic levels as a rule for and measure of sustainability. Stipulating low trophic level aquaculture recommendations may not be successful in promoting greater sustainability, Cottrell said. Boosting the supply of mussels, for instance, may not fulfill increasing demand for shrimp or salmon."It behooves us to find a way to ensure that for high-demand products, we produce these in the most environmentally efficient and socially responsible way possible," he said. 
"Trophic levels will not get us there."Fortunately, there are efforts at more nuanced sustainability assessments, such as voluntary certifications through the Aquaculture Stewardship Council or Best Aquaculture Practices, which examine the impacts of aquaculture at the farm level and through supply chains."Greater support for these programs and incentives for producers from various regions and production systems to join them would be a far more robust way to strengthen the sustainability of the aquaculture sector going forward," Cottrell said.
|
Environment
| 2,021 |
February 25, 2021
|
https://www.sciencedaily.com/releases/2021/02/210225143923.htm
|
Rare bee found after 100 years
|
A widespread field search for a rare Australian native bee not recorded for almost a century has found it's been there all along -- but is probably under increasing pressure to survive.
|
Only six individuals were ever found, with the last published record of this Australian endemic bee species, Pharohylaeus lactiferus (Colletidae: Hylaeinae), from 1923 in Queensland."This is concerning because it is the only Australian species in the Pharohylaeus genus and nothing was known of its biology," Flinders University researcher James Dorey says in a new scientific paper in the journal The hunt began after fellow bee experts Olivia Davies and Dr Tobias Smith raised the possibility of the species' extinction based on the lack of any recent sightings. The 'rediscovery' followed extensive sampling of 225 general and 20 targeted sampling sites across New South Wales and Queensland.Along with extra bee and vegetation recordings from the Atlas of Living Australia, which lists 500 bee species in NSW and 657 in Queensland, the Flinders researchers sought to assess the latest levels of true diversity, warning that habitat loss and fragmentation of Australia's rainforests, along with wildfires and climate change, are likely to put extinction pressure on this and other invertebrate species."Three populations of P. lactiferus were found by sampling bees visiting their favoured plant species along much of the Australian east coast, suggesting population isolation," says Flinders University biological sciences PhD candidate James Dorey.Highly fragmented habitat and potential host specialisation might explain the rarity of P. lactiferus.Australia has already cleared more than 40% of its forests and woodlands since European colonisation, leaving much of the remainder fragmented and degraded (Bradshaw 2012)."My geographical analyses used to explore habitat destruction in the Wet Tropics and Central Mackay Coast bioregions indicate susceptibility of Queensland rainforests and P. lactiferus populations to bushfires, particularly in the context of a fragmented landscape," Mr Dorey says.The study also warns the species is even more vulnerable as they appear to favour specific floral specimens and were only found near tropical or sub-tropical rainforest -- a single vegetation type."Collections indicate possible floral and habitat specialisation with specimens only visiting firewheel trees, Stenocarpus sinuatus (Proteaceae), and Illawarra flame trees, Brachychiton acerifolius (Malvaceae), to the exclusion of other available floral resources."Known populations of P. lactiferus remain rare and susceptible to habitat destruction (e.g. from changed land use or events such as fires), the paper concludes."Future research should aim to increase our understanding of the biology, ecology and population genetics of P. lactiferus.""If we are to understand and protect these wonderful Australian species, we really need to increase biomonitoring and conservation efforts, along with funding for the museum curation and digitisation of their collections and other initiatives," Mr Dorey says.
|
Environment
| 2,021 |
February 25, 2021
|
https://www.sciencedaily.com/releases/2021/02/210225113356.htm
|
Forests' long-term capacity to store carbon is dropping in regions with extreme annual fires
|
Researchers have analysed decades' worth of data on the impact of repeated fires on ecosystems across the world. Their results, published today in the journal
|
Savannah ecosystems, and regions with extreme wet or dry seasons were found to be the most sensitive to changes in fire frequency. Trees in regions with moderate climate are more resistant. Repeated fires also cause less damage to tree species with protective traits like thicker bark.These effects only emerge over the course of several decades: the effect of a single fire is very different from repeated burning over time. The study found that after 50 years, regions with the most extreme annual fires had 72% lower wood area -- a surrogate for biomass -- with 63% fewer individual trees than in regions that never burned. Such changes to the tree community can reduce the forest's long-term ability to store carbon, but may buffer the effect of future fires."Planting trees in areas where trees grow rapidly is widely promoted as a way to mitigate climate change. But to be sustainable, plans must consider the possibility of changes in fire frequency and intensity over the longer term," said Dr Adam Pellegrini in the University of Cambridge's Department of Plant Sciences, first author of the paper.He added: "Our study shows that although wetter regions are better for tree growth, they're also more vulnerable to fire. That will influence the areas we should manage to try and mitigate climate change."Past studies have found that frequent fires reduce levels of nutrients -- including nitrogen -- in the soil. The new study demonstrates that this can favour slower-growing tree species that have adaptations to help them survive with less nutrients. But these tree species also slow down nutrient cycling in the soil -- they hold onto what they have. This can limit the recovery of the forest as a whole by reducing the nutrients available for plant growth after an intense fire.Wildfires are playing an increasingly important role in global carbon emissions. Fire burns five percent of the Earth's surface every year, releasing carbon dioxide into the atmosphere equivalent to 20% of our annual fossil fuel emissions.In the past, the majority of carbon released by wildfires was recaptured as ecosystems regenerated. But the more frequent fires of recent years, driven by changes in climate and land use, don't always allow time for this."As fire frequency and intensity increases because of climate change, the structure and functioning of forest ecosystems are going to change in so many ways because of changes in tree composition," said Pellegrini.He added: "More fire-tolerant tree species are generally slower growing, reducing the productivity of the forest. As climate change causes wildfires to become more intense and droughts more severe, it could hamper the ability of forests to recover -- reducing their capacity for carbon storage."
|
Environment
| 2,021 |
February 25, 2021
|
https://www.sciencedaily.com/releases/2021/02/210225113347.htm
|
Extreme melt on Antarctica's George VI ice shelf
|
Antarctica's northern George VI Ice Shelf experienced record melting during the 2019-2020 summer season compared to 31 previous summers of dramatically lower melt, a University of Colorado Boulder-led study found. The extreme melt coincided with record-setting stretches when local surface air temperatures were at or above the freezing point.
|
"During the 2019-2020 austral summer, we observed the most widespread melt and greatest total number of melt days of any season for the northern George VI Ice Shelf," said CIRES Research Scientist Alison Banwell, lead author of the study published in Banwell and her co-authors -- scientists at CU Boulder's National Snow and Ice Data Center and the Department of Atmospheric and Oceanic Sciences, NASA Goddard and international institutions -- studied the 2019-2020 melt season on the northern George VI Ice Shelf using a variety of satellite observations that can detect meltwater on top of the ice and within the near-surface snow.Surface meltwater ponding is potentially dangerous to ice shelves, according to Banwell, because when these lakes drain, the ice fractures and may trigger ice-shelf break-up. "The George VI Ice Shelf buttresses the largest volume of upstream grounded ice of any Antarctic Peninsula ice shelf. So if this ice shelf breaks up, ice that rests on land would flow more quickly into the ocean and contribute more to sea level rise than any other ice shelf on the Peninsula," Banwell said.The 2019-2020 melt season was the longest for the northern George VI Ice Shelf -- the second largest ice shelf on the Antarctic Peninsula -- but it wasn't the longest melt season over the entire peninsula: the 1992-1993 melt season was the longest. But as air temperatures continue to warm, increased melting on the northern George VI Ice Shelf and other Antarctic ice shelves may lead to ice-shelf break-up events and ultimately sea-level rise.To determine the factors that caused the record melt on the northern George VI Ice Shelf, the researchers examined local weather data -- including surface temperature, relative humidity, wind direction, and wind speed -- from the British Antarctic Survey's Fossil Bluff weather station on the ice shelf's western margin.Banwell and her colleagues documented several multi-day periods with warmer-than-average air temperatures that likely contributed to the exceptional melt of 2019-2020. "Overall, a higher percentage of air temperatures during the 2019-2020 season -- 33 percent -- were at or above zero degrees Celsius (32 degrees Fahrenheit) compared to any prior season going back to 2007," Banwell said.The researchers identified periods from late November onwards when temperatures were consistently above the freezing point for up to 90 hours. "When the temperature is above zero degrees Celsius, that limits refreezing and also leads to further melting. Water absorbs more radiation than snow and ice, and that leads to even more melting," Banwell said.To detect surface melt over large areas, the researchers used satellite microwave data. "Microwave data lets us look at the brightness temperature of the surface and the near surface snow, which indicates whether or not there is meltwater present," Banwell said. Observations going back to 1979 showed that the extent and duration of surface melt on the northern George VI Ice Shelf in 2019-2020 were greater than the 31 previous summers.Next, the researchers used satellite imagery to calculate the volume of meltwater on the George VI Ice Shelf, finding that the 2019-2020 melt season had the largest volume of surface meltwater since 2013. 
Meltwater ponding peaked on January 19, 2020, when satellite images showed 23 percent of the entire study area covered in water, with a total volume of 0.62 km3 -- equal to about 250,000 Olympic-size swimming pools.Banwell's research on ice-shelf stability includes a field project funded by the National Science Foundation. During the researchers' first field season in November 2019, they installed instruments to measure changes in the elevation, lake depth and weather conditions on the northern George VI Ice Shelf. "We were due to go back in November 2020, but that was canceled due to COVID," Banwell said. "We hope to return later this year when it's once again safe to pursue field work in this remote region of Earth."
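A quick unit check of the meltwater volume quoted above, assuming a nominal 2,500 cubic metre Olympic pool:

```python
# Consistency check of the "about 250,000 Olympic-size pools" figure.
meltwater_km3 = 0.62
olympic_pool_m3 = 50 * 25 * 2          # nominal 2,500 m3 pool
meltwater_m3 = meltwater_km3 * 1e9     # 1 km3 = 1e9 m3
print(f"{meltwater_m3 / olympic_pool_m3:,.0f} Olympic-size pools")   # ~248,000
```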
|
Environment
| 2,021 |
February 25, 2021
|
https://www.sciencedaily.com/releases/2021/02/210225082525.htm
|
Over 80% of Atlantic Rainforest remnants have been impacted by human activity
|
A Brazilian study published in
|
According to the authors, in terms of carbon storage, the biomass erosion corresponds to the destruction of 70,000 square kilometers (km²) of forest -- almost 10 million soccer pitches -- or USD 2.3 billion-USD 2.6 billion in carbon credits. "These figures have direct implications for mechanisms of climate change mitigation," they state in the article.Atlantic Rainforest remnants in Brazil are strung along its long coastline. The biome once covered 15% of Brazil, totaling 1,315,460 km². Only 20% of the original area is now left. The fragments are of varying sizes and have different characteristics.To estimate the impact of human activity on these remnants, the researchers used data from 1,819 forest inventories conducted by several research groups."These inventories are a sort of tree census. The researchers go into the field and choose an area to survey, typically 100 meters by 100 meters. All the trees found within this perimeter are identified, analyzed, and measured," said Renato de Lima, a researcher at the University of São Paulo's Institute of Biosciences (IB-USP) and leader of the study. "We compiled all the data available in the scientific literature and calculated the average loss of biodiversity and biomass in the fragments studied, which represent 1% of the biome. We then used statistical methods to extrapolate the results to the fragments not studied, assuming that the impact would be constant throughout the Atlantic Rainforest biome."After identifying the tree species in a fragment, the researchers estimated the size of their seeds and also what they call the "ecological or successional group." These two factors indicate how healthy the forest is, according to Lima. "There are hardy plants that require very little in the way of local resources and can grow on wasteland, pasture, forest borders, etc. These are known as pioneer species. A Brazilian example is the Ambay pumpwood [Cecropia pachystachya]," he said.Pioneer tree species tend to produce seeds of smaller size, but in large numbers, because each seed has such a small chance of germinating. At the opposite extreme are climax species that flourish only in favorable environments, such as Brazilwood (Paubrasilia echinata) or various species of the genus Ocotea. These trees produce larger seeds with a substantial store of nutrients."This kind of seed requires a heavier investment by the parent tree in terms of energy," Lima said. "Areas in which climax species are present typically support more diversified fauna, so they serve as a marker of overall forest quality. Areas in which pioneer species predominate have probably been disturbed in the recent past."The IB-USP group set out to show how the loss of late-successional species correlated with overall biodiversity loss and also with biomass loss, which represents the reduction in the forest's capacity to store carbon and keep this greenhouse gas out of the atmosphere. They found the forest fragments studied to have 25%-32% less biomass, 23%-31% fewer tree species, and 33%-42% fewer individuals belonging to late-successional, large-seeded, and endemic species.The analysis also showed that biodiversity and biomass erosion were lower in strictly protected conservation units, especially large ones. "The smaller the forest fragment and the larger the edge area, the easier it is for people to gain access and disturb the remnant," Lima said.On the positive side, degraded forest areas can recoup their carbon storage capacity if they are restored. 
"Combating deforestation and restoring totally degraded open areas such as pasturelands have been a major focus. These two strategies are very important, but we shouldn't forget the fragments in the middle," Lima said.According to Paulo Inácio Prado, a professor at IB-USP and last author of the study, restored forest remnants can attract billions of dollars in investment relating to carbon credits. "Degraded forests should no longer be considered a liability. They're an opportunity to attract investment, create jobs and conserve what still remains of the Atlantic Rainforest," he said.Lima believes this could be an attractive strategy for landowners in protected areas of the biome. "There's no need to reduce the amount of available arable land. Instead, we should increase the biomass in forest fragments, recouping part of the cost of restoration in the form of carbon credits," he said. "There will be no future for the Atlantic Rainforest without the owners of private properties. Only 9% of the remaining forest fragments are on state-owned land."DatabaseAccording to Lima, the study began during his postdoctoral research, which was supported by São Paulo Research Foundation -- FAPESP and supervised by Prado. The aim was to identify the key factors that determine biodiversity and biomass loss in remnants of Atlantic Rainforest. "We found human action to be a major factor," he said. "We considered activities such as logging, hunting, and invasion by exotic species, as well as the indirect effects of forest fragmentation."The data obtained from the 1,819 forest inventories used in the research is stored in a repository called TreeCo, short for Neotropical Tree Communities. Lima developed the database during his postdoctoral fellowship and still runs it. Its contents are described in an article published in Biodiversity and Conservation."The repository became a byproduct of my postdoctoral project, and more than ten PhD and master's candidates are using it in their research," Lima said.
|
Environment
| 2,021 |
February 25, 2021
|
https://www.sciencedaily.com/releases/2021/02/210225082517.htm
|
The risks of communicating extreme climate forecasts
|
Apocalypse now? The all-too-common practice of making climate doomsday forecasts is not just bad science, it's also a terrible way to communicate important information.
|
For decades, climate change researchers and activists have used dramatic forecasts to attempt to influence public perception of the problem and as a call to action on climate change. These forecasts have frequently been for events that might be called "apocalyptic," because they predict cataclysmic events resulting from climate change.In a new paper published in the Rode and Fischbeck, professor of Social & Decision Sciences and Engineering & Public Policy, collected 79 predictions of climate-caused apocalypse going back to the first Earth Day in 1970. With the passage of time, many of these forecasts have since expired; the dates have come and gone uneventfully. In fact, 48 (61%) of the predictions have already expired as of the end of 2020.Fischbeck noted, "from a forecasting perspective, the 'problem' is not only that all of the expired forecasts were wrong, but also that so many of them never admitted to any uncertainty about the date. About 43% of the forecasts in our dataset made no mention of uncertainty."In some cases, the forecasters were both explicit and certain. For example, Stanford University biologist Paul Ehrlich and British environmental activist Prince Charles are serial failed forecasters, repeatedly expressing high degrees of certainty about apocalyptic climate events.Rode commented "Ehrlich has made predictions of environmental collapse going back to 1970 that he has described as having 'near certainty'. Prince Charles has similarly warned repeatedly of 'irretrievable ecosystem collapse' if actions were not taken, and when expired, repeated the prediction with a new definitive end date. Their predictions have repeatedly been apocalyptic and highly certain...and so far, they've also been wrong."The researchers noted that the average time horizon before a climate apocalypse for the 11 predictions made prior to 2000 was 22 years, while for the 68 predictions made after 2000, the average time horizon was 21 years. Despite the passage of time, little has changed -- across a half a century of forecasts; the apocalypse is always about 20 years out.Fischbeck continued, "It's like the boy who repeatedly cried wolf. If I observe many successive forecast failures, I may be unwilling to take future forecasts seriously.That's a problem for climate science, say Rode and Fischbeck."The underlying science of climate change has many solid results," says Fischbeck, "the problem is often the leap in connecting the prediction of climate events to the prediction of the consequences of those events." Human efforts at adaptation and mitigation, together with the complexity of socio-physical systems, means that the prediction of sea level rise, for example, may not necessarily lead to apocalyptic flooding."By linking the climate event and the potential consequence for dramatic effect," noted Rode, "a failure to observe the consequence may unfairly call into question the legitimacy of the science behind the climate event."With the new Biden administration making climate change policy a top priority, trust in scientific predictions about climate change is more crucial than ever, however scientists will have to be wary in qualifying their predictions. In measuring the proliferation the forecasts through search results, the authors found that forecasts that did not mention uncertainty in their apocalyptic date tended to be more visible (i.e., have more search results available). 
Making sensational predictions of the doom of humanity, while scientifically dubious, has still proven tempting for those wishing to grab headlines.The trouble with this is that scientists, due to their training, tend to make more cautious statements and more often include references to uncertainty. Rode and Fischbeck found that while 81% of the forecasts made by scientists referenced uncertainty, less than half of the forecasts made by non-scientists did."This is not surprising," said Rode, "but it is troubling when you consider that forecasts that reference uncertainty are less visible on the web. This results in the most visible voices often being the least qualified."Rode and Fischbeck argue that scientists must take extraordinary caution in communicating events of great consequence. When it comes to climate change, the authors advise "thinking small." That is, focusing on making predictions that are less grandiose and shorter in term. "If you want people to believe big predictions, you first need to convince them that you can make little predictions," says Rode.Fischbeck added, "We need forecasts of a greater variety of climate variables, we need them made on a regular basis, and we need expert assessments of their uncertainties so people can better calibrate themselves to the accuracy of the forecaster."
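The bookkeeping behind these summary statistics is straightforward to reproduce. The records below are invented stand-ins for the 79 compiled forecasts; only the tally logic (share expired, share without stated uncertainty, mean horizon by era) mirrors the kind of analysis the paper reports.

```python
from statistics import mean

# Each record: (year forecast was made, predicted year of apocalypse, mentions uncertainty).
# Invented placeholder data, not the authors' dataset.
forecasts = [
    (1970, 1985, False), (1972, 2000, False), (1988, 2010, True),
    (2004, 2025, False), (2006, 2030, True),  (2009, 2020, False),
    (2017, 2040, True),  (2019, 2030, False),
]

expired = [f for f in forecasts if f[1] <= 2020]
no_uncertainty = [f for f in forecasts if not f[2]]

def horizon(f):
    return f[1] - f[0]

print(f"expired by end-2020:    {len(expired)}/{len(forecasts)}")
print(f"no stated uncertainty:  {len(no_uncertainty)}/{len(forecasts)}")
print(f"mean horizon, pre-2000: {mean(horizon(f) for f in forecasts if f[0] < 2000):.1f} yr")
print(f"mean horizon, 2000+:    {mean(horizon(f) for f in forecasts if f[0] >= 2000):.1f} yr")
```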
|
Environment
| 2,021 |
February 24, 2021
|
https://www.sciencedaily.com/releases/2021/02/210224143550.htm
|
Record-high Arctic freshwater will flow to Labrador Sea, affecting local and global oceans
|
Freshwater is accumulating in the Arctic Ocean. The Beaufort Sea, which is the largest Arctic Ocean freshwater reservoir, has increased its freshwater content by 40% over the past two decades. How and where this water will flow into the Atlantic Ocean is important for local and global ocean conditions.
|
A study from the University of Washington, Los Alamos National Laboratory and the National Oceanic and Atmospheric Administration shows that this freshwater travels through the Canadian Archipelago to reach the Labrador Sea, rather than through the wider marine passageways that connect to seas in Northern Europe. The open-access study was published Feb. 23 in "The Canadian Archipelago is a major conduit between the Arctic and the North Atlantic," said lead author Jiaxu Zhang, a UW postdoctoral researcher at the Cooperative Institute for Climate, Ocean and Ecosystem Studies. "In the future, if the winds get weaker and the freshwater gets released, there is a potential for this high amount of water to have a big influence in the Labrador Sea region."The finding has implications for the Labrador Sea marine environment, since Arctic water tends to be fresher but also rich in nutrients. This pathway also affects larger oceanic currents, namely a conveyor-belt circulation in the Atlantic Ocean in which colder, heavier water sinks in the North Atlantic and comes back along the surface as the Gulf Stream. Fresher, lighter water entering the Labrador Sea could slow that overturning circulation."We know that the Arctic Ocean has one of the biggest climate change signals," said co-author Wei Cheng at the UW-based Cooperative Institute for Climate, Ocean and Atmosphere Studies. "Right now this freshwater is still trapped in the Arctic. But once it gets out, it can have a very large impact."Fresher water reaches the Arctic Ocean through rain, snow, rivers, inflows from the relatively fresher Pacific Ocean, as well as the recent melting of Arctic Ocean sea ice. Fresher, lighter water floats at the top, and clockwise winds in the Beaufort Sea push that lighter water together to create a dome.When those winds relax, the dome will flatten and the freshwater gets released into the North Atlantic."People have already spent a lot of time studying why the Beaufort Sea freshwater has gotten so high in the past few decades," said Zhang, who began the work at Los Alamos National Laboratory. "But they rarely care where the freshwater goes, and we think that's a much more important problem."Using a technique Zhang developed to track ocean salinity, the researchers simulated the ocean circulation and followed the Beaufort Sea freshwater's spread in a past event that occurred from 1983 to 1995.Their experiment showed that most of the freshwater reached the Labrador Sea through the Canadian Archipelago, a complex set of narrow passages between Canada and Greenland. This region is poorly studied and was thought to be less important for freshwater flow than the much wider Fram Strait, which connects to the Northern European seas.In the model, the 1983-1995 freshwater release traveled mostly along the North American route and significantly reduced the salinities in the Labrador Sea -- a freshening of 0.2 parts per thousand on its shallower western edge, off the coast of Newfoundland and Labrador, and of 0.4 parts per thousand inside the Labrador Current.The volume of freshwater now in the Beaufort Sea is about twice the size of the case studied, at more than 23,300 cubic kilometers, or more than 5,500 cubic miles. This volume of freshwater released into the North Atlantic could have significant effects. The exact impact is unknown. 
The study focused on past events, and current research is looking at where today's freshwater buildup might end up and what changes it could trigger."A freshwater release of this size into the subpolar North Atlantic could impact a critical circulation pattern, called the Atlantic Meridional Overturning Circulation, which has a significant influence on Northern Hemisphere climate," said co-author Wilbert Weijer at Los Alamos National Lab.This research was funded by the Department of Energy, the National Science Foundation, Los Alamos National Laboratory, and NOAA. Other authors are Mike Steele at the UW Applied Physics Laboratory and Tarun Verma and Milena Veneziani at Los Alamos National Lab.
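A quick unit check of the freshwater volume quoted above:

```python
# Cubic kilometres to cubic miles for the Beaufort Sea freshwater volume.
KM3_PER_MI3 = 1.609344 ** 3            # ~4.168 km3 in one cubic mile
freshwater_km3 = 23_300
print(f"{freshwater_km3 / KM3_PER_MI3:,.0f} cubic miles")   # ~5,590, i.e. "more than 5,500"
```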
|
Environment
| 2,021 |
February 24, 2021
|
https://www.sciencedaily.com/releases/2021/02/210224120321.htm
|
Scientists begin building highly accurate digital twin of our planet
|
To become climate neutral by 2050, the European Union launched two ambitious programmes: the "Green Deal" and the "Digital Strategy." As a key component of their successful implementation, climate scientists and computer scientists launched the "Destination Earth" initiative, which will start in mid-2021 and is expected to run for up to ten years. During this period, a highly accurate digital model of the Earth is to be created, a digital twin of the Earth, to map climate development and extreme events as accurately as possible in space and time.
|
Observational data will be continuously incorporated into the digital twin in order to make the digital Earth model more accurate for monitoring the evolution and predicting possible future trajectories. But in addition to the observation data conventionally used for weather and climate simulations, the researchers also want to integrate new data on relevant human activities into the model. The new "Earth system model" will represent virtually all processes on the Earth's surface as realistically as possible, including the influence of humans on water, food and energy management, and the processes in the physical Earth system. The digital twin of the Earth is intended to be an information system that develops and tests scenarios that show more sustainable development and thus better inform policies. "If you are planning a two-metre high dike in The Netherlands, for example, I can run through the data in my digital twin and check whether the dike will in all likelihood still protect against expected extreme events in 2050," says Peter Bauer, deputy director for Research at the European Centre for Medium-Range Weather Forecasts (ECMWF) and co-initiator of Destination Earth. The digital twin will also be used for strategic planning of fresh water and food supplies or wind farms and solar plants. The driving forces behind Destination Earth are the ECMWF, the European Space Agency (ESA), and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). Together with other scientists, Bauer is driving the climate science and meteorological aspects of the Earth's digital twin, but they also rely on the know-how of computer scientists from ETH Zurich and the Swiss National Supercomputing Centre (CSCS), namely ETH professors Torsten Hoefler, from the Institute for High Performance Computing Systems, and Thomas Schulthess, Director of CSCS. In order to take this big step in the digital revolution, Bauer emphasises the need for earth sciences to be married to the computer sciences. In a recent paper, the researchers look back on the steady development of weather models since the 1940s, a success story that took place quietly. Meteorologists pioneered, so to speak, simulations of physical processes on the world's largest computers. As a physicist and computer scientist, CSCS's Schulthess is therefore convinced that today's weather and climate models are ideally suited to identify completely new ways for many more scientific disciplines to use supercomputers efficiently. In the past, weather and climate modelling used different approaches to simulate the Earth system. Whereas climate models represent a very broad set of physical processes, they typically neglect small-scale processes, which, however, are essential for the more precise weather forecasts, which in turn focus on a smaller number of processes. The digital twin will bring both areas together and enable high-resolution simulations that depict the complex processes of the entire Earth system. But in order to achieve this, the codes of the simulation programmes must be adapted to new technologies promising much enhanced computing power. With the computers and algorithms available today, the highly complex simulations can hardly be carried out at the planned extremely high resolution of one kilometre because, for decades, code development stagnated from a computer science perspective.
Climate research benefited from being able to gain higher performance by way of new generations of processors without having to fundamentally change its programmes. This free performance gain with each new processor generation stopped about 10 years ago. As a result, today's programmes can often only utilise 5 per cent of the peak performance of conventional processors (CPUs). To achieve the necessary improvements, the authors emphasize the need for co-design, i.e. developing hardware and algorithms together and simultaneously, as CSCS successfully demonstrated during the last ten years. They suggest paying particular attention to generic data structures, optimised spatial discretisation of the grid to be calculated and optimisation of the time step lengths. The scientists further propose separating the codes for solving the scientific problem from the codes that optimally perform the computation on the respective system architecture. This more flexible programme structure would allow a faster and more efficient switch to future architectures. The authors also see great potential in artificial intelligence (AI). It can be used, for example, for data assimilation or the processing of observation data, the representation of uncertain physical processes in the models and data compression. AI thus makes it possible to speed up the simulations and filter out the most important information from large amounts of data. Additionally, the researchers assume that the use of machine learning not only makes the calculations more efficient, but also can help describe the physical processes more accurately. The scientists see their strategy paper as a starting point on the path to a digital twin of the Earth. Among the computer architectures available today and those expected in the near future, supercomputers based on graphics processing units (GPUs) appear to be the most promising option. The researchers estimate that operating a digital twin at full scale would require a system with about 20,000 GPUs, consuming an estimated 20 MW of power. For both economic and ecological reasons, such a computer should be operated at a location where CO2-neutral generated electricity is available in sufficient quantities.
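One of the proposals above is to separate the code that expresses the scientific problem from the code that runs it efficiently on a particular architecture. The sketch below illustrates that separation with a toy diffusion step written against a generic array module; the example, and the CuPy backend mentioned in the final comment, are assumptions for illustration, not part of any Destination Earth code.

```python
# Toy illustration of separating science code (a diffusion step) from the backend
# that executes it; the setup is hypothetical, not an Earth-system model.
import numpy as np

def laplacian(field, xp):
    """Science code: 5-point Laplacian, written against a generic array module xp."""
    return (
        xp.roll(field, 1, 0) + xp.roll(field, -1, 0) +
        xp.roll(field, 1, 1) + xp.roll(field, -1, 1) - 4.0 * field
    )

def step(field, dt, kappa, xp=np):
    """One explicit diffusion step; swapping xp changes the hardware backend."""
    return field + dt * kappa * laplacian(field, xp)

field = np.random.rand(128, 128)
for _ in range(100):
    field = step(field, dt=0.1, kappa=0.2)   # runs on CPU via NumPy
# With a GPU backend: import cupy; field = step(cupy.asarray(field), 0.1, 0.2, xp=cupy)
```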
|
Environment
| 2,021 |
February 23, 2021
|
https://www.sciencedaily.com/releases/2021/02/210223164442.htm
|
Positive reinforcements help algorithm forecast underground natural reserves
|
Texas A&M University researchers have designed a reinforcement-based algorithm that automates the process of predicting the properties of the underground environment, facilitating the accurate forecasting of oil and gas reserves.
|
Within the Earth's crust, layers of rock hold bountiful reservoirs of groundwater, oil and natural gas. Now, using machine learning, researchers at Texas A&M University have developed an algorithm that automates the process of determining key features of the Earth's subterranean environment. They said this research might help with accurate forecasting of our natural reserves. Specifically, the researchers' algorithm is designed on the principle of reinforcement or reward learning. Here, the computer algorithm converges on the correct description of the underground environment based on rewards it accrues for making correct predictions of the pressure and flow expected from boreholes. "Subsurface systems that are typically a mile below our feet are completely opaque. At that depth we cannot see anything and have to use instruments to measure quantities, like pressure and rates of flow," said Siddharth Misra, associate professor in the Harold Vance Department of Petroleum Engineering and the Department of Geology and Geophysics. "Although my current study is a first step, my goal is to have a completely automated way of using that information to accurately characterize the properties of the subsurface." The algorithm is described in the December issue of the journal. Simulating the geology of the underground environment can greatly facilitate forecasting of oil and gas reserves, predicting groundwater systems and anticipating seismic hazards. Depending on the intended application, boreholes serve as exit sites for oil, gas and water, or entry sites for excess atmospheric carbon dioxide that needs to be trapped underground. Along the length of the boreholes, drilling operators can ascertain the pressures and flow rates of liquids or gas by placing sensors. Conventionally, these sensor measurements are plugged into elaborate mathematical formulations, or reservoir models, that predict the properties of the subsurface such as the porosity and permeability of rocks. But reservoir models are mathematically cumbersome, require extensive human intervention, and at times even give a flawed picture of the underground geology. Misra said there has been an ongoing effort to construct algorithms that are free from human involvement yet accurate. For their study, Misra and his team chose a type of machine-learning algorithm based on the concept of reinforcement learning. Simply put, the software learns to make a series of decisions based on feedback from its computational environment. "Imagine a bird in a cage. The bird will interact with the boundaries of the cage where it can sit or swing or where there is food and water. It keeps getting feedback from its environment, which helps it decide which places in the cage it would rather be at a given time," Misra said. "Algorithms based on reinforcement learning are based on a similar idea. They too interact with an environment, but it's a computational environment, to reach a decision or a solution to a given problem." So, these algorithms are rewarded for favorable predictions and are penalized for unfavorable ones. Over time, reinforcement-based algorithms arrive at the correct solution by maximizing their accrued reward. Another technical advantage of reinforcement-based algorithms is that they do not make any presuppositions about the pattern of data. For example, Misra's algorithm does not assume that the pressure measured at a certain time and depth is related to what the pressure was at the same depth in the past.
This property makes his algorithm less biased, thereby reducing the chances of error in predicting the subterranean environment. When initiated, Misra's algorithm begins by randomly guessing a value for porosity and permeability of the rocks constituting the subsurface. Based on these values, the algorithm calculates a flow rate and pressure that it expects from a borehole. If these values do not match the actual values obtained from field measurements, also known as historical data, the algorithm is penalized. Consequently, it is forced to correct its next guess for the porosity and permeability. However, if its guesses were somewhat correct, the algorithm is rewarded and makes further guesses along that direction. The researchers found that within 10 iterations of reinforcement learning the algorithm was able to correctly and very quickly predict the properties of simple subsurface scenarios. Misra noted that although the subsurface simulated in their study was simplistic, their work is still a proof of concept that reinforcement algorithms can be used successfully in automated reservoir-property predictions, also referred to as automated history matching. "A subsurface system can have 10 or 20 boreholes spread over a two- to five-mile radius. If we understand the subsurface clearly, we can plan and predict a lot of things in advance, for example, we would be able to anticipate subsurface environments if we go a bit deeper or the flow rate of gas at that depth," Misra said. "In this study, we have turned history matching into a sequential decision-making problem, which has the potential to reduce engineers' efforts, mitigate human bias and remove the need of large sets of labeled training data." He said future work will focus on simulating more complex reservoirs and improving the computational efficiency of the algorithm. Hao Li of the University of Oklahoma was a contributor to this work. This research is funded by the United States Department of Energy.
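The guess-penalize-reward loop described above can be illustrated with a toy example. The sketch below is not the published algorithm: it uses a made-up one-parameter forward model and a simple reward-driven hill climb to recover a "subsurface" permeability from synthetic historical pressure data.

```python
# Toy illustration of reward-driven history matching (not the published algorithm).
# A hypothetical forward model maps permeability to borehole pressures; the agent
# keeps a guess and accepts random perturbations only when the reward improves.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(permeability_md, times):
    """Stand-in physics: pressure decline whose rate depends on permeability."""
    return 3000.0 * np.exp(-permeability_md * 1e-3 * times)

times = np.linspace(0.0, 100.0, 25)
true_perm = 50.0                                    # "unknown" subsurface property
observed = forward_model(true_perm, times)          # plays the role of historical data

guess, best_reward = 10.0, -np.inf
for step in range(200):
    candidate = guess + rng.normal(scale=2.0)
    reward = -np.mean((forward_model(candidate, times) - observed) ** 2)
    if reward > best_reward:                        # rewarded: keep moving this way
        guess, best_reward = candidate, reward      # penalized guesses are discarded

print(f"Estimated permeability: {guess:.1f} mD (true value {true_perm} mD)")
```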
|
Environment
| 2,021 |
February 23, 2021
|
https://www.sciencedaily.com/releases/2021/02/210223135513.htm
|
Alaska thunderstorms may triple with climate change
|
Warming temperatures will potentially alter the climate in Alaska so profoundly later this century that the number of thunderstorms will triple, increasing the risks of widespread flash flooding, landslides, and lightning-induced wildfires, new research finds.
|
In a pair of new papers, a research team led by scientists at the Paris Sciences and Letters University and the National Center for Atmospheric Research (NCAR) shows that the sea ice around Alaska could largely give way to open water in the warmer months, creating an ample source of moisture for the atmosphere. This moisture, combined with warmer temperatures that can hold more water vapor, would turbocharge summertime storms over Alaska by the end of the century under a high greenhouse gas emissions scenario. "Alaska can expect three times as many storms, and those storms will be more intense," said NCAR scientist Andreas Prein, a co-author of the new papers. "It will be a very different regime of rainfall." The thunderstorms would extend throughout Alaska, even in far northern regions where such storms are virtually unheard of. In more southern regions of the state that currently experience occasional thunderstorms, the storms would become far more frequent and peak rainfall rates would increase by more than a third. The scientists used a suite of advanced computer models and a specialized algorithm to simulate future weather conditions and to track the sources of moisture in the atmosphere. They noted that the impacts in Alaska could be significantly reduced if society curbed emissions. The findings have far-reaching implications for the 49th state. Flooding is already the most expensive type of natural disaster in central Alaska, and wildfires ignited by lightning strikes are a major hazard. "We suspect that the increasing number of thunderstorms might have significant impacts, such as amplifying spring floods or causing more wildfire ignitions," said Basile Poujol, a scientist with the Paris Sciences and Letters University and lead author of both studies. "Further studies are necessary to determine whether these impacts are likely to occur and, if so, their potential effects on ecosystems and society." The two studies have now been published. Alaska is expected to warm by 6-9 degrees Celsius (about 11-16 degrees Fahrenheit) by the end of the century if society pumps out high amounts of greenhouse gases. The vast state is already experiencing damaging impacts from warmer temperatures, including longer wildfire seasons, record heat waves, and landslides and sinkholes caused by melting permafrost. If thunderstorms become more common in Alaska, it would represent a major shift in the state's climate. Organized convective storms, including powerful systems of thunderstorms, are a frequent occurrence in the tropics and midlatitudes, where the atmosphere is moist and solar heating creates instability and rapidly rising parcels of air. In contrast, the colder Arctic provides an inhospitable environment for high-impact thunderstorms. For the first paper, which focused on how Alaskan thunderstorms may change later this century, the authors compared computer simulations of Alaska's current-day climate with the conditions expected at the end of the century. They fed data from global climate models into the higher-resolution NCAR-based Weather Research and Forecasting (WRF) model, which enabled them to generate detailed simulations of Alaska's weather and climate.
They then applied a specialized storm-tracking algorithm, focusing on large thunderstorm clusters in the simulations that extended for dozens to hundreds of miles and unleashed more than an inch of rain per hour -- the type of event that could lead to far-reaching flash flooding and landslides.To confirm that the models were realistic, the authors compared the simulations of recent atmospheric conditions with observations of actual conditions from radar, satellite, lightning sensors, and other sources.The results showed that thunderstorm frequency south of the Yukon River increased from about once a year to every month during the warm season. Hourly rainfall rates increased noticeably, ranging up to 37% higher in the cores of storms. In addition, thunderstorms began appearing in regions that had not previously experienced them, such as the North Slope and West Coast.The second paper focused on the reasons for the increase in thunderstorms. After using WRF and other models to develop a detailed representation of the atmosphere over Alaska, including temperature, water vapor, and seasonal sea ice cover, the research team applied a specialized model to trace air parcels back to their sources."Our goal was to determine the sources of moisture and associated changes that would fuel such a significant increase in thunderstorms over Alaska," said NCAR scientist Maria Molina, a co-author of the second study.The results showed that moist air masses from ice-free regions of the Gulf of Alaska, Bering Sea, and Arctic Ocean will provide abundant fuel for storms. The warmer atmosphere will experience increasingly powerful thunderstorms that are more likely to organize and form large-scale clusters, increasing the potential for heavy rain and lightning.Prein said the effects of increased storms in Alaska could be particularly severe because the landscape will be reshaped by melting permafrost and the northerly migration of boreal forests."The potential for flash flooding and landslides is definitely increasing, and the Arctic is becoming way more flammable," he said. "It's hard to grasp what the ecological changes will be in the future."These modeling results from the two studies are in agreement with observed increases in thunderstorm activity in Arctic regions. The authors urged more research into other high-latitude regions to understand if they will experience similar changes."There's a lot of value in doing targeted regional climate model simulations that can capture smaller-scale events like thunderstorms and open the door for us to begin to understand more of the complex ways that climate change will impact many aspects of life all over the globe," said NCAR scientist Andrew Newman, a co-author of the first paper. "These two studies show the potential for the Arctic to experience previously unseen weather events in addition to traditionally highlighted changes such as sea ice loss."
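The storm-tracking step amounts to scanning gridded model output for contiguous areas of intense rain. A much-simplified, hypothetical version of that idea is sketched below: it thresholds an hourly rain-rate field at one inch (25.4 mm) per hour and labels connected clusters. It is not the algorithm used in these studies, and the synthetic field and size cutoff are arbitrary.

```python
# Simplified illustration of storm-cluster identification (not the study's algorithm):
# threshold an hourly rain-rate grid and label connected regions of intense rain.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
rain_rate_mm_per_hr = rng.gamma(shape=0.5, scale=8.0, size=(200, 200))  # synthetic field

intense = rain_rate_mm_per_hr > 25.4          # more than one inch of rain per hour
labels, n_clusters = ndimage.label(intense)   # connected-component labelling
sizes = ndimage.sum(intense, labels, index=range(1, n_clusters + 1))

# Keep only clusters covering at least 20 grid cells (a stand-in for "large" storms).
large = [i + 1 for i, s in enumerate(sizes) if s >= 20]
print(f"{n_clusters} intense clusters found, {len(large)} of them large")
```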
|
Environment
| 2,021 |
February 23, 2021
|
https://www.sciencedaily.com/releases/2021/02/210223135345.htm
|
Agile underwater glider could quietly survey the seas
|
Autonomous underwater vehicles have become versatile tools for exploring the seas. But they can be disruptive to the environment or have trouble traveling through confined spaces.
|
Purdue University researchers are studying an alternative: highly maneuverable, low-cost underwater gliders that operate silently. Components and sensors of the glider also can be easily swapped out or added according to a wide range of mission specifications."Our goal is persistent operation of mobile robots in challenging environments," said Nina Mahmoudian, associate professor of mechanical engineering. "Most underwater robots have limited battery life and must return back after just a few hours. For long-endurance operations, an underwater glider can travel for weeks or months between charges but could benefit from increased deployment opportunities in high-risk areas."An underwater glider differs from other marine robots because it has no propeller or active propulsion system. It changes its own buoyancy to sink down and rise up, and to propel itself forward. Although this up-and-down approach enables very energy-efficient vehicles, it presents several problems: The vehicles are expensive, slow and not maneuverable, especially in shallow water.Mahmoudian has developed an agile vehicle called ROUGHIE (Research Oriented Underwater Glider for Hands on Investigative Engineering). Shaped like a torpedo, ROUGHIE is about four feet long and features no outward propulsion or control surfaces other than a static rear wing. When deployed from shore or from a boat, ROUGHIE pumps water into its ballast tanks to change its buoyancy and provide initial glide path angle. To control its pitch, the vehicle's battery subtly shifts its weight forward and backward, acting as its own control mechanism. To steer, the entire suite of inner components are mounted on a rail that rotates, precisely controlling the vehicle's roll. The design is modular and adaptable for a variety of applications."This is a totally unique approach," Mahmoudian said. "Most underwater gliders can only operate in deep oceans and are not agile for confined spaces. ROUGHIE has a turning radius of only about 10 feet, compared to an approximately 33-foot turn radius of other gliders."ROUGHIE is so maneuverable that Mahmoudian's team has been testing it in the diving well at Purdue's Morgan J. Burke Aquatic Center. By installing a motion capture system of infrared cameras below the water, they can track the vehicle's movements and characterize its maneuvering behavior in three dimensions with millimeter accuracy."We program ROUGHIE with flight patterns ahead of time, and it performs those patterns autonomously," Mahmoudian said. "It can do standard sawtooth up-and-down movements to travel in a straight line, but it can also travel in circular patterns or S-shaped patterns, which it would use when patrolling at sea. The fact that it can perform these tasks within the confined environment of a swimming pool using nothing but internal actuation is incredibly impressive."This maneuverability means that ROUGHIE is able to follow complex paths and can explore real-world areas other underwater gliders can't."It can operate in shallow seas and coastal areas, which is so important for biology or climate studies," Mahmoudian said. "And because it's totally quiet, it won't disturb wildlife or disrupt water currents like motorized vehicles do."ROUGHIE can be fitted with a variety of sensors, gathering temperature, pressure and conductivity data vital to oceanographers. Mahmoudian's team has sent ROUGHIE into small ponds and lakes with a fluorimeter to measure algae bloom. 
The team also outfitted the vehicle with compact magnetometers, capable of detecting anomalies like shipwrecks and underwater munitions. This research has been published recently in the journal. Mahmoudian and her students have been developing ROUGHIE since 2012, when she began the project at Michigan Technological University. "My students designed and built it from scratch, and they developed the control and navigational algorithms in parallel," Mahmoudian said. "For the price of a current commercial vehicle, we can put 10 of these in the water, monitoring conditions for months at a time. We believe this vehicle has great value to any local community." This work is supported by the National Science Foundation (grant 1921060), Office of Naval Research (grant N00014-20-1-2085) and the Naval Sea Systems Command Small Business Technology Transfer program N68335-19-C-0086.
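The sawtooth flight pattern mentioned above follows directly from the glider's buoyancy-driven kinematics. The sketch below generates an idealized sawtooth depth profile from a glide angle and through-water speed; the numbers are illustrative assumptions, not ROUGHIE's actual flight parameters or control code.

```python
# Minimal kinematic sketch of a buoyancy-driven sawtooth glide (illustrative values only).
import math

def sawtooth_depth(t_s, speed_m_s=0.4, glide_angle_deg=25.0,
                   surface_m=1.0, bottom_m=9.0):
    """Depth at time t for a glider cycling between two depths at a fixed glide angle."""
    vertical_speed = speed_m_s * math.sin(math.radians(glide_angle_deg))
    half_period = (bottom_m - surface_m) / vertical_speed   # time for one descent (or ascent)
    phase = (t_s % (2.0 * half_period)) / half_period
    if phase < 1.0:                                          # descending leg
        return surface_m + phase * (bottom_m - surface_m)
    return bottom_m - (phase - 1.0) * (bottom_m - surface_m)  # ascending leg

for t in range(0, 240, 30):
    print(f"t = {t:3d} s  depth = {sawtooth_depth(t):.1f} m")
```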
|
Environment
| 2,021 |
February 23, 2021
|
https://www.sciencedaily.com/releases/2021/02/210223121648.htm
|
Biopolymer-coated nanocatalyst can help realize a hydrogen fuel-driven future
|
To combat climate change, shifting from fossil fuels to clean and sustainable energy sources is imperative. A popular candidate in this regard is hydrogen, an eco-friendly fuel that produces only water when used. However, the efficient methods of hydrogen production are usually not eco-friendly. The eco-friendly alternative of splitting water with sunlight to produce hydrogen is inefficient and suffers from low stability of the photocatalyst (material that facilitates chemical reactions by absorbing light). How does one address the issue of developing a stable and efficient photocatalyst?
|
In a recently published study, the scientists fabricated polydopamine (PDA)-coated zinc sulfide (ZnS) nanocatalysts through polymerization to coat dopamine onto ZnS nanorods, and varied the polymerization period to create samples of three different PDA thicknesses -- 1.2 nm (ZnS/PDA1), 2.1 nm (ZnS/PDA2), and 3.5 nm (ZnS/PDA3). They then measured the photocatalytic performance of these samples by monitoring their hydrogen production under simulated sunlight illumination. The ZnS/PDA1 catalyst showed the highest hydrogen production rate, followed by ZnS/PDA2, uncoated ZnS, and ZnS/PDA3. The team attributed the inferior performance of ZnS/PDA2 and ZnS/PDA3 to more light absorption by the thicker PDA coatings, which reduced the light reaching ZnS and impeded the excited charge carriers from reaching the surface; uncoated ZnS, by contrast, underwent photocorrosion. To understand the role of electronic structure in the observed enhancement, the scientists measured emission and extinction spectra of the samples and performed density functional theory calculations. The former revealed that the enhanced absorption was due to Zn-O or O-Zn-S shells forming on ZnS and the creation of energy levels near the valence band (the highest atomic level filled with electrons) that can accept "holes" (absence of electrons), while the calculations showed that ZnS/PDA has a unique "doubly staggered" electronic structure that facilitates the transport and separation of charge carriers at the surface. The improved durability was due to the lowered oxidative capacity of holes in the valence states of PDA. Dr. Kim and his team are hopeful of wider applications of their technique. "The polydopamine coating utilized in our work is also applicable to other groups of selenide, boride, and telluride-based catalysts," comments Dr. Kim. The future might indeed be hydrogen!
|
Environment
| 2,021 |
February 23, 2021
|
https://www.sciencedaily.com/releases/2021/02/210223110718.htm
|
Low-level jets create winds of change for turbines
|
As one of the leading sources of clean and renewable energy, global wind power capacity has increased more than fivefold over the past decade, leading to larger turbines and pushing wind technology to its limits.
|
"These much larger turbines are operating in very different atmospheric layers than smaller turbines used 5-10 years ago," said Srinidhi Gadde, one of the authors of a paper in the Low-level jets, which are maxima in wind velocity in the lower atmosphere, are one cause for concern with growing turbines. These strong, energetic wind flows can either have desirable or detrimental effects on the turbines, depending on how high the wind flows are in relation to the turbines."A simple way to think about LLJs is to visualize them as high-velocity 'rivers' or 'streams' of wind within the atmosphere," Gadde said.In their simulation of a wind farm with a 4-by-10 grid of turbines, Gadde and co-author Richard Stevens considered three different scenarios in which the LLJs were above, below, and in the middle of the turbine rotors.When the jets and the turbines were at the same height, the researchers found the front rows blocked wind access downstream, causing a reduction in power production in each successive row. Relative to this equal height scenario, a larger downstream energy capture was observed in both other cases, though by different mechanisms.For high jets, the turbulence generated in the wakes of the turbines pulls the wind from the upper atmosphere down toward the turbines in a process called downward vertical kinetic energy entrainment, leading to large amounts of power production. More surprisingly, when the jets are low, the reverse process occurs. High-velocity wind from the LLJ is pushed upward into the turbine, a previously unknown phenomenon, which the authors termed upward vertical kinetic energy entrainment.Gadde said he looks forward to applying this work to drive innovation and functionality to meet future power demands, which will require an even deeper understanding of events like LLJs and additional observations of these phenomena."As one of the leading renewable energy technologies, wind energy is expected to deliver major contributions to the expected growth in renewable energy production in the coming decades," he said.
|
Environment
| 2,021 |
February 23, 2021
|
https://www.sciencedaily.com/releases/2021/02/210223110411.htm
|
Climate impacts drive east-west divide in forest seed production
|
Younger, smaller trees that comprise much of North America's eastern forests have increased their seed production under climate change, but older, larger trees that dominate forests in much of the West have been less responsive, a new Duke University-led study finds.
|
Declines in these trees' seed production, or fecundity, could limit western forests' ability to regenerate following the large-scale diebacks linked to rising temperatures and intensifying droughts that are now occurring in many states and provinces. This continental divide, reported for the first time in the new study, "could dramatically alter the composition and structure of 21st century North American forests," said James S. Clark, Nicholas Distinguished Professor of Environmental Science at Duke, who led the research. Knowing the contrasting responses occur -- and understanding why they happen -- will help scientists more accurately predict future changes to North American forests and develop conservation and management strategies to mitigate the changes, he said. Researchers from 48 institutions collaborated with Clark on the peer-reviewed study, which appears Feb. 23. Fecundity is a measure of trees' capacity to regenerate after diebacks and other large-scale disturbances by dispersing seeds to habitats where their odds of future survival are more favorable. It's an essential factor for determining future forest responses to climate change, but like many ecological processes it's noisy, highly variable and incredibly hard to estimate. Fecundity changes over time, based on changes in a tree's size, growth rate or access to light, water and other resources, and is driven by two indirect climate impacts -- the effects of growth that depend on climate, and the effects of climate that depend on tree size -- that currently aren't accounted for in the models used to predict future change. "It was the only major demographic process driving forest response to climate change that we lacked field-based estimates on," Clark said. To address this problem, he devised new statistical software that allowed him to synthesize decades of raw data on size, growth, canopy spread, and access to resources for nearly 100,000 individual trees at long-term research sites and experimental forests across North America. The unfiltered raw data revealed what previous meta-analyses based on averaged measurements had missed: At the continental scale, fecundity increases as a tree grows larger, up to a point. And then it begins to decline. "This explains the East-West divide. Most trees in the East are young, growing fast and entering a size class where fecundity increases, so any indirect impact from climate that spurs their growth also increases their seed production," Clark said. "We see the opposite happening with the older, larger trees in the West. There are small and large trees in both regions, of course, but the regions differ enough in their size structure to respond in different ways." "Now that we understand, in aggregate, how this all works, the next step is to apply it to individual species or stands and incorporate it into the models we use to predict future forest changes," he said. The data used in the study came from trees in the Mast Inference and Prediction (MASTIF) monitoring network, which includes more than 500 long-term field research sites nationwide, including plots that are also part of the National Ecological Observation Network (NEON).
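The East-West contrast rests on fecundity rising with tree size up to a peak and then declining. The sketch below encodes one hypothetical hump-shaped fecundity curve and evaluates it for a younger and an older tree; the functional form and every number are illustrative assumptions, not the fitted relationship from the MASTIF data.

```python
# Hypothetical hump-shaped fecundity curve (illustration only, not the fitted model):
# seed production rises with tree diameter up to a peak, then declines.
import math

def fecundity(dbh_cm, peak_dbh=60.0, width=30.0, max_seeds=50000.0):
    """Seeds per year as a Gaussian-shaped function of diameter at breast height."""
    return max_seeds * math.exp(-((dbh_cm - peak_dbh) / width) ** 2)

young_eastern_tree = 35.0    # cm DBH, still on the rising limb of the curve
old_western_tree = 110.0     # cm DBH, past the peak and declining

for label, dbh in [("young (rising limb)", young_eastern_tree),
                   ("old (declining limb)", old_western_tree)]:
    print(f"{label:22s} DBH {dbh:5.1f} cm -> ~{fecundity(dbh):8.0f} seeds/yr")
```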
|
Environment
| 2,021 |
February 23, 2021
|
https://www.sciencedaily.com/releases/2021/02/210223110729.htm
|
Researchers challenge the Conservation Reserve Program status quo to mitigate fossil fuels
|
Researchers at the Center for Advanced Bioenergy and Bioproducts Innovation (CABBI) found that transitioning land enrolled in the Conservation Reserve Program (CRP) to bioenergy agriculture can be advantageous for American landowners, the government, and the environment.
|
Land enrolled in the CRP cannot currently be used for bioenergy crop production, wherein high-yielding plants (like miscanthus and switchgrass) are harvested for conversion into marketable bioproducts that displace fossil fuel- and coal-based energy. Established by the U.S. Department of Agriculture in 1985, the CRP incentivizes landowners to retire environmentally degraded cropland, exchanging agricultural productivity for native habitats and accepting annual government payments in return. As the world warms and its population explosively expands, global demand for food production is at odds with the decreased agricultural productivity threatened by extreme climate conditions. Therefore, allocating CRP land for high-yielding energy biomass might eliminate the need for bioenergy crops and food crops to vie for space. A team led by CABBI Sustainability Theme Leader Madhu Khanna and Ph.D. student Luoye Chen developed an integrated modeling approach to assess the viability of transitioning CRP land in the eastern U.S. to perennial bioenergy crops, and their paper has now been published. "As proponents of a safer, more sustainable bioeconomy, we must prioritize displacing fossil fuels," said Khanna, who is also Acting Director of the Institute for Sustainability, Energy, and Environment (iSEE) at the University of Illinois Urbana-Champaign. "As scientists, it is our responsibility to take a thoughtful, innovative approach to mitigating greenhouse gases in a way that will prove beneficial in the long term." "The transportation and electricity sectors are looking to expand bioenergy production, and it is imperative that the agricultural sector do the same. This necessitates a program wherein bioenergy cropland and food cropland coexist rather than compete." The CABBI team takes an integrated approach to weighing the costs and benefits of swapping the CRP status quo -- uncultivated acreage -- for bioenergy, combining the Biofuel and Environmental Policy Analysis Model (BEPAM) with the biogeochemical model DayCent (Daily Time Step Version of the Century Model). BEPAM assesses net profitability, answering the key question: What precise economic conditions will incentivize CRP landowners to make the switch to bioenergy cropland? An environmental counterpoint to BEPAM, DayCent simulates the full ecosystem effects of the transition on a given county, providing a "sneak peek" into the future and shedding light on how this land-use change might affect factors like crop yield, nutrient exchange, and soil carbon sequestration. A key component of this study aggregates data from both models to formulate a greenhouse gas (GHG) life-cycle assessment, which calculates the total GHGs mitigated by the process as a whole -- from the physical act of planting to the introduction of clean energy into the bioeconomy. "The full life-cycle assessment really is key to understanding the big-picture results of our research," Chen said.
"We take everything into account -- the process of actually growing and harvesting the feedstocks, the carbon sequestered in the soil, and the fact that ultimately, we will be displacing fossil fuels with biofuels, and coal-based electricity with bioelectricity."Keeping that end result in mind anchors everything else to the ultimate goal of a net positive environmental impact."The team concluded that converting 3.4 million hectares of CRP land to bioenergy from 2016 to 2030 is economically and environmentally viable -- under certain conditions.Economically speaking, all systems are "go" if the market price of biomass is high and the government continues to distribute appropriate CRP land rental payments. These factors can ideally function as counterweights: If biomass prices decrease, substantial land rental payments may alleviate financial stress from farmers and encourage their continued commitment to bioenergy; alternatively, soaring biomass prices would rationalize relaxed government support, saving taxpayers money. The team identified two ideal pairings: 1) landowners receive 100 percent of their original government payments and sell biomass at $75/metric ton; or 2) landowners receive 75 percent of their original payment and sell biomass for $100/metric ton. Ideally, both parties benefit.Converting CRP land to bioenergy can also result in substantial GHG savings. Previous studies show that a large "soil carbon debt" is liable to accrue at the outset of the venture, during the planting years of miscanthus and switchgrass. However, taking into account the full life-cycle assessment mentioned above, the research team determined that the long-term effects of displacing fossil fuel- and coal-based energy with bioproducts would more than make up for this temporary loss.Considering landowner income from biomass sales, savings in government payments to maintain existing CRP enrollment, and the monetized benefits of GHG mitigation through displacing fossil fuels (quantified using the "social cost of carbon"), the total net value of converting CRP land to bioenergy could be as high as $28 billion to $125 billion over the 2016-2030 period.
|
Environment
| 2,021 |
February 22, 2021
|
https://www.sciencedaily.com/releases/2021/02/210222164144.htm
|
Graphene Oxide membranes could reduce paper industry energy costs
|
The U.S. pulp and paper industry uses large quantities of water to produce cellulose pulp from trees. The water leaving the pulping process contains a number of organic byproducts and inorganic chemicals. To reuse the water and the chemicals, paper mills rely on steam-fed evaporators that boil up the water and separate it from the chemicals.
|
Water separation by evaporators is effective but uses large amounts of energy. That's significant given that the United States currently is the world's second-largest producer of paper and paperboard. The country's approximately 100 paper mills are estimated to use about 0.2 quads (a quad is a quadrillion BTUs) of energy per year for water recycling, making it one of the most energy-intensive chemical processes. All industrial energy consumption in the United States in 2019 totaled 26.4 quads, according to Lawrence Livermore National Laboratory. An alternative is to deploy energy-efficient filtration membranes to recycle pulping wastewater. But conventional polymer membranes -- commercially available for the past several decades -- cannot withstand operation in the harsh conditions and high chemical concentrations found in pulping wastewater and many other industrial applications. Georgia Institute of Technology researchers have found a method to engineer membranes made from graphene oxide (GO), a chemically resistant material based on carbon, so they can work effectively in industrial applications. "GO has remarkable characteristics that allow water to get through it much faster than through conventional membranes," said Sankar Nair, professor, Simmons Faculty Fellow, and associate chair for Industry Outreach in the Georgia Tech School of Chemical and Biomolecular Engineering. "But a longstanding question has been how to make GO membranes work in realistic conditions with high chemical concentrations so that they could become industrially relevant." Using new fabrication techniques, the researchers can control the microstructure of GO membranes in a way that allows them to continue filtering out water effectively even at higher chemical concentrations. The research, supported by the U.S. Department of Energy-RAPID Institute, an industrial consortium of forest product companies, and Georgia Tech's Renewable Bioproducts Institute, was reported recently. Nair, his colleagues Meisha Shofner and Scott Sinquefield, and their research team began this work five years ago. They knew that GO membranes had long been recognized for their great potential in desalination, but only in a lab setting. "No one had credibly demonstrated that these membranes can perform in realistic industrial water streams and operating conditions," Nair said. "New types of GO structures were needed that displayed high filtration performance and mechanical stability while retaining the excellent chemical stability associated with GO materials." To create such new structures, the team conceived the idea of sandwiching large aromatic dye molecules in between GO sheets. Researchers Zhongzhen Wang, Chen Ma, and Chunyan Xu found that these molecules strongly bound themselves to the GO sheets in multiple ways, including stacking one molecule on another. The result was the creation of "gallery" spaces between the GO sheets, with the dye molecules acting as "pillars." Water molecules easily filter through the narrow spaces between the pillars, while chemicals present in the water are selectively blocked based on their size and shape.
The researchers could tune the membrane microstructure vertically and laterally, allowing them to control both the height of the gallery and the amount of space between the pillars.The team then tested the GO nanofiltration membranes with multiple water streams containing dissolved chemicals and showed the capability of the membranes to reject chemicals by size and shape, even at high concentrations. Ultimately, they scaled up their new GO membranes to sheets that are up to 4 feet in length and demonstrated their operation for more than 750 hours in a real feed stream derived from a paper mill.Nair expressed excitement for the potential of GO membrane nanofiltration to generate cost savings in paper mill energy usage, which could improve the industry's sustainability. "These membranes can save the paper industry more than 30% in energy costs of water separation," he said.This work is supported by the U.S. Department of Energy (DOE) Rapid Advancement in Process Intensification Deployment (RAPID) Institute (#DE-EE007888-5-5), an industrial consortium comprising Georgia-Pacific, International Paper, SAPPI, and WestRock, and the Georgia Tech Renewable Bioproducts Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsoring organizations.
|
Environment
| 2,021 |
February 22, 2021
|
https://www.sciencedaily.com/releases/2021/02/210222141452.htm
|
Traditional hydrologic models may misidentify snow as rain, new citizen science data shows
|
Normally, we think of the freezing point of water as 0°C or 32°F -- but in the world of weather forecasting and hydrologic prediction, that isn't always the case. In the Lake Tahoe region of the Sierra Nevada, the shift from snow to rain during winter storms may actually occur at temperatures closer to 39.5°F, according to new research from the Desert Research Institute (DRI), Lynker Technologies, and citizen scientists from the Tahoe Rain or Snow project
|
The new paper was published this month. "Scientists use a temperature threshold to determine where and when a storm will transition from rain to snow, but if that threshold is off, it can affect our predictions of flooding, snow accumulation, and even avalanche formation," said Keith Jennings, Ph.D., Water Resources Scientist at Lynker Technologies and one of the lead authors on the study. Previous studies have found that the thresholds used are particularly problematic in the Sierra Nevada, where a significant proportion of winter precipitation falls near 32°F. When the temperature is near freezing, weather forecasts and hydrologic models have difficulty correctly predicting whether it will be raining or snowing. Tahoe Rain or Snow was launched in 2019 to take on the challenge of enhancing the prediction of snow accumulation and rainfall that may lead to flooding by making real-time observations of winter weather. The team is composed of two scientists, one education specialist, and about 200 volunteer weather spotters from the Lake Tahoe and western slope regions of the Sierra Nevada and Truckee Meadows. "Tahoe Rain or Snow harnesses the power of hundreds of local volunteers. The real-time observations that they share with scientists add an incredible amount of value to the study of hydrology and clarify crucial gaps left by weather models," said Meghan Collins, MS, Education Program Manager for DRI and another lead author on the paper. In 2020, these citizen scientists submitted over 1,000 timestamped, geotagged observations of precipitation phase through the Citizen Science Tahoe mobile phone app. Ground-based observations submitted by the Tahoe Rain or Snow team in 2020 showed that a much warmer temperature threshold of 39.5°F for splitting precipitation into rain and snow may be more accurate for the mountain region. In contrast, a 32°F rain-snow temperature threshold would have vastly overpredicted rainfall, leading to pronounced underestimates of snow accumulation. Such model errors can lead to issues in water resources management, travel planning, and avalanche risk prediction. "Tahoe Rain or Snow citizen scientists across our region open a door to improve our understanding of winter storms," said Monica Arienzo, Ph.D., Assistant Research Professor of Hydrology at DRI and another lead author on the paper. "Growing our team of volunteer scientists is important given that climate change is causing the proportion of precipitation falling as snow to decrease, and they help enhance the predictions of precipitation that we rely on in the Sierra Nevada and Truckee Meadows."
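The practical effect of moving the rain-snow threshold is easy to see with a toy partition of near-freezing observations. The sketch below classifies a set of made-up storm temperatures under a 32°F and a 39.5°F threshold; the observations are invented for illustration and are not Tahoe Rain or Snow data.

```python
# Toy illustration: how the choice of rain-snow temperature threshold changes the
# predicted snow fraction. The observations below are made up, not project data.
def snow_fraction(temps_f, threshold_f):
    """Fraction of observations classified as snow (temperature at or below threshold)."""
    return sum(t <= threshold_f for t in temps_f) / len(temps_f)

# Hypothetical near-freezing storm temperatures (degrees Fahrenheit)
obs_f = [30.1, 31.5, 32.4, 33.0, 34.2, 35.8, 36.5, 37.1, 38.0, 39.0, 40.2, 41.5]

for threshold in (32.0, 39.5):
    print(f"threshold {threshold:4.1f} F -> snow fraction {snow_fraction(obs_f, threshold):.2f}")
```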
|
Environment
| 2,021 |
February 22, 2021
|
https://www.sciencedaily.com/releases/2021/02/210222124659.htm
|
Using human rights laws may be most effective way of harnessing international legislation to protect the Amazon
|
Using laws governing human rights may be the best way of harnessing international legislation and tribunals to protect the Amazon, a new study shows.
|
Safeguarding the rainforest is a critical priority because of the ecosystem's planetary importance. Recent increases in deforestation and fires in the region have made this even more urgent. The experts behind the new research, Dr Justine Bendel from the University of Exeter and Professor Tim Stephens from the University of Sydney, hope it will be used as a comprehensive guide for those working to protect the Amazon. It assesses the potential for litigation in international courts and tribunals, examining the possible claims, the risks associated with each of these and which are more likely to be successful. The protection of the Amazon poses particular challenges for international law because global ways of protecting the environment exist alongside the territorial jurisdiction of the Amazon nations, which have permanent sovereignty over natural resources. The hurdles in jurisdiction and evidence often prevent a clear-cut judgment and prevent Amazon states from being compelled to take the urgent and direct measures needed to bring the ecosystem back from the brink. A number of global legal frameworks have been invoked to advance Amazon conservation. These range from the Convention on Biological Diversity (CBD) to the more recent Reducing Emissions from Deforestation and Forest Degradation (REDD+) mechanism. The study shows how they have had limited success, and gains are now being lost. Domestic proceedings are more likely to be perceived as an unwarranted international intervention in a matter primarily of domestic concern. The study argues for close engagement by the international community with Brazil, and the other Amazon States, to implement internationally agreed conservation outcomes. The rights of those living in the Amazon, and in particular of the indigenous communities, are protected by the American Convention on Human Rights (ACHR). Under the ACHR, cases may be referred to the Inter-American Court of Human Rights (IACtHR) by the Inter-American Commission on Human Rights (IAComHR) or by Amazon States, all of which, except Guyana and Peru, are parties to the ACHR. Indigenous communities in South America have used the IACtHR when governments and firms have wanted to gain access to natural resources located on indigenous lands. The study warns this can be a lengthy process, and doesn't always effectively protect land, as implementation of territorial claims often raises complex issues about rights. Previous cases have established that indigenous peoples have a legal right to a healthy environment and food and water. The study shows how this opens up the possibility of reparations for environmental damage independently from ownership of the land. The IACtHR has recently confirmed that large projects such as the construction of substantial infrastructure or energy-related projects with potential transboundary impacts may be open to challenge through human rights litigation.
Such a transboundary element would also potentially allow claims based on climate change, and therefore the deforestation of the Amazon. Dr Bendel said: "There are multiple opportunities for human rights litigation to take on the issue of deforestation of the Amazon, and the strengthening of both indigenous rights and the right to a healthy environment make cases to combat deforestation more likely to succeed." Recent cases show the IACtHR takes indigenous rights seriously, but compliance with decisions depends on political will, which may be lacking in the Amazonian region, in particular in Brazil. "We hope organisations working to protect the Amazon will be able to use this study when they plan how to use international courts and tribunals. We have shown the options available, and the limitations of using these organisations and how to overcome them."
|
Environment
| 2,021 |
February 22, 2021
|
https://www.sciencedaily.com/releases/2021/02/210222124532.htm
|
How two radically different communities coexist beneath canopies of California's iconic kelp forests
|
Walk along the beach after a winter storm and you'll see a shore littered with wracks of giant kelp, some 30 to 40 feet long -- evidence of the storm's impact on coastal kelp forests.
|
Less apparent to the casual beachgoer is what happens to the submarine forests after the storm's fury dies down. This is precisely the topic of a new study led by Raine Detmer, a graduate student at UC Santa Barbara. She developed a mathematical model describing the effects of severe storms on kelp forest ecosystems, particularly the seafloor, or benthic, communities. The research has now been published. Giant kelp forests are a wonder of the underwater world. They share many similarities with terrestrial forests: lush understories, diverse fauna and verdant canopies that stretch skyward toward the sunlight. However, they also have features completely foreign to any woodland. Giant kelp is among the fastest growing organisms on Earth -- able to grow up to two feet per day under ideal conditions -- with a lifecycle much shorter than that of any tree. Also, unlike trees, the presence of the giant algae can change rapidly: Storms can uproot entire kelp forests in February that grow back by September. These factors make for a forest that is always in flux. "If you have a really dynamic foundation species, like giant kelp, this can cause fluctuations in environmental conditions," said Detmer, a first-year doctoral student in the lab of Holly Moeller, an assistant professor in the Department of Ecology, Evolution and Marine Biology. Detmer, who conducted the research her senior year at UC Santa Barbara, sought to determine how storms affect the kelp forest floor, which hosts a diverse community of invertebrates and understory macroalgae. To this end, she and her coauthors developed a mathematical model of the ecosystem's intricate relationships. It accounts for factors like the growth rate and mortality of algae and invertebrates, the life stages of giant kelp and the amount of light reaching the seafloor. "This model is hard to describe," said co-author Moeller, "because it's this chimera built from 100 years of different mathematical ecology models that Raine wove together in this creative way that you need when you have to represent things as complex and variable as a kelp forest." The model incorporates two very different assemblages of species -- understory algae and sessile invertebrates -- and the complicated lifecycle of kelp itself. To validate it, Detmer relied on data from the Santa Barbara Coastal Long Term Ecological Research project (SBC LTER), a National Science Foundation research site managed by UCSB's Marine Science Institute. "You can't build models like this without 20 years of data," Moeller said. "And you can't find 20 years of data, on kelp forests anyway, anywhere but here, at the SBC LTER." A driving force behind the make-up of the benthic community is the competition between sessile invertebrates, like sponges and anemones, and understory algae for space. With enough sunlight, the algae can spread and grow more quickly than the invertebrates, but under the shade of the giant kelp, the invertebrates win out. Simulating the effects of storms on the ecosystem revealed that, by removing the giant kelp, a storm can provide a competitive advantage to the algae over the invertebrates. What's more, if the storm also scours the sea bottom, it exposes more surface for the two factions.
And with the seafloor now bathed in sunlight, the algae can take advantage of the real estate more quickly than their competition.What's fascinating is that the ecosystem doesn't simply remain a meadow, as the algae's time in the sun is only temporary. The model showed that as the vigorous giant kelp again begins to reach for the surface, it shifts the competitive advantage back toward the invertebrates.In this way, competing groups of organisms with different resource requirements can coexist in these systems, with each faction dominating at a different time. Moeller compares the situation to contestants in a triathlon. If one athlete is a great runner but a poor swimmer, and another is a great swimmer but a poor runner, both competitors will be able to hold their own overall.The results highlight the effect of a dynamic foundation species, like giant kelp, in shaping an equally dynamic ecosystem. In contrast, ecosystems with more stable foundation species, like redwoods for instance, don't exhibit this kind of behavior. "Visitors to Muir Woods expect to see a redwood forest regardless of the time of year," Moeller said. "But visitors to a kelp forest could find sparse kelp and a carpet of seafloor macroalgae on one dive, and return to see a dense kelp canopy just a few months later."The findings also support the intermediate disturbance hypothesis, Detmer explained, which contends that there is a sweet spot in terms of disturbance frequency and intensity that will allow multiple different factions to coexist in an ecosystem."The intermediate disturbance hypothesis is like the ecologist's version of the Goldilocks story," said Moeller, "where there is a frequency of storm disturbance that is just right to produce these high diversity communities." In other words, different groups of organisms flourish under different disturbance regimes: Frequent storms favor the light-loving macroalgae, while the invertebrates do better under more stable conditions, when the shade of the giant kelp keeps the algae in check.Researchers have previously investigated the effects of storms on benthic communities at the SBC LTER. But Detmer's model adds predictive power to the insights gleaned from those past experiments and data. "The value that the mathematical models have is they allow you not just to interpolate, but also project and extrapolate," Moeller said. "Once you have a mathematical model that performs as beautifully as Raine's does, then you can use it to start making those projections."As the effects of climate change become more severe, this predictive power will prove critical to assessing kelp forest health and developing stewardship strategies. Scientists predict storm frequency and intensity will increase, which may hamper the ability of giant kelp to recover from these events. We may see macroalgae-dominated states more often, accompanied by a decline in the numbers of sessile invertebrates. This is important because these animals are an important food source for predators, from sharks and fish to otters and sea stars. A decrease in these prey species could lead to a reduction in the numbers and diversity of predator species."Just the ability to quantify what the sensitivity of these systems are helps us," Moeller said. "It can give us a sense of where their breaking points lie."
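The alternation described above, with algae surging after a storm and invertebrates recovering as the canopy regrows, can be caricatured with a far simpler model than the one in the study. The sketch below tracks two competitors for seafloor space under a kelp canopy that periodic storms strip away; every equation and parameter is an illustrative assumption, not the published kelp-forest model.

```python
# Deliberately simplified caricature of understory algae vs. sessile invertebrates
# competing for space under a kelp canopy, with periodic storms removing the canopy.
import numpy as np

def simulate(years=20, dt=0.01, storm_every=4.0):
    algae, inverts, kelp = 0.2, 0.2, 1.0
    t = 0.0
    for _ in range(int(years / dt)):
        light = 1.0 - 0.8 * kelp                    # canopy shades the seafloor
        free_space = max(0.0, 1.0 - algae - inverts)
        algae += dt * (1.5 * light * algae * free_space - 0.3 * algae)
        inverts += dt * (0.5 * inverts * free_space - 0.1 * inverts)
        kelp = min(1.0, kelp + dt * 0.8 * kelp * (1.0 - kelp))   # canopy regrows
        t += dt
        if storm_every and t % storm_every < dt:    # storm strips the canopy
            kelp = 0.05
    return algae, inverts

a, i = simulate()
print(f"After 20 years with storms every 4 years: algae {a:.2f}, invertebrates {i:.2f}")
```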
|
Environment
| 2,021 |
February 22, 2021
|
https://www.sciencedaily.com/releases/2021/02/210222082615.htm
|
Future ocean warming boosts tropical rainfall extremes
|
The El Niño-Southern Oscillation (ENSO) is the most energetic naturally occurring year-to-year variation of ocean temperature and rainfall on our planet. The irregular swings between warm and wet "El Niño" conditions in the equatorial Pacific and the cold and dry "La Niña" state influence weather conditions worldwide, with impacts on ecosystems, agriculture and economies. Climate models predict that the difference between El Niño- and La Niña-related tropical rainfall will increase over the next 80 years, even though the temperature difference between El Niño and La Niña may change only very little in response to global warming. A new study published in
|
Using the latest crop of climate models, researchers from the IBS Center for Climate Physics at Pusan National University, the Korea Polar Research Institute, the University of Hawai'i at Manoa, and Environment and Climate Change Canada, worked together to unravel the mechanisms involved. "All climate models show a pronounced intensification of year-to-year tropical rainfall fluctuations in response to global warming," says lead author Dr. Kyung-Sook Yun from the IBS Center for Climate Physics. "Interestingly, the year-to-year changes in ocean temperature do not show such a clear signal. Our study therefore focuses on the mechanisms that link future ocean warming to extreme rainfall in the tropical Pacific," she goes on to say. The research team found that the key to understanding this important climatic feature lies in the relationship between tropical ocean surface temperature and rainfall. There are two important aspects to consider: 1) the ocean surface temperature threshold for rainfall occurrence, and 2) the rainfall response to ocean surface temperature change, referred to as rainfall sensitivity. "In the tropics, heavy rainfall is typically associated with thunderstorms and deep clouds shaped like anvils. These only form once the ocean surface is warmer than approximately 27.5 degrees Celsius or 81 degrees Fahrenheit in our current climate," says co-author Prof. Malte Stuecker from the University of Hawai'i at Manoa. This ocean surface temperature threshold for intense tropical rainfall shifts towards a higher value in a warmer world and does not contribute directly to an increase in rainfall variability. "However, a warmer atmosphere can hold more moisture, which means that when it rains, rainfall will be more intense. Moreover, enhanced warming of the equatorial oceans leads to upward atmospheric motion on the equator. Rising air sucks in moist air from the off-equatorial regions, which can further increase precipitation, in case other meteorological conditions for a rain event are met," says co-lead author Prof. June-Yi Lee from the IBS Center for Climate Physics. This increase in rainfall sensitivity is the key explanation for why there will be more extreme ENSO-related swings in rainfall in a warmer world.
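The threshold-plus-sensitivity mechanism described above can be illustrated with a toy calculation. In the sketch below, the same year-to-year sea-surface-temperature swings are fed through a simple threshold rule for today's climate and for a uniformly warmer one with a higher threshold and a larger rainfall sensitivity; all numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def rainfall(sst, threshold, sensitivity):
    """Toy rule: heavy tropical rain only above an SST threshold,
    with intensity proportional to the excess (illustrative units)."""
    return sensitivity * np.maximum(sst - threshold, 0.0)

# Identical year-to-year SST variability (ENSO-like swings) in both climates
enso_anomaly = rng.normal(0.0, 1.0, 10_000)            # degrees C
sst_today = 27.5 + enso_anomaly                        # mean near today's threshold
sst_future = (27.5 + 2.0) + enso_anomaly               # uniform 2 C ocean warming

p_today = rainfall(sst_today, threshold=27.5, sensitivity=1.0)
p_future = rainfall(sst_future, threshold=29.0, sensitivity=1.4)  # higher threshold,
                                                                  # larger sensitivity

print("SST std (today vs future):      %.2f vs %.2f" % (sst_today.std(), sst_future.std()))
print("rainfall std (today vs future): %.2f vs %.2f" % (p_today.std(), p_future.std()))
```

Even with identical SST variability in the two cases, the rainfall variability in the "future" case comes out larger, which is the qualitative point the authors make.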
|
Environment
| 2,021 |
February 22, 2021
|
https://www.sciencedaily.com/releases/2021/02/210222082610.htm
|
New study on the forecasting of extreme rainfall events in Mediterranean countries
|
Extreme rainfall has devastating consequences for societies and economies. Locations around the Mediterranean are frequently affected by such events, leading to landslides and floods. "It is, however, extremely challenging to forecast many days in advance when and where exactly heavy rainfall will occur. Thus, researchers strive to develop new tools to better predict extreme weather phenomena allowing for early warnings and adequate mitigation strategies," explains first author Nikolaos Mastrantonas, who has carried out the study as a PhD student within the EU-funded research project CAFE.
|
The researchers analysed weather data from 1979 to today, grouping the daily weather into nine patterns of distinct atmospheric characteristics over the Mediterranean. The study shows that there is a strong relation between these nine patterns and the location of the extreme weather event. "We can now use the data to come up with a model that will help to better predict extreme rain in the Mediterranean," says Prof. Jörg Matschullat of TU Bergakademie Freiberg. The geoecologist supervises Nikolaos Mastrantonas' PhD and adds: "When it comes to climate, the Mediterranean Sea is a particularly interesting region as it is surrounded by large continents and mountain ranges. The regional climate of the area is also dependent on large-scale patterns over the Atlantic Ocean, the Balkans and the Black Sea."According to the study, the nine patterns are associated with unstable low-pressure systems such as cut off lows and troughs, or with stable anticyclonic conditions, such as ridges, extending over hundreds of kilometres. "Such conditions lead to extreme precipitation events at different subregions of the Mediterranean," says Nikolaos Mastrantonas. To name one example: A low-pressure system centred over the Bay of Biscay increases the probability of extreme rainfall over mountainous and coastal regions in Spain, Morocco, Italy, and even in the West Balkans more than sixfold.The team also found that mountains create a strong link between distant areas. In Central Western Italy, for example, three in every ten extremes happen simultaneously with extremes over Montenegro and Croatia, although almost 500 kilometres lie between these two areas. "This is a result of the Apennines that block a substantial part of the air flow, and frequently force the moisture to precipitate in the western part of Italy, and on the same day over Croatia," the young researcher explains.According to the scientists, current weather forecasting models can already provide reliable information about large-scale weather variability up to three weeks in advance, a timeframe known as sub-seasonal scale. "As the next step of this work, we will quantify how reliable the state-of-the-art weather forecasting models are in predicting the identified nine patterns. Our intention is to incorporate such information into new forecasting products informing about extreme weather over the Mediterranean at sub-seasonal scales," Prof Jörg Matschullat clarifies.
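Deriving a small set of recurring large-scale patterns from daily weather fields is commonly done by clustering daily anomaly maps, for example with k-means; the sketch below shows that generic workflow on synthetic data and is an assumption about the general approach, not a reproduction of the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Stand-in for ~40 years of daily large-scale anomaly maps (e.g. 500-hPa geopotential)
# over the Mediterranean, flattened to (n_days, n_gridpoints). Real data would come
# from a reanalysis product.
n_days, n_grid = 5_000, 200
daily_maps = rng.normal(size=(n_days, n_grid))

# Group the days into nine recurring patterns ("weather regimes")
kmeans = KMeans(n_clusters=9, n_init=10, random_state=0).fit(daily_maps)
labels = kmeans.labels_                        # pattern assigned to each day

# Conditional statistics: how often local extreme rain co-occurs with each pattern
extreme_rain_day = rng.random(n_days) < 0.02   # placeholder extreme-rain indicator
for k in range(9):
    in_pattern = labels == k
    rate = extreme_rain_day[in_pattern].mean()
    print(f"pattern {k}: {int(in_pattern.sum()):5d} days, extreme-rain frequency {rate:.3f}")
```

With real reanalysis fields and station rainfall in place of the placeholders, the per-pattern frequencies are the kind of conditional statistic that supports statements such as "a Bay of Biscay low raises the probability of extremes more than sixfold."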
|
Environment
| 2,021 |
February 22, 2021
|
https://www.sciencedaily.com/releases/2021/02/210222095049.htm
|
New method to track genetic diversity of salmon, trout
|
Scientists at Oregon State University and the U.S. Forest Service have demonstrated that DNA extracted from water samples from rivers across Oregon and Northern California can be used to estimate genetic diversity of Pacific salmon and trout.
|
The findings, just published in the journal "There has been a dearth of this kind of data across the Northwest," said Kevin Weitemier, a postdoctoral fellow at Oregon State and lead author of the paper. "This allows us to get a quick snapshot of multiple populations and species all at once."In addition to demonstrating that environmental DNA, or eDNA, can be used to measure genetic diversity, the researchers also made unexpected discoveries about the history of these species, including a connection that links watersheds in northern and southern Oregon.eDNA allows scientists to test for the presence or absence of an organism in an environment, such as soil or water. It's a safe, rapid and cost-effective method that alleviates the need for scientists to capture individual organisms to collect DNA.The research by the Oregon State and USDA Forest Service Pacific Northwest Research Station team is unique in that it used eDNA samples to determine genetic diversity -- not just the presence or absence -- of four species: coho salmon, chinook salmon, rainbow trout/steelhead and coastal cutthroat trout. The team previously developed the techniques that allowed it to capture this robust set of genetic sequences across a large landscape."Understanding the standing genetic diversity of any species of concern is really important to maintain the population integrity," Weitemier said. "We need to make sure they harbor enough diversity to maintain these populations on their own."He said it's particularly important for salmonids, the family of fish that includes salmon and trout, because their genetic diversity is impacted by population declines and hatchery-raised salmonids that are released into rivers.In most of the watersheds the researchers studied, steelhead and chinook and coho salmon have undergone major declines, which has led to them being listed as threatened species. Populations of coastal cutthroat trout have declined in some places but not broadly enough to be listed.For this study, the researchers examined water samples collected from 16 sites in western Oregon and northwestern California in 2017. They sampled between one and five creeks in five watersheds: the Deschutes, Willamette, Umpqua, Rogue and Klamath. They also sampled from three creeks that feed rivers along the Oregon Coast.They found genetic similarities between rainbow trout in the northern Deschutes watershed and the southern Klamath watershed. This is believed to be the first genetic corroboration of previously hypothesized hydrological connection between the two now disconnected watersheds.The scientists saw unexpected high levels of genetic diversity in coho salmon, despite the species only being present in three rivers, the Coquille, Nestucca and Klamath, that they sampled.They showed the disproportionate importance of smaller coastal rivers -- they analyzed samples from creeks that feed the Nestucca, Alsea and Coquille rivers -- on genetic diversity, particularly of coastal cutthroat trout."These rivers are kind of punching above their weight in terms of the amount of diversity compared to the population and the size of the watershed," Weitemier said.The researchers also found further evidence of the Umpqua watershed's unique diversity. They found unique genetic variants of coastal cutthroat trout, and to a lesser extent rainbow trout. 
The Umpqua is also the sole home to two smaller fish, the Umpqua chub and Umpqua pikeminnow."The diversity in the Umpqua just makes it a special place in Oregon and this paper is another piece of information to confirm that," said Tiffany Garcia, an author of the paper and an associate professor in Oregon State's Department of Fisheries and Wildlife.The researchers plan to continue this research, working with water samples collected by the USDA Aquatic and Riparian Effectiveness and Monitoring Program. They regularly collect water from 200 sites, including the 16 this study focused on, across Oregon and California."We're working with them to monitor and understand more about occupancy of various species beyond Pacific salmon and trout, to include other fishes, amphibians, mussels, macroinvertebrates, crayfishes and pathogens," said Brooke Penaluna, an author of the paper and a research fish biologist with the USDA's Pacific Northwest Research Station.In addition to Weitemier, Garcia and Penaluna, authors of the paper are Lucas J. Longway of Oregon State and Laura Hauck and Richard Cronn of the USDA.
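Genetic diversity estimates from eDNA ultimately rest on the haplotype (sequence-variant) frequencies recovered from the water samples. The sketch below computes one standard summary, Nei's haplotype diversity, from hypothetical haplotype counts; both the counts and the choice of metric are illustrative assumptions rather than values or methods taken from the paper.

```python
from collections import Counter

def haplotype_diversity(counts):
    """Nei's unbiased gene (haplotype) diversity: H = n/(n-1) * (1 - sum p_i^2)."""
    counts = list(counts)
    n = sum(counts)
    if n < 2:
        return 0.0
    freqs = [c / n for c in counts]
    return n / (n - 1) * (1.0 - sum(p * p for p in freqs))

# Hypothetical mitochondrial haplotypes recovered from eDNA reads at two creeks
site_a = Counter({"hap1": 40, "hap2": 35, "hap3": 20, "hap4": 5})  # even mix -> higher diversity
site_b = Counter({"hap1": 90, "hap2": 8, "hap3": 2})               # one dominant haplotype

for name, site in [("site A", site_a), ("site B", site_b)]:
    print(name, "haplotype diversity:", round(haplotype_diversity(site.values()), 3))
```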
|
Environment
| 2,021 |
February 21, 2021
|
https://www.sciencedaily.com/releases/2021/02/210221195714.htm
|
Pioneering research reveals gardens are secret powerhouse for pollinators
|
Home gardens are by far the biggest source of food for pollinating insects, including bees and wasps, in cities and towns, according to new research.
|
The study, led by the University of Bristol and published today in the Results showed three gardens generated daily on average around a teaspoon of Nature's ambrosia, the unique sugar-rich liquid found in flowers which pollinators drink for energy. While a teaspoon may not sound much to humans, it's the equivalent to more than a tonne to an adult human and enough to fuel thousands of flying bees. The more bees and fellow pollinators can fly, the greater diversity of flora and fauna will be maintained.Ecologist Nicholas Tew, lead author of the study, said: "Although the quantity and diversity of nectar has been measured in the countryside, this wasn't the case in urban areas, so we decided to investigate."We expected private gardens in towns and cities to be a plentiful source of nectar, but didn't anticipate the scale of production would be to such an overwhelming extent. Our findings highlight the pivotal role they play in supporting pollinators and promoting biodiversity in urban areas across the country."The research, carried out in partnership with the universities of Edinburgh and Reading and the Royal Horticultural Society, examined the nectar production in four major UK towns and cities: Bristol, Edinburgh, Leeds, and Reading. Nectar production was measured in nearly 200 species of plant by extracting nectar from more than 3,000 individual flowers. The extraction process involves using a fine glass tube. The sugar concentration of the nectar was quantified with a refractometer, a device which measures how much light refracts when passing through a solution."We found the nectar supply in urban landscapes is more diverse, in other words comes from more plant species, than in farmland and nature reserves, and this urban nectar supply is critically underpinned by private gardens," said Nicholas Tew, who is studying for a PhD in Ecology."Gardens are so important because they produce the most nectar per unit area of land and they cover the largest area of land in the cities we studied."Nearly a third (29 per cent) of the land in urban areas comprised domestic gardens, which is six times the area of parks, and 40 times the area of allotments."The research illustrates the huge role gardeners play in pollinator conservation, as without gardens there would be far less food for pollinators, which include bees, wasps, butterflies, moths, flies, and beetles in towns and cities. It is vital that new housing developments include gardens and also important for gardeners to try to make sure their gardens are as good as possible for pollinators," Nicholas Tew explained."Ways to do this include planting nectar-rich flowers, ensuring there is always something in flower from early spring to late autumn, mowing the lawn less often to let dandelions, clovers, daisies and other plant flowers flourish, avoiding spraying pesticides which can harm pollinators, and avoiding covering garden in paving, decking or artificial turf."Dr Stephanie Bird, an entomologist at the Royal Horticultural Society, which helped fund the research, said: "This research highlights the importance of gardens in supporting our pollinating insects and how gardeners can have a positive impact through their planting decisions. Gardens should not be seen in isolation -- instead they are a network of resources offering valuable habitats and provisions when maintained with pollinators in mind."
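Nectar sugar production is typically tallied from the nectar volume per flower, the refractometer reading, and the number of open flowers, then scaled up to the garden. The sketch below shows that bookkeeping with made-up plant numbers; the Brix-to-mass conversion uses a rough linear density approximation rather than the calibration tables a real survey would rely on.

```python
def sugar_mg_per_flower(nectar_volume_ul, brix_percent):
    """Approximate sugar mass (mg) in one flower's nectar.
    Brix is % sugar by mass; solution density is approximated linearly
    (illustrative -- real studies use published calibration tables)."""
    density_mg_per_ul = 1.0 + 0.0045 * brix_percent   # roughly 1.13 mg/uL at 30 Brix
    return nectar_volume_ul * density_mg_per_ul * (brix_percent / 100.0)

# Hypothetical garden: species, open flowers, nectar uL per flower per day, Brix %
plants = [
    ("lavender", 4000, 0.05, 35.0),
    ("foxglove",  300, 2.00, 25.0),
    ("clover",   5000, 0.08, 40.0),
]

total_mg = sum(n * sugar_mg_per_flower(v, b) for _, n, v, b in plants)
print(f"estimated nectar sugar production: {total_mg / 1000:.2f} g per garden per day")
```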
|
Environment
| 2,021 |
February 22, 2021
|
https://www.sciencedaily.com/releases/2021/02/210222192836.htm
|
Some open ocean waters teeming with an abundance of life
|
Since Charles Darwin's day, the abundance of life on coral reefs has been puzzling, given that most oceanic surface waters in the tropics are low in nutrients and unproductive.
|
But now research, led by Newcastle University and published in the journal The team found that these offshore resources contribute to more than 70% of reef predator diets, the rest being derived from reef-associated sources. Led by Dr Christina Skinner, now based at the Hong Kong University of Science and Technology, the researchers included collaborators from Woods Hole Oceanographic Institution (USA), Banyan Tree Marine Lab (Maldives) and the University of Bristol (UK). The team used advanced stable isotope techniques to show that four species of grouper near the top of the food web all rely on offshore resources; this didn't change between species and was the case on the outside of an atoll and also inside the lagoon, suggesting that the oceanic subsidy is system-wide. The scientists believe that this offshore energy may be entering the food web through lower-level plankton-feeding fish that the groupers are then feeding on. This is likely to be supported by inputs of nutrient-rich deep water, which are little understood. The findings help explain how coral reefs maintain high productivity in apparently nutrient-poor tropical settings, but also emphasise their susceptibility to future fluctuations of ocean productivity, which have been predicted in many climate-change models. Dr Skinner said: "The study provides key insights into the nutrition of coral reef ecosystems, especially their dependence on offshore production. Detailed knowledge of food web dynamics is crucial to understand the impacts of anthropogenic and climate-induced change in marine ecosystems. "The results force us to reconsider how we view coral reefs, and they highlight the extent of the connectivity with the surrounding ocean. If these groupers are mostly reliant on offshore energy to support their feeding, then maybe they won't be so impacted by the loss of live coral, as many fishery studies have predicted; they may be more resilient. "On the other hand though, some studies have predicted that ocean production will decline in the future from climate change. If that is the case, and these groupers are reliant on that open ocean energy, they will be impacted by those changes." Study co-author, Professor Nick Polunin, from Newcastle University's School of Natural and Environmental Sciences, added: "Coral reefs are really suffering across the tropics from climate-related disturbances, particularly oceanic warming. "In spite of its tiny area, this ecosystem is a massive contributor to marine biodiversity and this study highlights how little we know about the food web sources sustaining that exceptional wealth of species it sustains."
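The dietary fractions reported above are the kind of quantity that stable-isotope mixing models estimate. The sketch below is the simplest two-source, one-tracer version with invented delta-13C values and a generic trophic-discrimination correction; it is a textbook simplification, not the (likely multi-tracer) approach used in the study.

```python
def offshore_fraction(d_consumer, d_offshore, d_reef, trophic_shift=1.0):
    """Two-source linear mixing model for one isotope tracer (e.g. delta13C).
    Returns the fraction of assimilated diet derived from the offshore source."""
    d_adj = d_consumer - trophic_shift           # correct for consumer-diet discrimination
    f = (d_adj - d_reef) / (d_offshore - d_reef)
    return min(max(f, 0.0), 1.0)                 # clamp to the physically possible range

# Invented example values (per mil); real studies measure these in tissue samples
d13c_grouper = -17.0
d13c_offshore = -19.5   # oceanic, plankton-based food chain
d13c_reef = -12.0       # reef benthic production

f = offshore_fraction(d13c_grouper, d13c_offshore, d13c_reef, trophic_shift=1.0)
print(f"estimated offshore contribution to diet: {f:.0%}")
```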
|
Environment
| 2,021 |
February 19, 2021
|
https://www.sciencedaily.com/releases/2021/02/210219155906.htm
|
The melting of large icebergs is a key stage in the evolution of ice ages
|
A new study, in which the Andalusian Earth Sciences Institute (IACT) (CSIC-UGR) participated, has described for the first time a key stage in the beginning of the great glaciations and indicates that it can happen to our planet in the future. The findings were recently published in the scientific journal
|
The study claims to have found a new connection that could explain the beginning of the ice ages on Earth. Antarctic iceberg melt could hold the key to the activation of a series of mechanisms that cause the Earth to suffer prolonged periods of global cooling, according to Francisco J. Jiménez-Espejo, a researcher at the Andalusian Earth Sciences Institute (CSIC-UGR), whose discoveries were recently published in the journal It has long been known that changes in the Earth's orbit, as it moves around the Sun, trigger the beginning or end of glacial periods by affecting the amount of solar radiation that reaches the planet's surface. However, until now, the question of how small variations in the solar energy that reaches us can lead to such dramatic shifts in the planet's climate has remained a mystery. In this new study, a multinational group of researchers proposes that, when the Earth's orbit around the Sun is just right, the Antarctic icebergs begin to melt further and further away from the continent, moving huge volumes of freshwater from the Antarctic Ocean into the Atlantic. This process causes the Antarctic Ocean to become increasingly salty, while the Atlantic Ocean becomes fresher, affecting overall ocean circulation patterns and drawing CO2 out of the atmosphere. Within this study, the scientists used several techniques to reconstruct oceanic conditions in the past, including by identifying tiny fragments of rock that had broken away from Antarctic icebergs as they melted into the ocean. These deposits were obtained from marine sediment cores recovered by the International Ocean Discovery Program (IODP) during Expedition 361 off the sea-margins of South Africa. These sediment cores enabled the scientists to reconstruct the history of the icebergs that reached these latitudes in the last million and a half years, this being one of the most continuous records known. The study describes how these rocky deposits appear to be consistently associated with variations in deep ocean circulation, which was reconstructed from chemical variations in minute deep-sea fossils known as foraminifera. The team also used new climate simulations to test the proposed hypotheses, finding that huge volumes of fresh water are carried northward by icebergs. The first author of the article, PhD student Aidan Starr from the University of Cardiff, notes that the researchers are "surprised to have discovered that this teleconnection is present in each of the different ice ages of the last 1.6 million years. This indicates that the Antarctic Ocean plays a major role in the global climate, something that scientists have long sensed, but that we have now clearly demonstrated." Francisco J. Jiménez Espejo, a researcher with the IACT, participated in his capacity as a specialist in inorganic geochemistry and physical properties during the IODP 361 expedition aboard the JOIDES Resolution research vessel. For two months, between January and March 2016, the research team sailed between Mauritius and Cape Town, collecting deep-sea sediment cores. Jiménez Espejo's main contribution to the study focused on identifying the geochemical variations associated with glacial and interglacial periods, which has made it possible to estimate with greater accuracy the age of the sediment and its sensitivity to the different environmental changes associated with those periods. Over the course of the last 3 million years, the Earth began to experience periodic glacial cooling. 
During the most recent episode, about 20,000 years ago, icebergs continuously reached the Atlantic coasts of the Iberian Peninsula from the Arctic. Currently, the Earth is in a warm interglacial period known as the Holocene.However, the progressive increase in global temperature associated with COIan Hall, also of Cardiff University, who co-directed the scientific expedition, indicates that the results may contribute to understanding how the Earth's climate may respond to anthropic changes. Similarly, Jiménez Espejo, notes that "last year, during an expedition aboard Hespérides, the Spanish Navy research vessel, we were able to observe the immense A-68 iceberg that had just broken into several pieces next to the islands of South Georgia. Ocean warming may cause the trajectories and the melt patterns of these large icebergs to alter in the future, affecting the currents and, therefore, our climate and the validity of the models that scientists use to predict it."
|
Environment
| 2,021 |
February 19, 2021
|
https://www.sciencedaily.com/releases/2021/02/210219111503.htm
|
Global study of 48 cities finds nature sanitizes 41.7 million tons of human waste a year
|
The first global-scale assessment of the role ecosystems play in providing sanitation finds that nature provides at least 18% of sanitation services in 48 cities worldwide, according to researchers in the United Kingdom and India. The study, published February 19 in the journal
|
"Nature can, and does, take the role of sanitation infrastructure," said Alison Parker, a Senior Lecturer in International Water and Sanitation at Cranfield University in the United Kingdom and one of the authors of the study. "While we are not marginalizing the vital role of engineered infrastructure, we believe a better understanding of how engineered and natural infrastructure interact may allow adaptive design and management, reducing costs, and improving effectiveness and sustainability, and safeguard the continued existence of these areas of land."Wastewater treatment infrastructure that converts human feces into harmless products is an important tool for global human health. However, more than 25% of the world's population did not have access to basic sanitation facilities in 2017 and another 14% used toilets in which waste was disposed of onsite. While some of this waste may be hazardous to local populations, previous research has suggested that natural wetlands and mangroves, for example, provide effective treatment services. The Navikubo wetland in Uganda processes untreated wastewater from more than 100,000 households, protecting the Murchison Bay and Lake Victoria from harmful contaminants, while in the United States coastal wetlands in the Gulf of Mexico remove nitrogen from the Mississippi River."We realized that nature must be providing sanitation services, because so many people in the world do not have access to engineered infrastructure like sewers," adds Simon Willcock, a Senior Lecturer in Environmental Geography in Bangor University, UK, and another author of the study. "But the role for nature was largely unrecognized."To better understand how natural ecosystems process waste, the team from Bangor University, Cranfield University, Durham University, University of Gloucestershire, University of Hyderabad (India) and the Fresh Water Action Network, South Asia quantified sanitation ecosystem services in 48 cities containing about 82 million people using Excreta Flow Diagrams, which leverage a combination of in-person interviews, informal and formal observations, and direct field measurements to document how human fecal matter flows through a city or town. The researchers assessed all diagrams that were available on December 17th, 2018, focusing on those coded as "fecal sludge contained not emptied" (FSCNE), in which the waste is contained in a pit latrine or septic tank below ground but does not pose a risk to groundwater, for example, because the water table is too deep.Conservatively, Willcock and colleagues estimate that nature processes 2.2 million cubic meters of human waste per year within these 48 cities. Since more than 892 million people worldwide use similar onsite disposal toilet facilities, they further estimate that nature sanitizes about 41.7 million tons of human waste per year before the liquid enters the groundwater -- a service worth about $4.4 billion per year. 
However, the authors note that these estimates likely undervalue the true worth of sanitation ecosystem services, since natural processes may contribute to other forms of wastewater processing, though these are harder to quantify.Willcock and colleagues hope that their findings will shed light on an important but often unrecognized contribution that nature makes to many people's everyday lives, inspiring the protection of ecosystems such as wetlands that protect downstream communities from wastewater pollutants."We would like to promote a better collaboration between ecologists, sanitation practitioners and city planners to help nature and infrastructure work better in harmony, and to protect nature where it is providing sanitation services," said Parker.This work was prepared as part of the ESRC and ICSSR funded Rurality as a vehicle for Urban Sanitation Transformation (RUST) project.
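The global figure rests on scaling the volume processed by nature in the assessed cities up to the worldwide population using comparable onsite facilities, then converting volume to mass. The sketch below shows that proportional scaling; the assumed number of onsite-sanitation users in the surveyed cities and the sludge density are illustrative placeholders, so it only approximates the published numbers rather than reproducing the paper's accounting.

```python
# Simplified proportional scaling from the surveyed cities to the globe.
# Inputs marked "assumed" are illustrative placeholders, not the paper's values.

volume_surveyed_m3 = 2.2e6    # m3/yr processed in situ by nature in the 48 cities (from the article)
users_surveyed = 45e6         # assumed residents of those cities relying on contained onsite systems
users_worldwide = 892e6       # people using similar onsite facilities worldwide (from the article)
density_t_per_m3 = 1.0        # treat fecal sludge as roughly water-dense (assumption)

per_user_m3 = volume_surveyed_m3 / users_surveyed
global_tonnes = per_user_m3 * users_worldwide * density_t_per_m3

print(f"per-user volume:  {per_user_m3:.3f} m3/yr")
print(f"global estimate:  {global_tonnes / 1e6:.1f} million tonnes/yr")
```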
|
Environment
| 2,021 |
February 19, 2021
|
https://www.sciencedaily.com/releases/2021/02/210219111325.htm
|
Deep-sea vent-endemic snail hologenome decoded
|
A research team led by Prof. QIAN Peiyuan, Head and Chair Professor from the Hong Kong University of Science and Technology (HKUST)'s Department of Ocean Science and David von Hansemann Professor of Science, has published their cutting-edge findings of symbiotic mechanisms of a deep-sea vent snail (Gigantopelta aegis) in the scientific journal
|
Deep-sea hydrothermal vents are characterized by extremely high hydrostatic pressure and darkness, and they can release fluids overheated by the Earth's crust, with concentrated toxic heavy metals and chemical substances. These characteristics make hydrothermal vents among the most extreme environments on our planet. In addition, the deep-sea hydrothermal vent environment is very similar to the Earth's early environment, when life began to form. Unlike most ecosystems relying on photosynthesis-derived nutrients, fauna living in vents depend on chemosynthetic microbes that can utilize chemical energy to synthesize organic compounds, supporting the dense and unique macro-organisms living there. However, how the organisms thrive and adapt to such an extreme environment remains a complex puzzle. In April and May 2019, Prof. Qian's team undertook a deep-sea research expedition and explored the Longqi vent field on the Southwest Indian Ridge with a remotely operated vehicle. They found a dominant species, Gigantopelta snails, at the sea floor (approximately 2,800 m depth). Prof. Qian's team discovered two types of symbiotic bacteria with dramatically different morphologies that live inside the esophageal gland cells of Gigantopelta snails, which were further identified as a sulfur-oxidizing bacterium and a methane-oxidizing bacterium. The team further decoded the genomes of the Gigantopelta snail, the sulfur-oxidizing bacterium and the methane-oxidizing bacterium, unveiling a novel dual symbiosis system that is highly versatile in utilizing chemical energy for nutrient synthesis. The sulfur-oxidizing bacteria can utilize the chemical energy from hydrogen, hydrogen sulfide, sulfate, sulfite and thiosulfate, while the methane-oxidizing bacteria can utilize hydrogen and methane. From the host side, there are more copies of pattern recognition receptors in the Gigantopelta genome, and they are specifically expressed in the symbiotic organ, which helps Gigantopelta recognize and maintain the dual symbiosis system. Gigantopelta snails adopt a mutualistic metabolic relationship among multiple symbiotic partners and thus flourish in this vent ecosystem. These findings not only enable us to gain a better understanding of how animals thrive in such extreme environments, but also shed light on how such animals cope with microbes in a highly specialized way.
|
Environment
| 2,021 |
February 19, 2021
|
https://www.sciencedaily.com/releases/2021/02/210219091843.htm
|
Release of nutrients from lake-bottom sediments worsens Lake Erie's annual 'dead zone'
|
Robotic laboratories on the bottom of Lake Erie have revealed that the muddy sediments there release nearly as much of the nutrient phosphorus into the surrounding waters as enters the lake's central basin each year from rivers and their tributaries.
|
Excessive phosphorus, largely from agricultural sources, contributes to the annual summer cyanobacteria bloom that plagues Lake Erie's western basin and the central basin's annual "dead zone," an oxygen-starved region that blankets several thousand square miles of lake bottom and that reduces habitat for fish and other organisms.The release of phosphorus from Lake Erie sediments during periods of low oxygen -- a phenomenon known as self-fertilization or internal loading -- has been acknowledged since the 1970s. But the new University of Michigan-led study marks the first time the process has been monitored step by step for an entire season using lake-bottom sensors.The authors of the new study, published online Feb. 18 in the journal "Until now, we lacked evidence to pinpoint when and where this phenomenon occurs in Lake Erie and how much it contributes to nutrients in the lake," said study lead author Hanna Anderson, a research technician at U-M's Cooperative Institute for Great Lakes Research who did the work for a master's thesis at the School for Environment and Sustainability."These new measurements have allowed us to estimate that this self-fertilization process contributes up to 11,000 metric tons of phosphorus to the lake water each summer, an amount that is close to the total annual runoff of phosphorus from rivers and tributaries into the central part of the lake," said Casey Godwin, an assistant research scientist at the institute and a co-author of the paper.Efforts to control Lake Erie nutrient pollution, or eutrophication, have focused on reducing the amount of phosphorus-rich runoff from farms and other sources that flows into the lake from rivers and their tributaries. In 2016, the U.S. and Canadian governments adopted a phosphorus-reduction target of 40%.The authors of the new "Environmental managers tasked with tributary load reduction must take internal loading estimates into account when determining how to balance the total P load," they wrote. "Historical and persistent sediment P loading represents a delayed lake response to eutrophication and prevents the successful management of a system when only external P loading is considered."In addition to several U-M scientists, authors of the paper include researchers from the National Oceanic and Atmospheric Administration's Great Lakes Environmental Research Laboratory. U-M scientists and staff at CIGLR collaborate with NOAA GLERL on a number of projects such as this.The researchers deployed two small autonomous laboratories at lake-bottom sites in Lake Erie's central basin -- one at a depth of 67 feet and the other at a depth of 79 feet -- in late July 2019 and left them there for more than two months.The self-contained chemistry labs, manufactured by SeaBird Scientific and owned by the team's NOAA collaborators, are cylinders 22 inches long and 7 inches wide. The labs and their batteries were placed inside a protective steel framework that was lowered from the stern of a ship. The metal cage was attached to a 150-pound weight and two white floats that kept it off the bottom.The autonomous analyzers were programmed to measure phosphorus concentrations in the water every six hours. They also monitored water temperature and dissolved-oxygen levels. 
More than 300 phosphorus measurements were made at each site before the devices were retrieved in early October.This previously unobtainable dataset yielded some surprising findings.For example, earlier studies had suggested that nutrients begin to flow out of lake-bottom sediments when dissolved-oxygen concentrations in the surrounding waters drop to very low levels, a condition called hypoxia.But the chemistry robots showed that the flow of phosphorus did not begin during hypoxia -- even when oxygen levels dropped below the point where fish can survive.Instead, the "positive P flux" from the sediments began 12 to 24 hours after dissolved oxygen levels in the lake-bottom water dropped to zero, a condition called anoxia. At the two central-basin sites in Lake Erie, that period began in late summer and continued into early October."Within 24 hours of when the oxygen went away completely, we recorded a rapid increase of phosphorus in the water, and this continued until the concentration at the bottom of the lake was more than a hundred times higher than at the surface," said study senior author Thomas Johengen, director of U-M's Cooperative Institute for Great Lakes Research."Our findings about the timing of phosphorus release relative to oxygen levels in the water are the first of their kind for the Great Lakes and represent a novel application of this technology," Johengen said.Knowing when the phosphorus release began, the rate of flow from the sediments, and the duration of the anoxic period enabled the researchers to estimate the total amount of phosphorus added to Lake Erie's central basin each year due to internal loading.The researchers estimated that Erie's lake-bottom sediments annually release between 2,000 and 11,500 metric tons of phosphorus. The high end of this range equals the approximate annual inflow of phosphorus to Lake Erie's central basin from rivers and tributaries: 10,000 to 11,000 metric tons.The released phosphorus is in a readily available form called soluble reactive phosphorus, or SRP, that likely fuels central-basin algal growth. When those algae die and sink, bacteria decompose the organic matter and consume oxygen in the process. The result: an oxygen-starved region in bottom and near-bottom waters of the central basin known as the dead zone."Internal loading of phosphorus from lake-bottom sediments can become a positive feedback loop: Hypoxia leads to the release of P from the sediments, which causes more algae growth, and the dead and dying algae consume the oxygen in the water and contribute to hypoxia the following summer," Godwin said."This type of feedback has been seen in lakes worldwide, and it interacts with ongoing efforts to reduce phosphorus loads from Lake Erie's tributaries," he said.As the Great Lakes continue to warm in the years ahead due to human-caused climate change, Lake Erie's central-basin dead zone is expected to form earlier and last longer each year, resulting in a greater supply of phosphorus released from the sediments, according to the study authors.The current study demonstrates the potential for using robotic laboratories to monitor those changes, as well as any changes that may occur due to the decreased flow of nutrients into Lake Erie from rivers and tributaries, according to the authors. 
Internal loading from central-basin sediments likely does not impact the severity of Lake Erie's western-basin algal blooms, according to the researchers."NOAA's mission in the Great Lakes includes observing, understanding and forecasting significant events such as internal loading. Very often, the development and application of advanced technology such as this can confirm a hypothesis or provide novel insight that was previously impossible," said study co-author Steve Ruberg, senior scientist at NOAA's Great Lakes Environmental Research Laboratory."This important observational result will contribute to NOAA's collaboration with the EPA's Great Lakes National Program Office under the Great Lakes Water Quality Agreement, significantly improving our understanding of hypoxic zone phosphorus loading and the subsequent impact on the Lake Erie ecosystem," Ruberg said.In addition to Anderson, Godwin, Johengen and Ruberg, the authors of the Environmental Science & Technology Water paper are Heidi Purcell and Peter Alsip of U-M's Cooperative Institute for Great Lakes Research and Lacey Mason of NOAA's Great Lakes Environmental Research Laboratory.The work was supported by NOAA's National Centers for Coastal Ocean Science Competitive Research Program and through the NOAA Cooperative Agreement with the Cooperative Institute for Great Lakes Research at the University of Michigan.
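An internal phosphorus load of this kind is, at heart, an areal release rate integrated over the anoxic area and the anoxic season. The sketch below shows that arithmetic with assumed inputs; only the structure of the calculation, not the numbers, is taken from the article.

```python
def internal_p_load_tonnes(flux_mg_m2_day, anoxic_area_km2, anoxic_days):
    """Total phosphorus released from sediments over one anoxic season (metric tons)."""
    area_m2 = anoxic_area_km2 * 1e6
    total_mg = flux_mg_m2_day * area_m2 * anoxic_days
    return total_mg / 1e9                     # mg -> metric tons

# Illustrative inputs (assumed, not the study's measured values):
flux = 10.0      # mg P per m2 per day once bottom water goes anoxic
area = 8_000     # km2 of central-basin lake bottom under the "dead zone"
days = 45        # days of anoxia in late summer and early autumn

print(f"internal load: ~{internal_p_load_tonnes(flux, area, days):,.0f} metric tons P")
```

With these placeholder inputs the total lands inside the 2,000-11,500 metric ton range quoted above, but the flux, area and duration here are assumptions, not the study's measurements.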
|
Environment
| 2,021 |
February 19, 2021
|
https://www.sciencedaily.com/releases/2021/02/210219083837.htm
|
How to calculate the social cost of carbon? Researchers offer roadmap in new analysis
|
The Biden administration is revising the social cost of carbon (SCC), a decade-old cost-benefit metric used to inform climate policy by placing a monetary value on the impact of climate change. In a newly published analysis in the journal
|
"President Biden signed a Day One executive order to create an interim SCC within a month and setting up a process to produce a final, updated SCC within a year," explains Gernot Wagner, a climate economist at New York University's Department of Environmental Studies and NYU's Robert F. Wagner Graduate School of Public Service and the paper's lead author. "Our work outlines how the administration can use the latest research in ways that take into account storms, wildfires, and other phenomena that are more devastating today than they were when the SCC was first created.""Economic analysis is at the heart of the regulatory process in the U.S. and will therefore play a major role in shaping and informing the ambitious climate goals from the new administration," adds David Anthoff, co-author and assistant professor of Energy and Resources at UC Berkeley. "Our recommendations offer a roadmap for how this can be done in a way that is both scientifically rigorous and transparent.""The damage and loss of life caused by the severe weather in Texas is only the latest example of how climate change can upend our well-being in ways not imagined only 10 years ago," observes Wagner.The revised SCC will be created by the federal government's Interagency Working Group (IWG), which includes the Council of Economic Advisors, the Office of Management and Budget, and the Office of Science and Technology Policy.In the "Climate science and economics have advanced since 2010," write the authors. "Devastating storms and wildfires are now more common, and costs are mounting. Advances in science mean that researchers can now link many more extreme weather events directly to climate change, and new econometric techniques help to quantify dollar impacts."
|
Environment
| 2,021 |
March 10, 2021
|
https://www.sciencedaily.com/releases/2021/03/210310204217.htm
|
New research could boost a solar-powered fuel made by splitting water
|
Hydrogen is an incredibly powerful fuel, and the ingredients are everywhere -- in plain old water. Researchers would love to be able to use it widely as a clean and sustainable energy source.
|
One catch, however, is that a considerable amount of energy is required to split water and make hydrogen. Thus scientists have been working on fabricating materials for photoelectrodes that can use solar energy to split water, creating a "solar fuel" that can be stored for later use. Scientists with the University of Chicago, the University of Wisconsin-Madison and Brookhaven National Laboratory published a new breakthrough in making such photoelectrodes. Their research, reported in "Our results are crucial for both understanding and improving photoelectrodes used in solar fuel production," said Giulia Galli, the Liew Family Professor of Molecular Engineering and professor of chemistry at UChicago, senior scientist at Argonne National Laboratory and co-corresponding author of the paper. "Each improvement we make brings us closer to the promise of a sustainable future fuel," added co-corresponding author Kyoung-Shin Choi, professor of chemistry at the University of Wisconsin, Madison. Galli and Choi are theoretical and experimental leaders in the field of solar fuels, respectively, and have been collaborating for several years to design and optimize photoelectrodes for producing solar fuels. To understand the effects of the surface composition of electrodes, they teamed up with Mingzhao Liu (MS'03, PhD'07), a staff scientist with the Center for Functional Nanomaterials at Brookhaven National Laboratory. The way that a photoelectrode works is by absorbing energy from sunlight, which generates an electrical potential and current that can split water into oxygen and hydrogen. The team investigated a photoelectrode material called bismuth vanadate, which is promising because it strongly absorbs sunlight across a range of wavelengths and remains relatively stable in water. In particular, they wanted to investigate the electrode surface. "The properties of the bulk materials have been extensively studied; however, the impact of the surface on water splitting has been challenging to establish," explained Liu, a co-corresponding author of the paper. At Brookhaven, Liu and graduate student Chenyu Zhou had perfected a method for growing bismuth vanadate as a photoelectrode with a well-defined orientation and surface structure. "However," Zhou said, "we knew that our photoelectrode had slightly more vanadium than bismuth on the surface." The group wanted to know if a more bismuth-rich version would have better performance. At UW-Madison, Choi and graduate student Dongho Lee found a way to change the surface composition without altering the makeup of the rest of the electrode, and they fabricated a sample with more bismuth atoms on the surface. To understand on a molecular level what was happening, the two different surface compositions were examined using special instruments at the Center for Functional Nanomaterials, including scanning tunneling microscopy. Wennie Wang, a post-doctoral scholar in the Galli group, compared experimental and simulated microscopy images and identified the surface structure models that closely mimicked the experimental samples. "Our quantum mechanical calculations provided a wealth of information, including the electronic properties of the surface and the exact positions of the atoms," said Wang. "This information turned out to be critical to interpret experiments." Next, the team compared what happened when light struck the surfaces. 
They found that surfaces with an excess of bismuth atoms are more favorable for water splitting reactions."When bismuth vanadate absorbs light, it generates electrons and electron vacancies called holes," said Lee. "What we found is that the bismuth-terminated surface lifts the electrons to higher energy and also leads to more efficient separation of electrons from holes -- overall, having more bismuth atoms on the surface favors water splitting reactions.""Our tightly integrated experimental and theoretical investigations were vital in gaining an atomic level understanding of how the surface modification can change the properties of a photoelectrode," said Choi. "Our collaboration funded by the National Science Foundation has been extremely fruitful," added Galli.Next the researchers will explore how bismuth vanadate photoelectrodes interact with a catalyst layer that is applied on top of the photoelectrode surface to facilitate water oxidation."We believe the results obtained from our study will serve as an essential foundation for future studies," said Liu. "We identified an important piece of the complex puzzle of water splitting, and we're looking forward to continuing to explore ways to improve solar fuel production as a sustainable alternative to fossil fuels," added Galli.This work was funded by the National Science Foundation and used computational resources of the University of Chicago's Research Computing Center. The work at Brookhaven was carried out in the Materials Synthesis and Characterization and Proximal Probes User Facilities and funded by Department of Energy, Office of Science.
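A photoelectrode's photocurrent translates directly into a hydrogen production rate through Faraday's law (two electrons per H2 molecule). The small calculation below uses an assumed photocurrent density of the order often reported for bismuth vanadate photoanodes, not a value measured in this study.

```python
FARADAY = 96_485.0        # C per mole of electrons
ELECTRONS_PER_H2 = 2      # 2 H+ + 2 e- -> H2 at the counter electrode

def hydrogen_rate_mmol_per_hour(current_density_mA_cm2, area_cm2, faradaic_eff=1.0):
    """Hydrogen production implied by a photocurrent, via Faraday's law."""
    current_A = current_density_mA_cm2 * 1e-3 * area_cm2
    mol_h2_per_s = faradaic_eff * current_A / (ELECTRONS_PER_H2 * FARADAY)
    return mol_h2_per_s * 3600 * 1e3          # mol/s -> mmol per hour

# Assumed input: a few mA/cm2 is a typical order of magnitude reported for bismuth
# vanadate photoanodes under 1-sun illumination (not this study's value).
print(f"{hydrogen_rate_mmol_per_hour(3.0, area_cm2=1.0):.3f} mmol H2 per hour per cm2")
```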
|
Environment
| 2,021 |
February 18, 2021
|
https://www.sciencedaily.com/releases/2021/02/210218180208.htm
|
Fuel for earliest life forms: Organic molecules found in 3.5 billion-year-old rocks
|
A research team including the geobiologist Dr. Helge Missbach from the University of Cologne has detected organic molecules and gases trapped in 3.5 billion-year-old rocks. A widely accepted hypothesis says that the earliest life forms used small organic molecules as building materials and energy sources. However, the existence of such components in early habitats on Earth was as yet unproven. The current study, published in the journal '
|
Specifically, the scientists examined about 3.5 billion-year-old barites from the Dresser Formation in Western Australia. The barite thus dates from a time when early life developed on Earth. 'In the field, the barites are directly associated with fossilized microbial mats, and they smell like rotten eggs when freshly scratched. Thus, we suspected that they contained organic material that might have served as nutrients for early microbial life,' said Dr. Helge Missbach of the Institute of Geology and Mineralogy and lead author of the study.In the fluid inclusions, the team identified organic compounds such as acetic acid and methanethiol, in addition to gases such as carbon dioxide and hydrogen sulfide. These compounds may have been important substrates for metabolic processes of early microbial life. Furthermore, they are discussed as putative key agents in the origin of life on Earth. 'The immediate connection between primordial molecules emerging from the subsurface and the microbial organisms -- 3.5 billion years ago -- somehow surprised us. This finding contributes decisively to our understanding of the still unclear earliest evolutionary history of life on Earth,' Missbach concluded.
|
Environment
| 2,021 |
February 18, 2021
|
https://www.sciencedaily.com/releases/2021/02/210218142849.htm
|
Migratory birds track climate across the year
|
As climate change takes hold across the Americas, some areas will get wetter, and others will get hotter and drier. A new study of the yellow warbler, a widespread migratory songbird, shows that individuals have the same climatic preferences across their migratory range. The work is published Feb. 17 in
|
"What's amazing is that the birds track similar climates despite the fact that they have migrated thousands of miles," said Rachael Bay, assistant professor in the Department of Evolution and Ecology, College of Biological Sciences at the University of California, Davis. "It seems that individual birds may be adapted to particular climate regimes."Yellow warblers (Setophagia petechia) breed throughout North America and fly south to Central and South America to spend the winter. A previous study by Bay and colleagues found links between genetic variation and precipitation across North America, suggesting that certain individuals might be adapted to dry conditions while others thrive in wet conditions. In the current study, the authors were able to use genetics to predict where birds captured on their wintering grounds in Central and South America would end up breeding and compare climate patterns in their winter and summer areas.Individual birds showed preferences for drier or wetter areas, but not for warmer or cooler areas. In other words, birds that bred in relatively dry parts of North America -- such as California's Central Valley -- overwintered in dry parts of South or Central America."This is the first demonstration of using individual genetic tracking to link climates across the migratory cycle within a bird species," Bay said.This range of climatic preferences could have consequences for how the birds respond to climate change. Bay speculates that the variation she and her colleagues found might provide the raw material for the species to adapt to changing climate conditions. For example, populations that are adapted to drier conditions might displace those adapted to wetter ones. In fact, Bay and colleagues have already found that population sizes of yellow warblers changed with precipitation across years.Bay collected data for the study during her postdoctoral research, in collaboration with banding stations and collecting sites in North and South America. Bay and her colleagues are now eager to see whether individuals of other bird species also track climate during migration.Additional authors on the paper are Daniel Karp, Department of Wildlife, Fish and Conservation Biology, UC Davis; James Saracco, The Institute for Bird Populations, Petaluma, California; William Anderegg, University of Utah; Luke Frishkoff, University of Texas at Arlington; David Wiedenfeld, American Bird Conservancy, The Plains, Virginia; Thomas Smith, UCLA; and Kristen Ruegg, Colorado State University, Fort Collins.The work was supported by grants from the National Science Foundation, National Geographic, California Energy Commission and First Solar Inc.
|
Environment
| 2,021 |
February 18, 2021
|
https://www.sciencedaily.com/releases/2021/02/210218142729.htm
|
Ancient relic points to a turning point in Earth's history 42,000 years ago
|
The temporary breakdown of Earth's magnetic field 42,000 years ago sparked major climate shifts that led to global environmental change and mass extinctions, a new international study co-led by UNSW Sydney and the South Australian Museum shows.
|
This dramatic turning point in Earth's history -- laced with electrical storms, widespread auroras, and cosmic radiation -- was triggered by the reversal of Earth's magnetic poles and changing solar winds.The researchers dubbed this danger period the 'Adams Transitional Geomagnetic Event', or 'Adams Event' for short -- a tribute to science fiction writer Douglas Adams, who wrote in The Hitchhiker's Guide to the Galaxy that '42' was the answer to life, the universe, and everything.The findings are published today in "For the first time ever, we have been able to precisely date the timing and environmental impacts of the last magnetic pole switch," says Chris Turney, a professor at UNSW Science and co-lead author of the study."The findings were made possible with ancient New Zealand kauri trees, which have been preserved in sediments for over 40,000 years."Using the ancient trees we could measure, and date, the spike in atmospheric radiocarbon levels caused by the collapse of Earth's magnetic field."While scientists already knew the magnetic poles temporarily flipped around 41-42,000 years ago (known as the 'Laschamps Excursion'), they didn't know exactly how it impacted life on Earth -- if at all.But the researchers were able to create a detailed timescale of how Earth's atmosphere changed over this time by analysing rings on the ancient kauri trees."The kauri trees are like the Rosetta Stone, helping us tie together records of environmental change in caves, ice cores and peat bogs around the world," says co-lead Professor Alan Cooper, Honorary Researcher at the South Australian Museum.The researchers compared the newly-created timescale with records from sites across the Pacific and used it in global climate modelling, finding that the growth of ice sheets and glaciers over North America and large shifts in major wind belts and tropical storm systems could be traced back to the Adams Event.One of their first clues was that megafauna across mainland Australia and Tasmania went through simultaneous extinctions 42,000 years ago."This had never seemed right, because it was long after Aboriginal people arrived, but around the same time that the Australian environment shifted to the current arid state," says Prof. Cooper.The paper suggests that the Adams Event could explain a lot of other evolutionary mysteries, like the extinction of Neandertals and the sudden widespread appearance of figurative art in caves around the world."It's the most surprising and important discovery I've ever been involved in," says Prof. Cooper.The magnetic north pole -- that is, the direction a compass needle points to -- doesn't have a fixed location. It usually wobbles close to the North Pole (the northern-most point of Earth's axis) over time due to dynamic movements within the Earth's core, just like the magnetic south pole.Sometimes, for reasons that aren't clear, the magnetic poles' movements can be more drastic. Around 41,000-42,000 years ago they swapped places entirely."The Laschamps Excursion was the last time the magnetic poles flipped," says Prof. Turney. 
"They swapped places for about 800 years before changing their minds and swapping back again."Until now, scientific research has focused on changes that happened while the magnetic poles were reversed, when the magnetic field was weakened to about 28 per cent of its present-day strength.But according to the team's findings, the most dramatic part was the lead-up to the reversal, when the poles were migrating across the Earth."Earth's magnetic field dropped to only 0-6 per cent strength during the Adams Event," says Prof. Turney."We essentially had no magnetic field at all -- our cosmic radiation shield was totally gone."During the magnetic field breakdown, the Sun experienced several 'Grand Solar Minima' (GSM), long-term periods of quiet solar activity.Even though a GSM means less activity on the Sun's surface, the weakening of its magnetic field can mean more space weather -- like solar flares and galactic cosmic rays -- could head Earth's way."Unfiltered radiation from space ripped apart air particles in Earth's atmosphere, separating electrons and emitting light -- a process called ionisation," says Prof. Turney."The ionised air 'fried' the Ozone layer, triggering a ripple of climate change across the globe."Dazzling light shows would have been frequent in the sky during the Adams Event.Aurora borealis and aurora australis, also known as the northern and southern lights, are caused by solar winds hitting the Earth's atmosphere.Usually confined to the polar northern and southern parts of the globe, the colourful sights would have been widespread during the breakdown of Earth's magnetic field."Early humans around the world would have seen amazing auroras, shimmering veils and sheets across the sky," says Prof. Cooper.Ionised air -- which is a great conductor for electricity -- would have also increased the frequency of electrical storms."It must have seemed like the end of days," says Prof. Cooper.The researchers theorise that the dramatic environmental changes may have caused early humans to seek more shelter. This could explain the sudden appearance of cave art around the world roughly 42,000 years ago."We think that the sharp increases in UV levels, particularly during solar flares, would suddenly make caves very valuable shelters," says Prof. Cooper. "The common cave art motif of red ochre handprints may signal it was being used as sunscreen, a technique still used today by some groups."The amazing images created in the caves during this time have been preserved, while other art out in open areas has since eroded, making it appear that art suddenly starts 42,000 years ago."These findings come two years after a particularly important ancient kauri tree was uncovered at Ng?wh?, Northland.The massive tree -- with a trunk spanning over two and a half metres -- was alive during the Laschamps."Like other entombed kauri logs, the wood of the Ng?wh? tree is so well preserved that the bark is still attached," says UNSW's Dr Jonathan Palmer, a specialist in dating tree-rings (dendrochronology). Dr Palmer studied cross sections of the trees at UNSW Science's Chronos 14Carbon-Cycle Facility.Using radiocarbon dating -- a technique to date ancient relics or events -- the team tracked the changes in radiocarbon levels during the magnetic pole reversal. This data was charted alongside the trees' annual growth rings, which acts as an accurate, natural timestamp.The new timescale helped reveal the picture of this dramatic period in Earth's history. 
The team were able to reconstruct the chain of environmental and extinction events using climate modelling. "The more we looked at the data, the more everything pointed to 42," says Prof. Turney. "It was uncanny. Douglas Adams was clearly on to something, after all." While the magnetic poles often wander, some scientists are concerned about the current rapid movement of the north magnetic pole across the Northern Hemisphere. "This speed -- alongside the weakening of Earth's magnetic field by around nine per cent in the past 170 years -- could indicate an upcoming reversal," says Prof. Cooper. "If a similar event happened today, the consequences would be huge for modern society. Incoming cosmic radiation would destroy our electric power grids and satellite networks." Prof. Turney says the human-induced climate crisis is catastrophic enough without throwing major solar changes or a pole reversal in the mix. "Our atmosphere is already filled with carbon at levels never seen by humanity before," he says. "A magnetic pole reversal or extreme change in Sun activity would be unprecedented climate change accelerants. We urgently need to get carbon emissions down before such a random event happens again."
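As a rough illustration of the radiocarbon principle that underpins the kauri record (not the team's actual calibration, which relies on ring counting and far more sophisticated statistics), the standard age-activity relationship can be sketched as follows; the sample value is a hypothetical assumption.

```python
import math

# Conventional radiocarbon ages are reported using the Libby half-life.
LIBBY_HALF_LIFE = 5568.0                   # years
MEAN_LIFE = LIBBY_HALF_LIFE / math.log(2)  # ~8033 years

def radiocarbon_age(fraction_modern):
    """Conventional 14C age from the measured 14C content,
    expressed as a fraction of the modern standard."""
    return -MEAN_LIFE * math.log(fraction_modern)

# Hypothetical kauri ring retaining ~0.55% of modern 14C:
print(round(radiocarbon_age(0.0055)))  # ~41,800 conventional 14C years
```

A spike in atmospheric radiocarbon, like the one produced when the magnetic shield weakened, shifts these apparent ages, which is why anchoring the record to counted tree rings matters.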
|
Environment
| 2,021 |
February 18, 2021
|
https://www.sciencedaily.com/releases/2021/02/210218135814.htm
|
The distribution of vertebrate animals redefines temperate and cold climate regions
|
The distribution of vegetation is routinely used to classify climate regions worldwide, yet whether these regions are relevant to other organisms is unknown. Umeå researchers have established climate regions based on vertebrate species' distributions in a newly published study.
|
Climate determines how life organises across the world. Understanding which climatic conditions drive important changes in ecosystems is crucial to understanding and predicting how life functions and evolves.Human well-being critically depends on the vertebrate diversity, and yet we don't know enough about the climates that promote the organisation of these species. We know for instance that dry environments promote the generation of deserts, and humid and hot environments allow evergreen forests to thrive. But what conditions drive the distribution of vertebrates like mammals, frogs, birds and more?"To fill this gap, we studied the climates driving the organisation of vertebrates on Earth. We developed a network-based approach that connects species to their preferred climatic conditions. Then, we searched for climatic conditions preferred by similar vertebrate species," explains main author Joaquín Calatayud former post doc at Integrated Science Lab, Umeå University, and today working at King Juan Carlos University in Spain.With this approach, the authors presented the climate regions that define the distribution of vertebrates. Climates with high-energy, such as deserts, tropical savannas, and steppes, were found to be similar across different groups of vertebrates and plants. This was not the case for temperate and cold climates. Regions characterized by those climates differed across all groups. For instance, warm-blooded birds and mammals define regions of polar climates that are not observed in the case of cold-blooded amphibians and reptiles. This suggests that inhabiting these climates requires possessing specific climatic adaptations that have not appeared in all groups."Our results indicate that specific climate classifications are required to study the ecology, evolution, and conservation of specific groups of species," says Joaquín Calatayud.This study can build the basis for a better understanding of climate-driven ecological and evolutionary processes, leading to better conservation strategies, the authors say."Do ecosystem functions or evolutionary processes vary among climate regions? Do climatic regions hold a similar conservation status? These are some of the questions that our results could help to answer."
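A minimal sketch of the kind of network-based grouping described above, with hypothetical data and an off-the-shelf community-detection routine standing in for the study's actual pipeline: species are linked to the climate cells they occupy, the network is projected onto the climate side, and clusters of cells that share similar faunas become candidate climate regions.

```python
import networkx as nx
from networkx.algorithms import bipartite, community

# Hypothetical occurrences: species -> climate grid cells where they occur
occurrences = {
    "frog_A": ["cell_1", "cell_2"],
    "frog_B": ["cell_2", "cell_3"],
    "bird_A": ["cell_4", "cell_5"],
    "bird_B": ["cell_5", "cell_6"],
}

G = nx.Graph()
for species, cells in occurrences.items():
    G.add_node(species, bipartite=0)
    G.add_nodes_from(cells, bipartite=1)
    G.add_edges_from((species, c) for c in cells)

# Project onto climate cells: two cells are linked when they share species
cell_nodes = {n for n, d in G.nodes(data=True) if d["bipartite"] == 1}
cell_graph = bipartite.weighted_projected_graph(G, cell_nodes)

# Communities of climatically similar cells ~ candidate "climate regions"
regions = community.greedy_modularity_communities(cell_graph, weight="weight")
for i, region in enumerate(regions, 1):
    print(f"region {i}: {sorted(region)}")
```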
|
Environment
| 2,021 |
February 18, 2021
|
https://www.sciencedaily.com/releases/2021/02/210218094521.htm
|
Waste into wealth: Harvesting useful products from microbial growth
|
Ancient alchemists dreamed of transforming base materials like lead into gold and other valuable commodities. While such efforts generally came to naught, researchers today are having some success in extracting a variety of useful products like aviation fuels, lubricants, solvents, food additives and plastics from organic waste.
|
The trick is accomplished with the aid of specialized bacteria, whose metabolic activities can convert simpler chemicals into useful products through a microbial growth process known as chain elongation. Anca Delgado, a researcher in the Biodesign Swette Center for Environmental Biotechnology at Arizona State University, has been exploring the phenomenon. In a new study, she describes for the first time how the chain elongation processes are carried out by microorganisms under normal conditions in soil. The work promises to shed new light on these poorly understood processes in nature, allowing researchers to better leverage them to convert organic sources like food waste into valuable products. Such techniques offer a double benefit to society, minimizing or eliminating environmental waste/contaminants while producing biochemicals or biofuels and other important resources through green chemistry. The work will also help researchers expand their knowledge of microbial ecology. "We observed that different soil types sampled from 1.5 m or less below ground surface harbor a readily active potential for chain elongation of acetate and ethanol," Delgado says. "When fed acetate and ethanol, soil microcosms produced butyrate and hexanoate in just a few days and chain elongation became the main metabolism occurring in these samples." Delgado is joined by ASU colleagues Sayalee Joshi, Aide Robles, and Samuel Aguiar; their research findings appear in a recently published journal issue. The idea of converting organic residual streams like food waste to fuels and useful compounds has been steadily gaining ground, driven by advancing technologies as well as the rapidly growing global need for clean energy sources and pollution reduction. Such processes can help society form so-called circular economies, in which unwanted waste streams are continually converted into energy sources and other useful commodities. Organic waste sources hold enormous potential as an alternative resource for producing high-value fuels and chemicals because they are renewable and because they do not compete with the human food chain (as some existing biofuels like corn ethanol do). One source for these useful transformations is organic food waste, a staggering amount of which is produced annually. Driven by rising global populations, the accumulation of food waste has become a critical problem, due to associated health and environmental hazards. Food waste is discharged from a variety of sources, including food processing industries, households, and the hospitality sector. According to the United Nations Food and Agriculture Organization, a staggering 1.3 billion tons of food are lost to the food chain, and the amount is on a rapid rise. In addition to the squandering of food and land resources, food waste contributes a hefty burden to the environment in terms of carbon footprint, increasing greenhouse gas emissions and liberating an estimated 3.3 billion tons of CO2 into the atmosphere per year. Researchers hope to convert these waste residues into useful products and purify them in an efficient manner. One of the most innovative and eco-friendly means of dealing with all of this organic waste is through anaerobic digestion, which also holds the promise of expanding the world's energy supply. A promising emergent technology employing anaerobic digestion is known as microbial chain elongation, a metabolic process used by anaerobic microorganisms to grow and acquire energy.
They do this by combining carboxylate chemicals like acetate (C2) with more reduced compounds, such as ethanol (C2), to produce longer-chain carboxylates (typically C4-C8). This biotechnological process converts volatile fatty acids (VFAs) and an electron donor, typically ethanol, into more valuable medium-chain fatty acids (MCFAs), which are the precursors needed to produce biofuels and other useful chemicals. Initial waste sources are processed through chain elongation, which involves the cyclic addition of carbon units, thereby converting municipal solid waste, agricultural waste, syngas and similar feedstocks into high-value, medium-chain carboxylates like hexanoate (C6) and octanoate (C8). The conversion of VFAs into MCFAs with ethanol as electron donor is accomplished by chain-elongating microorganisms, particularly a bacterium known as Clostridium kluyveri. C. kluyveri (and closely related bacterial strains) accomplish their chain-elongation feats through a process known as the reverse β-oxidative pathway. As the name suggests, this pathway is the opposite of the metabolic pathway organisms use to break down fatty acids derived from foods. In recent years, researchers have explored β-oxidation pathways as well as developing the means to reverse these pathways in order to produce chemicals and polymer building blocks, using industrially relevant microorganisms. Chain elongation has hence proven an effective means of producing valuable chemicals in laboratory bioreactors, though the process is presumed to occur naturally in soils as well. It turns out that anaerobic soils and sediments are often rich in the same kinds of biodegradable organic compounds found in municipal or agricultural waste streams, and they are therefore a natural setting for chain elongation. Using soil samples from four different US locations, the current study examines the extent of natural chain elongation and how these processes vary according to the particular biogeochemical characteristics of soil composition. The research was designed to gauge the prevalence of chain elongation in anaerobic soil microorganisms and its possible role in microbial ecology. The results demonstrate the potential for chain elongation activities involving acetate and ethanol, which are typical metabolites found in soils as a result of organic compound fermentation. The study measured high enrichment rates in microorganisms similar to C. kluyveri under chain-elongating conditions, which were found to vary with soil type. The findings shed new light on this intriguing aspect of microbial ecology and may provide helpful clues for future efforts using microorganisms to process waste streams into a range of beneficial chemicals and other products. As Delgado notes, "On the fundamental side, results from this study are paving the way for investigations on the activity of chain elongation in situ. On the biotechnology side, this work shows that soils can be excellent sources of chain-elongating microorganisms for bioreactors focused on production of the specialty chemicals, hexanoate and octanoate."
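As a purely conceptual sketch of how reverse β-oxidation lengthens carboxylates two carbons at a time (it ignores the real stoichiometry of ethanol oxidation, ATP and hydrogen in C. kluyveri):

```python
# Each elongation cycle condenses a C2 unit (derived from ethanol) onto the
# growing carboxylate, so the chain grows by two carbons per cycle.
NAMES = {2: "acetate (C2)", 4: "butyrate (C4)",
         6: "hexanoate (C6)", 8: "octanoate (C8)"}

def elongate(start_carbons=2, cycles=3):
    chain, steps = start_carbons, [NAMES[start_carbons]]
    for _ in range(cycles):
        chain += 2  # one reverse beta-oxidation cycle adds a C2 unit
        steps.append(NAMES.get(chain, f"C{chain} carboxylate"))
    return " -> ".join(steps)

print(elongate())
# acetate (C2) -> butyrate (C4) -> hexanoate (C6) -> octanoate (C8)
```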
|
Environment
| 2,021 |
February 17, 2021
|
https://www.sciencedaily.com/releases/2021/02/210217175151.htm
|
Skies of blue: Recycling carbon emissions to useful chemicals and reducing global warming
|
Rapid global urbanization has dramatically changed the face of our planet, polluting our atmosphere with greenhouse gases and causing global warming. It is the need of the hour to control our activities and find more sustainable alternatives to preserve what remains of our planet for the generations to come.
|
Carbon dioxide (CO2) is a major greenhouse gas, and carbon capture and utilization (CCU) technologies aim to turn it, along with carbon monoxide (CO), into useful chemicals -- but doing so efficiently remains a challenge. A team of researchers from Korea, led by Prof. Jung Rae Kim from Pusan National University, have addressed this challenge for a newer CCU system called the bioelectrochemical system (BES). Prof. Kim explains that the team developed a "bioelectrosynthetic process" in which electroactive bacteria convert CO/CO2 into longer-chain products such as acetate. The two-chamber BES they used had several special features that achieved this. The cathode contained an electro-active biofilm, and the anode produced hydrogen ions via water electrolysis. These chambers were divided by an ion exchange membrane (IEM), which controlled the flow of protons and electrons between the chambers. Further, while the former contained microbial culture media, the latter contained mechanisms to control the initial pH of the system. In addition, a quinone electron mediator was used. They found that, given the right IEM -- one that allowed protons but not oxygen to pass through -- an acidic pH in the anode chamber caused a higher proton concentration gradient across the membrane, which was key to enhancing acetate production and the synthesis of longer-chain fatty acids in the cathode chamber. The quinone-dependent mediators improved electron transfer and increased product formation. Prof. Kim also notes that CO is a more reduced gas than CO2.
|
Environment
| 2,021 |
February 17, 2021
|
https://www.sciencedaily.com/releases/2021/02/210217151143.htm
|
Termite gut microbes could aid biofuel production
|
Wheat straw, the dried stalks left over from grain production, is a potential source of biofuels and commodity chemicals. But before straw can be converted to useful products by biorefineries, the polymers that make it up must be broken down into their building blocks. Now, researchers report that microbes from termite guts can help break down one of the toughest of those polymers, lignin.
|
In straw and other dried plant material, the three main polymers -- cellulose, hemicelluloses and lignin -- are interwoven into a complex 3D structure. The first two polymers are polysaccharides, which can be broken down into sugars and then converted to fuel in bioreactors. Lignin, on the other hand, is an aromatic polymer that can be converted to useful industrial chemicals. Enzymes from fungi can degrade lignin, which is the toughest of the three polymers to break down, but scientists are searching for bacterial enzymes that are easier to produce. In previous research, Guillermina Hernandez-Raquet and colleagues had shown that gut microbes from four termite species could degrade lignin in anaerobic bioreactors. Now, in a collaboration with Yuki Tobimatsu and Mirjam Kabel, they wanted to take a closer look at the process by which microbes from the wood-eating insects degrade lignin in wheat straw, and identify the modifications they make to this material.The researchers added 500 guts from each of four higher termite species to separate anaerobic bioreactors and then added wheat straw as the sole carbon source. After 20 days, they compared the composition of the digested straw to that of untreated straw. All of the gut microbiomes degraded lignin (up to 37%), although they were more efficient at breaking down hemicelluloses (51%) and cellulose (41%). Lignin remaining in the straw had undergone chemical and structural changes, such as oxidation of some of its subunits. The researchers hypothesized that the efficient degradation of hemicelluloses by the microbes could have also increased the degradation of lignin that is cross-linked to the polysaccharides. In future work, the team wants to identify the microorganisms, enzymes and lignin degradation pathways responsible for these effects, which could find applications in lignocellulose biorefineries.
|
Environment
| 2,021 |
February 17, 2021
|
https://www.sciencedaily.com/releases/2021/02/210217151113.htm
|
Lakes isolated beneath Antarctic ice could be more amenable to life than thought
|
Lakes underneath the Antarctic ice sheet could be more hospitable than previously thought, allowing them to host more microbial life.
|
This is the finding of a new study that could help researchers determine the best spots to search for microbes that could be unique to the region, having been isolated and evolving alone for millions of years. The work could even provide insights into similar lakes beneath the surfaces of icy moons orbiting Jupiter and Saturn, and the southern ice cap on Mars.Lakes can form beneath the thick ice sheet of Antarctica where the weight of ice causes immense pressure at the base, lowering the melting point of ice. This, coupled with gentle heating from rocks below and the insulation provided by the ice from the cold air above, allows pools of liquid water to accumulate.More than 400 of these 'subglacial' lakes have been discovered beneath the Antarctic ice sheet, many of which have been isolated from each other and the atmosphere for millions of years.This means that any life in these lakes could be just as ancient, providing insights into how life might adapt and evolve under persistent extreme cold conditions, which have occurred previously in Earth's history.Expeditions have successfully drilled into two small subglacial lakes at the edge of the ice sheet, where water can rapidly flow in or out. These investigations revealed microbial life beneath the ice, but whether larger lakes isolated beneath the central ice sheet contain and sustain life remains an open question.Now, in a study published today in As they have no access to sunlight, microbes in these environments do not gain energy through photosynthesis, but by processing chemicals. These are concentrated in sediments on the lake beds, where life is thought to be most likely.However, for life to be more widespread, and therefore easier to sample and detect, the water in the lake must be mixed -- must move around -- so that sediments, nutrients and oxygen can be more evenly distributed.In lakes on the surface of the Earth, this mixing is caused by the wind and by heating from the sun, causing convection currents. As neither of these can act on subglacial lakes, it could be assumed there is no mixing.However, the team behind the new study found that another source of heat is sufficient to cause convection currents in most subglacial lakes. The heat is geothermal: rising from the interior of the Earth and generated by the combination of heat left over from the formation of the planet and the decay of radioactive elements.The researchers calculated that this heat can stimulate convection currents in subglacial lakes that suspend small particles of sediment and move oxygen around, allowing more of the water body to be hospitable to life.Lead researcher Dr Louis Couston, from the University of Lyon and the British Antarctic Survey said: "The water in lakes isolated under the Antarctic ice sheet for millions of years is not still and motionless; the flow of water is actually quite dynamic, enough to cause fine sediment to be suspended in the water. With dynamic flow of water, the entire body of water may be habitable, even if more life remains focused on the floors. "This changes our appreciation of how these habitats work, and how in future we might plan to sample them when their exploration takes place."The researchers' predictions may soon be tested, as a team from the UK and Chile plan to sample a lake called Lake CECs in the next few years. 
Samples taken throughout the depth of the lake water will show just where microbial life is found.The predictions could also be used to generate theories about life elsewhere in the Solar System, as co-author Professor Martin Siegert, Co-Director of the Grantham Institute -- Climate Change and Environment at Imperial, explains: "Our eyes now turn to predicting the physical conditions in liquid water reservoirs on icy moons and planets. The physics of subglacial water pockets is similar on Earth and icy moons, but the geophysical setting is quite different, which means that we're working on new models and theories."With new missions targeting icy moons and increasing computing capabilities, it's a great time for astrobiology and the search for life beyond the Earth."
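A back-of-the-envelope way to see why a modest geothermal heat flux can stir such a lake is to compute a flux-based Rayleigh number and compare it with the critical value of order 10^3; the depth, heat flux and water properties below are illustrative assumptions, not values taken from the study.

```python
# Flux Rayleigh number: Ra_F = g * alpha * F * H**4 / (rho * cp * nu * kappa**2)
g = 9.81        # m/s^2    gravity
alpha = 5e-5    # 1/K      thermal expansion of cold, pressurized water (assumed)
F = 0.05        # W/m^2    geothermal heat flux, ~50 mW/m^2 (assumed)
H = 100.0       # m        water-column depth (assumed)
rho = 1000.0    # kg/m^3   density
cp = 4200.0     # J/(kg K) specific heat
nu = 1.8e-6     # m^2/s    kinematic viscosity near 0 C
kappa = 1.4e-7  # m^2/s    thermal diffusivity

Ra_F = g * alpha * F * H**4 / (rho * cp * nu * kappa**2)
print(f"Ra_F ~ {Ra_F:.1e}")  # ~1.7e16, far above the ~1e3 onset of convection
```

With the Rayleigh number this many orders of magnitude above the threshold, vigorous convective mixing of the water column is the expected state rather than the exception.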
|
Environment
| 2,021 |
February 17, 2021
|
https://www.sciencedaily.com/releases/2021/02/210217151003.htm
|
Climate change and suppression tactics are critical factors increasing fires
|
The millions of people affected by 2020's record-breaking and deadly fires can attest to the fact that wildfire hazards are increasing across western North America.
|
Both climate change and forest management have been blamed, but the relative influence of these drivers is still heavily debated. The results of a recent study show that in some ecosystems, human-caused climate change is the predominant factor; in other places, the trend can also be attributed to a century of fire suppression that has produced dense, unhealthy forests.Over the past decade, fire scientists have made major progress in understanding climate-fire relationships at large scales, such as across western North America. But a new paper published in the journal Researchers at five Western universities delved into which factors are increasing fire activity at the scales where management actions are implemented, and when and where ecosystems are likely to benefit from fuel management. They looked at data gathered across complex terrain in two mixed-conifer watersheds, in the Idaho Batholith and the Central Rocky Mountains.Key findings include:"This paper presents one of the first wildfire attribution studies at the scale of actionable management and shows that local responses to climate change and fire suppression can be highly variable even within individual watersheds," said lead author Erin Hanan from the University of Nevada, Reno.Hanan is a researcher with the University's Experiment Station and Assistant Professor of Natural Resources & Environmental Science in the College of Agriculture, Biotechnology & Natural Resources. She and her collaborators at UC Merced, UC Santa Barbara, Washington State University and the University of Washington, Tacoma, used a novel modeling approach to examine how climate change and fire suppression influence fire regimes across watersheds.Their approach integrates three research methods: (1) remote sensing data to characterize past fires; (2) climate models to determine the role climate change has played in local meteorological patterns, including temperature, rainfall and humidity; and (3) a watershed model that simulates how climate, water, vegetation and wildfire interact over space and time.The scientists' work uses climate records developed through a National Science Foundation-funded initiative called FireEarth and a watershed model developed at UC Santa Barbara, and expanded through another National Science Foundation-funded initiative called SERI-Fire."This study is really the first to directly compare the independent effects of climate change versus fire suppression, which you can only do using dynamic models," said UC Merced Assistant Professor Crystal Kolden, who led the FireEarth initiative. "We were actually surprised that the climate change signal was so clear; that's kind of rare. And even though our study was limited to Idaho, the forest types and climate we modeled are found throughout the western U.S., so they are good analogs for many other watersheds."In addition to illuminating the roles of major wildfire factors, the research also boosts methodology. "This paper moves fire modeling and prediction forward by looking inside watersheds and disentangling the many factors that influence how fire regimes will evolve in the coming decades," said UC Santa Barbara Professor Naomi Tague, who led the SERI-Fire initiative.While climate change remains a major factor, increasing the frequency and intensity of large wildfires across the globe, there are many regions where past suppression efforts still play an important role. 
Forest-density reduction is often the approach used in fuel-limited forests where decades of suppression have significantly increased fuel loads. However, density reductions sometimes have unintended consequences, particularly when vegetation growth is enhanced by the treatments, leading to greater plant water use, drier conditions -- including drier fuels -- and an increased fire risk.Because fuel management often occurs at fine scales, spatially explicit models are needed to project how different areas within watersheds will respond to fire suppression or fuel treatments under the shifting conditions brought about by climate change."We found that the effects of climate change and fuels varied at fine scales within watersheds, and the relative influence of these drivers is changing as the climate continues to warm, so solutions to the growing wildfire problem must be adaptive and location-based," Hanan said. "That's why it's important to consider local environmental conditions and climate change trends in policy and management planning for the future."
|
Environment
| 2,021 |
February 17, 2021
|
https://www.sciencedaily.com/releases/2021/02/210217134852.htm
|
New highly radioactive particles found in Fukushima
|
The 10-year anniversary of the Fukushima Daiichi nuclear accident occurs in March. Newly published work documents new, highly radioactive particles recovered from the area around the damaged plant.
|
Particles containing radioactive cesium (134+137Cs) were released from the damaged reactors at the Fukushima Daiichi Nuclear Power Plant (FDNPP) during the 2011 nuclear disaster. Small (micrometer-sized) particles (known as CsMPs) were widely distributed, reaching as far as Tokyo. CsMPs have been the subject of many studies in recent years. However, it recently became apparent that larger (>300 micrometers) Cs-containing particles, with much higher levels of activity (~10^5 Bq), were also released from reactor unit 1, which suffered a hydrogen explosion. These particles were deposited within a narrow zone that stretches ~8 km north-northwest of the reactor site. To date, little is known about the composition of these larger particles and their potential environmental and human health impacts. Newly published work describes particles found during a survey of surface soils 3.9 km north-northwest of reactor unit 1. From 31 Cs-particles collected during the sampling campaign, two have given the highest ever particle-associated 134+137Cs activities for materials emitted from the FDNPP (specifically 6.1 × 10^5 and 2.5 × 10^6 Bq, after decay-correction to the date of the FDNPP accident). The study involved scientists from Japan, Finland, France, the UK, and USA, and was led by Dr. Satoshi Utsunomiya and graduate student Kazuya Morooka (Department of Chemistry, Kyushu University). The team used a combination of advanced analytical techniques (synchrotron-based nano-focus X-ray analysis, secondary ion mass spectrometry, and high-resolution transmission electron microscopy) to fully characterize the particles. The particle with a 134+137Cs activity of 6.1 × 10^5 Bq was found to be an aggregate of smaller, flaky silicate nanoparticles, which had a glass-like structure. This particle likely came from reactor building materials, which were damaged during the Unit 1 hydrogen explosion; then, as the particle formed, it likely adsorbed Cs that had been volatilized from the reactor fuel. The 134+137Cs activity of the other particle exceeded 10^6 Bq. This particle had a glassy carbon core and a surface that was embedded with other micro-particles, which included a Pb-Sn alloy, fibrous Al-silicate, Ca-carbonate / hydroxide, and quartz. The composition of the surface-embedded micro-particles likely reflects the composition of airborne particles within the reactor building at the moment of the hydrogen explosion, thus providing a forensic window into the events of March 11th, 2011. Utsunomiya added, "The new particles from regions close to the damaged reactor provide valuable forensic clues. They give snap-shots of the atmospheric conditions in the reactor building at the time of the hydrogen explosion, and of the physico-chemical phenomena that occurred during reactor meltdown." He continued, "Whilst nearly ten years have passed since the accident, the importance of scientific insights has never been more critical. Clean-up and repatriation of residents continues and a thorough understanding of the contamination forms and their distribution is important for risk assessment and public trust." Professor Gareth Law (co-author, University of Helsinki) added, "Clean-up and decommissioning efforts at the site face difficult challenges, particularly the removal and safe management of accident debris that has very high levels of radioactivity.
Therein, prior knowledge of debris composition can help inform safe management approaches."Given the high radioactivity associated with the new particles, the project team were also interested in understanding their potential health / dose impacts.Dr Utsunomiya stated, "Owing to their large size, the health effects of the new particles are likely limited to external radiation hazards during static contact with skin. As such, despite the very high level of activity, we expect that the particles would have negligible health impacts for humans as they would not easily adhere to the skin. However, we do need to consider possible effects on the other living creatures such as filter feeders in habitats surrounding Fukushima Daiichi. Even though ten years have nearly passed, the half-life of 137Cs is ~30 years. So, the activity in the newly found highly radioactive particles has not yet decayed significantly. As such, they will remain in the environment for many decades to come, and this type of particle could occasionally still be found in radiation hot spots."Professor Rod Ewing (co-author from Stanford University) stated "this paper is part of a series of publications that provide a detailed picture of the material emitted during the Fukushima Daiichi reactor meltdowns. This is exactly the type of work required for remediation and an understanding of long-term health effects."Professor Bernd Grambow (co-author from IMT Atlantique) added "the present work, using cutting-edge analytical tools, gives only a very small insight in the very large diversity of particles released during the nuclear accident, much more work is necessary to get a realistic picture of the highly heterogeneous environmental and health impact."
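A minimal sketch of the decay correction mentioned above, using the roughly 30-year half-life of 137Cs; the published correction also has to handle the much shorter-lived 134Cs, and the measurement date here is an assumption for illustration.

```python
HALF_LIFE_CS137 = 30.1  # years

def decay_correct(measured_bq, years_since_accident):
    """Back-correct a measured 137Cs activity to the 2011 accident date."""
    return measured_bq * 2 ** (years_since_accident / HALF_LIFE_CS137)

# Hypothetical particle measured at 2.0e6 Bq about 9 years after March 2011:
print(f"{decay_correct(2.0e6, 9):.2e} Bq at the time of the accident")
# ~2.5e6 Bq; equivalently, ~81% of the 2011 activity of 137Cs still remains
```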
|
Environment
| 2,021 |
February 17, 2021
|
https://www.sciencedaily.com/releases/2021/02/210217132320.htm
|
Fishes contribute roughly 1.65 billion tons of carbon in feces and other matter annually
|
Scientists have little understanding of the role fishes play in the global carbon cycle linked to climate change, but a Rutgers-led study found that carbon in feces, respiration and other excretions from fishes -- roughly 1.65 billion tons annually -- make up about 16 percent of the total carbon that sinks below the ocean's upper layers.
|
Better data on this key part of the Earth's biological pump will help scientists understand the impact of climate change and seafood harvesting on the role of fishes in carbon flux, according to the study, which its authors describe as the first of its kind. "Our study is the first to review the impact that fishes have on carbon flux," said lead author Grace K. Saba, an assistant professor in the Center for Ocean Observing Leadership in the Department of Marine and Coastal Sciences in the School of Environmental and Biological Sciences at Rutgers University-New Brunswick. "Our estimate of the contribution by fish -- about 16 percent -- includes a large uncertainty, and scientists can improve it with future research. Forms of carbon from fish in ocean waters where sunlight penetrates -- up to about 650 feet deep -- include sinking fecal pellets, inorganic carbon particles (calcium carbonate minerals), dissolved organic carbon and respired carbon dioxide." The ocean plays a vital role in the Earth's carbon cycle by exchanging carbon dioxide, a key greenhouse gas linked to global warming and climate change, with the atmosphere. Carbon dioxide absorbed by the ocean is taken up by phytoplankton (algae), small single-celled plants at the ocean's surface. Through an important process called the biological pump, this organic carbon can go from the surface to ocean depths when algal material or fecal pellets from fishes and other organisms sink. The daily migration of fishes to and from the depths also contributes organic carbon particles, along with excreted and respired material. Another factor is mixing of ocean waters. "Carbon that makes its way below the sunlit layer becomes sequestered, or stored, in the ocean for hundreds of years or more, depending on the depth and location where organic carbon is exported," Saba said. "This natural process results in a sink that acts to balance the sources of carbon dioxide."
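The two headline numbers imply a rough figure for the total export flux, which one line of arithmetic makes explicit (an inference from the quoted values, not a separate estimate from the study):

```python
fish_flux = 1.65   # billion metric tons of carbon per year from fishes
fish_share = 0.16  # fishes' ~16% share of carbon sinking below the sunlit layer

print(f"Implied total export flux: ~{fish_flux / fish_share:.1f} billion tons C/yr")
# ~10.3 billion tons of carbon per year sinking out of the upper ocean
```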
|
Environment
| 2,021 |
February 17, 2021
|
https://www.sciencedaily.com/releases/2021/02/210217115404.htm
|
Climate change and fire suppression
|
The unprecedented and deadly blazes that engulfed the American West in 2020 attest to the increasing number, size and severity of wildfires in the region. And while scientists predict the climate crisis will exacerbate this situation, there's still much discussion around its contributing factors.
|
With this in mind, scientists at five western universities, including UC Santa Barbara, investigated the effects of human-driven climate change and more than a century of fire suppression, which has produced dense forests primed to burn. Their research, published in the journal "We wanted to know how climate change and fire suppression, each in different ways, can influence fire regimes," said coauthor Naomi Tague, a professor of ecohydrology and ecoinformatics at UCSB's Bren School of Environmental Science & Management.The scientists, led by Assistant Professor Erin Hanan at the University of Nevada, integrated three research methods to tackle these questions. They employed remote sensing data to characterize past fires. They harnessed climate models to determine the role climate change has played in local meteorological patterns, including temperature, rainfall and humidity. And they used an earth-system model to simulate how climate, water, vegetation and wildfire interact over space and time.The scientists drew on climate records developed through a National Science Foundation-funded initiative called FireEarth and a watershed model called RHESSys-Fire that originated in the Tague Team Lab at UC Santa Barbara. Funding from another NSF initiative had enabled Tague's lab to incorporate advances to this model that represent the climate impacts on fire, as well as hydrology and vegetation growth. The authors applied these techniques to data gathered across complex terrain in two mixed-conifer watersheds in the Idaho Batholith and the Central Rocky Mountains.The results were clear, but far from straightforward. "For some locations, we found that climate change increased fire activity," said Tague, who led the SERI-Fire initiative, "but surprisingly, in other locations, climate change actually decreased fire activity."The team found that climate change increased burn probability and led to larger, more frequent fires in wetter areas while doing the opposite in more arid locations. In areas of intermediate soil moisture, the effects of climate change and fire suppression varied in response to local trade-offs between flammability and fuel loading.The scientists were surprised that climate change could decrease the severity of fires under certain conditions, but Tague offers an explanation. "Climate change can reduce the growth and development of fuels," she said, "particularly in more arid sites."These are crucial insights in our efforts to understand and manage wildfires. "This paper presents one of the first wildfire attribution studies at the scale of actionable management," said lead author Erin Hanan, "and shows that local responses to climate change and fire suppression can be highly variable even within individual watersheds.""This study is really the first to directly compare the independent effects of climate change versus fire suppression, which you can only do using dynamic models," added UC Merced Assistant Professor Crystal Kolden, who led the FireEarth initiative. "We were actually surprised that the climate change signal was so clear; that's kind of rare. And even though our study was limited to Idaho, the forest types and climate we modeled are found throughout the western U.S., so they are good analogs for many other watersheds."In addition to illuminating the roles of major wildfire factors, the research also boosts methodology. 
"This paper moves fire modeling and prediction forward by looking inside watersheds and disentangling the many factors that influence how fire regimes will evolve in the coming decades," said Tague.While climate change remains a major component -- increasing the frequency and intensity of large wildfires across the globe -- there are many regions where past suppression efforts still play an important role. Forest-density reduction is often a favored approach in regions where decades of fire suppression have significantly increased fuel loads. However, density reductions sometimes have unintended consequences, as Tague and her colleagues detailed in a paper recently published in Frontiers in Forests and Global Change. Under certain conditions, this practice can encourage vegetation growth, which can lead to greater water use by plants and potentially increasing fire risks.Because fuel management often occurs at fine scales, spatially explicit models are needed to project how different areas within watersheds will respond to fire suppression or fuel treatments under the shifting conditions brought about by climate change."Our results tell us that a one-size-fits-all approach to fuel treatment and fire management is unlikely to work," Tague said. "Debates over what causes fire activity, and what good treatment options might be, must always take where you are into account."
|
Environment
| 2,021 |
February 17, 2021
|
https://www.sciencedaily.com/releases/2021/02/210217114432.htm
|
More sustainable recycling of plastics
|
The new method works without extremely high temperatures, is therefore more energy-efficient and has a significantly higher recovery rate (approx. 96 per cent of the starting material) than established processes. These findings will be published on 17 February 2021.
|
"The direct re-utilization of plastics is often hampered by the fact that, in practice, mechanical recycling only functions to a limited degree -- because the plastics are contaminated and mixed with additives, which impairs the properties of the recycled materials," Stefan Mecking explains."Chemical recycling" is an alternative: Via a chemical process, used plastic is broken down into its molecular building blocks, which can then be converted into new plastic.Specifically in the case of polyethylene -- the most widely used plastic -- chemical recycling is difficult. On a molecular level, plastics are made up of long molecular chains. "Polymer chains of polyethylene are very stable and not easily reversed back into small molecules," Stefan Mecking explains. Temperatures exceeding 600° Celsius are required, making the procedure energy-consuming. At the same time, the recovery rate is limited (in some cases less than ten per cent of the starting material).Stefan Mecking and his team report on a method that makes a more energy-efficient chemical recycling of polyethylene-like plastics possible, coupled with a very high recovery rate of around 96 per cent of the starting materials. To do so, the chemists used "breaking-points" on a molecular level enabling a deconstruction of the chain into smaller molecular building blocks. "Key for our method are polymers with a low density of predetermined breaking-points in the polyethylene chain, so that the crystalline structure and material properties are not compromised," Stefan Mecking explains and adds: "This type of materials is also very suitable for 3D printing."Stefan Mecking´s research team demonstrated this chemical recycling on polyethylene-like plastics based on plant oil. The recycling stage requires temperatures of only about 120 degrees. Furthermore, the chemists also performed this recycling method on mixed plastics as they occur in waste streams. The properties of the recycled materials are on a par with those of the starting material. "Recyclability is an important aspect for future technologies based on plastics. Re-utilizing such valuable materials as efficiently as possible makes sense. With our research we want to contribute to making chemical recycling of plastics more sustainable and effective," Stefan Mecking resumes.
|
Environment
| 2,021 |
February 16, 2021
|
https://www.sciencedaily.com/releases/2021/02/210216185914.htm
|
Thermal energy storage with new solution meant to ease grid stress
|
Scientists from the National Renewable Energy Laboratory (NREL) have developed a simple way to better evaluate the potential of novel materials to store or release heat on demand in your home, office, or other building in a way that more efficiently manages the building's energy use.
|
Their work is described in the paper "Rate Capability and Ragone Plots for Phase Change Thermal Energy Storage," authored by NREL's Jason Woods along with co-authors Allison Mahvi, Anurag Goyal, Eric Kozubal, Wale Odukomaiya, and Roderick Jackson. The paper describes a new way of optimizing thermal storage devices that mirrors an idea used for batteries, helping to inform what new thermal storage materials are needed for buildings and how the devices should be designed with these materials. Thermal energy storage allows buildings to function like a huge battery by storing thermal energy in novel materials until it can be used later. One example is a heat pump. While electricity is needed initially to create and store the heat, the heat is used later without using additional electricity. In another example, some materials have the ability to change phases, like ice that can transition from a solid to a liquid. As the ice melts, it absorbs energy from and cools a working fluid, which can then be used to cool a building space. Because phase change occurs at a nearly constant temperature, useful energy can be provided or stored for a longer period at a steady temperature. Thermal energy storage also typically has very high "round trip" energy efficiency. The authors discovered that a Ragone plot, often used to characterize batteries, also works well to describe the potential effectiveness of various thermal storage device candidates. A Ragone plot shows the tradeoff between how much energy a device can store and its discharge power, or how quickly the device can release energy. This foundational approach makes comparisons between different thermal storage materials or device improvements easier to evaluate. It serves as a starting point for defining targets and is a useful design tool to develop new thermal storage materials and devices that can serve as novel, alternative energy storage options. "This Ragone framework ensures cost-effective design of thermal storage materials and devices depending on the power and energy requirements of a particular application," said Jason Woods, a senior research engineer at NREL and lead author of the newly published paper. Mahvi, a postdoctoral researcher at NREL, said another advantage is enabling technologies that can mitigate blackouts on the grid. "Most of peak electricity demand -- especially in the summer when you might see blackouts -- is driven by air conditioning. If you can move that demand to some other time of the day, you can help alleviate the strain on the grid, keeping the grid operational, but also keeping people indoors comfortable." "Thermal energy storage systems will need to become more flexible and adaptable with the addition of onsite power generation, electric vehicle charging, and the combination of thermal storage with batteries," Woods said. "Part of this flexibility requires higher power -- but this higher power comes at a cost of available energy, as this publication highlights." The way in which the thermal energy storage is used will impact its performance. Scientists need to consider questions about how stored energy can best be used to keep building occupants comfortable, or for different applications like maintaining electronic equipment at a safe temperature. "Which one works best for me and my application will depend on what the requirements are. How much do I need to store, and how fast do I need to discharge it?" Mahvi said.
"This framework will allow us to optimize thermal storage systems from the material to the component scale to increase the power density while still accessing as much of the available capacity as possible. This will result in more efficient devices that can be used for a wide range of applications."The researchers developed a computer model to understand the various design tradeoffs with these thermal storage devices, including ones that require high power (release the energy quickly) and low power (release the energy slowly). They also built a prototype phase change thermal storage device, illustrating this power-energy tradeoff in practice.The Building Technologies Office in the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy funded this research.
|
Environment
| 2,021 |
February 16, 2021
|
https://www.sciencedaily.com/releases/2021/02/210216114930.htm
|
Aging offshore wind turbines could stunt growth of renewable energy sector
|
The University of Kent has led a study highlighting the urgent need for the UK Government and renewable energy industries to give vital attention to decommissioning offshore wind turbines approaching the end of their life expectancy by 2025. The research reveals that the UK must decommission approximately 300 and 1,600 early-model offshore wind turbines by 2025 and 2030, respectively.
|
Urgent focus is needed now to proactively use the remaining years until turbines installed in the 1990s and early 2000s are no longer safely functional in 2025, to prevent safety lapses, potentially huge costs and the irretrievable loss of the skillset required for safe decommissioning. The research shows that these original turbines have an approximate lifetime of 20 to 25 years, but this expectation is vulnerable to factors that occur whilst in use. Within each early-model turbine, there exist thousands of components and parts that have worn down, been replaced and fixed without estimates on their installation time frame, and are nearing the end of their life expectancy. There is no existing breakdown of the potential costs of the activities that would surround decommissioning offshore wind turbines, nor is there an alternative plan to their decommissioning. As the turbines exceed their safety remit, the sector is also set to lose the unique skillset of the engineers who originally installed and maintained these early models, as they are now approaching professional retirement. To combat this loss of skills, researchers advise the urgent creation of a database of era-specific skills and operating techniques to offset such a loss. The study also finds that profitable operations can be established to counter the cost of decommissioning. Recycling of existing parts into new wind turbine operations has the potential to be hugely cost-effective for the sector, as well as ensuring that renewable means of production are at the forefront of future operations. Dr Mahmoud Shafiee, Reader in Mechanical Engineering at Kent's School of Engineering and Digital Arts said: 'Without a dedicated effort from the UK Government and renewable energy sector into planning the safe and efficient decommissioning of these offshore wind turbines, there is a risk of enormous and potentially unsalvageable cost to the renewable energy sector. The cost of maintaining outdated turbines is multiple times that of new installations, so for the benefit of our future hopes of renewable energy, we call on the Government and sector leaders to act now.'
|
Environment
| 2,021 |
February 16, 2021
|
https://www.sciencedaily.com/releases/2021/02/210216115029.htm
|
Luminescent windows generate energy from inside and out
|
Rice University engineers have suggested a colorful solution to next-generation energy collection: Luminescent solar concentrators (LSCs) in your windows.
|
Led by Rafael Verduzco and postdoctoral researcher and lead author Yilin Li of Rice's Brown School of Engineering, the team designed and built foot-square "windows" that sandwich a conjugated polymer between two clear acrylic panels.That thin middle layer is the secret sauce. It's designed to absorb light in a specific wavelength and guide it to panel edges lined with solar cells. Conjugated polymers are chemical compounds that can be tuned with specific chemical or physical properties for a variety of applications, like conductive films or sensors for biomedical devices.The Rice lab's polymer compound is called PNV (for poly[naphthalene-alt-vinylene]) and absorbs and emits red light, but adjusting the molecular ingredients should make it able to absorb light in a variety of colors. The trick is that, as a waveguide, it accepts light from any direction but restricts how it leaves, concentrating it onto the solar cells that convert it to electricity."The motivation for this research is to solve energy issues for buildings through integrated photovoltaics," said Li, who began the project as part of a "smart glass" competition. "Right now, solar rooftops are the mainstream solution, but you need to orient them toward the sun to maximize their efficiency, and their appearance isn't very pleasing."We thought, why can't we make colorful, transparent or translucent solar collectors and apply them to the outside of buildings?" he said.The study appears in the journal Admittedly, the amount of juice generated by the Rice team's test units is far less than that collected by even average commercial solar cells, which routinely convert about 20% of sunlight into electricity.But LSC windows never stop working. They happily recycle light from inside the building into electricity when the sun goes down. In fact, tests showed they were more efficient at converting ambient light from LEDs than they were from direct sunlight, even though the sunlight was 100 times stronger."Even indoors, if you hold up a panel, you can see very strong photoluminescence on the edge," Li said, demonstrating. The panels he tested showed a power conversion efficiency of up to 2.9% in direct sunlight and 3.6% under ambient LED light.Various types of luminophores have been developed over the last decade, but rarely with conjugated polymers, according to Verduzco."Part of the problem with using conjugated polymers for this application is that they can be unstable and degrade quickly," said Verduzco, a professor of chemical and biomolecular engineering and of materials science and nanoengineering. "But we've learned a lot about improving the stability of conjugated polymers in recent years, and in the future, we can engineer the polymers for both stability and desired optical properties."The lab also simulated the return of energy from panels as large as 120 inches square. They reported these panels would provide somewhat less energy, but it would still contribute to a household's needs.Li noted the polymer might also be tuned to convert energy from infrared and ultraviolet light, allowing those panels to remain transparent."The polymers can even be printed in patterns in the panels, so they can be turned into artwork," he said.
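For a sense of scale, the reported sunlight efficiency translates into a few watts per foot-square panel under full sun; this is a back-of-the-envelope estimate using the standard 1,000 W/m^2 test irradiance, not a figure quoted in the paper.

```python
irradiance = 1000.0  # W/m^2, standard full-sun test condition (assumed)
area = 0.3048 ** 2   # m^2, one foot-square panel
efficiency = 0.029   # 2.9% power conversion efficiency in direct sunlight

print(f"~{irradiance * area * efficiency:.1f} W per foot-square panel in full sun")
# roughly 2.7 W; indoor output is far lower because ambient light is ~100x weaker
```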
|
Environment
| 2,021 |
February 15, 2021
|
https://www.sciencedaily.com/releases/2021/02/210215092407.htm
|
Capuchin monkey genome reveals clues to its long life and large brain
|
An international team of scientists has sequenced the genome of a capuchin monkey for the first time, uncovering new genetic clues about the evolution of their long lifespan and large brains.
|
Published in "Capuchins have the largest relative brain size of any monkey and can live past the age of 50, despite their small size, but their genetic underpinnings had remained unexplored until now," explains Professor Joao Pedro De Magalhaes, who researches ageing at the University of Liverpool.The researchers developed and annotated a reference assembly for white-faced capuchin monkeys (Cebus imitator) to explore the evolution of these traits.Through a comparative genomics approach spanning a wide diversity of mammals, they identified genes under evolutionary selection associated with longevity and brain development."We found signatures of positive selection on genes underlying both traits, which helps us to better understand how such traits evolve. In addition, we found evidence of genetic adaptation to drought and seasonal environments by looking at populations of capuchins from a rainforest and a seasonal dry forest," said senior author and Canada Research Chair Amanda Melin who has studied capuchin monkey behaviour and genetics for almost 20 years.The researchers identified genes associated with DNA damage response, metabolism, cell cycle, and insulin signalling. Damage to the DNA is thought to be a major contributor to ageing and previous studies by Professor de Magalhaes and others have shown that genes involved in DNA damage responses exhibit longevity-specific selection patterns in mammals."Of course, because aging-related genes often play multiple roles it is impossible to be sure whether selection in these genes is related to ageing or to other life-history traits, like growth rates and developmental times, that in turn correlate with longevity," said Professor De Magalhaes."Although we should be cautious about the biological significance of our findings, it is tempting to speculate that, like in other species, changes to specific aging-related genes or pathways, could contribute to the longevity of capuchins," he added.The team's insights were made possible thanks to the development of a new technique to isolate DNA more efficiently from primate faeces.FecalFACS utilises an existing technique that has been developed to separate cells types in body fluids -- for example to separate different cell types in blood for cancer research -- and applies it to primate faecal samples."This is a major breakthrough because the typical way to extract DNA from faeces results in about 95-99% of the DNA coming from gut microbes and food items. A lot of money has been spent sequencing genomes from different organisms than the mammals we're actually trying to study. Because of this, when wildlife biologists have required entire genomes, they have had to rely on more pure sources of DNA, like blood, saliva, or tissue -- but as you can imagine, those are very hard to come by when studying endangered animals," explained the study's lead author, Dr Joseph Orkin, who completed work on this project as a postdoctoral scholar at the University of Calgary, and in his present location at Universitat Pompeu Fabra-CSIC in Barcelona."FecalFACS finally provides a way to sequence whole genomes from free-ranging mammals using readily available, non-invasive samples, which could really help future conservation efforts," he added.
|
Environment
| 2,021 |
February 12, 2021
|
https://www.sciencedaily.com/releases/2021/02/210212111921.htm
|
Flowers of St. John's Wort serve as green catalyst
|
An interdisciplinary team of scientists from the School of Science at TU Dresden has for the first time used dried flowers of St. John's Wort (genus Hypericum) as an active catalyst in various photochemical reactions. This conceptually new and sustainable process was registered as a German patent and has now been presented in a scientific journal.
|
Since ancient times, St. John's Wort has been used as a medicinal herb covering a wide range of applications such as the treatment of burns, skin injuries, neuralgia, fibrosis, sciatica and depression. Due to its medicinal potential, the plant known in technical terminology as Hypericum perforatum even became "Medicinal Plant of the Year" in 2015. Now, scientists at TU Dresden have shown that there is much more to the herb than its potential healing properties.To this end, two interdisciplinary groups from biology and inorganic chemistry have joined forces and thus achieved astonishing results.Originally, the research groups led by botanist Prof. Stefan Wanke and chemist Prof. Jan. J. Weigand wanted to synthesize graphene-like 2D structures from natural products in the joint project funded by the Sächsische Aufbaubank (SAB; HyperiPhen project 100315829 in TG70 Bioleben). For this purpose, hypericin, a compound of St. John's Wort, served as a template and starting material. In the course of the investigations, it turned out that hypericin efficiently catalyzes photochemical reactions. Prof. Weigand then came up with the idea of using the dried flowers of St. John's Wort, from which hypericin can be obtained by extraction, as a green and sustainable alternative to common catalysts."The chemistry of natural substances and especially the background of botany were completely new to us. The exciting results that came out of it are all the more gratifying. The interdisciplinary project shows how important it is in science to think outside the "box,"" says Prof. Weigand, commenting on the success of the collaboration.The team is thus following a current trend in modern synthetic chemistry to include sustainable aspects. The search for sustainable, renewable and environmentally friendly photoredox catalysts is proving to be extremely challenging. The results now obtained are all the more promising. The plant compound hypericin, a secondary metabolite from St. John's Wort, is used as the active compound in chemical reactions without the need for prior chemical processing. The Dresden scientists have successfully applied for a German patent for this newly developed method (DE 10 2019 215 871).Also Prof. Wanke is amazed at the success of the collaboration: "Although the research project started with a good idea, bringing it to life was not entirely trivial, as the two working groups first had to "get to know" each other. Our research fields and methods used were far apart. But soon the first unusually exciting results emerged. Everyone involved learned a lot. We would like to continue the research, but the funding is still missing."
|
Environment
| 2021 |
February 10, 2021
|
https://www.sciencedaily.com/releases/2021/02/210210170021.htm
|
Solar awnings over parking lots help companies and customers
|
The number of people who own electric vehicles (EVs) is increasing, but they face a conundrum: Unlike those who own gasoline-burning cars, EV owners can't just pop down to the corner gas station for a fill-up. Particularly in rural areas, charging stations can be few and far between.
|
Joshua Pearce, Richard Witte Endowed Professor of Materials Science and Engineering and professor of electrical and computer engineering at Michigan Technological University, hopes to change that.In a model outlined in a paper in the journal The study investigates the energy-related benefits of developing EV charging stations powered with solar photovoltaic (PV) canopies built into the parking infrastructure of large-scale retailers like Walmart.The case study shows such canopies could generate a potential of 3.1 megawatts per Walmart supercenter in the U.S., providing solar electricity for approximately 100 EV charging stations. Across the country, Walmart could deploy 11.1 gigawatts of solar canopies over parking lots to provide more than 346,000 EV charging stations with solar electricity for their customers. Such a fleet of solar awnings would cover the needs of 90% of the American public living within 15 miles of a Walmart -- just about as prevalent as the corner gas station.This model could be adopted by any box store because they have a competitive advantage of stand-alone EV charging stations. Solar electricity for EV charging could be made at a profit, solving community charging challenges. The results indicate store owners could increase store selection and profit by providing free PV-EV charging for their customers with four key benefits:"The electric car powerhouse Tesla is now the most valuable automobile company in the world, and the more experienced auto manufacturing giant GM has announced gas and diesel vehicles will be extinct by 2035. It is clear EV growth will continue to accelerate," Pearce said. "Retailers have an opportunity to leverage the stranded asset of their parking lots for profit from preferential store selection by the growing army of EV owners. Fast-moving retailers that make the investment in solar canopies and EV charging stations will attract early EV adopters and reap the most profit."
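The scale of these figures is easy to make concrete with a back-of-the-envelope sketch. The Python snippet below simply combines the numbers quoted above (3.1 MW of canopy and roughly 100 charging stations per supercenter, 11.1 GW chain-wide); the implied store count and per-station capacity are derived for illustration, not values reported in the study.

```python
# Back-of-the-envelope check of the solar-canopy figures quoted above.
# Inputs are the article's numbers; the store count and per-station
# capacity below are derived from those ratios, not reported values.

mw_per_supercenter = 3.1          # MW of PV canopy per store (from article)
stations_per_supercenter = 100    # EV charging stations per store (from article)
total_canopy_gw = 11.1            # chain-wide canopy potential (from article)

implied_store_count = total_canopy_gw * 1000 / mw_per_supercenter
implied_stations = implied_store_count * stations_per_supercenter
kw_per_station = mw_per_supercenter * 1000 / stations_per_supercenter

print(f"Implied number of stores: {implied_store_count:,.0f}")
print(f"Implied charging stations: {implied_stations:,.0f}")  # ~358,000, same ballpark as the ~346,000 quoted
print(f"Average PV capacity per station: {kw_per_station:.0f} kW")
```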
|
Environment
| 2021 |
February 10, 2021
|
https://www.sciencedaily.com/releases/2021/02/210210142200.htm
|
The future of solar technology: New technology makes foldable cells a practical reality
|
With the recent development of foldable mobile phone screens, research on foldable electronics has never been so intensive. One particularly useful application of the foldable technology is in solar panels.
|
Current solar cells are restricted to rigid, flat panels, which are difficult to store in large numbers and integrate into everyday appliances, including phones, windows, vehicles, or indoor devices. But, one problem prevents this formidable technology from breaking through: to be integrated into these items, solar cells need to be foldable, to bend at will repeatedly without breaking. Traditional conducting materials used in solar cells lack flexibility, creating a huge obstacle in developing fully foldable cells.A key requirement for an efficient foldable conductor is the ability to withstand the pressure of bending within a very small radius while maintaining its integrity and other desirable properties. In short, a thin, flexible, transparent, and resilient conductor material is needed. Professor Il Jeon of Pusan National University, Korea, elaborates, "Unlike merely flexible electronics, foldable devices are subject to much harsher deformations, with folding radii as small as 0.5 mm. This is not possible with conventional ultra-thin glass substrates and metal oxide transparent conductors, which can be made flexible but never fully foldable."Fortunately, an international team of researchers, including Prof. Jeon, have found a solution, in a study published in To ensure maximum performance, they also "doped" the resulting material to increase its conductivity. By introducing small impurities (in this case, withdrawn electrons to molybdenum oxide) into the SWNT-PI nanocomposite layer, the energy needed for electrons to move across the structure is much smaller, and hence more charge can be generated for a given amount of current.Their resulting prototype far exceeded the team's expectations. Only 7 micrometers thick, the composite film exhibited exceptional resistance to bending, almost 80% transparency, and a power conversion efficiency of 15.2%, the most ever achieved in solar cells using carbon nanotube conductors! In fact, as pointed out by Prof. Jeon, "The obtained results are some of the best among those reported thus far for flexible solar cells, both in terms efficiency and mechanical stability."With this novel breakthrough in solar harvesting technology, one can only imagine what next-generation solar panels will look like.
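The folding requirement Prof. Jeon describes can be put in rough mechanical terms with the standard thin-film estimate that the strain at the surface of a bent film is about t/(2r) for thickness t and fold radius r, assuming the neutral plane sits at mid-thickness. This is a generic back-of-the-envelope illustration, not a calculation from the paper; the 100-micrometre comparison substrate is an assumed value.

```python
# Illustrative estimate of outer-surface bending strain for a thin film,
# using the standard approximation strain = t / (2 * r) with the neutral
# plane at mid-thickness. The 7 um thickness and 0.5 mm radius are the
# values quoted in the article; the comparison substrate is an assumption.

def bending_strain(thickness_m: float, radius_m: float) -> float:
    """Approximate tensile strain at the outer surface of a folded film."""
    return thickness_m / (2.0 * radius_m)

film_thickness = 7e-6   # 7 micrometres (composite film in the article)
fold_radius = 0.5e-3    # 0.5 mm folding radius (the harsh case quoted)

print(f"Strain in 7 um film at 0.5 mm radius: {bending_strain(film_thickness, fold_radius):.2%}")

# For comparison, an (assumed) 100-um-thick conventional substrate:
print(f"Strain in 100 um substrate at same radius: {bending_strain(100e-6, fold_radius):.2%}")
```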
|
Environment
| 2021 |
February 10, 2021
|
https://www.sciencedaily.com/releases/2021/02/210210091131.htm
|
Why plant diversity is so important for bee diversity
|
Bumble bees and honey bees are abundant and widespread, and it is common to see both foraging on the same flower species during the summer, whether in Britain or in many other countries.
|
Yet researchers at the Laboratory of Apiculture and Social Insects (LASI) at the University of Sussex, have found that different bees dominate particular flower species and revealed why.By studying 22 flower species in southern England and analysing the behaviour of more than 1000 bees, they found that 'energy efficiency' is a key factor when it comes to mediating competition.Bee bodyweight and the rate at which a bee visits flowers determine how energy efficient they are. Bodyweight determines the energy used while flying and walking between flowers, with a bee that is twice as heavy using twice as much energy. The rate at which a bee visits flowers, the number of flowers per minute, determines how much nectar, and therefore energy, it collects. Together, the ratio of these factors determines bee foraging energy efficiency.Professor of Apiculture, Francis Ratnieks, said: "While they forage on the same flowers, frequently we find that bumble bees will outnumber honey bees on a particular flower species, while the reverse will be true on a different species growing nearby."What was remarkable was that differences in foraging energy efficiency explained almost fully why bumble bees predominated on some flower species and honey bees on others."In essence, bumble bees have an advantage over honey bees in being faster at visiting flowers, so can gather more nectar (energy), but a disadvantage in being larger, and so using more of the nectar energy to power their foraging. On some flower species this gave an overall advantage to bumble bees, but on others to honey bees."In the study, published in the journal On some flower species such as lavender, where bumble bees dominated, visiting flowers at almost three times the rate of honeybees.The differences in the morphology of flowers impacted greatly on how energy efficient the two bee types were. Ling heather, with its mass of small flowers was better suited to the nimbler honey bee. By contrast, Erica heather, which researchers found growing beside the ling heather in the same nature reserve, has large bell shaped flowers and was better suited to bumble bees.Author Dr Nick Balfour said: "The energy efficiency of foraging is particularly important to bees. The research showed that the bees were walking (and flying) a challenging energy tightrope; half the energy they obtained from the nectar was expended in its collection."Energy (provided by nectar for bees) is a fundamental need, but the fact that honey bees and bumble bees do not compete head on for nectar is reassuring in terms of conservation and co-existence.Prof Ratnieks explained: "Bumble bees have a foraging advantage on some plants, and predominate on them, while honey bees have an advantage on others and predominate on these."Bee conservation therefore benefits from flower diversity, so that should certainly be a focus on bee conservation efforts. But fortunately, flowering plants are diverse."The research team, which included Sussex PhD student Kyle Shackleton, Life Sciences undergraduates Natalie A. Arscott, Kimberley Roll-Baldwin and Anthony Bracuti, and Italian volunteer, Gioelle Toselli, studied flower species in a variety of local locations. 
This included a nature reserve, the wider countryside, Brighton parks, Prof Ratnieks's own garden and a flower bed outside Sussex House on the University campus. Dr Balfour said: "Whether you have a window box, allotment or a garden, planting a variety of summer-blooming flowers or cutting your grass less often can really help pollinators during late summer."
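The energy-efficiency ratio described above, nectar energy collected per unit of energy spent, with collection scaling with flower visit rate and spending scaling with bodyweight, can be sketched as a simple calculation. Every number below is a hypothetical placeholder chosen only to show how the ratio can flip between the two bee types on different flowers; none is a measurement from the study.

```python
# Illustrative sketch of the foraging energy-efficiency ratio described above:
# efficiency ~ (visit rate * energy per flower) / (bodyweight * metabolic cost).
# All numbers are hypothetical placeholders, not values from the study.

def foraging_efficiency(visits_per_min, joules_per_flower, bodyweight_mg, cost_per_mg_min):
    energy_collected = visits_per_min * joules_per_flower   # nectar energy gained
    energy_spent = bodyweight_mg * cost_per_mg_min          # cost of flying/walking between flowers
    return energy_collected / energy_spent

bees = {"honey bee": 90, "bumble bee": 200}   # hypothetical bodyweights in mg
flowers = {
    # flower type: (honey bee visits/min, bumble bee visits/min), hypothetical
    "many small flowers (ling-heather-like)": (20, 22),
    "large bell-shaped flowers (Erica-like)": (8, 24),
}

for flower, (v_honey, v_bumble) in flowers.items():
    e_honey = foraging_efficiency(v_honey, 0.5, bees["honey bee"], 0.03)
    e_bumble = foraging_efficiency(v_bumble, 0.5, bees["bumble bee"], 0.03)
    winner = "honey bee" if e_honey > e_bumble else "bumble bee"
    print(f"{flower}: honey {e_honey:.2f}, bumble {e_bumble:.2f} -> {winner} predominates")
```

With these placeholders the lighter, nimbler honey bee comes out ahead on the small-flowered plant and the faster-visiting bumble bee on the bell-shaped one, mirroring the pattern the researchers report.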
|
Environment
| 2021 |
February 23, 2021
|
https://www.sciencedaily.com/releases/2021/02/210223121646.htm
|
Study sheds light on unique social character of forest elephants
|
A new study led by a San Diego Zoo Global scientist offers rare insights into the unique social character of forest elephants, the least understood of the world's three currently existing elephant species.
|
Limited access to food in the central African forest probably affects why females of this species form smaller family units than other elephants, according to the study, published in the journal Forest elephants inhabit central Africa, splitting their time between dense forest and clearings called bais. In the forest, they live in small family groups -- often just a mother and her dependent offspring -- with older elephants apparently not acting as social hubs between families as is the case in savanna elephants. This may be because in dense forests, they subsist on limited nutrients provided by trees that produce fruit for a short period of time. But forest elephants congregate in larger groups when they visit bais where resources are abundant. Bais allow elephants to interact with a much wider network of elephants than they are found traveling with in the forest."This species is under considerable threat, but we know comparatively little about them," said Shifra Goldenberg, Ph.D., a research fellow in Population Sustainability at San Diego Zoo Global. "Baseline information on their social behavior can help us understand how threats like ivory poaching may impact them."The researchers examined a data set of elephant sightings in the Central African Republic from 1995 to 2010. The sightings were made from an observation platform as the elephants entered and exited a forest clearing called Dzanga Bai. Forest elephants are relatively difficult to study because of the dense growth in the forests where they spend most of their time."Forest elephant populations have declined severely," Turkalo said. "Years of observation in a unique population offer us the first in-depth look into the social lives of these reclusive animals, highlighting the close bonds between mothers and their offspring that can be maintained for decades."
|
Environment
| 2021 |
February 9, 2021
|
https://www.sciencedaily.com/releases/2021/02/210209204139.htm
|
Dragonflies perform upside down backflips to right themselves
|
The findings add to current knowledge of how insects fly and keep stable in the air. They could also help to inspire new designs in small aerial vehicles like drones, which can be useful for search-and-rescue attempts and building inspection.
|
Our colourful sunny-day companions can glide, fly backwards, and travel up to 54 km/h when hunting prey or escaping predators -- but like any flying creature, they can be thrown off balance and even find themselves upside down.Many land-based animals like cats, and aerial animals like hoverflies, rotate themselves around a head-to-tail axis when falling, known as 'rolling', but not much is known about how most insects right themselves from extreme orientations.In a new study published today in They also found that dragonflies perform the same righting maneuver whilst unconscious, suggesting the response has a large component of passive stability -- a flight mechanism like that which lets planes glide when their engines are switched off. The research reveals how the shape and joint stiffness of the dragonflies' wings provide passive stability and could inform designs for small drones.Senior author Dr Huai-Ti Lin, of Imperial's Department of Bioengineering, said: "Engineers could take inspiration from flying animals to improve aerial systems. Drones tend to rely heavily on fast feedback to keep them upright and on course, but our findings could help engineers incorporate passive stability mechanisms into their wing structure."To conduct the study, the researchers dressed 20 common darter dragonflies with tiny magnets and motion tracking dots like those used to create CGI imagery.They then magnetically attached each dragonfly to a magnetic platform either rightside-up or upside-down with some variations in tilt, before releasing the insects into a freefall. The motion tracking dots provided moving 3D models of the dragonfly movements, which were captured by high-speed cameras for 3D reconstruction.They found that conscious dragonflies, when dropped from the upside-down position, somersaulted backwards to regain the rightside-up position. Dragonflies that were unconscious also completed the somersault, but more slowly.Dead dragonflies did not perform the maneuver at all, but when their wings were posed into specific live or unconscious positions by researchers, they were able to complete the righting maneuver -- albeit with a little more spin around the vertical axis than in live dragonflies. The researchers say this suggests that the maneuver relies on both muscle tone and wing posture, which is inbuilt in the dragonfly as a passive response rather than an active control.Lead author Dr Sam Fabian, also of the Department of Bioengineering, said: "Planes are often designed so that if their engines fail, they will glide along stably rather than drop out of the sky. We saw a similar response in dragonflies, despite the lack of active flapping, meaning that some insects, despite their small size, can leverage passive stability without active control."Passive stability lowers the effort requirements of flight, and this trait likely influenced how dragonfly shapes evolved. Dragonflies that use passive stability in flight are likely to have an advantage, as they use less energy and are better able to recover from inconvenient events."The researchers continue to research dragonfly flight biomechanics and will next investigate how these passive effects impact a dragonfly's active vision and guidance strategies in prey interception and obstacle avoidance.Video:
|
Environment
| 2021 |
February 9, 2021
|
https://www.sciencedaily.com/releases/2021/02/210209151907.htm
|
'Defective' carbon simplifies hydrogen peroxide production
|
Rice University researchers have created a "defective" catalyst that simplifies the generation of hydrogen peroxide from oxygen.
|
Rice scientists treated metal-free carbon black, the inexpensive, powdered product of petroleum production, with oxygen plasma. The process introduces defects and oxygen-containing groups into the structure of the carbon particles, exposing more surface area for interactions.When used as a catalyst, the defective particles known as CB-Plasma reduce oxygen to hydrogen peroxide with 100% Faradaic efficiency, a measure of charge transfer in electrochemical reactions. The process shows promise to replace the complex anthraquinone-based production method that requires expensive catalysts and generates toxic organic byproducts and large amounts of wastewater, according to the researchers.The research by Rice chemist James Tour and materials theorist Boris Yakobson appears in the American Chemical Society journal Hydrogen peroxide is widely used as a disinfectant, as well as in wastewater treatment, in the paper and pulp industries and for chemical oxidation. Tour expects the new process will influence the design of hydrogen peroxide catalysts going forward."The electrochemical process outlined here needs no metal catalysts, and this will lower the cost and make the entire process far simpler," Tour said. "Proper engineering of carbon structure could provide suitable active sites that reduce oxygen molecules while maintaining the O-O bond, so that hydrogen peroxide is the only product. Besides that, the metal-free design helps prevent the decomposition of hydrogen peroxide."Plasma processing creates defects in carbon black particles that appear as five- or seven-member rings in the material's atomic lattice. The process sometimes removes enough atoms to create vacancies in the lattice.The catalyst works by pulling two electrons from oxygen, allowing it to combine with two hydrogen electrons to create hydrogen peroxide. (Reducing oxygen by four electrons, a process used in fuel cells, produces water as a byproduct.)"The selectivity towards peroxide rather than water originates not from carbon black per se but, as (co-lead author and Rice graduate student) Qin-Kun Li's calculations show, from the specific defects created by plasma processing," Yakobson said. "These catalytic defect sites favor the bonding of key intermediates for peroxide formation, lowering the reaction barrier and accelerating the desirable outcome."Tour's lab also treated carbon black with ultraviolet-ozone and treated CB-Plasma after oxygen reduction with argon to remove most of the oxygen-containing groups. CB-UV was no better at catalysis than plain carbon black, but CB-Argon performed just as well as CB-Plasma with an even wider range of electrochemical potential, the lab reported.Because the exposure of CB-Plasma to argon under high temperature removed most of the oxygen groups, the lab inferred the carbon defects themselves were responsible for the catalytic reduction to hydrogen peroxide.The simplicity of the process could allow more local generation of the valuable chemical, reducing the need to transport it from centralized plants. Tour noted CB-Plasma matches the efficiency of state-of-the-art materials now used to generate hydrogen peroxide."Scaling this process is much easier than present methods, and it is so simple that even small units could be used to generate hydrogen peroxide at the sites of need," Tour said.The process is the second introduced by Rice in recent months to make the manufacture of hydrogen peroxide more efficient. 
Rice chemical and biomolecular engineer Haotian Wang and his lab developed an oxidized carbon nanoparticle-based catalyst that produces the chemical from sunlight, air and water.
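Faradaic efficiency, the selectivity figure quoted above, is a charge balance: the charge accounted for by the desired product divided by the total charge passed. A minimal sketch of that bookkeeping for the two-electron reduction of oxygen to hydrogen peroxide follows; the charge and product amounts are hypothetical example values, not data from the Rice experiments.

```python
# Minimal sketch of a Faradaic-efficiency calculation for the two-electron
# oxygen reduction (O2 + 2H+ + 2e- -> H2O2) described above.
# The charge passed and the product amount are hypothetical example numbers.

FARADAY = 96485.0  # coulombs per mole of electrons

def faradaic_efficiency(moles_product: float, electrons_per_molecule: int,
                        total_charge_coulombs: float) -> float:
    charge_to_product = moles_product * electrons_per_molecule * FARADAY
    return charge_to_product / total_charge_coulombs

# Example: 50 C passed; suppose 2.59e-4 mol of H2O2 is detected afterwards.
eff = faradaic_efficiency(moles_product=2.59e-4, electrons_per_molecule=2,
                          total_charge_coulombs=50.0)
print(f"Faradaic efficiency toward H2O2: {eff:.1%}")
# A four-electron pathway to water would consume charge while yielding no
# peroxide, which is what drags this number below 100% for other catalysts.
```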
|
Environment
| 2021 |
February 9, 2021
|
https://www.sciencedaily.com/releases/2021/02/210209151802.htm
|
Ancient Amazonian farmers fortified valuable land they had spent years making fertile to protect it
|
Ancient Amazonian communities fortified valuable land they had spent years making fertile to protect it from conflict, excavations show.
|
Farmers in Bolivia constructed wooden defences around previously nutrient-poor tropical soils they had enriched over generations to keep them safe during times of social unrest.These long-term soil management strategies allowed Amazonians to grow nutrient demanding crops, such as maize and manioc and fruiting trees, and this was key to community subsistence. These Amazonian Dark Earths, or It was known that some communities built ditches and embankments, known locally as a Excavations, at the Versalles archaeological site along the Iténez River in the Bolivian Amazon, provide the first archaeological evidence that communities in the region built wooden palisades along with earthworks. The construction circles the outer perimeter of the village, enclosing and protecting homes and the enriched soil and forest.Researchers had long speculated on the function of the Archaeological analysis show that those living in Versalles began enriching soils around 500 BC. After almost two millennia, the The research, published in the journal Dr Robinson said: "This is further evidence the Amazon is not a pristine place, untouched by human hands. People have had a great impact on the ecology of the rainforest. Communities invested heavily, generation after generation, to enrich the natural resources around them. As broad Amazon-wide social-unrest spread, the community felt the need to protect the resources into which they and their ancestors had invested so much."
|
Environment
| 2021 |
February 8, 2021
|
https://www.sciencedaily.com/releases/2021/02/210208173051.htm
|
Variable weather makes weeds harder to whack
|
From flooded spring fields to summer hailstorms and drought, farmers are well aware the weather is changing. It often means spring planting can't happen on time or has to happen twice to make up for catastrophic losses of young seedlings.
|
According to a joint study between University of Illinois and USDA-ARS, it also means common pre-emergence herbicides are less effective. With less weed control at the beginning of the season, farmers are forced to rely more heavily on post-emergence herbicides or risk yield loss."We're having more variable precipitation, including conditions where folks aren't able to plant because fields are too wet. In those cases, pre-emergence herbicide applications are getting pushed back into a period that is consistently drier," says Marty Williams, USDA-ARS ecologist, affiliate professor in the Department of Crop Sciences at Illinois, and corresponding author on the study.Drier weather may be better for getting equipment onto the field for planting, but it's a problem for pre-emergence herbicides. Using data spanning 25 years and 252 unique weather environments, Williams and his team found most pre-emergence herbicides needed 5 to 15 centimeters of rain within 15 days of application. If that didn't happen, weed control rates plummeted."We already knew some rain after application was critical for the herbicide to move into the soil, but we didn't know how much or when," Williams says. "As we look to the future, having more variable rainfall and potentially increasing the frequency of falling below a critical rainfall threshold is problematic."Christopher Landau, a doctoral candidate on the project, leveraged the university's long-running herbicide evaluation program, for which digital data are available across multiple Illinois locations from 1992 forward. He evaluated the effects of four common pre-emergence herbicides (atrazine, acetochlor, S-metolachlor, and mesotrione) alone and in combination, on three economically important weed species: common lambsquarters, giant foxtail, and waterhemp. He also extracted rainfall and soil temperature data.The analyses clearly showed the overall need for rain after application, but the pre-emergence herbicides varied in their requirements for rainfall within that 15-day post-application window. For example, S-metolachlor required 10 to 15 centimeters of rainfall to maximize waterhemp control, whereas acetochlor only needed 5 centimeters to control the same weed.Results also indicated herbicide combinations helped to minimize the amount of rainfall required for successful control. Continuing the example, when atrazine was added to S-metolachlor, the combination needed only 5 centimeters to achieve the same level of control."Herbicide combinations often provide additional benefits to weed control programs, including more consistent weed control. The continual evolution of herbicide resistance in species such as waterhemp requires more integrated control measures, and herbicide combinations can be one component of integrated systems designed to minimize weed seed production," says Aaron Hager, study co-author and associate professor and faculty Extension specialist in crop sciences at Illinois.When the researchers considered the effect of soil temperature alone on herbicide efficacy, they didn't find a consistent pattern. But temperature was clearly important in low-rainfall scenarios."When rainfall was 10 centimeters or more within 15 days, the probability of successful weed control with most treatments was maximized under all soil temperatures. However, when rainfall was below 10 centimeters, higher soil temperatures either increased or decreased the probability of successful weed control, depending on the herbicide or herbicide combination. 
Ultimately, future temperatures in rainfall-limited conditions are likely to exacerbate variability in herbicide efficacy," Williams says. The researchers note their findings may be especially important in the western Corn Belt, where erratic weather and low rainfall probabilities are even more common than in Illinois. But they still recommend the use of pre-emergence herbicides in combination, and urge farmers to be strategic in timing their application when rain is in the forecast. "The development and adoption of more integrated weed management strategies that utilize pre-emergence herbicides, in combination with additional cultural, mechanical, biological, and postemergence chemical control options, are needed as U.S. corn production prepares to adapt to a changing climate," Landau says. The Department of Crop Sciences is in the College of Agricultural, Consumer and Environmental Sciences at the University of Illinois at Urbana-Champaign.
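The rainfall thresholds reported above translate naturally into a simple post-application check: did the rainfall total in the 15 days after spraying reach the amount a given pre-emergence product, or combination, appears to need? The sketch below encodes the example thresholds for waterhemp control mentioned in the text; it is an illustration of the idea, not the study's fitted statistical model.

```python
# Simple check of whether post-application rainfall met the activation
# thresholds discussed above. Thresholds are the example values for
# waterhemp control mentioned in the text; this is an illustration,
# not the study's fitted model.

RAINFALL_THRESHOLD_CM = {
    "S-metolachlor": 10.0,            # needs roughly 10-15 cm within 15 days
    "acetochlor": 5.0,                # needs roughly 5 cm within 15 days
    "atrazine + S-metolachlor": 5.0,  # the combination lowers the requirement
}

def likely_activated(herbicide: str, rain_cm_within_15_days: float) -> bool:
    return rain_cm_within_15_days >= RAINFALL_THRESHOLD_CM[herbicide]

observed_rain_cm = 6.0  # hypothetical rainfall total for the 15-day window
for product, threshold in RAINFALL_THRESHOLD_CM.items():
    ok = likely_activated(product, observed_rain_cm)
    print(f"{product:26s} threshold {threshold:4.1f} cm -> likely activated: {ok}")
```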
|
Environment
| 2021 |
February 8, 2021
|
https://www.sciencedaily.com/releases/2021/02/210208085457.htm
|
Half of global wastewater treated, rates in developing countries still lagging
|
A new study by scientists at Utrecht University and the United Nations University concludes that about half of global wastewater is treated, rather than the previous estimate of 20%. Despite this promising finding, the authors warn that treatment rates in developing countries are still very low. The study and its dataset were published Open Access in the journal
|
Humans and factories produce vast quantities of wastewater per day. If not properly collected and treated, wastewater may severely threaten human health and pollute the environment.The authors use national statistics to estimate volumes of wastewater production, collection, treatment and reuse. "Globally, about 359 billion cubic metres of wastewater is produced each year, equivalent to 144 million Olympic-sized swimming pools," says Edward Jones, PhD researcher at Utrecht University and lead author of the study. "About 48 percent of that water is currently released untreated. This is much lower than the frequently cited figure of 80 percent."While the results show a more optimistic outlook compared to previous work, the authors stress that many challenges still exist. "We see that particularly in the developing world, where most of the future population growth will likely occur, treatment rates are lagging behind," Jones explains. "In these countries in particular, wastewater production is likely to rise at a faster pace than the current development of collection infrastructure and treatment facilities. This poses serious threats to both human health and the environment. There is still a long way to go!"The main problem, especially in the developing world, is the lack of financial resources to build infrastructure to collect and treat wastewater. This is particularly the case for advanced treatment technologies, which can be prohibitively expensive. However, the authors highlight potential opportunities for creative reuse of wastewater streams that could help to finance improved wastewater treatment practices."The most obvious reuse of treated wastewater is to augment freshwater water supplies," Jones states. Treated wastewater reuse is already an important source of irrigation water in many dry countries, particularly in the Middle East and North Africa. However, only 11% of the wastewater produced globally is currently being reused, which shows large opportunities for expansion."But freshwater augmentation is not the only opportunity," says Jones. "Wastewater also has large potential as a source of nutrients and energy. Recognition of wastewater as a resource, opposed to as 'waste', will be key to driving improved treatment going forward."However, the authors stress the importance of proper monitoring of wastewater treatment plants, accompanied by strong legislation and regulations, to ensure that the reuse of wastewater is safe. The authors also acknowledge public acceptance as another key barrier towards increasing wastewater reuse.
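The headline volumes are easy to sanity-check. The sketch below converts the annual production figure into Olympic-pool equivalents, assuming the conventional 2,500 cubic metre pool volume, and splits it into the untreated and reused fractions quoted above.

```python
# Sanity check of the wastewater volumes quoted above.
# Assumes the conventional 2,500 m^3 volume for an Olympic-sized pool.

annual_wastewater_m3 = 359e9   # ~359 billion cubic metres per year (article)
untreated_fraction = 0.48      # ~48% released untreated (article)
reused_fraction = 0.11         # ~11% currently reused (article)
olympic_pool_m3 = 2500.0       # assumed standard pool volume

pools = annual_wastewater_m3 / olympic_pool_m3
print(f"Olympic-pool equivalents per year: {pools / 1e6:.0f} million")  # ~144 million
print(f"Untreated volume: {annual_wastewater_m3 * untreated_fraction / 1e9:.0f} billion m^3/yr")
print(f"Reused volume:    {annual_wastewater_m3 * reused_fraction / 1e9:.0f} billion m^3/yr")
```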
|
Environment
| 2021 |
February 4, 2021
|
https://www.sciencedaily.com/releases/2021/02/210204192504.htm
|
Mysterious organic scum boosts chemical reaction efficiency, may reduce chemical waste
|
Chemical manufacturers frequently use toxic solvents such as alcohols and benzene to make products like pharmaceuticals and plastics. Researchers are examining a previously overlooked and misunderstood phenomenon in the chemical reactions used to make these products. This discovery brings a new fundamental understanding of catalytic chemistry and a steppingstone to practical applications that could someday make chemical manufacturing less wasteful and more environmentally sound.
|
The study led by University of Illinois Urbana-Champaign researcher David Flaherty, University of Minnesota, Twin Cities researcher Matthew Neurock and Virginia Tech researcher Ayman Karim is published in the journal Combining solvents and metal nanoparticles accelerates many chemical reactions and helps maximize yield and profit margins for the chemical industry. However, many solvents are toxic and difficult to safely dispose, the researchers said. Water works, too, but it is not nearly as efficient or reliable as organic solvents. The reason for the difference was thought to be the limited solubility of some reactants in water. However, multiple irregularities in experimental data have led the team to realize the reasons for these differences were not fully understood.To better understand the process, the team ran experiments to analyze the reduction of oxygen to hydrogen peroxide -- one set using water, another with methanol, and others with water and methanol mixtures. All experiments used palladium nanoparticles."In experiments with methanol, we observed spontaneous decomposition of the solvent that leaves an organic residue, or scum, on the surface of the nanoparticles," said Flaherty, a professor of chemical and biomolecular engineering at Illinois. "In some cases, the scumlike residue clings to the nanoparticles and increases reaction rates and the amount of hydrogen peroxide formed instead of hampering the reaction. This observation made us wonder how it could be helping."The team found that the residue, or surface redox mediator, holds oxygen-containing species, including a key component hydroxymethyl. It accumulates on the palladium nanoparticles' surface and opens new chemical reaction pathways, the study reports."Once formed, the residue becomes part of the catalytic cycle and is likely responsible for some of the different efficiencies among solvents reported over the past 40 years of work on this reaction," Flaherty said. "Our work provides strong evidence that these surface redox mediators form in alcohol solvents and that they may explain many past mysteries for this chemistry."By working with multiple types of experiments and computational simulations, the team learned that these redox mediators effectively transfer both protons and electrons to reactants, whereas reactions in pure water transfer protons easily, but not electrons. These mediators also alter the nanoparticles' surface in a way that lowers the energy barrier to be overcome for proton and electron transfer, the study reports."We show that the alcohol solvents as well as organic additives can react to form metal-bound surface mediators that act much in the same way that the enzymatic cofactors in our bodies do in catalyzing oxidation and reduction reactions," Neurock said.Additionally, this work may have implications for reducing the amounts of solvent used and waste generated in the chemical industry."Our research suggests that for some situations, chemical producers could form the surface redox mediators by adding small amounts of an additive to pure water instead of pumping thousands of gallons of organic solvents through these reactors," Flaherty said.The Energy and Biosciences Institute through the EBI-Shell program and the National Science Foundation supported this research.
|
Environment
| 2021 |
February 4, 2021
|
https://www.sciencedaily.com/releases/2021/02/210204192501.htm
|
Optical coating can simultaneously reflect and transmit the same wavelength, or color
|
For more than a century, optical coatings have been used to better reflect certain wavelengths of light from lenses and other devices or, conversely, to better transmit certain wavelengths through them. For example, the coatings on tinted eyeglasses reflect, or "block out," harmful blue light and ultraviolet rays.
|
But until now, no optical coating had ever been developed that could simultaneously reflect and transmit the same wavelength, or color.In a paper in In addition, the coating can be made to fully reflect only a very narrow wavelength range."The narrowness of the reflected light is important because we want to have a very precise control of the wavelength," says corresponding author Chunlei Guo, professor at Rochester's Institute of Optics. "Before our technology, the only coating that could do this was a multilayered dielectric mirror, that is much thicker, suffers from a strong angular dependence, and is far more expensive to make. Thus, our coating can be a low-cost and high-performance alternative."The researchers envision a few applications for the new technology. For example, they show how FROCs could be used to separate thermal and photovoltaic bands of the solar spectrum. Such capability could improve the effectiveness of devices that use hybrid thermal-electric power generation as a solar energy option. "Directing only the useful band of the solar spectrum to a photovoltaic cell prevents its overheating," says Guo.The technology could also lead to a six-fold increase in the life of a photovoltaic cell. And the rest of the spectrum "is absorbed as thermal energy, which could be used in other ways, including energy storage for night-time, electricity generation, solar-driven water sanitation, or heating up a supply of water," Guo says."These optical coatings can clearly do a lot of things that other coatings cannot do," Guo adds. But as with other new discoveries, "it will take a little bit of time for us or other labs to further study this and come up with more applications."Even when the laser was invented, people were initially confused about what to do with it. It was a novelty looking for an application."Guo's lab, the High-Intensity Femtosecond Laser Laboratory, is noted for its pioneering work in using femtosecond lasers to etch unique properties into metal surfaces.The FROC project resulted from a desire to explore "parallel" ways to create unique surfaces that do not involve laser etching. "Some applications are easier with laser, but others are easier without them," Guo says.Fano resonance, named after the physicist Ugo Fano, is a widespread wave scattering phenomenon first observed as a fundamental principle of atomic physics involving electrons. Later, researchers discovered that the same phenomenon can also be observed in optical systems. "But this involved very complex designs," Guo says.Guo and his colleagues found a simpler way to take advantage of Fano resonance in their optical coatings.They applied a thin, 15 nanometer-thick film of germanium to a metal surface, creating a surface capable absorbing a broad band of wavelengths. They combined that with a cavity that supports a narrowband resonance. The coupled cavities exhibit Fano resonance that is capable of reflecting a very narrow band of light.
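For readers unfamiliar with Fano resonance, its characteristic asymmetric lineshape is commonly written as a response proportional to (q + ε)² / (1 + ε²), where ε = 2(E − E₀)/Γ is the detuning from the resonance energy E₀ in units of the half linewidth and q sets the asymmetry. The snippet below evaluates this textbook profile with made-up parameters; it is generic background, not the coupled-cavity model developed in the paper.

```python
# Textbook Fano lineshape: response ~ (q + eps)^2 / (1 + eps^2), where
# eps = 2*(E - E0)/Gamma is the reduced detuning and q sets the asymmetry.
# Generic illustration with made-up parameters, not the paper's model.

def fano_profile(energy, resonance_energy, linewidth, q):
    eps = 2.0 * (energy - resonance_energy) / linewidth
    return (q + eps) ** 2 / (1.0 + eps ** 2)

E0, gamma, q = 2.0, 0.05, 1.5   # hypothetical resonance at 2.0 eV, 50 meV wide
for E in (1.90, 1.97, 2.00, 2.03, 2.10):
    print(f"E = {E:.2f} eV -> relative response {fano_profile(E, E0, gamma, q):6.2f}")
# The swing from near-zero to a strong peak over a narrow energy range is the
# behaviour that lets a structure reflect one narrow band while a broadband
# absorber handles the rest of the spectrum.
```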
|
Environment
| 2021 |
February 4, 2021
|
https://www.sciencedaily.com/releases/2021/02/210204144101.htm
|
Molecule from nature provides fully recyclable polymers
|
Plastics are among the most successful materials of modern times. However, they also create a huge waste problem. Scientists from the University of Groningen (The Netherlands) and the East China University of Science and Technology (ECUST) in Shanghai produced different polymers from lipoic acid, a natural molecule. These polymers are easily depolymerized under mild conditions. Some 87 per cent of the monomers can be recovered in their pure form and re-used to make new polymers of virgin quality. The process is described in an article that was published in the journal
|
A problem with recycling plastics is that it usually results in a lower-quality product. The best results are obtained by chemical recycling, in which the polymers are broken down into monomers. However, this depolymerization is often very difficult to achieve. At the Feringa Nobel Prize Scientist Joint Research Center, a collaboration between the University of Groningen and ECUST, scientists developed a polymer that can be created and fully depolymerized under mild conditions.'We found a way to produce polymers from the natural molecule lipoic acid in a very controlled way,' explains Ben Feringa, Professor of Organic Chemistry at the University of Groningen. 'It is a beautiful molecule and a perfect building block that was created by nature.' The molecule has a ring structure that includes a sulphur-sulphur bond. When this bond is broken, the sulphur atoms can react with those of another monomer. 'This process was known before, but we managed to find a way to control it and to create long polymers.'The molecule also has a carboxyl group, which readily reacts with metal ions. These can crosslink the polymers, which results in an elastic material. By dissolving the molecule in water with sodium hydroxide and then evaporating the water, a firmer polymer film is produced through ionic bonds. As the polymerization is achieved through reversible bonds, the material is also self-healing, explains Feringa: 'When it is cut, you can simply press the ends together and they will reconnect in a few minutes.''Our experiments show what is possible with these monomers,' adds Feringa. 'We can even recycle the material into monomers several times, without loss of quality.' However, industrial applications of this new polymer are a long way off. Feringa: 'This is a proof of principle. We are conducting experiments now to create polymers with new functionalities and to better understand the polymerization and depolymerization processes.' Furthermore, although 87 per cent of the monomers can already be recovered, the scientists want to get as close to a hundred per cent as possible. 'Our experiments show that we can produce, in a controlled fashion, hard and soft, elastic polymers that can be fully depolymerized,' Feringa sums up. 'This molecule is really very promising.'The work that was described in the
|
Environment
| 2021 |
February 4, 2021
|
https://www.sciencedaily.com/releases/2021/02/210204101640.htm
|
Deforestation is stressing mammals out
|
Lots of us are feeling pretty anxious about the destruction of the natural world. It turns out, humans aren't the only ones stressing out -- by analyzing hormones that accumulate in fur, researchers found that rodents and marsupials living in smaller patches of South America's Atlantic Forest are under more stress than ones living in more intact forests.
|
"We suspected that organisms in deforested areas would show higher levels of stress than animals in more pristine forests, and we found evidence that that's true," says Noé de la Sancha, a research associate at the Field Museum in Chicago, Associate Professor of Biology at Chicago State University, and co-author of a new paper in "A lot of species, all over the world, but especially in the tropics, are understudied," says Sarah Boyle, an Associate Professor of Biology and Chair of the Environmental Studies and Sciences Program at Rhodes College and the study's lead author. "There is not a lot known about many of these animals in terms of even their baseline hormone levels."The Atlantic Forest is often overshadowed by its neighbor the Amazon, but it's South America's second-largest forest, extending from northeastern Brazil down south along the Brazilian coastline, into northwestern Argentina to eastern Paraguay. It once covered about 463,000 square miles, an area bigger than California, Oregon, Washington, and Nevada put together. Since the arrival of Portuguese colonists 500 years ago, parts of the forest have been destroyed to make way for farmland and urban areas; today, less than one-third of the original forest remains.The destruction of an animal's habitat can drastically change its life. There's less food and territory to go around, and the animal might find itself in more frequent contact with predators or in increased competition with other animals for resources. These circumstances can add up to long-term stress.Stress isn't a bad thing in and of itself -- in small doses, stress can be life-saving. "A stress response is normally trying to bring your body back into balance," says David Kabelik, an Associate Professor of Biology and Chair of the Neuroscience Program at Rhodes College and one of the paper's authors. "If something perturbs you and it can cause you to be injured or die, the stress response mobilizes energy to deal with that situation and bring things back into a normal state. It allows you to survive." For instance, if an animal encounters a predator, a flood of stress hormones can give them the energy they need to run away, and then those hormone levels go back down to normal. "But then these animals are placed into these small fragments of habitat where they're experiencing elevated stress over prolonged periods, and that can lead to disease and dysregulation of various physiological mechanisms in the body."For this study, the researchers focused on patches of forest in eastern Paraguay, which has been particularly hard hit in the last century as the region was clear-cut for firewood, cattle farming, and soy. To study the effects of this deforestation, the researchers trapped 106 mammals from areas ranging from 2 to 1,200 hectares -- the size of a city block to 4.63 square miles. The critters they analyzed included five species of rodents and two species of marsupials.The researchers took samples of the animals' fur, since hormones accumulate in hair over periods of many days or weeks, and could present a clearer picture of the animals' typical stress levels than the hormones present in a blood sample. "Hormones change in the blood minute by minute, so that's not really an accurate reflection of whether these animals are under long-term stress or whether they just happened to run away from a predator a minute ago," says Kabelik, "and we were trying to get at something that's more of an indicator of longer term stress. 
Since glucocorticoid stress hormones get deposited into the fur over time, if you analyze these samples you can look at a longer term measure of their stress."Back in the lab, the researchers ground the fur into a fine powder and extracted the hormones. They analyzed hormone levels using enzyme immunoassay:"You use antibodies that bind these hormones to figure out how many are there," says Kabelik. "Then you divide that by the amount of fur that was in the sample, and it tells you the amount of hormones present per milligram."The team found that the animals from smaller patches of forest had higher levels of glucocorticoid stress hormones than animals from larger patches of forest. "Our findings that animals in the small forest patches had higher glucocorticoid levels was not surprising, given the extent to which some of these forested areas have been heavily impacted by forest loss and fragmentation," says Boyle."In particular, these findings are highly relevant for countries like Paraguay that currently show an accelerated rate of change in natural landscapes. In Paraguay, we are just beginning to document how the diversity of species that are being lost is distributed," says Pastor Pérez, a biologist at Universidad Nacional de Asunción and another of the paper's authors. "However, this paper shows that we also have a lot to learn about how these species interact in these environments."The scientists also found that the methods of trapping the animals contributed to the amount of stress hormones present. "It's an important consideration that people have to understand when they're doing these studies, that if they are live trapping the animals, that might be influencing the measured hormone levels," says Kabelik.The study not only sheds light on how animals respond to deforestation, but it could also lead to a better understanding of the circumstances in which animals can pass diseases to humans. "If you have lots of stressed out mammals, they can harbor viruses and other diseases, and there are more and more people living near these deforested patches that could potentially be in contact with these animals," says de la Sancha. "By destroying natural habitats, we're potentially creating hotspots for zoonotic disease outbreaks."And, the researchers say, the results of this study go far beyond South America's Atlantic Forest."Big picture, this is really important because it could be applicable to forest remnants throughout the world," says de la Sancha. "The tropics hold the highest diversity of organisms on the planet. Therefore, this has potential to impact the largest variety of living organisms on the planet, as more and more deforestation is happening. We're gonna see individuals and populations that tend to show higher levels of stress."
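The normalization Kabelik describes is simple, but writing it out keeps the units straight: the hormone detected in the extract divided by the mass of fur assayed gives a per-milligram concentration that can be compared across animals. The numbers below are hypothetical examples, not measurements from the study.

```python
# Per-milligram normalization of fur hormone measurements, as described above:
# hormone detected in the extract divided by the fur mass assayed.
# Numbers are hypothetical examples, not data from the study.

def hormone_per_mg_fur(hormone_detected_pg: float, fur_mass_mg: float) -> float:
    return hormone_detected_pg / fur_mass_mg

samples = {
    "small-fragment rodent": (312.0, 25.0),   # (pg glucocorticoid detected, mg fur assayed)
    "large-fragment rodent": (118.0, 22.0),
}
for label, (pg, mg) in samples.items():
    print(f"{label}: {hormone_per_mg_fur(pg, mg):.1f} pg per mg fur")
```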
|
Environment
| 2021 |
February 3, 2021
|
https://www.sciencedaily.com/releases/2021/02/210203090548.htm
|
Urban agriculture in Chicago does not allow consumers to rely solely on local food
|
Environmentally conscious consumers try to "buy local" when food shopping. Now, a study of food raised around Chicago has shown that buying local can't provide all necessary nutrients for area residents, though it could fulfill their needs if some nutrients were supplied as supplements. The researchers report in ACS'
|
As the U.S. population continues to flow to urban regions, consumers are moving farther from farms and croplands. This limits nutrient recycling and drives up emissions associated with transporting food. In addition, urban centers can develop "food deserts" where residents can't purchase nutritious food close to home. One potential solution is urban agriculture, which repurposes space within cities -- such as vacant lots and rooftops -- to grow crops. Christine Costello and colleagues wanted to know the impact of urban agriculture on enabling people living within a range of distances from Chicago's center to eat local food, yet meet their complete nutritional needs.The team considered 28 nutrients, the amount of available land, a variety of crops and livestock, a range of crop yields and both conventional and urban agriculture in the analysis. They drew circles on a map around Chicago with increasing radii, up to 400 miles, the maximum distance the U.S. government deems "local." Within that perimeter around Chicago, no mix of locally raised crops and livestock could satisfy all nutritional needs of the population. However, if D and B
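The feasibility question at the core of the study, whether any allocation of nearby land to crops and livestock can cover every nutrient requirement, can be framed as a small linear program. The sketch below is a toy with three made-up foods and three made-up nutrients, intended only to show the shape of such an analysis; the article does not state that the authors used this exact formulation.

```python
# Toy feasibility check in the spirit of the analysis described above:
# is there an allocation of available land to foods whose combined nutrient
# output meets the population's requirements? Coefficients are invented;
# this is an illustrative framing, not the study's actual model.
import numpy as np
from scipy.optimize import linprog

# Nutrients supplied per hectare of each food (rows: nutrients, cols: foods).
nutrient_per_ha = np.array([
    [4.0, 1.0, 0.5],   # e.g. calories (arbitrary units)
    [0.2, 3.0, 1.0],   # e.g. protein
    [0.0, 0.1, 2.5],   # e.g. a vitamin scarce in field crops
])
population_requirement = np.array([5000.0, 2500.0, 800.0])  # per year, toy units
available_land_ha = 2000.0

n_foods = nutrient_per_ha.shape[1]
res = linprog(
    c=np.ones(n_foods),                               # minimize land actually planted
    A_ub=np.vstack([-nutrient_per_ha,                 # supply >= requirement
                    np.ones((1, n_foods))]),          # total land <= available
    b_ub=np.concatenate([-population_requirement, [available_land_ha]]),
    bounds=[(0, None)] * n_foods,
    method="highs",
)
print("feasible within the radius" if res.success else "local supply cannot meet all nutrients")
```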
|
Environment
| 2021 |
February 2, 2021
|
https://www.sciencedaily.com/releases/2021/02/210202113800.htm
|
Native bees under threat from growing urbanization
|
Residential gardens are a poor substitute for native bushland and increasing urbanisation is a growing threat when it comes to bees, Curtin University research has found.
|
Published in Lead author, Forrest Foundation Scholar Miss Kit Prendergast, from Curtin's School of Molecular and Life Sciences said the findings highlight the need to prevent destruction of remaining bushland and preserve native vegetation, in order to protect sustainable bee communities and their pollination services."Our study involved spending hundreds of hours at 14 sites on the Swan Coastal Plain at Perth, Western Australia, recording which bees visited which flowers in the two types of habitats -- gardens and native bushland," Miss Prendergast said."From these bee-plant interactions I was able to map pollination networks, which could be analysed to determine how 'healthy' each habitat was for bees and the pollination services it provided, as well as how much potential competition there was between different bee groups, such as between introduced European honeybees and native bee groups."We found residential gardens were structurally different to those in bushland remnants, and the increasing loss of these native areas for residential development could disrupt important bee-plant interactions."Miss Prendergast said that while bushland remnants were more favourable environments for thriving pollination networks of bees and flowers, the chance of bee populations completely disappearing from an area was higher than in residential gardens."This suggests that, if disrupted for urban development, bee and plant populations in native bushland remnants would be even more prone to extinctions," Miss Prendergast said."The research shows the importance of bushland preservation to the survival and health of bee populations and the broader ecosystems."This has implications for the conservation of wild bee populations in this biodiversity hotspot, and suggests removal of remnant native vegetation for residential development could disrupt the balance and integrity of local ecosystems and lead to extinctions."
|
Environment
| 2021 |
February 2, 2021
|
https://www.sciencedaily.com/releases/2021/02/210202085453.htm
|
Tiny 3D structures enhance solar cell efficiency
|
A new method for constructing special solar cells could significantly increase their efficiency. Not only are the cells made up of thin layers, they also consist of specifically arranged nanoblocks. This has been shown in a new study by an international research team led by the Martin Luther University Halle-Wittenberg (MLU), which was published in the scientific journal
|
Commercially available solar cells are mostly made of silicon. "Based on the properties of silicon it's not feasible to say that their efficiency can be increased indefinitely," says Dr Akash Bhatnagar, a physicist from the Centre for Innovation Competence (ZIK) "SiLi-nano" at MLU. His research team is therefore studying the so-called anomalous photovoltaic effect which occurs in certain materials. The anomalous photovoltaic effect does not require a p-n junction which otherwise enables the flow of current in silicon solar cells. The direction of the current is determined at the atomic level by the asymmetric crystal structure of the corresponding materials. These materials are usually oxides, which have some crucial advantages: they are easier to manufacture and significantly more durable. However, they often do not absorb much sunlight and have a very high electrical resistance. "In order to utilise these materials and their effect, creative cell architectures are needed that reinforce the advantages and compensate for the disadvantages," explains Lutz Mühlenbein, lead author of the study.In their new study, the physicists introduced a novel cell architecture, a so-called nanocomposite. They were supported by teams from the Bergakademie Freiberg, the Leibniz Institute of Surface Modification in Leipzig and Banaras Hindu University in India. In their experiment, the researchers stacked single layers of a typical material only a few nanometres in thickness on top of one another and offset them with nickel oxide strips running perpendicularly. "The strips act as a fast lane for the electrons that are generated when sunlight is converted into electricity and which are meant to reach the electrode in the solar cell," Bhatnagar explains. This is precisely the transport that would otherwise be impeded by the electrons having to traverse each individual horizontal layer.The new architecture actually increased the cell's electrical output by a factor of five. Another advantage of the new method is that it is very easy to implement. "The material forms this desired structure on its own. No extreme external conditions are needed to force it into this state," says Mühlenbein. The idea, for which the researchers have now provided an initial feasibility study, could also be applied to materials other than nickel oxide. Follow-up studies now need to examine if and how such solar cells can be produced on an industrial scale.
|
Environment
| 2021 |
February 2, 2021
|
https://www.sciencedaily.com/releases/2021/02/210202101047.htm
|
Immense hydrocarbon cycle discovered in world's ocean
|
Hydrocarbons and petroleum are almost synonymous in environmental science. After all, oil reserves account for nearly all the hydrocarbons we encounter. But the few hydrocarbons that trace their origin to biological sources may play a larger ecological role than scientists originally suspected.
|
A team of researchers at UC Santa Barbara and Woods Hole Oceanographic Institution investigated this previously neglected area of oceanography for signs of an overlooked global cycle. They also tested how its existence might impact the ocean's response to oil spills."We demonstrated that there is a massive and rapid hydrocarbon cycle that occurs in the ocean, and that it is distinct from the ocean's capacity to respond to petroleum input," said Professor David Valentine(link is external), who holds the Norris Presidential Chair in the Department of Earth Science at UCSB. The research, led by his graduate students Eleanor Arrington(link is external) and Connor Love(link is external), appears in In 2015, an international team led by scientists at the University of Cambridge published a study demonstrating that the hydrocarbon pentadecane was produced by marine cyanobacteria in laboratory cultures. The researchers extrapolated that this compound might be important in the ocean. The molecule appears to relieve stress in curved membranes, so it's found in things like chloroplasts, wherein tightly packed membranes require extreme curvature, Valentine explained. Certain cyanobacteria still synthesize the compound, while other ocean microbes readily consume it for energy.Valentine authored a two-page commentary on the paper, along with Chris Reddy from Woods Hole, and decided to pursue the topic further with Arrington and Love. They visited the Gulf of Mexico in 2015, then the west Atlantic in 2017, to collect samples and run experiments.The team sampled seawater from a nutrient-poor region of the Atlantic known as the Sargasso Sea, named for the floating sargassum seaweed swept in from the Gulf of Mexico. This is beautiful, clear, blue water with Bermuda smack in the middle, Valentine said.Obtaining the samples was apparently a rather tricky endeavor. Because pentadecane is a common hydrocarbon in diesel fuel, the team had to take extra precautions to avoid contamination from the ship itself. They had the captain turn the ship into the wind so the exhaust wouldn't taint the samples and they analyzed the chemical signature of the diesel to ensure it wasn't the source of any pentadecane they found.What's more, no one could smoke, cook or paint on deck while the researchers were collecting seawater. "That was a big deal," Valentine said, "I don't know if you've ever been on a ship for an extended period of time, but you paint every day. It's like the Golden Gate Bridge: You start at one end and by the time you get to the other end it's time to start over."The precautions worked, and the team recovered pristine seawater samples. "Standing in front of the gas chromatograph in Woods Hole after the 2017 expedition, it was clear the samples were clean with no sign of diesel," said co-lead author Love. "Pentadecane was unmistakable and was already showing clear oceanographic patterns even in the first couple of samples that [we] ran."Due to their vast numbers in the world's ocean, Love continued, "just two types of marine cyanobacteria are adding up to 500 times more hydrocarbons to the ocean per year than the sum of all other types of petroleum inputs to the ocean, including natural oil seeps, oil spills, fuel dumping and run-off from land." These microbes collectively produce 300-600 million metric tons of pentadecane per year, an amount that dwarfs the 1.3 million metric tons of hydrocarbons released from all other sources.While these quantities are impressive, they're a bit misleading. 
The authors point out that the pentadecane cycle spans 40% or more of the Earth's surface, and more than one trillion quadrillion pentadecane-laden cyanobacterial cells are suspended in the sunlit region of the world's ocean. However, the life cycle of those cells is typically less than two days. As a result, the researchers estimate that the ocean contains only around 2 million metric tons of pentadecane at any given time.
It's a fast-spinning wheel, Valentine explained, so the actual amount present at any point in time is not particularly large. "Every two days you produce and consume all the pentadecane in the ocean," he said.
In the future, the researchers hope to link microbes' genomics to their physiology and ecology. The team already has genome sequences for dozens of organisms that multiplied to consume the pentadecane in their samples. "The amount of information that's there is incredible," said Valentine, "and I think reveals just how much we don't know about the ecology of a lot of hydrocarbon-consuming organisms."
Having confirmed the existence and magnitude of this biohydrocarbon cycle, the team sought to tackle the question of whether its presence might prime the ocean to break down spilled petroleum. The key question, Arrington explained, is whether these abundant pentadecane-consuming microorganisms serve as an asset during oil spill cleanups. To investigate this, they added pentane -- a petroleum hydrocarbon similar to pentadecane -- to seawater sampled at various distances from natural oil seeps in the Gulf of Mexico.
They measured the overall respiration in each sample to see how long it took pentane-eating microbes to multiply. The researchers hypothesized that, if the pentadecane cycle truly primed microbes to consume other hydrocarbons as well, then all the samples should develop blooms at similar rates.
But this was not the case. Samples from near the oil seeps quickly developed blooms. "Within about a week of adding pentane, we saw an abundant population develop," Valentine said. "And that gets slower and slower the further away you get, until, when you're out in the North Atlantic, you can wait months and never see a bloom." In fact, Arrington had to stay behind after the expedition at the facility in Woods Hole, Massachusetts, to continue the experiment on the samples from the Atlantic because those blooms took so long to appear.
Interestingly, the team also found evidence that microbes belonging to another domain of life, Archaea, may also play a role in the pentadecane cycle. "We learned that a group of mysterious, globally abundant microbes -- which have yet to be domesticated in the laboratory -- may be fueled by pentadecane in the surface ocean," said co-lead author Arrington.
The results raise the question of why the presence of an enormous pentadecane cycle appeared to have no effect on the breakdown of the petrochemical pentane. "Oil is different from pentadecane," Valentine said, "and you need to understand what the differences are, and what compounds actually make up oil, to understand how the ocean's microbes are going to respond to it." Ultimately, the genes commonly used by microbes to consume pentane are different from those used for pentadecane.
"A microbe living in the clear waters offshore Bermuda is much less likely to encounter the petrochemical pentane compared to pentadecane produced by cyanobacteria, and therefore is less likely to carry the genes for pentane consumption," said Arrington.
Loads of different microbial species can consume pentadecane, but this doesn't imply that they can also consume other hydrocarbons, Valentine continued, especially given the diversity of hydrocarbon structures that exist in petroleum. There are fewer than a dozen common hydrocarbons that marine organisms produce, including pentadecane and methane. Meanwhile, petroleum comprises tens of thousands of different hydrocarbons. What's more, we are now seeing that organisms able to break down complex petroleum products tend to live in greater abundance near natural oil seeps.
Valentine calls this phenomenon "biogeographic priming" -- when the ocean's microbial population is conditioned to a particular energy source in a specific geographic area. "And what we see with this work is a distinction between pentadecane and petroleum," he said, "that is important for understanding how different ocean regions will respond to oil spills."
Nutrient-poor gyres like the Sargasso Sea account for an impressive 40% of the Earth's surface. But, ignoring the land, that still leaves 30% of the planet to explore for other biohydrocarbon cycles. Valentine thinks the processes in regions of higher productivity will be more complex, and perhaps will provide more priming for oil consumption. He also pointed out that nature's blueprint for biological hydrocarbon production holds promise for efforts to develop the next generation of green energy.
|
Environment
| 2,021 |
February 1, 2021
|
https://www.sciencedaily.com/releases/2021/02/210201200019.htm
|
Marine organisms use previously undiscovered receptors to detect, respond to light
|
Just as plants and animals on land are keenly attuned to the hours of sunlight in the day, life in the oceans follows the rhythms of the day, the seasons and even the moon. A University of Washington study finds the biological light switches that make this possible.
|
Single-celled organisms in the open ocean use a diverse array of genetic tools to detect light, even in tiny amounts, and respond, according to a study published Feb. 1.
"If you look in the ocean environment, all these different organisms have this day-night cycle. They are very in tune with each other, even as they get moved around. How do they know when it's day? How do they know when it's night?" said lead author Sacha Coesel, a research scientist in oceanography at the UW.
Though invisible to the human eye, ocean microbes support all marine life, from sardines to whales. Knowing these communities' inner workings could reveal how they will fare under changing ocean conditions.
"Just like rainforests generate oxygen and take up carbon dioxide, ocean organisms do the same thing in the world's oceans. People probably don't realize this, but these unicellular organisms are about as important as rainforests for our planet's functioning," Coesel said.
By analyzing RNA filtered out of seawater samples collected throughout the day and night, the study identifies four main groups of photoreceptors, many of them new. This genetic activity uses light to trigger changes in the metabolism, growth, cell division, movements and death of marine organisms.
The discovery of these new genetic "light switches" could also aid the field of optogenetics, in which a cell's function can be controlled with light exposure. Today's optogenetic tools are engineered by humans, but versions from nature might be more sensitive or better at detecting light of particular wavelengths, the researchers said.
"This work dramatically expanded the number of photoreceptors -- the different kinds of those on-off switches -- that we know of," said senior author Virginia Armbrust, a UW professor of oceanography.
Not surprisingly, many of the new tools were for light in the blue range, since water filters out red wavelengths (which is why oceans appear blue). Some were also for green light, Coesel said.
The researchers collected water samples far from shore and looked at all genetic activity from protists: single-celled organisms with a nucleus. They filtered the water to select organisms measuring between 200 nanometers and one-tenth of a millimeter across. These included photosynthetic organisms, like algae, which absorb light for energy, as well as other single-celled plankton that gain energy by consuming other organisms.
The team collected samples every four hours, day and night, for four days in the North Pacific near Hawaii. Researchers used trackers to follow the currents about 50 feet (15 meters) below the surface so that the samples came from the same water mass.
The study also looked at samples from depths of 120 and 150 meters (400 and 500 feet), in the ocean's "twilight zone." Even there, the genetic activity showed that the organisms were responding to very low levels of sunlight.
While the sun is up, these organisms gain energy and grow in size, and at night, when the ultraviolet light is less damaging to their DNA, they undergo cell division.
"Daylight is important for ocean organisms, we know that, we take it for granted. But to see the rhythm of genetic activity during these four days, and the beautiful synchronicity, you realize just how powerful light is," Armbrust said.
Future work will look at places farther from the equator, where plankton communities are more subject to the changing seasons.
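The sampling design -- RNA collected every four hours over four days -- lends itself to a standard way of detecting day-night rhythms: fitting a 24-hour cosine ("cosinor" or harmonic regression) to each transcript's time series. The sketch below is not the authors' pipeline; it only illustrates, on synthetic data sampled on the same schedule, how a diel signal and its peak time can be recovered.

```python
# Illustrative cosinor fit on a synthetic transcript sampled every 4 hours
# for 4 days, mimicking the study's schedule. Not the study's actual analysis.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 96, 4.0)                              # hours over a 4-day cruise
signal = 10 + 3 * np.cos(2 * np.pi * (t - 14) / 24)    # synthetic rhythm, peak ~14:00
expr = signal + rng.normal(0, 0.8, t.size)             # add measurement noise

# Fit expr ~ mesor + a*cos(wt) + b*sin(wt) by ordinary least squares.
w = 2 * np.pi / 24
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(X, expr, rcond=None)
mesor, a, b = coef
amplitude = np.hypot(a, b)
peak_hour = (np.arctan2(b, a) * 24 / (2 * np.pi)) % 24

print(f"mean level {mesor:.1f}, diel amplitude {amplitude:.1f}, "
      f"estimated peak ~{peak_hour:.1f} h after midnight")
```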
|
Environment
| 2,021 |
February 1, 2021
|
https://www.sciencedaily.com/releases/2021/02/210201200007.htm
|
Traffic noise makes mating crickets less picky
|
A new study shows that the mating behaviour of crickets is significantly affected by traffic noise and other human-made sounds -- a finding that could have implications for the future success of the species.
|
When a female cricket is nearby, male crickets will perform a courtship song by rubbing their wings together. The song is energetically costly to produce and so contains important information about the male's qualities, and it is taken into account by females when making mating decisions.
The researchers paired female crickets with silenced male crickets in ambient noise conditions, artificial white noise conditions, and traffic noise conditions (recorded at ground level next to the A14 near Cambridge). Males were then allowed to court the female freely, and an artificial courtship song was played back when the males attempted to sing. Males were either paired with a high-quality courtship song, a low-quality courtship song, or no song at all.
In the control conditions of ambient noise, the females mounted the males much sooner and more frequently when paired with a high-quality courtship song. However, a high-quality courtship song provided no benefit in the white noise and traffic noise conditions, with the researchers finding that courtship duration and mounting frequency were not influenced by the quality or even the presence of a song.
The findings suggest that human-made noise alters how females perceive males when making mate choice decisions. In turn, this could affect individual fitness -- as male crickets may attempt to expend more energy to produce an even better courtship song -- as well as long-term population viability.
Lead author Dr Adam Bent, who carried out the study as part of his PhD at Anglia Ruskin University (ARU) in Cambridge, England, said: "In the short term, we found that males paired with high-quality songs in noisy environments are receiving no benefit over those paired with a low-quality song, or no song at all. As a result, males that produce high-quality songs may attempt to expend more energy into their calls to gain an advantage, potentially affecting that individual's survival.
"At the same time, female crickets may choose to mate with a lower-quality male as they are unable to detect differences in mate quality due to the human-made noise, and this may lead to a reduction or complete loss of offspring viability.
"Traffic noise and the crickets' courtship song do not share similar acoustic frequencies, so rather than masking the courtship song, we think the traffic noise serves as a distraction for the female cricket."
Co-author Dr Sophie Mowles, Senior Lecturer in Animal and Environmental Biology at Anglia Ruskin University (ARU), said: "Humans are continually changing the characteristics of environments, including through the production of anthropogenic noise.
"As mate choice is a powerful driving force for evolution through sexual selection, disruptions may cause a decline in population viability. And because anthropogenic noise is a very recent evolutionary selection pressure, it is difficult to predict how species may adapt."
|
Environment
| 2,021 |
February 1, 2021
|
https://www.sciencedaily.com/releases/2021/02/210201144931.htm
|
When rhinos fly: Upside down the right way for transport
|
When it comes to saving endangered species of a certain size, conservationists often have to think outside the box.
|
This was reinforced by a recently published study. "We found that suspending rhinos by their feet is safer than we thought," said Dr. Robin Radcliffe, senior lecturer in wildlife and conservation medicine and first author of the study.
While this finding might sound comical, it is vital information for conservationists working to save these vanishing creatures. To keep rhinos safe from poaching and to distribute individuals across habitats so their gene pools stay healthy, management teams often must move rhinos in remote areas that cannot be accessed by roads or automobiles. This often leaves one option: tranquilizing and airlifting the giant mammals out with a helicopter.
While this technique of moving rhinos from place to place has been used for 10 years, no one had scientifically documented its clinical effects on the animals during transportation, or any potential negative effects once they wake up.
Radcliffe and his colleagues were mindful that the anesthesia drugs used to tranquilize these large mammals can be dangerous. "These drugs are potent opioids -- a thousand times more potent than morphine, with side effects that include respiratory depression, reduced oxygen in the blood and higher metabolism," Radcliffe said. "These side effects can impair rhinoceros health and even lead to mortalities during capture and translocation."
The researchers predicted that hanging rhinos upside down would exacerbate the dangerous effects of these opioids. Horses in this position suffer from impaired breathing, likely due to the heavy abdominal organs pushing against the lungs and chest cavity. Therefore, this method was deemed riskier than transporting the creatures via a platform or sledge with the rhinos lying on their sides.
To put the question to rest, Radcliffe and Dr. Robin Gleed, professor of anesthesiology and pain medicine, collaborated with Namibian conservationists to conduct a field study of the highly endangered animals while anesthetized in two different positions: hanging by their feet from a crane to mimic the effects of air transport, or lying on their sides as they would during the immediate period after darting and transport on a sledge.
The researchers traveled to Waterberg National Park in Namibia, where they examined 12 rhinoceroses captured for procedures related to conservation but not being moved. After tranquilizing the animals by darting from a helicopter, the scientists tested each animal while it was hanging upside down and lying on its side, in order to directly compare breathing and circulation in both positions.
The data debunked Radcliffe and his colleagues' predictions that hanging upside down by the feet was worse for rhinos' pulmonary function than lying on their sides. In fact, the rhinos actually fared slightly better when slung up in the sky.
"Hanging rhinos upside down actually improved ventilation (albeit to a small degree) over rhinos lying on their sides," Radcliffe said. "While this was unexpected, and the margins small, any incremental improvement in physiology helps to enhance safety of black rhinoceros during capture and anesthesia."
While this is good news for conservationists working with black rhinos in rugged terrain, Radcliffe said more information is needed. "Our next step with this research is to extend the time that subject rhinos are suspended upside down to mimic the helicopter-assisted aerial transport of rhinos in the real world," he said, noting that in the remote habitats of Namibia, these helicopter trips can take up to 30 minutes.
"Now that we know that it's safe to hang rhinos upside down for short periods of time, we'd like to make sure that longer durations are safe as well."
The work was supported by a grant from the Jiji Foundation, the College of Veterinary Medicine and the Namibian Ministry of Environment and Tourism.
|
Environment
| 2,021 |
February 1, 2021
|
https://www.sciencedaily.com/releases/2021/02/210201115935.htm
|
Seafood: Much to glean when times are rough
|
Scientists say stable seafood consumption amongst the world's poorer coastal communities is linked to how local habitat characteristics influence fishing at different times of the year.
|
In the coastal communities of low-income countries, the seafood people catch themselves is often a main food source. In a new study, scientists focused on an often-overlooked type of fishing called gleaning: collecting molluscs, crabs, octopus and reef fish by hand close to shore.
"We surveyed 131 households in eight coastal communities on a small island off Timor-Leste," said study lead author Ruby Grantham from the ARC Centre of Excellence for Coral Reef Studies.
Grantham said even though gleaning is important for food security in rough weather -- when other types of fishing often aren't possible -- some households don't do it. "It's not just a case of people fishing when they need to. Weather and coastal conditions make fishing activities, including gleaning, dangerous, unsuccessful or even impossible in some places at certain times of the year," Grantham said.
She said the findings illustrate the ways people interact with, and benefit from, coastal ecosystems, and how this varies between communities and seasons.
The study found the ability of households to glean in rough weather was influenced by the total area and type of shallow habitat close to the community. "This highlights why we need context-specific understanding of dynamic coastal livelihoods and small-scale fisheries in particular," Grantham said. "Even amongst these eight communities on the same small island we found distinct differences in how and when gleaning contributes to household fishing activities and as a source of subsistence seafood."
Co-author Dr David Mills, Research Leader for the WorldFish Country Program in Timor-Leste, said the research is important for the future management of coastal fisheries. "In Timor-Leste, low-income households have few opportunities to access the high-quality nutrition available from seafood," Dr Mills said. "We know that gleaning fisheries are really important for food security at particular times of the year," he said. "And this detailed research will help us develop management approaches that keep fisheries sustainable while also ensuring seafood remains available to those who need it the most, when they need it the most."
Climate change is altering the world and its environments rapidly. People depend on their interactions with nature for many aspects of wellbeing. Understanding these interactions is critical for diagnosing vulnerabilities and building resilience, especially amongst coastal communities who depend directly on healthy oceans for food.
"The success of coastal livelihood strategies depends on a range of influences that are now, at best, poorly understood," Grantham said. "We wanted to explore how people interact with and benefit from coastal environments through time."
Grantham said a better understanding of the existing relationships between people and nature, as well as how these influence interactions between societies and local ecosystems, is crucial to legitimate environmental policy and management to ensure sustainable futures.
"We need to further consider the factors influencing how feasible and how desirable social-ecological interactions, like fishing, are across different seasons," she said. "These insights of the fine-scale dynamics in how people interact with coastal ecosystems through activities such as gleaning can help strengthen our understanding in research, decision-making and management in coastal areas exposed to environmental change."
|
Environment
| 2,021 |
January 29, 2021
|
https://www.sciencedaily.com/releases/2021/01/210129110940.htm
|
Synthesizing valuable chemicals from contaminated soil
|
Scientists at Johannes Gutenberg University Mainz (JGU) and ETH Zurich have developed a process to produce commodity chemicals in a much less hazardous way than was previously possible. Such commodity chemicals represent the starting point for many mass-produced products in the chemical industry, such as plastics, dyes, and fertilizers, and are usually synthesized with the help of chlorine gas or bromine, both of which are extremely toxic and highly corrosive.
|
"Chlorine gas and bromine are difficult to handle, especially for small laboratories, as they require strict safety procedures," said Professor Siegfried Waldvogel, spokesperson for JGU's cutting-edge SusInnoScience research initiative, which helped develop the new process. "Our method largely eliminates the need for safety measures because it does not require the use of chlorine gas or bromine. It also makes it easy to regulate the reaction in which the desired chemicals are synthesized by controlling the supply of electric current."
According to Waldvogel, electrolysis can be used to obtain dichloro and dibromo compounds, for example, from solvents that would ordinarily be used to produce PVC. "This is even much simpler than synthesizing dichloro and dibromo products from chlorine gas or bromine, respectively." The research team, he claims, has demonstrated that the novel process functions as intended for more than 60 different substrates. "The process can be used for molecules of different sizes and is thus broadly applicable. It is also easy to scale up, and we have already been able to employ it to transform larger quantities in the multi-gram range," Waldvogel added.
The chemist is particularly enthusiastic about the discovery that electrolysis can also be used to separate chlorine atoms from molecules of certain insecticides that have been banned, yielding the desired dichloro products. "There is virtually no natural degradation of such insecticides," he pointed out. "They persist in the environment for extremely long periods and have now even been detected in the Arctic. Our process could help in eliminating such toxic substances and actually exploit them to our benefit in future."
|
Environment
| 2,021 |
January 29, 2021
|
https://www.sciencedaily.com/releases/2021/01/210129120302.htm
|
It's elemental: Ultra-trace detector tests gold purity
|
Unless radon gas is discovered in a home inspection, most people remain blissfully unaware that rocks like granite, metal ores, and some soils contain naturally occurring sources of radiation. In most cases, low levels of radiation are not a health concern. But some scientists and engineers are concerned about even trace levels of radiation, which can wreak havoc on sensitive equipment. The semiconductor industry, for instance, spends billions each year to source and "scrub" ultra-trace levels of radioactive materials from microchips, transistors and sensitive sensors.
|
Now chemists at the U.S. Department of Energy's Pacific Northwest National Laboratory have developed a simple and reliable method that holds promise for transforming how ultra-trace elements are separated and detected. Low levels of troublesome naturally occurring radioactive elements like uranium and thorium are often tucked among valuable metals like gold and copper. It has been extraordinarily difficult, impractical, or even impossible, in some cases, to tease out how much is found in samples of ore mined across the globe.
Yet sourcing materials with very low levels of natural radiation is essential for certain types of sensitive instruments and detectors, like those searching for evidence of currently undetected particles that many physicists believe actually comprise most of the universe.
"We are really pushing the envelope on detection," said chemist Khadouja Harouaka. "We want to measure very low levels of thorium and uranium in components that go into some of the most sensitive detectors in the world. It is particularly difficult to measure low levels of thorium and uranium in precious metals like the gold that goes into the electrical components of these detectors. With this new technique, we can overcome that challenge and achieve detection limits as low as 10 parts per trillion in gold."
That's like trying to find one four-leaf clover in about 100 thousand acres of clover -- an area larger than New Orleans.
The scientists locate their extraordinarily rare "four-leaf clover" atoms from the huge field of ordinary atoms by sending their samples through a series of isolation chambers. These chambers first filter and then collide the rare atoms with simple oxygen, creating a "tagged" molecule of a unique molecular weight that can then be separated by its size and charge.
The effect is like finding a way to tie a helium balloon to each target thorium or uranium atom so that it floats above the sea of gold sample and can be counted. In this case, the sophisticated counter is a mass spectrometer. The research was featured on a journal cover in December 2020.
The central innovation is the collision cell chamber, where charged atoms of thorium and uranium react with oxygen, increasing their molecular weight and allowing them to separate from other overlapping signals that can disguise their presence.
"I had an aha moment," said Greg Eiden, the original PNNL inventor of the patented collision cell, which is used to perform these reactions, thereby reducing unwanted interference in the instrument readout by a factor of a million. "It was this miracle chemistry that gets rid of the bad stuff you don't want in your sample so you can see what you want to see."
In the current study, Harouaka and her mentor Isaac Arnquist leveraged Eiden's work to tease out the vanishingly small number of radioactive atoms that can nonetheless ruin sensitive electronic detection equipment. Among other uses, the innovation may allow chemists, led by senior chemist Eric Hoppe and his team at PNNL, to further hone the chemistry that produces the world's purest electroformed copper. The copper forms a key component of sensitive physics detectors, including those used for international nuclear treaty verification.
Stanford physicist Giorgio Gratta helps lead a global quest to capture evidence for the fundamental building blocks of the universe. The nEXO experiment, now in the planning stages, is pushing the detection boundaries for evidence of these elusive particles, called Majorana fermions.
The signals they seek come from exceedingly rare events. To detect such an event, the experiments require exquisitely sensitive detectors that are free of stray radiation pings introduced through the materials that make up the detector. That includes the metals in the electronics required to record the exceedingly rare events that trigger detection.
"PNNL is a global leader in ultra-trace radiation detection," said Gratta. "Their unique mix of innovation and application provide an important contribution that enables sensitive experiments like nEXO."
Physicist Steve Elliott of Los Alamos National Laboratory emphasized the lengths to which researchers must go to ensure a scrupulously clean environment for rare particle detection. "In experimental programs where even human fingerprints are too radioactive and must be avoided, techniques to measure ultra-low radioactive impurity levels are critical," he said, adding that this method could provide an important way to source materials for another of the next generation of rare neutrino event detectors, called LEGEND, being planned for deployment in an underground location in Europe.
Semiconductors, the basic building blocks of modern electronics -- including integrated circuits, microchips, transistors, sensors and quantum computers -- are also sensitive to the presence of stray radiation. And the innovation cycle demands each generation pack more and more into ever tinier microchips.
"As the architecture gets smaller and smaller, radiation contamination is an ever-bigger issue that manufacturers have been working around by changing the architecture inside the chips," said Hoppe. "But there's only so far you can go with that, and you really start to become limited by the purity of some of those materials. The industry has set targets for itself that right now it can't achieve, so having a measurement technique like this could make some of those targets achievable."
More broadly, Eiden added, "in the big world of the periodic table there's probably applications for any element that you care about. And what Eric, Khadouja and Isaac are going after here is analyzing any trace impurity in any ultra-pure material."
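To get a feel for what a 10-parts-per-trillion detection limit means, the short calculation below converts that mass fraction into thorium atoms per gram of gold. It assumes "parts per trillion" is by mass, which the article does not state explicitly, and it is purely illustrative.

```python
# What 10 parts per trillion (by mass, assumed) of thorium in gold looks like.
AVOGADRO = 6.022e23
M_TH = 232.0           # g/mol, thorium-232
M_AU = 197.0           # g/mol, gold

mass_fraction = 10e-12                               # 10 ppt
th_atoms_per_g = mass_fraction / M_TH * AVOGADRO     # thorium atoms in 1 g of gold
au_atoms_per_g = 1.0 / M_AU * AVOGADRO               # gold atoms in 1 g of gold

print(f"~{th_atoms_per_g:.2e} Th atoms per gram of gold")
print(f"i.e. about 1 Th atom per {au_atoms_per_g / th_atoms_per_g:.1e} gold atoms")
```

Even at that limit, a gram of gold still contains tens of billions of thorium atoms -- which is why the mass spectrometer, rather than decay counting, is the practical way to find them.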
|
Environment
| 2,021 |
January 29, 2021
|
https://www.sciencedaily.com/releases/2021/01/210129090520.htm
|
Using science to explore a 60-year-old Russian mystery
|
In early October 2019, when an unknown caller rang EPFL professor Johan Gaume's cell phone, he could hardly have imagined that he was about to confront one of the greatest mysteries in Soviet history. At the other end of the line, a journalist from The New York Times asked for his expert insight into a tragedy that had occurred 60 years earlier in Russia's northern Ural Mountains -- one that has since come to be known as the Dyatlov Pass Incident. Gaume, head of EPFL's Snow and Avalanche Simulation Laboratory (SLAB) and visiting fellow at the WSL Institute for Snow and Avalanche Research SLF, had never heard of the case, which the Russian Public Prosecutor's Office had recently resurrected from Soviet-era archives. "I asked the journalist to call me back the following day so that I could gather more information. What I learned intrigued me."
|
On 27 January 1959, a ten-member group consisting mostly of students from the Ural Polytechnic Institute, led by 23-year-old Igor Dyatlov -- all seasoned cross-country and downhill skiers -- set off on a 14-day expedition to the Gora Otorten mountain, in the northern part of the Soviet Sverdlovsk Oblast. At that time of the year, a route of this kind was classified Category III -- the riskiest category -- with temperatures falling as low as -30°C. On January 28, one member of the expedition, Yuri Yudin, decided to turn back. He never saw his classmates again.
When the group's expected return date to the departure point, the village of Vizhay, came and went, a rescue team set out to search for them. On 26 February, they found the group's tent, badly damaged, on the slopes of Kholat Syakhl -- translated as "Death Mountain" -- some 20 km south of the group's destination. The group's belongings had been left behind. Further down the mountain, beneath an old Siberian cedar tree, they found two bodies clad only in socks and underwear. Three other bodies, including that of Dyatlov, were subsequently found between the tree and the tent site; presumably, they had succumbed to hypothermia while attempting to return to the camp. Two months later, the remaining four bodies were discovered in a ravine beneath a thick layer of snow. Several of the deceased had serious injuries, such as fractures to the chest and skull.
The Soviet authorities opened an investigation to determine the causes of this strange drama, but closed it after three months, concluding that a "compelling natural force" had caused the death of the hikers. In the absence of survivors, the sequence of events on the night of 1 to 2 February is unclear to this day, and has led to countless more or less fanciful theories, from murderous Yeti to secret military experiments.
This is the mystery that Gaume was confronted with. "After the call from the New York Times reporter, I began writing equations and figures on my blackboard, trying to understand what might have happened in purely mechanical terms," he says. "When the reporter rang back, I told her it was likely that an avalanche had taken the group by surprise as they lay sleeping in the tent." This theory, which is the most plausible, was also put forward by the Russian Public Prosecutor's Office after the investigation was reopened in 2019 at the request of the victims' relatives. But the lack of evidence and the existence of odd elements have failed to convince a large portion of Russian society. "I was so intrigued that I began researching this theory more deeply. I then contacted Professor Alexander Puzrin, chair of Geotechnical Engineering at ETH Zurich, whom I had met a month earlier at a conference in France."
Gaume, originally from France, and Russian-born Puzrin worked together to comb through the archives, which had been opened to the public after the fall of the Soviet Union. They also spoke with other scientists and experts in the incident, and developed analytical and numerical models to reconstruct the avalanche that may have caught the nine victims unaware.
"The Dyatlov Pass mystery has become part of Russia's national folklore. When I told my wife that I was going to work on it, she looked at me with deep respect!" says Puzrin. "I was quite keen to do it, especially because I had started working on slab avalanches two years earlier.
My primary research is in the field of landslides; I study what happens when a certain amount of time elapses between when a landslide is triggered and when it actually occurs." According to Gaume and Puzrin, this is what happened in 1959: the hikers had made a cut in the mountain's snow-covered slope to set up their tent, but the avalanche didn't occur until several hours later.
"One of the main reasons why the avalanche theory is still not fully accepted is that the authorities have not provided an explanation of how it happened," says Gaume. In fact, there are a number of points that contradict that theory: first, the rescue team did not find any obvious evidence of an avalanche or its deposition. Then the average angle of the slope above the tent site -- less than 30° -- was not steep enough for an avalanche. Also, if an avalanche occurred, it was triggered at least nine hours after the cut was made in the slope. And finally, the chest and skull injuries observed on some victims were not typical of avalanche victims.
In their investigation, the two researchers show that on the night of the tragedy, one of the most important contributing factors was the presence of katabatic winds -- i.e., winds that carry air down a slope under the force of gravity. These winds could have transported the snow, which would have then accumulated uphill from the tent due to a specific feature of the terrain that the team members were unaware of.
"If they hadn't made a cut in the slope, nothing would have happened. That was the initial trigger, but that alone wouldn't have been enough. The katabatic wind probably drifted the snow and allowed an extra load to build up slowly. At a certain point, a crack could have formed and propagated, causing the snow slab to release," says Puzrin.
Both scientists are nevertheless cautious about their findings, and make it clear that much about the incident remains a mystery. "The truth, of course, is that no one really knows what happened that night. But we do provide strong quantitative evidence that the avalanche theory is plausible," Puzrin continues.
The two models developed for this study -- an analytical one for estimating the time required to trigger an avalanche, created by ETH Zurich, and SLAB's numerical one for estimating the effect of avalanches on the human body -- will be used to better understand natural avalanches and the associated risks. Gaume and Puzrin's work stands as a tribute to Dyatlov's team, who were confronted with a "compelling force" of nature. And, although they were unable to complete their treacherous expedition, they have given generations of scientists a perplexing enigma to solve.
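The delayed-release idea -- a cut weakens the slope, and wind-drifted snow then adds load for hours until a weak layer fails -- can be caricatured with a simple threshold calculation. The sketch below is only a toy illustration of that concept; the numbers are invented, and the team's published analytical model is considerably more involved.

```python
# Toy illustration of delayed slab release: load from wind-drifted snow grows
# with time until it exceeds the weak layer's reduced strength.
# All numbers are invented for illustration; this is not the study's model.

initial_load = 1.2        # kPa, slab weight on the weak layer right after the cut
accumulation_rate = 0.08  # kPa per hour added by katabatic wind-drifted snow
strength_after_cut = 1.9  # kPa, shear strength remaining once the slope was undercut

load, hour = initial_load, 0
while load < strength_after_cut and hour < 48:
    hour += 1
    load += accumulation_rate

if load >= strength_after_cut:
    print(f"weak layer overloaded after ~{hour} hours of drifting snow")
else:
    print("no failure within 48 hours for these illustrative numbers")
```

With these made-up values the failure arrives roughly nine hours after the cut, which is the kind of delay the article describes; the real question the researchers answered is whether physically plausible parameters can produce such a delay at all.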
|
Environment
| 2,021 |
January 29, 2021
|
https://www.sciencedaily.com/releases/2021/01/210129090511.htm
|
Scholars reveal the changing nature of U.S. cities
|
Cities are not all the same, or at least their evolution isn't, according to new research from the University of Colorado Boulder.
|
These findings are out this week in two new articles. The researchers hope that by providing this look at the past with this unique data set, they'll be able to glimpse the future, including the impact of population growth on cities or how cities might develop in response to environmental factors like sea level rise or wildfire risk.
"We can learn so much more about our cities and urban development, if we know how to exploit these kinds of new data, and I think this really confirms our approach," said Stefan Leyk, a geography professor at CU Boulder and one of the authors on the papers. "It's not just the volume of data that you take and throw into a washing machine. It's really the knowing how to make use of the data, how to integrate them, how to get the right and meaningful things out there."
It's projected that by 2050, more than two-thirds of humans will live in urban areas. What those urban areas will look like, however, is unclear, given limited knowledge of the history of urban areas, broadly speaking, prior to the 1970s.
This work and previous research, however, hope to fill that gap by studying property-level data from the real estate company Zillow, through a property-share agreement. This massive dataset, called the Zillow Transaction and Assessment Dataset or ZTRAX, contains about 374 million data records that include the built year of existing buildings going back over 100 years. The researchers previously used these data to create the Historical Settlement Data Compilation for the United States (HISDAC-US), a unique set of time series data that's freely available for anyone to use.
For this new research, which was funded by the National Science Foundation, the Institute of Behavioral Sciences and Earth Lab, the researchers applied statistical methods and data mining algorithms to the data, trying to glean all available information on the nature of settlement development, particularly for metropolitan statistical areas, or high-density geographic regions.
What they found is that not only were they able to learn more about how to measure urban size, shape and structure (or form), including the number of built-up locations and their structures, but they were also able to see very clear trends in the evolution of these distinct categories of urban development.
In particular, the researchers found that urban form and urban size do not develop the same as previously thought. While size generally moves in a single direction, especially in large cities, form can ebb and flow depending on constraints, such as the geography of places as well as environmental and technological factors.
"This (the categorization) is something that is really novel about that paper because this could not be done prior to that because these data were just not available," said Johannes Uhl, the lead author of the paper and a research associate at CU Boulder.
It's remarkable, according to the researchers, that the two articles are being published by different high-impact journals on the same day. "There's so much potential in this current data revolution, as we call it," Leyk commented. "The growth of so-called data journals is a good trend because it's becoming more and more systematic to publish formal descriptions of the data, to learn where the data can be found, and to inform the community what kind of publications are based on these data products. So, I like this trend and we try and make use of it."
This research, however, is still far from finished.
Next, the researchers hope to further examine the categories, and, in particular, the different groups of cities that emerged in the process of this research to hopefully determine a classification system for urban evolution, while also applying the data approach to more rural settings. "The findings are interesting, but they can of course be expanded into greater detail," Uhl said.
The researchers are also working with other researchers in different fields across the university to explore the applications of these data on topics as far-reaching as urban fuel models for nuclear war scenarios, the exposure of the built environment to wildfire risk, and settlement vulnerability from sea level rise.
"The context is a little different in each of these fields, but really interesting," Leyk said. "You realize how important that kind of new data, new information, can become for so many unexpected topics."
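As a rough illustration of how property-level built-year records like those in ZTRAX can be turned into a settlement time series, the sketch below counts, for each metro area, how many recorded structures existed by a given year. The column names and the tiny example records are hypothetical; the actual HISDAC-US processing is far more elaborate.

```python
# Rough sketch: turn property-level "built year" records into a per-metro
# time series of cumulative built-up structures. Columns and records are
# hypothetical; HISDAC-US processing is far more elaborate than this.
import pandas as pd

records = pd.DataFrame({
    "metro":      ["Denver", "Denver", "Denver", "Boulder", "Boulder"],
    "built_year": [1925, 1978, 2004, 1950, 1999],
})

rows = []
for metro, group in records.groupby("metro"):
    for year in range(1900, 2021, 10):
        rows.append({
            "metro": metro,
            "year": year,
            "structures_built_by_year": int((group["built_year"] <= year).sum()),
        })

timeseries = pd.DataFrame(rows)
print(timeseries.pivot(index="year", columns="metro",
                       values="structures_built_by_year"))
```

Scaled up to hundreds of millions of records and gridded in space, counts like these are what let the researchers track how urban size and form evolve decade by decade.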
|
Environment
| 2,021 |
January 28, 2021
|
https://www.sciencedaily.com/releases/2021/01/210128134802.htm
|
Wood formation can now be followed in real-time -- and possibly serve the climate of tomorrow
|
A genetic engineering method makes it possible to observe how woody cell walls are built in plants. The new research in wood formation, conducted by the University of Copenhagen and others, opens up the possibility of developing sturdier construction materials and perhaps more climate efficient trees.
|
The ability of certain tree species to grow taller than 100 meters is due to complex biological engineering. Besides needing the right amounts of water and light to do so, this incredible ability is also a result of cell walls built sturdily enough to keep a tree both upright and able to withstand the tremendous pressure created as water is sucked up from its roots and into its leaves.
This ability is made possible by what are known as the secondary cell walls of plants. Secondary cell walls, also known as xylem or wood, are built according to a few distinct and refined patterns that allow wall strength to be maintained while still allowing connecting cells to transport water from one to the other.
How these wall patterns are built has been a bit of a mystery. Now, the mystery is starting to resolve. For the first time, it is possible to observe the process of woody cell wall pattern formation within a plant -- and in real time, no less. A team of international researchers, including Professor Staffan Persson of the University of Copenhagen, has found a way to monitor this biological process live, under the microscope. The scientific article describing the study has now been published.
The work opens up the possibility of manipulating the building process and perhaps even making plant xylem stronger -- a point that we will return to.
Because wood forms in tissue that is buried deep inside the plant, and microscopes work best on the surfaces of objects, the wood-forming process is difficult to observe. To witness it in action, researchers needed to apply a genetic trick. By modifying the plants with a genetic switch, they were able to turn on wood formation in all cells of the plant -- including those on the surface. This allowed them to observe the wood formation under a microscope in detail and in real time.
Cell walls consist mainly of cellulose, which is produced by enzymes located on the surface of all plant cells. Generally speaking, the process involves the orderly arrangement of protein tubes -- known as microtubules -- at the surface of the cells. The microtubules serve as tracks along which the wall-producing enzymes deposit construction material to the wall.
"One can imagine the construction process as a train network, where the train would represent the cellulose-producing enzymes, as they move forward while producing cellulose fibers. The microtubules, or rails, steer the direction of the proteins, just like train rails. Interestingly, during the formation of woody cell walls these "rails" need to change their organization completely to make patterned walls -- a process that we now can follow directly under our microscopes," explains Staffan Persson, of UCPH's Department of Plant and Environmental Sciences.
"We now have a better understanding of the mechanisms that cause the microtubules to rearrange and form the patterns. Furthermore, we can simulate the wall pattern formation on a computer. The next step is to identify ways that allow us to make changes to the system. For example, by changing patterns," suggests Persson.
First author Dr Rene Schneider chimes in: "Changing the patterns can alter the ways in which a plant grows or distributes water within it, which can then go on to influence a plant's height or biomass.
For example, if you could create trees that grow differently by accumulating more biomass, it could potentially help slow the increase in carbon dioxide in the atmosphere."
In the longer term, Persson suggests that one of the most obvious applications is to manipulate biological processes to develop stronger or different woody construction materials. "If we can change both the chemical composition of cell walls, which we and other researchers are already working on, and the patterns of cell walls, we can probably alter wood strength and porosity. Sturdier construction materials made of wood don't just benefit the construction industry, but the environment and climate as well. They have a smaller carbon footprint, a longer service life and can be used for manifold purposes. In some cases, they may even be able to replace energy-intensive materials like concrete," says Persson, whose background as a construction engineer complements his plant biology background.
He also points to potential applications in the development of cellulose-based nanomaterials. These are gaining ground in medicine, where cellulose-based nanomaterials can be used to efficiently transport pharmaceuticals throughout the body.
However, both Staffan Persson and Rene Schneider underscore that these applications would first require further knowledge of how secondary walls can be manipulated, and implementation of that knowledge in trees or other useful crop plants, since much of the research is conducted in model plant organisms such as thale cress.
|
Environment
| 2,021 |
January 28, 2021
|
https://www.sciencedaily.com/releases/2021/01/210128091143.htm
|
Turning food waste back into food
|
There's a better end for used food than taking up space in landfills and contributing to global warming.
|
UC Riverside scientists have discovered fermented food waste can boost bacteria that increase crop growth, making plants more resistant to pathogens and reducing carbon emissions from farming.
"Beneficial microbes increased dramatically when we added fermented food waste to plant growing systems," said UCR microbiologist Deborah Pagliaccia, who led the research. "When there are enough of these good bacteria, they produce antimicrobial compounds and metabolites that help plants grow better and faster."
Since the plants in this experiment were grown in a greenhouse, the benefits of the waste products were preserved within a closed watering system. The plant roots received a fresh dose of the treatment each time they were watered. "This is one of the main points of this research," Pagliaccia said. "To create a sustainable cycle where we save water by recycling it in a closed irrigation system and at the same time add a product from food waste that helps the crops with each watering cycle."
These results were recently described in a published paper.
Food waste poses a serious threat to the planet. In the U.S. alone, as much as 50% of all food is thrown away. Most of this waste isn't recycled, but instead takes up more than 20% of America's landfill volume. This waste represents not only an economic loss, but a significant waste of freshwater resources used to produce food, and a misuse of what could otherwise feed millions of low-income people who struggle with food security.
To help combat these issues, the UCR research team looked for alternative uses for food waste. They examined the byproducts from two kinds of waste that are readily available in Southern California: beer mash -- a byproduct of beer production -- and mixed food waste discarded by grocery stores.
Both types of waste were fermented by River Road Research and then added to the irrigation system watering citrus plants in a greenhouse. Within 24 hours, the average population of beneficial bacteria was two to three orders of magnitude greater than in plants that did not receive the treatments, and this trend continued each time the researchers added treatments.
UCR environmental scientist Samantha Ying then studied nutrients such as carbon and nitrogen in the soil of the treated crops. Her analysis showed a spike in the amount of carbon after each waste product treatment, followed by a plateau, suggesting the beneficial bacteria used the available carbon to replicate.
Pagliaccia explained that this finding has an impact on the growth of the bacteria and on the crops themselves. "If waste byproducts can improve the carbon to nitrogen ratio in crops, we can leverage this information to optimize production systems," she said.
Another finding of note is that neither the beer mash nor the mixed food waste products tested positive for Salmonella or other pathogenic bacteria, suggesting they would not introduce any harmful element to food crops.
"There is a pressing need to develop novel agricultural practices," said UCR plant pathologist and study co-author Georgios Vidalakis. "California's citrus, in particular, is facing historical challenges such as Huanglongbing bacterial disease and limited water availability," he said.
The paper's results suggest using these two types of food waste byproducts in agriculture is beneficial and could complement the use of synthetic chemical additives by farmers -- in some cases eliminating the need for such additives altogether.
Crops would in turn become less expensive.
Pagliaccia and Ying also recently received a California Department of Food and Agriculture grant to conduct similar experiments using almond shell byproducts from Corigin Solutions to augment crops. This project is also supported with funding from the California Citrus Nursery Board, Corigin Solutions, and the California Agriculture and Food Enterprise.
"Forging interdisciplinary research collaborations and building public-private sector partnerships will help solve the challenges facing global agri-food systems," said UCR co-author Norman Ellstrand, a distinguished professor of genetics.
When companies enable growers to use food waste byproducts for agricultural purposes, it helps move society toward a more eco-friendly system of consumption.
"We must transition from our linear 'take-make-consume-dispose' economy to a circular one in which we use something and then find a new purpose for it. This process is critical to protecting our planet from constant depletion of natural resources and the threat of greenhouse gases," Pagliaccia said. "That is the story of this project."
|
Environment
| 2,021 |
January 27, 2021
|
https://www.sciencedaily.com/releases/2021/01/210127140002.htm
|
A mild way to upcycle plastics used in bottles into fuel and other high-value products
|
Plastic is ubiquitous in people's lives. Yet, when plastic-containing items have fulfilled their missions, only a small amount is recycled into new products, which are often of lower quality compared to the original material. And transforming this waste into high-value chemicals requires substantial energy. Now, researchers report a milder method for upcycling these plastics into fuel and other high-value products.
|
Global production of sturdy, single-use plastic for toys, sterile medical packaging, and food and beverage containers is increasing. Polyolefin polymers, such as polyethylene and polypropylene, are the most common plastics used in these products because the polymers' molecular structures -- long, straight chains of carbon and hydrogen atoms -- make the materials very durable. It's difficult to degrade the carbon-to-carbon bonds in polyolefins, however, so energy-intensive procedures using high temperatures, from 800 to 1400 F, or strong chemicals are needed to break down and recycle them. Previous studies have shown that metals such as zirconium, platinum and ruthenium can catalyze the process of splitting apart short, simple hydrocarbon chains and complicated, plant-based lignin molecules at moderate reaction temperatures requiring less energy than other techniques. So, Yuriy Román-Leshkov and colleagues wanted to see if metal-based catalysts would have a similar effect on solid polyolefins with long hydrocarbon chains, disintegrating them into usable chemicals and natural gas.
The researchers developed a method to react simple hydrocarbon chains with hydrogen in the presence of noble- or transition-metal nanoparticles under mild conditions. In their experiments, ruthenium-carbon nanoparticles converted over 90% of the hydrocarbons into shorter compounds at 392 F. Then, the team tested the new method on more complex polyolefins, including a commercially available plastic bottle. Despite not pretreating the samples, as is necessary with current energy-intensive methods, they were completely broken down into gaseous and liquid products using this new method. In contrast to current degradation methods, the reaction could be tuned so that it yielded either natural gas or a combination of natural gas and liquid alkanes. The researchers say implementing their method could help reduce the volume of post-consumer waste in landfills by recycling plastics into desirable, highly valuable alkanes, though technology to purify the products is needed to make the process economically feasible.
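For readers more used to Celsius, the contrast between the conventional and the new reaction conditions is a simple unit conversion of the Fahrenheit values quoted above; the snippet below adds no information beyond that arithmetic.

```python
# Convert the quoted Fahrenheit temperatures to Celsius for comparison.
def f_to_c(f):
    return (f - 32) * 5 / 9

for label, f in [("conventional, low end", 800),
                 ("conventional, high end", 1400),
                 ("new method", 392)]:
    print(f"{label}: {f} F = {f_to_c(f):.0f} C")
```

The new method thus operates around 200 C, versus roughly 430-760 C for the conventional high-temperature routes.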
|
Environment
| 2,021 |
January 27, 2021
|
https://www.sciencedaily.com/releases/2021/01/210127135951.htm
|
Forests with diverse tree sizes and small clearings hinder wildland fire growth
|
A new 3D analysis shows that wildland fires flare up in forests populated by similar-sized trees or checkerboarded by large clearings and slow down where trees are more varied. The research can help fire managers better understand the physics and dynamics of fire to improve fire-behavior forecasts.
|
"We knew fuel arrangement affected fire but we didn't know how," said Adam Atchley, lead author on a Los Alamos National Laboratory-led study published today. The study for the first time links generalized forest characteristics that can be easily observed by remote sensing and modeled by machine learning to provide insight into fire behavior, even in large forested areas.
Understanding how wildland fire behaves is necessary to curb its spread, and also to plan safe, effective prescribed burns. However, data is limited, and most studies are too simplistic to accurately predict fire behavior. To predict how fire will move through a forest, it is necessary to first paint an accurate picture of a typical forest's diversity with varying density, shapes, and sizes of trees. But this is computationally expensive, so most studies target homogeneous forests that rarely occur in nature.
Using its award-winning model, FIRETEC, on high-performance computers at Los Alamos, the team ran 101 simulations with U.S. Forest Service data for Arizona pine forests to realistically represent the variability of forests. The simulations coupled fire and atmospheric factors -- such as wind moving through trees -- at fine scales to provide a 3D view of how fire, wind, and vegetation interact.
To understand how the forest structure affects fire behavior, Atchley and colleagues repeated simulations with minor changes in the forest structure, which they made by moving trees and randomizing tree shapes. Small changes had a monumental impact on fire behavior. However, despite highly variable fire behavior, observable forest characteristics, such as tree diversity and the size of a stand of trees or a clearing, also substantially control how fire spreads.
Results show that a more detailed and varied simulated forest decreases the forward spread of fire, due to a combination of fuel discontinuities and increased fine-scale turbulent wind structures. On the other hand, large clearings can increase fire spread.
|
Environment
| 2,021 |
January 26, 2021
|
https://www.sciencedaily.com/releases/2021/01/210126192236.htm
|
Solar material can 'self-heal' imperfections
|
A material that can be used in technologies such as solar power has been found to self-heal, a new study shows.
|
The findings -- from the University of York -- raise the prospect that it may be possible to engineer high-performance self-healing materials which could reduce costs and improve scalability, researchers say.
The substance, called antimony selenide (Sb2Se3), is a solar absorber material that can be used for turning light energy into electricity.
Professor Keith McKenna from the Department of Physics said: "The process by which this semi-conducting material self-heals is rather like how a salamander is able to re-grow limbs when one is severed. Antimony selenide repairs broken bonds created when it is cleaved by forming new ones."
"This ability is as unusual in the materials world as it is in the animal kingdom and has important implications for applications of these materials in optoelectronics and photochemistry."
The paper discusses how broken bonds in many other semiconducting materials usually result in poor performance. Researchers cite as an example another semiconductor, CdTe, that has to be chemically treated to fix the problem.
Professor McKenna added: "We discovered that antimony selenide and the closely related material, antimony sulphide, are able to readily heal broken bonds at surfaces through structural reconstructions, thereby eliminating the problematic electronic states."
Covalently bonded semiconductors like antimony selenide find widespread applications in electronics, photochemistry, photovoltaics and optoelectronics -- for example, solar panels and components for lighting and displays.
|
Environment
| 2021 |
January 26, 2021
|
https://www.sciencedaily.com/releases/2021/01/210126171621.htm
|
Researchers simplify the study of gene-environment interactions
|
Researchers at Weill Cornell Medicine and Cornell University's Ithaca campus have developed a new computational method for studying genetic and environmental interactions and how they influence disease risk.
|
The research was published Jan. 7. "Our study demonstrates that your genes matter and the environment matters, and that the interaction of the two can increase risk for disease," said co-senior author Dr. Olivier Elemento, who is professor of computational genomics in computational biomedicine, professor of physiology and biophysics, associate director of the HRH Prince Alwaleed Bin Talal Bin Abdulaziz Alsaud Institute for Computational Biomedicine, and director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine.

Typically, studying gene-environment interactions creates a huge computational challenge, said lead author Andrew Marderstein, a doctoral candidate in the Weill Cornell Graduate School of Medical Sciences whose research was conducted both in Dr. Elemento's lab in New York City and in Dr. Andrew Clark's lab in Ithaca, giving him immediate access to computational biology and population health expertise.

"Genotype-environment interaction can be thought of as the situation where some genotypes are much more sensitive to environmental insults than others," said Dr. Clark, co-senior author and Jacob Gould Schurman Professor of Population Genetics in the Department of Molecular Biology and Genetics in the College of Arts & Sciences and a Nancy and Peter Meinig Family Investigator at Cornell University. "These are exactly the cases where changes in the diet or other exposures might have the biggest improvement in health, but only for a subset of individuals."

The millions of genetic variants -- inherited genetic differences found between individuals in a population -- and different lifestyle and environmental factors, such as smoking, exercise and eating habits, can be analyzed for combined effects in numerous ways. When researchers test for gene-environment interactions, they typically analyze millions of data points in a pairwise fashion, meaning they assess one genetic variant and its interaction with one environmental factor at a time. This type of analysis can become quite labor intensive, said Marderstein.

The new computational method prioritizes and assesses a smaller number of variants in the genome -- the complete set of genetic material found in the body -- for gene-environment interactions. "We condensed a problem of analyzing 10 million different genetic variants to essentially analyzing only tens of variants in different regions of the genome," Marderstein said.

While a standard genetic association study might look at whether a single genetic variant leads to an average change in body mass index (BMI), this study assessed which genetic variants were associated with individuals being more likely to have a higher or lower BMI. The researchers found that looking for sections of DNA associated with the variance of a human characteristic, called variance quantitative trait loci or vQTLs, enabled them to more readily identify gene-environment interactions. Notably, the vQTLs associated with body mass index were also more likely to be associated with diseases that have large environmental influences.

Another area where the new computational method might be useful is determining how an individual might respond to a specific drug based on gene-environment interactions, said Marderstein. Analysis of social determinants of health -- a person's environmental and social conditions, such as poverty level and educational attainment -- is a third area the researchers are interested in pursuing, according to Dr. Elemento.

Overall, scientists in the precision medicine field are realizing they can sequence a person's DNA, in addition to assessing environmental factors such as air quality and physical activity, to better understand whether the individual is at risk of developing a specific disease. "The idea down the line is to use these concepts in the clinic," said Dr. Elemento. "This is part of the evolution of precision medicine, where we can now sequence somebody's genome very easily and then potentially analyze all of the variants in the genetic landscape that correlate with the risk of developing particular conditions."

Dr. Olivier Elemento is an equity stockholder in OneThreeBiotech, a company that uses biology-driven AI to accurately predict new potential therapeutics and to pinpoint the underlying biological mechanisms driving drug efficacy. Dr. Elemento is also an equity stockholder in Volastra Therapeutics, a company that aims to extend the lives of cancer patients by leveraging unique insights into chromosomal instability.
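To make the two-stage idea concrete, here is a minimal sketch of how a variance screen followed by a targeted interaction test could look. This is not the authors' code: the simulated genotypes, the BMI-like trait, the choice of Levene's test for the variance screen, and the ordinary-least-squares interaction model are all illustrative assumptions.

```python
# Hypothetical two-stage screen: (1) flag variants associated with trait
# *variance* (candidate vQTLs), (2) run explicit gene-environment interaction
# tests only on that short list.
import numpy as np
from scipy.stats import levene
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_people, n_variants = 2000, 500

genotypes = rng.integers(0, 3, size=(n_people, n_variants))   # 0/1/2 allele counts
environment = rng.normal(size=n_people)                        # e.g. an exercise score
# Simulate a BMI-like trait in which variant 0 interacts with the environment.
bmi = 25 + 0.8 * genotypes[:, 0] * environment + rng.normal(size=n_people)

# Stage 1: variance screen. Levene's test asks whether the trait's spread
# differs across the three genotype groups; a low p-value flags a candidate vQTL.
vqtl_p = np.array([
    levene(*(bmi[genotypes[:, j] == g] for g in (0, 1, 2))).pvalue
    for j in range(n_variants)
])
candidates = np.argsort(vqtl_p)[:10]   # keep only the top handful of variants

# Stage 2: explicit GxE test, but only for the shortlisted variants.
for j in candidates:
    X = sm.add_constant(np.column_stack([genotypes[:, j], environment,
                                         genotypes[:, j] * environment]))
    fit = sm.OLS(bmi, X).fit()
    print(f"variant {j}: interaction p = {fit.pvalues[3]:.2e}")
```

The payoff of this ordering is purely computational: the cheap variance screen prunes millions of variants down to a handful, so the more expensive pairwise interaction models are fit only where an interaction is plausible.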
|
Environment
| 2021 |