Date | Link | Title | Summary | Body | Category | Year
---|---|---|---|---|---|---|
December 9, 2020
|
https://www.sciencedaily.com/releases/2020/12/201209094258.htm
|
Microbes and plants: A dynamic duo
|
Drought stress has been a major roadblock to crop success, and this obstacle will not disappear anytime soon. Luckily, certain root-associated microbes and the plants they inhabit form a dynamic duo, like Batman and Robin, that is here to help.
|
Plants and animals have a close connection to the microbes like bacteria living on them. The microbes, the creatures they inhabit, and the environment they create all play a critical role for life on Earth.

"We know that microbiomes, which are the communities of microorganisms in a given environment, are very important for the health of plants," said Devin Coleman-Derr.

Coleman-Derr, a scientist at University of California, Berkeley, studies how drought impacts the microbiome of sorghum. He recently presented his research at the virtual 2020 ASA-CSSA-SSSA Annual Meeting.

Findings show that certain bacteria living in the roots of sorghum, a crop commonly grown for animal feed, work together with the plant to reduce drought stress. This unique pairing leads to overall plant success.

"Plants have hormones, which help plants decide how to spend their energy," says Coleman-Derr. "Microbes can manipulate the system and cause the decision-making process of plants to be altered."

Some bacteria and fungi are destined to inhabit certain plants. And, bacteria want the roots they inhabit to be their dream homes. If a bacterium partners with a plant to help it grow during dry weather, it is essentially building a better home for itself.

Virtually all aspects of the plant's life are connected to the microbes present. When a plant gets thirsty, it can send the entire microbiome into action.

Drought causes dramatic changes in how bacteria and plant partners interact. Additional bacteria may be recruited to help the plant survive the dry weather. These microbes can influence the plant's hormones to encourage more root growth, which will help the plant reach more water.

"We want to know if we can control this," said Coleman-Derr. "Is there the possibility to manipulate the microbiome present to help sorghum cope with drought stress?"

The resiliency of crops to environmental stress is of growing concern to both researchers and farmers, especially with the changes in global climates. New research findings are important to develop crops that can maintain productivity, even in harsher conditions.

"We recognize that the microbiome is dynamic and changes over time," said Coleman-Derr. "While the jury is still out on if we can control sorghum microbiomes, several labs have shown that some bacteria present during drought stress lead to positive outcomes for plants."

Understanding plant microbiomes is a large part of determining factors of crop productivity. Fortunately, plants are excellent models for studying microbiomes.

The next step in this quest is to determine if microbiomes can be manipulated and used as a solution for drought in crop production systems.

"By determining if we can alter the microbiome, we can work towards achieving our goal of creating better producing crops with less inputs," said Coleman-Derr.

Devin Coleman-Derr is a researcher for the Department of Plant and Microbial Biology at University of California, Berkeley. This research was supported by the United States Department of Agriculture Agricultural Research Service and the United States Department of Energy. The ASA-CSSA-SSSA Annual Meeting was hosted by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America.
|
Environment
| 2020 |
December 9, 2020
|
https://www.sciencedaily.com/releases/2020/12/201209094243.htm
|
Several U.S. populations and regions exposed to high arsenic concentrations in drinking water
|
A new national study of public water systems found that arsenic levels were not uniform across the U.S., even after implementation of the latest national regulatory standard. In the first study to assess differences in public drinking water arsenic exposures by geographic subgroups, researchers at Columbia University Mailman School of Public Health confirmed there are inequalities in drinking water arsenic exposure across certain sociodemographic subgroups and over time. Community water systems reliant on groundwater, serving smaller populations located in the Southwest, and Hispanic communities were more likely to continue exceeding the national maximum contaminant level, raising environmental justice concerns. The findings are published online.
|
"This research has important implications for public health efforts aimed at reducing arsenic exposure levels, and for advancing environmental justice," said Anne Nigra, PhD, postdoctoral research fellow in environmental health sciences, and first author. "Systematic studies of inequalities in public drinking water exposures have been lacking until now. These findings identify communities in immediate need of additional protective public health measures."'Our objective was to identify subgroups whose public water arsenic concentrations remained above 10 µg/L after the new maximum arsenic contaminant levels were implemented and, therefore, at disproportionate risk of arsenic-related adverse health outcomes such as cardiovascular disease, related cancers, and adverse birth outcomes," said Ana Navas-Acien, PhD, Professor of Environmental Health Sciences and senior author.Arsenic is a highly toxic human carcinogen and water contaminant present in many aquifers in the United States. Earlier research by the Columbia research team showed that reducing the MCL from 50 to 10 µg/L prevented an estimated 200-900 cancer cases per year.The researchers compared community water system arsenic concentrations during (2006-2008) versus after (2009-2011) the initial monitoring period for compliance with EPA's 10 µg/L arsenic maximum contaminant level (MCL). They estimated three-year average arsenic concentrations for 36,406 local water systems and 2,740 counties and compared differences in means and quantiles of water arsenic between both three-year periods for U.S. regions and sociodemographic subgroups.Analyses were based on data from two of the largest EPA databases of public water available. Using arsenic monitoring data from the Third Six Year Review period (2006-2011), the researchers studied approximately 13 million analytical records from 139,000 public water systems serving 290 million people annually. Included were data from 46 states, Washington D.C., the Navajo Nation, and American Indian tribes representing 95 percent of all public water systems and 92 percent of the total population served by public water systems nationally.For 2006-2008 to 2009-2011, the average community water system arsenic concentrations declined by 10 percent nationwide, by 11.4 percent for the Southwest, and by 37 percent for New England, respectively. Despite the decline in arsenic concentrations, public drinking water arsenic concentrations remained higher for several sociodemographic subgroups -- Hispanic communities, the Southwestern U.S, the Pacific Northwest, and the Central Midwest., in particular. Likewise, communities with smaller populations and reliant on groundwater were more likely to have high arsenic levels.The percent of community water systems with average concentrations arsenic above the 10 µg/L MCL was 2.3% in 2009-2011 vs. 3.2% in 2006-2008. 
Community water systems that were not compliant with the arsenic MCL were more likely in the Southwest (61 percent), served by groundwater (95 percent), serving smaller populations (an average of 1,102 persons), and serving Hispanic communities (38 percent).Nigra and Navas-Acien say that estimating public drinking water arsenic exposure for sociodemographic and geographic subgroups is critical for evaluating whether inequalities in arsenic exposure and compliance with the maximum contaminant levels persist across the U.S, to inform future national- and state-level arsenic regulatory efforts, and to investigate whether inequalities in exposure by subgroup contribute to disparities in arsenic-related disease. "Our findings will help address environmental justice concerns and inform public health interventions and regulatory action needed to eliminate exposure inequalities.""We urge continued state and federal funding for infrastructure and technical assistance support for small public water systems in order to reduce inequalities and further protect numerous communities in the U.S. affected by elevated drinking water arsenic exposure," said Nigra.
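The comparison described here reduces to simple bookkeeping: average each water system's arsenic measurements over a three-year window and flag averages above the 10 µg/L MCL. The sketch below illustrates that calculation on invented records; the column names and values are hypothetical and do not reflect the EPA Six Year Review data format.

```python
# Illustrative sketch (hypothetical data): three-year average arsenic per
# community water system and MCL exceedance, loosely mirroring the comparison
# of the 2006-2008 and 2009-2011 periods described in the article.
import pandas as pd

MCL_UG_PER_L = 10.0  # EPA arsenic maximum contaminant level (µg/L)

# Hypothetical monitoring records: one row per sample.
records = pd.DataFrame({
    "system_id": ["A", "A", "A", "B", "B", "B", "B"],
    "year":      [2006, 2007, 2010, 2006, 2008, 2009, 2011],
    "arsenic_ug_per_l": [12.0, 9.0, 8.0, 3.0, 4.0, 2.5, 2.0],
})

def period(year: int) -> str:
    """Assign each sample year to one of the two three-year windows."""
    if 2006 <= year <= 2008:
        return "2006-2008"
    if 2009 <= year <= 2011:
        return "2009-2011"
    return "other"

records["period"] = records["year"].map(period)

# Three-year average concentration per system and period.
averages = (records[records["period"] != "other"]
            .groupby(["system_id", "period"])["arsenic_ug_per_l"]
            .mean()
            .reset_index())

# Flag systems whose three-year average exceeds the MCL.
averages["exceeds_mcl"] = averages["arsenic_ug_per_l"] > MCL_UG_PER_L
print(averages)
```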
|
Environment
| 2020 |
December 9, 2020
|
https://www.sciencedaily.com/releases/2020/12/201209094304.htm
|
Engineers discover new microbe for simpler, cheaper and greener wastewater treatment
|
Researchers from the National University of Singapore (NUS) have developed a new way to treat sewage that is much simpler, cheaper and greener than existing methods.
|
Led by Associate Professor He Jianzhong from the Department of Civil and Environmental Engineering at the Faculty of Engineering, the NUS team found a new strain of bacterium called Thauera sp. strain SND5 that can remove both nitrogen and phosphorus from sewage.

The discovery was first reported in a scientific journal, and the team's new treatment method is also in the running for the International Water Association Project Innovation Awards 2021.

In sewage, nitrogen is present as ammonia while phosphorus is present as phosphates. Too much of either compound risks polluting the environment, so they must be removed before the treated water can be released.

Most existing sewage treatment systems use separate reactors for removing nitrogen and phosphorus, with different conditions for different microbes. Such a process is both bulky and expensive.

Some existing systems use a single reactor, but they are inefficient because different microbes in the same reactor will compete with one another for resources. This makes it difficult to maintain the delicate balance among the microbes, resulting in an overall lower efficiency.

Another problem with some existing sewage treatment methods is that they release nitrous oxide, a greenhouse gas. The NUS team's new microbe solves this problem as it converts the ammonia into harmless nitrogen gas instead. Additionally, phosphates originally present in sewage water were found to be removed.

The unique SND5 bacterium was discovered in a wastewater treatment plant in Singapore. When the NUS research team was carrying out routine monitoring, they observed an unexpected removal of nitrogen in the aerobic tanks, as well as better-than-expected phosphate removal despite the absence of known phosphorus-removing bacteria.

"This leads us to hypothesise the occurrence of a previously undescribed biological phenomenon, which we hope to understand and harness for further applications," said Assoc Prof He.

The NUS researchers then took wastewater samples from a tank, isolated various strains of bacteria, and tested each of them for their ability to remove nitrogen and phosphorus.

One of the strains, which appeared as sticky, creamy, light yellow blobs on the agar medium, surprised the researchers by its ability to remove both nitrogen and phosphorus from water. In fact, it did the job faster than the other microbes that were tested. The NUS team sequenced its genes and compared them to related bacteria in a global database. They then established it to be a new strain.

Compared to conventional nitrogen removal processes of nitrification and denitrification, the NUS team's way of using the newly identified microbe can save about 62 per cent of electricity due to its lower oxygen demand. This is of great significance as the aeration system in a wastewater treatment plant can consume nearly half of the plant's total energy.

Assoc Prof He explained, "Population and economic growth have inevitably led to the production of more wastewater, so it is important to develop new technologies that cost less to operate and produce less waste overall -- all while meeting treatment targets."

Meanwhile, the NUS researchers are looking to test their process at a larger scale, and formulate a "soup" of multiple microbes to boost SND5's performance even further.
|
Environment
| 2020 |
December 8, 2020
|
https://www.sciencedaily.com/releases/2020/12/201208111549.htm
|
New method to label and track nano-particles could improve our understanding of plastic pollution
|
A ground-breaking method to label and track manufactured nano-plastics could signal a paradigm shift in how we understand and care for environments, finds a new study.
|
Nano-plastics are particles with at least one dimension below one μm. While there has been growing awareness of the dangers of visible plastic pollution to marine life, nano-plastics are thought to be even more dangerous because they go unseen and smaller animals and fish can ingest them.

Nano-plastics are suspected of being released into the environment directly by commercial products and by the breakdown of larger pieces of plastic litter.

In a study published by the journal Communications Materials, researchers from the University of Surrey detail a new one-step polymerization method to label nano-polystyrene directly on the carbon backbone of the plastic. The new, simple method uses 14C-styrene and requires minimal reagents and equipment to create nano-particles in a wide range of sizes for use in simulated lab environments.

The team has used their new method to produce and investigate the behaviour of nano-plastics at low concentrations in a variety of scenarios -- including in bivalve molluscs.

Dr Maya Al Sid Cheikh, co-author of the study and Lecturer in Analytical Chemistry at the University of Surrey, said: "The truth is that the scientific community knows little about the effects and behaviour of nano-plastics in our environment because it's extraordinarily difficult to detect, track and measure such minute particles.

"Our new, simple method is a step in the right direction for correcting this knowledge gap as it allows researchers to replicate scenarios in which commercially produced nano-particles have customarily gone unnoticed."
|
Environment
| 2020 |
December 8, 2020
|
https://www.sciencedaily.com/releases/2020/12/201208163018.htm
|
Breakthrough material makes pathway to hydrogen use for fuel cells under hot, dry conditions
|
A collaborative research team, including Los Alamos National Laboratory, University of Stuttgart (Germany), University of New Mexico, and Sandia National Laboratories, has developed a proton conductor for fuel cells based on polystyrene phosphonic acids that maintain high protonic conductivity up to 200 C without water. They describe the material advance in a paper published this week.
|
"While the commercialization of highly efficient fuel-cell electric vehicles has successfully begun," said Yu Seung Kim, project leader at Los Alamos, "further technological innovations are needed for the next-generation fuel cell platform evolving towards heavy-duty vehicle applications. One of the technical challenges of current fuel cells is the heat rejection from the exothermic electrochemical reactions of fuel cells."We had been struggling to improve the performance of high-temperature membrane fuel cells after we had developed an ion-pair coordinated membrane in 2016," said Kim. "The ion-pair polymers are good for membrane use, but the high content of phosphoric acid dopants caused electrode poisoning and acid flooding when we used the polymer as an electrode binder."In current fuel cells, the heat rejection requirement is met by operating the fuel cell at a high cell voltage. To achieve an efficient fuel-cell powered engine, the operating temperature of fuel cell stacks must increase at least to the engine coolant temperature (100 C)."We believed that phosphonated polymers would be a good alternative, but previous materials could not be implemented because of undesirable anhydride formation at fuel cell operating temperatures. So we have focused on preparing phosphonated polymers that do not undergo the anhydride formation. Kerres' team at the University of Stuttgart was able to prepare such materials by introducing fluorine moiety into the polymer. It is exciting that we have now both membrane and ionomeric binder for high-temperature fuel cells," said Kim.Ten years ago, Atanasov and Kerres developed a new synthesis for a phosphonated poly(pentafluorostyrene) which consisted of the steps i) polymerization of pentafluorostyrene via radical emulsion polymerization and ii) phosphonation of this polymer by a nucleophilic phosphonation reaction. Surprisingly, this polymer showed a good proton conductivity being higher than Nafion in the temperature range >100°C, and an unexpected excellent chemical and thermal stability of >300°C.Atanasov and Kerres shared their development with Kim at Los Alamos, whose team in turn developed high-temperature fuel cells to use with the phosphonated polymers. With the integration of membrane electrode assembly with LANL's ion-pair coordinated membrane (Lee et al. Nature Energy, 1, 16120, 2016), the fuel cells employing the phosphonated polymer exhibited an excellent power density (1.13 W cm-2 under H2/O2 conditions with > 500 h stability at 160 C).What's next? "Reaching over 1 W cm-2 power density is a critical milestone that tells us this technology may successfully go to commercialization" said Kim. Currently, the technology is pursuing commercialization through the Department of Energy's ARPA-E and the Hydrogen and Fuel Cell Technologies Office within the Energy Efficiency and Renewable Energy Office (EERE).
|
Environment
| 2020 |
December 7, 2020
|
https://www.sciencedaily.com/releases/2020/12/201207195129.htm
|
Global trends in nature's contributions to people
|
In a new study published today, researchers assess global trends in nature's contributions to people and what their decline means for human well-being.
|
"There are many ways that nature provides benefits to people -- from the production of material goods to non-material benefits, and the benefits of natural ecology that regulate environmental conditions," said Kate Brauman, lead author and a lead scientist at the U of M Institute on the Environment (IonE). "We are in a much better position to identify the problems in the way we are managing nature, and that gives us a path forward to manage it better."The study looked at a variety of peer-reviewed papers addressing wide-ranging elements of trends in nature and associated impacts on people. The study found that: global declines in most of nature's contributions to people over the past 50 years, such as natural regulations of water pollutants; negative impacts on people's well-being are already occurring, including reductions in crop yields from declining pollinator populations and soil productivity and increased exposure to flooding and storms as coastal ecosystems are degraded; and understanding and tracking nature's contributions to people provides critical feedback that can improve our ability to manage earth systems effectively, equitably and sustainably."This paper highlights the value of nature's contributions to our well-being," said co-author Steve Polasky, an IonE fellow and a professor in the College of Biological Sciences. "By making these values more visible, we hope that actions are taken to protect nature, so that nature can continue to provide benefits for future generations."The work of this study builds from the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) Global Assessment.
|
Environment
| 2020 |
December 7, 2020
|
https://www.sciencedaily.com/releases/2020/12/201207112323.htm
|
Java's protective mangroves smothered by plastic waste
|
The mangrove forests on Java's north coast are slowly suffocating in plastic waste. The plastic problem in Southeast Asia is huge and a growing threat to the region's mangroves, a natural ally against coastal erosion. Based on recently published fieldwork, researchers show how plastic waste is smothering the trees.
|
Van Bijsterveldt has monitored the accumulation of plastic waste in Indonesian mangroves over years. Most of it includes household litter, carried from the inland to the coastal area by local rivers. Ultimately, the waste gets stuck in the last stronghold between land and sea. Van Bijsterveldt: 'Mangroves form a perfect plastic trap.' For the mangrove tree, this trap can become quite lethal. The most common mangrove tree on Java's coast, the grey mangrove, has upward-growing roots to get oxygen flowing during high tide. 'You can look at these roots as snorkels,' says Van Bijsterveldt. 'When plastic waste accumulates in these forests, the snorkels are blocked.' In areas completely covered by plastic, trees suffocate.

On the forest floor of mangroves along the north coast, it is hard to find a square metre without plastic. 'On average, we found 27 plastic items per square metre,' recounts Van Bijsterveldt. At several locations, plastic covered half of the forest floor. The problem isn't only the plastic on the surface. The team found plastics buried as deep as 35 cm inside the sediment. Plastic stuck in these upper layers further decreases the trees' access to oxygen. Still, Van Bijsterveldt was impressed by the trees' resilience. 'The roots change course when they are obstructed. They grow around the plastic. When half of the forest floor is covered, the tree still gets enough oxygen to keep its leaves.' However, the prospect of survival gets much gloomier once the threshold of 75% is reached and plastic in the sediment pushes it towards 100%. 'We've seen roots stuck inside plastic bags. Trying to find a way out, they just grow in circles. Eventually trees that cannot outgrow the plastic die.'

Battling erosion and breaking waves

In cooperation with NGOs and local communities, Van Bijsterveldt works on mangrove restoration projects to prevent further erosion. Over the years many mangrove forests had to make way for rice paddies and later aquaculture ponds, a business model that brings fast profits but lacks sustainability as it accelerates erosion. That is no small problem in a region threatened by coastal loss and rapid subsidence, with no financial means to build high-cost and high-maintenance solutions like dykes. Van Bijsterveldt: 'Mangroves form a low-cost, natural defence for the coastal communities. They act like wave breakers and can prevent erosion by trapping sediment from the water.'

Restoration brings more benefits. Healthy mangroves mean healthy fish populations and a sustainable fishing economy. The tourism industry is also discovering the forests as a growing attraction that boosts the local economy. The Indonesian government is investing in mangrove restoration in an attempt to re-create a green belt along the coast. But restoration is slow and existing forests are stressed. Van Bijsterveldt saw attempts at planting new mangroves fail: 'There is so much focus on increasing the initial number of mangrove seedlings that the challenges posed by plastic waste to the actual survival of young trees are overlooked. Replanting mangroves without tackling the plastic problem is like trying to empty the ocean with a thimble. Successful restoration needs to go hand in hand with sustainable waste management.'
|
Environment
| 2020 |
December 7, 2020
|
https://www.sciencedaily.com/releases/2020/12/201207091259.htm
|
More responsive COVID-19 wastewater testing
|
Accurately identifying changes in community COVID-19 infections through wastewater surveillance is moving closer to reality, according to a new study.
|
Testing wastewater -- a robust source of COVID-19 as those infected shed the virus in their stool -- could be used for more responsive tracking and supplementing information public health officials rely on when evaluating efforts to contain the virus, such as enhanced public health measures and even vaccines when they become available.The test works by identifying and measuring genetic material in the form of RNA from SARS-COV-2, the virus that causes COVID-19. "This work confirms that trends in concentrations of SARS-CoV-2 RNA in wastewater tracks with trends of new COVID-19 infections in the community. Wastewater data complements the data from clinical testing and may provide additional insight into COVID-19 infections within communities," said co-senior author Alexandria Boehm, a Stanford professor of civil and environmental engineering.As the U.S. grapples with record-breaking daily transmission rates, obtaining more information to track surges and inform public health policies in local communities remains key to managing the deadly virus. COVID-19 can be particularly hard to track, with many asymptomatic or mild cases going undetected. Those who do get tested can still spread the infection before they receive test results, inhibiting quick identification, treatment and isolation to slow the spread. Faster identification of case spikes could allow local officials to act more quickly before the disease reaches a crucial tipping point where transmission becomes difficult to contain and hospitalizations overwhelm the local health system.Tracking COVID-19 through wastewater surveillance of RNA is gaining steam across the country and could alert decision-makers about potential outbreaks days before individuals recognize symptoms of the virus. The viral RNA can be isolated from sewage in wastewater treatment facilities and identified through a complicated and highly technical recovery process, with the relative amounts in wastewater correlating to the number of cases. Anyone with a toilet connected to a sewer system could be depositing these biological samples on a regular basis, making wastewater sampling an inclusive source of information about COVID-19 in a community.With this in mind, the researchers sought to advance the effectiveness and accuracy of wastewater surveillance for COVID-19 by comparing the ability to detect the virus in two forms of wastewater -- a mostly liquid influent or a settled solid (sediment settled in a tank). Most current research focuses on influent samples; however, the team notes many viruses have an affinity for solids and expected higher concentrations of the virus in these samples, which could improve detection and consistency.The researchers found the settled solid samples had higher concentrations and better detection of SARS-CoV-2 compared to the liquid versions. "These results confirmed our early thinking that targeting the solids in wastewater would lead to sensitive and reproducible measurements of COVID-19 in a community. This means that we can track upward trends when cases are still relatively low," said co-senior author Krista Wigginton, an associate professor in civil & environmental engineering from the University of Michigan. Wigginton and Boehm co-lead the research.The researchers then tested about 100 settled solid samples from the San Jose-Santa Clara Regional Wastewater Facility from mid-March to mid-July 2020, tallying daily concentration numbers. 
Using statistical modeling they compared these concentrations with COVID-19 confirmed cases provided by the county. Their results tracked the trend of the county's cases, decreasing in both May and June and peaking in July.The research presents a possible way to identify new outbreaks, find hotspots, confirm the decrease of cases and inform public health interventions. As schools reopen, the technology could be implemented by districts to identify whether community virus circulation is decreasing. It also has the potential to be used in areas lacking the resources for robust individual clinical testing, such as testing sites in Illinois that reportedly closed early after running out of tests.There are still pieces of information needed to better understand the limitations of wastewater testing and improve what can be gleaned, the researchers note. The virus's rate of decay in wastewater, the extent and timeline of viral RNA shedding when sick and varying operations of different wastewater plants all have the potential to impact results. Future studies on these factors could lead to better insights about case trends.The team is launching a new pilot this month to sample up to eight wastewater treatment plants within California daily, with a 24-hour turnaround time. The pilot aims to better understand what types of almost real-time data are useful to public health officials. Implementing the methods and framework developed by the team and pilot study could also be used in the future to monitor wastewater for pathogens beyond COVID-19 circulating within communities.Boehm is also a senior fellow at the Stanford Woods Institute for the Environment and an affiliate of the Stanford Program on Water, Health & Development. Additional authors are: Katherine Graham, Stephanie Loeb, Marlene Wolfe, Sooyeol Kim, Lorelay Mendoza and Laura Roldan-Hernandez, Stanford Civil & Environmental Engineering; David Catoe, SLAC National Accelerator Laboratory; Nasa Sinnott-Armstrong, Stanford School of Medicine; Kevan Yamahara, Monterey Bay Aquarium Research Institute; Lauren Sassoubre, University of San Francisco, Engineering; Linlin Li, County of Santa Clara Public Health Department; Kathryn Langenfeld, University of Michigan, Civil & Environmental Engineering.Payal Sarkar, Noel Enoki and Casey Fitzgerald from the City of San José Environmental Services Department also contributed to the project.
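At its core, the comparison described above is a correlation between a per-day wastewater RNA signal and confirmed-case counts. The minimal sketch below shows that kind of trend check with made-up numbers; the published study's statistical model and normalization steps are more involved than this.

```python
# Minimal sketch (hypothetical numbers): compare a wastewater RNA time series
# against confirmed-case counts with a rank correlation. Not the study's model.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical daily values over ten sampling days.
rna_copies_per_g = np.array([1.2e4, 9.0e3, 7.5e3, 8.0e3, 1.1e4,
                             1.6e4, 2.1e4, 2.7e4, 3.3e4, 4.0e4])
confirmed_cases  = np.array([160, 140, 120, 130, 170,
                             220, 300, 380, 450, 520])

rho, p_value = spearmanr(rna_copies_per_g, confirmed_cases)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")

# A simple smoothing step (3-day moving average) of the kind often applied
# before comparing noisy wastewater signals with case trends.
kernel = np.ones(3) / 3
smoothed_rna = np.convolve(rna_copies_per_g, kernel, mode="valid")
print("Smoothed RNA signal:", np.round(smoothed_rna, 1))
```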
|
Environment
| 2020 |
December 4, 2020
|
https://www.sciencedaily.com/releases/2020/12/201204155422.htm
|
Crystals may help reveal hidden Kilauea Volcano behavior
|
Scientists striving to understand how and when volcanoes might erupt face a challenge: many of the processes take place deep underground in lava tubes churning with dangerous molten Earth. Upon eruption, any subterranean markers that could have offered clues leading up to a blast are often destroyed.
|
But by leveraging observations of tiny crystals of the mineral olivine formed during a violent eruption that took place in Hawaii more than half a century ago, Stanford University researchers have found a way to test computer models of magma flow, which they say could reveal fresh insights about past eruptions and possibly help predict future ones."We can actually infer quantitative attributes of the flow prior to eruption from this crystal data and learn about the processes that led to the eruption without drilling into the volcano," said Jenny Suckale, an assistant professor of geophysics at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "That to me is the Holy Grail in volcanology."The millimeter-sized crystals were discovered entombed in lava after the 1959 eruption of Kilauea Volcano in Hawaii. An analysis of the crystals revealed they were oriented in an odd, but surprisingly consistent pattern, which the Stanford researchers hypothesized was formed by a wave within the subsurface magma that affected the direction of the crystals in the flow. They simulated this physical process for the first time in a study published in "I always had the suspicion that these crystals are way more interesting and important than we give them credit for," said Suckale, who is senior author on the study.It was a chance encounter that prompted Suckale to act upon her suspicion. She had an insight while listening to a Stanford graduate student's presentation about microplastics in the ocean, where waves can cause non-spherical particles to assume a consistent misorientation pattern. Suckale recruited the speaker, then-PhD student Michelle DiBenedetto, to see if the theory could be applied to the odd crystal orientations from Kilauea."This is the result of the detective work of appreciating the detail as the most important piece of evidence," Suckale said.Along with Zhipeng Qin, a research scientist in geophysics, the team analyzed crystals from scoria, a dark, porous rock that forms upon the cooling of magma containing dissolved gases. When a volcano erupts, the liquid magma -- known as lava once it reaches the surface -- is shocked by the cooler atmospheric temperature, quickly entrapping the naturally occurring olivine crystals and bubbles. The process happens so rapidly that the crystals cannot grow, effectively capturing what happened during eruption.The new simulation is based on crystal orientations from Kilauea Iki, a pit crater next to the main summit caldera of Kilauea Volcano. It provides a baseline for understanding the flow of Kilauea's conduit, the tubular passage through which hot magma below ground rises to the Earth's surface. Because the scoria can be blown several hundred feet away from the volcano, these samples are relatively easy to collect. "It's exciting that we can use these really small-scale processes to understand this huge system," said DiBenedetto, the lead author of the study, now a postdoctoral scholar at the Woods Hole Oceanographic Institution.In order to remain liquid, the material within a volcano needs to be constantly moving. The team's analysis indicates the odd alignment of the crystals was caused by magma moving in two directions at once, with one flow directly atop the other, rather than pouring through the conduit in one steady stream. 
Researchers had previously speculated this could happen, but a lack of direct access to the molten conduit barred conclusive evidence, according to Suckale."This data is important for advancing our future research about these hazards because if I can measure the wave, I can constrain the magma flow -- and these crystals allow me to get at that wave," Suckale said.Monitoring Kilauea from a hazard perspective is an ongoing challenge because of the active volcano's unpredictable eruptions. Instead of leaking lava continuously, it has periodic bursts resulting in lava flows that endanger residents on the southeast side of the Big Island of Hawaii.Tracking crystal misorientation throughout the different stages of future Kilauea eruptions could enable scientists to deduce conduit flow conditions over time, the researchers say."No one knows when the next episode is going to start or how bad it's going to be -- and that all hinges on the details of the conduit dynamics," Suckale said.Suckale is also a fellow, by courtesy, of the Stanford Woods Institute for the Environment, an assistant professor, by courtesy, in Civil and Environmental Engineering and a member of the Stanford Institute for Computational and Mathematical Engineering (ICME).
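Before any flow model can be tested against the crystals, the measured orientations have to be summarized. Crystal long axes are axial data (an orientation of 10 degrees is the same as one of 190 degrees), so a standard trick is to double the angles before averaging. The snippet below is a generic sketch of that calculation with invented angles; it is not the simulation used in the study.

```python
# Generic sketch: mean orientation and clustering strength for axial data
# (crystal long axes), using the angle-doubling method. Angles are invented.
import numpy as np

angles_deg = np.array([32.0, 28.0, 35.0, 30.0, 41.0, 25.0, 33.0])  # hypothetical

# Double the angles so that orientations 180 degrees apart map to the same point.
doubled = np.deg2rad(2.0 * angles_deg)
mean_sin, mean_cos = np.sin(doubled).mean(), np.cos(doubled).mean()

# Mean orientation (halve the angle back) and resultant length R in [0, 1]:
# R near 1 means the crystals share a consistent orientation.
mean_orientation_deg = 0.5 * np.rad2deg(np.arctan2(mean_sin, mean_cos)) % 180.0
resultant_length = np.hypot(mean_sin, mean_cos)

print(f"Mean orientation: {mean_orientation_deg:.1f} deg")
print(f"Alignment strength R: {resultant_length:.2f}")
```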
|
Environment
| 2020 |
December 4, 2020
|
https://www.sciencedaily.com/releases/2020/12/201204105939.htm
|
Scientists took a rare chance to prove we can quantify biodiversity by 'testing the water'
|
Organisms excrete DNA in their surroundings through metabolic waste, sloughed skin cells or gametes, and this genetic material is referred to as environmental DNA (eDNA).
|
As eDNA can be collected directly from water, soil or air, and analysed using molecular tools with no need to capture the organisms themselves, this genetic information can be used to report biodiversity in bulk. For instance, the presence of many fish species can be identified simultaneously by sampling and sequencing eDNA from water, while avoiding harmful capture methods, such as netting, trapping or electrofishing, currently used for fish monitoring.

While the eDNA approach has already been applied in a number of studies concerning fish diversity in different types of aquatic habitats (rivers, lakes and marine systems), its efficiency in quantifying species abundance (number of individuals per species) is yet to be determined. Even though previous studies, conducted in controlled aquatic systems, such as aquaria, experimental tanks and artificial ponds, have reported positive correlation between the DNA quantity found in the water and the species abundance, it remains unclear how the results would fare in natural environments.

However, a research team from the University of Hull, together with the Environment Agency (United Kingdom), took the rare opportunity to use an invasive species eradication programme carried out in a UK fishery farm as the ultimate case study to evaluate the success rate of eDNA sampling in identifying species abundance in natural aquatic habitats. Their findings were published in an open-access, peer-reviewed journal.

"Investigating the quantitative power of eDNA in natural aquatic habitats is difficult, as there is no way to ascertain the real species abundance and biomass (weight) in aquatic systems, unless catching all target organisms out of water and counting/measuring them all," explains Cristina Di Muri, PhD student at the University of Hull.

During the eradication, the original fish ponds were drained and all fish, except the problematic invasive species, the topmouth gudgeon, were placed in a new pond, while the original ponds were treated with a piscicide to remove the invasive fish. After the eradication, the fish were returned to their original ponds. In the meantime, all individuals were counted, identified and weighed by experts, allowing for the precise estimation of fish abundance and biomass.

"We then carried out our water sampling and ran genetic analysis to assess the diversity and abundance of fish genetic sequences, and compared the results with the manually collected data. We found strong positive correlations between the amount of fish eDNA and the actual fish species biomass and abundance, demonstrating the existence of a strong association between the amount of fish DNA sequences in water and the actual fish abundance in natural aquatic environments," reports Di Muri.

The scientists successfully identified all fish species in the ponds: from the most abundant (i.e. 293 carp of 852 kg total weight) to the least abundant ones (i.e. one chub of 0.7 kg), indicating the high accuracy of the non-invasive approach.

"Furthermore, we used different methods of eDNA capture and eDNA storage, and found that results of the genetic analysis were comparable across different eDNA approaches.
This consistency allows for a certain flexibility of eDNA protocols, which is fundamental to maintain results comparable across studies and, at the same time, choose the most suitable strategy, based on location surveyed or resources available," elaborates Di Muri."The opportunity of using eDNA analysis to accurately assess species diversity and abundance in natural environments will drive a step change in future species monitoring programmes, as this non-invasive, flexible tool is adaptable to all aquatic environments and it allows quantitative biodiversity surveillance without hampering the organisms' welfare."
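The central check described here is whether the per-species eDNA signal scales with the independently measured counts and biomass. The toy sketch below illustrates that comparison; the read counts are invented (only the carp and chub figures echo the article), and the study's actual sequencing and statistical pipeline differs.

```python
# Toy sketch (mostly invented numbers): correlate per-species eDNA signal with
# the fish counts and biomass obtained when the ponds were drained.
import numpy as np
from scipy.stats import pearsonr

species    = ["carp", "tench", "rudd", "perch", "chub"]
edna_reads = np.array([54000.0, 8200.0, 3100.0, 900.0, 40.0])  # hypothetical
biomass_kg = np.array([852.0, 120.0, 45.0, 9.0, 0.7])          # carp/chub from article, rest invented
abundance  = np.array([293, 75, 40, 12, 1])                    # carp/chub from article, rest invented

# Log-transform to tame the large spread before computing correlations.
log_reads = np.log10(edna_reads)
r_biomass, p_biomass = pearsonr(log_reads, np.log10(biomass_kg))
r_abund, p_abund = pearsonr(log_reads, np.log10(abundance))

print(f"eDNA vs biomass:   r = {r_biomass:.2f} (p = {p_biomass:.3g})")
print(f"eDNA vs abundance: r = {r_abund:.2f} (p = {p_abund:.3g})")
```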
|
Environment
| 2020 |
December 3, 2020
|
https://www.sciencedaily.com/releases/2020/12/201203144239.htm
|
Researchers discover life in deep ocean sediments at or above water's boiling point
|
An international research team that included three scientists from the University of Rhode Island's Graduate School of Oceanography has discovered single-celled microorganisms in a location where they didn't expect to find them.
|
"Water boils on the (Earth's) surface at 100 degrees Celsius, and we found organisms living in sediments at 120 degrees Celsius," said URI Professor of Oceanography Arthur Spivack, who led the geochemistry efforts of the 2016 expedition organized by the Japan Agency for Marine-Earth Science and Technology and Germany's MARUM-Center for Marine and Environmental Sciences at the University of Bremen. The study was carried out as part of the work of Expedition 370 of the International Ocean Discovery Program.The research results from a two-month-long expedition in 2016 will be published today in the journal The news follows an announcement in October that microbial diversity below the seafloor is as rich as on Earth's surface. Researchers on that project from the Japan marine-earth science group, Bremen University, the University of Hyogo, University of Kochi and University of Rhode Island, discovered 40,000 different types of microorganisms from core samples from 40 sites around the globe.The research published in Spivack, who was joined by recent Ph.D. graduates, Kira Homola and Justine Sauvage, on the URI team, said one way to identify life is to look for evidence of metabolism."We found chemical evidence of the organisms' use of organic material in the sediment that allows them to survive," Spivack said. The URI team also developed a model for the temperature regime of the site."This research tells us that deep sediment is habitable in places that we did think possible," he added.While this is exciting news on its own, Spivack said the research could point to the possibility of life in harsh environments on other planets.According to the study, sediments that lie deep below the ocean floor are harsh habitats. Temperature and pressure steadily increase with depth, while the energy supply becomes increasingly scarce. It has only been known for about 30 years that, in spite of these conditions, microorganisms do inhabit the seabed at depths of several kilometers. The deep biosphere is still not well understood, and this brings up fundamental questions: Where are the limits of life, and what factors determine them? To study how high temperatures affect life in the low-energy deep biosphere over the long-term, extensive deep-sea drilling is necessary."Only a few scientific drilling sites have yet reached depths where temperatures in the sediments are greater than 30 degrees Celsius," explains study leader Hinrichs of MARUM. "The goal of the T-Limit Expedition, therefore, was to drill a thousand-meter deep hole into sediments with a temperature of up to 120 degrees Celsius -- and we succeeded."Like the search for life in outer space, determining the limits of life on the Earth is fraught with great technological challenges, the research study says."Surprisingly, the microbial population density collapsed at a temperature of only about 45 degrees," says co-chief scientist Fumio Inagaki of JAMSTEC. "It is fascinating -- in the high-temperature ocean floor, there are broad depth intervals that are almost lifeless. 
But then we were able to detect cells and microbial activity again in deeper, even hotter zones -- up to a temperature of 120 degrees."Spivack said the project was like going back to his roots, as he and David Smith, professor of oceanography and associate dean of URI's oceanography school, where they were involved in a drilling expedition at the same site about 20 years ago, an expedition that helped initiate the study of the deeply buried marine biosphere.As for the current project, Spivack said studies will continue on the samples the team collected. "The technology to examine samples collected from the moon took several years to be developed, and the same will be true for these samples from deep in the ocean sediments. We are developing the technology now to continue our research."
|
Environment
| 2020 |
December 3, 2020
|
https://www.sciencedaily.com/releases/2020/12/201203144228.htm
|
Tire-related chemical is largely responsible for adult coho salmon deaths in urban streams
|
Every fall more than half of the coho salmon that return to Puget Sound's urban streams die before they can spawn. In some streams, all of them die. But scientists didn't know why.
|
Now a team led by researchers at the University of Washington Tacoma, UW and Washington State University Puyallup have discovered the answer. When it rains, stormwater flushes bits of aging vehicle tires on roads into neighboring streams. The killer is in the mix of chemicals that leach from tire wear particles: a molecule related to a preservative that keeps tires from breaking down too quickly.This research was published Dec. 3 in "Most people think that we know what chemicals are toxic and all we have to do is control the amount of those chemicals to make sure water quality is fine. But, in fact, animals are exposed to this giant chemical soup and we don't know what many of the chemicals in it even are," said co-senior author Edward Kolodziej, an associate professor in both the UW Tacoma Division of Sciences & Mathematics and the UW Department of Civil & Environmental Engineering."Here we started with a mix of 2,000 chemicals and were able to get all the way down to this one highly toxic chemical, something that kills large fish quickly and we think is probably found on every single busy road in the world."Coho salmon are born in freshwater streams. After spending the first year of their lives there, these fish make the epic journey out to sea where they live out most of their adult lives. A few -- about 0.1% -- return to their original streams to lay their eggs, or spawn, before dying. But researchers started noticing that, especially after a big rain, returning salmon were dying before they could spawn. The search for the coho-killer started with investigating the water quality of the creeks, a multi-agency effort led by NOAA-Fisheries and including the U.S. Fish and Wildlife Services, King County, Seattle Public Utilities and the Wild Fish Conservancy."We had determined it couldn't be explained by high temperatures, low dissolved oxygen or any known contaminant, such as high zinc levels," said co-senior author Jenifer McIntyre, an assistant professor at WSU's School of the Environment, based in Puyallup. "Then we found that urban stormwater runoff could recreate the symptoms and the acute mortality. That's when Ed's group reached out to see if they could help us understand what was going on chemically."First the team narrowed down what in stormwater runoff could be behind the symptoms. The researchers compared water from creeks where salmon were seen dying to look for common trends. All creek samples contained a chemical signature associated with tire wear particles. In addition, a study led by McIntyre found that a solution made from tire wear particles was highly toxic to salmon.But tire wear particles are a mixture of hundreds of different chemicals, so the team had a challenge ahead: How to find the culprit?The researchers started by sectioning the tire wear particle solution according to different chemical properties, such as removing all metals from the solution. Then they tested the different solutions to see which ones were still toxic to salmon in the lab. They repeated this process until only a few chemicals remained, including one that appeared to dominate the mixture but didn't match anything known."There were periods last year when we thought we might not be able to get this identified. We knew that the chemical that we thought was toxic had 18 carbons, 22 hydrogens, two nitrogens and two oxygens. And we kept trying to figure out what it was," said lead author Zhenyu Tian, a research scientist at the Center for Urban Waters at UW Tacoma. 
"Then one day in December, it was just like bing! in my mind. The killer chemical might not be a chemical directly added to the tire, but something related."Tian searched a list of chemicals known to be in tire rubber for anything that might be similar to their unknown -- give or take a few hydrogens, oxygens or nitrogens -- and found something called 6PPD, which is used to keep tires from breaking down too quickly."It's like a preservative for tires," Tian said. "Similar to how food preservatives keep food from spoiling too quickly, 6PPD helps tires last by protecting them from ground-level ozone."Ozone, a gas created when pollutants emitted by cars and other chemical sources react in the sunlight, breaks the bonds holding the tire together. 6PPD helps by reacting with ozone before it can react with the tire rubber, sparing the tires.But when 6PPD reacts with ozone, the researchers found that it was transformed into multiple chemicals, including 6PPD-quinone (pronounced "kwih-known"), the toxic chemical that is responsible for killing the salmon.This chemical is not limited to the Puget Sound region. The team also tested roadway runoff from Los Angeles and urban creeks near San Francisco, and 6PPD-quinone was present there as well. This finding is unsurprising, the researchers said, because 6PPD appears to be used in all tires and tire wear particles are likely present in creeks near busy roads across the world.Now that 6PPD-quinone has been identified as the "smoking gun" behind coho death in freshwater streams, the team can start to understand why this chemical is so toxic."How does this quinone lead to toxicity in coho? Why are other species of salmon, such as chum salmon, so much less sensitive?" McIntyre asked. "We have a lot to learn about which other species are sensitive to stormwater or 6PPD-quinone within, as well as outside, of the Puget Sound region."One way to protect salmon and other creatures living in the creeks is to treat stormwater before it hits the creeks. But, while tests have shown that there are effective environmentally friendly stormwater technologies for removing 6PPD-quinone, it would be almost impossible to build a treatment system for every road, the team added.Another option is to change the composition of the tires themselves to make them "salmon-safe.""Tires need these preservative chemicals to make them last," Kolodziej said. "It's just a question of which chemicals are a good fit for that and then carefully evaluating their safety for humans, aquatic organisms, etc. We're not sure what alternative chemical we would recommend, but we do know that chemists are really smart and have many tools in their toolboxes to figure out a safer chemical alternative."This research was funded by the National Science Foundation, the U.S. Environmental Protection Agency, Washington State Governors Funds and the Regional Monitoring Program for Water Quality in San Francisco Bay.Grant numbers: NSF: 1608464 and 1803240, EPA: #01J18101 and #DW-014-92437301
|
Environment
| 2020 |
December 3, 2020
|
https://www.sciencedaily.com/releases/2020/12/201203144224.htm
|
Shuttering fossil fuel power plants may cost less than expected
|
Decarbonizing U.S. electricity production will require both construction of renewable energy sources and retirement of power plants now powered by fossil fuels. A generator-level model described in a new Policy Forum article suggests that shutting those plants down by 2035 may cost less than expected.
|
Meeting a 2035 deadline for decarbonizing U.S. electricity production, as proposed by the incoming U.S. presidential administration, would eliminate just 15% of the capacity-years left in plants powered by fossil fuels, says the article by Emily Grubert, a Georgia Institute of Technology researcher. Plant retirements are already underway, with 126 gigawatts of fossil generator capacity taken out of production between 2009 and 2018, including 33 gigawatts in 2017 and 2018 alone."Creating an electricity system that does not contribute to climate change is actually two processes -- building carbon-free infrastructure like solar plants, and closing carbon-based infrastructure like coal plants," said Grubert, an assistant professor in Georgia Tech's School of Civil and Environmental Engineering. "My work shows that because a lot of U.S. fossil fuel plants are already pretty old, the target of decarbonization by 2035 would not require us to shut most of these plants down earlier than their typical lifespans."Of U.S. fossil fuel-fired generation capacity, 73% (630 out of 840 gigawatts) will reach the end of its typical lifespan by 2035; that percentage would reach 96% by 2050, she says in the Policy Forum article. About 13% of U.S. fossil fuel-fired generation capacity (110 GW) operating in 2018 had already exceeded its typical lifespan.Because typical lifespans are averages, some generators operate for longer than expected. Allowing facilities to run until they retire is thus likely insufficient for a 2035 decarbonization deadline, the article notes. Closure deadlines that strand assets relative to reasonable lifespan expectations, however, could create financial liability for debts and other costs. The research found that a 2035 deadline for completely retiring fossil-based electricity generators would only strand about 15% (1700 gigawatt-years) of fossil fuel-fired capacity life, along with about 20% (380,000 job-years) of direct power plant and fuel extraction jobs that existed in 2018.In 2018, fossil fuel facilities operated in 1,248 of 3,141 counties, directly employing about 157,000 people at generators and fuel-extraction facilities. Plant closure deadlines can improve outcomes for workers and host communities by providing additional certainty, for example, by enabling specific advance planning for things like remediation, retraining for displaced workers, and revenue replacements."Closing large industrial facilities like power plants can be really disruptive for the people that work there and live in the surrounding communities," Grubert said. "We don't want to repeat the damage we saw with the collapse of the steel industry in the 70s and 80s, where people lost jobs, pensions, and stability without warning. We already know where the plants are, and who might be affected: using the 2035 decarbonization deadline to guide explicit, community grounded planning for what to do next can help, even without a lot of financial support."Planning ahead will also help avoid creating new capital investment where that may not be needed long-term. 
"We shouldn't build new fossil fuel power plants that would still be young in 2035, and we need to have explicit plans for closures both to ensure the system keeps working and to limit disruption for host communities," she said.Underlying policies governing the retirement of fossil fuel-powered facilities is the concept of a "just transition" that ensures material well-being and distributional justice for individuals and communities affected by a transition from fossil to non-fossil electricity systems. Determining which assets are "stranded," or required to close earlier than expected absent policy, is vital for managing compensation for remaining debt and/or lost revenue, Grubert said in the article.
|
Environment
| 2,020 |
December 3, 2020
|
https://www.sciencedaily.com/releases/2020/12/201203094533.htm
|
Scientists predict 'optimal' stress levels
|
Scientists have created an evolutionary model to predict how animals should react in stressful situations.
|
Almost all organisms have fast-acting stress responses, which help them respond to threats -- but being stressed uses energy, and chronic stress can be damaging.

The new study -- by an international team including the University of Exeter -- suggests most animals remain stressed for longer than is optimal after a stress-inducing incident.

The reasons for this are not clear, but one possibility is that there is a limit to how quickly the body can remove stress hormones from circulation.

"We have created one of the first mathematical models to understand how organisms have evolved to deal with stressful events," said Dr Tim Fawcett, of the University of Exeter.

"It combines existing research on stress physiology in a variety of organisms with analysis of optimal responses that balance the costs and benefits of stress."

"We know stress responses vary hugely between different species and even among individuals of the same species -- as we see in humans."

"Our study is a step towards understanding why stress responses are so variable."

The researchers define stress as the process of an organism responding to "stressors" (threats and challenges in their environment), including both detection and the stress response itself.

A key point highlighted in the study is the importance of how predictable threats are.

The model suggests that an animal living in a dangerous environment should have a high "baseline" stress level, while an animal in a safer environment would benefit from being able to raise and reduce stress levels rapidly.

"Our approach reveals environmental predictability and physiological limits as key factors shaping the evolution of stress responses," said lead author Professor Barbara Taborsky, of the University of Bern.

"More research is needed to advance scientific understanding of how this core physiological system has evolved."
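The trade-off behind an "optimal" baseline can be caricatured in a few lines: a higher baseline stress level buys more protection when a threat actually arrives, but costs energy all the time. The toy calculation below, with made-up benefit and cost functions, only illustrates why a more dangerous environment pushes the optimum upward; it is not the published evolutionary model.

```python
# Toy illustration (invented functions): pick the baseline stress level that
# maximises expected payoff for a given threat probability.
import numpy as np

def expected_payoff(baseline: np.ndarray, threat_prob: float,
                    benefit: float = 10.0, half_sat: float = 1.0,
                    cost_per_unit: float = 1.0) -> np.ndarray:
    # Protection saturates with baseline stress; maintaining it always costs energy.
    protection = benefit * baseline / (baseline + half_sat)
    return threat_prob * protection - cost_per_unit * baseline

baselines = np.linspace(0.0, 5.0, 501)
for p in (0.1, 0.3, 0.6):
    payoffs = expected_payoff(baselines, threat_prob=p)
    best = baselines[np.argmax(payoffs)]
    print(f"Threat probability {p:.1f}: optimal baseline stress ~{best:.2f}")
```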
|
Environment
| 2020 |
December 3, 2020
|
https://www.sciencedaily.com/releases/2020/12/201203094529.htm
|
Understanding bacteria's metabolism could improve biofuel production
|
A new study reveals how bacteria control the chemicals produced from consuming 'food.' The insight could lead to organisms that are more efficient at converting plants into biofuels.
|
The study, authored by scientists at UC Riverside and Pacific Northwest National Laboratory, has been published. In the article, the authors describe mathematical and computational modeling, artificial intelligence algorithms and experiments showing that cells have failsafe mechanisms preventing them from producing too many metabolic intermediates.

Metabolic intermediates are the chemicals that couple each reaction to one another in metabolism. Key to these control mechanisms are enzymes, which speed up chemical reactions involved in biological functions like growth and energy production.

"Cellular metabolism consists of a bunch of enzymes. When the cell encounters food, an enzyme breaks it down into a molecule that can be used by the next enzyme and the next, ultimately generating energy," explained study co-author, UCR adjunct math professor and Pacific Northwest National Laboratory computational scientist William Cannon.

The enzymes cannot produce an excessive amount of metabolic intermediates. They produce an amount that is controlled by how much of that product is already present in the cell.

"This way the metabolite concentrations don't get so high that the liquid inside the cell becomes thick and gooey like molasses, which could cause cell death," Cannon said.

One of the barriers to creating biofuels that are cost competitive with petroleum is the inefficiency of converting plant material into ethanol. Typically, E. coli bacteria are engineered to break down lignin, the tough part of plant cell walls, so it can be fermented into fuel.

Mark Alber, study co-author and UCR distinguished math professor, said that the study is part of a project to understand the ways bacteria and fungi work together to affect the roots of plants grown for biofuels.

"One of the problems with engineering bacteria for biofuels is that most of the time the process just makes the bacteria sick," Cannon said. "We push them to overproduce proteins, and it becomes uncomfortable -- they could die. What we learned in this research could help us engineer them more intelligently."

Knowing which enzymes need to be prevented from overproducing can help scientists design cells that produce more of what they want and less of what they don't.

The research employed mathematical control theory, which learns how systems control themselves, as well as machine learning to predict which enzymes needed to be controlled to prevent excessive buildup of metabolites.

While this study examined central metabolism, which generates the cell's energy, going forward, Cannon said the research team would like to study other aspects of a cell's metabolism, including secondary metabolism -- how proteins and DNA are made -- and interactions between cells.

"I've worked in a lab that did this kind of thing manually, and it took months to understand how one particular enzyme is regulated," Cannon said. "Now, using these new methods, this can be done in a few days, which is extremely exciting."

The U.S.
Department of Energy, seeking to diversify the nation's energy sources, funded this three-year research project with a $2.1 million grant. The project is also a part of the broader initiatives under way in the newly established UCR Interdisciplinary Center for Quantitative Modeling in Biology. Though this project focused on bacterial metabolism, the ability to learn how cells regulate and control themselves could also help develop new strategies for combatting diseases. "We're focused on bacteria, but these same biological mechanisms and modeling methods apply to human cells that have become dysregulated, which is what happens when a person has cancer," Alber said. "If we really want to understand why a cell behaves the way it does, we have to understand this regulation."
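The article describes this failsafe only qualitatively. The toy simulation below, which is not the authors' model (all rate constants are invented), illustrates the basic idea: an enzyme subject to feedback inhibition by its product keeps the intermediate bounded, while an uncontrolled enzyme lets it pile up.

```python
import numpy as np

def simulate(feedback_inhibition: bool, t_end: float = 50.0, dt: float = 0.01) -> float:
    """Toy two-step pathway: substrate -> intermediate M -> product.

    v_in  : rate of the enzyme producing M (optionally inhibited by M itself)
    v_out : Michaelis-Menten consumption of M by the next enzyme
    All parameter values are illustrative, not taken from the study.
    """
    v_max_in, v_max_out = 2.0, 1.0   # arbitrary rate units
    k_i, k_m = 1.0, 0.5              # inhibition and Michaelis constants
    m = 0.0                          # intermediate concentration
    for _ in range(int(t_end / dt)):
        v_in = v_max_in / (1.0 + m / k_i) if feedback_inhibition else v_max_in
        v_out = v_max_out * m / (k_m + m)
        m += (v_in - v_out) * dt
    return m

print(f"intermediate after 50 time units, no feedback:   {simulate(False):6.1f}")
print(f"intermediate after 50 time units, with feedback: {simulate(True):6.1f}")
```

Without feedback the intermediate grows roughly linearly with time; with feedback it settles near a steady state of about 1.6 in these arbitrary units.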
|
Environment
| 2,020 |
December 2, 2020
|
https://www.sciencedaily.com/releases/2020/12/201202192744.htm
|
Roly polies transfer environmental toxins to threatened fish populations in California
|
Roly poly bugs may be a source of fun for kids and adults but these little bugs that form into balls at the slightest touch are causing problems for some threatened fish.
|
New research finds steelhead trout in a stream on the California coast accumulate mercury in their bodies when the fish eat roly polies and similar terrestrial bugs that fall into local waterways. The new study corroborates earlier findings that mercury can make its way to the top of the food chain in coastal California.The results show for the first time that roly polies and other bugs are transferring high levels of the toxic metal to fish in an otherwise pristine watershed where environmental contaminants are not known to be a concern, according to the researchers."Our research is the first step in identifying [mercury] as a potential stressor on these populations," said Dave Rundio, a research fishery biologist at NOAA's Southwest Fisheries Science Center and co-leader of the study that will be presented 8 December at AGU's Fall Meeting 2020.It is unclear whether mercury accumulation in steelhead and other species would affect humans, but the findings suggest mercury can move between connected ecosystems and affect threatened or endangered species in unsuspecting ways, according to the researchers.Mercury is a toxic metal that can cause neurological and developmental problems in humans. Some mercury occurs naturally and is not harmful to living things, but certain bacteria convert elemental mercury into an active form that can become concentrated in the tissues of living organisms and accumulate in larger and larger amounts up through the food web.A 2019 study by Peter Weiss-Penzias, an atmospheric chemist at the University of California Santa Cruz, found mercury from coastal California fog can make its way to the top of the terrestrial food chain and reach nearly toxic levels in the bodies of pumas.In the new study, Rundio wanted to see if mercury can accumulate in top predators in aquatic ecosystems as well as on land. Rundio, Weiss-Penzias and other colleagues looked specifically at mercury levels in steelhead trout, a kind of rainbow trout that are one of the top sport fish in North America and culturally important to some Native American tribes.The researchers took samples of steelhead trout and their prey from a stream in the Big Sur region of central California to see if they had elevated mercury concentrations in their tissues and to determine where that mercury came from.Steelhead are predators with a varied diet in freshwater, eating anything from small fish and crustaceans to insects and even salamanders that fall into streams. Interestingly, terrestrial bugs like roly polies make up nearly half of a steelhead's diet in streams in coastal California.The researchers found steelhead had elevated mercury levels in their tissues, and the older, more mature fish had more mercury than juveniles. Some of the mature stream trout they sampled had mercury concentrations that met or exceeded water quality and food consumption advisory levels.The researchers also found the terrestrial bugs the fish eat -- most notable roly polies, which are a non-native species from Europe -- had higher mercury concentrations than their aquatic counterparts. 
The findings suggest mercury rolls into coastal California through fog, is consumed by roly polies eating leaf detritus and water droplets, and moves up the food chain to the fish -- similar to Weiss-Penzias's findings of how mercury makes it up the terrestrial food chain to pumas. The new findings came as a surprise to Rundio because the steelhead they sampled live in a nearly pristine environment scientists thought to be free of environmental contaminants. It is important to know where mercury accumulates in the environment and food webs because it is a difficult toxin to get rid of, said Weiss-Penzias, who will present the work. Some toxins can be diluted enough in the environment that they become essentially harmless, but mercury becomes more concentrated as it moves up the food chain. "We have to think of mercury in that sense and be extremely concerned about it because of the continual releases from coal combustion, gold mining, and other industrial processes," Weiss-Penzias said.
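As a rough illustration of the biomagnification described above, the sketch below walks a tissue concentration up a simplified detritus-to-steelhead food chain, multiplying by a fixed biomagnification factor at each step. The starting concentration and the factor are hypothetical values chosen for illustration; they are not measurements from this study.

```python
# Hypothetical illustration of biomagnification: each trophic step multiplies
# the methylmercury concentration in tissue by a biomagnification factor (BMF).
# None of these values are measurements from the Big Sur study.
leaf_detritus_ng_per_g = 5.0    # assumed MeHg in fog-wetted leaf litter
bmf_per_step = 4.0              # assumed amplification per trophic step
food_chain = ["leaf detritus", "roly poly", "juvenile steelhead", "adult steelhead"]

concentration = leaf_detritus_ng_per_g
for level in food_chain:
    print(f"{level:20s} ~{concentration:7.1f} ng/g MeHg")
    concentration *= bmf_per_step
```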
|
Environment
| 2,020 |
December 2, 2020
|
https://www.sciencedaily.com/releases/2020/12/201202114513.htm
|
Researchers develop plant nanobionic sensor to monitor arsenic levels in soil
|
Scientists from Disruptive & Sustainable Technologies for Agricultural Precision (DiSTAP), an Interdisciplinary Research Group (IRG) at the Singapore-MIT Alliance for Research and Technology (SMART), MIT's research enterprise in Singapore, have engineered a novel type of plant nanobionic optical sensor that can detect and monitor, in real-time, levels of the highly toxic heavy metal arsenic in the belowground environment. This development provides significant advantages over conventional methods used to measure arsenic in the environment and will be important for both environmental monitoring and agricultural applications to safeguard food safety, as arsenic is a contaminant in many common agricultural products such as rice, vegetables, and tea leaves.
|
This new approach is described in a paper titled "Plant Nanobionic Sensors for Arsenic Detection," published recently. Arsenic and its compounds are a serious threat to humans and ecosystems. Long-term exposure to arsenic in humans can cause a wide range of detrimental health effects, including cardiovascular disease such as heart attack, diabetes, birth defects, severe skin lesions, and numerous cancers including those of the skin, bladder, and lung. Elevated levels of arsenic in soil as a result of anthropogenic activities such as mining and smelting are also harmful to plants, inhibiting growth and resulting in substantial crop losses. More troublingly, food crops can absorb arsenic from the soil, leading to contamination of food and produce consumed by humans. Arsenic in belowground environments can also contaminate groundwater and other underground water sources, the long-term consumption of which can cause severe health issues. As such, developing accurate, effective, and easy-to-deploy arsenic sensors is important to protect both the agriculture industry and wider environmental safety. These novel optical nanosensors developed by SMART DiSTAP exhibit changes in their fluorescence intensity upon the detection of arsenic. Embedded in plant tissues with no detrimental effects on the plant, these sensors provide a non-destructive way to monitor the internal dynamics of arsenic taken up by plants from the soil. This integration of optical nanosensors within living plants enables the conversion of plants into self-powered detectors of arsenic from their natural environment, marking a significant upgrade from the time- and equipment-intensive sampling required by current conventional methods. Lead author Dr Tedrick Thomas Salim Lew said, "Our plant-based nanosensor is notable not only for being the first of its kind, but also for the significant advantages it confers over conventional methods of measuring arsenic levels in the belowground environment, requiring less time, equipment, and manpower. We envisage that this innovation will eventually see wide use in the agriculture industry and beyond. I am grateful to SMART DiSTAP and Temasek Life Sciences Laboratory (TLL), both of which were instrumental in idea generation, scientific discussion as well as research funding for this work." Besides detecting arsenic in rice and spinach, the team also used a species of fern, Pteris cretica, which can hyperaccumulate arsenic. This species of fern can absorb and tolerate high levels of arsenic with no detrimental effect, making it possible to engineer an ultrasensitive plant-based arsenic detector capable of detecting very low concentrations of arsenic, as low as 0.2 parts per billion (ppb). In contrast, the regulatory limit for arsenic in drinking water is 10 parts per billion. Notably, the novel nanosensors can also be integrated into other species of plants. This is the first successful demonstration of living plant-based sensors for arsenic and represents a groundbreaking advancement which could prove highly useful in both agricultural research (e.g. to monitor arsenic taken up by edible crops for food safety), as well as in general environmental monitoring. Previously, conventional methods of measuring arsenic levels included regular field sampling, plant tissue digestion, extraction and analysis using mass spectrometry. These methods are time-consuming, require extensive sample treatment, and often involve the use of bulky and expensive instrumentation. 
SMART DiSTAP's novel method of coupling nanoparticle sensors with plants' natural ability to efficiently extract analytes via the roots and transport them allows for the detection of arsenic uptake in living plants in real-time with portable, inexpensive electronics, such as a portable Raspberry Pi platform equipped with a charge-coupled device (CCD) camera, akin to a smartphone camera. Co-author, DiSTAP co-lead Principal Investigator, and MIT Professor Michael Strano added, "This is a hugely exciting development, as, for the first time, we have developed a nanobionic sensor that can detect arsenic -- a serious environmental contaminant and potential public health threat. With its myriad advantages over older methods of arsenic detection, this novel sensor could be a game-changer, as it is not only more time-efficient but also more accurate and easier to deploy than older methods. It will also help plant scientists in organizations such as TLL to further produce crops that resist uptake of toxic elements. Inspired by TLL's recent efforts to create rice crops which take up less arsenic, this work is a parallel effort to further support SMART DiSTAP's efforts in food security research, constantly innovating and developing new technological capabilities to improve Singapore's food quality and safety." The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) programme.
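The readout described above amounts to tracking changes in fluorescence intensity with inexpensive camera hardware. The sketch below is a generic version of that kind of measurement, comparing mean pixel intensity in a region of interest between a baseline frame and a later frame. It is not SMART DiSTAP's acquisition code; the region of interest, the synthetic frames, and the decision threshold are all assumptions.

```python
import numpy as np

def relative_intensity_change(baseline: np.ndarray, current: np.ndarray,
                              roi=(slice(100, 200), slice(100, 200))) -> float:
    """Fractional change in mean pixel intensity inside a region of interest.

    `baseline` and `current` are grayscale frames (2-D arrays) from the camera;
    the ROI coordinates here are placeholders.
    """
    i0 = float(baseline[roi].mean())
    i1 = float(current[roi].mean())
    return (i1 - i0) / i0

# Synthetic frames standing in for real captures (illustration only).
rng = np.random.default_rng(0)
before = rng.normal(120, 5, size=(480, 640))
after = before * 0.85          # pretend the signal changed by 15% after exposure

change = relative_intensity_change(before, after)
print(f"relative fluorescence change: {change:+.2%}")
if abs(change) > 0.10:         # assumed decision threshold, not from the paper
    print("flag: possible arsenic uptake detected")
```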
|
Environment
| 2,020 |
December 1, 2020
|
https://www.sciencedaily.com/releases/2020/12/201201124214.htm
|
Scientists uncover the mysterious origin of canal grass in Panama
|
Urban legends about the origins of canal grass in Panama abound, but the Smithsonian has new evidence that puts the question to rest. Canal grass is an invasive weed, native to Asia. Because its tiny seeds blow in the wind, it readily invades clearings and spreads to form impenetrable stands by budding from tillers and rhizomes. Once established, canal grass is challenging to eliminate. Fire burns the tops and stimulates the roots. Glassy hairs edging its leaf blades cut skin and dull machetes.
|
The most widespread story is that the Panama Canal Co. imported canal grass (paja canalera or paja blanca in Spanish, Latin name: Saccharum spontaneum L.) for erosion control. In other versions, the U.S. Army brought it to landscape areas for military exercises or it arrived on ships transiting the canal during the 1950s or 1960s. Another study suggested that seeds or fragments of roots may have washed into the canal from a piece of earth-moving equipment from Thailand or Vietnam shipped through the canal in the 1970s."These explanations are all unlikely," said Kristin Saltonstall, STRI staff scientist. "The first documentation of its presence in Panama is a report by the Missouri Botanical Gardens from 1948."A parent of sugarcane, S. spontaneum often arrives in out-of-the-way places as an escapee from breeding collections. Several previous reports from the Tropical Agriculture and Higher Education Research Center suggested that canal grass escaped from a U.S. Department of Agriculture sugarcane breeding program at the Canal Zone Experimental Gardens (now Summit Nature Park) in the early 1940s. Saltonstall's new genetic results support this idea.In 1939, the U.S. Department of Agriculture sent more than 500 different varieties of sugarcane and close relatives to the Gardens. Breeders may have been concerned that they could be damaged by hurricanes at the sugarcane experimental station in Canal Point, Florida. The plants were allowed to flower in the gardens as part of ongoing sugarcane breeding trials between 1940 and 1945.Saltonstall compared DNA extracted from leaves of plants she collected in Panama to DNA samples from a large, international collection of sugarcane and sugarcane relatives maintained by colleagues in Australia, including many of the accessions that were likely brought to Panama in 1939."The conditions were right, the plants were there and the timing is right," Saltonstall said. "We can never say with 100% certainty that it came from the Gardens, but it certainly looks like it, because DNA from canal grass in Panama is very similar to accessions from Indonesia in the germplasm collection. All of these plants also have high ploidy levels [many copies of the chromosomes in each cell] and come from the same maternal lineage."Sugarcane is the world's largest crop. In 2018, Panama produced 2.9 million tons of sugarcane. It was first domesticated in Southeast Asia in the eighth millennium B.C. and gradually spread around the world. Today, a hectare of sugarcane (a cross between S. spontaneum and S. officinarum) yields between 30 and 180 tons of sugar. Sugarcane breeders are eager to improve yield by producing hybrids with other species of grasses, but as they continue to experiment, the possibility of escapes like this continues."This was not an intentional introduction," Saltonstall said. "No one was thinking about invasive species at that time. More recently there have been escapes in the U.S. in Florida and Louisiana. Germplasm collections need to be monitored and if there is an escape, it needs to be dealt with before it becomes a problem."Saltonstall will continue to study canal grass, fascinated by the way these large, invasive plants can take over an area and change its whole ecosystem. And as the world's climate changes, Panama may become drier and more subject to fires, which often start in urban patches of canal grass near burning garbage or roads and then burn into the forest and clear new areas for the canal grass to invade. 
Canal grass can tolerate drier conditions and outcompete other plants that are not resistant to drought, which may give it an advantage if the climate becomes drier.
|
Environment
| 2,020 |
November 30, 2020
|
https://www.sciencedaily.com/releases/2020/11/201130150358.htm
|
Recycled concrete could be a sustainable way to keep rubble out of landfill
|
Results of a new five-year study of recycled concrete show that it performs as well, and in several cases even better, than conventional concrete.
|
Researchers at UBC Okanagan's School of Engineering conducted side-by-side comparisons of recycled and conventional concrete within two common applications -- a building foundation and a municipal sidewalk. They found that the recycled concrete had comparable strength and durability after five years of being in service. "We live in a world where we are constantly in search of sustainable solutions that remove waste from our landfills," says Shahria Alam, co-director of UBC's Green Construction Research and Training Centre and the lead investigator of the study. "A number of countries around the world have already standardized the use of recycled concrete in structural applications, and we hope our findings will help Canada follow suit." Waste materials from construction and demolition contribute up to 40 per cent of the world's waste, according to Alam, and in Canada, that waste amounts to nine million tonnes per year. The researchers tested the compressive strength and durability of recycled concrete compared with conventional concrete. Concrete is typically composed of fine or coarse aggregate that is bonded together with an adhesive paste. The recycled concrete replaces the natural aggregate for producing new concrete. "The composition of the recycled concrete gives that product additional flexibility and adaptability," says Alam. "Typically, recycled concrete can be used in retaining walls, roads and sidewalks, but we are seeing a shift towards its increased use in structures." Within the findings, the researchers discovered that the long-term performance of recycled concrete adequately compared to its conventional form, and experienced no issues over the five years of the study. In fact, the recycled concrete had a higher rate of compressive strength after 28 days of curing while maintaining a greater or equal strength during the period of the research. The researchers suggest the recycled concrete can be a 100 per cent substitute for non-structural applications. "As innovations continue in the composition of recycled concrete, we can envision a time in the future where recycled concrete can be a substitute within more structural applications as well."
|
Environment
| 2,020 |
November 30, 2020
|
https://www.sciencedaily.com/releases/2020/11/201130101236.htm
|
Killer electrons in strumming northern and southern lights
|
Computer simulations explain how electrons with wide-ranging energies rain into Earth's upper and middle atmosphere during a phenomenon known as the pulsating aurora. The findings, published in the journal
|
The northern and southern lights that people are typically aware of, called the aurora borealis and australis, look like coloured curtains of reds, greens, and purples spreading across the night skies. But there is another kind of aurora that is less frequently seen. The pulsating aurora looks more like indistinct wisps of cloud strumming across the sky.Scientists have only recently developed the technologies enabling them to understand how the pulsating aurora forms. Now, an international research team, led by Yoshizumi Miyoshi of Nagoya University's Institute for Space-Earth Environmental Research, has developed a theory to explain the wide-energy electron precipitations of pulsating auroras and conducted computer simulations that validate their theory.Their findings suggest that both low- and high-energy electrons originate simultaneously from interactions between chorus waves and electrons in the Earth's magnetosphere.Chorus waves are plasma waves generated near the magnetic equator. Once formed, they travel northwards and southwards, interacting with electrons in Earth's magnetosphere. This interaction energizes the electrons, scattering them down into the upper atmosphere, where they release the light energy that appears as a pulsating aurora.The electrons that result from these interactions range from lower-energy ones, of only a few hundred kiloelectron volts, to very high-energy ones, of several thousand kiloelectron volts, or 'megaelectron' volts.Miyoshi and his team suggest that the high-energy electrons of pulsating auroras are 'relativistic' electrons, otherwise known as killer electrons, because of the damage they can cause when they penetrate satellites."Our theory indicates that so-called killer electrons that precipitate into the middle atmosphere are associated with the pulsating aurora, and could be involved in ozone destruction," says Miyoshi.The team next plans to test their theory by studying measurements taken during a space rocket mission called 'loss through auroral microburst pulsations' (LAMP), which is due to launch in December 2021. LAMP is a collaboration between NASA, the Japan Aerospace Exploration Agency (JAXA), Nagoya University, and other institutions. LAMP experiments will be able to observe the killer electrons associated with the pulsating aurora.
|
Environment
| 2,020 |
November 30, 2020
|
https://www.sciencedaily.com/releases/2020/11/201130101231.htm
|
Guam's most endangered tree species reveals universal biological concept
|
Newly published research carried out at the University of Guam has used a critically endangered species to show how trees modify leaf function to best exploit prevailing light conditions. The findings revealed numerous leaf traits that change depending on the light levels during leaf construction.
|
"The list of ways a leaf can modify its shape and structure is lengthy, and past research has not adequately looked at that entire list," said Benjamin Deloso, lead author of the study.Terrestrial plants are unable to move after they find their permanent home, so they employ methods to maximize their growth potential under prevailing conditions by modifying their structure and behavior. The environmental factor that has been most studied in this line of botany research is the availability of light, as many trees begin their life in deep shade but eventually grow tall to position their leaves in full sun when they are old. These changes in prevailing light require the tree to modify the manner in which their leaves are constructed to capitalize on the light that is available at the time of leaf construction."One size does not fit all," Deloso said. "A leaf designed to perform in deep shade would try to use every bit of the limited light energy, but a leaf grown under full sun needs to refrain from being damaged by excessive energy."The research team used Guam's critically endangered Serianthes nelsonii tree as the model species because of the complexity of its leaf design. This tree's leaf is classified as a bi-pinnate compound leaf, a designation that means a single leaf is comprised of many smaller leaflets that are arranged on linear structures that have a stem-like appearance. The primary outcome of the work was to show that this type of leaf modifies many whole-leaf traits in response to prevailing light conditions. Most literature on this subject has not completely considered many of these whole-leaf traits, and may have under-estimated the diversity of skills that compound leaves can benefit from while achieving the greatest growth potential.This study provides an example of how plant species that are federally listed as endangered can be exploited for non-destructive research, helping to highlight the value of conserving the world's threatened biodiversity while demonstrating a universal concept.The study was a continuation of several years of research at the University of Guam designed to understand the ecology of the species. The research program has identified recruitment as the greatest limitation of species survival. Recruitment is what botanists use to describe the transition of seedlings into larger juvenile plants that are better able to remain viable. Considerable seed germination and seedling establishment occur in Guam's habitat, but 100% of the seedlings die. Extreme shade is one of the possible stress factors that generate the seedling mortality. Testing this possibility by providing outplanted seedlings with a greater range of sunlight transmission than the 6% recorded in this study may provide answers to the extreme shade stress hypothesis.The latest results have augmented the team's earlier research that demonstrated how a specialized leaf gland enables rapid leaflet movement when the light energy is excessive. This skill of being able to change the leaflet's orientation is an instantaneous behavior that mitigates the damage that may result from excessive sunlight exposure."Just because the tree can't move itself, that doesn't mean it can't move its leaves to avoid stress," Deloso said.Serianthes nelsonii was listed on the Endangered Species Act in 1987. A formal plan to recover the species was published in 1994 and called for research to more fully understand the factors that limit success of the species. 
This latest publication adds to the expanding knowledge that the University of Guam is generating to inform conservation decisions into the future.
|
Environment
| 2,020 |
November 27, 2020
|
https://www.sciencedaily.com/releases/2020/11/201127180821.htm
|
Mine ponds amplify mercury risks in Peru's Amazon
|
The proliferation of pits and ponds created in recent years by miners digging for small deposits of alluvial gold in Peru's Amazon has dramatically altered the landscape and increased the risk of mercury exposure for indigenous communities and wildlife, a new study shows.
|
"In heavily mined watersheds, there's been a 670% increase in the extent of ponds across the landscape since 1985. These ponds are almost entirely artificial lakes created as thousands of former mining pits fill in with rainwater and groundwater over time," said Simon Topp, a doctoral student in geological sciences at the University of North Carolina at Chapel Hill, who co-led the study.Landscapes formerly dominated by forests are now increasingly dotted by these small lakes, which, the study finds, provide low-oxygen conditions in which submerged mercury -- a toxic leftover from the gold mining process -- can be converted by microbial activity into an even more toxic form of the element, called methylmercury, at net rates 5-to-7 times greater than in rivers."Methylmercury poses especially high risks for humans and large predators because it bioaccumulates in body tissue as it moves up the food chain. That's particularly concerning given the high biodiversity and the large number of indigenous populations that live in the Peruvian Amazon," said Jacqueline Gerson, a doctoral student in ecology at Duke University, who also co-led the study.These heightened risks likely also occur in other locations where unregulated artisanal small-scale gold mining takes place, including Asia, sub-Saharan Africa and other parts of South America, she said.Topp, Gerson and their colleagues published their peer-reviewed study Nov. 27 in Artisanal gold miners use mercury, a potent neurotoxin, to separate their gold ore from soil and sediments, often without adequate safety precautions to protect themselves or the environment.Mercury poisoning can cause a wide range of health impacts, including tremors, muscle weakness, vision and hearing impairments, and loss of coordination and balance. In severe cases, it can lead to birth defects or death.Some of the mercury used by the miners is burned off into the air or spilled into nearby rivers, creating far-reaching environmental and human health risks that have been well documented in past studies. The new study is the first to document how the mining has altered the landscape and simultaneously amplified the risks of mercury poisoning through the creation of ponds and the microbial processing of mercury into methylmercury that occurs there.To conduct the study, the scientists collected water and sediment samples at sites upstream and downstream of artisanal gold mining sites along Peru's Madre de Dios River, its tributaries, surrounding lakes, and mining ponds during the dry season in July and August of 2019. They measured each sample for total mercury content and for the proportion of that mercury that was in the more toxic form of methylmercury.By combining these measurements with more than three decades of high-resolution satellite data from the region, they were able to determine the extent of artificial ponding and mercury contamination at each site and identify causal links."You can clearly see that the increase in artificial lakes and ponds in heavily mined areas accelerated after 2008, when gold prices dramatically increased along with mining activity," Topp said. 
By contrast, the total surface area of ponds in areas without heavy mining increased by an average of only 20% over the entire study period. "We expect that this trend, and the environmental and human health risks it causes, will continue as long as gold prices remain high and artisanal small-scale gold mining is a profitable activity," he said. Co-authors of the new study were John Gardner, Xiao Yang and Tamlin Pavelsky of UNC-CH; Emily Bernhardt of Duke; and Claudia Vega and Luis Fernandez of Wake Forest University's Center for Amazonian Scientific Innovation in Peru. Funding came from Duke University.
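Estimates such as the 670% increase in pond extent come from comparing water-classified satellite imagery across years. The sketch below shows schematically how a percent change in pond area could be computed from two classification masks; the random masks and the 30 m pixel size are placeholders rather than the study's data.

```python
import numpy as np

def pond_area_km2(water_mask: np.ndarray, pixel_size_m: float = 30.0) -> float:
    """Total pond area from a boolean water-classification raster (e.g., 30 m pixels)."""
    return water_mask.sum() * (pixel_size_m ** 2) / 1e6

# Stand-in classification masks for two years (random data, illustration only).
rng = np.random.default_rng(42)
mask_1985 = rng.random((1000, 1000)) < 0.002   # sparse ponds
mask_2018 = rng.random((1000, 1000)) < 0.015   # many more mining ponds

a0, a1 = pond_area_km2(mask_1985), pond_area_km2(mask_2018)
print(f"1985: {a0:.2f} km^2, 2018: {a1:.2f} km^2, change: {100 * (a1 - a0) / a0:+.0f}%")
```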
|
Environment
| 2,020 |
November 25, 2020
|
https://www.sciencedaily.com/releases/2020/11/201125154829.htm
|
Offshore submarine freshwater discovery raises hopes for islands worldwide
|
Twice as much freshwater is stored offshore of Hawai'i Island than was previously thought, according to a University of Hawai'i study with important implications for volcanic islands around the world. An extensive reservoir of freshwater within the submarine southern flank of the Hualālai aquifer has been mapped by UH researchers with the Hawai'i EPSCoR 'Ike Wai project. The groundbreaking findings, published in
|
This mechanism may provide alternative renewable resources of freshwater to volcanic islands worldwide. "Their evidence for separate freshwater lenses, stacked one above the other, near the Kona coast of Hawai'i, profoundly improves the prospects for sustainable development on volcanic islands," said UH Manoa School of Ocean and Earth Science and Technology (SOEST) Dean Brian Taylor. Through the use of marine controlled-source electromagnetic imaging, the study revealed the onshore-to-offshore movement of freshwater through a multilayer formation of basalts embedded between layers of ash and soil, diverging from previous groundwater models of this area. Conducted as a part of the National Science Foundation-supported 'Ike Wai project, research affiliate faculty Eric Attias led the marine geophysics campaign. "Our findings provide a paradigm shift from the conventional hydrologic conceptual models that have been vastly used by multiple studies and water organizations in Hawai'i and other volcanic islands to calculate sustainable yields and aquifer storage for the past 30 years," said Attias. "We hope that our discovery will enhance future hydrologic models, and consequently, the availability of clean freshwater in volcanic islands." Co-author Steven Constable, a professor of geophysics at the Scripps Institution of Oceanography, who developed the controlled-source electromagnetic system used in the project, said, "I have spent my entire career developing marine electromagnetic methods such as the one used here. It is really gratifying to see the equipment being used for such an impactful and important application. Electrical methods have long been used to study groundwater on land, and so it makes sense to extend the application offshore." Kerry Key, an associate professor at Columbia University who employs electromagnetic methods to image various oceanic Earth structures and who was not involved in this study, said, "This new electromagnetic technique is a game changing tool for cost-effective reconnaissance surveys to identify regions containing freshwater aquifers, prior to more expensive drilling efforts to directly sample the pore waters. It can also be used to map the lateral extent of any aquifers already identified in isolated boreholes." Donald Thomas, a geochemist with the Hawai'i Institute of Geophysics and Planetology in SOEST who also worked on the study, said the findings confirm the presence of quantities of stored groundwater two times larger than previously thought. "Understanding this new mechanism for groundwater...is important to better manage groundwater resources in Hawai'i," said Thomas, who leads the Humuʻula Groundwater Research project, which found another large freshwater supply on Hawai'i Island several years ago. Offshore freshwater systems similar to those flanking the Hualālai aquifer are suggested to be present for the island of O'ahu, where the electromagnetic imaging technique has not yet been applied, but, if demonstrated, could provide an overall new concept to manage freshwater resources. The study proposes that this newly discovered transport mechanism may be the governing mechanism in other volcanic islands. With offshore reservoirs considered more resilient to climate change-driven droughts, volcanic islands worldwide can potentially consider these resources in their water management strategies. This project is supported by the National Science Foundation EPSCoR Program Award OIA #1557349.
|
Environment
| 2,020 |
November 24, 2020
|
https://www.sciencedaily.com/releases/2020/11/201124152824.htm
|
To push or to pull? How many-limbed marine organisms swim
|
When you think of swimming, you probably imagine pushing through the water -- creating backwards thrust that pushes you forward. New research at the Marine Biological Laboratory (MBL) suggests instead that many marine animals actually pull themselves through the water, a phenomenon dubbed "suction thrust."
|
The study reports that when the front appendage moves, it creates a pocket of low pressure behind it that may reduce the energy required by the next limb to move. "It is similar to how cyclists use draft to reduce wind drag and to help pull the group along," says lead author Sean Colin of Roger Williams University, a Whitman Center Scientist at the MBL. This publication builds on the team's previous work, also conducted at the MBL, on suction thrust in lampreys and jellyfish. For the current study, they focused on small marine animals that use metachronal kinematics, also known as "metachronal swimming," a locomotion technique commonly used by animals with multiple pairs of legs in which appendages stroke in sequence, rather than synchronously. "We came into this study looking for the benefits of metachronal swimming, but we realized the flow around the limbs looks very similar to the flow around a jellyfish or a fish fin," said Colin. "Not only does the flow look the same, but the negative pressure is the same." For this study, the researchers worked with two crab species, a polychaete worm, and four species of comb jellies. All are smaller than a few millimeters in length. They found that the fluid flow created while swimming was the same as in the larger animals they had previously studied. "Even at these really small scales, these animals rely on negative pressure to pull themselves forward through the water," said Colin, who added that this could be a common phenomenon among animals. "It's not unique to the fish or the jellyfish we looked at. It's probably much more widespread in the animal kingdom," says Colin, who added that something like suction thrust has been observed in birds and bats moving themselves through the air. These creatures have the same degree of bend in their limbs (25-30 degrees) that the observed marine animals do. Moving forward, Colin and colleagues want to study a larger variety of marine organisms to determine the range of animal sizes that rely on suction thrust to propel through the water. "That's one of our main goals -- to get bigger, get smaller, and get a better survey of what animals are really relying on this suction thrust," Colin says.
|
Environment
| 2,020 |
November 24, 2020
|
https://www.sciencedaily.com/releases/2020/11/201124123631.htm
|
Stable catalysts for new energy
|
On the way to a CO
|
At TU Wien and the Comet Center for Electrochemistry and Surface Technology CEST in Wiener Neustadt, a unique combination of research methods is available for this kind of research. Together scientists could now show: Looking for the perfect catalyst is not only about finding the right material, but also about its orientation. Depending on the direction in which a crystal is cut and which of its atoms it thus presents to the outside world on its surface, its behavior can change dramatically."For many important processes in electrochemistry, precious metals are often used as catalysts, such as iridium oxide or platinum particles," says Prof. Markus Valtiner from the Institute of Applied Physics at TU Wien (IAP). In many cases these are catalysts with particularly high efficiency. However, there are also other important points to consider: The stability of a catalyst and the availability and recyclability of the materials. The most efficient catalyst material is of little use if it is a rare metal, dissolves after a short time, undergoes chemical changes or becomes unusable for other reasons.For this reason, other, more sustainable catalysts are of interest, such as zinc oxide, even though they are even less effective. By combining different measuring methods, it is now possible to show that the effectiveness and the stability of such catalysts can be significantly improved by studying how the surface of the catalyst crystals is structured on an atomic scale.Crystals can have different surfaces: "Let's imagine a cube-shaped crystal that we cut in two," says Markus Valtiner. "We can cut the cube straight through the middle to create two cuboids. Or we can cut it exactly diagonally, at a 45-degree angle. The cut surfaces that we obtain in these two cases are different: Different atoms are located at different distances from each other on the cut surface. Therefore, these surfaces can also behave very differently in chemical processes."Zinc oxide crystals are not cube-shaped, but form honeycomb-like hexagons -- but the same principle applies here, too: Its properties depend on the arrangement of the atoms on the surface. "If you choose exactly the right surface angle, microscopically small triangular holes form there, with a diameter of only a few atoms," says Markus Valtiner. "Hydrogen atoms can attach there, chemical processes take place that support the splitting of water, but at the same time stabilize the material itself."The research team has now been able to prove this stabilization for the first time: "At the catalyst surface, water is split into hydrogen and oxygen. While this process is in progress, we can take liquid samples and examine whether they contain traces of the catalyst," explains Markus Valtiner. "To do this, the liquid must first be strongly heated in a plasma and broken down into individual atoms. Then we separate these atoms in a mass spectrometer and sort them, element by element. If the catalyst is stable, we should hardly find any atoms from the catalyst material. Indeed, we could not detect any decomposition of the material at the atomic triangle structures when hydrogen was produced." This stabilizing effect is surprisingly strong -- now the team is working on making zinc oxide even more efficient and transferring the physical principle of this stabilization to other materials.Atomic surface structures have been studied at TU Wien for many years. 
"At our institute, these triangular structures have first been demonstrated and theoretically explained years ago, and now we are the first to demonstrate their importance for electrochemistry," says Markus Valtiner. "This is because we are in the unique situation here of being able to combine all the necessary research steps under one roof -- from sample preparation to simulation on supercomputers, from microscopy in ultra-high vacuum to practical tests in realistic environments.""This collaboration of different specialties under one roof is unique, and our great advantage to be able to be a global leader in research and teaching in this field," says Carina Brunnhofer, student at the IAP."Over the next ten years, we will develop stable and commercially viable systems for water splitting and CO2 reduction based on methodological developments and a fundamental understanding of surface chemistry and physics," says Dominik Dworschak, the first author of the recently published study. "However, at least a sustainable doubling of the current power output must be achieved in parallel," Markus Valtiner notes. "We are therefore on an exciting path, on which we will only achieve our climate targets through consistent, cross-sector research and development.
|
Environment
| 2,020 |
November 24, 2020
|
https://www.sciencedaily.com/releases/2020/11/201124101028.htm
|
Areas where the next pandemic could emerge are revealed
|
An international team of researchers has taken a holistic approach to reveal for the first time where wildlife-human interfaces intersect with areas of poor human health outcomes and highly globalised cities, which could give rise to the next pandemic unless preventative measures are taken.
|
Areas exhibiting a high degree of human pressure on wildlife also had more than 40 percent of the world's most connected cities in or adjacent to areas of likely spillover, and 14-20 percent of the world's most connected cities at risk of such spillovers likely to go undetected because of poor health infrastructure (predominantly in South and South East Asia and Sub-Saharan Africa). As with COVID-19, the impact of such spillovers could be global. Led by the University of Sydney and with academics spanning the United Kingdom, India and Ethiopia, the open-access paper shows the cities worldwide that are at risk. Last month, an IPBES report highlighted the role biodiversity destruction plays in pandemics and provided recommendations. This Sydney-led research pinpoints the geographical areas that require greatest attention. The paper, "Whence the next pandemic? The intersecting global geography of the animal-human interface, poor health systems and air transit centrality reveals conduits for high-impact spillover," has been published in a leading Elsevier journal. "Our new research integrates the wildlife-human interface with human health systems and globalisation to show where spillovers might go unidentified and lead to dissemination worldwide and new pandemics," said lead author Dr Michael Walsh, from the University of Sydney's School of Public Health, Faculty of Medicine and Health. Dr Walsh said that although low- and middle-income countries had the most cities in zones classified at highest risk for spillover and subsequent onward global dissemination, it should be noted that the high risk in these areas was very much a consequence of diminished health systems. Moreover, while not as extensively represented in the zone of highest risk because of better health infrastructure, high-income countries still had many cities represented in the next two tiers of risk because of the extreme pressures the affluent countries exert on wildlife via unsustainable development. The researchers took a three-staged approach. "This is the first time this three-staged geography has been identified and mapped, and we want this to be able to inform the development of multi-tiered surveillance of infections in humans and animals to help prevent the next pandemic," the paper reads. Of those cities that were in the top quartile of network centrality, approximately 43 percent were found to be within 50km of the spillover zones and therefore warrant attention (both yellow and orange alert zones). A lesser but still significant proportion of these cities were within 50km of the red alert zone at 14.2 percent (for spillover associated with mammal wildlife) and 19.6 percent (wild bird-associated spillover). Dr Walsh said although it would be a big job to improve habitat conservation and health systems, as well as surveillance at airports as a last line of defence, the benefit in terms of safeguarding against debilitating pandemics would outweigh the costs. "Locally-directed efforts can apply these results to identify vulnerable points. 
With this new information, people can develop systems that incorporate human health infrastructure, animal husbandry, wildlife habitat conservation, and movement through transportation hubs to prevent the next pandemic," Dr Walsh said. "Given the overwhelming risk absorbed by so many of the world's communities and the concurrent high-risk exposure of so many of our most connected cities, this is something that requires our collective prompt attention." The authors of this research, Michael Walsh, Shailendra Sawleshwarkar, Shah Hossain and Siobhan Mor, are all associated with the University of Sydney. Additionally, their various affiliations comprise The Westmead Institute for Medical Research (Australia), Manipal Academy of Higher Education (India), University of Liverpool (UK) and the International Livestock Research Institute (Addis Ababa, Ethiopia campus).
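To make the screening logic concrete, the sketch below shows one way an overlay like the paper's could be set up in principle: flag cities that fall in the top quartile of air-network centrality and lie within 50 km of a mapped spillover zone. The city list, centrality scores, and zone coordinates are invented placeholders, not the study's data.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    r = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Invented example data: (name, latitude, longitude, air-network centrality score)
cities = [("City A", 13.7, 100.5, 0.92), ("City B", 6.5, 3.4, 0.88),
          ("City C", 52.5, 13.4, 0.35), ("City D", -1.3, 36.8, 0.75)]
# Invented centroids of mapped high-pressure wildlife-human interface zones
spillover_zones = [(14.0, 100.2), (6.8, 3.6)]

top_quartile = np.quantile([c[3] for c in cities], 0.75)

for name, lat, lon, centrality in cities:
    near = any(haversine_km(lat, lon, zlat, zlon) <= 50 for zlat, zlon in spillover_zones)
    if centrality >= top_quartile and near:
        print(f"{name}: highly connected AND within 50 km of a spillover zone")
```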
|
Environment
| 2,020 |
November 24, 2020
|
https://www.sciencedaily.com/releases/2020/11/201124092136.htm
|
New material 'mines' copper from toxic wastewater
|
We rely on water to quench our thirst and to irrigate bountiful farmland. But what do you do when that once pristine water is polluted with wastewater from abandoned copper mines?
|
A promising solution relies on materials that capture heavy metal atoms, such as copper ions, from wastewater through a separation process called adsorption. But commercially available copper-ion-capture products still lack the chemical specificity and load capacity to precisely separate heavy metals from water. Now, a team of scientists led by the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) has designed a new material -- called ZIOS (zinc imidazole salicylaldoxime) -- that targets and traps copper ions from wastewater with unprecedented precision and speed. The researchers describe the material in a recently published paper. "ZIOS has a high adsorption capacity and the fastest copper adsorption kinetics of any material known so far -- all in one," said senior author Jeff Urban, who directs the Inorganic Nanostructures Facility in Berkeley Lab's Molecular Foundry. This research embodies the Molecular Foundry's signature work -- the design, synthesis, and characterization of materials that are optimized at the nanoscale (billionths of a meter) for sophisticated new applications in medicine, catalysis, renewable energy, and more. For example, Urban has focused much of his research on the design of superthin materials from both hard and soft matter for a variety of applications, from cost-effective water desalination to self-assembling 2D materials for renewable energy applications. "And what we tried to mimic here are the sophisticated functions performed by nature," such as when proteins that make up a bacterial cell select certain metals to regulate cellular metabolism, said lead author Ngoc Bui, a former postdoctoral researcher in Berkeley Lab's Molecular Foundry who is now an assistant professor in chemical, biological, and materials engineering at the University of Oklahoma. "ZIOS helps us to choose and remove only copper, a contaminant in water that has been linked to disease and organ failure, without removing desirable ions, such as nutrients or essential minerals," she added. Such specificity at the atomic level could also lead to more affordable water treatment techniques and aid the recovery of precious metals. "Today's water treatment systems are 'bulk separation technologies' -- they pull out all solutes, irrespective of their hazard or value," said co-author Peter Fiske, director of the National Alliance for Water Innovation (NAWI) and the Water-Energy Resilience Institute (WERRI) at Berkeley Lab. "Highly selective, durable materials that can capture specific trace constituents without becoming loaded down with other solutes, or falling apart with time, will be critically important in lowering the cost and energy of water treatment. They may also enable us to 'mine' wastewater for valuable metals or other trace constituents." Urban, Bui, and co-authors report that ZIOS crystals are highly stable in water -- up to 52 days. And unlike metal-organic frameworks, the new material performs well in acidic solutions with the same pH range as acid mine wastewater. In addition, ZIOS selectively captures copper ions 30-50 times faster than state-of-the-art copper adsorbents, the researchers say. These results caught Bui by surprise. 
"At first I thought it was a mistake, because the ZIOS crystals have a very low surface area, and according to conventional wisdom, a material should have a high specific surface area, like other families of adsorbents, such as metal-organic frameworks, or porous aromatic frameworks, to have a high adsorption capacity and an extremely fast adsorption kinetic," she said. "So I wondered, 'Perhaps something more dynamic is going on inside the crystals.'"To find out, she recruited the help of co-lead author Hyungmook Kang to perform molecular dynamics simulations at the Molecular Foundry. Kang is a graduate student researcher in the Urban Lab at Berkeley Lab's Molecular Foundry and a Ph.D. student in the department of mechanical engineering at UC Berkeley.Kang's models revealed that ZIOS, when immersed in an aqueous environment, "works like a sponge, but in a more structured way," said Bui. "Unlike a sponge that absorbs water and expands its structure in random directions, ZIOS expands in specific directions as it adsorbs water molecules."X-ray experiments at Berkeley Lab's Advanced Light Source revealed that the material's tiny pores or nanochannels -- just 2-3 angstroms, the size of a water molecule -- also expand when immersed in water. This expansion is triggered by a "hydrogen bonding network," which is created as ZIOS interacts with the surrounding water molecules, Bui explained.This expansion of the nanochannels allows water molecules carrying copper ions to flow at a larger scale, during which a chemical reaction called "coordination bonding" between copper ions and ZIOS takes place.Additional X-ray experiments showed that ZIOS is highly selective to copper ions at a pH below 3 -- a significant finding, as the pH of acidic mine drainage is typically a pH of 4 or lower.Furthermore, the researchers said that when water is removed from the material, its crystal lattice structure contracts to its original size within less than 1 nanosecond (billionth of a second).Co-author Robert Kostecki attributed the team's success to their interdisciplinary approach. "The selective extraction of elements and minerals from natural and produced waters is a complex science and technology problem," he said. "For this study, we leveraged Berkeley Lab's unique capabilities across nanoscience, environmental sciences, and energy technologies to transform a basic materials sciences discovery into a technology that has great potential for real-world impact." Kostecki is the director of the Energy Storage and Distributed Resources Division in Berkeley Lab's Energy Technologies Area, and Materials and Manufacturing R&D topic area lead in NAWI.The researchers next plan to explore new design principles for the selective removal of other pollutants."In water science and the water industry, numerous families of materials have been designed for decontaminating wastewater, but few are designed for heavy metal removal from acidic mine drainage. We hope that ZIOS can help to change that," said Urban.
|
Environment
| 2,020 |
November 24, 2020
|
https://www.sciencedaily.com/releases/2020/11/201120095900.htm
|
Highly efficient, long-lasting electrocatalyst to boost hydrogen fuel production
|
Abundant. Clean. Flexible. Alluring enough to explain why hydrogen, the most common molecule in the universe, has a national Hydrogen and Fuel Cell Day named after it. Chosen to signify hydrogen's atomic weight of 1.008, the date -- October 8 -- has been marked by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy every year since 2015 to celebrate advances in hydrogen-use technology. When hydrogen is consumed in a fuel cell, it produces only water, electricity, and heat; the hydrogen itself can be made by electrolysis, which splits the water molecule H2O into hydrogen and oxygen. As a zero-carbon energy source, the range of its potential uses is limitless: transportation, commercial, industrial, residential, and portable.
|
Traditional hydrogen production processes, however, have relied on fossil fuels or emitted CO2. Led by Associate Director LEE Hyoyoung of the Center for Integrated Nanostructure Physics within the Institute for Basic Science (IBS) located at Sungkyunkwan University, the IBS research team developed a highly efficient and long-lasting electrocatalyst for water oxidation using cobalt, iron, and a minimal amount of ruthenium. "We used 'amphiphilic block copolymers' to control electrostatic attraction in our single ruthenium (Ru) atom-bimetallic alloy. The copolymers facilitate the synthesis of spherical clusters of hydrocarbon molecules whose soluble and insoluble segments form the core and shell. In this study, their tendency for a unique chemical structure allows the synthesis of the "high-performance" single atomic Ru alloy present atop the stable cobalt iron (Co-Fe) metallic composite surrounded by porous, defective and graphitic carbon shell," say LEE Jinsun and Kumar Ashwani, the co-first authors of the study. "We were very excited to discover that pre-adsorbed surface oxygen on the Co-Fe alloy surface, absorbed during the synthesis process, stabilizes one of the important intermediates (OOH*) during the oxygen generation reaction, boosting the overall efficiency of the catalytic reaction. The pre-adsorbed surface oxygen has been of little interest until our finding," notes Associate Director Lee, the corresponding author of the study. The researchers found that four-hour annealing at 750°C in an argon atmosphere is the most appropriate condition for the oxygen generating process. In addition to the reaction-friendly environment on the host metal surface, the single Ru atom, where oxygen generation takes place, also fulfills its role by lowering the energy barrier, synergistically enhancing the efficiency of oxygen evolution. The research team evaluated the catalytic efficiency with the overvoltage metrics needed for the oxygen evolution reaction. The advanced noble electrocatalyst required only 180 mV (millivolt) overvoltage to attain a current density of 10 mA (milliampere) per cm2 of catalyst, while ruthenium oxide needed 298 mV. In addition, the single Ru atom-bimetallic alloy showed long-term stability for 100 hours without any change of structure. Furthermore, the cobalt and iron alloy with graphitic carbon also compensated electrical conductivity and enhanced the oxygen evolution rate. Associate Director Lee explains, "This study takes us a step closer to a carbon-free, and green hydrogen economy. This highly efficient and inexpensive oxygen generation electro-catalyst will help us overcome long-term challenges of the fossil fuel refining process: to produce high-purity hydrogen for commercial applications at a low price and in an eco-friendly manner." The study was published online on November 4 in the journal
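The 180 mV and 298 mV figures are overpotentials read from polarization curves at a benchmark current density of 10 mA per cm2. The sketch below shows how that benchmark value is typically extracted by interpolating a measured curve; the two synthetic curves here are shaped only to illustrate the comparison and are not the published data.

```python
import numpy as np

def overpotential_at(target_j: float, j: np.ndarray, eta: np.ndarray) -> float:
    """Interpolate the overpotential (mV) at a target current density (mA/cm^2)
    from a polarization curve, assuming current density increases monotonically."""
    return float(np.interp(target_j, j, eta))

# Synthetic polarization curves (current density vs. applied overpotential).
# The shapes are illustrative only, not the published measurements.
eta_mv = np.arange(0, 400, 10)                 # applied overpotential, mV
j_catalyst = 0.05 * np.exp(eta_mv / 34.0)      # fictitious Tafel-like response
j_reference = 0.05 * np.exp(eta_mv / 56.0)

print("catalyst :", round(overpotential_at(10.0, j_catalyst, eta_mv)), "mV at 10 mA/cm^2")
print("reference:", round(overpotential_at(10.0, j_reference, eta_mv)), "mV at 10 mA/cm^2")
```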
|
Environment
| 2,020 |
November 23, 2020
|
https://www.sciencedaily.com/releases/2020/11/201123173456.htm
|
Supersized wind turbines generate clean energy--and surprising physics
|
Twenty years ago, wind energy was mostly a niche industry that contributed less than 1% to the total electricity demand in the United States. Wind has since emerged as a serious contender in the race to develop clean, renewable energy sources that can sustain the grid and meet the ever-rising global energy demand. Last year, wind energy supplied 7% of domestic electricity demand, and across the country -- both on and offshore -- energy companies have been installing giant turbines that reach higher and wider than ever before.
|
"Wind energy is going to be a really important component of power production," said engineer Jonathan Naughton at the University of Wyoming, in Laramie. He acknowledged that skeptics doubt the viability of renewable energy sources like wind and solar because they're weather dependent and variable in nature, and therefore hard to control and predict. "That's true," he said, "but there are ways to overcome that."Naughton and Charles Meneveau at Johns Hopkins University in Baltimore, Maryland, organized a mini-symposium at the 73rd Annual Meeting of the American Physical Society's Division of Fluid Dynamics, where researchers described the promise and fluid dynamics challenges of wind energy.In order for wind energy to be useful -- and accepted -- researchers need to design systems that are both efficient and inexpensive, Naughton said. That means gaining a better understanding of the physical phenomena that govern wind turbines, at all scales. Three years ago, the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) brought together 70 experts from around the world to discuss the state of the science. In 2019, the group published grand scientific challenges that need to be addressed for wind energy to contribute up to half of the demand for power.One of those challenges was to better understand the physics of the part of the atmosphere where the turbines operate. "Wind is really an atmospheric fluid mechanics problem," said Naughton. "But how the wind behaves at the levels where the turbines operate is still an area where we need more information."Today's turbines have blades that can stretch 50 to 70 meters, said Paul Veers, Chief Engineer at NREL's National Wind Technology Center, who provided an overview of the challenges during the symposium. These towers tower 100 meters or more over their environs. "Offshore, they're getting even bigger," said Veers.The advantage to building bigger turbines is that a wind power plant would need fewer machines to build and maintain and to access the powerful winds high above the ground. But giant power plants function at a scale that hasn't been well-studied, said Veers."We have a really good ability to understand and work with the atmosphere at really large scales," said Veers. "And scientists like Jonathan and Charles have done amazing jobs with fluid dynamics to understand small scales. But between these two, there's an area that has not been studied all that much."Another challenge will be to study the structural and system dynamics of these giant rotating machines. The winds interact with the blades, which bend and twist. The spinning blades give rise to high Reynolds numbers, "and those are areas where we don't have a lot of information," said Naughton.Powerful computational approaches can help reveal the physics, said Veers. "We're really pushing the computational methods as far as possible," he said. "It's taking us to the fastest and biggest computers that exist right now."A third challenge, Naughton noted, is to study the behavior of groups of turbines. Every turbine produces a wake in the atmosphere, and as that wake propagates downstream it interacts with the wakes from other turbines. Wakes may combine; they may also interfere with other turbines. Or anything else in the area. "If there's farmland downwind, we don't know how the change in the atmospheric flow will affect it," said Naughton.He called wind energy the "ultimate scale problem." 
Because it connects small-scale problems like the interactions of turbines with the air to giant-scale problems like atmospheric modeling, wind energy will require expertise and input from a variety of fields to address the challenges. "Wind is among the cheapest forms of energy," said Naughton. "But as the technology matures, the questions get harder."
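As a rough illustration of why blade aerodynamics is so demanding, the Reynolds number for flow over a blade section can be estimated from tip speed, chord length and the viscosity of air. The values in the sketch below are assumed, order-of-magnitude numbers for illustration, not figures quoted by the researchers:

```python
# Illustrative estimate of a blade-tip Reynolds number (assumed values, not study data).
def reynolds_number(speed_m_s: float, chord_m: float, kinematic_viscosity_m2_s: float) -> float:
    """Re = U * L / nu for flow over a blade section."""
    return speed_m_s * chord_m / kinematic_viscosity_m2_s

tip_speed = 90.0   # m/s, plausible order of magnitude for a large turbine tip (assumption)
chord = 3.0        # m, blade chord near the tip (assumption)
nu_air = 1.5e-5    # m^2/s, kinematic viscosity of air at roughly 20 C

print(f"Re ~ {reynolds_number(tip_speed, chord, nu_air):.1e}")  # about 1.8e7
```

Even with conservative inputs, the result lands in the tens of millions, the regime Naughton describes as data-poor.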
|
Environment
| 2,020 |
November 23, 2020
|
https://www.sciencedaily.com/releases/2020/11/201123173454.htm
|
Tracking and fighting fires on earth and beyond
|
Mechanical engineer Michael Gollner and his graduate student, Sriram Bharath Hariharan, from the University of California, Berkeley, recently traveled to NASA's John H. Glenn Research Center in Cleveland, Ohio. There, they dropped burning objects down a deep shaft to study how fire whirls form in microgravity. The Glenn Center hosts a Zero Gravity Research Facility, which includes an experimental drop tower that simulates the experience of being in space.
|
"You get five seconds of microgravity," said Gollner. The researchers lit a small paraffin wick to generate fire whirls and dropped it, studying the flame all the way down.Experiments like this, presented at the 73rd Annual Meeting of the American Physical Society's Division of Fluid Dynamics, can help fire scientists answer two kinds of questions. First, they illuminate ways that fire can burn in the absence of gravity -- and may even inform protective measures for astronauts. "If something's burning, it could be a very dangerous situation in space," said Gollner. Second, it can help researchers better understand gravity's role in the growth and spread of destructive fires.The fire burned differently without gravity, said Gollner. The flame was shorter -- and wider. "We saw a real slow down of combustion," said Gollner. "We didn't see the same dramatic whirls that we have with ordinary gravity."Other researchers, including a team from Los Alamos National Laboratory in New Mexico, introduced new developments to a computational fluid dynamics model that can incorporate fuels of varying moisture content. Many existing environmental models average the moisture of all the fuels in an area, but that approach fails to capture the variations found in nature, said chemical engineer Alexander Josephson, a postdoctoral researcher who studies wildfire prediction at Los Alamos. As a result, those models may yield inaccurate predictions in wildfire behavior, he said."If you're walking through the forest, you see wood here and grass there, and there's a lot of variation," said Josephson. Dry grasses, wet mosses, and hanging limbs don't have the same water content and burn in different ways. A fire may be evaporating moisture from wet moss, for example, at the same time it's consuming drier limbs. "We wanted to explore how the interaction between those fuels occurs as the fire travels through."Los Alamos scientists worked to improve their model called FIRETEC (developed by Rod Linn), collaborating with researchers at the University of Alberta in Canada and the Canadian Forest service. Their new developments accommodate variations in moisture content and other characteristics of the simulated fuel types. Researcher Ginny Marshall from the Canadian Forest Service recently began comparing its simulations to real-world data from boreal forests in northern Canada.During a session on reacting flows, Matthew Bonanni, a graduate student in the lab of engineer Matthias Ihme at Stanford University in California, described a new model for wildfire spread based on a machine learning platform. Predicting where and when fires will burn is a complex process, says Ihme, that's driven by a complex mix of environmental influences.The goal of Ihme's group was to build a tool that was both accurate and fast, able to be used for risk assessment, early warning systems, and designing mitigation strategies. They built their model on a specialized computer platform called TensorFlow, designed by researchers at Google to run machine learning applications. As the model trains on more physical data, said Ihme, its simulations of heat accumulation and fire-spreading dynamics improve -- and get faster.Ihme said he's excited to see what advanced computational tools bring to wildfire prediction. "It used to be a very empirical research area, based on physical observations, and our community works on more fundamental problems," he said. 
But adding machine learning to the toolbox, he said, shows how algorithms can improve the fidelity of experiments. "This is a really exciting pathway," he said.
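The general idea of simulating fire spread across a grid of fuel cells can be illustrated with a deliberately simplified toy model. The sketch below is only an illustration of that concept in Python; it is not the Stanford group's TensorFlow implementation or the FIRETEC model, and the dryness values and spread probability are invented:

```python
import numpy as np

# Toy cellular model of fire spread: a burning cell can ignite each of its
# four neighbors with a probability that grows with fuel dryness. A sketch
# of the concept only, not the models described above.
def spread_step(burning, dryness, p_base=0.3):
    padded = np.pad(burning, 1)
    neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:])      # burning neighbors per cell
    ignition_prob = 1.0 - (1.0 - p_base * dryness) ** neighbors
    new_fires = (np.random.rand(*burning.shape) < ignition_prob).astype(int)
    return np.maximum(burning, new_fires)                   # cells stay burnt once ignited

fires = np.zeros((20, 20), dtype=int)
fires[10, 10] = 1                                            # ignition point
dryness = np.random.uniform(0.2, 1.0, size=(20, 20))         # 0 = saturated fuel, 1 = bone dry
for _ in range(5):
    fires = spread_step(fires, dryness)
print("cells burning after 5 steps:", int(fires.sum()))
```

A machine-learning version of this idea would fit parameters like the spread probability to observed fire perimeters rather than fixing them by hand.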
|
Environment
| 2,020 |
November 23, 2020
|
https://www.sciencedaily.com/releases/2020/11/201123112446.htm
|
Researchers overcome barriers for bio-inspired solar energy harvesting materials
|
Inspired by nature, researchers at The City College of New York (CCNY) demonstrate a synthetic strategy to stabilize bio-inspired solar energy harvesting materials. Their findings, published in the latest issue of
|
In almost every corner of the world, despite extreme heat or cold temperature conditions, you will find photosynthetic organisms striving to capture solar energy. Uncovering nature's secrets on how to harvest light so efficiently and robustly could transform the landscape of sustainable solar energy technologies, especially in the wake of rising global temperatures.In photosynthesis, the first step (that is, light-harvesting) involves the interaction between light and the light-harvesting antenna, which is composed of fragile materials known as supra-molecular assemblies. From leafy green plants to tiny bacteria, nature designed a two-component system: the supra-molecular assemblies are embedded within protein or lipid scaffolds. It is not yet clear what role this scaffold plays, but recent research suggests that nature may have evolved these sophisticated protein environments to stabilize their fragile supra-molecular assemblies."Although we can't replicate the complexity of the protein scaffolds found in photosynthetic organisms, we were able to adapt the basic concept of a protective scaffold to stabilize our artificial light-harvesting antenna," said Dr. Kara Ng. Her co-authors include Dorthe M. Eisele and Ilona Kretzschmar, both professors at CCNY, and Seogjoo Jang, professor at Queens College.Thus far, translating nature's design principles to large-scale photovoltaic applications has been unsuccessful."The failure may lie in the design paradigm of current solar cell architectures," said Eisele. However, she and her research team, "do not aim to improve the solar cell designs that already exist. But we want to learn from nature's masterpieces to inspire entirely new solar energy harvesting architectures," she added.Inspired by nature, the researchers demonstrate how small, cross-linking molecules can overcome barriers towards functionalization of supra-molecular assemblies. They found that silane molecules can self-assemble to form an interlocking, stabilizing scaffold around an artificial supra-molecular light-harvesting antenna."We have shown that these intrinsically unstable materials, can now survive in a device, even through multiple cycles of heating and cooling," said Ng. Their work provides proof-of-concept that a cage-like scaffold design stabilizes supra-molecular assemblies against environmental stressors, such as extreme temperature fluctuations, without disrupting their favorable light-harvesting properties.The research was supported by CCNY's Martin and Michele Cohen Fund for Science, the Solar Photochemistry Program of the U.S. Department of Energy, Office of Basic Energy Sciences and the National Science Foundation (NSF CREST IDEALS and NSF-CAREER).
|
Environment
| 2,020 |
November 20, 2020
|
https://www.sciencedaily.com/releases/2020/11/201120142138.htm
|
New solvent-based recycling process could cut down on millions of tons of plastic waste
|
Multilayer plastic materials are ubiquitous in food and medical supply packaging, particularly since layering polymers can give those films specific properties, like heat resistance or oxygen and moisture control. But despite their utility, those ever-present plastics are impossible to recycle using conventional methods.
|
About 100 million tons of multilayer thermoplastics -- each composed of as many as 12 layers of varying polymers -- are produced globally every year. Forty percent of that total is waste from the manufacturing process itself, and because there has been no way to separate the polymers, almost all of that plastic ends up in landfills or incinerators.Now, University of Wisconsin-Madison engineers have pioneered a method for reclaiming the polymers in these materials using solvents, a technique they've dubbed Solvent-Targeted Recovery and Precipitation (STRAP) processing. Their proof-of-concept is detailed today (Nov. 20, 2020) in the journal By using a series of solvent washes guided by thermodynamic calculations of polymer solubility, UW-Madison professors of chemical and biological engineering George Huber and Reid Van Lehn and their students used the STRAP process to separate the polymers in a commercial plastic composed of common layering materials polyethylene, ethylene vinyl alcohol, and polyethylene terephthalate.The result? The separated polymers appear chemically similar to those used to make the original film.The team now hopes to use the recovered polymers to create new plastic materials, demonstrating that the process can help close the recycling loop. In particular, it could allow multilayer-plastic manufacturers to recover the 40 percent of plastic waste produced during the production and packaging processes."We've demonstrated this with one multilayer plastic," says Huber. "We need to try other multilayer plastics and we need to scale this technology."As the complexity of the multilayer plastics increases, so does the difficulty of identifying solvents that can dissolve each polymer. That's why STRAP relies on a computational approach used by Van Lehn called the Conductor-like Screening Model for Realistic Solvents (COSMO-RS) to guide the process.COSMO-RS is able to calculate the solubility of target polymers in solvent mixtures at varying temperatures, narrowing down the number of potential solvents that could dissolve a polymer. The team can then experimentally explore the candidate solvents."This allows us to tackle these much more complex systems, which is necessary if you're actually going to make a dent in the recycling world," says Van Lehn.The goal is to eventually develop a computational system that will allow researchers to find solvent combinations to recycle all sorts of multilayer plastics. The team also hopes to look at the environmental impact of the solvents it uses and establish a database of green solvents that will allow them to better balance the efficacy, cost and environmental impact of various solvent systems.The project stems from UW-Madison's expertise in catalysis. For decades, the university's chemical and biological engineering researchers have pioneered solvent-based reactions to convert biomass -- like wood or agricultural waste -- into useful chemicals or fuel precursors. Much of that expertise translates into solvent-based polymer recycling as well.The team is continuing its research on STRAP processing through the newly established Multi-University Center on Chemical Upcycling of Waste Plastics, directed by Huber. Researchers in the $12.5 million U.S. Department of Energy-funded center are investigating several chemical pathways for recovering and recycling polymers.This research was supported by a grant from the U.S. Department of Energy (DE-SC0018409).
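The screening logic behind a solvent-targeted approach can be illustrated with a simple sketch: given predicted solubilities of each polymer in candidate solvents, keep only the solvents expected to dissolve the target layer while leaving the others intact. The solvent names, solubility numbers and threshold below are invented for illustration; in the actual work such predictions would come from COSMO-RS calculations:

```python
# Sketch of the screening idea: find solvents that dissolve the target polymer
# but not the other layers. Solubility values here are made up for illustration.
solubility_g_per_L = {         # polymer -> {candidate solvent: predicted solubility}
    "PE":   {"toluene": 45.0, "water": 0.0, "DMSO": 0.1},
    "EVOH": {"toluene": 0.2,  "water": 1.5, "DMSO": 60.0},
    "PET":  {"toluene": 0.1,  "water": 0.0, "DMSO": 0.5},
}

def candidate_solvents(target: str, threshold: float = 10.0) -> list:
    """Solvents that dissolve `target` above `threshold` and no other polymer."""
    others = [p for p in solubility_g_per_L if p != target]
    return [s for s, val in solubility_g_per_L[target].items()
            if val >= threshold and all(solubility_g_per_L[o][s] < threshold for o in others)]

print(candidate_solvents("PE"))    # ['toluene'] in this toy example
print(candidate_solvents("EVOH"))  # ['DMSO'] in this toy example
```

Repeating this screen polymer by polymer yields a wash sequence that peels the layers apart one at a time, which is the essence of the STRAP concept described above.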
|
Environment
| 2,020 |
November 19, 2020
|
https://www.sciencedaily.com/releases/2020/11/201119144508.htm
|
Could kelp help relieve ocean acidification?
|
Ethereal, swaying pillars of brown kelp along California's coasts grow up through the water column, culminating in a dense surface canopy of thick fronds that provide homes and refuge for numerous marine creatures. There's speculation that these giant algae may protect coastal ecosystems by helping alleviate acidification caused by too much atmospheric carbon being absorbed by the seas.
|
A new on-site, interdisciplinary analysis of giant kelp in Monterey Bay off the coast of California sought to further investigate kelp's acidification mitigation potential. "We talk about kelp forests protecting the coastal environment from ocean acidification, but under what circumstances is that true and to what extent?" said study team member Heidi Hirsh, a PhD student at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "These kinds of questions are important to investigate before trying to implement this as an ocean acidification mitigation strategy."The team's findings, published on Oct. 22 in the journal "One of the main takeaways for me is the limitation of the potential benefits from kelp productivity," said Hirsh, the lead author on the study.Kelp is an ecologically and economically important foundation species in California, where forests line nutrient-rich, rocky bottom coasts. One of the detrimental impacts of increased carbon in the atmosphere is its subsequent absorption by the planet's oceans, which causes acidification -- a chemical imbalance that can negatively impact the overall health of marine ecosystems, including animals people depend on for food.Kelp has been targeted as a potentially ameliorating species in part because of its speedy growth -- up to 5 inches per day -- during which it undergoes a large amount of photosynthesis that produces oxygen and removes carbon dioxide from the water. In Monterey Bay, the effects of giant kelp are also influenced by seasonal upwelling, when deep, nutrient-rich, highly acidic water from the Pacific is pulled toward the surface of the bay."It's this very complicated story of disentangling where the benefit is coming from -- if there is a benefit -- and assessing it on a site-by-site basis, because the conditions that we observe in southern Monterey Bay may not apply to other kelp forests," Hirsh said.The researchers set up operations at Stanford's Hopkins Marine Station, a marine laboratory in Pacific Grove, California, and collected data offshore from the facility in a 300-foot-wide kelp forest. Co-author Yuichiro Takeshita of the Monterey Bay Aquarium Research Institute (MBARI) provided pH sensors that were distributed throughout the area to understand chemical and physical changes in conjunction with water sampling."We are moving beyond just collecting more chemistry data and actually getting at what's behind the patterns in that data," said co-principal investigator Kerry Nickols, an assistant professor at California State University, Northridge. "If we didn't look at the water properties in terms of how they're changing and the differences between the top and the bottom of the kelp forests, we really wouldn't understand what's going on."With the new high-resolution, vertical measurements of pH, dissolved oxygen, salinity and temperature, the researchers were able to distinguish patterns in the seawater chemistry around the kelp forest. At night, when they expected to see more acidic water, the water was actually less acidic relative to daytime measurements -- a result they hypothesize was caused by the upwelling of acidic, low oxygen water during the day."It was wild to see the pH climb during the night when we were expecting increased acidity as a function of kelp respiration," Hirsh said. 
"That was an early indicator of how important the physical environment was for driving the local biogeochemical signal."While this project looked at kelp's potential to change the local environment on a short-term basis, it also opens the doors to understanding long-term impacts, like the ability to cultivate "blue carbon," the underwater sequestration of carbon dioxide."One of the reasons for doing this is to enable the design of kelp forests that might be considered as a blue carbon option," said co-author Stephen Monismith, the Obayashi Professor in the School of Engineering. "Understanding exactly how kelp works mechanistically and quantitatively is really important."Although the kelp forests' mitigation potential in the canopy didn't reach the sensitive organisms on the sea floor, the researchers did find an overall less acidic environment within the kelp forest compared to outside of it. The organisms that live in the canopy or could move into it are most likely to benefit from kelp's local acidification relief, they write.The research also serves as a model for future investigation about the ocean as a three-dimensional, fluid habitat, according to the co-authors."The current knowledge set is pretty large, but it tends to be disciplinary -- it's pretty rare bringing all these elements together to study a complex coastal system," said co-PI Rob Dunbar, the W.M. Keck Professor at Stanford Earth. "In a way, our project was kind of a model for how a synthetic study pulling together many different fields could be done."Monismith is also a member of Bio-X and an affiliate of the Stanford Woods Institute for the Environment. Dunbar is also a member of Bio-X and a senior fellow with Stanford Woods Institute for the Environment. Co-authors on the study include Sarah Traiger from California State University, Northridge and David Mucciarone from Stanford's Department of Earth System Science.The research was supported by an ARCS Fellowship, an NSF Graduate Research Fellowship, the David and Lucile Packard Foundation, Health Canada and the National Science Foundation.
|
Environment
| 2,020 |
November 19, 2020
|
https://www.sciencedaily.com/releases/2020/11/201119103056.htm
|
Truffle munching wallabies shed new light on forest conservation
|
Feeding truffles to wallabies may sound like a madcap whim of the jet-setting elite, but it may give researchers clues to preserving remnant forest systems.
|
Dr Melissa Danks from Edith Cowan University in Western Australia led an investigation into how swamp wallabies spread truffle spores around the environment. Results demonstrate the importance of these animals to the survival of the forest."There are thousands of truffle species in Australia and they play a critical role in helping our trees and woody plants to survive," she said."Truffles live in a mutually beneficial relationship with these plants, helping them to take up water and nutrients and defend against disease."Unlike mushrooms where spores are dispersed through wind and water from their caps, truffles are found underground with the spores inside an enclosed ball -- they need to be eaten by an animal to move their spores."Dr Danks and colleagues at the University of New England investigated the role of swamp wallabies in dispersing these spores."Wallabies are browsing animals that will munch on ferns and leaves as well as a wide array of mushrooms and truffles," she said."This has helped them to be more resilient to changes in the environment than smaller mammals with specialist diets like potoroos."We were interested in finding out whether swamp wallabies have become increasingly important in truffle dispersal with the loss of these other mammals."The team fed truffles to wallabies and timed how long it would take for the spores to appear in the animals' poo. Most spores appeared within 51 hours, with some taking up to three days.Armed with this information, the researchers attached temporary GPS trackers to wallabies to map how far they move over a three-day period.Results showed the wallabies could move hundreds of metres, and occasionally more than 1200 metres, from the original truffle source before the spores appeared in their poo, which makes them very effective at dispersing truffles around the forest.Dr Danks said this research had wide-ranging conservation implications for Australian forests."As forest systems become more fragmented and increasingly under pressure, understanding spore dispersal systems is really key to forest survival," Dr Danks said."Many of our bushland plants have a partnership with truffles for survival and so it is really critical to understand the role of animals in dispersing these truffle spores."Our research on swamp wallabies has demonstrated a simple method to predict how far an animal disperses fungal spores in a variety of landscapes."
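The dispersal estimate combines two measurements: how long spores stay in the gut and how far the animal has moved in that time. A minimal sketch of that calculation, with invented coordinates and times, might look like this:

```python
import math

# Sketch of the dispersal-distance idea: look up where the animal was when
# spores were likely deposited and measure its displacement from the source.
# Positions and times are invented for illustration.
def distance_m(p1, p2):
    """Planar approximation; adequate for displacements of hundreds of metres."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

source = (0.0, 0.0)                      # where the truffle was eaten (local x, y in metres)
gps_track = {24: (310.0, 120.0),         # hours since feeding -> position fix
             51: (820.0, 640.0),
             72: (150.0, 90.0)}

passage_time_h = 51                      # most spores appeared within 51 hours
print(f"dispersal distance ~ {distance_m(source, gps_track[passage_time_h]):.0f} m")
```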
|
Environment
| 2,020 |
November 18, 2020
|
https://www.sciencedaily.com/releases/2020/11/201118141733.htm
|
New semiconductor coating may pave way for future green fuels
|
Hydrogen gas and methanol for fuel cells or as raw materials for the chemicals industry, for example, could be produced more sustainably using sunlight, a new Uppsala University study shows. In this study, researchers have developed a new coating material for semiconductors that may create new opportunities to produce fuels in processes that combine direct sunlight with electricity. The study is published in
|
"We've moved a step closer to our goal of producing the fuel of the future from sunlight," says Sascha Ott, Professor at the Department of Chemistry, Uppsala University.Today, hydrogen gas and methanol are produced mainly from fossil sources like oil or natural gas. An environmentally sounder, climate-friendlier option is to make these substances from water and carbon dioxide, using sustainable electricity, in what are known as electrolysers. This process requires electrical energy in the form of applied voltage.The scientists have devised a new material that reduces the voltage needed in the process by using sunlight to supplement the electricity.To capture the sunlight, they used semiconductors of the same type as those found in solar cells. The novel aspect of the study is that the semiconductors were covered with a new coating material that extracts electrons from the semiconductor when the sun is shining. These electrons are then available for fuel-forming reactions, such as production of hydrogen gas.The coating is a "metal-organic framework" -- a three-dimensional network composed of individual organic molecules that are held in place, on the sub-nanometre scale, by tiny metal connectors. The molecules capture the electrons generated by the sunlight and remove them from the semiconductor surface, where undesired chemical reactions might otherwise take place. In other words, the coating prevents the system from short-circuiting, which in turn allows efficient collection of electrons.In tests, the researchers were able to show that their new design greatly reduces the voltage required to extract electrons from the semiconductor."Our results suggest that the innovative coatings can be used to improve semiconductor performance, leading to more energy-efficient generation of fuels with lower electrical input requirements," Sascha Ott says.
|
Environment
| 2,020 |
November 18, 2020
|
https://www.sciencedaily.com/releases/2020/11/201118141718.htm
|
New technique seamlessly converts ammonia to green hydrogen
|
Northwestern University researchers have developed a highly effective, environmentally friendly method for converting ammonia into hydrogen. Outlined in a recent publication in the journal
|
The idea of using ammonia as a carrier for hydrogen delivery has gained traction in recent years because ammonia is much easier to liquify than hydrogen and is therefore much easier to store and transport. Northwestern's technological breakthrough overcomes several existing barriers to the production of clean hydrogen from ammonia."The bane for hydrogen fuel cells has been the lack of delivery infrastructure," said Sossina Haile, lead author of the study. "It's difficult and expensive to transport hydrogen, but an extensive ammonia delivery system already exists. There are pipelines for it. We deliver lots of ammonia all over the world for fertilizer. If you give us ammonia, the electrochemical systems we developed can convert that ammonia to fuel-cell-ready, clean hydrogen on-site at any scale."Haile is Walter P. Murphy Professor of materials science and engineering at Northwestern's McCormick School of Engineering with additional appointments in applied physics and chemistry. She also is co-director at the University-wide Institute for Sustainability and Energy at Northwestern.In the study, Haile and her research team report they are able to conduct the ammonia-to-hydrogen conversion using renewable electricity instead of fossil-fueled thermal energy because the process functions at much lower temperatures than traditional methods (250 degrees Celsius as opposed to 500 to 600 degrees Celsius). Second, the new technique generates pure hydrogen that does not need to be separated from any unreacted ammonia or other products. Third, the process is efficient because all of the electrical current supplied to the device directly produces hydrogen, without any loss to parasitic reactions. As an added advantage, because the hydrogen produced is pure, it can be directly pressurized for high-density storage by simply ramping up the electrical power.To accomplish the conversion, the researchers built a unique electrochemical cell with a proton-conducting membrane and integrated it with an ammonia-splitting catalyst."The ammonia first encounters the catalyst that splits it into nitrogen and hydrogen," Haile said. "That hydrogen gets immediately converted into protons, which are then electrically driven across the proton-conducting membrane in our electrochemical cell. By continually pulling off the hydrogen, we drive the reaction to go further than it would otherwise. This is known as Le Chatelier's principle. By removing one of the products of the ammonia-splitting reaction -- namely the hydrogen -- we push the reaction forward, beyond what the ammonia-splitting catalyst can do alone."The hydrogen generated from the ammonia splitting then can be used in a fuel cell. Like batteries, fuel cells produce electric power by converting energy produced by chemical reactions. Unlike batteries, fuel cells can produce electricity as long as fuel is supplied, never losing their charge. Hydrogen is a clean fuel that, when consumed in a fuel cell, produces water as its only byproduct. This stands in contrast with fossil fuels, which produce climate-changing greenhouse gases such as carbon dioxide, methane and nitrous oxide.Haile predicts that the new technology could be especially transformative in the transportation sector. In 2018, the movement of people and goods by cars, trucks, trains, ships, airplanes and other vehicles accounted for 28% of greenhouse gas emissions in the U.S. 
-- more than any other economic sector according to the Environmental Protection Agency."Battery-powered vehicles are great, but there's certainly a question of range and material supply," Haile said. "Converting ammonia to hydrogen on-site and in a distributed way would allow you to drive into a fueling station and get pressurized hydrogen for your car. There's also a growing interest for hydrogen fuel cells for the aviation industry because batteries are so heavy."Haile and her team have made major advances in the area of fuel cells over the years. As a next step in their work, they are exploring new methods to produce ammonia in an environmentally friendly way.
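Because the current is described as going entirely into hydrogen production, Faraday's law gives a simple back-of-the-envelope link between electrical input and hydrogen output. The current in the sketch below is an arbitrary example, not a figure from the study:

```python
# Hydrogen output at 100% faradaic efficiency: two electrons are consumed per H2 molecule.
FARADAY = 96485.0          # coulombs per mole of electrons

def h2_rate_g_per_hour(current_amps: float) -> float:
    mol_h2_per_s = current_amps / (2 * FARADAY)   # mol H2 produced per second
    return mol_h2_per_s * 2.016 * 3600            # convert via molar mass of H2 (g/mol)

print(f"{h2_rate_g_per_hour(10.0):.3f} g of H2 per hour at 10 A")  # about 0.376 g/h
```

Any current lost to side reactions would lower this number, which is why the absence of parasitic reactions matters for efficiency.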
|
Environment
| 2,020 |
November 18, 2020
|
https://www.sciencedaily.com/releases/2020/11/201118080752.htm
|
Predicting urban water needs
|
The gateway to more informed water use and better urban planning in your city could already be bookmarked on your computer. A new Stanford University study identifies residential water use and conservation trends by analyzing housing information available from the prominent real estate website Zillow.
|
The research, published Nov. 18 in "Evolving development patterns can hold the key to our success in becoming more water-wise and building long-term water security," said study senior author Newsha Ajami, director of urban water policy at Stanford's Water in the West program. "Creating water-resilient cities under a changing climate is closely tied to how we can become more efficient in the way we use water as our population grows."It's estimated that up to 68 percent of the world's population will reside in urban or suburban areas by 2050. While city growth is a consistent trend, the types of residential dwellings being constructed and neighborhood configurations are less uniform, leading to varying ways in which people use water inside and outside their homes. The people living within these communities also have different water use behaviors based on factors such as age, ethnicity, education and income. However, when planning for infrastructure changes, decision-makers only take population, economic growth and budget into account, resulting in an incomplete picture of future demand. This, in turn, can lead to infrastructure changes, such as replacing old pipes, developing additional water supply sources or building wastewater treatment facilities, that fail to meet community needs.Zillow and other real estate websites gather and publish records collected from different county and municipal agencies. These websites can also be updated by homeowners, making them rich sources of information that can otherwise be difficult and time-consuming to obtain. The Stanford researchers used data from Zillow to gather single-family home information, including lot size, home value and number of rooms in Redwood City, California, a fast-growing, economically diverse city with various styles of houses, lots and neighborhoods. Then, they pulled U.S. Census Bureau demographic information for the city, looking at factors including average household size and income along with the percentage occupied by renters, non-families, the college-educated and seniors.Coupling the Zillow and census data and then applying machine learning methods, the researchers were able to identify five community groupings, or clusters. They then compared the different groups' billing data from the city's public works department to identify water usage trends and seasonal patterns from 2007 to 2017 and conservation rates during California's historic drought from 2014 to 2017."With our methods incorporating Zillow data we were able to develop more accurate community groupings beyond simply clustering customers based on income and other socioeconomic qualities. This more granular view resulted in some unexpected findings and provided better insight into water-efficient communities," said lead author Kim Quesnel, who was a postdoctoral scholar at the Bill Lane Center for the American West while performing the research.They found the two lowest income groups scored average on water use despite having a higher number of people living in each household. The middle-income group had high outdoor water use but ranked low in winter water use, signaling efficient indoor water appliances -- such as low-flow, high-efficiency faucets and toilets -- making them an ideal target for outdoor conservation features such as converting green spaces or upgrading to weather-based or smart irrigation controllers.The two highest income groups, characterized by highly educated homeowners living in comparatively larger homes, were the most dissimilar.
One cluster -- younger residents on smaller lots with newer homes in dense, compact developments -- had the lowest water use of the entire city. The other high-income cluster consisting of older houses built on larger lots with fewer people turned out to be the biggest water consumer. The finding goes against most previous research linking income and water use, and suggests that changing how communities are built and developed can also change water use patterns, even for the most affluent customers.All groups showed high rates of water conservation during drought. Groups with the highest amount of savings (up to 37 percent during peak drought awareness) were the two thirstiest consumers (the high-income, large-lot and middle-income groups) demonstrating high potential for outdoor water conservation. Groups with lower normal water usage were also able to cut back, but were more limited in their savings. Understanding these limitations could inform how policymakers and city planners target customers when implementing water restrictions or offering incentives such as rebates during drought.This research lays the framework for integrating big data into urban planning, providing more accurate water use expectations for different community configurations. Further studies could include examining how data from emerging online real estate platforms can be used to develop neighborhood water use classifications across city, county or even state lines. An additional area of interest for the researchers is examining how water use consumption is linked to development patterns in other kinds of residential areas, for example in dense cities."Emerging, accessible data sources are giving us a chance to develop a more informed understanding of water use patterns and behaviors," said Ajami. "If we rethink the way we build future cities and design infrastructure, we have the opportunity for more equitable and affordable access to water across various communities."
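The general workflow of combining housing and demographic features, putting them on a common scale and grouping households into communities can be sketched as follows. The feature values, the choice of k-means and the number of clusters are illustrative assumptions, not the study's exact data or pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Sketch of clustering housing + census features into community groups.
# Invented records; columns: lot size (sq ft), home value ($k), rooms,
# household size, income ($k).
features = np.array([
    [4500, 1200, 6, 2.1, 160],
    [5200, 1350, 7, 2.3, 180],
    [2800,  750, 4, 3.4,  70],
    [3000,  720, 4, 3.6,  65],
    [8000, 1800, 8, 2.0, 220],
    [7600, 1750, 8, 2.2, 210],
])

X = StandardScaler().fit_transform(features)                 # put features on a common scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)   # one community-group label per household record
```

In the study itself, group labels like these would then be joined to utility billing records to compare water use and conservation across communities.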
|
Environment
| 2,020 |
November 18, 2020
|
https://www.sciencedaily.com/releases/2020/11/201118141857.htm
|
A new understanding of ionic interactions with graphene and water
|
A research team led by Northwestern University engineers and Argonne National Laboratory researchers has uncovered new findings about the role of ionic interactions with graphene and water. The insights could inform the design of new energy-efficient electrodes for batteries or provide the backbone ionic materials for neuromorphic computing applications.
|
Known for possessing extraordinary properties, from mechanical strength to electronic conductivity to wetting transparency, graphene plays an important role in many environmental and energy applications, such as water desalination, electrochemical energy storage, and energy harvesting. Water-mediated electrostatic interactions drive the chemical processes behind these technologies, making the ability to quantify the interactions between graphene, ions, and charged molecules vitally important in order to design more efficient and effective iterations."Every time you have interactions with ions in matter, the medium is very important. Water plays a vital role in mediating interactions between ions, molecules, and interfaces, which lead to a variety of natural and technological processes," said Monica Olvera de La Cruz, Lawyer Taylor Professor of Materials Science and Engineering, who led the research. "Yet, there is much we don't understand about how water-mediated interactions are influenced by nanoconfinement at the nanoscale."Using computer model simulations at Northwestern Engineering and x-ray reflectivity experiments at Argonne, the research team investigated the interaction between two oppositely charged ions in different positions in water confined between two graphene surfaces. They found that the strength of the interaction was not equivalent when the ions' positions were interchanged. This break of symmetry, which the researchers dubbed non-reciprocal interactions, is a phenomenon not previously predicted by electrostatic theory.The researchers also found that the interaction between oppositely charged ions became repulsive when one ion was inserted into the graphene layers, and the other was adsorbed at the interface."From our work, one can conclude that the water structure alone near interfaces cannot determine the effective electrostatic interactions between ions," said Felipe Jimenez-Angeles, senior research associate in Northwestern Engineering's Center for Computation and Theory of Soft Materials and a lead author on the study. "The non-reciprocity we observed implies that ion-ion interactions at the interface do not obey the isotropic and translational symmetries of Coulomb's law and can be present in both polarizable and non-polarizable models. This non-symmetrical water polarization affects our understanding of ion-differentiation mechanisms such as ion selectivity and ion specificity.""These results reveal another layer to the complexity of how ions interact with interfaces," said Paul Fenter, a senior scientist and group leader in the Chemical Sciences and Engineering Division at Argonne, who led the study's x-ray measurements using Argonne's Advanced Photon Source. "Significantly, these insights derive from simulations that are validated against experimental observations for the same system."These results could influence the future design of membranes for selective ion adsorption used in environmental technologies, like water purification processes, batteries and capacitors for electric energy storage, and the characterization of biomolecules, like proteins and DNA.Understanding ion interaction could also impact advances in neuromorphic computing -- where computers function like human brains to perform complex tasks much more efficiently than current computers.
Lithium ions can achieve plasticity, for example, by being inserted into or removed from graphene layers in neuromorphic devices."Graphene is an ideal material for devices that transmit signals via ionic transport in electrolytes for neuromorphic applications," said Olvera de la Cruz. "Our study demonstrated that the interactions between intercalated ions in the graphene and physically adsorbed ions in the electrolyte is repulsive, affecting the mechanics of such devices."The study provides researchers with a fundamental understanding of the electrostatic interactions in aqueous media near interfaces that go beyond water's relationship with graphene, which is crucial for studying other processes in the physical and life sciences."Graphene is a regular surface, but these findings can help explain electrostatic interactions in more complex molecules, like proteins," said Jimenez-Angeles. "We know that what's inside the protein and the electrostatic charges outside of it matters. This work gives us a new opportunity to explore and look at these important interactions."
|
Environment
| 2,020 |
November 17, 2020
|
https://www.sciencedaily.com/releases/2020/11/201117192628.htm
|
Moving wind turbine blades toward recyclability
|
A new material for wind blades that can be recycled could transform the wind industry, rendering renewable energy more sustainable than ever before while lowering costs in the process.
|
The use of a thermoplastic resin has been validated at the National Renewable Energy Laboratory (NREL). Researchers demonstrated the feasibility of thermoplastic resin by manufacturing a 9-meter-long wind turbine blade using this novel resin, which was developed by a Pennsylvania company called Arkema Inc. Researchers have now validated the structural integrity of a 13-meter-long thermoplastic composite blade, also manufactured at NREL.In addition to the recyclability aspect, thermoplastic resin can enable longer, lighter-weight, and lower-cost blades. Manufacturing blades using current thermoset resin systems requires more energy and manpower in the manufacturing facility, and the end product often winds up in landfills."With thermoset resin systems, it's almost like when you fry an egg. You can't reverse that," said Derek Berry, a senior engineer at NREL. "But with a thermoplastic resin system, you can make a blade out of it. You heat it to a certain temperature, and it melts back down. You can get the liquid resin back and reuse that."Berry is co-author of a new paper titled, "Structural Comparison of a Thermoplastic Composite Wind Turbine Blade and a Thermoset Composite Wind Turbine Blade," which appears in the journal The other authors, also from NREL, are Robynne Murray, Ryan Beach, David Barnes, David Snowberg, Samantha Rooney, Mike Jenks, Bill Gage, Troy Boro, Sara Wallen, and Scott Hughes.NREL has also developed a technoeconomic model to explore the cost benefits of using a thermoplastic resin in blades. Current wind turbine blades are made primarily of composite materials such as fiberglass infused with a thermoset resin. With an epoxy thermoset resin, the manufacturing process requires the use of additional heat to cure the resin, which adds to the cost and cycle time of the blades. Thermoplastic resin, however, cures at room temperature. The process does not require as much labor, which accounts for about 40% of the cost of a blade. The new process, the researchers determined, could make blades about 5% less expensive to produce.NREL is home to the Composites Manufacturing Education and Technology (CoMET) Facility at the Flatirons Campus near Boulder, Colorado. There, researchers design, manufacture, and test composite turbine blades. They previously demonstrated the feasibility of the thermoplastic resin system by manufacturing a 9-meter composite wind turbine blade. They followed that demonstration by manufacturing a 13-meter thermoplastic composite blade and structurally validating it against a near-identical thermoset blade. This work, coupled with work by Arkema and other Institute for Advanced Composites Manufacturing Innovation partners, demonstrated the advantages of moving away from the thermoset resin system."The thermoplastic material absorbs more energy from loads on the blades due to the wind, which can reduce the wear and tear from these loads to the rest of the turbine system, which is a good thing," Murray said.The thermoplastic resin could also allow manufacturers to build blades on site, alleviating a problem the industry faces as it trends toward larger and longer blades. As blade sizes grow, so does the problem of how to transport them from a manufacturing facility.
NREL is operated for the Energy Department by the Alliance for Sustainable Energy, LLC.
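The quoted figures imply a simple piece of arithmetic: if labor is roughly 40% of a blade's cost, a reduction in labor hours translates into a proportionally smaller reduction in total cost. In the sketch below, the 12.5% labor saving is an assumption chosen only to reproduce the roughly 5% overall figure, not a number from NREL's technoeconomic model:

```python
# Toy cost arithmetic: labor share of blade cost times the fraction of labor
# saved gives the overall cost reduction. The labor_reduction value is assumed.
labor_share = 0.40        # labor as a fraction of total blade cost (from the article)
labor_reduction = 0.125   # assumed fraction of labor eliminated by room-temperature curing

total_saving = labor_share * labor_reduction
print(f"overall blade cost reduction: {total_saving:.1%}")   # 5.0%
```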
|
Environment
| 2,020 |
November 17, 2020
|
https://www.sciencedaily.com/releases/2020/11/201117192620.htm
|
In the Amazon's 'sand forests,' birds play by different evolutionary rules
|
Picture the Amazon. You're thinking lush rainforests teeming with animals, right? It turns out, the Amazon Basin contains other less-famous ecosystems that have been under-studied by biologists for years, including patches of habitat growing on white sands. Scientists are starting to turn their attention to these "sand forests" and the animals that live there. In a new study, researchers examined birds from the region and found that unlike birds in the dense rainforest, the white sand birds travel from one habitat patch to another and interbreed. It's a characteristic that could change the way conservationists protect the sand forest birds.
|
"We're always interested in knowing where species are and why they're there, what kind of habitat they need," says João Capurucho, a postdoctoral researcher at Chicago's Field Museum and the lead author of the study in "We wanted to know if the behavioral and evolutionary patterns for white sand birds were the same as for birds that live in dense rainforest, and we found that they were different in surprising ways," says John Bates, a curator at the Field Museum and the study's senior author.The white sand ecosystems are found throughout the Amazon Basin, in patches around the size of Manhattan. Scientists aren't certain how the sand forests formed -- the ecosystems cover so much land scattered throughout the Amazon that it's hard to imagine that they arose from a single ancient beach system. And for a long time, scientists didn't pay them much attention. "The thinking was, you don't go to the Amazon to see short, sandy forests, you go to see dense, tall trees, so you're not going to stop in these areas," says Bates.Since the patches of white sand are separated from each other, the plants and animals living there are also isolated. "The patches of white sand in the middle of the rainforest are like islands, and with islands, you usually start seeing genetically distinct populations," says Bates. It's why Australia has such strange animals compared to the rest of the world -- when a population is cut off from the rest, the organisms will interbreed with each other and evolve along their own lines. Previous studies have shown that the plants and insects vary genetically from one white sand patch to another, indicating that they're in separate gene pools and frequently are different species. Capurucho and his colleagues wanted to see if the same was true for birds."We've also been investigating the genetic diversity of these birds and comparing them to birds in other Amazonian ecosystems. These birds were basically isolated in different patches, so dispersal might be a very important thing for them to move around the landscape," says Capurucho.The researchers compared the DNA of birds in different white sand ecosystem patches, and they found that the birds' DNA was very similar. That was surprising. In the dense terra firme Amazon rainforest, many birds become genetically distinct from each other and start branching off into different species if they're separated from each other by a barrier like a river that's too wide to cross. But the DNA data shows that white sand birds buck this evolutionary trend -- they fly between patches of white sand, sometimes hundreds of miles apart, and interbreed with the birds there."The genetic data indisputably argues that they're actually dispersing and maintaining connectivity across these landscapes," says Bates. "And that sometimes includes crossing big rivers, which is one of the things we were really interested in."Other elements of the study held true to biological norms: for instance, birds with long, thin wings generally flew greater distances and achieved bigger ranges than birds with short, rounded wings, whether they were from the white sand or surrounding forest. Also, white sand birds had on average smaller geographical ranges than their forest counterparts. But the fact that the white sand birds traveled between habitat areas to breed shows that they're playing by different evolutionary rules. 
And now that scientists know more about white sand birds' ranges and genetic patterns, that could help conservationists figure out how to set aside and protect land in ways that will benefit the animals living there. It's an especially urgent matter, given the threat of climate change altering Amazonian ecosystems."As the climate changes, these habitats will change in their distribution too. So we need to understand how birds are distributed across these habitats," says Capurucho. "We need to know whether they have a big range or if they're restricted to a small area, and what traits lead to these differences. If we understand that, we can manage threats to these birds and their habitats.""People haven't prioritized protecting white sand ecosystems, but this study highlights how much we still have to learn about them," says Bates. "These are really special habitats, and we can't protect them unless we know how they work."
|
Environment
| 2,020 |
November 17, 2020
|
https://www.sciencedaily.com/releases/2020/11/201117192605.htm
|
Upgraded radar can enable self-driving cars to see clearly no matter the weather
|
A new kind of radar could make it possible for self-driving cars to navigate safely in bad weather. Electrical engineers at the University of California San Diego developed a clever way to improve the imaging capability of existing radar sensors so that they accurately predict the shape and size of objects in the scene. The system worked well when tested at night and in foggy conditions.
|
The team will present their work at the Sensys conference Nov. 16 to 19.Inclement weather conditions pose a challenge for self-driving cars. These vehicles rely on technology like LiDAR and radar to "see" and navigate, but each has its shortcomings. LiDAR, which works by bouncing laser beams off surrounding objects, can paint a high-resolution 3D picture on a clear day, but it cannot see in fog, dust, rain or snow. On the other hand, radar, which transmits radio waves, can see in all weather, but it only captures a partial picture of the road scene.Enter a new UC San Diego technology that improves how radar sees."It's a LiDAR-like radar," said Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering. It's an inexpensive approach to achieving bad weather perception in self-driving cars, he noted. "Fusing LiDAR and radar can also be done with our techniques, but radars are cheap. This way, we don't need to use expensive LiDARs."The system consists of two radar sensors placed on the hood and spaced an average car's width apart (1.5 meters). Having two radar sensors arranged this way is key -- they enable the system to see more space and detail than a single radar sensor.During test drives on clear days and nights, the system performed as well as a LiDAR sensor at determining the dimensions of cars moving in traffic. Its performance did not change in tests simulating foggy weather. The team "hid" another vehicle using a fog machine and their system accurately predicted its 3D geometry. The LiDAR sensor essentially failed the test.The reason radar traditionally suffers from poor imaging quality is because when radio waves are transmitted and bounced off objects, only a small fraction of signals ever gets reflected back to the sensor. As a result, vehicles, pedestrians and other objects appear as a sparse set of points."This is the problem with using a single radar for imaging. It receives just a few points to represent the scene, so the perception is poor. There can be other cars in the environment that you don't see," said Kshitiz Bansal, a computer science and engineering Ph.D. student at UC San Diego. "So if a single radar is causing this blindness, a multi-radar setup will improve perception by increasing the number of points that are reflected back."The team found that spacing two radar sensors 1.5 meters apart on the hood of the car was the optimal arrangement. "By having two radars at different vantage points with an overlapping field of view, we create a region of high-resolution, with a high probability of detecting the objects that are present," Bansal said.The system overcomes another problem with radar: noise. It is common to see random points, which do not belong to any objects, appear in radar images. The sensor can also pick up what are called echo signals, which are reflections of radio waves that are not directly from the objects that are being detected.More radars mean more noise, Bharadia noted. So the team developed new algorithms that can fuse the information from two different radar sensors together and produce a new image free of noise.Another innovation of this work is that the team constructed the first dataset combining data from two radars."There are currently no publicly available datasets with this kind of data, from multiple radars with an overlapping field of view," Bharadia said. 
"We collected our own data and built our own dataset for training our algorithms and for testing."The dataset consists of 54,000 radar frames of driving scenes during the day and night in live traffic, and in simulated fog conditions. Future work will include collecting more data in the rain. To do this, the team will first need to build better protective covers for their hardware.The team is now working with Toyota to fuse the new radar technology with cameras. The researchers say this could potentially replace LiDAR. "Radar alone cannot tell us the color, make or model of a car. These features are also important for improving perception in self-driving cars," Bharadia said.Video:
|
Environment
| 2,020 |
November 16, 2020
|
https://www.sciencedaily.com/releases/2020/11/201116161228.htm
|
Biochar from agricultural waste products can adsorb contaminants in wastewater
|
Biochar -- a charcoal-like substance made primarily from agricultural waste products -- holds promise for removing emerging contaminants such as pharmaceuticals from treated wastewater.
|
That's the conclusion of a team of researchers that conducted a novel study that evaluated and compared the ability of biochar derived from two common leftover agricultural materials -- cotton gin waste and guayule bagasse -- to adsorb three common pharmaceutical compounds from an aqueous solution. In adsorption, one material, like a pharmaceutical compound, sticks to the surface of another, like the solid biochar particle. Conversely, in absorption, one material is taken internally into another; for example, a sponge absorbs water.Guayule, a shrub that grows in the arid Southwest, provided the waste for one of the biochars tested in the research. More properly called Parthenium argentatum, it has been cultivated as a source of rubber and latex. The plant is chopped to the ground and its branches mashed up to extract the latex. The dry, pulpy, fibrous residue that remains after stalks are crushed to extract the latex is called bagasse.The results are important, according to researcher Herschel Elliott, Penn State professor of agricultural and biological engineering, College of Agricultural Sciences, because they demonstrate the potential for biochar made from plentiful agricultural wastes -- that otherwise must be disposed of -- to serve as a low-cost additional treatment for reducing contaminants in treated wastewater used for irrigation."Most sewage treatment plants are currently not equipped to remove emerging contaminants such as pharmaceuticals, and if those toxic compounds can be removed by biochars, then wastewater can be recycled in irrigation systems," he said. "That beneficial reuse is critical in regions such as the U.S. Southwest, where a lack of water hinders crop production."The pharmaceutical compounds used in the study to test whether the biochars would adsorb them from aqueous solution were: sulfapyridine, an antibacterial medication no longer prescribed for treatment of infections in humans but commonly used in veterinary medicine; docusate, widely used in medicines as a laxative and stool softener; and erythromycin, an antibiotic used to treat infections and acne.The results were published today (Nov. 16). In the research, the biochar derived from cotton gin waste adsorbed 98% of the docusate, 74% of the erythromycin and 70% of the sulfapyridine in aqueous solution. By comparison, the biochar derived from guayule bagasse adsorbed 50% of the docusate, 50% of the erythromycin and just 5% of the sulfapyridine.The research revealed that a temperature increase, from about 650 to about 1,300 degrees F in the oxygen-free pyrolysis process used to convert the agricultural waste materials to biochars, resulted in a greatly enhanced capacity to adsorb the pharmaceutical compounds."The most innovative part about the research was the use of the guayule bagasse because there have been no previous studies on using that material to produce biochar for the removal of emerging contaminants," said lead researcher Marlene Ndoun, a doctoral student in Penn State's Department of Agricultural and Biological Engineering. "Same for cotton gin waste -- research has been done on potential ways to remove other contaminants, but this is the first study to use cotton gin waste specifically to remove pharmaceuticals from water."For Ndoun, the research is more than theoretical. She said she wants to scale up the technology and make a difference in the world.
Because cotton gin waste is widely available, even in the poorest regions, she believes it holds promise as a source of biochar to decontaminate water."I am originally from Cameroon, and the reason I'm even here is because I'm looking for ways to filter water in resource-limited communities, such as where I grew up," she said. "We think if this could be scaled up, it would be ideal for use in countries in sub-Saharan Africa, where people don't have access to sophisticated equipment to purify their water."The next step, Ndoun explained, would be to develop a mixture of biochar material capable of adsorbing a wide range of contaminants from water."Beyond removing emerging contaminants such as pharmaceuticals, I am interested in blending biochar materials so that we have low-cost filters able to remove the typical contaminants we find in water, such as bacteria and organic matter," said Ndoun.
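Adsorption percentages like those reported above are typically computed from the compound's concentration in solution before and after contact with the biochar. A minimal example, with invented concentrations rather than the study's measurements:

```python
# Removal efficiency from initial and final concentrations (invented values).
def percent_removed(initial_mg_L: float, final_mg_L: float) -> float:
    return 100.0 * (initial_mg_L - final_mg_L) / initial_mg_L

print(f"{percent_removed(1.00, 0.02):.0f}% removed")   # 98%, comparable to the docusate result
```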
|
Environment
| 2,020 |
November 16, 2020
|
https://www.sciencedaily.com/releases/2020/11/201116161214.htm
|
Fish carcasses deliver toxic mercury pollution to the deepest ocean trenches
|
The sinking carcasses of fish from near-surface waters deliver toxic mercury pollution to the most remote and inaccessible parts of the world's oceans, including the deepest spot of them all: the 36,000-foot-deep Mariana Trench in the northwest Pacific.
|
And most of that mercury began its long journey to the deep-sea trenches as atmospheric emissions from coal-fired power plants, mining operations, cement factories, incinerators and other human activities. Those are two of the main conclusions of a University of Michigan-led research team that analyzed the isotopic composition of mercury in fish and crustaceans collected at the bottom of two deep-sea trenches in the Pacific. The team reports its findings in a study scheduled for publication Nov. 16. "Mercury that we believe had once been in the stratosphere is now in the deepest trench on Earth," said U-M environmental geochemist Joel Blum, lead author of the study. "It was widely thought that anthropogenic mercury was mainly restricted to the upper 1,000 meters of the oceans, but we found that while some of the mercury in these deep-sea trenches has a natural origin, it is likely that most of it comes from human activity." At a scientific meeting in June, Blum's team and a Chinese-led research group independently reported the detection of human-derived mercury in deep-sea-trench organisms. The Chinese researchers published their findings July 7; the two teams, however, favored different mechanisms for how the mercury reaches the trenches. Why does it matter whether deep-sea-trench mercury came from sinking fish carcasses or from the steady rain of tiny bits of detritus? Because scientists and policymakers want to know how changing global mercury emissions will affect the levels found in seafood. While mercury emissions have declined in recent years in North America and Europe, China and India continue to expand their use of coal, and global-scale mercury emissions are rising. To determine how seafood is likely to be impacted, researchers rely on global models. And refining those models requires the clearest possible understanding of how mercury cycles within the oceans and between the oceans and the atmosphere, according to Blum. "Yes, we eat fish caught in shallower waters, not from deep-sea trenches," he said. "However, we need to understand the cycling of mercury through the entire ocean to be able to model future changes in the near-surface ocean." Mercury is a naturally occurring element, but more than 2,000 metric tons of it are emitted into the atmosphere each year from human activities. This inorganic mercury can travel thousands of miles before being deposited onto land and ocean surfaces, where microorganisms convert some of it to methylmercury, a highly toxic organic form that can accumulate in fish to levels that are harmful to humans and wildlife. Effects on humans can include damage to the central nervous system, the heart and the immune system. The developing brains of fetuses and young children are especially vulnerable. In their study, Blum and his colleagues analyzed the isotopic composition of methylmercury from the tissues of snailfish and crustaceans called amphipods collected at depths of up to 33,630 feet in the Mariana Trench in the northwest Pacific, southwest of Guam. Other samples were collected at depths of up to 32,800 feet in the Kermadec Trench in the southwest Pacific, northeast of New Zealand. "These samples were challenging to acquire, given the trenches' great depths and high pressures," said study co-author Jeffrey Drazen, a University of Hawaii oceanographer.
"The trenches are some of the least studied ecosystems on Earth, and the Mariana snailfish was only just discovered in 2014."Mercury has seven stable (nonradioactive) isotopes, and the ratio of the different isotopes provides a unique chemical signature, or fingerprint, that can be used as a diagnostic tool to compare environmental samples from various locations.The researchers used these fingerprinting techniques -- many of which were developed in Blum's lab -- to determine that the mercury from deep-sea-trench amphipods and snailfish had a chemical signature that matched the mercury from a wide range of fish species in the central Pacific that feed at depths of around 500 meters (1,640 feet). Those central Pacific fish were analyzed by Blum and his colleagues during a previous study.At the same time, they found that the isotopic composition of the mercury in sinking particles of detritus, the delivery mechanism favored by the Chinese team, does not match the chemical signature of mercury in the trench organisms, according to Blum and his colleagues.They concluded that most of the mercury in the trench organisms was transported there in the carcasses of fish that feed in sunlit near-surface waters, where most of the mercury comes from anthropogenic sources."We studied the trench biota because they live in the deepest and most remote place on Earth, and we expected the mercury there to be almost exclusively of geologic origin -- that is, from deep-sea volcanic sources," Blum said. "Our most surprising finding was that we found mercury in organisms from deep-sea trenches that shows evidence for originating in the sunlit surface zone of the ocean."Anthropogenic mercury enters the oceans via rainfall, dry deposition of windblown dust, and runoff from rivers and estuaries."Deep-sea trenches have been viewed as pristine ecosystems unsullied by human activities. But recent studies have found traces of anthropogenic lead, carbon-14 from nuclear weapons testing, and persistent organic pollutants such as PCBs in organisms living in even the deepest part of the ocean, which is known as the hadal zone," Drazen said.The latest mercury findings provide yet another example of human activities impacting food webs in the most remote marine ecosystems on Earth.
|
Environment
| 2,020 |
November 16, 2020
|
https://www.sciencedaily.com/releases/2020/11/201116132247.htm
|
Dairy cows exposed to heavy metals worsen antibiotic-resistant pathogen crisis
|
Dairy cows, exposed for a few years to drinking water contaminated with heavy metals, carry more pathogens loaded with antimicrobial-resistance genes able to tolerate and survive various antibiotics.
|
That's the finding of a team of researchers that conducted a study of two dairy herds in Brazil four years after a dam holding mining waste ruptured, and it spotlights a threat to human health, the researchers contend. The study is the first to show that long-term persistence of heavy metals in the environment may trigger genetic changes and interfere with the microorganism communities that colonize dairy cows, according to researcher Erika Ganda, assistant professor of food animal microbiomes, Penn State. "Our findings are important because if bacterial antimicrobial resistance is transferred via the food chain by milk or meat consumption, it would have substantial implications for human health," she said. "What we saw is, when heavy metal contamination is in the environment, there is potential for an increase of so-called 'superbugs.'" A declaration from the World Health Organization supports Ganda's assertion, saying that resistance to antimicrobials is one of the top 10 global public health threats facing humanity. Antimicrobial resistance occurs when bacteria, viruses, fungi and parasites change over time and no longer respond to medicines, making infections harder to treat and increasing the risk of disease spread, severe illness and death. A South American environmental calamity triggered the research. In 2015, in what became known as the Mariana Dam disaster, the Fundao Tailings Dam suffered a catastrophic failure and released more than 11 billion gallons of iron ore waste. The huge wave of toxic mud flowed into the Doce River basin surrounding Mariana City in Minas Gerais, a state in southeast Brazil. Following this catastrophe, the team analyzed the consequences of long-term exposure to contaminated drinking water on dairy cattle. To reach their conclusions, researchers identified bacterial antimicrobial-resistance genes in the feces, rumen fluid and nasal passages of 16 dairy cattle in the area contaminated by the iron ore waste four years after the environmental disaster. Researchers compared samples taken from those animals to analogous samples from 16 dairy cattle on an unaffected farm, about 220 miles away. The microorganism community in the cattle continuously exposed to contaminated water differed in many ways from that of the cows not exposed to heavy metals, noted researcher Natalia Carrillo Gaeta, doctoral student and research assistant in the Department of Preventive Veterinary Medicine and Animal Health, University of Sao Paulo, Brazil. The relative abundance and prevalence of bacterial antimicrobial-resistance genes were higher in cattle at the heavy metals-affected farm than in cattle at the non-contaminated farm, she pointed out. The data were published today (Nov. 16). The link between heavy metal concentration in the environment and increased prevalence of antibiotic resistance in bacteria has been seen before, Ganda said. It's known as the "co-resistance phenomenon" and is characterized by the closeness between different types of resistance genes located in the same genetic element. "As a result of this connection, the transfer of one gene providing heavy metal resistance may occur in concert with the transfer of the closest gene providing antibiotic resistance," she said. "Consequently, some resistance mechanisms are shared between antibiotics and heavy metals." Ganda's research group in the College of Agricultural Sciences works with the one-health perspective, which focuses on the interaction between animals, people and the environment.
She believes this research presents a good description of a one-health problem. "In this Brazilian environmental disaster, not only were several people and animals killed by the devastating flood caused by the dam rupture, but the contamination persisted in the environment and made it into dairy cows, which could potentially pose another risk for humans," Ganda said. "If these animals are colonized, resistant bacteria could also make it to humans and colonize them through the food chain."
|
Environment
| 2,020 |
November 16, 2020
|
https://www.sciencedaily.com/releases/2020/11/201116112934.htm
|
New tool predicts geological movement and the flow of groundwater in old coalfields
|
A remote monitoring tool to help authorities manage public safety and environmental issues in recently abandoned coal mines has been developed by the University of Nottingham.
|
The tool uses satellite radar imagery to capture millimetre-scale measurements of changes in terrain height. Such measurements can be used to monitor and forecast groundwater levels and changes in geological conditions deep below the earth's surface in former mining areas. The project was tested at a regional scale in the UK, which has a long history of coal mining, but it has global implications given the worldwide decline in the demand for coal in favour of more sustainable energy sources. The method was implemented over the Nottinghamshire coalfields, which were abandoned as recently as 2015, when the last deep mine, Thoresby Colliery, shut its doors for good. When deep mines are closed, the groundwater that was previously pumped to the surface to make mining safe is allowed to rise again until it is restored to its natural level in a process called rebound. The rebound of groundwater through former mine workings needs careful monitoring: often containing contaminants, it can pollute waterways and drinking water supplies, lead to localised flooding, renew mining subsidence and land uplift, and reactivate geological faults if it rises too fast. Such issues can cause costly and hazardous problems that need to be addressed prior to the land being repurposed. The Coal Authority therefore needs detailed information on the rebound rate across the vast mine systems it manages so it knows exactly where to relax or increase pumping to control groundwater levels. Measuring the rate and location of mine water rebound is therefore vital to effectively manage the environmental and safety risks in former coalfields, but difficult to achieve. Groundwater can flow in unanticipated directions via cavities within and between neighbouring collieries and discharge at the surface in areas not thought to be at risk. In the past, predicting where mine water will flow was heavily reliant on mine plans -- inaccurate or incomplete documents that are sometimes more than a century old -- and on borehole data. Costing approximately £20,000 to £350,000 each, boreholes are expensive to drill and are often sparsely situated across vast coalfields, leaving measurement gaps. More recently, uplift, subsidence and other geological motion have been monitored by applying Interferometric Synthetic Aperture Radar (InSAR) to images acquired from radar satellites. However, this interferometry technique has historically worked only in urban areas (as opposed to rural ones), where the radar can pick up stable objects, such as buildings or rail tracks, on the ground to reflect back regularly to the satellite. This study uses an advanced InSAR technique, called Intermittent Small Baseline Subset (ISBAS), developed by the University of Nottingham and its spin-out company Terra Motion Ltd. InSAR uses stacks of satellite images of the same location taken every few days or weeks, which makes it possible to pick up even the slightest topographical changes over time. Uniquely, ISBAS InSAR can compute land deformation measurements over both urban and rural terrain. This is beneficial when mapping former mining areas, which are often located in rural areas.
Over the Nottinghamshire coalfields, for example, the land cover is predominantly rural, with nearly 80 per cent comprising agricultural land, pastures and semi-natural areas. Such a density of measurements meant the study lead, University of Nottingham PhD student David Gee, could develop a cost-effective and simple method to model groundwater rebound from the surface movement changes. The study found a definitive link between ground motion measurements and rising mine water levels. Often land subsidence or uplift occurs as a result of changes in groundwater, where the strata act a little like a sponge, expanding when filling with fluid and contracting when drained. With near-complete spatial coverage of the InSAR data, he could fill in the measurement gaps between boreholes to map the change in mine water levels across the whole coalfield. The model takes into account both geology and depth of groundwater to determine the true rate of rebound and help identify where problems associated with rebound may occur. The findings have been published in a paper, 'Modelling groundwater rebound in recently abandoned coalfields using DInSAR.' David Gee, who is based in the Nottingham Geospatial Institute at the University, said, "There are several coalfields currently undergoing mine water rebound in the UK, where surface uplift has been measured using InSAR. In the Nottinghamshire coalfields, the quantitative comparison between the deformation measured by the model and InSAR confirms that the heave is caused by the recovery of mine water." At first a forward model was generated to estimate surface uplift in response to measured changes in groundwater levels from monitoring boreholes. David calibrated and validated the model using ISBAS InSAR on ENVISAT and Sentinel-1 radar data. He then inverted the InSAR measurements to provide an estimate of the change in groundwater levels. Subsequently, the inverted rates were used to estimate the time it will take for groundwater to rebound and identify areas of the coalfield most at risk of surface discharges. "InSAR measurements, when combined with modelling, can assist with the characterisation of the hydrogeological processes occurring at former mining sites. The technique has the potential to make a significant contribution to the progressive abandonment strategy of recently closed coalfields," David said. The InSAR findings offer a supplementary source of data on groundwater changes that augments the borehole measurements. It means monitoring can be done remotely, so it is less labour-intensive for national bodies such as the Environment Agency (which manages hazards such as flooding, pollution and contaminated land) and the Coal Authority (which has a mandate to manage the legacy of underground coal mining in terms of public safety and subsidence). The model has already flagged that some parts of the coalfields are not behaving as previously predicted, which could influence existing remediation plans. David explains, "The deepest part of the North Nottinghamshire coalfield, for example, is not rebounding as expected, which suggests that the mine plans here might not be completely accurate.
The stability is confirmed by the InSAR and the model -- future monitoring of this area will help to identify if or when rebound does eventually occur."Next steps for the project are to integrate our results into an existing screening tool developed by the Environment Agency and Coal Authority to help local planning authorities, developers and consultants design sustainable drainage systems in coalfield areas. The initial results, generated at a regional scale, have the potential to be scaled to all coalfields in the UK, with the aid of national InSAR maps," adds David.Luke Bateson, Senior Remote Sensing Geologist from the British Geological Survey, said, "InSAR data offers a fantastic opportunity to reveal how the ground is moving, however we need studies such as David's in order to understand what these ground motions relate to and what they mean. David's study, not only provides this understanding but also provides a tool which can convert InSAR ground motions into information on mine water levels that can be used to make informed decisions."Dr Andrew Sowter, Chief Technical Officer at Terra Motion Ltd, explains, "Studies like this demonstrate the value to us, as a small commercial company, in investing in collaborative work with the University. We now have a remarkable, validated, result that is based upon our ISBAS InSAR method and demonstrably supported by a range of important stakeholders. This will enable us to further penetrate the market in a huge range of critical applications hitherto labelled as difficult for more conventional InSAR techniques, particularly those markets relating to underground fluid extraction and injection in more temperate, vegetated zones."
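To make the forward-model-and-inversion idea described above concrete, here is a deliberately simplified sketch: the strata are treated as an elastic "sponge" in which surface uplift is proportional to the rise in groundwater head, and the same coefficient is used in reverse to convert InSAR uplift rates into estimated water-level changes and rebound times. The coefficient and all numbers are hypothetical; the published model also accounts for geology and groundwater depth.

```python
# A minimal sketch, not the published model: a linear forward model
# (head rise -> uplift) calibrated against boreholes, then inverted so
# InSAR uplift rates yield estimated mine-water level changes.
import numpy as np

SKELETAL_COEFF_MM_PER_M = 0.8  # hypothetical: mm of uplift per metre of head rise

def forward_uplift_mm(head_rise_m):
    """Predict surface uplift (mm) from a rise in groundwater head (m)."""
    return SKELETAL_COEFF_MM_PER_M * np.asarray(head_rise_m)

def invert_head_rise_m(uplift_mm):
    """Invert InSAR-measured uplift (mm) to an estimated head rise (m)."""
    return np.asarray(uplift_mm) / SKELETAL_COEFF_MM_PER_M

# Calibration check against borehole observations (hypothetical values).
borehole_head_rise_m = np.array([5.0, 12.0, 20.0])
print("predicted uplift (mm):", forward_uplift_mm(borehole_head_rise_m))

# Apply the inversion to InSAR pixel rates and estimate time to full rebound.
insar_uplift_mm_per_yr = np.array([2.4, 6.1, 0.1])      # hypothetical pixel rates
head_rise_m_per_yr = invert_head_rise_m(insar_uplift_mm_per_yr)
depth_to_rest_level_m = np.array([60.0, 150.0, 40.0])   # hypothetical remaining rise
years_to_rebound = depth_to_rest_level_m / head_rise_m_per_yr
print("estimated years until full rebound:", np.round(years_to_rebound, 1))
```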
|
Environment
| 2,020 |
November 16, 2020
|
https://www.sciencedaily.com/releases/2020/11/201116112907.htm
|
Solar cells: Mapping the landscape of Caesium based inorganic halide perovskites
|
Scientists at HZB have printed and explored different compositions of caesium based halide perovskites (CsPb(BrxI1-x)3).
|
Hybrid halide perovskites (ABX3) have risen in only a few years as highly efficient new materials for thin film solar cells. The A stands for a cation, either an organic molecule or some alkali metal, the B is a metal, most often lead (Pb), and the X is a halide element such as bromide or iodide. Currently some compositions achieve power conversion efficiencies above 25%. What is more, most perovskite thin films can easily be processed from solution at moderate processing temperatures, which is very economic. World record efficiencies have been reached by organic molecules such as methylammonium (MA) as the A cation and Pb and iodine or bromide on the other sites. But those organic perovskites are not yet very stable. Inorganic perovskites with caesium at the A-site promise higher stabilities, but simple compounds such as CsPbI3 or CsPbBr3 are either not very stable or do not provide the electronic properties needed for applications in solar cells or other optoelectronic devices. Now, a team at HZB has explored compositions of CsPb(BrxI1-x)3. For the production they used a newly developed method for printing combinatorial perovskite thin films to produce systematic variations of CsPb(BrxI1-x)3. With a special high intensity x-ray source, the liquid metal jet in the LIMAX lab at HZB, the crystalline structure of the thin film was analysed at different temperatures, ranging from room temperature up to 300 degrees Celsius. "We find that all investigated compositions convert to a cubic perovskite phase at high temperature," Hampus Näsström, PhD student and first author of the publication, explains. Upon cooling down, all samples transition to metastable tetragonal and orthorhombic distorted perovskite phases, which make them suitable for solar cell devices. "This has proven to be an ideal use case of in-situ XRD with the lab-based high-brilliance X-ray source," Roland Mainz, head of the LIMAX laboratory, adds. Since the transition temperatures into the desired phases are found to decrease with increasing bromide content, this would allow lower processing temperatures for inorganic perovskite solar cells. "The interest in this new class of solar materials is huge, and the possible compositional variations near to infinite. This work demonstrates how to produce and assess systematically a wide range of compositions," says Dr. Eva Unger, who heads the Young Investigator Group Hybrid Materials Formation and Scaling. Dr. Thomas Unold, head of the Combinatorial Energy Materials Research group, agrees and suggests that "this is a prime example of how high-throughput approaches in research could vastly accelerate discovery and optimization of materials in future research."
|
Environment
| 2,020 |
November 16, 2020
|
https://www.sciencedaily.com/releases/2020/11/201116075720.htm
|
New method brings physics to deep learning to better simulate turbulence
|
Deep learning, a form of machine learning, learns from data to model problem scenarios and offer solutions. However, some problems in physics are unknown or cannot be represented in detail mathematically on a computer. Researchers at the University of Illinois Urbana-Champaign developed a new method that brings physics into the machine learning process to make better predictions.
|
The researchers used turbulence to test their method. "We don't know how to mathematically write down all of turbulence in a useful way. There are unknowns that cannot be represented on the computer, so we used a machine learning model to figure out the unknowns. We trained it on both what it sees and the physical governing equations at the same time as a part of the learning process. That's what makes it magic and it works," said Willett Professor and Head of the Department of Aerospace Engineering Jonathan Freund. Freund said the need for this method was pervasive. "It's an old problem. People have been struggling to simulate turbulence and to model the unrepresented parts of it for a long time," Freund said. Then he and his colleague Justin Sirignano had an epiphany. "We learned that if you try to do the machine learning without considering the known governing equations of the physics, it didn't work. We combined them and it worked." When designing an aircraft or spacecraft, Freund said, this method will help engineers predict whether or not a design involving turbulent flow will work for their goals. They'll be able to make a change, run it again to get a prediction of heat transfer or lift, and predict if their design is better or worse. "Anyone who wants to do simulations of physical phenomena might use this new method. They would take our approach and load data into their own software. It's a method that would admit other unknown physics. And the observed results of that unknown physics could be loaded in for training," Freund said. The work was done using the supercomputing facility at the National Center for Supercomputing Applications (NCSA) at UIUC known as Blue Waters, making the simulation faster and thus more cost-efficient. The next step is to use the method on more realistic turbulent flows. "The turbulent flow we used to demonstrate the method is a very simple configuration," Freund said. "Real flows are more complex. I'd also like to use the method for turbulence with flames in it -- a whole additional type of physics. It's something we plan to continue to develop in the new Center for Exascale-enabled Scramjet Design, housed in NCSA." Freund said this work is at the research level but can potentially affect industry in the future. "Universities were very active in the first turbulence simulations, then industry picked them up. The first university-based large-eddy simulations looked incredibly expensive in the 80s and 90s. But now companies do large-eddy simulations. We expect this prediction capability will follow a similar path. I can see a day in the future with better techniques and faster computers that companies will begin using it."
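The idea of training on "both what it sees and the physical governing equations" can be pictured with a toy sketch: a model is fitted by minimising a loss that combines mismatch with sparse observations and the residual of a known governing equation evaluated at collocation points. The equation (u'' + u = 0), data, and weighting below are hypothetical choices made purely for illustration; this is not the authors' turbulence model.

```python
# A toy illustration of physics-constrained learning: the training loss is
# data mismatch + residual of a known governing equation (u'' + u = 0),
# so the fit is constrained by physics even where observations are sparse.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x_obs = np.array([0.3, 1.2, 2.5])                   # sparse, noisy observations
u_obs = np.sin(x_obs) + 0.02 * rng.standard_normal(3)
x_col = np.linspace(0.0, np.pi, 50)                 # collocation points for the physics term

def u(theta, x):
    # simple polynomial surrogate model
    return np.polyval(theta, x)

def loss(theta, w_physics=1.0):
    data_term = np.mean((u(theta, x_obs) - u_obs) ** 2)
    # finite-difference residual of the governing equation u'' + u = 0
    h = x_col[1] - x_col[0]
    uu = u(theta, x_col)
    u_xx = (uu[2:] - 2 * uu[1:-1] + uu[:-2]) / h ** 2
    physics_term = np.mean((u_xx + uu[1:-1]) ** 2)
    return data_term + w_physics * physics_term

theta0 = np.zeros(6)                                # degree-5 polynomial coefficients
result = minimize(loss, theta0, method="BFGS")
print("max error vs true solution sin(x):",
      np.max(np.abs(u(result.x, x_col) - np.sin(x_col))))
```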
|
Environment
| 2,020 |
November 13, 2020
|
https://www.sciencedaily.com/releases/2020/11/201113095156.htm
|
An epidemic outbreak of Mesoamerican Nephropathy in Nicaragua linked to nickel toxicity
|
For more than 20 years, an epidemic of chronic kidney disease (CKD) of unknown origin has severely affected specific coastal communities along Central America's Pacific coastline from Mexico to Panama, leading to more than 50,000 deaths. The condition, known as Mesoamerican Nephropathy (MeN), has a perplexing clinical presentation. Unlike traditional forms of CKD, it affects healthy young working-age individuals who do not have other traditional risk factors for kidney disease, such as diabetes or hypertension. The underlying cause of this devastating public health crisis has remained a mystery.
|
A "CSI-style" scientific investigation led by Dr. Kristy Murray, professor of pediatrics, immunology and microbiology at Baylor College of Medicine and Texas Children's Hospital, revealed evidence for nickel toxicity as the underlying cause of this disease in a Nicaraguan "hotspot," which is among the worst-hit areas in the continent. The study provides new, compelling evidence that low-dose exposure to nickel can cause systemic inflammation, anemia and kidney injury -- hallmarks of acute MeN that progresses to chronic kidney disease in around 90% of the patients. The study appeared in "A few years back, based on my reputation of investigating many new outbreaks and my laboratory's expertise in studying tropical medicine and infectious diseases among vulnerable populations, we were called to investigate the possible causes of this horrific epidemic that plagued vulnerable agricultural areas in the Pacific lowlands for decades," Murray, who is also the assistant Dean at the National School of Tropical Medicine Baylor College of Medicine and Vice Chair for Research in the Department of Pediatrics at Texas Children's, said. She initially received her outbreak experience twenty years ago at the CDC as part of the elite group of disease detectives known as the Epidemic Intelligence Service.Although agricultural toxins were proposed as a possible factor, based on the prevalence of this disease only in specific coastal populations, the team ruled it out. Genetic mutations, as the sole cause, were also excluded because of the relatively recent emergence of this disease (in decades versus centuries, which is typical of inherited genetic disorders) and a sharp increase in cases in the region."Although it was thought to be a chronic condition, after we reviewed hundreds of clinical records and conducted surveillance for new cases, we were struck by the acute 'flu-like' presentation in the initial stages of this disease. At the onset, the disease looked remarkably like a classic hyper-inflammatory response to an infection. So, we screened for several pathogens but could not pin it down to any particular infectious agent," Murray said. "We then turned our attention to clinical and pathological tests that led us to the most important clues to crack this case. Majority of the affected individuals had recently developed anemia and their kidney biopsies showed extreme inflammation in the tubules and cortico-medullary junctions of the kidney, indicative of heavy metal or trace element toxicity. The pieces of the puzzle were finally coming together."Dr. Rebecca Fischer, who was Dr. Murray's postdoctoral fellow at the time and now assistant professor of Epidemiology at Texas A&M University, worked to pull together these complex analyses, and nephrologists, Drs. Sreedhar Mandayam and Chandan Vangala at Baylor College, helped to guide the team in their clinical interpretation of acute cases.The team then collaborated with Drs. Jason Unrine and Wayne Sanderson at the University of Kentucky who specialize in trace element toxicity. Since the easiest way to test the levels of heavy metals is through toenails, they collected toenail clippings of individuals about three months after they experienced an acute kidney injury event and analyzed them for 15 trace elements, including heavy metals. Most importantly, they compared these analyzes to controls they recruited from the same population who had no evidence of kidney disease. They found affected cases to have significantly increased levels of nickel. 
They also identified higher levels of aluminum and vanadium in affected cases than control subjects, but nickel was by far the strongest correlate, and biologically, it made sense with the clinical presentation.Nickel is an abundant, naturally occurring heavy metal and like iron, it is essential for the human body, but is needed only in very trace quantities. Excess recurrent exposure to nickel, by incidental ingestion through contaminated water, food or soil, can cause several toxic and carcinogenic effects. Since people who work a lot with soil such as agricultural field laborers, miners and brick-makers were found to have the highest risk of acquiring this disease, the researchers theorize their source of the nickel exposure was likely geologic in nature and possibly linked to a volcanic chain in the area that became active in the late 90s, after which incidence of this chronic kidney disease began to skyrocket in lowland areas downstream from the volcanoes in this chain."While we still need to validate these findings in other areas impacted by MeN, such as El Salvador or Guatemala, and to confirm the geologic source of nickel contamination, we are very excited to have found a strong lead in this challenging public health problem. Based on this study, several public health strategies were implemented, such as finding ways to protect drinking water sources from soil and runoff water contamination and educating community members about the need to frequently wash their hands after working with soil. It is gratifying to see our efforts are starting to pay off. After these measures were put in place, we noticed a dramatic reduction in the number of new cases, an indication that we are moving in the right direction. This is the first-ever downward trend in this outbreak since its emergence two decades ago. Considering the sobering death toll in the affected communities, I am relieved we can finally do something about it," Murray shared.The authors were affiliated with one or more of the following institutions: Baylor College of Medicine, Texas Children's Hospital, Texas A & M, University of Kentucky and M.D. Anderson Cancer Center. The study was partially funded by the El Comité Nacional de Productores de Azúcar de Nicaragua (CNPA) and National Institutes of Health.
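A minimal sketch of the kind of case-control comparison described above: toenail nickel concentrations in affected individuals are compared with those of unaffected controls using a nonparametric test. All values are hypothetical and purely illustrative; they are not the study's measurements, which covered 15 trace elements.

```python
# Illustrative case-control comparison of toenail nickel levels.
# Values are hypothetical placeholders, not data from the study.
import numpy as np
from scipy.stats import mannwhitneyu

cases_ni_ug_g = np.array([1.8, 2.4, 3.1, 2.9, 4.0, 2.2, 3.5])     # hypothetical cases
controls_ni_ug_g = np.array([0.6, 0.9, 1.1, 0.8, 1.3, 0.7, 1.0])  # hypothetical controls

# One-sided Mann-Whitney U test: are nickel levels higher in cases?
stat, p_value = mannwhitneyu(cases_ni_ug_g, controls_ni_ug_g, alternative="greater")

print(f"median cases    : {np.median(cases_ni_ug_g):.2f} ug/g")
print(f"median controls : {np.median(controls_ni_ug_g):.2f} ug/g")
print(f"Mann-Whitney U = {stat:.1f}, one-sided p = {p_value:.4f}")
```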
|
Environment
| 2,020 |
November 12, 2020
|
https://www.sciencedaily.com/releases/2020/11/201112151315.htm
|
How to improve natural gas production in shale
|
A new hydrocarbon study contradicts conventional wisdom about how methane is trapped in rock, revealing a new strategy to more easily access the valuable energy resource.
|
"The most challenging issue facing the shale energy industry is the very low hydrocarbon recovery rates: less than 10 percent for oil and 20 percent for gas. Our study yielded new insights into the fundamental mechanisms governing hydrocarbon transport within shale nanopores," said Hongwu Xu, an author from Los Alamos National Laboratory's Earth and Environmental Sciences Division. "The results will ultimately help develop better pressure management strategies for enhancing unconventional hydrocarbon recovery."Most of U.S. natural gas is hidden deep within shale reservoirs. Low shale porosity and permeability make recovering natural gas in tight reservoirs challenging, especially in the late stage of well life. The pores are miniscule -- typically less than five nanometers -- and poorly understood. Understanding the hydrocarbon retention mechanisms deep underground is critical to increase methane recovering efficiency. Pressure management is a cheap and effective tool available to control production efficiency that can be readily adjusted during well operation -- but the study's multi-institution research team discovered a trade-off.This team, including the lead author, Chelsea Neil, also of Los Alamos, integrated molecular dynamics simulations with novel in situ high-pressure small-angle neutron scattering (SANS) to examine methane behavior in Marcellus shale in the Appalachian basin, the nation's largest natural gas field, to better understand gas transport and recovery as pressure is modified to extract the gas. The investigation focused on interactions between methane and the organic content (kerogen) in rock that stores a majority of hydrocarbons.The study's findings indicate that while high pressures are beneficial for methane recovery from larger pores, dense gas is trapped in smaller, common shale nanopores due to kerogen deformation. For the first time, they present experimental evidence that this deformation exists and proposed a methane-releasing pressure range that significantly impacts methane recovery. These insights help optimize strategies to boost natural gas production as well as better understand fluid mechanics.Methane behavior was compared during two pressure cycles with peak pressures of 3000 psi and 6000 psi, as it was previously believed that increasing pressure from injected fluids into fractures would increase gas recovery. The team discovered that unexpected methane behavior occurrs in very small but prevalent nanopores in the kerogen: the pore uptake of methane was elastic up to the lower peak pressure, but became plastic and irreversible at 6,000 psi, trapping dense methane clusters that developed in the sub-2 nanometer pore, which encompass 90 percent of the measured shale porosity.Led by Los Alamos, the multi-institution study was published in Nature's new
|
Environment
| 2,020 |
November 12, 2020
|
https://www.sciencedaily.com/releases/2020/11/201112144017.htm
|
Environmentally friendly method could lower costs to recycle lithium-ion batteries
|
A new process for restoring spent cathodes to mint condition could make it more economical to recycle lithium-ion batteries. The process, developed by nanoengineers at the University of California San Diego, is more environmentally friendly than today's methods; it uses greener ingredients, consumes 80 to 90% less energy, and emits about 75% less greenhouse gases.
|
Researchers detail their work in a paper published Nov. 12. The process works particularly well on cathodes made from lithium iron phosphate, or LFP. Batteries made with LFP cathodes are less costly than other lithium-ion batteries because they don't use expensive metals like cobalt or nickel. LFP batteries also have longer lifetimes and are safer. They are widely used in power tools, electric buses and energy grids. They are also the battery of choice for Tesla's Model 3. "Given these advantages, LFP batteries will have a competitive edge over other lithium-ion batteries in the market," said Zheng Chen, a professor of nanoengineering at UC San Diego. The problem? "It's not cost-effective to recycle them," Chen said. "It's the same dilemma with plastics -- the materials are cheap, but the methods to recover them are not." The new recycling process that Chen and his team developed could lower these costs. It does the job at low temperatures (60 to 80 C) and ambient pressure, making it less power-hungry than other methods. Also, the chemicals it uses -- lithium salt, nitrogen, water and citric acid -- are inexpensive and benign. "The whole regeneration process works at very safe conditions, so we don't need any special safety precautions or special equipment. That's why we can make this so low cost for recycling batteries," said first author Panpan Xu, a postdoctoral researcher in Chen's lab. The researchers first cycled commercial LFP cells until they had lost half their energy storage capacity. They took the cells apart, collected the cathode powders, and soaked them in a solution containing lithium salt and citric acid. Then they washed the solution with water, dried the powders and heated them. The researchers made new cathodes from the powders and tested them in both coin cells and pouch cells. Their electrochemical performance, chemical makeup and structure were all fully restored to their original states. As the battery cycles, the cathode undergoes two main structural changes that are responsible for its decline in performance. The first is the loss of lithium ions, which creates empty sites called vacancies in the cathode structure. The other occurs when iron and lithium ions switch spots in the crystal structure. When this happens, they cannot easily switch back, so lithium ions become trapped and can no longer cycle through the battery. The process restores the cathode's structure by replenishing lithium ions and making it easy for iron and lithium ions to switch back to their original spots. The latter is accomplished using citric acid, which acts as a reducing agent -- a substance that donates an electron to another substance. Citric acid transfers electrons to the iron ions, making them less positively charged. This minimizes the electronic repulsion forces that prevent the iron ions from moving back into their original spots in the crystal structure, and also releases the lithium ions back into circulation. While the overall energy costs of this recycling process are lower, researchers say further studies are needed on the logistics of collecting, transporting and handling large quantities of batteries. "Figuring out how to optimize these logistics is the next challenge," Chen said. "And that will bring this recycling process closer to industry adoption."
|
Environment
| 2,020 |
November 12, 2020
|
https://www.sciencedaily.com/releases/2020/11/201112113139.htm
|
This tableware made from sugarcane and bamboo breaks down in 60 days
|
Scientists have designed a set of "green" tableware made from sugarcane and bamboo that doesn't sacrifice convenience or functionality and could serve as a potential alternative to plastic cups and other disposable plastic containers. Unlike traditional plastic or biodegradable polymers -- which can take as long as 450 years or require high temperatures to degrade -- this non-toxic, eco-friendly material only takes 60 days to break down and is clean enough to hold your morning coffee or dinner takeout. This plastic alternative is presented November 12.
|
"To be honest, the first time I came to the US in 2007, I was shocked by the available one-time use plastic containers in the supermarket," says corresponding author Hongli (Julie) Zhu of Northeastern University. "It makes our life easier, but meanwhile, it becomes waste that cannot decompose in the environment." She later saw many more plastic bowls, plates, and utensils thrown into the trash bin at seminars and parties and thought, "Can we use a more sustainable material?"To find an alternative for plastic-based food containers, Zhu and her colleagues turned to bamboos and one of the largest food-industry waste products: bagasse, also known as sugarcane pulp. Winding together long and thin bamboo fibers with short and thick bagasse fibers to form a tight network, the team molded containers from the two materials that were mechanically stable and biodegradable. The new green tableware is not only strong enough to hold liquids as plastic does and cleaner than biodegradables made from recycled materials that might not be fully de-inked, but also starts decomposing after being in the soil for 30-45 days and completely loses its shape after 60 days."Making food containers is challenging. It needs more than being biodegradable," said Zhu. "On one side, we need a material that is safe for food; on the other side, the container needs to have good wet mechanical strength and be very clean because the container will be used to take hot coffee, hot lunch."The researchers added alkyl ketene dimer (AKD), a widely used eco-friendly chemical in the food industry, to increase oil and water resistance of the molded tableware, ensuring the sturdiness of the product when wet. With the addition of this ingredient, the new tableware outperformed commercial biodegradable food containers, such as other bagasse-based tableware and egg cartons, in mechanical strength, grease resistance, and non-toxicity.The tableware the researchers developed also comes with another advantage: a significantly smaller carbon footprint. The new product's manufacturing process emits 97% less CO2 than commercially available plastic containers and 65% less CO2 than paper products and biodegradable plastic. The next step for the team is to make the manufacturing process more energy efficient and bring the cost down even more, to compete with plastic. Although the cost of cups made out of the new material ($2,333/ton) is two times lower than that of biodegradable plastic ($4,750/ton), traditional plastic cups are still slightly cheaper ($2,177/ton)."It is difficult to forbid people to use one-time use containers because it's cheap and convenient," says Zhu. "But I believe one of the good solutions is to use more sustainable materials, to use biodegradable materials to make these one-time use containers."
|
Environment
| 2,020 |
November 12, 2020
|
https://www.sciencedaily.com/releases/2020/11/201112113122.htm
|
New maps document big-game migrations across the western United States
|
For the first time, state and federal wildlife biologists have come together to map the migrations of ungulates -- hooved mammals such as mule deer, elk, pronghorn, moose and bison -- across America's West. The maps will help land managers and conservationists pinpoint actions necessary to keep migration routes open and functional to sustain healthy big-game populations.
|
"This new detailed assessment of migration routes, timing and interaction of individual animals and herds has given us an insightful view of the critical factors necessary for protecting wildlife and our citizens," said USGS Director Jim Reilly.The new study, Ungulate Migrations of the Western United States: Volume 1, includes maps of more than 40 big-game migration routes in Arizona, Idaho, Nevada, Utah and Wyoming."I'm really proud of the team that worked across multiple agencies to transform millions of GPS locations into standardized migration maps," said Matt Kauffman, lead author of the report and director of the USGS Wyoming Cooperative Fish and Wildlife Research Unit. "Many ungulate herds have been following the same paths across western landscapes since before the United States existed, so these maps are long overdue."The migration mapping effort was facilitated by Department of the Interior Secretary's Order 3362, which has brought greater focus to the need to manage and conserve big-game migrations in the West. It builds on more than two decades of wildlife research enhanced by a technological revolution in GPS tracking collars. The research shows ungulates need to migrate in order to access the best food, which in the warmer months is in the mountains. They then need to retreat seasonally to lower elevations to escape the deep winter snow.Big-game migrations have grown more difficult as expanding human populations alter habitats and constrain the ability of migrating animals to find the best forage. The herds must now contend with the increasing footprint of fences, roads, subdivisions, energy production and mineral development. Additionally, an increased frequency of droughts due to climate change has reduced the duration of the typical springtime foraging bonanza.Fortunately, maps of migration habitat, seasonal ranges and stopovers are leading to better conservation of big-game herds in the face of all these changes. Detailed maps can help identify key infrastructure that affect migration patterns and allow conservation officials to work with private landowners to protect vital habitats and maintain the functionality of corridors.The migration maps also help researchers monitor and limit the spread of contagious diseases, such as chronic wasting disease, which are becoming more prevalent in wild North American cervid populations such as deer, elk and moose."Arizona is excited to be part of this effort," said Jim deVos, assistant director for wildlife management with the Arizona Game and Fish Department. "This collaboration has allowed us to apply cutting-edge mapping techniques to decades of Arizona's GPS tracking data and to make those maps available to guide conservation of elk, mule deer and pronghorn habitat."Many of these mapping and conservation techniques were pioneered in Wyoming. Faced with rapidly expanding oil and gas development, for more than a decade the Wyoming Game and Fish Department and the USGS Cooperative Research Unit at the University of Wyoming have worked together to map corridors to assure the continued movements of migratory herds on federal lands.Migration studies have also reached the Wind River Indian Reservation, where researchers are collaborating with the Eastern Shoshone and Northern Arapaho Fish and Game to track mule deer and elk migrations and doing outreach to tribal youth. 
Director Reilly emphasized that the interactions with state agencies and the tribes, especially with the Wind River students, have been a hallmark of this effort and have been remarkably successful.For example, the mapping and official designation of Wyoming's 150-mile Red Desert as part of the Hoback mule deer migration corridor enabled science-based conservation and management decisions. Detailed maps also allowed managers to enhance stewardship by private landowners, whose large ranches are integral to the corridor. Partners funded fence modifications and treatments of cheatgrass and other invasive plants across a mix of public and private segments within the corridor."Just like Wyoming, Nevada has long valued our mule deer migrations," said Tony Wasley, director of the Nevada Department of Wildlife. "This effort has provided us with a new level of technical expertise to get these corridors mapped in a robust way. We look forward to using these maps to guide our stewardship of Nevada's mule deer migrations."In 2018, the USGS and several western states jointly created a Corridor Mapping Team for USGS scientists to work side-by-side with state wildlife managers and provide technical assistance through all levels of government. With coordination from the Western Association of Fish and Wildlife Agencies and the information-sharing and technical support of the team, agency biologists from Arizona, Idaho, Nevada, Utah and Wyoming collaborated to produce migration maps for the five big-game species. In 2019, the Corridor Mapping Team expanded to include mapping work across all states west of the Rocky Mountains.In addition to managers from the respective state wildlife agencies, the report was coauthored by collaborating biologists from the USDA Forest Service, the National Park Service, and the Bureau of Land Management, among others. The maps themselves were produced by cartographers from the USGS and the InfoGraphics Lab at the University of Oregon.
|
Environment
| 2,020 |
November 11, 2020
|
https://www.sciencedaily.com/releases/2020/11/201111144335.htm
|
Making a case for organic Rankine cycles in waste heat recovery
|
A team from City, University of London's Department of Engineering believes that a new approach to generating energy through waste heat could yield important insights into delivering environmentally-friendly power.
|
In the recent paper, Making the case for cascaded organic Rankine cycles for waste-heat recovery, Dr White of City's Department of Engineering examines this approach. The ORC is based on the principle of heating a liquid, which causes it to evaporate, and the resulting gas can then expand in a turbine, which is connected to a generator, thus creating power. Waste heat to power organic Rankine cycle systems can utilise waste heat from a range of industrial processes in addition to existing power generation systems. A cascaded ORC system is essentially two ORC systems coupled together, with the heat that is rejected from the first ORC being used as the input heat for the second. However, in developing his model of a cascaded ORC system, Dr White hastens to add that there is a trade-off between performance and cost -- in the case of the heat exchangers deployed, the general rule is that the better the performance, the larger and more costly the heat exchangers. He says the trade-off can be explored through optimisation and the generation of what is called a 'Pareto front' -- a collection of optimal solutions that considers the trade-off between two things. If quite large heat exchangers (in this specific case, greater than around 200 m2) were affordable, then for that amount of area, it is possible to generate more power with a cascaded system than with a single-stage system. However, if the size of the heat exchangers was restricted, one would probably be better off with a single-stage system. Dr White's results suggest that in applications where maximising performance is not the primary objective, single-stage ORC systems remain the best option. However, in applications where maximised performance is the goal, cascaded systems can produce more power for the same size heat exchangers. His paper emerged out of his work on the NextORC project, funded by the Engineering and Physical Sciences Research Council (EPSRC).
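As a rough illustration of the Pareto-front idea described above, the sketch below filters a set of candidate designs, keeping only those for which no other design offers both a smaller total heat-exchanger area and a higher power output. The candidate numbers are hypothetical and are not taken from the paper's cascaded-ORC model.

```python
# Illustrative Pareto-front extraction for the area-vs-power trade-off.
# Candidate (area, power) pairs are hypothetical placeholders.
candidates = [
    # (total heat-exchanger area in m^2, net power in kW)
    (120, 310), (150, 360), (200, 420), (210, 415),
    (260, 480), (300, 500), (320, 495), (400, 540),
]

def pareto_front(designs):
    """Keep designs for which no other design has both less area and more power."""
    front = []
    for area, power in designs:
        dominated = any(a <= area and p >= power and (a, p) != (area, power)
                        for a, p in designs)
        if not dominated:
            front.append((area, power))
    return sorted(front)

for area, power in pareto_front(candidates):
    print(f"area = {area:3d} m^2  ->  power = {power} kW")
```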
|
Environment
| 2,020 |
November 11, 2020
|
https://www.sciencedaily.com/releases/2020/11/201111095641.htm
|
3D-printed weather stations could enable more science for less money
|
Across the United States, weather stations made up of instruments and sensors monitor the conditions that produce our local forecasts, like air temperature, wind speed and precipitation. These systems aren't just weather monitors, they are also potent tools for research on topics from farming to renewable energy generation.
|
Commercial weather stations can cost thousands of dollars, limiting both their availability and thus the amount of climate data that can be collected. But the advent of 3D printing and low-cost sensors has made it possible to build a weather station for a few hundred dollars. Could these inexpensive, homegrown versions perform as well as their pricier counterparts? The answer is yes -- up to a point, according to researchers, who put a 3D-printed weather station to the test in Oklahoma. Adam K. Theisen, an atmospheric and Earth scientist at the U.S. Department of Energy's (DOE) Argonne National Laboratory, led the project, which compared the printed station with a commercial-grade station for eight months to see whether it was accurate and how well it could hold up against the elements. Three-dimensional printing uses digital models to produce physical objects on the fly. Its low cost and the ability to print parts wherever you can lug a printer could help expand the number of these stations, helping to bring data collection to remote areas and educate tomorrow's researchers. A team at the University of Oklahoma followed the guidance and open source plans developed by the 3D-Printed Automatic Weather Station (3D-PAWS) Initiative at the University Corporation for Atmospheric Research to print over 100 weather station parts. Instead of using polylactic acid, more commonly used in 3D printing, they turned to acrylonitrile styrene acrylate, a type of plastic filament considered more durable outdoors. Coupled with low-cost sensors, the 3D-printed parts provide the basis for these new systems, which the 3D-PAWS Initiative established as promising in earlier experiments. "In order for this to get more widespread adoption, it has to go through verification and validation studies like this," Theisen said. While the 3D-printed system did start showing signs of trouble about five months into the experiment -- the relative humidity sensor corroded and failed, and some parts eventually degraded or broke -- its measurements were on par with those from a commercial-grade station in the Oklahoma Mesonet, a network designed and implemented by scientists at the University of Oklahoma and at Oklahoma State University. "I didn't expect that this station would perform nearly as well as it did," said Theisen. "Even though components started to degrade, the results show that these kinds of weather stations could be viable for shorter campaigns." Theisen, who was based at the University of Oklahoma when the research began, continued to oversee the effort after joining Argonne. In the experiment, the low-cost sensors accurately measured temperature, pressure, rain, UV and relative humidity. With the exception of a couple of instruments, the plastic material held up in the Oklahoma weather from mid-August 2018 to mid-April the following year, a period that saw strong rainstorms, snow and temperatures ranging from 14 to 104°F (-10 to 40°C).
A 3D-printed anemometer, which measures wind speed, did not perform as well, but could be improved partly with better printing quality.The project, which was led by undergraduate students at the University of Oklahoma, confirmed both the accuracy of a 3D-printed weather station and its value as an education tool."The students learned skill sets they would not have picked up in the classroom," Theisen said. "They developed the proposal, designed the frame, and did most of the printing and wiring."The ability to print specialized components could make weather stations more feasible in remote areas because replacement parts could be fabricated right away when needed. And even if a cheaper sensor breaks after a few months, the math still works out for a low budget."If you're talking about replacing two or three of these inexpensive sensors versus maintaining and calibrating a $1,000 sensor every year, it's a strong cost-benefit to consider," noted Theisen.
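A minimal sketch of the kind of side-by-side verification described above: paired readings from the printed station and a reference (Mesonet-grade) station are summarised as bias, root-mean-square error and correlation. The temperature values below are hypothetical placeholders, not the Oklahoma measurements.

```python
# Illustrative validation statistics for a low-cost station against a reference.
# Readings are hypothetical placeholders.
import numpy as np

reference_c = np.array([21.3, 23.8, 25.1, 19.4, 15.2, 30.6, 28.0])  # reference station
printed_c   = np.array([21.6, 24.1, 25.0, 19.9, 15.8, 30.2, 28.5])  # 3D-printed station

diff = printed_c - reference_c
bias = diff.mean()                                  # mean offset
rmse = np.sqrt((diff ** 2).mean())                  # typical error magnitude
corr = np.corrcoef(printed_c, reference_c)[0, 1]    # agreement in variability

print(f"bias = {bias:+.2f} C, RMSE = {rmse:.2f} C, r = {corr:.3f}")
```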
|
Environment
| 2,020 |
November 10, 2020
|
https://www.sciencedaily.com/releases/2020/11/201110112524.htm
|
Urban gulls adapt foraging schedule to human activity patterns
|
If you've ever seen a seagull snatch a pasty or felt their beady eyes on your sandwich in the park, you'd be right to suspect they know exactly when to strike to increase their chances of getting a human snack.
|
A new study by the University of Bristol is the most in-depth look to date at the foraging behaviours of urban gulls and how they've adapted to patterns of human activity in a city. In comparison to natural environments, urban environments are novel for animals on an evolutionary timescale and present a wide array of potential food sources. In urban environments food availability often fluctuates according to patterns of human activity, which can follow a daily or weekly cycle. However, until now, little has been known about how urban animals adapt to these time differences in human food availability. A team of scientists from Bristol's Faculties of Engineering and Life Sciences used different sources of data to record the behaviour of urban gulls at three different settings in the city: a public park, a school and a waste centre. The study used data from mini GPS tracker backpacks fitted to 12 Lesser Black-backed Gulls, as well as observations of gull numbers at the different sites. The team found the birds' foraging patterns closely matched the timing of school breaks and the opening and closing times of the waste centre, but that their activity in the park appeared to correspond with the availability of natural food sources. These findings suggest gulls may have the behavioural flexibility to adapt their foraging behaviour to human time schedules when beneficial, and that this trait helps them to thrive in cities. Dr Anouk Spelt, lead author of the paper, said: "Our first day at the school, the students were excited to tell us about the gulls visiting their school at lunch time. Indeed, our data showed that gulls were not only present in high numbers during lunch time to feed on leftovers, but also just before the start of the school and during the first break when students had their snack. Similarly, at the waste centre the gulls were present in higher numbers on weekdays when the centre was open and trucks were unloading food waste. "Although everybody has experienced or seen gulls stealing food from people in parks, our gulls mainly went to the park first thing in the morning and this may be because earthworms and insects are present in higher numbers during these early hours." Dr Shane Windsor, co-author, said: "With this study in Bristol we have shown that gulls in cities are able to adapt their foraging schedule to make best use of food resources depending on their availability. Some gulls even used all three feeding grounds in the same day, suggesting they might track the availability to optimise their energy intake. These results highlight the behavioural flexibility of gulls and their ability to adapt to the artificial environments and time schedules of urban living."
|
Environment
| 2,020 |
November 10, 2020
|
https://www.sciencedaily.com/releases/2020/11/201110112514.htm
|
Researchers discover the secret of how moss spreads
|
In a recent study, researchers from the Natural History Museum of Denmark at the University of Copenhagen have studied how one of the world's most widespread moss species, C. purpureus, has spread across the globe.
|
"We found a remarkable overlap between global wind patterns and the way in which this moss species has spread over time, one that we haven't been aware of until now," says evolutionary biologist Elisabeth Biersma of the Natural History Museum of Denmark, who is the study's lead author.According to Biersma, this means that much of the moss Danes find commingling with their lawn grass or lightly clinging to their rooftops is often part of the same population found on another continent at a similar latitude. For example, moss spores from North America are likely blown by the prevailing Westerlies across the Atlantic to Denmark.Mosses ("Mosses are extremely resilient organisms that can both suck up a lot of water and tolerate considerable desiccation. Most other plants are far from being as resistant to harsh environments such as rooftops, sidewalks or polar climates. Along with the wind, this has been the key to the great success of mosses the world over," explains Elisabeth Biersma.There are roughly 600 moss species in Denmark, out of roughly 12,000 species found worldwide. In the study, researchers used moss samples sourced from dried plant collections called herbaria, from around the world. Using genetic samples of the mosses, the researchers built an extensive evolutionary tree that helped them map the various moss populations.The researchers' analyses demonstrate that the current distribution pattern of C. purpureus has occurred over the last ~11 million years. But the fact that it has taken so long for C. purpureus to spread to the places where it is found today comes as a bit of a surprise"This can probably be explained by the fact that global wind systems can partly disperse spores over a long distance, but also restrict global dispersion as wind systems are self-enclosed and isolated transport systems, which thereby restrict any spreading beyond them," explains Elisabeth Biersma.This is the first time that the researcher has seen such a uniform pattern of proliferation across the globe, as demonstrated with "These findings could help us understand the spread of other organisms, such as bacteria, fungi and some plants, which are also spread via microscopic airborne particles transported by the wind. But only the future can say whether this knowledge is applicable to other organisms." concludes Biersma.
|
Environment
| 2,020 |
November 10, 2020
|
https://www.sciencedaily.com/releases/2020/11/201110102542.htm
|
Uncovering novel genomes from Earth's microbiomes
|
Despite advances in sequencing technologies and computational methods in the past decade, researchers have uncovered genomes for just a small fraction of Earth's microbial diversity. Because most microbes cannot be cultivated under laboratory conditions, their genomes can't be sequenced using traditional approaches. Identifying and characterizing the planet's microbial diversity is key to understanding the roles of microorganisms in regulating nutrient cycles, as well as gaining insights into potential applications they may have in a wide range of research fields.
|
A public repository of 52,515 microbial draft genomes generated from environmental samples around the world, expanding the known diversity of bacteria and archaea by 44%, is now available and is described in a study published November 9, 2020. Metagenomics is the study of microbial communities in environmental samples without needing to isolate individual organisms, using various methods for processing, sequencing and analysis. "Using a technique called metagenome binning, we were able to reconstruct thousands of metagenome-assembled genomes (MAGs) directly from sequenced environmental samples without needing to cultivate the microbes in the lab," noted Stephen Nayfach, the study's first author and research scientist in Nikos Kyrpides' Microbiome Data Science group. "What makes this study really stand out from previous efforts is the remarkable environmental diversity of the samples we analyzed." Emiley Eloe-Fadrosh, head of the JGI Metagenome Program and senior author on the study, elaborated on Nayfach's comments. "This study was designed to encompass the broadest and most diverse range of samples and environments, including natural and agricultural soils, human- and animal-host associated, and ocean and other aquatic environments -- that's pretty remarkable." Much of the data had been generated from environmental samples sequenced by the JGI through the Community Science Program and was already available on the JGI's Integrated Microbial Genomes & Microbiomes (IMG/M) platform. Eloe-Fadrosh noted that it was a nice example of "big-data" mining to gain a deeper understanding of the data and enhancing the value by making data publicly available. To acknowledge the efforts of the investigators who had done the sampling, Eloe-Fadrosh reached out to more than 200 researchers around the world in accordance with the JGI data use policy. "I felt it is important to acknowledge the significant efforts to collect and extract DNA from these samples, many of which come from unique, difficult to access environments, and invited these researchers to be co-authors as part of IMG data consortium," she said. Using this massive dataset, Nayfach clustered the MAGs into 18,000 candidate species groups, 70% of which were novel compared with over 500,000 existing genomes available at that time. "Looking across the tree of life, it's striking how many uncultivated lineages are only represented by MAGs," he said. "While these draft genomes are imperfect, they can still reveal a lot about the biology and diversity of uncultured microbes." Teams of researchers worked on multiple analyses harnessing the genome repository, and the IMG/M team developed several updates and features to mine the GEM catalog. (Watch this IMG webinar on Metagenome Bins to learn more.) One group mined the dataset for novel secondary metabolite biosynthetic gene clusters (BGCs), increasing the number of BGCs in IMG/ABC (Atlas of Biosynthetic Gene Clusters) by 31%. (Listen to this JGI Natural Prodcast episode on genome mining.) Nayfach also worked with another team on predicting host-virus connections between all viruses in IMG/VR (Virus) and the GEM catalog, associating 81,000 viruses -- 70% of which had not already been associated with a host -- with 23,000 MAGs. Building upon these resources, KBase, a multi-institutional collaborative knowledge creation and discovery environment designed for biologists and bioinformaticians, developed metabolic models for thousands of MAGs. 
The models are now available in a public Narrative, which provides shareable, reproducible workflows. "Metabolic modeling is a routine analysis for isolate genomes, but has not been done at scale for uncultivated microbes," said Eloe-Fadrosh, "and we felt that the collaboration with KBase would add value beyond clustering and analysis of these MAGs." "Just bringing this dataset into KBase has immediate value because people can find the high-quality MAGs and use them to inform future analyses," said José P. Faria, a KBase computational biologist at Argonne National Laboratory. "The process of building a metabolic model is simple: you just select a genome or MAG and press a button to build a model from our database of mappings between biochemical reactions and annotations. We look at what was annotated in the genome and at the resulting model to assess the metabolic capabilities of the organism." (Watch this KBase webinar on metabolic modeling.) KBase User Engagement lead Elisha Wood-Charlson added that by demonstrating the ease with which metabolic models were generated from the GEM dataset, metagenomics researchers might consider branching into this space. "Most metagenomics researchers might not be willing to dive into an entirely new research field [metabolic modeling], but they might be interested in how biochemistry impacts what they work on. The genomics community can now explore metabolism using KBase's easy path from genomes or MAGs to modeling that may not have been considered," she said. Kostas Konstantinidis of Georgia Institute of Technology, one of the co-authors whose data were part of the catalog, said: "I don't think there are many institutions that can do this kind of large-scale metagenomics and that have the capacity for large scale analyses. The beauty of this study is that it's done at this scale that individual labs cannot do, and it gives us new insights into microbial diversity and function." He is already finding ways to utilize the catalog in his own research on how microbes respond to climate change. "With this dataset I can see where every microbe is found, and how abundant it is. That's very useful for my work and for others doing similar research." Additionally, he's interested in expanding the diversity of the reference database he's developing called the Microbial Genomes Atlas to allow for more robust analyses by adding the MAGs. "This is a great resource for the community," Konstantinidis added. "It's a dataset that is going to facilitate many more studies subsequently. And I hope JGI and other institutions continue to do this kind of projects."
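The article mentions that the MAGs were clustered into candidate species groups but does not spell out how. A common convention in the field (assumed here, not stated in the article) is to group genomes that share at least 95% average nucleotide identity (ANI). The sketch below shows a toy, greedy, representative-based version of that idea; the genome names and ANI values are made-up placeholders, and the real study's pipeline and thresholds may differ.

```python
# Toy sketch: grouping genomes (MAGs) into candidate "species" clusters using a
# greedy, representative-based approach with an assumed 95% ANI cutoff.
# The pairwise ANI values below are hypothetical, for illustration only.

SPECIES_ANI_CUTOFF = 95.0  # percent; a common convention, assumed here

def cluster_by_ani(genomes, ani):
    """Assign each genome to the first cluster whose representative it matches.

    genomes: list of genome IDs, ideally sorted by decreasing quality so that
             the best genome of each group becomes the representative.
    ani:     dict mapping frozenset({a, b}) -> percent identity.
    """
    representatives = []          # one representative genome per cluster
    clusters = {}                 # representative -> list of member genomes
    for g in genomes:
        home = None
        for rep in representatives:
            if ani.get(frozenset((g, rep)), 0.0) >= SPECIES_ANI_CUTOFF:
                home = rep
                break
        if home is None:          # no close representative: start a new cluster
            representatives.append(g)
            clusters[g] = [g]
        else:
            clusters[home].append(g)
    return clusters

# Hypothetical example: three MAGs, two of which form one candidate species.
ani_table = {frozenset(("MAG_A", "MAG_B")): 97.1,
             frozenset(("MAG_A", "MAG_C")): 82.4,
             frozenset(("MAG_B", "MAG_C")): 81.9}
print(cluster_by_ani(["MAG_A", "MAG_B", "MAG_C"], ani_table))
# -> {'MAG_A': ['MAG_A', 'MAG_B'], 'MAG_C': ['MAG_C']}
```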
|
Environment
| 2,020 |
November 9, 2020
|
https://www.sciencedaily.com/releases/2020/11/201109120644.htm
|
Researchers discover bacterial DNA's recipe for success
|
Biomedical engineers at Duke University have developed a new way of modeling how potentially beneficial packages of DNA called plasmids can circulate and accumulate through a complex environment that includes many bacterial species. The work has also allowed the team to develop a new factor dubbed the "persistence potential" that, once measured and computed, can predict whether or not a plasmid will continue to thrive in a given population or gradually fade into oblivion.
|
The researchers hope that their new model will lay the groundwork for others to better model and predict how important traits such as antibiotic resistance in pathogens or metabolic abilities in bacteria bred to clean environmental pollution will spread and grow in a given environment. The results appear online on November 4. In addition to the Darwinian process of handing down genes important for survival from parents to offspring, bacteria also engage in a process called horizontal gene transfer. Bacteria are constantly sharing genetic recipes for new abilities across species by swapping different packages of genetic material called plasmids with one another. "In an examination of just a single bottle of seawater, there were 160 bacterial species swapping 180 different plasmids," said Lingchong You, professor of biomedical engineering at Duke. "Even in a single bottle of water, using current methods to model plasmid mobility would far exceed the collective computing power of the entire world. We've developed a system that simplifies the model while maintaining its ability to accurately predict the eventual results." The potential of any one of these genetic packages to become common throughout a given population or environment, however, is far from certain. It depends on a wide range of variables, such as how quickly the packages are shared, how long the bacteria survive, how beneficial the new DNA is, what the trade-offs are for those benefits and much more. Being able to predict the fate of such a genetic package could help many fields -- perhaps most notably the spread of antibiotic resistance and how to combat it. But the models required to do so in a lifelike scenario are too complicated to solve. "The most complex system we've ever been able to model mathematically is three species of bacteria sharing three plasmids," said You. "And even then, we had to use a computer program just to generate the equations, because otherwise we'd get too confused with the number of terms that were needed." In the new study, You and his graduate student, Teng Wang, created a new framework that greatly reduces the complexity of the model as more species and plasmids are added. In the traditional approach, each population is divided into multiple subpopulations based on which plasmids they're carrying. But in the new system, these subpopulations are instead averaged into a single one. This drastically cuts down on the number of variables, which increases in a linear fashion as new bacteria and plasmids are added rather than exponentially. This new approach enabled the researchers to derive a single governing criterion that allows the prediction of whether or not a plasmid will persist in a given population. It's based on five important variables: the cost to the bacteria of having the new DNA, how often the DNA is lost, how quickly the population is diluted by the flux through it, how quickly the DNA is swapped between bacteria, and how fast the population as a whole is growing. With measurements for these variables in hand, researchers can calculate the population's "plasmid persistence." If that number is greater than one, the genetic package will survive and spread, with higher numbers leading to greater abundance. If less than one, it will fade away into oblivion. "Even though the model is simplified, we've found that it's reasonably accurate under certain constraints," said Wang. 
"As long as the new DNA doesn't place too great of a burden on the bacteria, our new framework will succeed."You and Wang tested their new modeling approach by engineering a handful of different synthetic communities, each with different strains of bacteria and genetic packages for swapping. After running the experiments, they found that the results fit quite well within the expectations of their theoretical framework. And to go the extra mile, the researchers also took data from 13 previously published papers and ran their numbers as well. Those results also supported their new model."The plasmid persistence criterion gives us the hope of using it to guide new applications," said You. "It could help researchers engineer a microbiome by controlling the genetic flow to achieve a certain function. Or it can give us guidance on what factors we can control to eliminate or suppress certain plasmids from bacterial populations, such as those responsible for antibiotic resistance."This research was supported by the National Institutes of Health (R01A1125604, R01GM110494) and the David and Lucile Packard Foundation.
|
Environment
| 2,020 |
November 9, 2020
|
https://www.sciencedaily.com/releases/2020/11/201109120642.htm
|
India's clean fuel transition slowed by belief that firewood is better for well-being
|
India's transition to clean cooking fuels may be hampered by users' belief that using firewood is better for their families' wellbeing than switching to Liquefied Petroleum Gas (LPG), a new study reveals.
|
Women are considered primary family cooks in rural India and those featured in the study feel that both fuels support wellbeing. Understanding these viewpoints helps to explain why India's switch from traditional solid fuels is slower than expected. Those cooks using firewood know it causes health problems, but feel that it contributes more to wellbeing than cooking with LPG would -- although LPG users who previously cooked with firewood claim their new fuel has improved wellbeing. India has more people relying on solid fuels for cooking than any other country in the world and providing universal access to clean cooking fuels has been identified as one of the UN's Sustainable Development Goals (SDGs), to which the country is a signatory. Researchers at the Universities of Birmingham (UK) and Queensland (Australia) conducted focus group discussions with women in four villages in the Chittoor district of Andhra Pradesh. Two villages mostly used firewood whilst the other two comprised mostly of LPG users who had switched from using firewood. The researchers have published their findings today. Firewood users believed that cooking with this fuel improved their financial wellbeing because selling firewood generated income, whilst collecting the fuel gave them an opportunity to socialise and is a tradition they would like to continue. They viewed LPG as a financial burden that gave food an undesirable taste and feared a fatal canister explosion. LPG users told researchers that their fuel allowed them to maintain or improve social status, as well as making it easier to care for children and other family members. Cooking with LPG freed up time which they could use to work outside the home and earn money. They also enjoyed extra leisure time with their family. Study co-author Dr Rosie Day, Senior Lecturer in Environment and Society at the University of Birmingham, commented: "Despite India's aim of switching to clean fuels, the scale of solid fuel use in rural areas signals that widespread uptake and sustained use of clean fuels is a distant reality." "Whilst cooking is not solely a woman's job, the reality is that, in rural India, women are considered the primary cooks. It is, therefore, critical to unravel how women see the relationship between wellbeing and cooking fuel if India is to make progress in transitioning to clean fuels." Researchers suggest that future interventions to promote new fuels should actively involve women who used solid fuels and clean fuels -- opening discussion about the benefits of each and allowing cooks to observe different cooking practices. Interaction programmes could inform firewood users about the positive wellbeing outcomes of LPG, address concerns, and promote learning from each other. The study identifies three key lessons that have important implications for policy makers to consider. First, perceived wellbeing matters as much as health: understanding this helps to explain why people may not be persuaded to switch to cleaner fuels based only on seemingly obvious health benefits. Second, LPG and firewood users share some views, such as that food tastes better cooked on firewood, but LPG users see more advantages in LPG than non-users. Third, in the study villages, women who cook with LPG can enjoy recreation with friends and neighbours, as well as supporting their children's education. 
They can also re-allocate this saved time to doing paid work and choose how to spend the extra income themselves. "We have gained important understanding of women's views in this setting, but further research is needed to analyse the perceived relationship between women's fuel use and multi-dimensional wellbeing in other settings -- this will help to increase our understanding of how social and cultural factors come into play in transition to clean fuels," commented Dr. Day.
|
Environment
| 2,020 |
November 6, 2020
|
https://www.sciencedaily.com/releases/2020/11/201106134322.htm
|
Policy, not tech, spurred Danish dominance in wind energy
|
In emerging renewable energy industries, are producers' decisions to shut down or upgrade aging equipment influenced more by technology improvements or government policies?
|
It's an important long-term question for policymakers seeking to increase renewable electricity production, cost-effectiveness and efficiency with limited budgets, says C.-Y. Cynthia Lin Lawell, associate professor in the Charles H. Dyson School of Applied Economics and Management at Cornell University. In a new study focused on Denmark, a global leader in wind energy -- a relatively mature and low-cost renewable technology -- Lin Lawell found that government policies have been the primary driver of that industry's growth and development. "Technological progress alone wouldn't have led to that widespread development of wind energy in Denmark," said Lin Lawell, the Robert Dyson Sesquicentennial Chair in Environmental, Energy and Resource Economics. "Well-designed policy may be an important contributor for nascent industries like renewables, which need to develop technology and which have broader societal benefits in terms of the environment." Lin Lawell is the co-author with Jonathan Cook, an associate in her DEEP-GREEN-RADAR research group, of "Wind Turbine Shutdowns and Upgrades in Denmark: Timing Decisions and the Impact of Government Policy," published in a recent journal issue. Wind turbines in many countries are approaching the end of their useful lives of roughly 20 years, Cook and Lin Lawell note, making decisions about whether to scrap or upgrade them increasingly relevant. Denmark is ahead of that curve, having promoted wind energy since the oil crisis in the late 1970s. The country produces over 40% of its electricity from wind power and dominates other countries, the authors said, in wind deployment per capita and per gross domestic product. The Danish wind industry is highly decentralized, with 88% of the nearly 3,000 producers included in the 32-year study period from 1980 to 2011 operating no more than two turbines. The researchers built a dynamic structural econometric model that incorporated the capacity, age and location of every turbine operated by small producers during that period. The model's "bottom-up" approach enabled analysis of individual owners' decisions to shut down, upgrade or add turbines over time, and simulated outcomes if government policies had been scaled back or were not implemented. "Understanding the factors that influence individual decisions to invest in wind energy and how different policies can affect the timing of these decisions is important for policies both in countries that already have mature wind industries," the researchers wrote, "as well as in regions of the world that are earlier in the process of increasing renewable electricity generation (e.g. most of the U.S.)." Denmark since the late 1970s has offered a feed-in tariff that guaranteed producers a fixed price per amount of wind energy generated, whether turbines were new or old. Since 1999, replacement certificates have incentivized upgrades. Both policies significantly impacted small producers' shutdown and upgrade decisions and accelerated the development of Denmark's wind industry, the scholars concluded. 
Without them, the model showed most small-scale wind producers would have left the industry by 2011, concentrating production in larger wind farms.However, the analysis determined that replacement certificates were far more cost-effective than the feed-in tariff in encouraging small producers to add or upgrade turbines, helping Denmark reduce its carbon emissions.The study estimated the Danish government spent $3.5 billion on the feed-in tariff program over the study period, and as much as $114 million on the replacement certificates. Together, the two programs reduced carbon emissions by 57.4 million metric tons of carbon dioxide."One was just really expensive at doing it," Lin Lawell said. "Both the cost per metric ton of carbon dioxide avoided, and the cost per percentage point increase in payoff to the turbine owner, are much lower for the replacement certificate program."For every million metric tons of carbon dioxide avoided, the researchers estimated the feed-in tariff cost Danish taxpayers $61.8 million, compared to $2.2 million or less for the replacement certificates.Cook and Lin Lawell said their analysis offers lessons about the role of government policy in incentivizing the development of renewables and about which policies generate the most bang for the buck."Our application to the Danish wind industry," they wrote, "has important implications for the design of renewable energy policies worldwide."
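The cost-effectiveness comparison above can be checked with back-of-envelope arithmetic. The sketch below takes the reported program costs and the 57.4 million metric tons of avoided CO2 from the article; setting each program's spending against that full reduction is an assumption made here for illustration, but it roughly reproduces the per-unit costs the researchers quote.

```python
# Back-of-envelope check of the reported cost-effectiveness figures.
# Assumption: each program's spending is set against the full 57.4 million
# metric tons of CO2 avoided (the article attributes the joint reduction to
# the two policies together), which roughly reproduces the quoted figures.

CO2_AVOIDED_MMT = 57.4                    # million metric tons, from the study

programs = {
    "feed-in tariff":           3.5e9,    # ~$3.5 billion over the study period
    "replacement certificates": 114e6,    # up to ~$114 million
}

for name, cost_usd in programs.items():
    cost_per_mmt = cost_usd / CO2_AVOIDED_MMT
    print(f"{name}: ~${cost_per_mmt/1e6:.1f} million per million metric tons")
# Prints roughly $61 million for the tariff and about $2 million for the
# certificates, in line with the $61.8M and $2.2M figures reported.
```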
|
Environment
| 2,020 |
November 5, 2020
|
https://www.sciencedaily.com/releases/2020/11/201105112956.htm
|
Scientists develop energy-saving 'liquid window'
|
Scientists at the Nanyang Technological University, Singapore (NTU Singapore) have developed a liquid window panel that can simultaneously block the sun to regulate solar transmission, while trapping thermal heat that can be released through the day and night, helping to reduce energy consumption in buildings.
|
The NTU researchers developed their 'smart window' by placing hydrogel-based liquid within glass panels and found that it can reduce up to 45 per cent of energy consumption in buildings in simulations, compared to traditional glass windows. It is also around 30 per cent more energy efficient than commercially available low-emissivity (energy-efficient) glass, while being cheaper to make.The 'smart window' is the first reported instance in a scientific journal of energy-saving smart windows made using liquid, and supports the NTU Smart Campus vision which aims to develop technologically advanced solutions for a sustainable future.Windows are a key component in a building's design, but they are also the least energy-efficient part. Due to the ease with which heat can transfer through glass, windows have a significant impact on heating and cooling costs of a building. According to a 2009 report by the United Nations, buildings account for 40 per cent of global energy usage, and windows are responsible for half of that energy consumption.Conventional energy-saving low-emissivity windows are made with expensive coatings that cut down infrared light passing into or out of a building, thus helping to reduce demand for heating and cooling. However, they do not regulate visible light, which is a major component of sunlight that causes buildings to heat up.To develop a window to overcome these limitations, the NTU researchers turned to water, which absorbs a high amount of heat before it begins to get hot -- a phenomenon known as high specific heat capacity.They created a mixture of micro-hydrogel, water and a stabiliser, and found through experiments and simulations that it can effectively reduce energy consumption in a variety of climates, due to its ability to respond to a change in temperature. Thanks to the hydrogel, the liquid mixture turns opaque when exposed to heat, thus blocking sunlight, and, when cool, returns to its original 'clear' state.At the same time, the high heat capacity of water allows a large amount of thermal energy to be stored instead of getting transferred through the glass and into the building during the hot daytime. The heat will then be gradually cooled and released at night.Dr Long Yi, lead author of the research study published in the journal Joule, and Senior Lecturer at the School of Materials Science & Engineering said, "Our innovation combines the unique properties of both types of materials -- hydrogel and water. By using a hydrogel-based liquid we simplify the fabrication process to pouring the mixture between two glass panels. This gives the window a unique advantage of high uniformity, which means the window can be created in any shape and size."As a result of these features, the NTU research team believes that their innovation is best suited for use in office buildings, where operating hours are mostly in the day.As a proof of concept, the scientists conducted outdoor tests in hot (Singapore, Guangzhou) and cold (Beijing) environments.The Singapore test revealed that the smart liquid window had a lower temperature (50°C) during the hottest time of the day (noon) compared to a normal glass window (84°C). 
The Beijing tests showed that the room using the smart liquid window consumed 11 per cent less energy to maintain the same temperature compared to the room with a normal glass window.The scientists also measured when the highest value of stored thermal energy of the day occurred.This 'temperature peak' in the normal glass window was 12pm, and in the smart liquid window was shifted to 2 pm. If this temperature peak shift is translated to a shift in the time that a building needs to draw on electrical power to cool or warm the building, it should result in lower energy tariff charges for users.Simulations using an actual building model and weather data of four cities (Shanghai, Las Vegas, Riyadh, and Singapore) showed that the smart liquid window had the best energy-saving performance in all four cities when compared to regular glass windows and low emissivity windows.Soundproof tests also suggested that the smart liquid window reduces noise 15 per cent more effectively than double-glazed windows.First author of the study Wang Shancheng, who is Project Officer at the School of Materials Science & Engineering said, "Sound-blocking double glazed windows are made with two pieces of glass which are separated by an air gap. Our window is designed similarly, but in place of air, we fill the gap with the hydrogel-based liquid, which increases the sound insulation between the glass panels, thereby offering additional benefit not commonly found in current energy-saving windows."The other first author, Dr Zhou Yang was a PhD student in NTU and is currently an Associate Professor at China University of Petroleum-Beijing (CUPB).Providing an independent view, Professor Ronggui Yang, of the Huazhong University of Science and Technology, China, a recipient of the 2020 Nukiyama Memorial Award in Thermal Science and Engineering and an expert in thermal and energy systems said, "This is the first instance of a hydrogel-based liquid smart window, and it takes us far from a conventional glass design. The disruptive innovation leads to solar regulation and heat storage, which together render outstanding energy-saving performance."The research team is now looking to collaborate with industry partners to commercialise the smart window.The research is supported by the National Research Foundation, Prime Minister's Office, Singapore, under its Campus for Research Excellence and Technological Enterprise (CREATE) programme, and the Sino-Singapore International Joint Research Institute.
|
Environment
| 2,020 |
November 5, 2020
|
https://www.sciencedaily.com/releases/2020/11/201105112942.htm
|
Eco-engineered tiles enhance marine biodiversity on seawalls
|
A joint-study led by a team of marine ecologists from City University of Hong Kong (CityU) has found that the eco-engineered tiles can increase habitat complexity on seawalls in Hong Kong, thereby effectively enhancing the marine biodiversity. The Hong Kong study is part of a global research project on the relationship between habitat complexity and marine biodiversity on human-built marine structures.
|
Professor Kenneth Leung Mei-yee, who has recently joined CityU and is the new Director of the State Key Laboratory of Marine Pollution (SKLMP) and Chair Professor of Environmental Toxicology & Chemistry, led a team made up of researchers from the University of Hong Kong (HKU) as well as universities in the United Kingdom, Australia and Singapore to conduct the experiment in Hong Kong. Their findings were published recently in a scientific journal. Coastal development and reclamation have caused a global increase in artificial vertical seawalls that protect the shoreline from wave action, erosion and flooding. However, these seawalls do not have the natural complexity of rocky shores and can reach extremely high temperatures when exposed at low tide. This makes them unsuitable for many intertidal marine species, including the filter-feeding oysters that improve water quality. The resulting lack of biodiversity weakens the coastal ecosystem. As part of the international collaboration, the World Harbour Project, the team installed eco-engineered tiles on Hong Kong seawalls. Their experimental results showed that the eco-engineered tiles with crevices hosted more species and a higher number of animals after 12 months than not only bare, scraped seawall but also the flat eco-engineered tiles. Compared with the flat tiles, the tiles with crevices had an increase of 19 to 51% in the number of species, and an increase of 59 to 416% in the number of animals. It demonstrated that by increasing the surface complexity with crevices, the eco-engineered tiles provided shelter and reduced the temperature for animals. In particular, those with crevices of 2.5 cm or 5 cm deep had up to three times the number of species present in the shaded crevices than on the exposed ledges. Species such as snails and limpets preferred the cooler and shaded crevices of the tiles, hence prompting the overall increase in the number of species and individual animals on the tiles. "The promising results from our Hong Kong experiment clearly showed that we can effectively enhance marine biodiversity on seawalls by increasing habitat complexity through eco-engineering. This technology can be applied to all existing seawalls in Hong Kong to promote biodiversity," said Professor Leung. The research team also attached live rock oysters to half of the tiles to test if the oysters could further enhance marine biodiversity. They found that the tiles with crevices and seeded with oysters performed the best, with an increase of 38 to 76% in the number of species and an increase of 120 to 571% in the number of animals, when compared with flat and unseeded tiles. The attached oysters survived well during the 12 months, and some oysters provided food for predators, promoting a healthy ecosystem. The tiles with oysters attracted growth of new oysters during the experiment, and the juvenile oysters also preferred the crevices of the complex tiles to the flat tiles. Supported by the Civil Engineering and Development Department, Professor Leung is running another trial of various eco-engineered fixtures (i.e., tiles, panels, tidal pools, armouring units and oyster baskets) on vertical and sloping seawalls in Ma Liu Shui, Sai Kung and Tuen Mun. 
He believes there is great potential in applying the eco-engineering technology to help mitigate the negative effects of artificial seawalls not just in Hong Kong, but also internationally.Apart from Hong Kong, the same eco-engineered tiles were deployed in another 13 locations around the world, including London, Sydney and San Francisco, for 12 months under the World Harbour Project.It was found that the increased complexity consistently enhanced the biodiversity of marine invertebrates on the experimental tiles across all locations, despite some variation. The effects of habitat complexity on total species richness and mobile mollusc (soft-bodied invertebrates like gastropods and limpets) abundance were the greatest at lower or tropical latitudes, while the cover of sessile (non-mobile animals that live attached to a surface) invertebrates, such as oysters, barnacles and mussels, responded more strongly to complexity at subtropical or higher latitudes.They also found that the magnitude of the complexity effects on the colonization of individual organisms varied spatially according to factors like tidal elevation and latitude, from strongly positive to negligible, or in a few cases, negative. "This suggests that in order to maximize the benefits of eco-engineering, we still need to have a more thorough understanding of how the complexity effects are shaped by the site-specific environmental and biological factors," added Professor Leung.
|
Environment
| 2,020 |
November 4, 2020
|
https://www.sciencedaily.com/releases/2020/11/201104194655.htm
|
Herbicide: Hydrogen bonds may be key to airborne dicamba
|
Dicamba has been the subject of lawsuits across the country, with landowners contending the herbicide, when used by neighboring growers, has blown onto their property, killing valuable non-resistant crops.
|
Dicamba is sprayed in a formulation that contains an amine, a chemical agent that is supposed to keep the herbicide in place, preventing it from going airborne. Ongoing reports of crop damage despite these measures have previously shown, however, that it may not be working as it should, particularly when the dicamba/amine formulation is sprayed with the most commonly used herbicide in the world, glyphosate, the main component of Roundup. Washington University in St. Louis researchers in the lab of Kimberly Parker, assistant professor in the Department of Energy, Environmental & Chemical Engineering in the McKelvey School of Engineering, have proposed a mechanism that describes how dicamba volatility is controlled by amines. The finding was published in October. The factors that result in dicamba volatilizing -- becoming airborne -- have been investigated before in scientific studies conducted on fields and in greenhouses, where researchers measured how much dicamba transformed into a gas that could be detected in the air or assessed through damage to plants. But there remained major gaps when it came to understanding the molecular processes at work, so Parker's lab set out to fill them. "We decided to approach it from a unique direction," said lead author Stephen Sharkey, a PhD student in the Parker Lab. "We wanted to try to get into the chemistry behind the volatility process." He started by considering the interactions of molecules in the solid phase of the dicamba/amine formulation. There are three amines that are typically used in commercial dicamba formulations. Sharkey considered those three commonly used amines as well as six others to get a better, more generally applicable understanding of their properties and their impacts on dicamba volatilization. How are the amines interacting with dicamba, and can this information be used to discover why dicamba is still volatilizing? Parker said that there are a couple of common assumptions about what is happening between amines and dicamba: the heavier amine acts like an anchor, thus weighing down the herbicide, or volatilization is determined by pH levels. Sharkey's research showed something different. In regard to the three most-used amines, he said, "the ones that work best have more hydrogen bonding functional groups." He went on to find the same results in the six additional amines. The researchers also looked at how other molecules may impact these interactions. "We found glyphosate increased volatility in two of the three main amines," Sharkey said. "One way dicamba products may be used is alongside glyphosate as a way of killing many different weeds," including those resistant to glyphosate and/or those resistant to dicamba. The research team believes it may be the case that glyphosate, which has lots of places where it can form hydrogen bonds, may be interfering with dicamba's ability to form bonds with the amines. In essence, glyphosate may be driving a chemical wedge between the two by forming its own bonds to the dicamba or amine molecules. None of the other potential factors they tested had as reliable or consistent an effect on volatility as the number of hydrogen bonding sites on the amine. The team tested several different variables, including temperature, reducing the amine concentration relative to dicamba, amine acidity, amine vapor pressure, amine molecular weight, solution pH values and the presence of glyphosate. "We showed those were not primary determinants," Parker said. "Hydrogen bonding seemed to be the primary factor. 
If the amine has more hydrogen bonding functional groups, dicamba volatility is decreased compared to other amine formulations." Going forward, this better understanding of how dicamba and amines interact identifies a specific characteristic that can be modified to improve a formula's ability to remain on a crop and away from surrounding fields. It also points to the benefits of studying herbicides in the lab in addition to the work done by other researchers in the field. That's something Parker and her team have been doing and will continue to do. As for next steps, Sharkey's latest work is a look at how the introduction of more tolerant crops affects the usage of herbicides. Parker said she'd like an expanded understanding of the effects of more complex chemistries on dicamba volatility. "What about other chemicals on a leaf's surface, for example?" she asked. "How might those further affect volatilization?"
|
Environment
| 2,020 |
November 4, 2020
|
https://www.sciencedaily.com/releases/2020/11/201104114736.htm
|
Luminescent wood could light up homes of the future
|
The right indoor lighting can help set the mood, from a soft romantic glow to bright, stimulating colors. But some materials used for lighting, such as plastics, are not eco-friendly. Now, researchers report a luminescent wood film that could offer an eco-friendly alternative.
|
Consumer demand for eco-friendly, renewable materials has driven researchers to investigate wood-based thin films for optical applications. However, many materials developed so far have drawbacks, such as poor mechanical properties, uneven lighting, a lack of water resistance or the need for a petroleum-based polymer matrix. Qiliang Fu, Ingo Burgert and colleagues wanted to develop a luminescent wood film that could overcome these limitations.The researchers treated balsa wood with a solution to remove lignin and about half of the hemicelluloses, leaving behind a porous scaffold. The team then infused the delignified wood with a solution containing quantum dots -- semiconductor nanoparticles that glow in a particular color when struck by ultraviolet (UV) light. After compressing and drying, the researchers applied a hydrophobic coating. The result was a dense, water-resistant wood film with excellent mechanical properties. Under UV light, the quantum dots in the wood emitted and scattered an orange light that spread evenly throughout the film's surface. The team demonstrated the ability of a luminescent panel to light up the interior of a toy house. Different types of quantum dots could be incorporated into the wood film to create various colors of lighting products, the researchers say.
|
Environment
| 2,020 |
November 4, 2020
|
https://www.sciencedaily.com/releases/2020/11/201104114726.htm
|
Monitoring open-cast mines better than before
|
When it comes to safety in open-cast mining, soil stability is one of the most critical factors. Settlement of the ground or slipping of slopes poses a great risk to buildings and people. Now Mahdi Motagh from the German Research Centre for Geosciences GFZ, in cooperation with Chinese scientists, has evaluated data from the Sentinel 1 mission of the European Union's Copernicus program and thus demonstrated new possibilities for monitoring mining areas. The three researchers used a special radar method, Synthetic Aperture Radar Interferometry (InSAR), to investigate lignite regions in North Rhine-Westphalia in Germany. They reported on this in a recent journal article.
|
The InSAR method in itself is not new and is used in many places to detect ground deformations, whether after earthquakes or subsidence due to the overexploitation of underground water reservoirs. However, it had one decisive disadvantage: InSAR satellites such as ERS or ENVISAT only record a certain region on average once a month or less. "With its six-day repeat time interval and small orbital tube, the Sentinel 1 mission provides SAR data that help us to investigate hazards in very specific mining areas in Germany in much greater detail in terms of time and space than before," reports Mahdi Motagh, "and we can do this in near real time." The mission is also able to provide a comprehensive overview of the situation in the mining industry. By combining the results of this new technology with other on-site measurements and high-resolution SAR systems such as the German TerraSAR-X, the geotechnical risk of open-cast mines could be assessed far more completely than before. The work shows that there is significant land subsidence in the open-cast mining areas of Hambach, Garzweiler and Inden. The reason for this is the compaction of overburden over refilled areas, with subsidence rates varying between 30 and 50 centimeters per year over Inden, Hambach and Garzweiler. Satellite data also showed a significant horizontal shift of up to 12 centimeters per year at one mine face. The former open pits Fortuna-Garsdorf and Bergheim in the eastern part of the Rhenish coal fields, which have already been reclaimed for agriculture, also show subsidence rates of up to 10 centimeters per year.
|
Environment
| 2,020 |
November 4, 2020
|
https://www.sciencedaily.com/releases/2020/11/201104102215.htm
|
Scientists generate realistic storm turbulence in the lab
|
Strong storms often seem to leave behind random destruction: While the roof tiles of one house are blown away, the neighboring property may not be damaged at all. What causes these differences are wind gusts -- or, as physicists say, local turbulence. It results from large-scale atmospheric flows, but up to now, it is impossible to predict it in great detail.
|
Experts from the University of Oldenburg and the Université de Lyon have now paved the way for studying small-scale turbulence: The team led by Oldenburg physicist Prof. Dr. Joachim Peinke succeeded in generating turbulent flows in a wind tunnel. The flows resembled those occurring in big gales. The team has found a way to literally cut a slice out of a storm, the researchers report. The most important parameter characterising the turbulence of a flow is the so-called Reynolds number: This physical quantity describes the ratio of inertial forces to frictional (viscous) forces in a flowing medium. In simple terms, you can say: The greater the Reynolds number, the more turbulent the flow. One of the greatest mysteries of turbulence is its statistics: Extreme events such as strong, sudden wind gusts occur more frequently if you look at smaller scales. "The turbulent eddies of a flow become more severe on smaller scales," explains Peinke, who heads the research group Turbulence, Wind Energy and Stochastics. In a strong storm -- that is, when the Reynolds number is high -- a fly is therefore affected by much gustier flow conditions than, say, an airplane. The specific reasons for this are not well known: the physical equations describing fluids are not yet solved when it comes to turbulence. This task is one of the famous millennium problems of mathematics, for each of which the Clay Mathematics Institute in the U.S. has offered a prize of one million dollars. In the large wind tunnel of the Center for Wind Energy Research (ForWind), the Oldenburg team has now succeeded in generating more turbulent wind conditions than ever before. Compared to previous experiments, the researchers increased the Reynolds number a hundred times and thus simulated conditions similar to those encountered in a real storm. "We do not yet see an upper limit," says Peinke. "The turbulence generated is already very close to reality." The Oldenburg wind tunnel has a 30-meter-long test section. Four fans can generate wind speeds of up to 150 kilometers per hour, which corresponds to a category 1 hurricane. To create turbulent air flow, the researchers use a so-called active grid, which was developed for the special requirements in the large Oldenburg wind tunnel. The structure, three by three meters in size, is located at the beginning of the wind tunnel and consists of almost a thousand small, diamond-shaped aluminum wings. The metal plates are movable. They can be rotated in two directions via 80 horizontal and vertical shafts. This allows the wind researchers to selectively block and reopen small areas of the wind tunnel nozzle for a short time, causing air to be swirled. "With the active grid -- the largest of its kind in the world -- we can generate many different turbulent wind fields in the wind tunnel," explains Lars Neuhaus, who is also a member of the team and played a key role in this study. For the experiments, the team varied the movement of the grid in a chaotic manner similar to the conditions occurring in turbulent air flow. They also changed the power of the fans irregularly. Thus, in addition to small-scale turbulence, the air flow generated a larger movement in the longitudinal direction of the wind tunnel. "Our main finding is that the wind tunnel flow combines these two components into perfect, realistic storm turbulence," explains co-author Dr. Michael Hölling. The physicist also chairs the international Wind Tunnel Testing Committee of the European Academy of Wind Energy (EAWE). 
This storm turbulence emerged 10 to 20 meters behind the active grid."By adjusting the grid and the fans of the wind tunnel, we have generated a large-scale turbulence about ten to one hundred metres in size. At the same time, a small-scale turbulence with dimensions of a few meters and less appeared spontaneously. However, we still don't know exactly why," Hölling explains. As he and his colleagues report, this new approach makes it possible to scale down atmospheric turbulence relevant to wind turbines, aircraft or houses to a size of one meter in the wind tunnel. This will allow researchers to conduct realistic experiments with miniaturized models in the future -- in which extreme gusts occur just as frequently as in real storms.
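To give a sense of scale for the Reynolds number discussed above, the sketch below plugs in the figures the article provides (a top speed of 150 km/h and the three-meter active grid). Taking the grid dimension as the characteristic length and using a typical kinematic viscosity of air are assumptions made here purely for illustration; the study itself does not report these exact values.

```python
# Rough Reynolds number estimate for the Oldenburg wind tunnel, using
# Re = u * L / nu (flow speed times characteristic length over kinematic
# viscosity).  The 150 km/h top speed and the 3 m active grid come from the
# article; the choice of characteristic length and the air viscosity value
# are assumptions for illustration only.

u = 150 / 3.6          # flow speed in m/s (150 km/h)
L = 3.0                # characteristic length in m (active-grid dimension)
nu = 1.5e-5            # kinematic viscosity of air at ~20 C, in m^2/s

reynolds = u * L / nu
print(f"Re ~ {reynolds:.1e}")   # roughly 8e6: far into the turbulent regime
```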
|
Environment
| 2,020 |
November 4, 2020
|
https://www.sciencedaily.com/releases/2020/11/201104102209.htm
|
Ants are skilled farmers: They have solved a problem that we humans have yet to
|
Fungus-farming ants are an insect lineage that relies on farmed fungus for their survival. In return for tending to their fungal crops -- protecting them against pests and pathogens, providing them with stable growth conditions in underground nests, and provisioning them with nutritional 'fertilizers' -- the ants gain a stable food supply.
|
These fungus farming systems are an expression of striking collective organization honed over 60 million years of fungus crop domestication. The farming systems of humans thus pale in comparison, since they emerged only ca. 10,000 years ago. A new study from the University of Copenhagen, and funded by an ERC Starting Grant, demonstrates that these ants might be one up on us as far as farming skills go. Long ago, they appear to have overcome key domestication challenges that we have yet to solve. "Ants have managed to retain a farming lifestyle across 60 million years of climate change, and Leafcutter ants appear able to grow a single cultivar species across diverse habitats, from grasslands to tropical rainforest," explains Jonathan Z. Shik, one of the study's authors and an assistant professor at the University of Copenhagen's Department of Biology. Through fieldwork in the rainforests of Panama, he and researchers from the Smithsonian Tropical Research Institute studied how fungus-farming ants use nutrition to manage a tradeoff between the cultivar's increasingly specialized production benefits and its rising vulnerability to environmental variation. We humans have bred certain characteristics -- whether a taste or texture -- into our crops. But these benefits of crop domestication can also result in greater sensitivity to environmental threats from weather and pests, requiring increasing pesticide use and irrigation. Simply put, we weaken plants in exchange for the right taste and yield. Jonathan Z. Shik explains: "The ants appear to have faced a similar yield-vulnerability tradeoff as their crops became more specialized, but have also evolved plenty of clever ways to persist over millions of years. For example, they became impressive architects, often excavating sophisticated and climate-controlled subterranean growth chambers where they can protect their fungus from the elements," he says. Furthermore, these little creatures also appear able to carefully regulate the nutrients used to grow their crops. To study how, Shik and his team spent over a hundred hours lying on trash bags on the rainforest floor next to ant nests. Armed only with forceps, they stole tiny pieces of leaves and other substrates from the jaws of ants as they returned from foraging trips. They did this while snakes slithered through the leaf litter and monkeys peered down at them from the treetops. "For instance, our nutritional analyses of the plant substrates foraged by leafcutter ants show that they collect leaves, fruit, and flowers from hundreds of different rainforest trees. These plant substrates contain a rich blend of protein, carbohydrates and other nutrients such as sodium, zinc and magnesium," explains Shik. "This nutritional blend can target the specific nutritional requirements of their fungal crop." Over the years, the ants have adapted their leaf collecting to the needs of the fungus -- a kind of organic farming, without the benefits of the technological advances that have helped human farmers over the millennia, one might say. One might wonder, is it possible to simply copy their ingenious methods? "Because our plant crops require sunlight and must thus be grown above ground, we can't directly transfer the ants' methods to our own agricultural practices. But it's interesting that at some point in history, both humans and ants have gone from being hunter-gatherers to discovering the advantages of cultivation. 
It will be fascinating to see what farming systems of humans look like in 60 million years," concludes Jonathan Z. Shik.
|
Environment
| 2,020 |
November 4, 2020
|
https://www.sciencedaily.com/releases/2020/11/201104102207.htm
|
Large-scale study: Congolese fishermen report decline in fish stocks on Lake Tanganyika
|
Fishermen working on Lake Tanganyika in eastern Congo experience a lack of safety and want better enforcement of existing regulations. They also report a decline in the lake's fish stocks. These are some of the findings of a large international study led by KU Leuven (Belgium) based on 1,018 interviews with stakeholders in the area. The study was published recently.
|
Lake Tanganyika is the second largest freshwater lake in the world and is located in four countries: the Democratic Republic of the Congo, Tanzania, Burundi, and Zambia. The fisheries on the lake play a vital role in providing food to eastern Congo, one of the poorest regions in the world. Exploitation of natural resources and pollution put the fish stocks under pressure. Researchers now show that fishermen experience a lack of safety on the lake and there is a reported decline in fishery yield. Fishermen and other stakeholders ask for protection, access to safety gear, and better enforcement of existing fisheries legislation. The team consisted of researchers from, among others, the Royal Belgian Institute of Natural Sciences, the Belgian Royal Museum of Central Africa and the Congolese Centre de Recherche en Hydrobiologie (CRH). Fishermen working on Lake Tanganyika mainly catch three species: perch and two species of sardines. The fish are sold on the beaches to saleswomen, who then sell them on markets. Regulation of fishing is limited. Government employees collect catch statistics on the beaches and check if any illegal materials, such as beach seines or mosquito nets, are being used. To ensure that fish stocks are not depleted, good management is indispensable. However, the current regulations are four decades old, and no longer relevant for the situation today, since the number of fishermen has increased exponentially. In order to adapt the regulations to the needs of fisheries stakeholders, it is important to have insight into their opinions and views. The researchers conducted and analysed 1,018 interviews with fishermen, salespeople, government employees and other stakeholders of the fisheries. Fishermen, salespeople, and officials indicate that the catches of the three target species are decreasing and that fish are also getting smaller over time. These could be indications of overfishing of the system. "However, the study participants did not attribute this to overfishing or overpopulation," says Maarten Van Steenberge, who coordinated the study. As a result, fishermen and others are not in favor of stricter regulations, as they do not see the benefit. A stricter fisheries policy has little chance of succeeding if the local population does not support it. "We do notice that although the fishermen are not open to tightening the regulations, they do demand a stricter application of the existing rules, for example to prevent unfair competition from fishermen who fish with illegal materials," says Pascal Masilya Mulungula, researcher at CRH in Uvira, Congo. Illegal fishing practices, like catching juvenile fish near the shore with mosquito nets, have detrimental effects on the stocks. However, these illegal fishing practices are mostly carried out by impoverished women, who lack any other source of food and income and need this income to support their families. Strict application of the rules, which forbid this type of fishing, should thus go together with alternative sources of income for these people. The interviews with fishermen also revealed that their main concern is a lack of security. "Fishermen report dangerous conditions on the lake, such as high waves and strong winds. They are also regularly attacked by gangs or extorted by soldiers or security officers," says Els De Keyzer, who carried out the research. 
The fishermen ask for more security equipment, such as lifejackets, and strict action against gangs and corruption. This was the first time that stakeholders of the Lake Tanganyika fisheries were interviewed about these issues on such a big scale, providing unique insight into the problems and needs related to these fisheries. The authors handed over their findings to the local administration in a policy brief. "We hope that this study will be used as a starting point for policymakers who want to adapt the regulations to current conditions. Since the stocks are shared between four countries, future research should focus on how much willingness there is for collaboration between stakeholders around the lake," concludes De Keyzer.
|
Environment
| 2,020 |
November 3, 2020
|
https://www.sciencedaily.com/releases/2020/11/201103191004.htm
|
Leaf-cutter bees as plastic recyclers? Not a good idea, say scientists
|
Plastic has become ubiquitous in modern life and its accumulation as waste in the environment is sounding warning bells for the health of humans and wildlife. In a recent study, Utah State University scientist Janice Brahney cited alarming amounts of microplastics in the nation's national parks and wilderness areas.
|
Bioengineers around the world are working to develop plastic-eating "super" enzymes that can break down the human-made material's molecular structure faster to aid recycling efforts. In another research effort published in 2019, entomologists noted leaf-cutter bees were using plastic waste to construct their nests. The researchers suggested such behavior could be an "ecologically adaptive trait" and a beneficial recycling effort. Not so fast, says USU evolutionary ecologist Joseph Wilson. Just because bees can use plastic doesn't mean they should. Wilson and undergraduate researcher Sussy Jones, along with colleagues Scott McCleve, a naturalist and retired math teacher in Douglas, Arizona, and USU alum and New Mexico-based independent scientist Olivia Carril '00, MS'06, jointly authored an observational paper in the Oct. 9, 2020 issue of "Leaf-cutter bees are among the most recognizable of solitary bees, because of their habit of cutting circles out of leaves to build their cylindrical nests," says Wilson, associate professor of biology at USU-Tooele. "We've heard reports of these bees using plastic, especially plastic flagging, which is used primarily in construction and agriculture, and we decided to investigate." The researchers don't yet know how widespread the use of plastic by leaf-cutter bees is, and they also know little about plastic's effects on the insects. "Building from plastic could change the dynamics and environment of the bee's nest cells, because plastic doesn't breathe like natural materials," says Wilson, who produced a video about the phenomenon. "In the 1970s, some researchers let leaf-cutter bees nest in plastic straws and found ninety percent of the bees' offspring died because of fungal growth. The plastic sealed in the moisture and didn't allow gas exchange." To deter bees' use of flagging, Wilson suggests the use of fabric ribbons made from natural fibers. "These materials are biodegradable and, if used by bees, will likely avoid the harmful moisture-capturing effects of plastic," he says.
|
Environment
| 2,020 |
November 3, 2020
|
https://www.sciencedaily.com/releases/2020/11/201103172609.htm
|
Desalination: Industrial-strength brine, meet your kryptonite
|
A thin coating of the 2D nanomaterial hexagonal boron nitride is the key ingredient in a cost-effective technology developed by Rice University engineers for desalinating industrial-strength brine.
|
More than 1.8 billion people live in countries where fresh water is scarce. In many arid regions, seawater or salty groundwater is plentiful but costly to desalinate. In addition, many industries pay high disposal costs for wastewater with high salt concentrations that cannot be treated using conventional technologies. Reverse osmosis, the most common desalination technology, requires greater and greater pressure as the salt content of water increases and cannot be used to treat water that is extremely salty, or hypersaline.Hypersaline water, which can contain 10 times more salt than seawater, is an increasingly important challenge for many industries. Some oil and gas wells produce it in large volumes, for example, and it is a byproduct of many desalination technologies that produce both freshwater and concentrated brine. Increasing water consciousness across all industries is also a driver, said Rice's Qilin Li, co-corresponding author of a study about Rice's desalination technology published in "It's not just the oil industry," said Li, co-director of the Rice-based Nanotechnology Enabled Water Treatment Center (NEWT). "Industrial processes, in general, produce salty wastewater because the trend is to reuse water. Many industries are trying to have 'closed loop' water systems. Each time you recover freshwater, the salt in it becomes more concentrated. Eventually the wastewater becomes hypersaline and you either have to desalinate it or pay to dispose of it."Conventional technology to desalinate hypersaline water has high capital costs and requires extensive infrastructure. NEWT, a National Science Foundation (NSF) Engineering Research Center (ERC) headquartered at Rice's Brown School of Engineering, is using the latest advances in nanotechnology and materials science to create decentralized, fit-for-purpose technologies for treating drinking water and industrial wastewater more efficiently.One of NEWT's technologies is an off-grid desalination system that uses solar energy and a process called membrane distillation. When the brine is flowed across one side of a porous membrane, it is heated up at the membrane surface by a photothermal coating that absorbs sunlight and generates heat. When cold freshwater is flowed across the other side of the membrane, the difference in temperature creates a pressure gradient that drives water vapor through the membrane from the hot to the cold side, leaving salts and other nonvolatile contaminants behind.A large difference in temperature on each side of the membrane is the key to membrane desalination efficiency. In NEWT's solar-powered version of the technology, light-activated nanoparticles attached to the membrane capture all the necessary energy from the sun, resulting in high energy efficiency. Li is working with a NEWT industrial partner to develop a version of the technology that can be deployed for humanitarian purposes. But unconcentrated solar power alone isn't sufficient for high-rate desalination of hypersaline brine, she said."The energy intensity is limited with ambient solar energy," said Li, a professor of civil and environmental engineering. "The energy input is only one kilowatt per meter square, and the production rate of water is slow for large-scale systems."Adding heat to the membrane surface can produce exponential improvements in the volume of freshwater that each square foot of membrane can produce each minute, a measure known as flux. But saltwater is highly corrosive, and it becomes more corrosive when heated. 
Traditional metallic heating elements get destroyed quickly, and many nonmetallic alternatives fare little better or have insufficient conductivity."We were really looking for a material that would be highly electrically conductive and also support large current density without being corroded in this highly salty water," Li said.The solution came from study co-authors Jun Lou and Pulickel Ajayan in Rice's Department of Materials Science and NanoEngineering (MSNE). Lou, Ajayan and NEWT postdoctoral researchers and study co-lead authors Kuichang Zuo and Weipeng Wang, and study co-author and graduate student Shuai Jia developed a process for coating a fine stainless steel mesh with a thin film of hexagonal boron nitride (hBN).Boron nitride's combination of chemical resistance and thermal conductivity has made its ceramic form a prized asset in high-temperature equipment, but hBN, the atom-thick 2D form of the material, is typically grown on flat surfaces."This is the first time this beautiful hBN coating has been grown on an irregular, porous surface," Li said. "It's a challenge, because anywhere you have a defect in the hBN coating, you will start to have corrosion."Jia and Wang used a modified chemical vapor deposition (CVD) technique to grow dozens of layers of hBN on a nontreated, commercially available stainless steel mesh. The technique extended previous Rice research into the growth of 2D materials on curved surfaces, which was supported by the Center for Atomically Thin Multifunctional Coatings, or ATOMIC. The ATOMIC Center is also hosted by Rice and supported by the NSF's Industry/University Cooperative Research Program.The researchers showed that the wire mesh coating, which was only about one 10-millionth of a meter thick, was sufficient to encase the interwoven wires and protect them from the corrosive forces of hypersaline water. The coated wire mesh heating element was attached to a commercially available polyvinylidene difluoride membrane that was rolled into a spiral-wound module, a space-saving form used in many commercial filters.In tests, researchers powered the heating element with voltage at a household frequency of 50 hertz and power densities as high as 50 kilowatts per square meter. At maximum power, the system produced a flux of more than 42 kilograms of water per square meter of membrane per hour -- more than 10 times greater than ambient solar membrane distillation technologies -- at an energy efficiency much higher than existing membrane distillation technologies.Li said the team is looking for an industry partner to scale up the CVD coating process and produce a larger prototype for small-scale field tests."We're ready to pursue some commercial applications," she said. "Scaling up from the lab-scale process to a large 2D CVD sheet will require external support."NEWT is a multidisciplinary engineering research center launched in 2015 by Rice, Yale University, Arizona State University and the University of Texas at El Paso that was recently awarded a five-year renewal grant for $16.5 million by the National Science Foundation. NEWT works with industry and government partners to produce transformational technology and train engineers who are ready to lead the global economy.Ajayan is Rice's Benjamin M. and Mary Greenwood Anderson Professor in Engineering, MSNE department chair and a professor of chemistry. Lou is a professor and associate department chair in MSNE and a professor of chemistry.
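A quick back-of-the-envelope check (not part of the study, and assuming a latent heat of vaporization of roughly 2.26 megajoules per kilogram with no heat recovery or sensible-heat losses) shows how the reported figures fit together: evaporating the stated flux of 42 kilograms per square meter per hour demands on the order of half of the 50 kilowatts per square meter supplied, consistent with the claim of high energy efficiency.

# Rough, illustrative energy balance for the reported membrane distillation figures.
# Assumes a latent heat of ~2.26 MJ/kg and neglects sensible heating and heat recovery.
LATENT_HEAT_J_PER_KG = 2.26e6      # approximate latent heat of vaporization of water
flux_kg_per_m2_h = 42.0            # reported freshwater flux
power_in_W_per_m2 = 50e3           # reported power density (50 kW per square meter)

evaporation_W_per_m2 = flux_kg_per_m2_h * LATENT_HEAT_J_PER_KG / 3600.0
print(f"Evaporation demand: {evaporation_W_per_m2 / 1000:.1f} kW/m^2")               # ~26.4
print(f"Implied thermal efficiency: {evaporation_W_per_m2 / power_in_W_per_m2:.0%}")  # ~53%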
|
Environment
| 2,020 |
November 3, 2020
|
https://www.sciencedaily.com/releases/2020/11/201103115652.htm
|
The cement for coral reefs
|
Coral reefs are hotspots of biodiversity. As they can withstand heavy storms, they offer many species a safe home, and at the same time, they protect densely populated coastal regions as they level out storm-driven waves. However, how can these reefs that are made up of often very fragile coral be so stable? A team of researchers from Friedrich-Alexander Universität Erlangen-Nürnberg (FAU) and the University of Bayreuth have now discovered that a very specific type of 'cement' is responsible for this -- by forming a hard calcareous skeleton, coralline red algae stabilise the reefs, and have been doing so for at least 150 million years.
|
The wide variety of life they support is immediately apparent on images of tropical coral reefs. Their three-dimensional scaffolding provides a habitat for a large number of species. However, the skeletons of the coral are often so fragile that they would not be able to withstand heavy storms by themselves. Although scientists have long suspected that coralline red algae provide support to reefs with their calcareous skeletons, this is the first time that this link has been proven. The researchers from FAU and the University of Bayreuth were able to prove this supporting function by analysing more than 700 fossilised reefs from 150 million years of the Earth's history. 'The coralline red algae form a calcareous skeleton and cement the coral reefs together,' explains Dr. Sebastian Teichert from the Chair of Palaeoenvironmental Research at FAU. 'However, several crises over the course of millions of years have limited their capacity to do so.' These crises include the evolution of plant-grazing marine animals such as sea urchins and parrotfish, which have repeatedly decimated populations of coralline red algae over the course of time. The algae, however, developed defence mechanisms such as special growth forms in order to defend themselves against their attackers. 'The algae have adapted so well that they now even benefit from these plant grazers,' says Teichert. 'They rid the coralline red algae of damaging growth such as green algae, allowing it to grow unhindered.' This means coralline red algae are more successful at supporting coral reefs today than ever before in the Earth's history. The extent to which climate change affects the supporting role of coralline red algae is not yet known. Any deterioration to their living conditions would not only affect the coral and other inhabitants of reefs, but also humans, as coral reefs level out storm-driven waves and make an important contribution to coastal protection. They also provide a nursery habitat for many fish and shellfish, which are an important source of food.
|
Environment
| 2,020 |
November 3, 2020
|
https://www.sciencedaily.com/releases/2020/11/201103112526.htm
|
Drones that patrol forests could monitor environmental and ecological changes
|
Sensors for forest monitoring are already used to track changes in temperature, humidity and light, as well as the movements of animals and insects through their habitat. They also help to detect and monitor forest fires and can provide valuable data on how climate change and other human activities are impacting the natural world.
|
However, placing these sensors can prove difficult in large, tall forests, and climbing trees to place them poses its own risks.Now, researchers at Imperial College London's Aerial Robotics Laboratory have developed drones that can shoot sensor-containing darts onto trees several metres away in cluttered environments like forests. The drones can also place sensors through contact or by perching on tree branches.The researchers hope the drones will be used in future to create networks of sensors to boost data on forest ecosystems, and to track hard-to-navigate biomes like the Amazon rainforest.Lead researcher Professor Mirko Kovac, director of the Aerial Robotics Lab from the Department of Aeronautics at Imperial said: "Monitoring forest ecosystems can be difficult, but our drones could deploy whole networks of sensors to boost the amount and precision of environmental and ecological data."I like to think of them as artificial forest inhabitants who will soon watch over the ecosystem and provide the data we need to protect the environment."The drones are equipped with cameras to help identify suitable targets, and a smart material that changes shape when heated to launch the darts, which then stick to the trees. They can also perch on tree branches like birds to collect data themselves, acting as mobile sensors.The researchers have tested their drones at the Swiss Federal Laboratories for Materials Science and Technology (EMPA) and on trees at Imperial's Silwood Park Campus.The drones are currently controlled by people: using control units, the researchers watch through the camera lens to select target trees and shoot the darts. The next step is to make the drones autonomous, so that researchers can test how they fare in denser forest environments without human guidance.Co-author Andre Farhina, of the Department of Aeronautics, said: "There are plenty of challenges to be addressed before the drones can be regularly used in forests, like achieving a careful balance between human input and automated tasks so that they can be used safely while remaining adaptable to unpredictable environments."Co-author Dr Salua Hamaza, also of the Department of Aeronautics, said: "We aim to introduce new design and control strategies to allow drones to effectively operate in forested environments. Exploiting smart mechanisms and new sensing techniques we can off-load the on-board computation, and create platforms that are energy-efficient and better performing."
|
Environment
| 2,020 |
November 3, 2020
|
https://www.sciencedaily.com/releases/2020/11/201103104727.htm
|
From nitrate crisis to phosphate crisis?
|
The aim of the EU Nitrates Directive is to reduce nitrates leaking into the environment in order to prevent pollution of water supplies. The widely accepted view is that this will also help protect threatened plant species which can be damaged by high levels of nutrients like nitrates in the soil and water. However, an international team of researchers including the Universities of Göttingen, Utrecht and Zurich, has discovered that many threatened plant species will actually suffer because of this policy. The results were published in
|
Nitrogen, in the form of nitrates, is an important nutrient for plant species. However, an overabundance can harm plant biodiversity: plant species that thrive on high levels of nitrates can displace other species adapted to low levels. "Despite this, it is not enough simply to reduce the level of nitrates," says co-author Julian Schrader, a researcher in the Biodiversity, Macroecology and Biogeography Group at the University of Göttingen. "Such a policy can even backfire and work against the protection of threatened plant species if other nutrients are not taken into account." In addition to nitrogen, plants also need phosphorus and potassium to grow. The researchers discovered that the ratio of these nutrients in the soil is important. They showed that when the concentration of nitrogen in the soil is reduced without simultaneously reducing the concentration of phosphates, plant species that are already threatened could disappear. "Many threatened plant species in Europe are found in places where phosphate concentrations are low," Schrader explained. If nitrogen concentrations decrease as a result of effective environmental policies, then the relative concentration of phosphorus increases. This means that threatened species come under even more pressure. Threatened species are particularly sensitive to changes in nutrient concentrations and should, according to the researchers, be better protected. The results of this research have significant consequences for the current EU Nitrate Directive. The authors advocate the introduction of an EU Phosphate Directive in addition to the existing EU Nitrate Directive.
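The core of the argument is simple ratio arithmetic. The numbers below are purely illustrative (they are not taken from the study); they only show how halving nitrogen inputs while leaving phosphate untouched shifts the nitrogen-to-phosphorus balance, so that phosphorus becomes relatively more abundant even though its absolute level never changed.

# Illustrative, made-up soil nutrient concentrations in arbitrary units.
nitrogen_before, phosphorus_before = 16.0, 1.0   # a phosphorus-poor site, N:P = 16
nitrogen_after = nitrogen_before * 0.5           # nitrate policy halves nitrogen
phosphorus_after = phosphorus_before             # phosphate inputs left unchanged

print("N:P before policy:", nitrogen_before / phosphorus_before)  # 16.0
print("N:P after policy: ", nitrogen_after / phosphorus_after)    # 8.0
# Relative to nitrogen, phosphorus has effectively doubled -- the shift that
# puts additional pressure on species adapted to low phosphorus availability.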
|
Environment
| 2,020 |
November 3, 2020
|
https://www.sciencedaily.com/releases/2020/11/201103075526.htm
|
Increasing the efficiency of organic solar cells
|
Organic solar cells are cheaper to produce and more flexible than their counterparts made of crystalline silicon, but do not offer the same level of efficiency or stability. A group of researchers led by Prof. Christoph Brabec, Director of the Institute of Materials for Electronics and Energy Technology (i-MEET) at the Chair of Materials Science and Engineering at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), have been working on improving these properties for several years. During his doctoral thesis, Andrej Classen, who is a young researcher at FAU, demonstrated that efficiency can be increased using luminescent acceptor molecules. His work has now been published in the journal
|
The sun can supply radiation energy of around 1000 watts per square metre on a clear day at European latitudes. Conventional monocrystalline silicon solar cells convert up to a fifth of this energy into electricity, which means they have an efficiency of around 20 percent. Prof. Brabec's working group has held the world record for organic photovoltaic module efficiency, 12.6%, since September 2019. The multi-cell module developed at Energie Campus Nürnberg (EnCN) has a surface area of 26 cm². 'If we can achieve over 20% in the laboratory, we could possibly achieve 15% in practice and become real competition for silicon solar cells,' says Prof. Brabec. The advantages of organic solar cells are obvious -- they are thin and flexible like foil and can be adapted to fit various substrates. The wavelength at which the sunlight is absorbed can be 'adjusted' via the macromolecules used. An office window coated with organic solar cells that absorbs the red and infrared spectrum would not only screen out thermal radiation, but also generate electricity at the same time. One criterion that is becoming increasingly important in view of climate change is the operation period after which a solar cell generates more energy than was required to manufacture it. This so-called energy payback time is heavily dependent on the technology used and the location of the photovoltaic (PV) system. According to the latest calculations of the Fraunhofer Institute for Solar Energy Systems (ISE), the energy payback time of PV modules made of silicon in Switzerland is around 2.5 to 2.8 years. However, this time is reduced to only a few months for organic solar cells, according to Dr. Thomas Heumüller, research associate at Prof. Brabec's Chair. Compared with a 'traditional' silicon solar cell, its organic equivalent has a definite disadvantage: Sunlight does not immediately produce charge for the flow of current, but rather so-called excitons in which the positive and negative charges are still bound. 'An acceptor that only attracts the negative charge is required in order to trigger charge separation, which in turn produces free charges with which electricity can be generated,' explains Dr. Heumüller. A certain driving force is required to separate the charges. This driving force depends on the molecular structure of the polymers used. Since certain molecules from the fullerene class of materials have a high driving force, they have been the preferred choice of electron acceptors in organic solar cells up to now. In the meantime, however, scientists have discovered that a high driving force has a detrimental effect on the voltage. This means that the output of the solar cell decreases, in accordance with the formula that applies to direct current -- power equals voltage times current. Andrej Classen wanted to find out how low the driving force has to be to just achieve complete charge separation of the exciton. To do so, he compared combinations of four donor and five acceptor polymers that have already proven their potential for use in organic solar cells. Classen used them to produce 20 solar cells under exactly the same conditions, with driving forces ranging from almost zero to 0.6 electronvolts. The measurement results provided the proof for a theory already assumed in research -- a 'Boltzmann equilibrium' between excitons and separated charges, the so-called charge transfer (CT) states. 'The closer the driving force reaches zero, the more the equilibrium shifts towards the excitons,' says Dr. 
Larry Lüer, a specialist in photophysics in Brabec's working group. This means that future research should concentrate on preventing the exciton from decaying, which means increasing its excitation 'lifetime'. Up to now, research has only focused on the lifetime of the CT state. Excitons can decay by emitting light (luminescence) or heat. By skilfully modifying the polymers, the scientists were able to reduce heat production to a minimum while retaining as much luminescence as possible. 'The efficiency of solar cells can therefore be increased using highly luminescent acceptor molecules,' predicts Andrej Classen.
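As a simple illustration of the figures quoted above (a worked example added here, not a calculation from the paper), the areal power output follows from irradiance times efficiency, and the cell output itself from the direct-current relation power = voltage times current; the operating-point values in the last two lines are hypothetical.

# Worked numbers based on the efficiencies quoted in the article; illustrative only.
irradiance_W_per_m2 = 1000.0   # clear-day solar irradiance at European latitudes
for label, efficiency in [("silicon cell (~20%)", 0.20),
                          ("record organic module (12.6%)", 0.126),
                          ("hoped-for practical organic (~15%)", 0.15)]:
    print(f"{label}: {irradiance_W_per_m2 * efficiency:.0f} W/m^2")

# The DC relation P = V * I: a lower voltage (e.g. caused by a large driving
# force) reduces output power even if the current is unchanged.
voltage_V, current_A = 0.85, 0.02   # hypothetical operating point of a single cell
print("Cell power:", voltage_V * current_A, "W")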
|
Environment
| 2,020 |
November 3, 2020
|
https://www.sciencedaily.com/releases/2020/11/201103075520.htm
|
New protein nanobioreactor designed to improve sustainable bioenergy production
|
Researchers at the University of Liverpool have unlocked new possibilities for the future development of sustainable, clean bioenergy. The study, published in
|
The carboxysome is a specialised bacterial organelle that encapsulates the cell's essential CO2-fixing enzymes. The first step in the study involved researchers installing specific genetic elements into an industrial bacterium so that it produced empty carboxysome shells. The extreme oxygen sensitivity of hydrogenases (enzymes that catalyse the generation and conversion of hydrogen) is a long-standing issue for hydrogen production in bacteria, so the team developed methods to incorporate catalytically active hydrogenases into the empty shell. Project lead Professor Luning Liu, Professor of Microbial Bioenergetics and Bioengineering at the Institute of Systems, Molecular and Integrative Biology, said: "Our newly designed bioreactor is ideal for oxygen-sensitive enzymes, and marks an important step towards being able to develop and produce a bio-factory for hydrogen production." In collaboration with Professor Andy Cooper in the Materials Innovation Factory (MIF) at the University, the researchers then tested the hydrogen-production activities of the bacterial cells and the biochemically isolated nanobioreactors. The nanobioreactor achieved a ~550% improvement in hydrogen-production efficiency and a greater oxygen tolerance compared with the enzymes without shell encapsulation. "The next step for our research is answering how we can further stabilise the encapsulation system and improve yields," said Professor Liu. "We are also excited that this technical platform opens the door for us, in future studies, to create a diverse range of synthetic factories to encase various enzymes and molecules for customised functions." First author, PhD student Tianpei Li, said: "Due to climate change, there is a pressing need to reduce the emission of carbon dioxide from burning fossil fuels. Our study paves the way for engineering carboxysome shell-based nanoreactors to recruit specific enzymes and opens the door for new possibilities for developing sustainable, clean bioenergy."
|
Environment
| 2,020 |
November 3, 2020
|
https://www.sciencedaily.com/releases/2020/11/201103104740.htm
|
Dolphins ride out dietary ups and downs
|
More evidence has emerged to support stricter coastal management, this time focusing on pollution and overfishing in the picturesque tourist waters off Auckland in New Zealand.
|
A study of common dolphins (Delphinus delphis) in the Hauraki Gulf connects their diet with the prevalence of commercial fishing and water quality -- emphasising the need to carefully manage marine parks and surrounding environments to prevent overfishing and extensive nutrient runoff. Researchers from the Cetacean Ecology Research Group (CERG) at Auckland's Massey University, in collaboration with the Cetacean Ecology Behaviour and Evolution Lab (CEBEL) at Flinders University and the NZ National Institute of Water and Atmospheric Research (NIWA), examined dietary differences between males and females and over time. Conducted over 13 years, the study tracked carbon and nitrogen isotope values in the dolphins as indicators of diet. "We found their carbon values declined during the study period (2004-2016), which could be linked to a decrease in primary productivity in the gulf over time, or a change in dolphin prey selection, or both," says lead author Dr Katharina J. Peters, a postgraduate researcher of CERG at Massey University and adjunct member of the CEBEL at Flinders University. Senior author and CERG research leader, Massey University Associate Professor Karen Stockin, says depletion of prey stock via commercial fishing could be just one of several reasons causing dolphins to change their diet by shifting their target prey. Dr Stockin further indicated that shifts in prey distribution as a consequence of climatic change may also be at play and confirmed this warranted further investigation. The Hauraki Gulf has been the focus of much research over the past two decades, with a sobering report released by Auckland Council earlier this year showing the dire state of this important ecosystem. Alongside increased commercial fishing, the document reports high nutrient loads and heavy metal contamination in some parts of the gulf. "We observed increased nitrogen values over the study period, which could be linked to increased urbanisation of the coastal areas, with high levels of terrestrial nitrogen from, for example, agriculture being washed into the sea." Researchers also recorded some differences between male and female isotopic values. A change in nitrogen values at a body length of ~160 cm was detected, which may reflect the transition in diet of calves from mother's milk to fish after being weaned. "They also seem to have a slightly broader diet in autumn-winter compared to spring-summer," Dr Stockin says. "This could reflect that the types of available prey in the gulf change throughout the seasons." Dolphins are important predators in marine ecosystems and research on their foraging behavior is critical for managing ecosystems in the future, say researchers. Common dolphins are a sentinel species found in tropical and temperate waters globally and are generally opportunistic predators feeding locally on abundant small pelagic schooling fish. The new Hauraki Gulf data provide an important baseline to detect future changes in the trophic ecology of D. delphis in a coastal ecosystem that is a key habitat for this species in New Zealand, researchers say.
|
Environment
| 2,020 |
April 27, 2021
|
https://www.sciencedaily.com/releases/2021/04/210427110608.htm
|
Mangrove genetic diversity in Africa
|
In collaboration with researchers at the Vrije Universiteit Brussel, a University of Maryland (UMD) postdoctoral researcher recently co-published a large-scale study examining the genetic diversity of mangroves over more than 1,800 miles of coastline in the Western Indian Ocean, including Eastern Africa and several islands. While the mangroves of Asia, Australia, and the Americas have been more extensively studied, little work has been done classifying and highlighting genetic diversity in African mangrove populations for conservation. Similar to other wetlands, mangrove trees like the species studied in the new paper in
|
"Whenever I get asked about mangroves, I always say they are my happy place," says Magdalene Ngeve, postdoctoral researcher at UMD in Maile Neel's lab (professor in the Department of Plant Science and Landscape Architecture and Department of Entomology). "They are very fascinating systems to work with. When I went to do my field work for my Master's thesis and got to experience mangroves, be close to the trees, and see how much biodiversity they host, I instantly fell in love and knew this is what I should be studying."Born and raised in Cameroon, Ngeve grew up near the coast, but didn't give her local mangroves much thought until she left to pursue her graduate degrees on a scholarship to the Vrije Universiteit Brussel. After studying zoology for her undergraduate work, Ngeve became a biologist with a specialization in environment, biodiversity, and ecosystems for her Master's and conservation ecology and genetics for her PhD. It was there that her love of mangroves blossomed, and she brought that passion with her to UMD as a Presidential Postdoctoral Fellow."I am so excited we got this study out there, with the collaboration of mangrove experts in Brussels [Belgium] and around the world," says Ngeve. "In conservation, we have limited resources to manage everything on the planet, so we talk about these evolutionary significant units, which are units we should focus on to preserve as much genetic variability as possible to preserve their evolutionary potential and ensure the ecosystem is resilient. This work showed us that we actually have even more distinct populations than we realized, and that is important for conservation."As Ngeve explains it, people don't often think about ocean currents as a way to spread seeds and seedlings (known as propagules), or even pollen. But for many species, water is the primary way seeds are spread and diversity is ensured. "Ocean currents can be genetic barriers, and they can be facilitators of genetic connectivity at the same time," explains Ngeve. "It just depends on how they all come together and connect. In this study, we saw that ocean current patterns maintain genetic diversity of remote island populations like Seychelles, and cause an accumulation of genetic diversity around where the South Equatorial Current [near the equator] splits near the Eastern African coastline. Seychelles and Madagascan populations likely came to existence from ancient dispersal events from present day Australia and Southeast Asia, and these island populations act as an important stepping stone in spreading diversity to the East African coastline, which we found was a much younger population. Splitting currents also create a barrier to genetic connectivity between even nearby northern and southern populations of the East African coast and the Mozambique Channel."This study helps fill in an important gap in the research, since African mangroves, especially the genetics aspects, have been understudied according to Ngeve. For her doctoral work, Ngeve presented some of the first genetic work on the mangroves of the West African coast, so this work expanding to the East African coast is a natural next step. By understanding the genetic diversity of these species locally, researchers can make connections across populations for global conservation.Ngeve is particularly interested in coastal environments, or the intersection of land and sea as a way to look at larger global change phenomena. 
"Almost everything we do on land affects the ocean, and those intermediate environments are like the bridge," says Ngeve. "We talk a lot about climate change and sea level rise, and coasts and estuaries [like the Chesapeake Bay and Hudson River Estuary] are on the front lines. How are species in these ecosystems surviving and adapting? Foundational species like mangroves and submerged aquatic vegetation, which I also study as a postdoc, are home to so many species and host high biodiversity. Making sure they are resilient means protecting that biodiversity for all that depend on them."Ngeve is also considering how humans depend on the mangroves, and hopes to find ways that rural communities in her home country of Cameroon can protect the mangroves while still providing for their families. Through her start up project called BeeMangrove, she hopes to transition locals from overharvesting the mangroves for wood as a livelihood to raising honey bees that can simultaneously help pollinate the mangroves and produce honey to sell."While studying the Rhizophora racemosa mangrove species which is pollinated by both wind and insects, I had observed that mangroves that are more windward produced more seedlings. From my genetic work, I also observed pollination limitations at study sites. So clearly, there is a pollination problem in the mangroves that don't get the wind," explains Ngeve. "At the same time, there are also mangrove markets where people sell nothing but mangrove wood, and the logging is an issue. It's no doubt that mangroves like other ecosystems are declining, and we are losing so much biodiversity. In rural Cameroon, I began to wonder what these people could do differently. But they are relying exclusively on the mangroves for their livelihood. It's all they have for food, they harvest the wood for building their homes, and sell it to provide for their families and send their children to school -- mangroves are everything to them. How do you tell people so dependent on the system to stop logging? You have to provide an alternative."Ngeve hopes that with future funding, BeeMangrove can be that alternative. She is currently examining perceptions about mangroves in Cameroon. She has worked with local organizations to provide education and outreach to communities, and the first beehive has been installed. The goal is to work with locals from the start to pilot and grow this program with their support.Growing up in Cameroon, this project is very personal to Ngeve. She is grateful for all the support from her family as a researcher. "My mother became a mangrove researcher herself, tending to my juvenile mangrove plants even after my growth experiment was over," she says. Her father, Jacob Ngeve, is in fact an '88 doctoral alum of the UMD Department of Plant Sciences and Landscape Architecture (then Department of Horticulture). As a quantitative plant geneticist, he has had an international career as an agricultural researcher, and his influence is carried through in her work to this day."Growing up in Africa, I remembered seeing photos he would send of him in these very same walls in the College of Agriculture and Natural Resources," says Ngeve. "I never imagined I'd be here one day -- it is a real privilege. My father would always tell me, 'Education is the only thing a father can give his child, because that, no one can take away from you.' And that meant so much to me because I clung to those words, and against all odds, I studied and followed my dreams. 
I look forward to making an impact here the way he did."Ngeve is also incredibly thankful for the mentorship she has received on her road to this work and to UMD, from Neel as her current mentor, to those in Brussels, to her family. "My mentors are the reason I am here, and I look forward to paying forward all they have invested in me."
|
Geography
| 2,021 |
April 26, 2021
|
https://www.sciencedaily.com/releases/2021/04/210426140751.htm
|
Sponges leave trails on the ocean floor
|
Sponges: They are considered to be one of the most primitive forms of animal life, because they have neither locomotion organs nor a nervous system. A team around deep-sea scientist Antje Boetius has now discovered that sponges leave trails on the sea floor in the Arctic deep sea. They conclude that the animals might move actively -- even if only a few centimetres per year. They are now publishing these unique findings in the journal
|
The surprise was great when researchers looked at high-resolution images of the sea floor of the Arctic deep sea in detail: path-like tracks across the sediments ended where sponges were located. These trails were observed to run in all directions, including uphill. "We conclude from this that the sponges might actively move across the sea floor and leave these traces as a result of their movement," reports Dr Teresa Morganti, sponge expert from the Max Planck Institute for Marine Microbiology in Bremen. This is particularly exciting because science had previously assumed that most sponges are attached to the seafloor or are passively moved by ocean currents, usually down slopes. "There are no strong currents in the Arctic deep sea that could explain the structures found on the sea floor," explains expedition leader Prof. Antje Boetius, who works together with deep-sea biologist Dr Autun Purser from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) in the HGF-MPG Joint Research Group for Deep-Sea Ecology and Technology. The recently published recordings were made in 2016 during an expedition with the research icebreaker Polarstern to the Karasik Seamount at 87° North, about 350 kilometres from the North Pole, using the towed camera system OFOBS (Ocean Floor Observation and Bathymetry System). "With OFOBS we can create 3D models from the deep sea. The seamount's summit was densely populated with sponges. 69 percent of our images showed trails of sponge spicules, many of which led to live animals," reports Autun Purser. Many questions arise from these observations: Why do the sponges move? How do they orient themselves? Possible reasons for locomotion could be foraging, avoiding unfavourable environmental conditions, or distributing offspring. Searching for food in particular plays a major role in nutrient-poor ecosystems such as the Arctic deep sea. Sponges play an important role there in any case. As filter feeders they can utilize particulate and dissolved organic matter and are intensively involved in nutrient and matter recycling by means of their bacterial symbionts. Sponges also provide Arctic fish and shrimp with useful structures to use as habitat. However, the scientists still have to investigate the mechanisms of locomotion.
|
Geography
| 2,021 |
April 26, 2021
|
https://www.sciencedaily.com/releases/2021/04/210426085914.htm
|
New research uncovers continental crust emerged 500 million years earlier than thought
|
The first emergence and persistence of continental crust on Earth during the Archaean (4 billion to 2.5 billion years ago) has important implications for plate tectonics, ocean chemistry, and biological evolution, and it happened about half a billion years earlier than previously thought, according to new research being presented at the EGU General Assembly 2021.
|
Once land becomes established through dynamic processes like plate tectonics, it begins to weather and add crucial minerals and nutrients to the ocean. A record of these nutrients is preserved in the ancient rock record. Previous research used strontium isotopes in marine carbonates, but such carbonates are usually scarce or altered in rocks older than 3 billion years. Now, researchers are presenting a new approach to trace the first emergence of old rocks using a different mineral: barite. Barite forms from a combination of sulfate coming from ocean water mixing with barium from hydrothermal vents. Barite holds a robust record of ocean chemistry within its structure, useful for reconstructing ancient environments. "The composition of the piece of barite we pick up in the field now, that has been on Earth for three and a half billion years, is exactly the same as it was when it actually precipitated," says Desiree Roerdink, a geochemist at University of Bergen, Norway, and team leader of the new research. "So in essence, it is really a great recorder to look at processes on the early Earth." Roerdink and her team tested six different deposits on three different continents, ranging from about 3.2 billion to 3.5 billion years old. They calculated the ratio of strontium isotopes in the barite, and from there, inferred the time when weathered continental rock made its way to the ocean and incorporated itself into the barite. Based on the data captured in the barite, they found that weathering started about 3.7 billion years ago -- about 500 million years earlier than previously thought. "That is a huge time period," Roerdink says. "It essentially has implications for the way that we think about how life evolved." She added that scientists usually think about life starting in deep sea, hydrothermal settings, but the biosphere is complex. "We don't really know if it is possible that life could have developed at the same time on land," she noted, adding "but then that land has to be there." Lastly, the emergence of land says something about plate tectonics and the early emergence of a geodynamic Earth. "To get land, you need processes operating to form that continental crust, and form a crust that is chemically different from the oceanic crust," Roerdink says.
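The strontium isotope reasoning can be illustrated with a simple two-endmember mixing calculation. The endmember ratios and the measured value below are illustrative assumptions, not figures from Roerdink's study: if seawater strontium is a mix of hydrothermal strontium and strontium delivered by continental weathering, then any measured 87Sr/86Sr above the hydrothermal value implies a contribution from weathered, emerged land.

# Two-endmember mixing for the 87Sr/86Sr ratio recorded in barite.
# All numbers are illustrative assumptions for demonstration only.
R_HYDROTHERMAL = 0.7025   # assumed ratio of hydrothermal (mantle-derived) input
R_CONTINENTAL = 0.7100    # assumed ratio of riverine input from weathered crust

def seawater_ratio(frac_continental_sr):
    """87Sr/86Sr of seawater when a given fraction of its Sr comes from rivers."""
    f = frac_continental_sr
    return f * R_CONTINENTAL + (1.0 - f) * R_HYDROTHERMAL

def continental_fraction(measured_ratio):
    """Invert the mixing relation: fraction of Sr supplied by the continents."""
    return (measured_ratio - R_HYDROTHERMAL) / (R_CONTINENTAL - R_HYDROTHERMAL)

measured = 0.7040   # hypothetical barite value slightly above the hydrothermal endmember
print(f"Implied continental Sr fraction: {continental_fraction(measured):.0%}")    # 20%
print(f"Consistency check: {seawater_ratio(continental_fraction(measured)):.4f}")  # 0.7040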
|
Geography
| 2,021 |
April 26, 2021
|
https://www.sciencedaily.com/releases/2021/04/210426085912.htm
|
Mapping the path to rewilding: The importance of landscape
|
Rewilding -- a hands-off approach to restoring and protecting biodiversity -- is increasingly employed across the globe to combat the environmental footprint of rapid urbanization and intensive farming. The recent reintroduction of grey wolves in Yellowstone, America's first national park, is regarded as one of the most successful rewilding efforts, having reinvigorated an ecosystem that had been destabilized by the removal of large predators.
|
However, successful attempts to rewild the landscape hinge on more than just the reintroduction of a plant or animal species; they also require that geography and geology be taken into account, according to new research from the University of Amsterdam and the Dutch State Forestry Service. It is the landscape that ultimately decides the outcome of rewilding efforts, says Kenneth Rijsdijk, an ecologist at the University of Amsterdam, who is presenting the team's results at the European Geosciences Union (EGU) General Assembly 2021. One of the key challenges of rewilding is deciding where to do it, Rijsdijk says, especially given competing land uses like infrastructure and agriculture. "Clearly, we cannot, and should not, rewild everywhere. It makes sense to pick out specific areas where rewilding is more likely to succeed, taking into account how landscape features, like ruggedness and soil nutrients, can shape ecosystems." Ecologists gauge rewilding success using biodiversity metrics, such as an increase in the abundance and diversity of plant or bird species. But these measurements do not factor in the role of landscape: from the topography and river systems to the soil and underlying geology. These aspects -- known collectively as geodiversity -- furnish all the physical support required for life on Earth. "Landscape plays a pivotal role in defining the ecosystem: determining where vegetation grows, herbivores graze, animals seek shelter, and predators hunt," Rijsdijk says. "It's remarkable that, from a conservation standpoint, the landscape itself is significantly undervalued in the success of rewilding projects," says coauthor Harry Seijmonsbergen, an ecologist at the University of Amsterdam. The team aims to build a more holistic index for measuring and predicting rewilding success. Early applications of their approach -- in northwestern Europe, at sites previously marked by the Dutch State Forestry Service as possible candidates for rewilding -- show that more varied landscapes have greater conservation potential. Their index draws on more than a century of geological and geographical map data, which the team have mapped out across 12 sites in northwestern Europe -- combining landscape features such as elevation, forested areas, openness, and quietness in order to compute a metric for landscape quality. They also studied how geodiversity influenced rewilding over time using satellite, aerial, and field data. By tuning their new index against a previously used ecological index, they were able to independently assess the relationship between biodiversity and landscape at each site. As an independent test of landscape ruggedness, they supplemented their workflow with previously collected data from Yellowstone. The park's mountainous and varied terrain hosts niche environments for animals to hunt and shelter. The new research could help decision-makers select future rewilding sites with the right recipe for success. "Conservation biologists have been asking how they can pinpoint sites with the right characteristics for rewilding," Rijsdijk says. "Our research is the first to start building the required toolkit to measure landscape quality and inform that choice."
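As a loose illustration of what a composite landscape-quality metric could look like (a sketch under assumed feature names, value ranges and weights, not the authors' published index), each landscape feature can be rescaled to a common 0-1 range and combined as a weighted average.

# Sketch of a composite landscape-quality score; features, ranges and weights
# are illustrative assumptions, not the index used in the study.
def normalise(value, lo, hi):
    """Rescale a raw feature value onto 0-1, clipping to the stated range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def landscape_quality(site, weights):
    """Weighted average of normalised landscape features for one site."""
    return sum(weights[k] * site[k] for k in weights) / sum(weights.values())

site = {
    "elevation_variation": normalise(180, 0, 300),  # metres of relief (hypothetical)
    "forest_cover":        normalise(0.55, 0, 1),   # fraction forested
    "openness":            normalise(0.30, 0, 1),   # fraction open habitat
    "quietness":           normalise(0.80, 0, 1),   # distance-from-disturbance proxy
}
weights = {key: 1.0 for key in site}                # equal weights as a placeholder

print(f"Landscape-quality score: {landscape_quality(site, weights):.2f}")  # 0.56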
|
Geography
| 2,021 |
April 23, 2021
|
https://www.sciencedaily.com/releases/2021/04/210423130128.htm
|
First description of a new octopus species without using a scalpel
|
An evolutionary biologist from the University of Bonn brought a new octopus species to light from depths of more than 4,000 meters in the North Pacific Ocean. The sensational discovery made waves in the media a few years ago. Researchers in Bonn have now published the species description and named the animal "Emperor dumbo" (Grimpoteuthis imperator). Just as unusual as the organism is the researchers' approach: in order to describe the new species, they did not dissect the rare creature, but instead used non-destructive imaging techniques. The results have now been published in the journal
|
In the summer of 2016, Dr. Alexander Ziegler from the Institute of Evolutionary Biology and Ecology at the University of Bonn spent several months in the North Pacific aboard the research vessel SONNE. The crew lowered the steel basket to the seabed around 150 times in order to retrieve rocks, sediments, and living creatures. One organism in particular caused a media stir: a dumbo octopus. The animal, about 30 centimeters in size, was found in waters more than 4,000 meters deep. However, the octopus could not be recovered alive: "The deep-sea organism is not adapted to the environmental conditions of the ocean surface," Ziegler explains.Dumbo octopuses are a group of deep-sea-dwelling octopuses that includes 45 species. The name is based on the flying elephant from the Walt Disney movie of the same name, who is made fun of because of his unusually large ears -- the fins of the dumbo octopuses, which are on the sides of the head resemble these elephant ears. However, the dumbo on the research vessel SONNE differed significantly from the known octopus species. "It was clear to me straight away that we had caught something very special," the biologist reports. So Ziegler immediately photographed the unusual animal, took a small tissue sample for DNA analysis, and then preserved the octopus in formalin.Together with his former master's student Christina Sagorny, Ziegler has now published a description of the previously unknown species. Just as unusual as the octopus was the methodology used. The animals are usually dissected by zoologists, as the internal organs are also important for the description of a new species. "However, as this octopus is very valuable, we were looking for a non-destructive method," explains the researcher.The eight-armed cephalopod therefore did not end up under the scalpel, but in the high-field magnetic resonance imaging system of the German Center for Neurodegenerative Diseases (DZNE) in Bonn. This device is routinely used to image test persons' brains. Thankfully, Dr. Eberhard D. Pracht from the DZNE agreed to conduct a high-resolution scan of the dumbo octopus in 3D. As part of her master's thesis, Christina Sagorny then investigated whether high-field MRI can be used to study internal organs and other soft tissues just as well as through conventional dissection. "The quality is actually even better," Ziegler says.One of the few exceptions: the beak and rasping tongue (radula) of the cephalopod are made of hard chitin that does not image well using MRI. The biologists therefore also consulted the micro-computed tomography system of the paleontologists at the University of Bonn. This technique showed the beak and radula razor-sharp and in 3D. "These hard part structures are an integral part of the species description of octopuses," Ziegler explains. The researchers also decoded the animal's genetic material to reconstruct the family relationships. Ziegler: "The DNA showed beyond a doubt that we were looking at a species of the genus Grimpoteuthis."Examination of the reproductive organs revealed the dumbo octopus to be an adult male. Compared to other species of this genus, it displays several special characteristics. For example, an average of 71 suckers were detected on each arm, which the animal needs to catch prey and which reflect body size. 
The length of the cirri, which are small appendages on the arms that the deep-sea animals presumably use to sense their prey, also differs from species already known.The web that stretches between the arms, with which the dumbo slowly floats down in the water column, catching worms and crustaceans as if in a bell, also only reaches just over halfway from the mouth down the arms. "The web is much longer in dumbo octopus species that mainly float freely in the water column," Ziegler says. This would indicate that the new species lives close to the seafloor, because otherwise the web would be a hindrance to movements on the bottom.As the species-describing researchers, Sagorny and Ziegler had the privilege of naming the new species: they decided on Grimpoteuthis imperator -- in English "Emperor dumbo." Background: the animal was discovered not far from Japan in an underwater mountain range whose peaks are named after Japanese emperors.The combination of non-destructive methods produced a crisp digital copy of the animal. Anybody interested can download it from the online database "MorphoBank" for further research and learning purposes. The preserved octopus itself is kept in the archives of the Museum für Naturkunde in Berlin, Germany. "There, it can then still be analyzed 100 years from now, for example when more modern investigation methods or new questions arise," Ziegler explains. "Our non-destructive approach could set a precedent, especially for rare and valuable animals," said the Bonn-based evolutionary biologist.
|
Geography
| 2,021 |
April 23, 2021
|
https://www.sciencedaily.com/releases/2021/04/210423095424.htm
|
Red Sea is no longer a baby ocean
|
The Red Sea is a fascinating and still puzzling area of investigation for geoscientists. Controversial questions include its age and whether it represents a special case in ocean basin formation or if it has evolved similarly to other, larger ocean basins. Researchers have now published a new tectonic model that suggests that the Red Sea is not only a typical ocean, but more mature than thought before.
|
It is 2,250 kilometers long, but only 355 kilometers wide at its widest point -- on a world map, the Red Sea hardly resembles an ocean. But this is deceptive. A new, albeit still narrow, ocean basin is actually forming between Africa and the Arabian Peninsula. Exactly how young it is and whether it can really be compared with other young oceans in Earth's history has been a matter of dispute in the geosciences for decades. The problem is that the newly formed oceanic crust along the narrow, north-south aligned rift is widely buried under a thick blanket of salt and sediments. This complicates direct investigations. In the new study, in addition to information from high-resolution seafloor maps and chemical investigations of rock samples, the team primarily used gravity and earthquake data to develop a new tectonic model of the Red Sea basin. Gravity anomalies have already helped to detect hidden seafloor structures such as rift axes, transform faults and deep-sea mountains in other regions, for example in the Gulf of Mexico, the Labrador Sea or the Andaman Sea. The authors of the current study compared gravity patterns of the Red Sea axis with comparable mid-ocean ridges and found more similarities than differences. For example, they identified positive gravity anomalies running perpendicular to the rift axis, which are caused by variations in crustal thickness running along the axis. "These so-called 'off-axis segmentation trails' are very typical features of oceanic crust originating from magmatically more active, thicker and thus heavier areas along the axis. However, this observation is new for the Red Sea," says Dr. Nico Augustin. Bathymetric maps as well as earthquake data also support the idea of an almost continuous rift valley throughout the Red Sea basin. This is also confirmed by geochemical analyses of rock samples from the few areas that are not overlain by salt masses. "All the samples we have from the Red Sea rift have geochemical fingerprints of normal oceanic crust," says Dr. Froukje van der Zwan, co-author of the study. With this new analysis of gravity and earthquake data, the team constrains the onset of ocean expansion in the Red Sea to about 13 million years ago. "That's more than twice the generally accepted age," Dr. Augustin says. That means the Red Sea is no longer a baby ocean, but a young adult with a structure similar to that of the young southern Atlantic some 120 million years ago. The model now presented is, of course, still being debated in the scientific community, says the lead author, "but it is the most straightforward interpretation of what we observe in the Red Sea. Many details in salt- and sediment-covered areas that were previously difficult to explain suddenly make sense with our model." While it has thus been able to answer some questions about the Red Sea, the model also raises many new ones that inspire further research in the Red Sea from a whole new scientific perspective.
|
Geography
| 2,021 |
April 23, 2021
|
https://www.sciencedaily.com/releases/2021/04/210423085707.htm
|
Radar satellites can better protect against bushfires and floods
|
New research led by Curtin University has revealed how radar satellites can improve the ability to detect, monitor, prepare for and withstand natural disasters in Australia including bushfires, floods and earthquakes.
|
The research used Synthetic Aperture Radar data obtained by the European Space Agency Sentinel-1 satellite, amongst others, to evaluate Australia-specific case studies. Lead researcher Dr Amy Parker, an ARC Research Fellow from Curtin's School of Earth and Planetary Sciences, said the Sentinel-1 satellite mission provided the first complete global Synthetic Aperture Radar (SAR) dataset and the first opportunity to use this type of data to assess hazards in new locations, including Australia. "What makes SAR so valuable is that it provides all-weather and night-and-day capability to remotely monitor the Earth's surface, unlike traditional optical Earth Observation (EO) imagery, which is at the mercy of cloud, fog, rainfall and smoke," Dr Parker said. "SAR data can be used to precisely map topography, track movements of the ground surface, characterize land-use change, and map damage to infrastructure, all of which can significantly improve how we track and respond to natural disasters." But despite SAR satellites being well-documented as a hazard monitoring tool, the uptake of such data varies, and in Australia the use of SAR data has been limited. The research applied SAR data to nine case studies covering critical issues such as bushfires, floods and earthquakes to assess the power of SAR as a disaster mitigation and prevention tool. "For example, we looked at the 2016 Wildman Coastal Plains Floods in the Northern Territory and found that SAR has added benefits in mapping flood patterns and floodplain dynamics." Dr Parker said these benefits can also be applied to maintaining mine site safety and better understanding seismic hazards and activity. "Globally, Australia is one of the largest users of Earth observation data derived from satellites, which contributes to national hazard monitoring and response and more than 100 state and federal government programs. Our research shows SAR data can effectively complement this," Dr Parker said. "Previously SAR data has been considered too expensive to use as a tool for hazard mitigation, but our findings show that, through Sentinel-1, we now have economically viable wall-to-wall, consistent sensor imaging of Australia." "The uptake of SAR data for hazard applications globally will continue to benefit from validated case studies such as ours, the development of tools that support operational use, and the continued provision of open-access imagery by large-scale satellite missions."
|
Geography
| 2,021 |
April 21, 2021
|
https://www.sciencedaily.com/releases/2021/04/210421124657.htm
|
A growing problem of 'deepfake geography': How AI falsifies satellite images
|
A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.
|
Both images exemplify what a new University of Washington-led study calls "location spoofing." The photos -- created by different people, for different purposes -- are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such "deepfake geography" could become a growing problem. So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking. "This isn't just Photoshopping things. It's making data look uncannily realistic," said Bo Zhao, assistant professor of geography at the UW and lead author of the study, which was published April 21. As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That's due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term "paper towns" describes discreetly placed fake cities, mountains, rivers or other features on a map to prevent copyright infringement. On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of "Beatosu" and "Goblu," a play on "Beat OSU" and "Go Blue," because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map. But with the prevalence of geographic information systems, Google Earth and other satellite imaging systems, location spoofing involves far greater sophistication, researchers say, and carries with it more risks. In 2019, the director of the National Geospatial Intelligence Agency, the organization charged with supplying maps and analyzing satellite images for the U.S. Department of Defense, implied that AI-manipulated satellite images can be a severe national security threat. To study how satellite images can be faked, Zhao and his team turned to an AI framework that has been used in manipulating other types of digital files. When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images from an urban area, then generates a deepfake image by feeding the learned characteristics onto a different base map -- similar to how popular image filters can map the features of a human face onto a cat. Next, the researchers combined maps and satellite images from three cities -- Tacoma, Seattle and Beijing -- to compare features and create new images of one city, drawn from the characteristics of the other two. They designated Tacoma their "base map" city and then explored how geographic features and urban structures of Seattle (similar in topography and land use) and Beijing (different in both) could be incorporated to produce deepfake images of Tacoma. In the example below, a Tacoma neighborhood is shown in mapping software (top left) and in a satellite image (top right). The subsequent deepfake satellite images of the same neighborhood reflect the visual patterns of Seattle and Beijing.
Low-rise buildings and greenery mark the "Seattle-ized" version of Tacoma on the bottom left, while Beijing's taller buildings, which AI matched to the building structures in the Tacoma image, cast shadows -- hence the dark appearance of the structures in the image on the bottom right. Yet in both, the road networks and building locations are similar.The untrained eye may have difficulty detecting the differences between real and fake, the researchers point out. A casual viewer might attribute the colors and shadows simply to poor image quality. To try to identify a "fake," researchers homed in on more technical aspects of image processing, such as color histograms and frequency and spatial domains.Some simulated satellite imagery can serve a purpose, Zhao said, especially when representing geographic areas over periods of time to, say, understand urban sprawl or climate change. There may be a location for which there are no images for a certain period of time in the past, or in forecasting the future, so creating new images based on existing ones -- and clearly identifying them as simulations -- could fill in the gaps and help provide perspective.The study's goal was not to show that geospatial data can be falsified, Zhao said. Rather, the authors hope to learn how to detect fake images so that geographers can begin to develop the data literacy tools, similar to today's fact-checking services, for public benefit."As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data," Zhao said. "We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary," he said.Co-authors on the study were Yifan Sun, a graduate student in the UW Department of Geography; Shaozeng Zhang and Chunxue Xu of Oregon State University; and Chengbin Deng of Binghamton University.
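The detection cues mentioned above -- color histograms and frequency-domain structure -- can be illustrated with a short, self-contained sketch. This is not the study's actual pipeline, only a minimal example of the idea in Python; the file names are hypothetical, and any threshold for flagging an image would have to be calibrated against known-genuine imagery.

import numpy as np
from PIL import Image

def histogram_distance(path_a, path_b, bins=64):
    # Chi-square distance between per-channel color histograms of two images.
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    dist = 0.0
    for c in range(3):  # compare the R, G and B channels separately
        ha, _ = np.histogram(a[..., c], bins=bins, range=(0, 255), density=True)
        hb, _ = np.histogram(b[..., c], bins=bins, range=(0, 255), density=True)
        dist += 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-12))
    return dist

def high_frequency_share(path):
    # Fraction of spectral energy away from the lowest spatial frequencies;
    # generated imagery often shows an unusual frequency profile.
    g = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(g))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - h // 8:cy + h // 8, cx - w // 8:cx + w // 8].sum()
    return 1.0 - low / spectrum.sum()

# Hypothetical file names: a trusted reference tile and a suspect tile of the same area.
print(histogram_distance("tacoma_reference.png", "tacoma_suspect.png"))
print(high_frequency_share("tacoma_suspect.png"))

A large histogram distance or an unusual high-frequency share does not prove an image is fake, but it is the kind of quantitative signal a geographic fact-checking service could use to decide which images deserve closer scrutiny.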
|
Geography
| 2,021 |
April 21, 2021
|
https://www.sciencedaily.com/releases/2021/04/210421124530.htm
|
Unexpected presence of great white sharks in Gulf of California
|
Perhaps no other ocean creature lives in the human imagination like the great white shark. But while great white sharks might be plentiful in the minds of beachgoers across the country, there are only a handful of places in the world where white sharks can be consistently found. In those areas -- such as Central California, Guadalupe Island (Mexico), South Australia and South Africa -- they tend to be found aggregated in small hotspots, often located around seal colonies.
|
Researchers have estimated that white shark populations are incredibly small, with only hundreds of large adults and a few thousand white sharks total in any of their global populations. This has made protecting white sharks a priority for conservation, with many countries, including the United States and Mexico, having laws in place to prevent the catching and killing of the species. After uncovering a previously unknown white shark hot spot in the central Gulf of California, however, a new study involving University of Delaware assistant professor Aaron Carlisle suggests that these low numbers for eastern North Pacific white sharks, especially those listed in the Gulf of California, might be underestimates. In addition, the researchers found that the mortality rates for these white sharks might be underestimated as well, as an illicit fishery for the species was uncovered in the Gulf of California, suggesting that fishers were killing many more white sharks than has been previously understood. The research findings have now been published. For this study, Madigan, one of the researchers, interacted with a small group of local fishermen, and over several months that group killed about 14 large white sharks. Of these, almost half could have been mature females. This was a conservative estimate as other groups reportedly killed additional sharks during this time. To show just how significant this new source of mortality might be, Carlisle pointed to a National Oceanic and Atmospheric Administration (NOAA) Endangered Species Act status review on white shark populations from 2012. Using the best available information at the time, the NOAA report estimated that the adult female mortality rate for the entire eastern Pacific was likely around two annually. "He found, in just a two-week time period, more mortality in this one location than what we thought for the whole ocean," said Carlisle. "It was pretty clear then that, well, something kind of important is happening here." Carlisle explained that the mortality estimate of the earlier NOAA study could have been off because calculating mortality for animals in the ocean -- figuring out how many die naturally or unnaturally -- is one of the most difficult population metrics to quantify. What makes this finding particularly interesting is that this population of white sharks -- the eastern Pacific population of white sharks -- is perhaps the most well-studied group of sharks on the planet. Here, in the midst of all this scientific research, was a seemingly robust population of white sharks that had eluded scientific study. "It's been about 20 years since a new 'population' of white sharks has been discovered," said Carlisle. "The fact that the eastern Pacific has so much infrastructure focused on white sharks and we didn't know that there were these sites in the Gulf of California was kind of mind-blowing." Now that the aggregation has been identified, Carlisle said that there are many more scientific questions that need to be answered. There is a pressing need to study and quantify the population of sharks in the new aggregation site.
In particular, it is unknown whether these sharks are part of the other known populations of white sharks in the eastern Pacific, which include populations that occur in Central California and Guadalupe Island Mexico, or whether they belong to a third, unknown population.They are also interested in finding out how long the aggregation sites have been there and how long people have been fishing at the sites."One of the big points of this paper was to raise the red flag and let managers and scientists know that this is going on and this population is here and needs to be studied," said Carlisle. "Hopefully, it will be studied by some local researchers who are invested and working with the local fishing communities because these fishing communities are all heavily dependent on marine resources and fisheries."Carlisle stressed that the researchers are not looking to cause problems for the local fishing communities that they worked with for the study.Instead, perhaps there is an opportunity for these communities to learn about other opportunities with these animals through avenues like eco-tourism, educating them on the fact that these sharks are worth more and could provide a steadier stream of revenue alive rather than dead."This seems like it would be a perfect situation for ecotourism, much like there is at Guadalupe Island," said Carlisle. "There could be huge opportunities to build businesses around these populations of sharks, and that's just from a management point of view. From a science point of view, there's all sorts of fun things you could do."Still, Carlisle said that more than anything, this paper highlights just how little we know about what is going on with sharks in the ocean."Even though we've studied these animals so much, we still know so little," said Carlisle. "How many fish are in the ocean is a very old but very hard question to answer."
|
Geography
| 2,021 |
April 19, 2021
|
https://www.sciencedaily.com/releases/2021/04/210419135647.htm
|
Study reveals the workings of nature's own earthquake blocker
|
A new study finds a naturally occurring "earthquake gate" that decides which earthquakes are allowed to grow into magnitude 8 or greater.
|
Sometimes, the "gate" stops earthquakes in the magnitude 7 range, while ones that pass through the gate grow to magnitude 8 or greater, releasing over 32 times as much energy as a magnitude 7."An earthquake gate is like someone directing traffic at a one-lane construction zone. Sometimes you pull up and get a green 'go' sign, other times you have a red 'stop' sign until conditions change," said UC Riverside geologist Nicolas Barth.Researchers learned about this gate while studying New Zealand's Alpine Fault, which they determined has about a 75 percent chance of producing a damaging earthquake within the next 50 years. The modeling also suggests this next earthquake has an 82 percent chance of rupturing through the gate and being magnitude 8 or greater. These insights are now published in the journal Nature Geoscience.Barth was part of an international research team including scientists from Victoria University of Wellington, GNS Science, the University of Otago, and the US Geological Survey.Their work combined two approaches to studying earthquakes: evidence of past earthquakes collected by geologists and computer simulations run by geophysicists. Only by using both jointly were the researchers able to get new insight into the expected behavior of future earthquakes on the Alpine Fault."Big earthquakes cause serious shaking and landslides that carry debris down rivers and into lakes," said lead author Jamie Howarth, Victoria University of Wellington geologist. "We can drill several meters through the lake sediments and recognize distinct patterns that indicate an earthquake shook the region nearby. By dating the sediments, we can precisely determine when the earthquake occurred."Sedimentary records collected at six sites along the Alpine Fault identified the extent of the last 20 significant earthquakes over the past 4,000 years, making it one of the most detailed earthquake records of its kind in the world.The completeness of this earthquake record offered a rare opportunity for the researchers to compare their data against a 100,000-year record of computer-generated earthquakes. The research team used an earthquake simulation code developed by James Dieterich, distinguished professor emeritus at UC Riverside.Only the model with the fault geometry matching the Alpine Fault was able to reproduce the earthquake data. "The simulations show that a smaller magnitude 6 to 7 earthquake at the earthquake gate can change the stress and break the streak of larger earthquakes," Barth said. "We know the last three ruptures passed through the earthquake gate. In our best-fit model the next earthquake will also pass 82% of the time."Looking beyond New Zealand, earthquake gates are an important area of active research in California. The Southern California Earthquake Center, a consortium of over 100 institutions of which UCR is a core member, has made earthquake gates a research priority. In particular, researchers are targeting the Cajon Pass region near San Bernardino, where the interaction of the San Andreas and San Jacinto faults may cause earthquake gate behavior that could regulate the size of the next damaging earthquake there."We are starting to get to the point where our data and models are detailed enough that we can begin forecasting earthquake patterns. Not just how likely an earthquake is, but how big and how widespread it may be, which will help us better prepare," Barth said.
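The gap between a magnitude 7 and a magnitude 8 event quoted above follows from the standard energy-magnitude relation used in seismology. As a worked illustration (with E the radiated energy in joules and M the magnitude), in LaTeX notation:

\log_{10} E \approx 1.5\,M + 4.8 \quad\Longrightarrow\quad \frac{E_{M+1}}{E_{M}} = 10^{1.5} \approx 31.6, \qquad \frac{E_{9}}{E_{7}} = 10^{3} = 1000.

Each whole step in magnitude therefore carries roughly 32 times more radiated energy, which is why it matters so much whether a rupture stops at the gate or passes through it.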
|
Geography
| 2,021 |
April 19, 2021
|
https://www.sciencedaily.com/releases/2021/04/210419110001.htm
|
UK waters are home again to the bluefin tuna
|
Atlantic bluefin tuna have returned to UK waters and can once again be seen during the summer and autumn months.
|
Their numbers appear to be increasing, following a long period of absence linked to population decline, according to research led by Cefas and the University of Exeter. Marine scientists in the UK and Ireland have analysed multiple datasets, spanning a 16-year period, to document the increase in bluefin, which arrive into the waters of the Celtic Seas and off South West England, the Scilly Isles, and North West Ireland to feed in late summer and autumn. The research is part of the Defra-funded "Thunnus UK" research project. Thunnus UK was established to improve knowledge of this species, as an essential first step in ensuring a positive future for Atlantic bluefin tuna around the UK. Central to the project's success has been a concerted effort to share and combine important data on where people have observed Atlantic bluefin tuna. This will help to identify where and when these fish are found in UK waters. Nearly 1,000 unique observations were recorded between 2013 and 2018 by citizen scientists, scientists, fishers and eco-tour leaders. Researchers found that Atlantic bluefin tuna begin to arrive in May and stay as late as January. However, peak numbers were recorded between August and October each year. The research draws on five key data sources. Lead author Tom Horton, of the University of Exeter, said: "Atlantic bluefin tuna are once again a feature in nearshore waters off the UK and Ireland." "We've been able to document this story by using data from a wide variety of sources." "We need to work together to ensure a future for Atlantic bluefin tuna, both in the UK and Ireland and more broadly throughout their range in the Atlantic Ocean." "This is a really exciting study and the return of these fish suggests an important role in the UK's ecosystem." Jeroen van der Kooij, Principal scientist and Peltic Survey Lead, Cefas, said: "The unique data collected during our annual pelagic ecosystem survey of SW English waters is fundamental to this research." "Marine animal observers from MARINELife on board our research vessel recorded not only the arrival, but also a subsequent year-on-year increase in sightings of bluefin tuna in the area." "We will continue to collect this information, which, in combination with data on their prey fish and habitat collected during the same survey, will hopefully increase our knowledge of these exciting yet enigmatic animals."
|
Geography
| 2,021 |
April 16, 2021
|
https://www.sciencedaily.com/releases/2021/04/210416155049.htm
|
Tarantula's ubiquity traced back to the Cretaceous
|
Tarantulas are among the most notorious spiders, due in part to their size, vibrant colors and prevalence throughout the world. But one thing most people don't know is that tarantulas are homebodies. Females and their young rarely leave their burrows and only mature males will wander to seek out a mate. How then did such a sedentary spider come to inhabit six out of seven continents?
|
An international team of researchers, including Carnegie Mellon University's Saoirse Foley, set out on an investigation to find the answer to this question. They looked to the transcriptomes -- the full set of mRNA transcripts -- of many tarantulas and other spiders from different time periods. Their findings were published online. They used the transcriptomes to build a genetic tree of spiders and then time-calibrated their tree with fossil data. Tarantula fossils are extremely rare, but the software used in the study managed to estimate the ages of older tarantulas relative to the ages of fossils from other spiders. They found that tarantulas are ancient, first emerging in what is now the Americas about 120 million years ago during the Cretaceous period. At that time South America would have been attached to Africa, India and Australia as part of the Gondwana supercontinent. The spiders ultimately reached their present destinations due to continental drift, with a few interesting departures. For example, the nature of their entry into Asia suggests tarantulas may also be surprisingly proficient dispersers. The researchers were able to establish two separate lineages of tarantulas that diverged on the Indian subcontinent before it crashed into Asia, with one lineage being predominantly ground dwelling and the other predominantly arboreal. They found that these lineages colonized Asia about 20 million years apart. Surprisingly, the first group that reached Asia also managed to cross the Wallace Line, a boundary between Australia and the Asian islands where many species are found in abundance on one side and rarely or not at all on the other. "Previously, we did not consider tarantulas to be good dispersers. While continental drift certainly played its part in their history, the two Asian colonization events encourage us to reconsider this narrative. The microhabitat differences between those two lineages also suggest that tarantulas are experts at exploiting ecological niches, while simultaneously displaying signs of niche conservation," said Foley. Additional study authors include William H. Piel and Dong-Qiang Cheng of Yale-NUS College in Singapore and Henrik Krehenwinkel of Universität Trier in Germany.
|
Geography
| 2,021 |
April 12, 2021
|
https://www.sciencedaily.com/releases/2021/04/210412194215.htm
|
Spotting cows from space
|
Cows don't seem to have a whole lot going on most of the time. They're raised to spend their days grazing in the field, raised for the purpose of providing milk or meat, or producing more cows. So when students in UC Santa Barbara ecologist Doug McCauley's lab found themselves staring intently at satellite image upon image of bovine herds at Point Reyes National Seashore, it was funny, in a "Far Side" kind of way.
|
"There were about 10 undergrads involved in the project, spotting cows from space -- not your typical student research and always amusing to see in the lab," McCauley said. They became proficient at discerning the top view of a cow from the top view of rocks or the top view of other animals, he added."After about eight months, we ended up with more than 27,000 annotations of cattle across 31 images," said Lacey Hughey, an ecologist with the Smithsonian Conservation Biology Institute who was a Ph.D. student in the McCauley Lab at the time, and the leader of the cow census. "It took a long time."All of the rather comical cow counting had a serious purpose, though: to measure the interactions between wildlife and livestock where their ranges meet or overlap. Roughly a third of the United States' land cover is rangeland, and where these grazing areas abut wildland, concerns over predation, competition and disease transmission are bound to arise.Such is the case at Point Reyes National Seashore, a picturesque combination of coastal bluffs and pastureland about an hour's drive north of San Francisco. As part of a statewide species restoration plan, native tule elk were reintroduced to the park's designated wilderness area in the 1990s, but they didn't stay in their little corner of paradise for very long."Some of them actually ended up swimming across an estero and establishing this herd -- which is known as the Drake's Beach Herd -- near the pastoral zone of the park, which is leased to cattle ranchers," said Hughey, the lead author of a collaborative study with the University of Nevada, Reno, that appears in the journal "So we were wondering, how do elk and cattle co-exist in this landscape?" Hughey said. "The story between elk and cattle is actually pretty complex. We know from other studies that elk and cattle can be competitors, but they can also be facilitators. We also didn't know very much about which habitats elk preferred in this part of the park and how the presence of cattle might influence an elk's decision to spend its time in one place over another."The researchers set out to answer these questions with two large datasets generated by the park -- GPS monitoring data from collared elk, and field-based transect surveys of the elk. What was missing, however, was information on the cows."We knew quite a bit about where the elk were, but we didn't have any information about where the cows were, except that they were inside the fences," she said. Knowing the precise number and location of cows relative to the elk herd would be necessary to understand how both species interact in a pastoral setting."Because the elk data was collected in the past, we needed a way to obtain information on cattle populations from the same time period. The only place we could get that was from archived, high-resolution satellite imagery," Hughey said. Hence, the satellite cowspotting.Their conclusion? Elk have acclimated to cattle at Point Reyes by avoiding cow pastures in general and by choosing separate foraging sites on the occassions that they co-occur. 
Taken together, these findings suggest that elk select habitat in a manner "that reduce[s] the potential for grazing conflicts with cattle, even in cases where access to forage is limited."In addition to helping shed light on the ecological relationship between cows and tule elk at Point Reyes, satellite imaging can also define their areas of overlap -- an important consideration in the assessment of disease risk, the researchers said."There's a disease of concern that's been found in the elk herd and also in the cattle, called Johne's disease," Hughey said. The bacteria that cause it can persist in the environment for more than a year, she added, so even though cows and elk rarely share space at the same time, there is still a theoretical risk of transmission in this system.According to the researchers, the satellite imaging technique is also widely applicable to other areas on the globe where livestock and wildlife ranges overlap."The issue of livestock and wildlife being in conflict is a major challenge in a bunch of different contexts in the U.S. and beyond," McCauley said. "It has been surprisingly hard to figure out exactly how these wild animals share space with domestic animals."These new methods, he said, "will have a transformative impact on understanding how livestock use wildlands -- and how wildlife use grazing lands."Next stop: Kenya and Tanzania.Working with the National Geospatial Intelligence Agency, Microsoft AI for Good, University of Glasgow, and University of Twente -- and thanks in large part to the data generated by those cow-tracking UC Santa Barbara undergrads -- Hughey and colleagues are training an algorithm to detect and identify animals in the plains of East Africa, such as wildebeest and, of course, cows.Research in the paper was also contributed by Kevin T. Shoemaker and Kelley M. Stewart at the University of Nevada and J. Hall Cushman at the Smithsonian Institution
|
Geography
| 2,021 |
April 8, 2021
|
https://www.sciencedaily.com/releases/2021/04/210408152316.htm
|
Regional habitat differences identified for threatened piping plovers on Atlantic coast
|
Piping plovers, charismatic shorebirds that nest and feed on many Atlantic Coast beaches, rely on different kinds of coastal habitats in different regions along the Atlantic Coast, according to a new study by the U.S. Geological Survey and U.S. Fish and Wildlife Service.
|
The Atlantic Coast and Northern Great Plains populations of the piping plover were listed as federally threatened in 1985. The Atlantic coast population is managed in three regional recovery units, or regions: New England, which includes Massachusetts and Rhode Island; Mid-Atlantic, which includes New York and New Jersey; and Southern, which includes Delaware, Maryland, Virginia, and North Carolina.While the Atlantic populations are growing, piping plovers have not recovered as well in the Mid-Atlantic and Southern regions as they have in the New England region. The habitat differences uncovered by the study may be a factor in the unequal recovery."Knowing piping plovers are choosing different habitat for nesting up and down the Atlantic Coast is key information that resource managers can use to refine recovery plans and protect areas most needed for further recovery of the species," said Sara Zeigler, a USGS research geographer and the lead author of the study. "It will also help researchers predict how climate change impacts, such as increased storm frequency, erosion and sea-level rise, could affect habitat for this high-profile shorebird."The researchers found that piping plovers breeding in the New England region were most likely to nest on the portion of the beach between the high-water line and the base of the dunes. By contrast, plovers in the Southern region nested farther inland in areas where storm waves have washed over and flattened the dunes -- a process known as 'overwash.' In the Mid-Atlantic region plovers used both habitats but tended to select overwash areas more frequently.In general, overwash areas tend to be less common than backshore -- shoreline to dunes -- habitats. Nesting pairs that rely on overwash features, such as those in the Mid-Atlantic and Southern regions, have more limited habitat compared to birds in New England that have adapted to nesting in backshore environments.The authors suggest that the differences in nesting habitat selection may be related to the availability of quality food. Piping plover chicks, which cannot fly, must be able to access feeding areas on foot. In New England, piping plovers can find plenty of food along the ocean shoreline, so they may have more options for successful nesting. However, ocean shorelines along the Southern and Mid-Atlantic regions may not provide enough food, forcing adults and chicks to move towards bay shorelines on the interiors of barrier islands to feed. This would explain why so many nests in these regions occurred in overwash areas, which are scarcer than backshore areas but tend to provide a clear path to the bay shoreline.In all three regions, plovers most often chose to nest in areas with sand mixed with shells and with sparse plant life. These materials match the species' sandy, mottled coloring and help the birds blend into the environment, enhancing the camouflage that is their natural defense against predators. Piping plover adults may avoid dense vegetation because it may impede their ability to watch for foxes, raccoons, coyotes and other predators.The U.S. Atlantic Coast population of piping plovers increased from 476 breeding pairs when it was listed in 1985 to 1,818 pairs in 2019, according to the USFWS. 
The population increased by 830 breeding pairs in New England but only 349 and 163 pairs in the Mid-Atlantic and Southern regions respectively."This study will help us tailor coastal management recommendations in each region," said Anne Hecht, a USFWS endangered species biologist and a co-author of the paper. "We are grateful to the many partners who collected data for this study, which will help us be more effective in our efforts to recover the plover population.""This research will fuel further studies on piping plovers, their habitat-use and food resources," Zeigler added. "Refining the models used in this research will help researchers predict habitat availability with future changes in shorelines, sea level and beach profiles."
|
Geography
| 2,021 |
April 8, 2021
|
https://www.sciencedaily.com/releases/2021/04/210408112404.htm
|
Why lists of worldwide bird species disagree
|
How many species of birds are there in the world? It depends on whose count you go by. The number could be as low as 10,000 or as high as 18,000. It's tough to standardize lists of species because the concept of a "species" itself is a little bit fuzzy.
|
That matters because conserving biodiversity requires knowing what diversity exists in the first place. So biologists, led by University of Utah doctoral candidate Monte Neate-Clegg of the School of Biological Sciences, set out to compare four main lists of bird species worldwide to find out how the lists differ -- and why. They found that although the lists agree on most birds, disagreements in some regions of the world could mean that some species are missed by conservation ecologists. "Species are more than just a name," Neate-Clegg says. "They are functional units in complex ecosystems that need to be preserved. We need to recognize true diversity in order to conserve it." The results have now been published. The definition of a species isn't clear-cut. Some scientists define populations as different species if they're reproductively isolated from each other and unable to interbreed. Others use physical features to delineate species, while yet others use genetics. Using the genetic definition produces many more species, but regardless of the method, gray areas persist. "Species are fuzzy because speciation as a process is fuzzy," Neate-Clegg says. "It's a gradual process so it's very difficult to draw a line and say 'this is two species' vs. 'this is one species.'" Also, he says, physical features and genetic signatures don't always diverge on the same timescale. "For example," he says, "two bird populations may diverge in song and appearance before genetic divergence; conversely, identical populations on different islands may be separated genetically by millions of years." At this point in the story, it's time to introduce four lists, each of which purports to include all the bird species in the world. They are the checklists maintained by BirdLife International, the IOC, Clements (used by eBird) and Howard and Moore. "Being active field ornithologists who are always trying to ID bird species means that one is always faced with the issue of some species being on one list but not the other," says Çağan Şekercioğlu, associate professor in the School of Biological Sciences. "So our field experience very much primed us to think about this question and inspired us to write this paper." The lists have different strengths depending on their application. The BirdLife International list, for example, integrates with the IUCN Red List, which reports on species' conservation status. The IOC list is updated by experts twice a year, Şekercioğlu says. The list is open access with comparisons to other major lists, and changes are documented transparently. "But as a birdwatcher, I use eBird all the time, which uses the Clements checklist, and that dataset is very powerful in its own right," Neate-Clegg says. "So there is no single best option." Disagreements can arise even over common, familiar birds. In 2020, Neate-Clegg and his colleagues read a study that compared the raptor species on each list, finding that only 68% of species were consistent among all four lists. "We thought it would be interesting to investigate taxonomic agreement for all 11,000 bird species," Neate-Clegg says. "More importantly, we wanted to try and work out what species characteristics led to more or less taxonomic confusion." They began by collecting the most recent version of each list (the IOC checklist is updated biannually, the researchers write, and the Clements and BirdLife lists annually, while Howard and Moore has not been updated since 2014) and trimming them down to exclude subspecies and any extinct species. Using a few other data processing rules, they assigned a single name to every possible species across all four lists.
Then the comparisons began.The researchers found that the four lists agreed on the vast majority of bird species -- 89.5%. For the remaining 10.5%, then, they started to look for patterns that might explain the disagreement. Some of it was likely geographical. Birds from the well-studied Northern Hemisphere were more likely to find agreement than birds from the relatively understudied Southeast Asia and the Southern Ocean.Some of it was habitat-based. Agreement was higher for large, migratory species in relatively open habitats."I think the most surprising result was that agreement was not lower for highly forest-dependent species," Neate-Clegg says. "We expected these denizens of the rainforest floor to be the most cryptic and hard to study, with more uncertainty on their taxonomic relationships. Yet we found it was actually species of intermediate forest dependency that had lower taxonomic agreement. We believe that these species move about just enough to diverge, but not so much that their gene pools are constantly mixing."And part of the issue with species classification on isolated islands, such as those in Southeast Asia and the Southern Ocean, was a phenomenon called "cryptic diversification." Although islands can foster species diversification because of their isolation, sometimes two populations on different islands can appear very similar, even though their genes suggest that they've been isolated from each other for millions of years. So, depending on the definition, two populations could count as two species or as only one."In addition," Neate-Clegg says, "it's very hard to test the traditional biological species concept on island fauna because we cannot know whether two populations can interbreed to produce fertile young if they are geographically isolated."So what if some people disagree on species designations? Conservation actions are usually on the species level, Neate-Clegg says."If a population on one island goes extinct, people may care less if it's 'just a subspecies,'" he says. "And yet that island is potentially losing a functionally unique population. If it was recognized as a full species it might not have been lost."Neate-Clegg hopes the study points ornithologists towards the groups of species that merit additional attention."We also want conservation biologists to recognize that cryptic diversity may be overlooked," he adds, "and that we should consider units of conservation above and below the species level."
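The bookkeeping behind the comparison -- harmonize the names, then ask what fraction of species every checklist recognizes -- can be sketched in a few lines of Python. This is only an illustration, not the authors' workflow, and the tiny placeholder sets below stand in for the real IOC, Clements, BirdLife and Howard and Moore lists after the name-matching and subspecies filtering described above.

def checklist_agreement(checklists):
    # checklists: dict mapping checklist name -> set of harmonized species names.
    # Returns the fraction of all names recognized by every checklist, plus the disputed names.
    union = set().union(*checklists.values())
    shared = set.intersection(*checklists.values())
    return len(shared) / len(union), union - shared

lists = {
    "IOC": {"Corvus corax", "Pica pica", "Aquila chrysaetos"},
    "Clements": {"Corvus corax", "Pica pica", "Aquila chrysaetos"},
    "BirdLife": {"Corvus corax", "Pica pica"},
    "HowardMoore": {"Corvus corax", "Pica pica", "Aquila chrysaetos"},
}
agreement, disputed = checklist_agreement(lists)
print(f"{agreement:.1%} of names treated identically; disputed: {sorted(disputed)}")

The published 89.5% figure comes from a more careful version of the same idea, applied to roughly 11,000 names rather than three.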
|
Geography
| 2,021 |
April 6, 2021
|
https://www.sciencedaily.com/releases/2021/04/210406131932.htm
|
The sea urchin microbiome
|
Sea urchins receive a lot of attention in California. Red urchins support a thriving fishery, while their purple cousins are often blamed for mowing down kelp forests to create urchin barrens. Yet for all the notice we pay them, we know surprisingly little about the microbiomes that support these spiny species.
|
Researchers at UC Santa Barbara led by geneticist Paige Miller sought to uncover the diversity within the guts of these important kelp forest inhabitants. Their results reveal significant differences between the microbiota of the two species, as well as between individuals living in different habitats. The study has now been published. California hosts two common species of sea urchin: red and purple. They generally consume algae, but are actually fairly opportunistic omnivores that will eat decaying plant and animal matter, microbial mats and even other urchins if need be. The microbiome in their guts might help urchins handle such a varied diet, but it hasn't been examined until now. "It's very important to understand what animals eat and why," Miller said, "and we think the microbiome could play an important role in why species thrive despite all the variation in food availability that's out there in the ocean." However, scientists are only beginning to investigate the microbiota of ocean animals, let alone the function these microorganisms serve in their hosts. To begin their investigation, Miller and her team collected red and purple urchins from three habitats in the Santa Barbara Channel. Some came from lush kelp forests; others from urchin barrens; and a few came from one of the channel's many hydrocarbon seeps, where they scratch a living feeding on mats of microbes that thrive off of petroleum compounds. Key to this study's success was the researchers' stringent protocol. They used meticulous techniques to remove each specimen's stomach and guts in order to avoid contamination from microbes in the lab, elsewhere on the animal, and even in the sea water. The researchers were then able to sequence a particular region of the genetic code that scientists commonly use to identify microbes. This enabled them to compare what they found with several comprehensive taxonomic databases that scientists use for genetic identification of microbial life. The team found significant differences between the bacterial communities living within the two urchin species. However, they saw just as much variation between the microbiomes of individuals from the same species living in different habitats. "Our study is the first to examine the microbiome in these extremely common, and really ecologically important, species," said coauthor Robert (Bob) Miller, a researcher at the university's Marine Science Institute. "We're just scratching the surface here, but our study shows how complex these communities are." One group of bacteria that was prevalent in both species is the same group that helps break down wood in the guts of termites, and could help the urchins digest algae. Previous research indicates that these microbes could potentially be autotrophic. "Some members of this group can create their own food, like photosynthetic plants, for example," explained Paige Miller, "only they don't use sunlight for energy, they use hydrogen." Although the authors caution against jumping to conclusions, ascertaining whether urchins can produce their own food would be a huge revelation. "We know that the urchins can survive a long time without food," Bob Miller said. "And they can survive almost indefinitely in these barren areas that have very low food supplies. So, this could really help them out, if they have their own little farmed food supply in their gut." The findings also underscore the risk of conflating these similar species.
People often treat species like the red and purple sea urchins as equivalent when making decisions about resource use and management, Paige Miller explained. Even ecologists can fall into this line of reasoning. "But it's very important to look at how these things actually function," she noted. "And as we saw, the red and purple sea urchins are not necessarily functioning the same way, or eating the same things, if their microbiome is an indicator."Understanding the makeup and function of microbiota could help researchers recognize the subtle differences between superficially similar species. "More recently, people have begun considering the microbiome as another trait that these species have," Bob Miller said. "We wanted to find out whether this is a hidden source of variation that's separating these two species."This study provides a launch point for additional research. In the future, the Millers and their coauthors plan to further investigate the function of the different microbes in urchin guts. For now, there's still more work to do simply identifying what species reside in the prickly critters."This is a new subfield of ecology," said Paige Miller, "trying to understand what these microbiomes do and the role they play in the living organism out in the wild."
|
Geography
| 2,021 |
April 5, 2021
|
https://www.sciencedaily.com/releases/2021/04/210405175607.htm
|
How climate change affects Colombia's coffee production
|
If your day started with a cup of coffee, there's a good chance your morning brew came from Colombia. Home to some of the finest Arabica beans, the country is the world's third largest coffee producer. Climate change poses new challenges to coffee production in Colombia, as it does to agricultural production anywhere in the world, but a new University of Illinois study shows effects vary widely depending on where the coffee beans grow.
|
"Colombia is a large country with a very distinct geography. The Andes Mountains cross the country from its southwest to northeast corner. Colombian coffee is currently growing in areas with different altitude levels, and climate impacts will likely be very different for low altitude and high altitude regions," says Sandy Dall'Erba, professor in the Department of Agricultural and Consumer Economics (ACE) and director of the Regional Economics Applications Laboratory (REAL) at U of I. Dall'Erba is co-author on the study, published in Other studies on the future of coffee production have either considered the country as a whole, or focused on a few areas within the country.Dall'Erba and lead author Federico Ceballos-Sierra, who recently obtained a Ph.D. from ACE, look at climate and coffee production for the entire country, broken down into 521 municipalities. This high level of detailed information allows them to identify significant regional variations."Colombia is not going to experience reduced productivity overall. But when we look into the impact across municipalities, we see many differences that get lost in the national average. That has important implications for coffee growers who live in one municipality versus another," Ceballos-Sierra says."Low-altitude municipalities will be negatively affected by climate change, and thousands of growers and their families in these areas will see their livelihood jeopardized because productivity is likely to fall below their breakeven point by mid-century," he states.The researchers analyze climate data from 2007 to 2013 across Colombia's 521 coffee-producing municipalities and evaluate how temperature and precipitation affect coffee yield. Subsequently, they model anticipated weather conditions from 2042 to 2061 and future coffee production for each municipal area.At the national level, they estimate productivity will increase 7.6% by 2061. But this forecast covers a wide margin of spatial differences, ranging from a 16% increase in high altitude regions (1,500 meters or 5,000 feet above sea level) to a 8.1% decrease in low altitude regions. Rising temperatures will benefit areas that are now marginal for coffee production, while areas that are currently prime coffee growing locations will be too hot and dry in the future.Ceballos-Sierra grew up on a coffee farm in the Tolima district of Colombia, and he has seen firsthand how changing climate conditions affect production."My family's farm is about 1,900 meters above sea level. Twenty years ago, people would consider that an upper marginal coffee growing area. But now we're getting significant improvements in yield," he says.Meanwhile, coffee growers in lowland areas see decreasing yields, while pests that prey on coffee plants, such as the coffee bean borer, are becoming more aggressive and prevalent.The research findings have important implications both for coffee growers and policymakers."In the future it will be more beneficial to grow coffee higher up in the mountains. So for those who can afford it, buying land in those areas would be a good investment," Dall'Erba states. "The government might want to consider building infrastructures such as roads, water systems, electricity, and communication towers that would allow farmers in more elevated places to easily access nearby hubs and cities where they can sell their crops. 
We would expect more settlements and an increasing need for public services in those locations."However, because relocation is expensive, it will not necessarily be an option for most of Colombia's 550,000 smallholder coffee growers, who will need to find other ways to adapt. Farmers might be able to implement new strategies, such as more frequent irrigation, increased use of forest shade, or shifting to different coffee varieties or other crops."Our research presents what we anticipate will happen 20 to 40 years from now, given current conditions and practices. Future studies can look into different adaptation strategies and their costs, and evaluate which options are best. Beyond the 40-year horizon we focus on, the prospects might be grimmer without adaptation. Production cannot keep moving to higher levels. Indeed, no mountain top is above 5,800 meters (18,000 feet) in Colombia," Dall'Erba says.Colombia's policymakers can also focus on supporting farmers who no longer will be able to make a living from growing coffee, so they can transition to something else, Ceballos-Sierra states."Looking into these regional estimates allows us to make predictions and provide policy suggestions. Specific place-tailored strategies should guide how coffee production adapts to future climate conditions in Colombia," he concludes.The researchers say their findings may also apply to other coffee growing locations, including Hawaii, California, and Puerto Rico in the United States.
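In outline, the analysis fits observed yields to weather and then re-evaluates the fitted relationship under projected mid-century conditions. The sketch below shows that two-step logic with a deliberately simplified quadratic-in-temperature response fitted by least squares; it is not the study's econometric specification, and the short arrays are placeholders for the real municipality-level panel.

import numpy as np

# Placeholder observations, one row per municipality-year:
# mean temperature (deg C), precipitation (mm), coffee yield (kg/ha).
temp = np.array([19.5, 21.0, 22.8, 24.1, 25.3])
precip = np.array([1800.0, 1650.0, 1500.0, 1400.0, 1300.0])
yield_obs = np.array([1350.0, 1420.0, 1380.0, 1200.0, 980.0])

# Fit yield ~ b0 + b1*T + b2*T^2 + b3*P by ordinary least squares.
X = np.column_stack([np.ones_like(temp), temp, temp ** 2, precip])
beta, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)

def predict(t, p):
    return beta[0] + beta[1] * t + beta[2] * t ** 2 + beta[3] * p

# Re-evaluate the fitted surface under a hypothetical mid-century scenario
# (here 2 deg C warmer and 10% drier) and express the change in percent.
change = (predict(temp + 2.0, precip * 0.9) - predict(temp, precip)) / predict(temp, precip)
print(np.round(100 * change, 1))

Because the response is nonlinear in temperature, the same warming can raise modeled yields at cooler, higher sites while lowering them at hotter, lower ones, which is the qualitative pattern the study reports.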
|
Geography
| 2,021 |
April 5, 2021
|
https://www.sciencedaily.com/releases/2021/04/210405113612.htm
|
Increased winter snowmelt threatens western US water resources
|
More snow is melting during winter across the West, a concerning trend that could impact everything from ski conditions to fire danger and agriculture, according to a new University of Colorado Boulder analysis of 40 years of data.
|
Researchers found that since the late 1970s, winter's boundary with spring has been slowly disappearing, with one-third of 1,065 snow measurement stations from the Mexican border to the Alaskan Arctic recording increasing winter snowmelt. While stations with significant melt increases have recorded them mostly in November and March, the researchers found that melt is increasing in all cold season months -- from October to March. Their new findings were published today. "Particularly in cold mountain environments, snow accumulates over the winter -- it grows and grows -- and gets to a point where it reaches a maximum depth, before melt starts in the spring," said Keith Musselman, lead author on the study and research associate at the Institute of Arctic and Alpine Research (INSTAAR) at the University of Colorado Boulder. But the new research found that melt before April 1 has increased at almost half of more than 600 stations in western North America, by an average of 3.5% per decade. "Historically, water managers use the date of April 1 to distinguish winter and spring, but this distinction is becoming increasingly blurred as melt increases during the winter," said Noah Molotch, co-author on the study, associate professor of geography and fellow at INSTAAR. Snow is the primary source of water and streamflow in western North America and provides water to 1 billion people globally. In the West, snowy mountains act like water towers, reserving water up high until it melts, making it available to lower elevations that need it during the summer, like a natural drip irrigation system. "That slow trickle of meltwater that reliably occurs over the dry season is something that we have built our entire water infrastructure on in the West," said Musselman. "We rely very heavily on that water that comes down our rivers and streams in the warm season of July and August." More winter snowmelt is effectively shifting the timing of water entering the system, turning that natural drip irrigation system on more frequently in the winter, shifting it away from the summer, he said. This is a big concern for water resource management and drought prediction in the West, which depends heavily on late winter snowpack levels in March and April. This shift in water delivery timing could also affect wildfire seasons and agricultural irrigation needs. Wetter soils in the winter also have ecological implications. For one, the wet soils have no more capacity to soak up additional water during spring melt or rainstorms, which can increase flash flooding. Wetter winter soils also keep microbes awake and unfrozen during a time they might otherwise lie dormant. This affects the timing of nutrient availability, water quality and can increase carbon dioxide emissions. Across the western U.S., hundreds of thin, fluid-filled metal pillows are carefully tucked away on the ground and out of sight from outdoor enthusiasts.
These sensors are part of an extensive network of long-running manual and automated snow observation stations, which you may have even used data from when looking up how much snow is on your favorite snowshoeing or Nordic skiing trail.This new study is the first to compile data from all 1,065 automated stations in western North America, providing valuable statistical insight into how mountain snow is changing.And by using automated, continuously recording snowpack stations instead of manual, monthly observations, the new research shows that winter melt trends are very widespread -- at three-times the number of stations with snowpack declines, according to Musselman.Snowpack is typically measured by calculating how much water will be produced when it melts, known as snow-water equivalent (SWE), which is affected by how much snow falls from the sky in a given season. But because winter snowpack melt is influenced more by temperature than by precipitation, it is a better indicator of climate warming over time."These automated stations can be really helpful to understand potential climate change impacts on our resources," said Musselman. "Their observations are consistent with what our climate models are suggesting will continue to happen."Other authors on this publication include Nans Addor at the University of Exeter and Julie Vano at the Aspen Global Change Institute.
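One simple way to turn a station's daily snow-water-equivalent record into the "melt before April 1" quantity described above is to sum every day-over-day decrease in SWE through March 31 and compare it with the season's peak SWE. The Python sketch below, using a small hypothetical record rather than real SNOTEL-style data, shows the bookkeeping only; the published trend analysis is considerably more involved.

from datetime import date

def winter_melt_fraction(swe_by_day, april1):
    # swe_by_day: dict mapping datetime.date -> snow-water equivalent (mm).
    # Returns pre-April-1 melt as a fraction of the season's peak SWE.
    days = sorted(swe_by_day)
    peak = max(swe_by_day.values())
    melt = 0.0
    for prev, cur in zip(days, days[1:]):
        if cur < april1:
            drop = swe_by_day[prev] - swe_by_day[cur]
            if drop > 0:  # count only days on which the snowpack lost water
                melt += drop
    return melt / peak if peak > 0 else 0.0

# Hypothetical, sparsely sampled record for one station and one season.
record = {
    date(2020, 11, 1): 40.0, date(2020, 11, 15): 35.0,   # small November melt event
    date(2020, 12, 15): 90.0, date(2021, 2, 1): 150.0,
    date(2021, 3, 1): 145.0, date(2021, 3, 31): 160.0,   # March melt, then fresh snow
}
print(winter_melt_fraction(record, april1=date(2021, 4, 1)))

Computing this fraction for every station and season, and then tracking how it changes over the decades, is the kind of calculation behind the winter melt trends reported in the study.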
|
Geography
| 2,021 |
April 1, 2021
|
https://www.sciencedaily.com/releases/2021/04/210401211633.htm
|
Evidence of Antarctic glacier's tipping point confirmed
|
Researchers have confirmed for the first time that Pine Island Glacier in West Antarctica could cross tipping points, leading to a rapid and irreversible retreat which would have significant consequences for global sea level.
|
Pine Island Glacier is a region of fast-flowing ice draining an area of West Antarctica approximately two thirds the size of the UK. The glacier is a particular cause for concern as it is losing more ice than any other glacier in Antarctica. Currently, Pine Island Glacier, together with its neighbouring Thwaites Glacier, is responsible for about 10% of the ongoing increase in global sea level. Scientists have argued for some time that this region of Antarctica could reach a tipping point and undergo an irreversible retreat from which it could not recover. Such a retreat, once started, could lead to the collapse of the entire West Antarctic Ice Sheet, which contains enough ice to raise global sea level by over three metres. While the general possibility of such a tipping point within ice sheets has been raised before, showing that Pine Island Glacier has the potential to enter unstable retreat is a very different question. Now, researchers from Northumbria University have shown, for the first time, that this is indeed the case. Their findings are published in a leading journal. Using a state-of-the-art ice flow model developed by Northumbria's glaciology research group, the team have developed methods that allow tipping points within ice sheets to be identified. For Pine Island Glacier, their study shows that the glacier has at least three distinct tipping points. The third and final event, triggered by ocean temperatures increasing by 1.2°C, leads to an irreversible retreat of the entire glacier. The researchers say that long-term warming and shoaling trends in Circumpolar Deep Water, in combination with changing wind patterns in the Amundsen Sea, could expose Pine Island Glacier's ice shelf to warmer waters for longer periods of time, making temperature changes of this magnitude increasingly likely. The lead author of the study, Dr Sebastian Rosier, is a Vice-Chancellor's Research Fellow in Northumbria's Department of Geography and Environmental Sciences. He specialises in modelling the processes controlling ice flow in Antarctica with the goal of understanding how the continent will contribute to future sea level rise. Dr Rosier is a member of the University's glaciology research group, led by Professor Hilmar Gudmundsson, which is currently working on a major £4 million study to investigate whether climate change will drive the Antarctic Ice Sheet towards a tipping point. Dr Rosier explained: "The potential for this region to cross a tipping point has been raised in the past, but our study is the first to confirm that Pine Island Glacier does indeed cross these critical thresholds." "Many different computer simulations around the world are attempting to quantify how a changing climate could affect the West Antarctic Ice Sheet but identifying whether a period of retreat in these models is a tipping point is challenging." "However, it is a crucial question and the methodology we use in this new study makes it much easier to identify potential future tipping points." Hilmar Gudmundsson, Professor of Glaciology and Extreme Environments, worked with Dr Rosier on the study. He added: "The possibility of Pine Island Glacier entering an unstable retreat has been raised before but this is the first time that this possibility is rigorously established and quantified." "This is a major forward step in our understanding of the dynamics of this area and I'm thrilled that we have now been able to finally provide firm answers to this important question." "But the findings of this study also concern me.
Should the glacier enter unstable irreversible retreat, the impact on sea level could be measured in metres, and as this study shows, once the retreat starts it might be impossible to halt it."
|
Geography
| 2,021 |
March 31, 2021
|
https://www.sciencedaily.com/releases/2021/03/210331143036.htm
|
Ancient meteoritic impact over Antarctica 430,000 years ago
|
A research team of international space scientists, led by Dr Matthias van Ginneken from the University of Kent's School of Physical Sciences, has found new evidence of a low-altitude meteoritic touchdown event reaching the Antarctic ice sheet 430,000 years ago.
|
Extra-terrestrial particles (condensation spherules) recovered on the summit of Walnumfjellet (WN) within the Sør Rondane Mountains, Queen Maud Land, East Antarctica, indicate an unusual touchdown event in which a jet of melted and vaporised meteoritic material, resulting from the atmospheric entry of an asteroid at least 100 m in size, reached the surface at high velocity.

This type of explosion caused by a single-asteroid impact is described as intermediate, as it is larger than an airburst but smaller than an impact cratering event.

The chondritic bulk major and trace element chemistry and the high nickel content of the debris demonstrate the extra-terrestrial nature of the recovered particles. Their unique oxygen isotopic signatures indicate that they interacted with oxygen derived from the Antarctic ice sheet during their formation in the impact plume.

The findings indicate an impact much more hazardous than the Tunguska and Chelyabinsk events over Russia in 1908 and 2013, respectively.

The study highlights the importance of reassessing the threat of medium-sized asteroids, as it is likely that similar touchdown events will produce similar particles. Such an event would be entirely destructive over a large area, corresponding to the area of interaction between the hot jet and the ground.

Dr van Ginneken said: 'To complete Earth's asteroid impact record, we recommend that future studies should focus on the identification of similar events on different targets, such as rocky or shallow oceanic basements, as the Antarctic ice sheet only covers 9% of Earth's land surface. Our research may also prove useful for the identification of these events in deep sea sediment cores and, if plume expansion reaches landmasses, the sedimentary record.

'While touchdown events may not threaten human activity if occurring over Antarctica, if it was to take place above a densely populated area, it would result in millions of casualties and severe damages over distances of up to hundreds of kilometres.'
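To see why an impactor of at least 100 m is far more hazardous than Tunguska, a rough kinetic-energy estimate helps. The density and entry velocity below are generic values assumed for a chondritic asteroid, not figures taken from the study:

$$
E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2}\,\rho\,\tfrac{4}{3}\pi r^{3}\,v^{2}
\approx \tfrac{1}{2}\,(3000\ \mathrm{kg\,m^{-3}})\,\tfrac{4}{3}\pi\,(50\ \mathrm{m})^{3}\,(2\times 10^{4}\ \mathrm{m\,s^{-1}})^{2}
\approx 3\times 10^{17}\ \mathrm{J},
$$

or roughly 75 megatons of TNT equivalent -- several times the commonly cited 10-15 megaton estimates for the Tunguska airburst.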
|
Geography
| 2,021 |
March 31, 2021
|
https://www.sciencedaily.com/releases/2021/03/210331143028.htm
|
Deep diamonds contain evidence of deep-Earth recycling processes
|
Diamonds that formed deep in the Earth's mantle contain evidence of chemical reactions that occurred on the seafloor. Probing these gems can help geoscientists understand how material is exchanged between the planet's surface and its depths.
|
"Nearly all tectonic plates that make up the seafloor eventually bend and slide down into the mantle -- a process called subduction, which has the potential to recycle surface materials, such as water, into the Earth," explained Carnegie's Peng Ni, who co-led the research effort with Evan Smith of the Gemological Institute of America.

Serpentinite residing inside subducting plates may be one of the most significant, yet poorly known, geochemical pathways by which surface materials are captured and conveyed into the Earth's depths. The presence of deeply subducted serpentinites was previously suspected -- due to Carnegie and GIA research about the origin of blue diamonds and to the chemical composition of erupted mantle material that makes up mid-ocean ridges, seamounts, and ocean islands. But evidence demonstrating this pathway had not been fully confirmed until now.

The research team -- which also included Carnegie's Steven Shirey and Anat Shahar, as well as GIA's Wuyi Wang and Stephen Richardson of the University of Cape Town -- found physical evidence to confirm this suspicion by studying a type of large diamond that originates deep inside the planet.

"Some of the most famous diamonds in the world fall into this special category of relatively large and pure gem diamonds, such as the world-famous Cullinan," Smith said. "They form between 360 and 750 kilometers down, at least as deep as the transition zone between the upper and lower mantle.

"Sometimes they contain inclusions of tiny minerals trapped during diamond crystallization that provide a glimpse into what is happening at these extreme depths."

"Studying small samples of minerals formed during deep diamond crystallization can teach us so much about the composition and dynamics of the mantle, because diamond protects the minerals from additional changes on their path to the surface," Shirey explained.

In this instance, the researchers were able to analyze the isotopic composition of iron in the metallic inclusions. Like other elements, iron can have different numbers of neutrons in its nucleus, which gives rise to iron atoms of slightly different mass, or different "isotopes" of iron. Measuring the ratios of "heavy" and "light" iron isotopes gives scientists a sort of fingerprint of the iron.

The diamond inclusions studied by the team had a higher ratio of heavy to light iron isotopes than typically found in most mantle minerals. This indicates that they probably didn't originate from deep-Earth geochemical processes. Instead, it points to magnetite and other iron-rich minerals formed when oceanic plate peridotite transformed to serpentinite on the seafloor. This hydrated rock was eventually subducted hundreds of kilometers down into the mantle transition zone, where these particular diamonds crystallized.

"Our findings confirm a long-suspected pathway for deep-Earth recycling, allowing us to trace how minerals from the surface are drawn down into the mantle and create variability in its composition," Shahar concluded.
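The "ratio of heavy to light iron isotopes" described here is conventionally reported in delta notation relative to a reference material. The formula below is the standard convention for iron isotope geochemistry (IRMM-014 is the usual reference standard, though the article does not name the one used):

$$
\delta^{56}\mathrm{Fe} = \left( \frac{\left(^{56}\mathrm{Fe}/^{54}\mathrm{Fe}\right)_{\text{sample}}}{\left(^{56}\mathrm{Fe}/^{54}\mathrm{Fe}\right)_{\text{standard}}} - 1 \right) \times 1000 \quad \text{(in per mil, ‰)}
$$

A positive δ⁵⁶Fe means the sample is enriched in the heavy isotope relative to the standard, which is the sense in which the inclusions are "heavier" than typical mantle minerals.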
|
Geography
| 2,021 |
March 30, 2021
|
https://www.sciencedaily.com/releases/2021/03/210330171018.htm
|
Early Earth's hot mantle may have led to Archean 'water world'
|
A vast global ocean may have covered early Earth during the early Archean eon, 4 to 3.2 billion years ago, a side effect of having a hotter mantle than today, according to new research.
|
The new findings challenge earlier assumptions that the size of the Earth's global ocean has remained constant over time and offer clues to how its size may have changed throughout geologic time, according to the study's authors.

Most of Earth's surface water exists in the oceans. But there is a second reservoir of water deep in Earth's interior, in the form of hydrogen and oxygen attached to minerals in the mantle.

The findings suggest that, since early Earth was hotter than it is today, its mantle may have contained less water because mantle minerals hold onto less water at higher temperatures. Assuming that the mantle currently has more than 0.3-0.8 times the mass of the ocean, a larger surface ocean might have existed during the early Archean. At that time, the mantle was about 1,900-3,000 degrees Kelvin (2,960-4,940 degrees Fahrenheit), compared to 1,600-2,600 degrees Kelvin (2,420-4,220 degrees Fahrenheit) today.

If early Earth had a larger ocean than today, that could have altered the composition of the early atmosphere and reduced how much sunlight was reflected back into space, according to the authors. These factors would have affected the climate and the habitat that supported the first life on Earth.

"It's sometimes easy to forget that the deep interior of a planet is actually important to what's going on with the surface," said Rebecca Fischer, a mineral physicist at Harvard University and co-author of the new study. "If the mantle can only hold so much water, it's got to go somewhere else, so what's going on thousands of kilometers below the surface can have pretty big implications."

Earth's sea level has remained fairly constant during the last 541 million years. Sea levels from earlier in Earth's history are more challenging to estimate, however, because little evidence has survived from the Archean eon. Over geologic time, water can move from the surface ocean to the interior through plate tectonics, but the size of that water flux is not well understood. Because of this lack of information, scientists had assumed the global ocean size remained constant over geologic time.

In the new study, co-author Junjie Dong, a mineral physicist at Harvard University, developed a model to estimate the total amount of water that Earth's mantle could potentially store based on its temperature. He incorporated existing data on how much water different mantle minerals can store and considered which of these 23 minerals would have occurred at different depths and times in Earth's past. He and his co-authors then related those storage estimates to the volume of the surface ocean as Earth cooled.

Jun Korenaga, a geophysicist at Yale University who was not involved in the research, said this is the first time scientists have linked mineral physics data on water storage in the mantle to ocean size. "This connection has never been raised in the past," he said.

Dong and Fischer point out that their estimates of the mantle's water storage capacity carry a lot of uncertainty.
For example, scientists don't fully understand how much water can be stored in bridgmanite, the main mineral in the mantle.

The new findings shed light on how the global ocean may have changed over time and can help scientists better understand the water cycles on Earth and other planets, which could be valuable for understanding where life can evolve.

"It is definitely useful to know something quantitative about the evolution of the global water budget," said Suzan van der Lee, a seismologist at Northwestern University who did not participate in the study. "I think this is important for nitty-gritty seismologists like myself, who do imaging of current mantle structure and estimate its water content, but it's also important for people hunting for water-bearing exoplanets and asking about the origins of where our water came from."

Dong and Fischer are now using the same approach to calculate how much water may be held inside Mars.

"Today, Mars looks very cold and dry," Dong said. "But a lot of geochemical and geomorphological evidence suggests that early Mars might have contained some water on the surface -- and even a small ocean -- so there's a lot of interest in understanding the water cycle on Mars."
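The core bookkeeping behind this kind of estimate is a simple conservation argument: water that a hotter mantle cannot hold must sit at the surface, so a lower storage capacity implies a larger ocean. The sketch below illustrates that logic only; the capacity values, temperatures, and linear scaling are invented placeholders, not the study's mineral-physics data.

```python
# Toy water-budget bookkeeping: total water = mantle water + surface ocean.
# All numbers below are illustrative assumptions, not values from the study.

OCEAN_MASS = 1.4e21                # kg, mass of the modern surface ocean
TOTAL_WATER = 2.0 * OCEAN_MASS     # assumed fixed whole-Earth water inventory

def mantle_capacity(temperature_k):
    """Assumed mantle water storage capacity (kg) as a function of a
    representative mantle temperature: hotter minerals hold less water."""
    t_modern, t_archean = 2100.0, 2500.0   # illustrative temperatures (K)
    cap_modern = 1.0 * OCEAN_MASS          # assumed modern capacity
    cap_archean = 0.4 * OCEAN_MASS         # assumed hot-mantle capacity
    # Linear interpolation between the two assumed anchor points.
    frac = (temperature_k - t_modern) / (t_archean - t_modern)
    return cap_modern + frac * (cap_archean - cap_modern)

def surface_ocean(temperature_k):
    """Water the mantle cannot store ends up in the surface ocean
    (assuming the mantle stays filled to its capacity)."""
    return TOTAL_WATER - min(mantle_capacity(temperature_k), TOTAL_WATER)

for label, temp in [("modern mantle", 2100.0), ("early Archean mantle", 2500.0)]:
    ratio = surface_ocean(temp) / OCEAN_MASS
    print(f"{label}: surface ocean ≈ {ratio:.2f} × modern ocean")
```

With these placeholder numbers, the hotter Archean mantle leaves roughly 1.6 modern oceans' worth of water at the surface, which is the qualitative behaviour the study describes.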
|
Geography
| 2,021 |
March 30, 2021
|
https://www.sciencedaily.com/releases/2021/03/210330092524.htm
|
Factors that may predict next pandemic
|
Humans are creating or exacerbating the environmental conditions that could lead to further pandemics, new University of Sydney research finds.
|
Modelling from the Sydney School of Veterinary Science suggests pressure on ecosystems, climate change and economic development are key factors associated with the diversification of pathogens (disease-causing agents, like viruses and bacteria). This has the potential to lead to disease outbreaks.

The research, by Dr Balbir B Singh, Professor Michael Ward, and Associate Professor Navneet Dhand, is published in an international journal. They found a greater diversity of zoonotic diseases (diseases transmitted between animals and humans) in higher-income countries with larger land areas, denser human populations, and greater forest coverage.

The study also confirms that increasing population growth and density are major drivers in the emergence of zoonotic diseases. The global human population has increased from about 1.6 billion in 1900 to about 7.8 billion today, putting pressure on ecosystems.

Associate Professor Dhand said: "As the human population increases, so does the demand for housing. To meet this demand, humans are encroaching on wild habitats. This increases interactions between wildlife, domestic animals and human beings, which increases the potential for bugs to jump from animals to humans."

"To date, such disease models have been limited, and we continue to be frustrated in understanding why diseases continue to emerge," said Professor Ward, an infectious diseases expert.

"This information can help inform disease mitigation and may prevent the next COVID-19."

Other zoonotic diseases that have recently devastated human populations include SARS, avian (H5N1) and swine (H1N1) flu, Ebola and Nipah -- a bat-borne virus.

The researchers discovered country-level factors predicting three categories of disease: zoonotic, emerging (newly discovered diseases, or those diseases that have increased in occurrence or occurred in new locations), and human.

"Countries within a longitude of -50 to -100, like Brazil, developed countries like the United States, and dense countries such as India were predicted to have a greater diversity of emerging diseases," Professor Ward said.

The researchers also noted that weather variables, such as temperature and rainfall, could influence the diversity of human diseases. At warmer temperatures, there tend to be more emerging pathogens.

The analyses demonstrate that weather variables (temperature and rainfall) have the potential to influence pathogen diversity. These factors combined confirm that human development -- including human-influenced climate change -- not only damages our environment but is responsible for the emergence of infectious diseases, such as COVID-19.

"Our analysis suggests sustainable development is not only critical to maintaining ecosystems and slowing climate change; it can inform disease control, mitigation, or prevention," Professor Ward said.

"Due to our use of national-level data, all countries could use these models to inform their public health policies and planning for future potential pandemics."

The authors acknowledge the data relied on for this research is incomplete. Reasons include the underreporting of some previously known pathogens and the existence of as-yet undiscovered pathogens, particularly in less developed countries. For some of the predictor variables, the latest available data contained missing values because recent figures had not been updated.
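The country-level modelling described here amounts to regressing a count of disease types on national predictors such as population density, land area, forest cover and income. A minimal sketch of that style of analysis is shown below using a Poisson regression; the synthetic data, variable names and coefficients are illustrative assumptions, not the study's dataset or its exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic country-level data, for illustration only.
rng = np.random.default_rng(0)
n = 120
countries = pd.DataFrame({
    "pop_density": rng.lognormal(4, 1, n),     # people per km^2
    "land_area": rng.lognormal(12, 1.5, n),    # km^2
    "forest_cover": rng.uniform(0, 0.8, n),    # fraction of land area
    "gdp_per_capita": rng.lognormal(9, 1, n),  # USD
})

# Fake response: number of zoonotic disease types recorded per country,
# generated so that denser, larger, more forested countries have more.
linpred = (0.4 * np.log(countries["pop_density"])
           + 0.2 * np.log(countries["land_area"])
           + 1.0 * countries["forest_cover"]) - 3.0
countries["n_zoonoses"] = rng.poisson(np.exp(linpred))

# Poisson regression of disease counts on country-level predictors.
X = sm.add_constant(np.column_stack([
    np.log(countries["pop_density"]),
    np.log(countries["land_area"]),
    countries["forest_cover"],
    np.log(countries["gdp_per_capita"]),
]))
model = sm.GLM(countries["n_zoonoses"], X, family=sm.families.Poisson())
result = model.fit()
print(result.summary())
```

In this framing, a significantly positive coefficient on a predictor (for example, forest cover) corresponds to the kind of association the researchers report between national characteristics and pathogen diversity.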
|
Geography
| 2,021 |
March 30, 2021
|
https://www.sciencedaily.com/releases/2021/03/210330092527.htm
|
Beetle outbreak impacts vary across Colorado forests
|
It's no secret. Colorado's forests have had a tough time in recent years. While natural disturbances such as insect outbreaks and wildfires occurred historically and maintained forest health over time, multiple, simultaneous insect disturbances in the greater region over the past two decades have led to rapid changes in the state's forests.
|
A bird's eye view can reveal much about these changes. Annual aerial surveys conducted by the Colorado State Forest Service and USDA Forest Service have provided yearly snapshots for the state. New collaborative research led by Colorado State University and the University of Wisconsin-Madison now supplements this understanding with even greater spatial detail.

The study, "Effects of Bark Beetle Outbreaks on Forest Landscape Pattern in the Southern Rocky Mountains, U.S.A.," analyzed Landsat satellite imagery from 1997 to 2019 to quantify how outbreaks of three different insect species have impacted high-elevation forests in Colorado, southern Wyoming, and northern New Mexico. The research team found that while these collective beetle outbreaks impacted around 40 percent of the area studied, the effects of these outbreaks varied due to differences in forest structure and species composition across the region.

"In contrast to research that has examined the heterogeneous effects of wildfire on trees, there hasn't been much work on the landscape-level variation in bark beetle effects on forests, particularly across broad areas," said Sarah Hart, co-author and assistant professor in the Forest and Rangeland Stewardship department. "Heterogeneity plays an important role in how these forests will look in the future, where surviving trees will regenerate the forest, and what potential there is for future outbreaks."

Their results indicate that most forest stands affected by insects still have mature trees that can provide the seeds and conditions needed for the next generation of trees to grow. Areas with tree mortality greater than 90 percent were relatively small and isolated. Unlike severe wildfires, which can kill all trees in their path, bark beetle outbreaks typically leave many trees alive, facilitating forest recovery in upcoming decades.

Widespread outbreaks of three important bark beetle species have occurred in Colorado's forests since the turn of the century: mountain pine beetle, spruce beetle, and the western balsam beetle (which affects various fir tree species). These bark beetles primarily target large trees whose defenses have been reduced by lower precipitation and higher temperatures since the turn of the century.

The research team combined satellite imagery capable of identifying small groups of dead trees with a decade of extensive field data from nearly 250 plots to develop presence and severity maps for tree mortality caused by bark beetle attacks. This combination of data gave the research team detailed information about how many trees have died in particular places, and helped to identify what may still be causing the death of individual trees.

"These maps give us unique insight into the effects of recent insect outbreaks because they span a large area but also show a lot of detail, and we are confident that they are showing us how many trees are dying because technicians counted trees on the ground," said Kyle Rodman, lead author and post-doctoral researcher at the University of Wisconsin-Madison.

The maps the team produced indicate that the areas most impacted by bark beetles are concentrated in northern and southwestern Colorado, due to higher concentrations of old lodgepole pine and spruce forests, which were infested by mountain pine beetle and spruce beetle, respectively.
Western balsam beetle impacts were also widespread across the region, but these beetles tended to kill fewer trees in any single location.

"Satellite data is a crucial bridge that allows us to take detailed information from individual places and extend this localized knowledge to large areas," Rodman said. "In using these maps, we can see how the forest has changed over the past 20 years during each of these outbreaks."

Fortunately, much of the 25,000-square-kilometer study area showed low to moderate levels of tree mortality, with high tree mortality being contained in small and isolated patches averaging only about nine city blocks in overall size.

"People tend to notice what has changed, rather than what has stayed the same," Rodman said. "These forests have changed a lot, but I am hopeful. It will just take a little while for them to recover, but many of these beetle-killed forests are likely to recover within a few decades."
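The reported statistics -- the share of the study area affected and the size of isolated high-severity patches -- are the kind of summary that can be derived from a per-pixel mortality map. The sketch below shows that post-processing step on a synthetic raster rather than the study's Landsat-derived maps; the grid size, severity thresholds, and pixel area are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# Synthetic raster of percent tree mortality (0-100), standing in for a
# Landsat-derived mortality map at 30 m resolution.
rng = np.random.default_rng(42)
mortality = ndimage.gaussian_filter(rng.uniform(0, 100, (500, 500)), sigma=8)
mortality = 100 * (mortality - mortality.min()) / (mortality.ptp() + 1e-9)

# Classify severity; the class breaks here are illustrative, not the study's.
severity = np.digitize(mortality, bins=[25, 50, 90])   # 0=low ... 3=very high

# Identify contiguous patches of very high (>90%) mortality and report their
# sizes, converting pixel counts to hectares (30 m x 30 m pixel = 0.09 ha).
high = severity == 3
labels, n_patches = ndimage.label(high)
patch_pixels = ndimage.sum(high, labels, index=range(1, n_patches + 1))
patch_ha = np.asarray(patch_pixels) * 0.09

print(f"Grid area with any mortality above the low class: "
      f"{100 * np.mean(severity > 0):.1f}%")
if n_patches:
    print(f"High-severity patches: {n_patches}, mean size {patch_ha.mean():.1f} ha")
```

Swapping the synthetic array for a real classified mortality raster would yield the same style of patch statistics the study reports, such as the average extent of isolated high-mortality areas.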
|
Geography
| 2,021 |