I've been playing Pak128 (v2.3, Simutrans v112.3) for a few weeks and I figured that long-distance bus lines are more profitable than train lines (arguable, I know, but it seems to be true every time).
The main problem that keeps occurring is managing a good distribution of buses on the line. I have about 10 long bus lines in a network servicing most cities, with about 30 buses per line. Initially I distribute the buses by starting them from the depot at intervals to get a nice spread. The problem is that, due to traffic delays, the buses on a line tend to group together and travel in a big pack, causing huge fluctuations in station service. I tried setting a minimum load and wait time on one of the stops, but that quickly caused other problems because its behavior varies according to supply. My only alternative now is to manually select each bus and somehow delay it (by returning it to the depot or changing its next stop on the line). However, this is a time-consuming activity and not a fun one, so when this happens I prefer withdrawing all buses on the line and buying new ones.
I was hoping there is/will be an option to uniformly distribute buses on a line. It wouldn't have to be perfect. Perhaps a street sign or building that would delay a bus until X time has passed since last bus on the same line?
Also, is there something similar to Platform Choose Signal, but for buses? Sometimes I get long wait lines even if I choose different stops for different lines in a station since buses can't pass each other.
Traffic lights can also be used to create a minimum spacing between road vehicles, although I find maximum wait time much more useful for that goal.
There are two choose signals for buses: one is the usual arrow-shaped road sign, the other is a big lighted panel above the road. Just place them before the crossing where buses have to turn off toward the right stop.
The "packing" effect you described is usual in Simutrans on all waytypes. It has the worst consequences on ships (ships graphically merged into only one) and planes (runways completely saturated for some minutes). The only way of fixing this problem efficiently is through the code.
Wait for minimum load 100% plus a maximum wait time works well for this.
I would use 1/8 or 1/16 initially, and then add or subtract depending on traffic volume you are seeing.
The minimum wait time is something I have never been able to wrap my brain around... and this question gets asked pretty often. Is someone able to make a nice video tutorial (or a good explanation page) of these strategies?
Yes, that's what I usually do. Wait time is in months, so 1/32, the setting I frequently use, is around one day. 1/16 is around 2 days, 1/8 is around 4 days.
Month wait time: the maximum amount of time a convoy will wait for the above to happen before it departs.
Best to try it out in a freeplay game.
Typically, for buses and other lower capacity vehicles, you will want to use shorter intervals of 1/16, 1/32, etc.
For high capacity vehicles, like ships, you will want to use longer intervals, like 1/2, 1/4.
Of course, those are not rules by any means.
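For readers trying to relate those fractions to in-game time, here is a minimal sketch (not from the thread) that converts a wait-time setting into approximate days. It assumes one game month lasts about 32 days, as the "1/32 is around one day" figure above implies; the real month length depends on your bits_per_month setting, so treat the output as an estimate.

```python
# Minimal sketch: convert Simutrans "maximum wait time" fractions to rough
# in-game days, assuming a ~32-day month (an assumption, not a game constant).
from fractions import Fraction

ASSUMED_DAYS_PER_MONTH = 32  # depends on bits_per_month in your config

def wait_in_days(setting: str) -> float:
    """Turn a setting like '1/16' into approximate in-game days."""
    return float(Fraction(setting)) * ASSUMED_DAYS_PER_MONTH

for setting in ["1/2", "1/8", "1/16", "1/32"]:
    print(f"{setting} month ≈ {wait_in_days(setting):g} days")
```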
I too have been facing this problem. Normally I would set 100% capacity with a waiting time of about 1/8 to 1/32 on the end stops.
But what typically happens is that some of these stops end up getting full, so the convoys depart immediately, which eventually causes them to bunch up again.
I can manually fix this since most of my lines are served by only two buses. Still, it's not something I like spending time on.
There is a new feature that shows departure and arrival times for regular convoys in the stop's window.
The ideal solution would be the ability to set a departure time instead of a load threshold, along with a display of the average trip duration (a basic calculation without traffic) or the arrival time at the next destination.
As in the real world: a line departs at 8:00 and is expected to arrive at a certain time, and we later make adjustments depending on the load.
This would also help a lot in adjusting lines that feed a main line so they arrive just before the main line departs, immensely improving performance and overall logistics. | https://forum.simutrans.com/index.php/topic,13219.0.html |
Standing in line: how long is too long?
I knew Americans' patience was thin, but now we know how thin: four minutes' worth.
According to a story that just came out in the supermarket trade publication Progressive Grocer, a research firm has just released new shopping data demonstrating that 10 percent of shoppers get exasperated enough to leave a checkout line if the wait is lengthy. They polled 13,000 consumers to determine exactly when the breaking point is for most shoppers, and it appears to be four minutes. People are happy if they wait under four minutes; over four minutes, and their blood pressure starts rising.
The only exception seems to be at club stores, where an average wait time of slightly over four minutes is still considered acceptable. But after four minutes, whether you're at a grocery store, consumer electronics store, department store, drugstore, home improvement store, mass merchandiser or office supply store, it's all the same -- people get annoyed. | https://www.aol.com/2008/07/01/standing-in-line-at-the-grocery-bank-retail-how-long-is-too/ |
My husband and I couldn’t be more different. He’s steady, reserved, and thoughtful — the kind of guy who will sit quietly and observe in group settings but surprise you with a hilarious one-liner when he gets to know you. I, on the other hand, am outgoing, expressive, and impulsive. If I’m not talking your ear off, there’s probably something wrong.
In most scenarios, our personalities complement each other. But they also shape our financial mindsets and behaviors, which can cause conflict between us. You can probably guess that my husband is a natural saver, and given my taste for instant gratification, I’m more prone to spend.
We’ve been married almost a decade, and we’ve definitely made our fair share of money mistakes — mostly due to poor communication and lack of planning.
Over the years, though, with the support of our financially savvy friends and a great financial adviser, we’ve made our spending differences work.
Here are a few tricks we use to stay balanced in our differing financial approaches.
Creating a monthly budget
There’s no greater tool for keeping harmony between two different spending personalities than a basic budget, which keeps our attitudes and actions around money aligned.
The key for us is to designate every dollar we make during a given month to a specific line item, i.e. using a zero-sum budget. When we give every dollar a “job,” we prevent both overspending and potential conflict.
To make sure I have space to spend when it’s appropriate and we save as much as my husband thinks is wise, we can simply designate money to those categories. That way, both of us feel like our “needs” are being met.
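To make the zero-sum idea concrete, here is a toy sketch with entirely made-up numbers (our real categories and amounts differ): every dollar of income gets assigned to a line item up front, and anything left unassigned is flagged before the month starts.

```python
# Toy zero-sum budget check with invented figures.
income = 5000
budget = {
    "rent": 1500,
    "groceries": 600,
    "savings": 1200,           # the "saver" side of the household
    "personal_spending": 400,  # the "spender" side, split between partners
    "utilities": 300,
    "everything_else": 1000,
}

unassigned = income - sum(budget.values())
if unassigned != 0:
    print(f"${unassigned} still needs a job before the month starts")
else:
    print("Every dollar has a job -- the zero-sum budget balances")
```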
Allotting monthly spending money
When it comes to our finances, the biggest point of contention in our relationship is unplanned purchases.
While I don’t see a problem with Happy Hour a few times a week, my husband would probably opt to stay home and cook dinner. The hard part is I’m a social person, and saying no to every get-together isn’t realistic (or fun).
Rather than spending mindlessly on restaurants or cutting them out altogether, we simply created a line item in our budget for personal spending.
Each of us gets a certain amount of cash every month, which we can use however we want. I typically use mine on food and drinks, and he will often save his over the course of a few months for a larger purchase. The caveat is, when it’s gone, it’s gone.
While setting a limit helps me understand how quickly I spend and keeps me from feeling as though I have to hide purchases, having the money available encourages my husband to invest in himself from time to time.
Running big purchases past each other
No matter how diligently we plan, there will always be times we deviate from our budget. In these cases, we always make an effort to run unplanned purchases of more than $100 by each other.
If my husband wants to order something expensive online on a whim, he will always ask what I think, and out of respect, I do the same for him. This simple exchange is all it takes to prevent conflict.
Using the “wait and see rule”
Another trick I use to curb my spending is the “wait and see rule,” which I picked up from a friend.
Often, I’ll see something I “have to have” at the store, and almost instantly regret the purchase when I get home. To avoid impulse buying, I’ll wait 48 hours before saying yes to something I want. If I still “have to” have it, I can go back to the store and buy it, but usually the “wait and see” time creates a buffer and I totally forget.
Going through bank statements together
One of my worst qualities is avoidance. As a result, I don’t prioritize reviewing bank or credit card statements. I’d rather pretend they’re not there — which has led me to spend more money than I should have at the grocery store or the mall.
To make sure we’re on the same page about how much we have coming in and out, my husband and I review our bank statements together at least monthly.
I’ll update him on freelance projects I’m working on and how much money I’m expecting to come in, and he’ll highlight (sometimes to my chagrin) ways we’ve overspent in the last month.
It’s not always the most enjoyable conversation, but taking time to get a big-picture view of our finances keeps us from poor decisions — and, more importantly, protects our relationship.
Originally published on Business Insider.
| https://community.thriveglobal.com/happy-marriage-finance-strategies/ |
Still Waiting After All These Years
Canadians cherish their healthcare system. It has become part of our national fabric; one of the defining characteristics of our country. We value the principle that our access to health care depends fundamentally on need rather than on the ability to pay. Our national heroes include the founders of our modern system, like Tommy Douglas.
For all its successes, however, our health care system also has its significant flaws. Chief among them is our persistent problem with wait times. According to The Commonwealth Fund – a respected think tank in the US – Canada persistently ranks dead last in wait time performance when compared to 10 peer countries.
How is it that our cherished system has come to be ranked dead last in wait time performance?
Nearly every Canadian family has a wait-time story. We wait in emergency departments. We wait to see family physicians. We wait for tests, procedures and surgeries. We wait to see specialists. We even wait to get out of hospital — an increasing number of Canadian seniors find themselves in acute care hospital beds not because they are sick, but because they cannot live independently and have nowhere else to go.
Are long wait times simply the price we have to pay in order to uphold our Canadian values of equity and fairness?
The answer to this question is a resounding, “No”.
In fact, there is good evidence that Canadians’ wait time experience is not equitable at all. How long you wait for medical care depends a great deal on your postal code. Even within provinces, the Wait Time Alliance (WTA) found that there are considerable disparities in wait times and access to care in general depending on where you live.
There are many reasons why Canada’s medical wait times have stretched to unacceptable levels. One is siloism. Our federated model has created provincial and territorial silos, and our attempts at integration and reform have largely fallen flat. Monique Bégin famously said that we are a country of perpetual pilot projects, lamenting our inability to scale-up and spread new ways of doing things.
Another is the fact that the health care landscape is increasingly one of chronic disease. In the 1960s, the health care system was designed to care for people who develop acute illness. Today, however, we typically see patients with multiple chronic diseases who need access to care across the entire spectrum, from primary care through to hospital and restorative care and finally to long term and palliative care. The typical patient experience is no longer just an episode of care, it is a trajectory of care that spans time, caregivers, and venues. And these trajectories are peppered with multiple transitions that tend to be poorly managed. It is at the transition points where mistakes are made, inefficiencies are generated, and suboptimal outcomes are born.
And the third is that we don’t manage the wait time experience for people very well at all. Any Canadian can walk into a large hardware store and a sales associate can quickly (and electronically) locate any item they want, but that same Canadian cannot find out quickly and easily where they are on a surgical wait list. They can book their car for service online with a few clicks but most cannot electronically book appointments with their family physician. Canadian patients cannot easily compare centres or providers to see where care can be provided fastest. And caregivers and hospitals too often do not have access to real time wait time data to manage patient flow and resources efficiently and equitably.
Information technology (IT) can facilitate many of the solutions to our chronic wait time problem, as its use in other sectors has shown. Seemingly simple solutions, like common wait lists – first shown to reduce waits in bank lines and at Wal-Mart – are easily implemented with the aid of IT. The healthcare sector lags behind other industries in the adoption of IT solutions, making IT truly the low hanging fruit that can pay large efficiency and performance dividends if we can catch up with other industry sectors.
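To illustrate the pooling effect behind common wait lists, here is a rough Monte Carlo sketch — purely illustrative, with invented arrival and service rates, and not Novari's method or data. The same simulated patients are run through one shared first-come-first-served list feeding several providers, and then through the same providers each holding a dedicated list that patients pick blindly.

```python
# Rough illustration of pooled vs. separate queues (all rates are invented).
import heapq
import random

random.seed(1)
SERVERS = 4
ARRIVAL_RATE = 3.5   # patients per hour (assumed)
SERVICE_RATE = 1.0   # patients per hour per provider (assumed)
N = 100_000

arrivals, t = [], 0.0
for _ in range(N):
    t += random.expovariate(ARRIVAL_RATE)
    arrivals.append(t)
services = [random.expovariate(SERVICE_RATE) for _ in range(N)]

def shared_list_wait():
    """One common FCFS list feeding all providers."""
    free = [0.0] * SERVERS          # time each provider is next free
    heapq.heapify(free)
    total = 0.0
    for arr, svc in zip(arrivals, services):
        start = max(arr, heapq.heappop(free))
        total += start - arr
        heapq.heappush(free, start + svc)
    return total / N

def separate_lists_wait():
    """Each provider keeps their own list; patients pick one at random."""
    free = [0.0] * SERVERS
    total = 0.0
    for arr, svc in zip(arrivals, services):
        i = random.randrange(SERVERS)
        start = max(arr, free[i])
        total += start - arr
        free[i] = start + svc
    return total / N

print(f"average wait, common list:     {shared_list_wait():.2f} h")
print(f"average wait, separate queues: {separate_lists_wait():.2f} h")
```

With these made-up rates the common list cuts the average wait several-fold, which is the basic queueing argument for pooled wait lists.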
Novari Health is solely focused on IT solutions that improve patient access to care. Importantly, we recognize that health is a journey for patients, not just an episode of care. Our software modules reflect our understanding of this journey, with solutions ranging from virtual primary care through to e-referral and e-consult, to surgical booking/wait list management and provider relationship management.
Canada won’t fix its wait time problem overnight. It will take new political and policy work, targeted investments, and patient flow innovation. But as the old saying goes; we can’t fix what we don’t measure. IT solutions that provide real time data coupled with providers and organizations who are dedicated to improving the health care experience for patients and their families can produce real, lasting improvements.
Our goal should be nothing less than the restoration of the health care system to one that is truly worthy of Canadians’ confidence and trust.
About the Author
Dr. Chris Simpson
MD FRCPC FACC FHRS FCCS FCAHS
Chief Medical Information Officer
Novari Health
Dr. Simpson is recognized nationally and internationally for his clinical work and healthcare policy leadership. He is the Chief Medical Information Officer at Novari Health, a practicing cardiac electrophysiologist at Kingston Health Sciences Centre, the Vice Dean (Clinical) of the Faculty of Health Sciences at Queen’s University, as well as the Medical Director of the Southeastern Ontario Academic Medical Organization (SEAMO). | https://www.novarihealth.com/still-waiting-years/ |
WEST JORDAN — Just under 1,000 people waited in line for essential groceries from the Utah Food Bank in West Jordan Monday afternoon, as Utah grocery stores struggle to restock food to keep up with demands fueled by the COVID-19 spread.
The food was distributed outside The Church of Jesus Christ of Latter-day Saints’ Bennion Stake Center through the food bank’s mobile pantry program. Residents — equipped with cardboard boxes, wagons, gloves and masks — began waiting in line well before the food was to be distributed at 4 p.m.
By 4:40 p.m., with people still waiting in line, the donations ran out.
Jacob Buhler, IT director for the Utah Food Bank, had planned for about 100 extra people to show up due to coronavirus concerns and loaded more food because of it, but he never expected the number to be anywhere near 1,000.
“Today, we thought, ‘Let’s bring (enough for) 250,’ and (hundreds) of people showed up,” he said.
Wheeling a small wagon, Elin Estrada, 31, of Salt Lake City, visited the church to get food for himself and his two children after hearing about it on Facebook.
He said he arrived at 3:50 p.m. and waited about an hour to get a bag of food, which contained cheetos, tomato sauce and butter.
Going to local grocery stores, he said, has been a frustrating experience, with empty shelves and a lack of essential items.
“You have the money, but you can’t buy the food,” he said.
Tyson Hyde, a Utah Food Bank truck driver, said a full truckload can carry up to 43,000 pounds, and on Monday that truck was half full.
“It’s just stepped up this week because of the pandemic,” Hyde said.
Food bank volunteers filled cardboard boxes full of canned soups, cereal, peanut butter, cooked chicken and other non-perishable foods.
Buhler said he was grateful for the residents’ orderly conduct and civility.
“People were orderly and they didn’t shove or push and there was a lot of kindness,” he said. “But there was some disappointment for people who didn’t get food.”
Salt Lake City resident Natalia Geraldo received a few bags of food containing butter and corn for her family of five.
“What’s going on is that there isn’t food to buy in supermarkets,” she said in Spanish. “It’s time to go to the resources where they are at least donating to be able to take something to eat for the family.”
Since Friday, Taylorsville resident Sandra Bohorquez has visited Walmart, Smith’s Marketplace and WinCo Foods with little luck. Bohorquez said she, too, learned of the mobile pantry through Facebook and waited about 10 minutes to get her groceries — significantly less than the wait times other residents are facing in markets.
“There’s no food. There is absolutely nothing in supermarkets,” she said in Spanish, shrugging her shoulders. “It is vastly difficult.”
She called for people to have patience and to help one another, especially those in need.
As supplies from the truck ran low, volunteers began distributing bags of whatever food was left, according to Buhler. Those who arrived later weren’t so lucky, and 10 minutes after the food ran out and Utah Food Bank volunteers began to pack up, a man and child could be seen walking to their car pulling an empty wagon.
“It’s not good. It’s sad. We just ran out of food,” Buhler said, noting that the food bank truck was “completely empty.”
April Reynolds, 40, arrived with her 16-year-old daughter at about 5 p.m. only to find an empty parking lot where she was told the food donations had run out.
Reynolds pulled into the lot after spotting the Utah Food Bank truck parked outside the church building she formerly attended.
“We’re struggling pretty bad right now,” she said. She is supporting seven children and her mother at home.
“It’s hard to find everything that you need in the store, because everything is scarce. There’s no pasta and no canned food, which is pretty understandable,” Reynolds said.
She said she went to a Dollar Tree to find toilet paper, but employees were rationing portions. She’s most worried about caring for her mother, who is facing ongoing health issues.
Reynolds plans to visit the church again when volunteers come again next Monday, but will arrive much earlier.
“We’re going to continue to do this. We know that after this, people are losing jobs and this will be an ongoing thing. We can do this all summer long,” Buhler said.
Donations to the mobile pantry come from grocery stores, The Church of Jesus Christ of Latter-day Saints, the U.S. Department of Agriculture and other donors, according to Buhler. He said he hopes to bring more food when the food bank comes again in a week.
“Hopefully, we can figure out how to help people. At this time, the food bank is doing everything they can to provide,” he said. | https://www.deseret.com/utah/2020/3/16/21182401/coronavirus-food-bank-covid-19-shortage-groceries-utah-pandemic-shoppers-donations |
It seems that nearly every galaxy has a supermassive black hole at its core. Based on the presence of extremely bright objects early in the Universe's history, it seems that this relationship goes back to the galaxy's very start—galaxies seem to have been built around these monstrous black holes.
But this presents a bit of a problem. There's a limit to how fast black holes can grow, and they shouldn't have gotten to the supermassive stage anywhere near this quickly. There have been a few models to suggest how they might grow fast enough, but it's hard to get any data on what's going on that early in the Universe's history. Now, however, a team is announcing some of the first observational support for one model: the direct collapse of gas into a black hole without bothering to form a star first.
Most black holes form through the collapse of a star with dozens of times the Sun's mass. The resulting black holes end up being a few times more massive than our local star. But supermassive black holes are a different breed entirely, with masses ranging anywhere from 100,000 times to a billion times that of the Sun.
It's technically possible for a stellar mass black hole to grow to that size by drawing in surrounding matter, but the process takes time. Part of that is just getting that much mass into the vicinity of the black hole in the first place. But black holes are also messy eaters. As material spirals in, it heats up and emits radiation, which can push back against any further matter that's falling in. This process sets a limit—called the Eddington limit—on how fast material can enter the black hole.
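For readers who want the quantitative version, a standard back-of-the-envelope form of the limit (a textbook expression, not taken from the paper being described) is sketched below; it assumes spherical accretion of ionized hydrogen and a typical radiative efficiency of about 10 percent.

```latex
% Eddington luminosity: radiation pressure on infalling ionized hydrogen
% balances gravity.
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
                 \approx 1.3\times10^{38}\,\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\ s^{-1}}

% Growth at this limit with radiative efficiency \epsilon gives an e-folding
% (Salpeter) timescale of
t_{\mathrm{Sal}} = \frac{\epsilon\,c^{2}M}{(1-\epsilon)\,L_{\mathrm{Edd}}}
                 \approx 5\times10^{7}\ \mathrm{yr}
\quad\text{for }\epsilon\approx0.1 .
```

At that pace, growing a ~10 solar mass seed to a billion solar masses takes roughly ln(10^8) ≈ 18 e-foldings — close to a billion years of uninterrupted Eddington-limited feeding, which is why the very bright objects seen so early in the Universe's history are hard to explain.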
To build a supermassive black hole quickly enough, a stellar mass black hole needs to be pushing up against the Eddington limit from almost the second it forms. Most researchers consider that unlikely, so they've come up with a variety of models (such as repeated black hole mergers) to explain how supermassive black holes form quickly enough. But it's hard to spot any objects in the early Universe, much less understand what they are and the environment they're surrounded by. So there's been no good way to discriminate among these models.
But that may be changing. Over the past few years, researchers have been building a model of one possible explanation for supermassiveness: direct collapse. Rather than building a star, blowing it up, and then spiraling material into it, direct collapse black holes form when a massive cloud of gas collapses under its own weight. Since material doesn't spiral in, the gas can avoid the Eddington limit and fall in a straight line into the black hole, spurring rapid growth.
While this works on paper (or on a supercomputer), there haven't been any examples of it happening identified in the real world. But the international team behind this model also calculated what something like this would look like. While the matter closest to the black hole would emit very high energy photons, all the surrounding gas would absorb most of these photons and gradually re-emit them at lower energies. By the time we'd see them, they'd be in the infrared area of the spectrum, with rapidly growing intensity in the redder area of the spectrum.
This model provided a signal that they could look for. So they checked for objects fitting that description in data from a multi-instrument observation campaign called CANDELS GOODS-S. To find the objects that were originally associated with a high-energy event, they checked whether the objects could be detected by the Chandra X-ray Observatory.
A couple of sources came through this screen, called Object 29323 and Object 14800. To generate the emission coming from Object 29323 simply by forming stars, the authors estimate it would have to be forming stars at a rate 5,000 times that of our Milky Way. That's twice the rate of the fastest star-forming object we've ever observed, a massive, mature, starburst galaxy. Since that sort of rate is, to put it mildly, unlikely, they argue that these objects are probably direct collapse black holes.
Finding out whether they're right might just take waiting a few years. The James Webb Space Telescope is getting ever closer to launch, and it's designed to image some of the earliest objects in the Universe.
The arXiv. Abstract number: 1603.08522 (About the arXiv). To be published in Monthly Notices of the Royal Astronomical Society. | https://arstechnica.com/science/2016/05/building-a-supermassive-black-hole-skip-the-star/ |
Milky Way Black Hole in Sharper Focus
Our galaxy, the Milky Way, is just one of the 200 billion-odd galaxies strewn across the visible universe. But it’s among the biggest. For sure, there are some other spiral galaxies which outweigh ours, not to mention the even more beefy “ellipticals” formed by the merger of spirals. Nevertheless, it sits in the club of giants. And like most others, it harbours a “supermassive” black hole. Still, that core-dwelling giant shrivels into a dwarf next to monsters packing up to billions of solar masses at the hearts of some. Ours was credited with a “meagre” three-million until recently. But new observations, while adding new fat to the Galaxy itself, give some welcome muscle to the weakling at its center.
In varying estimates, the Milky Way contains 100-to-400 billion stars (300-to-400 billion in later ones). The discrepancy stems from the difficulty of spotting the “red dwarf” stars, which are much smaller and cooler than the Sun but make up three-quarters of the star population of the Milky Way. The Galaxy's total mass, however, is calculated to be about 1.5 trillion solar masses. The difference is attributed partly to the gas and dust in interstellar space, but mainly to an as-yet-unobserved mysterious matter hypothesised to surround the Galaxy as a vast spherical halo, not emitting light ─ hence called “dark matter” ─ and making its presence felt through gravity.
The majestic Galaxy is also surrounded by a retinue of about 200 “globular star clusters”, which pack 10,000 to 10 million stars each into extremely compact volumes, as well as a dozen satellite dwarf galaxies, with a similar number of satellites believed to await discovery.
As the name makes plain, a black hole cannot be seen directly. Because they are theorised to warp the fabric of space-time infinitely with their huge masses, black holes are described as mathematical points which do not allow anything that falls in, not even light, to escape. They announce their presence with the X-rays emitted by the matter they attract from their vicinity, which attains great speeds and temperatures in a torus-shaped accretion disk before being swallowed. Another telltale sign is the phenomenal orbital speeds they impart to nearby stars.
Matter (and light) crossing a threshold called the “event horizon”, whose size varies with the black hole's mass, cannot escape and is drawn into the “singularity” at the center, where known laws of physics do not apply. In the illustration, three light rays at different distances are bent to varying degrees. A fourth rotates around the hole and defines the event horizon. The nearest, fifth ray crosses the horizon and spirals into the singularity, its wavelength undergoing a huge redshift.
The matter a black hole of any mass attracts from its vicinity or from objects passing nearby forms an “accretion disk” around the event horizon. Approaching the horizon, the matter in the disk attains huge velocities, and friction pushes the temperature of the disk material to extreme values, causing it to emit X-rays before crossing the event horizon. The immensely powerful magnetic fields which form in the disk catapult part of the material into space in jets at the black hole's opposing poles, at speeds approaching that of light.
Black holes are divided into separate categories according to their masses. “Stellar mass black holes” are the products of collapsing giant stars. When the cores of stars with at least eight solar masses consume their hydrogen fuel, converting it to helium, heavier elements are synthesised with each consecutive step until the core is completely filled with iron. At this point the core cannot sustain the fusion reactions which produce the energy needed to balance the weight of the outer layers, and the star collapses onto itself. The ensuing shock wave blows the outer layers, enriched with synthesised heavy elements, into space to seed new generations of stars. When massive stars which form together in a giant cloud of gas and dust approach the end of their brief lifespans (30-40 million years compared to the Sun's 10 billion), they begin to eject their outer layers into space with powerful winds and expand into red supergiants as a result of fusion reactions in the core and surrounding layers. When the core, synthesising ever heavier elements in stages that shorten to years, months and even days, finally fills up with iron, it collapses to form a black hole or a neutron star. The heavy elements flung into space by supernova explosions “enrich” the interstellar gas and dust clouds which will form new generations of stars.
Another category, the case for whose existence was strengthened by consistent evidence gathered in recent years, is “intermediate mass black holes”, with masses ranging from a few thousand to 30-40,000 solar masses.
Astronomers discovered one of the not-so-common intermediate mass black holes in the globular star cluster Omega Centauri (above). The black hole was found to be of 40,000 solar masses. Globular clusters are very dense and very old assemblages of stars located around the central bulge of the Milky Way and other galaxies. There are some 200 of these, dispersed in the dark halo surrounding the Galaxy (left). These spherical structures, 60-to-300 light years wide, are home to hundreds of thousands (and in some cases millions) of stars closely packed together. Omega Centauri, one of the largest, is calculated to harbour 10 million stars in a volume with a diameter of just 80 light years.
Globular clusters are among the oldest structures in the universe, with ages between 10 and 12 billion years (the universe itself is 13.8 billion years old). Hence, all the massive stars they contained in the beginning have gone supernova at the end of their 30-40 million-year lives, leaving behind only the old, red stars with long lifespans. The black holes produced by the supernova explosions sank to the cores, where they merged. In the violent processes of merger, some were hurled out of the cluster while others grew to masses of 30-40,000 solar.
A very common variety is black holes defined as “supermassive”. All spiral galaxies like the Milky Way and ellipticals formed by the merger of these are believed to harbour a supermassive black hole within their central bulges. Their masses are calculated to add up to millions, or even billions of solar masses.
M87, at the center of the Virgo cluster of galaxies, is one of the biggest of the ellipticals, which form when two or more spiral galaxies collide and merge. Its mass, largely made up of gas and dark matter, could be as much as 200 times that of the Milky Way in astronomers' estimates. The supermassive black hole at its center is also one of the biggest of its kind, calculated to be 6-7 billion solar masses. The jet extending from the black hole carries the particles it takes from the surrounding disk to 5,000 light years away at relativistic speeds (approaching that of light).
With patient observations carried out nonstop for 16 years with infrared wavelengths which can penetrate dust, astronomers from the Max Planck Institute of Extraterrestrial Physics (Germany) considerably sharpened the picture in 2008.
By observing the orbital motions of the 28 stars nearest the black hole, researchers obtained more accurate values for its mass and distance from the Earth. The latest value determined for the black hole's mass is 4 million solar masses. It also appears that we are a bit farther from the galactic center than we had believed: the distance, formerly calculated to be 26,000 light years, was revised upward to 27,000.
Of the 28 stars, observed with such modern equipment as the New Technology Telescope and Very Large Telescope of the European Southern Observatory in Chile, those orbiting within a radius of 1 “light month” (about 800 billion km) follow trajectories that resemble the swarming around a beehive. Conversely, six stars which remain outside this limit rotate on a common plane, as if on a disk. The most interesting of the 28 is the one named SO-2: it is so fast that it completed a full orbit within the 16-year observation period.
But there is a “fastest of the fast.” The star named SO-102, discovered in 2012, was announced to have an orbital period of a mere 11.5 years around the supermassive black hole. SO-2’s closest approach to the black hole will be in 2018, and that of SO-102 will be in 2021. These dates will allow a test of Einstein’s theory of general relativity. If the theory is correct, the curve of the space-time will affect the motion of the stars and will cause distortions in their light reaching us.
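The mass figure quoted above follows from simple orbital mechanics. Below is a back-of-the-envelope sketch — not the researchers' actual fit — applying Kepler's third law to rough, rounded values for SO-2's orbit; the semi-major axis of about 1,000 AU is an assumption used here only for illustration.

```python
# Kepler's third law: M (solar masses) ≈ a^3 / P^2 with a in AU, P in years.
# The values below are rough, rounded figures for the star SO-2.
SEMI_MAJOR_AXIS_AU = 1000.0   # assumed, roughly SO-2's semi-major axis
PERIOD_YEARS = 16.0           # roughly SO-2's orbital period

enclosed_mass_suns = SEMI_MAJOR_AXIS_AU**3 / PERIOD_YEARS**2
print(f"Enclosed mass ≈ {enclosed_mass_suns:.2e} solar masses")
# ≈ 3.9e6 -- consistent with the ~4 million solar mass figure quoted above.
```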
Although latest observations have brought answers to some important questions, one still remains in the air: How is it possible that such young stars exist in the vicinity of a supermassive black hole?
The 1-parsec-wide (3.26 light years, or about 30 trillion kilometres) area around the supermassive black hole at the center of the Milky Way is crowded with thousands of stars. (For comparison, the nearest star to the Sun is 4.2 light years, or 40 trillion kilometers, away.) Some of these orbit as close as 2 light days (about 52 billion km) to the black hole's 11 million-km-wide event horizon and, at their closest approach, are expected to come as near as 18-20 billion km to the black hole. Astronomers were struggling to explain the presence of about 100 massive stars within the gas disk around the black hole and the orbital motions of most on chaotic inclinations instead of a common orbital plane.
But some newly developed theories seem to explain the puzzle. According to one, the gas raining onto the black hole from stars in the extremely crowded central region of the Milky Way does not join the disk in a steady flow, but in separate squalls spaced several million years apart, after accumulating. When the temperature of the gas in the inner regions of the disk reaches millions of degrees, it creates an outward pressure which stops the inflow of gas (the Eddington limit), and in the band squeezed from both sides, stars begin to form.
Another theory holds that the low-mass old stars within the disk shed their outer shells where heavy molecules like carbon monoxide accumulate in the disk environment, and emerge with relatively pristine envelopes and “mimic the youngsters like some old Hollywood stars.”
But the most widely accepted model provides the explanation that despite the turbulent environment inside it, the disk, a million times denser than the environment around the Sun, counterbalances the effects of the black hole with its huge gravity and destabilises the gas, which collapses to form new stars.
Subscribers to this theory explain the orbital dynamics of these massive young stars, which follow chaotic routes instead of an orderly plane, with the possibility of the black hole’s rotation on its axis. According to these astronomers, the wobbles the black hole makes as it rotates within the disk fling the stars to eccentric orbits.
These are B-class stars with 3-15 solar masses. If one were to surmise that they formed far away and later migrated in, drawn by the gravity of the black hole, they are far too young to have had the time for that. They are born as an association in a giant cloud of gas and dust, along with much higher numbers of smaller stars: in such a cloud, an average of 1 million low-mass stars should be born together with the massive ones. But the number of these low-mass companions in the immediate vicinity of Sagittarius A* was found to be only about 10,000. Furthermore, it takes about 1 billion years for a star born in a distant cloud to draw near the black hole, which is 10 times the maximum lifespan of a B-class star.
Another property of Sagittarius A* that has become an object of speculation is its “lack of appetite” ─ at least for the time being. Unlike fellow monsters who reside at the cores of other galaxies, tipping the scales at billions of solar masses, devouring everything around them, and burping blobs of matter at relativistic speeds to thousands of light years away, ours is on a slimming diet. According to some observers, our supermassive black hole gulps only one hundred-thousandth of the gas which nearby stars send towards it with their winds to accumulate on the disk. The portion consumed amounts to one percent of the Earth's mass every year. But this spartan diet does not forbid the daily snack of a morsel or two. The feeding manifests itself in short and variable X-ray flares which last about an hour. Some astronomers believe that these flares mark the end of comets or asteroids falling into the black hole. According to this conjecture, the black hole is surrounded by a belt of hundreds of trillions of comets or asteroids it has snatched from the stars around it with its powerful gravity. Every day, some of these objects stray from their orbits due to gravitational attractions among themselves and head towards the black hole, reach temperatures of millions of degrees because of friction in the disk, and emit X-rays.
The Chandra X-ray Telescope submits evidence of another violent display of brawn by Sagittarius A*. In this image, produced through two weeks of observation, the lobes of hot gas the black hole ejected 10,000 years ago can be seen at the two o'clock and seven o'clock positions. With these past displays of power, our black hole looks to have emptied the stocks of food in its reach. For now, it's making do with daily doses of pills as it waits in ambush for some big game to blunder in.
Teams of astronomers who have monitored the galactic center for years believe the problem with our giant may not be so much its current lack of appetite as its former gluttony. Gamma ray blobs in the form of 25,000-light-year-wide symmetrical spheres, discovered on both sides of the galactic plane in 2010 by the Fermi Gamma Ray Large Area Space Telescope (Fermi-LAT), show that in the past our black hole emitted intense radiation and particles for short durations, like a quasar. Some astronomers think this violent activity coincides with the frantic star formation six million years ago, triggered by the latest massive gas flow from the surroundings into the 1.6-light-year-wide region around the black hole. According to the proposed model, the incoming gas was shared equally by the stars and the black hole. And the latter, while swallowing a fraction of its share, spewed most of it into space from both of its poles by means of the intense magnetic fields on the surfaces of the surrounding disk. Gamma rays are radiated in the lobes by energetic electrons according to some researchers, and by energetic cosmic ray protons according to others. Furthermore, the black hole is said to have sucked in a 100 solar mass gas cloud 20,000 years ago and ejected two gamma-ray-emitting jets from its poles. Some astronomers maintain that our black hole was a million times more energetic 350 years ago than it is today.
The higher observational resolution astronomers need to test the theoretical explanations offered for the puzzling properties of the skinny giant of our galaxy is finally available. In February 2012, the four telescopes of the European Southern Observatory's Very Large Telescope (VLT) facility at Cerro Paranal, Chile, were linked by computers with a technique called interferometry. Thus the array, composed of four separate telescopes each with an 8.2-meter mirror, can be used as a single telescope with the resolving power of a 130-metre mirror, providing a resolution 10-to-100 times better than currently attained. It should then be possible to illuminate many dark secrets lurking at the Milky Way's center and elsewhere in the universe. | https://kurious.ku.edu.tr/en/milky-way-black-hole-in-sharper-focus/ |
Milky Way GC / infrared, gamma ray, X-ray
This photo caused tremendous excitement when released by NASA on November 10, 2009. The Galactic Center we cannot ‘see’ with our eyes has been made visible as never before by the genius of the digital-image technician-artists at NASA. Pink and blue represent low- and high-energy X-rays, respectively. Regions of hundreds of tiny dots reveal the uncountable numbers of black holes that live in the immediate vicinity of the supermassive black hole (Sgr A*) at the Galactic Center (GC). Sagittarius A* is the most energetic object in the Milky Way and one of the major astronomy discoveries of recent years.
Milky Way GC / Sagittarius star cloud
Energy Beasties at the Galactic Center
The challenge to identify who lives at the center of our galaxy is formidable. Should you ever visit, here are some of the entities you’ll likely encounter.
Milky Way GC / Sagittarius A* awakes 300 mya
Verification of the massive black hole at the GC began in 1974 when very strong radio emission from its location was confirmed. The first hard X-ray emissions from the GC were detected in 1987; then Chandra found soft X-ray emissions in 1999. It is now understood that gas near the black hole brightens and then quickly fades in the X-ray spectrum, presumably in response to X-ray pulses emanating from just outside the black hole. When gas spirals inward towards the black hole, it heats up to millions of degrees and then emits X-rays. Sgr A* now appears to be in a quiet, resting phase. Although it contains ~4 million solar masses, the energy radiating from its immediate surroundings is billions of times weaker than that from comparable black holes in other galaxies.
Image: NASA / Chandra, NASA
Milky Way GC – Arches, Quintuplet / Chandra
The giant stars become powerful ‘point’ X-ray sources when winds blowing off their surfaces collide with winds from an orbiting companion star. Six important structures in the nuclear region of the Galactic Center give rise to complex radiation outputs: a) Sgr A*, the supermassive black hole with a mass of at least 3.5 million Suns; b) the surrounding cluster of evolved and young stars; c) ionized gas streamers, some of which form a three-armed spiral centered on Sgr A* that is known as Sgr A West; d) a dusty molecular ring surrounding Sgr A West; e) diffuse hot gas; and f) a powerful supernova remnant known as Sgr A East.
Milky Way GC – X-ray binaries (neutron stars, black hole swarm)
Hubble data has determined that the massive stars pour radiation and wind into large areas of dense, warm gas (lower left photos in photo montage) that are found throughout the Galactic Center. Large arc-like structures are formed of which the ‘Sickle’ is the most prominent (see photo montage here).
Cygnus X-1, black hole drawing material off its companion star.
X-Ray Binary – 1E 1743.1-2843
At no less than 20kpc distance from the GC, there is a complicated X-ray source that has been studied intensively and continues to defy attempts to describe it with specificity. Early studies of 1E 1743.1-2843 were done by an ESA mission with NASA contributions, which utilized the XMM-Newton satellite-telescope. Hard and soft X-ray emissions, Black Body temperatures and steep power law indicated that 1E 1743.1-2843 is a neutron star or black hole that is absorbing material from its companion, a violent process that creates X-ray emissions. However, the absence of periodic X-ray pulses and/or eclipses and the presence of a soft X-ray spectrum favor a Low Mass X-ray Binary star (LMXB) as the larger star in the binary system of 1E 1743.1-2843.
Milky Way GC / infrared 0.8mu – Spitzer
Arched Filaments / Arched Cluster
Reading right to left, the third photo in the infrared portrait of the Galactic Center taken by the Spitzer Telescope is the first visualization of the long, stringy formations at the base of a structure known as the Arches Filaments. These filaments are about 10 light years long and less than one light year wide. The fourth inset in this lower row of photos shows some of the brightest star regions in the infrared map of the Milky Way, and likely star formation activity here is intense.
Milky Way GC – Star Clusters / Quintuplet, Arches
Quintuplet Cluster
To the immediate right of the owl-eye stellar nursery is an image of the extremely luminous Quintuplet stars, five massive stars that are buried in thick dust clouds. The Quintuplet Cluster is home to the Pistol Star which is the brightest known star in the Milky Way.
The Hubble NICMOS telescope has provided the clearest views yet taken of the Quintuplet Cluster which is 25,000 light years from Earth. This monster star cluster has a mass equivalent of 10,000 Suns and is 10X larger than the typical young star clusters scattered throughout the Milky Way.
Milky Way GC – Quintuplet Star Cluster
Now 4 million years old, the Quintuplet will come to a violent end, ripped apart in a few million years by the huge gravitational tidal forces at the core of the Milky Way. Astronomers expect to see supernovae in the Quintuplet before too many more years pass. The Quintuplet Cluster is hidden behind thick black dust clouds in the constellation Sagittarius, but if it could be seen from Earth it would be as bright as a third magnitude star to the naked eye, and 1/6th the full moon’s diameter.
Pistol Star / Pistol Nebula
Within the Quintuplet Cluster is the brightest star of all in the Milky Way, the Pistol Star. A star at the center of the Pistol Nebula was first postulated with some confidence in 1990. Early photographs revealed a pistol-like shape. The Pistol Star is a blue variable giant that is 120-200X the mass of the Sun and 1.7 x 10^6 times the Sun's luminosity. If not for the impenetrable dust between Earth and the Galactic Center, the Pistol Star would also be visible to the naked eye.
Image: Wmahan / Wikimedia
Milky Way GC – Pistol Star Nebula / NICMOS , Hubble / NASA
Data from two of Hubble’s cameras was combined to create this photograph, which showed an image of the Pistol Star hidden behind a vast quantity of thick dust. The Near Infrared Camera and Multi-Object Spectrometer (NICMOS), which also has infrared vision, can penetrate the thick dust clouds. The Pistol Star has blown off two expanding shells of gas that are ‘false’ colored in this photo as magenta. The two dust shells have a total mass several times that of our Sun. The largest shell has a radius of two light years, which is the distance from the Sun to the star nearest our Solar System. The two novae that created these two dust shells are estimated to have occurred 4,000 and 6,000 years ago, respectively. The mass loss to the Pistol Star was considerable; its original mass might have been 200X Sun.
The Pistol Nebula is the most massive (L)uminous (B)lue (V)ariable (N)ebula. Its ejected material completely surrounds the Pistol Star. The Pistol Nebula is primarily ionized by nearby, very hot stars, and it physically interacts with the strong winds of these stars. There are two direction spikes of emission lines that extend north and south from the Pistol Star. The overall shape of the Pistol Nebula is rectangular with long sides parallel to the magnetic field of the Galactic Center. The Pistol Nebula was ejected in the presence of a strong magnetic field generated at the Galactic Center and powerful solar winds from nearby stars. The side of the Pistol Nebula nearest to us is approaching Earth. Two newly identified spectral emissions north of the Pistol Star are likely the hottest known stars in the GC with temperatures > 50,000 K.
Milky Way GC – Double Helix Nebula / Spitzer / NASA
Double Helix Nebula
Discovered by the Spitzer Space Telescope, the Double Helix Nebula is ~300 light years from the Galactic Center and takes its name from a resemblance to the double helix geometry of the DNA molecule. Likely magnetic torsion twisted the original nebula into the shape of two connected spirals.
The visible segment of the Double Helix Nebula is about 80 light years long. Models for its origin and shape propose magnetic fields at the GC that are 1,000X stronger than those generated by our Sun and are driven by the massive disc of gas orbiting Sagittarius A*.
Sagittarius A*
Sagittarius A* is the supermassive black hole that exists in the center of our galaxy. The extreme infra-red radiation emission of the central dust hiding Sagittarius A* is caused by heating from a compact cluster of very hot stars that are likely in the first stages of their life cycles. Sagittarius A* is the most energetic object in the entire Milky Way galaxy. The circumnuclear disk, which is the rotating ring of gas and dust that surrounds Sagittarius A*, can also be seen in this photo.
The European Southern Observatory recently concluded an extraordinary 16 year observation program of the GC using a novel approach to pin down Sagittarius A*. The NACO instrument ‘sees’ in the near infrared and it followed 28 stars closest to the Galactic Center. While most of these stars swarm around the GC like angry bees, the most distant six stars from the GC orbit it in a disc. Mass and distance for Sagittarius A* were estimated from these data.
Image: Chandra / NASA
Sagittarius A* as imaged by NASA’s Chandra X-ray Observatory
A recent, important discovery by Chandra and XMM-Newton is that Sgr A* gives off powerful X-ray flares during which time the soft X-ray luminosity can increase 50X – 180X over a period of up to 3 hours. Very recently, the (V)ery (L)arge (T)elescope, NACO imaging instrument and the Keck Telescope determined that Sgr A* is also the source of infrared flares. This activity suggests that an important population of nonthermal electrons exists near the black hole.
Origin and Evolution of Sagittarius A*
If hard X-rays can be observed at the GC, that would further clarify the relative roles of accretion and ejection in the Sgr A* system. There is a candidate for such an object. Discovered by the European Space Agency's Integral satellite telescope, IGR J17456-2901 lies within 0.9′ of the GC, nearly coincident with Sgr A*. Its 20-100 keV luminosity has been measured, and IGR J17456-2901 is the first report of hard X-ray emission likely emanating from within 10′ of the GC, perhaps from the black hole itself.
Sagittarius A* is now considered synonymous with the massive black hole at the Galactic Center. Radio and X-ray emission is generated by material falling into the black hole, whose mass is now estimated to be at least 4 million Suns. | https://scribol.com/science/space/what-lies-at-the-center-of-the-milky-way/ |
Black hole caught red-handed in a stellar homicide
British astronomers have helped to gather the most direct evidence yet of a supermassive black hole shredding a star that wandered too close.
Supermassive black holes, weighing millions to billions of times more than the Sun, lurk in the centers of most galaxies. These hefty monsters lie quietly until an unsuspecting victim, such as a star, wanders close enough to be ripped apart by their powerful gravitational clutches.
Astronomers have spotted these stellar homicides before, but this is the first time they can identify the victim. Using a slew of ground- and space-based telescopes, a team of astronomers led by Suvi Gezari of The Johns Hopkins University in Baltimore, Md., has identified the victim as a star rich in helium gas. The star resides in a galaxy 2.7 billion light-years away.
Her team’s results will appear May 2 in the online edition of the journal Nature.
“When the star is ripped apart by the gravitational forces of the black hole, some part of the star’s remains falls into the black hole, while the rest is ejected at high speeds. We are seeing the glow from the stellar gas falling into the black hole over time. We’re also witnessing the spectral signature of the ejected gas, which we find to be mostly helium. It is like we are gathering evidence from a crime scene. Because there is very little hydrogen and mostly helium in the gas we detect from the carnage, we know that the slaughtered star had to have been the helium-rich core of a stripped star,” Gezari explained.
This observation yields insights about the harsh environment around black holes and the types of stars swirling around them.
This is not the first time the unlucky star had a brush with the behemoth black hole. Gezari and her team think the star’s hydrogen-filled envelope surrounding the core was lifted off long ago by the same black hole. The star may have been near the end of its life. After consuming most of its hydrogen fuel, it had probably ballooned in size, becoming a red giant. The astronomers think the bloated star was looping around the black hole in a highly elliptical orbit, similar to a comet’s elongated orbit around the Sun. On one of its close approaches, the star was stripped of its puffed-up atmosphere by the black hole’s powerful gravity. The stellar remnant continued its journey around the center until it ventured even closer to the black hole, met its ultimate demise, and was completely disrupted.
Astronomers have predicted that stripped stars circle the central black hole of our Milky Way galaxy, Gezari pointed out. These close encounters, however, are rare, occurring roughly every 100,000 years. To find this one event, Gezari’s team monitored hundreds of thousands of galaxies with the Pan-STARRS1 telescope on Mount Haleakala, Hawaii. Pan-STARRS, short for Panoramic Survey Telescope and Rapid Response System, scans the entire night sky for all kinds of transient phenomena, including supernovae and near-Earth asteroids, as well as the hoped-for star-shredding events. Pan-STARRS was built by astronomers in Hawaii and is operated by an international consortium, including British astronomers from Edinburgh, Durham, and Belfast. The same patch of sky was watched by GALEX, a space mission measuring ultraviolet light.
The team was looking for a bright flare in ultraviolet light from the nucleus of a galaxy with a previously dormant black hole. They found one in June 2010, which was spotted with both telescopes. Both telescopes continued to monitor the flare as it reached peak brightness a month later, and then slowly began to fade over the next 12 months. The brightening event was similar to that of a supernova, but the rise to the peak was much slower, taking nearly one and a half months.
“The longer the event lasted, the more excited we got, since we realized that this is either a very unusual supernova or an entirely different type of event, such as a star being ripped apart by a black hole,” said team member Armin Rest of the Space Telescope Science Institute in Baltimore, Md.
By measuring the increase in brightness, the astronomers calculated the black hole’s mass to be several million suns, which is comparable to the weight of our Milky Way’s black hole.
Spectroscopic observations with the Multiple Mirror Telescope (MMT) Observatory on Mount Hopkins in Arizona showed that the black hole was swallowing lots of helium. Spectroscopy divides light into its rainbow colors, which yields an object’s characteristics, such as its temperature and gaseous makeup.
“The glowing helium was a tracer for an extraordinarily hot accretion event,” Gezari said. “So that set off an alarm for us. And, the fact that no hydrogen was found set off a big alarm that this was not typical gas. You can’t find gas like that lying around near the center of a galaxy. It’s processed gas that has to have come from a stellar core. There’s nothing about this event that could be easily explained by any other phenomenon.”
The observed speed of the gas also linked the material to a black hole’s gravitational pull. MMT measurements revealed that the gas was moving at more than 20 million miles an hour (over 32 million kilometers an hour). However, measurements of the speed of gas in the interstellar medium reveal velocities of only about 224,000 miles an hour (360,000 kilometers an hour).
“As the object faded, it stayed hot, so we knew it wasn’t a supernova – they cool down,” said Andy Lawrence, a team member from the University of Edinburgh. “The ultra-fast gas velocity is something we also see in Active Galactic Nuclei – but seeing only helium was like nothing I’ve ever seen in an active nucleus”.
To completely rule out the possibility of an active nucleus flaring up in the galaxy, the team used NASA’s Chandra X-ray Observatory to study the hot gas. Chandra showed that the characteristics of the gas didn’t match those from an active galactic nucleus.
“This is the first time where we have so many pieces of evidence, and now we can put them all together to weigh the perpetrator (the black hole) and determine the identity of the unlucky star that fell victim to it,” Gezari said. “These observations also give us clues to what evidence to look for in the future to find this type of event.”
Contacts
Donna Weaver
Space Telescope Science Institute, Baltimore, Md.
+1-410-338-4493
[email protected]
Suvi Gezari
The Johns Hopkins University, Baltimore, Md.
+1-410-516-3462
[email protected]
Andy Lawrence
University of Edinburgh
+44-(0)131-668-8346
[email protected]
Armin Rest
Space Telescope Science Institute, Baltimore, Md.
+1-410-338-4358
[email protected]
For images, video, and more information about this study, visit: https://www.roe.ac.uk/roe/support/pr/pressreleases/120503-shredded/index.html
Wonders of the universe
Hidden in one of the darkest corners of the Orion constellation, this Cosmic Bat is spreading its hazy wings through interstellar space two thousand light-years away. It is illuminated by the young stars nestled in its core — despite being shrouded by opaque clouds of dust, their bright rays still illuminate the nebula.
In this illustration, several dust rings circle the sun. These rings form when planets' gravities tug dust grains into orbit around the sun. Recently, scientists have detected a dust ring at Mercury's orbit. Others hypothesize the source of Venus' dust ring is a group of never-before-detected co-orbital asteroids.
This is an artist's impression of globular star clusters surrounding the Milky Way.
An artist's impression of life on a planet in orbit around a binary star system, visible as two suns in the sky.
An artist's illustration of one of the most distant solar system objects yet observed, 2018 VG18 -- also known as "Farout." The pink hue suggests the presence of ice. We don't yet have an idea of what "FarFarOut" looks like.
This is an artist's concept of the tiny moon Hippocamp that was discovered by the Hubble Space Telescope. Only 20 miles across, it may actually be a broken-off fragment from a much larger neighboring moon, Proteus, seen as a crescent in the background.
In this illustration, an asteroid (bottom left) breaks apart under the powerful gravity of LSPM J0207+3331, the oldest, coldest white dwarf known to be surrounded by a ring of dusty debris. Scientists think the system's infrared signal is best explained by two distinct rings composed of dust supplied by crumbling asteroids.
An artist's impression of the warped and twisted Milky Way disk. This happens when the rotational forces of the massive center of the galaxy tug on the outer disk.
This 1.3-kilometer (0.8-mile)-radius Kuiper Belt Object discovered by researchers on the edge of the solar system is believed to be the step between balls of dust and ice and fully formed planets.
A selfie taken by NASA's Curiosity Mars rover on Vera Rubin Ridge before it moves to a new location.
The Hubble Space Telescope found a dwarf galaxy hiding behind a big star cluster that's in our cosmic neighborhood. It's so old and pristine that researchers have dubbed it a "living fossil" from the early universe.
How did massive black holes form in the early universe? The rotating gaseous disk of this dark matter halo breaks apart into three clumps that collapse under their own gravity to form supermassive stars. Those stars will quickly collapse and form massive black holes.
NASA's Spitzer Space Telescope captured this image of the Large Magellanic Cloud, a satellite galaxy to our own Milky Way galaxy. Astrophysicists now believe it could collide with our galaxy in two billion years.
A mysterious bright object in the sky, dubbed "The Cow," was captured in real time by telescopes around the world. Astronomers believe that it could be the birth of a black hole or neutron star, or a new class of object.
An illustration depicts the detection of a repeating fast radio burst from a mysterious source 3 billion light-years from Earth.
Comet 46P/Wirtanen will pass within 7 million miles of Earth on December 16. Its ghostly green coma is the size of Jupiter, even though the comet itself is about three-quarters of a mile in diameter.
This mosaic image of asteroid Bennu is composed of 12 PolyCam images collected on December 2 by the OSIRIS-REx spacecraft from a range of 15 miles.
This image of a globular cluster of stars by the Hubble Space Telescope is one of the most ancient collections of stars known. The cluster, called NGC 6752, is more than 10 billion years old.
An image of Apep captured with the VISIR camera on the European Southern Observatory's Very Large Telescope. This "pinwheel" star system is most likely doomed to end in a long-duration gamma-ray burst.
An artist's impression of galaxy Abell 2597, showing the supermassive black hole expelling cold molecular gas like the pump of a giant intergalactic fountain.
An image of the Wild Duck Cluster, where every star is roughly 250 million years old.
These images reveal the final stage of a union between pairs of galactic nuclei in the messy cores of colliding galaxies.
A radio image of hydrogen gas in the Small Magellanic Cloud. Astronomers believe that the dwarf galaxy is slowly dying and will eventually be consumed by the Milky Way.
Further evidence of a supermassive black hole at the center of the Milky Way galaxy has been found. This visualization uses data from simulations of the orbital motion of gas swirling at about 30% of the speed of light on a circular orbit around the black hole.
Does this look like a bat to you? This giant shadow comes from a bright star reflecting against the dusty disk surrounding it.
Hey, Bennu! NASA's OSIRIS-REx mission, on its way to meet the primitive asteroid Bennu, is sending back images as it gets closer to its December 3 target.
These three panels reveal a supernova before, during and after it happened 920 million light-years from Earth (from left to right). The supernova, dubbed iPTF14gqr, is unusual because although the star was massive, its explosion was quick and faint. Researchers believe this is due to a companion star that siphoned away its mass.
This is an artist's illustration of what a Neptune-size moon would look like orbiting the gas giant exoplanet Kepler-1625b in a star system 8,000 light-years from Earth. It could be the first exomoon ever discovered.
An artist's illustration of Planet X, which could be shaping the orbits of smaller extremely distant outer solar system objects like 2015 TG387.
This is an artist's concept of what SIMP J01365663+0933473 might look like. It has 12.7 times the mass of Jupiter but a magnetic field 200 times more powerful than Jupiter's. This object is 20 light-years from Earth. It's on the boundary line between being a planet or being a brown dwarf.
The Andromeda galaxy cannibalized and shredded the once-large galaxy M32p, leaving behind this compact galaxy remnant known as M32. It is completely unique and contains a wealth of young stars.
Twelve new moons have been found around Jupiter. This graphic shows various groupings of the moons and their orbits, with the newly discovered ones shown in bold.
Scientists and observatories around the world were able to trace a high-energy neutrino to a galaxy with a supermassive, rapidly spinning black hole at its center, known as a blazar. The galaxy sits to the left of Orion's shoulder in his constellation and is about 4 billion light-years from Earth.
'Oumuamua, the first observed interstellar visitor to our solar system, is shown in an artist's illustration.
Planets don't just appear out of thin air -- but they do require gas, dust and other processes not fully understood by astronomers. This is an artist's impression of what "infant" planets look like forming around a young star.
These negative images of 2015 BZ509, which is circled in yellow, show the first known interstellar object that has become a permanent part of our solar system. The exo-asteroid was likely pulled into our solar system from another star system 4.5 billion years ago. It then settled into a retrograde orbit around Jupiter.
A close look at the diamond matrix in a meteorite that landed in Sudan in 2008. This is considered to be the first evidence of a proto-planet that helped form the terrestrial planets in our solar system.
2004 EW95 is the first carbon-rich asteroid confirmed to exist in the Kuiper Belt and a relic of the primordial solar system. This curious object probably formed in the asteroid belt between Mars and Jupiter before being flung billions of miles to its current home in the Kuiper Belt.
The NASA/ESA Hubble Space Telescope is celebrating its 28th anniversary in space with this stunning and colorful image of the Lagoon Nebula 4,000 light-years from Earth. While the whole nebula is 55 light-years across, this image only reveals a portion of about four light-years.
This is a more star-filled view of the Lagoon Nebula, using Hubble's infrared capabilities. The reason you can see more stars is because infrared is able to cut through the dust and gas clouds to reveal the abundance of both young stars within the nebula, as well as more distant stars in the background.
The Rosette Nebula is 5,000 light-years from Earth. The distinctive nebula, which some claim looks more like a skull, has a hole in the middle that creates the illusion of its rose-like shape.
KIC 8462852, also known as Boyajian's Star or Tabby's Star, is 1,000 light-years from us. It's 50% bigger than our sun and 1,000 degrees hotter. And it doesn't behave like any other star, dimming and brightening sporadically. Dust around the star, depicted here in an artist's illustration, may be the most likely cause of its strange behavior.
This inner slope of a Martian crater has several of the seasonal dark streaks called "recurrent slope lineae," or RSL, that a November 2017 report interprets as granular flows, rather than darkening due to flowing water. The image is from the HiRISE camera on NASA's Mars Reconnaissance Orbiter.
This artist's impression shows a supernova explosion, which contains the luminosity of 100 million suns. Supernova iPTF14hls, which has exploded multiple times, may be the most massive and longest-lasting ever observed.
This illustration shows hydrocarbon compounds splitting into carbon and hydrogen inside ice giants, such as Neptune, turning into a "diamond (rain) shower."
This striking image is the stellar nursery in the Orion Nebula, where stars are born. The red filament is a stretch of ammonia molecules measuring 50 light-years long. The blue represents the gas of the Orion Nebula. This image is a composite of observation from the Robert C. Byrd Green Bank Telescope and NASA's Wide-field Infrared Survey Explore telescope. "We still don't understand in detail how large clouds of gas in our Galaxy collapse to form new stars," said Rachel Friesen, one of the collaboration's co-Principal Investigators. "But ammonia is an excellent tracer of dense, star-forming gas."
This is an illustration of the Parker Solar Probe spacecraft approaching the sun. The NASA probe will explore the sun's atmosphere in a mission that begins in the summer of 2018.
See that tiny dot between Saturn's rings? That's Earth, as seen by the Cassini mission on April 12, 2017. "Cassini was 870 million miles away from Earth when the image was taken," according to NASA. "Although far too small to be visible in the image, the part of Earth facing Cassini at the time was the southern Atlantic Ocean." Much like the famous "pale blue dot" image captured by Voyager 1 in 1990, we are but a point of light when viewed from the outer solar system.
NASA's Hubble Space Telescope, using infrared technology, reveals the density of stars in the Milky Way. According to NASA, the photo -- stitched together from nine images -- contains more than a half-million stars. The star cluster is the densest in the galaxy.
This photo of Saturn's large icy moon, Tethys, was taken by NASA's Cassini spacecraft, which sent back some jaw-dropping images from the ringed planet.
This is what Earth and its moon look like from Mars. The image is a composite of the best Earth image and the best moon image taken on November 20, 2016, by NASA's Mars Reconnaissance Orbiter. The orbiter's camera takes images in three wavelength bands: infrared, red and blue-green. Mars was about 127 million miles from Earth when the images were taken.
PGC 1000714 was initially thought to be a common elliptical galaxy, but a closer analysis revealed the incredibly rare discovery of a Hoag-type galaxy. It has a round core encircled by two detached rings.
NASA's Cassini spacecraft took these images of the planet's mysterious hexagon-shaped jetstream in December 2016. The hexagon was discovered in images taken by the Voyager spacecraft in the early 1980s. It's estimated to have a diameter wider than two Earths.
A dead star gives off a greenish glow in this Hubble Space Telescope image of the Crab Nebula, located about 6,500 light years from Earth in the constellation Taurus. NASA released the image for Halloween 2016 and played up the theme in its press release. The agency said the "ghoulish-looking object still has a pulse." At the center of the Crab Nebula is the crushed core, or "heart" of an exploded star. The heart is spinning 30 times per second and producing a magnetic field that generates 1 trillion volts, NASA said.
Peering through the thick dust clouds of the galactic bulge, an international team of astronomers revealed the unusual mix of stars in the stellar cluster known as Terzan 5. The new results indicate that Terzan 5 is one of the bulge's primordial building blocks, most likely the relic of the very early days of the Milky Way.
An artist's conception of Planet Nine, which would be the farthest planet within our solar system. The similar cluster orbits of extreme objects on the edge of our solar system suggest a massive planet is located there.
An illustration of the orbits of the new and previously known extremely distant Solar System objects. The clustering of most of their orbits indicates that they are likely to be influenced by something massive and very distant, the proposed Planet X.
Say hello to dark galaxy Dragonfly 44. Like our Milky Way, it has a halo of spherical clusters of stars around its core.
A classical nova occurs when a white dwarf star gains matter from its secondary star (a red dwarf) over a period of time, causing a thermonuclear reaction on the surface that eventually erupts in a single visible outburst. This creates a 10,000-fold increase in brightness, depicted here in an artist's rendering.
Gravitational lensing and space warping are visible in this image of near and distant galaxies captured by Hubble.
At the center of our galaxy, the Milky Way, researchers discovered an X-shaped structure within a tightly packed group of stars.
Meet UGC 1382: What astronomers thought was a normal elliptical galaxy (left) was actually revealed to be a massive disc galaxy made up of different parts when viewed with ultraviolet and deep optical data (center and right). In a complete reversal of normal galaxy structure, the center is younger than its outer spiral disk.
NASA's Hubble Space Telescope captured this image of the Crab Nebula and its "beating heart," which is a neutron star at the right of the two bright stars in the center of this image. The neutron star pulses 30 times a second. The rainbow colors are visible due to the movement of materials in the nebula occurring during the time-lapse of the image.
The Hubble Space Telescope captured an image of a hidden galaxy that is fainter than Andromeda or the Milky Way. This low surface brightness galaxy, called UGC 477, is over 110 million light-years away in the constellation of Pisces.
On April 19, NASA released new images of bright craters on Ceres. This photo shows the Haulani Crater, which has evidence of landslides from its rim. Scientists believe some craters on the dwarf planet are bright because they are relatively new.
This illustration shows the millions of dust grains NASA's Cassini spacecraft has sampled near Saturn. A few dozen of them appear to have come from beyond our solar system.
This image from the VLT Survey Telescope at ESO's Paranal Observatory in Chile shows a stunning concentration of galaxies known as the Fornax Cluster, which can be found in the Southern Hemisphere. At the center of this cluster, in the middle of the three bright blobs on the left side of the image, lies a cD galaxy -- a galactic cannibal that has grown in size by consuming smaller galaxies.
This image shows the central region of the Tarantula Nebula in the Large Magellanic Cloud. The young and dense star cluster R136, which contains hundreds of massive stars, is visible in the lower right of the image taken by the Hubble Space Telescope.
In March 2016, astronomers published a paper on powerful red flashes coming from binary system V404 Cygni in 2015. This illustration shows a black hole, similar to the one in V404 Cygni, devouring material from an orbiting star.
A new map of the Milky Way was released February 24, 2016, giving astronomers a full census of the star-forming regions within our own galaxy. The APEX telescope in Chile captured this survey.
This image shows the elliptical galaxy NGC 4889, deeply embedded within the Coma galaxy cluster. There is a gigantic supermassive black hole at the center of the galaxy.
An artist's impression of 2MASS J2126, which takes 900,000 years to orbit its star, 1 trillion kilometers away.
Caltech researchers have found evidence of a giant planet tracing a bizarre, highly elongated orbit in the outer solar system. The object, nicknamed Planet Nine, has a mass about 10 times that of Earth and orbits about 20 times farther from the sun on average than does Neptune.
An international team of astronomers may have discovered the biggest and brightest supernova ever. The explosion was 570 billion times brighter than the sun and 20 times brighter than all the stars in the Milky Way galaxy combined, according to a statement from The Ohio State University, which is leading the study. Scientists are straining to define the supernova's strength. This image shows an artist's impression of the supernova as it would appear from an exoplanet located about 10,000 light years away.
Astronomers noticed huge waves of gas being "burped" by the black hole at the center of NGC 5195, a small galaxy 26 million light years from Earth. The team believes the outburst is a consequence of the interaction of NGC 5195 with a nearby galaxy.
An artist's illustration shows a binary black hole found in the quasar at the center of the Markarian 231 galaxy. Astronomers using NASA's Hubble Space Telescope discovered the galaxy being powered by two black holes "furiously whirling about each other," the space agency said in a news release.
An artist's impression of what a black hole might look like. In February, researchers in China said they had spotted a super-massive black hole 12 billion times the size of the sun.
Are there oceans on any of Jupiter's moons? The Juice probe shown in this artist's impression aims to find out. Picture courtesy of ESA/AOES
Astronomers have discovered powerful auroras on a brown dwarf that is 20 light-years away. This is an artist's concept of the phenomenon.
Venus, bottom, and Jupiter shine brightly above Matthews, North Carolina, on Monday, June 29. The apparent close encounter, called a conjunction, has been giving a dazzling display in the summer sky. Although the two planets appear to be close together, in reality they are millions of miles apart.
Jupiter's icy moon Europa may be the best place in the solar system to look for extraterrestrial life, according to NASA. The moon is about the size of Earth's moon, and there is evidence it has an ocean beneath its frozen crust that may hold twice as much water as Earth. NASA's 2016 budget includes a request for $30 million to plan a mission to investigate Europa. The image above was taken by the Galileo spacecraft on November 25, 1999. It's a 12-frame mosaic and is considered the best image yet of the side of Europa that faces Jupiter.
This nebula, or cloud of gas and dust, is called RCW 34 or Gum 19. The brightest areas you can see are where the gas is being heated by young stars. Eventually the gas bursts outward like champagne after a bottle is uncorked. Scientists call this champagne flow. This new image of the nebula was captured by the European Southern Observatory's Very Large Telescope in Chile. RCW 34 is in the constellation Vela in the southern sky. The name means "sails of a ship" in Latin.
The Hubble Space Telescope captured images of Jupiter's three great moons -- Io, Callisto, and Europa -- passing by at once.
A massive galaxy cluster known as SDSS J1038+4849 looks like a smiley face in an image captured by the Hubble Telescope. The two glowing eyes are actually two distant galaxies. And what of the smile and the round face? That's a result of what astronomers call "strong gravitational lensing." That happens because the gravitational pull between the two galaxy clusters is so strong it distorts time and space around them.
Using powerful optics, astronomers have found a planet-like body, J1407b, with rings 200 times the size of Saturn's. This is an artist's depiction of the rings of planet J1407b, which are eclipsing a star.
A patch of stars appears to be missing in this image from the La Silla Observatory in Chile. But the stars are actually still there behind a cloud of gas and dust called Lynds Dark Nebula 483. The cloud is about 700 light years from Earth in the constellation Serpens (The Serpent).
This is the largest Hubble Space Telescope image ever assembled. It's a portion of the galaxy next door, Andromeda (M31).
NASA has captured a stunning new image of the so-called "Pillars of Creation," one of the space agency's most iconic discoveries. The giant columns of cold gas, in a small region of the Eagle Nebula, were popularized by a similar image taken by the Hubble Space Telescope in 1995.
Astronomers using the Hubble Space Telescope pieced together this picture that shows a small section of space in the southern-hemisphere constellation Fornax. Within this deep-space image are 10,000 galaxies, going back in time as far as a few hundred million years after the Big Bang.
Planetary nebula Abell 33 appears ring-like in this image, taken using the European Southern Observatory's Very Large Telescope. The blue bubble was created when an aging star shed its outer layers and a star in the foreground happened to align with it to create a "diamond engagement ring" effect.
This long-exposure image from the Hubble Telescope is the deepest-ever picture taken of a cluster of galaxies. The cluster, called Abell 2744, contains several hundred galaxies as they looked 3.5 billion years ago; the more distant galaxies appear as they did more than 12 billion years ago, not long after the Big Bang.
This Hubble image looks like a floating marble or maybe a giant, disembodied eye. But it's actually a nebula with a giant star at its center. Scientists think the star used to be 20 times more massive than our sun, but it's dying and is destined to go supernova.
For the first time, NASA scientists have detected light tied to a gravitational-wave event, thanks to two merging neutron stars in the galaxy NGC 4993, located about 130 million light-years from Earth in the constellation Hydra.
NASA is seeking information from U.S. parties interested in operating the Spitzer Space Telescope with non-NASA funding after March 2019, when NASA financial support ends.
A study finds that giant exoplanets that orbit far from their stars are more likely to be found around young stars that have a disk of dust and debris than those without disks.
One of the most mysterious stellar objects may be revealing some of its secrets at last. Called KIC 8462852, also known as Boyajian’s Star, or Tabby's Star, the object has experienced unusual dips in brightness.
Dim objects called brown dwarfs, less massive than the Sun but more massive than Jupiter, have powerful winds and clouds -- specifically, hot patchy clouds made of iron droplets and silicate dust. Scientists recently realized these giant clouds can move and thicken or thin surprisingly rapidly, in less than an Earth day, but did not understand why.
Researchers say in a new study that the TRAPPIST-1 star is quite old: between 5.4 and 9.8 billion years. This is up to twice as old as our own solar system, which formed some 4.5 billion years ago.
How do you visualize distant worlds that you can't see? IPAC's Robert Hurt and Tim Pyle are featured in a new video produced by JPL that highlights how they use scientific data to imagine exoplanets and other astrophysical phenomena.
Astronomers have watched as a massive, dying star was likely reborn as a black hole.
Scientists using NASA's Kepler space telescope identified a regular pattern in the orbits of the planets in the TRAPPIST-1 system that confirmed suspected details about the orbit of its outermost and least understood planet, TRAPPIST-1h.
A study combining observations from NASA’s Hubble and Spitzer space telescopes reveals that the distant planet HAT-P-26b has a primitive atmosphere composed almost entirely of hydrogen and helium.
Astronomers have produced a highly detailed image of the Crab Nebula, by combining data from telescopes spanning nearly the entire breadth of the electromagnetic spectrum.
May 3rd, 2017 marks the 5,000th day of NASA's Spitzer Space Telescope mission. This video gives us a detailed look at six of these days, showing how an automated observatory like Spitzer, which is effectively an astronomy robot, spends its time. Its overall mission design allows for an unprecedented degree of efficiency, enabling it to study the full range of astronomical phenomena, including nearby objects in the solar system, stars in our galaxy, and galaxies out to the edge of the observable universe.
Scientists have discovered a new planet with the mass of Earth, orbiting its star at the same distance that we orbit our sun. The planet is likely far too cold to be habitable for life as we know it, however, because its star is so faint. But the discovery adds to scientists' understanding of the types of planetary systems that exist beyond our own.
Astronomers have detected a huge mass of glowing stardust in a galaxy seen when the universe was only 4 percent of its present age. This galaxy was observed shortly after its formation and is the most distant galaxy in which dust has been detected. This observation is also the most distant detection of oxygen in the universe.
NASA's Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water.
A planet and a star are having a tumultuous romance that can be detected from 370 light-years away. NASA's Spitzer Space Telescope has detected unusual pulsations in the outer shell of a star called HAT-P-2. Scientists' best guess is that a closely orbiting planet, called HAT-P-2b, causes these vibrations each time it gets close to the star in its orbit.
In a first-of-its-kind collaboration, NASA's Spitzer and Swift space telescopes joined forces to observe a microlensing event, when a distant star brightens due to the gravitational field of at least one foreground cosmic object.
In the ongoing hunt for the universe's earliest galaxies, NASA's Spitzer Space Telescope has wrapped up its observations for the Frontier Fields project. This ambitious project has combined the power of all three of NASA's Great Observatories -- Spitzer, the Hubble Space Telescope and the Chandra X-ray Observatory -- to delve as far back in time and space as current technology can allow.
To most of us, our home galaxy, the Milky Way, seems like mind-boggling, never-ending space. But what does the Milky Way actually look like? How quickly is the Milky Way giving birth to new stars? In their efforts to answer these complex questions, scientists are figuring out new ways to break down the vast amounts of data they collect.
Just in time for the 50th anniversary of the TV series "Star Trek," which first aired September 8, 1966, a new infrared image from NASA's Spitzer Space Telescope may remind fans of the historic show.
For years, astronomers have puzzled over a massive star lodged deep in the Milky Way that shows conflicting signs of being extremely old and extremely young.
Celebrating the spacecraft's ability to push the boundaries of space science and technology, NASA's Spitzer Space Telescope team has dubbed the next phase of its journey "Beyond."
Alone on the cosmic road, far from any known celestial object, a young, independent star is going through a tremendous growth spurt.
The Spitzer Space Telescope is exploring Sagittarius A*, the black hole in the center of the Milky Way. This supermassive black hole packs about four million sun-masses into a volume roughly the size of our solar system.
Astronomers have discovered the youngest fully formed exoplanet ever detected. The discovery was made using NASA's Kepler Space Telescope and its extended K2 mission, as well as the W. M. Keck Observatory on Mauna Kea, Hawaii. Exoplanets are planets that orbit stars beyond our sun.
Astronomers have gained a new perspective on the behavior of outbursting star FU Orionis, using data from an airborne observatory and a space telescope.
NASA has approved the continued operation of the Spitzer mission through the commissioning phase of the James Webb Space Telescope in early 2019 as part of the 2016 Astrophysics Senior Review process.
Using data from NASA's Great Observatories, astronomers have found the best evidence yet for cosmic seeds in the early universe that should grow into supermassive black holes.
Imagine you want to measure the size of a room, but it's completely dark. If you shout, you can tell if the space you're in is relatively big or small, depending on how long it takes to hear the echo after it bounces off the wall.
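The analogy boils down to distance = speed times round-trip time, divided by two. A minimal Python sketch, using the approximate speed of sound in air and a made-up echo delay (the 0.5-second figure is illustrative, not from the article):

# Echo-ranging sketch: one-way distance = speed * round-trip delay / 2.
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air

def distance_from_echo(delay_s, speed=SPEED_OF_SOUND_M_S):
    """Return the one-way distance to the far wall from a round-trip echo delay."""
    return speed * delay_s / 2.0

print(distance_from_echo(0.5))   # about 86 m: a large hall
print(distance_from_echo(0.05))  # about 8.6 m: a modest room

Astronomers apply the same logic with light echoes around black holes, with the speed of light standing in for the speed of sound.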
A nebula known as "the Spider" glows fluorescent green in an infrared image from NASA's Spitzer Space Telescope and the Two Micron All Sky Survey (2MASS). The Spider, officially named IC 417, lies near a much smaller object called NGC 1931, not pictured in the image. Together, the two are called "The Spider and the Fly" nebulae. Nebulae are clouds of interstellar gas and dust where stars can form.
The compact star US 708 hasn't had an easy life. Paired with a domineering partner, 708's mass was siphoned away, reducing it to a dense, helium-filled core.
But 708 didn't go quietly into the night. Instead, scientists believe the feeding frenzy ended in a supernova explosion that catapulted the ravaged remains with such force it's leaving the galaxy. Fast.
A new study shows that the star, classified as a hot subdwarf, is blasting through the Milky Way at about 750 miles per second, faster than any other star in the galaxy.
It's also the only one of about 20 similar runaways slingshot away by a supernova explosion, research published in this week's Science shows.
The other stars traveling fast enough to leave the Milky Way's gravitational fist are believed to have been booted by the supermassive black hole lurking in the center of the galaxy.
"US 708 does not come from the galactic center. We don't know any other supermassive black hole in our galaxy. One needs one of those. A smaller, stellar-mass black hole formed by the collapse of a massive star can't do the job," astronomer Stephan Geier, with the European Southern Observatory, told Discovery News.
"It's only one object so far," he added, but if others can be found, scientists would have a way to directly study supernova.
US 708 will leave the Milky Way in roughly 25 million years, cooling over time and transforming into a white dwarf star.
Astronomers first found the star in 2005, but at that time could only determine the velocity, not the rapid spin, which later became evidence for its donor past.
In 2009, theorists developed computer models showing that stars could be accelerated to escape velocity by a supernova blast.
"This motivated us to have a closer look at US 708 again," Geier said.
The star had always been unusual because all other known hypervelocity stars are normal main sequence stars, he added. In contrast, US 708 is a compact helium star, which formed after a close and massive companion star stripped away almost all of its hydrogen.
"We have made an important step forward in understanding (type 1 supernova) explosions," the researchers wrote in the Science paper. "Despite the fact that those bright events are used as standard candles to measure the expansion (and acceleration) of the universe, their progenitors are still unknown."
This observation made by NASA's Chandra X-ray Observatory shows the emissions of G299, a Type-1a supernova remnant triggered by one star dragging matter from a binary partner until it exploded.
NASA's Spitzer Space Telescope was launched 10 years ago and has since peeled back an infrared veil on the Cosmos. The mission has worked in parallel with NASA's other "Great Observatories" (Hubble and Chandra) to provide coverage of the emissions from galaxies, interstellar dust, comet tails and the solar system's planets. But some of the most striking imagery to come from the orbiting telescope has been that of nebulae. Supernova remnants, star-forming regions and planetary nebulae are some of the most iconic objects to be spotted by Spitzer. So, to celebrate a decade in space, here are Discovery News' favorite Spitzer nebulae.
First up, the Helix Nebula -- a so-called planetary nebula -- located around 700 light-years from Earth. A planetary nebula is the remnants of the death throes of a red giant star -- all that remains is a white dwarf star in the core, clouded by cometary dust.
Spitzer will often work in tandem with other space telescopes to image a broad spectrum of light from celestial objects. Here, the supernova remnant RCW 86 is imaged by NASA's Spitzer, WISE and Chandra, and ESA's XMM-Newton.
Staring deep into the Messier 78 star-forming nebula, Spitzer sees the infrared glow of baby stars blasting cavities into the cool nebulous gas and dust.
The green-glowing infrared ring of the nebula RCW 120 is caused by tiny dust grains called polycyclic aromatic hydrocarbons -- the bubble is being shaped by the powerful stellar winds emanating from the central massive O-type star.
Spitzer stares deep into the Orion nebula, imaging the infrared light generated by a star factory.
In the year 1054 A.D. a star exploded as a supernova. Today, Spitzer was helped by NASA's other "Great Observatories" (Hubble and Chandra) to image the nebula that remains. The Crab Nebula is the result; a vast cloud of gas and dust with a spinning pulsar in the center.
The Tycho supernova remnant as imaged by Spitzer (in infrared wavelengths) and Chandra (X-rays). The supernova's powerful shockwave is visible as the outer blue shell, emitting X-rays.
Over 2,200 baby stars can be seen inside the bustling star-forming region RCW 49.
The "Wing" of the Small Magellanic Cloud (SMC) glitters with stars and warm clouds of dust and gas. By combining observations by Spitzer, Chandra and Hubble, the complex nature of this nebulous region can be realized.
The giant star Zeta Ophiuchi is blasting powerful stellar winds into space, creating an impressive shock wave in the interstellar medium.
Scientists may have found a new supernova explosion or supermassive black hole in a galaxy about 600 million light years from Earth after pointing a telescope there for the first time in a couple of decades and spotting something super bright near its center.
The National Radio Astronomy Observatory said the bright object was not in the Cygnus A galaxy when the Very Large Array last viewed it in 1996.
“It must have turned on sometime between 1996 and now,” the observatory’s Rick Perley said in a statement. They know it’s a new bright spot because despite upgrades that have been made to the telescope since the mid-’90s, “this new feature is bright enough that we definitely would have seen it in the earlier images [even] if nothing had changed.”
Based on what it looks like, the scientists said the bright spot is either a supernova explosion or an “outburst” from a secondary supermassive black hole orbiting the one that lies at the galaxy’s center. The supermassive black hole explanation is the more likely contender, because of just how bright the object is, but if it is a supernova — an explosion of a massive star — it’s a rare kind.
“While they want to watch the object’s future behavior to make sure, they pointed out that the object has remained too bright for too long to be consistent with any known type of supernova,” the observatory said. “It has many of the characteristics of a supermassive black hole that is rapidly feeding on surrounding material.”
Cygnus A is known for being one of the greatest sources of radio waves in space, and the closest one to Earth that is so active on that side of the electromagnetic spectrum.
An artist’s conception of the bright spot scientists recently observed in the galaxy Cygnus A, which could be a supermassive black hole orbiting the galaxy’s central one. Photo: Bill Saxton, NRAO/AUI/NSF
But much of its history is a mystery. Where would this secondary supermassive black hole have come from and why is it orbiting so close to the main one in Cygnus A? It’s possible that Cygnus A ate up another galaxy, lassoing its black hole into the system.
This kind of cannibalism is fairly common in the universe. Astronomers have recently found a few supermassive black holes lurking at the center of tiny galaxies called ultracompact dwarf galaxies. They believe such massive objects exist in such small systems because those dwarf galaxies were once much larger, but collided with other galaxies and had their contents stripped away. The victor of those galactic collisions ran away with the extra stars and planets and left behind a loser of a supermassive black hole with a smaller community of matter around it. There are also mergers of galaxies that result from these commonly occurring collisions.
“These two would be one of the closest pairs of supermassive black holes ever discovered, likely themselves to merge in the future,” the NRAO’s chief scientist Chris Carilli explained.
If the bright spot is indeed a supermassive black hole that was roped into the galaxy from another, it could tell astronomers more about what Cygnus A’s life was like before telescopes and before it was discovered.
In the time since scientists first found Cygnus A, the black hole would have been there all along, if that’s what the object turns out to be. But the telescope would have been able to spot it now because, the observatory explained, the black hole might have “encountered a new source of material to devour,” such as gas or a star that got too close and was shredded into a stream of atoms and dragged into the void.
When a supermassive black hole at the center of a galaxy gobbles up a space meal, it can emit material in a sort of burp known as galactic outflow, a powerful wind of gas. Scientists have recently discovered that baby stars can form within that hot gas as the black hole expels it.
“For the first time, astronomers have directly imaged a supermassive black hole ejecting a fast-moving jet of particles as it shreds a passing star.”
This is not an ordinary black hole, whose mass would be a few times that of the Sun. Supermassive black holes can have millions, or in some cases billions, of solar masses. Almost every galaxy, including the Milky Way, has a supermassive black hole at its center that influences its development.
The gravitational pull of the supermassive black hole attracts and draws in material from its surroundings; however, not all of it is absorbed at once. The black hole forms a rotating disc and then launches superfast jets of particles from the poles of that disc at speeds close to the speed of light.
In 2005, astronomers working with the William Herschel Telescope, in the Canary Islands, identified a burst of infrared emission. The emission came from the center of one of the colliding galaxies in Arp 299. Later, the National Science Foundation’s Very Long Baseline Array (VLBA) discovered a new, distinct source of radio emission from the same nucleus.
As time passed, the newly discovered object remained bright at infrared and radio wavelengths. However, it stayed undetected in visible light and X-rays, according to Seppo Mattila, of the University of Turku in Finland.
The most likely explanation is that thick interstellar gas and dust near the galaxy's center absorbed the X-rays and visible light, then re-radiated the energy as infrared.
Scientists continued to observe the source with the VLBA and other radio telescopes. They were able to confirm that the radio emission was expanding in one direction. As scientists suspected, this behavior is typical of a jet.
The VLBA is a huge interferometer that consists of 10 identical antennas on transcontinental baselines that span up to 8,000 km, from Mauna Kea, Hawaii to St. Croix, Virgin Islands.
The radio antennas are separated by thousands of kilometers, which gives the VLBA incredible resolving power. It can see the fine details required to observe the features of an expanding object from millions of light years away.
The VLBA observes at wavelengths of 28 cm to 3 mm in eight bands plus two sub-gigahertz bands, including the primary spectral lines that produce high brightness maser emission.
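To illustrate why such long baselines matter, an interferometer's diffraction-limited resolution scales roughly as wavelength divided by baseline. A minimal Python sketch using the 8,000 km baseline and the wavelength range quoted above (an approximation only, ignoring observing-mode details):

import math

# Diffraction-limited resolution: theta ~ wavelength / baseline (in radians).
def resolution_mas(wavelength_m, baseline_m):
    """Approximate angular resolution in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600.0 * 1000.0

baseline_m = 8_000e3                # 8,000 km expressed in meters
for wavelength_m in (0.003, 0.28):  # 3 mm and 28 cm, the ends of the quoted range
    print(f"{wavelength_m * 100:g} cm -> {resolution_mas(wavelength_m, baseline_m):.2f} mas")

At 3 mm this works out to less than a tenth of a milliarcsecond, which is why the array can resolve the slowly expanding radio structure described above.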
The jet is emitted by a supermassive black hole located at the center of one of the pair of colliding galaxies called Arp 299, according to researchers. The black hole is approximately 20 million times the mass of the Sun, and it is currently shredding a star roughly twice the mass of the Sun. The superfast jet of charged particles being emitted by the black hole contains 125 billion times the energy released by the Sun per year.
A small number of such stellar deaths, called tidal disruption events (TDEs) have been detected. The bursts radiate all over the electromagnetic spectrum from radio, visible, and UV, all the way to X-ray and gamma-ray intervals.
Miguel Perez-Torres, of the Astrophysical Institute of Andalusia in Granada, Spain, said that scientists have never before been able to directly observe the formation and evolution of a jet from one of these events.
The dust around the black hole absorbs any visible light, so this particular TDE may be a sign of a hidden population of similar events. Mattila and Perez-Torres hope scientists will discover many more of them and learn from them by pointing infrared and radio telescopes at particular sources.
TDEs are important to astronomy. They provide unique insight into the formation and evolution of jets near massive objects. Such events could be common in the distant universe and studying them will advance the understanding of galaxies that developed billions of years ago.
By Jeanette Smith
An artist's impression of the Wolfe Disk, a massive rotating disk galaxy in the early universe.
A bright yellow "twist" near the center of this image marks the spot where a planet may be forming around the star AB Aurigae. The image was obtained by the European Southern Observatory's Very Large Telescope.
This artist's depiction shows the orbits of two stars and an unseen black hole 1,000 light-years from Earth. This system includes a star (small orbit seen in blue) orbiting a newly discovered black hole (orbit in red), as well as a third star in a wider orbit (also in blue).
This illustration shows the remains of a star, known as a white dwarf, being pulled into orbit around a black hole. On each orbit, the black hole draws more material from the star onto a glowing disk of material surrounding the black hole. Before its encounter with the black hole, the star was a red giant in the later stages of stellar evolution.
This artist's illustration shows the collision of two 125-mile-wide dusty bodies orbiting the bright star Fomalhaut, located 25 light-years away. The debris from this collision was once thought to be an exoplanet.
This is an artist's impression of interstellar comet 2I/Borisov as it travels through our solar system. Recent observations found carbon monoxide in the comet's tail as the sun warmed it.
This rosette pattern is the orbit of a star, called S2, around the supermassive black hole at the center of our Milky Way galaxy.
This is an artist's depiction of SN2016aps, which astronomers believe is the brightest supernova ever observed.
This is an artist's depiction of a brown dwarf, or "failed star" object, and its magnetic field. The brown dwarf's atmosphere and magnetic field rotate at different speeds, which allowed astronomers to determine wind speeds on the object.
This artist's depiction shows an intermediate-mass black hole tearing apart a star.
This is an artist's impression of a large star known as HD74423 and its smaller red dwarf companion in a binary star system. The large star appears to pulsate on one side only, distorted into a teardrop shape by the gravitational pull of its companion star.
This is an artist's impression of two white dwarfs in the process of merging. While astronomers expected this might cause a supernova, they found an example of two white dwarf stars that survived the merger.
A combination of space- and ground-based telescopes found evidence for the biggest explosion ever seen in the universe. The explosion was created by a black hole located in the central galaxy of the Ophiuchus cluster, which blasted out jets and carved a large cavity in the surrounding hot gas.
The red supergiant star Betelgeuse, in the constellation Orion, underwent unprecedented dimming. This image was obtained in January using the European Southern Observatory's Very Large Telescope.
This new ALMA image shows the outcome of a stellar fight: a complex and spectacular gas environment surrounding the binary star system HD101584.
NASA’s Spitzer Space Telescope captured the Tarantula Nebula at two wavelengths of infrared light. Red represents hot gas, while blue regions are interstellar dust.
A white dwarf, left, draws material from a brown dwarf, right, about 3,000 light-years from Earth.
This image shows the orbits at six G objects in the center of our galaxy, with a supermassive black hole indicated by a white cross. Stars, gas and dust are in the background.
After stars die, they expel their particles into space, and those particles go on to form new stars in turn. In one case, stardust became embedded in a meteorite that fell to Earth. This illustration shows that stardust can flow from sources such as the Egg Nebula to create the grains recovered from that meteorite, which landed in Australia.
The former North Star, Alpha Draconis or Thuban, is circled here in an image of the northern sky.
The galaxy UGC 2885, dubbed the "Godzilla galaxy," may be the largest one in the local universe.
The host galaxy of a newly traced repeating fast radio burst, captured with the 8-meter Gemini-North telescope.
The central region of the Milky Way, imaged with the European Southern Observatory's Very Large Telescope.
This is an artist's depiction of what MAMBO-9 would look like in visible light. The galaxy is very dusty and has yet to build most of its stars. The two components show that the galaxy is in the process of merging.
Astronomers found a white dwarf star surrounded by a gas disk created from an icy giant planet torn apart by its gravity.
New measurements of the black hole at the center of Holm 15A reveal that it is 40 billion times more massive than our sun, making it the heaviest black hole yet to be measured directly.
A close-up view of an interstellar comet passing through our solar system can be seen on the left. On the right, astronomers used an image of the Earth for comparison.
The galaxy NGC 6240 hosts three supermassive black holes at its core.
Gamma-ray bursts are shown in this artist's depiction. They can be triggered by collisions of neutron stars or by the explosion of a massive star collapsing into a black hole.
Two gaseous clouds resembling peacocks were found in the neighboring dwarf galaxy the Large Magellanic Cloud. In these images from the ALMA telescopes, red and green highlight molecular gas while blue represents ionized hydrogen gas.
An artist's impression of the Milky Way's huge black hole flinging a star from the center of the galaxy.
The Jack-o'-lantern Nebula is on the edge of the Milky Way. Radiation from the massive star at its center created spooky-looking gaps in the nebula that make it look like a carved pumpkin.
This new image from the NASA / ESA Hubble Space Telescope captures two galaxies of equal size in a collision that appears to resemble a ghostly face. This observation was made on 19 June 2019 in the visible light of the Advanced Camera for Surveys of the telescope.
A new SPHERE/VLT image of Hygiea, which could be the smallest dwarf planet in the solar system yet. As an object in the main asteroid belt, Hygiea immediately satisfies three of the four requirements to be classified as a dwarf planet: it orbits the Sun, it is not a moon and, unlike a planet, it has not cleared the neighborhood around its orbit. The final requirement is to have enough mass that its own gravity pulls it into a roughly spherical shape, which is what the VLT observations now reveal about Hygiea.
This is an artist's rendering of what a massive galaxy from the early universe might look like. The rendering shows that star formation in the galaxy is lighting up the surrounding gas. Photo by James Josephides / Swinburne Astronomy Productions, Christina Williams / University of Arizona and Ivo Labbe / Swinburne.
This is an artist's depiction of a gas and dust disk around the star HD 163296. Gaps in the disk are likely the locations of baby planets forming.
This is a two-color composite image of comet 2I/Borisov captured by the Gemini North telescope on September 10.
This illustration shows a young, forming planet in a "baby-proof" star system.
Using a simulation, astronomers have illuminated the faint filaments that make up the cosmic web in a massive cluster of galaxies.
The Hubble Space Telescope’s Wide Field Camera observed Saturn in June as the planet made its closest approach to Earth this year, approximately 1.36 billion kilometers away.
An artist’s impression of the massive radiation explosion of radiation from the middle of the Milky Way and affecting the Magellanic Stream.
The Atacama Large Millimeter / submillimeter Array captured this unprecedented image of two disks of events, in which the baby’s stars grow, feeding the material from their surrounding birth disk.
This is an artist’s illustration of what a Neptune-size moon giant gas giant exoplanet Kepler-1625b will look like in a star system 8,000 light-years from Earth. This may be the first eclipse discovered.
This infrared image from NASA’s Spitzer Space Telescope shows a cloud of gas and dust filled with bubbles, magnified by air and radiation from massive young stars. Each bubble is filled with hundreds of thousands of stars, forming from dense clouds of gas and dust.
It was an artist’s impression of FRB 181112’s rapid-fire radio path traveling from a distant host expanse to reach Earth. It passed through the mix of a galaxy along the way.
After passing near a terrific black hole, the star of this artist’s conception is torn into a thin stream of gas, which is then pulled back around the black hole and slams on its own, which creates an apparent shock and decline in the hotter material.
Comparison of GJ 3512 with Solar System and other nearby red-dwarf planetary systems. The planets around a solar-mass stars could grow until they started accreting gas and becoming giant planets like Jupiter, for several million years. But we thought that small stars like Proxima, TRAPPIST-1, star of TeegardernÕ and GJ 3512, could not form Jupiter mass planets.
A collision of three galaxies set three terrific black holes in a crash course each with a system of one billion light years from Earth.
2I / Borisov is the first interstellar comet observed in our solar system and the second observed interstellar guest in our solar system.
KIC 8462852, also known as Boyajian's Star or Tabby's Star, is 1,000 light-years from us. It's 50% bigger than our sun and 1,000 degrees hotter. And it doesn't behave like any other star, dimming and brightening sporadically. Dust around the star, depicted here in an artist's illustration, may be the most likely cause of its strange behavior.
This is an artist's impression of a pulse from a massive neutron star being delayed by the passage of a white dwarf star between the neutron star and Earth. Astronomers have detected the most massive neutron star to date because of this delay.
The European Southern Observatory's VISTA telescope captured a stunning image of the Large Magellanic Cloud, one of our nearest galactic neighbors. The telescope's near-infrared capabilities reveal millions of individual stars.
Astronomers believe the Comet C / 2019 Q4 may be the second known interstellar visitor to our solar system. It was first spotted on August 30 and imaging via the Canada-France-Hawaii Telescope on the Big Island of Hawaii on September 10, 2019.
A star known as S0-2, represented as the blue and green object in this artist's illustration, made its closest approach to the supermassive black hole at the center of the Milky Way in 2018. It provided a test of Einstein's theory of general relativity.
This is a radio image of the galactic center of the Milky Way. The radio bubbles discovered by MeerKAT extend vertically above and below the plane of the galaxy.
A kilonova was captured by the Hubble Space Telescope in 2016, seen here next to the red arrow. Kilonovae are massive explosions that create heavy elements such as gold and platinum.
This is an artist's depiction of a black hole about to swallow a neutron star. Detectors signaled this possible event on August 14.
This artist's depiction shows LHS 3844b, a nearby rocky exoplanet. It is 1.3 times the mass of Earth and orbits a cool M-dwarf star. The planet's surface is probably dark and covered with cooled volcanic material, and it has no detectable atmosphere.
An artist's concept of the explosion of a massive star within a dense stellar environment.
The Galaxy NGC 5866 is 44 million light-years from Earth. It appears flat because we can only see its edge in this image obtained by NASA’s Spitzer Space Telescope.
The Hubble Space Telescope captured a dazzling new portrait of Jupiter, showing its vivid colors and swirling cloud features in its atmosphere.
This is an artist's impression of ancient massive and distant galaxies observed with ALMA.
Glowing gas clouds and newborn stars make up the Seagull Nebula in one of the Milky Way's spiral arms.
An artist’s concept of what the first stars looked like after the Big Bang.
The spiral galaxy NGC 2985 lies approximately 70 million light years away from our solar system in the constellation Ursa Major.
Early in the history of the universe, the Milky Way collided with a dwarf galaxy, left, which helped shape our galaxy's ring and structure as it is known today.
An artist's depiction of a thin disc of material encircling a supermassive black hole at the center of the galaxy NGC 3147, 130 million light-years away.
Hubble captured this view of a spiral galaxy named NGC 972 that seems to blossom with new star formation. The orange glow is created as hydrogen gas reacts to the intense light streaming out from nearby newborn stars.
This is the jellyfish galaxy JO201.
The star system Eta Carinae, located 7,500 light-years from Earth, experienced a great explosion in 1838, and the Hubble Space Telescope is still capturing the aftermath. This new ultraviolet image shows warm glowing clouds of gas that resemble fireworks.
‘Oumuamua, the first observed interstellar visitor to our solar system, is shown in an artist’s illustration.
This is an artist’s rendering of ancient supernovae that bombarded Earth with cosmic energy millions of years ago.
An artist’s impression of CSIRO’s Australian SKA Pathfinder radio telescope detecting a fast radio burst and determining its precise location.
The Whirlpool galaxy imaged at different wavelengths. On the left is a visible light image. The next image combines visible and infrared light, while the two on the right show different wavelengths of infrared light.
Electrically charged C60 molecules, in which 60 carbon atoms are arranged in a hollow sphere resembling a soccer ball, were found by the Hubble Space Telescope in the interstellar medium between star systems.
These are magnified galaxies seen behind large clusters of galaxies. The pink haloes reveal the gas surrounding the distant galaxies and its structure, and the gravitational lensing effect of the clusters multiplies the images of the galaxies.
This artist’s depiction shows a blue quasar in the middle of a galaxy.
The NICER detector at the International Space Station recorded 22 months of X-ray data to create this map of the entire sky.
NASA’s Spitzer Space Telescope captured this mosaic of the star-forming regions Cepheus C and Cepheus B.
The Galaxy NGC 4485 collided with the larger galactic neighbor NGC 4490 millions of years ago, leading to the creation of new stars visible on the right side of the image.
Astronomers have created a mosaic of the distant universe, called the Hubble Legacy Field, that documents 16 years of observations from the Hubble Space Telescope. The image contains 200,000 galaxies that stretch back through 13.3 billion years of time to just 500 million years after the Big Bang.
A ground-based telescope’s view of the Large Magellanic Cloud, a satellite galaxy of our Milky Way. The inset was taken by the Hubble Space Telescope and shows one of the star clusters in the galaxy.
One of the brightest planetary nebulae in the sky, first discovered in 1878, the nebula NGC 7027 can be seen toward the constellation Cygnus (the Swan).
The asteroid 6478 Gault is seen with the NASA / ESA Hubble Space Telescope, which shows two narrow, comet-like tails of debris that tell us the asteroid is slowly undergoing self-destruction. The bright stripes surrounding the asteroid are background stars. The Gault asteroid is located 214 million miles from the Sun, between the orbits of Mars and Jupiter.
The ghostly shell in this image is a supernova, and the glowing trail leading away from it is a pulsar.
Hidden in one of the darkest corners of the Orion constellation, this Cosmic Bat spreads its hazy wings through interstellar space two thousand light-years away. It is illuminated by the young stars nestled in its core; despite being shrouded by dark clouds of dust, their bright rays still light up the nebula.
In this illustration, many rings of dust surround the sun. These rings form when the gravity of the planets deposits grains of dust in orbit around the sun. Recently, scientists have noticed a ring of dust in Mercury’s orbit. Others hypothesize that the origin of the Venus dust ring is a group of unidentified co-orbital asteroids.
This is an artist’s impression of globular star clusters surrounding the Milky Way.
An artist’s impression of life on a planet in orbit around a binary star system, seen as two suns in the sky.
An artist’s concept of one of the most distant solar system objects ever observed, 2018 VG18 – also known as “Farout.” The pink hue suggests the presence of ice. We still have no idea what “FarFarOut” looks like.
This is an artist’s concept of the tiny moon Hippocamp, discovered by the Hubble Space Telescope. Only 20 miles across, it may actually be a broken-off fragment from a much larger neighboring moon, Proteus, seen as a crescent in the background.
In this illustration, an asteroid (lower left) breaks apart under the powerful gravity of LSPM J0207+3331, the oldest, coldest white dwarf known to be surrounded by a ring of dusty debris. Scientists think the system’s infrared signal is best explained by two distinct rings of dust supplied by crumbling asteroids.
An artist’s impression of a warped and twisted Milky Way disk. This happens when the rotational forces of the galaxy’s massive center tug on the outer disk.
The 1.3-kilometer (0.8-mile)-radius Kuiper Belt Object that researchers discovered on the edge of the solar system is believed to be the missing link between balls of dust and ice and fully formed planets.
A selfie taken by NASA’s Curiosity Mars rover on Vera Rubin Ridge before it moved to a new location.
The Hubble Space Telescope found a dwarf galaxy hiding behind a big star cluster in our cosmic neighborhood. It is so old and pristine that researchers call it a “living fossil” from the early days of the universe.
How did massive black holes form in the early universe? The rotating gaseous disk of this dark matter halo breaks apart into three clumps that collapse under their own gravity to form supermassive stars. Those stars quickly collapse and form massive black holes.
NASA’s Spitzer Space Telescope captured this image of the Large Magellanic Cloud, a satellite galaxy of our own Milky Way galaxy. Astrophysicists believe it could collide with our galaxy in two billion years.
A mysterious bright object in the sky, called “The Cow,” was captured in real time by telescopes around the world. Astronomers believe it could be the birth of a black hole or neutron star, or a new kind of object.
An illustration depicts the detection of a repeating fast radio burst from a mysterious source 3 billion light-years from Earth.
Comet 46P / Wirtanen will pass within 7 million miles of Earth on December 16. The spectacular green coma is the size of Jupiter, though the comet itself is about three-quarters of a mile wide.
This mosaic image of asteroid Bennu is composed of 12 PolyCam images collected on December 2 by the OSIRIS-REx spacecraft from a range of 15 miles.
This image of a globular cluster of stars through the Hubble Space Telescope is one of the most ancient collections of stars known. The cluster, called NGC 6752, is over 10 billion years old.
An image of Apep captured with the VISIR camera on the European Southern Observatory’s Very Large Telescope. This “pinwheel” star system is likely to end in a long-duration gamma-ray burst.
An artist’s impression of the galaxy Abell 2597, showing a supermassive black hole expelling cold molecular gas like the pump of a giant intergalactic fountain.
An image of the Wild Duck Cluster, in which each star is nearly 250 million years old.
These images show the final stage of a union between pairs of galactic nuclei in the messy cores of colliding galaxies.
A radio image of hydrogen gas in the Small Magellanic Cloud. Astronomers believe the dwarf galaxy is slowly dying and will eventually be consumed by the Milky Way.
Further evidence of a supermassive black hole at the center of the Milky Way galaxy has been found. This visualization uses data from simulations of orbital motions of gas swirling at about 30% of the speed of light on a circular orbit around the black hole.
Does it look like a bat to you? This giant shadow comes from a bright star reflecting off the dusty disk surrounding it.
Hey, Bennu! NASA’s OSIRIS-REx mission, on its way to meet the primitive asteroid Bennu, sent back images as it approached its December 3 target.
These three panels reveal a supernova before, during and after it happened 920 million light-years from Earth (from left to right). The supernova, called iPTF14gqr, is unusual because although the star was massive, its explosion was quick and faint. Researchers believe this is because a companion star siphoned away most of its mass.
An artist’s illustration of Planet X, which could be shaping the orbits of extremely distant solar system objects such as 2015 TG387.
This is an artist’s concept of what SIMP J01365663+0933473 might look like. It has 12.7 times the mass of Jupiter but a magnetic field 200 times stronger than Jupiter’s. This object is 20 light-years from Earth. It is on the boundary between being a planet and being a brown dwarf.
The Andromeda galaxy cannibalized and shredded the once-large galaxy M32p, leaving behind this compact galaxy remnant known as M32. It is completely unique and contains a wealth of young stars.
Twelve new moons were found around Jupiter. This graphic shows the different groups of moons and their orbits, with the newly discovered moons shown in bold.
Scientists and observatories around the world have been able to track a high-energy neutrino in a galaxy with a supermassive, rapidly rotating black hole in its center, known as a blazar. The galaxy sits to the left of Orion’s shoulder in its constellation and is nearly 4 billion light-years from Earth.
Planets do not simply appear out of thin air; they require gas, dust and other processes that astronomers do not fully understand. This is an artist’s impression of what “infant” planets look like forming around a young star.
These negative images of 2015 BZ509, which is circled in yellow, show the first known interstellar object to become a permanent part of our solar system. The exo-asteroid was likely captured by our solar system from another star system 4.5 billion years ago. It then settled into a retrograde orbit around Jupiter.
A close look at the diamond matrix in a meteorite that landed in Sudan in 2008. It is considered the first evidence of a proto-planet that helped form the terrestrial planets in our solar system.
2004 EW95 is the first carbon-rich asteroid that has been confirmed to exist on the Kuiper Belt and is a relic of the primordial solar system. This peculiar object was probably formed in the asteroid belt between Mars and Jupiter before sinking billions of miles into its current home on the Kuiper Belt.
The NASA / ESA Hubble Space Telescope celebrated its 28th anniversary in space with its stunning and colorful image of the Lagoon Nebula 4,000 light-years from Earth. While the entire nebula is 55 light-years in total, this image shows only a fraction of about four light-years.
This is a more star-studded view of the Lagoon Nebula, using Hubble’s infrared capabilities. The reason you can see more stars is because infrared is able to cut through dust and gas clouds to show the abundance of both young stars within the nebula, as well as more distant stars in the background.
The Rosette Nebula is 5,000 light-years from Earth. The distinctive nebula, which some claim looks more like a skull, has a hole in the middle that creates the illusion of its rose-like shape.
This inner slope of a Martian crater has numerous seasonal dark streaks called “recurrent slope lineae,” or RSL, which a November 2017 report interpreted as being caused by flowing grains of sand rather than darkening due to flowing water. The image is from the HiRISE camera on NASA’s Mars Reconnaissance Orbiter.
This artist’s impression shows a supernova explosion with the brightness of 100 million suns. The supernova iPTF14hls, which has exploded multiple times, may be the most massive and longest-lasting ever observed.
This illustration shows hydrocarbon compounds splitting into carbon and hydrogen inside ice giants such as Neptune, where the carbon turns into a “diamond (rain) shower.”
This stunning image is the stellar nursery in the Orion Nebula, where stars are born. The red filament is a stretch of ammonia molecules about 50 light-years long. Blue represents the gas of the Orion Nebula. This image is a composite of observations from the Robert C. Byrd Green Bank Telescope and NASA’s Wide-field Infrared Survey Explorer telescope. “We still don’t understand in detail how large clouds of gas in our Galaxy collapse to form new stars,” said Rachel Friesen, one of the collaboration’s co-principal investigators. “But ammonia is an excellent tracer of dense, star-forming gas.”
This is what the Earth and the moon look like from Mars. The image is a composite of the best Earth image and the best moon image, taken on November 20, 2016, by NASA’s Mars Reconnaissance Orbiter. The orbiter’s camera takes images in three wavelength bands: infrared, red and blue-green. Mars was about 127 million miles from Earth when the images were captured.
PGC 1000714 was initially thought to be a common elliptical galaxy, but a closer analysis revealed the incredibly rare discovery of a Hoag-type galaxy. It has a round core surrounded by two detached rings.
NASA’s Cassini spacecraft took these images of the planet’s mysterious hexagon-shaped jet stream in December 2016. The hexagon was discovered in images taken by the Voyager spacecraft in the early 1980s. It is estimated to be wider than two Earths.
A dead star gives off a greenish glow in this Hubble Space Telescope image of the Crab Nebula, located about 6,500 light-years from Earth in the constellation Taurus. NASA released the image for Halloween 2016 and played up the theme in its press release. The agency said the “ghoulish-looking object still has a pulse.” At the center of the Crab Nebula is the crushed core, or “heart,” of an exploded star. The heart spins 30 times per second and produces a magnetic field that generates 1 trillion volts, NASA said.
Peering through the thick dust clouds of the galactic bulge, an international team of astronomers revealed the unusual mix of stars in the stellar cluster known as Terzan 5. The new results indicate that Terzan 5 is one of the bulge’s primordial building blocks, most likely the relic of the very early days of the Milky Way.
An artist’s conception of Planet Nine, which would be the most distant planet within our solar system. The similar clustering of the orbits of extreme objects on the edge of our solar system suggests a massive planet is located there.
An illustration of the orbits of the new and previously known extremely distant Solar System objects. The clustering of most of their orbits indicates that they are likely influenced by something massive and very distant, the proposed Planet X.
Say hello to the dark galaxy Dragonfly 44. Like our Milky Way, it has a halo of spherical clusters of stars around it.
A classical nova occurs when a white dwarf star gains matter from its secondary star (a red dwarf) over a period of time, causing a thermonuclear reaction on the surface that eventually erupts in a single visible outburst. This creates a 10,000-fold increase in brightness, depicted here in an artist’s rendering.
Gravitational lensing and the warping of space are visible in this image of near and distant galaxies captured by Hubble.
At the center of our galaxy, the Milky Way, researchers discovered an X-shaped structure within a tightly packed group of stars.
Meet UGC 1382: What astronomers thought was a normal elliptical galaxy (left) was actually revealed to be a massive disc galaxy made up of different parts when viewed with ultraviolet and deep optical data (center and right). In a complete reversal of normal galaxy structure, the center is younger than its outer spiral disk.
NASA’s Hubble Space Telescope captured this image of the Crab Nebula and its “beating heart,” which is a neutron star to the right of the two bright stars in the center of this image. The neutron star pulses 30 times a second. The rainbow colors are visible due to the movement of materials in the nebula occurring during the time-lapse of the image.
The Hubble Space Telescope captured an image of a hidden galaxy that is fainter than Andromeda or the Milky Way. This low surface brightness galaxy, called UGC 477, is over 110 million light-years away in the constellation Pisces.
On April 19, NASA released new images of bright craters on Ceres. This photo shows the Haulani Crater, which has evidence of landslides from its rim. Scientists believe some craters on the dwarf planet are bright because they are relatively new.
This illustration shows the millions of dust grains NASA’s Cassini spacecraft has sampled near Saturn. A few dozen of them appear to have come from beyond our solar system.
This image from the VLT Survey Telescope at ESO’s Paranal Observatory in Chile shows a stunning concentration of galaxies known as the Fornax Cluster, found in the Southern Hemisphere. At the center of this cluster, in the middle of the three bright blobs on the left side of the image, lies a cD galaxy – a galactic cannibal that has grown in size by consuming smaller galaxies.
This image shows the central region of the Tarantula Nebula in the Large Magellanic Cloud. The young and dense star cluster R136, which contains hundreds of massive stars, is visible in the lower right of the image taken by the Hubble Space Telescope.
In March 2016, astronomers published a paper on powerful red flashes coming from the binary system V404 Cygni in 2015. This illustration shows a black hole, similar to the one in V404 Cygni, devouring material from an orbiting star.
This image shows the elliptical galaxy NGC 4889, deeply embedded within the Coma galaxy cluster. There is a gigantic supermassive black hole at the center of the galaxy.
An artist’s impression of 2MASS J2126, which takes 900,000 years to orbit its star, from which it is 1 trillion kilometers away.
Caltech researchers found evidence of a giant planet tracing a bizarre, highly elongated orbit in the outer solar system. The object, nicknamed Planet Nine, has a mass about 10 times that of Earth and orbits about 20 times farther from the sun on average than Neptune.
An artist’s impression of what a black hole might look like. In February, researchers in China said they had spotted a super-massive black hole 12 billion times the size of the sun.
Are there oceans on any of Jupiter’s moons? The Juice probe shown in this artist’s impression aims to find out. Picture courtesy of ESA/AOES
Astronomers have discovered powerful auroras on a brown dwarf that is 20 light-years away. This is an artist’s concept of the phenomenon.
Venus, bottom, and Jupiter shine brightly above Matthews, North Carolina, on Monday, June 29. The apparent close encounter, called a conjunction, has been giving a dazzling display in the summer sky. Although the two planets appear to be close together, in reality they are millions of miles apart.
Jupiter’s icy moon Europa may be the best place in the solar system to look for extraterrestrial life, according to NASA. The moon is about the size of Earth’s moon, and there is evidence it has an ocean beneath its frozen crust that may hold twice as much water as Earth. NASA’s 2016 budget includes a request for $30 million to plan a mission to investigate Europa. The image above was taken by the Galileo spacecraft on November 25, 1999. It’s a 12-frame mosaic and is considered the best image yet of the side of Europa that faces Jupiter.
This nebula, or cloud of gas and dust, is called RCW 34 or Gum 19. The brightest areas you can see are where the gas is being heated by young stars. Eventually the gas bursts outward like champagne after a bottle is uncorked. Scientists call this champagne flow. This new image of the nebula was captured by the European Southern Observatory’s Very Large Telescope in Chile. RCW 34 is in the constellation Vela in the southern sky. The name means “sails of a ship” in Latin.
The Hubble Space Telescope captured images of Jupiter’s three great moons — Io, Callisto, and Europa — passing by at once.
Using powerful optics, astronomers have found a planet-like body, J1407b, with rings 200 times the size of Saturn’s. This is an artist’s depiction of the rings of planet J1407b, which are eclipsing a star.
A patch of stars appears to be missing in this image from the La Silla Observatory in Chile. But the stars are actually still there behind a cloud of gas and dust called Lynds Dark Nebula 483. The cloud is about 700 light years from Earth in the constellation Serpens (The Serpent).
This is the largest Hubble Space Telescope image ever assembled. It’s a portion of the galaxy next door, Andromeda (M31).
NASA has captured a stunning new image of the so-called “Pillars of Creation,” one of the space agency’s most iconic discoveries. The giant columns of cold gas, in a small region of the Eagle Nebula, were popularized by a similar image taken by the Hubble Space Telescope in 1995.
Astronomers using the Hubble Space pieced together this picture that shows a small section of space in the southern-hemisphere constellation Fornax. Within this deep-space image are 10,000 galaxies, going back in time as far as a few hundred million years after the Big Bang.
Planetary nebula Abell 33 appears ring-like in this image, taken using the European Southern Observatory’s Very Large Telescope. The blue bubble was created when an aging star shed its outer layers and a star in the foreground happened to align with it to create a “diamond engagement ring” effect.
This Hubble image looks like a floating marble or maybe a giant, disembodied eye. But it’s actually a nebula with a giant star at its center. Scientists think the star used to be 20 times more massive than our sun, but it’s dying and is destined to go supernova.
Composite image of B14-65666 showing the distributions of dust (red), oxygen (green), and carbon (blue), observed by ALMA and stars (white) observed by the Hubble Space Telescope.
Artist’s impression of the merging galaxies B14-65666, located 13 billion light-years away. | https://newsfounded.com/astronomers-have-discovered-the-wolfe-disk-a-galaxy-that-should-not-exist-in-the-distant-universe/ |
Astronomers have gathered the most direct evidence yet of a supermassive black hole shredding a star that wandered too close. NASA's Galaxy Evolution Explorer, a space-based observatory, and the Pan-STARRS1 telescope on the summit of Haleakala in Hawaii were the first to the scene of the crime, helping to identify the stellar remains.
Supermassive black holes, weighing millions to billions times more than the Sun, lurk in the centers of most galaxies. These hefty monsters lay quietly until an unsuspecting victim, such as a star, wanders close enough to get ripped apart by their powerful gravitational clutches.
Astronomers have spotted these stellar homicides before, but this is the first time they can identify the victim. Using a slew of ground- and space-based telescopes, a team of astronomers led by Suvi Gezari of The Johns Hopkins University in Baltimore, Md., has identified the victim as a star rich in helium gas. The star resides in a galaxy 2.7 billion light-years away.
Her team's results will appear May 2 in the online edition of the journal Nature.
"When the star is ripped apart by the gravitational forces of the black hole, some part of the star's remains falls into the black hole, while the rest is ejected at high speeds. We are seeing the glow from the stellar gas falling into the black hole over time. We're also witnessing the spectral signature of the ejected gas, which we find to be mostly helium. It is like we are gathering evidence from a crime scene. Because there is very little hydrogen and mostly helium in the gas we detect from the carnage, we know that the slaughtered star had to have been the helium-rich core of a stripped star," Gezari explained.
This observation yields insights about the harsh environment around black holes and the types of stars swirling around them.
This is not the first time the unlucky star had a brush with the behemoth black hole. Gezari and her team think the star's hydrogen-filled envelope surrounding its core was lifted off a long time ago by the same black hole. In their scenario, the star may have been near the end of its life. After consuming most of its hydrogen fuel, it had probably ballooned in size, becoming a red giant. The astronomers think the bloated star was looping around the black hole in a highly elliptical orbit, similar to a comet's elongated orbit around the Sun. On one of its close approaches, the star was stripped of its puffed-up atmosphere by the black hole's powerful gravity. Only its core remained intact. The stellar remnant continued its journey around the black hole, until it ventured even closer to the behemoth monster and faced its ultimate demise.
Astronomers have predicted that stripped stars circle the central black hole of our Milky Way galaxy, Gezari pointed out. These close encounters, however, are rare, occurring roughly every 100,000 years. To find this one event, Gezari's team monitored hundreds of thousands of galaxies in ultraviolet light with NASA's Galaxy Evolution Explorer (GALEX), a space-based observatory, and in visible light with the Pan-STARRS1 telescope on the summit of Haleakala in Hawaii. Pan-STARRS, short for Panoramic Survey Telescope and Rapid Response System, scans the entire night sky for all kinds of transient phenomena, including supernovae.
The team was looking for a bright flare in ultraviolet light from the nucleus of a galaxy with a previously dormant black hole. They found one in June 2010, which was spotted with both telescopes. Both telescopes continued to monitor the flare as it reached peak brightness a month later and then slowly began to fade over the next 12 months. The brightening event was similar to that of a supernova, but the rise to the peak was much slower, taking nearly one and a half months.
"The longer the event lasted, the more excited we got, since we realized that this is either a very unusual supernova or an entirely different type of event, such as a star being ripped apart by a black hole," said team member Armin Rest of the Space Telescope Science Institute in Baltimore, Md.
Spectroscopic observations with the MMT (Multiple Mirror Telescope) Observatory on Mount Hopkins in Arizona showed that the black hole was swallowing lots of helium. Spectroscopy divides light into its rainbow colors, which yields an object's characteristics, such as its temperature and gaseous makeup.
"The glowing helium was a tracer for an extraordinarily hot accretion event," Gezari said. "So that set off an alarm for us. And, the fact that no hydrogen was found set off a big alarm that this was not typical gas. You can't find gas like that lying around near the center of a galaxy. It's processed gas that has to have come from a stellar core. There's nothing about this event that could be easily explained by any other phenomenon."
The observed speed of the gas also linked the material to a black hole's gravitational pull. MMT measurements revealed that the gas was moving at more than 20 million miles an hour (over 32 million kilometers an hour). However, measurements of the speed of gas in the interstellar medium reveal velocities of only about 224,000 miles an hour (360,000 kilometers an hour).
"The place we also see these kinds of velocities are in supernova explosions," Rest said. "But the fact that it is still shining in ultraviolet light is incompatible with any supernova we know."
"This is the first time where we have so many pieces of evidence, and now we can put them all together to weigh the perpetrator (the black hole) and determine the identity of the unlucky star that fell victim to it," Gezari said. "These observations also give us clues to what evidence to look for in the future to find this type of event."
The Space Telescope Science Institute (STScI) in Baltimore, Md., is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C. STScI conducts science operations for the Hubble Space Telescope and is the science and mission operations center for the James Webb Space Telescope.
The Pan-STARRS Project is being led by the University of Hawaii Institute for Astronomy, and exploits the unique combination of superb observing sites and technical and scientific expertise available in Hawaii. Funding for the development of the observing system has been provided by the United States Air Force Research Laboratory. The PS1 Surveys have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network, Incorporated, the National Central University of Taiwan, and the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate.
Nature Science Paper by: S. Gezari et al. | http://hubblesite.org/news_release/news/2012-18 |
Black hole found in Omega Centauri
Omega Centauri has been known to be an unusual globular cluster for a long time. A new result obtained by Hubble and the Gemini Observatory reveals that the globular cluster may have a rare intermediate-mass black hole hidden in its centre, implying that it is likely not a globular cluster at all, but a dwarf galaxy stripped of its outer stars.
Omega Centauri, the largest and brightest globular cluster in the sky, is visible from Earth with the naked eye and is one of the favourite celestial objects for stargazers from the southern hemisphere. Although 17 000 light-years away, it is located just above the plane of the Milky Way and appears almost as large as the full Moon when seen from a dark, rural area.
Images obtained with the Hubble Space Telescope’s Advanced Camera for Surveys and data from the GMOS spectrograph on the Gemini South telescope in Chile show that Omega Centauri appears to harbour an elusive and rare intermediate-mass black hole in its centre.
“This result shows that there is a continuous range of masses for black holes - from supermassive, to intermediate-mass, to small stellar mass types”, explained astronomer Eva Noyola of the Max-Planck Institute for Extraterrestrial Physics in Garching, Germany, leader of the team that made the discovery.
Noyola and colleagues measured the motions and brightness of the stars at the centre of Omega Centauri. The measured velocities of the stars in the centre are related to the total mass of the cluster and were far higher than expected from the mass deduced from the number and type of stars seen. So, there had to be something extraordinarily massive (and invisible) at the centre of the cluster responsible for the fast-swirling dance of stars — almost certainly a black hole of about 40 000 solar masses.
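As a rough illustration of how such velocity measurements translate into an enclosed mass, the standard dynamical estimate M ≈ v²r/G can be applied. The sketch below is not the team's actual analysis; the stellar speed (30 km/s) and radius (0.2 parsec) are assumed, illustrative values chosen only to show that numbers of this kind land near the quoted 40 000 solar masses.

```python
# Rough dynamical mass estimate: M ~ v^2 * r / G
# The speed and radius below are illustrative assumptions, not measured values.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PARSEC = 3.086e16  # metres

v = 30e3           # assumed stellar speed near the cluster centre, m/s
r = 0.2 * PARSEC   # assumed radius of the region probed, m

mass = v**2 * r / G
print(f"Enclosed mass ~ {mass / M_SUN:,.0f} solar masses")  # roughly 40,000
```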
“Before this observation, we had only one example of an intermediate-mass black hole — in the globular cluster G1, in the nearby Andromeda Galaxy”, said Karl Gebhardt at the University of Texas, Austin, USA, member of the team that made the discovery. According to Noyola the presence of an intermediate-mass black hole is the most likely reason for the stellar speedway near the cluster’s centre. These intermediate-mass black holes could even be the seeds to create supermassive black holes. This provides astronomers with important new clues to understand one of the possible formation mechanisms of these galactic monsters.
This discovery also has important implications on the very nature of Omega Centauri. Globular clusters contain up to one million old stars bound tightly by gravity and are found in the outskirts of many galaxies including our own. Omega Centauri already has several characteristics that distinguish it from other globular clusters: it rotates faster than average, its shape is highly flattened and it consists of several generations of stars, while typical globular clusters usually consist of just one generation of old stars. It is also about 10 times as massive as other big globular clusters, almost as massive as a small galaxy.
The fact that intermediate-mass black holes may be rare and exist only in former dwarf galaxies that have been stripped of their outer stars reinforces the idea that Omega Centauri is not a globular cluster but indeed a dwarf galaxy stripped of its outer stars in an earlier encounter with the Milky Way.
Notes for editors:
The European Southern Observatory’s Very Large Telescope in Paranal, Chile, will be used to conduct follow-up observations of the velocity of the stars near the cluster’s centre to confirm the discovery.
The finding will be published in the 10 April issue of the Astrophysical Journal in a paper titled ‘Gemini and Hubble Space Telescope Evidence for an Intermediate Mass Black Hole in Omega Centauri’ by Eva Noyola (Max Planck Institute for Astrophysics in Germany & University of Texas, Austin, USA), Karl Gebhardt (University of Texas, Austin) and Marcel Bergmann (Gemini Observatory).
The Hubble Space Telescope is a project of international cooperation between NASA and ESA.
For more information: | https://www.esa.int/Science_Exploration/Space_Science/Black_hole_found_in_Omega_Centauri |
Black hole image of Sagittarius A* revealed by scientists from the center of the Milky Way
Scientists on Thursday got the first glimpse of the monster lurking at the center of our Milky Way, revealing an image of a supermassive black hole devouring any matter that wanders within its gargantuan gravitational pull.
The black hole – dubbed Sagittarius A*, or Sgr A* – is only the second ever pictured. The feat was accomplished by the same international collaboration with the Event Horizon Telescope (EHT) that revealed in 2019 the first-ever photo of a black hole – located at the heart of another galaxy.
Sagittarius A* has 4 million times the mass of our Sun and is located about 26,000 light years – the distance that light travels in one year, 5.9 trillion miles (9.5 trillion km) – from Earth.
Black holes are exceptionally dense objects with gravity so strong that not even light can escape, making observing them quite difficult. A black hole’s event horizon is the point of no return, beyond which everything – stars, planets, gas, dust, and all forms of electromagnetic radiation – fade into oblivion.
Project scientists have been looking for a ring of light – superheated, disrupted matter and radiation spinning at tremendous speed at the edge of the event horizon – around a dark region that represents the actual black hole. This is known as the shadow or silhouette of the black hole.
The Milky Way is a spiral galaxy with at least 100 billion stars. Viewed from above or below, it resembles a spinning pinwheel, with our Sun on one of the spiral arms and Sagittarius A* at the center.
The 2019 image of the supermassive black hole in a galaxy called Messier 87, or M87, showed a glowing ring of red, yellow, and white surrounding a dark center. More distant and massive than Sagittarius A*, black hole M87 is about 54 million light-years from Earth and has a mass 6.5 billion times that of our Sun.
The researchers said that Sagittarius A*, although much closer to our solar system than M87, is harder to image.
The diameter of Sagittarius A* is about 17 times the diameter of the Sun, meaning it would be within the solar orbit of the innermost planet Mercury. In contrast, the diameter of M87 would encompass our entire solar system.
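A quick way to sanity-check the quoted size is the Schwarzschild radius, R_s = 2GM/c². The sketch below applies that textbook formula to the roughly 4-million-solar-mass figure given above; treat it as an order-of-magnitude check, not the EHT measurement itself.

```python
# Schwarzschild radius R_s = 2*G*M / c^2 for a ~4-million-solar-mass black hole
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
D_SUN = 1.392e9     # diameter of the Sun, m

mass = 4.0e6 * M_SUN
r_s = 2 * G * mass / C**2
print(f"Event-horizon diameter ~ {2 * r_s / D_SUN:.0f} solar diameters")  # ~17
```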
“Sagittarius A* is over a thousand times less massive than the black hole at M87, but because it’s in our own galaxy it’s much closer and should appear only slightly larger in the sky,” said radio astronomer Lindy Blackburn, an EHT data scientist at the Harvard-Smithsonian Center for Astrophysics.
“But the smaller physical size of Sgr A* also means that everything changes about a thousand times faster for Sgr A* than for M87. We also have to look through the messy disk of our own galaxy to see Sgr A*, which blurs and distorts the image,” Blackburn added.
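Blackburn's point about apparent size can be checked with simple arithmetic: the angular size of a black hole's shadow scales roughly as its mass divided by its distance. Using only the figures quoted earlier in this article, a short sketch:

```python
# Apparent shadow size scales roughly as mass / distance.
# Mass and distance figures are the ones quoted in the article above.
sgr_a_mass, sgr_a_dist = 4.0e6, 26_000            # solar masses, light-years
m87_mass, m87_dist = 6.5e9, 54_000_000            # solar masses, light-years

ratio = (sgr_a_mass / sgr_a_dist) / (m87_mass / m87_dist)
print(f"Sgr A* shadow appears ~{ratio:.1f}x the size of M87's")  # ~1.3x
```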
The Event Horizon Telescope is a global network of observatories working together to observe radio sources associated with black holes. The project was launched in 2012 to try to directly observe the immediate vicinity of a black hole.
There are different categories of black holes. The smallest are the so-called stellar-mass black holes, formed by the collapse of individual massive stars at the end of their life cycles. There are also intermediate-mass black holes, a step up in mass. And finally, there are the supermassive black holes that inhabit the centers of most galaxies. These are thought to form relatively soon after their galaxies form, consuming enormous amounts of material to reach colossal sizes.
Thursday’s announcement came in simultaneous news conferences in the United States, Germany, China, Mexico, Chile, Japan and Taiwan. | https://internetcloning.com/black-hole-image-of-sagittarius-a-revealed-by-scientists-from-the-center-of-the-milky-way/ |
A hitherto undiscovered black hole announced its presence to astronomers when it ripped apart and devoured a star that wandered too close to it.
The intermediate-mass black hole, located in a dwarf galaxy hundreds of millions of light-years from Earth, shredded the star in an occurrence that astronomers call a tidal disruption event (TDE). The TDE made itself visible when it blasted out a flare of radiation so powerful that it briefly outshone every star in its dwarf galaxy home combined.
Artwork depicting a tidal disruption event (TDE). TDEs are caused when a star passes close to a supermassive black hole and gets torn apart by the latter's gravity. The debris forms a fan-shaped pattern around the black hole before eventually falling in. (Image credit: Mark Garlick/Science Photo Library/Getty Images)
This TDE could help scientists better understand the relationship between galaxies and the black holes within them. It also provides astronomers with another intermediate black hole to study. “This discovery has created widespread excitement because we can use tidal disruption events not only to find more intermediate-mass black holes in quiet dwarf galaxies but also to measure their masses,” research co-author and UC Santa Cruz (UCSC) astronomer Ryan Foley said in a statement.
The TDE flare — designated AT 2020neh — was first observed by astronomers using the Young Supernova Experiment (YSE), an astronomical survey that detects short-lived cosmic events like supernova explosions, as the black hole first began to devour the star.
The observation of this initial moment of destruction was vital in allowing an international team, led by UCSC scientists and research first author and Niels Bohr Institute astronomer Charlotte Angus, to measure the mass of the black hole, finding it to be between around 100,000 and 1 million times the mass of the sun.
TDEs have been successfully used to measure the mass of supermassive black holes in the past, but this is the first time they have been shown to work in documenting the masses of smaller midsized intermediate-mass black holes.
That means that the initial sighting of the incredibly fast AT 2020neh flare could provide a baseline for measuring midsized black hole masses in the future.
“The fact that we were able to capture this midsize black hole whilst it devoured a star offered us a remarkable opportunity to detect what otherwise would have been hidden from us,” Angus said. “What is more, we can use the properties of the flare itself to better understand this elusive group of middle-weight black holes, which could account for the majority of black holes in the centers of galaxies.”
Astronomers discovered a star being ripped apart by a black hole in the galaxy SDSS J152120.07+140410.5, 850 million light years away. Researchers pointed NASA’s Hubble Space Telescope to examine the aftermath, called AT 2020neh, which is shown in the center of the image. Hubble’s ultraviolet camera saw a ring of stars being formed around the nucleus of the galaxy where AT 2020neh is located. (Image credit: NASA, ESA, RYAN FOLEY/UC SANTA CRUZ)
This midsized class of black holes have a mass range of between 100 and 100,000 times that of the sun, making them significantly more massive than stellar-mass black holes but much smaller than the supermassive black holes that sit at the heart of most galaxies, including the Milky Way.
Physicists have long suspected that supermassive black holes, which can have masses as great as millions or even billions of times that of the sun, grow to these tremendous masses as the result of the merger of intermediate-mass black holes.
One theory regarding the mechanism that could facilitate this growth suggests the early universe was rich with dwarf galaxies possessing intermediate black holes.
As these dwarf galaxies merged or were swallowed by larger galaxies the intermediate black holes within them cannibalized each other, thus growing in mass. This chain process of increasingly larger mergers would eventually lead to the supermassive black hole titans that sit at the heart of most galaxies today.
“If we can understand the population of intermediate-mass black holes out there — how many there are and where they are located — we can help determine if our theories of supermassive black hole formation are correct,” co-author and UCSC professor of astronomy and astrophysics, Enrico Ramirez-Ruiz said.
One question that remains regarding this theory of black hole growth is whether all dwarf galaxies have their own intermediate-mass black hole. This is difficult to answer because black holes trap light behind an outer boundary called the event horizon, which makes them effectively invisible unless they are feeding on surrounding gas and dust, or ripping up stars in TDEs.
Astronomers can use other methods such as looking at the gravitational influence of stars that orbit them to infer the presence of black holes. These detection methods are currently not sensitive enough to be applied to distant black holes in the centers of dwarf galaxies, however.
As a result, few intermediate-mass black holes have been tracked down to dwarf galaxies. That means by detecting and measuring mid-sized black holes TDE flares like AT 2020neh could be a vital tool in settling the debate surrounding supermassive black hole growth.
The team’s research was published on Nov. 10 in the journal Nature Astronomy. | https://medianews48.com/black-hole-announces-itself-to-astronomers-by-violently-ripping-apart-a-star/ |
Using a global network of telescopes to see the “invisible”, an international scientific team announced on Wednesday a milestone in astrophysics – the very first picture of a black hole – an achievement that validated a pillar of science advanced by Albert Einstein over a century ago.
Black holes are monstrous celestial entities with gravitational fields so vicious that any matter or light can escape. The photo of the black hole in the centre of Messier 87, or M87, a massive galaxy in the relatively nearby Virgo galaxy cluster, shows a bright red, yellow and white ring surrounding a dark centre.
The research was conducted as part of the Event Horizon Telescope (EHT) project, an international collaboration that began in 2012 to attempt to directly observe the immediate environment of a black hole using a global network of terrestrial telescopes. The announcement was made at simultaneous press conferences in Washington, Brussels, Santiago, Shanghai, Taipei and Tokyo.
The team’s observations strongly validated the general relativity theory proposed in 1915 by Einstein, the famous theoretical physicist, to explain the laws of gravity and their relationship to other natural forces.
“We accomplished something that was thought impossible just a generation ago,” said astrophysicist Shepherd Doeleman, director of the Event Horizon Telescope at the Center for Astrophysics, Harvard & Smithsonian.
Black holes, celestial entities of phenomenal density, are extraordinarily difficult to observe by their very nature despite their large mass. The event horizon of a black hole is the point of no return beyond which everything – stars, planets, gas, dust and all forms of electromagnetic radiation – is swallowed up in oblivion.
The fact that black holes do not allow light to escape makes it difficult to observe them. Scientists are looking for a ring of light – hot disturbed matter and radiation that rotates at a dazzling speed on the edge of the event’s horizon – around a region of darkness representing the real black hole. This is called the shadow or silhouette of the black hole.
Scientists have said that Einstein’s theory predicted that the shape of the shadow would almost be a perfect circle – as it turned out to be.
Astrophysicist Dimitrios Psaltis of the University of Arizona, the EHT project scientist, said: “The size and shape of the shadow match the precise predictions of Einstein’s general theory of relativity, increasing our confidence in this century-old theory.”
“Imaging a black hole is only the beginning of our efforts to develop new tools that will allow us to interpret the extremely complex data that nature provides us,” added Psaltis.
“Science fiction has become a fact of science,” said Daniel Marrone, professor of astronomy at the University of Arizona.
The project researchers obtained the first data in April 2017 using radio telescopes in the American states of Arizona and Hawaii, as well as in Mexico, Chile, Spain and Antarctica. Since then, telescopes in France and Greenland have been added to the global network. The global network has essentially created an observation antenna the size of a planet.
The project also targeted another black hole – Sagittarius A*, located in the centre of our own Milky Way galaxy – but did not announce any photos of it, although scientists expressed optimism that such an image would be obtained. Sagittarius A* has 4 million times the mass of our sun and is 26,000 light years from Earth.
According to theory, there could be three types of black holes: stellar, supermassive and miniature black holes – depending on their mass. These black holes would have formed in different ways.
Stellar black holes form when a massive star collapses. Supermassive black holes, which can have a mass equivalent to billions of suns, probably exist in the centres of most galaxies, including our own Milky Way galaxy. We do not know exactly how supermassive black holes form, but they are likely a by-product of galaxy formation. Because of their location in the centre of galaxies, close to many tightly packed stars and gas clouds, supermassive black holes continue to grow on a steady diet of material.
No one has ever discovered a miniature black hole, which would have a much smaller mass than that of our Sun. But it is possible that miniature black holes may have formed shortly after the “Big Bang”, which is thought to have begun the universe 13.7 billion years ago. Very early in the life of the universe, the rapid expansion of part of the matter may have compressed the matter slowly enough to contract into black holes.
Another division separates black holes that rotate (have an angular momentum) from those that do not. | https://azamshafiul.com/scientists-release-first-photo-of-black-hole/ |
Studying what they first took to be a supernova explosion, astronomers have instead concluded they were witnessing the aftermath of a supermassive black hole at the core of a colliding galaxy consuming a nearby star, ripping it apart and ejecting a powerful jet of material. It is the first direct observation of a jet forming in the wake of a stellar “death by black hole,” a cataclysm known as a tidal disruption event, or TDE.
“Never before have we been able to directly observe the formation and evolution of a jet from one of these events,” said Miguel Perez-Torres, an astronomer at the Astrophysical Institute of Andalusia in Granada, Spain. “Tidal disruption events can provide us with a unique opportunity to advance our understanding of the formation and evolution of jets in the vicinities of these powerful objects.”
The event occurred in the core of a galaxy in the process of colliding with another, nearly 150 million light-years from Earth. The colliding galaxies are known as Arp 299, and the black hole in question is 20 million times more massive than the Sun.
Using the William Herschel telescope in the Canary Islands, astronomers first detected a bright infrared emission from the nucleus of one galaxy in January 2005. Six months later, radio emissions were detected at the same location.
“As time passed, the new object stayed bright at infrared and radio wavelengths, but not in visible light and X-rays,” said Seppo Mattila of the University of Turku in Finland. “The most likely explanation is that thick interstellar gas and dust near the galaxy’s center absorbed the X-rays and visible light, then re-radiated it as infrared.”
Observations over the next decade using the Very Long Baseline Array, the European VLBI Network and other radio telescopes found radio emissions expanding in one direction, indicating a jet of material moving about one quarter the speed of light. Astronomers eventually concluded they were, in fact, seeing an evolving jet from the black hole as it consumed a nearby star.
“Because of the dust that absorbed any visible light, this particular tidal disruption event may be just the tip of the iceberg of what until now has been a hidden population” Mattila said. “By looking for these events with infrared and radio telescopes, we may be able to discover many more, and learn from them.”
A team of 36 scientists from 26 institutions around the world participated in the observations of Arp 299. Their findings were published in the 14 June issue of the journal Science. | https://astronomynow.com/2018/06/15/astronomers-see-aftermath-of-black-hole-lunching-on-doomed-star/ |
A galaxy cluster that belongs to the Abell catalogue. This is a listing of more than 4,000 galaxy clusters that meet certain criteria, one of which is having at least 30 galaxies within a set magnitude range. The catalogue is divided into five groups of richness according to how many galaxies the cluster contains. Class 0 clusters contain between 30 and 49 galaxies, and class 5 clusters contain more than 299 galaxies.
The apparent magnitude (the brightness perceived by an observer) a celestial body would have if it was 10 parsecs (2 million AU) from Earth. It’s a way of directly comparing the brightnesses of stars that are at different distances from Earth.
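This definition is usually expressed through the distance modulus, m − M = 5 log10(d / 10 pc). The snippet below is a minimal sketch of that standard formula, using the Sun's apparent magnitude of −26.74 at 1 AU (roughly 4.85 × 10⁻⁶ parsecs) as a familiar check.

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return apparent_mag - 5 * math.log10(distance_pc / 10.0)

# Check with the Sun: apparent magnitude -26.74 at ~4.85e-6 parsecs
print(round(absolute_magnitude(-26.74, 4.848e-6), 2))  # ~4.83
```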
This is the coldest temperature theoretically possible (-273.15 degrees Celsius), where the motion of atoms in a material would stop completely, leaving them with only a small amount of quantum mechanical energy.
The process by which a celestial object increases its mass by collecting matter from surrounding gas and objects, due to the attraction of its gravity.
A disc of interstellar material around a celestial object, such as a star or a black hole, that has formed by matter being attracted by its gravitational pull.
An achromatic lens is one that has been designed to reduce chromatic aberration of light passing through it.
The centre of a galaxy, which emits large amounts of energy as electromagnetic radiation. Such objects are thought to be powered by matter falling on to a supermassive black hole.
An advanced material used by the Stardust spacecraft to capture small particles of cometary dust. It is 99.8 per cent air and was effective at slowing down the cometary particles gently so that they weren’t damaged.
The technique of imaging through a camera lens held up to the eyepiece of a telescope. It is used for cameras with non-removable lenses.
This is a measure of how reflective a surface is – the ratio of the amount of electromagnetic radiation (like visible light and infrared) reflected by a surface, to the amount that falls on it.
These are regions on the surfaces of planets that contrast in brightness with nearby areas.
These are transverse waves that travel through electrically conducting fluids or gases in which a magnetic field is present, like the Sun’s plasma.
This has a lens (or element of its design) that enables it to image the majority, if not all, of the sky.
A type of telescope mount that is simpler to construct than an equatorial mount, but requires simultaneous movement about the vertical (altitude) and horizontal (azimuth) axes to track a celestial object.
This is the (usually pink/red) glow seen on the edges of some DSLR images caused by infrared radiation from the camera’s amplifier.
Observed every day from the same location at the same time, the Sun follows a figure-of-eight path through the sky. Known as an analemma, this pattern is due to the tilt of the Earth’s axis as it orbits the Sun.
A unit of measurement equal to just 10^-10m (0.1 nanometre), commonly used to express the length of light waves in the visible spectrum.
This describes a type of telescope that has been designed so that its optics show no signs of spherical aberration or coma.
A telescope that is almost completely free from chromatic aberrations. These are caused in some telescopes by different wavelengths being brought to focus at different distances.
A telescope that uses three or more lenses to bring red, green and blue light to focus at the same point.
The period during which a planet is best placed for observation.
Bodies in an elliptical orbit all reach a point when they are furthest or closest to their parent object, eg a planet around a star. Apsides is a collective term for these points. Earth’s apsides are its perihelion and aphelion.
A unit of measurement in astronomy that is equivalent to 1/1800th of the angular diameter of the Full Moon.
A small unit of angular measurement, spanning one-sixtieth of a degree; an arcsecond is one-sixtieth of an arcminute. Astronomers measure the separation between stars in the night sky in terms of degrees.
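A quick arithmetic check ties these two entries together, assuming the Full Moon's apparent diameter is roughly half a degree (a round, approximate value):

```python
# Degrees -> arcminutes -> arcseconds, checked against the Full Moon (~0.5 deg)
moon_diameter_deg = 0.5                # assumed round value
arcminutes = moon_diameter_deg * 60    # 30 arcminutes
arcseconds = arcminutes * 60           # 1800 arcseconds
print(arcminutes, arcseconds)          # so 1 arcsecond ~ 1/1800 of the Full Moon
```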
This is simply an artificial point of light used to test the optics of a telescope in the absence of, or instead of, a suitable real star for collimation and other adjustments.
A pattern formed by stars that aren’t necessarily in the same constellation, for example the Plough and the Summer Triangle.
An irregular rocky and or metallic body left over from the formation of the Solar System. Their size ranges from tens of metres up to at least 1000km. Their composition varies generally based on their location in the Solar System. Most of the asteroids in our Solar System lie in a ‘belt’ between Mars and Jupiter. Nevertheless some of the gas giants shepherd asteroids and even hold them in their own orbits (as in the case of Jupiter).
The study of the internal structure of stars by analysis of the way they pulsate.
A relatively new branch of science that investigates the conditions necessary for life elsewhere in the Universe, and how we can detect it if it exists.
An emerging branch of science concerned with the possibility of life in space and the origins of life on Earth. Also known as exobiology.
A photographic telescope. An astrograph was used by Clyde Tombaugh to discover Pluto in 1930.
The study of the precise position and movement of stars, which has led to numerous discoveries of extrasolar planets. The gravitational attraction of a planet causes its star to ‘wobble’, which can then be detected.
One AU is a unit of distance equivalent to roughly 150,000,000 km.
An auroral corona usually appears during energetic auroral displays. It is seen as rays of auroral emission coming straight at you, with perspective often making them look as if they are emanating from a single point in the sky.
A technique for observing faint objects through a telescope by viewing slightly to the side, allowing light from the object to fall on an area of the eye more sensitive to faint light.
Azimuth is a horizontal measurement used to locate the position of an object in the sky. Azimuth is measured clockwise from north, and spans 360° around you. | http://cdn.skyatnightmagazine.com/dictionary/A |
Why do galaxies collide?
Sound waves ahead of a moving object are compressed and reach our ears at a higher frequency, while sound waves behind it are stretched out and sound deeper in tone. This is the Doppler effect. The same principle applied to light supports the Big Bang theory: the redshift of light from distant galaxies reveals that the Universe is expanding, and changes in the rate of expansion are tracked through redshift measurements. The Hubble expansion of the Universe indicates that space is expanding and that the galaxies are moving apart from each other.
If the Universe is expanding and the galaxies are moving farther apart, is there any possibility for galaxies to collide with one another? One view is that galaxies collide when they are drawn together by their mutual gravitational pull. On its own, though, this explanation is incomplete: if gravity between all the matter were the whole story, it could have pulled the Universe back together after the Big Bang, before the galaxies had even formed. It can also be argued that the trajectories objects acquired as the Universe expanded keep most of them moving away from one another at any given time.
If the gravitational-pull idea is correct, then galaxies must be moving fast enough that most of them never fall into each other. Even as space expands, local gravity can hold groups of galaxies together and overcome the overall expansion of the Universe. Some galaxies, like Andromeda and a few other nearby ones, do not take part in the general expansion because they are bound together by their mutual gravity. Objects that are far apart in space recede from each other quickly, while those that are close together recede slowly, which allows nearby objects to approach one another. In this way galaxies may collide and form new galaxies: galaxies meet and fuse over very long periods of time to form a larger galaxy. On our time scale, when two galaxies collide, essentially nothing happens to the individual celestial bodies within them. | https://www.knowswhy.com/why-do-galaxies-collide/
When did galaxies start forming?
Based on cosmic microwave background data, astronomers think matter coalesced when the universe cooled and became “transparent” 380,000 years after the Big Bang. And according to recent studies, structures like stars and galaxies formed as early as 200 million years after the Big Bang.
How do scientists think galaxies formed?
Galaxies are thought to begin as small clouds of stars and dust swirling through space. As other clouds get close, gravity sends these objects careening into one another and knits them into larger spinning packs.
When did galaxies and stars start forming?
The first stars did not appear until perhaps 100 million years after the big bang, and nearly a billion years passed before galaxies proliferated across the cosmos.
How did the early stars and galaxies start forming?
The very first stars likely formed when the Universe was about 100 million years old, prior to the formation of the first galaxies. This started the cosmic chemical enrichment that led to the formation of the stars that we see in the Milky Way today, to rocky planets and eventually humans.
What is the first galaxy in the universe?
As of May 5, 2015, the galaxy EGS-zs8-1 is the most distant and earliest galaxy measured, forming 670 million years after the Big Bang. The light from EGS-zs8-1 has taken 13 billion years to reach Earth, and is now 30 billion light-years away, because of the expansion of the universe during 13 billion years.
Are most galaxies Old or new?
Most galaxies are between 10 billion and 13.6 billion years old.
Do all galaxies have black holes?
Most galaxies of comparable sizes that are active have much larger black holes. Andromeda, which is at most about twice the mass of the Milky Way, has a black hole that’s more like ~80-100 million solar masses. Many other galaxies have black holes reaching into the billions or even tens of billions of solar masses.
Are stars still forming in the Milky Way?
There are new Stars Forming Near the Core of the Milky Way Despite the Harsh Environment. The central core of our galaxy is not a friendly place for star formation, and yet new observations have revealed almost four dozen newly-forming systems.
What is the first star at night?
Venus
Why is Venus called “the Morning Star” or “the Evening Star?” Venus shines so brightly that it is the first “star” to appear in the sky after the Sun sets, or the last to disappear before the Sun rises. Its orbital position changes, thus causing it to appear at different times of the night throughout the year.
Who found the first galaxy?
Charles Messier
The first galaxies were identified in the 18th Century by the French astronomer Charles Messier, although at the time he did not know what they were. Messier, who was a keen observer of comets, spotted a number of other fuzzy objects in the sky which he knew were not comets.
How old is the oldest galaxy in the universe?
The oldest known galaxy in existence remains GN-z11, which formed around 400 million years after the Big Bang, as previously reported by Live Science’s sister site Space.com.
What was the first Galaxy?
First discovered spiral galaxy: the Whirlpool Galaxy (M51) was the first celestial object ever to be identified as being a spiral. The discovery was made by William Parsons, Third Earl of Rosse (Ireland), in 1845.
How old is the Milky Way?
The Milky Way is estimated to be 13.2 billion years old, and is one of many billions of galaxies in the known universe. Other galaxies may be older and bigger, but as Earth’s cosmic address, the Milky Way has long fascinated humans. It was recognized by astronomers thousands of years ago,…
How are galaxies born?
There are two main ideas about galaxy formation. One says that galaxies were born when vast clouds of gas and dust collapsed under their own gravitational pull, allowing stars to form. The other, which has gained strength in recent years, says the young universe contained many small “lumps” of matter, which clumped together to form galaxies.
How are galaxies created?
Computer models that scientists have made to understand galaxy formation indicate that galaxies are created when dark matter merges and clumps together. Dark matter is an invisible form of matter whose total mass in the universe is roughly five times that of “normal” matter (i.e., atoms). | https://short-fact.com/when-did-galaxies-start-forming/ |
Galaxy clusters are the biggest celestial objects in the sky consisting of thousands of galaxies. They form from nonuniformity in the matter distribution established by cosmic inflation in the beginning of the Universe. Their growth is a constant fight between the gathering of dark matter by gravity and the accelerated expansion of the universe due to dark energy. By studying galaxy clusters, researchers can learn more about these biggest and most mysterious building blocks of the Universe.
Led by Hironao Miyatake, currently at NASA’s Jet Propulsion Laboratory, Surhud More and Masahiro Takada of the Kavli Institute for the Physics and Mathematics (Kavli IPMU), the research team challenged the conventional idea that the connection between galaxy clusters and the surrounding dark matter environment is solely characterized by their mass. Based on the nature of the non-uniform matter distribution established by cosmic inflation, it was theoretically predicted that other factors should affect the connection. However, no one had succeeded in seeing it in the real Universe until now.
The Chandra image of Abell 1689, shown above, captures one of the most massive galaxy clusters known. The gravity of its trillion stars, plus dark matter, acts like a 2-million-light-year-wide “lens” in space that bends and magnifies the light of galaxies far behind it.
The image below shows the halo surrounding the Perseus Cluster, a swarm of galaxies approximately 250 million light years from Earth. Imagine a cloud of gas in which each atom is a whole galaxy—that’s a bit what the Perseus cluster is like. It is one of the most massive known objects in the Universe.
The team divided almost 9000 galaxy clusters from the Sloan Digital Sky Survey DR8 galaxy catalog into two samples based on the spatial distribution of galaxies inside each cluster. By using gravitational lensing they confirmed that the two samples have similar masses, but they found that the clusters themselves were distributed differently: clusters in which member galaxies bunched up towards the center were less clumpily distributed in space than clusters in which member galaxies were more spread out. The difference in distribution is a result of the different dark matter environments in which they form.
Researchers say their findings show that the connection between a galaxy cluster and surrounding dark matter is not characterized solely by the mass of clusters, but also by their formation history. | http://wonderzoom.org/the-biggest-most-mysterious-building-blocks-of-the-universe/ |
We held a special event in 2018 on Halloween - an "all-day" open house from 9 am to 3 pm in Chumash Auditorium. Details follow the main flyer page.
The following is a work in progress posted for expert review
Dark Matter Day Q&A
©Bob Field 2018
What is dark matter?
No one knows because unlike ordinary matter, you cannot see dark matter.
How dark is it?
It’s not really dark like black, which absorbs light. It is really transparent or clear as in light passes through it.
Is it invisible?
Yes, but the word invisible means not visible, and dark matter is not only invisible in the visible range but at all wavelengths of the electromagnetic spectrum. In other words, it does not interact with light or any other radiant energy electromagnetically – it is electrically neutral and does not emit, absorb, or scatter light like ordinary matter. It is transparent like a clear gas or a window, but even more so.
So it does not interact at all with radiant energy?
Not quite. It does interact, but not electromagnetically. Dark matter bends light just as ordinary matter does, because gravitational masses curve space and light follows that curvature. This lensing effect lets scientists see what lies behind a star when light passes very close to its surface. Einstein's prediction of the effect was first confirmed by observation roughly a century ago.
If it is unseen and unseeable, why does dark matter matter?
Because ... without dark matter there would be no gray matter – that is we would not be here to discuss it. There would be no stars or planets either.
Really, how can it be that important?
Because the standard models of the formation of the universe tell us that the matter in the universe was very uniformly distributed moments after the Big Bang and became even more uniform over the first 50,000 years as radiant energy smoothed out whatever lumps that randomly formed.
Since dark matter does not interact with radiant energy, it did not get smoothed out – it preserved the memory of the primordial density fluctuations that matter “forgot”. It is these density variations that ultimately formed the lumps that we call galaxies, stars, planets, and people. So no dark matter implies no gray matter.
How do you know the standard model is right and there isn’t some other explanation?
We don’t. That’s one more reason why scientists have spent decades looking for dark matter.
How can they look for something if they don’t know what it is?
There are many candidates for dark matter, such as WIMPs, MACHOs, and axions, which have in the past been thought to be consistent with the standard model. Over time, improved knowledge of the observable universe has narrowed down the possibilities, while searches have failed to detect several of the candidates, so the choices are gradually narrowing.
What if the laws of physics are not what we think they are?
That is a possibility too, but recent observations have made that less likely.
How much dark matter is there in the universe?
The total mass of dark matter in the universe far exceeds the mass of ordinary matter, about five to one. Always has and always will.
Is dark matter distributed uniformly throughout the universe? How is that possible if it has all these density variations, all these lumps?
No. Dark matter is far from uniform. Maybe you are thinking of dark energy, which is believed to have uniform density throughout the universe. The average densities of dark matter, ordinary matter, radiant energy, and dark energy over cosmological distances are spatially constant, but change over time as space expands. So, averaged over the largest scales, dark matter is uniformly distributed throughout the universe despite enormous local variations.
How much does the relative abundance of dark matter vary?
Locally, here on planet Earth or even in our solar system, almost all matter is ordinary matter. But in our galaxy as a whole, dark matter may have ten times the mass of ordinary matter. And all of the hundreds of billions of stars in our galaxy are made out of ordinary matter.
What else does dark matter do?
It holds things together. Most matter is dark matter and its gravitational attraction dominates throughout the universe except in very local patches like stars and planets. Throughout most of the history of the universe, it played the crucial role of slowing the expansion of the universe.
At the end of the cosmic inflation period, the universe was flat, isotropic, and homogeneous with only small density and temperature fluctuations. By the time of recombination, energy flow smoothed out nearly all of the primordial density fluctuations in ordinary baryonic matter.
Non-interactive dark matter retained the primordial density fluctuations, which grew over time as space expanded, giving rise to large scale structure. Gravitational instability formed a web of dark matter halos, filaments, and voids. Dark matter structure attracted baryonic matter which then formed dense structures by radiating excess heat away.
If it does such a great job of holding things together, why is the universe expanding at an ever increasing rate?
Over time, the expansion of the universe has diluted the density of matter and ordinary radiant energy, so their influence has diminished on the cosmological scale of the entire universe. For billions of years now the density of dark energy has exceeded the density of everything else, and its ability to push space apart has dominated. In fact, by driving the expansion it has helped dilute the average density of matter and radiant energy, so that dark energy went from being insignificant for billions of years to being roughly twice as dense, on average, as everything else combined.
Is dark energy pulling everything apart?
No, not everything. Locally, where matter is very dense, dark energy is still very low density and does not have any significant effect, so our planet, stars, and local galaxies are not expanding or being pulled apart by dark energy. Since the universe is similar everywhere, that means that stars and galaxies everywhere are also being held together by gravity, not pulled apart by dark energy. They are just getting farther apart.
So what does the universe look like, is it a bunch of galaxies spread out or is there a larger pattern?
Galaxies are important, but most matter is not in galaxies. Most ordinary matter is in the vast empty spaces between galaxies and most dark matter is also not in galaxies. Cosmologists treat the universe as isotropic and homogeneous meaning that it is basically the same everywhere and is very smooth on the largest scale.
Under close examination, it turns out that the largest scale structures in the universe are not galaxies. Most matter in the universe appears to be arranged in the shape of a massive and voluminous cosmic web, a network of nodes connected by filaments enclosing vast voids.
How much dark matter and how much ordinary matter is in each type of large structure?
Very high resolution Illustris computer simulations are providing accurate numerical models of the large scale structure observed in the universe. Halos occupy only about 0.16% of the volume of the universe, but contain about 46% of the dark matter. Halos only contain 23% of the baryonic matter because supermassive black holes that form active galactic nuclei ejected half of the baryons from the halos to the voids. Since the universe has about five times more dark matter than baryonic matter, halos have approximately ten times more dark matter than baryonic matter.
Voids contain about 6% of the dark matter mass of the universe and 30% of the baryonic matter, which means that voids contain about equal amounts of dark matter and baryonic matter. Voids occupy nearly 78% of the volume of the universe and presumably contain 78% of the dark energy.
Filaments are less dense than halos, but denser than voids. Filaments contain about half of the dark matter and baryonic matter while occupying about 22% of the universe by volume. Filaments include very wide structures called walls or sheets.
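These percentages can be sanity-checked with a few lines of arithmetic. The following is only an illustrative sketch (the numbers are the rounded figures quoted above, with filaments taken as the remainder), not output from the Illustris simulation itself:

```python
# Rough bookkeeping of the mass and volume fractions quoted above.
# The values are the rounded figures from the text; filaments are the remainder.
structures = {
    #            (volume %, dark matter %, baryonic matter %)
    "halos":     (0.16,  46.0, 23.0),
    "filaments": (21.84, 48.0, 47.0),
    "voids":     (78.0,   6.0, 30.0),
}

for column, label in enumerate(("volume", "dark matter", "baryons")):
    total = sum(values[column] for values in structures.values())
    print(f"{label:12s} fractions sum to {total:6.1f} %")

# Dark matter is ~5x more abundant than baryons overall, so inside halos the
# dark-to-baryonic mass ratio is roughly (46 / 23) * 5 = 10, matching the text.
print("dark matter / baryons inside halos ~", round((46 / 23) * 5))
```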
Since ordinary matter forms stable clumps by dissipating energy and entropy, does dark matter also dissipate energy and entropy to form structures?
Very few sources address this question. Dark matter particles accelerate in a gravitational instability, but there is no obvious way for them to decelerate when they accumulate, so you would expect them to fly past each other rather than form a stable condensed structure.
There are several ways to form structure. First is to preserve the primordial density fluctuations of the early universe. In this case, structure may evolve in an expanding universe without changes in energy or entropy. Second perhaps there is a way for dark matter to dissipate energy and entropy, either directly to space or indirectly through gravitational interactions with ordinary matter that can dissipate energy and entropy.
Where does dissipated energy and entropy go if large scale structure occupies the entire observable universe?
That is a good question just like the question of whether dark energy is conserved since dark energy has constant density and expanding volume. The answer is beyond the scope of this essay, but may be tied to the expansion of the universe.
Is a halo in the simulation similar to a galaxy cluster or what? Are halos located in filaments and voids or only in nodes of the observable cosmic web?
The Illustris computer simulation model treats any region with a density greater than 15 times the average density of the universe as a halo, so to the extent that the model can resolve high density regions, halos could be anywhere in the web.
Do we know anything about the properties of dark matter besides the fact that it has mass and apparently only interacts through the force of gravity?
Yes. Scientists believe that dark matter is mostly “cold”, not “hot” or “warm”. These words are in quotes because in this context, scientists are really describing the velocity of particles rather than the actual temperature of a collection of particles in a solid, liquid, or gas.
How cold is cold dark matter?
By cold, scientists mean its particles do not move at close to the speed of light the way neutrinos or photons do. In terms of temperature, it may not be what we consider cold in the everyday sense; then again, everyday conditions on Earth are very different from those in space. Most ordinary matter – often called baryonic matter – is as hot as the cores of stars, typically ten million kelvin or more.
I thought space was cold. How can most matter be so hot when the cosmic microwave background (CMB) is a very cold 2.725 K?
The CMB is energy not matter. Photons don’t have a temperature in the sense that matter has a temperature. Temperature measures the random motion of particles of matter whereas photons are always moving at the speed of light. The only sense that photons can be considered cold is when they are radiated by cold matter and therefore have long wavelengths in their blackbody spectrum.
These photons have been traveling through space for nearly 13.8 billion years. When the universe became transparent at the time of recombination, they were very “hot” in the sense of being very energetic, like gamma rays. The number of photons has not changed but as the universe expanded, their wavelengths stretched. Longer wavelengths correspond to lower frequency waves and lower energy per photon.
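To put rough numbers on this, here is a small worked example. It is only a sketch, assuming the round values used in this Q&A: a present-day CMB temperature of 2.725 K and a stretch factor of about 1100 since recombination.

```python
# How the background radiation's peak wavelength and temperature scale with expansion.
WIEN_B = 2.898e-3                  # Wien's displacement constant, metre-kelvin

T_today = 2.725                    # K
stretch = 1100                     # factor by which wavelengths have stretched

peak_today = WIEN_B / T_today      # ~1.06 mm, in the microwave band
peak_then = peak_today / stretch   # ~0.97 micrometres, near-infrared
T_then = WIEN_B / peak_then        # equivalently T_today * stretch, ~3000 K

print(f"peak wavelength today: {peak_today * 1e3:.2f} mm")
print(f"peak wavelength at recombination: {peak_then * 1e6:.2f} micrometres")
print(f"blackbody temperature at recombination: {T_then:.0f} K")
```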
When the universe became transparent, that is after recombination, matter decoupled from radiation and followed a different path in terms of temperature. When matter condenses due to a gravitational instability, the resulting in-fall increases its kinetic energy. When matter coalesces into stars and planets, collisions between particles convert their kinetic energy into thermal energy, which is nothing more than random motion. Hot matter cools by radiating energy.
Planets in our solar system formed billions of years ago and are still cooling even though the surface of our planet is warming due to solar energy trapped by greenhouse gases. Stars like our Sun on the other hand have cores that are so hot and dense that thermonuclear fusion converts matter to energy during nucleosynthesis. Half of the mass of a star may be in its core at temperatures greater than ten million degrees.
Is most of the matter in the universe hot or cold?
Most of the mass of stars is hot and most of the mass of matter in galaxies is in the stars. But most of the mass – 80% to 95% – in a galaxy cluster is in the intracluster medium or ICM between galaxies. It is very dilute, that is very low density.
So is the intracluster medium hot or cold?
The ICM is a dilute plasma meaning that it is hot, between ten and one hundred million Kelvin degrees. Why? Because it originated in the hot active nuclei of galaxies and it is not dense enough to interact in order to radiate the heat away, unlike dense matter like stars and planets, or even the cool relatively dense molecular clouds in the interstellar medium or ISM in galaxies.
How does the interstellar medium differ from the intracluster medium?
Intracluster is within a galaxy cluster but not within any individual galaxy in the cluster. Interstellar medium is between the stars within a galaxy not between galaxies. Most ordinary matter in a galaxy is in the hot stars. Perhaps 10% of ordinary matter is in the ISM. What is the composition and temperature of the ISM? The composition is fairly complex consisting of several different states of matter. Most authors do not clearly specify how much is in each state.
So does dark matter have a temperature even though it does not absorb or radiate photons? And if so, what is its temperature or is that another unexplained problem?
It seems like every answer leads to more questions and researchers focus on solving specific problems to try to piece the picture together. Astrophysics teachers have so many remarkable concepts and exotic objects to discuss that textbooks cannot address every question that you can think of. We would like to think that some scientists know the answers to these questions, but the answers are not widely disseminated.
How about a yes or no answer - does dark matter have a temperature?
It is widely assumed that dark matter consists of particles moving rapidly in random directions, so in that sense it contains thermal energy. But since we cannot measure its temperature in a conventional way, and it does not radiate heat away, the best we can do is calculate a particle velocity distribution and assign a corresponding temperature; we cannot use that temperature to predict its behavior in the conventional sense.
So which formed first, stars, galaxies, or cosmic webs?
Evidence suggests that the universe formed from the bottom up rather than the top down. In other words some stars formed very early and smaller galaxies merged to form larger galaxies which formed massive clusters and superclusters. But the top down theory has its advocates.
If the total amount of dark matter is constant, what was its density in the early universe?
Today the density of everything in the universe is about 10^-26 kg/m^3. Dark energy is about 70% of this and matter is about 30%, since radiant energy is very small. Dark matter is about 80% of all matter, or 25% of the universe. So the density of dark matter is about 0.25 x 10^-26 kg/m^3 and the density of ordinary matter is less than 0.05 x 10^-26 kg/m^3. The density of radiant energy is very small because each photon has less energy over time and occupies a much larger volume.
At the time of recombination, the cosmic scale factor, which measures the size of the observable universe, was about 1100 times smaller, corresponding to a redshift of about 1100. The volume of the universe was 1100 cubed, or about 1.3 billion, times smaller, so matter was about 1.3 billion times denser. So the density of dark matter was about 0.33 x 10^-17 kg/m^3 when the universe was about 380,000 years old at recombination.
Radiant energy decreases as the fourth power of the cosmic scale factor because each photon occupies a larger volume and has lower energy because of its stretched wavelength. Radiant energy was nearly 1.5 trillion times denser than today. Since the density of dark energy is constant, it did not change, but it was 1.3 billion times less influential when the volume of the universe was that much smaller.
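The figures in this answer can be reproduced with simple scaling arithmetic. The snippet below is a back-of-the-envelope sketch using the rounded densities quoted above; it is not a cosmological calculation.

```python
# Scale today's rounded average densities back to recombination.
# Matter density scales as a^-3, radiant energy as a^-4, dark energy stays constant.
RHO_TOTAL_TODAY = 1e-26             # kg/m^3, rounded figure quoted above
stretch = 1100                      # the universe was ~1100x smaller in each dimension

rho_dark_matter_today = 0.25 * RHO_TOTAL_TODAY

volume_factor = stretch ** 3        # ~1.3 billion
rho_dm_recombination = rho_dark_matter_today * volume_factor
radiation_factor = stretch ** 4     # ~1.5 trillion

print(f"volume factor: {volume_factor:.2e}")                           # ~1.33e9
print(f"dark matter density then: {rho_dm_recombination:.2e} kg/m^3")  # ~3.3e-18
print(f"radiant energy was denser by: {radiation_factor:.2e}")         # ~1.46e12
```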
What does dark matter do for the Milky Way galaxy today?
It holds it together. Many stars are moving through space so fast that they would leave the galaxy if it weren't for the extra gravitational mass provided by dark matter. In fact, measuring the speeds of stars orbiting the center of the galaxy was the clue that there were vast amounts of unseen matter. Based on observations of redshifts, stars throughout our galaxy seem to orbit at roughly the same speed. Gravity falls off as the inverse square of distance, so if most of the mass were concentrated near the center, stars farther out should be moving more slowly, just as planets in our solar system move more slowly the further they are from the Sun. The conclusion is that most of the mass of the galaxy is unseen and is distributed in a “halo”.
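A toy calculation illustrates the argument. The numbers below are hypothetical and chosen only to show the two scalings, not fitted to real Milky Way data: if the visible mass were all near the center, orbital speeds would fall off as one over the square root of distance, whereas a halo whose enclosed mass grows in proportion to radius gives a roughly flat rotation curve like the one observed.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 2e30             # kg
KPC = 3.086e19           # m

# Two toy mass models (illustrative numbers only, not a fit to the Milky Way):
M_central = 1e11 * M_SUN            # all visible mass treated as a central point
halo_mass_per_kpc = 2.5e10 * M_SUN  # a halo whose enclosed mass grows linearly with radius

def v_circular(enclosed_mass, radius):
    """Circular orbital speed set by the mass enclosed within the given radius."""
    return math.sqrt(G * enclosed_mass / radius)

print(" r (kpc)   central mass only   halo-dominated")
for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    v_point = v_circular(M_central, r)                     # falls as 1/sqrt(r)
    v_halo = v_circular(halo_mass_per_kpc * r_kpc, r)      # flat, since M(r) grows with r
    print(f"{r_kpc:7d}   {v_point / 1e3:14.0f} km/s   {v_halo / 1e3:11.0f} km/s")
```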
How much dark matter is in a typical large galaxy like the Milky Way?
The mass of dark matter in the Milky Way is approximately one trillion solar masses or 2 x 10^42 kg, since the mass of the Sun is approximately 2 x 10^30 kg. The mass of ordinary matter is about ten times smaller, 100 billion solar masses or 2 x 10^41 kg. Most of this is in the 300 billion stars in the galaxy. The interstellar medium is only one tenth of this, or 10 billion solar masses. The amount of radiant energy and dark energy is relatively small in a galaxy. So dark matter is about 90% of the mass of our galaxy.
Does dark matter affect whether a galaxy is a spiral, elliptical, or irregular galaxy?
Not so much. Dark matter causes galaxies to form, but does not dictate their shape or evolution.
How did matter become the dominant influence in a universe that was dominated by energy?
The densities of energy and matter decreased as the volume of the expanding universe increased. The total amount of matter did not change, just the density. On the other hand, not only did the density of radiant energy decrease, but the energy per photon and neutrino decreased due to the Doppler shift as space itself expanded and stretched the energy wavelengths. Consequently, the total amount of radiant energy decreased over time, not just its density.
Radiant energy was the dominant influence in the universe for roughly its first 50,000 years. Eventually its density decreased so much more than that of matter that matter became the dominant influence in the universe. But at that time, the density fluctuations in the universe were still so small that matter was too diffuse to condense into stars.
What is Dark Matter Day and why is it on Halloween?
It is a day for science education that has been held every year since 2017 on Halloween, the day we celebrate and hunt for things unseen. | https://evolution.calpoly.edu/dark-matter-day |
Abstract: Ants play a central role in understanding the effects of habitat loss and fragmentation on communities and ecosystems because of their diversity, abundance, and functional roles in ecosystems. Species interactions involving ants are widespread and include other insects, plants, and vertebrates. Ants often do not show strong relationships between species richness and habitat area, but shifts in ant species composition are a more general pattern, with an average 75% turnover in species composition among habitat fragments. Shifts in ant species composition and relative abundance due to habitat fragmentation have direct and indirect effects on the species interactions of ants, including those with sap-feeding insects, seed dispersal, and vertebrate mutualisms. The loss of some ant species from small habitat fragments may have widespread effects in ecosystems because of their functional roles as keystone mutualists or in soil modification. Boundary dynamics of ants across habitat edges and the surrounding land uses are particularly important to understanding the effects of habitat fragmentation because steep abiotic and biotic gradients may facilitate invasive ant species and cause sharp changes in the abundance and species interactions of native ant species.
Myrmecol. News 12: 3-13; supplement
https://myrmecologicalnews.org/cms/index.php?option=com_content&view=category&id=348&Itemid=356
GEOSCAN Search Results: Fastlink
Title: Assessing impacts of changing ocean conditions on three nearshore foundational species
Author: Young, M A; Ierodiaconou, D
Source: Program and abstracts: 2017 GeoHab Conference, Dartmouth, Nova Scotia, Canada; by Todd, B J; Brown, C J; Lacharité, M; Gazzola, V; McCormack, E; Geological Survey of Canada, Open File 8295, 2017 p. 122, https://doi.org/10.4095/305943 (Open Access)
Year: 2017
Publisher: Natural Resources Canada
Meeting: 2017 GeoHab: Marine Geological and Biological Habitat Mapping; Dartmouth, NS; CA; May 1-4, 2017
Document: open file
Lang.: English
Media: on-line; digital
Related: This publication is contained in Todd, B J; Brown, C J; Lacharité, M; Gazzola, V; McCormack, E; (2017). Program and abstracts: 2017 GeoHab Conference, Dartmouth, Nova Scotia, Canada, Geological Survey of Canada, Open File 8295
File format: pdf
Area: Australia
Lat/Long WENS: 150.0000 155.0000 -25.0000 -38.0000
Subjects: Nature and Environment; mapping techniques; oceanography; marine environments; coastal studies; conservation; marine organisms; marine ecology; resource management; biological communities; environmental studies; ecosystems; nearshore environment; climate effects; water temperature; vegetation; hydrodynamics; modelling; Molluscs; Kelp; East Australian Current; biology; habitat mapping; habitat conservation; habitat management; climate change; fisheries
Program: Ocean Management Geoscience, Offshore Geoscience
Released: 2017-09-26
Abstract:
Climate change is having far reaching impacts across the globe but there is a still a lot of uncertainty in how ecosystems are responding. This uncertainty is much greater in marine ecosystems where our understanding lags behind that of terrestrial ecosystems. Despite this lag in understanding, marine environments are changing at rapid rates and there is a need to study the effects of how changes in ocean conditions are affecting marine ecosystems. Most studies in the past have looked at how temperature fluctuations are shifting the abundance and distribution of species; however, climate change is also impacting other aspects of the ocean including circulation and the wave environment. Taking into account all of these changes in the marine environment and focusing on species that are important and foundational members of the community can give insight into how marine ecosystems are likely to respond to climate related changes. In this study, we looked at three species that structure the nearshore environment and their responses to variations in habitat and oceanic conditions along the Southeast coast of Australia. With rises in ocean temperatures exacerbated by the strengthening of the East Australian Current, the water off the coast of Southeast Australia is experiencing rapid warming, causing this region to be a hotspot for ocean temperature change. To assess ecosystem response to these changes, we investigated two species of habitat forming kelps (Ecklonia radiata and Pyllospora comosa) and an ecosystem engineer (blacklip abalone, Haliotis rubra). Using long-term data (2003-2015) on E. radiata and P. comosa percent cover and H. rubra biomass collected using diver transects across 180 sites along the coast of Victoria, Australia, we assessed the relationship between these species and environmental drivers. These environmental drivers included seafloor habitat characteristics, hydrodynamic information that was downscaled to 500 m resolution and hindcasted over the past 20 years (wave orbital velocities, wave power, significant wave height, current speed, current direction) and annual and seasonal sea surface temperature data from 2003-2015. We also incorporated annual catch data to account for abalone population decreases due to commercial fisheries. We then related all these variables in generalized linear mixed effects models (GLMM) for each species with year and sites as random effects. The results from the GLMMs show that these three species have strong habitat associations and complex interactions with changes in sea surface temperature and the hydrodynamic environment. For example, the subsurface kelp species tend to have a negative response to warming temperatures but this response can be buffered by increasing or consistent wave exposure. Overall, this study helps us to understand the combined effects of habitat and changing oceanographic conditions on these three species, which will help to facilitate management of these ecologically and economically important nearshore marine ecosystems. | https://geoscan.nrcan.gc.ca/starweb/geoscan/servlet.starweb?path=geoscan/fulle.web&search1=R=305943 |
Saltwater and freshwater environments have opposing physiological challenges, yet there are fish species that are able to enter both habitats during short time-spans, and as individuals they must therefore adjust quickly to osmoregulatory contrasts. In this study, we conducted an experiment to test for plastic responses to abrupt salinity changes in two populations of threespine stickleback, Gasterosteus aculeatus, representing two ecotypes (freshwater and ancestral saltwater). We exposed both ecotypes to abrupt native (control treatment) and non-native salinities (0 and 30‰) and sampled gill tissue for transcriptomic analyses after six hours of exposure. To investigate genomic responses to salinity, we analysed four different comparisons: one for each ecotype (in their control and exposure salinity; 1 and 2), one between ecotypes in their control salinity (3), and a fourth comparison comprising all transcripts identified in (3) that did not show any expressional changes within ecotype in either the control or the exposed salinity (4). Abrupt salinity transfer affected the expression of 10 and 1530 transcripts for the saltwater and freshwater ecotype, respectively, and 1314 were differentially expressed between the controls, including 502 that were not affected by salinity within ecotype (fixed expression). In total, these results indicate that factors other than genomic expressional plasticity are important for osmoregulation in stickleback, due to the need for opposite physiological pathways to survive the abrupt change in salinity.
Hypoxia has profound and diverse effects on aerobic organisms, disrupting oxidative phosphorylation and activating several protective pathways. Predictions have been made that exposure to mild intermittent hypoxia may be protective against more severe exposure and may extend lifespan. Both effects are likely to depend on prior selection on phenotypic and transcriptional plasticity in response to hypoxia, and may therefore show signs of local adaptation. Here we report the lifespan effects of chronic, mild, intermittent hypoxia (CMIH) and short-term survival in acute severe hypoxia (ASH) in four clones of Daphnia magna originating from either permanent or intermittent habitats, the latter regularly drying up with frequent hypoxic conditions. We show that CMIH extended the lifespan in the two clones originating from intermittent habitats but had the opposite effect in the two clones from permanent habitats, which also showed lower tolerance to ASH. Exposure to CMIH did not protect against ASH; to the contrary, Daphnia from the CMIH treatment had lower ASH tolerance than normoxic controls. Few transcripts changed their abundance in response to the CMIH treatment in any of the clones. After 12 hours of ASH treatment, the transcriptional response was more pronounced, with numerous protein-coding genes with functionality in mitochondrial and respiratory metabolism, oxygen transport, and, unexpectedly, gluconeogenesis showing up-regulation. While clones from intermittent habitats showed somewhat stronger differential expression in response to ASH than those from permanent habitats, there were no significant hypoxia-by-habitat of origin or CMIH-by-ASH interactions. GO enrichment analysis revealed a possible hypoxia tolerance role by accelerating the molting cycle and regulating neuron survival through up-regulation of cuticular proteins and neurotrophins, respectively.
Patagonia is an understudied area, especially when it comes to population genomic studies with relevance to fishery management. However, the dynamic and heterogeneous landscape in this area can harbor important but cryptic genetic population structure. Once such information is revealed, it can be integrated into the management of infrequently investigated species. Eleginops maclovinus is a protandrous hermaphrodite species with economic importance for local communities that is currently managed as a single genetic unit. In this study, we sampled five locations distributed across a salinity cline from Northern Patagonia to investigate the genetic population structure of E. maclovinus. We use Restriction-site Associated DNA (RAD) sequencing and outlier tests to obtain neutral and adaptive loci, using FST and GEA approaches. We identified a spatial pattern of structuration with gene flow and spatial selection by environmental association. Neutral and adaptive loci showed two and three genetic groups, respectively. The effective population sizes estimated ranged from 572 (Chepu) to 14,454 (Chaitén) and were influenced more by locality than salinity cline. We found loci putatively associated with salinity suggesting that salinity may act as a selective driver in E. maclovinus populations. These results suggest a complex interaction between genetic drift, geneflow, and natural selection in this area. Our findings suggest several units in this area, and the information should be integrated into the management of this species. We discuss the significance of these results for fishery management and suggest future directions to improve our understanding of how E. maclovinus is adapted to the dynamic waters of Northern Patagonia.
Telomeres, the terminal repetitive DNA sequences at the ends of linear chromosomes, have strong associations with longevity in some major taxa. Longevity has been linked to rate of decline in telomere length in birds and mammals, and absolute telomere length seems to be associated with body mass in mammals. Using a phylogenetic comparative method and 30 species of birds, we examined longevity (reflected by maximum lifespan), absolute telomere length, the rate of change in telomere length (TROC), and body mass (often strongly associated with longevity) to ascertain their degree of association. We divided lifespan into two life-history components, one reflected by body size (measured as body mass), and a component that was statistically independent of body mass. While both lifespan and body mass were strongly associated with a family tree of the species (viz., the phylogeny of the species), telomere measures were not. Telomere length was not significantly associated with longevity or body mass, or our measure of mass-independent lifespan. TROC, however, was strongly associated with mass-independent lifespan, but to a lesser degree with body mass. Our results supported an association of TROC and longevity, in particular longevity that was independent of body size and part of the pace-of-life syndrome of life histories.
The diet of an individual animal is subject to change over time, both in response to short-term food fluctuations and over longer time scales as an individual ages and meets different challenges over its life cycle. A metabarcoding approach was used to elucidate the diet of different life stages of a songbird, the Eurasian reed warbler (Acrocephalus scirpaceus) over the summer breeding season of 2017. The faeces of adult, juvenile and nestling warblers were screened for invertebrate DNA, enabling identification of prey species. Dietary analysis was coupled with monitoring of Diptera in the field using yellow sticky traps. Seasonal changes in warbler diet were subtle whereas age class had a greater influence on overall diet composition. Age classes showed high dietary overlap, but significant dietary differences were mediated through the selection of prey, i) from different taxonomic groups, ii) with different habitat origins (aquatic versus terrestrial) and iii) of different average approximate sizes. Our results highlight the value of metabarcoding data for enhancing ecological studies of insectivores in dynamic environments.
Invasive predatory species are frequently observed to cause evolutionary responses in prey phenotypes, which in turn may translate into evolution of the prey’s population dynamics. Research has provided a link between rates of predation and the evolution of prey population growth in the lab, but studies from natural populations are rare. Here we tested for evolutionary changes in population dynamics parameters of zooplankton Daphnia pulicaria following invasion by the predator Bythotrephes longimanus into Lake Kegonsa, Wisconsin, US. We used a resurrection ecological approach, whereby clones from pre- and post-invasive periods were hatched from eggs obtained in sediment cores and were used in a 3-month growth experiment. Based on these data we estimated intrinsic population growth rates (r) and carrying capacities (K) using theta-logistic models. We found that post-invasion Daphnia maintained a higher r and K under these controlled, predation-free laboratory conditions. Thus, whereas previous experimental evolution studies of predator-prey interactions have demonstrated that genotypes that have evolved under predation have inferior competitive ability when the predator is absent, this was not the case for the Daphnia. Given that our study was conducted in a laboratory environment and the possibility for genotype-by-environment interactions, extrapolating these apparent counterintuitive results to the wild should be done with caution. However, barring such complications, we discuss how selection for reduced predator exposure, either temporally or spatially, may have led to the observed changes. This scenario suggests that complexities in ecological interactions represents a challenge when predicting the evolutionary responses of population dynamics to changes in predation pressure in natural systems.
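For readers unfamiliar with the model mentioned above, the theta-logistic generalizes logistic growth with a shape parameter theta. The sketch below uses made-up parameter values purely for illustration; it is not the fitted model or data from this study.

```python
import numpy as np

def theta_logistic(n0, r, K, theta, steps, dt=1.0):
    """Simulate dN/dt = r * N * (1 - (N / K)**theta) with simple Euler steps."""
    n = np.empty(steps)
    n[0] = n0
    for t in range(1, steps):
        growth = r * n[t - 1] * (1.0 - (n[t - 1] / K) ** theta)
        n[t] = max(n[t - 1] + growth * dt, 0.0)
    return n

# Made-up parameters purely for illustration (per-day rate, 90-day run).
trajectory = theta_logistic(n0=5, r=0.25, K=120, theta=1.5, steps=90)
print(f"final density ~ {trajectory[-1]:.1f} individuals (carrying capacity K = 120)")
```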
We sought to assess the effects of plant environmental adaptation strategies and evolutionary history, and to quantify the contribution of ecological processes to community assembly, by measuring functional traits and phylogenetic composition in a local forest community. We selected 18 dominant tree species in a Lithocarpus glaber–Cyclobalanopsis glauca evergreen broad-leaved forest and measured nine leaf functional traits and phylogenetic data for each species. We analyzed the variation in traits and trade-off relationships, tested phylogenetic effects on leaf functional traits, explored the influence of phylogeny and environment on leaf functional traits, and distinguished the relative effects of spatial and environmental variables on functional trait and phylogenetic compositions. The results showed the following: (i) Leaf traits had moderate intraspecific variation, and significant interspecific variation existed, especially among life forms. (ii) Significant phylogenetic signals were detected only in leaf thickness and leaf area. The correlations among traits supported "the leaf economics spectrum" at both the species and community levels, and the relationships significantly increased or changed only a little after removing the influence of phylogeny, which showed a lack of consistency between the leaf functional trait patterns and phylogenetic patterns. We infer that the coexisting species tended to adopt "realism" to adapt to their habitats. (iii) Soil total potassium and phosphorus content, altitude, aspect, and convexity were the most critical environmental factors affecting functional traits and phylogenetic composition. Total environmental and spatial variables explained 63.38% of the variation in functional trait composition and 47.96% of the variation in phylogenetic structure. Meanwhile, the contribution of pure spatial factors was significantly higher than that of the pure environment. Neutral-theory-based stochastic processes played dominant roles in driving community functional trait assembly, but niche-theory-based deterministic processes such as environmental filtering had a stronger effect on shaping community phylogenetic structure at a fine scale.
1. The encroachment of woody plants into grasslands is an ongoing global problem that is largely attributed to anthropogenic factors such as climate change and land management practices. Determining the mechanisms that drive successful encroachment is a critical step towards planning restoration and long-term management strategies. Feedbacks between soil and aboveground communities can have a large influence on the fitness of plants and must be considered as potentially important drivers for woody encroachment. 2. We conducted a plant-soil feedback experiment in a greenhouse between eastern redcedar Juniperus virginiana and four common North American prairie grass species. We assessed how soils that had been occupied by redcedar, a pervasive woody encroacher in the Great Plains of North America, affected the growth of big bluestem, little bluestem, smooth brome, and western wheatgrass over time. We evaluated the effect of redcedar on grass performance by comparing the height and biomass of individuals of each grass species that were grown in live or sterilized conspecific or redcedar soil. 3. We found that redcedar created a negative plant-soil feedback that limited the growth of two species. These effects were found in both live and sterilized redcedar soils, indicating redcedar may exude an allelochemical into the soil that limits grass growth. 4. Synthesis. By evaluating the strength and direction of plant-soil feedbacks in the encroaching range, we can further our understanding of how woody plants successfully establish in new plant communities. Our results demonstrate that plant-soil feedback created by redcedar inhibits the growth of certain grass species. By creating a plant-plant interaction that negatively affects competitors, redcedars increase the probability of seedling survival until they can grow to overtop their neighbors. These results indicate plant-soil feedback is a mechanism of native woody plant encroachment that could be important in many systems yet is understudied.
Workers of the ant Cardiocondyla elegans drop female sexuals into the nest entrance of other colonies to promote outbreeding with unrelated, wingless males. Corroborating results from previous years we document that carrier and carried female sexuals are typically related and that the transfer initially occurs mostly from their joint natal colonies to unrelated colonies. Female sexuals mate multiply with up to seven genetically distinguishable males. Contrary to our expectation, the colony growth rate of multiple-mated and outbred female sexuals was lower than that of inbred or single-mated females, leading to the question of why female sexuals mate multiply at all. Despite the obvious costs, multiple mating might be a way for female sexuals to “pay rent” for hibernation in an alien nest. We argue that in addition to evading inbreeding depression from regular sibling mating over many generations, assisted dispersal might also be a strategy for minimizing the risk of losing all reproductive investment when nests are flooded in winter.
Allopreening occurs in many species of birds and is known for providing hygienic and social benefits. While this behavior has been studied between conspecifics, its occurrence among different species remains mysterious. Outside of captive environments, only a few records of interspecific allopreening exist. In this study, we describe our observations of Spot-necked Babbler (Stachyris strialata) preening Nonggang Babbler (Stachyris nonggangensis) in a non-captive environment in southern China. We provide three hypotheses (social dominance, cleaning mutualism, and hybridization) to explain the occurrence of this understudied behavior. We suggest that interspecific allopreening may not be as rare as we thought if we study this behavior under circumstances where it most frequently occurs. This study contributes to our understanding of not only the potential mechanism(s) for interspecific allopreening but also the behavioral ecology of the vulnerable Nonggang Babbler.
Intense fishing pressure and climate change are major threats to fish populations and coastal fisheries. Larimichthys crocea (large yellow croaker) is a long-lived fish which performs seasonal migrations from its spawning and nursery grounds along the coast of the East China Sea (ECS) to overwintering grounds offshore. This study used length-based analysis and a habitat suitability index (HSI) model to evaluate current life-history parameters and overwintering habitat suitability of L. crocea, respectively. We compared recent (2019) and historical (1971-1982) life-history parameters and overwintering HSI to analyze the effects of fishing pressure and climate change on the overall population and the overwintering phase of L. crocea. The length-based analysis indicated serious overfishing of L. crocea, characterized by reduced catch yield, size truncation, constrained distribution, and advanced maturation causing a recruitment bottleneck. The overwintering HSI modeling results indicated that climate change has led to decreased sea surface temperature during the L. crocea overwintering phase over the last half-century, which in turn led to a decrease in the area of optimal overwintering habitat of L. crocea and an offshore shift in its location. The fishing-caused size truncation may have constrained the migratory ability and distribution of L. crocea, subsequently leading to a mismatch with the optimal overwintering habitat against the background of climate change, namely a habitat bottleneck. Hence, while heavy fishing was the major cause of the L. crocea collapse, climate-induced changes in overwintering habitat suitability may have intensified the fishery collapse of the L. crocea population. It is important for management to take both overfishing and climate change into consideration when developing stock enhancement activities and policy regulations, particularly for migratory long-lived fish that share a similar life history with L. crocea. Combined with China’s current restocking and stock enhancement initiatives, we propose recommendations for future restocking of L. crocea in China.
Patterns of biodiversity provide insights into the processes that shape biological communities around the world. Variation in species diversity along biogeographical or ecological gradients, such as latitude or precipitation, can be attributed to variation in different components of biodiversity: changes in the total abundance (i.e. more-individual effects) and changes in the regional species abundance distribution (SAD). Rarefaction curves can provide a tool to partition these sources of variation on diversity, but first must be converted to a common unit of measurement. Here, we partition species diversity gradients into components of the SAD and abundance using the effective number of species (ENS) transformation of the individual-based rarefaction curve. Because the ENS curve is unconstrained by sample size, it can act as a standardized unit of measurement when comparing effect sizes among different components of biodiversity change. We illustrate the utility of the approach using two datasets spanning latitudinal diversity gradients in trees and marine reef fish, and find contrasting results. Whereas the diversity gradient of fish was mostly associated with variation in abundance (86%), the tree diversity gradient was mostly associated with variation in the SAD (59%). These results suggest that local fish diversity may be limited by energy through the more-individuals effect, while species pool effects are the larger determinant of tree diversity. We suggest that the framework of the ENS-curve has the potential to quantify the underlying factors influencing most aspects of diversity change.
An upsurge in anthropogenic climate change has accelerated the habitat loss and fragmentation of wild animals and plants. Rare and endangered plants are important elements of biodiversity, but holistic conservation management has been hampered by a lack of detailed and reliable information about their spatial distributions. Our aim is to study the consequences of climate change for the geographical distribution of the rare tree species Firmiana kwangsiensis (Malvaceae) to provide a reference for the conservation, introduction and cultivation of this species. Based on 30 effective occurrence records and 27 environmental variables, we modeled the potential distribution of F. kwangsiensis under current and two future climate scenarios using maximum entropy modeling. We found that the boundary of potential suitable habitat of F. kwangsiensis was limited by precipitation-associated and temperature-associated variables. Our model predicted 259,504 km² of F. kwangsiensis habitat under the contemporary climate based on 25 percentile thresholds, of which the highly suitable area is about 41,027 km². Guangxi’s protected areas provide the most coverage for F. kwangsiensis habitat. However, the existing reserves encompass 2.7% of the total suitable habitat and 4.2% of the highly suitable habitat, which is lower than the average protection intensity in Guangxi (7.2%), meaning the protected area network is currently insufficient and alternative conservation mechanisms are needed to protect the habitat. Our findings will help to identify additional localities where F. kwangsiensis may exist, and also where it may spread to, and they provide important information for the conservation management and cultivation of such rare tree species.
The Arctic Warbler (Phylloscopus borealis) is a cryptic songbird that uses a Nearctic-Paleotropical migratory strategy. Using geolocators, we provide the first documentation of the migratory routes and wintering locations of two territorial adult male Arctic Warblers from Denali National Park and Preserve, Alaska. After accounting for position estimation uncertainties and biases, we found that both individuals departed their breeding grounds in early September, stopped over in southeastern Russia and China during autumn migration, then wintered in the Philippines and the island of Palau. Our documentation of Arctic Warbler wintering on Palau suggests that additional study is needed to document their wintering range. These results suggest that Arctic Warblers may migrate further overwater than previously thought and provide hitherto unknown information on stopover and wintering locations.
Climate change profoundly affects the spatio-temporal distribution of species. However, how climate shapes the spatio-temporal distribution patterns of closely related species at large scales remains largely unclear. Here, we selected two closely related species in the genus Taxus, Taxus chinensis and Taxus mairei, to explore their distribution patterns. Four environmental variables were used to simulate the distribution patterns with an optimized MaxEnt model. The results showed that the highly suitable areas of T. chinensis and T. mairei in the current period were 1.964×10⁵ km² and 3.074×10⁵ km², respectively. The distribution area of T. chinensis was smaller than that of T. mairei in all periods. Temperature and precipitation were the main climatic factors determining the potential distribution of the two species. In the current period, the centroids of T. chinensis and T. mairei lay in Sichuan and Hunan provinces, respectively. In the future, the centroid migration directions of the two species are almost opposite: T. chinensis would shift towards the southwest, while T. mairei would shift towards the northeast. Our results also revealed that the average elevation of T. chinensis occurrences was higher than that of T. mairei. This study sheds new light on the habitat preferences and limiting environmental factors of the two related species and provides a valuable reference for the conservation of these two endangered species.
Climate change is increasing aridity in grassland and desert habitats across the southwestern United States, reducing available resources and drastically changing the breeding habitat of many bird species. Increases in aridity reduce sound propagation distances, potentially impacting habitat soundscapes and leading to a breakdown of avian soundscapes in the form of loss of vocal culture, reduced mating opportunities, and local population extinctions. We developed an agent-based model to examine how changes in aridity will affect both sound propagation and the ability of territorial birds to audibly contact their neighbors. We simulated vocal signal attenuation under a variety of environmental scenarios for the south central semi-arid prairies of the United States, ranging from contemporary weather conditions to predicted extremes under climate change. We also simulated how changes in physiological conditions, mainly evaporative water loss (EWL), would affect singing behavior. Under extreme climate change conditions, we found that significantly fewer individuals successfully contacted all adjacent neighbors than did individuals in either the contemporary or mean climate change conditions. We also found that at higher sound frequencies and higher EWL, fewer individuals were able to successfully contact all of their neighbors, particularly under the extreme climate change conditions. These results indicate that climate change-mediated aridification may disrupt the avian soundscape, such that vocal communication no longer effectively functions for mate attraction or territorial defense. As climate change progresses, increased aridity in current grasslands may favor shifts toward low frequency songs, colonial resource use, and altered songbird community compositions.
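The agent-based model itself is not reproduced here; the toy sketch below only illustrates the underlying audibility check, combining spherical spreading loss with a placeholder excess-attenuation coefficient that stands in for the humidity- and temperature-dependent atmospheric absorption such models would use. Source level, distances, thresholds, and coefficients are all assumptions.

```python
# Toy audibility check inspired by the agent-based model described above.
# The excess-attenuation coefficient is a placeholder; real atmospheric
# absorption (e.g., ISO 9613-1) depends on temperature, humidity and frequency.
import math

def received_level(source_db, distance_m, excess_db_per_m, ref_distance_m=1.0):
    """Source level minus spherical spreading loss minus excess attenuation."""
    spreading = 20 * math.log10(distance_m / ref_distance_m)
    return source_db - spreading - excess_db_per_m * distance_m

def can_contact(source_db, distance_m, excess_db_per_m, threshold_db=20.0):
    """True if the song is still above the (assumed) detection threshold."""
    return received_level(source_db, distance_m, excess_db_per_m) >= threshold_db

# Hypothetical numbers: an 85 dB song, a neighbor 150 m away
for label, alpha in [("humid (low excess attenuation)", 0.05),
                     ("arid (high excess attenuation)", 0.15)]:
    print(label, "-> contact at 150 m:", can_contact(85, 150, alpha))
```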
Phenotypic plasticity in reproductive behaviour can be a strong driver of individual fitness. For example, in species with high intra-sexual competition, changes in socio-sexual context can trigger quick adaptive plastic responses in males. In particular, a recent study in the vinegar fly (Drosophila melanogaster) shows that males respond adaptively to the perception of female cues in a way that increases their reproductive success, but the underlying mechanisms of this phenomenon remain unknown. Here, we aimed to fill this gap by investigating the short-term effects of female perception on male pre- and post-copulatory components of reproductive success: a) mating success, b) mating latency and duration, c) sperm competitiveness, and d) ejaculate effects on female receptivity and oviposition rate. We found that brief sexual perception increased mating duration, but had no effect on the main pre- or post-copulatory fitness proxies. These results tie in with previous findings to suggest that male adaptive responses to sexual perception are not due to a short-term advantage, but rather to fitness benefits that play out across the entire male lifespan.
Anthropogenic and climatic factors affect the survival of animal species. Chinese pangolins are a critically endangered species, and identifying which variables lead to local extinction events is essential for conservation management. Local chronicles in China serve as long-term monitoring data, providing a perspective from which to disentangle the roles of human impacts and climate change in local extinctions. Through a generalized additive model, an extinction risk assessment model and principal component analysis, we combined information from local chronicles spanning three hundred years (1700-2000) with reconstructed environmental data to determine the causes of local extinctions of the Chinese pangolin in China. Our results showed that the extinction probability increased with human population growth and climate warming. An extinction risk assessment indicated that the population and distribution range of Chinese pangolins have been persistently shrinking in response to highly intensive human activities (the main cause) and climate warming. Overall, intensive human interference and drastic climatic fluctuations induced by global warming might further increase the local extinction rate of Chinese pangolins. Approximately 25% of extant Chinese pangolins are confronted with a notable extinction risk (0.36≤extinction probability≤0.93), specifically those distributed in Southeast China, including Guangdong, Jiangxi, Zhejiang, Hunan, Fujian, Jiangsu and Taiwan Provinces. To rescue this endangered species, we suggest strengthening field investigations, identifying the exact distribution range and population density of Chinese pangolins and further optimizing the network of nature reserves to improve conservation coverage at the territorial scale. Conservation practices that concentrate on the viability assessment of scattered populations could lead to the successful restoration of the Chinese pangolin population. | https://www.authorea.com/inst/20554-ecology-and-evolution-open-research
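A generalized additive model is not available in scikit-learn, so the hedged sketch below approximates the study's GAM with spline basis features feeding a logistic regression, fitted to synthetic data; predictor names and values are invented for illustration only, and this is not the authors' model.

```python
# GAM-like sketch of extinction probability vs. human population density and
# temperature anomaly. Synthetic data; a spline-basis + logistic-regression
# pipeline stands in for the generalized additive model used in the study.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 400
pop_density = rng.uniform(0, 500, n)          # people per km2 (hypothetical)
temp_anomaly = rng.uniform(-0.5, 1.5, n)      # degC relative to a baseline

# Synthetic "truth": extinction risk rises with both predictors
logit = -3 + 0.008 * pop_density + 1.2 * temp_anomaly
extinct = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([pop_density, temp_anomaly])
model = make_pipeline(SplineTransformer(n_knots=5, degree=3),
                      LogisticRegression(max_iter=1000)).fit(X, extinct)

print("predicted extinction probability at (400 people/km2, +1.0 degC):",
      round(model.predict_proba([[400, 1.0]])[0, 1], 2))
```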
Climate change is not only a major topic of discussion around the world; the impacts of this environmental phenomenon are also becoming quite evident. ER Sreekumar and Dr PO Nameer, researchers at the University of Kerala’s Wildlife Department, recently conducted a study on the influence of climate change on two high-altitude bird species.
The study was carried out on two species of flycatchers with restricted distributions that depend on high altitudes: the black and orange flycatcher (BOF) Ficedula nigrorufa (Jerdon, 1839) and the Nilgiri flycatcher (NIF) Eumyias albicaudatus (Jerdon, 1840), to determine how they react to predicted climate change scenarios. The researchers used 194 and 300 independent occurrence points for BOF and NIF, respectively, to develop climate models and understand the species’ responses to climate change scenarios using the MaxEnt algorithm.
The model predicted a current extent of occurrence of 6,532 km² suitable for BOF and 12,707 km² for NIF within their ranges. However, only 27% and 24% of the existing suitable area of BOF and NIF, respectively, falls within the network of protected areas in the Western Ghats. Future forecasts suggest a loss of 20-31% of suitable area for BOF and 36-46% for NIF by 2050.
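The coverage and loss percentages quoted above boil down to simple raster overlays; the sketch below shows that arithmetic with boolean masks on made-up grids (cell size, suitability, and protection layers are all hypothetical).

```python
# Minimal sketch of the overlay arithmetic behind figures like "27% of suitable
# habitat falls inside protected areas". Grids and cell size are made up.
import numpy as np

rng = np.random.default_rng(1)
cell_area_km2 = 1.0                        # hypothetical 1 km x 1 km cells

suitable = rng.random((500, 500)) > 0.9    # cells predicted suitable by the SDM
protected = rng.random((500, 500)) > 0.95  # cells inside the reserve network

suitable_km2 = suitable.sum() * cell_area_km2
covered_km2 = (suitable & protected).sum() * cell_area_km2
print("suitable area: %.0f km2, of which %.1f%% is protected"
      % (suitable_km2, 100 * covered_km2 / suitable_km2))

# Projected future loss: fraction of currently suitable cells no longer suitable
future_suitable = suitable & (rng.random((500, 500)) > 0.3)
loss_pct = 100 * (1 - future_suitable.sum() / suitable.sum())
print("illustrative future loss of suitable area: %.0f%%" % loss_pct)
```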
Climate change induced by human (anthropogenic) activities and increased environmental degradation have put millions of species in danger of extinction. According to a recent report by the Intergovernmental Panel on Climate Change (IPCC), anthropogenic activities will cause the global temperature to rise by 1.2 °C above pre-industrial levels between 2030 and 2052. Erratic environmental conditions, declining species abundance and widespread extinctions are some of the significant predicted effects of climate change, the study noted.
The oscillating climate and unique floral structure of mountain ecosystems provide species-specific microclimatic conditions and habitat; such mountain ecosystems are known as “sky islands”. The Western Ghats (WG), considered one of the 36 biodiversity hotspots in the world, is located in southwest India and contains such sky islands. The Palakkad Gap is the main discontinuity in the entire 1600 km stretch of the Western Ghats. Since 2012, the Western Ghats have also been a World Heritage Site, and two ranges of hills in the region (the Nilgiri Hills and the Agasthyamalai Hills) have been recognized as Biosphere Reserves by the United Nations Educational, Scientific and Cultural Organization (UNESCO), according to the study.
The Western Ghats mountain range is highly endemic, with several species restricted to a narrow elevation range. This specialized habitat is now deteriorating due to changing climatic conditions and human activities. Under the looming threat of global warming and habitat loss from climate change, it is essential to assess the plight of the Western Ghats habitat specialists so that corrective conservation strategies can be planned, according to the report.
The Black and Orange Flycatcher and the Nilgiri Flycatcher are monotypic species endemic to the southern Western Ghats and confined to higher altitudes. The BOF prefers the understory of shola forests, especially Strobilanthes and bamboo thickets, among patches of stunted evergreen forest on the sky islands of the Western Ghats; it is distributed above 700 m elevation but is more frequent at around 1500 m and above. The NIF is also found above 600 m altitude but more frequently above 1200 m.
Degraded forests and timber, tea, coffee and cardamom plantations adjacent to forest areas are also considered suitable habitats. The NIF feeds mainly on invertebrates; however, it also consumes fruits and berries of Vaccinium spp., Syzygium spp., Cestrum spp., etc., according to the current Birds of India report.
It is essential to recognize the effect of climate change on endemic species due to their restricted distribution and specific habitat needs.
Species distribution models (SDMs) are practical tools for understanding the relationship between species occurrence and environmental factors. SDMs also help identify previously unknown areas of occurrence for a species based on its known occurrence points and predictor variables.
The study predicted suitable locations along high-elevation regions for the two species of restricted-range flycatchers in the Western Ghats. Of the two, the NIF has more widely distributed suitable areas available in the southern Western Ghats. The BOF is restricted to high-altitude pockets, and its distribution is more isolated than that of the NIF. Few occurrence data are available in the Brahmagiri Hills for the two species. In the case of the NIF, the model predicts additional suitable areas in the BR Hills, but the species may not be found there due to the unavailability of montane habitat. In addition, the NIF is not a long-distance migrant, and these predicted suitable areas are within 50-100 km of the known range of the species. The regions of the Agasthyamalai Hills, Pandalam Hills, Anamalai Hills, and Nilgiri Hills are the primary habitats for both species of flycatchers. BOF and NIF have a strong preference for mountain habitats, according to the report.
The results of the study were published in the latest edition of the leading scientific journal Current Science. The study was carried out as part of ER Sreekumar’s doctoral research. | https://nostrich.net/kerala-researchers-show-impact-of-climate-change-on-two-species-of-migratory-birds-lifestyle-news/ |
How does habitat fragmentation influence species diversity?
The positive effects of fragmentation have been attributed to numerous causes including – but not limited to – increases in functional connectivity and in the diversity of habitat types, persistence of predator–prey systems, and decreases in intra- and interspecific competition.
Why is habitat loss bad for biodiversity?
Habitat loss has significant, consistently negative effects on biodiversity. Habitat loss influences biodiversity directly through its impact on species abundance, genetic diversity, species richness, and species distribution, and it also acts indirectly.
How does habitat fragmentation affect biodiversity?
It lowers biodiversity, as species have to compete for resources and some will become extinct.
How can habitat fragmentation lead to a new species?
In addition to threatening the size of species’ populations, habitat fragmentation damages species’ ability to adapt to changing environments, partly through genetic drift: the process of change in the genetic composition of a population due to chance or random events, rather than natural selection.
What are the effects of habitat fragmentation?
In addition to loss of habitat, the process of habitat fragmentation results in three other effects: increase in number of patches, decrease in patch sizes, and increase in isolation of patches.
Why is habitat fragmentation bad?
The effects of fragmentation are well documented in all forested regions of the planet. In general, by reducing forest health and degrading habitat, fragmentation leads to loss of biodiversity, increases in invasive plants, pests, and pathogens, and reduction in water quality.
How could we improve the biodiversity of fragmented habitats?
Habitat fragmentation is caused by natural factors and human activities. Connecting habitats through corridors such as road overpasses and underpasses is one solution for restoring fragmented patches, building more climate-resilient landscapes, and restoring populations and overall biodiversity.
What are the positive effects of habitat loss?
Explanations for positive fragmentation effects are myriad, including reduced intra- and inter-species competition, stabilization of predator/parasite–prey/host interactions, higher landscape complementation, positive edge effects, and higher landscape connectivity. | https://asocon.org/biodiversity/question-why-does-habitat-fragmentation-generally-lead-to-a-loss-of-biodiversity.html |
Ants are a ubiquitous, highly diverse, and ecologically dominant faunal group. They represent a large proportion of global terrestrial faunal biomass and play key ecological roles as soil engineers, predators, and recyclers of nutrients. They have particularly important interactions with plants as defenders against herbivores, as seed dispersers, and as seed predators. One downside to the ecological importance of ants is that they feature on the list of the world’s worst invasive species. Ants have also been important for science as model organisms for studies of diversity, biogeography, and community ecology. Despite such importance, ants remain remarkably understudied. A large proportion of species are undescribed, the biogeographic histories of most taxa remain poorly known, and we have a limited understanding of spatial patterns of diversity and composition, along with the processes driving them. The papers in this Special Issue collectively address many of the most pressing questions relating to ant diversity. What is the level of ant diversity? What is the origin of this diversity, and how is it distributed at different spatial scales? What are the roles of niche partitioning and competition as regulators of local diversity? How do ants affect the ecosystems within which they occur? The answers to these questions provide valuable insights not just for ants, but for biodiversity more generally.
Keywords: ant diversity; cryptic species; morphospecies; species delimitation; sympatric association; endosymbiont; ant; vertical transmission; biogeography; ancestral state reconstruction; phylogeny; ants; community structure; physiology; interactions; temperature; behavioral interactions; coexistence; co-occurrence; competitive exclusion; dominance; Formicidae; scale; Dolichoderinae; species distribution models; climatic gradients; wet tropics; climate change; invasion ecology; invasive species; red imported fire ant; commensalism; gopher tortoise; diversity; conservation; burrow commensal; soil arthropods; pitfall; bait; turnover; food specialisation; stratification; sampling methods; hypogaeic; species richness; species occurrence; endemic species; distribution ranges; dispersal routes; centre of origin; refugium areas; antbird; army ant; biodiversity; biological indicator; deforestation; habitat fragmentation; myrmecophiles; mimicry; species interactions; tropics; biological invasions; species checklist; urban ecology
Keywords
- Climate Change Impact
- Unpredictable Environmental Variation
- Riparian Vegetation
- Thermal Niche
- Potential Distribution Area
1 Introduction
The unprecedented rates of warming observed during recent decades exceed natural variability to such an extent that it is widely recognized as a major environmental problem not only among scientists. The role of our economy in driving such change has made it an economic and political issue. There is ample evidence that climate characteristics are changing due to greenhouse gas emissions caused by human activities. As a source of extreme, unpredictable environmental variation, climate change represents one of the most important threats for freshwater biodiversity (Dudgeon et al. 2006; Woodward et al. 2010).
The Intergovernmental Panel on Climate Change (IPCC), a scientific intergovernmental institution, documents knowledge on climate change research since 1988. The last assessment report (Hartmann et al. 2013) noted the following significant trends: The period from 1983 to 2012 was likely the warmest 30-year period of the last 1400 years in the Northern Hemisphere. The observed increase in global average surface temperature from 1951 to 2010 is extremely likely to have been caused by anthropogenically induced greenhouse gas (GHG) emissions. This is underpinned by the fact that the best estimate of the human-induced contribution to warming is similar to the observed warming. Since 1950, high-temperature extremes (hot days, tropical nights, and heat waves) have become more frequent, while low-temperature extremes (cold spells, frost days) have become less frequent (EEA 2012). Since 1950, annual precipitation has increased in Northern Europe (up to +70 mm/decade) and decreased in parts of Southern Europe (EEA 2012). Hence, climate change is arguably the greatest emerging threat to global biodiversity and the functioning of local ecosystems.
Climate is an extremely important driver of ecosystem processes in general, but especially so in freshwater ecosystems as thermal and hydrological regimes are strongly linked to climate. Atmospheric energy fluxes and heat exchange strongly influence river water temperature, which is one of the most important factors in the chemo-physical environment of aquatic organisms (Caissie 2006). Besides temperature, climate directly affects runoff through the amount and type of precipitation. Increasingly, rising trends of surface runoff have been driven by more frequent episodes of intense rainfall. All river flow derives ultimately from precipitation, although geology, topography, soil type, and vegetation can help to determine the supply of water and the pathways by which precipitation reaches the river channel (Poff et al. 1997).
Riverine ecosystems are particularly vulnerable to climate change because (1) many species within these habitats have limited dispersal abilities as the environment changes, (2) water temperature and availability are climate-dependent, and (3) many systems are already exposed to numerous human-induced pressures (Woodward et al. 2010). Aquatic organisms such as fish and macroinvertebrates are ectothermic. Hence, they are directly and indirectly dependent on the surrounding temperatures. Climate conditions affected species distributions already in the past. Species richness patterns across Europe can still be linked to the Last Glacial Maximum with the highest species richness in Peri-Mediterranean and Ponto-Caspian Europe (Reyjol et al. 2007).
The ecological consequences of future climate change in freshwater ecosystems will largely depend on the rate and magnitude of change related to climate forcing, i.e., changes in temperature and streamflow. These changes not only imply absolute changes (increases or decreases) but also the increasing variation between extremes. The hydrological and thermal regimes of rivers directly and indirectly trigger different ecological processes. In the following section, we discuss water temperature and related processes in more detail. General principles of river hydrology are discussed in Chap. 4.
2 Water Temperature
Water temperature is, among others, one of the most important habitat factors in aquatic ecosystems, perhaps even the master variable (Brett 1956). Riverine fish and macroinvertebrates are ectothermic organisms, and thus, all life stages are dependent on their ambient temperatures. Generally, many factors are involved in the formation of water temperature. According to Caissie (2006), the factors, which drive the thermal regime, can be summarized in four groups (Fig. 11.1): (1) atmospheric conditions, (2) stream discharge, (3) topography, and (4) streambed. The atmospheric conditions are highly important and mainly responsible for heat exchange processes occurring at the water surface. Topography covers the geographical setting, which in turn can influence the atmospheric factors. Stream discharge mainly determines the volume of water flow, i.e., affecting the heating capacity. Consequently, smaller rivers exhibit faster and more extreme temperature dynamics because they are more vulnerable to heating and to cooling due to lower thermal capacity. Lastly, streambed factors are related to hyporheic processes. Heat exchange processes, which are highly relevant for water temperature modeling, mainly occur at the interfaces of air and water as well as water and streambed. The former is mainly triggered by solar radiation, long-wave radiation, evaporative heat fluxes, and convective heat transfer. The contribution of other processes, such as precipitation or friction, is small in comparison. Several studies have highlighted the importance of radiation in the thermal regime. This implies the importance of riparian vegetation, which protects a stream against excessive heating (Caissie 2006).
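A back-of-envelope sketch of this heat balance helps make the point about thermal capacity concrete: the warming rate of a well-mixed water column is the net surface heat flux divided by water density, specific heat capacity, and mean depth. The flux values below are illustrative, not measured.

```python
# Back-of-envelope sketch of the surface heat balance described above: the rate
# of water-temperature change follows from the net heat flux, water density,
# specific heat capacity, and mean depth. Flux values are illustrative.
RHO_WATER = 1000.0      # kg m^-3
CP_WATER = 4186.0       # J kg^-1 K^-1

def warming_rate(net_flux_w_m2, depth_m):
    """Temperature change rate (K per hour) of a well-mixed water column."""
    dT_dt = net_flux_w_m2 / (RHO_WATER * CP_WATER * depth_m)   # K per second
    return dT_dt * 3600

# Net flux = shortwave + longwave + latent + sensible (all W m^-2, hypothetical)
net_flux = 600 - 80 - 120 - 40
for depth in (0.3, 3.0):
    print("depth %.1f m -> %.2f K per hour" % (depth, warming_rate(net_flux, depth)))
```

With the same net flux, the shallow column warms roughly ten times faster, which is the "lower thermal capacity" argument for small streams in numerical form.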
Thermal regimes of rivers show some general trends: Water temperature increases nonlinearly from the river source to its mouth, at which the increase rate is greater for small streams than for large rivers. This general, large-scale pattern is counteracted by small-scale variabilities occurring at confluences with tributaries, in deep pools, or at groundwater inflows. While water temperature is relatively uniform in cross sections, streams and rivers are turbulent systems where stratification is generally not expected. However, groundwater intrusion and hyporheic water exchange in pools can create cold water spots (Caissie 2006).
Besides spatial variations, the thermal regime shows temporal fluctuations of water temperature in diel and annual cycles. Daily minimum temperatures can be observed in the morning hours and maximum temperatures in the late afternoon. The magnitude of daily variations differs on the longitudinal gradient of rivers (Fig. 11.2).
Water temperature is a central feature in the chemo-physical environment of ectotherm aquatic organisms. Temperature controls almost all rate reactions (chemical and biological) and is thus a strong influence on biological systems at all levels of organization directly and indirectly triggering a magnitude of processes in aquatic life (Woodward et al. 2010). The biological dependences of the aquatic fauna and the according responses due to changes in the thermal regime are discussed in more detail below. General impacts of hydrological regimes on freshwater fauna are described in Chaps. 4, 5, and 6 allowing for the inference of potential climate change impacts.
3 Impacts
Riverine ecosystems are among the most sensitive to climate change because they are directly linked to the hydrological cycle, closely dependent on atmospheric thermal regimes, and at risk from interactions between climate change and existing, multiple, anthropogenic stressors (Dudgeon et al. 2006; Ormerod 2009). Figure 11.3 conceptually summarizes direct and indirect effects of climate change, combining hydrology and temperature. Water temperature has received much less attention with respect to ecological effects than other facets of water quality, such as eutrophication, suspended sediments, and pollution. The following section highlights climate change impacts on thermal as well as hydrological regimes. Furthermore, the interactions of climate change with other pressures are shortly discussed. Finally, this chapter addresses the ecological implications of climate change.
3.1 Climate Change Impacts on Thermal Regimes
An increase in air temperature will directly translate into warmer water temperatures for most streams and rivers. This change in thermal characteristics fundamentally alters ecological processes. Even though over the past 30 years warming in rivers and streams is consistently reported from global to regional scales (e.g., Webb and Nobilis 1995; Kaushal et al. 2010; Orr et al. 2014), climate change is not in all cases the exclusive reason for this warming. Temporal trends in thermal regimes can be also influenced by human-induced pressures such as impoundment, water abstraction, warm-water emissions from cooling and wastewater discharges, land use change (particularly deforestation), or river flow regulation. However, these other causes for river warming are hard to quantify (Kaushal et al. 2010).
At many sites, long-term increases in the water temperatures of streams and rivers typically coincided with increases in annual mean air temperatures. Warming trends also occur in rivers with sparsely settled catchments with intact forest cover. A comprehensive study by Orr et al. (2014) comprising 2773 sites across the United Kingdom showed warming trends (0.03 °C per year) from 1990 to 2006, which are comparable to those reported for air temperature. Similarly, Markovic et al. (2013) showed increasing temperature trends for the Elbe and Danube rivers, which accelerated at the end of the twentieth century. Furthermore, seasonal shifts were indicated by earlier spring warming and an increase in the duration of summer heat phases. During the next century, global air temperatures are projected to increase by 1.5–4.5 °C (Hartmann et al. 2013). This temperature increase will have manifold consequences for aquatic fauna, which are discussed in more detail in Sect. 11.3.4.
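Trends such as the 0.03 °C per year figure are typically estimated as a least-squares slope over annual mean temperatures; the sketch below shows that calculation on a synthetic series (the data are generated, not taken from Orr et al.).

```python
# Sketch of how a warming trend like "0.03 degC per year" is typically
# estimated: an ordinary least-squares slope over annual mean temperatures.
# The series below is synthetic.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1990, 2007)
annual_mean_temp = 9.0 + 0.03 * (years - years[0]) + rng.normal(0, 0.15, len(years))

slope, intercept = np.polyfit(years, annual_mean_temp, 1)
print("estimated trend: %.3f degC per year (%.2f degC per decade)" % (slope, slope * 10))
```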
Another important, human-induced impact that directly affects water temperature and thermal regimes is deforestation and removal of riparian vegetation. The removal of riparian vegetation can have tremendous effects on water temperatures as increased energy input from radiation induces heating. Small streams with lower heat capacity are quite vulnerable to this impact, especially where a full canopy of riparian vegetation naturally occurs.
3.2 Climatic Aspects in Hydrology
Despite the strongly consistent pattern of hydrological change in some regions, e.g., reduced runoff during summer and more runoff during winter due to shifts from snow to rainfall, there is considerable uncertainty in how climate change will impact river hydrology. In Europe, already dry regions such as the Mediterranean area or the Pannonian lowlands will become drier, and already wet regions such as Scandinavia or the Alps will become a bit wetter.
However, streamflow trends must be interpreted with caution because of confounding factors, such as land use changes, irrigation, and urbanization. In regions with seasonal snow storage such as in the Alps, warming since the 1970s has led to earlier spring discharge maxima and has increased winter flows due to more precipitation as rainfall instead of snow. Moreover, where streamflow is lower in summer, decrease in snow storage has exacerbated summer dryness.
The projected impacts in a catchment under future climate conditions depend on the sensitivity of the catchment to change in climatic characteristics and on the projected change of precipitation, temperature, and resulting evaporation. Catchment sensitivity is a function of the ratio between runoff and precipitation. Accordingly, a small ratio indicates a higher importance of precipitation for runoff. Proportional changes in average annual runoff are typically between one and three times as large as proportional changes in average annual precipitation (Tang and Lettenmaier 2012). In turn, the smaller the ratio, the greater the sensitivity. However, the uncertainties in the hydrological models can be substantial. In some regions, and especially on medium time scales (up to the 2050s), uncertainties in hydrological models can be greater than climate model uncertainty, i.e., the uncertainty in the results of the hydrological model is larger than the predicted change induced by altered climate conditions, so that the projected change loses significance.
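One common way to express this sensitivity is the precipitation elasticity of runoff (the relative change in runoff per relative change in precipitation); the sketch below computes a simple nonparametric estimate from made-up annual series, so the numbers are illustrative rather than regional results.

```python
# Sketch of the runoff-precipitation sensitivity described above: a simple
# nonparametric precipitation elasticity of streamflow (median of relative
# runoff change over relative precipitation change) from made-up annual series.
import numpy as np

rng = np.random.default_rng(3)
years = 40
precip = rng.normal(900, 120, years)                  # mm per year (hypothetical)
runoff = 0.45 * precip + rng.normal(0, 30, years)     # mm per year (hypothetical)

dP = (precip - precip.mean()) / precip.mean()
dQ = (runoff - runoff.mean()) / runoff.mean()
elasticity = np.median(dQ[dP != 0] / dP[dP != 0])

print("runoff ratio: %.2f" % (runoff.mean() / precip.mean()))
print("precipitation elasticity of runoff: %.1f" % elasticity)
print("-> a 10%% drop in precipitation implies roughly a %.0f%% drop in runoff"
      % (10 * elasticity))
```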
In alpine regions, glaciers can contribute appreciable amounts of water to the discharge of rivers. All projections for the twenty-first century show continuing net mass loss from glaciers. In glaciered catchments, runoff reaches an annual maximum during summer, which strongly influences river thermal conditions as well. Reduced contributions from glacial runoff induce shifts of peak flows toward spring. Furthermore, the reduced glacial input can lead to more erratic and variable discharge dynamics in response to rain events. The relative importance of high-summer glacier meltwater can be substantial, for example, contributing 25% of August discharge in basins draining the European Alps (Huss 2011). Observations and models suggest that global warming impacts on glacier and snow-fed streams and rivers will pass through two contrasting phases. In the first phase, river discharge increases due to intensified melting. In the second phase, snowfields melt early and glaciers have shrunken to a point that late-summer streamflow is strongly reduced. The turnover between the first and second phase is called “peak meltwater.” Peak meltwater dates have been projected between 2010 and 2040 for the European Alps (Huss 2011).
River discharge also influences the response of thermal regimes to increased air temperatures. Simulated discharge decreases of 20 and 40% may result in additional increases of river water temperature of 0.3 and 0.8 °C on average (Van Vliet et al. 2011). Consequently, where drought becomes more frequent, freshwater-dependent biota will suffer directly from changed flow conditions and also from drought-induced river temperature increases. Furthermore, increased temperature will accompany decreased oxygen and increased pollutant concentrations.
Hydrology itself is a driver of aquatic communities, and disturbances, such as floods, have regulatory effects on riverine biota as dominant populations are reduced, pioneers are supported, and free niches are opened. Hydrological dynamics are therefore essential to maintain overall biodiversity in aquatic ecosystems. Riverine species have evolved specific life-cycle adaptations to seasonal differences in hydrological regimes that are specific to different eco- and bioregions. For instance, larval growth rates of benthic invertebrates are high during winter as the hydraulic stress is reduced in low-flow periods in alpine rivers. Disturbances, such as acyclic extreme events, may be linked with severe losses in biomass, with species richness, and with the selection of species-specific traits. Unstable environments favor small, adaptive species with short life cycles, whereby larger organisms with longer life spans are generally handicapped (Townsend and Hildrew 1994; see Chap. 4).
3.3 Interactions of Climate Change with Other Stressors
Climate change is not the only source of stressors impacting water resources and aquatic ecosystems. Non-climatic drivers such as population increase, economic development, pollutant emissions, or urbanization challenge the sustainable use of resources and the integrity of aquatic ecosystems (Dudgeon et al. 2006; Nelson et al. 2006). Changing land uses are expected to affect freshwater systems strongly in the future: Increasing urbanization and deforestation may decrease groundwater recharge and increase flood hazards with consequences for hydrology. Furthermore, agricultural practices are strongly related to the climatic conditions (Bates et al. 2008). Thus, agricultural land use will be of particular importance for the integrity of freshwater systems in the future (see Chap. 13). Irrigation accounts for about 90% of global water consumption and severely impacts freshwater availability for humans and ecosystems (Döll 2009).
Climate can induce change in human uses or directly interact with human pressures. Hydropower generation, for example, causes major pressures on riverine ecosystems. Through damming, water abstractions, and hydropeaking, hydropower plants affect habitat quality by, e.g., altering river flow regimes, fragmenting river channels, or disturbing discharge regimes on hourly time scales (Poff and Zimmerman 2010) (for more details, see Chaps. 4–7). However, climate change affects hydropower generation itself through changes in the mean annual streamflow, shifts of seasonal flows, and increases of streamflow variability (including floods and droughts) as well as by increased evaporation from reservoirs and changes in sediment fluxes. Some of these interactions can have negative effects on hydropower generation as well. Especially, run-of-the-river power plants are more susceptible than storage-type power plants to climate change impacts, such as increased flow variability. However, the existing pressures of hydropower generation can be augmented by climate change; e.g., low-flow conditions in river reaches downstream of diversion power plants may be amplified through drought.
Another important field of interacting effects is water quality. On the one hand, increased water temperatures influence many biogeochemical processes such as the self-purification of water. On the other hand, rising temperatures will lead to increasing water demands by socioeconomic systems (e.g., for irrigation or cooling). Water quality aspects are discussed in more detail in Chap. 10.
3.4 Ecological Impacts of Thermal Regimes on Aquatic Fauna
As discussed above, climate change will affect several ecosystem processes relevant for aquatic life. The most pervasive impact of climate change will be the change of the thermal regime and mostly a warming of water temperatures. Therein, climate change will affect several characteristics of the thermal regime (e.g., mean, minima, and maxima), which are relevant for aquatic life.
Almost all fishes and macroinvertebrates are obligate poikilotherms or thermal conformers; as such, almost every aspect of the ecology of an individual is influenced by the temperature of the surrounding water from the egg to the adult individual (Brett 1956). Fry (1947) outlined five main categories of temperature effects on fishes that are likely to influence macroinvertebrates too: controlling (metabolic and developmental rates), limiting (affecting activity, movement, and distribution), directing (stimulating an orientation response), masking (blocking or affecting the expression to other environmental factors), and lethal effects that act either directly to kill the organism or indirectly as a stress effect. Thus, the responses of the aquatic fauna to water temperature changes might occur at various levels of organization from the molecular through organismal and population to the community level (McCullough et al. 2009; Woodward et al. 2010). Climate and thus climate change can affect almost every component of an individual fish’s life including availability and suitability of habitats, survival, reproduction, and successful hatching, as well as metabolic demands. The temperature thresholds associated with these effects differ not only between species but also between different life stages. Besides the different organizational levels, the responses of aquatic fauna to climate change will be heterogeneous due to regional and taxonomic variations. In the following, the different organizational levels will be discussed and related to climate change impacts with a focus on the population (including species) and community level.
At the molecular level, the thermal tolerance of an organism and its physiological limits are key determinants as to whether the organism is able to adapt to the thermal conditions due to its genetic constitution. Biological reactions to impacts on the molecular level include heat shocks, stress responses, and changes to enzyme function or to genetic structure. However, the physiological response of an organism is also linked to other parameters, such as sex, size, season, and water chemistry. Thus, the thermal preference of a species cannot adequately be described by a single temperature value, such as the mean. Several metrics can be used to quantitatively describe the thermal preference and tolerance of a species and its life stages: optimum growth temperature supporting the highest growth rate, final temperature preference indicating the temperature toward which a fish tends to move when exposed to a temperature range, upper incipient lethal limit, the upper temperature value that 50% of fish survive in an experiment for an extended period, critical thermal maximum that describes the upper temperature in an experiment at which fish loses its ability to maintain the upright swimming position, optimum spawning temperature, and optimum egg development temperature. Actually, lethal temperatures relate not only to a fixed maximum threshold. The maximum temperature a species or a specific life stadium withstands is also strongly related to the acclimation time, i.e., the time over which temperature changes.
According to Magnuson et al. (1997), aquatic organisms can be classified into three thermal guilds: (1) cold-water species with physiological optimums <20 °C, (2) coolwater species having their physiological optimums between 20 and 28 °C, and (3) warm-water species with an optimum temperature > 28 °C. Even though it is possible to delineate thermal niches in the laboratory, evidence from field data is much more heterogeneous (Magnuson et al. 1979) as in complex and dynamic river systems the interplay of several biotic and abiotic factors is relevant for the aquatic organisms.
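The guild thresholds above translate directly into a small classification helper; the species names and optimum temperatures used in the example are illustrative rather than measured values.

```python
# Direct translation of the thermal-guild thresholds given above (Magnuson et
# al. 1997): cold-water < 20 degC, coolwater 20-28 degC, warm-water > 28 degC.
def thermal_guild(optimum_temp_c):
    if optimum_temp_c < 20:
        return "cold-water"
    if optimum_temp_c <= 28:
        return "coolwater"
    return "warm-water"

# Hypothetical physiological optima (degC) for a few illustrative species
for species, topt in [("brown trout", 16), ("European perch", 23), ("common carp", 30)]:
    print(species, "->", thermal_guild(topt))
```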
At the organismal level, fish are able to react behaviorally to stay within the range of their thermal tolerance and to avoid stress effects or sublethal effects. Even though fish, as exotherms, cannot physiologically regulate their body temperature, they are able to select thermally adequate microhabitats by movement within the range of temperatures available in their environment. Movements to avoid stressful thermal conditions and stay within adequate habitat conditions are important behavioral responses to changing spatial and temporal patterns of temperature (McCullough et al. 2009). In contrast to large-scale migrations into new habitats that will be discussed below under the population level, behavioral movements do not change the potential distribution area of a species. Such movements are temporarily limited habitat changes. Thermal stress can lead to reduced disease resistance or changed feeding and foraging, all having negative effects on the fitness and viability of the individual. By contrast, macroinvertebrates do not have the possibility for directional movements within flowing water. Macroinvertebrates have the option to retract into interstitial spaces within the bottom substrate or to drift by passive movement downstream (into warmer river reaches).
At the population level (including the species), factors relevant to responses to thermal variability are spatial distributions on the species level as well as population viability including abundance, productivity, and genetic diversity. If thermal conditions continuously exceed the preferred range of a species and adequate habitats diminish in the current environment, temporal movements into adequate microhabitats, as discussed under the organismal level, become insufficient to secure survival of the population. In this case, temperature drives changes in potential distribution area. Aquatic organisms have two options to stay within a specific thermal niche under warming environments due to climate change: either migrate to northern latitudes or to higher altitudes (see Fig. 11.4).
However, the possibility to migrate and thus the possibility to follow or to reach thermally suitable habitat depend on two criteria: firstly on the dispersal ability of the species and secondly on the availability of passable migration pathways and corridors to suitable habitats, respectively. Capacity for the former, i.e., dispersal abilities, can be measured in terms of how much time it takes a species to follow the thermal niche or how far species can follow this niche, but these are still not well investigated and largely unknown. In the latter case, migration pathways for endemic species are uncertain. Endemic species have limited distributions for several reasons. Purely aquatic species are expected to be severely challenged by climate change, especially if the river network is not connected to higher latitudes or elevations, and thus to cooler habitats. For example, fish species of the Mediterranean region, where endemism is high, may find no passable route to migrate northward in river systems draining to the Mediterranean Sea.
Another example where migration is impossible is the springs of rivers. Springs, i.e., the real source of the river, are colonized by specific species and assemblages of benthic invertebrates. These assemblages are assumed to be especially vulnerable to any environmental changes in terms of temperature or hydrology, since these habitats have “extratropical” character, i.e., the habitat conditions are and have been extremely constant over time. These assemblages and habitats are especially vulnerable in medium elevation ranges, around 1500 m, where climate-induced temperature increases will raise the source temperature of rivers. These species are among potential losers of climate change effects as they are trapped in sky islands, i.e., mountain refugia, and are not able to shift to suitable thermal or hydrological conditions, either up- or downstream (Bässler et al. 2010; Sauer et al. 2011; Dirnböck et al. 2011; Vitecek et al. 2015a, b; Rabitsch et al. 2016). Another vulnerable stream type is glacier-fed streams with cold and turbid waters inhabited by species specialized for these exceptional conditions. The shrinkage of glaciers will reduce local and regional diversity (Jacobsen et al. 2012).
Generally, the change of distribution patterns is a central topic in climate change impact research in aquatic ecosystems. Climate is a strong determinant in biogeographical distribution patterns (Reyjol et al. 2007), and hence, climate change will have huge impacts on the biogeographical configuration of aquatic communities. Comte et al. (2013) reviewed observed and predicted climate-induced distribution changes for fish. Most evidence was found for cold-water fishes and within cold-water fishes for salmonids (Fig. 11.5). This is not surprising as the different species of salmon and trout are economically highly relevant in angling and fisheries and often represent species with a high cultural value too. Nonetheless, climate change impacts are less well studied in freshwater environments than in the terrestrial or marine realm.
In most cases, climate change-induced distribution shifts of cold-water species lead to shrinking habitat availabilities due to the loss of habitats at the downstream end of the distribution area or to an upstream shift into cooler areas. Filipe et al. (2013) forecasted future distribution of trout across three large basins in Europe covering a wide range of climatic conditions. The predictions clearly showed tremendous losses of habitats. In turn, the Alps represented a stable distribution area in the models (Fig. 11.6). On a large-scale, continental perspective, the conservation of such habitats is highly important in the face of climate change, since the habitats will dramatically reduce in other areas. If these thermally suitable habitats and their trout populations are impacted by other pressures, the species can be also extirpated in this area. Hari et al. (2006) underlined the relationships of warming rivers in the Swiss Alps and the already occurred decline of trout populations at the end of the twentieth century.
In the case of trout, the species already occupies the upstream sections of upland rivers. Potentially in some areas, trout may extend its distribution further upstream, but in most cases, a further migration may be limited by habitat factors other than temperature or by topographical barriers, respectively. In turn, species that are currently occurring more downstream would have the possibility to track their thermal niche into upstream reaches. However, Comte and Grenouillet (2015) showed that riverine fish species consistently lagged behind the speed at which climate change gains elevation and latitude (Isaak and Rieman 2013) with higher rates for habitat losses than for habitat gains, i.e., the preferred thermal range and the actually occupied thermal environment drift apart from each other. This lag can be also caused by insufficient connectivity that represents a highly important issue for migration but is impaired by other human-induced impacts such as barriers or also water abstraction. Macroinvertebrates may overcome the problem of migration barriers by overland (aerial) dispersal in their adult life stage, since most aquatic insects have winged adult stages. However, some of these species are poor fliers (e.g., mayflies) and would most likely not be successful to migrate upward, particularly in regions with strong winds or distinct topography.
Temperature effects on communities comprise responses to temperature via food web dynamics, interactions among fish species or biotic interactions among different taxa, as well as the role of diseases and parasites. Furthermore, the emergence of non-native, exotic species is highly relevant in community aspects. Thus, this organizational level is highly relevant with respect to biodiversity that is especially under pressure in freshwater ecosystems (Dudgeon et al. 2006). However, distribution shifts of single species as discussed under the population level are linked to the dynamics of community composition.
The transition of fish species along the river continuum is characterized by two trends: (1) a downstream increase in species richness and biomass and (2) a turnover in species composition from salmonid to cyprinid communities. In Europe, the species-poor assemblages of the upstream reaches are dominated by cold-water species and the downstream reaches by warm-water-tolerant species. Macroinvertebrate communities, which are species-rich in comparison with fish communities, change in a similar fashion along the river continuum, in distinct reaction to temperature and to other parameters such as oxygen saturation, substrate composition, flow velocity, and food resources. Temperature increases can thus induce assemblage shifts.
Pletterbauer et al. (2015) investigated fish assemblage shifts based on the Fish Zone Index (FiZI) that considers not only the occurrence of a species but also its abundance (Schmutz et al. 2000). The results showed significant assemblage shifts across major parts of Europe with strongest impacts on fish assemblages in upstream sections of small- and medium-sized rivers as well as in Mediterranean and alpine regions. By comparing distribution shifts for different taxa groups in different regions, Gibson-Reinemer and Rahel (2015) recently found that responses are idiosyncratic for plants, birds, marine invertebrates, and mammals. The authors stated that “inconsistent range shifts seem to be a widespread response to climate change rather than a phenomenon in a single area or taxonomic group.” Thus, distribution shifts will not occur for all species at the same time and to the same extent. Accordingly, vulnerabilities have to be addressed on the different levels of organization. Hering et al. (2009) analyzed the vulnerability of the European Trichoptera fauna to climate change and found that parameters such as endemism, preference for springs or for cold water temperatures, short emergence period, and restricted ecological niches in terms of feeding types are responsible for the species-specific sensitivity to climate change impacts. Accordingly, species of the Mediterranean peninsulas and mountainous areas in Central Europe are potentially more threatened than species of Northern Europe (Fig. 11.7).
4 Adaptation and Restoration
Successful climate change adaptation requires responses at the appropriate temporal and spatial scales. However, sustaining integral ecosystem processes and functions will need inter- and transdisciplinary approaches to address climate change impacts. The effects of climate change are already visible and measurable in aquatic ecosystems. Hence, conservation and restoration practitioners and researchers need to share information effectively and with diverse audiences such as policy- and decision-makers, NGOs, and other stakeholders to ensure sharing most recent findings and to enable proactive management (Seavy et al. 2009). Rapid environmental change urgently requires society to be informed about the ongoing and upcoming threats related to climate change.
Broad suggestions for adapting rivers to climate change impacts are similar to those for other ecosystems, including the enhancement of resilience, connectivity, and legal protection while reducing stressors, such as habitat degradation or fragmentation (Palmer et al. 2008). However, the development of adequate and robust management strategies is key to conserve intact, freshwater habitats. With respect to climate change and aquatic ecosystems, water temperature is one of the master variables that requires attention.
Riparian vegetation contributes various important functions in relation to aquatic habitats, including the moderation of water and ambient air temperature via evapotranspiration and reduction of solar energy input by shading. It thus provides a buffer zone that filters sediments and nutrients, provides food, and creates woody debris as habitat for xylobiont species (Richardson et al. 2007). Evapotranspiration rates are highest in forest habitats due to their high leaf area index (Tabacchi et al. 1998). In this context, a major issue is the mitigation potential of riparian vegetation to keep rivers cooler. Recent studies have shown that shading by riparian vegetation can buffer the warming effects of climate change (Bond et al. 2015).
Another important aspect in climate change adaptation is habitat connectivity. As discussed above, species will tend to follow their preferred thermal niche in their river network. Accordingly, the spatial connection between different river reaches is highly important, especially for cold-water taxa, as long-term thermal refugia are located upstream where water temperature is lowest along the longitudinal continuum. As shown by Isaak et al. (2015), thermal habitats in mountain streams seem highly resistant to temperature increases. As a result, many populations of cold-water species currently exist where they are well-buffered from climate change. However, connectivity is not only relevant on the scale of the river network. On shorter time scales, cold-water refugia may occur as patchy distributions along the river course. Deep pools with high groundwater exchange rates or other river sections with groundwater intrusion can provide valuable habitats where species can endure heat waves. Accordingly such refugia must be connected to the surrounding habitats such that they can be accessed and used. However, morphological degradation impedes the availability of such refugia. Thus, habitat heterogeneity and morphological integrity, including natural riverbed and sediment dynamics, are essential to provide adequate habitat patches for different species and their life stages, also from the thermal point of view.
In addition to climate change, the future of freshwater ecosystems will be strongly influenced by other sources of stress: socioeconomic and technological changes as well as demographic developments on the global scale (Dudgeon et al. 2006; Nelson et al. 2006). Ultimately, as climate change impacts start to overwhelm the capacity of society and of ecosystems to cope or adapt, substantial reduction in GHG emissions becomes inevitable. Until some combination of foresight, technological advances, and political will makes such reduction possible, research, monitoring, and experimental advances in practice must be pursued to inform society and to slow the effects of climate change on riverine ecosystems.
4.1 Case Study BIO_CLIC: Potential of Riparian Vegetation to Mitigate Effects of Climate Change on Biological Assemblages of Small- and Medium-Sized Running Waters
The transdisciplinary research project BIO_CLIC investigated the impact of riparian vegetation on the water temperature regime as well as on aquatic organisms of small- and medium-sized rivers in southeastern Austria. Its objectives were to identify and understand the potential of riparian vegetation to mitigate climate change impacts on water temperature and, ultimately, on benthic invertebrate and fish species assemblages. Finally, BIO_CLIC aimed to support river managers in implementing integrative management for sustainable river restoration toward climate change adaptation that incorporates ecosystem services and socioeconomic consequences.
The study area in the Austrian lowlands, represented by the rivers Lafnitz and Pinka, was chosen because an increase in air temperature of ca. 2–2.5 °C is predicted for this area by 2040. Moreover, climate change effects combined with a rising number of rivers with little or no riparian vegetation will lead to an increase in water temperature. It can be assumed that climate change effects will exacerbate ecological consequences by impacting water temperature and also hydrology (e.g., increasing the incidence and duration of low-flow periods).
The river Lafnitz still exhibits many hydrologically and morphologically intact river sections with near-natural riparian vegetation. By contrast, the river Pinka is impacted by river straightening and riparian vegetation loss. Due to the spatial proximity of these two rivers, the climatic conditions are comparable, but their different hydro-morphological settings make them suitable for analyses that distinguish the effect of riparian vegetation on the thermal regime as well as climate change impacts. Additionally, specific sites along the rivers Lafnitz and Pinka were analyzed according to elements influencing the biological quality of fish and benthic invertebrates, e.g., water temperature, riparian vegetation, and morphological (e.g., channelization, riverbed structure) characteristics.
The results of the time-series analysis clearly show the difference between the two rivers. In the upper and middle reaches, the mean July water temperature in the Pinka exceeds 15 °C, which sharply contrasts with a flatter gradient of lower temperatures in the water column of the river Lafnitz. One key reason is the lack of shading by riparian vegetation, which is generally missing along the Pinka. For both rivers, water temperature and the distributions of fish and benthic invertebrates are highly correlated along the longitudinal gradient. This underlines the strong influence of water temperature on the longitudinal distribution of aquatic organisms and highlights the importance of mitigating global climate change effects by shading. With increasing temperatures, shifts of the species associated with the cold- and warm-water biocenotic (fish) zones will be inevitable, forcing cold-water species to move to higher altitudes, if river connectivity allows.
In more natural river sections with fewer human pressures, the summer water temperature difference between shaded and unshaded biocenotic zones is about 2–3 °C. As temperature increases, other river characteristics such as river dimension, flow, and substrate composition, but also migration barriers, might prove to be limiting factors leading to relatively unpredictable changes in the biotic assemblages. Riparian vegetation and shading could ameliorate such threats by damping maximum temperature peaks in hot periods by up to 2 °C. This is about the same range of temperature increase that has been predicted as an impact of climate change by 2050.
Global warming already has visible impacts on European freshwater ecosystems and the services they provide to humans. The main impacts are related to biodiversity, water quality, and health: environmental parameters set the boundary conditions for habitat availability, while human-induced constraints further reduce the opportunities for a dynamic, ever-changing ecosystem. The results clearly demonstrate that efficient river restoration and mitigation require the reestablishment of riparian vegetation as well as an open river continuum and hydro-morphological improvement of habitats (Melcher et al. 2016).
5 Conclusions, Open Questions, and Outlook
Rivers have experienced centuries of human-induced modifications (Hohensinner et al. 2011). While climate change may already impact riverine ecosystems, in the future it is much more likely that human-induced modifications will clearly and unequivocally be accompanied by climate change effects. Consequently, the challenge of how to preserve the status quo or to get back to a more pristine status will become more difficult as fundamental ecosystem processes, such as the thermal regime, shift. From an applied perspective, climate change has the potential to undermine many existing freshwater biomonitoring schemes, which focus mostly on human pressures like organic pollution or hydro-morphological alterations with little consideration for the increasing influence of climatic effects. Thus, how we currently assess “ecological status” could become increasingly obsolete over time, as the environmental conditions drift away from assumed earlier (and cooler) reference conditions (Woodward et al. 2010) and causal relationships underpinning ecological processes realign. Even so, we may assume that sustaining and restoring habitat heterogeneity and connectivity will continue to enhance ecosystem resilience, but it may be increasingly difficult to know how much or how fast. Long-term monitoring, which is currently lacking for biological quality elements in rivers, is essential to observe climate-induced changes. However, improving the research focus of monitoring programs to directly address uncertainties raised by climate change should make data available that will better inform future management decisions. Tracking data over the long term will provide the baseline trajectories against which scenarios of simulated management policies can be compared. While surprise from climate change is inevitable, testing policy simulations against real data will make it more feasible to project the consequences of river policies over longer time periods and to identify and respond to emerging trends in changing conditions.
References
Bässler C, Müller J, Hothorn T, Kneib T, Badeck F, Dziock F (2010) Estimation of the extinction risk for high-montane species as a consequence of global warming and assessment of their suitability as cross-taxon indicators. Ecol Indic 10:341–352
Bates BC, Kundzewicz ZW, Wu S, Palutikof JP (eds) (2008) Climate change and water. Technical paper of the intergovernmental panel on climate change. IPCC Secretariat, Geneva, p 210
Bond RM, Stubblefield AP, Van Kirk RW (2015) Sensitivity of summer stream temperatures to climate variability and riparian reforestation strategies. J Hydrol 4:267–279
Brett JR (1956) Some principles in the thermal requirements of fishes. Q Rev Biol 31:75–87
Caissie D (2006) The thermal regime of rivers: a review. Freshw Biol 51:1389–1406
Comte L, Grenouillet G (2015) Distribution shifts of freshwater fish under a variable climate: comparing climatic, bioclimatic and biotic velocities. Divers Distrib 21:1014–1026
Comte L, Buisson L, Daufresne M, Grenouillet G (2013) Climate-induced changes in the distribution of freshwater fish: observed and predicted trends. Freshw Biol 58:625–639
Dirnböck T, Essl F, Rabitsch W (2011) Disproportional risk for habitat loss of high-altitude endemic species under climate change. Glob Chang Biol 17:990–996
Döll P (2009) Vulnerability to the impact of climate change on renewable groundwater resources: a global-scale assessment. Environ Res Lett 4:1–13
Dudgeon D, Arthington A, Gessner M, Kawabata Z-I, Knowler D, Lévêque C, Naiman R, Prieur-Richard A-H, Soto D, Stiassny M, Sullivan C (2006) Freshwater biodiversity: importance, threats, status and conservation challenges. Biol Rev Camb Philos Soc 81:163–182
EEA (2012) Climate change, impacts and vulnerability in Europe 2012: an indicator-based report
Fenoglio S, Bo T, Cucco M, Mercalli L, Malacarne G (2010) Effects of global climate change on freshwater biota: a review with special emphasis on the Italian situation. Ital J Zool 77:374–383
Filipe AF, Markovic D, Pletterbauer F, Tisseuil C, De Wever A, Schmutz S, Bonada N, Freyhof J (2013) Forecasting fish distribution along stream networks: brown trout (Salmo Trutta) in Europe. Divers Distrib 19:1059–1071
Fry FEJ (1947) Effects of the environment on animal activity. Publications of the Ontario Fisheries Research Laboratory 55:1–62
Gibson-Reinemer DK, Rahel FJ (2015) Inconsistent range shifts within species highlight idiosyncratic responses to climate warming. PLoS One 10:1–15
Hari RE, Livingstone DM, Siber R, Burkhardt-Holm P, Guttinger H (2006) Consequences of climatic change for water temperature and brown trout populations in alpine rivers and streams. Glob Chang Biol 12:10–26
Hartmann DL, Tank AMGK, Rusticucci M (2013) IPCC Fifth assessment report, climate change 2013: The physical science basis IPCC, AR5
Hering D, Schmidt-Kloiber A, Murphy J, Lücke S, Zamora-Muñoz C, López-Rodríguez MJ, Huber T, Graf W (2009) Potential impact of climate change on aquatic insects: a sensitivity analysis for European caddisflies (Trichoptera) based on distribution patterns and ecological preferences. Aquat Sci 71:3–14
Hohensinner S, Jungwirth M, Muhar S, Schmutz S (2011) Spatio-temporal habitat dynamics in a changing Danube River landscape 1812-2006. River Res Appl 27:939–955
Huss M (2011) Present and future contribution of glacier storage change to runoff from macroscale drainage basins in Europe. Water Resour Res 47:1–14
Isaak DJ, Rieman BE (2013) Stream isotherm shifts from climate change and implications for distributions of ectothermic organisms. Glob Chang Biol 3:742–751
Isaak DJ, Young MK, Nagel DE, Horan DL, Groce MC (2015) The cold-water climate shield: delineating refugia for preserving salmonid fishes through the 21st century. Glob Chang Biol 21:2540–2553
Jacobsen D, Milner AM, Brown LE, Dangles O (2012) Biodiversity under threat in glacier-fed river systems. Nat Clim Chang 2:361–364
Kaushal SS, Likens GE, Jaworski NA, Pace ML, Sides AM, Seekell D, Belt KT, Secor DH, Wingate RL (2010) Rising stream and river temperatures in the United States. Front Ecol Environ 8(9):461–466
Magnuson JJ, Crowder LB, Medvick PA (1979) Temperature as an ecological resource. Am Nat 19:331–343
Magnuson JJ, Webster KE, Assel RA, Bowser CJ, Dillon PJ, Eaton JG, Evans HE, Fee EJ, Hall RI, Mortsch LR, Schindler DW, Quinn FH (1997) Potential effects of climate changes on aquatic systems: laurentian great lakes and precambrian shield area. Hydrol Process 11:825–871
Markovic D, Scharfenberger U, Schmutz S, Pletterbauer F, Wolter C (2013) Variability and alterations of water temperatures across the Elbe and Danube River basins. Clim Chang 119:375–389
McCullough DA, Bartholow JM, Jager HI, Beschta RL, Cheslak EF, Deas ML, Ebersole JL, Foott JS, Johnson SL, Marine KR, Mesa MG, Petersen JH, Souchon Y, Tiffan KF, Wurtsbaugh WA (2009) Research in thermal biology: burning questions for Coldwater stream fishes. Rev Fish Sci 17:90–115
Melcher A, Dossi F, Graf W, Pletterbauer F, Schaufler K, Kalny G, Rauch HP, Formayer H, Trimmel H, Weihs P (2016) Der Einfluss der Ufervegetation auf die Wassertemperatur unter gewässertypspezifischer Berücksichtigung von Fischen und benthischen Evertebraten am Beispiel von Lafnitz und Pinka. Österreichische Wasser- und Abfallwirtschaft 68:308–323
Nelson GC, Bennett E, Berhe AA, Cassman K, DeFries R, Dietz T, Dobermann A, Dobson A, Janetos A, Levy M, Marco D, Nakicenovic N, O’Neill B, Norgaard R, Petschel-Held G, Ojima D, Pingali P, Watson R, Zurek M (2006) Anthropogenic drivers of ecosystem change: an overview. Ecol Soc 11:29
Ormerod SJ (2009) Climate change, river conservation and the adaptation challenge. Aquat Conserv Mar Freshw Ecosyst 19:609–613
Orr HG, Simpson GL, des clers S, Watts G, Hughes M, Hannaford J, Dunbar MJ, Laizé CLR, Wilby RL, Battarbee RW, Evans R (2014) Detecting changing river temperatures in England and Wales. Hydrol Process 766:752–766
Ott J (2010) The big trek northwards: recent changes in the European dragonfly fauna. In: Settele J, Penev L, Georgiev T, Grabaum R, Grobelnik V, Hammen V, Klotz S, Kotarac M, Kühn I (eds) Atlas of biodiversity risk. Pensoft, Sofia, p 280
Palmer MA, Reidy Liermann CA, Nilsson C, Flörke M, Alcamo J, Lake PS, Bond N (2008) Climate change and the world’s river basins: anticipating management options. Front Ecol Environ 6:81–89
Pletterbauer F, Melcher AH, Ferreira T, Schmutz S (2015) Impact of climate change on the structure of fish assemblages in European rivers. Hydrobiologia 744:235–254
Poff NL, Zimmerman JKH (2010) Ecological responses to altered flow regimes: a literature review to inform the science and management of environmental flows. Freshw Biol 55:194–205
Poff NL, Allan JD, Bain MB, Karr JR, Prestegaard KL, Richter BD, Sparks RE, Stromberg JC (1997) The natural flow regime: a paradigm for river conservation and restoration. Bioscience 47:769–784
Rabitsch W, Graf W, Huemer P, Kahlen M, Komposch C, Paill W, Reischütz A, Reischütz PL, Moser D, Essl F (2016) Biogeography and ecology of endemic invertebrate species in Austria: a cross-taxa analysis. Basic Appl Ecol 17(2):95–105
Rahel FJ, Bierwagen B, Taniguchi Y (2008) Assessing the effects of climate change on aquatic invasive species. Conserv Biol 22:521–533
Reyjol Y, Hugueny B, Pont D, Bianco PG, Beier U, Caiola N, Casals F, Cowx IG, Economou A, Ferreira MT, Haidvogl G, Noble R, de Sostoa A, Vigneron T, Virbickas T (2007) Patterns in species richness and endemism of European freshwater fish. Glob Ecol Biogeogr 16:65–75
Richardson DM, Holmes PM, Esler KJ, Galatowitsch SM, Stromberg JC, Kirkman SP, Pysek P, Hobbs RJ (2007) Riparian vegetation: degradation, alien plant invasions, and restoration prospects. Divers Distrib 13:126–139
Sauer J, Domisch S, Nowak C, Haase P (2011) Low mountain ranges: summit traps for montane freshwater species under climate change. Biodivers Conserv 20:3133–3146
Schmutz S, Kaufmann M, Vogel B, Jungwirth M (2000) Methodische Grundlagen und Beispiele zur Bewertung der fischökologischen Funktionsfähigkeit Österreichischer Fließgewässer
Seavy NE, Gardali T, Golet GH, Griggs FT, Howell CA, Kelsey R, Small SL, Viers JH, Weigand JF (2009) Why climate change makes riparian restoration more important than ever: recommendations for practice and research. Ecol Restor 27:330–338
Tabacchi E, Correll DL, Hauer R, Pinay G, Planty-Tabacchi AM, Wissmar RC (1998) Development, maintenance and role of riparian vegetation in the river landscape. Freshw Biol 40:497–516
Tang Q, Lettenmaier DP (2012) 21st century runoff sensitivities of major Global River basins. Geophys Res Lett 39:1–5
Townsend CR, Hildrew AG (1994) Species traits in relation to a habitat template for river systems. Freshw Biol 31:265–275
Van Vliet MTH, Ludwig F, Zwolsman JJG, Weedon GP, Kabat P (2011) Global river temperatures and sensitivity to atmospheric warming and changes in river flow. Water Resour Res 47:W02544
Vitecek S, Graf W, Previšić A, Kučinić M, Oláh J, Bálint M, Keresztes L, Pauls SU, Waringer J (2015a) A hairy case: the evolution of filtering carnivorous Drusinae (Limnephilidae, Trichoptera). Mol Phylogenet Evol 93:249–260
Vitecek S, Kučinić M, Oláh J, Previšić A, Bálint M, Keresztes L, Waringer J, Pauls SU, Graf W (2015b) Description of two new filtering carnivore Drusus species (Limnephilidae, Drusinae) from the Western Balkans. ZooKeys 513:79–104
Webb BWW, Nobilis F (1995) Long term water temperature trends in Austrian rivers/Tendance a long terme de la temperature des cours d’eau autrichiens. Hydrol Sci J 40:83–96
Woodward G, Perkins DM, Brown LE (2010) Climate change and freshwater ecosystems: impacts across multiple levels of organization. Philos Trans R Soc Lond B Biol Sci 365:2093–2106
© 2018 The Author(s)
Pletterbauer, F., Melcher, A., Graf, W. (2018). Climate Change Impacts in Riverine Ecosystems. In: Schmutz, S., Sendzimir, J. (eds) Riverine Ecosystem Management. Aquatic Ecology Series, vol 8. Springer, Cham. https://doi.org/10.1007/978-3-319-73250-3_11
https://link.springer.com/chapter/10.1007/978-3-319-73250-3_11
Background
The ratio of habitat generalists to specialists in birds has been suggested as a good indicator of ecosystem changes due to e.g. climate change and other anthropogenic perturbations. Most studies focusing on this functional component of biodiversity originate, however, from temperate regions. The Eurasian Arctic tundra is currently experiencing an unprecedented combination of climate change, change in grazing pressure by domestic reindeer and growing human activity.
Methodology/Principal Findings
Here we monitored bird communities in a tundra landscape harbouring shrub and open habitats in order to analyse bird habitat relationships and quantify habitat specialization. We used ordination methods to analyse habitat associations and estimated the proportions of specialists in each of the main habitats. Correspondence Analysis identified three main bird communities, inhabiting upland, lowland and dense willow shrubs. We documented a stable structure of communities despite large multiannual variations of bird density (from 90 to 175 pairs/km2). Willow shrub thickets were a hotspot for bird density, but not for species richness. The thickets hosted many specialized species whose main distribution area was south of the tundra.
Conclusion/Significance
If current arctic changes result in a shrubification of the landscape as many studies suggested, we would expect an increase in the overall bird abundance together with an increase of local specialists, since they are associated with willow thickets. The majority of these species have a southern origin and their increase in abundance would represent a strengthening of the boreal component in the southern tundra, perhaps at the expense of species typical of the subarctic zone, which appear to be generalists within this zone.
Citation: Sokolov V, Ehrich D, Yoccoz NG, Sokolov A, Lecomte N (2012) Bird Communities of the Arctic Shrub Tundra of Yamal: Habitat Specialists and Generalists. PLoS ONE 7(12): e50335. https://doi.org/10.1371/journal.pone.0050335
Editor: Katherine Renton, Universidad Nacional Autonoma de Mexico, Mexico
Received: June 18, 2012; Accepted: October 17, 2012; Published: December 11, 2012
Copyright: © 2012 Sokolov et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was funded by the Russian Academy of Sciences through the project no.12-Π-4-1043 and 12-4-7-022-Arctic (Ural Branch), the Russian Foundation for Basic Research support (grant no. 11-04-01153-a), the Research Council of Norway through the Yggdrasil program (project no. 195738/V11 to V.S.), University of Tromsø, and the IPY project “Arctic Predators” (http://www.arctic-predators.uit.no), as well as a postdoctoral fellowship from Natural Sciences and Engineering Research Council of Canada to N.L. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Bird communities and populations are monitored in many parts of the world, as they often are the focus of considerable interest from the general public. Their abundance and distributions are considered effective indicators of changes in biodiversity, habitat quality and availability. Bird communities are indeed strongly related to habitat characteristics, and this habitat specificity is an important component in explaining and predicting the response of bird communities to environmental changes. In particular, habitat specialists seem to be more negatively affected by environmental changes as they are declining in many areas of the world, whereas generalists are often increasing. A likely consequence is a gradual homogenization of biodiversity, a phenomenon that may not be apparent if one focuses on e.g. species richness as an index of diversity. Including functional components of diversity such as degree of specialization has therefore been stressed in recent studies of impacts of global changes.
Most studies investigating the response of bird communities to environmental changes have been carried out in temperate regions, and little is known on how the identified trends – homogenization of biodiversity and decline of specialists in favour of increased abundance of generalists – can be translated to Arctic ecosystems. However, in the Arctic, a region usually considered as relatively pristine, the environment is now changing through a combination of climate change and increased human activity. Shrubs are expanding in the southern tundra as a consequence of climate warming. Intense grazing by reindeer/caribou (Rangifer tarandus) can, however, limit shrub expansion and may even lead to a decrease of willow shrub cover when densities are particularly high. Grazing-induced loss of shrubs has been shown to strongly reduce bird species richness in northern Norway. In addition, increased human activity in tundra areas, related notably to oil/gas exploitation, leads to increased disturbance, habitat fragmentation and erosion of some key tundra habitats. How these habitat changes affect the different components of tundra ecosystems is, however, still unclear. Understanding bird habitat associations will improve our understanding of the likely impacts of different components of global change on these communities, but these associations and their spatio-temporal variation are, as far as we know, very poorly known, with only a handful of studies done in the low Arctic.
In this paper, we investigate habitat associations and the degree of habitat specialization of bird communities in the shrub tundra of the south-western Yamal peninsula, Russia. This region is experiencing both rapid development of the oil and gas industry and growth of reindeer herds. Climate change and associated geomorphologic processes such as permafrost melting have an increasing impact on ecosystem processes. Bird communities on Yamal peninsula and species habitat preferences had already been described by Zhitkov and Sdobnikov. Uspenskyi highlighted some biogeographic aspects, and Danilov et al. characterized the bird communities typical for different latitudinal zones, together with their associated landscape elements. These works were mostly faunistic, however, and no quantitative, multi-annual study of habitat associations and their stability over time, i.e. specialization, exists.
Here we present results of a systematic bird survey carried out over eight years (2002–2009) in five habitat types of the shrub tundra in southern Yamal, habitats that are typical for vast areas in the southern Eurasian Arctic. We first use multivariate statistics to analyse associations of bird species to habitats and identify habitat-specific communities. Second, we quantify the specialization of bird communities in the main habitats. In particular, we address the importance of willow (Salix spp.) thickets for the bird communities, as their extent in the shrub tundra is likely to change under the influence of climate change, erosion and/or intense browsing. Willow thickets have been described as hotspots of productivity and biodiversity in general, with a positive effect on bird species richness in particular. Higher bird densities and higher species richness would thus be expected in this habitat. As willow thickets are a habitat component with characteristics from more southern climatic zones such as the forest tundra, one could in addition expect that the thickets harbor a higher number of species whose main distribution area is south of the tundra. Here we focus on the most common species in the area, mostly songbirds.
Materials and Methods
Ethics statement
The study was conducted in the frame of ecosystem monitoring carried out by the Ecological Research Station of the Institute of Plant & Animal Ecology, Ural Division Russian Academy of Sciences, and part of the approved science plan of this institution. Permissions for field work were obtained from the Department of Bioresource of the Government of Yamalo-Nenetsky Autonomous Okrug (the administrative region where the study was carried out). As this was a purely observational study, no specific permits were needed.
Study area and habitat classification
Data were collected during a long-term study of birds at the Erkuta tundra monitoring site, situated close to Erkutayakha River in southwest Yamal (68°13′N 69°09′E), Russia (Figure 1). Mean temperature in this area is −25.7°C in January and 8.6°C in July (World Meteorological Organisation). Daily average temperatures become positive in the first decade of June and negative again around the first week of October. On average, precipitation is about 350 mm per year and falls mainly as rain in summer. A stable snow cover is usually established in early October and lasts until early June.
Figure 1. A) the study area divided into four study plots with the extent of the five different habitat types, B) the location of Erkuta tundra monitoring site in southern Yamal and C) the location of Yamal in the Eurasian Arctic with the five arctic bioclimatic subzones as used by Walker et al. (2005).
The study area is situated in a flat tundra landscape interspersed with hills (ca. 30 m) and river cliffs (up to 40 m high). A dense network of rivers, streams and lakes creates wide lowlands with large areas being flooded in spring. The area is at the border between two vegetation zones: erect dwarf-shrub and low-shrub tundra. Low shrub tundra is more common in the area than the drier, lichen-rich erect dwarf-shrub tundra. Plant cover is typically continuous (80–100%), but may be sparse (5–50%) on dry ridges. Dense thickets composed of willows and in some places alder (Alnus fruticosa) occur along streams and lakes.
The bird survey was carried out in an area covering 3.2 km2. After an initial survey of a larger area of about 100 km2, the study area was chosen because 1) it contained all major landscape elements typical for the region, elements which are also characteristic of the southern tundra in Russia in general (Walker et al. 2005), and 2) the size of the area was small enough to carry out the survey several times per season and by the same observer. The area was divided into four plots of about 0.8 km2 each, delimited mostly by landscape elements such as rivers or lakes, to assess the local variation in bird communities (Figure 1). The plots were divided into five habitats according to the landscape elements and vegetation types (Table 1). The main landscape elements in the area include “uplands”, which consist of flat tundra on hills and their slopes, and “lowlands”, which are usually flooded in spring. Based on vegetation types as mapped by S. N. Ektova in 2004, the distribution of bushes (Salix spp, Betula nana) and smaller shrubs (Empetrum nigrum, Ledum palustris), as well as moisture, we distinguished two upland habitats: upland open tundra (UOT) and upland shrub tundra (UST; see Figure S1 for pictures of the habitats). Lowland habitats were divided into lowland shrub tundra (LST) and lowland marshes (LM). As shrubs in general and willow thickets in particular are important structural elements and highly productive patches in the tundra ecosystem, the dense willow thickets (up to 2.5 m) growing along rivers and in flooded areas were classified as a distinct habitat type occurring on lowlands (WT). These five habitats comprise different structural elements determining breeding habitat and differ in resource availability. Several habitat types were found on each of the four plots, but usually not all five. In total, the area comprised 14 habitat x plot units (Figure 1).
Bird surveys
Birds were surveyed during the breeding season, from mid-June to mid-July in 2002–2009, using the spot mapping method. The eight-year survey covered the large year-to-year variation in phenology, weather and small rodent abundance. Each plot was surveyed by walking back and forth at a slow pace along tracks 100 m apart, recording all alarming or singing birds, at least four times in each breeding season by the same observer (V. A. Sokolov). A distance of 100 m between tracks was chosen because up to a distance of 50 m it is possible to observe and identify birds with good confidence in open habitat. At the same time, given the average territory sizes of birds in the region (Ryabitsev 1993) and typical densities (Methods S1; Figure S2), this distance minimizes the chances for double counting. Tracks were also always placed along thicket edges, allowing for good coverage of this habitat with less visibility and higher densities (see Methods S1 for more details). Limits between plots were located with a handheld Global Positioning System unit (GPS; Garmin eTrex, accuracy 5 meters). The location of each bird was recorded by GPS and each observation was subsequently plotted on a topographic map.
All male/female pairs were noted and recorded as breeding pairs. An alarming or singing male was assumed to represent a breeding pair within the plot, but was recorded as a pair only if it was observed more than once at the same place during the season. For some species, we used additional methods to determine the number of breeding pairs: nest searching for abundant species (Red-throated Pipit, Lapland Bunting; Latin names for all species are given in Table 2) and point counts for some shrub species (e.g. Willow Warbler, Redwing, see also Methods S1). All plots were surveyed in approximately the same weather conditions and mostly in the early morning (from 4 to 9 AM) and evening (from 5 to 8 PM), thus at times when the activity of the birds is likely to be high. Surveys were not conducted during periods of rain, strong winds, or restricted visibility (e.g. fog). Densities of each species were calculated as the number of breeding pairs per km2.
Data analyses
Counts of the most common and noticeable species (28 species, Table 2) on the four study plots were analysed in order to characterize bird-habitat relationships. Community composition was examined using Correspondence Analysis (CA) and its extensions. These ordination analyses allow for a comparison of the relative abundance of species within a community and we therefore did not correct for the surface covered by the different habitats. Analysing the whole data set, we assessed how much of the overall variation was due to differences among habitats (5 habitats), plots (4 plots) and years (8 years), using Canonical Correspondence Analysis (CCA). The percentage of variation explained is based on comparing eigenvalues obtained from the unconstrained ordination (CA) and the constrained CCA.
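As a rough illustration of this last step, the share of variation attributable to a covariate can be expressed as the summed constrained (CCA) eigenvalues relative to the summed unconstrained (CA) eigenvalues. The sketch below assumes the eigenvalues have already been extracted from the ordination software (the authors used ade4 in R); this is one common way of making the comparison, not necessarily the exact computation used in the paper, and the numbers are purely illustrative.

```python
import numpy as np

def pct_variation_explained(ca_eigenvalues, cca_eigenvalues):
    """Percentage of the total inertia (sum of the unconstrained CA eigenvalues)
    captured by the axes of a CCA constrained on one covariate."""
    return 100.0 * np.sum(cca_eigenvalues) / np.sum(ca_eigenvalues)

# toy eigenvalues only, not the study's values
print(pct_variation_explained([0.40, 0.20, 0.10, 0.05], [0.22, 0.08]))
```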
To determine the proportion of specialists in each habitat, an index of habitat specialization (SSI) was calculated for each species following Julliard et al. Specialization was quantified as the coefficient of variation of the average densities of a species in each of the five habitats. As sample size was small for some species, we assessed the bias correction suggested by Devictor et al. This correction was based on two approximations: a Poisson distribution within each habitat class and assuming identical habitat frequencies. However, the distribution within habitat classes is in fact multinomial when conditioning on total number of observed birds, and unequal habitat frequencies as in our study will increase the variance among habitat classes. We therefore calculated the bias by simulating samples from a multinomial distribution with frequencies based on our study area, and deriving the expected SSI for a perfect generalist (Devictor et al. 2008b). The observed SSI values were then corrected by the estimated bias.
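A minimal sketch of this calculation, assuming a vector of mean densities per habitat for one species and study-area habitat frequencies, might look like the following. All numbers are illustrative, not the study's data, and the final subtraction reflects one reading of "corrected by the estimated bias" (removing the SSI expected for a perfect generalist), which may differ from the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def ssi(mean_densities):
    """Species specialization index: coefficient of variation (SD / mean)
    of a species' average density across the habitat classes."""
    d = np.asarray(mean_densities, dtype=float)
    return d.std() / d.mean()

def generalist_bias(n_birds, habitat_freq, n_sim=10_000):
    """Expected SSI of a perfect generalist observed n_birds times when
    habitat availability is given by habitat_freq (must sum to 1)."""
    counts = rng.multinomial(n_birds, habitat_freq, size=n_sim)
    dens = counts / habitat_freq          # counts scaled by habitat availability
    return float(np.mean(dens.std(axis=1) / dens.mean(axis=1)))

# purely illustrative numbers for one species and five habitats
habitat_freq = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
observed = ssi([2.0, 3.5, 6.0, 5.0, 20.0])
corrected = observed - generalist_bias(n_birds=60, habitat_freq=habitat_freq)
```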
A community specialization index (CSI) was estimated for the birds in each habitat on each plot (14 habitat x plot units). CSI was calculated as the average SSI of the individuals counted in that habitat/plot over the years of the survey. The species were further classified according to their distribution type as subarctic, southern, or widespread (Table 2; Danilov 1966). Subarctic species are species that have evolved in the subarctic, whereas southern species are mainly distributed south of the tundra, but extend into the southern part of the Arctic. Widespread species have a distribution encompassing several bioclimatic zones (e.g. ruff, wheatear or pintail). For each habitat/plot, we calculated the proportion of individual birds belonging to each distribution type among the birds counted in that habitat/plot to assess whether birds with a particular distribution favoured specific habitats.
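The CSI for one habitat x plot unit then reduces to an abundance-weighted mean of the species-level SSI values. A small sketch, with illustrative counts and SSI values only:

```python
import numpy as np

def csi(counts_per_species, ssi_per_species):
    """Community specialization index for one habitat x plot unit:
    the mean SSI of the individuals counted there, i.e. an
    abundance-weighted average of the species-level SSI values."""
    counts = np.asarray(counts_per_species, dtype=float)
    ssi_vals = np.asarray(ssi_per_species, dtype=float)
    return float(np.sum(counts * ssi_vals) / np.sum(counts))

# illustrative: four species counted in one habitat/plot and their SSI values
print(csi([12, 5, 3, 1], [0.25, 0.57, 1.10, 1.80]))
```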
Average species richness for each habitat was estimated by applying the first-order jackknife estimator to the bird counts in each habitat in each year, using plots as replicates. All statistical analyses were conducted using the open-source software R, version 2.11, with the libraries ade4 for multivariate analysis and vegan for species richness estimation.
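For reference, the first-order jackknife adds to the observed richness a term based on the number of species recorded in exactly one replicate. A sketch of the standard formula, assuming a plots x species presence/absence matrix (the authors obtained their estimates with vegan in R, not with this code):

```python
import numpy as np

def jackknife1_richness(presence):
    """First-order jackknife estimate of species richness from a
    plots x species presence/absence matrix (plots = replicates):
    S_jack1 = S_obs + f1 * (m - 1) / m, where f1 is the number of
    species recorded in exactly one plot."""
    p = np.asarray(presence, dtype=bool)
    m = p.shape[0]                                   # number of replicate plots
    s_obs = int(np.count_nonzero(p.any(axis=0)))     # species seen at least once
    f1 = int(np.count_nonzero(p.sum(axis=0) == 1))   # species seen in one plot only
    return s_obs + f1 * (m - 1) / m

# illustrative: 4 plots x 6 species
obs = [[1, 1, 0, 0, 1, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [1, 0, 0, 1, 0, 0]]
print(jackknife1_richness(obs))
```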
Results
Abundance
The overall density of breeding birds in the study area fluctuated from 90 to 175 pairs per km2 over the years (average 108.7±10.4 (SE) pairs/km2). In total, 41 species were recorded as breeders. Several species occurred, however, at very low densities or were recorded only once. These were the following (alphabetical order): Rough-legged Buzzard Buteo lagopus, Black-throated Diver Gavia arctica, Red-throated Diver Gavia stellata, Long-tailed Duck Clangula hyemalis, Bean Goose Anser fabalis, Greater White-fronted Goose Anser albifrons, Heuglin's Gull Larus heuglini, Pintail Anas acuta, Greater Scaup Aythya marila, Common Scoter Melanitta nigra, Arctic Skua Stercorarius parasiticus, Common Teal Anas crecca, and Wigeon Anas penelope.
In the following, we analysed the 28 most common species (Table 2 with Latin name of the species mentioned below). The Red-throated Pipit was the most abundant species on the study plots, with an average of 19.4 pairs/km2 (Table 2). It was almost twice as abundant as the next most common species, the Lapland Bunting (12.5 pairs/km2 on average). Little Bunting, Redpoll, Willow Warbler, Bluethroat, Redwing, Citrine Wagtail, Meadow Pipit and Chiffchaff were common, but less abundant. Among waders, the most common species were Temminck's Stint, Wood Sandpiper as well as Common Snipe, with densities ranging from 4.5 to 5.4 pairs/km2 (Table 2). Willow Grouse was also rather common. Other birds had densities less than 3 pairs/km2, and several species, such as Golden Plovers, Ringed Plovers or Shore Larks were not recorded every year.
Variation in bird community compositions
The first two axes of the CA clearly represented much larger components of variation than the following axes (Axis 1 and 2: 23% and 13% of variation respectively, all other axes <6%). These first two axes reflected the difference in bird composition among three types of habitats, the upland tundra (UOT and UST), the lowland flooded tundra (LST and LM), and willow thickets (WT; Figure 2). Despite considerable fluctuations of overall population density from year to year, fluctuations of community composition among the years were small compared to variation among habitats and plots. Compared to the first two unconstrained eigenvalues of CA (0.36 and 0.22), the first two eigenvalues of a constrained CA with year as categorical covariate were both 0.02, with habitat as a covariate 0.31 and 0.11, and with plot as a covariate 0.17 and 0.04.
Figure 2. Species scores (upper left), variation between years (upper right), habitats (bottom left) and plots (bottom right). Ellipses describe the variability within habitats and plots and have an approximate 67% confidence level. Habitat and species codes are given in Tables 1 and 2, respectively.
We also investigated whether an interaction between year and habitat could explain some additional variation, but this was not the case (first two eigenvalues of CA with the interaction habitat*year as a covariate: 0.33, 0.13, compared to 0.31 and 0.11 with habitat only). The species composition overlapped broadly between the four plots, and differences were largely confounded with the different habitats present on each plot. Generalist species, characterized by a large variance along the first two CA axes, were Wood Sandpiper, Temminck's Stint, Red-throated Pipit, and Lapland Bunting (Figure 3). These species were found in nearly all habitat types almost every year. Species specializing on upland tundra as their main habitat were Golden Plover, Ringed Plover, Arctic Tern, Shore Lark, Meadow Pipit and Northern Wheatear. The flooded lowland areas (LST and LM) were preferred by Willow Grouse, Red-necked Phalarope, Ruff, Jack Snipe, Common Snipe, Pechora Pipit, Yellow Wagtail and Citrine Wagtail. Sedge Warbler, Willow Warbler, Chiffchaff, Arctic Warbler, Bluethroat, Redwing, Redpoll, Reed Bunting and Little Bunting exhibited a clear affinity to willow thickets as habitat (Figure 3).
Figure 3. Correspondence analysis is used to define the coordinates of each plot-habitat-year observation, and different colours indicate different habitats (orange – upland open tundra, red – upland shrub tundra, brown – lowland shrub tundra, blue – lowland marshes and green – willow thickets). The size of the circles is proportional to the abundance of the given species in the respective plot-habitat-year.
Variation in specialization among habitats
The gradient from generalists to specialists was well described by SSI (Table 2). The Red-throated Pipit, the most common and widespread species in the area, had the lowest value (0.25). The Temminck's Stint and Wood Sandpiper had low values as well (0.57 for both). The most specialized species were Willow Warbler, Redwing, Redpoll, Chiffchaff, and Little Bunting (in decreasing order of specialization; Table 2). Birds representing specialized species were on average most numerous in willow thickets, which had the highest CSI. CSI was, in contrast, lowest for the two upland habitats (UOT and UST) and intermediate for the open lowland areas (LST and LM; Figure 4). The proportion of species with a southern distribution showed a similar pattern as CSI (Figure 4). Willow thickets harboured most southern species and had the highest CSI, indicating that species with a southern distribution are habitat specialists in the shrub tundra preferring willow thickets. The proportion of subarctic species was inversely proportional to that of southern species, and most subarctic species occurred in the upland habitats (UOT and UST). Species with a wide distribution range represented only a small proportion of the species in all habitats (Figure 4).
Figure 4. UOT – upland open tundra, UST – upland shrub tundra, LST – lowland shrub tundra, LM – lowland marshes, WT – willow thickets. A) Community specialization index, B) proportion of birds belonging to species with a southern distribution, C) proportion of birds belonging to species with a subarctic distribution and D) proportion of birds belonging to widespread species.
Among the five habitats, the density of breeding birds was clearly highest in willow thickets (Table 3). Estimates of species richness did not change much over the years; therefore the values were averaged across years to get an overall species richness estimate for each habitat. Species richness was not higher in willow thickets than in the two other lowland habitats LST and LM (Table 3). Both density and species richness were lowest in UOT.
Discussion
As far as we know, relatively few studies have been published on bird communities in the shrub tundra zone, and this is particularly true for passerines. Moreover, because of logistical constraints, previous studies often represented snapshots of one or two years. Little is therefore known about the temporal variability of abundance in these bird communities, even though there are indications that it can be high. By surveying the same plots over 8 years (2002–2009), characterized by large fluctuations in abiotic (e.g. snowmelt) and biotic conditions (e.g., small rodent densities) typical for the Arctic (International Waders Study Group 2008), we could analyse fluctuations in composition, abundance, and species richness. We acknowledge that our estimates of population densities can be affected by e.g. variation in detectability and double counts, but these issues are unlikely to impact our conclusions regarding species diversity and community specialization as we used robust estimators.
Compared to studies from the North American Arctic, the number of species registered in the present study was high, particularly so for small passerines. Eighteen small passerine species were breeding at Erkuta, compared to values ranging from eight to nine in a comparable biogeographic zone in eastern Canada. Jackson and Robertson found 14 passerine species in their “oceanic heath/stony ground” zone, which covers the low Arctic zone of northern Norway. The small number of studies prevents any generalization, but one can speculate that low Arctic bird communities in Eurasia are richer than in North America.
Passerine communities at Erkuta differed from North American ones by the occurrence of pipits, wagtails, warblers, as well as one thrush species. Specifically, the Red-throated Pipit was an abundant generalist at Erkuta, with two other pipit species being present, whereas the only study we could find from the Canadian Arctic with significant numbers of pipits was on Ungava peninsula, where the American Pipit (Anthus rubescens) is a willow specialist. In our study, wagtails and warblers were represented by six species, four being rather abundant, whereas Sammler et al. recorded only one species of warbler (of course, New World warblers do not belong to the same taxonomic group as Old World warblers, but we consider here the functional role these groups play in the tundra ecosystems). Other components of the community at Erkuta were more similar to passerine assemblages in North America. To the extent they can be compared functionally, buntings (little and reed) could replace the savannah and American tree sparrow that are characteristic of the low Arctic in Canada. Some species are found throughout the circumpolar Arctic such as the Lapland Bunting and Shore Lark. The Lapland Bunting is nearly always a dominant species and occurs in all vegetation types (this study) except in the high Arctic where Snow Bunting dominates. Horned Lark is also widespread but occurs at low densities throughout the Arctic and is more selective in the choice of habitat (open, often dry tundra). The passerine community at Erkuta was more similar to communities described in northern Norway, notably in the numerical dominance of pipits.
Yearly variation in abundance was large, but around values typical of the low Arctic; species composition was, however, stable and mostly determined by differences among habitats. Monitoring of community composition rather than abundance or species richness should therefore give more reliable indications of how environmental changes affect tundra environments. Although the size of our study area was rather small, the number and species of birds present were similar to surveys done within the same bioclimatic zone (see also Figure S3). Multivariate analyses distinguished mainly three species assemblages among the five studied habitats. The first assemblage occurred in upland tundra, the drier, open parts of the landscape. This community was characterized by species with a low degree of habitat specialization and by a high proportion of subarctic species. A second, distinct assemblage was found in willow thickets, with the highest specialization index and the most southern species. The third community, which occurred in the flooded lowland tundra (LST and LM), occupied an intermediate place between the upland tundra and the willow thickets on the first axis of the CA, but was distinct on the second axis. It was composed of both southern and subarctic species in about equal proportions, and harboured specialists as well as generalists.
Of all habitats, willow thickets had the highest densities of breeding birds. This high density may be related to the high productivity of willow thickets in terms of plant biomass, possibly resulting in high abundance of arthropods as food for breeding birds. Structurally, the thickets represent sheltered breeding sites both on and above the ground and elevated sites used for display by species like Bluethroats or warblers. Willow thickets had a high value for the specialization index, mainly because southern species in the tundra zone were restricted to willow thickets. However, this influx of southern species did not result in higher species richness as most typical subarctic species were mainly found in other habitats, either lowland or upland tundra.
Community specialization was lowest in the upland habitats, i.e. open and upland shrub tundra, characterized by a relatively simple vegetation structure compared to willow thickets. The dominant species in the upland habitats, such as Lapland Bunting and Red-throated Pipit, were typically species found also in other habitats. Although the community composition of the two upland habitats was similar, these habitats differed more from each other than the two lowland tundra habitats did (LST and LM; Figure 2). Upland open tundra was the habitat with the lowest density of breeding birds as well as the lowest specialization index.
The low Arctic zone in Yamal peninsula is expected to be impacted by three main drivers of change in the next decades: warming, reindeer herding, and oil or gas exploitation. The development of the latter is being most intense further north on Yamal peninsula and is not expected to affect our study area directly. Warming is expected to increase shrub cover, and therefore the extent of willow thickets, whereas reindeer grazing will have opposite effects. Reindeer herds on Yamal peninsula have increased considerably during the last 20 years, but it is unknown whether such densities will stay sufficiently high in the future to slow down significantly the expected increase in willow thickets. Assuming a scenario of willow thicket increase, we would expect increasing overall bird abundance, as well as an increase of specialists as defined at the scale of our study, since those species tend to dominate willow thickets. This would be the reverse pattern of what is observed for example in temperate areas, where there is both a decline of specialists and a decrease in bird specialization, with specialists becoming generalists. Part of the discrepancy may be due to scaling issues and the variation in habitats considered. Many species considered as specialists in our study area, a classification which is likely to be representative for a large part of the southern Arctic in Russia, would be generalists if boreal habitats were included (e.g. warblers, Little Bunting, Citrine Wagtail or Bluethroat).
This study is a first step to understand bird communities in the large bioclimatic zone of the southern Arctic tundra covering ca 800,000 km2 in Russia. The pattern we analyse here warrants studies at a larger scale (for example by including the taiga zone) to understand how our local-scale results may translate to a regional scale, as the degree of specialization may vary across the range of species. The main difference with studies done in the temperate zone is due to the fact that we expect major increases in the habitat harbouring the largest number of specialists – this is a different pattern from the changes observed in temperate areas, where habitats with a large number of specialists, specifically traditional farmland and to a lesser degree forests, have been under constant pressure. Furthermore, we do not expect thicket specialists to become more generalist for two reasons: they often have strict nesting requirements (e.g. Redwing), and the increase of generalist predators such as Red Fox (Vulpes vulpes) and Hooded Crow (Corvus cornix) is likely to prevent expansion to more open habitats. The suggested pattern of increase in local specialists is likely to concern also other groups of organisms in the Arctic, notably plants, where rare species are often limited to particular habitats and microclimates (hotspots), which may become more common with climate change (Elvebakk 2005).
Supporting Information
Methods S1.
Details about the survey method.
https://doi.org/10.1371/journal.pone.0050335.s001
(DOC)
Figure S1.
Summer pictures of the habitats monitored in our study, Erkuta, 2002–2009, Yamal, Russia.
https://doi.org/10.1371/journal.pone.0050335.s002
(TIF)
Figure S2.
Map of breeding pairs for two species breeding in open habitat (Lapland bunting) and closed habitats (willow thickets; little bunting). Each star represents the centre of a territory.
https://doi.org/10.1371/journal.pone.0050335.s003
(TIF)
Figure S3.
Bird communities have been described at several sites on the Yamal Peninsula by different authors. A) Map of the sites where communities were described. B) Result of a correspondence analysis which shows that the community at Erkuta was similar to those observed at sites located in the same biogeographic area, such as Hanovey and Yuribey (Sokolov et al. 2006).
https://doi.org/10.1371/journal.pone.0050335.s004
(TIF)
Acknowledgments
We are thankful to the Ecological Research Station of Institute of Plant and Animal Ecology, Russia, for the excellent field logistic and the University of Tromsø, Norway for the support during data analyses. We want to thank the many people that made fieldwork possible, especially the family of Takuchi Laptander and V. N. Sidorov. The manuscript benefited from previous reviews by L. Brotons, R.F. Rockwell, and five anonymous reviewers.
Author Contributions
Conceived and designed the experiments: VS DE NGY AS NL. Performed the experiments: VS AS NL. Analyzed the data: VS DE NGY NL. Contributed reagents/materials/analysis tools: VS DE NGY AS NL. Wrote the paper: VS DE NGY NL.
References
- 1. Gregory RD, van Strien A (2010) Wild bird indicators: using composite population trends of birds as measures of environmental health. Ornithological Science 9: 3–22.
- 2. Wiens J (1989) The ecology of bird communities. Vol. 1: Foundations and patterns. Cambridge: Cambridge University Press. 539 p.
- 3. Hausner VH, Yoccoz NG, Ims RA (2003) Selecting indicator traits for monitoring land use impacts: Birds in northern coastal birch forests. Ecological Applications 13: 999–1012.
- 4. Niemi GJ, McDonald ME (2004) Application of ecological indicators. Annual Review of Ecology Evolution and Systematics 35: 89–111.
- 5. Reif J, Storch D, Vorisek P, Stastny K, Bejcek V (2008) Bird-habitat associations predict population trends in central European forest and farmland birds. Biodiversity and Conservation 17: 3307–3319.
- 6. Barnagaud JY, Devictor V, Jiguet F, Archaux F (2011) When species become generalists: on-going large-scale changes in bird habitat specialization. Global Ecology and Biogeography 20: 630–640.
- 7. Devictor V, Julliard R, Clavel J, Jiguet F, Lee A, et al. (2008) Functional biotic homogenization of bird communities in disturbed landscapes. Global Ecology and Biogeography 17: 252–261.
- 8. Devictor V, Julliard R, Jiguet F (2008) Distribution of specialist and generalist species along spatial gradients of habitat disturbance and fragmentation. Oikos 117: 507–514.
- 9. Clavel J, Julliard R, Devictor V (2011) Worldwide decline of specialist species: toward a global functional homogenization? Frontiers in Ecology and the Environment 9: 222–228.
- 10. Davey CM, Chamberlain DE, Newson SE, Noble DG, Johnston A (2012) Rise of the generalists: evidence for climate driven homogenization in avian communities. Global Ecology and Biogeography 21: 568–578.
- 11. Olden JD (2006) Biotic homogenization: a new research agenda for conservation biogeography. Journal of Biogeography 33: 2027–2039.
- 12. Filippi-Codaccioni O, Devictor V, Bas Y, Julliard R (2010) Toward more concern for specialisation and less for species diversity in conserving farmland biodiversity. Biological Conservation 143: 1493–1500.
- 13. Clavel J, Julliard R, Devictor V (2010) Worldwide decline of specialist species: toward a global functional homogenization? Frontiers in Ecology and the Environment
- 14. Järvinen O, Väisänen R (1979) Changes in bird populations as criteria of environmental changes. Holoarctic Ecology 2: 75–80.
- 15. Virkkala R, Heikkinen RK, Leikola N, Luoto M (2008) Projected large-scale range reductions of northern-boreal land bird species due to climate change. Biological Conservation 1343–1353.
- 16. Forbes BC, Stammler F, Kumpula T, Meschtyb N, Pajunen A, et al. (2009) High resilience in the Yamal-Nenets social-ecological system, West Siberian Arctic, Russia. Proceedings of the National Academy of Sciences 106: 22041–22048.
- 17. Liebezeit JR, Kendall SJ, Brown S, Johnson CB, Martin P, et al. (2009) Influence of human development and predators on nest survival of tundra birds, Arctic Coastal Plain, Alaska. Ecological Applications 19: 1628–1644.
- 18. Sturm M, Racine C, Tape K (2001) Climate change - Increasing shrub abundance in the Arctic. Nature 411: 546–547.
- 19. Tape K, Sturm M, Racine C (2006) The evidence for shrub expansion in Northern Alaska and the Pan-Arctic. Global Change Biology 12: 686–702.
- 20. Post E, Pedersen C (2008) Opposing plant community responses to warming with and without herbivores. Proceedings of the National Academy of Sciences 105: 12353–12358.
- 21. Bråthen KA, Ims RA, Yoccoz NG, Fauchald P, Tveraa T, et al. (2007) Induced shift in ecosystem productivity? Extensive scale effects of abundant large herbivores. Ecosystems 10: 773–789.
- 22. den Herder M, Virtanen R, Roininen H (2008) Reindeer herbivory reduces willow growth and grouse forage in a forest-tundra ecotone. Basic and Applied Ecology 9: 324–331.
- 23. Ims RA, Yoccoz NG, Bråthen KA, Fauchald P, Tveraa T, et al. (2007) Can reindeer overabundance cause a trophic cascade? Ecosystems 10: 607–622.
- 24. Ims RA, Henden JA (2012) Collapse of an arctic bird community resulting from ungulate-induced loss of erect shrubs. Biological conservation 149: 2–5.
- 25. Forbes BC, Fresco N, Shvidenko A, Danell K, Chapin FS (2004) Geographic variations in anthropogenic drivers that influence the vulnerability and resilience of social-ecological systems. Ambio 33: 377–382.
- 26. Andres BA (2006) An Arctic-breeding bird survey on the northwestern Ungava Peninsula, Québec, Canada. Arctic 59: 311–318.
- 27. Sammler JE, Andersen DE, Skagen SK (2008) Population trends of tundra-nesting birds at Cape Churchill, Manitoba, in relation to increasing goose populations. Condor 110: 325–334.
- 28. Walker DA, Leibman MO, Epstein HE, Forbes BC, Bhatt US, et al. (2009) Spatial and temporal patterns of greenness on the Yamal Peninsula, Russia: interactions of ecological and social factors affecting the Arctic normalized difference vegetation index. Environmental Research Letters 4: 045004.
- 29. Golovatin M, Morozova L, Ektova S, Paskalny S (2010) The change of tundra biota at Yamal peninsula (the North of the Western Siberia, Russia) in connection with anthropogenic and climate shifts. In: Gutierrez B, Pena C, editors. Tundras: vegetation, wildlife and climate trends. New York: Nova Publishers. pp. 1–46.
- 30. Walker DA, Raynolds MK, Daniels FJA, Einarsson E, Elvebakk A, et al. (2005) The Circumpolar Arctic vegetation map. Journal of Vegetation Science 16: 267–282.
- 31. Zhitkov BM (1912) Birds of Yamal Peninsula. Yearbook of Zoological museum of Academy of Sciences 17: 311–369.
- 32. Sdobnikov VM (1937) Distribution of mammals and birds on habitats on Bolshezemelskaya tundra and Yamal. Proceedings of the Allunion Arctic institute 94: 1–76.
- 33. Uspenskiy SM (1960) Latitudinal zonality of the Arctic avifauna. Ornithologia: 55–70.
- 34. Danilov NN, Ryzhanovskiy VN, Ryabitsev VK (1984) Birds of Yamal. Moscow: Nauka. 333 p.
- 35. Baril L, Hansen A, Renkin R, Lawrence R (2009) Willow-bird relationships on Yellowstone's northern range. Yellowstone Science 17: 19–26.
- 36. Ripple WJ, Beschta RL (2005) Refugia from browsing as reference sites for restoration planning. Western North American Naturalist 65: 269–273.
- 37. Shiyatov SG, Mazepa VS (1995) Climate. In: Dobrinskiy LN, editor. The nature of Yamal. Yekaterinburg: Nauka. pp. 32–68.
- 38. Magomedova MA, Morozova LM, Ektova SN, Rebristaya OV, Chernyadyeva IV, et al. (2006) Yamal peninsula: vegetation cover; Gorchakovskiy PL, editor. Tumen: City-press. 360 p.
- 39. Chernov Y, Matveyeva N (1997) Arctic ecosystems in Russia. In: Wielgolaski F, editor. Ecosystems of the World. Amsterdam: Elsevier. pp. 361–507.
- 40. den Herder M, Virtanen R, Roininen H (2004) Effects of reindeer browsing on tundra willow and its associated insect herbivores. Journal of Applied Ecology 41: 870–879.
- 41. Ims RA, Yoccoz NG, Brathen KA, Fauchald P, Tveraa T, et al. (2007) Can reindeer overabundance cause a trophic cascade? Ecosystems 10: 607–622.
- 42. Freedman B, Svoboda J (1982) Populations of breeding birds at Alexandra Fjord, Ellesmere Island, Northwest Territories, compared with other arctic localities. Canadian Field-Naturalist 96: 56–60.
- 43. Tomialojc L, Verner J (1990) Do point counting and spot mapping produce equivalent estimates of bird densities. Auk 107: 447–450.
- 44. Trefry SA, Freedman B, Hudson JMG, Henry GHR (2010) Breeding bird surveys at Alexandra Fjord, Ellesmere Island, Nunavut (1980–2008). Arctic 63: 308–314.
- 45. Dray S, Chessel D, Thioulouse J (2003) Co-inertia analysis and the linking of ecological data tables. Ecology 84: 3078–3089.
- 46. ter Braak CJF (1986) Canonical correspondence-analysis - a new eigenvector technique for multivariate direct gradient analysis. Ecology 67: 1167–1179.
- 47. Greenacre M (2010) Correspondence analysis of raw data. Ecology 91: 958–963.
- 48. Chessel D, Lebreton J-D, Yoccoz N (1987) Propriétés de l'Analyse Canonique des Correspondances; une illustration en hydrobiologie. Revue de Statistique Appliquée 35: 55–72.
- 49. Julliard R, Clavel J, Devictor V, Jiguet F, Couvet D (2006) Spatial segregation of specialists and generalists in bird communities. Ecology Letters 9: 1237–1244.
- 50. Danilov NN (1966) The ways of adaptation of land vertebrate animals to living conditions in Subarctic. Proceedings of the Institute of Biology of UD RAS 2. Birds 1–147.
- 51. Gotelli NJ, Colwell RK (2001) Quantifying biodiversity: procedures and pitfalls in the measurement and comparison of species richness. Ecology Letters 4: 379–391.
- 52.
R Development Core Team (2010) R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.
- 53. Chessel D (2004) The ade4 package-I: One-table methods. R News 4: 5–10.
- 54. Oksanen J, Blanchet GF, Kindt R, Legendre P, O'Hara RB, et al. (2010) Vegan: Community Ecology Package. R package 117-4. 1.17-4 ed.
- 55. Jackson CR, Robertson MP (2011) Predicting the potential distribution of an endangered cryptic subterranean mammal from few occurrence records. Journal for Nature Conservation 19: 87–94.
- 56. Rodrigues R (1994) Microhabitat variables influencing nest-site selection by tundra birds. Ecological Applications 4: 110–116.
- 57. Kucheruk VV, Kovalevskiy YV, Surbanos AG (1975) Changes in bird populations and fauna of southern Yamal during the last 100 years. Bulletin Moscow National Society, Department of Biology 80: 52–64 (in Russian).
- 58.
Ryabitsev VK (1993) Distribution and dynamics of bird communities in the Sub-Arctic. Ekaterinburg (in Russian). Nauka Publisher. 293 p.
- 59. Sokolov VA (2006) Comparative analysis of the nesting bird fauna in south-western Yamal. Izvestya Chelyabinskogonauchnogo centra Ural Okrug. RAN 3: 109–113 (in Russian).
- 60. Chapin FS, Sturm M, Serreze MC, McFadden JP, Key JR, et al. (2005) Role of land-surface changes in Arctic summer warming. Science 310: 657–660.
- 61. Naito AT, Cairns DM (2011) Patterns and processes of global shrub expansion. Progress in Physical Geography 35: 423–442.
- 62. Clavero M, Villero D, Brotons L (2011) Climate change or land use dynamics: do we know what climate change indicators indicate? Plos One 6.
- 63. Killengreen S, Lecomte N, Ehrich D, Schott T, Yoccoz NG, et al. (2011) The importance of marine vs. human-induced subsidies in the maintenance of an expanding mesocarnivore in the arctic tundra. Journal of Animal Ecology 80: 1049–1060.
- 64. Killengreen ST, Stromseng E, Yoccoz NG, Ims RA (2012) How ecological neighbourhoods influence the structure of the scavenger guild in low arctic tundra. Diversity and Distributions 18: 563–574. | https://journals.plos.org:443/plosone/article?id=10.1371/journal.pone.0050335 |
Alpine and Arctic species are considered to be particularly vulnerable to climate change, which is expected to cause habitat loss, fragmentation and—ultimately—extinction of cold‐adapted species. However, the impact of climate change on glacial relict populations is not well understood, and specific recommendations for adaptive conservation management are lacking. We focused on the mountain hare (Lepus timidus) as a model species and modelled species distribution in combination with patch and landscape‐based connectivity metrics. They were derived from graph‐theory models to quantify changes in species distribution and to estimate the current and future importance of habitat patches for overall population connectivity. Models were calibrated based on 1,046 locations of species presence distributed across three biogeographic regions in the Swiss Alps and extrapolated according to two IPCC scenarios of climate change (RCP 4.5 & 8.5), each represented by three downscaled global climate models. The models predicted an average habitat loss of 35% (22%–55%) by 2100, mainly due to an increase in temperature during the reproductive season. An increase in habitat fragmentation was reflected in a 43% decrease in patch size, a 17% increase in the number of habitat patches and a 34% increase in inter‐patch distance. However, the predicted changes in habitat availability and connectivity varied considerably between biogeographic regions: Whereas the greatest habitat losses with an increase in inter‐patch distance were predicted at the southern and northern edges of the species’ Alpine distribution, the greatest increase in patch number and decrease in patch size is expected in the central Swiss Alps. Finally, both the number of isolated habitat patches and the number of patches crucial for maintaining the habitat network increased under the different variants of climate change. Focusing conservation action on the central Swiss Alps may help mitigate the predicted effects of climate change on population connectivity.
Global Change Biology – Wiley
Published: Jan 1, 2018
Many parasites have external transmission stages that persist in the environment prior to infecting a new host. Understanding how long these stages can persist, and how abiotic conditions such as temperature affect parasite persistence, is important for predicting infection dynamics and parasite responses to future environmental change. In this study, we explored environmental persistence and thermal tolerance of a debilitating protozoan parasite that infects monarch butterflies. Parasite transmission occurs when dormant spores, shed by adult butterflies onto host plants and other surfaces, are later consumed by caterpillars. We exposed parasite spores to a gradient of ecologically-relevant temperatures for 2, 35, or 93 weeks. We tested spore viability by feeding controlled spore doses to susceptible monarch larvae, and examined relationships between temperature, time, and resulting infection metrics. We also examined whether distinct parasite genotypes derived from replicate migratory and resident monarch populations differed in their thermal tolerance. Finally, we examined evidence for a trade-off between short-term within-host replication and long-term persistence ability. Parasite viability decreased in response to warmer temperatures over moderate-to-long time scales. Individual parasite genotypes showed high heterogeneity in viability, but differences did not cluster by migratory vs. resident monarch populations. We found no support for a negative relationship between environmental persistence and within-host replication, as might be expected if parasites invest in short-term reproduction at the cost of longer-term survival. Findings here indicate that dormant spores can survive for many months under cooler conditions, and that heat dramatically shortens the window of transmission for this widespread and virulent butterfly parasite.
Carry-over Effects of the Larval Environment in Mosquito-Borne Disease Symptoms
Check out the new book chapter by OSE student Mike Newberry and recent OSE graduate Dr. Michelle Evans!
Evans, M., Newberry, Philip M., and Courtney C. Murdock. “Carry-over Effects of the Larval Environment” in Population Biology of Vector-Borne Diseases. Editors: Drake JM, Bonsall M, Strand M. Oxford University Press; 2020 Dec 30. 155-174.
Habitat use as indicator of adaptive capacity to climate change
Aim
Populations of cold‐adapted species at the trailing edges of geographic ranges are particularly vulnerable to the negative effects of climate change from the combination of exposure to warm temperatures and high sensitivity to heat. Many of these species are predicted to decline under future climate scenarios, but they could persist if they can adapt to warming climates either physiologically or behaviourally. We aim to understand local variation in contemporary habitat use and use this information to identify signs of adaptive capacity. We focus on moose (Alces alces), a charismatic species of conservation and public interest.
Location
The northeastern United States, along the trailing edge of the moose geographic range in North America.
Methods
We compiled data on occurrences and habitat use of moose from remote cameras and GPS collars across the northeastern United States. We use these data to build habitat suitability models at local and regional spatial scales and then to predict future habitat suitability under climate change. We also use fine‐scale GPS data to model relationships between habitat use and temperature on a daily temporal scale and to predict future habitat use.
Results
We find that habitat suitability for moose will decline under a range of climate change scenarios. However, moose across the region differ in their use of climatic and habitat space, indicating that they could exhibit adaptive capacity. We also find evidence for behavioural responses to weather, where moose increase their use of forested wetland habitats in warmer places and/or times.
Main conclusions
Our results suggest that there will be significant shifts in moose distribution due to climate change. However, if there is spatial variation in thermal tolerance, trailing‐edge populations could adapt to climate change. We highlight that prioritizing certain habitats for conservation (i.e., thermal refuges) could be crucial for this adaptation.
Teitelbaum CS, Sirén AP, Coffel E, Foster JR, Frair JL, Hinton JW, Horton RM, Kramer DW, Lesk C, Raymond C, Wattles DW. Habitat use as indicator of adaptive capacity to climate change. Diversity and Distributions. 2021. https://doi.org/10.1111/ddi.13223
Climate, Fire Regime, Geomorphology, and Conspecifics Influence the Spatial Distribution of Chinook Salmon Redds
Pacific salmon spawning and rearing habitats result from dynamic interactions among geomorphic processes, natural disturbances, and hydro‐climatological factors acting across a range of spatial and temporal scales. We used a 21‐year record of redd locations in a wilderness river network in central Idaho, USA, to examine which covariates best predict the spawning occurrence of Chinook Salmon Oncorhynchus tshawytscha and how shifts under a changing climate might affect habitat availability. We quantified geomorphic characteristics (substrate size, channel slope, and valley confinement), climatic factors (stream temperature and summer discharge), wildfire, and conspecific abundance (as inferred by the number of redds) throughout the network. We then built and compared logistic regression models that estimated redd occurrence probability as a function of these covariates in 1‐km reaches throughout the network under current and projected climate change scenarios. Redd occurrence was strongly affected by nearly all of the covariates examined. The best models indicated that climate‐driven changes in redd occurrence probabilities will be relatively small but spatially heterogeneous, with warmer temperatures increasing occurrence probabilities in cold, high‐elevation reaches and decreasing probabilities in warm, low‐elevation reaches. Furthermore, positive effects of wildfire on redd occurrence may be more important than climate‐driven effects on stream temperature and summer discharge, although climate‐related changes in temperature and scour regime during the egg incubation period may influence survival to emergence. Our results identify where favorable spawning habitats are likely to exist under climate change, how future habitat distributions may differ from contemporary conditions, and where habitat conservation might be prioritized. Furthermore, the positive occurrence–abundance relationship we observed indicates that the study site is underseeded, and effective management actions are needed for increasing the recruitment of spawning adults to take advantage of available habitat.
Jacobs GR, Thurow RF, Buffington JM, Isaak D, Wenger SJ. Climate, Fire Regime, Geomorphology, and Conspecifics Influence the Spatial Distribution of Chinook Salmon Redds. Transactions of the American Fisheries Society. 2020.
https://afspubs.onlinelibrary.wiley.com/doi/10.1002/tafs.10270
Dead litter of resident species first facilitates and then inhibits sequential life stages of range‐expanding species
- Resident species can facilitate invading species (biotic assistance) or inhibit their expansion (biotic resistance). Species interactions are often context‐dependent and the relative importance of biotic assistance versus resistance could vary with abiotic conditions or the life stage of the invading species, as invader stress tolerances and resource requirements change with ontogeny. In northeast Florida salt marshes, the abundant dead litter (wrack) of the native marsh cordgrass, Spartina alterniflora, could influence the expansion success of the black mangrove, Avicennia germinans, a tropical species that is expanding its range northward.
- We used two field experiments to examine how S. alterniflora wrack affects A. germinans success during (a) propagule establishment and (b) subsequent seedling survival. We also conducted laboratory feeding assays to identify propagule consumers and assess how wrack presence influences herbivory on mangrove propagules.
- Spartina alterniflora wrack facilitated A. germinans establishment by promoting propagule recruitment, retention and rooting; the tidal regime influenced the magnitude of these effects. However, over time S. alterniflora wrack inhibited A. germinans seedling success by smothering seedlings and attracting herbivore consumers. Feeding assays identified rodents—which seek refuge in wrack—as consumers of A. germinans propagules.
- Synthesis. Our results suggest that the deleterious effects of S. alterniflora wrack on A. germinans seedling survival counterbalance the initial beneficial effects of wrack on A. germinans seed establishment. Such seed‐seedling conflicts can arise when species stress tolerances and resource requirements change throughout development and vary with abiotic conditions. In concert with the tidal conditions, the relative importance of positive and negative interactions with wrack at each life stage can influence the rate of local and regional mangrove expansion. Because interaction strengths can change in direction and magnitude with ontogeny, it is essential to examine resident–invader interactions at multiple life stages and across environmental gradients to uncover the mechanisms of biotic assistance and resistance during invasion.
Smith RS, Blaze JA, Byers JE. Dead litter of resident species first facilitates and then inhibits sequential life stages of range‐expanding species. Journal of Ecology. https://doi.org/10.1111/1365-2745.13586
Graduate Student Symposium Schedule 2021
Here is the Zoom Link (Meeting ID: 917 3821 1238, Passcode: 674466). You will need to be registered via this link.
Friday, February 5
Welcoming Remarks
Session I (Moderator: Rebecca Atkins)
Session II (Moderator: Rebecca Atkins)
Rapid Fire Session I (Moderator: TJ Odom)
Poster Session
4:00 – 5:30 Virtual Poster Session (Zoom link)
Breakout Room 1:
Corinna Hazelrig – Batrachochytrium dendrobatidis prevalence throughout amphibian species and life stages of varying skin keratin richness
Christopher Brandon – Walking while parasitized: Effects of a nematode parasite on locomotion activity of horned passalus beetles
Breakout Room 2:
Caroline Aikins – Inferring diet of ringtails (Bassariscus astutus) from latrines in human-impacted park habitats
Mikey Fager – Tipping streams: Does increased temperature change the balance of carbon and nutrients in food resources?
Amelia Foley – Plastic in the urban environment: An exploratory study of microplastics in the Athens, GA community
Breakout Room 3:
Will Ellis – How parasites influence ecosystems: Studying the varied effects of a trematode parasite on its environment
Jessica Mitchell – Assessing the response of aquatic detritivore insects to experimental warming
Niki Gajjar – Morphological root traits and phylogenetic signals in Southern Africa trees and grasses
Saturday, February 6
Session IV (Moderator: Carolyn Cummins)
Rapid Fire Session II (Moderator: Carolyn Cummins)
Keynote Address
Urban specialization reduces habitat connectivity by a highly mobile wading bird
Background
Mobile animals transport nutrients and propagules across habitats, and are crucial for the functioning of food webs and for ecosystem services. Human activities such as urbanization can alter animal movement behavior, including site fidelity and resource use. Because many urban areas are adjacent to natural sites, mobile animals might connect natural and urban habitats. More generally, understanding animal movement patterns in urban areas can help predict how urban expansion will affect the roles of highly mobile animals in ecological processes.
Methods
Here, we examined movements by a seasonally nomadic wading bird, the American white ibis (Eudocimus albus), in South Florida, USA. White ibis are colonial wading birds that forage on aquatic prey; in recent years, some ibis have shifted their behavior to forage in urban parks, where they are fed by people. We used a spatial network approach to investigate how individual movement patterns influence connectivity between urban and non-urban sites. We built a network of habitat connectivity using GPS tracking data from ibis during their non-breeding season and compared this network to simulated networks that assumed individuals moved indiscriminately with respect to habitat type.
Results
We found that the observed network was less connected than the simulated networks, that urban-urban and natural-natural connections were strong, and that individuals using urban sites had the least-variable habitat use. Importantly, the few ibis that used both urban and natural habitats contributed the most to connectivity.
Conclusions
Habitat specialization in urban-acclimated wildlife could reduce the exchange of propagules and nutrients between urban and natural areas, which has consequences both for beneficial effects of connectivity such as gene flow and for detrimental effects such as the spread of contaminants or pathogens.
Claire S. Teitelbaum, Jeffrey Hepinstall-Cymerman, Anjelika Kidd-Weaver, Sonia M. Hernandez, Sonia Altizer, Richard J. Hall. Urban specialization reduces habitat connectivity by a highly mobile wading bird. Movement Ecology 8, 49 (2020). https://doi.org/10.1186/s40462-020-00233-7
Laura Kojima featured on Ologies Podcast!
Kaylee earns best talk award at ESA!
Our very own Kaylee Arnold won best talk for her presentation, “The gut microbial diversity of a Chagas disease vector varies across coinfection status throughout central Panama” in the Medical, Urban, & Veterinary Entomology section of the Entomological Society of America at their annual meeting. Read below for her abstract. Congratulations, Kaylee!
Chagas disease is caused by the parasite Trypanosoma cruzi that is carried in the guts of hematophagous triatomine vectors. Triatomines are often coinfected with the parasite T. rangeli, which is non-pathogenic to mammals but can reduce fitness of their triatomine hosts. This study examined the gut microbial diversity of T. cruzi infected, coinfected, and uninfected triatomines (n = 288) throughout central Panama. We hypothesized that single and coinfected triatomines would have greater gut microbial diversity than uninfected individuals due to pathogen-microbe interactions within the gut, which can facilitate the proliferation of less dominant bacterial taxa.
Coinfections were found in 13% of individuals (40/288) and there was significantly greater alpha diversity in coinfected individuals when compared to both single and uninfected samples (Dunn’s test of multiple comparisons, p < 0.001). Furthermore, single T. cruzi infections were found in 34% of sampled individuals (91/288) and also displayed significantly greater alpha diversity when compared to uninfected individuals (Kruskal-Wallis H test, p < 0.001). Across all samples, Sphingomonas was the most dominant taxa, and decreased in relative abundance compared to uninfected individuals. Finally, the beta diversity across infected samples was significantly different compared to uninfected samples (PERMANOVA p = 0.001 using Bray-Curtis dissimilarity). These results highlight patterns of microbial diversity which may be impacted by vector infection status and will be important to consider when developing vector control strategies.
Lexi Kenna defends Master’s Thesis!
Congratulations, Lexi! Here is the abstract of her thesis, entitled Invertebrate herbivory of understory trees in the Georgia Piedmont in response to soil warming:
As the global mean surface temperature increases, changes in biogeochemical cycling have the potential to have cascading effects on plant and invertebrate interactions. Previous warming studies have primarily been conducted in recently glaciated, more fertile soils, and the response of plant and invertebrate interactions to warming is unclear in lower latitude, less fertile soils of the Georgia Piedmont. In this study, I examined leaf and soil chemistry (%N, C:N) and herbivore damage (% leaf area consumed) from understory tree seedlings of the Georgia Piedmont. Carbon and nitrogen foliar content and invertebrate herbivory did not respond to warming in any year, but there were interactive effects of temperature and species. Overall, warming did not have an indirect effect on plant-herbivore interactions, which is likely due to Piedmont soils containing less available nitrogen. However, species-level variation in response to warming has implications for forest composition changes. | http://gsa.ecology.uga.edu/?foogallery=jb-paper-2020 |
What type of probability is rolling a dice?
To get a 6 when rolling a six-sided die, probability = 1 ÷ 6 ≈ 0.167, or a 16.7 percent chance. To get two 6s when rolling two dice, probability = 1/6 × 1/6 = 1/36 ≈ 0.0278, or a 2.78 percent chance.
Are dice binomial?
The multinomial distribution is the generalization of the binomial distribution from two possible outcomes to many, which means that not every multinomial experiment is a binomial one. Rolling a die and recording how often each of the six faces appears is multinomial; recording only whether one particular face (say, a 6) appears is binomial.
What is an example of theoretical probability?
Theoretical probability is determined by the sample space of an object. For example, the probability of rolling a 3 using a fair die is 1/6. This is because the number 3 represents one possible outcome out of the 6 possible outcomes of rolling a fair die.
What is an example of empirical probability?
Empirical probability, also called experimental probability, is the probability your experiment will give you a certain result. For example, you could toss a coin 100 times to see how many heads you get, or you could perform a taste test to see if 100 people preferred cola A or cola B.
Is rolling a die a random variable?
Yes. There are various numbers we could assign to the outcomes (like the number of heads, or the total when we roll a pair of dice). A random variable is a rule (or function) that assigns a number to each outcome of a random experiment. For rolling a pair of dice, you could let X be the sum of the numbers on the top faces.
Is rolling a die a continuous random variable?
No: a die (singular of "dice") can come up 1, 2, 3, 4, 5 or 6, but not 1.5 or 2.3, so it is a discrete random variable. Continuous random variables, by contrast, can take on a continuum of values.
Is rolling a dice discrete data?
It is discrete; the results can only be specific whole numbers (2, 3, … up to 12). If it were continuous, it could take any value in between, for example 2.3.
Is flipping a coin discrete or continuous?
And because the number of heads results from a random process – flipping a coin – it is a discrete random variable. Continuous variables, in contrast, can take on any value within a range of values.
What is an example of a discrete variable?
Discrete variables are countable in a finite amount of time. For example, you can count the change in your pocket. You can count the money in your bank account. You could also count the amount of money in everyone’s bank accounts.
What probability distribution is a dice?
A single fair die is equally likely to show each face, so it follows a discrete uniform distribution (probability 1/6 per face). The sum of two fair dice is not uniform: totals near 7 are the most likely and totals of 2 or 12 the least likely, as the short tabulation sketched below shows. | https://stickinthemudpodcast.com/betting/what-kind-of-distribution-is-rolling-a-dice.html
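A few lines of Python make both distributions concrete; this is an illustrative sketch, not material from the original page:

from collections import Counter
from itertools import product

# A single fair die: every face 1-6 has probability 1/6 (discrete uniform).
single = {face: 1 / 6 for face in range(1, 7)}

# The sum of two dice: count how many of the 36 equally likely pairs give each total.
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))
two_dice = {total: count / 36 for total, count in sorted(totals.items())}

print(single)    # 0.1667 for every face
print(two_dice)  # peaks at 7 (6/36) and tails off toward 2 and 12 (1/36 each)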
Ideal rational thinking is Bayesian, meaning that it is based on Bayes’ theorem. Bayes’ theorem tells us how to update our confidence in a theory based on our experiences. The theorem itself has a mathematical form, and its mathematical form can be intimidating to people who aren’t comfortable with algebra. In my classes on rational thinking, I’ve tried to express the practical content of Bayes’ theorem in terms of three intuitive principles. This blog post is about the first principle, and I’ll follow up with two more posts on the remaining principles.
To explain these principles, the best thought experiment I've been able to find involves gaming dice. If you've ever played Dungeons and Dragons, you're familiar with 4-sided and 20-sided dice. A 4-sided die is a tetrahedron, and a 20-sided die is an icosahedron.
Of course, the sides on a 4-sided die are numbered 1-4, and the sides on a 20-sided die are numbered 1-20.
Now, suppose that I have a 4-sided die and a 20-sided die, and I pick one at random, and roll the chosen die. I don’t reveal which die I rolled, but I tell you that I rolled a 3. Which die did I most likely roll? Is it more likely I rolled the 4-sided die or the 20-sided die?
THEORY T1: I rolled the 4-sided die.
THEORY T2: I rolled the 20-sided die.
T2 is a possible explanation for the evidence, i.e., for rolling a 3. The theory T2 is more flexible in the sense that T2 is compatible with more possible outcomes (20 of them, in fact). T1 is less flexible in that it is compatible with only 4 outcomes. Yet, as most of you have already intuited, T1 is more likely. It’s more likely that I rolled the 4-sided die. How can we see this more explicitly?
Imagine that we’re going to play this dice game 200 times in a row. That is, I’ll be picking a die at random 200 times, each time rolling that my die, and reporting the outcome.
Because I’m choosing my die at random, in 200 plays, I will pick the 4-sided die 100 times and the 20-sided die 100 times. Of the 100 times that I choose the 4-sided die, I expect the resulting rolls to be evenly split across each side of the die. That means we’ll expect to see each face of the 4-sided die come up 25 times (100 rolls divided by 4 sides).
Of the 100 times that I choose the 20-sided die, I’ll see each face come up 5 times (100 rolls divided by 20 faces).
So, out of the 200 times we play the game, we shall expect 3 to be rolled 25 times on the 4-sided die and 5 times on the 20-sided die, for a total of 30 instances of rolling a 3 in 200 plays.
Thus, we expect that 25 out of every 30 times a 3 is rolled in our dice game, the rolled die will have been the 4-sided die. Consequently, given that the 3 was rolled, we infer that there is an 83.3% chance that it was the 4-sided die that was rolled.
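The same 83.3% figure falls straight out of Bayes' theorem; a small sketch (not part of the original post):

from fractions import Fraction

# Prior: each die is picked with probability 1/2.
prior_d4, prior_d20 = Fraction(1, 2), Fraction(1, 2)

# Likelihood of rolling a 3 with each die.
like_d4, like_d20 = Fraction(1, 4), Fraction(1, 20)

# Bayes' theorem: P(d4 | rolled a 3) = P(3 | d4) P(d4) / P(3)
evidence = like_d4 * prior_d4 + like_d20 * prior_d20
posterior_d4 = like_d4 * prior_d4 / evidence

print(posterior_d4, float(posterior_d4))   # 5/6, about 0.833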
What have we learned from this thought experiment?
All things being equal, the theory which predicts a higher likelihood for the observed data is the theory most likely to be true.
All things being equal, the theory which predicts the greatest number of alternatives to what is observed is the theory that's least likely to be true.
When we’re tricked this way, we are deceived into thinking that an elegant theory is a theory that is compatible with any observation we might possibly make.
When we answer the real question, “Which theory probably accounts for what we see?”, we’re not asking which theory is compatible with the data, but which theory is most likely to result in the data we’ve actually observed.
What matters is probability, not merely possibility.
Next Up: Principle II – All things are not equal! | https://rationalfuture.org/wp/2012/10/31/bayes-theorem-some-intuitive-principles-part-i/ |
While playing the Pathfinder RPG, the Game Master describes the events that occur in the game world, and the players take turns describing what their characters do in response to those events. Unlike storytelling, however, the actions of the players and the characters controlled by the Game Master (frequently called non-player characters, or NPCs) are not certain. Most actions require dice rolls to determine success, with some tasks being more difficult than others. Each character is better at some things than he is at other things, granting him bonuses based on his skills and abilities.

Whenever a roll is required, the roll is noted as "d#," with the "#" representing the number of sides on the die. If you need to roll multiple dice of the same type, there will be a number before the "d." For example, if you are required to roll 4d6, you should roll four six-sided dice and add the results together. Sometimes there will be a + or – after the notation, meaning that you add that number to, or subtract it from, the total results of the dice (not to each individual die rolled). Most die rolls in the game use a d20 with a number of modifiers based on the character's skills, his or her abilities, and the situation. Generally speaking, rolling high is better than rolling low.

Percentile rolls are a special case, indicated as rolling d%. You can generate a random number in this range by rolling two differently colored ten-sided dice (2d10). Pick one color to represent the tens digit, then roll both dice. If the die chosen to be the tens digit rolls a "4" and the other d10 rolls a "2," then you've generated a 42. A zero on the tens digit die indicates a result from 1 to 9, or 100 if both dice result in a zero. Some d10s are printed with "10," "20," "30," and so on in order to make reading d% rolls easier. Unless otherwise noted, whenever you must round a number, always round down.

As your character goes on adventures, he earns gold, magic items, and experience points. Gold can be used to purchase better equipment, while magic items possess powerful abilities that enhance your character. Experience points are awarded for overcoming challenges and completing major storylines. When your character has earned enough experience points, he increases his character level by one, granting him new powers and abilities that allow him to take on even greater challenges. | https://openabstract.org/abstract/pathfinder-core-rulebook-pdf-pathfinder-2e/284
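The "d#" notation is easy to automate; the helper below is an illustrative sketch (it is not from the rulebook) that rolls expressions such as "4d6" or "1d20+7":

import random
import re

def roll(notation):
    """Roll dice written in the 'XdY', 'XdY+Z' or 'XdY-Z' style described above."""
    match = re.fullmatch(r"(\d*)d(\d+)([+-]\d+)?", notation.strip())
    if match is None:
        raise ValueError("not a dice expression: " + repr(notation))
    count = int(match.group(1) or 1)      # a bare 'd20' means one die
    sides = int(match.group(2))
    modifier = int(match.group(3) or 0)   # added to (or subtracted from) the total
    return sum(random.randint(1, sides) for _ in range(count)) + modifier

print(roll("4d6"))      # four six-sided dice added together
print(roll("1d20+7"))   # a d20 roll with a +7 modifier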
The word 'Probability' means the chance that a particular event occurs. It is generally possible to predict the future of an event quantitatively with a certain probability of being correct. Probability is used in cases where the outcome of a trial is uncertain.
The probability of an event which is certain to occur is one.
The probability of an event which is impossible is zero.
P(A) + P(A′) = 1, where A′ denotes "not A"; 0 ≤ P(A) ≤ 1 and 0 ≤ P(A′) ≤ 1.
1. Trial and Event: The performance of an experiment is called a trial, and the set of its outcomes is termed an event.
2. Random Experiment: It is an experiment in which all the possible outcomes of the experiment are known in advance. But the exact outcomes of any specific performance are not known in advance.
Drawing a card from a pack of 52 cards.
Drawing a ball from a bag.
3. Outcome: The result of a random experiment is called an Outcome.
Example: 1. Tossing a coin is an experiment and getting head is called an outcome.
2. Rolling a die and getting 6 is an outcome.
4. Sample Space: The set of all possible outcomes of an experiment is called sample space and is denoted by S.
Note1: If a die is rolled n times, the total number of possible outcomes is 6^n.
Note2: Rolling 1 die n times gives the same sample space as rolling n dice 1 time.
5. Complement of Event: The set of all outcomes which are in the sample space but not in the event is called the complement of that event.
6. Impossible Events: An event which can never happen.
Example1: Tossing double-headed coins and getting tails is an impossible event.
Example2: Rolling a die and getting number > 10 in an impossible outcome.
7. Sure Events: An event which is certain to occur is called a sure event.
Example1: Tossing double-headed coins and getting heads only.
8. Possible Outcome: An outcome which is possible to occur is called Possible Outcome.
Example1: Tossing a fair coin and getting a head on it.
Example2: Rolling a die and getting an odd number.
9. Equally Likely Events: Events are said to be equally likely if one of them cannot be expected to occur in preference to others. In other words, it means each outcome is as likely to occur as any other outcome.
Example: When a die is thrown, all the six faces, i.e., 1, 2, 3, 4, 5 and 6 are equally likely to occur.
10. Mutually Exclusive or Disjoint Events: Events are called mutually exclusive if they cannot occur simultaneously.
Example: Suppose a card is drawn from a pack of cards, then the events getting a jack and getting a king are mutually exclusive because they cannot occur simultaneously.
11. Exhaustive Events: The total number of all possible outcomes of an experiment is called exhaustive events.
Example: In the tossing of a coin, either head or tail may turn up. Therefore, there are two possible outcomes. Hence, there are two exhaustive events in tossing a coin.
12. Independent Events: Events A and B are said to be independent if the occurrence of any one event does not affect the occurrence of any other event.
P (A ∩ B) = P (A) P (B).
A: "The first throw results in heads."
B: "The last throw results in Tails."
Prove that event A and B are independent.
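The setup for this exercise is not shown above; assuming an unbiased coin tossed three times, the claim can be checked by enumerating the eight equally likely sequences (an illustrative sketch, not from the original page):

from itertools import product

# All 8 equally likely outcomes of three tosses of a fair coin.
outcomes = list(product("HT", repeat=3))

A = [o for o in outcomes if o[0] == "H"]                      # first throw is heads
B = [o for o in outcomes if o[-1] == "T"]                     # last throw is tails
both = [o for o in outcomes if o[0] == "H" and o[-1] == "T"]

p_a = len(A) / len(outcomes)       # 4/8 = 0.5
p_b = len(B) / len(outcomes)       # 4/8 = 0.5
p_ab = len(both) / len(outcomes)   # 2/8 = 0.25

print(p_ab == p_a * p_b)           # True, so A and B are independent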
13. Dependent Event: Events are said to be dependent if the occurrence of one affects the occurrence of the other events. | https://www.javatpoint.com/probability
Classes and How To Use Them.
This is an example of a class.
This class creates a die that can be used to play various games. Let us break up the code step by step.
The first step is to open the class definition: we do that by writing "class" and then the name we want to give it (in this case Die; class names start with capital letters). Now it is time to declare the attributes. An "attribute" here is a method built into Ruby's class system; in this case "reader" (attr_reader) lets us read the number of sides the die has after we create it. Next we get to the methods that live in the Die class. The method "initialize" is used when creating most classes, and it is what we use to set the number of sides and what we want written on them. The next bit of code makes sure that we create a die with a minimum of one side.
Now comes the fun part: do you see "@label" and "@sides"? They are what are called instance variables, which means they can be used by any method that is contained in the class. All instance variables start with an "@" and they follow the same naming conventions that normal variables have.
After this we have a new method. A method that is contained in a class is called an instance method, which means it can only be called on an object of the Die class. In our method "roll" we roll the die and it gives us one of the sides that we initialized the die with.
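The original Ruby listing is not reproduced in this excerpt. As a rough stand-in, here is the same idea sketched in Python; the sides, labels and roll names mirror the description above, and everything else is an assumption:

import random

class Die:
    """A die with a configurable number of sides and labels, as described above."""

    def __init__(self, sides=6, labels=None):
        if sides < 1:
            raise ValueError("a die needs at least one side")
        self.sides = sides                                   # plays the role of @sides
        self.labels = labels or list(range(1, sides + 1))    # plays the role of @label

    def roll(self):
        """Return one of the sides the die was initialized with."""
        return random.choice(self.labels)

d20 = Die(20)
print(d20.roll())                    # a number from 1 to 20
coin = Die(2, ["heads", "tails"])
print(coin.roll())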
This class is very useful: it can be used for a simple game of Chutes and Ladders or for the complicated many-sided dice used in D&D. Because we have created this class we can do whatever we want with it. We can give it as many sides as we want, and we can put whatever we want on them. With a little tinkering we can even make it return more than one side every time we call "roll" on the die.
So what are you waiting for? Go and have fun! | https://www.benlights.com/blog/2015/04/01/chutes-and-ladders-or-d-and-d/
Click on the student template. The teacher will help us fill it out.
Advanced: Strings and integers are very different types; we use strings to create text while we use integers to count and do math. It would not make sense to have something like:
result = "hello" + 5
Sometimes, though we want to turn a string that contains an integer into an integer, to do some math, or to turn a number into a string. We can turn one type into the other using the str() and int() functions, i.e., str() turns something into a string and int() turns something into an integer. Hence, these do not work:
result = 5 + "10" # adding integers and strings does not work msg = "I am " + 10 + " years old" # concatenation only works with strings
but these do:
result = 5 + int("10") # int("10") is the integer 10 msg = "I am " + str(10) + " years old" # str(10) is the string "10"
3rd grade – Arithmetic with Python
Python is really good at math; for Python it is no more difficult to add 10-digit numbers than to add 1-digit numbers, and the same goes for subtraction and multiplication. Hence, we are going to write a program to have it check our homework.
Let’s put two numbers in two variables at the beginning of our program, for example, in number_1 and number_2:
number_1 = 5
and
number_2 = 3
Then, we want a program that prints the following:
5 + 3 = 8
5 – 3 = 2
3 – 5 = -2
5 * 3 = 15
Answer: one possible solution is sketched below.
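This sketch uses the variable names suggested in the problem statement; it is one of many valid solutions:

number_1 = 5
number_2 = 3

print(number_1, "+", number_2, "=", number_1 + number_2)
print(number_1, "-", number_2, "=", number_1 - number_2)
print(number_2, "-", number_1, "=", number_2 - number_1)
print(number_1, "*", number_2, "=", number_1 * number_2)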
4th grade – Tooth-fairy arithmetic
The tooth fairy brings a present to children when they lose a tooth; this present is sometimes money that is left under the child’s pillow. Please write a program that let us know how much would a child collect from the tooth fairy if she brings, say, $2 per tooth, given that a child has 20 teeth.
We can put the amount of the present and the number of teeth in variables, at the beginning of our program, and then get the program to print the result:
teeth = 20
present = $2
total = $40
Now to the difficult part of the problem: certain sharks lose 50000 (fifty thousand) teeth over their lifetime. Please, find out how much money would the tooth fairy give us if we were to lose 50000 teeth and if she did not bring us $2 but $5.
Answer: one possible solution is sketched below.
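This sketch keeps the dollar sign in the printed text rather than in the numbers; it is one of many valid solutions:

teeth = 20      # a child has 20 teeth
present = 2     # dollars per tooth
print("total = $" + str(teeth * present))    # total = $40

# The shark version of the problem:
teeth = 50000
present = 5
print("total = $" + str(teeth * present))    # total = $250000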
5th grade – Dungeons and Dragons
Dungeons and Dragons is a game in which we take the role of humans, elves or dwarves, and go on adventures on a game board. We can explore lost ruins full of vampires, or tunnels with treasures guarded by dragons.
Did the dragon see us first or did we spot it in time? I don't know… If you are a thief and have excellent perception skills you would have spotted the dragon first if you rolled 8 or more on a 20-side die but, if you are a paladin, chances are that you would need to roll a 19 or higher instead.
A 20-side die? Yes.. Dungeons and Dragons uses dice of 4, 6, 8, 10, 12 and 20 sides; here we are going to write a D&D dice roller. We will need to use the function
randint(a,b) that returns a number between a and b, e.g.,
# load the random library to give us access to randint
from random import randint
roll = randint(5,10) # use randint
will assign either 5, 6, 7, 8, 9 or 10 to the variable roll.
Your program must roll each of the 6 dice and print the resulting rolls in the following format:
You rolled a 2 with the 4-side die
You rolled a 5 with the 6-side die
You rolled a 3 with the 8-side die
You rolled a 7 with the 10-side die
You rolled a 10 with the 12-side die
You rolled a 13 with the 20-side die
Answer: one possible solution is sketched below.
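This sketch loops over the six die sizes and matches the output format shown above; it is one of many valid solutions:

from random import randint

for sides in [4, 6, 8, 10, 12, 20]:
    roll = randint(1, sides)
    print("You rolled a", roll, "with the", str(sides) + "-side die")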
6th grade – the sum of all the numbers up to 1,000,000
Are you crazy? A million is a very big number.. how am I supposed to add all the numbers up to one million? Well.. let’s start with something smaller. How about adding all the numbers up to 6? Let’s start with the brute force approach:
1 + 2 + 3 + 4 + 5 + 6 = 21
This works but is not a very promising way to add a million numbers. Let’s try stacking the numbers one over the other. We will use the variable x to denote the value of the sum of all these numbers; for now we will pretend that we do not know its value.
1
2
3
4
5
+ 6
---
x
Since adding the numbers from 1 to 6 adds up to x, then adding the numbers twice should add up to 2x; we are going to add them to our column of numbers upside down, though; this does not make any difference to the result:
1+6
2+5
3+4
4+3
5+2
+ 6+1
-----
2x
Notice that all the rows add up to the same value, i.e., 7. In essence we are adding up 6 times the number 7:
7
7
7
7
7
+ 7
---
2x
This means that 6 times 7 is 2x:
7 + 7 + 7 + 7 + 7 + 7 = 2x
6 * 7 = 2x
so
x = (6 * 7) / 2 # look at this formula carefully. How does it depend on 6?
= 42 / 2
= 21
This looks promising. Go over the same process on a piece of paper, but this time find the sum of all the numbers from 1 to 7. | http://codeperspectives.com/intro/integers/ |
The data in Table 1 were obtained by rolling a six-sided die 36 times. However, as can be seen in Table 1, some outcomes occurred more frequently than others. For example, a "3" came up nine times, whereas a "4" came up only two times. Are these data consistent with the hypothesis that the die is a fair die? Naturally, we do not expect the sample frequencies of the six possible outcomes to be the same since chance differences will occur. So, the finding that the frequencies differ does not mean that the die is not fair. One way to test whether the die is fair is to conduct a significance test. The null hypothesis is that the die is fair. This hypothesis is tested by computing the probability of obtaining frequencies as discrepant or more discrepant from a uniform distribution of frequencies as obtained in the sample. If this probability is sufficiently low, then the null hypothesis that the die is fair can be rejected.
Table 1. Outcome Frequencies from a Six-Sided Die.
The first step in conducting the significance test is to compute the expected frequency for each outcome given that the null hypothesis is true. For example, the expected frequency of a "1" is 6 since the probability of a "1" coming up is 1/6 and there were a total of 36 rolls of the die.
Expected frequency = (1/6)(36) = 6
Note that the expected frequencies are expected only in a theoretical sense. We do not really "expect" the observed frequencies to match the "expected frequencies" exactly.

The calculation continues as follows. Letting E be the expected frequency of an outcome and O be the observed frequency of that outcome, compute (E − O)²/E for each outcome. Table 2 shows these calculations.
Table 2. Outcome Frequencies from a Six-Sided Die.
Next we add up all the values in Column 4 of Table 2.
The sampling distribution of Σ(E − O)²/E is approximately a Chi Square distribution with k − 1 degrees of freedom, where k is the number of categories. Therefore, for this problem the test statistic is Chi Square with 5 degrees of freedom, and its value is 5.333.
From a Chi Square calculator it can be determined that the probability of a Chi Square of 5.333 or larger is 0.377. Therefore, the null hypothesis that the die is fair cannot be rejected.
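The arithmetic above is easy to reproduce. The individual counts from Table 1 are not shown in this excerpt, so the observed frequencies below are a hypothetical set consistent with the details given (36 rolls, a "3" nine times, a "4" twice) that yields the same Chi Square of 5.333:

observed = [8, 5, 9, 2, 7, 5]     # hypothetical counts; only the 9 and the 2 are stated above
expected = [36 / 6] * 6           # 6 expected rolls for each face of a fair die

chi_square = sum((e - o) ** 2 / e for e, o in zip(expected, observed))
print(round(chi_square, 3))       # 5.333

# With SciPy available, the p-value matches the one quoted above:
# from scipy.stats import chi2
# print(1 - chi2.cdf(chi_square, df=5))   # about 0.377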
This Chi Square test can also be used to test other deviations between expected and observed frequencies. The following example shows a test of whether the variable "University GPA" in the SAT and College GPA case study is normally distributed.
The first column in Table 3 shows the normal distribution divided into five ranges. The second column shows the proportions of a normal distribution falling in the ranges specified in the first column. The expected frequencies (E) are calculated by multiplying the number of scores (105) by the proportion. The final column shows the observed number of scores in each range. It is clear that the observed frequencies vary greatly from the expected frequencies. Note that if the distribution were normal, then there would have been only about 35 scores between 0 and 1, whereas 60 were observed.
Table 3. Expected and Observed Scores for 105 University GPA Scores.
The test of whether the observed scores deviate significantly from the expected scores is computed using the familiar calculation, Σ(E − O)²/E. The subscript "3" on the resulting Chi Square means there are three degrees of freedom. As before, the degrees of freedom is the number of outcomes minus 1, which is 4 - 1 = 3 in this example. The Chi Square distribution calculator shows that p < 0.001 for this Chi Square. Therefore, the null hypothesis that the scores are normally distributed can be rejected. | http://onlinestatbook.com/2/chi_square/one-way.html
Objective: SW be able to (1) determine the value of a group of coins (2) identify pennies, nickels, dimes, and quarters (3) count a collection of pennies, nickels, dimes, and quarters.
Advanced Organizers: The teacher should have play money (pennies, nickels, dimes, and quarters) on hand to demonstrate how to count the money. The teacher should also make cubes (or buy cubes) and tape the different coins on each side (you will have to use two of the same coins twice). * You can buy money stickers to place on cubes or tape the play or paper money on the sides of the cubes.*
Introduction: SW review each coin by singing "Money Song" by Dr. Jean. TW review each coin (penny, nickel, dime, and quarter) by holding up the coins and discussing the appearance and value.
Modeling/ Check for Understanding: TW review how to count a collection of coins by demonstrating how to count a group of pennies, nickels, dimes, and quarters with the play money. TW have the students count the play money (collection of pennies, nickels, dimes, and quarters) together to see if students understand the concept of counting a group of coins.
Procedures:
1.The teacher will place each student with a partner.
2.The teacher will give each group one of the pre-made dice and explain that each student will take turns rolling the die.
3.The first person rolls the die and the partner should draw the coins (circle with the amount in the middle) he/she rolls on a piece of paper. The first person rolls the die three times as the other partner draws the coins each time. The partner drawing the coins should add up the coins (I get my students to add the amount up under the coins) and write the total. The person rolling should check to see if the total is correct.
4.The partners should then switch (roller now draws and adds the coins and the previous drawer rolls and checks the work).
5.Each group should roll a total of six times (each person will get three turns to roll and three times to draw and add the coins).
6.Once the students are finished, they can share their data with the class.
7.The class can compare/contrast data.
8.The teacher should explain why counting money is important in real life.
Assessment:
The teacher can observe students while they are completing the activity to see which students are having trouble identifying or adding up the coins and which students are correctly completing the activity. The teacher can help if needed. The teacher can also collect the papers and check their work. | https://teachers.net/lessons/posts/3619.html |
I’ve recently purchased both Marvel Heroic Roleplaying and Savage Worlds. One thing those two games have in common is that they both have mechanics where different sizes of dice are rolled against one another. In Savage Worlds this is generally a single die of each size, but for Marvel it is pools of dice of mixed sizes being rolled. I wrote a Core Mechanics post on randomization last year, but I decided to write a follow-up specifically to look at the probabilities involved when dealing with mixed sets of dice.
How much better than a d4 is a d8?
When using mixed dice, the probabilities involved aren't as clear as they are when all of the rolls use the same sized die (or dice). When rolling one die against another, you can find the probability with the following formula: P(dA ≥ dB) = (A + max(A – B + 1, 1)) × (A + 1 – max(A – B + 1, 1)) ÷ (2 × A × B). So for example, a d8 will roll equal to or higher than a d4 about 81.3% of the time ((8 + 5) × (8 + 1 – 5) ÷ (2 × 4 × 8)). That formula isn't exactly something that most people can compute on the fly though, so here's a table with the results for the standard die sizes:
| P(row ≥ column) | 1d4 | 1d6 | 1d8 | 1d10 | 1d12 | 1d20 |
|---|---|---|---|---|---|---|
| 1d4 | 62.5% | 41.7% | 31.3% | 25.0% | 20.8% | 12.5% |
| 1d6 | 75.0% | 58.3% | 43.8% | 35.0% | 29.2% | 17.5% |
| 1d8 | 81.3% | 68.8% | 56.3% | 45.0% | 37.5% | 22.5% |
| 1d10 | 85.0% | 75.0% | 65.0% | 55.0% | 45.8% | 27.5% |
| 1d12 | 87.5% | 79.2% | 70.8% | 62.5% | 54.2% | 32.5% |
| 1d20 | 92.5% | 87.5% | 82.5% | 77.5% | 72.5% | 52.5% |
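Both the closed-form expression and the table can be sanity-checked by brute-force enumeration; a short sketch (not part of the original post):

from fractions import Fraction

def p_at_least(a_sides, b_sides):
    """Probability that a dA rolls equal to or higher than a dB, by enumeration."""
    wins = sum(1 for a in range(1, a_sides + 1)
                 for b in range(1, b_sides + 1) if a >= b)
    return Fraction(wins, a_sides * b_sides)

print(float(p_at_least(8, 4)))    # 0.8125, the 81.3% in the table
print(float(p_at_least(20, 20)))  # 0.525, the 52.5% in the table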
Marvel Heroic Dice Pools
For the system used in Marvel Heroic Roleplaying, you roll a large pool of dice and then take the highest two as your result. For example, you might roll 1d10, 1d8, 2d6 and end up with 6, 5, 4, and 2 as the die rolls which gives you a result of 11. In addition to your result, you also set aside one die not used in the sum as the effect die. The number showing on this die doesn’t matter, but instead you want to maximize the number of sides on the effect die. On top of that, dice that end up as 1s indicate complications and cannot be used in the result or as the effect die.
The probabilities involved are pretty complex, so rather than diving into a bunch of general computations, I’m going to break down some common choices that come up during play. I’ll be using Captain America in the examples since his data file is available as a free download from Margaret Weis Productions.
Let’s say that Captain America is stuck on his own and fighting against some Hydra soldiers with his shield. He can build his basic pool with Solo d6, Enhanced Strength d8, Weapon d8, and Combat Master d10, so let’s look at what that gives him as a baseline before we look at other options like distinctions and SFX. As an assumption to make things a little simpler, let’s assume that he wants to maximize his result rather than his effect die. That pool has a potential result range of 0 to 18 with the following probabilities:
| Result | Probability |
|---|---|
| 0 | 0.03% |
| 1 | 0.00% |
| 2 | 0.10% |
| 3 | 0.10% |
| 4 | 0.39% |
| 5 | 0.83% |
| 6 | 1.69% |
| 7 | 2.79% |
| 8 | 4.53% |
| 9 | 6.41% |
| 10 | 8.85% |
| 11 | 10.91% |
| 12 | 12.79% |
| 13 | 13.33% |
| 14 | 12.63% |
| 15 | 10.05% |
| 16 | 7.84% |
| 17 | 4.38% |
| 18 | 2.34% |
That data set has an average result of 12.35 and will most often end up with either a d8 effect die (51.6% of rolls) or a d10 effect die (31.0% of rolls). In addition, rolling that pool will result in an average of about 0.52 complications per roll with 42.6% of rolls having at least one complication.
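Those figures come from working through the full distribution, but a rough Monte Carlo of the same pool (d10 + 2d8 + d6, summing the two highest dice that are not showing 1) lands on essentially the same numbers. This is an illustration of the approach, not the author's original calculation:

import random

def roll_pool(pool=(10, 8, 8, 6), trials=200000):
    results, complications = [], 0
    for _ in range(trials):
        rolls = [random.randint(1, sides) for sides in pool]
        complications += rolls.count(1)                # every 1 is a complication
        usable = sorted(r for r in rolls if r != 1)    # 1s cannot be added to the result
        results.append(sum(usable[-2:]))               # best two usable dice (or fewer)
    print("average result:", sum(results) / trials)                   # about 12.3
    print("average complications per roll:", complications / trials)  # about 0.52

random.seed(1)
roll_pool()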
Distinctions: d8, d4, or Neither?
On each roll, you can use one of your character’s Distinctions either as an advantage or a disadvantage. If you use the distinction as an advantage, you can add a d8 to your dice pool. On the other hand, if you use it as a disadvantage, you add a d4 to your dice pool and gain a Plot Point. It’s obvious that a d8 increases the odds of a high result, makes a better effect die, and has less chance of adding a complication, but how much does it actually affect the probabilities?
Using the baseline example above, let’s say that Captain America uses his Sentinel of Liberty distinction to add an extra d8 to the pool. This bumps up his average result to 13.19, but it also increases the average number of complications to 0.64 with 49.8% of rolls now having at least one complication. What I feel is the more meaningful impact of this choice is that it boosts the chance of having a large effect die with 56.6% of rolls now getting a d8 and 39.6% ending up with d10s.
If instead Captain America had used his Man Out of Time distinction to gain a Plot Point and add a d4 to the pool, then his average result would instead be 12.43 with an average of 0.77 complications per roll and 56.9% of rolls having at least one complication. The average result barely budged since the d4 is unlikely to be one of the top two dice, but the chance of a complication is noticeably higher than either the baseline or using the distinction as an advantage.
SFX: Step Up or Double?
Another common choice for characters in Marvel Heroic Roleplaying is to use a SFX option to either step up the size of a die or add another die of the same size to the pool. Both options will increase the average roll result and, assuming the die is towards the high-end for the pool, increase the chance of a larger effect die. Adding dice will always increase the chances of getting a complication, while stepping up a die will reduce the chance of complications.
For example, Captain America could use his Last Ditch Effort SFX to either increase his Enhanced Strength to a d10 or add a second d8 to the pool. In this case, adding another d8 will have the same effect as using a distinction as an advantage (average result of 13.19, average of 0.64 complications, 49.8% of rolls with at least 1 complication). On the other hand, stepping up the die changes the average result to 13.13 with a lower average of 0.49 complications per roll and only 40.9% of rolls having at least one complication. Stepping up to a d10 also greatly increases the chance of a higher effect die with 24.5% of rolls having a d8 effect and 58.5% having a d10 effect.
A Quick Summary
When rolling two different sized dice against one another, each step higher that your die is than the other results in roughly a 5-10% higher chance of rolling equal to or higher than your opponent.
In addition, here are some things to keep in mind when building dice pools in Marvel Heroic Roleplaying or other Cortex+ games:
- Adding another die to the pool will always result in a higher chance of complications occurring.
- Adding a d4 for a distinction likely won’t affect your roll’s result much, but increases the risk of a complication more than any other die type.
- Adding a d8 for a distinction on an already large pool of 4+ dice doesn’t have a ton of impact on the average result, but can increase the chance of having a larger effect die.
- Stepping up a die can really increase the chance of having a larger effect die while also decreasing the chance of complications. | https://scottsgameroom.com/2012/03/13/core-mechanics-mixed-dice/?replytocom=494 |
Making use of one-to-one correspondence in counting objects; recognizing some quantities without having to count (i.e. subitizing), using a variety of tools or strategies; using, reading, and representing whole numbers to 10 in a variety of meaningful contexts.
Grade 1
Demonstrate, using concrete materials, the concept of one-to-one correspondence between number and objects when counting.
Context
Educator introduces the game to students, students complete in small groups with assistance from the educator.
Materials
- One die (large die would be preferable)
- One bingo dabber per student
- Worksheet (Appendix A – numbers and dots, numbers only, addition to 6, blank dots)
*Download lesson plan for appendices.
Summary
Children take turns rolling a die. The student that rolled then counts the number of dots on the die aloud for all the students in the group to hear. Once the number of dots on the die is determined, all the students find the number on the worksheet and stamp it with the bingo dabber. To differentiate the task, you can give students who are unfamiliar with numbers the sheet with numbers and dots. Once students become familiar with these numbers, you can increase the difficulty by using the sheets with only numbers.
Instructions
- Introduce the activity to the group. Show the students the die and the worksheet with the corresponding numbers. Explain the game to the students, clarifying that they will have to take turns rolling the die, then they will stamp that number on their sheet using a bingo dabber.
- Model the activity with one student. Ask the student to roll the die, then count the number of dots aloud. Ensure the student points to each dot as he/she counts. Then ask him/her to point to the corresponding number on the worksheet.
- Begin the game with the first student rolling the die and counting the number of dots aloud.
- Let each child find the corresponding number on their own worksheet. If the student does not immediately know the symbolic representation of the number, prompt them to count the number of dots on the sheet.
- Continue taking turns until all the dots are covered. If a student rolls a number that is already covered on their sheet, they pass the die to the next student.
- Introduce variations of the game depending on the level of difficulty the students are able to understand. If the dots are unnecessary, use the sheet with only numbers. Further extensions are mentioned below in “Extensions.” The game can also be more competitive by having students only stamp a dot when they roll, and the winner is the student who covers their sheet first.
Questions to Extend Student’s Thinking
- When a student has counted the number of dots, ask: How many dots are there on the die? Ask if the children can tell without counting – can they subitize?
- When a student has successfully counted the number of dots, ask: What would the number of dots be if I started counting from here [point to a different dot than where they started]?
- Once the student has completed their turn, ask: How did you get that answer? This will lead them to reflect on the strategy they used, or determine if they were able to subitize.
Look Fors
- Do children count each dot, without skipping or recounting any? Do students recognize that the final number that they count is the total number of dots on the die (i.e. the answer)? Do students get the same answer if they begin counting from a different dot? Can students determine the final number without counting? Can students link the number of dots to the symbolic representation of the number?
- One-to-one correspondence, cardinality, order irrelevance, subitizing, and the representation of whole numbers are key concepts in number sense and numeration. Students should begin to grasp these in Kindergarten.
Extensions
- Once students become familiar with the numbers 1 – 6, you can increase the difficulty by including a few dots with addition up to 6 (Appendix A). Include manipulatives to help students to add the numbers together.
- Students can also roll two dice, increasing their number knowledge to 12. You can use a worksheet with numbers from 1-12 or with addition to 12 (see Appendix A to enter your own numbers).
Every now and then I encounter a book that changes the way I think about the world. Sometimes a book has one insight and sometimes it has several. In the case of Daniel Kahneman's Thinking, Fast and Slow, I've lost count of the ways it has forced me to re-evaluate my perceptions. This is partly because the book covers a wide terrain of psychology and partly because it presents so many interesting observations. One of my many takeaways from the book was how we don't think intuitively about statistics. Let me restate that as a paraphrase: while our mind is great at seeing patterns in nature (even, sometimes, when they are not there), it is not great at seeing the patterns that underlie probabilities and statistics.
This is an important observation for game design and game play. We've all seen the player who has rolled several low scores on to-hit rolls in a D&D session who says "the odds are getting better of me rolling a 20," or the player who has rolled 6 "aces" in a row in Savage Worlds who picks up the dice and says "the odds of me acing again are 1/(some huge number)." In both cases, the individual is wrong. While it is true that, given a sufficiently large number of rolls, a player's results will tend toward the mean, prior die rolls have no influence on future die rolls. As an extension of that, the player who has already rolled 6 "aces" has exactly a 1/6 chance (assuming a d6 is being rolled) of acing on the 7th roll. The prior rolls have no influence over the next roll. The answer would be different if the person had stated, before rolling at all, that their chances of acing 7 times in a row were 1/(some huge number), but it isn't true after the person has successfully aced 6 rolls and is now rolling the seventh.
When I was a 21/Craps dealer as an undergrad in Nevada, I saw how this kind of flawed logic could have real financial consequences.
"Wow!" The player would say, "there have been a lot of 7's rolled in a row, so it's time to 'buy' the 4 at a 5% vig." Their underlying assumption is that prior rolls affect future outcomes in die rolls. They don't.
Interestingly, when players are in situations where prior decisions DO affect future outcomes, they are just as prone to intuitively come to the wrong conclusion. A great example of this phenomenon is the Monty Hall problem, where a player is given three choices, shown the result of one of the selections they did not make, and then asked to either switch or keep their original choice. The correct answer to this question - because prior choices DO affect outcomes in this case - is counterintuitive. I'll let the good folks at Khan Academy explain why.
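For anyone who prefers to convince themselves numerically before (or after) watching the explanation, a quick simulation makes the counterintuitive answer hard to argue with. This is a bare-bones Python sketch, with the host always opening a non-prize, non-chosen door:

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate for the always-switch or always-stay strategy."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)      # door hiding the prize
        choice = random.randrange(3)     # player's first pick
        # Host opens a door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print("stay  :", monty_hall(switch=False))   # ~0.33
print("switch:", monty_hall(switch=True))    # ~0.67
```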
Think about how this dilemma will affect game play in hidden information games that you design and play. And let this be a reminder that understanding how probabilities work can make you a better player, designer, or game master.
Playing the GUTS+ System requires only the occasional D6 roll and a bit of imagination.
The Basics
Role playing is a bit like collaborating with a group to tell a story with special rules in place that prevent the story from going too far off the rails. The GM sets up the world and the situation, and it’s up to you and your fellow players to work through that situation together using your characters.
Oftentimes, the GM will start a play session by describing a scene and some kind of circumstance that would bring your characters together for an adventure. It’s up to you as the players to ask questions both in character (IC) and out of character (OOC) and fill in the blanks so you can be as successful as possible when moving forward.
When your character asks questions and tries to perform certain actions, the GM will sometimes ask you to make a success check. Depending on what you are trying to roll success for, your qualities may influence the outcome, raising the number of dice you roll based on your quality’s level—the GM should tell you which quality to use if you don’t know. You may occasionally roll for success against a non-player character (NPC). The character being rolled against is the “defender.”
As you continue playing, your characters will grow stronger, and the game will get harder. Just keep moving forward, and you’ll win eventually! Or maybe your game isn’t about winning and the goal is to keep playing and exploring the game world—it’s entirely up to you and your GM!
Handling Conflict
Conflict is the cornerstone of plot. The game you are playing will likely have a lot of different conflicts, often happening at the same time, as well as a lot of different kinds of conflicts. Whether you are trying to find someone’s lost cat or trying to work against a dictator, there are many different ways to go about handling conflict.
Conflicts are handled using a combination of die rolls and logical appeals made to the GM. When an appeal is not accepted by the GM, you must roll a number of D6s to determine your success. See the Success Scale to understand how success is determined. When you begin, you will only be rolling 1 D6, but as your character grows stronger, you will need to roll more to keep up with the rising difficulty level.
Many role-playing games make heavy use of combat to overcome conflicts, but it might not always be the best option. Try different things to see what the best outcome might be. The GM should be flexible enough to handle whatever you throw at them and respond accordingly. If you come across someone acting strangely, instead of trying to beat the information out of them, maybe try to ask them what’s wrong.
Success Scale
How successful your character is at performing a certain action is determined by what numbers you roll on D6 dice. Typically, the GM will have you roll using a particular quality your character has, which means that you roll as many D6’s as that quality’s level (for example, if your quality is level 4, you roll 4 dice). The more dice you roll, the more chances you have at both successes and failures turning up in the same roll.
There are 2 types of success rolls: checks and contests.
Check
Check rolls are what determine your success at performing actions that do not involve other (unwilling) living creatures. Each individual die you roll is measured by this scale, which allows the GM to interpret the results based on what was rolled the most.
The scale for success is:
| Die Value | Result |
| --- | --- |
| 1 | Negative impact |
| 2 | Failure |
| 3 | Near success (GM decides) |
| 4–5 | Full success |
| 6 | Positive impact |
For example, if the Quality you are rolling to check is level 3, you would roll 3D6. If your die values were 4, 6, and 1, that equates to 1 Full Success, 1 Positive Impact, and 1 Negative Impact, which the GM could interpret as “Well, the Positive and Negative Impacts cancel out, which leaves a Full Success!” Or if you rolled 5, 2, and 4, then the result could clearly be a success because more successful dice were rolled than failures.
One last example in how these rolls could be interpreted is if your quality is level 2 and you roll 2D6 resulting in 1 and 5, then the GM could interpret that success as somewhere in the middle, i.e. a Near Success. The outcome is determined by the GM based on the current scenario and action being attempted, so if you disagree with their interpretation of the roll, speak up!
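For the curious, here is one way that tallying could be scripted. This is an unofficial Python sketch of the Check scale above, purely to show how the per-die results stack up for the GM to interpret:

```python
import random
from collections import Counter

def check_result(value):
    """Map a single d6 face to its result on the Check scale."""
    if value == 1:
        return "negative impact"
    if value == 2:
        return "failure"
    if value == 3:
        return "near success"
    if value in (4, 5):
        return "full success"
    return "positive impact"   # value == 6

def check_roll(quality_level):
    """Roll one d6 per level of the quality and tally the outcomes."""
    rolls = [random.randint(1, 6) for _ in range(quality_level)]
    return rolls, Counter(check_result(r) for r in rolls)

print(check_roll(3))
# e.g. ([4, 6, 1], Counter({'full success': 1, 'positive impact': 1, 'negative impact': 1}))
```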
Contest
If you need to perform an action against an unwilling participant, you will need to make a contest roll against them. Both you and your opponent roll the relevant number of dice, and you subtract their roll’s value from yours (i.e. the defender subtracts their roll from the aggressor’s). The success scale is as follows:
| Difference | Result |
| --- | --- |
| Less than -2 | Negative impact |
| -1 or -2 | Failure |
| 0 | Near success (GM decides) |
| 1 to 3 | Full success |
| More than 3 | Positive impact |
For example, if you are acting against an opponent and you roll a 6 and your opponent rolls 4, subtract your opponent’s roll from yours, which gives you 2. Your action would have Full Success, allowing the GM to progress the situation appropriately. If, alternatively, your opponent is acting against you, but they roll 3 while you roll 6, subtract your roll from your opponent’s, which gives you -3. Your opponent would have a Negative Impact failure, which would cause the GM to make something bad happen to your opponent instead.
Note: if you’re not proficient at doing math in your head, you can make the math easier by matching up the defender’s dice to the aggressor’s dice to see how much is left over. For example, the aggressor rolled a 3 and a 2 but the defender rolled a 5 and a 1—the 5 die can cover the values of both the 3 and the 2, leaving 1, resulting in a -1 Failure for the aggressor’s Contest roll.
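Along the same lines, here is an unofficial sketch of the Contest arithmetic; the dice-matching shortcut above is just a mental trick for doing the same subtraction:

```python
import random

def contest_result(difference):
    """Translate an (aggressor minus defender) difference into the Contest scale."""
    if difference < -2:
        return "negative impact"
    if difference < 0:
        return "failure"
    if difference == 0:
        return "near success (GM decides)"
    if difference <= 3:
        return "full success"
    return "positive impact"

def contest(aggressor_dice, defender_dice):
    aggressor = sum(random.randint(1, 6) for _ in range(aggressor_dice))
    defender = sum(random.randint(1, 6) for _ in range(defender_dice))
    return aggressor, defender, contest_result(aggressor - defender)

print(contest(2, 2))   # e.g. (7, 5, 'full success')
```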
“Near Success”
Whenever you roll a 3 check or a 0 contest, the result is at the discretion of the GM. They may take the opportunity to advance the situation somehow or they may decide that you need to roll better next time based on the context. Feel free to plead your case with the GM, though!
Using Qualities
When rolling to determine success, the GM will tell you the base GUTS quality to use for your roll, and you can appeal to the GM to pair another relevant quality with the roll depending on the action you are taking. If the quality is relevant, add the other quality’s points to the base quality and roll as many dice as that allows, up to a maximum of 10 dice.
The GM will also give you advantages or disadvantages based on what qualities are used in certain situations. For example, an action like pushing a boulder would use a Gumption roll, but without a quality like “weight lifting” or something similar, the GM would give you a certain amount of disadvantage to subtract from your roll. Using your “weight lifting” quality would remove any disadvantages.
Rolling Doubles
If any 2 dice are rolled with the same number, you may roll an additional die for each double rolled. For Check rolls, distribute the total value of the extra dice among the dice you rolled to increase their value and bring it closer to a successful result. For Contest rolls, simply add the value to your total roll to increase your chances at success.
Rolled dice that have been used as a double set cannot be used to create another double, i.e. if you roll three 2’s, you only get 1 double set from that group.
Assisting
If another character is trying to do something and your character is near them, you may assist them with what they are trying to do by making a Check roll using the same or equivalent qualities that they are using, but with half of the dice you would normally roll for that Check, rounded up. If any of the dice you roll creates a double with one of their dice, they can use your die to roll a bonus die from that double and modify their roll.
For example, if one character is trying to repair a broken wire using their Utility quality, you can assist them by rolling half of your Utility quality. Let’s say their Utility is level 3 and they roll 2, 1, and 4, and you want to assist them. Your Utility quality is also 3, so you roll 2 dice (half of 3 rounded up), which land on 1 and 3. The 1 you rolled is paired with the 1 they rolled, allowing them to roll a bonus die to allow them to improve their overall roll.
Voluntary Failure
If you prefer, whether for story/character purposes or otherwise, you can choose to skip a roll and opt to receive a failure. The GM will play out a failure scenario, and you can take one Experience Point for each die you would have rolled. A Voluntary Failure cannot result in a Negative Impact failure, so you do not gain learning experiences, but it is still a good way to get experience when you know a success in a certain situation would not really make sense for your character.
Combat
If you do choose to handle your conflicts with violence, combat is a fairly straightforward (and dangerous) affair. Characters take turns choosing who and how they want to attack in the order of highest to lowest Gumption roll. If there’s a tie in this roll, the GM will decide how the turn order will fall.
When you attack someone or something in the game, you need to declare what you are trying to do and then make a contest roll against your opponent. Depending on what you are trying to do, you will be using certain qualities to determine how many dice to roll and how to augment your roll. The GM will tell you the result of your attack unless you roll well enough to make the decision yourself. Once your attack is done, the next person in the list makes their move.
Note that you can also choose to take any other action aside from attacking if you wish. Maybe you can defuse the situation instead! Or perhaps you have the ability to heal a partner’s injuries—that’s also something you can do on your turn! Injuries you take can lead to serious consequences later down the line, including losing limbs or even dying!
Timing
Time spent differs from turn to turn depending on what action is done, but in general, most turns take about 5 seconds to complete in-game. Also, you can specify when your turn happens in relation to other turns that have already been taken, for example “at the same time as” another character is attacking one monster, you attack another, or right after a player knocked an enemy’s shield away, you take advantage of the opening in their defense.
Holding Your Turn
Sometimes, you will want to wait to perform an action until something else happens, for example, to perform a group attack or a counterattack or some other complex maneuver. You may hold your turn at any time, but if you do not use your held turn before your next turn comes around, you will have lost that turn.
Weapons
In order to use a weapon, your character must be holding it in their Hand (i.e. not in a Bag), and they must have a place to put anything that they might already be holding instead of their weapon. Beyond this, the effect of the weapon is governed by logic and the Success Scale. For example, using bare fists may not be very effective against metal unless you’ve got a really good reason as to why it actually would be, but a knife might be able to cut through some wood.
Taking Damage
When you are attacked and take damage, your character will receive injuries depending on where, how, and how badly they were hit. See Health to learn about the health and injuries systems in GUTS+.
Completing Combat
Combat goes until the GM declares that the opponent has been defeated. This could be by knocking them out or making them run or otherwise convincing them to stop fighting. Alternatively, the party can always attempt to run from a dangerous situation; depending on the opponent’s nature, they just might let you go.
Health
Most role-playing game systems utilize a “hit points” (HP) system, but in GUTS+, a character’s health is determined by the number of injuries sustained to various parts of their body and the strain placed upon their mind and body.
Injuries
The following chart shows the default areas of the body that can be injured and a baseline idea of how many times an injury can be sustained before it becomes fatal:
Whenever you take an injury, mark it on your character sheet. If a part of the body sustains the maximum number of Minor injuries that it can have, the next Minor injury instantly becomes a Major injury, and if the maximum number of Major injuries is taken, use of that body part becomes limited and will affect what your character can do. If you receive any Major injuries beyond the maximum number, you lose use of that body part completely. Losing use of either of your character’s Hands reduces your inventory space, and “losing the use of” your character’s Head means your character immediately becomes unconscious. Additionally, if the maximum number of injuries for a body part is surpassed in certain dangerous ways like being cut with a sword, your character can lose that body part permanently!
If your character receives more than 6 Major (non-Head) injuries (i.e. 7 or more), they will gain the “unconscious” status effect. If they are not recovered before enough time passes, they will die, so be careful!
Strain
In addition to Injuries, performing a taxing action for a sustained period of time can add Strain to your character. If your character receives more than 10 points of Strain before they are able to remove any Strain, they can either receive injuries or be knocked unconscious from exertion.
Any additional strain received from performing actions inflicts a Minor injury to the body part that is performing it. If you are using magic (Essence), strain received inflicts injuries to the Head.
Status Effects
Beyond Injuries and Strain, your character can also be afflicted by a variety of status effects. These can be mental things like “afraid,” “discouraged,” and “embarrassed” or physical things not representable by injuries like “blinded”, “deafened”, and “nauseated,” which affect how you play the game in different ways. Some will affect the values that you roll while others will affect what your character is able to do in the game world. The exact way they affect your character mechanically is determined by your GM, but the most important factor to keep in mind is that your character should behave appropriately when burdened with these effects. If your character is “disturbed,” keep that in mind when you interact with other characters or react to the game world.
Status effects can stack on top of each other as well as intensify. Some GMs will give you a new status effect to replace an intensified one (e.g. “afraid” to “terrified” to “panicked”) and others will simply give you numbered levels (i.e. “afraid x2”). Keep in mind what combinations of effects might do to your character and play it out as it might happen in real life.
Optionally, you can voluntarily apply permanent status effects to your character if you want to play a character with a disability. Work with your GM to figure out what exactly this might mean for how you play the game and ignore regular recovery times.
Note: there is no master list of status effects beyond the table of suggestions in the Game Master’s Handbook, so if you are unsure what the GM means by a particular status effect or if they forget to tell you what the status does to your character, be sure to ask for clarification!
Recovery
Recovering from an injury is a relatively slow process if not deliberately focused on. Untreated, a minor injury will heal at the rate of 1 injury per in-game day—you choose what injury you wish to be healed—while a major injury will heal at the rate of 1 injury per 5 in-game days.
If you wish to focus on allowing your injuries to heal, resting will allow your injuries to heal twice as fast: 1 minor injury per 1⁄2 day and 1 major injury per 2 1⁄2 days. If you or someone in your group has any sort of medical qualities, then they can help treat your wound to speed up recovery even more. Some treatments will be instantaneous while others might speed it up to just a couple of hours for recovery. Eating food can also help heal injuries.
Strain is slightly different: once the source of strain is removed, it begins going down immediately at a rate of approximately 1 point per minute. Depending on the type of strain and how soon you resume the stressful task, your strain might be removed all at once or after every couple of turns if in Combat.
Different still, status effects are removed either when the GM says so or when the cause of the effect has been removed for a reasonable amount of time. If the GM doesn’t lift the status after a reasonable time, they might have forgotten, so just ask if your character still has it!
The Passage of Time
Time is only as important as the GM makes it. In most cases, time passes as it would realistically. For example, if you were to walk from one town to another for several miles, it might take several hours. To make the game more engaging, the GM may make day and night and the things that can happen during each more significant, but in the end, it’s up to your specific group and how they prefer to play.
Supply in the game is rather abstract. When a unit is activated, it must trace a line to one of two points on the map without going through an enemy infantry or garrison unit for it to be in supply. Units that are out of supply lose one movement point, and may not receive replacement points. I think that cavalry should be able to block supply also. In actual fact raiding did not have a great impact in the eastern theater of war, but it did with the greater area of the western campaigns. When I am comfortable enough with the game system, I will look into supply a bit more.
The game setup for play
The first battle has Grant and three corps (II,V,VI) attacking Lee in Chancellorsville with the Confederate III corps and a trench marker.
The defender rolls six dice; the number of dice a unit rolls in combat is printed in the upper right hand corner of the unit, in this case a five, plus one more die for the trench marker. Hits are achieved on a roll of six on each separate die. The three Union corps roll four dice each (five normally, but one is subtracted for attacking across a river). The Confederates roll one six for a hit on the Union II corps, while the Union rolls two sixes for two hits on the Confederate III corps.
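Since hits only land on sixes, the expected damage is easy to work out. A quick sketch: the dice counts below come from the example battle, and the binomial arithmetic is standard probability rather than anything from the rulebook:

```python
from math import comb

def hit_probabilities(dice, faces=6):
    """P(exactly k hits) when each die hits only on a roll of six."""
    p = 1 / faces
    return {k: comb(dice, k) * p**k * (1 - p)**(dice - k) for k in range(dice + 1)}

# Defender rolls 6 dice (5 for the corps plus 1 for the trench marker);
# each attacking Union corps rolls 4 (5, minus 1 for attacking across the river).
for n in (6, 4):
    probs = hit_probabilities(n)
    expected = sum(k * p for k, p in probs.items())
    print(f"{n} dice: expected hits {expected:.2f}, "
          f"chance of at least one hit {1 - probs[0]:.0%}")
```

So the dug-in defender averages about one hit per round, while each attacking corps averages about two thirds of a hit, which lines up with the example round (two Union hits from twelve attacking dice, one Confederate hit from six).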
After all hits are applied, the units involved must pass a morale check. You pass or fail the morale check by rolling one die per unit. If you roll a higher number than the number in the star in the upper left of the unit, you fail the check. The attacker checks his units first. In this case both sides' units all pass the checks. You can either voluntarily retreat from battle or be forced to. In this case, neither side wants to back down, so we begin battle round two. The players are then able to roll two dice to see if any reinforcements are available. The reinforcements can come from one movement point away. Two dice are rolled, and if the total equals or exceeds the commander's rating, the unit can reinforce the battle. The Union player has no units near, and unfortunately for the Confederates both fail their reinforcement check rolls. The next battle round has the Confederates rolling two hits and the Union player none.
This is the situation after the Confederate move on turn one. The Confederate 1st corps has beaten a retreat to be near Lee and the other two corps. If Grant attacks the isolated 1st corps, Lee's reaction move can bring the other two corps with him to the rescue. The only problem is that, because of the long movement of the 1st corps and the retreat from battle of the other two, none of the Confederate corps are now in trenches.
To me, the game rules make it feel like you are in Grant's or Lee's shoes. You want to do so much each turn, but are really hobbled by the number of troops you can move and the actions you can take each turn. As the Union, you really want to get your corps fighting Lee's right away. On the other hand, you also want your other forces to start to put the squeeze on Richmond. As Lee you have to really pray your opponent makes a mistake that you can capitalize on. This game is a player's game. It is one that will sit on your table for a while with you playing game after game to try different strategies, and that is only the 1864 Campaign. After you are done with that campaign, you still have the 1862 one to try out.
Problem 1) Prepare a Java program to implement a dice game. You will need to create two classes for this assignment:
class Die is used to represent a single die (the singular form of dice). This class should have the following attributes:
• instance variables:
• sides: an int value representing the number of sides your die has (most games use 6-sided dice, but there are plenty of variations)
• value: an int value representing the current roll of the die; would be a number between 1 and sides
• a Random object
• instance methods:
• two constructors:
• the default constructor must set the value of sides to 6, initialize the Random object and roll the die to get an initial value for variable value
• another constructor which takes a single argument representing the number of sides you want your die to have, sets sides to this value, and does all the same operations the other constructor does
• one mutator method called roll, which rolls the die (generates a random number between 1 and sides, which is then assigned to value)
• one accessor method called getRoll, which returns value
• a main method for testing that the other methods work
Your second class represents the game; the name of the class would be up to you. This class would also contain a main method, which is the driver for the game. You might want to include other methods, depending on how complicated your game is. You should meet the following requirements:
• Your game should use at least two dice (2 Die objects, in other words)
• Your game should have at least two players: the user and the computer
• You should have a way to keep score and to inform the user of his/her score
• The computer must play to win, if that is possible (if not a pure game of chance)
• The user should be given the option to play again once a game is completed
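To make the spec concrete before writing the Java version, here is a rough Python prototype of the Die behaviour described above. The names and the tiny demo at the bottom are illustrative only; the assignment itself requires Java, plus your own game class on top:

```python
import random

class Die:
    """Prototype of the Die class: a number of sides and a current rolled value."""

    def __init__(self, sides=6):
        self.sides = sides
        self.value = None
        self.roll()                  # give the die an initial value

    def roll(self):                  # mutator: re-roll the die
        self.value = random.randint(1, self.sides)

    def get_roll(self):              # accessor
        return self.value

if __name__ == "__main__":
    d1, d2 = Die(), Die()
    d1.roll()
    d2.roll()
    print("You rolled", d1.get_roll(), "and", d2.get_roll())
```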
Seaford Harbor Elementary School second-graders recently practiced counting money during a hands-on math lesson.
Teacher Krista Clark’s students played a game called Race to a Dollar, according to a school news release. The students paired off and took turns rolling a die. The die had different amounts on each side. As students rolled the die, they added money to their piggy banks to compete to determine who could earn a dollar first.
Students then competed to see who could earn the most money during time trials and converted their coins into larger denominations.
Bayes’ Theorem sits at the heart of a few well known machine learning algorithms. So a fundamental understanding of the theorem is in order.
Let’s consider the following idea (the following stats are completely made up by the way). Imagine 5% of kids are dyslexic. Now imagine the test administered for dyslexia at a local school is known to give a false positive 10% of the time. What is the probability a kid has dyslexia given the fact they tested positive?
What we want to know is P(Dyslexic | Positive Test).
So let me put this into English.
First, let’s figure out our probabilities. A tree chart is a great way to start.
Look at the chart below. It branches first between dyslexic and not dyslexic. Then each branch has positive and negative probabilities branching from there.
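To put numbers on the example: the prevalence (5%) and the false-positive rate (10%) are given above, but the test's sensitivity is not, so the short Python sketch below assumes a 90% true-positive rate purely for illustration:

```python
# Prevalence and false-positive rate are from the text above;
# the sensitivity (true-positive rate) is ASSUMED at 0.90 for illustration only.
p_dyslexic = 0.05
p_pos_given_dyslexic = 0.90
p_pos_given_not_dyslexic = 0.10

p_positive = (p_pos_given_dyslexic * p_dyslexic
              + p_pos_given_not_dyslexic * (1 - p_dyslexic))
p_dyslexic_given_pos = p_pos_given_dyslexic * p_dyslexic / p_positive
print(round(p_dyslexic_given_pos, 3))   # ~0.321 with these numbers
```

Even with a fairly accurate test, a positive result here only means roughly a one-in-three chance the kid actually is dyslexic, because the condition itself is rare. That is exactly the kind of result our intuition tends to get wrong.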
Another – perhaps more real world use for Bayes’ Theorem is the SPAM filter. Check it out below. See if you can figure your way through it on your own.
Many popular machine learning algorithms are based on probability. If you are a bit shaky on your probability, don’t worry, this quick primer will get you up to speed.
Think about a coin flip. There are 2 possibilities you could have (heads or tails). So if you wanted to know the probability of getting heads in any particular flip, it would be 1/2 (desired outcome/all possible outcomes).
The probability of rolling a 1 is 1/6.
The complement of a probability can also be referred to as the probability of an event NOT happening. The probability of not rolling a 1 on a six sided die = 5/6.
Independent probability simply means determining the probability of 2 or more events when the outcome of one event has no effect on the other.
Now we are asking if event A or B occurred.
Imagine drawing an Ace or a Red Card. We want to make sure to factor in all the elements, but we need to account for double counting.
Now we are going to work with dice. One six sided die and one 4 sided die. The diagram below shows all 24 possible combinations.
Now conditional probability is the probability of something occurring given that some prior event has occurred. Look at the chart above; let's consider A = rolling an even number on the six sided die (3/6) and B = rolling an even number on the 4 sided die (2/4). So P(A|B) (read: the probability of A given B) = P(A∩B)/P(B). Let's look at the chart to help us see this.
Now when figuring P(B|A) (rolling an even number on the four sided die given that you have already rolled an even number on the six sided die), we are no longer looking at all 24 combinations; we are now only looking at the combinations where the six sided die (A) is even (the green columns). So as you can see, of the 12 options where A is even, 6 have an even number on the 4 sided die, giving P(B|A) = P(A∩B)/P(A) = 6/12 = 1/2.
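If you would rather check that by brute force than by reading the chart, a short enumeration gives the same answer:

```python
from itertools import product
from fractions import Fraction

# All 24 combinations of a six-sided die (A) and a four-sided die (B).
outcomes = list(product(range(1, 7), range(1, 5)))

a_even = [(a, b) for a, b in outcomes if a % 2 == 0]       # even on the d6
both_even = [(a, b) for a, b in a_even if b % 2 == 0]      # even on both dice

print(Fraction(len(both_even), len(a_even)))   # P(B|A) = 6/12 = 1/2
```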
Probability and odds are constantly being misused. Even in respected publications you will see sentences such as: “The odds of a Red Sox win tonight is 60%.” or “His probability is 2 to 1.” While in the lexicon of American English these words seem to have taken on interchanging meaning, when working with statisticians or data scientists, you are going to want to get your vocabulary straight.
Probability is a number between 0 and 1, often represented as a fraction or percentage. The probability is determined by dividing the number of positive outcomes by the total number of possible outcomes. So if you have 4 doors and 1 has a prize behind it, your probability of picking the right door is 1/4 or 0.25 or 25%.
Note, do not let the term positive outcome confuse you. It is not a qualifier of good vs bad or happy vs sad. It simply means the result is what you are looking for based on the framing of your question. If I were to state that out of every 6 patients who opt for this new surgery 2 die – 2 would be the “positive outcome” in my equation (2/6 or approx 33%) even though dying is far from a “positive outcome”.
Odds, on the other hand, are a ratio. The odds of rolling a 4 on a six sided die are 1:5 (read 1 to 5). The odds ratio works like this: positive outcomes : negative outcomes. So the odds of rolling an even number on a six sided die are 3:3 (or simplified to 1:1).
Now the probability of rolling an even number on a six sided die is 3/6 or 1/2. So keep that in mind, odds of 1:2 is actually a probability of 1/3 not 1/2.
Working with a standard deck of playing cards (52 cards).
The Central Limit Theorem is one of the core principles of probability and statistics. So much so that a good portion of inferential statistical testing is built around it. What the Central Limit Theorem states is that, given a data set – let's say of 100 elements – if I were to take a random sample of 10 data points from it, take the average (arithmetic mean) of this sample, and plot the result on a histogram, then, given enough samples, my histogram would approach what is known as a normal bell curve.
You will have what looks like a normal distribution bell curve when you are done.
Okay, I have bell curve, who cares?
The normal distribution (or Gaussian distribution – named after the mathematician Carl Gauss) is an amazing statistical tool. This is the powerhouse behind inferential statistics.
The Central Limit Theorem tells me that (under certain circumstances), no matter what my population distribution looks like, if I take enough means of sample sets, the distribution of those sample means will approach a normal bell curve.
Once I have a normal bell curve, I now know something very powerful.
This is known as the 68-95-99.7 rule: I know that 68% of my sample is going to be within one standard deviation of the mean, 95% will be within 2 standard deviations, and 99.7% within 3.
But reading this graph, I can see that 68% of men are between 65 and 70 inches tall. While less than 0.15% of men are shorter than 55 inches or taller than 80 inches.
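If you would rather see those percentages fall out of a simulation than take them on faith, here is a minimal sketch; the uniform population and the sample size of 10 are arbitrary choices for illustration, not part of the theorem:

```python
import random
import statistics

# 10,000 sample means, each the mean of 10 draws from a uniform population.
means = [statistics.mean(random.uniform(0, 100) for _ in range(10))
         for _ in range(10_000)]
mu, sigma = statistics.mean(means), statistics.stdev(means)

for k in (1, 2, 3):
    inside = sum(abs(m - mu) <= k * sigma for m in means) / len(means)
    print(f"within {k} standard deviation(s): {inside:.3f}")   # roughly 0.68, 0.95, 0.997
```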
As we move into statistical testing like Linear Regression, you will see that we focus on a p value. And generally, we want to keep that p value under 0.05. The purple box below shows a p value of 0.05 – with 0.025 on either side of the curve. A finding with a p value that low basically states that there is only a 5% chance that the results of whatever test you are running are a result of random chance. In other words, your results are repeatable at the 95% level and your test demonstrates statistical significance.
In the spirit of total transparency, this lesson is a stepping stone towards explaining the Central Limit Theorem. While I promise not to bog this website down with too much math, a basic understanding of this very important principle of probability is an absolute need.
To understand the Central Limit Theorem, first you need to be familiar with the concept of Frequency Distribution.
So random.random_integers(10, size =10) would produce a list of 10 numbers between 1 and 10.
Now, since I am talking about a Frequency Distribution, I’d bet you could infer that I am concerned with Frequency. And you would be right. Looking at the data above, this is what I have found.
I create a table of the integers 1 – 5 and I then count the number of times (frequency) each number appears in my list above.
Using my Frequency table above, I can easily make a bar graph commonly known as a histogram. However, since this is a Python lesson as well as a Probability lesson, let's use matplotlib to build this.
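A minimal sketch of what that code might look like, assuming numpy and matplotlib and using values from 1 to 5 to match the frequency table above:

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)                          # only so the example is repeatable
rolls = np.random.randint(1, 6, size=10)   # 10 random integers between 1 and 5

plt.hist(rolls, bins=range(1, 7), align="left", edgecolor="black")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.title("Frequency Distribution")
plt.show()
```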
The syntax should be pretty self explanatory if you have viewed my earlier Python graphing lessons.
by Simon Tatham
Some friends of mine play the board game Settlers Of Catan. In this game, each player rolls two ordinary dice on their turn, and the number which comes up determines what resources people get for the turn. When a seven is rolled, something special happens: no resources are generated, and instead a robber moves on the board.
After playing this game for a while, my friends felt that although the game would get a bit dull without any sevens, they were rather unhelpful in the initial phase of the game: when everybody is trying to collect lots of resources to get started with, rolling a seven slows everyone down. So we adopted a "house rule" which said that during the first two full rounds, if anyone rolled a seven, they should immediately re-roll until they got something else. After a couple of rounds, once the game had got going, sevens became a permitted dice roll again.
Although I approve of the aim (to make the game get started more quickly), I didn't much like the mechanism; re-rolling a pair of dice until you get a result you like seemed ugly. So I set out to design a special pair of dice which would produce the numbers 2 to 12 with the same relative probabilities as the ordinary 2d6, but which never rolled 7.
This sounds pretty unlikely, but it's just about possible. In this page I explain how it's done.
An ordinary pair of dice can roll any number from 2 to 12, but not all with the same probability. The easiest way to see this is to label the individual dice to distinguish them (I'm going to call them R and C, for "row" and "column"), and draw a table showing the possible outcomes:
So out of the 36 possible combinations that can come up, only one results in a 2 (both dice have to show 1), only one results in a 12 (it requires two sixes), but a seven can come up in six different ways and is therefore six times more probable.
The idea I hit on was to reorganise this grid. With six entries reading 7, I thought it might be an interesting experiment to put them all in the same column:
That leaves a 5-by-6 rectangle. Interestingly, we have to place 5 instances of the number 6 in the remaining space, and also 5 instances of the number 8. I thought those would fit nicely across two rows of this 5-by-6 rectangle:
This leaves a rectangle four spaces high, and it just happens that I have four 9s and four 5s to place, so let's place those in two of the columns:
And so it goes on. In two rows of the remaining 3-by-4 space we place the three 4s and the three 10s; in two columns of the remaining 3-by-2 we place the two 3s and the two 11s; and in the two remaining single spaces we place the single 2 and the single 12.
This reorganised grid contains the same set of numbers as the grid we originally started with, and each number appears exactly as many times in the new grid as in the original one. So if you roll two (distinguishable) ordinary dice, and look up the results in this table (for example, if die R rolled 3 and die C rolled 5, you'd look up the overall result in row 3 column 5 and find it was 4), then you will find that this technique gives you exactly the same distribution of numbers which you'd expect from ordinary dice.
Of course, having to use a look-up table every time you roll your dice is a bit silly. Better to re-label the dice to make the table unnecessary. And it turns out this is fairly easy:
So after we relabel the dice, the outcome grid now looks like this:
Now all we need is a way to remember which numbers take priority over which other numbers. The rule is: 7 beats 6 and 8, which in turn beat 5 and 9, which beat 4 and 10, which beat 3 and 11, which beat 2 and 12, and everything beats the blank. So we add dots to the faces: six dots on the 7, five on the 6 and the 8, four on the 5 and the 9, three on the 4 and the 10, two on the 3 and the 11, one on the 2 and the 12, and nothing on the blank face.
So our dice now look like this:
And the procedure is: roll both dice, and choose whichever of the two uppermost faces shows more dots. (No draw is possible, since any two faces with the same number of dots appear on the same die.)
So, after all this work, I've constructed an alternative way of labelling a pair of cubical dice, which gives exactly the same probability distribution as the normal approach of adding up the two numbers rolled. How has this helped me to construct dice which don't roll sevens?
The answer is that 7 is now a property of one of the dice, rather than a property of the combination of the two. All I have to do to convert these dice into a non-7-rolling pair is to remove the 7 face from the C die, turning it into a five-sided die with faces 3, 5, 9, 11 and blank. This modified pair of dice has the following outcome table:
So it will roll everything other than 7 with the correct relative probabilities, but it can never roll a 7 itself, because there isn't a 7 face anywhere to be seen. Bingo!
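As a sanity check, the whole construction can be enumerated in a few lines. The face labels below are inferred from the construction described above (the relabelled faces themselves only appear in the figures): the even die carries 2, 4, 6, 8, 10 and 12, the odd die carries blank, 3, 5, 7, 9 and 11, and the dot counts follow the priority rule:

```python
from collections import Counter
from itertools import product

even_die = [2, 4, 6, 8, 10, 12]
odd_die = [0, 3, 5, 7, 9, 11]            # 0 stands for the blank face
dots = {0: 0, 2: 1, 12: 1, 3: 2, 11: 2, 4: 3, 10: 3, 5: 4, 9: 4, 6: 5, 8: 5, 7: 6}

def outcome(a, b):
    return a if dots[a] > dots[b] else b   # the face with more dots wins; ties cannot happen

standard = Counter(a + b for a, b in product(range(1, 7), repeat=2))
relabelled = Counter(outcome(a, b) for a, b in product(even_die, odd_die))
no_seven = Counter(outcome(a, b)
                   for a, b in product(even_die, [f for f in odd_die if f != 7]))

print(standard == relabelled)    # True: identical distribution to ordinary 2d6
print(sorted(no_seven.items()))  # 2..12 with no 7, same relative weights as before
```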
Actually making a d5 would be difficult. Instead, I bought a standard d10 (two five-sided pyramids back to back), and painted pairs of faces the same.
The above construction fulfilled my original goal. I acquired some blank dice and labelled them in the way I've described (I couldn't find a blank d10 so I had to paint out the numbers on an ordinary one), and when I'd finished, I had two "odd" dice (one cubical, with a 7, and one shaped like a d10, without a 7) and one "even" die. Roll the two cubical dice, and you get an ordinary 2d6 distribution; roll the even die with the strangely shaped odd die, and you get the same thing but with no 7s.
However, there was one remaining problem. Settlers of Catan has an extension, called "Cities and Knights". In this game, one of the dice is red, and that die has a special function: the total of the two dice still determines the resource generation for the turn, but the single number shown on the red die has an additional effect of deciding who gets special cards. My dice don't provide this extra information. And you can't just roll an ordinary die alongside my two, because that gives rise to combinations which would have been impossible with ordinary dice (there's no way the red die can show 6 if both dice total 4, for example). So it appears that my dice can't be used for Cities and Knights without changing the nature of the game.
Or can they?
They can certainly work in principle. All you need to do is to annotate each entry in the outcome grid with an additional number indicating the value on the red die. Going back to the ordinary 2d6, the annotated grid looks like this (let's have R be the red die):
This grid isn't terribly easy to read, but it at least shows which red numbers can go with which total numbers:
So what we could do is to take the outcome grid for my modified dice, and arbitrarily put these annotations in some order:
This works on paper: it generates both the total and the red number, with the same distribution as the original 2d6, and removing the 7 face on the C die still does the right thing. But it's ugly, because there's no particular structure to the arrangement of the red numbers in the grid, so we're back to needing a lookup table. It would be much nicer if we could find some way to arrange the red numbers which made it possible to figure out what was going on without a lookup table.
After some experimentation, I discovered that it all looks much nicer if you sort the faces of the dice into numerical order, counting "blank" as zero. This makes the totals grid look a lot less obvious:
But it produces an astonishingly simple way to arrange the red numbers. In this grid I'm going to show only the red numbers, to make it clearer just how simple this is:
I have honestly no idea whatsoever why this is so beautifully symmetric. My dice aren't symmetric; I've removed all the symmetry between the dice in the course of making my strange mechanism work. Yet, for no obvious underlying reason, this completely regular arrangement of red numbers just works, producing the right set of possibilities for every possible total. This was totally unexpected to me, and I still don't really understand it. But there it is.
So, given this arrangement, it's not too hard to find some extra labels to put on the faces of my dice. The simplest thing is to write red numbers on the faces of the even die, so that 2, 4, 6, 8, 10 and 12 are annotated with the numbers 1, 2, 3, 4, 5 and 6 respectively. Then we annotate the odd die as follows:
So our completed dice now look like this:
Now the procedure is: roll both dice. To find the total, choose whichever of the two uppermost faces shows more dots. To find the red number, take the red number shown on the even (R) die, and modify it by the amount shown on the odd (C) die if any. If an addition takes you above 6, subtract 6 from the result; if a subtraction takes you below 1, add 6.
And, once again, removing the 7 face from the odd die produces a pair of dice which never roll 7, and this time they generate both totals and red numbers, so these could be used for Cities and Knights.
The mechanism I've described above works nicely on paper, but unfortunately it has a practical disadvantage, which is that it's unusually sensitive to imperfections in the dice.
The normal procedure of rolling 2d6 and adding the values together has the effect of spreading out the overall effect of an imperfection. Suppose you roll two normal dice, for example, and one of them is a bit more likely to roll a 3 than it should be. The effect on the overall distribution of the sum of both dice is that 4, 5, 6, 7, 8 and 9 are all more likely than they should be, but by one sixth as much, because the random value from the other die distributes the error over a wide range. So the normal procedure is naturally quite tolerant of faults (or even deliberate weighting) in the dice.
Not so with the mechanism I describe above. If the even die rolls 8 25% more often than 6, then 8 will come up 25% more often than 6 in the overall result; the whole of the error in one die is propagated to the overall result. And because Settlers treats every distinct roll of the dice as qualitatively different rather than just a bit more or less of a continuously varying quantity, minor imperfections in the dice have a severe and noticeable impact on the game play. In actual fact, since starting to play Settlers with my dice, we have been constantly annoyed by apparently unfair behaviour of the dice over the course of entire games; we haven't gone as far as doing actual statistical tests of my dice's accuracy, but it certainly seems possible that the phenomenon I describe here is to blame.
A really obvious alternative approach would have been to roll a d6 (labelled 1-6 as usual) and a d5 (also labelled 1-5 in the obvious way), add the results, and then add one if it was 7 or more. (This generates the Cities and Knights red numbers trivially, as well: simply let the d6 be the red one.) I initially discarded this approach because it felt inelegant; I wanted my dice to roll (say) a 9 by producing something which was obviously a 9, rather than producing something that looked like an 8 and requiring you to remember to convert it into a 9. (In particular, one manifestation of this elegance property is that with my mechanism the procedure for calculating the outcome is the same whichever of the two odd dice you're currently using.) Unfortunately, this inelegant approach might well have been more practical in terms of tolerating flawed dice.
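That simpler mechanism is also easy to verify by enumeration. The sketch below checks that the d6 plus d5 rule ("add one if the sum is 7 or more", with the d6 as the red die) reproduces exactly the joint distribution of totals and red numbers you get from ordinary 2d6 with the sevens thrown out:

```python
from collections import Counter
from itertools import product

# The alternative mechanism: d6 (red) + d5, adding one if the sum is 7 or more.
alt = Counter()
for red, other in product(range(1, 7), range(1, 6)):
    total = red + other
    if total >= 7:
        total += 1
    alt[(total, red)] += 1

# Reference: ordinary 2d6 with one die designated red, sevens thrown out.
ref = Counter((r + c, r) for r, c in product(range(1, 7), repeat=2) if r + c != 7)

print(alt == ref)   # True: same joint distribution of (total, red number)
```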
A reader familiar with the range of dice available in roleplaying and gaming shops might also have noticed that a 5×6 outcome grid contains 30 possibilities in total, and that you can actually buy 30-sided dice, so taking a d30 and labelling it 2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 8, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 11, 11, 12 would have been a simpler solution still. This might have been a bit fault-intolerant at the extreme ends of the range (with only one 2 face and only one 12 face, having a noticeable imbalance between them would be easy), but it would probably work acceptably in the middle of the range (which is where Settlers players seem to like to spend most of their time if they can arrange it) because with five separate 8 faces a significant imbalance would be harder to arrange. However, the real practical problem with this approach is that it's quite hard to find a d30 large enough to paint replacement labels on!
In the roll a die game, you throw a die and see which number comes up. The most familiar die is the six-sided cube, numbered 1 to 6, though dice with more or fewer faces are also used. Whatever the shape, the number showing on the uppermost face when the die comes to rest is the value of the roll.
The game is played in real time, with each player choosing a different dice strategy. If a player rolls the die for the 'main number', the other die indicates which construction type should be used in the subsequent round. Following that, the next player must draw a number and then cross off the boxes that say "No Throw." This gives them the opportunity to use their dice to try and make their way to the finish line. Depending on how the rolls go, it can be a thrilling and challenging experience for everyone who takes part.
A single die and a pair of dice are the two most common set-ups used in gaming. A single six-sided die can only produce six different outcomes. Adding a second die gives you thirty-six possible combinations and totals that run from 2 to 12. With two dice the distribution of totals is triangular, and if you use more dice the results cluster even more strongly around the middle values.
A pair of dice is also the most common set-up in games of chance. Each player rolls in turn and tries to throw a number that falls within the main number range (usually 5-9). If the roll does not fall within the main range, the player must roll again until it does. Following a successful roll, the next player must roll two or three dice in order to advance to the next round of competition. A winner has the option to keep rolling until the game reaches the desired result.
The player with the greatest number of points at the conclusion of the game is declared the winner. If he or she rolls three threes, the game is a tie! The game is also over when one player reaches a specified point threshold and the other fails to reach it. It is important to remember that this is a game of chance and luck, so proceed with caution! It is also necessary to agree on a set of guidelines in order to ensure a fair game.
It doesn’t matter if you’re playing with two or twenty-four people; there’s a roll-a-dice game to accommodate any group size. You’ll need a pair of dice, some paper, and a pencil to complete this activity. When your child rolls two dice with the numbers six and seven, they are the winner. If a round is won, the victorious player will collect his or her counter and the die from the person to his or her right. The game is simple to learn and provides plenty of opportunities to practise addition skills. | https://gamezenith.com/the-benefits-of-the-roll-a-die-game.html |
At Deloitte, we place great emphasis on offering competitive benefits that enhance work-life integration for your continuous growth in both personal and professional aspects. In addition to the annual leave and medical and insurance coverage that we offer, we understand that attracting the highest-calibre talent means offering an exciting job and, through our various initiatives, a healthy balance between work commitments and social and family life.
We offer flexible work arrangements, arrange wellness programmes, as well as organize social and sports events every year, where all employees are invited to participate and find a sense of work-life integration.
Written by Rachel Reid, Consultant
Previously published by PSI Talent Management or Cubiks, prior to becoming Talogy.
This article about creating a successful work-life balance was originally published in August 2020. All content has been updated as of January 2022.
When working from home, work-life balance takes on a different meaning, and it can become more and more challenging to disconnect as a result. Research shows that employees who feel like they have a healthy work-life balance are more effective at work and experience less burnout. As companies and their employees continue embracing remote work, leaders need to help their teams find a solid balance.
Here are four ways to ensure your remote team continues to achieve a productive yet healthy work-life balance that you can effectively manage now and in the future.
1. Check in and be supportive
Remote work can often feel isolating. To keep your employees engaged and connected, check on them frequently. Do so informally both at the individual and group level to gauge the sentiment of your remote team. Instead of just asking how your remote workers are doing, try being more specific. Below are examples of questions that will provide you deeper insight into the performance of your remote teams:
- What areas do you feel are going well for you right now?
- Do you have the tools you need to succeed in your role?
- In what areas do you need help currently?
Everyone has unique needs when working remotely. Learn how your employees are handling this environment and see if you can provide any extra support. Feeling supported in their role can help diminish any stress employees are experiencing.
2. Evaluate your remote team’s ability to be resilient
Remote work-life balance looks different for each of your employees, so how do you help them manage this change? Understanding your team members’ resilience – their capacity to adapt positively to pressure, setbacks, challenges, and changes to achieve peak performance – is a great place to start. There are many tools and resources that can help you identify this, such as using a team or individual assessment which measures the key components of resilience (e.g., optimism, adaptability, support seeking, etc.). You can then provide development opportunities to encourage your remote employees to grow in this area through coaching, webinars, or books.
3. Virtually partner up
One way to combat your team members from feeling disconnected is by creating a virtual mentorship program. Partner up employees and work with them to schedule opportunities to check-in, reflect on their performance, and discuss their professional goals. These meetings allow employees to discuss how they have been managing the remote work environment and creates the opportunity to discuss tools and resources that will help both parties learn and grow. No one should feel like they need to tackle stress alone, and a mentorship program is a great way to provide additional support to one another.
4. Adjust scheduling
Remember that not only are your remote employees working from home, but they are likely managing their children’s educational or daycare needs, providing care for a loved one, or balancing schedules with other members of their household. It can be difficult for employees to juggle their typical work schedule with these additional demands. As a manager, you can adjust team meeting times to work for your employees’ responsibilities at home or alternate the times. For example, one week have a team meeting that occurs in the morning, and the following week schedule the meeting in the afternoon. Everyone’s needs will be different, so make sure you provide support to your team and be flexible where possible.
While there is much that we can’t control right now, there are ways that leaders can better manage and support their remote teams. Remote collaboration takes a bit more effort in a virtual world, but will prove effective and successful when your employees feel valued, feel as if they have the necessary flexibility to navigate this way of working, and feel close (or as close as they can feel) to their managers and teams. | https://www.talogy.com/en/blog/how-to-maintain-healthy-work-life-balance-while-working-remotely/ |
Organisations, Careers and Caring
By Rosemary Crompton, Jane Dennett and Andrea Wigfield.
Since the 1980s, it has increasingly become the norm for mothers of young children, even children under school age, to remain in paid employment. In 2001, 57 per cent of mothers with children under five were in paid work. Employers and policy-makers have responded to these changes by developing policies of work-life integration, including flexible working, career breaks and carers’ leave of varying kinds. At the same time, organisations have faced an increasingly competitive business environment. ‘High commitment’ management, directed at raising standards of employee performance through ‘cultures of excellence’, has been implemented, as has the development of cost savings via organisational restructuring and removing layers of management (or ‘delayering’).
Recent managerial developments have been associated with the ending of the single organisation ‘bureaucratic’ career, and an increase in the individual’s own responsibility for career development (the ‘portfolio’ career), in which people move from job to job, company to company. In principle, this should mean that individuals (particularly women) should be less affected by discontinuous employment records and flexible working than in the past. More negatively, organisational ‘delayering’ has opened up the ‘gap’ in the job ladder between lower-grade employees and the first step on the promotional ladder, and organisational restructuring has increased the intensity of work for many employees.
Against the background of these changes, this study set out to:
- Explore the impact of flexible working and employment breaks on individual careers for men and women in three contrasting employment sectors.
- Assess the impact of organisational culture on the take-up and impact of family-friendly policies, and whether such policies are contradicted by basic assumptions such as long hours working.
- Explore men’s attitudes to family-friendly working arrangements.
Employment breaks, flexible working, and employment careers
Building on the foundation of an earlier study in the same organisations, this study compared the careers of individuals who had taken employment breaks or other flexible employment options with those who had not done so in the retail banking, supermarket retail and local government sectors in East Kent/Canterbury and Sheffield.
In the bank branches and Sheffield and Canterbury city councils, individuals with full-time unbroken employment records (usually men) had progressed further up the organisational hierarchy. This reflected the history of bureaucratic career development within these organisations. In both sectors, career breaks and the possibility of part-time work were now on offer. In the past decade, the retail bank had introduced policies making it easier for people to take breaks and return to part-time work at their previous employment level. Some women with young children had benefited from this.
The supermarket employees had more varied employment histories, and usually had not been with the company for long. A fragmentary employment record was not a barrier to a career in the supermarket, which had good family-friendly policies. These were much appreciated by lower-level employees, who often worked part-time. However, these jobs were often low paid and did not generate sufficient income to provide the sole support for a family. Most supermarket managers worked full-time and put in long hours. Their jobs were not seen as family-friendly by either the managers or by more junior staff.
In all three sectors, managerial jobs were usually full-time and most managers worked longer hours than contracted. These requirements were widely understood.
“I like my job and I want to work. I couldn’t sit at home but I wouldn’t let it affect my family life to be a manager. The higher up women go, they tend not to have kids.”
(Female bank employee)
The researchers found that an employment history that includes career breaks and/or flexible working is not prohibitive to career development in retail and is becoming less important in other sectors of employment as well. Nevertheless, in all three sectors, getting promoted will usually involve full-time working, and managers are expected to work extra hours. This is seen as a disincentive by those who have, or are anticipating, family or caring responsibilities.
Organisational cultures and work-life integration
Organisations seeking to develop ‘cultures of excellence’ have wanted employees to believe in and take responsibility for organisational goals. ‘Family-friendly’ policies are often incorporated into these developments. At the same time, organisations have also been seeking to become more efficient by using fewer staff to deliver the same services. This study found that, although employees appreciated work-life policies, pressures of work often meant that they could not take advantage of them. Bank employees had to meet sales targets, and were concerned about the impact on their colleagues if they stayed away from work. Many council employees felt similar pressures. However, in the councils, there were more people working on projects or in professional jobs where it was possible to make up work after absence without affecting colleagues. The supermarkets’ policies were generous, but lower-level employees lost pay if they missed shifts, and employee absence was largely covered by managers working extra hours.
Although average weekly hours worked in Britain are amongst the highest in Europe (43.6 as compared to the 39.6 EU average), this is not necessarily because people choose to work long hours. Line managers, or people hoping to be promoted to management positions, can be working longer hours because this is implicitly required.
“I have worked 70 hours a week in the past. I put in the hours when I need to. That’s part of being a manager but you’re never really asked to.”
(Supermarket manager)
People who are paid overtime can also work longer hours to make up low wages. Others can work long hours, or fail to take leave to which they are entitled, because of the increased workload that would fall on their colleagues.
In the supermarket and bank, work-life policies, as well as the nature of the services marketed by the companies, were determined at national level. As a consequence, the researchers found little variation in the way work-life policies were implemented in these companies. In contrast, in the councils the nature of work and the provision of services are more complex, and managerial discretion was more important. Both councils had flexitime systems which were widely used to achieve work-life balance.
Today’s families and work-life integration
Among the women interviewed, all of the mothers aged over 45 had taken a break from employment. In contrast, of the 30 women interviewed whose youngest child was aged ten or under, only three had taken a break from employment. In families with young children, most mothers took the major responsibility for childcare (many mothers worked part-time). However, many fathers of young children took a major role in caring for their children. In some families with two working parents, both men and women arranged their hours to accommodate childcare, and roles were shared more or less equally (‘shift parenting’). Some men had taken on major childcare responsibilities as a result of unemployment. This kind of male-biased parenting, however, usually occurred as a consequence of unemployment or redundancy. Only one of the men interviewed had voluntarily changed his working hours (i.e. taken a part-time job) because of caring responsibilities. However, three women reported that their partners had changed their jobs or working hours to help with childcare.
Caring and careers
It was widely recognised by men and women that it is difficult to combine career development with family responsibilities.
“I now don’t aim for the top. There is more to life than work. My perspective has changed, I must admit. It has a lot to do with my daughter.”
(Male bank manager)
More women than men had lowered their career aspirations because of their families. Nevertheless, some women had had successful careers despite the demands of their employment.
“My boss … felt that it was the woman’s job to stay at home … I was determined to prove otherwise.”
(Female council manager)
In balancing their work and family lives, people make choices from within the constraints available. Career opportunities may be limited by organisational restructuring and/or because of a lack of individual qualifications. In all three sectors, career building might mean geographical relocation. Both men and women have to earn sufficient to support their families, but both recognised the negative aspects of career development.
“If you are the sort of person who doesn’t mind picking up your roots and moving every couple of years to heighten your career development, there are opportunities … But if you want to build a family life and your children are at school so you can’t heave them up all the time and move them around the country, then it’s quite limited what’s available to you.”
(Female bank manager)
Conclusion
The last two decades have seen substantial changes within both families and organisations. Mothers have increasingly remained in employment, and organisations have undergone radical change. Long-established job hierarchies have been swept away, and more flexible ways of working have been widely adopted. Nevertheless, there remain important elements of continuity in both employment and family life. These stem from taken-for-granted assumptions about the requirements of managerial and supervisory jobs, and ‘gendered’ responsibilities for the unpaid work of caring. Bureaucratic hierarchies may have been jettisoned, but even junior managerial jobs are seen to require full-time working and extra hours if necessary. These expectations make a major contribution to the ‘long hours culture’ in Britain. Women still take the major responsibility for caring and domestic work, and are less likely to want or be able to ‘put in the hours’ to develop a career.
Both male and female employees in the three sectors supported the introduction of family-friendly policies in their organisations. However, in all sectors managers found it more difficult to take advantage of the flexibility on offer and thus individual career development often has negative consequences for family life. They were acutely aware of the tensions between the demands of the business enterprise and caring and family responsibilities.
“A company doesn’t employ you to care for your parents, does it? They employ you to work, looking at it from the business point of view.”
(Male bank manager)
‘Family-friendly’ employer policies are all too easily over-ridden by the demands of the competitive enterprise. Thus employer provisions need to be supported by government policies that recognise these demands and seek to counter their negative impacts as far as family life is concerned.
About the project
The research drew upon 126 work-life history interviews with 84 female and 42 male employees in the three sectors in two localities. Some interviewees had taken an employment break, switched to part-time work, or taken up flexible employment opportunities, and some had not. People (invariably women) who had had such ‘flexible’ employment careers were matched with those who had not. The interviews were transcribed, and analysed using a computer-aided qualitative data analysis package.
How to get further information
The full report, Organisations, careers and caring by Rosemary Crompton, Jane Dennett and Andrea Wigfield, is published for the Joseph Rowntree Foundation by The Policy Press as part of the Family and Work series (ISBN 1 86134 500 3, price £13.95). | http://www.publicnet.co.uk/features/2004/01/23/organisations-careers-and-caring/ |
This story was updated on August 16, 2018.
According to Ernst & Young, 24 percent of U.S. employees said work-life integration is becoming more difficult to manage. Even in an era in which flex-time — designed to improve work-life balance — has increased, recent data from the Bureau of Labor Statistics indicate that the average full-time employee is working more hours per week and working more on Saturdays and Sundays.
That problem is not exclusive to the U.S., either. According to the ADP Research Institute study The Evolution of Work: The Changing Nature of the Global Workforce, more than a quarter of working people in the U.K. report unhappiness with their work-life integration.
Those statistics come at a time when managing work and life means different things to different generations all over the world. For the "Sandwich Generation" — people in their 40s and 50s who are caring for both their parents and their children — it means trying to balance work and family while making enough money to support a growing household.
Many American adults in the Sandwich Generation are financially supporting both an aging parent and either a small or adult child. Across the globe, the burden on Asian families to care for both parents and children is particularly substantial.
The Harvard Business Review cites studies that show the work-life integration issue has real consequences for both worker productivity and personal health. Basically, longer hours and increased stress are at odds with what workers value in their lives.
The Changing Workforce
Expectations in the global workforce are changing, and workers' expectations of freedom to do their work when, where and how they want are higher than ever before. For example, according to the Evolution of Work Study, 44 percent of workers in the Netherlands believe they should "define" their own work schedule and 95 percent of Chinese employees believe they will soon be able to do the majority of their work using a mobile device, beliefs that reflect a need to work in a different way in order to accommodate personal obligations. Leaders should be cognizant of the changing workforce and how work-life integration affects employee well-being, productivity and organizational performance.
Flexibility for Different Reasons
According to the Evolution of Work Study, 81 percent of employees view their ability to work from anywhere in the world positively, but the rationale for that positivity for different generations varies. The Sandwich Generation, in particular, is a group of employees seeking better work-life integration. Gail Hunt, president of the National Alliance for Caregiving, told Monster.com that the number one thing members of this generation want from employers is flexible time.
That presents both a retention and recruitment opportunity for leadership. By catering to varying needs, you can show your existing employees you understand that work-life balance comes in all shapes and sizes, and highlight to potential, talented employees just how much you value your employees' happiness outside of work.
What to Do?
If Hunt is correct that the number one thing members of the Sandwich Generation want from employers is flexible time, then organizations that offer this have the potential to attract highly skilled employees at the expense of those that do not. HCM leaders should prioritize the implementation of technology and policies that both enable flexible work time and show employees their voices are being heard.
An example of a technology that can enable flex time and keep employees productive is a mobile-friendly collaborative enterprise social network, such as Yammer or Oracle, that employees can use to communicate and collaborate on projects. As The Evolution of Work report suggests, the vast majority of global employees believe they can work from anywhere in the world, and many are already used to using social technologies to communicate and connect with people. Employers should find it easy to gain adoption of those technologies and keep up with the trend.
When it comes to creating policy around flex time, there is one simple solution to include: encourage employees to take all of their vacation. MarketWatch reports that employees in the United States take only half of their paid vacation time. Employees in China take even less, according to the same article.
As technology and employee preferences continue to blur the lines between where work actually gets done — at the workplace or remotely, during or outside of standard business hours — organizations must provide new ways to support productivity, while helping employees achieve balance, in this new norm. | https://www.adp.com/spark/articles/2016/07/work-life-integration-is-there-an-employee-employer-expectation-gap.aspx |
Use these tips to make the transition into hybrid work models easier.
11 Ideas for Hybrid Team Building Activities
Enhance employee performance and create a strong company culture with these hybrid team building activities.
Remote Food Delivery Solutions for Work From Home Employees
Whether working remotely full-time or part-time, here are some options to keep your employees happy, healthy, and fed.
How Encouraging Work-life Balance can Strengthen Company Culture
Flexible work is here to stay, here’s how you can encourage life-work integration that will promote company culture.
Welcome to the Great Resignation
Learn what you can do to attract and retain talent during this unprecedented time. | https://work.doordash.com/en-ca/blog/category/flexibility |
At the beginning of 2020, only about 17% of American workers worked remotely full time. By April 2020, however, 44% of employees were working from home.
Though many offices have reopened and brought employees back in-office, remote work is still a big part of every day life. In fact, nearly 27% of American workers are expected to be fully remote in 2021.
For managers, this shift requires fresh strategies. Remote work adds new challenges to the difficulties of successfully managing a team. Some of the most common challenges include:
- Dealing with distractions at home.
- Feeling disconnected from co-workers and team decisions.
- Maintaining a healthy work-life balance.
- Staying engaged and motivated.
If you’re new to remote team management, you may feel overwhelmed and unequipped to resolve these problems among your team. If you aren’t able to find your footing, your team can become disorganized, unfocused, and unproductive.
Even if you’ve been managing remote teams for years, your employees are likely facing challenges that they’ve never had to deal with before. Their partner or spouse may also be working remotely, they may have kids or other family members in their household to look after, and everyone may be under a lot more stress due to everything that’s been happening in the world lately.
In sum, any manager can use the best practices in this article to lead teams more effectively by keeping them on-task, motivated, and ready to tackle the next project that comes their way.
Challenge: Dealing with distractions
We’ve all seen the commercials that parody remote work, showing parents with their kids climbing all over them as they try to join a meeting or finish a task. Though these portrayals are exaggerated, they do hold some truth.
For many people, remote work offers endless opportunities to move focus from work to the things going on at home.
Parents may have kids playing loudly in the next room or at their feet. Roommates may not respect requests to be quiet during work hours. There might be packages delivered, laundry that needs to be folded, and a million other interruptions that can easily consume the day.
If your workers can't reclaim their focus or have poor time management, they may spend hours on these distractions with little to show you at the end of the day.
Solution: Create workspace guidelines
You need your employees to be consistently productive, which means finding a reliable way to shut out frequent distractions. One way to do this is to create guidelines to help your team create better remote workspaces. These guidelines might include:
- A dedicated room with a door. Creating a separate space that is used only for work can help employees stay focused and on-task. If at all possible, workers should use a separate room with a door (even if it means working from their bedroom) so they can limit interruptions. Consider offering employees stipends so they can purchase necessary equipment to set up a home office, such as a desk and chair, or an extra monitor, to make their at-home space more like an office.
- A workspace away from home. Some people can't thrive in a work-from-home setting, either because they don't have the dedicated space or there are simply too many distractions. For these employees, consider giving them a stipend so they can spend time at a co-working space (e.g., WeWork) so they have a more work-friendly environment.
If the distractions are schedule-based (e.g., employees need to take time off each day to pick up or drop off kids at school), you should try to work with team members to create a schedule that allows them to attend to these tasks by setting work hours that start earlier or end later in the day to allow for any necessary breaks.
With tailored work hours, employees can give their full attention to both work and personal tasks without feeling like they have to make sacrifices in one area or the other.
Challenge: Feeling disconnected
Feelings of isolation take several forms for remote workers. Some people report feeling lonely and experiencing social isolation. Others are frustrated when they feel that they aren’t included in team decisions, such as setting goals.
You may also have workers who are unable to build relationships with their coworkers due to differences in time zones and/or schedules, which can affect remote team collaboration.
Solution: Foster stronger relationships
First, evaluate the tools and policies you have in place for communication. Can everyone reach out to chat with each other easily? Are you including the right people in team decisions, or just sending out the results?
Look for any gaps that may hinder collaboration or leave people feeling left out. If you’re lacking the right tools, find a suitable option, such as a chat or collaboration tool, to enable your team to easily communicate with one another.
Once you have a better idea of what your team needs to feel more connected, create new policies to encourage better connections. If you don’t have one scheduled already, bring your team together for a weekly meeting to discuss important news and make team decisions.
You can also meet with each person separately to see how they’re doing and get their feedback and suggestions on ways to improve team camaraderie and collaboration.
Lastly, consider scheduling social events to help ward off feelings of isolation and disconnection. These can take many forms (e.g., virtual happy hours or escape rooms, or some type of multi-player game), but should always work to bring your team together.
You need full participation for these gatherings to be effective, so try to find options that everyone will enjoy.
Challenge: Maintaining a healthy work-life balance
When you work in an office, it’s much easier to maintain healthy boundaries. Work begins when you arrive and ends when you leave the building. At-home workers, however, don’t have the same clear markers.
Even if your team follows a typical nine-to-five schedule, they never physically “leave” the office. This makes it easier to work late, check emails during personal time, and spend too much time thinking about work obligations.
Along with blurring these lines, an unhealthy work-life balance can lead to burnout. Employees, especially those who are lonely or disconnected, are more likely to experience mental and physical issues that affect their well-being. You may also see a decline in their work performance.
Employees may be hesitant to set better boundaries because they may fear looking like they’re not working as hard, and so they may feel trapped in an unending cycle.
Solution: Set and enforce better boundaries
When you manage remote workers, you’re responsible for creating an optimal work environment. Even though you’ll never be able to address every stressor your team experiences, you can create and enforce clear boundaries that minimize the impact that day-to-day work has on their mental and physical health.
Tips for setting and enforcing boundaries include:
- Communicate expectations about after-hours work. Do you expect workers to answer emails outside of work hours? What is the policy for projects that require after-hours work to meet deadlines? Determine your expectations for these situations and communicate them clearly to your team.
- Model the importance of work/life balance. Set an example for the rest of your team by not sending after-hours emails or working on projects on weekends and by taking regular personal time off (PTO).
- Consider implementing policies such as “no work emails on the weekend,” or “no meetings on Mondays” to set clear boundaries for your team to adhere to, rather than expecting them to set those boundaries themselves.
- Check in regularly with team members. During your weekly one-on-ones, ask employees how they’re doing and if there’s anything you can do to help them better balance work and personal tasks. You might consider sending out an anonymous survey to your team if you really want to get honest feedback so you can identify areas for improvement.
Most importantly, make sure you always communicate expectations clearly, as certain situations may call for adjusting existing boundaries.
For example, there may be times when an urgent project requires “all hands on deck” for several days, and most employees need to work a bit late.
In this case, let your team know that this is only temporary, and when the project is over, communicate that everyone should resume their normal working hours (you might even consider giving people a day off as a reward).
Challenge: Staying motivated
Even under the best of circumstances, your workers may struggle to stay committed to their work. Some remote workers may not be used to the lack of structure that at-home work features, while those who thrive on in-person communication may struggle to get things done without consistent feedback and social stimulation.
As a manager, you’re likely to see this lack of motivation reflected in both personal performance and team results. Workers who struggle to meet deadlines and perform as expected will also slow the rest of the team down. If employees can’t meet their job expectations, you may be forced to let them go.
Solution: Find creative ways to motivate your team
This solution may seem overly simple, but it can be powerful if you use it right. Start by asking your team what motivates them. Do your employees work best under pressure? Adjust your workflow to include shorter deadlines and work cycles.
Do some employees prefer to work independently, while others prefer to collaborate on projects? Try to delegate projects accordingly so that every team member is working on the types of projects they enjoy most, in the way they enjoy most.
Another way to incentivize employees is to gamify certain work tasks. You can create multiple ways to reward progress, such as giving points for meeting deadlines and solving problems. At the end of the week, you can offer prizes to those with the highest scores.
In addition to providing motivation, gamification adds an element of fun to what can be mundane everyday tasks. No matter what solutions you come up with, just be careful to choose options that won’t add additional stress or distractions, or more work to do.
Managing remote teams can be a challenge for even the most skilled people manager. But by implementing some of the best practices outlined above, you can begin to solve some of the most common challenges remote managers face, resulting in a happier, more productive team, and ultimately making your job easier. | https://blog.mindmanager.com/blog/remote-team-best-practices |
The COVID-19 pandemic has underscored the importance of a healthy work-life balance. As employees have adjusted to working from home, they’re facing all kinds of new challenges, such as setting boundaries between work and personal life, time scheduling and balancing work with childcare. Not only in times of crises, but companies should always help their employees implement self-care so that they can achieve mental and physical well-being.
The pandemic has caused the stress levels of professionals to reach new heights. One study revealed that a few months after the onset of the pandemic, 73% of professionals were feeling burned out. One of the top reasons for this burnout was the lack of separation between work and home life.
Hopefully, in the months since then, we’ve all got better at the remote working lifestyle. However, the coronavirus has not disappeared and is still a significant stressor in employees’ lives. But stress has been a growing issue for a long time, and we shouldn’t need a pandemic to make us tackle it in the workplace.
In small doses, stress can be beneficial: in high-stakes moments or under tight deadlines, it can make people work faster and more efficiently. In contrast, chronic stress not only affects people's mental health, but their physical health too. Being constantly in emergency mode impacts the sleep cycle, metabolism, hormonal system and more, leading to all kinds of health problems. This means fatigue on the job, more sick days, moodiness, poor judgment and decreased productivity. So not only is stress bad for the individual, but it's also bad for the company.
On the other hand, employees who have time for themselves and actively take care of their physical and mental health will be much happier and more productive on the job. The well-being of a company is closely connected to the well-being of its employees. So it stands to reason that companies should promote self-care practices in their workforce.
But what practical measures can you implement to nudge your employees into better self-care? Here are a few ideas.
● Insist Employees Take Leave
Due to lockdowns, a huge number of UK employees have unused holiday time, yet this isn’t an unusual trend. Employees often believe that they’ll seem more hardworking if they forego holidays in order to get more done. However, the burnout they’re likely to experience as a result can lead to decreased productivity, thus cancelling out any time gained by skipping leave days.
Remind employees about their holiday entitlement and encourage them to take them. Perhaps you could require them to take off a certain number of days each quarter, or at least limit the number of days an employee can sell back to the company. Likewise, discourage overtime and working on weekends.
● Encourage Healthy Eating
Taking care of your physical health is a vital aspect of self-care. If you currently have an in-person office or one for when people start coming in again, make sure you have healthy snacks available in the office. If you offer company-sponsored meals, provide options that are both nutritious and delicious. If you want to encourage healthy eating practices in your employees, it starts in the office.
● Promote Exercise and Healthy Movement
To care for the body, exercise is just as important as eating well. Send your employees the message that you care about their health by starting a walking or running club. If you can, create an in-office gym and hire onsite exercise instructors. Another option is to offer to reimburse employees for a portion of their gym memberships or exercise classes as part of your benefits package. If everyone is working from home, no worries—just move the classes onto Zoom.
Healthy movement goes beyond exercise.
Many people who work office jobs suffer many health issues related to sitting at a desk for extended periods. Remind employees to stretch regularly and to get up from their desks for two-minute movement or mindfulness breaks. Consider offering alternative desk and chair arrangements such as standing desks or exercise balls. Even if all work is remote, you can still remind employees to stretch, particularly during long meetings.
● Be Flexible
It’s likely that COVID-19 has caused you to implement some flexibility for your employees’ working hours. Consider keeping this flexibility even when everyone is back at the office.
Giving your employees control over their own work schedules helps them arrange their lives in a way that works best for them. Gone are the days when everyone has to be at their desk from 9 to 5. That doesn’t mean you can’t have core work hours when employees need to be available, but allow a little flexitime as well as some leeway in the event of emergencies. It will lead to happier employees and more productivity in the long run.
● Value Employees’ Time
It’s all very well to encourage employees to implement a healthy, balanced lifestyle, get enough exercise and sleep and take all their leave days. But this will seem disingenuous if you then inundate them with meetings and drown them in deadlines. One thing the pandemic has taught us is that yes, it could have been an email.
Find ways to be efficient so that your employees have enough time to complete their work and still take time for themselves. That’s not to say that you should eliminate meetings—some synchronous connection with co-workers is essential in this time of social distancing. Yet you can limit the number of large or unnecessary meetings and minimise their length by creating agendas and collecting the relevant information in advance. In addition, encourage employees to take proper lunch breaks away from their desks and to implement a strict schedule so that work doesn’t bleed into personal time.
There are many ways you can encourage healthy self-care habits in your employees. The most important point is that your employees know you care about their well-being. Companies expect their employees to give them their best, so it’s essential to provide the support necessary in order for them to do so. | https://hrnews.co.uk/how-to-encourage-self-care-amongst-employees/ |
Work-life balance is the term used to describe working practices that support the need to achieve a balance between home and working life.
Most of us are aware of getting the work-life balance right but find doing so isn't always easy. There are no rules; balance is an individual thing and everyone has their own equilibrium. If you continually feel rushed and overloaded, it is time to re-evaluate.
The University recognises that most of us have family and personal commitments and offers:
- A range of family-friendly policies
- Flexible working arrangements
- Leave provision for emergencies
- Staff benefits e.g. childcare vouchers
- Support Services
Top 10 Tips for achieving a work-life balance
Nursery
Childcare Scotland Ltd operates the Strathclyde Nursery on John Anderson Campus (Level 1 of Forbes Hall) on behalf of the University. | https://www.strath.ac.uk/wellbeing/stressandmentalhealth/work-lifebalance/ |
This research represents an integrated approach to reconstructing three dimensional environments for robotic navigation. It mainly focuses on three dimensional surface reconstruction of the input data using Kinect, a depth sensor. With an increase in the application areas making use of point clouds, there is a growing demand to reconstruct a continuous surface representation that provides an authentic representation of the unorganized point sets and to render the surface for visualization. The main goal of this research is the study of various surface reconstruction algorithms and the creation of a three dimensional model of an object and/or an entire three dimensional environment from a set of point clouds. It starts by scanning an environment or an object using the Kinect and storing the generated point cloud using OpenGL and Microsoft Visual Studio. It then focuses on creating a mesh out of the stored point cloud in MATLAB, using a computational geometric approach called Delaunay triangulation. Finally, by combining surfaces and applying a surface reconstruction method, the three dimensional model is obtained. | http://dspace.bracu.ac.bd/xmlui/handle/10361/4890
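The abstract above glosses over how a depth sensor's raw frame becomes a point cloud in the first place. A minimal sketch of that back-projection step is shown below, assuming a simple pinhole camera model; the intrinsic values and frame size are made-up illustration figures, not parameters taken from the thesis.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an N x 3 point cloud.

    depth  : 2D array of per-pixel depth values
    fx, fy : focal lengths in pixels
    cx, cy : principal point in pixels
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column / row indices
    z = depth
    x = (u - cx) * z / fx                            # pinhole camera model
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth reading

# Hypothetical intrinsics, roughly in the range of a Kinect-class sensor
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
print(cloud.shape)   # (307200, 3) for a fully valid 640 x 480 frame
```

Each valid depth pixel becomes one 3D point; the resulting N x 3 array is what later gets triangulated into a mesh.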
Agisoft Metashape is a stand-alone software product that performs photogrammetric processing of digital images and generates 3D spatial data to be used in GIS applications, cultural heritage documentation, and visual effects production as well as for indirect measurements of objects of various scales.
A wisely implemented digital photogrammetry technique, reinforced with computer vision methods, results in a smart automated processing system that, on the one hand, can be managed by a newcomer to the field of photogrammetry, yet, on the other hand, has a lot to offer to a specialist who can adjust the workflow to numerous specific tasks and different types of data. Throughout various case studies, Metashape has proven to produce quality and accurate results.
Agisoft Metashape is an advanced image-based 3D modeling solution aimed at creating professional quality 3D content from still images. Based on the latest multi-view 3D reconstruction technology, it operates with arbitrary images and is efficient in both controlled and uncontrolled conditions. Photos can be taken from any position, providing that the object to be reconstructed is visible on at least two photos.
Both image alignment and 3D model reconstruction are fully automated.
How it works
Generally, the final goal of processing photographs with Metashape is to build a 3D surface, an orthomosaic and a DEM. The processing procedure includes four main stages.
1. The first stage is camera alignment. At this stage Metashape searches for common points on photographs and matches them, finds the position of the camera for each picture, and refines the camera calibration parameters. As a result a sparse point cloud and a set of camera positions are formed. The sparse point cloud represents the results of photo alignment and will not be directly used in further processing (except for the sparse point cloud based reconstruction method, which is not recommended).
However it can be exported for further usage in external programs. For instance, the sparse point cloud model can be used in a 3D editor as a reference.
On the contrary, the set of camera positions is required for further 3D surface reconstruction by Metashape.
2. The next stage is generating the dense point cloud, which is built by Metashape based on the estimated camera positions and the pictures themselves. The dense point cloud may be edited and classified prior to export or before proceeding to the next stage.
3. The third stage is the generation of a surface: mesh and/or DEM. A 3D polygonal mesh model represents the object surface based on the dense or sparse point cloud; this type of surface representation is not always required, so the user may choose to skip the mesh generation step. A Digital Elevation Model (DEM) can be built in Geographic, Planar or Cylindrical projection according to the user's choice. If the dense point cloud has been classified at the previous stage, it is possible to use particular point classes for DEM generation.
4. After the surface is reconstructed, it can be textured (relevant for mesh model only) or an Orthomosaic can be generated. Orthomosaic is projected on a surface of user’s choice: DEM or Mesh model (if it had been generated for the project). | https://nullpk.com/agisoft-metashape-professional-1-6-0-2021-win-mac/ |
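Metashape Professional also exposes a Python scripting API, and the four stages above map onto it almost one-to-one. The sketch below is only an outline: method names and arguments differ between Metashape versions (for instance, the dense-cloud calls were renamed in the 2.x releases), and the photo file names are placeholders.

```python
import Metashape  # requires a licensed Metashape Professional installation

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG", "IMG_0003.JPG"])  # placeholder file names

# Stage 1: alignment -> sparse cloud + camera positions
chunk.matchPhotos()
chunk.alignCameras()

# Stage 2: dense point cloud (buildDenseCloud in 1.x; renamed buildPointCloud in 2.x)
chunk.buildDepthMaps()
chunk.buildDenseCloud()

# Stage 3: surface generation (mesh and/or DEM)
chunk.buildModel()
chunk.buildDem()

# Stage 4: texturing and orthomosaic projected onto the chosen surface
chunk.buildUV()
chunk.buildTexture()
chunk.buildOrthomosaic()

doc.save("project.psx")
```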
Happy Halloween, folks! In honour of my favorite holiday, today’s post is about how I scanned part of my Halloween costume to be 3D printed.
Let’s start with the costume idea; this year, I wanted to be Krang from the Ninja Turtles. Yeah, this guy:
Aside from the fact that I’m not a 7-foot-tall neck-less battle robot, I also don’t have a brainy overlord hanging around, which presents some technical difficulties. I had to make Krang!
After much deliberation, I made him out of clay. I must say, I’m pretty pleased with the results.
At this point, I started to think: what if I scanned and 3D printed him? It’d be neat to have a copy! Besides, what’s the point of having cool toys at work if I don’t play with them?
For this scan, I used the Occipital Structure Sensor connected with an iPad. I’d never used this device before, so I was very excited to give it a try. Some of the advantages I noticed were that it’s easily transportable (certainly easier than carrying a fragile clay model around on public transit), has a really good price point and is incredibly simple to use.
To scan, you simply fire up the app on the iPad and point the camera at the object you wish to digitize. There’s a bounding-box that you can set to exclude things that you don’t want to capture as part of your model, such as the surface the object is resting on. I had a bit of trouble with this step and caught some of the table that Krang was sitting on. In my case, this was acceptable since I was planning on importing the model into BuildIT anyhow, and I could easily remove the excess plane there. I didn’t look for any editing tools associated to the structure sensor app itself, but I’m sure that, with a bit more patience than I had, you could easily set the bounding box properly.
I took my scan on high-resolution, and did one full rotation of the object, along with a top view. It took a couple of minutes since I had to go slowly in order to capture keyframes and ensure that the images were properly stitched together. Once I felt that I had sufficient coverage of the object, I simply clicked to stop scanning and emailed the file to myself from the app.
Next, I imported the file into BuildIT using a nifty little script that Mathieu (our Director of Engineering) cooked up. It generates a point cloud from the .obj file and then colours those points and applies the texturing from the .mtl and .jpg files. You can check out a video of it in action here. As you can see below, it did a really nice job!
To send this file for printing, I needed an .stl mesh. Before I could create the mesh from my point cloud, though, I needed to clean it a bit. I had to remove the extra surface I scanned by accident. In BuildIT, I went to split the cloud of the desk from the main object and then created a mesh.
To split the cloud in BuildIT and remove the table surface, I simply went to Edit > Point Cloud > Split Clouds
First, I rotated the view so that the table surface was perpendicular to the view, allowing me to easily select the points on a plane.
Then I selected the cloud, used the Polygon Selection tool to select the plane, and applied the command. After that, it was easy to remove the unwanted point cloud from the view.
The next step was to generate a mesh of the main cloud. I went to Construct > Create Mesh from Cloud.
I selected the main cloud, set a Sampling distance of 0.0004 and the Neighborhood scale to small. Then I checked the Fill holes option and had it get rid of anything smaller than 1 inch, to create a fully watertight mesh. You can also fill holes using the Fill Holes command under Edit > Mesh > Fill Holes
Now the mesh can be exported as an .stl file and sent to be printed.
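BuildIT is a commercial package, but the same clean-up-and-mesh workflow can be sketched with the open-source Open3D library. Everything below is an assumption made for illustration (the file names, the RANSAC threshold used to find and drop the table plane, and the use of Poisson reconstruction instead of BuildIT's sampling-distance mesher), so treat it as a rough open-source equivalent rather than the steps used in this post. It also assumes a reasonably recent Open3D release.

```python
import open3d as o3d

# Load the scanned cloud (hypothetical file exported from the scanner app)
pcd = o3d.io.read_point_cloud("krang_scan.ply")

# Split the cloud: find the dominant plane (the table) with RANSAC and drop it
plane, inliers = pcd.segment_plane(distance_threshold=0.005,
                                   ransac_n=3, num_iterations=1000)
object_pcd = pcd.select_by_index(inliers, invert=True)

# Normals are needed for surface reconstruction
object_pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Create a watertight mesh from the cloud (Poisson reconstruction here)
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    object_pcd, depth=9)

# STL needs triangle normals before export
mesh.compute_triangle_normals()
o3d.io.write_triangle_mesh("krang_mesh.stl", mesh)
```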
Next time, we’ll look at scanning the 3D printed part and comparing it to the original cloud to see how accurately my new Krang was rendered! | https://insights.faro.com/buildit-software/scanning-a-clay-model-for-3d-printing-aka-too-much-time-making-a-halloween-costume |
In the previous session, we learned what a mesh is and the various aspects upon which a mesh can be classified. Mesh generation requires expertise in the areas of meshing algorithms, geometric design, computational geometry, computational physics, numerical analysis, scientific visualization and software engineering to create a meshing tool.
Over the years, mesh generation technology has evolved shoulder to shoulder with increasing hardware capability. Even with the fully automatic mesh generators there are many cases where the solution time is less than the meshing time. Meshing can be used for wide array of applications, however the principal application of interest is the finite element method. Surface domains are divided into triangular or quadrilateral elements, while volume domain is divided mainly into tetrahedral or hexahedral elements. A meshing algorithm can ideally define the shape and distribution of the elements.
A key step of the finite element method for numerical computation is the mesh generation algorithm. A given domain is to be partitioned into simpler 'elements'. There should be few elements, but some portions of the domain may need small elements so that the computation is more accurate there. All elements should be 'well shaped'. Let us take a walkthrough of different meshing algorithms based on two common domains, namely quadrilateral/hexahedral meshes and triangle/tetrahedral meshes.
Grid-Based Method
The grid based method involves the following steps:
Medial Axis Method
Medial axis method involves an initial decomposition of the volumes. The method involves a few steps as given below:
Plastering method
Plastering is the process in which elements are placed starting with the boundaries and advancing towards the centre of the volume. The steps of this method are as follows:
Whisker Weaving Method
Whisker weaving is based on the concept of the spatial twist continuum (STC). The STC is the dual of the hexahedral mesh, represented by an arrangement of intersecting surfaces, which bisect hexahedral elements in each direction. The whisker weaving algorithm can be explained as in the following steps:
Paving Method
The paving method has the following steps to generate a quadrilateral mesh:
Mapping Mesh Method
The Mapped method for quad mesh generation involves the following steps:
Quadtree Mesh Method
With the quadtree mesh method, square containing the geometric model are recursively subdivided until the desired resolution is reached. The steps for two dimensional quadtree decomposition of a model are as follows:
Delaunay Triangulation Method
A Delaunay triangulation for a set P of discrete points in the plane is a triangulation DT such that no point in P is inside the circumcircle of any triangle in DT. The steps for constructing a Delaunay triangulation are as follows:
Delaunay triangulation maximizes the minimum angle over all the angles of the triangles, so it tends to avoid skinny triangles.
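For a quick, hands-on illustration of the property just described, a two-dimensional Delaunay triangulation can be built in a few lines with SciPy; the point set here is random and purely illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((30, 2))        # 30 random points in the unit square

tri = Delaunay(points)              # construction backed by Qhull
print(tri.simplices[:5])            # each row holds the vertex indices of one triangle

# The empty-circumcircle property can be spot-checked: no input point should fall
# strictly inside the circumcircle of any triangle listed in tri.simplices.
```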
Advancing Front Method
Another very popular family of triangular and tetrahedral mesh generation algorithms is the advancing front method, or moving front method. The mesh generation process can be explained in the following steps:
Spatial Decomposition Method
The steps for spatial decomposition method are as follows:
Sphere Packing Method
The sphere packing method follows the given steps:
Source
Singh, Dr. Lokesh, (2015). A Review on Mesh Generation Algorithms. Retrieved from http://www.ijrame.com
The quality of a mesh plays a significant role in the accuracy and stability of the numerical computation. Regardless of the type of mesh used in your domain, checking the quality of your mesh is a must. 'Good meshes' are the ones that produce results with a fairly acceptable level of accuracy, assuming that all other inputs to the model are accurate. While evaluating whether the quality of the mesh is sufficient for the problem being modeled, it is important to consider attributes such as mesh element distribution, cell shape, smoothness, and flow-field dependency.
It is known that meshes are made of elements (vertices, edges and faces). The extent to which noticeable features such as shear layers, separated regions, shock waves, boundary layers, and mixing zones are resolved relies on the density and distribution of mesh elements. In certain cases, critical regions with poor resolution can dramatically affect results. For example, the prediction of separation due to an adverse pressure gradient depends heavily on the resolution of the boundary layer upstream of the point of separation.
The quality of a cell has a crucial impact on the accuracy of the entire mesh. The quality of a cell is analyzed by virtue of three aspects: orthogonal quality, aspect ratio and skewness.
Orthogonal Quality: An important indicator of mesh quality is an entity referred to as the orthogonal quality. The worst cells will have an orthogonal quality close to 0 and the best cells will have an orthogonal quality closer to 1.
Aspect Ratio: Aspect ratio is an important indicator of mesh quality. It is a measure of stretching of the cell. It is computed as the ratio of the maximum value to the minimum value of any of the following distances: the normal distances between the cell centroid and face centroids and the distances between the cell centroid and nodes.
Skewness: Skewness can be defined as the difference between the shape of the cell and the shape of an equilateral cell of equivalent volume. Highly skewed cells can decrease accuracy and destabilize the solution.
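To get a concrete feel for these numbers, the sketch below computes two simple quality measures for a single triangle: an edge-length aspect ratio and an equiangle skewness. Note that solvers like the one described above typically use centroid-to-face distances and their own exact formulas, so these are illustrative conventions rather than the definitions of any particular tool.

```python
import numpy as np

def triangle_quality(a, b, c):
    """Return (aspect_ratio, skewness) for the triangle with vertices a, b, c."""
    a, b, c = map(np.asarray, (a, b, c))
    edges = [np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)]
    aspect_ratio = max(edges) / min(edges)      # 1.0 for an equilateral triangle

    def angle(p, q, r):                         # interior angle at vertex p, in degrees
        v1, v2 = q - p, r - p
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    angles = [angle(a, b, c), angle(b, c, a), angle(c, a, b)]
    # Equiangle skewness: 0 for an equilateral cell, approaching 1 as it degenerates
    skew = max((max(angles) - 60.0) / (180.0 - 60.0),
               (60.0 - min(angles)) / 60.0)
    return aspect_ratio, skew

print(triangle_quality((0, 0), (1, 0), (0.5, 0.866)))   # ~ (1.0, 0.0): well shaped
print(triangle_quality((0, 0), (1, 0), (0.5, 0.05)))    # a flat sliver: high skewness
```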
Smoothness relates to truncation error, which is the difference between the partial derivatives in the equations and their discrete approximations. Rapid changes in cell volume between adjacent cells result in larger truncation errors. Smoothness can be improved by refining the mesh based on the change in cell volume or the gradient of cell volume.
The overall effect of resolution, smoothness, and cell shape on the accuracy and stability of the solution process depends upon the flow field being simulated. For example, skewed cells can be acceptable in benign flow regions, but they can be very damaging in regions with strong flow gradients.
Mesh size stands out as one of the most common problems in an analysis. Bigger elements yield bad results. On the other hand, smaller elements make computation so expensive that it takes a long time to get any result. One might never really know where exactly on this scale the chosen mesh size sits.
It is important to run the chosen analysis for different mesh sizes. As a smaller mesh means a significant amount of computing time, it is important to strike a balance between computing time and accuracy. Too coarse a mesh leads to erroneous results. In places where big deformations/stresses/instabilities take place, reducing element sizes allows for greatly increased accuracy without great expense in computing time.
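That balancing act is usually handled with a mesh convergence study: solve the same problem on successively finer meshes and stop refining once the quantity of interest stops changing appreciably. The sketch below is a deliberately tiny one-dimensional finite-difference analogue of such a study, not a full finite element model; the problem and node counts are chosen only for illustration.

```python
import numpy as np

def midpoint_value(n):
    """Solve -u'' = pi^2 sin(pi x), u(0) = u(1) = 0 on n interior nodes; return u(0.5).

    The exact solution is u(x) = sin(pi x), so the exact midpoint value is 1.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x))
    return np.interp(0.5, x, u)

# Odd node counts keep x = 0.5 exactly on a node; watch the error shrink with refinement
for n in (3, 7, 15, 31, 63):
    print(f"{n + 1:3d} elements: u(0.5) = {midpoint_value(n):.6f}   (exact = 1.000000)")
```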
Whether working on a renovation project or compiling as-built information, the amount of time and energy spent on analysis of the object or project at hand can be quite debilitating. The technical literature over the years has come up with several methods for making a precise approach, but inarguably the most prominent method is the application of point clouds.
3D scanners gather point measurements from real-world objects or photos for a point cloud that can be translated to a 3D mesh or CAD model.
But what is a Point Cloud?
A common definition of point clouds would be: a point cloud is a collection of data points defined by a given coordinate system. In a 3D coordinate system, for example, a point cloud may define the shape of some real or created physical system.
Point clouds are used to create 3D meshes and other models used in 3D modeling for various fields including medical imaging, architecture, 3D printing, manufacturing, 3D gaming and various virtual reality (VR) applications. A point is identified by three coordinates that, when taken together, correspond to a precise point in space relative to a point of origin. There are numerous ways of scanning an object or an area with the help of laser scanners, which vary based on project requirements. However, to give a generic overview of the point cloud generation process, let us go through the following steps:
Scanning a space or an object and bringing it into designated software lets us further manipulate the scans and stitch them together, after which they can be exported and converted into a 3D model. Now there are numerous file formats for 3D modeling. Different scanners yield raw data in different formats. One needs different processing software for such files, and each piece of software has its own exporting capabilities. This section will walk you through some known and commonly used file formats. Securing the data in these common formats enables the usage of different software for processing without having to approach a third-party converter.
Common point cloud file formats
OBJ: It is a simple data format that only represents 3D geometry, color and texture. And this format has been adopted by a wide range of 3D graphics applications. It is commonly ASCII (American Standard Code for Information Interchange).
PLY: The full form of PLY is the polygon file format. PLY was built to store 3D data. It uses lists of nominally flat polygons to represent objects. The aim is to store a greater number of physical elements. This makes the file format capable of representing transparency, color, texture, coordinates and data confidence values. It is found in ASCII and binary versions.
PTS, PTX & XYZ: These three formats are quite common and are compatible with most BIM software. They convey data in lines of text. They can be easily converted and manipulated.
PCG, RCS & RCP: These three formats were developed by Autodesk to specifically meet the demands of their software suite. RCS and RCP are relatively newer.
E57: E57 is a compact and widely used vendor-neutral file format and it can also be used to store images and data produced by laser scanners and other 3D imaging systems.
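The ASCII variants of these formats are simple enough to read and write by hand, which can be handy when moving data between tools that disagree on formats. Below is a minimal, illustrative sketch for reading an XYZ file and writing an ASCII PLY file (no colour, normals or binary support); the file names are placeholders.

```python
import numpy as np

def read_xyz(path):
    """Read an ASCII XYZ file: one 'x y z' triple per line (extra columns ignored)."""
    return np.loadtxt(path, usecols=(0, 1, 2))

def write_ascii_ply(path, points):
    """Write an N x 3 array of points as a minimal ASCII PLY file."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# pts = read_xyz("scan.xyz")          # hypothetical input file
# write_ascii_ply("scan.ply", pts)
```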
The laser scanning procedure has catapulted the technology of product design to new heights. 3D data capturing systems have come a long way, and we can see where they are headed. As more and more professionals and end users adopt new devices, the scanner market is rising at a quick pace. But along with this positive market change, handling and controlling the available data becomes a key issue.
Five key challenges professionals working with point clouds face are:
As we have seen in the introduction, the first step in reverse engineering a product is scanning it with the help of 3D scanners. In earlier times, obtaining the dimensions of an existing product was a painstaking task; the methods were time consuming and demanded attention to detail from the very first stage.
However, with the rapid development of scanning technology, the inception of a product has gained speed and the chance of errors has dropped dramatically, which has made 3D scanning and measurement an important part of the process, from the design stage to the inspection stage.
3D laser scanning is the technology to capture a physical object’s exact size and shape using a laser beam to create a digital 3-dimensional representation of the same. 3D laser scanners create “point clouds” of data from the surface of an object.
We will go through point clouds in later sections.
3D laser scanning efficiently measures contoured surfaces and complex geometries which require huge amounts of data for their accurate description, since doing this with traditional measurement methods is impractical and time consuming. It creates accurate point cloud data by acquiring the measurements and dimensions of free-form shapes.
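As a rough illustration of how a triangulation-based laser scanner turns what its camera sees into a distance, consider the classic single-spot set-up: a laser emitter mounted at a known baseline from the camera projects a spot whose position on the image sensor shifts with the depth of the surface it hits. The numbers below are made up for illustration and assume a simple pinhole model with the beam parallel to the camera axis.

```python
def depth_from_offset(pixel_offset, focal_px, baseline_m):
    """Depth of the laser spot for a beam parallel to the camera axis.

    pixel_offset : image-plane offset of the spot from the principal point (pixels)
    focal_px     : camera focal length expressed in pixels
    baseline_m   : lateral distance between laser emitter and camera centre (metres)
    """
    return focal_px * baseline_m / pixel_offset

# Made-up numbers: 800 px focal length, 10 cm baseline
for offset in (40.0, 80.0, 160.0):
    depth = depth_from_offset(offset, 800.0, 0.10)
    print(f"spot offset {offset:5.1f} px  ->  depth {depth:.3f} m")
```

The larger the spot's offset on the sensor, the closer the surface; sweeping the laser line over the object and repeating this calculation per pixel is what yields the point cloud.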
The basic working principle of a 3D scanner is to collect data of an entity. It can either be:
In reverse engineering, the laser scanner's primary aim is to capture enough information about an object's design to convert it, in later stages, into a 3D CAD model, given the compatibility of 3D scans with Computer Aided Design (CAD) software. 3D scans are also compatible with 3D printing, which requires specific computer software.
3D scanning technologies vary in their underlying physical principles and can be classified as follows:
Contact based 3D scanning technology: This process requires contact between the probe and the object, where the probe is moved firmly over the surface to acquire data.
Apart from the scanning technologies, there are various types of 3D scanners. Some are built for short-range scanning while others are ideal for medium- or long-range scanning. The build and usage of a specific scanner depend hugely on the size of the object to be scanned: the scanners used for measuring small objects differ vastly from those used for large bodies, such as a ship.
Here is a brief summary of the types of 3D laser scanners:
3D scanners have contributed a lot over the years and, needless to say, they come with many benefits. Some of them are as follows:
By now, we trust our readers have a clear picture of reverse engineering. To recap: reverse engineering is the process of extracting design information by studying a physical product, with the intent of reproducing the product or creating another object that can interact with it.
In the past, designers resorted to physical measurement of the product to redraw its geometry. Today, designers use 3D scanners to capture measurements. The scanned data is then imported into CAD, where the design can be analyzed, processed, manipulated and refined. Two key aspects that fall into place when focusing on the reverse engineering process are:
A parametric model captures all its information about the data within its parameters: all you need to predict a future data value from the current state of the model is the parameters themselves, which are usually finite in dimension. In other words, a parametric model is one where we assume the 'shape' of the data and therefore only have to estimate the coefficients of the model.
A non-parametric model can capture more subtle aspects of the data. It allows more information to pass from the data currently attached to the model into predictions of future data, and its parameters are usually said to be infinite in dimension. Hence it can express the characteristics of the data better than a parametric model, at the cost that predictions depend not only on the parameters but also on the observed data itself. A non-parametric model is one where we do not assume the 'shape' of the data and must estimate the most suitable form of the model along with its coefficients.
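To make the distinction concrete, here is a small numerical sketch (Python with NumPy; the data, the plane model and the choice of k are purely illustrative). The parametric fit reduces the scanned points to three coefficients, while the non-parametric predictor must keep the points themselves to answer queries.

import numpy as np

# Toy "scanned" points: x, y positions with measured heights z.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1.0 + rng.normal(0, 0.01, 200)

# Parametric: least-squares plane z = a*x + b*y + c.
# Only the three coefficients are needed to predict new data.
A = np.c_[xy, np.ones(len(xy))]
a, b, c = np.linalg.lstsq(A, z, rcond=None)[0]

def predict_parametric(q):
    return a * q[0] + b * q[1] + c

# Non-parametric: k-nearest-neighbour average.
# The prediction depends on the stored data itself, not on a fixed set of coefficients.
def predict_knn(q, k=5):
    d = np.linalg.norm(xy - np.asarray(q), axis=1)
    return z[np.argsort(d)[:k]].mean()

query = (0.3, 0.7)
print("plane coefficients:", (a, b, c))
print("parametric prediction:", predict_parametric(query))
print("non-parametric prediction:", predict_knn(query))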
The previous sections dealt with the initial and middle stages of reverse engineering. This section highlights a stage that is undoubtedly crucial for product development. After a meshed part is aligned, it goes through either surface modeling in tools such as PolyWorks, which generates a non-parametric model (IGES or STEP format), or parametric modeling, where a sketch of the meshed part is created instead of putting it through surfacing (.PRT format). The result is generally called a 3D computer-aided model, or CAD model.
But, what is CAD?
CAD is the acronym for Computer Aided Design. It covers a wide variety of design tools used by professionals such as artists, game designers, manufacturers and design engineers.
The technology of CAD systems has tremendously helped users by performing thousands of complex geometrical calculations in the background without anyone having to break a sweat. CAD has its origin in early 2D drawings, where one could draw objects using basic views: top, bottom, left, right, front, back, and the angled isometric view. 3D CAD programs allow users to take 2D views and convert them into a 3D object on the screen. Put simply, CAD design converts basic design data into a more perceptible and understandable design.
Each CAD system has its own algorithm for describing geometry, both mathematically and structurally.
Everything comes in its own varieties, and CAD modeling is no exception. As the technology evolved, CAD modeling developed different styles. There are many ways of classifying them, but a broad general classification is as follows:
3D models can be further classified into three categories:
Different professionals use different software owing to reasons like cost, project requirements and features. Although each package comes with its own file formats, there are instances where one needs to share a project with partners or clients who use different software. In such cases both parties' software must understand each other's file formats, so it is necessary to have formats that can be accommodated by a variety of software.
CAD file formats can be broadly classified into two types:
Although there are hundreds of file formats out there, the most popular CAD formats are as follows:
STEP: This is the most popular CAD file format of all. It is widely used and highly recommended, as most software supports STEP files. STEP is the acronym for Standard for the Exchange of Product Data.
IGES: IGES is the acronym for Initial Graphics Exchange Specification. It is an old, vendor-neutral CAD file format. IGES has fallen out of favor lately since it lacks many features that newer file formats have.
Parasolid: Parasolid was originally developed by ShapeData and is currently owned by Siemens PLM Software.
STL: STL stands for Stereolithography, the format for 3D information created by 3D Systems. STL is used mostly by 3D printers. It describes only the outer surface geometry of a physical object and carries no color, texture or other attributes (a minimal ASCII STL writer is sketched just after this list).
VRML: VRML stands for Virtual Reality Modeling Language. Although it carries more attributes than STL, it can be read by only a handful of software packages.
X3D: X3D is an XML based file format for representing 3D computer graphics.
COLLADA: COLLADA stands for Collaborative Design Activity and is mostly used in gaming and 3D modeling.
DXF: DXF stands for Drawing Exchange Format, a pure 2D file format native to AutoCAD.
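To see how little the simplest of these formats actually stores, here is a hedged sketch of an ASCII STL writer (plain Python; the solid name and the single example triangle are invented). Each facet carries only a normal and three vertices: no colour, texture or units, which is exactly the limitation noted for STL above.

# Write a minimal ASCII STL file: one facet per triangle and nothing else.
def write_ascii_stl(path, triangles, name="part"):
    # triangles: list of ((x, y, z), (x, y, z), (x, y, z)) tuples
    def normal(a, b, c):
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        length = sum(x * x for x in n) ** 0.5 or 1.0
        return [x / length for x in n]

    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            n = normal(a, b, c)
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single illustrative triangle:
write_ascii_stl("triangle.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])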
Computer-aided design, or CAD, has pushed the entire engineering process to the next level. One can mould or fold, modify or make a new part from scratch, all with the help of CAD modeling software. The many uses of CAD are as follows:
Parametric Design combined with GIS, scanned data and BIM
This post presents our talk in Oslo about research that, thanks to McNeel and Graphisoft tools, has investigated the possibility of reconstructing cities by using GIS and BIM models.
"Just a short description of us:I am Michele Calvano, architect and researcher, expert in parametric procedures for drawing and co-founder of the Arfacade, a group specialized in façade engineering;Next to me there is Mario Sacco, architect and BIM expert; important component of Archiradar, the most important BIM web site in Italy where you can find solution for building Information modeling.
I want to start with an overview of the case study. As you know, since August 2016 the centre of Italy has been hit by a series of seismic events that have completely destroyed some towns along the Tronto Valley, such as Amatrice and Accumoli, which no longer exist.
After the disaster we noticed that we have no information about our historic centres and, above all, no information about small Italian towns in general. That is a problem not only after a disaster, but also during the ordinary administration of our country. So our solution was to create a 3D database capable of holding different information about the landscape, the towns and the buildings.
We decided which kinds of data are useful for our goal and, above all, understood which data are easily available. In the slide I show three kinds of data useful for constructing and informing a 3D model. The most accessible data comes from the web, where we can find geometries and metadata (for example GIS data). Less available are data from a direct survey, because you have to go there, and often there is no possibility of taking measurements with a tape measure or a total station; where it is possible, we can create point clouds using a camera or a laser scanner. Finally, on site we have to understand the styles of doors, windows, walls and so on by observing.
With all these data we developed a workflow using tools by McNeel and Graphisoft. Look at this slide: after acquiring the data, in the second phase of synthesis we created a definition with which to normalize the data and push it into a data flow capable of transforming the 2D geometry inside the GIS file into a 3D shape. The resulting shape is a sort of hull in which it is impossible to distinguish architectural objects. With Grasshopper we deconstruct the shape into faces according to what they represent. Thanks to the Live Connection by Graphisoft we can translate the faces into architectural objects inside ArchiCAD, so the final model is a BIM model.
Going deeper: in Italy, some regional administrations publish landscape information on the web. The Lazio Region uploads GIS files in the shapefile format to describe its territory, so the information is freely obtainable by a simple download from wherever you are.
Let's look at the contents of a shapefile. It is composed of layers, each with different information. For our work we used three layers describing the territory in a 2D representation with attributes.
One layer is composed of curves. If I select… Another layer is composed of closed polygons that are the outlines of the buildings. Inside every polygon there are two points with two different codes: one represents the base elevation above sea level, the other the elevation of the top of the building.
The slide shows a point cloud representing the situation after the earthquake, so it is only partially useful for a 3D reconstruction, but its value increases when it is matched with other data.
Observations on site are very important for understanding the styles of architectural objects: for example the number of storeys, the thickness of the walls and slabs, and the materials and finishes of doors and windows. We translated these attributes into numbers and put them in a spreadsheet.
Grasshopper is a visual programming language generally used to manage simple data that comes from Rhinoceros or from outside sources such as text or numbers. To manage GIS data you need an add-on called @It; you can find many important tools on the web site www.Food4Rhino.com. The new components allow Grasshopper to bring information from the shapefile into the procedure.
Here is the first part of the procedure we adopted. With @It, in Grasshopper we deconstruct GIS data into geometric entities and attributes. We designed a GH definition that uses the attributes to transform the entities: for example, if I know the footprint of a building and I know its elevation, I can transform the 2D boundary by an extrusion; thanks to Grasshopper I can solve hundreds of these problems at once. I can do the same with the terrain, working on the contour lines. The Live Connection allows us to introduce the 3D buildings and the terrain as Morphs in ArchiCAD. In ArchiCAD we can add new attributes to the Morphs and fill the new fields with data from the spreadsheet.
For the terrain, the first step is to introduce the shapefile into Grasshopper and split the geometries from the elevation attributes, then combine them again using the Move transformation; instead of moving the curves one by one, with a parametric procedure we can move them all at once, each to its proper elevation. The aim is to create a polyhedral surface, so we extract points from the curves and connect them with an unstructured mesh. It is impossible to bring this kind of mesh into ArchiCAD using the mesh tool.
So the second step is to create a structured grid of points on the horizontal plane and project them along the vertical until they hit the mesh in space. The size of the matrix depends on the part of the landscape we want to represent. The image on the left shows, in green, a matrix of regular points used to create a structured mesh suitable for the mesh tool of ArchiCAD, shown on the right.
Creating the buildings means opening the layer of the shapefile that contains the building profiles. Inside every profile there are two points, one with an attribute describing the base elevation of the building and another describing the eaves elevation. In Grasshopper we can combine the geometries with the two attributes, creating two parallel polygons, and between the two curves we can create a solid. At the end, in Rhino, we can see the whole model at a low resolution.
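Outside Grasshopper, the core of that step can be sketched in a few lines of plain Python. The footprint coordinates and the two elevation attributes below are invented; the point is simply that each 2D polygon plus its base and eaves elevations yields two parallel polygons and a set of wall faces, i.e. a simple prismatic hull.

# Build a prismatic building hull from a 2D footprint and two elevation attributes.
def building_solid(footprint_xy, base_elev, eaves_elev):
    base = [(x, y, base_elev) for x, y in footprint_xy]
    top = [(x, y, eaves_elev) for x, y in footprint_xy]
    walls = []
    n = len(footprint_xy)
    for i in range(n):
        j = (i + 1) % n
        # each wall is a quad between consecutive base and top vertices
        walls.append([base[i], base[j], top[j], top[i]])
    return {"base": base, "top": top, "walls": walls}

footprint = [(0, 0), (10, 0), (10, 6), (0, 6)]   # hypothetical building outline, in metres
solid = building_solid(footprint, base_elev=412.0, eaves_elev=421.5)
print(len(solid["walls"]), "wall faces")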
Special components from the Live Connection allow us to construct the model you can see on the right. With the Morph component I can import into ArchiCAD the buildings coming from the live definition made in Grasshopper. The inputs for this component are the mesh previously created, the typical settings describing the object in ArchiCAD (the layer name, for example), and a sort of switch that activates the connection between the component and ArchiCAD. The same is done with the Mesh tool. I say "live definition" because when the input data changes, the model in both Rhino and ArchiCAD changes immediately.
In the following part of the work we add more data to the procedure. To increase the accuracy of the model we load the point cloud. Inside ArchiCAD it is easy to draw simple geometries representing openings such as windows and doors. These geometries are objects informed by the attributes previously collected in the spreadsheets.
Now it is time to deconstruct: the Live Connection is able to collect the Morphs and Objects inside Grasshopper and deconstruct them. The outputs are the geometries and all the settings collected so far. The last step is to combine data and geometries in special components which translate Morphs into walls, slabs and roofs, and Objects into doors and windows.
Here we show how we deconstruct a solid to translate its faces into architectural objects: it is important to calculate the angle between each face normal and the z axis. If the angle is 90 degrees, the face will be a wall; otherwise it will be a roof or a slab.
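A minimal sketch of that classification rule follows (plain Python; the tolerance and the way roofs are separated from slabs are our assumptions, not the exact criteria used in the presented workflow).

import math

def classify_face(normal, tolerance_deg=5.0):
    # Label a face from the angle between its normal and the z axis.
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    angle = math.degrees(math.acos(abs(nz) / length))  # 0 = horizontal face, 90 = vertical face
    if abs(angle - 90.0) <= tolerance_deg:
        return "wall"
    elif angle <= tolerance_deg:
        return "slab"   # horizontal face
    else:
        return "roof"   # anything in between is treated as a pitched roof

print(classify_face((1, 0, 0)))      # wall
print(classify_face((0, 0, 1)))      # slab
print(classify_face((0, 0.5, 0.8)))  # roof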
Posted 20 July 2020
A 3D mesh is a solid surface comprised of faces, edges and vertices. 3D meshes can be derived from point clouds, which are collections of points representing an object in a given coordinate system.
Where the geometry of the object of interest is complex, 3D meshes prove effective. Features such as berms or scour beneath a vertical structure are tricky to visualise completely using point clouds alone.
3D meshes are preferable to grids for areas of complex geometry. Grids are 2.5D surfaces allowing only one depth value per cell, which makes them unsuitable for representing complex 3D objects.
Generating a 3D mesh from a dense point cloud - such as those created by our mechanical scanning sonar - represents complex geometry accurately.
While the point clouds generated by modern multibeam echosounders are very dense, they present difficulties when performing sub-metric inspection of an object. This is fundamentally because a point cloud is not a solid surface, and the human eye can struggle to identify objects in it at these scales.
Locating damage is also problematic where LiDAR or acoustic beams have penetrated the object. Creating a 3D mesh allows the human eye to quickly assess areas where there are voids or cracks in the infrastructure.
Read more on our asset inspection page, or get in touch at [email protected].
As computer-aided design (CAD) has become more popular, reverse engineering has become a viable method to create a 3D virtual model of an existing physical part for use in 3D CAD, CAM, CAE or other software. The reverse-engineering process involves measuring an object and then reconstructing it as a 3D model. The physical object can be measured using 3D scanning technologies like CMMs, laser scanners, structured light digitizers, or industrial CT scanning (computed tomography). The measured data alone, usually represented as a point cloud, lacks topological information and is therefore often processed and modeled into a more usable format such as a triangular-faced mesh, a set of NURBS surfaces, or a CAD model.
New technologies are always disrupting the construction and civil engineering industries. Point cloud modeling has existed for a while, but it’s becoming a major tool for contractors and engineers who seek more ease and efficiency when conducting land surveys. It accomplishes the same work with fewer resources spent — which is what every person wants from their business endeavors. But what exactly is a point cloud, and how does it help with surveying work sites?
If you want to learn how to use a point cloud for 3D models, this article can show you how it works — plus what you can gain from it.
What Is a Point Cloud?
A point cloud is a collection of many small data points. These points exist within three dimensions, with each one having X, Y and Z coordinates. Each point represents a portion of a surface within a certain area, such as an engineering work site. You can think of these points similarly to pixels within a picture. Together, they create an identifiable 3D structure. And the denser your point cloud is, the more details and terrain properties you’ll see within your image.
Creating and utilizing a point cloud puts a world of data within your reach, but you must know what to do with it after you generate it. This question can pose a problem for some surveyors — and others may not know how to create a point cloud to begin with. However, both of these problems have easy solutions. When you outline the goals you want to achieve from using a point cloud, you’ll know how to obtain your data and get the most value from it.
You can create point clouds by using two primary methods — photogrammetry and Light Detection and Ranging (LIDAR), which we will discuss in more detail below.
How Is a Point Cloud Created?
How do you create a point cloud when it involves so much detail and so many small points? The answer is typically a laser scanner. Site surveyors can create 3D models from point clouds by using LIDAR lasers. With the laser, you scan a chosen environment — such as a construction site — and the scanner records data points from the surfaces within it.
Once you have the complete point cloud, you can import it into a point cloud modeling software solution. At this stage, you can modify the data points for better accuracy. To see the point cloud in a 3D format that resembles your terrain, you’ll need to export the data from your modeling platform and upload it into a computer-aided design (CAD) or building information modeling (BIM) system.
Using Photogrammetry for Point Cloud Surveying
Photogrammetry is a common method for creating point clouds. With this technique, a drone takes numerous pictures of a construction or civil engineering site. Because the drone uses a camera, you’ll likely need to adjust its settings for the site’s environmental conditions to get the best results. Various angles are required to capture a full view of the landscape. Once all the images are captured, you can use a processing platform to overlap the photos.
By stitching the images together, you can develop a point cloud, create a 3D mesh and produce a complete 3D model within a CAD or BIM program. The process of filling in the gaps between the data points and creating a mesh is known as surface reconstruction. That’s why it’s essential to get as many data points and images as possible — you’ll have fewer spaces to fill in or reconstruct.
In contrast to photogrammetry, remote sensing — which is what LIDAR is categorized as — uses aerial vehicles to study a work site and create data points from it in real time.
What Is a LIDAR Point Cloud?
With the help of drone technology, you can use LIDAR to scan an area and record its data points to produce a point cloud. LIDAR uses infrared light laser pulses to measure distances. When these pulses reflect back to the sensor, it measures how long it took for the light to return. These laser scanners can emit up to 100,000 pulses per second, which gives an incredibly detailed view of the area being mapped.
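The range measurement itself is a simple time-of-flight calculation: the pulse travels out and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A hedged one-function sketch in Python (the example timing value is invented):

# Range from a LIDAR pulse's round-trip time.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_round_trip(seconds):
    return SPEED_OF_LIGHT * seconds / 2.0

# A pulse returning after roughly 333 nanoseconds corresponds to about 50 m.
print(range_from_round_trip(333e-9))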
Once you’ve created your LIDAR point cloud, it goes through a similar process of being transformed into a mesh and developed into a 3D model. Mounting LIDAR hardware onto a drone allows you to use 3D laser scanning to map any area you choose. Attaching the hardware correctly is essential — incorrect setup can impact the drone’s balance, which affects your data’s accuracy.
LIDAR and photogrammetry produce similar levels of accuracy. When choosing which one to use, it’s better to consider factors like how long it takes to set up the equipment and which method will be easier for you to work with.
How Is a Point Cloud Used in Site Models?
What is a point cloud in surveying? Land surveyors use point cloud modeling to create expansive representations of landforms where it would otherwise require tremendous time and effort. Even if your project isn’t huge, using LIDAR drones to collect data increases your efficiency and overall work experience.
Civil engineering sites can consist of roads, subway systems, bridges, buildings and more, which can have complex structures. Surveying these locations manually can stretch out a project's duration and require a bigger budget, but technological advancements like point cloud modeling streamline the process. In general, new technology has significantly impacted civil engineering within the last few years. Additive manufacturing, smart tech and artificial intelligence are just a few examples.
Drone technology and point cloud modeling could also become essential elements of the connected job site. Tasks like geolocation, transferring as-built information and remotely monitoring work sites can all benefit from these two technologies. In turn, companies can improve employee productivity and safety and reduce their insurance and liability costs.
Point Clouds in Earthworks
Point cloud modeling techniques use drones, which have become increasingly popular for earthworks and construction projects due to their flexibility and efficiency. They can fill multiple roles within the building process — from the beginning to the end of any project. Mining, surveying and agriculture are among the many industries that have adopted drone technology for process optimization.
Here are a few ways that drones have shaped modern earthworks jobs so far:
- Improved progress monitoring: Companies that commission earthworks projects don’t always have the time or resources to send people out to their sites to conduct regular checks. Drones enable them to inspect the progress by taking photos of the site and turning them into an orthomosaic. From there, they can use the orthomosaic to create a digital elevation model (DEM) and compare these daily shots to their final project plans.
- Better worker safety: Manual surveying may require workers to walk up and down steep slopes or through rough terrain, which can prove dangerous if someone falls. If you put a drone in the field instead, you can capture data from afar without the injury risk.
- Quicker cut-and-fill: Some companies use topographic surveys to do cut-and-fill comparisons, which can take days to perform on a large or complex work site. Processing the data adds more time to the schedule — but drones can accomplish data collection at faster speeds. Processing, importing and exporting this information using intuitive software becomes simpler.
Point Clouds Used for 3D Models
Constructing a 3D model can change in complexity depending on the building or landscape type and its features. Renovations or retrofits that must be done while the area is still in use add another layer of intricacy, but they are not impossible to do with the right tools. Laser scanners and high-tech modeling software solutions ensure that every possible object is identified and distinguished from the next.
For landscapes with complicated or richly vegetated terrain, it may be necessary to send a surveyor out to supplement any spots the scanner might miss. When you have your data points and begin the conversion from point cloud to 3D model, you’ll likely have more than one scan to work from. Similar to photogrammetry, you’ll need different angles of the same site to get the full picture.
Rendering the data into a 3D mesh organizes the points and sets a foundation that you can use to build a model. Exporting the point cloud creates a file that can be imported into a CAD or BIM system. What are the common point cloud formats? Depending on the software you use, you might see file formats such as:
- PTS: PTS is an open format for 3D point cloud data. Because open formats are maintained by standards organizations, anyone can use them.
- XYZ: XYZ is an archetypal American Standard Code for Information Interchange (ASCII) format. It’s compatible with many programs, but it has no unit standardizations, which can make data transfer more difficult.
- PTX: This is another common format for storing point cloud data, usually from LIDAR scanners. It can only be used on organized clouds — no unordered ones. It’s also an ASCII format.
- E57: This file format is vendor-neutral and compact. It can store point clouds and metadata from 3D imaging systems — like laser scanners. It’s also specified by ASTM International, with documentation in the ASTM E2807 standard. Additionally, it can store properties connected to 3D point cloud data, such as intensity and color.
- LAS: This open format is designed for data obtained from LIDAR scanning, though it can also accommodate other point cloud data records. It combines Global Positioning System (GPS) data, laser pulse range information and inertial measurement units (IMU) to create data that fits on the X, Y and Z axes.
- PLY: Known as the Polygon File Format, this type stores data from 3D scanners. It accommodates properties such as color, texture and transparency. It can contain data from both the point cloud and the 3D mesh.
Whichever file format you decide on, make sure your modeling software can convert your point cloud into one that’s compatible with your chosen CAD or BIM solution.
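As a rough illustration of such a conversion (file names are hypothetical and no error handling is included), the sketch below rewrites an ASCII XYZ cloud as an ASCII PLY file, which most CAD and BIM importers can read.

# Convert a whitespace-separated ASCII XYZ file into an ASCII PLY point cloud.
def xyz_to_ply(xyz_path, ply_path):
    points = []
    with open(xyz_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                points.append(tuple(float(v) for v in parts[:3]))

    with open(ply_path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# xyz_to_ply("site_scan.xyz", "site_scan.ply")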
The Benefits of Point Cloud Modeling
Point clouds aren’t the only way to create 3D models, but they are incredibly beneficial for numerous reasons. Construction managers and civil engineers use 3D models for better machine control, improved accountability with project progress and true-to-life site layouts. Some of the perks of modeling include:
1. Efficiency
Uploading your point cloud into a photogrammetry platform lets you organize the data without the hassle of triangulating every point on X, Y and Z manually. The software does the work for you, which saves you hours of time you would have otherwise spent manipulating data. With these hours shortened, you can pull together the project details more quickly and begin your work sooner — which also means faster completion time.
Data collection is also faster because of the large number of points that can be recorded at once. A drone can sweep an expansive area in much less time than it would take for a surveying team to do the same.
2. Precision
Laser scanning and photogrammetry give quick and accurate results, transforming a living landscape into a detailed 3D model. Ground-based LIDAR can yield results that are accurate within a millimeter scale, while drone-based LIDAR is accurate from 1 to 30 centimeters. Its lasers can penetrate through dense vegetation for a more comprehensive site view.
Additionally, LIDAR often incorporates other features like GPS to ensure each data point comes with accurate information. Photogrammetry, too, uses Real-Time Kinematic (RTK) geo-tags to ensure accuracy in recording the landscape's form.
3. Savings
Because of the greater precision involved in site mapping with point clouds, you can plan a more effective budget for your projects. You can avoid going over your financial limit, and you’ll have fewer chances of running into any costly mistakes or unexpected expenses. Laser scanning also eliminates the need for manual surveying, which reduces the cost of hiring additional labor.
You’ll save money with these decreased or eliminated expenses, but you’ll also earn more on your projects. Your increased accuracy levels can lead more clients to trust you with completing their assignments, which boosts your reputation and encourages more companies to do business with you.
Work With a Point Cloud Modeling Expert
If you’re ready to incorporate point cloud modeling into your next engineering project, work with the experts at Take-off Professionals. We perform point cloud services and mesh conversions to help you process your data. Whether you’re working with a progress takeoff or an as-built, we can work with your information to provide the personalized results you need.
Working with a data modeling expert can help you save more money on your projects and finish tasks more efficiently. Conversion and processing require expertise and a fine-tuned eye for detail, which can lead to time-consuming mistakes if done on your own. By enlisting the services of our trained technicians, engineers and surveyors, you’ll receive results that have been refined by over 20 years of operation.
Fill out our form to learn more about how we can help you with your next job, or call us today at 623-323-8441. We do projects big and small, whether your point cloud consists of one construction site or acres of land.
In the field of SLAM (Simultaneous Localization And Mapping) for robot navigation, mapping the environment is an important task. In this regard the Lidar sensor can produce a near-accurate 3D map of the environment in the form of a point cloud, in real time. Though the data is adequate for extracting information related to SLAM, processing the millions of points in the point cloud is computationally quite expensive. The methodology presented here proposes a fast algorithm that can be used to extract semantically labelled surface segments from the cloud, in real time, for direct navigational use or higher-level contextual scene reconstruction. First, a single scan from a spinning Lidar is used to generate a mesh of subsampled cloud points online. The generated mesh is further used to compute the surface normals of those points, on the basis of which surface segments are estimated. A novel descriptor to represent the surface segments is proposed and utilized to determine the surface class of the segments (semantic label) with the help of a classifier. These semantic surface segments can be further utilized for geometric reconstruction of objects in the scene, or can be used for optimized trajectory planning by a robot. The proposed methodology is compared with a number of point cloud segmentation methods and state-of-the-art semantic segmentation methods to emphasize its efficacy in terms of speed and accuracy.
Keywords: Semantic Surface Segmentation, 3D Point Cloud Processing, Lidar Data, Meshing
1 Introduction
3D mapping of the environment is an important problem for various robotic applications and is one of the two pillars of SLAM (Simultaneous Localization And Mapping) for mobile robots. Various kinds of sensors are in use to achieve this goal. Stereo vision cameras are among the cheapest solutions and work satisfactorily in well-lit, textured environments, but fail in places lacking unique image features. Structured light and Time of Flight (ToF) cameras give real-time depth information for the pixels in an image of a scene (RGBD) and are good for indoor use, but in the presence of strong light, i.e. in outdoor environments, their efficiency suffers considerably. Lidar is the primary choice for mobile robots working in environments with diverse illumination and structural features. Lidar works on the principle of measuring the time of flight of short signature bursts of laser light that can be filtered out from other forms of radiation; as a result its robustness and range are increased. The downside of Lidar is its low resolution, so a fair amount of computation is needed to extract usable information from Lidar data. This computational load is one of the deterrents to its usage, and there is thus scope for research in formulating efficient algorithms for Lidar point cloud processing.
The present work exploits the working principle of a spinning Lidar to generate a near-accurate mesh of the environment in an online fashion. The mesh is built on a subsampled cloud to increase the speed of operation and is used to estimate the surface normals of the subsampled points. On the basis of the angle between surface normals, a simple threshold-based graph traversal approach is used to generate surface proposals. A binned histogram of surface normals is used as the feature to train and apply a Random Decision Forest (RDF) classifier that estimates a semantic surface label for each segment. Such semantic segments can be further utilized to estimate geometric models of object parts in the scene for scene reconstruction, or can be used directly for smarter navigation of mobile robots.
The present paper is divided into the following sections. Section 2 discusses some major works regarding surface segmentation of 3D point clouds. Section 3 elaborates the proposed methodology. Section 4 sheds light on the results of the proposed methodology along with its comparison with other relevant works. Finally, Section 5 concludes the work.
2 Previous Works
The field of Lidar point cloud segmentation is comparatively new. Though segmentation of dense point clouds obtained from meticulous scanning of 3D models is an old problem, fast segmentation of sparse point clouds for robotic applications has gained impetus only in recent years. Some of the important works on the problem are as follows.
According to a survey by Nguyen and Le, the classical approaches for point cloud segmentation can be grouped as: edge-based methodologies (Bhanu et al.), region-based methodologies (Jiang et al.; Vo et al.; Li and Yin; Bassier et al.), derived-attribute-based methodologies (Zhan et al.; Ioannou et al.; Feng et al.; Hackel et al.), model-based methodologies (Tarsha-Kurdi et al.) and graph-based methodologies (Rusu et al.; Golovinskiy and Funkhouser; Landrieu and Simonovsky; Ben-Shabat et al.).
Vo et al. proposed an octree-based region growing method with segment refinement, which Bassier et al. further improved with the help of a Conditional Random Field. Variants of the region growing approach were proposed earlier by Jiang et al., and later Li and Yin followed a similar approach to work on the range image generated from 3D point clouds. For segmentation of unorganized point clouds, a Difference of Normals (DoN) based multiscale saliency feature was considered by Ioannou et al. For Lidar, it is often easier to process the point cloud if it is represented in polar or cylindrical coordinates rather than Cartesian coordinates. Line fitting to segments of a point cloud represented in cylindrical coordinates was proposed in the work of Himmelsbach et al., and the result was further filtered to extract the ground surface. Using an undirected graph for mesh building, with subsequent estimation of the ground surface by local normal comparison, was also proposed by Moosmann et al. A fast instance-level Lidar point cloud segmentation algorithm was proposed by Zermas et al.; it uses a deterministic iterative multiple-plane-fitting technique for fast extraction of the ground points, followed by a point cloud clustering methodology named Scan Line Run (SLR).
Point cloud segmentation methods using deep learning are a recent trend, and there are a few works in this direction. PointNet Qi et al. [2017a] uses a 3D sliding window approach for semantic labelling of points, assuming that local features in the neighbourhood window of a point are enough to estimate its semantic class. PointNet++ Qi et al. [2017b] further refined the method by applying PointNet on a nested partitioning of the cloud in a hierarchical fashion. PointCNN Li et al. first learns a transformation that weights the input features describing a point and a permutation that puts the points into an ordered form, and then applies a Convolutional Neural Network (CNN) on the ordered points for semantic labelling. PointSIFT Jiang et al. is a preprocessor for various deep-net-based approaches that applies a 3D version of SIFT (Scale-Invariant Feature Transform) to the point cloud to make it orientation and scale invariant, thereby enabling the training of networks with fewer instances. Point Voxel CNN Liu et al. uses a voxel-based approach for convolution, saving the time and memory otherwise wasted on structure mapping of point clouds. DeepPoint3D Srivastava and Lall uses a multi-margin contrastive loss for discriminative learning so that directly usable, permutation-invariant local descriptors can be learnt. Most of the approaches use convolutional neural networks, but graph convolutional networks (GCN) can also be used for semantic segmentation of 3D point clouds Li et al.; in a GCN the convolution happens on subgraphs rather than local patches, which is useful for non-Euclidean data such as point clouds. Very recently, PolarNet Zhang et al. used a polar-grid-based representation for online semantic segmentation of point clouds from spinning Lidars.
In earlier works, the surface normal is estimated from a point neighbourhood determined by tree-based search. For sparse point clouds, like those from Lidar, this approach is time consuming and prone to errors. This motivated us to develop a fast online meshing of the point cloud which can be used for surface normal estimation during the scan itself. An earlier attempt Mukherjee et al. used this normal as the feature for surface segment propagation, resulting in unlabelled segmentation. For semantic labelling, recent works have relied heavily on deep learning; we instead use traditional machine learning methods for estimating surfaces that can later be used for surface fitting in scene reconstruction with vector models. The semantic surfaces, especially the detected ground plane, are directly useful for robot navigation.
3 Proposed Methodology
The process of semantically segmenting surfaces from Lidar data consists of four major stages, as shown in Fig. 1. Part of the system was designed in our previous work Mukherjee et al., which generated only surface segments from point clouds. In the present form, semantically labelled surface segments are generated. Other than the first stage, which remains unchanged, all stages are updated, and a new semantic segmentation stage is added.
The first stage forms the mesh from the subsampled point cloud. The subsampling is based on skipping a regular number of vertical scans while the Lidar makes a single spin. The second stage estimates a surface normal for each subsampled point using the local mesh information; note that in actual use these two stages can be performed in an online fashion. The third stage forms segment proposals based on the local distribution of surface normals; the number of proposals generated depends on the nature of the point cloud and on a control parameter discussed later. The final stage processes the proposed segments with a machine-learning-based classifier to assign a semantic label. The entire process is independent of Lidar orientation and scale, but applies only to spinning Lidars.
A spinning Lidar works by spinning a vertical array of units that measure distance by laser. Each unit individually measures distance by the time-of-flight method using a signature pulse of laser light that does not overlap with secondary radiation present in the environment. The data acquired is thus a point cloud in spherical coordinates. Each point in the cloud can be defined by a triple (θ, ϕ, d), where θ is the constant vertical angle of a sensor unit with respect to the plane perpendicular to the spinning axis, ϕ is the horizontal angle that varies as the array spins, and d is the distance measured by the laser sensor unit. If only a single spin is considered, this representation helps in structuring the point cloud. In our methodology not all ϕ values are considered; the cloud is subsampled at regular intervals. This can be done safely as the horizontal density of points from a spinning Lidar is quite high, and by varying the sampling interval the horizontal density of the point cloud can be varied. This step is also necessary because points that are too close can give a bad estimate of the surface normal due to sensor noise. Figure 2 (Mukherjee et al.) elaborates the working principle of a spinning Lidar and the resultant point cloud formed for an object with multiple surfaces. The sampling interval in this case is 2, i.e. every second point is sampled for further computation. The points left out are labelled according to their nearest labelled point horizontally, to generate dense surface segment proposals.
3.1 Mesh Construction
A crucial and significant part of the entire methodology is the novel fast mesh generation procedure, whose speed ensures the speed and accuracy of the overall methodology. The mesh generation stage can be performed online, as no global data is required: the links between points are established during the sweep itself, in the following manner. Let a point be denoted as p(i, j), where i indexes the vertical sensor and j the horizontal shot. Let i range over [0, R-1], corresponding to the R vertical sensors in the array, and j range over [0, C-1], where C is the number of times the sensor array is sampled uniformly during a single spin. The distance of a point from the sensor unit is d, and for computational purposes it is assumed that d is measured from a virtual centre from which all the sensor units diverge. Let the topmost sensor in the array correspond to i = 0, with the sensors counted in top-to-bottom order within a vertical array, and let the first shot in a spin correspond to j = 0. With these assumptions in place, the mesh is constructed using the following rules.
- For every new vertical shot during the spin, form links between each in-range point and its already-scanned neighbours in the previous shot and in the same shot. This covers most of the points during the spin.
- Form links between consecutive points of the lowermost row for all shots of the spin. This takes care of the lowermost circle of the mesh, which is not handled by rule 1.
- At the end of the spin, form links between the points of the last shot and the corresponding points of the first shot; for the lowermost point, form a link between the last and first points of the bottom row. This completes the cylindrical mesh.
It should be noted that rules 1 and 2 start after two samples and are performed for all vertical shots during the spin, whereas rule 3 applies only to the last vertical shot. A link is formed between a pair of points only if both points are within the range of the Lidar. For points with a complete neighbourhood, six of the eight neighbours are linked to the point; if all eight neighbours were linked, crosses would form in the mesh, violating its very definition of forming non-overlapping triangles.
Figure 3(a) (Mukherjee et al.) shows the connectivity for a point with six valid neighbours on the mesh. The mesh is stored in a map of vectors in which each point is mapped to the vector of its valid neighbours, ordered by traversing anticlockwise starting from the bottom direction. This ordering is obtained either by an insertion sort or by considering three consecutive vertical shots at once. Even if not all six valid neighbours are present for a point, the order is maintained, which is crucial for the normal estimation stage. The computational complexity of the mesh generation stage is O(n), where n is the number of subsampled points. The mesh is generated during the spin of the Lidar, so in an actual scenario the computation time depends on the angular frequency of the spinning Lidar and on the subsampling factor.
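The following is a simplified sketch in Python of this kind of grid-style linking over a structured scan. The exact index offsets of the original rules are not reproduced; the sketch links each in-range point to its upper, left and upper-left neighbours, which yields the six-neighbour connectivity described above once the symmetric links are counted, and closes the last shot against the first to wrap the mesh into a cylinder.

# points[i][j]: 3D point from vertical sensor i at horizontal shot j, or None if out of range.
def build_mesh(points):
    rows, cols = len(points), len(points[0])
    neighbours = {}  # (i, j) -> list of linked (i, j) indices

    def link(a, b):
        if points[a[0]][a[1]] is None or points[b[0]][b[1]] is None:
            return
        neighbours.setdefault(a, []).append(b)
        neighbours.setdefault(b, []).append(a)

    for j in range(cols):
        for i in range(rows):
            if i > 0:
                link((i, j), (i - 1, j))        # vertical link
            if j > 0:
                link((i, j), (i, j - 1))        # horizontal link
            if i > 0 and j > 0:
                link((i, j), (i - 1, j - 1))    # one diagonal only, so no crossing edges
    # wrap-around links between the last shot and the first close the cylinder
    for i in range(rows):
        link((i, cols - 1), (i, 0))
        if i > 0:
            link((i, cols - 1), (i - 1, 0))
    return neighbours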
3.2 Normal Estimation
A surface normal is generated for each point in the mesh, estimated from the ordered neighbours of the point. This stage can also be performed in a pipelined fashion: once the neighbours of a point are known, its surface normal can be computed, with no need to wait for the completion of the spin. A point joined with one of its neighbours forms a 3D vector, i.e. all the links in the mesh are actually vectors; for surface normal computation the vectors are directed from the point in question towards its neighbours. A normal can be estimated for a point only if it has at least two neighbours, so a normal cannot be estimated for points with a single link, though in reality that is a very rare scenario. A map stores the surface normals of all valid points; for valid points, the process of neighbour formation is described in Algorithm 1.
To compute the surface normal at a point, the point is connected to its neighbouring points in anticlockwise order to form a set of vectors. In the same order, consecutive neighbouring vectors are cross-multiplied to generate a set of candidate normals, and the surface normal of the point is the weighted average of these candidates. During mesh formation the neighbours are stored in sorted order, as described in Section 3.1. For example, if two consecutive vectors are obtained by connecting the point with two neighbouring points, their cross product is the corresponding candidate normal. The weight of a candidate is the inverse of the larger of the two neighbour distances; the farther neighbour therefore plays the major role in determining the weight, and the weight is inversely proportional to that distance. Consequently, candidates arising from nearby neighbours contribute more to the surface normal of the point. For points on surface edges the normal estimate is flawed due to interference from edge points of the neighbouring surface, but it is almost impossible to distinguish between such surfaces at this stage of the algorithm.
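A simplified re-implementation of that weighting idea is sketched below (Python with NumPy; this is not the authors' code, and the toy neighbourhood is a flat patch chosen so that the expected normal is the z axis).

import numpy as np

def point_normal(p, ordered_neighbours):
    # Estimate a surface normal at p from its anticlockwise-ordered neighbours.
    p = np.asarray(p, dtype=float)
    vecs = [np.asarray(q, dtype=float) - p for q in ordered_neighbours]
    normal = np.zeros(3)
    for a, b in zip(vecs, vecs[1:]):
        candidate = np.cross(a, b)
        norm = np.linalg.norm(candidate)
        if norm == 0:
            continue
        weight = 1.0 / max(np.linalg.norm(a), np.linalg.norm(b))  # farther neighbour sets the weight
        normal += weight * candidate / norm
    length = np.linalg.norm(normal)
    return normal / length if length > 0 else normal

nbrs = [(1, 0, 0), (1, 1, 0), (0, 1, 0), (-1, 0, 0), (-1, -1, 0), (0, -1, 0)]
print(point_normal((0, 0, 0), nbrs))  # approximately (0, 0, 1)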
3.3 Segmentation by Surface Homogeneity
Once the surface normals of all the points are computed, surface segment proposals are generated using those normals. A label map assigns a segment label to each point; initially every point is unlabelled. Whether two neighbouring points get the same label depends on the angle formed between their surface normal vectors and on a normalized distance between the points, so surface homogeneity is determined by these two quantities. Figure 4 illustrates the parameters for a single vertical shot, though the relation holds for all links in the mesh. Segment labels are increased gradually and assigned to each segment. Considering the mesh as a graph, each segment label propagates following a depth-first search. The algorithm for segment generation is elaborated in Algorithm 2, which uses the normal map and the mesh computed in the earlier stages to label the whole subsampled point cloud in an inductive fashion.
The algorithm starts at the first point of the cloud and proceeds in column-major order of the cloud expressed as a 2D array, where the columns correspond to the vertical shots of the Lidar during the sweep. For a sampling interval k, the resultant subsampled array is of size R × (C/k) for a Lidar with R laser units in the vertical array. Not all entries in the array are valid, as many points are out of range, so only points within Lidar range are processed by the algorithm. The tree traversal may follow any direction to propagate. Once a segment can no longer be expanded while maintaining surface homogeneity, the segment label is increased and the next unlabelled valid point becomes the seed; a linear read is thus performed every time a new segment is generated. The number of segments increases with more stringent thresholds (i.e. small values of the angle and distance thresholds) and with more complex scenes.
The angular threshold is chosen empirically as 15 degrees (expressed in radians), and the normalized distance threshold is likewise set to a small empirical value. Due to subsampling, not all points of the original cloud become part of a segment, i.e. some remain unlabelled. To generate a densely segmented map, the labels of unlabelled points are estimated from their nearest labelled point in the same horizontal sweep. The horizontal sweep is chosen because of its high density of points: due to spatial proximity, the likelihood of sharing a label is much higher along the horizontal sweep than along the vertical array.
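A compact sketch of that propagation is given below (plain Python; the normalized-distance criterion is omitted for brevity and the threshold is the 15-degree value mentioned above, so this is an illustration of the idea rather than the exact Algorithm 2).

import math
from collections import deque

def grow_segments(normals, neighbours, angle_thresh_rad=math.radians(15)):
    # normals: {point_id: unit normal (nx, ny, nz)}; neighbours: {point_id: [point_id, ...]}
    labels = {p: -1 for p in normals}      # -1 means unlabelled
    current = 0
    for seed in normals:
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = deque([seed])
        while stack:
            p = stack.pop()
            for q in neighbours.get(p, []):
                if labels.get(q, 0) != -1:  # skip labelled points and points without normals
                    continue
                dot = sum(a * b for a, b in zip(normals[p], normals[q]))
                dot = max(-1.0, min(1.0, dot))
                if math.acos(dot) <= angle_thresh_rad:
                    labels[q] = current
                    stack.append(q)
        current += 1
    return labels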
3.4 Surface feature extraction and classification
After the surface segments are formed, they are classified to assign a semantic label. A feature vector is formed for every unique segment label, and for a cloud the list of such vectors is stored in a map together with the semantic class of the corresponding segment. For classification, a number of classifiers have been tried, as discussed in Section 4.3. During training the feature vector and its class are supplied to the classifier; during testing the classifier reports the class for a given feature vector. For all segments, the feature vectors are formed using Algorithm 3.
The concatenated histograms of the surface normals of a segment, along with the surface density, form the feature representing the segment. Each of the three histograms is a distribution of one component (x, y or z) of all surface normals in the segment; the histograms have a fixed number of bins, set empirically, and a binning function determines the bin of the corresponding histogram for each normal component. The present work is concerned with semantic differentiation between different types of surface, namely "plane", "ground plane", "cylinder", "sphere" and "cone". If an environment can be defined as a composition of such basic generator surfaces, then complex models can be estimated by supervised combination of surfaces; a surface like "ground plane" is also of immediate use in robot navigation. It can be observed from the algorithm that the histograms are normalized individually, to prevent bias towards a particular component, since the variance of the normal components differs across surfaces. The density factor also helps bias the classifier towards correcting the labels of bigger segment proposals. During training, the label of a feature vector is chosen as the one corresponding to the majority of its points; because of the local contextual surface propagation logic, different surfaces with a smooth transition may fall under the same segment, and the majority voting mitigates the effect of such mislabelling.
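A hedged sketch of that descriptor follows (Python with NumPy; the bin count and the exact density definition are our assumptions, since only the general construction is described here).

import numpy as np

def segment_descriptor(segment_normals, num_points_total, bins=5):
    # Concatenate per-component histograms of the segment's normals with a density term.
    n = np.asarray(segment_normals, dtype=float)       # shape (k, 3), components in [-1, 1]
    feature = []
    for c in range(3):
        hist, _ = np.histogram(n[:, c], bins=bins, range=(-1.0, 1.0))
        hist = hist.astype(float)
        if hist.sum() > 0:
            hist /= hist.sum()                          # normalise each histogram individually
        feature.extend(hist)
    feature.append(len(n) / float(num_points_total))    # relative size (density) of the segment
    return np.array(feature)

# Example: a near-planar segment whose normals all point roughly along +z.
rng = np.random.default_rng(1)
normals = np.tile([0.0, 0.0, 1.0], (50, 1)) + rng.normal(0, 0.02, (50, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(segment_descriptor(normals, num_points_total=5000).round(3))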
The proposed methodology uses a statistical approach to prepare surface segment proposals and subsequently uses them for semantic label prediction with a classical classifier, i.e. the features are engineered rather than learnt. This provides insight into the structural properties of point clouds from spinning Lidars. Though deep classifiers are becoming popular, point cloud segmentation with them is computationally very expensive and requires sophisticated hardware. The proposed approach, on the other hand, can work on low-configuration hardware, providing decent accuracy without sacrificing much speed.
4 Experimental Results and Comparison
The proposed methodology is an updated and extended version of our previous work Mukherjee et al. The system is realized in C++ with OpenCV libraries for visualization. The hardware used has 4GB of DDR3 RAM and a first-generation Intel i5 processor. A synthetic dataset was prepared to test the methodology, but the code can also run on a live data stream from Velodyne Lidars. The visual output at different stages of the methodology is shown in Fig. 6. The surface segmentation results are compared with the standard region growing algorithm used in the Point Cloud Library Rusu and Cousins and a region growing algorithm combined with merging for organized point cloud data Zhan et al. The semantic segmentation results are compared with PointNet Qi et al. [2017a] and PointNet++ Qi et al. [2017b].
4.1 Synthetic dataset
A synthetic dataset is created using the "Blensor" tool Gschwandtner et al. Environment model files were created containing regular-shaped objects at different scales, orientations, densities and levels of occlusion. Four kinds of surfaces are placed in the scenes, namely "plane", "cylinder", "sphere" and "cone"; the ground plane is labelled as a separate surface, "ground plane". A Velodyne 32E Lidar was simulated with a fixed horizontal angular resolution. A Gaussian noise model with zero mean was incorporated in the sensor; in reality spinning Lidars are more accurate, and the extra noise is incorporated to test the robustness of the methodology. The dataset contains a number of unique environments (scenes). This is the primary dataset on which we test the surface segmentation process (prior to semantic label assignment), which we refer to as non-semantic segmentation; at this stage only the surfaces are extracted. Figure 5 shows some sample scenes along with their point clouds. The surfaces "plane", "ground plane", "cylinder", "sphere" and "cone" are colored red, off-white, blue, green and grey respectively; this color code holds for all other ground truths and semantic outputs.
To assign the semantic labels we rely on classifiers, which must be trained with sufficient data covering a good mix of surface types. In our primary dataset many scenes contain only one object and mostly ground plane, which could bias the training, so for semantic labelling we consider the remaining, comparatively complex scenes with multiple objects, whose clouds contain a good proportion of the different surface types. The dataset also needs augmentation: additional unique clouds are produced from the selected scenes by shifting the Lidar horizontally. A random selection then divides the clouds into a training set, a validation set and a test set, and the training set is further augmented by mirroring each point cloud along the x and y axes. We use this dataset for semantic labelling and refer to it as the semantic dataset.
4.2 Comparison of performance: non semantic segmentation
For non-semantic segmentation, the performance of the proposed methodology is compared with the others using the precision, recall and F1-score metrics. The evaluation is performed on the primary dataset discussed earlier. Every individual surface in a scene is annotated by a marker/number (not semantically labelled), which makes it difficult to compare the output of a methodology with the ground truth by matching markers. Hence an edge-based matching is performed: the overlap of dilated ground-truth edges with the edges reported by the methodologies is compared.
To reduce the number of points in the cloud, sampling is done as discussed in Section 3. In our experiment the sampling interval has been varied in regular steps, where an interval of 1 corresponds to the native horizontal resolution of the Lidar. The thresholds for angular difference and normalized distance are kept at 15 degrees (in radians) and a fixed small value respectively. Our methodology is compared with Rusu and Cousins and Zhan et al., whose tuning parameters are kept at the default settings suggested by the original authors. Table 1 shows the comparative latency and Table 2 the comparative accuracy of the different methodologies. The proposed methodology for non-semantic segmentation fares significantly well in comparison to the others, both in terms of speed and accuracy. It is observed that for a particular sampling interval the proposed methodology gives the best accuracy without much compromise in speed, and the interval is kept at this value for the rest of the study.
4.3 Choice of classifier
For semantic classification of the extracted surfaces, traditional classifiers have been tried and their performances evaluated. Section 3.4 discussed the features used for classification. We have worked with the semantic dataset for semantic labelling. The classifiers have been trained with the augmented training set, and the validation set has been used to evaluate the performance of the different classifiers. Finally, a classifier is chosen based on the comparison metrics obtained on the validation set. The classifiers evaluated are the multicategory Support Vector Machine with RBF kernel Lee et al., K Nearest Neighbour classifier Altman, Decision Tree Quinlan, Random Decision Forest Ho and Extremely Randomized Tree Geurts et al. Table 3 shows the comparative F1 score, precision and recall. As all the classifiers perform in a similar manner with respect to latency, accuracy becomes the important parameter in making the judgement. Even in terms of the accuracy parameters, all the classifiers provide reasonably good outcomes, which indicates the strength of the proposed feature vector. Finally, based on accuracy, either of the top two classifiers, i.e. random decision forest or extremely randomized tree, can be selected for semantic labelling.
4.4 Comparison of performance: semantic segmentation
For semantic segmentation the performance of the proposed methodology is compared with others using the mean intersection over union (MIoU) metric. As both ground truth and experimental output points belong to a definite semantic class, such a comparison is possible. Class-wise and overall comparisons in terms of MIoU are made considering all the point clouds in the test set. Average precision, recall and F1 scores are also provided. The proposed methodology is compared with pointnet Qi et al. [2017a] and pointnet++ Qi et al. [2017b]. These deep learning based methods are executed on a Linux machine with an Intel Xeon 2.3GHz processor, Tesla K80 12GB GPU and 128GB DDR4 RAM. The pointnet and pointnet++ networks are designed to process small dense point clouds, whereas our dataset consists of large sparse clouds. Hence, due to the constraint posed by GPU memory, contiguous points of size are fed to the network in one shot during both training and testing. The methods work with local contextual information, so breaking the clouds into smaller chunks does not interfere with their working principle. Figure 7 shows the output of the different methods for a few sample scenes. It can be observed that the output of the proposed methodology (for both classifiers) is better than the others for all the classes. Although classes like “plane” and “ground plane” are well detected by the proposed methodology, detection of “cone” suffers.
Table 4 shows the relative latency of the methods. Table 5 shows the relative accuracy of the methods in terms of average F1 score, precision and recall over all points in all clouds of the test set. Table 6 shows the class-wise and overall accuracy of the methods in terms of MIoU. For calculating the overall MIoU, all points from all clouds are used rather than the average of the MIoU values of individual test clouds.
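As a reference for these tables, using the standard definition, with $P_c$ and $G_c$ denoting the predicted and ground-truth point sets of class $c$ pooled over all test clouds as stated above:

$$IoU_c=\frac{|P_c\cap G_c|}{|P_c\cup G_c|},\qquad MIoU=\frac{1}{C}\sum_{c=1}^{C} IoU_c$$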
From the analysis of comparative results it can be said that the proposed methodology can deliver acceptable accuracy at real time speed. In general, spinning Lidars are operated at a maximum speed of rotations per second and the average time of execution of our methodology is to milliseconds ( to FPS) depending on the choice of classifier. As a portion of the methodology is computed along with the Lidar spin, a more optimized version of the system can run in real time without any frame loss. Thus the proposed methodology can run on a non-GPU, low configuration system, in real time, delivering an MIoU accuracy of over .
5 Conclusion
The present work deals with the problem of semantic surface segmentation from Lidar point cloud data. The proposed methodology has a novel fast meshing process that generates a surface mesh from the Lidar scan in an online fashion, facilitating fast computation of surface normals. Subsequently a statistical method generates segment proposals. The proposals are described with a novel feature vector based on the distribution of surface normals. Semantic labelling is done by feeding the feature vector as input to a classifier. The performance of the proposed methodology is compared with some popular cloud segmentation methods. It is observed that the proposed methodology is significantly faster and provides higher classification accuracy. It can be concluded that the proposed methodology can deliver acceptable accuracy for robotic applications in real time and paves the way for further utilization of semantic surfaces towards the generation of models and scene reconstruction.
References
- An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician 46 (3), pp. 175–185. Cited by: §4.3, Table 3.
- Segmentation of large unstructured point clouds using octree-based region growing and conditional random fields. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 42 (2W8), pp. 25–30. Cited by: §2, §2.
- Graph based over-segmentation methods for 3d point clouds. Computer Vision and Image Understanding 174, pp. 12–23. Cited by: §2.
- Range data processing: representation of surfaces by edges. In Proceedings of the eighth international conference on pattern recognition, pp. 236–238. Cited by: §2.
- Fast plane extraction in organized point clouds using agglomerative hierarchical clustering. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 6218–6225. Cited by: §2.
- Extremely randomized trees. Machine learning 63 (1), pp. 3–42. Cited by: §4.3, Table 3.
- Min-cut based segmentation of point clouds. In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, pp. 39–46. Cited by: §2.
- BlenSor: blender sensor simulation toolbox. In International Symposium on Visual Computing, pp. 199–208. Cited by: §4.1.
- Fast semantic segmentation of 3d point clouds with strongly varying density. ISPRS annals of the photogrammetry, remote sensing and spatial information sciences 3 (3), pp. 177–184. Cited by: §2.
- Fast segmentation of 3d point clouds for ground vehicles. In 2010 IEEE Intelligent Vehicles Symposium, pp. 560–565. Cited by: §2.
- Random decision forests. In Proceedings of 3rd international conference on document analysis and recognition, Vol. 1, pp. 278–282. Cited by: §4.3, Table 3.
- Difference of normals as a multi-scale operator in unorganized point clouds. In 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, pp. 501–508. Cited by: §2, §2.
- Pointsift: a sift-like network module for 3d point cloud semantic segmentation. arXiv preprint arXiv:1807.00652. Cited by: §2.
- Fast range image segmentation using high-level segmentation primitives. In Proceedings Third IEEE Workshop on Applications of Computer Vision. WACV’96, pp. 83–88. Cited by: §2, §2.
- Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4558–4567. Cited by: §2.
- Multicategory support vector machines: theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association 99 (465), pp. 67–81. Cited by: §4.3, Table 3.
- Deepgcns: can gcns go as deep as cnns?. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9267–9276. Cited by: §2.
- A fast segmentation method of sparse point clouds. In 2017 29th Chinese Control And Decision Conference (CCDC), pp. 3561–3565. Cited by: §2, §2.
- Pointcnn: convolution on x-transformed points. In Advances in neural information processing systems, pp. 820–830. Cited by: §2.
- Point-voxel cnn for efficient 3d deep learning. In Advances in Neural Information Processing Systems, pp. 963–973. Cited by: §2.
- Segmentation of 3d lidar data in non-flat urban environments using a local convexity criterion. In 2009 IEEE Intelligent Vehicles Symposium, pp. 215–220. Cited by: §2.
- Fast geometric surface based segmentation of point cloud from lidar data. In International Conference on Pattern Recognition and Machine Intelligence, pp. 415–423. Cited by: §2, Figure 2, Figure 3, §3.1, §3, §3, §4.
- 3D point cloud segmentation: a survey. In 2013 6th IEEE conference on robotics, automation and mechatronics (RAM), pp. 225–230. Cited by: §2.
- Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660. Cited by: §2, Figure 7, §4.4, Table 4, Table 5, Table 6, §4.
- Pointnet++: deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems, pp. 5099–5108. Cited by: §2, Figure 7, §4.4, Table 4, Table 5, Table 6, §4.
- Induction of decision trees. Machine learning 1 (1), pp. 81–106. Cited by: §4.3, Table 3.
- 3d is here: point cloud library (pcl). In 2011 IEEE international conference on robotics and automation, pp. 1–4. Cited by: §4.2, Table 1, Table 2, §4.
- Fast geometric point labeling using conditional random fields. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 7–12. Cited by: §2.
- Deeppoint3d: learning discriminative local descriptors using deep metric learning on 3d point clouds. Pattern Recognition Letters 127, pp. 27–36. Cited by: §2.
- Hough-transform and extended ransac algorithms for automatic detection of 3d building roof planes from lidar data. In ISPRS Workshop on Laser Scanning 2007 and SilviLaser 2007, Vol. 36, pp. 407–412. Cited by: §2.
- Octree-based region growing for point cloud segmentation. ISPRS Journal of Photogrammetry and Remote Sensing 104, pp. 88–100. Cited by: §2, §2.
- Fast segmentation of 3d point clouds: a paradigm on lidar data for autonomous vehicle applications. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 5067–5073. Cited by: §2.
- Color-based segmentation of point clouds. Laser scanning 38 (3), pp. 155–161. Cited by: §2, §4.2, Table 1, Table 2, §4.
- PolarNet: an improved grid representation for online lidar point clouds semantic segmentation. arXiv preprint arXiv:2003.14032. Cited by: §2. | https://www.arxiv-vanity.com/papers/2009.05994/ |
Welcome to the interactive web schedule for the 2018 Spring NEARC Conference!
For tips on how to navigate this site, visit the "Helpful Info" section. To return to the NEARC website, go to www.northeastarc.org/spring-nearc.html.
UPDATE AS OF MAY 16: Some of our presenters have made their slides or other resources available to download.
Tuesday, May 8 • 11:00am - 11:30am
PRESENTATION: Generating a 3D Point Cloud from UAV Images
AUTHORS: Katrina Schweikert, Blue Marble Geographics
ABSTRACT: These days, it seems that more and more people are discovering creative uses for UAVs or drones, whether it's Amazon's plans to deliver packages, film makers looking for that perfect shot, or law enforcement officials keeping an eye on the bad guys. The GIS industry has eagerly jumped on this proverbial bandwagon, with the rapid proliferation of UAVs opening the broad field of remote data collection and processing to an ever wider audience. Continual improvements in airborne technology and the miniaturization of the requisite sensors have cultivated a nimble new branch of the industry that provides cost-effective data collection services on demand. In this presentation we will explore one increasingly common workflow for UAV operators, in which overlapping geotagged images are processed to create a high-density 3D point cloud. Using the new Pixels-to-Points tool in Global Mapper's LiDAR Module, we address the challenge of calculating the volume of a landfill represented in a surface model derived from a photogrammetrically-generated point cloud. This procedure involves initial visualization of the array of images to ensure optimal and consistent coverage of the landfill area; the establishment of the parameters and settings that allow the output to be customized; the generation of the point cloud along with an accompanying orthoimage and 3D mesh; the identification, reclassification, and filtering of ground or bare earth points; and the creation of a precise Digital Terrain Model (DTM) from which accurate volumetric calculations can be derived.
Tuesday May 8, 2018 11:00am - 11:30am EDT
Room 202
Concurrent Sessions (30 Minutes), UAV
Tags: 3D, Analysis, Remote Sensing, UAV/Aerial Mapping, LiDAR/Point Cloud Processing
https://springnearc2018.sched.com/event/EDJG/presentation-generating-a-3d-point-cloud-from-uav-images
What is Reverse Modeling or Engineering?
The process involves taking measurements of the object with the use of a 3D scanner and then using the 3D data to create a 3D CAD (computer-aided design) model. A CAD model is the standard format for manufacturing.
Does 3D scanning an object produce a CAD Model?
A common misunderstanding of the 3D scanning process is that the direct output is a CAD model that can be imported into any CAD system. Unfortunately, this is not the case. The direct output of all optical scanners is point cloud or polygon mesh data. A CAD modelling stage is needed if a geometric solid model is required.
If you have the right software and experience, point cloud data tells you everything you need to know in order to reconstruct a part in CAD. Unfortunately most mainstream CAD packages (example: AutoCAD, Rhino3D, Solidworks) are not particularly good at working with this type of data. Dedicated 3D scan data processing packages (example: Geomagic, Rapidform XOR and Polyworks) are great at converting point cloud data into CAD models.
Why use 3D scanning for this process?
It is difficult to take measurements of objects with organic or complex shapes. It is easy and fast to achieve this task using a 3D scanner since we can get measurements of an object from 3D scans rather quickly. A Coordinate Measuring Machine (CMM) only provides a discrete number of points, and although very accurate, it doesn’t compare to the information acquired by scanning, which shows how the surfaces flow.
When 3D scanning an object, all surface geometry is captured, including imperfections caused by the manufacturing process and any damage the part may have suffered. Typically the part will be remodelled to capture the design intent and to disregard imperfections. There are some good reasons for this. Firstly, modelling in every single defect could be time consuming and therefore expensive. Secondly, one of the main reasons for reverse engineering is to remake the part. Therefore the requirement is to create a ‘perfect’ part representing true design intent. This may require a detailed understanding of the function (depending on the part being modelled) because only then can the design intent be correctly interpreted.
Users can start using Design X right away. It uses familiar tools for solid modeling, and skips the old reverse engineering workflows involving painstaking polygon editing and surface generation. | http://www.3dreveng.com/reverse-engineering/
AutoCAD has a polyface mesh entity (PFACE command) which is capable of describing a mesh object, something that other CAD systems usually call “polygon mesh”. Basically a collection of triangle and/or quad faces connected to each other by common vertices. … The polyface mesh entity cannot have more than 32,767 vertices.
How do you make a polyface mesh in Autocad?
To create a polyface mesh, you specify coordinates for its vertices. You then define each face by entering vertex numbers for all the vertices of that face. As you create the polyface mesh, you can set specific edges to be invisible, assign them to layers, or give them colors.
What is a mesh in Autocad?
A mesh model consists of vertices, edges, and faces that use polygonal representation, including triangles and quadrilaterals, to define a 3D shape. Unlike solid models, mesh has no mass properties. … You can drag mesh subobjects (faces, edges, and vertices) to shape the mesh object.
How do I convert mesh to surface in Autocad?
To Convert a Mesh Object Into a NURBS Surface
- Click Convert Mesh tab Convert Mesh Convert to Surface.
- Select a mesh object and press Enter. The object is converted to a procedural surface.
- Click Surface tab Control Vertices panel Convert to NURBS.
- Click the surface object to convert it to a NURBS surface.
How do you cut a polyface mesh in Autocad?
- First go to the Limit Selection tool.
- Select the object – workplane / surface with which you want to trim the mesh.
- Select the option ‘Keep both’ from Limit selection pop up toolbar.
- And finally select the.
What is mesh in FEA?
Finite Element Method reduces the degrees of freedom from infinite to finite with the help of discretization or meshing (nodes and elements). One of the purposes of meshing is to actually make the problem solvable using Finite Element. By meshing, you break up the domain into pieces, each piece representing an element.
What is mesh in design?
Mesh generation is the practice of creating a mesh, a subdivision of a continuous geometric space into discrete geometric and topological cells. … Meshes are used for rendering to a computer screen and for physical simulation such as finite element analysis or computational fluid dynamics.
What is the meshing?
Meshing is an integral part of the engineering simulation process where complex geometries are divided into simple elements that can be used as discrete local approximations of the larger domain. The mesh influences the accuracy, convergence and speed of the simulation.
How do I delete a mesh in Autocad?
(Right-click in the drawing area and click Subobject Selection Filter.) Press Delete.
…
Press Ctrl+click one of the following mesh subobject types:
- To remove only that face, click the face.
- To remove adjacent faces, click their shared edge.
- To remove all faces that share a vertex, click the vertex.
How do you extrude in Autocad 2020?
Help
- If necessary, on the status bar click Workspace Switching and select 3D Modeling.
- Click Solid tab > Solid panel > Extrude.
- Select the objects or edge subobjects to extrude.
- Specify the height.
How do you turn a mesh into a solid?
Help
- Click Mesh tab Convert Mesh panel Convert Options drop-down.
- Specify one of the following conversion options: Smooth, optimized. …
- Click Mesh Modeling tab Convert Mesh panel Convert to Solid.
- Select a mesh object that has no gaps or intersecting faces.
How do I convert mesh to surface in Rhino?
- Step 1: Start Rhino.
- Step 2: Select Import… from the File menu. The import file dialog box is displayed. …
- Step 3: Zoom to the extents of the drawing (View->Zoom->Extents). You will find the mesh from pipe. …
- Step 4: The Rhino command prompt ‘Select a mesh to create a nurbs surface’ is displayed.
How do you convert solid mesh to Solidworks?
Because the feature only creates surfaces, the recommended workflow is to trim the surfaces to form a solid. To create a surface from mesh feature: In Tools > Options > Import, under File Format, select STL/OBJ/OFF/PLY/PLY2 and click Import as Graphics Body. Then click OK. | https://birchlerarroyo.com/autocad/what-is-a-polyface-mesh-in-autocad.html |
I didn’t mean for this little project to get so out of hand, but when there is a will… there is a really long winded way of solving a problem.
Firstly, my fuel injection conversion needed a swirl pot. A swirl pot is an intermediate fuel tank in between your original low pressure fuel tank and the high pressure fuel system. It is not always needed, depending on whether you swapped out your old low pressure fuel pump for an in tank high pressure pump or not. Seeing as I already own a perfectly good Facet lift pump, which is feeding my carburettor, I decided to install a swirl pot. The advantage of using a swirl pot is there is a far lower chance of fuel starvation in corners but at the cost of added complexity and weight.
I bought a shiny aluminium ‘pot a while back for this very task and needed to mount it somewhere in the engine bay. Because I’m putting throttle bodies on the engine I needed to get rid of the battery tray to make space for the trumpets. Having moved the battery and removed the tray I had gained a little space at the back of the bay for the ‘pot. However this area wasn’t exactly flat and true, which made the potential mounting of a flat aluminium tank a bit complicated.
My initial plan was to weld in a little steel support platform, simple right? It would have been a pain and would have required a lot of welding in the bay… and it was way too simple a solution.
My girlfriend had been pretty positive about my whole 3d printer fixation and she suggested that there might be a printable solution to my problem. Of course I shrugged this off straightaway.
“It’s not an even surface; it would be an utter pain to measure and get right… I couldn’t guarantee it would fit, blah blah blah, nonsense nonsense, i’m an idiot” – Josh Ogilvie, 2015
As always she was completely right, there was a printable solution, I just needed a copy of the surface I was working on… time for 3D Scanning!
Working in an F1 team means you’re surrounded by a large number of greatly experienced and talented colleagues who are, more than likely, into the same weird stuff you are. Fortunately Highly-Experienced-Engineer-Come-Pro-CAD-Modeller Mark was at hand and he pointed me at an Xbox Kinect style 3D scanner which would allow me to get the point cloud data I was looking for (If you really need to know it was an ASUS XtionPRO Live).
So on one isolated sunny October afternoon I set to work scanning the car. After trying every laptop in the house it was apparent that 3D Scanning is an extremely CPU intensive task (duh!) and that the only machine powerful enough was my PC; so that got dragged out onto the drive. The process was still very slow and I had to be patient not to move the camera too fast or it would lose sync with itself.
I used the free FARO Scenect software to record the point cloud data and I found it very straightforward. It showed me what I had scanned in real time but did not try to do any extra meshing or reduction on top of that, so it was fairly rapid. The data is even in colour so you can tell what you’re looking at. I tried my best to get the panel from as many angles as possible, increasing both the mesh density and its accuracy.
The point cloud data was then exported as an .xyz file for import into MeshLab. I had never used any of this software before so it took me a few evenings of trial and error with different solutions until I found the one I liked. Take it from me, MeshLab rocks. It’s an extremely flexible mesh manipulation tool and has everything you need to turn a point cloud into an STL or equivalent file. In the end I settled on the following straightforward workflow:
- Save off a copy of your Point Cloud data and hide it so you don’t overwrite it; you know it makes sense.
- Import the Point Cloud
File->Import Mesh->Pick the XYZ File, MeshLab does the rest
- Delete any unneeded points
- Orient the Point Cloud
Do this now or it’ll never be aligned again; trust me. Filters->Normals, Curvatures and Orientation->Transform: Move, Translate, Center
- Poisson Reduction
Filters->Sampling->Poisson-disk Sampling
You will have too many points to sensibly create a mesh out of, so you are going to want to average them out. This method uses statistical probability to ensure you lose the least amount of LIKELY real points. The hope is you scanned enough data, from enough different angles, that once it is reduced the data will be correct-ish. Remember to enable “Base Mesh Subsampling”.
- Calculate the Surface Normals
Filters->Normals, Curvatures and Orientation->Compute normals for point sets
- Poisson Mesh Construction
Filters->Remeshing, Simplification and Reconstruction->Surface Reconstruction: Poisson
This will build a closed mesh out of your Point Cloud data. It’s important to note that it is CLOSED. If you’re trying to build a surface you’ll need to delete the excess vertices.
Then you can export the results for use in your CAD package of choice. For instance, I couldn’t use the resultant STL data straight away as it had to be further post processed to become a surface object. I put my new surface patch into an assembly along with a model of my Swirl Pot and the rest was fairly straightforward. I drafted two extruded bases from the ‘pot down to the surface to fill the gap.
To my surprise it all actually fit together once printed and after a quick splash of Matt Black I was ready to bolt it all in. I’ll take some more photos once everything is finally mounted in the car and painted.
I want to use this same method build a CFD model of my little red car later next year; wish me luck. | https://www.ogilvieracing.com/fabrication-3d-scanning/ |
The goal of the project is the development of a surface-based approach to the construction of surfaces from three-dimensional clouds of points, for instance sampled from a three-dimensional shape using a digitizing device like a laser scanner. The basic approach is to connect the points by a graph whose edges lie on the desired surface. The initial graph is the so-called surface description graph (SDG), which is based on beta-environment graphs. This surface wire frame is successively augmented by inserting triangles, building up a triangular mesh. The process follows basic constraints that are derived from knowledge of the smoothness behaviour of sampled surfaces. By this method variations in the point density as well as sharp and detailed features of the surfaces can be reconstructed reliably. The algorithm has been implemented in C++ (using the STL) in the software system RecEye using the toolkits Tcl/Tk, Tix, OpenInventor, and OpenGL.
Graphical User Interface:
3D shutter glasses (CrystalEyes) are supported as well as a hand gesture control for picking subsets of points during user interaction.
Fully automatic reconstruction:
Reliable surface reconstruction from 3D point (cloud) data: The input point data may be totally unorganized. Strong variations in the point density are no problem and even sharp edges (surface turns) of the object are reconstructed reliably. Furthermore, the input data can be delivered by any arbitrary scanning system (tactile, laser etc.). The input data is solely the point set (first picture). Then, in a first step the surface description graph (SDG) is computed out of this point set. This graph is then filled successively with triangles. The next three pictures show two snapshots during this reconstruction process and the final result.
The video below shows a complete reconstruction process in slow motion.
Automatic noise reduction in the point set:
Locally-Restricted Reconstruction and Interactive Shape Design:
If the user is only interested in the reconstrution of a subset of the data site, the reconstruction algorithm can be modified as follows:
Points are sequentially connected by single red edges (first three pictures).
The graph represents a first surface estimate. These edges are projected onto the point set and used for the incremental reconstruction algorithm (first picture), which uses the graph as a wire frame (second picture). In order to reconstruct the part around the nose, two more edges are inserted into the triangular mesh (next two pictures).
After the two edges have been inserted (first picture) the edges of the mesh and the newly added edges are projected onto the point set (second picture). The reconstruction of the blue wire frame (third picture) delivers a first good preview of the area the user is interested in.
In order to further extend the reconstruction area, only one single edge is drawn from the mesh to the nose tip (first picture) and then the mesh is projected onto the point set (second picture). Finally, the reconstruction result delivers the front part of the skull (last picture).
The complete interaction process has been performed in about two minutes. | http://www.robertmencl.de/projects/rekonstruktion-aus-3d-laser.html |
Computer Programming (COP)
This is a beginning level course teaching the essentials of logical computer programming (CP) design techniques, which use pseudocode terminology to create language independent algorithms (LIAs). Topics include the programming development process (PDP), flowcharts, the basic computer operations (COs), the use of arithmetic, assignment, logical, relational, increment, and decrement operators, input, output, constants, elementary data types (EDTs), file types, data structures types (DSTs), selection control structures (SCSs), repetition control structures (RCSs), single dimensional arrays, and using security coding techniques for validating user input. A current programming language will be used as a platform to demonstrate the LIAs.
This course provides an introduction to mobile applications programming on popular operating system platforms. Students will become familiar with the software for creating mobile applications and the process of using the software development kit for each platform. Students will create, build, and run simple applications on each platform.
This course is an introduction to the Python programming language. Topics include variables, data types, decision structures, loops, functions, input/output operators, data structures, and classes. Object-oriented programming concepts are introduced.
This course is an introduction to the C++ programming language syntax. Topics include implementation of loops, decision structures, functions, input/output operations, arrays, structures, and overloading. Introduction to object-oriented paradigms of classes, data abstraction, and encapsulation. In addition, secure application development concepts are reviewed.
This course is a continuation of introduction to C++ programming. Topics include pointers, recursion, operator and function overloading, information hiding, inheritance, virtual functions (polymorphism), and traditional object-oriented programming. Standard data structures including arrays, stacks, queues, linked lists, and their implementations are covered. In addition, secure application development concepts are reviewed.
This course introduces the student to the C# programming language. Topics include language syntax, data types, arithmetic expressions, logical expressions, control structures, repetitive control structures, arrays, collections, and string manipulation. C# object oriented programming concepts including classes, inheritance, and polymorphism are covered. Students develop C# program applications using a software integrated development environment (IDE).
Advanced topics using the C# language, including collections, multi-threading, inheritance, generics, polymorphism, XML documents, LINQ, GUI forms and controls, and interaction with databases and web services. Students will develop C# applications using a software IDE (Integrated Development Environment).
This course provides project experience in the development of mobile applications on popular device platforms and cross-platform development. The course examines object-oriented programming concepts and their application to mobile application development. Students are introduced to mobile application interface design, learn how to use persistent data in a mobile application, and explore the process of adding images, sound, and video to applications.
This course introduces Relational Database Management System (RDBMS) data modeling concepts, database normalization process, entity relationship concepts, intermediate Structured Query Language (SQL) programming, physical design issues including centralized and distributed designs, concurrency, transaction processing, locking methods, database administration roles and responsibilities, and database security.
This course is an introduction to Java programming. The topics include loops, decision structures, input output (I/O) operations, arrays, references, classes, objects, inheritance, and data encapsulation. An introduction to GUI design using Java's Swing Package and other Java predefined packages is examined.
The course focuses on advanced Java programming concepts, including interfaces, packages, exception handling, and database interaction using Java Database Connectivity (JDBC), multithreading, and networking capabilities. This course is a continuation of Java's object-oriented features with major emphasis on class implementation. Advanced graphical user interface (GUI) design is implemented using Java's Swing package with a major emphasis on event handling. In addition, secure application development concepts are reviewed.
This course is an introduction to extensible markup language (XML). Topics include using document type definitions (DTD's), XML schema, cascading stylesheets (CSS) and extensible stylesheet language to create well-formed and valid XML documents. XML provides users with a uniform method for describing and exchanging structured data that is independent of applications or vendors.
This course is designed to introduce the skills necessary for creating websites. The course uses the current versions of Hypertext Markup language (HTML) and Cascading Style Sheets (CSS). Topics include using graphics, audio, animation, video, tables, forms, using embedded and external CSS coding, and implementing security strategies.
This course gives students opportunity to understand the relationship of theory to practice through participation in a service-learning experience. Students are required to complete 20 hours of volunteer work, a service-learning contract, and an oral and written reflection of the experience.
This course explores the concepts of Object-Oriented Programming (OOP) including abstraction, encapsulation, inheritance, polymorphism, and multithreading. Students will design, write, compile, execute, and debug Java object-oriented programs. Students will be introduced to software development tools including an Integrated Development Environment (IDE), a source code version control system, a unit testing framework, and the Unified Modeling Language (UML). In addition, the use of cryptographic libraries will be introduced.
This is an elementary course in data structures and algorithm analysis. Topics include basic data structures, complexity analysis, sorting, hash tables, trees, queues, graphs, recursion, dynamic programming algorithms, and nondeterministic polynomial time (NP)-completeness.
This course is an in-depth study of Database Management Systems (DBMS), information management, and retrieval concepts. The course focuses on the relational database model including the design and implementation of a database using a commercial DBMS. Key topics include an overview of database systems, database design, the relational model, physical design, indexing, transaction management, concurrency management, recovery, and tuning.
This course includes an overview of web systems, web standards, server configuration and portal design. Students will apply the fundamentals of interactive web design with a focus on active server pages programming.
An introduction to development techniques for mobile devices. This course covers the components for creating basic and more advanced mobile device applications including user interface (UI) components, persistence of data, application packaging, and more advanced interfaces of the mobile phone software developers kit (SDK). Students will design and develop applications for mobile devices. Sample applications that illustrate features and focus on UI implementation will be compiled and debugged. Students will master memory management techniques, delegation, archiving, and the proper use of view controllers. Students will learn to search and understand reference documentation so they can make use of the many methods and classes available in a mobile application platform.
The course introduces the concepts and methods of configuring web-based servers and employing a current server-side scripting language to create, test and debug server applications. Students will be introduced to the concepts of employing client-side scripting languages to create, test, and debug browser-based (BB) applications that communicate with the servers.
This course will expose students to real world application in a business setting. Students will obtain career-related experiences to utilize their classroom knowledge and skills. | https://catalog.easternflorida.edu/course-descriptions-information/cop/ |
In the context of Core Java, ‘core’ represents the basics. Therefore, core Java simply indicates the fundamentals of Java programming. To kick-start the journey as a Java developer, it is essential to be well-versed in the basic concepts.
Courses:
Java Programming Masterclass covering Java 11 & Java 17
Java Programming for Complete Beginners
- Data types and Variables
- Java Architecture
- Operators and Expression
- Java String Class
- Conditional Statements
- Loops
- OOPs concepts
- Multithreading
- Java IO stream
- Java Collection Framework
Data types and Variables
A variable in Java is a container that stores data values during the execution of a Java program. Every variable is assigned a data type that specifies the type of value it holds.
A data type in Java specifies the different sizes and types of values that a variable can store. Primitive and non-primitive are two data types in Java.
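As a minimal illustration (the variable names and values below are arbitrary examples), a program declaring variables of several primitive types and one non-primitive type:

```java
public class VariablesDemo {
    public static void main(String[] args) {
        int count = 42;         // primitive: 32-bit integer
        double price = 19.99;   // primitive: 64-bit floating point
        char grade = 'A';       // primitive: single character
        boolean active = true;  // primitive: true/false
        String name = "Java";   // non-primitive: String object
        System.out.println(name + " " + count + " " + price + " " + grade + " " + active);
    }
}
```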
Java Architecture
Basically, Java architecture consists of three primary components, namely Java Virtual Machine (JVM), Java Runtime Environment (JRE), and Java Development Kit (JDK).
JVM converts bytecode into machine code, JRE is the environment that makes Java programs run, and JDK is a suite of development tools.
Operators and Expression
An operator in Java is a symbol that performs an operation on variables and their values. Six types of operators in Java include Arithmetic, Assignment, Relational, Unary, Logical and Bitwise.
A Java expression is a combination of variables, operators, literals, and method calls.
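A short sketch combining arithmetic, relational, logical and increment operators on arbitrary example values:

```java
public class OperatorsDemo {
    public static void main(String[] args) {
        int a = 10, b = 3;
        int sum = a + b;                    // arithmetic addition
        int remainder = a % b;              // modulus
        boolean bigger = a > b;             // relational comparison
        boolean both = (a > 0) && (b > 0);  // logical AND
        a++;                                // increment (a becomes 11)
        System.out.println(sum + " " + remainder + " " + bigger + " " + both + " " + a);
    }
}
```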
Java String Class
In Java, a string is an object representing a sequence of characters. To perform various operations on a string, Java provides the String class, which offers many methods, including compareTo(), concat(), length(), substring(), and more.
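For instance, a few of these methods in use (the string values are arbitrary):

```java
public class StringDemo {
    public static void main(String[] args) {
        String s = "Hello, World";
        System.out.println(s.length());           // 12
        System.out.println(s.substring(0, 5));    // "Hello"
        System.out.println(s.concat("!"));        // "Hello, World!"
        System.out.println(s.compareTo("Hello")); // positive: s sorts after "Hello"
    }
}
```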
Conditional Statements
Conditional statements, also known as decision-making statements, are used to execute a specific code block of a Java program based on certain conditions. Java supports five conditional statements, namely if, if-else, if-else-if, nested if, and switch-case.
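A small example showing if-else-if and switch-case on arbitrary example values:

```java
public class ConditionalDemo {
    public static void main(String[] args) {
        int score = 72;  // arbitrary example value
        if (score >= 90) {
            System.out.println("Grade A");
        } else if (score >= 60) {
            System.out.println("Pass");
        } else {
            System.out.println("Fail");
        }

        int day = 3;     // arbitrary example value
        switch (day) {
            case 1:  System.out.println("Monday");    break;
            case 2:  System.out.println("Tuesday");   break;
            case 3:  System.out.println("Wednesday"); break;
            default: System.out.println("Another day");
        }
    }
}
```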
Loops
Loops are used to execute a certain block of code repeatedly, as long as the given condition evaluates to true. They eliminate the need to write the same code block repetitively. Java supports the while loop, for loop, and do-while loop.
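A compact sketch of the three loop forms (the loop bounds are arbitrary):

```java
public class LoopDemo {
    public static void main(String[] args) {
        for (int i = 1; i <= 3; i++) {   // repeats while i <= 3 is true
            System.out.println("for: " + i);
        }

        int n = 3;
        while (n > 0) {                  // condition checked before each pass
            System.out.println("while: " + n);
            n--;
        }

        int m = 0;
        do {                             // body runs at least once
            System.out.println("do-while: " + m);
            m++;
        } while (m < 2);
    }
}
```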
OOPs concepts
The object-oriented programming paradigm aims to bind together data and the functions that work on that data. It is based on classes and objects, where a class is a blueprint for creating objects, and an object represents a real-world entity. In addition, OOP rests on four basic concepts, namely polymorphism, data abstraction, encapsulation, and inheritance.
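A compact sketch of these four ideas, using made-up class names: an abstract base class (abstraction), a private field with an accessor (encapsulation), a subclass (inheritance), and an overridden method called through a base-type reference (polymorphism):

```java
abstract class Shape {                        // abstraction: no concrete area() here
    private String name;                      // encapsulation: field hidden behind a getter
    Shape(String name) { this.name = name; }
    String getName() { return name; }
    abstract double area();
}

class Circle extends Shape {                  // inheritance: Circle is a Shape
    private double radius;
    Circle(double radius) { super("circle"); this.radius = radius; }
    @Override
    double area() { return Math.PI * radius * radius; }  // overridden behaviour
}

public class OopDemo {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);            // polymorphism: base-type reference, subclass object
        System.out.println(s.getName() + " area = " + s.area());
    }
}
```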
Multithreading
Multi-threading enables the execution of multiple threads at the same time. A thread in Java is a lightweight subprocess or the smallest unit of processing. It is used to achieve multitasking.
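A minimal example that starts two threads running concurrently with the main thread (the thread names are arbitrary):

```java
public class ThreadDemo {
    public static void main(String[] args) {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " step " + i);
            }
        };
        new Thread(task, "worker-1").start();  // runs the task in its own thread
        new Thread(task, "worker-2").start();  // runs concurrently with worker-1
    }
}
```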
Java IO stream
IO stands for input and output. In Java, the input stream is used to read data from the source. Meanwhile, the output stream is intended for writing data to the destination. For any Java program to be able to read and write data, one has to import the java.io package.
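A small sketch that writes a line to a file and reads it back with classes from java.io (the file name is an arbitrary example):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class FileIoDemo {
    public static void main(String[] args) throws IOException {
        try (FileWriter out = new FileWriter("notes.txt")) {  // output: write to destination
            out.write("Hello, file!\n");
        }
        try (BufferedReader in = new BufferedReader(new FileReader("notes.txt"))) {
            System.out.println(in.readLine());                // input: read from source
        }
    }
}
```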
Java Collection Framework
In Java, the Collection framework is a unified architecture for storing and manipulating a group of objects. It primarily consists of interfaces, their implementations, and algorithms. | https://www.techgeekbuzz.com/roadmap/java/
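For instance, an ArrayList and a HashMap from the framework in use (the stored values are arbitrary examples):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectionDemo {
    public static void main(String[] args) {
        List<String> languages = new ArrayList<>();   // resizable list
        languages.add("Java");
        languages.add("C");
        languages.add("Python");

        Map<String, Integer> yearReleased = new HashMap<>();  // key-value store
        yearReleased.put("Java", 1995);
        yearReleased.put("C", 1972);

        for (String lang : languages) {
            System.out.println(lang + " -> " + yearReleased.getOrDefault(lang, -1));
        }
    }
}
```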
Basic Python Programming Language S$400 - 15 hours.
Objective:
Python Programming Basic course covers the basic concepts on how to represent and store data using Python data types and variables and use conditionals and loops to control the flow of programs. Participants will be exposed to essential data structures like lists, dictionaries, and tuples to store collections of related data. Our theoretical and extensive hands-on exercises will help the participants to get familiarise with basic programming techniques of Python.
Course Content
Module 1: Introduction to Python - The Python programming language; What is debugging; Formal and natural languages; First program
Module 2: Variables, Expressions and Statements - Values and types; Variable names and keywords; Evaluating expressions; Operators and operands
Module 3: Functions - Function call; Type conversion and type coercion; Math functions; Composition; Adding new functions; Parameters and arguments; Stack diagram
Module 4: Conditionals and Recursion - The modulus operator; Boolean expressions; Logical operators; Conditional execution; Alternative execution; Nested conditionals; Recursion
Module 5: Fruitful Functions - Return values; Composition; Boolean functions; Leap of faith
Module 6: Iterations - Multiple assignment; While statement; Tables; Encapsulation and generalization; Local variables
Module 7: Strings - Length; Traversal and for loop; String comparison; A find function; Looping and counting
Module 8: Lists - List values; Accessing elements; List membership; Lists and for loop; List operations; List deletion; Objects and values; Cloning lists
Module 9: Tuples - Mutability and tuples; Tuple assignment; Random numbers; Lists of random numbers; Dictionaries; Dictionary operations and methods; Aliasing and copying; Sparse matrices
Module 10: Dictionaries - Dictionary operations and methods; Aliasing and copying; Sparse matrices; Hints; Long integers
This assignment is currently closed. | https://jobs.manytutors.com/assignments/basic-python-programming-15-hours-s400
This MOOC teaches you how to program core features and classes from the Java programming language that are used in Android, which is the dominant platform for developing and deploying mobile device apps.
In particular, this MOOC covers key Java programming language features that control the flow of execution through an app (such as Java’s various looping constructs and conditional statements), enable access to structured data (such as Java’s built-in arrays and common classes in the Java Collections Framework, such as ArrayList and HashMap), group related operations and data into classes and interfaces (such as Java’s primitive and user-defined types, fields, methods, generic parameters, and exceptions), customize the behavior of existing classes via inheritance and polymorphism (such as subclassing and overriding virtual methods). Learners will apply these Java features in the context of core Android components (such as Activities and basic UI elements) by applying common tools (such as Android Studio) needed to develop Java programs and useful Android apps.
Learners will work on several hands-on projects throughout the MOOC, i.e., each week will require learners to write solutions to programming assignments that reinforce the material covered in the lecture videos. There will be roughly 4-6 hours of student engagement time per week, including video lectures, quizzes, and programming assignments.
Course Syllabus
Week 1
Approx. 1 hour to complete
Module 1: MOOC Overview
Module 1 summarizes the organization of the MOOC and the topics it covers. It also discusses the MOOC prerequisites, workload, and learning strategies needed to complete the MOOC successfully. It then presents an overview of key features in the Java language, outlining its support for object-oriented programming concepts that guide the development of Android apps.
6 videos (Total 40 min), 1 reading, 1 quiz
Approx. 2 hours to complete
Module 2: Introduction to Android Studio
Module 2 provides an overview of Android Studio, explaining how to install it and apply it to develop a simple app using basic Java and Android
features presented in this MOOC.
13 videos (Total 82 min), 1 quiz
Approx. 4 hours to complete
Module 3: Writing a Simple Android App Using Basic Java Features
Module 3 shows how to write a simple Android app that defines variables using primitive Java data types, shows how to assign values to those
variables, and output them to the Android display using Java classes and methods.
9 videos (Total 72 min), 6 readings, 2 quizzes
Week 2
Approx. 7 hours to complete
Module 4: Control Flow
Module 4 covers Java’s looping constructs (e.g., for loops, while loops, and do/while loops), as well as its conditional statements (e.g., if/else
statements).
11 videos (Total 65 min), 10 readings, 5 quizzes
Approx. 6 hours to complete
Module 5: Structured Data
Module 5 provides more detail on common data structures supported by Java, including built-in arrays, as well as core classes in the Java
Collections Framework, such as ArrayList and HashMap.
10 videos (Total 96 min), 9 readings, 2 quizzes
Week 3
Approx. 12 hours to complete
Module 6: Classes and Interfaces
Module 6 covers Java classes and interfaces, focusing on data types, fields, methods, generic parameters, and exceptions.
7 videos (Total 70 min), 7 readings, 8 quizzes
Approx. 8 hours to complete
ModuIe 7: Inheritance and Polymorphism
Module 7 examines Java's inheritance and polymorphism features (e.g., extending classes and virtual methods).
7 videos (Total 65 min), 7 readings, 4 quizzes
Week 4
Approx. 2 hours to complete
Module 8: Android Calculator App Mini-Project Assignment
Module 8 guides learners through the creation of an Android app that implements a simple calculator, which provides features for adding,
subtracting, multiplying, and dividing numbers input by various means (e.g., via numbers and buttons on the Android user interface). | https://www.cmooc.com/course/4629.html |
In the Acellus Introduction to Java course, students are taught basic programming using the Java coding language. They use the jGrasp editor/compiler along with the Java JDK to design and code, and to learn about variables, operations, data types, input and output, libraries, selection statements, arrays, functions, and methods.
The Introduction to Java course is taught by Ms. Lori Hunt.
Course Overview
Sample Lesson - Methods in Java
This course was developed by the International Academy of Science.
Scope and Sequence
Unit 1: Students prepare for learning to program by discussing what programming is and why we program. They investigate basic concepts of programming, including algorithms, commands, and abstractions, and learn to explore an integrated development environment (IDE). They further discuss programming languages and compilers, and are introduced to jGrasp, the IDE they will use for this class.
Unit 2: In this unit students lay the foundation of their understanding of the Java programming language by studying the basic structure of a Java program and an overview of input and output, including an introduction to basic terminal output. Variables, Java data types, and string data types help fill in this foundation, along with a discussion of Java Libraries, including how to use the Scanner library for input, and how to use the JOptionPane library for both input and output.
Unit 3: Understanding how a computer thinks, as presented in this unit, empowers students to be better programmers. Students also explore additional essential aspects of programming in Java: data types, the ASCII chart, and binary-decimal and decimal-binary conversion.
Unit 4: Continuing their study of data types, students explore Int and Double data types, basic arithmetic operators, order of operations, and type-casting with Int and Double. They investigate operator shortcuts, reading in numeric data with Scanner, reading in numeric data with JOptionPane, and writing a program with arithmetic input. They explore Try - Catch statements including using Try - Catch with Scanner and with JOptionPane, and learn to put it all together to write long division.
Unit 5: Students focus on string format, including string format with numbers and with dates. They learn to work with dates and formatting and to write a program with formatting. Following this unit students are presented with the Mid-Term Review and Exam.
Unit 6: The Java Library Math Class provides programmers with the ability to easily do any math that is needed in their programs. Students explore the Math Class library and learn how to use it to create a program that applies the quadratic formula to solve quadratic equations. They also investigate the Random Class.
Unit 7: The logic of programming begins with selection statements. Students learn to use if statements and boolean data types, to compare string values, and to use if-else and nested if-else statements. They also investigate logic operators, using switch/case statements to make decisions, and planning and programming an AI program. Finally, they consider the social responsibilities of programming.
Unit 8: Students learn to use looping to facilitate the execution of a set of instructions/functions repeatedly while some condition evaluates to true. They learn for loops, while loops, for loops with multiple statements, and nested for loops, as well as how to use break statements to "break out" of a loop.
Unit 9: Using the String Class methods, students learn to perform operations on strings such as trimming, concatenating, converting, comparing, replacing strings etc. They also learn to do algorithms with strings.
Unit 10: In this unit students discover that they can use an array - a data structure - to store a fixed-size collection of elements of the same data type. They learn to set up basic arrays, and to populate, print and traverse arrays. They explore parallel arrays as well as optional button input with dialog boxes, and optional drop down menus with dialog boxes.
Unit 11: Investigating fairly advanced subjects, students review methods, including the general outline of a method. They discuss how functions work in math (and programming), writing and calling on a method, writing methods that use parameters, that have a return, that both use parameters and have a return, and that have an array as a parameter. They also investigate the scope of variables, and reference versus value in passing parameters. Following this unit students are presented with the Final Review and Exam.
https://www.science.edu/acellus/course/introduction-to-java/
Do you want to become a better problem solver?
This Java course will provide you with a strong understanding of basic Java programming elements and data abstraction using problem representation and the object-oriented framework. As the saying goes, “A picture is worth a thousand words.” This course will use sample objects such as photos or images to illustrate some important concepts to enhance understanding and retention. You will learn to write procedural programs using variables, arrays, control statements, loops, recursion, data abstraction and objects in an integrated development environment.
This course is comprised of two 5-week parts.
Part 1 introduces programming fundamentals:
- Problem solving
- Primitive data types and arithmetic expressions
- Object-oriented programming basics
- Branching and Loops
- Arrays
Part 2 covers the following topics:
- String manipulation
- File I/O
- Simple event-driven programming
- Recursion
- Abstract data types
Syllabus
- Take a “real-life” problem and abstract out the pertinent aspects necessary to solve it in an algorithmic manner.
- Formulate formal solutions to well-defined problems using the logic of a programming language.
- Implement formal solutions in Java using an integrated development environment.
- Understand the basics of data abstraction using the object-oriented framework.
Instructors
Ting-Chuen Pong
Professor of Computer Science and Engineering
The Hong Kong University of Science and Technology
Tony W K Fung
Teaching Associate
The Hong Kong University of Science and Technology
Leo P M Fan
Instructional Assistant
The Hong Kong University of Science and Technology
https://www.my-mooc.com/en/mooc/introduction-java-programming-part-1-hkustx-comp102-1x-2/
After a brief history of Python and key differences between Python 2 and Python 3, you'll understand how Python has been used in applications such as YouTube and Google App Engine. As you work with the language, you'll learn about control statements, delve into controlling program flow and gradually work on more structured programs via functions.
As you settle into the Python ecosystem, you'll learn about data structures and study ways to correctly store and represent information. By working through specific examples, you'll learn how Python implements object-oriented programming (OOP) concepts of abstraction, encapsulation of data, inheritance, and polymorphism. You'll be given an overview of how imports, modules, and packages work in Python, how you can handle errors to prevent apps from crashing, as well as file manipulation.
By the end of this course, you'll have built up an impressive portfolio of projects and armed yourself with the skills you need to tackle Python projects in the real world.
Audience
Python Fundamentals is great for anyone who wants to start using Python to build anything from simple command-line programs to web applications. Prior knowledge of Python isn't required.
Table of contents
- Chapter 1:
- Chapter 2: Section 2: Variables
- Chapter 3: Section 3: Data Types
- Chapter 4: Section 4: Control Statements and Loops
- Chapter 5: Section 5: Functions
- Chapter 6: Section 6: Lists and Tuples
- Chapter 7: Section 7: Dictionaries and Sets
- Chapter 8: Section 8: Object-Oriented Programming
- Chapter 9: Section 9: Modules, Packages, and File Operations
- Chapter 10: Section 10: Error Handling
Product information
- Title: Python Fundamentals
- Author(s): | https://www.oreilly.com/library/view/python-fundamentals/9781789806892/ |
Course Objectives:
This course aims to introduce students to the imperative programming principles and acquaint them with the C programming language.
Course Contents:
- Historical Development: 2 hours
History of computing and computers, Types of computers (analog and digital), Generations of computers
- Introduction to Computer Systems: 4 hours
Fundamental concepts of computer, Memory, hardware, software and firmware, Block diagram of digital computer, Computer peripherals
- Programming Preliminaries: 10 hours
Introduction to program and programming language, Types of programming language, Generations of programming languages, Program design methodology, Software development: Stages of software development, Text editor; Assembler, Compiler, Interpreter, Algorithms, Flowcharts, Pseudo codes, ASCII
- Introduction to C: 16 hours
C Basics; variables and constants, The simple data types in C. Operators, Header files, Input and Output statement: Unformatted I/O, Formatted I/O, Type conversion, Loops and Decisions (For loop, while loop, Do while loop, Nested loop Case-break and continue statements, If Else, Else-If and Switch statements), Functions (Variables, Returning a value from a function, Sending a value to a function, Arguments, Preprocessor directives, C libraries, Macros, Header files and proto typing), Recursion
- Arrays and Strings: 4 hours
Initializing arrays, Multidimensional arrays, String; functions related to the string
- Structures and Unions: 3 hours
Initializing structures, Nested type structure, Arrays and structures, Unions
- Pointers: 4 hours
Pointer data type, Pointers and Arrays, Pointers and Functions, Pointers and Structures
- Files and File handling: 5 hours
Opening and creating a file in different modes (Read, Write and Append)
Text Book:
- Rajaraman, V.; Computer Programming in C, Prentice-Hall of India, New Delhi
Reference Books:
- Kelley, A. & Pohl I: A Book on C, Addison Wesley Longman Singapore Pvt. Ltd.
- Yashavant Kanetkar: Let Us C, BPB Publication, New Delhi. | https://www.merospark.com/bachelor-level/programming-language-syllabus-bba-pokhara-university-second-semester/
STUDY - Basic Programming Concepts:
Introduction to the basic ideas of problem solving and programming using
principles of top-down modular design, Flowcharts, Abstraction Mechanisms, Stepwise Refinement.
Syntactic Elements of a Language, General Syntactic Criterion, Formal Definition of Syntax, Semantics,
Storage Management, Static Storage Management, Stack-Based Storage Management, Heap Storage
Management, Operating and Programming Environment.
STUDY - Introduction to Programming Language C:
Data Types, Instruction and its Types, Storage Classes, Operators
and Hierarchy of Operations, Expressions in C, Control and Repetitive Statements, break, continue,
Functions: User Defined Functions and Library Functions, Local and Global Variables, Parameter Passing,
Pointers, Arrays, Strings, C Preprocessors, Structures, Input and Output in C, C-Library.
STUDY - Introduction to the Major Programming Paradigms:
Imperative Language, Object Oriented Languages,
Functional Languages, Logic Languages, Parallel Languages etc. | http://k2questions.com/Subjects-MCA-computer-application/Introduction-to-Computer-Programming-through-C.aspx |
SMU question paper sets of all subjects from old and previous years are updated regularly and are absolutely free to use. Question papers cover Visual Basic 6, VB.Net, C#, ASP.Net, Web, Oracle, Database, SQL, Software Engineering, C, C++, OOPS, MBA, MCA, and BSc IT. This page provides the question paper for Computer Programming ―C Language, SMU - Master of Computer Application.
Course Name
MCA (Master of Computer Application)
Subject Code MC0061 (Computer Programming ―C Language)
Get Questions
PART - A
PART - B
PART - C
Computer Programming ―C Language Syllabus.
Part 1: Introduction to C Programming
Features of C; Basic structure of C programs; A simple C program; More
simple C programs.
Part 2: Constants, Variables and
Declarations
Constants: Integer Constants, Real Constants, Character Constants, String
Constants, Backslash Character Constants; Concept of an Integer and Variable;
Declaring an Integer Variable; The rules for naming Variables; Assigning values
to variables.
Part 3: Operators and Expressions
Arithmetic operators; Unary operators; Relational and Logical operators; The
Conditional operator; Library functions; Bitwise operators; The increment and
decrement operators; The size of operator; Precedence of operators.
Part 4: Some More Data Types
Floating-point Numbers; Converting Integers to Floating-point and
vice-versa; Mixed-mode Expressions; The type cast Operator ; The type char.
Part 5: Input and Output operators
Computer Components; Character Input and Output; Formatted input; Formatted
output; The gets() and puts() functions; Interactive Programming.
Part 6: Making Decisions in C
The goto statement; The if statement; The if-else statement; Nesting of if
statements; The conditional expression; The switch statement.
Part 7: Control Statements
The while loop; The do…while loop; The for loop; The nesting of for loops;
The break statement; The continue statement.
Part 8 Functions
Function Basics; Function Prototypes; Recursion; Function Philosophy.
Part 9: Storage Classes
Storage Classes and Visibility; Automatic or local variables; Global
variables; Static variables; External variables.
Part 10: Arrays and Strings
One Dimensional Arrays; Passing Arrays to Functions; Multidimensional
Arrays; Strings.
Part 11: Pointers, Structures and Unions
Basics of Pointers; Basics of structures; Structures and functions; Arrays
of structures; Basic pointer operations; Pointers and one-dimensional arrays:
Pointer arithmetic, Pointer Subtraction and Comparison, Similarities between
Pointers and One-dimensional arrays; Null pointers; Pointers as Function
Arguments; Pointers and Strings; Pointers and two-dimensional arrays: Arrays of
Pointers. Basics of structures; Structures and functions; Arrays of structures;
Pointers to structures; Self-referential structures; Unions.Dynamic memory
allocation and Linked list Dynamic memory allocation: Allocating Memory with
malloc, Allocating Memory with calloc, Freeing Memory, Reallocating Memory
Blocks; Pointer Safety; The Concept of linked list: Inserting a node by using
Recursive Programs, Sorting and Reversing a Linked List, Deleting the Specified
Node in a Singly Linked List.
Part 12: File Management
Defining and opening a file; Closing files; Input/Output operations on
files: Predefined Streams; Error handling during I/O operations; Random access
to files; Command line arguments.
Part 13: The Preprocessor
File Inclusion; Macro Definition and Substitution: Macros with Arguments,
Nesting of Macros; Conditional Compilation.
Part 14: Advanced Data Representation
Exploring Data Representation; Abstract Data Types; Stack as an Abstract
Data Type: Array Implementation of a Stack, Implementation of a Stack Using
Linked Representation, Applications of Stacks; Queue as an Abstract Data Type:
Array Implementation of a Queue, Implementation of a Queue Using Linked List
Representation, Circular Queues, Applications of Queues.
© 2006 - 2022, RM Solution. | http://programmer2programmer.net/tips/smu/smu_subject.aspx?id=MC0061 |
This course is an introduction to the Python programming language for students without prior programming experience. We cover python basic syntax, variables, data types, control flow, functions, object-oriented programming, and graphical user interface-driven applications The course is designed to provide an introduction to the Python programming language. The focus of the course is to provide students with an introduction to programming, I/O, and prepare them for advanced courses in python.
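As a taste of the fundamentals listed above, here is a small, self-contained Python sketch (illustrative only, not actual course material) touching variables, data types, control flow, and a simple function:

```python
def describe(number):
    """Return a short description of an integer using basic control flow."""
    if number % 2 == 0:
        return f"{number} is even"
    return f"{number} is odd"

# Variables and basic data types
count = 5          # int
price = 19.99      # float
name = "Python"    # str

# A simple loop driving the function above
for n in range(count):
    print(describe(n))

print(f"{name} costs {price}")
```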
2 Weeks
- Basic computer skills
- Computer and internet connections
- Some programming experience would be beneficial.
- Introduction to python
- Setting up the environment
- First python program
- Executing python scripts
- Python syntax
- Variables
- Comments
- Data Types
- Operators
- String manipulation
- Working with numbers
- Conditional statements
- Using Loops
- Functions
- Collection data types
- Dates and time
- Using Modules
- User input
- Files handling
- OOP
After successful completion of the project, the student will get a certificate from IST to validate their skills. | https://www.isteducation.com/courses/python-programming-for-beginners/ |
This edition of the Computing at School (CAS) newsletter covers a range of topics including:
*Unplugged computing magic tricks
*Programming using Alice, Scratch and GameMaker
*Making games with Kodu
*Object oriented programming in Java with Greenfoot
Computing: The Next Generation (Autumn 2009)
This early edition of the Computing at School (CAS) newsletter includes articles on Scratch programming and curriculum structure in lower secondary school. Also, videos showing the importance of maths in computer science are highlighted.
Intermediate Programming with LiveCode
A series of eleven guided tasks with LiveCode for students with some prior experience, based on the full software development cycle.
Variables and arrays are assigned using keyboard input, logic and maths calculations are carried out and the results displayed in a simple user interface.
Each task...
Advanced Programming with LiveCode
This series of lessons and exercises covers more advanced programming concepts and techniques. Pseudocode is used within the full development cycle to aid understanding of event-driven programs. Variables are covered in-depth, and a range of loops and selection statements are used for flow of control. Complex array...
Data Representation: Bitmap Images
Using a spreadsheet as a grid of 'pixels', this computing activity teaches how 1's and 0's can store image data. The classroom exercises use images with increasing pixel resolution, looking at how this affects the clarity of the image. Moving from black-and-white images, the students then use grids of colour...
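The core idea can be sketched in a few lines of Python (an illustration to accompany the resource, not part of it): a black-and-white image is simply a grid of 1s and 0s, and printing the grid reveals the picture.

```python
# Each row is a list of bits: 1 = black pixel, 0 = white pixel.
image = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# Render the bitmap in the terminal: '#' for a 1, '.' for a 0.
for row in image:
    print("".join("#" if bit else "." for bit in row))
```

A higher resolution simply means more rows and columns of bits, which is why the image becomes clearer as the grid grows.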
An Introduction to Python (v2.7 and v3)
This learning resource is an introduction to programming with Python. Versions are included for Python 2.7 and Python 3.
The fundamentals of programming are covered:
• Arithmetic operations
• Data types
• Control flow
As well as some more advanced techniciques including the use of:...
Introduction to Computing
This textbook is written to explain computing from first principles, and appeals to a broad audience beyond many computing texts. With clear and concise explanation, useful diagrams and a structure that builds on previous understanding, it is aimed at post-16 students but sections would be equally useful at all...
So You Want to Learn to Program?
This e-book gives an introduction to programming in the BASIC language for middle to high school students. It can be used as:
*a nine or 18 week-long introduction to programming
*a brief introduction to programming concepts
*an introduction to data structures for non-programmers
* a...
STEM Learning Magazine: Post-16 and FE - Spring 2016
Under the banner of Building a successful network, Dr Katherine Forsey and Sue Churm consider how to create a technical network and how to get the most from it, Jenny Phillips explores the challenges FE practitioners face and Ed Walsh explains what the teacher recognition scheme...
Efforts to increase students’ interest in STEM studies and careers: National measures taken by 30 countries, 2015 report
This publication is the latest in a series of reports about national efforts to increase students’ interest in pursuing Science, Technology, Engineering, and Mathematics (STEM) studies and careers... | https://www.stem.org.uk/resources/search?f%5B0%5D=field_age_range%3A83&f%5B1%5D=field_subject%3A92&%3Bf%5B1%5D=field_subject%3A50&%3Bamp%3Bf%5B1%5D=field_age_range%3A31&%3Bamp%3Bf%5B2%5D=field_type%3A66&%3Bamp%3Bf%5B3%5D=field_subject%3A92&%3Bamp%3Bf%5B4%5D=field_publication_year%3A32&%3Bamp%3Bf%5B5%5D=field_subject%3A9&page=31 |
Specialization in Java: Part I - For Beginners.
Requirements
You should take this course if you answer 'Yes' to any of the questions below:
- Do you want to learn basic programming using Java?
- Are you completely unfamiliar with computer programming, or with programming in Java?
- Do you want to learn the most popular programming language in the world?
- Do you want to learn basic Java programming for developing Android apps?
- Do you want to learn how to work with objects in Java?
If you answered yes to any of the above questions, then this course is for you.
If you are looking for advanced Java concepts, you will need to wait for the remaining courses of this specialization.
Description
So if you are looking for complete Java video tutorials, or for an online video course to learn Java programming (object-oriented programming, OOP), you are in the right place. This course assumes that you have no prior programming skills and teaches you programming from scratch, covering all the basic concepts in Java.
This course is for complete beginners to programming as well as to Java, and it is one of the quickest ways to learn Java and prepare for Java interviews.
This course covers the basics of programming and introduces fundamental concepts like variables, operators, control statements, and loops using Java. It also explains all lexical units such as keywords, identifiers, separators, and constants.
The course also covers the Java compiler and the JVM (Java Virtual Machine), explaining how Java source code is transformed into a class file and executed by a computer.
We will see why Java is different from other programming languages like C, why it is called an object-oriented programming language, and what objects and classes actually are in Java.
We will also cover variable declaration and initialization and the different data types defined in Java, such as int, double, float, byte, char, boolean, and long, as well as declaring a String. The String class itself will be covered in the next course of this specialization. We will also cover type conversion in Java.
This course also gives a complete description of all the operators provided in Java. There are four main types of operators - arithmetic, bitwise, logical, and Boolean logic operators - all of which are covered in detail in this course.
Finally, we will discuss conditional statements like if/else and switch, which are found in almost every program.
This course covers concepts required for Android application development: Android is implemented in Java, so to program for Android you will need the fundamentals of Java.
The most interesting part of this course is that we are going to create a game to apply all the concepts learned in this tutorial.
With this course, you will learn the basic concepts of Java in a very short period of time.
This course and this specialization serve as a complete reference for Java.
Who this course is for:
- Anyone who has a Computer and access to the Internet.
- And has a passion for coding. | https://nocourses.com/t/specialization-in-java-part-i-for-beginners/1532 |
By the end of this project you will create a fully functioning Tic-Tac-Toe game in a console application in which two players can play against each other. This will be achieved by applying and practicing many programming concepts that programmers use throughout their careers, such as advanced if statements, advanced arithmetic operations, loops, arrays, and 2D arrays. By applying these concepts you will also be able to create different types of programs that users can interact with. These programming concepts can also be applied using other programming languages such as Java and Python, not just C++. Prerequisites: familiarity with basics such as variables, data types, if-statements, and basic arithmetic operations in programming, for which it's recommended you take "Introduction to C++ Programming: Build a Calculator". Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
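As the description notes, the same ideas carry over to other languages. Purely as an illustration (this is not the course's C++ code), a tic-tac-toe board can be held in a 2D array and checked for a winning row with a loop and an if statement, shown here in Python:

```python
# 3x3 board stored as a 2D list; ' ' marks an empty cell.
board = [
    ["X", "X", "X"],
    ["O", " ", "O"],
    [" ", " ", " "],
]

def row_winner(board):
    """Return 'X' or 'O' if a player owns a full row, otherwise None."""
    for row in board:
        if row[0] != " " and row[0] == row[1] == row[2]:
            return row[0]
    return None

print(row_winner(board))  # prints: X
```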
Software Development
Computer Science
Computer Programming
C++
In a video that plays in a split-screen with your work area, your instructor will walk you through these steps: | https://gb.coursera.org/projects/introduction-to-cpp-programming-create-a-tic-tac-toe-game |
Java is an object-oriented language designed for use in the distributed environment of the internet. By practicing the Java code that is presented in the video tutorials you will become familiar with basic Java programming techniques and will be able to write a number of simple Java programs.
Java is a computer software programming language specifically designed for use in the distributed environment of the internet. Java looks similar to the C++ programming language, but it is simpler to use.
This course covers key Java concepts and basic programming techniques for beginners. Learners are encouraged to reflect on the concepts and practice the Java code that is presented in the video tutorials. By the end of this course and following enough practice, the student will become familiar with basic Java programming techniques and will be able to write a number of simple Java programs.
Fundamental concepts in Java are discussed in this course. In every programming language, certain rules must be followed. Java is an object-oriented language and involves objects and classes. Each time a new object is created, at least one constructor will be invoked. An important rule is that constructors should have the same name as the class. When Java code is written in Eclipse, suggestions automatically appear, helping the user to write the code. Keywords used in the code, such as public, static, and void, are explained in relation to the function of programs.
Defining variables in Java programs is also demonstrated. Variables store data that programs can manipulate. Different variable types have different capacities for size, layout, and the range of values that can be stored in memory. While loops and if statements are demonstrated. A while loop in Java repeatedly executes a target statement as long as a given condition is true.
This course will be of great interest to all learners who want to learn the key concepts Java and practice basic Java programming techniques. | https://alison.com/course/java-programming-for-complete-beginners |
Basic knowledge of Grasshopper required.
Introduction to the Python programming language and the Python script component in Grasshopper. Calculating “things” and plotting geometries from Python code. The course will start with the core concepts (module imports, variables, indentation, etc.), from which we will advance through data types, input parameters, conditional statements, loops, and list comprehensions, and conclude with functions and recursion.
Grasshopper plug-ins used: GhPython
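Two of the topics mentioned above, list comprehensions and recursion, can be previewed with a short stand-alone sketch (plain Python for illustration, independent of Grasshopper/GhPython):

```python
def factorial(n):
    """Recursion: the function calls itself until it reaches the base case."""
    return 1 if n <= 1 else n * factorial(n - 1)

# List comprehension: build a list of factorials in a single expression.
values = [factorial(n) for n in range(6)]
print(values)  # [1, 1, 2, 6, 24, 120]
```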
Agata Migalska is a co-founder of Parametric Support, Berlin-based spin-off offering optimization services for architecture. Agata is an IT specialist with almost a dozen years of experience in designing and developing various size software including in-house business applications, e-commerce solutions, and image processing algorithms for scientific purposes. She has even more years of experience programming just for fun. Agata is a Certified ScrumMaster and an advocate of Agile methodologies in software development, a PhD candidate in Computer Science and an academic tutor at Wrocław University of Technology in Poland, and a co-organizer of PyLadies meetup group in Berlin. | https://arxace.com/courses/hello-python/?level=3 |
In this course, students start from the basics and learn each topic in depth while getting the practice necessary to master the Java language. After that, the course is geared toward the AP CS A curriculum. Students develop their understanding of computer coding as they learn the core concepts of computer science such as variables, data types, and control structures. This course is highly suggested for students who are planning to take the AP CS A class at school.
Grade 8-11
None or Python experience is required
Subcourses & Curriculum
Java/AP CS A
AP CS A Prep
In this course, we provide students with the knowledge and practice to succeed in every topic of AP CS A. Through this course, students will become comfortable with challenging topics like object-oriented programming.
- Data Types & Variables
- Operators
- If Statement & Loops
- Arrays
- Strings
- Nested Data Structures
- Classes & Inheritance
- Abstract/Interface
- Access Modifiers
- Practice Exams
- Multiple Choice Tips
Learning Outcome
Upon completion, students should feel comfortable writing code by using Java. Students should expect a good score on the AP exam. | https://www.btreecode.com/ap-cs-a |
Posts by Seyed Mohammad Masoud Sadrnezhaad
This course covers all the basic knowledge needed for programming in Python. The course is designed on the assumption that students want to work in the software industry, but it is also useful for academic purposes.
All available resources for this course will be added to the following table.
|Session||Date||Subjects||Slides||Codes||Homework||Solutions|
|1||October 19, 2017||Python Philosophy, What is Python, Python 2.x vs Python 3.x, Python Installation and Setup, Why Python, Python Shell, Python Basic Syntax, Variables, Basic Data Types, Operators, Type Conversion, Truth Value Testing, Sequences, Sequence Operators, List Comprehension, String Formatting, Tuples, PyCharm Integrated Development Environment||python-basics.html||-||-||-|
|2||October 26, 2017||Control Structures, Loops, Sets, Dictionaries, Functions Basics||python-data-types.html||, ,||, ,||, ,|
|3||November 2, 2017||Problem Solving, Deep dive to Function, Modules Concepts, time Module, random Module, math Module, dir() Function, Packages in python||modules.html||-||, ,||, , ,|
|4||November 16, 2017||Problem Solving, Working with Python Package Manager and VirtualEnv, Writing a real-world program using some well-known high-level python packages||venv-packages.html||1||-||-|
|5||November 23, 2017||Version Control Systems||git-version-control-system||-||-||-|
|6||December 21, 2017||Object Oriented Concepts, Class and Object in Python||python-oop.html||-||-||-|
Here you can find a list of awesome beginner-friendly projects implemented in Python. | http://sadrnezhaad.ir/smm/en/courseware/python_programming/sbu/171019
Here, we have provided links to the study materials that will help you prepare for your B.Tech 3rd Year Statistics with R Programming (2020 edition) examinations. By referring to the links below, which contain the study materials in PDF format, and to the list of recommended books, you will be able to ace your examinations. We have also provided further details that will help you do well in your exams and learn more. These study materials help you understand the concepts quickly and give you the best resources to study from.
Statistics with R Programming
Here, we are talking about using the programming language R for statistical programming, computation, graphics and modeling, writing functions, and using R efficiently.
Download Statistics with R Programming
Recommended Books
- The Art of R Programming, Norman Matloff, Cengage Learning
- R for Everyone, Lander, Pearson
- Siegel, S. (1956), Nonparametric Statistics for the Behavioral Sciences, McGraw-Hill International, Auckland.
- R Cookbook, Paul Teetor, O'Reilly.
- R in Action, Rob Kabacoff, Manning
- Venables, W. N., and Ripley, B. D. (2000), S Programming, Springer-Verlag, New York.
- Venables, W. N., and Ripley, B. D. (2002), Modern Applied Statistics with S, 4th ed., Springer-Verlag, New York.
- Weisberg, S. (1985), Applied Linear Regression, 2nd ed., John Wiley & Sons, New York.
- Zar, J. H. (1999), Biostatistical Analysis, Prentice Hall, Englewood Cliffs, NJ
Syllabus
UNIT-I:
Introduction, How to run R, R Sessions, and Functions, Basic Math, Variables, Data Types, Vectors, Conclusion, Advanced Data Structures, Data Frames, Lists, Matrices, Arrays, Classes.
UNIT-II:
R Programming Structures, Control Statements, Loops, Looping Over Nonvector Sets, If-Else, Arithmetic and Boolean Operators and Values, Default Values for Arguments, Return Values, Deciding Whether to Explicitly Call return, Returning Complex Objects, Functions Are Objects, No Pointers in R, Recursion, A Quicksort Implementation, Extended Example: A Binary Search Tree.
UNIT-III:
Doing Math and Simulation in R, Math Functions, Extended Example: Calculating Probability, Cumulative Sums and Products, Minima and Maxima, Calculus, Functions for Statistical Distributions, Sorting, Linear Algebra Operations on Vectors and Matrices, Extended Example: Vector Cross Product, Extended Example: Finding Stationary Distributions of Markov Chains, Set Operations, Input/Output, Accessing the Keyboard and Monitor, Reading and Writing Files,
UNIT-IV:
Graphics, Creating Graphs, The Workhorse of R Base Graphics, the plot() Function – Customizing Graphs, Saving Graphs to Files.
UNIT-V:
Probability Distributions, Normal Distribution, Binomial Distribution, Poisson Distribution, Other Distributions, Basic Statistics, Correlation and Covariance, T-Tests, ANOVA.
UNIT-VI:
Linear Models, Simple Linear Regression, Multiple Regression, Generalized Linear Models, Logistic Regression, Poisson Regression, Other Generalized Linear Models, Survival Analysis, Nonlinear Models, Splines, Decision Trees, Random Forests.
Important Questions
- Explain about Variables, Constants and Data Types in R Programming
- How to create, name, access, merge, and manipulate list elements? Explain with examples.
- Write about Arithmetic and Boolean operators in R programming?
- How do you create a user-defined function in R? How do you define default argument values in R? Write the syntax and give examples.
- Explain functions for accessing the keyboard and monitor, Reading and writing files
- Write an R function to find sample covariance. | https://bookpdf.co.in/b-tech-3rd-year-statistics-with-r-programming-study-materials-book-pdf-download-b-tech-3rd-year-statistics-with-r-programming-study-materials-book-pdf/ |
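For the last question above, the underlying calculation is the same in any language; here is a small illustrative sketch in Python rather than R (the data values are made up, and the formula divides the summed products of deviations by n - 1):

```python
def sample_covariance(x, y):
    """Sample covariance: sum((xi - mean_x) * (yi - mean_y)) / (n - 1)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    return sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (n - 1)

print(sample_covariance([2, 4, 6, 8], [1, 3, 7, 9]))  # 9.333...
```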
W3SCHOOLS C LANGUAGE PDF
C is a high-level structured programming language used for general-purpose programming, developed by Dennis Ritchie at AT&T Bell Labs in the USA. This C tutorial series has been designed for those who want to learn C programming; whether you are a beginner or an expert, the tutorials are intended to cover the basics. C File Handling - C file I/O functions handle data on a secondary storage device, such as a hard disk. C File Operations and Steps for Processing a File.
|Author:||BECKY NICOLAISEN|
|Language:||English, Dutch, French|
|Country:||Georgia|
|Genre:||Academic & Education|
|Pages:||173|
|Published (Last):||25.03.2016|
|ISBN:||564-9-54190-971-5|
|ePub File Size:||21.80 MB|
|PDF File Size:||13.42 MB|
|Distribution:||Free* [*Sign up for free]|
|Downloads:||31046|
|Uploaded by:||JERRIE|
C is one of the most widely used computer programming language. This C Tutorial will help you understand basic to advance c programming concepts. understanding on C programming language from where you can take will help you in understanding the C programming concepts and move fast on the. C Programming Tutorial for Beginners - Learn C programming in simple and easy steps starting PDF Version C is the most widely used computer language.
Examples are often easier to understand than text explanations.
1. Introduction
Control comes out of the loop statements once the condition becomes false. There are 3 types of loop control statements in C language. The statements which are used to execute only specific block of statements in a series of blocks are called case control statements. There are 4 types of case control statements in C programming.
They are 1 switch 2 break 3 continue 4 goto…. The keywords which are used to modify the properties of a variable are called type qualifiers. There are two types of qualifiers available in C programming.
They are 1 const 2 volatile. Constants are also like normal variables….
Purpose Of This Tutorial
Storage class specifiers in C programming tell the compiler where to store a variable, how to store the variable, what is the initial value of the variable and the lifetime of the variable. There are 4 storage class specifiers available in C language.
C Array is a collection of variables belongings to the same data type. You can store group of data of the same data type in an array.
W3Schools Offline Version Download
There are 2 types of arrays in C programming. They are 1 One dimensional array 2 Multidimensional array…. This null character indicates the end of the string. In C programming, strings are always enclosed by double quotes.
C questions and answers
Whereas, character is enclosed by single quotes in C… more…. The variable might be any of the data types such as int, float, char, double, short etc.
Normal variable stores the value, whereas pointer variable stores the address of the variable…. Functions in C programming are basic building blocks in a program.
All C programs are written using functions to improve re-usability, understandability and to keep track of them.
A large C program is divided into basic building blocks called C function. Library functions in C language are inbuilt functions which are grouped together and placed in a common place called a library.
Each library function in C programming language performs a specific operation. We can make use of these library functions to get the pre-defined output instead of writing our own code to get those outputs….
They are 1. In real time application, it will happen to pass arguments to the main program itself.
Steps for Processing a File Declare a file pointer variable. Open a file using fopen function. Process the file using the suitable function.
This tutorial does not cover those shells. A procedural language breaks the program into functions, data structures, etc. Buffer manipulation functions in C programming work on the address of the memory block rather than the values inside the address.
The C variable might be belonging to any of the data types like int, float, char etc….
Use any compression tool like Winrar to extract it. There are 4 types of case control statements in C programming.
Related Posts: | http://nvilnephtalyca.ga/art/w3schools-c-language-pdf-1638.php |
PHP is the most popular server-side language used to build dynamic websites, and though it is a very extensive language, this class will take it step-by-step.
You will learn how to make your pages dynamic based upon user interaction, interact with HTML forms, and store and retrieve information from local data sources, including a database.
Objectives for this class:
- Understand how server-side programming works on the web.
- PHP Basic syntax for variable types and calculations.
- Creating conditional structures
- Storing data in arrays
- Using PHP built-in functions and creating custom functions
- Understanding POST and GET in form submission.
- How to receive and process form submission data.
- Reading and writing cookies.
- Security tips (i.e. SQL Injection)
- Create a database in phpMyAdmin.
- Read and process data in a MySQL database.
Applicable Job Roles: web designer, web developer, webmaster, web application developers
Outline
Week 1: PHP Basics
- Setting up a development environment
- Variables, numbers and strings
- Calculations with PHP
- Using Arrays
Week 2: Control Structures and Loops
- Conditional Statements
- Using Loops for Repetitive tasks
- Combining Loops and Arrays
Week 3: Functions, Objects and Errors
- PHP’s Built-in functions
- Creating Custom functions
- Passing Values by Reference
- Understanding Objects
- Server-side includes
Week 4: Working with Forms
- Building a Form
- Processing a Form’s Data
- Differences between POST and GET
- Preserving User Input
Week 5: More with Forms
- Dealing with checkboxes and radiobuttons
- Retrieving values from lists
- Validating and restricting data
- Sending Email
Week 6: Storing and Protecting Data
- Setting and Reading Cookies
- Protecting Online Files
- Understanding Session Variables
Week 7: MySQL Database Overview
- phpMyAdmin Overview
- Using a MySQL Database
- Reading and Writing Data
Prerequisites
- Completion of Introduction to XHTML (H401) or equivalent knowledge.
- Completion of Introduction to CSS (H151) or equivalent knowledge.
Requirements
- You must have a UNIX/Linux web host account that supports PHP and a MySQL database. | http://iteducation.co.in/introduction-to-php/
We’re at the crossroads of understanding how customer demands are changing with time and how businesses are continuously working to meet those demands with the unpredictable economy. This e-book aims to help finance leaders across mid-sized businesses understand how digital accounts receivable (AR) payments can bring value to their business with improved operational efficiency.
Contents
Chapter 01
Chapter 02
Chapter 03
Chapter 04
Chapter 05
Chapter 06
Cash plays a crucial role in the survival of any business and the ongoing global crisis has prompted businesses to re-evaluate their payment processes to deal with cash flow issues. Late payments coupled with delayed processing for traditional methods of payment such as cash and checks add on to the struggle of preserving cash. For many mid-sized businesses, paper-checks are still relevant but it comes with a steep price to pay. The primary concern is the manual processing time as they take a long time to reflect on the open AR on top of being error and fraud-prone.
Digital payment methods have emerged to address these pain points by streamlining payment processing with added benefits such as real-time payment, better visibility, compliance, fraud protection, and improved cash flow.
When it comes down to the question of survival, cash flow is oxygen for businesses. Having better visibility and getting paid faster with lower transaction fees are critical for finance executives. Real-time payment methods can accelerate cash flow while avoiding the high cost of manually processing traditional paper-based checks. Owing to digitalized payment methods, a faster payment cycle would result in positive cash flow.
This e-book outlines the changing dynamics in the B2B accounts receivable (AR) payments landscape within mid-sized businesses. Understand the challenges involved in traditional payment methods and how it affects the overall cash flow. Explore key market trends and emerging digital payment methods that provide opportunities to improve cash flow and make a positive impact on working capital.
Traditionally, invoices, receipts, and disbursements were all manually paid and processed either through cash or checks. These payment methods resulted in an increasing rate of financial complexity, delays, and risk. Fast forward to the present, digitalization has proven to be a key enabler to change the entire landscape of payments.
While traditional paper check-based payment systems continue to drive the majority of the cash management cycle, there’s an ongoing worldwide reform to switch to real-time payments. Let’s go over the basic payment systems in place and understand the necessity of improving efficiency with reduced operational risks.
According to Mastercard, the B2B payments space stands at roughly USD 25 trillion annually.
Furthermore, according to the National Middle Market Summit, up to 55% of businesses face challenging issues in terms of maintaining a balanced working capital. With opportunities in obtaining valuable information through payments, mid-sized B2B companies can leverage digitally produced insights to address complex problems such as handling the processing of consumer payments while managing regulatory, compliance, and cost-based challenges.
Let’s go over some of the trends that continue to redefine the entire payment landscape.
In order to stay ahead of the curve, it’s important to recognize and address multiple issues in terms of operation and delivery service of payments. Conventional business payment structures continue to face vexing challenges in terms of increased high levels of usability, flexibility, and responsiveness in transactions. Here are some of the challenges faced by businesses –
With poor authorization controls for B2B transactions, the risk of fraud is very high. Some of the top risk segments include check-forgery and cyber fraud. As per reports by the American Bankers Association, 60% of attempted fraud is check related fraud. Businesses following a paper-based manual process are much more likely to be exposed to these risks involved in B2B payments.
With disparate payment methods and processing challenges, there is a lack of visibility into additional costs, error-prone delays, and chargebacks in individual accounts. Furthermore, companies are not able to take care of payment disputes as it is impossible to get a clear picture of finances.
In order to have a competitive advantage, businesses need to accommodate the buyer’s preferred method of payment. The availability of multiple payment formats such as Same-day ACH, Virtual Cards, and Wire makes it possible for businesses to get real-time payments.
Digital payments are not just limited to cards and payment portals, businesses are now taking advantage of Remote Deposits for check payments. Customers can conveniently make remote check deposits using a mobile device. On the customer side, this saves a lot of time, money, and effort of having to go to banks for making a physical deposit. The adaptability of these payment formats helps in taking better cash management and working capital decisions.
With the availability of payment portals, customers can choose to pay for individual or multiple invoices at the same time. As particular invoices are already accounted for in this method, businesses can focus on higher-value tasks instead of manually reconciling individual payments with remittance and invoice information. This removes the risk of human error and provides better control of finances.
The availability of multiple payment formats makes it easier for customers to pay for invoices on time. This will result in a reduction in the overhead of the open AR.
In the B2B payments space, adapting to the changing needs of the business and customers will have a ripple effect going forward. Businesses must determine where they stand in their digital payment journey. Mid-sized business owners can benefit from this digital transformation with faster payments, better customer experience, and improved operational efficiency leading to positive cash flow. The first step is to determine the overall cost and benefits of different payment methods that specifically cater to their business goals.
HighRadius is a Fintech enterprise Software-as-a-Service (SaaS) company that leverages Artificial Intelligence-based Autonomous Systems to help 600+ industry-leading companies automate their Accounts Receivable and Treasury processes.
Processing over $4.7 Trillion in receivable transactions annually, HighRadius solutions have a proven track record of optimizing cash flow, reducing days sales outstanding (DSO) and bad debt, and increasing operational efficiency so that companies may achieve strong ROI in just a few months.
HighRadius is the industry’s most preferred solution for Accounts Receivable & Treasury and has been named a Leader by IDC MarketScape twice in a row.
The RadiusOne AR Suite by HighRadius is a complete accounts receivable solution built for mid-sized businesses to put their order-to-cash on auto-pilot with AI-powered solutions. With out-of-the-box integration, the solution can be deployed in under four weeks without borrowing support and time from internal IT teams. It supports robust API-based connectors for lightning-fast remote deployment.
RadiusOne is designed to automate and fast-track key receivable functions including eInvoicing, Collections, Cash Reconciliation, and Credit Risk Management.
With flexible integration support for popular ERPs such as Oracle NetSuite, Sage Intacct, Microsoft Dynamics, SAP, and many more; the solution aims at streamlining AR processes while eliminating hours of manual and paper-based workload, enabling improved visibility, control, and efficiency.
For more information, visit www.highradius.com
HighRadius RadiusOne AR Suite provides the complete eInvoicing solution designed to automate your invoice delivery and empower customers with a self-service portal to manage, track and pay invoices. It is quick to deploy and ready to integrate with ERPs like Oracle NetSuite, Sage Intacct, Quickbooks, and scales to meet the needs of your order-to-cash process.
Lightning-fast Remote Deployment | Minimal IT Dependency
Prepackaged Modules with Industry-Specific Best Practices. | https://www.highradius.com/resources/ebook/b2b-digital-payments-mid-sized-businesses/ |
It’s the middle of the afternoon as Miranda Jones, project manager, returns home from a meeting with her main customer. The meeting has been successful, the customer is happy with the progress and has asked Miranda to expand the scope of the project. In order to do so, she needs to add two consultants to the team. As she arrives home, she grabs her laptop and logs on to the corporate intranet. She switches to her personal home page, where she finds all elements necessary to quickly accomplish a few managerial tasks. On this page, she studies the time sheets of the consultants who belong to her project group. She finds that one of them has put in a lot of hours during the last week, so she makes a mental note to talk to him and discover the reason for the overtime. She is automatically alerted to the fact that one of her team members has mailed her a request asking for time off. Miranda compares her request to the online time line of the project and sees that this will not pose a problem, so she approves the request.
Next, she fills out a job requisition form to find the two consultants. She keys in a general job description, selects the desired skills and matches the requisition to the corporate resume database. She receives a list of five available consultants that match the request and looks at their details. Two of them fit the description, so she mails them an invitation to meet with her the following day to discuss the project.
Finally, she selects the link to her expense sheet and keys in all the expenses she made during her trip to the customer. When finished, she sends it off to her manager, so it can be approved and the money will be paid out to her account as soon as possible. After reading her mail, she logs off and goes to unpack her suitcase and relax.
From Intranet to Corporate Portal
Although for many companies the above story might read as fiction, using an employee portal to accomplish everyday tasks is a reality. Organisations are discovering the benefit of converting their corporate Intranet into a series of corporate portals linking disparate pages and huge amounts of hard–to-find information. Using technologies that enable real-time access to personalised information, services and applications, employees can accomplish their tasks, when and where they need.
This type of employee empowerment is very important to the modern enterprise. Employees need to make decisions that affect their work and personal lives, and thus need to be provided with access to information that helps them make those decisions. An enterprise portal helps organisations to accomplish this goal, by converting disparate information systems into integrated knowledge systems, which will benefit the organisation from the bottom line up.
A Key Role for Human Resources
Human Resources plays a key role in this transition. Human Resources, traditionally seen as administrating the relationship between employers and employees (like payroll and contract services), is perhaps the most sought-after department in any organisation.
Questions range from “How much vacation time do I have left?” to “How much have I contributed to my retirement fund?” Phone calls and routine inquiries are expensive and take up an inordinate amount of staff time. Employees seeking to change their qualification status and benefits must spend time reviewing and comparing options, seeking explanations from Human Resource professionals, filling out unnecessary paperwork and waiting for a response from HR.
Personalised Access to the Enterprise Portal
In allowing the HR department to shift their focus from administrative support to Employee Relationship Management, technology can play a significant supporting role. An employee portal (sometimes called HR portal) with back-office integration is the part of an enterprise portal that integrates all information about Employee Benefits and Personal Development. Essentially, repetitive information provided by Human Resources is accessible through the portal. Employees receive personalised access to the portal and find information about their benefits. In addition to company information, an organisation can also allow Work/Life events to run in the portal, so employees can register the birth of a child or request time off. Service applications in the portal allow managers and employees to carry out secure transactions on their own behalf, like expense and travel reporting, time sheets, alterations to their life status. They can make their own choices regarding benefits: exchange time for money and vice versa, and in doing so tailor their benefits to fit their individual preferences. The portal holds forms that an employee can use to enter these transactions. When he sends off the form, a workflow is initiated, authorisation takes place, and the transaction is automatically processed in the back office. In the case of Work/Life events, usually there are additional transactions: external parties, like insurance companies, also need to be updated on changes. A portal gives HR professionals, managers and employees integrated, role-based, personalised access to all information they need to accomplish their tasks.
Enhance Efficiency and Productivity
Manual processes are more prone to data-entry error, and are more costly to manage than automated processes. Precious time is often lost, searching for the right documents or information. Time spent printing documents, filling out forms and waiting for management approval can impact on productivity and seriously affect the level of service delivery. After approval on a request, HR professionals often need to enter data from the form into the back office system in order to complete the process.
The use of web-based technologies for automating much of these administrative tasks can enhance the bottom line of your organisation, by lowering transactions costs and creating higher levels of processing efficiency. In addition, self-service supports an organisation’s efforts to contain operating costs, while providing employees with real-time information and services to make them more productive.
Business-Process Integration is Vital for Success
Business-process integration deals with the automation of entire business processes within the organisation, but also with outside parties, like third-party vendors. It uses available data and functionality from existing back-office applications and integrates those with the data and functionality of these other parties. In this way, it becomes invisible as to who performs the business process.
Communicating directly with a back-office system helps automate redundant processing and narrows the possibility of human errors. This increases efficiency and reduces administrative overheads. Integration with a Human Resource Management System (HRMS) is thus vitally important for employee portals. Think of a change of address, for example: when an employee moves, his address needs to be changed in the HRMS of the organisation, and in the system of all parties involved in supplying his benefits, like insurance companies and pension funds. With an integrated portal system, instead of contacting all parties that need to update, the employee need only update his address once, as business logic will ensure that it is automatically sent to all third parties involved.
Benefits: Consistency and Efficiency
Manual processes often involve considerable process variability. One of the most important business benefits is the incorporation of business rules in transaction-processing. Switching to standard processes increases the consistency of HR service-delivery. HR professionals do not always give the same answer to the same question: personal knowledge and experience can have an effect. Yet, if transactions are automated, you create consistency and efficiency – and more profit for your organisation: automation of transactions eliminates repetitive paperwork, electronic approval and routing streamlines organisation, while detailed metrics and reports of HR tasks can be tracked and provided in real-time.
Instant Access to Real-Time Information
Tight integration among applications ensures information is instantly available across the system: since information is entered once and available everywhere, opportunities for data errors are reduced and HR professionals gain time to spend on more strategic issues, such as organisational goals and compensation and benefits programs.
More importantly, key strategic information that has been tracked using paper systems or spreadsheets can be moved online and automated, improving efficiency and business intelligence. Integrated access to HR information can help management make personnel or financial decisions quickly. For example, a skills query allows you not only to scan the resume database to match skills to open positions, but will also give you a means to identify skill shortages in the workforce, enabling you to perform succession-planning.
Portals can offer corporate-wide, self-service access to employee benefits and provide the HR department with fingertip access to important key metrics. They can then quickly derive valuable information and base strategic priorities on more accurate reporting
Measure HR Performance: Introducing Metrics and Reports
When you start delivering streamlined processes in a portal solution, it is vitally important to introduce metrics, so the HR department can start tracking management information about portal activities, processes and results. Most of the time, gathering this information is a manual process, which makes it difficult to establish an integrated view on the performance of the HR department. The more HR activities are automated and moved to a portal, the easier it is to create insightful reports. You can discover how many people are using a certain benefit and what impact that has on the bottom line of your organisation, or calculate the total number of vacation days that employees have bought or sold.
Efficiency and ROI
When streamlining business processes, we have found that efficiency (and ROI) is mostly achieved by the following:
- Self-service: instead of letting the HR department answer all questions, managers and employees receive consistent answers and streamlined processes from the portal. This reduces enquiries to HR and increases employee satisfaction.
- Eliminating superfluous human activity (like copying information and distributing it on paper or entering data more than once) reduces cycle times.
- Enterprise Application Integration (EAI): the integration of back-office systems inside and outside the organisation. Most companies still use legacy systems that cannot be easily upgraded to new, more open systems. EAI allows this data to be accessed and displayed to employees in an integrated way – ensuring optimum efficiency and data integrity.
- Business Intelligence: the ability to measure HR activities and gain insight on key metrics gives support to management when making decisions on employee issues and determining strategic priorities. | https://www.anitalettink.com/2010/06/empowering-employees/ |
Understanding the Finance Workflow
No matter what the business or industry, the finance function is one of the most essential and overworked departments in any organization. Finance personnel perform several significant and multi-faceted roles on a daily basis for the smooth functioning of the organization. The first step to improving the finance process is to understand the financial accounting workflow.
Understanding what finance is about helps organizations track the business’s financial position and manage cash flow and income more efficiently. A streamlined financial workflow ensures that cash inflow is optimally used to improve the business’s bottom line. Tracking the financial workflow enables management to make data-driven decisions on investments, selling, and tax reporting. Maintaining clear financial records helps in maintaining an updated audit trail as well.
Financial accounting is a method that involves gathering, summarizing, and reporting the earnings and expenses of a business over time. This function looks at different forms of financial data:
- Business Revenue– the money earned by the business, which is calculated as the average sales price by the number of sales.
- Assets – the things owned or leased by the business, which include cash and things that can be converted to cash.
- Liabilities – things that the business owns, excluding expenses and debts.
- Equity – the cash value of the business when you go for liquidation.
Two main financial accounting methods are the cash method and the accrual method: the former records financial transactions only when cash actually changes hands, while the latter records transactions as they occur. Streamlined finance business processes are the backbone of a financially viable business. The output of financial processes influences every business decision that the organization takes, important organizational changes, and every budget line item. Finance and accounting business processes are all the methods and procedures carried out by the finance department. Almost all finance and accounting business processes are made up of the following steps:
Gathering finance and accounting data – Financial data gathering is the most important part of finance processes. Accurate data-gathering methods are important to make informed financial decisions. Financial compliance and audit trails require updated financial data. Gathering data pertaining to all financial transactions provides a complete picture of the revenue and expenses of the business. The gathered financial data must be made available for the management for financial planning, forecasting, and budgeting.
Budgeting for individual departments and the entire organization – The finance department is tasked with the review and approval of individual department budgets. Each department prepares its own budget and presents it to the manager for approval. Once the manager approves the budget, it is presented to the finance manager for review and approval. Any changes to the budget numbers are suggested by the finance manager and the final approved version is presented to top management.
Planning and forecasting financial operations – Businesses thrive on proactive planning and forecasting. Planning business operations must be based on past financial performance data. Based on this data, the financial manager can plan and forecast future business operations and activities. Strategic financial planning typically happens annually or semi-annually. Planning and forecasting are carried out when teams sit together and discuss their goals for the next year or 6 months. They also discuss the strategy/methods that they will follow to achieve these goals. Post these discussions, detailed plans of action are created and shared with key stakeholders in the organization. Regular meetings are conducted to review the progress toward achieving these goals.
Financial modeling – Financial modeling is an iterative process that follows the standard steps mentioned below.
1. Entering 3-5 years of historical financial information.
2. Analysis of historical performance.
3. Generate assumptions about future performance.
4. Use these assumptions to forecast and link the income statement, cash flow statement, and balance sheet.
5. Perform discounted cash flow analysis.
6. Audit and stress-test the model.
Creating a financial model helps the finance team analyze the impact of a future event or decision. Financial analysts most often use these models to analyze and anticipate how a company's stock performance might be affected by future events or executive decisions.
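To make step 5 concrete, here is a deliberately simplified, illustrative discounted cash flow calculation in Python (the cash flows and discount rate are made-up numbers, not figures from the text):

```python
# Forecast free cash flows for the next five years (illustrative figures).
cash_flows = [120.0, 135.0, 150.0, 160.0, 170.0]
discount_rate = 0.10  # assumed 10% required rate of return

# Discount each year's cash flow back to today: CF / (1 + r) ** t
present_value = sum(
    cf / (1 + discount_rate) ** year
    for year, cf in enumerate(cash_flows, start=1)
)

print(f"Present value of forecast cash flows: {present_value:.1f}")
```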
Financial closure – Financial close refers to all the accounting and financial activities that take place on a regular basis to close the books on the prior month, quarter, or year. For organizations that own subsidiaries, financial closure involves consolidating financial statements and analyzing inter-company statements. Recording daily transactions, reconciling accounts, recording monthly journal entries, reconciling balance sheet accounts, reviewing revenue and expense accounts, preparation of financial statements, performing a review, and closing systems are the steps in financial closure.
Consolidation – Financial consolidation is the process of combining financial data from several departments or business entities within an organization for reporting purposes. Consolidation includes importing data, mapping general ledgers into a single chart of accounts, normalizing consolidated data, and producing reports called financial statements. The main tasks in financial process consolidation are 1- collecting trial balance data from multiple ledger systems and mapping to a centralized chart of accounts, 2- consolidating the data as per specific financial accounting rules and guidelines, and 3- Reporting the results to internal and external stakeholders.
Financial reporting – Financial reporting may be defined as the process of recording and representing a company’s financial data. Financial reporting in the accounting business process flow refers to the process of producing statements that disclose the financial status of the business to management, investors, and government agencies. Key stakeholders like investors, shareholders, financiers, and government regulatory agencies rely on financial reports for making important decisions.
Main reports that are needed to run a business include external financial statements (mainly the income statement, statement of comprehensive income, balance sheet, cash flow statement, and statement of stockholders’ equity), notes to financial statements, communication regarding quarterly earnings, quarterly and annual reports to stakeholders, financial information posted on the business website, financial reports to government agencies, and documentation on the issuance of common stock and other securities.
Well-designed and streamlined finance processes provide deep and clear insights into the fiscal reality of the organization. Technology-driven financial processes ensure that the finance function operates optimally and that resources are utilized efficiently.
Key Processes in the Finance Function
The finance function is made up of numerous processes, most of which have a direct impact on the business’s bottom line. Here is a list of the main finance processes in an organization:
Budgeting: This process involves planning for future activities based on historical financial data. Each department presents its budget to Finance for approval. Apart from individual department budgets, the Finance department also prepares a consolidated budget that is approved by the CFO.
Billing and Approval: Payments from customers and other entities are collected after appropriate approval of the expense requests. Manual request approvals can be a pain point here, causing endless waiting for both parties and process bottlenecks.
Accounts Payable: Vendors and other entities are paid their dues after the invoices are approved by the department heads and finance personnel. Here again, manual approvals may be delayed and result in process bottlenecks.
Planning and Forecasting: Financial planning is done with future business growth in mind. Forecasting future expenses is done based on historical data gathered from previous transactions.
Bookkeeping and Financial Closure: Closing of finance books at the end of the financial year is referred to as closure. Bookkeeping in finance is the recording of all the financial transactions that the organization undertook during the financial year. The accounts are tallied during the financial closure process.
Auditing: The financial transactions and records are verified for compliance with the company’s policies and regulations during financial audits.
Data Collection and Reporting: All data pertaining to financial transactions are recorded. This data is published in the form of reports as frequently as the business decides to do so.
Common Challenges in the Finance Function
One thing common across the list of finance processes is the need for accuracy and consistency. Even a small mistake or oversight in financial operations can result in huge losses and throw the business out of gear. Finance and accounting business processes face several challenges on a daily basis. Let us take a brief look at some of these challenges:
Inefficient processes –
Outdated systems result in information silos that increase the complexity of finance processes. Manual finance processes take a toll on the productivity of the finance team by eating up their work hours. Finance teams spend far too much time verifying and matching finance data when working with inefficient manual processes. Financial reporting is a tedious process that requires several rounds of review and validation. Slogging through manual financial reporting subjects intelligent and highly qualified finance personnel to undue stress and frustration. Not only does manual processing contribute to employee burnout, it also prevents finance personnel from engaging in productive and strategic activities that foster business growth.
Lack of clarity of roles –
Manual finance processes do not have a clear-cut definition of the role played by each team member. When the division of roles and responsibilities is not clear in a team, it leads to confusion and to tasks slipping through the cracks. Team members do not know who is responsible for which tasks and who should approve which request. This lack of ownership and accountability can result in process redundancy or tasks going undone. Credibility and trust issues arise when accountability is lacking in a team.
Fraud and duplication –
Financial fraud can have severe consequences for any business, and it is an ever-present threat for which businesses need to have preventive measures in place. Fraudulent billing, duplicate invoicing, and questionable manual accounts payable systems increase financial risk in an organization. Manual finance systems are prone to data tampering, oversights, and duplication.
Implementing a solid approval process that prevents finance personnel from tampering with data or submitting duplicate invoices or wrong approvals is a must to ensure the sound financial health of an organization.
Inefficient information management –
The finance and accounting function thrives on data, and finance teams need to handle mountains of financial data on a daily basis. Handling such huge volumes of data through manual finance processes can be overwhelming. Proper management, storage, tracking, and organization of data is a challenge for businesses using manual finance systems. In addition, these documents need to be easily locatable and accessible to management and financial audit teams. Moreover, paper documents received and processed manually are prone to physical damage and misplacement.
Manual data entry –
Even minor mistakes in data entry can cascade into serious financial issues for the organization. Manual data entry is a highly inefficient process that is time-consuming and resource-intensive. Manual methods for data management are prone to errors and inconsistencies that make the business vulnerable to serious financial repercussions. Simple oversights can result in underpaid or overpaid invoices and a number of errors that cause trouble further down the road.
Delayed or slow approval –
Manual approval processes lack transparency and clarity on roles and responsibilities. As a result, the team is not sure who should approve finance requests, and approvals are unduly delayed. Moreover, manual approvals result in slow and complicated payment processing. Delayed payment processing in turn leads to late payments and purchase order delays. When purchase orders and invoice payments are delayed, supplier dissatisfaction increases, and delayed supplier payments affect business operations as well. Purchase order approval delays result in projects running behind schedule and delayed product rollouts. The far-reaching effects of slow approvals include damage to the company’s reputation and even a dip in the stock price.
Lack of visibility –
There is an inevitable lack of visibility and oversight due to reliance on paper invoices and other documents. With manual processing, it is impossible for accountants to know precisely when invoices were issued, when or whether payments were made, and whether the payments were cleared. Manually tracking or logging each stage of an account and communicating the status of the transaction to suppliers and other stakeholders involves a lot of admin work. Another disadvantage of manual systems is the lack of oversight, which leads to a lack of insight into trends in the financial operations of the business. Finance personnel are left without a clear view of the business’s spending patterns, productivity levels, and the efficiency of its financial operations.
Misplaced or missing documents –
When there is already a backlog of documents that need to be processed, it is common for emails or paper documents to go missing or be misplaced. For example, in businesses without efficient accounting workflows, lost invoices can mean time wasted contacting suppliers for duplicate copies of purchase invoices. When duplicate copies are not available, you are left explaining to the supplier why the payment is delayed, which is embarrassing. Missing or misplaced invoices may also result in an inconsistent paper trail at audit time.
The best way to overcome the challenges in finance processes is to adopt technology-driven process management techniques like workflow automation. Cflow is a cloud-based workflow automation software that can automate key business workflows quickly and efficiently. Key processes in finance and accounting can be easily automated with Cflow. Cflow’s visual form builder enables anyone in the finance team to create workflows easily. Key finance processes including reporting and compliance, accounts payable and receivable, strategic planning, and CapEx approvals can be automated with Cflow.
People – the Starting Point of Financial Automation
Finance and accounting business processes are prime candidates for automation. Workflow automation is the best way to modernize finance operations. Organizations must adopt a human-led, tech-powered approach to finance automation. A modern finance workforce can be built by reimagining roles, reskilling staff, and prioritizing data analytics. As the finance function continues to evolve, finance leaders recognize that they will need a workforce with new skills to match the new demands of their clients. According to a survey by PwC, 68% of CFOs are investing in digital transformation over the next 12 months, including in technologies like cloud and analytics.
Organizations that focus on honing the skills of their workforce, improving their analytical skills, and turning finance into a strategic business partner are able to drive transformative results across core leadership teams and the entire organization. Progressive finance organizations are investing more in finance upskilling and reskilling. Such organizations are creating and iterating on a workforce transformation that supports productivity, innovation, and growth-enhancing tech innovations.
Technical upskilling of the workforce is vital for the successful execution of tech-based innovations. To be effective, finance leaders must understand their businesses deeply so that they can use the data to present a compelling story. The following methods can be adopted by leaders to improve the effectiveness of finance automation.
On-the-job learning – Rotational apprenticeship programs and shadow opportunities give people the opportunity to demonstrate underutilized skills and offer hands-on experience in analytical and technical skills. Business partners can bring insights into improving reporting design and impactful storytelling by integrating finance into the business.
Utilize the right learning technology platform – The majority of the learning technology platforms let businesses create customized learning pathways that foster growth opportunities for employees. Management should look for learning platforms that inspire continuous learning and close skill gaps around data analysis.
Redeploy – Management needs to equip employees to shift comfortably between roles. For employees that are not comfortable shifting roles, management must help find roles that are a better fit for their skills.
Although most finance organizations have made significant investments in technology, they have not tapped into the full potential of their employees. According to a PwC survey, 73% of employees say that they are aware of systems or technology that would improve their work productivity. Empowering employees with technological innovations enables them to identify business pain points and create their own automations. The use of workflow automation tools, digital social hubs, and chat tools makes it easier for people to be productive.
Automating finance processes flattens organizational silos. Finance automation also makes it possible to recognize the effects of spans and layers on individual work habits, organizational culture, and employee trust and collaboration. The highest-potential employees are most often called upon for transformation initiatives; management must recognize such employees and designate them to drive finance automation initiatives.
Letting employees lead the finance automation implementation helps close skill gaps and develop the right financial and operational expertise for the organization. The finance department must take the lead in improving operational efficiency and profitability for the organization. There is no doubt that workflow automation can bring tremendous benefits in terms of productivity and efficiency improvements in financial processes.
Automating the Finance Function
Automation of the financial function requires careful planning and execution. There are certain points to bear in mind while switching from manual to automated financial processing.
The main steps to finance process automation are:
- Study the existing process completely. Sketch the financial process workflow on a whiteboard to identify bottlenecks and redundancies. Once a thorough evaluation of the manual finance process is complete, identifying the areas that require automation becomes easy.
- Create the digital finance workflow. While creating the workflow, you can optimize the sequence, add additional tasks, and assign or reassign resources for each task.
- Automate the finance process based on the digital workflow that was created (a minimal illustrative sketch of such a workflow follows this list). A no-code workflow solution like Cflow helps businesses automate key business functions within minutes.
- Integrate with third-party applications like ERP and CRM to enable seamless communication of data between the systems. API tools like Zapier enable smooth third-party integrations.
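To illustrate what a digital workflow amounts to, here is a hypothetical Python sketch of an invoice-approval workflow definition. It is not Cflow’s API (Cflow itself is no-code); it simply shows the idea of naming each step, the role responsible for it, and the order in which the steps run.

```python
# Hypothetical sketch of a digital invoice-approval workflow definition.
# Step names and roles are assumptions made for the example.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    name: str
    assigned_role: str

INVOICE_APPROVAL_WORKFLOW = [
    Step("Submit invoice", "Requester"),
    Step("Validate against purchase order", "Accounts Payable"),
    Step("Approve payment", "Finance Manager"),
    Step("Release payment", "Treasury"),
]

def next_step(workflow, current_index: int) -> Optional[Step]:
    """Return the step that follows the current one, or None when the workflow is complete."""
    nxt = current_index + 1
    return workflow[nxt] if nxt < len(workflow) else None

print(next_step(INVOICE_APPROVAL_WORKFLOW, 0).name)  # -> Validate against purchase order
```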
Automating the finance function is aimed at eliminating the drawbacks of manual processing and improving the productivity and efficiency of the process.
Finance Processes that can be Automated
Corporate finance processes are often limited by conventional methods. Long-established methods perpetuate a pervasive mindset in employees which makes it difficult to implement modern methods like automation. Embracing financial process automation has a number of benefits for the employer and the employee. Reengineering the financial function through workflow automation helps save labor costs, improves cash management, speeds up closures, and overall makes the company more profitable. Here is a list of financial processes that can be automated:
Accounts Payable: The accounts payable process is riddled with inconsistencies and a lack of standardization in invoices and payment requests. Automation of this process can bring down the error margin considerably and also streamline invoice generation and approval.
Tax Accounting: Tax accounting activities are manual, repetitive, and time-consuming. Automation of the tax accounting process improves the accuracy and speed of processing tax claims.
Fraud Detection: Manual finance processes are prone to duplicate or extraneous claim submissions. Automating the claim submission process standardizes submissions and also prevents duplicate claims and oversights; a simple duplicate check is sketched after this list.
Invoice and Budget Approvals: Approval of vendor invoices or expense claims is a tedious process that involves scrutiny and validation of data. Automating the approval process shortens the approval cycle and prevents process bottlenecks and delays.
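As a simple illustration of the duplicate-claim check mentioned above, the Python sketch below flags submissions whose vendor, invoice number, and amount match an invoice that has already been recorded; the field names and sample records are hypothetical.

```python
# Minimal duplicate-invoice check: flag submissions that match an already-recorded invoice
# on vendor, invoice number, and amount. Field names and sample data are hypothetical.

seen = set()
submissions = [
    {"vendor": "Acme Ltd", "invoice_no": "INV-1001", "amount": 1200.00},
    {"vendor": "Acme Ltd", "invoice_no": "INV-1001", "amount": 1200.00},  # duplicate submission
    {"vendor": "Beta Co",  "invoice_no": "INV-0042", "amount": 830.50},
]

for sub in submissions:
    key = (sub["vendor"].strip().lower(), sub["invoice_no"].strip().lower(), round(sub["amount"], 2))
    if key in seen:
        print(f"Possible duplicate: {sub['vendor']} {sub['invoice_no']}")
    else:
        seen.add(key)
```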
Why is automating finance processes so important?
Sound financial processing is the backbone of a financially viable organization. Automating the financial process helps organizations save time and money. Here is why finance process automation is important:
Clearer insights into fiscal status: An automated finance function provides a 360° view of the financial status of the organization. Top management can make informed business decisions based on data insights.
Improved accuracy: Automated processes are more accurate and optimized. Standardization of the financial process also eliminates duplicate or inflated claims.
Centralized access and control of finance operations: A centralized automated finance processing system provides centralized access and control of finance operations. Automation also streamlines and optimizes communication within the finance department and with other departments.
Optimized resource utilization: Resource allocation is done intuitively by an automated finance processing system.