source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
48,985 | Right now, using all our various current means of observing, we can "see" a sphere of X diameter around us. Webb will increase that to Y diameter. So our observable volume will increase by some percent (Volume Y - Volume X) / Volume X. I'm not sure I can rely on any values I find for X and Y using google searches. As of April 2022 what is the diameter of what we can presently observe and what is the expectation for JWST? ADDITION: my apologies for not having enough subject matter expertise to frame my question as well as it might have been. I was thinking along the lines of pela's response. With JWST having a better set of instruments etc., our practical ability would have farther reach. I've known there are physics limits to what could ever be observed that cannot be changed. The CMB is not the type of observation I had in mind when asking the question. | tl;dr Conservatively 10%, realistically 25%, optimistically 60%. I assume that by the "observable-by-us Universe", you mean not the theoretically observable Universe, which is given by the distance light has had the time to travel since the Big Bang, but the part of the Universe where we may practically see galaxies and other objects, not including the cosmic microwave background. Current redshift record The currently most distant, spectroscopically confirmed object is the galaxy GN-z11 ( Oesch et al. 2016 ), which has a redshift of $z=11.1$ and hence a distance of just over $d=32$ billion lightyears (Glyr). The volume inside the sphere with us in the center and GN-z11 on the surface is $V = 4\pi d^3/3 \simeq 140\,000\,\mathrm{Glyr}^3$ . How far will James Webb see? Predicting the redshift record for James Webb is not easy, since it depends on the unknown physical conditions of even younger galaxies. The important quantity is their (surface) brightness, which depends not only on intrinsic properties such as their star formation rate, dustiness, and compactness, but also on the properties of the intergalactic medium, in particular to which degree it is (re-)ionized and hence able to transmit the light from the galaxies. Moreover, we also don't know how many there are, so estimates rely on extrapolating their distributions of luminosities (luminosity functions) from lower redshifts. Rather conservative predictions for Webb's redshift record are around $z\simeq13\text{–}16$ ( Mashian et al. 2016 ; Williams et al. 2018 ; Mahler et al. 2019 ; Behroozi et al. 2020 ), corresponding to an age of the Universe of 250–300 million years after the Big Bang (whereas GN-z11 is seen ~400 Myr after the BB), and a distance of 33–34.5 Glyr. These estimates hence correspond to volumes of $\simeq 150\,000\text{–}170\,000\,\mathrm{Glyr}^3$ , i.e. an increase of up to roughly 10–25%. However, sometimes we're lucky and line up with a massive cluster of galaxies acting as a gravitational lens which magnifies the light from distant background sources by factors of tens, hundreds, or even thousands. This may potentially allow James Webb to see some of the very first galaxies, predicted to have formed at $z\sim20\text{–}30$ , 100–200 Myr after the BB. I don't know of any peer-reviewed papers doing any serious estimates, but NASA routinely cites Webb's predicted record as "200 (or 250) Myr, possibly even 100 Myr" after the BB (e.g. here ). If we really discover a galaxy 100 Myr after the BB, that would correspond to $z=30$ , and a distance of 38 Glyr, i.e. a volume of $225\,000\,\mathrm{Glyr}^3$ which is 60% larger than our current "observable-by-us Universe".
The extension of our probed volume may not seem like a lot. A better way to understand the significance of this is to think about the age of the Universe at that time: it is likely that we will shrink the unobserved earliest epoch of the Universe to half, or maybe even a quarter, of its current extent. For comparison, the volume inside the region from which the cosmic microwave background was emitted at $z=1100$ is $390\,000\,\mathrm{Glyr}^3$ , while the volume of the (theoretically) observable Universe is $413\,000\,\mathrm{Glyr}^3$ . The figure below is pretty much to scale: | {
"source": [
"https://astronomy.stackexchange.com/questions/48985",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/45675/"
]
} |
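A quick check of the volume arithmetic in the answer above, as a minimal Python sketch. The distances are the rounded values quoted there, so the printed percentages come out slightly different from the answer's own rounded figures:

```python
from math import pi

def sphere_volume_glyr3(d_glyr):
    # volume of a sphere with radius d (in Glyr), in Glyr^3
    return 4 / 3 * pi * d_glyr**3

v_now = sphere_volume_glyr3(32.0)   # GN-z11 at z = 11.1
for label, d in [("conservative, z ~ 13-16", 34.5), ("lensed, z ~ 30", 38.0)]:
    v = sphere_volume_glyr3(d)
    print(f"{label}: {v:,.0f} Glyr^3, +{100 * (v / v_now - 1):.0f}%")
```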
49,094 | It was quite interesting to spot that most dwarf planets have masses close to that of our moon (if we allow the error to fluctuate within two orders of magnitude). Why is it so? Is there any common denominator to this phenomenon? Maybe it's because all dwarf planets share similar formation roots/causes with the moon? Or is this just a very big and strange coincidence? | Two orders of magnitude is a very large range. The Moon has a mass of $7.342 \times 10^{22}$ kg, so your question is, why do most dwarf planets have a mass between $10^{20}$ and $10^{24}$ kg? By definition, a dwarf planet has to be in hydrostatic equilibrium . Considering a list of possible dwarf planets , the smallest/lightest for which there appears to be consensus that it is in hydrostatic equilibrium is 90482 Orcus , with a mass of $(6.348 \pm 0.019) \times 10^{20}$ kg. Smaller bodies are too small to reach hydrostatic equilibrium. Hydrostatic equilibrium means that a body becomes spherical. Generally speaking, small solar system bodies are very far from spherical. Hydrostatic equilibrium is not a requirement for a body to exist at all, but if a body is not spherical, it is not considered a dwarf planet. The Moon is also in hydrostatic equilibrium, which it would probably not be if it had less than 1% of its actual mass. Now why are there no dwarf planets larger than $10^{24}$ kg? That exceeds the mass of Mars ( $6.4171 \times 10^{23}$ kg) and approaches the mass of Earth ( $5.9724 \times 10^{24}$ kg). Such large planets in the inner solar system are full planets and not dwarf planets. Such large bodies in the outer solar system have not been discovered and probably don't exist. If the Moon were two orders of magnitude more massive, it would be about as heavy as the Earth, and the Earth-Moon system would clearly be a double planet. | {
"source": [
"https://astronomy.stackexchange.com/questions/49094",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/26283/"
]
} |
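A tiny sketch of the mass window discussed above, using the masses quoted in the answer (the Eris figure is my own approximate addition):

```python
# "within two orders of magnitude of the Moon" as interpreted in the answer
lo, hi = 1e20, 1e24  # kg

bodies = {                     # masses in kg
    "Moon": 7.342e22,
    "90482 Orcus": 6.348e20,   # lightest consensus dwarf planet
    "Eris": 1.66e22,           # approximate value I added
    "Mars": 6.4171e23,         # a full planet, for comparison
}
for name, m in bodies.items():
    print(f"{name}: {m:.3e} kg, in window: {lo <= m <= hi}")
```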
49,258 | The Moon orbits Earth at a semi-major axis of 384,400 km, with its periapsis being 363,300 km and apoapsis being 405,500 km. (All figures from this NASA fact sheet.) If the Moon orbited Earth at a constant velocity, the average distance would be 384,400 km. Unfortunately, as Kepler found out, celestial objects move faster near periapsis and slower at apoapsis. This means that the Moon's actual average distance from Earth is slightly bigger than 384,400 km, due to it having a lower orbital velocity when farther from Earth. So, what is the Moon's actual average distance from Earth? Is there a formula (I'm assuming involving calculus) that would give a solution for any two-body system? | Mean distance averaged over time for any Keplerian orbit is $a(1+\frac{1}{2}e^2)$ , where $a$ is the semi-major axis and $e$ is the eccentricity. Using your NASA fact sheet, I get about 384,979 km for the Moon (which is pretty close to Wikipedia's value of 385,000 km). This, as expected, is a bit larger than $a$ at 384,400 km. Note: I wouldn't call this the "actual" average distance because there is more than one way to compute an average. | {
"source": [
"https://astronomy.stackexchange.com/questions/49258",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/33635/"
]
} |
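The formula is easy to check numerically; a minimal sketch using the fact-sheet figures quoted in the question, with the eccentricity derived from the apsides:

```python
a = 384_400.0                             # semi-major axis, km
r_peri, r_apo = 363_300.0, 405_500.0
e = (r_apo - r_peri) / (r_apo + r_peri)   # eccentricity from the apsides

mean_r = a * (1 + e**2 / 2)   # time-averaged distance for a Keplerian orbit
print(f"e = {e:.4f}, time-averaged distance = {mean_r:,.0f} km")
# e = 0.0549, time-averaged distance = 384,979 km
```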
49,332 | This article states Their data contained eclipses for all 25 exoplanets, and transits for 17 of them. This page from NASA explains the difference between an eclipse and a transit: Like an eclipse, a transit occurs when one object appears to pass in front of another object. But in a transit, the apparent size of the first object is not large enough to cast the second into complete shadow. This must mean that for these 25 exoplanets, the star they orbit is completely hidden when the exoplanet passes along its line of sight. But for this to be the case, the exoplanet would have to be roughly as big as the star it orbits, because the exoplanet and the star are so far away from the Earth. This seems surprising to me. Am I interpreting the article from Hubble correctly? | A star usually is larger than its planet, and that's what they refer to in this instance. Further in the text they explain that they mean the star eclipsing the planet: An eclipse occurs when an exoplanet passes behind its star as seen from Earth, and a transit occurs when a planet passes in front of its star. (emphasis mine) And this of course means that the star blocks any light coming from the planet, not vice versa. The information gained from this measurement where the star eclipses the planet is that it allows us to compare the spectrum of the star to the combined spectrum of star and planet, possibly allowing some deductions about the atmospheric or chemical composition of the latter (similar to the transit). HOWEVER: if you consider a hot Jupiter (thus a gaseous planet very close-in), it is conceivable that a planet is bigger in diameter than its host star. This planet would have to orbit its host star, a low-mass red dwarf, very close in. Then the radii might be of similar size, to the extent that the planet might even be larger (yet still not more massive) than the star itself. A hot Jupiter can reach about twice the size of Jupiter - and low mass stars at the lower limit of about 0.08 solar masses have sizes comparable to Jupiter ( $0.1r_{Sun} \approx 70\,000\,\mathrm{km} \approx r_{Jup}$ ). If you consider neutron stars (pulsars), the first detections of exoplanets were even around one of those (though technically... are they still planets, if the central object does not show fusion anymore?). | {
"source": [
"https://astronomy.stackexchange.com/questions/49332",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/9527/"
]
} |
49,430 | If a black hole comes from a dying star, do we have a record or proof that our galactic center was once a huge ball of burning plasma? I'm not an astronomy student. | We don't know. It's a supermassive black hole and there are several theories about their formation: The origin of supermassive black holes remains an open field of research. Astrophysicists agree that black holes can grow by accretion of matter and by merging with other black holes. There are several hypotheses for the formation mechanisms and initial masses of the progenitors, or "seeds", of supermassive black holes. Independently of the specific formation channel for the black hole seed, given sufficient mass nearby, it could accrete to become an intermediate-mass black hole and possibly a SMBH if the accretion rate persists. The early progenitor seeds may be black holes of tens or perhaps hundreds of solar masses that are left behind by the explosions of massive stars and grow by accretion of matter. Another model involves a dense stellar cluster undergoing core collapse as the negative heat capacity of the system drives the velocity dispersion in the core to relativistic speeds. As far as I know, nothing extra is known about the black hole in the center of our own galaxy. | {
"source": [
"https://astronomy.stackexchange.com/questions/49430",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/31510/"
]
} |
49,533 | Did the majority of Earth's precious metals sink below the crust during Earth's formation? | This is in part marketing hype by wanna-be asteroid mining companies. That said, some asteroids are suspected to be richer in precious metals than is the Earth's crust. For example, the Earth's crust is significantly depleted in gold compared to the solar system as a whole. I wrote about the reasons why this is the case at physics.stackexchange.com . Gold and related precious metals are siderophiles, which means "iron-loving". When the Earth differentiated, the iron and nickel that sank to the center of the Earth took other siderophiles with them. In a sense, the precious metals are more siderophilic than is iron itself. Gold et al. easily dissolve in molten iron. Precious metals are so chemically inert that they do not readily form compounds with other elements. There is a lot more gold and other precious metals in the Earth's core than there is in all of the asteroids combined. | {
"source": [
"https://astronomy.stackexchange.com/questions/49533",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/42435/"
]
} |
49,537 | The Event Horizon Telescope emulates an Earth-sized telescope by syncing a bunch of radio telescopes across the planet to take pictures with a small enough angular resolution to image a black hole. Would the angular resolution, or perhaps just general performance, of an Earth-sized telescope be appreciably higher than this? | | {
"source": [
"https://astronomy.stackexchange.com/questions/49537",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/42435/"
]
} |
49,810 | NASA reports this photo was taken 19 Sep 1977 by Voyager 1: It doesn't strike me as obvious how to determine which celestial body is closer to Voyager 1 as the photo was taken. My (not very scientific) guess is that the Earth is closer, since the moon seems more out of focus. It also seems more probable, since it was launched from earth. Question : In this image taken by Voyager 1, which is closer: the earth or the moon? | I used JPL Horizons to get the position vectors of each relative to the SSB on Sep 19, 1977. Voyager X = 1.547492527774134E+08 Y = 2.045859856469853E+06 Z = 8.442122223290936E+05
Earth X = 1.503771470116906E+08 Y =-9.323322057091754E+06 Z =-1.007092461168021E+04
Moon X = 1.502869015003825E+08 Y =-9.680376320152178E+06 Z = 1.932598843803350E+04 Subtracting Voyager's position from each, and computing $ \sqrt{x^2+y^2+z^2} $ yields: Earth 1.22E+07km
Moon 1.26E+07km So, the Earth is closer. | {
"source": [
"https://astronomy.stackexchange.com/questions/49810",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/31633/"
]
} |
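The subtraction is easy to reproduce; a minimal sketch using the Horizons vectors quoted in the answer above:

```python
import numpy as np

# Positions (km) relative to the solar-system barycenter, 1977-Sep-19,
# copied from the JPL Horizons output quoted in the answer
voyager = np.array([1.547492527774134e8, 2.045859856469853e6, 8.442122223290936e5])
earth   = np.array([1.503771470116906e8, -9.323322057091754e6, -1.007092461168021e4])
moon    = np.array([1.502869015003825e8, -9.680376320152178e6, 1.932598843803350e4])

for name, body in [("Earth", earth), ("Moon", moon)]:
    print(f"{name}: {np.linalg.norm(voyager - body):.2e} km")
# Earth: 1.22e+07 km, Moon: 1.26e+07 km -- so the Earth is closer
```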
49,825 | When the Earth regularly casts a shadow on the moon, why does the shadow progress from circular to a straight line and then to a reversed circle? Since the earth is a sphere, shouldn't it cast round shadows only? I get how the first concave shadow (2nd image) would make sense, but I don't get how it evolves during the month to a straight line and then to a reversed, concave circular shadow (6th image). Lunar phases, annotated from Wikimedia Commons | Aha! I think you'll find that the answer is that those are not photos of Earth's shadow on the Moon at all! Look at the photo of the Earth and the Moon seen from the spacecraft Voyager 1 as it was leaving our neighborhood in In this image taken by Voyager 1, which is closer: the earth or the moon? Both the Earth and the Moon have the same crescent shapes, illuminated from the right side by the Sun. What you're suggesting might be the shadow of the Earth is really just the pattern produced when a sphere is illuminated from one side by a narrow light source, like the 1/2 degree wide Sun. You can see that both the Earth and the Moon have essentially the same illumination pattern. | {
"source": [
"https://astronomy.stackexchange.com/questions/49825",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/39353/"
]
} |
49,845 | Someone just retweeted a NASA tweet onto my timeline, and it includes two images, allegedly from the same star that was in the process of dying, taken by the new space telescope, side by side: I don't quite understand what I'm looking at though. If I understood correctly, these are two images of the same star. But, both have different colour schemes. I know stars can change 'colour' based on their type and life cycle (like blue dwarf, red dwarf), but I doubt that's what I'm looking at as the telescope is relatively new and from what I understand, those life cycles take ages. The other option that comes to my mind is some kind of artistic freedom, like what is done when artists make images of e.g. dinosaurs and guess their colors. But that seems a bit too unscientific for NASA, so I'm expecting there to be a good reason for the difference in color here. If this is the same star, why is the picture on the left blue with red, and the one on the right red with blue? | They are two pictures of the same object: the Southern Ring Nebula . They look different because we are looking at different wavelengths. The picture on the left is near infrared (about the range 0.7–5 $\mu m$ ), while the one on the right is mid infrared (JWST is sensitive to up to 30 $\mu m$ ). Both kinds of light are impossible to see with our eyes, therefore, by definition, they don't have a color. There is no such color as infrared. So how do we display these images on an RGB monitor? Essentially, we are entitled to choose the color we want. Scientists usually choose the color so that the image is (i) clear to read, with the important features highlighted, and (ii) pleasant to the eye. In this case (but this is not a rule), the longest wavelengths have been displayed in red, and the shortest in blue, mimicking in some way the fact that in the visible spectrum red has the longest wavelength and blue/violet the shortest. This color coding is useful because a scientist can tell at a glance what regions of the nebula are emitting the longest and the shortest wavelengths. If one also knows what processes emit what kind of light, this gives the picture a clear and immediate meaning. For instance, you can tell by looking at the left picture that the central part is mainly ionized gas (blue light, shortest wavelengths) while the external region is dust and molecular hydrogen (longer wavelengths). In the mid infrared image the colors are reversed, because ionized gas emits more strongly in the red part of mid infrared (thus the central region is red), while in the external region we see hydrocarbon grains that emit in the shortest wavelengths of mid infrared. Source: NASA's live coverage of the publication of the first images of JWST | {
"source": [
"https://astronomy.stackexchange.com/questions/49845",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/21719/"
]
} |
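The channel assignment described above (longest wavelength to red, shortest to blue) is straightforward to sketch. Here is a minimal, hypothetical example assuming three calibrated narrowband images of equal shape; it is not any pipeline's actual code:

```python
import numpy as np

def false_color(band_short, band_mid, band_long):
    """Map three narrowband images to an RGB composite:
    longest wavelength -> red channel, shortest -> blue."""
    rgb = np.stack([band_long, band_mid, band_short], axis=-1).astype(float)
    rgb -= rgb.min(axis=(0, 1))          # per-channel zero point
    rgb /= rgb.max(axis=(0, 1)) + 1e-12  # normalize into [0, 1] for display
    return rgb
```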
49,854 | After seeing the James Webb space picture, in which a tiny sliver of the sky the size of a piece of rice from our perspective here on Earth was examined and revealed to contain, as expected, an abundance of galaxies and stars, I am wondering how this confirmation coincides with the Big Bang theory and the concept that Earth is in a non-privileged location in the Universe. A common line NASA uses regarding the Webb telescope is that "looking into space is like looking back in time", which is true, for it takes time for light from those distant objects to reach us. So, by examining far enough into a region of space, we are able to "wind back the clock" to see early galaxies now long dead and some of the earliest stars in the history of the Universe. What seems odd to me is that we see the same things at roughly the same distance from every direction in space. If Earth were located in an unprivileged spot, anywhere but the center of the Universe, then some of the space around us should go "farther back" than others. I understand that we are limited in the amount of light we are able to see. An apt comparison would be that we are in a sort of bubble isolated from the rest of the Universe and can only see a uniform distance back in all directions. But this doesn't explain why each direction contains roughly the same timeline of events. We can see back a uniform distance, but the fact that this uniform distance seems to reveal the same information in all directions is confusing to me. It won't do just to say the Universe had no "point" at which it started - the BB should be thought of as a sort of circle, from the center of which all events spread uniformly. We are just one event way off in a region of that circle necessarily closer to some part of the circumference than others, since our galaxy did not exist at the beginning of the Universe. So, we should be able to look one way and see nothing beyond galaxies roughly our own age and no early stars, and look the exact other way and see a much richer history, with more early stars and galaxies layered atop one another. But this is not the case - we see the same "density" of events from all directions. Unless, in some way, we are also seeing future events in some direction? Not sure how that would work. | | {
"source": [
"https://astronomy.stackexchange.com/questions/49854",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/46727/"
]
} |
49,869 | I am aware of popular initiatives such as the Wikipedia articles listing astronomical objects on the basis of some criteria but... does some kind of systematic indexing, or catalogue, of objects found (potentially via machine learning) in each and every astronomy image exist? So as to facilitate the search for this object within other pictures of it? Then, for example, when an astronomer looks at some faint object within the latest JWST deep field, they can immediately look up whether that object is already listed in the Hubble deep field of the same region of space? | | {
"source": [
"https://astronomy.stackexchange.com/questions/49869",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/46750/"
]
} |
49,943 | The processing for JWST's alignment is done on Earth. How long does it take for a signal from Earth to reach the JWST? | Almost 5.1 seconds, plus or minus around 1 second. Here's a daily plot (at midnight) of the light travel time from the JWST to the centre of the Earth, courtesy of JPL Horizons, using a script derived from the one in this answer . There are more graphics & scripts related to the JWST here . Times are in TDB . The light travel time to a location on the Earth's surface has a small extra variation due to the Earth's rotation. And of course, you can't get a direct line of sight signal from the JWST when you're on the wrong side of the Earth. ;) Here's an hourly plot for today of the distance from the JWST to its control centre, the Space Telescope Science Institute , which is located on the Johns Hopkins University campus. I used longitude -76.622987, latitude 39.332887, altitude 0.073 km as the coordinates. Here's my plotting script , running on the SageMathCell server. | {
"source": [
"https://astronomy.stackexchange.com/questions/49943",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/17699/"
]
} |
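The ~5 second figure follows directly from distance divided by the speed of light; a minimal sketch (the distance range is my own rough figure for JWST's halo orbit around L2, not taken from the answer):

```python
C_KM_S = 299_792.458  # speed of light, km/s

# JWST's Earth distance varies over its halo orbit around L2;
# roughly 1.4-1.8 million km (my approximate figures)
for d_km in (1.4e6, 1.5e6, 1.8e6):
    print(f"{d_km:.1e} km -> one-way light time {d_km / C_KM_S:.1f} s")
```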
49,957 | Do Schwarzschild black holes exist in reality? I have searched answer for this question but am not fully satisfied. Everything in the universe, including planets, stars, and galaxies, is spinning. How can something nonrotating/non-spinning exist in the universe? From this link , it says that A non-rotating black hole is extremely unlikely … even if one existed,
it would only take one photon to hit the event horizon off-centre to
give it angular momentum, ie start it rotating. Reference : If they don't exist in reality then why do we study them? | No, Schwarzschild black holes probably do not exist. We expect astrophysical black holes to be Kerr black holes, and we expect that most of them have a lot of spin. As the diagram at the end of this answer shows, supermassive black holes generally spin at relativistic speeds. Stellar mass black holes are formed in core-collapse supernovae . They can also form when a neutron star collides with its companion (which could be another neutron star or a normal star); neutron stars are also formed in core-collapse supernova events. Collapses and collisions (of course) conserve angular momentum, and many young neutron stars are pulsars, spinning many times per second. However, some of that angular momentum may be carried away by the ejecta of the supernova explosion. It appears that core collapse can be highly asymmetric, which can give the remnant considerable proper motion, a phenomenon known as a pulsar kick : A pulsar kick is the name of the phenomenon that often causes a neutron star to move with a different, usually substantially greater, velocity than its progenitor star. The cause of pulsar kicks is unknown, but many astrophysicists believe that it must be due to an asymmetry in the way a supernova explodes. If true, this would give information about the supernova mechanism. It's not easy to detect an isolated inactive black hole, or to determine its angular momentum. And if the black hole is active, the accretion disk will have high angular momentum simply due to its orbital speed, even if the spin of the black hole itself is relatively slow. So why do we study Schwarzschild black holes? For the same reason we study Special Relativity even though spacetime is generally not flat. You need to thoroughly understand flat spacetime before you try to learn General Relativity. And you need to understand the Schwarzschild metric before you add the extra complexity that spin brings to the picture. Besides, the Schwarzschild solution is a useful model for any spherical body with relatively low spin, it doesn't only apply to black holes. Thus you can use it (for example) to calculate the gravitational time dilation on the surface of the Earth. As jawheele mentions in the comments, real black holes aren't exactly Kerr black holes either. The Kerr solution, like the Schwarzschild solution, is an eternal vacuum solution to the Einstein Field Equations . And we don't expect real black holes that formed through astrophysical processes to be the multi-universe gateways that the Penrose diagram indicates. Bear in mind that we need a quantum gravity theory to properly talk about what happens at the core of a black hole, and even with such a theory we cannot empirically validate its predictions directly because we cannot extract information from the other side of the event horizon(s). | {
"source": [
"https://astronomy.stackexchange.com/questions/49957",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/46820/"
]
} |
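As an illustration of the last point about using the Schwarzschild solution for slowly spinning bodies, here is a minimal sketch of the gravitational time dilation at the Earth's surface:

```python
from math import sqrt

G = 6.674e-11                # m^3 kg^-1 s^-2
c = 2.998e8                  # m/s
M, R = 5.972e24, 6.371e6     # Earth mass (kg) and radius (m)

# Schwarzschild time dilation factor for a static clock at radius R
factor = sqrt(1 - 2 * G * M / (R * c**2))
print(f"dtau/dt = {factor:.12f}")  # ~1 - 7e-10, i.e. ~0.02 s lost per year
```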
50,156 | Last night I saw a bright light moving across the sky. It was probably as bright as (or even brighter than) some of the brightest visible stars like Vega, and moving quite slowly (so no shooting star) at roughly the speed satellites appear to move. I was quite surprised to see a satellite that bright, but I was pretty certain that it couldn't be the ISS because of the position and direction in the sky. However, from the moment that I spotted it, it continuously dimmed until it was no longer visible to the naked eye - maybe 5-10 seconds and 30° of movement later. | From your description, this was most likely a Satellite Flare . This is the Sun reflecting off a highly reflective part of the satellite. The most famous type was the flares from the Iridium Satellites, but they have all been replaced with satellites that no longer flare. Flares still happen though, just not at the predictable level the Iridium satellites did. I do not believe this was just a normal satellite pass with it entering the Earth's shadow, for two reasons: It was brighter than all other stars: Under otherwise normal circumstances, a satellite reflects light in pretty much all directions, and is generally fairly dim. Only during a flare do you get a nearly mirror-like reflection of the sun. It dimmed continuously over 30 degrees: A flare will brighten and dim as more/less of the Sun's image is reflected towards you. When a satellite enters the Earth's shadow, it generally stays the same brightness, and quickly fades to nothing. | {
"source": [
"https://astronomy.stackexchange.com/questions/50156",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/46260/"
]
} |
50,176 | I'm not a physicist, nor do I have a very good physics background, but I've often wondered why new stars are born in the nebula which was created after the parent star has exploded. As I understand it, a star (at least one with enough mass) explodes after it runs out of fuel, or when the core inside the star starts to fuse iron. So how can new stars be born when the nebula does not have any hydrogen in it? I am probably missing something, but this question interests me very much, and I can't seem to find any valuable information which would explain the answer to my question. | New stars are not formed from the nebulae created when a parent star explodes. In space there is thin interstellar gas and plasma. This gas is buffeted and blown by the stellar winds of stars, and the shockwaves of supernovae. The gas is mostly hydrogen and helium. Stars die in two ways. The most common way is for their outer layers to be blown out into space in a fairly gentle way. This process forms a "planetary nebula". The outer layers are formed mostly of hydrogen and helium, but are enriched by other elements. Or stars can die as supernovae. These are much more energetic. Even so, much of the gas blown out is hydrogen and helium as it comes from the outer layers of the star, but it will be further enriched by heavier elements. There are different kinds of supernovae with different mixtures of elements. The elements blown off of dying stars mix with the interstellar gas, enriching it and compressing it. This mixture of gas is still mostly hydrogen and helium, and hydrogen is the main fuel for stars! If the gas is sufficiently compressed (for example by a supernova shockwave) then its own gravity can start to pull it together, ultimately forming stars. So stars are not formed from the iron "ashes" of dead stars, but from a mixture of the original hydrogen fuel that has never been in a star, and the outer layers of stars that are made of "unburnt" hydrogen that was blown off the star as it died. | {
"source": [
"https://astronomy.stackexchange.com/questions/50176",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/47095/"
]
} |
50,432 | I've noticed lately that reflector telescopes are used much more than refractors. The majority of telescopes I see in telescope shops or featured by people online are reflectors. Even the Hubble and JWST are reflector telescopes. Is the reflector just preferred by a lot of people or are there real advantages of having a reflector over a refractor? | Several aspects of refractors limit their usefulness for large telescopes. First is chromatic aberration. Because refractors focus light with refraction, and refraction varies at different wavelengths of light, a single lens is unable to focus all colors at the same point. Refracting telescopes try to correct this by combining a lens with a positive focal-ratio with a lens with a slightly longer negative focal-ratio and a higher index-of-refraction glass so the two dispersions (approximately) cancel out at the focus of the combined lens. The focal-ratio of these achromatic lenses is necessarily much longer than either of the constituent lenses. This has several disadvantages with respect to a mirror with the same diameter: first, the light has to traverse four surfaces rather than just one, at each surface some light is reflected back, and a flaw in any of the surfaces will distort the image; second, for a given focal length, the lens surfaces will be more strongly curved than the mirror, which is more difficult to get correct; third, the shortest practical focal length is necessarily longer, resulting in a longer and more unwieldy instrument. Another consideration is actually supporting the optical elements. Mirrors can be supported from the whole surface of their back sides, while lenses can only be supported from their edges (or else you give up the lack of diffraction in the optical path which is the primary advantage of the refractor). Large lenses tend to bend slightly due to mechanical stress as the telescope is moved around the sky, reducing their image quality. Very large recent telescopes have mirrors made in segments or meniscus shaped mirrors where the overall figure is maintained with supports that can dynamically adjust the mirror to account for mechanical stress. This would be impossible with large lenses. Refractors are best suited for small telescopes optimized for minimal diffraction. | {
"source": [
"https://astronomy.stackexchange.com/questions/50432",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/39353/"
]
} |
50,442 | If we had a very powerful telescope with, say, a 150-meter diameter, placed in orbit around earth, would it be possible to get a side view rather than a top view of a person standing on the moon? | Yes. The person just has to be standing somewhere near the limb of the moon. That is, near the edge of the apparent disc of the moon. We see everything near the limb of the moon from the side, at all times. This is because the moon is (roughly) a sphere. It's not flat. | {
"source": [
"https://astronomy.stackexchange.com/questions/50442",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/46022/"
]
} |
50,444 | Some people talk about the possibility of a planet 9. Could there be a planet 10 or 11? How many undiscovered planets could there be in our solar system? | Let's take a look at the 2006 IAU planet definition: A celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit. a) This does not make much of a limit. While we can rule out any extra planets in relative proximity to the Sun (out to the Kuiper belt or so), there's a vast amount of space out to about a light year where objects would still be orbiting the Sun, but be very difficult for us to observe. b) Objects large enough to gravitationally round themselves are discovered with regularity. While it's hard to observe the exact shape of objects farther away than Pluto, we have many solar system examples showing a radius of a few hundred kilometres is sufficient to achieve hydrostatic equilibrium. c) This is what gives new planets the most trouble. Far from the Sun, the region an object needs to gravitationally dominate is far too great, and even large objects will have to share space with millions of rocks and ice fragments. So additional planets are unlikely: any candidate far enough away to have escaped discovery so far is also so far out that it would have difficulty clearing its orbit. That said, the planetary definition is arbitrary, and if we discover planet-sized objects on the very rim of the solar system (something we cannot rule out due to insufficient observation capabilities), the definition will again come into question. In that case we could gain quite a large number of new planets. | {
"source": [
"https://astronomy.stackexchange.com/questions/50444",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/47414/"
]
} |
50,624 | Astronauts on the space station do not feel the Earth's gravity at all; they are in free fall.
Since the Earth and all that's on it is in free fall toward the Sun, why would the oceans "feel" the Sun's gravity; i.e., why would the Sun affect the tides at all? It occurs to me (1) it's the center of gravity of the Earth that is in free fall; and (2) points on the surface facing the sun are ~7000 miles closer to the sun than points on the far side; and (3) points on the surface of the Earth at dawn or at dusk experience (due to the Earth's rotation) an acceleration toward or away from the Sun. Are "minor effects" such as these the explanation for why tides are affected? | Yes, the sun does raise tides in just the same way that the moon raises tides. It is the difference in gravity that causes the tide. So it is your points 1 and 2. The rotation of the earth causes the tides to move, but doesn't actually cause the tidal force. The Earth also raises tides on astronauts, but an astronaut is so small that the difference in gravity between her head and her feet is tiny, and the tidal effect is consequently also tiny (and negligible). The moon raises tides in the same way. The Earth is in freefall in its orbit with the moon too. Alternatively you can think in terms of a rotating frame of reference. In such a frame, the side of the earth nearer the sun is moving at sub-orbital speed, so gravity is stronger than the centrifugal force. On the other side the centrifugal force is stronger; the effect is to pull water towards and away from the sun. | {
"source": [
"https://astronomy.stackexchange.com/questions/50624",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/47593/"
]
} |
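A minimal sketch of point (2) in the question: the leading-order difference in the Sun's and Moon's pull across one Earth radius, which is the tidal term:

```python
G = 6.674e-11                          # m^3 kg^-1 s^-2
R_EARTH = 6.371e6                      # m
M_MOON, D_MOON = 7.342e22, 3.844e8     # kg, m
M_SUN,  D_SUN  = 1.989e30, 1.496e11

def tidal_accel(M, d):
    # leading-order change in GM/r^2 across one Earth radius
    return 2 * G * M * R_EARTH / d**3

ratio = tidal_accel(M_SUN, D_SUN) / tidal_accel(M_MOON, D_MOON)
print(f"solar/lunar tidal ratio: {ratio:.2f}")  # ~0.46
```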
50,667 | As of right now, both stars of Alpha Centauri are in their main sequence stages, but eventually Alpha Centauri A is going to expand rapidly, and I'm pretty sure its luminosity is going to increase substantially. So, how bright would it be from Earth? | First of all, by the time Alpha Centauri A becomes a red giant, it will no longer be this close to the Sun due to the orbit of the stars around the galaxy, so it probably wouldn't be visible. But let's assume it does stay 4.2 ly away. By the Stefan-Boltzmann Law, the luminosity of a star is given by $$L = 4\pi R^2 \sigma T^4$$ Assuming a radius of $200 R_\odot$ and a temperature of $3600 \text{ K}$ , we get a luminosity of about $6021 L_\odot$ , compared to a present-day luminosity of $1.5 L_\odot$ . This means that the red giant would be 4014 times brighter, corresponding to a brightening of about 9 magnitudes, to roughly -9 apparent magnitude. This is still considerably dimmer than the full Moon (about magnitude -12.7). | {
"source": [
"https://astronomy.stackexchange.com/questions/50667",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/47598/"
]
} |
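The Stefan-Boltzmann arithmetic above is easy to reproduce in solar units; a minimal sketch using the answer's assumed radius and temperature:

```python
from math import log10

R, T = 200.0, 3600.0      # assumed giant radius (R_sun) and temperature (K)
T_SUN = 5772.0            # solar effective temperature, K
L_NOW = 1.5               # present luminosity, L_sun

L_giant = R**2 * (T / T_SUN)**4     # Stefan-Boltzmann, in solar units
dm = 2.5 * log10(L_giant / L_NOW)   # magnitude change at fixed distance
print(f"L = {L_giant:.0f} L_sun, brightening = {dm:.1f} mag")
# L ~ 6050 L_sun, brightening ~ 9.0 mag -> apparent magnitude ~ -9
```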
50,856 | The Pillars of Creation have a strong directional sense. They are referred to as "pillars" and another question asks how "tall" they are. Naively, it looks as if there is a source of "smoke" near the tips of the pillars, and the smoke is being blown by a wind. News explanations and Wikipedia explain that the pillars are formed by UV light from hot stars eroding clouds of molecular hydrogen, which suggests that the "smoke" is actually a void, but it still looks as if the void is being extended by the flow of a wind past the relevant stars. But I have no confidence that this is right, and the Wikipedia article is not very clear. Popular media explanations also don't seem to address the geometry. I see several possibilities: The gas is still relative to most of the nearby stars, and a small number of hot stars are moving through that system, causing the pillars. The hot stars are not in rapid motion relative to nearby stars, but the gas is flowing relative to that population. The shape has nothing to do with relative flows. What's the explanation? A pointer to a good article for the knowledgeable lay person would be helpful too. | Your option #3 is correct; the shape has little to do with the relative motion of the gas and stars. Giant molecular clouds The pillars are part of the giant molecular cloud (GMC) which is giving birth to new stars. Stars are formed when some regions inside the cloud meet the Jeans criterion , i.e. are sufficiently dense and cold that gravity overcomes pressure. Because the density of such clouds is largest in the center (see e.g. Chen et al. 2021 ), stars will tend to form first in the center. Stars are formed with a distribution of masses. The most massive ones — the so-called O and B stars — emit copious amounts of ultraviolet photons, which heat and ionize the surrounding medium. A hot, ionized bubble inside the otherwise cold, neutral, and dusty cloud called a Strömgren sphere then forms. The dark pillars are remainders of the neutral gas, whereas the bluish region is the ionized region, containing newborn stars. The size of the ionized region In this answer about the Carina Nebula , I calculated the typical size of a Strömgren sphere, which we can write approximately as $$
R_\mathrm{S} \simeq 10\,\mathrm{lightyears} \times\color{red}{\left(\frac{Q(\mathrm{H}^0)}{10^{50}\,\mathrm{s}^{-1}}\right)^{1/3}}
\color{blue}{\left(\frac{n_\mathrm{H}}{300\,\mathrm{cm}^{-3}}\right)^{-2/3}}
\color{green}{\left(\frac{T}{10^4\,\mathrm{K}}\right)^{0.23}},
$$ where the three colored terms show typical values of the rate of emitted UV photons $\color{red}{Q(\mathrm{H}^0)}$ from a handful of massive stars, and the neutral hydrogen density $\color{blue}{n_\mathrm{H}}$ and temperature $\color{green}{T}$ of the cloud. This equation tells you two things, namely that the characteristic size of the ionized region is of the order of 10 lightyears, and that the size scales with density as $R_\mathrm{S} \propto n_\mathrm{H}^{-2/3}$ . The origin of the pillar shape But the GMC is not homogeneous; it will have regions that are quite a lot denser, and quite a lot less dense, than the average. According to point #2 above, if some region is, say, 10× more dense than its surroundings, the ionized bubble will propagate $10^{-2/3} \sim 1/5$ as far in this region. The overdensity will therefore shield the part of the cloud that is behind it from the UV radiation of the stellar cluster. This effect causes "pillars" of neutral gas to appear behind the dense regions. In the animation below I attempt to show the evolution of the Strömgren sphere. Stars are formed first in the center, but a secondary high-density region (which perhaps is too hot to start forming its own stars) shields the gas behind it, shaping a pillar. The image below shows you the "pillars of creation" with their surroundings, where you can see the stellar cluster in the center of the Eagle Nebula responsible for this shape: Credit: NASA/ESA/STScI/WikiSky . Opaqueness vs. transparency Neutral gas is quite efficient at blocking light, because the atoms have many electronic transitions available for absorbing photons. Moreover, the gas is full of dust, which also absorbs light. In contrast, ionized gas is much less efficient at absorbing light, and moreover the dust will tend to be destroyed by the free-streaming UV radiation (sublimation from heating), and by high-temperature particles (sputtering). (when I say "ionized gas", this means that hydrogen — which comprises ~90% of the atoms — is more or less fully ionized. But helium and heavier elements still have bound electrons which may absorb some of the light.) Hence, the neutral regions are very opaque, while the ionized regions are transparent. | {
"source": [
"https://astronomy.stackexchange.com/questions/50856",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/7752/"
]
} |
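The scaling relation above is simple to evaluate; a minimal sketch showing the ~1/5 shielding factor for a 10x overdensity:

```python
def stromgren_radius_lyr(Q=1e50, n_H=300.0, T=1e4):
    """Approximate Stromgren radius (lightyears), per the scaling above."""
    return 10.0 * (Q / 1e50)**(1 / 3) * (n_H / 300.0)**(-2 / 3) * (T / 1e4)**0.23

print(stromgren_radius_lyr())            # ~10 lyr for the fiducial values
print(stromgren_radius_lyr(n_H=3000.0))  # 10x denser clump: ~1/5 the reach
```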
50,986 | In the most general terms (I'm not asking for the actual calculations), how do popular planetarium apps and software calculate the positions of celestial objects? Planets, for example. Do they use Keplerian formulas with their associated elements and rates, or do they calculate long series of periodic terms as outlined in chapter 32 "Positions of the planets" in Meeus? I'm assuming positions of comets and other small bodies would be calculated from Keplerian orbital elements? I'm also assuming they don't use some sort of numerical integration to calculate positions (as does JPL Horizons)? | There are quite a few different types of methods of computing the position of celestial objects, and the method used generally depends on the type of object and how accurate the application needs to be. I'll list a few types and methods below. Nebulae These are large, distant objects. They generally don't move fast, and are large enough that highly accurate positions are unnecessary and/or difficult to define. So these are generally just static positions from a catalog like the NGC or Messier catalogs. Stars Distant stars are pinpoints of light, so it is easy to agree on a location and measure movement; a star catalog generally includes the position of the star at some epoch, and includes proper motion fields to show how the star moves over time. The exact data is catalog specific. For example, the HIPPARCOS catalog contains the star positions for epoch 1991.25, and proper motion fields specifying their change in RA/Dec from that time. Planets Lots of methods for computing the planets' positions have been developed because of their importance and popularity. Keplerian elements are one way, but become inaccurate rather quickly. But the Explanatory Supplement to the Astronomical Almanac supplies elements, and some extra terms to help with accuracy over time. An implementation is here . VSOP87 is probably the most popular method as it provides good accuracy over a long period of time and doesn't require much storage. Implementations are available in a wide array of languages. This is likely the ephemeris used by pretty much all applications you can download to your PC or phone. JPL Development Ephemeris are highly accurate, in fact the most accurate available to the public. They are generally produced by the JPL as needed for specific missions. They use Chebyshev polynomials to represent the positions, which are only accurate for a very short period (the longest is 32 days), so to cover long time ranges, a lot of these need to be stored. E.g. DE422 covering 3000BC to 3000AD is 500Mb in binary form. Due to their size, and lack of need for such high accuracy in a planetarium program, they are usually only used for specialized applications. I have written an article Format of the JPL Ephemeris explaining how to use them to implement your own, and provided implementations in a few languages . Artificial Satellites Many planetarium programs also include artificial satellites. These are based on Keplerian elements, but quickly go out of date, so a more complex model is used to adapt the positions, called SGP4/SDP4 (and SGP8/SDP8). The algorithm was made public as source code from NORAD in the 80's, and code is available in a multitude of programming languages, but the most thorough ones are available at Celestrak . The elements also need to be updated regularly, and Celestrak provides some of those, but Space-Track is the official source.
Programs have to update this file on at least a daily basis to stay up to date. Comets/asteroids Minor solar system bodies are of enough interest that they are tracked, but not important enough to justify the effort the JPL DE requires. Instead, new Keplerian elements are generated and distributed by the Minor Planet Center . Again, these have to be updated quite regularly, but not necessarily daily, so most apps leave it up to the user to request an update. Moons The Earth's moon is included in the VSOP87 and JPL DE. And the moons of most planets have chaotic enough orbits that their importance/difficulty tradeoff doesn't justify a specialized ephemeris. But you will find algorithms for some of Jupiter's and Saturn's moons in Meeus' book. Earth Orientation Parameters are something you didn't ask about, but are quite important to determine where an object will appear in relation to an observer on Earth. Meeus' book covers some of these, such as precession and nutation, and other effects like aberration. But the Explanatory Supplement to the Astronomical Almanac provides a more complete and updated explanation. The USNO provides an example implementation as NOVAS . You can learn more by looking at the source code for some planetarium programs like Stellarium or Xephem . | {
"source": [
"https://astronomy.stackexchange.com/questions/50986",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/28961/"
]
} |
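To make the "Keplerian elements" option above concrete, here is a minimal, self-contained sketch of the core step common to those methods: solving Kepler's equation for the in-plane position. This is a simplified illustration, not any particular program's implementation:

```python
from math import cos, sin, sqrt

def kepler_position(a, e, M, tol=1e-12):
    """In-plane (x, y) for semi-major axis a, eccentricity e, and mean
    anomaly M (radians), via Newton iteration on M = E - e*sin(E)."""
    E = M                              # starting guess, fine for small e
    for _ in range(50):
        dE = (E - e * sin(E) - M) / (1 - e * cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    # position in the orbital plane, with the focus at the origin
    return a * (cos(E) - e), a * sqrt(1 - e**2) * sin(E)
```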
50,993 | All coordinates on Earth are equally likely to experience a solar eclipse, so it's just a matter of luck where the Moon's shadow happens to fall. But some place has to draw the short straw and be the last in line to experience a total solar eclipse. My question is: where is that place exactly? The Five Millennium Canon of Solar Eclipses lists all solar eclipses (of any kind) occurring from the years -1999 to +3000. Yes, there are bound to be some inaccuracies, but this is NASA data and you're not particularly gonna get better than that. Using that data, I'm sure someone smarter than me could gradually highlight all the total eclipses' umbral shadow paths until all but one spot on Earth's surface is covered. Maybe there's a better way to reach the result, but I think I got my point across. P.S: There is the possibility that even after all the total eclipses from 2000 to 3000, multiple random places would still be uncovered. In which case I think starting from an earlier point would be wise. Edit: I feel the need to clarify my question a bit. I am not asking about the last solar eclipse that will happen due to the Moon receding from the Earth. Imagine you had a world map in front of you, and you accurately highlighted the path of totality of every total solar eclipse one by one starting from a certain date. Gradually, almost all the map would be filled as more and more eclipses cover more ground until only a few spots remain. My question is, where are these few unlucky spots that don't get to experience a total solar eclipse? The starting point doesn't necessarily have to be the beginning of the 21st century. The data I linked covers all eclipses from the years -1999 to +3000. That is 5 millennia worth of eclipses. After all these millennia have passed, which spots, if any, will remain uncovered on the world map and never experience a total solar eclipse during that time frame? | Here is a combination of the maps available from NASA SEAtlas . They cover the time span from year 2001 to 3000. Made with a custom Python script and some editing in GIMP. The yellow to blue colors mark the time of the next total solar eclipse by location. A red or light blue color marks the areas that do not experience a total eclipse by year 3000. There are many such areas without an eclipse, so based on this data it is not possible to say which will be the last one. There is no clear pattern, so it is likely that computational accuracy and the definition of total vs. annular vs. 99% eclipse would affect the answer also. Here is a map of the number of total eclipses per area in the whole 5000 year dataset: There are three tiny areas with no eclipses. One is near the border of Brazil and Bolivia, one in northern Zimbabwe and one in southern Congo. I have highlighted them in red. For some reason the southern hemisphere seems to have fewer total eclipses on average. An explanation may be in this answer . | {
"source": [
"https://astronomy.stackexchange.com/questions/50993",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/33635/"
]
} |
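The grid-accumulation idea behind such coverage maps can be outlined as below. This is a hypothetical sketch, not the answer's actual script, and it assumes the umbral paths have already been extracted as shapely polygons in longitude/latitude degrees:

```python
import numpy as np
from shapely.geometry import Point  # input paths assumed to be shapely Polygons

def coverage_counts(paths, step=1.0):
    """Count how many total-eclipse paths cover each lat/lon grid cell.
    `paths` is a hypothetical list of umbral-path outlines as Polygons."""
    lons = np.arange(-180.0, 180.0, step)
    lats = np.arange(-90.0, 90.0, step)
    counts = np.zeros((lats.size, lons.size), dtype=int)
    for poly in paths:
        for i, lat in enumerate(lats):
            for j, lon in enumerate(lons):
                if poly.contains(Point(lon, lat)):
                    counts[i, j] += 1
    return counts  # cells still at 0 never see totality in the dataset
```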
51,066 | Someone else asked about various planets located at the orbital range of the Moon. It made me wonder if an object with the same density as the Moon, at a distance which would present the same angular diameter of the Moon seen from the Earth, would have the same general gravitational effects of the Moon on the Earth, given that gravity follows an inverse square law and angular dimension follow an inverse linear law. Would, for instance, a super-Earth of the Moon's density and angular diameter create the same tides as our Moon currently does? We are, for this question, discarding our Moon for the scenario; there is only the super-Earth and the Earth. It seems to me, intuitively (i.e. without any math, let's be honest, I'm still working on that bit), that such an object would have profound effects; the center of gravity of the system would, I would think, remain in proportion to the orbital distance, but be further out in absolute unit terms (kilometers), for instance. In my head I picture an Earth orbit with wider oscillations, given the increased total mass of the system. Yet I think the experienced pull from that super-Earth would be experienced on the ground at the same magnitude as our current Moon's gravitation. How should I be thinking about this situation? Thanks. | Exactly the same tides, yes. The Sun is the same angular size as the Moon but about 400 times further away. If the Sun were as dense as the Moon then its gravitational pull would be 400 times that of the Moon: $400$ times the diameter, so $400^3$ times the mass, divided by $400^2$ because of the inverse square law. But tides depend not on the gravitational pull but on the amount by which this pull diminishes over a distance equal to the Earth’s diameter. And the Earth’s diameter, as a proportion of the distance to the Sun , is $\frac1{400}$ of its diameter as a proportion of the distance to the Moon. So although the gravitational pull is $400$ times stronger, the tidal effects are $\frac{400}{400}$ times stronger - in other words, identical. Since in reality solar tides are smaller than lunar tides, we can deduce that the Sun is less dense than the moon. | {
"source": [
"https://astronomy.stackexchange.com/questions/51066",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/32126/"
]
} |
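To make the scaling argument in the answer above explicit, here is the same reasoning in symbols, using the standard leading-order tidal term (a sketch; $R_\oplus$ is the Earth's radius, $d$ the body's distance, $\rho$ its density and $\theta$ its angular diameter):
$$ g_{\mathrm{tide}} \sim \frac{2GMR_\oplus}{d^3}, \qquad M = \frac{4}{3}\pi\rho\left(\frac{\theta d}{2}\right)^3 = \frac{\pi}{6}\,\rho\,\theta^3 d^3 \quad\Rightarrow\quad g_{\mathrm{tide}} \sim \frac{\pi}{3}\,G\rho\,\theta^3 R_\oplus. $$
The distance $d$ cancels: any body with the Moon's density and the Moon's angular diameter raises the same tides, no matter how far away it is.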
51,178 | Often you hear people say "time is The 4th dimension". What confuses me is that people talk about "The 4th" dimension as if it's a specific thing, and I don't understand why. What I mean is that I can plot the position of a point in 3D space with the 4th dimension being just about anything. For instance, I can plot my position in space with time being the 4th dimension, showing me my position at any given timepoint, but I can also plot my position in space with, say, my level of hunger being the 4th dimension, where I'd be able to see where I was in space based on how hungry I was. What I'm trying to get at is: is there a reason why time is widely considered THE 4th dimension rather than just another parameter? Is the "The" really describing an intrinsic relationship? | This is because time is the fourth dimension in the theory of General Relativity, which describes gravity. It turns out that a good way to describe the paths that objects or light take in a gravitational field is to describe a curved four-dimensional space with coordinates x,y,z,t. A particle in space becomes a curve in this four-dimensional space, and we can use general relativity to find the shape of this line, and so the position of the particle at any given time. If the force of gravity is weak and the velocity of the particle is small, then x, y, z correspond to the ordinary position of the particle in 3d space at time t, in the usual Newtonian way. If the gravity is strong, or the velocity is high, then only the General Relativity description matches observations. Now you can, of course, describe your (x,y,z,H) position (where H is hunger); however, this doesn't lead to much interesting physics. It doesn't help develop a theory of hunger. So time is the fourth dimension because that helps us to describe reality. | {
"source": [
"https://astronomy.stackexchange.com/questions/51178",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/48274/"
]
} |
51,183 | Is there a term for the period of time when Venus is first visible in the evening to when it switches to being the "morning star", or vice versa? For example, as depicted in the image below, from Early Oct 2022 to late July 2023, Venus will be visible in the evening. I know the Mayans took a particular interest in the 8 different patterns produced (for where they were), but never found a word they used for the patterns, nor the time periods they represent. I'm not looking for a Mayan word specifically, just anything other than "the time when Venus is visible in the evening/morning this time around". Code to produce image above | | {
"source": [
"https://astronomy.stackexchange.com/questions/51183",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/25729/"
]
} |
51,279 | The Rosetta Comet Orbiter (RCO) was crashed into the surface of a comet as the comet headed back out toward Jupiter's orbit, where it would have been out of range for its antenna to communicate with Earth. So the ESA made the difficult decision to just let go and crash the darn thing. (Talk about going out with a bang! geez!) Anyway, I saw a mission called DART wreck into an asteroid on purpose in order to move it. It got me thinking: did Rosetta do the same thing? The DART main spacecraft was about the size of a refrigerator, with 8-meter-wide solar arrays. Rosetta was an aluminum box with two solar panels that extended out like wings. The box, which weighed about 6,600 lbs. (3,000 kilograms), measured about 9 by 6.8 by 6.5 feet (2.8 by 2.1 by 2 meters), with a wingspan of 105 feet. Given that Rosetta is MUCH heavier than DART, was it possible to move the comet with Rosetta? There is one factor that could make all the difference: Rosetta's target was bigger than DART's. To state my question one last time: is it possible that the Rosetta orbiter might have moved the comet it crashed into? | Yes, it did. But not by much. The comet has a mass of about $10^{13}$ kg. Rosetta had a mass (after fuel had been used up) of about 1300 kg. The "impact" was at 0.9 m/s. This means that the spacecraft had a momentum of about 1200 kg m/s. After the impact, and in the frame of the comet before the impact, the combined body would have the same momentum: 1200 kg m/s. But with a large mass the velocity would be small: $1200/10^{13}$ m/s. That is (having converted units) about 0.01 mm per day (or about one foot per century). Now, the comet would have had a velocity relative to the Sun of about 7 km per second. A change of 0.01 mm per day would be negligible. | {
"source": [
"https://astronomy.stackexchange.com/questions/51279",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/47503/"
]
} |
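A minimal Python sketch of the momentum bookkeeping in the answer above; the figures are the rounded values quoted there, not official mission data.

m_comet = 1e13      # kg, approximate mass of comet 67P
m_sc = 1300.0       # kg, Rosetta's mass at end of mission (approximate)
v_imp = 0.9         # m/s, touchdown speed

p = m_sc * v_imp                 # ~1200 kg m/s of momentum transferred
dv = p / (m_comet + m_sc)        # resulting velocity change of the comet
print(dv)                        # ~1.2e-10 m/s
print(dv * 86400 * 1000)         # ~0.01 mm per day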
51,317 | I read an article at abc.net.au saying that the power output of the sun is about 276.5 watts per cubic metre, similar to that of a compost pile. A compost pile is not incandescent, while the sun is. It's so hot that it glows. So I have a hard time trying to understand how an object that maintains a temperature of 27 million degrees at the core can generate only about 300 watts per cubic meter at the core. The corona is one million degrees, the photosphere is 11,000 to 6,700 degrees, the chromosphere is 14,000 to 6,700 degrees, the radiative zone closest to the core is 12 million degrees, and the radiative zone farthest from the core is 7 million (all temperatures in Fahrenheit). The total power emitted is 3.8 × 10^26 watts, which makes sense given the mass and temperature. I am sure the article is current. Has anyone seen this before and can maybe offer me a little insight? | Yes, the power output of the solar core is about 276.5 watts per cubic metre. However, if we average that power over the whole volume of the Sun it drops to 0.27 watts per cubic metre. (Thanks, ProfRob). Energy is measured in joules, power is measured in watts. One watt is one joule per second. So (in general) the power tells you how much energy is produced or consumed per second. A cubic metre of solar core contains a lot of energy, as indicated by its temperature and density, but the rate at which it "generates" new energy is rather small. Of course, energy is conserved, so the Sun isn't actually producing new energy; it's merely converting mass (which is a form of energy) into kinetic energy. Some of that kinetic energy is in the form of photons, and some of it is the kinetic energy of the other fusion reaction products. The primary fusion reactions operating in the Sun are called the proton-proton chain (or p-p chain). Unlike the processes in a hydrogen bomb (which uses deuterium & tritium, not plain hydrogen), the start of the p-p chain is quite slow. When two protons fuse, the resulting diproton is very unstable, and it usually splits apart again. However, in the brief time before the diproton splits there's a tiny probability, on the order of $10^{-26}$, that one of the protons in the diproton converts to a neutron, creating a deuteron (a deuterium nucleus). The probability is low because the conversion relies on the weak nuclear force, which is much slower than the strong nuclear force involved in binding the nucleons together. A typical solar core proton has a half-life of around 10 billion years. The Sun will last a long time because its main reaction process is so slow. That's good news for star longevity, but bad news for anyone who wants to build a fusion reactor running on plain hydrogen. | {
"source": [
"https://astronomy.stackexchange.com/questions/51317",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/17198/"
]
} |
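A quick Python check of the averaging in the answer above. The 20% core radius is an assumed round number; the 276.5 W/m^3 figure is the peak value at the very centre, where fusion is most intense, so it exceeds even the core-averaged value printed here.

import math

L_sun = 3.8e26     # W, total solar luminosity
R_sun = 6.96e8     # m, solar radius

V_sun = 4 / 3 * math.pi * R_sun ** 3
V_core = 4 / 3 * math.pi * (0.2 * R_sun) ** 3   # inner ~20% by radius (assumed)

print(L_sun / V_sun)    # ~0.27 W/m^3, averaged over the whole Sun
print(L_sun / V_core)   # ~34 W/m^3, averaged over the core region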
51,341 | I read somewhere a long time ago that there is enough matter in our solar system, in the form of rocks and dust, to create another sun. Is this correct? Was our solar system trying to create a two-star system? | The vast majority of the stuff in the solar system other than the Sun itself is contained in one body, Jupiter. The total mass of the solar system is estimated to be about 1.0014 solar masses, or about one solar mass plus 1.4 Jupiter masses. (Jupiter's mass is a bit less than 0.001 solar masses.) Using the highest early estimates of the mass of the Oort cloud, the total mass of the solar system, excluding the Sun itself, is about 30 Jupiter masses. Those early estimates have since been shown to be wrong; current estimates are that the mass of the Oort cloud is one or two Earth masses. Even if the hypothesized planet IX does exist and is as large as some hypothesize (about five Earth masses), that will only budge the estimated 1.0014 solar masses by a tiny, tiny bit. The smallest possible star, defined as something capable of fusing hydrogen, is about 65 to 80 Jupiter masses. So the answer is no. | {
"source": [
"https://astronomy.stackexchange.com/questions/51341",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/-1/"
]
} |
51,426 | I am no astronomer. I am a computer scientist.
I recently read this article: JWST has changed the speed of discovery, for better or for worse - Astronomers are working at a furious pace to analyze and test whopping amounts of JWST data. Are astronomers still searching the sky dome somewhat randomly for points of interest, or is the astronomy community fixed on observing specific regions of interest (about 200-500 uncommon events such as black holes, supernovas, exoplanets, nebulas and galaxies)? Is the main type of data astronomers get from JWST (or any other type of orbital telescope) optical images? It would amaze me if most of those daily gigabytes of data the article mentions were raw data (measurement values, for example, not pixels). Does this field use any kind of algorithmic identification of points of interest to discard any dull and common matter passing through the observing lenses? For example: after taking a high-resolution image of the universe in any kind of electromagnetic spectrum, the telescope would do the following: Catalogue the coordinates of the depicted image and other metadata like time and so on (as I presume is already being done). Keep only the pixels that are of interest as set by the astronomer, like regions that exceed a threshold of luminosity or density (clusters of pixels) of certain elements, or distance from the telescope (not sure if the latter is something the telescope can measure). That filtered information would be preserved and the rest discarded, resulting in less data sent back to Earth in a certain format, to be reconstructed as an image (far from the true image of that region) but still eligible for study, as those pixels would carry the important data. These data could also be further compressed with certain structural reconstruction algorithms; i.e., if the final image is thousands of pixels wide but the actual information caught by filtering occupies sparse small regions in the center, then only the coordinates of those pixel clusters could be saved and the rest assumed blank. The reconstructed image on Earth would still be of the same dimensions. After reviewing these data, if the astronomer (or another filtering algorithm) deems it worthy of interest, they could ask the telescope for a full-spectrum analysis with no filters, sent back for further review. Steps 1-4 could be executed by the telescope's computer before the data are sent to Earth, and steps 2 & 3 could even execute while scanning (before completing the image), without having to post-process the complete image. Forgive my ignorance if this pipeline is already applied. I'm not trying to sound smart. | JWST operates in a mode where groups of astronomers make detailed proposals to observe particular objects with particular instruments. There are no random pointings, although there may well be serendipitous observations of other objects of interest in the same field of view. There will also be some deep survey fields, which although not targeted at specific objects, will be targeted at some particular position in the sky. The amount of data taken is small compared with the rate at which the data can be transferred to Earth - so ALL the data is transmitted. Each observing programme takes a fair bit of exposure time. You can see the observing schedules, and you will notice that a typical observing programme might take between 1-20 hours. During these programmes the instruments might take a few to a few tens of images or spectra; i.e., the shutter is opened for some considerable time before ending the exposure and recording the data as an image.
You can find these raw images at the Mikulski Archive for Space Telescopes (MAST). For example, I searched for JWST data on HIP 65426 (which was observed in July 2022). There are 34 data images; some of them are calibrations, others are exposures of thousands of seconds. Each image is about 40 Mb in size (for this instrument).
So to get ALL the data requires a transmission rate of about 34 x 40 Mb / 6 hours = 63 kb/s. Apparently, the data transfer from JWST to the ground stations can take place at 3.5 Mb/s, so there is not the slightest problem in downloading all the data for later analysis. The same is not true for other space missions. For example, the Gaia astrometry satellite is limited in terms of the vast amount of data it can take compared with its download speeds to Earth. The difference here is that Gaia is a scanning, imaging instrument that is continuously surveying the whole sky. Here, there is a scheme, much like you propose, where an onboard computer selects the portions of the data (basically postage stamps around stars) that are transmitted back to Earth. | {
"source": [
"https://astronomy.stackexchange.com/questions/51426",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/22022/"
]
} |
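The data-rate arithmetic from the answer above as a tiny Python sketch (the figures are the rough per-programme numbers quoted there):

n_images = 34
image_mb = 40.0      # Mb per image, as quoted in the answer
obs_hours = 6.0

rate_kbps = n_images * image_mb * 1000 / (obs_hours * 3600)
print(rate_kbps)     # ~63 kb/s needed, versus ~3500 kb/s of available downlink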
51,433 | The Moon is smaller than the Earth, but how do we know that (without the use of modern technology)? To be more specific, how could we show that the Moon is smaller than the Earth (smaller diameter) with technology available before the 1800s? | Because when the shadow of the Moon hits the Earth during a solar eclipse, it is only a small shadow that covers a small zone of the Earth and lasts a brief moment. When the Earth's shadow passes over the Moon, it is a bigger shadow and lasts a lot longer. By timing eclipses about 2250 years ago, astronomers found that the Earth is about 3.5 times bigger than the Moon. | {
"source": [
"https://astronomy.stackexchange.com/questions/51433",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/48510/"
]
} |
51,501 | I am thinking of a partial lunar eclipse where the Sun has not set yet but all three celestial objects are at such an angle that the Earth can cast some of its shadow on the Moon. Is such a scenario possible? How common is it? | Yes, barely. The atmosphere bends light, especially at rise and set. When the Sun appears to be on the horizon, it is actually about a degree below the horizon. This means that when the Sun and Moon are actually aligned, they can both be visible. You can simulate this in Stellarium (or another planetarium system that simulates atmospheric effects): set the time to an MJD of 61102.43689 (2026-03-03 10:29:07 UTC), the location to N 30° 38' 55.14", E 111° 52' 48.64", and use "Zero Horizon" (rather than the default picture of a field). There will always be some locations at which the Moon is rising during an eclipse, so this is quite common. But actual locations on land where both horizons are visible and the air is clear enough to see the Moon that low are rare. This is strictly an atmospheric effect; it wouldn't be possible if the Earth had no atmosphere. This was observed by the French astronomer Antoine-François Payen, described in his 1666 treatise Selenelion ou apparition luni-solaire en l'isle de Gorgonne. The word "Selenelion" (a portmanteau of Greek selene + helion, i.e. "moon-sun") has occasionally been used in English to describe this phenomenon. | {
"source": [
"https://astronomy.stackexchange.com/questions/51501",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/48564/"
]
} |
51,533 | Since time stops completely at the event horizon, how do two black holes merge together?
Shouldn't they stop moving due to time dilation when they get close to each other's Schwarzschild radius? | The "event horizon" is defined as the point (or surface) from within which light rays can never (ever) reach a distant observer. To find the location of the event horizon implies that you must know everything about the future of the black hole - so in practice what is referred to is often the event horizon of a Schwarzschild black hole, which is static and eternal (or a Kerr black hole if it is spinning); i.e. it never changes and can be calculated. When black holes merge, they cannot be considered as Schwarzschild (or even Kerr) black holes. It is a dynamic situation. In practice, what is done (in numerical computations) is to define the surface of an apparent horizon, from within which light rays appear not to be making their way outwards towards a distant observer. The location of this surface (or surfaces, when the black holes are well separated) must be calculated dynamically, and it changes as the merger progresses. After the merger it settles down to approximate the event horizon of an eternal Kerr black hole (a merger remnant will always have some spin). However, the root of your question is the apparent paradox around the simpler situation of how anything can fall into a black hole if time dilation slows this process infinitely at any (apparent) event horizon. There is no need to try to resolve this paradox (it isn't a paradox, because there is no one "truth" of what happens in relativity, only what different observers observe), because the apparent horizon is dynamic (it moves) and objects that get close to the horizon become unobservable to a distant observer. | {
"source": [
"https://astronomy.stackexchange.com/questions/51533",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/48555/"
]
} |
51,535 | Background: Below is a graph from John D. O'Keefe's and Thomas J. Ahrens's Impact and Explosion Crater Ejecta, Fragment Size, and Velocity . Said graph models the amount of ejecta produced by silicate and ice impactors impacting at 5 kilometers per second. The red box, which I added, contains 4 symbols; from left to right, they and the lines beneath them represent the escape velocities of: An asteroid (Ceres is used as the model here) The Moon Mars The Earth The vertical axis is a logarithmic scale representing the fraction of ejecta traveling at a certain speed relative to the total mass of all the ejecta. The horizontal axis is a logarithmic scale representing various escape velocities. This chart suggests that between 0.001% (1 * (10^-5)) and 0.0001% (1 * (10^-6)) of the ejecta produced by silicate impactors hitting the Moon at 5 km/s travels quickly enough to escape the Moon's gravity (lunar escape velocity is represented by the 2nd vertical line from the left). My guess, after using my screenshot measurement tool to measure it, with this scale as a reference, is ~0.00043% (4.3 * (10^-6)); this guess is quite precise but likely inaccurate . According to the Meteorite Impact Ejecta: Dependence of Mass and Energy Lost on Planetary Escape Velocity , by the same authors, relatively slow iron impactors hitting bodies with escape velocities greater than 1 km/s produce relatively less ejecta per unit of energy the impact releases in comparison to relatively faster iron impactors. The same is true in the case of anorthosite (i.e. relatively rocky and not as metallic) impactors, for which the escape velocity value is 200 m/s rather than 1 km/s. While there are only two data points to work off of here (iron impactors and anorthosite impactors), this suggests a trend in which faster-moving impactors are more "efficient" at converting their kinetic energy into ejecta velocity. This also makes sense intuitively. Faster-moving impactors release more energy than slower-moving impactors of the same mass, resulting in more powerful shockwaves within the pool of molten material formed by large impact events. More powerful shockwaves rebound faster and more violently (think the little upwards spike produced by a drop of water splashing into a cup); the faster the shockwave, the greater the "spike", meaning faster ejecta, a greater quantity of ejecta, or both. On top of that, the greater energy released by faster-moving impactors will vaporize more rock and soil around their impact point. As per Meteorite Impact Ejecta: Dependence of Mass and Energy Lost on Planetary Escape Velocity , such vapors are trapped in the transient cavity , and, later on in the crater-forming process, "expand and excavate the overlying planetary surface material" (in other words, they blast outwards and take the now-fragmented stuff above with them at various velocities). Assuming faster impactors turn more of their energy into ejecta energy, it stands to reason that the velocity of ejecta increases as the velocity of the impactor producing it increases, and therefore that a version of the graph above made for 35 kilometer-per-second impact velocities would have a less steep slope , representing a greater portion of the ejecta moving at high speeds (or, rather, pieces of ejecta moving at more similar speeds to one another). Question: I'd like to extrapolate the above graph to find the portion of the ejecta escaping the Moon's gravity due to a silicate impactor hitting the Moon at 35 kilometers per second. 
Does anyone have educated guesses/heuristics/etc. that could let me make a reasonably informed guess/ Fermi estimate regarding velocity distribution of ejecta (or, for that matter, a formula or outright answer)? For the purposes of this question, let's say it's a fairly big one: 433 Eros , at (6.687 * (10^15)) kg. While it's no 2 Pallas or 4 Vesta , it's certainly not a piddly little 99942 Apophis or 25143 Itokawa , either. | | {
"source": [
"https://astronomy.stackexchange.com/questions/51535",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/42355/"
]
} |
51,544 | This may seem like a weird question, but something got me thinking about it just recently. The Sun's core is composed mainly of hydrogen and helium, present in the form of an extremely hot, supercrushed plasma. The Sun's core is mind-bogglingly dense, about 150,000 kg/m^3 (about 15x denser than lead, 7x denser than uranium, 6x denser than osmium). The density can get extremely high at the center of stars. This leads me to think that the solar core, due to the immense number of atoms packed together, would behave like an extremely hard solid, since as per my understanding most dense metals (excluding gold) are extremely hard, like tungsten. I decided to dig into it a bit on the Internet, but whatever information I got was merely about the pressure at the center of the Sun, and not about the hardness of the solar core. By hardness, I mean having stiffness/rigidity, an ability to retain a certain shape when subjected to anisotropic stress. To clarify things a bit: suppose we submerged an "indestructible" observer really deep into the Sun, just inside the solar core, and got that observer to throw a punch randomly inside the Sun. What would this observer feel? More specifically, would the observer perceive the solar core material as being extremely hard, like a solid, or would it act like an extremely viscous fluid? TL;DR Would the solar core be extremely stiff and hard? Or would it simply behave like a dense and viscous gas? | The solar core can be considered soft in a relative sense (compared to other materials at the same density), but hard and incompressible in an absolute sense. The material behaves almost exactly like a perfect gas but would be as viscous as ketchup. The equation of state is that of a perfect monatomic gas and thus the pressure $P \propto \rho^{\alpha}$, with $\alpha \sim 4/3$ in the solar core, where heat transport is dominated by radiative diffusion. This is a "soft equation of state" - the material is highly compressible - it takes a small fractional increase in pressure to produce a compression. For most solids, $\alpha$ would be in double figures and they are approximately incompressible. However, what you are asking about could be represented by the "bulk modulus" (Young's modulus and shear modulus are not meaningfully defined for a fluid). This is roughly equal to the pressure of a gas and is a measure of how much force, in an absolute sense, is required to change the volume of something. At the centre of the Sun, this is $2\times 10^{16}$ Pa. This can be compared with the bulk modulus of diamond, which is $4\times 10^{11}$ Pa. Thus in that sense, the solar core is much harder to compress than diamond. In terms of viscosity, the microscopic kinematic viscosity in the solar core is of order $10^{-4}$ m$^2$/s ( Ruediger & Kitchatinov 1996 ) and hence the dynamic viscosity is about 15 Pa s. For comparison, the kinematic viscosity of water at 293 K is $10^{-6}$ m$^2$/s and the dynamic viscosity is $10^{-3}$ Pa s. Thus the solar interior fluid is 100-10000 times more viscous than water, depending on how viscosity is defined. Fluids of comparable viscosity would be honey or ketchup. The viscosity of solids (like rocks), meanwhile, is of order $\sim 10^{20}$ Pa s. To understand why the Sun behaves like a perfect gas, one must compare the interaction energies between the particles with their kinetic energies.
At a density of 150000 kg/m $^3$ the mean separation of protons and electrons is $\sim 2\times 10^{-11}$ m, with a mutual Coulomb energy of about 100 eV. The kinetic energy of the particles is $3k_BT/2$ , and with temperature $T \sim 1.5\times 10^7$ K in the solar core, this is about 1000 eV. Thus the kinetic energies are much greater than the Coulombic binding energies and so the particles behave like a gas. To "freeze" into a solid you would need the binding energy to be about 100 times the kinetic energy, which would require much higher densities at that temperature. | {
"source": [
"https://astronomy.stackexchange.com/questions/51544",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/47665/"
]
} |
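A minimal Python sketch of the closing gas-versus-solid estimate above, with the constants in eV units; the answer rounds the two energies to roughly 100 eV and 1000 eV.

k_B = 8.617e-5        # Boltzmann constant, eV/K
coulomb_const = 1.44  # e^2 / (4*pi*eps0), in eV * nm

r_nm = 0.02           # ~2e-11 m mean particle separation in the core
T = 1.5e7             # K, core temperature

E_coulomb = coulomb_const / r_nm   # ~72 eV Coulomb binding energy
E_kinetic = 1.5 * k_B * T          # ~1900 eV thermal kinetic energy
print(E_coulomb, E_kinetic, E_kinetic / E_coulomb)
# kinetic energy exceeds Coulomb energy by well over an order of magnitude,
# so the particles behave as a gas rather than freezing into a solid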
51,593 | This article ( https://gothamist.com/news/a-green-comet-is-sailing-over-new-york-and-earth-for-the-first-time-in-50000-years ) claims that a comet will pass by Earth soon for the first time in 50,000 years. How do we know that this exact comet passed Earth 50,000 years ago? It obviously was not a recorded event at the time, so how can such a claim be made? | 50000 years is the comet's estimated orbital period. That does not necessarily mean that the comet was naked-eye visible from Earth 50000 years ago. That also does not necessarily mean that the comet last came close to the Earth's orbit (as opposed to the Earth) 50000 years ago. The 50000 years is an estimate based on nine months or so of observation time. In addition, the comet's orbit might have been perturbed in the time between its last perihelion passage and the current one. This might even be the comet's first visit to the inner solar system. From NPR's A bright green comet may be visible with the naked eye starting later this month (admittedly yet another pop-sci article), "If C/2022 E3 has ever passed through the solar system before, it would have last been seen in the sky more than 10,000 years ago," says Jon Giorgini, a senior analyst at NASA's Jet Propulsion Laboratory. Note well: this admittedly is yet another pop-sci article. However, NPR has a JPL expert who says it is at least 10000 years ago (if ever) that the comet last visited the inner solar system, rather than 50000 years ago. This article is fairly recent, and NPR did their research well; they went to an expert from JPL. Determining the orbit of a long-period comet is highly non-trivial. We have nine months' worth of partial observations (mostly azimuth and elevation, which have significant measurement errors, even from the best observatories) of a comet with a supposed 50000-year period. If true, that 9-month interval is 15 millionths of the comet's orbit. That simply is not a long enough arc to perform precise orbit determination. To make matters worse, those long-period comets necessarily travel well beyond Pluto's orbit. At those distances, the entire solar system out to Neptune gravitationally acts essentially as a single body located at the solar system barycenter. Inside Neptune's orbit, it's better (from an orbital element perspective) to look at objects as orbiting the Sun with the planets as perturbations. There are now articles saying the comet will never return. That's because using JPL's Horizons to provide osculating orbital elements yields an eccentricity slightly greater than one -- in heliocentric coordinates. Osculating elements can be deceiving, particularly so for long-period comets. Bottom line: there is no telling if the 50000-year value is anywhere close to correct. Take popular science articles with a grain of salt. | {
"source": [
"https://astronomy.stackexchange.com/questions/51593",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/48729/"
]
} |
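To see the scale of a 50,000-year orbit, a quick Python sketch using Kepler's third law in heliocentric units (a in AU, P in years):

P_years = 50_000
a_au = P_years ** (2 / 3)          # Kepler III: a[AU] = P[yr]^(2/3)
print(a_au)                        # ~1360 AU, ~35x Pluto's average distance

arc_fraction = 0.75 / P_years      # nine months of a 50,000-year orbit
print(arc_fraction)                # 1.5e-5, the "15 millionths" quoted above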
51,621 | I have two related questions: Where in the Milky Way did the solar system form? Is there a particular nebula it can be traced to? How far back in time can we track the location of the solar system within the Milky Way with reasonable certainty? Notes: Obviously, if the answer to (2) is "as far back as the origin of the solar system", then we have an answer to (1), but I suspect that's not the correct answer to (2), so I ask these as separate questions. For (2), I would imagine that parallax data from Gaia would be particularly relevant. For (1), I wonder if something like relative metal abundances can be measured and give a precise enough "fingerprint" to "match" the solar system to its birthplace. For (1), I don't know whether nebulae remain recognizable as nebulae billions of years after their star-forming days -- if not, then the premise of my question may be flawed -- it may be that the birthplace of the solar system has diffused away into the background structure of the Milky Way by now. | Basically no, and not very far back at all. Star-forming regions generally last for at most 10 million years. The "nebula" in which the Sun was born is long gone, so it cannot be identified. The motion of the Sun around the Galaxy is not very precisely known. Typical uncertainties in its velocity are about 1 km/s along each axis (towards the Galactic centre, tangential to the Galactic centre, and out of the plane of the Galaxy) - see for example Schoenrich, Binney & Dehnen (2010) and How far is the Earth/Sun above/below the galactic plane, and is it heading toward/away from it? . It just so happens that 1 km/s is about 1 pc per million years. Thus for every million years we go back in time, the uncertainty in the Sun's position grows by about 1 pc in each dimension. A further hazard is that stars do not "orbit" like planets in the Solar System. Their orbits can be perturbed significantly over billions of years by (for example) encounters with giant molecular clouds or passage through spiral arms. It is therefore impossible to place precisely where the Sun was 4.5 billion years ago. Since its metallicity is slightly higher than the average of stars in the Solar neighbourhood, it is thought it may have originated 1-2 kpc closer to the Galactic centre than it is now ( Nieva & Przybilla 2012 ), though others disagree ( Martinez-Barbosa et al. 2015 ) or claim the Sun may have even originated near the Galactic Bulge ( Tsujimoto & Baba 2020 ). You raise the issue of a chemical abundance "fingerprint". Unfortunately, although this is a "hot topic" in "Galactic Archaeology", it has borne little fruit so far. There is a great deal of similarity between stars born in different clusters and a great deal of overlap in their chemical abundance signatures. It is doubtful (in my opinion) that true siblings of the Sun will ever be identified in this way. | {
"source": [
"https://astronomy.stackexchange.com/questions/51621",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/17285/"
]
} |
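The rule of thumb used in the answer above ("1 km/s is about 1 pc per million years") checks out numerically; a short Python sketch:

pc_km = 3.086e13          # kilometres in one parsec
myr_s = 1e6 * 3.156e7     # seconds in one million years

print(1.0 * myr_s / pc_km)    # ~1.02 pc travelled per Myr at 1 km/s
print(4500 * myr_s / pc_km)   # ~4600 pc of positional uncertainty over 4.5 Gyr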
51,628 | In India, there is a temple named the Konark Sun Temple, which is around 750 years old, made entirely of stones and rocks, and has a chariot headed by 7 horses and carrying the Hindu god Surya (Sun).
This chariot has 24 wheels, of which only 2 are understood by humans; the remaining 22 are still a mystery.
Those 2 wheels work like a perfect sundial. You can refer to this video: Question: How were ancient Indians able to perform such mind-boggling calculations without any kind of technology? Is there any other way to make such a huge sundial without modern technology? | As @JohnHoltz points out in a comment, planting a stick in the ground or in a wall and watching where the shadow falls is something very easy; sundials have been known since prehistoric times. I’m not sure where you got the idea that this implies “mind-boggling calculations,” because it’s very easy, and it has been known since the earliest times how to divide numbers. Much more complex calculations already appear in the Rhind Papyrus from ancient Egypt or on cuneiform clay tablets from Mesopotamia. In India, the work of Aryabhata in astronomy was much more complex than telling the time with the Sun, already in the fifth century CE. I’m no expert on ancient monuments or civilizations, and even less so on the Konark temple, but a quick Google search reveals that “Twelve wheels represent 12 months of the year. According to the Indian calendar, each month has a Shukla paksha and a Krishna paksha, so the other 12 wheels stand for them.” Considering that the maximum altitude of the Sun in the sky changes from one month to another, it would be reasonable to assume that the position of each wheel is calculated to correspond to the Sun’s position during the corresponding month. Maybe the hub of each wheel sticks out by a different amount? This is not mentioned in the video. I wholeheartedly disagree with the video’s conclusion that “If ancient people spent a lot of time creating something, there’s a very good chance that it was done for a valuable, scientific purpose.” Case in point: the pyramids in Egypt certainly took a lot of time to build, yet serve only as tombs for pharaohs and their entourage. Other examples are the numerous temples in any region of the world, or the Coliseum in Rome, which served only for housing games and such. Just because someone did something a long time ago that we can’t seem to understand now doesn’t mean that it was done using “advanced” knowledge or techniques. Touristic sites may also like to keep an aura of mystery for the visiting public, so not everything might be revealed about them. | {
"source": [
"https://astronomy.stackexchange.com/questions/51628",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/48741/"
]
} |
51,780 | From my perspective here on Earth, the sky seems to look like a few large-ish things and a bunch of tiny things. Hubble teaches us that even the apparent void between the tiny things has many very tiny things. Stuff that is very near takes up a large portion of what I see. For example, the Earth takes up about 50% of what I can see, because I'm right next to it. The Moon takes up a circle that is about half a degree in diameter or so, depending on what day it is. A small-angle approximation gives that this takes up about $\frac{\pi(0.25)^2}{41253} \approx 0.00048\%$ of the sky (there are about 41253 square degrees in the sky). The percent of the sky that "has stuff in it" changes dramatically hour by hour as the Earth spins round, since it occupies so much of our view. This change in time also occurs due to the motions of all of the heavenly bodies, but minus the Earth I would imagine the value would change relatively little over large periods of time, except during eclipses. Minus the Earth, and barring an eclipse, when we add up the Sun, the Moon, the planets and the stars, what percent of the night sky is still just empty or nearly-empty space? | It is really quite hard to answer the question as posed, because as you observe deeper and deeper (e.g. using a larger telescope or observing for longer), more and more (fainter) objects become apparent. Every telescope that you use (and indeed your eye) has a finite angular resolution - the smallest angle between two objects that can be resolved, i.e. the closest two objects can be where there might still be some perceived "gap" between them. Since every telescope has a finite resolution, but you could in principle just observe deeper and deeper, you eventually reach the stage where the whole sky is almost full of very faint objects with few discernible gaps between them (at least for the telescopes in use today). As a rough idea, there are often said to be at least $10^{11}$ galaxies in the observable universe and at least $10^{11}$ stars in our own galaxy. If we were to spread these evenly over the sky (probably ok for galaxies), then the separation between adjacent galaxies or stars is about 2 arcseconds. For the stars, most big telescopes at good observing sites would be able to resolve these (though the stars are unevenly distributed, and telescopes are incapable of resolving all the stars towards the plane of the Milky Way, for example, because the density there is much higher). Galaxies, though, have a finite size - e.g. a galaxy of diameter 10 kpc seen at a distance of 1 Gpc has an angular size of 2 arcseconds. Thus with the best telescopes and the deepest exposures, if you look very closely there is a galaxy (or at least the blurred image of a galaxy) intercepting almost every line of sight. However, if you were to define some brightness limit for your pictures and ignore the likelihood that there were fainter objects in the "gaps", then you could attempt to put some percentage figure on it. E.g. here is an image from the Hubble Ultra Deep Field. You might estimate (by eye) that about 20% of the pixels are filled with a galaxy of some sort. There are about 10,000 identified galaxies in this 3.1x3.1 arcmin$^2$ image, so each galaxy is actually only separated by about 2 arcsec (see the calculation above), and you would be hard pressed to count those 10,000 galaxies by eye, since most of them are extremely faint blurs that occupy what you might perceive initially as gaps.
Finally, the answer you get will depend not only on the depth of your image but also on the resolution of the instrument taking it. To quote your own comment: For a given resolution, driving the depth to infinity sends the proportion of "nothing" to 0, but for a given depth (i.e. exposure time), increasing the resolution increases the proportion of sky that has nothing? That is an accurate summary, at least with current instrumentation. Edit (for the dedicated reader): To further explain a few things. The answer above considers an "object" to be a resolved thing in the sky. Clearly galaxies, consisting of unresolved stars, are mostly empty space, and so the vast majority of sightlines will not intercept the surface of a star or anything else. However, that does not mean the sightline is "dark", because all the instrumentation we have has a finite resolution that blurs the light from these stars into an image of a galaxy. Some have commented on the finite age of the universe and Olbers' paradox, and possibly misunderstood what is meant by "depth" in the quote above. Galaxies and stars have a vast range of luminosities, and the least luminous things are much more common than the more luminous. Even if you can only observe to a set distance (e.g. set by the finite time since stars and galaxies were first formed), increasing the exposure time, or "depth", of your image will still reveal more and more of the less luminous objects. If there were a lower limit to the luminosity of a galaxy, then in principle yes, there might come a time when instrumentation was so good that increasing exposure time would not reveal more objects, but we aren't there yet. Even if that were the case, there is no guarantee at all, even with excellent angular resolution, that sightlines will not intercept any galaxies, because they have a finite angular size - and angular size actually increases with large redshifts in the currently accepted cosmological model. Finally, we should talk about wavelength. It is far easier to find "empty sky" in the optical (e.g. the Hubble Deep Field), because the light from distant galaxies gets redshifted out of the visible range. It will be interesting to see how crowded JWST deep fields will be in the infrared at an equivalent depth. They will certainly be more crowded, but whether they present an "infrared wall" will depend on the uncertain details of the formation timescale, size scale and star formation history of early galaxies, and the shape of the bottom end of the galaxy luminosity function with redshift. | {
"source": [
"https://astronomy.stackexchange.com/questions/51780",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/26277/"
]
} |
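The ~2 arcsecond mean-separation figure in the answer above follows from spreading ~10^11 objects over the celestial sphere; a short Python check:

import math

n_objects = 1e11                  # galaxies spread over the whole sky
sky_arcsec2 = 41253 * 3600 ** 2   # square arcseconds on the sphere

area_each = sky_arcsec2 / n_objects
print(math.sqrt(area_each))       # ~2.3 arcsec mean separation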
51,810 | Summer in the Northern Hemisphere starts on the day of the summer solstice. This is the day that the Northern Hemisphere receives the most light from the Sun, due to the Earth's tilt. To my knowledge, the amount of light we receive is related to the temperatures that we have. That's why summer is the warmest period in most regions of the Northern Hemisphere. Then shouldn't the solstice be the central day of the summer instead of the initial day? That way, summer would be constituted by the days of the year when the most light is received in the Northern Hemisphere, which is related to the higher temperatures in practice. | The English word "summer" means the season of the year that is associated with higher temperatures and shorter nights. There is no official "first day of summer", and different groups of people use different conventions. One possible convention is to take "June, July and August" as summer, so the first day of summer is June 1st. This is the convention taken by the Met Office in the UK, and roughly corresponds to the warmest temperatures in the UK. (The reason that the warmest temperatures are not around the solstice has nothing to do with astronomy; it is because the surface takes some time to warm up, so there is a lag between the longest day and the highest temperature.) Another possible convention is to take June 21st to Sept 20th as "summer". This fits the solstice and equinox and still roughly corresponds to the warmer days of the year in the Northern Hemisphere. This is the convention in many modern calendars. Another possibility is to take the "cross-quarter days" (named in Gaelic Samhain, Imbolc, Beltane, and Lughnasadh), so summer would run from Beltane/May Day to Lughnasadh/Lammas Day: May 1st - August 1st. This matches the shortest nights, but generally May is cooler than August in the UK, so it is not consistent with summer meaning "warmest season of the year". The big point here is that there is no official definition of summer. | {
"source": [
"https://astronomy.stackexchange.com/questions/51810",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/48975/"
]
} |
51,903 | NASA’s James Webb Space Telescope has taken very blurry "photos" of exoplanets around distant stars, such as the exoplanet HIP 65426 b, in different bands of infrared light. My question is: is it possible for the next generations of space telescopes to generate detailed, high-resolution images of such planets? Is there enough "data" or "DPI", so to speak, in the light coming from these planets to achieve this? I know there is cosmic dust and all sorts of other interference that affects the light on its way to Earth, but I'm wondering just how much that is. I'm talking about images with the same spatial resolution as this one of Mars: | No, not with the current or any projected "next-generation" of telescopes. The problem isn't dust, it is distance. To put it in context, you can consider a scale model of the universe. There is no limit to the "dpi" in the light (light doesn't have an intrinsic resolution); it is just that these objects are very small, very dim and very very very far away. Capturing that image of Mars is like taking a photograph of a grain of sand at 20 metres. That is an impressive feat. But to take a similar photograph of an exoplanet would be like taking a photograph of a grain of sand on the other side of the world! This would require one of three things: An enormous telescope with a mirror that is kilometers across. Or travel to the gravitational focus of the Sun (which is more than ten times further out than Pluto). Or optical interferometry: combining the light from two telescopes. This would require the telescopes to be linked and to maintain sub-nanometer positioning over several kilometres. Of these, optical interferometry would seem most plausible. But while we have managed 100 m baseline interferometry with ground-based telescopes (which have the advantage of not floating in space!), the amount of light they can capture means that they are limited to observations of a few bright stars. One could conceive of an array of moon-based telescopes eventually achieving the kind of resolution required to directly image an exoplanet, but this isn't "next-generation"; it is closer to science fiction. | {
"source": [
"https://astronomy.stackexchange.com/questions/51903",
"https://astronomy.stackexchange.com",
"https://astronomy.stackexchange.com/users/49077/"
]
} |
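A rough Python sketch of why the answer above calls for a mirror kilometres across. The planet size, distance and wavelength are assumed round numbers, not the parameters of any real target:

d_planet = 2.8e8            # m, diameter of a ~2 R_Jupiter planet (assumed)
dist = 100 * 3.086e16       # m, distance to a star ~100 pc away (assumed)
wav = 1e-6                  # m, near-infrared wavelength (assumed)

theta = d_planet / dist     # planet's angular diameter: ~9e-11 rad
D = 1.22 * wav / theta      # Rayleigh criterion: aperture for ONE pixel
print(D / 1000)             # ~13 km of aperture just to resolve the disc;
                            # a detailed image needs baselines far larger still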
1 | I'd like to learn which format is most commonly used for storing the full human genome sequence (4 letters without a quality score) and why. I assume that storing it in plain-text format would be very inefficient. I expect a binary format would be more appropriate (e.g. 2 bits per nucleotide). Which format is most common in terms of space efficiency? | Genomes are commonly stored as either fasta files (.fa) or twoBit (.2bit) files. Fasta files store the entire sequence as text and are thus not particularly compressed. twoBit files store each nucleotide in two bits and contain additional metadata that indicates where there are regions containing N (unknown) bases. For more information, see the documentation on the twoBit format at the UCSC genome browser . You can convert between twoBit and fasta format using the faToTwoBit and twoBitToFa utilities . For the human genome, you can download it in either fasta or twoBit format here: http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/ | {
"source": [
"https://bioinformatics.stackexchange.com/questions/1",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/43/"
]
} |
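To illustrate the two-bits-per-base idea behind twoBit, here is a toy Python packer. It uses twoBit's T/C/A/G code values, but it is not the real .2bit container format, which also stores headers, an index and runs of N or masked bases:

CODE = {"T": 0, "C": 1, "A": 2, "G": 3}   # twoBit's base-to-bits encoding

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):              # four bases per byte
        chunk = seq[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(chunk))            # left-pad a short final chunk
        out.append(byte)
    return bytes(out)

print(pack("ACGT").hex())   # 4 bases -> 1 byte ('9c')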
14 | I'd like to learn the differences between 3 common formats such as FASTA , FASTQ and SAM . How are they different? Are there any benefits of using one over another? Based on the Wikipedia pages, I can't tell the differences between them. | Let’s start with what they have in common: All three formats store sequence data, and sequence metadata. Furthermore, all three formats are text-based. However, beyond that all three formats are different and serve different purposes. Let’s start with the simplest format: FASTA FASTA stores a variable number of sequence records, and for each record it stores the sequence itself, and a sequence ID. Each record starts with a header line whose first character is > , followed by the sequence ID. The next lines of a record contain the actual sequence. The Wikipedia article gives several examples for peptide sequences, but since FASTQ and SAM are used exclusively (?) for nucleotide sequences, here’s a nucleotide example: >Mus_musculus_tRNA-Ala-AGC-1-1 (chr13.trna34-AlaAGC)
GGGGGTGTAGCTCAGTGGTAGAGCGCGTGCTTAGCATGCACGAGGcCCTGGGTTCGATCC
CCAGCACCTCCA
>Mus_musculus_tRNA-Ala-AGC-10-1 (chr13.trna457-AlaAGC)
GGGGGATTAGCTCAAATGGTAGAGCGCTCGCTTAGCATGCAAGAGGtAGTGGGATCGATG
CCCACATCCTCCA The ID can be in any arbitrary format, although several conventions exist . In the context of nucleotide sequences, FASTA is mostly used to store reference data; that is, data extracted from a curated database; the above is adapted from GtRNAdb (a database of tRNA sequences). FASTQ FASTQ was conceived to solve a specific problem arising during sequencing: Due to how different sequencing technologies work, the confidence in each base call (that is, the estimated probability of having correctly identified a given nucleotide) varies. This is expressed in the Phred quality score . FASTA had no standardised way of encoding this. By contrast, a FASTQ record contains a sequence of quality scores for each nucleotide. A FASTQ record has the following format: A line starting with @ , containing the sequence ID. One or more lines that contain the sequence. A new line starting with the character + , and being either empty or repeating the sequence ID. One or more lines that contain the quality scores. Here’s an example of a FASTQ file with two records: @071112_SLXA-EAS1_s_7:5:1:817:345
GGGTGATGGCCGCTGCCGATGGCGTC
AAATCCCACC
+
IIIIIIIIIIIIIIIIIIIIIIIIII
IIII9IG9IC
@071112_SLXA-EAS1_s_7:5:1:801:338
GTTCAGGGATACGACGTTTGTATTTTAAGAATCTGA
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII6IBI FASTQ files are mostly used to store short-read data from high-throughput sequencing experiments. The sequence and quality scores are usually put into a single line each, and indeed many tools assume that each record in a FASTQ file is exactly four lines long, even though this isn’t guaranteed. As with FASTA, the format of the sequence ID isn’t standardised, but different producers of FASTQ use fixed notations that follow strict conventions . SAM SAM files are so complex that a complete description [PDF] takes 15 pages. So here’s the short version. The original purpose of SAM files is to store mapping information for sequences from high-throughput sequencing. As a consequence, a SAM record needs to store more than just the sequence and its quality; it also needs to store information about where and how a sequence maps into the reference. Unlike the previous formats, SAM is tab-based, and each record, consisting of 11 mandatory fields plus a variable number of optional fields, fills exactly one line. Here’s an example (tabs replaced by fixed-width spacing):
r002 0 chrX 9 30 3S6M1P1I4M * 0 0 AAAAGATAAGGATA IIIIIIIIII6IBI NM:i:1 For a description of the individual fields, refer to the documentation. The relevant bit is this: SAM can express exactly the same information as FASTQ, plus, as mentioned, the mapping information. However, SAM is also used to store read data without mapping information. In addition to sequence records, SAM files can also contain a header , which stores information about the reference that the sequences were mapped to, and the tool used to create the SAM file. Header information precede the sequence records, and consist of lines starting with @ . SAM itself is almost never used as a storage format; instead, files are stored in BAM format, which is a compact, gzipped, binary representation of SAM. It stores the same information, just more efficiently. And, in conjunction with a search index , allows fast retrieval of individual records from the middle of the file (= fast random access ). BAM files are also much more compact than compressed FASTQ or FASTA files. The above implies a hierarchy in what the formats can store: FASTA ⊂ FASTQ ⊂ SAM. In a typical high-throughput analysis workflow, you will encounter all three file types: FASTA to store the reference genome/transcriptome that the sequence fragments will be mapped to. FASTQ to store the sequence fragments before mapping. SAM/BAM to store the sequence fragments after mapping. | {
"source": [
"https://bioinformatics.stackexchange.com/questions/14",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/43/"
]
} |
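As a worked example of the FASTQ layout described above, here is a minimal Python reader. It assumes the common four-lines-per-record convention and Sanger (Phred+33) quality encoding, neither of which is guaranteed by the format:

def read_fastq(path):
    with open(path) as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break                      # end of file
            seq = fh.readline().rstrip()
            fh.readline()                  # the '+' separator line
            qual = fh.readline().rstrip()
            scores = [ord(c) - 33 for c in qual]   # Phred+33 decoding
            yield header[1:], seq, scores

# for name, seq, scores in read_fastq("reads.fastq"):
#     print(name, len(seq), min(scores))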
21 | What are the actual differences between different annotation databases? My lab, for reasons still unknown to me, prefers Ensembl annotations (we're working with transcript/exon expression estimation), while some software ships with RefSeq annotations. Are there significant differences between them today, or are they, for all intents and purposes, interchangeable (e.g., are exon coordinates between RefSeq and Ensembl annotations interchangeable)? | To add to rightskewed's answer:
While it is true that Gencode is an additive set of annotations (the manual one done by Havana and an automated one done by Ensembl), the annotation (GTF) files are quite similar apart from a few exceptions involving the X chromosome and the Y PAR, plus additional remarks in the Gencode file (see more at FAQ - Gencode ). What are the actual differences between different annotation databases? There are a few differences, but the main one for me (and it could be stupid) is that RefSeq is developed by the American NCBI and ENSEMBL is mainly developed by the European EMBL-EBI. Often, labs or people will just start using what is best known to them (because of a course or workshop), or because they started working with one of the databases with one specific tool and kept with it later. My lab, for reasons still unknown to me, prefers Ensembl annotations (we're working with transcript/exon expression estimation), while some software ship with RefSeq annotations. Your lab might be mostly European-based people, or they might have read papers like the one from Frankish et al. Comparison of GENCODE and RefSeq gene annotation and the impact of reference geneset on variant effect prediction. BMC Genomics 2015; 16(Suppl 8):S2 - DOI: 10.1186/1471-2164-16-S8-S2 From the Frankish et al. paper: The GENCODE Comprehensive transcripts contain more exons, have greater genomic coverage and capture many more variants than RefSeq in both genome and exome datasets, while the GENCODE Basic set shows a higher degree of concordance with RefSeq and has fewer unique features. As for: Are there significant differences between them today, or are they, for all intents and purposes, interchangeable (e.g., are exon coordinates between RefSeq and Ensembl annotations interchangeable)? No. I don't think there are great differences between them, in that the global picture should stay the same (although you will see different results if you are interested in a small set of genes). However, they are not directly interchangeable , particularly as there are many versions of Ensembl and RefSeq based on different genome annotations (and those won't be interchangeable between themselves either, in most cases). However, you can easily translate most[1] of your RefSeq IDs to ENSEMBL IDs and vice versa with tools such as http://www.ensembl.org/biomart/martview (there are dedicated libraries/APIs as well, like Bioconductor's biomaRt ). [1] "Most" because some entries might be annotated in one of the databases but not (yet) have an equivalent in the other. EDIT: In the end, even if people tend to stick with what they are used to (and the annotations are constantly expanded and corrected), depending on the research subject one might be interested in using one database over another: From Zhao S, Zhang B. A comprehensive evaluation of ensembl, RefSeq, and UCSC annotations in the context of RNA-seq read mapping and gene quantification. BMC Genomics. 2015;16: 97. : When choosing an annotation database, researchers should keep in mind that no database is perfect and some gene annotations might be inaccurate or entirely wrong. [..] Wu et al. [27] suggested that when conducting research that emphasizes reproducible and robust gene expression estimates, a less complex genome annotation, such as RefGene, might be preferred. When conducting more exploratory research, a more complex genome annotation, such as Ensembl, should be chosen. [..] [27] Wu P-Y, Phan JH, Wang MD. Assessing the impact of human genome annotation choice on RNA-seq expression estimates.
BMC Bioinformatics. 2013;14(Suppl 11):S8. doi: 10.1186/1471-2105-14-S11-S8. | {
"source": [
"https://bioinformatics.stackexchange.com/questions/21",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/82/"
]
} |
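A minimal sketch of the ID-translation step described in the answer above, assuming a hypothetical two-column BioMart export (the file name and column headers here are placeholders; adjust them to whatever your BioMart query produced):
import csv

# Build a RefSeq -> Ensembl lookup from a (hypothetical) BioMart export
# with columns "RefSeq mRNA ID" and "Gene stable ID".
with open("mart_export.tsv") as handle:
    reader = csv.DictReader(handle, delimiter="\t")
    refseq_to_ensembl = {
        row["RefSeq mRNA ID"]: row["Gene stable ID"]
        for row in reader
        if row["RefSeq mRNA ID"]  # skip entries without a RefSeq equivalent
    }

# IDs absent from the mapping have no equivalent (yet) in the other database.
print(refseq_to_ensembl.get("NM_000546", "no Ensembl equivalent found"))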
27 | I have a set of BAM files that are aligned using the NCBI GRCh37 human genome reference (with the chromosome names as NC_000001.10) but I want to analyze it using a BED file that has the UCSC hg19 chromosome names (e.g. chr1). I want to use bedtools to pull out all the on-target and off-target reads. Are NCBI and UCSC directly comparable? Or do I need to re-align the BAM/lift-over the BED to the UCSC reference? Should I convert the BED file or the BAM file? Everyone here uses the UCSC chromosome names/positions so I'll need to convert the eventual files to UCSC anyway. | You're the second person I have ever seen using NCBI "chromosome names" (they're more like supercontig IDs). Normally I would point you to a resource providing mappings between chromosome names, but since no one has added NCBI names (yet, maybe I'll add them now) you're currently out of luck there. Anyway, the quickest way to do what you want is to run samtools view -H foo.bam > header to get the BAM header and then change each NCBI "chromosome name" to its corresponding UCSC chromosome name, as sketched below. DO NOT REORDER THE LINES! You can then use samtools reheader and be done. Why, you might ask, would this work? The answer is that chromosome/contig names in BAM files aren't stored in each alignment. Rather, the names are stored in a list in the header and each alignment just contains the integer index into that list (read group IDs are similar, for what it's worth). This also explains the warning above against reordering entries, since that's a VERY convenient way to start swapping alignments between chromosomes. As an aside, you'd be well served switching to Gencode or Ensembl chromosome names; they're rather more coherent than the something_random mess that's present in hg19 from UCSC. Update: Because I'm nice, here is the conversion between NCBI and UCSC. Note that if you have any alignments to patches, there is simply no UCSC equivalent. One of the many reasons not to use UCSC (avoid their annotations too).
"source": [
"https://bioinformatics.stackexchange.com/questions/27",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/110/"
]
} |
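To make the reheadering recipe above concrete, here is a minimal sketch; the two-entry mapping is a hypothetical excerpt of the NCBI-to-UCSC conversion linked in the answer, and the script name below is likewise made up. It rewrites only the SN fields of a header obtained with samtools view -H, preserving line order:
import sys

# Hypothetical excerpt of the NCBI -> UCSC name mapping referenced above.
ncbi_to_ucsc = {"NC_000001.10": "chr1", "NC_000002.11": "chr2"}

# Read the output of `samtools view -H foo.bam` on stdin and rewrite SN fields.
# Line order is preserved on purpose: alignments index into this list.
for line in sys.stdin:
    if line.startswith("@SQ"):
        fields = line.rstrip("\n").split("\t")
        fields = ["SN:" + ncbi_to_ucsc.get(f[3:], f[3:]) if f.startswith("SN:") else f
                  for f in fields]
        line = "\t".join(fields) + "\n"
    sys.stdout.write(line)
Used as samtools view -H foo.bam | python rename_header.py > new_header.sam, followed by samtools reheader new_header.sam foo.bam > renamed.bam.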
66 | I have the following data of fragment counts for each gene in 16 samples: > str(expression)
'data.frame': 42412 obs. of 16 variables:
$ sample1 : int 4555 49 122 351 53 27 1 0 0 2513 ...
$ sample2 : int 2991 51 55 94 49 10 55 0 0 978 ...
$ sample3 : int 3762 28 136 321 94 12 15 0 0 2181 ...
$ sample4 : int 4845 43 193 361 81 48 9 0 0 2883 ...
$ sample5 : int 2920 24 104 151 50 20 32 0 0 1743 ...
$ sample6 : int 4157 11 135 324 58 26 4 0 0 2364 ...
$ sample7 : int 3000 19 155 242 57 12 18 2 0 1946 ...
$ sample8 : int 5644 30 227 504 91 37 11 0 0 2988 ...
$ sample9 : int 2808 65 247 93 272 38 1108 1 0 1430 ...
$ sample10: int 2458 37 163 64 150 29 729 2 1 1049 ...
$ sample11: int 2064 30 123 51 142 23 637 0 0 1169 ...
$ sample12: int 1945 63 209 40 171 41 688 3 2 749 ...
$ sample13: int 2015 57 432 82 104 47 948 4 0 1171 ...
$ sample14: int 2550 54 177 59 201 36 730 0 0 1474 ...
$ sample15: int 2425 90 279 73 358 34 1052 3 3 1027 ...
$ sample16: int 2343 56 365 67 161 43 877 3 1 1333 ... How do I compute RPKM values from these? | First off, don’t use RPKMs. They are truly deprecated because they’re confusing when it comes to paired-end reads. If anything, use FPKMs, which are mathematically the same but use a more correct name (do we count paired reads separately? No, we count fragments). Even better, use TPM (= transcripts per million), or an appropriate cross-library normalisation method. TPM is defined as: $$
\text{TPM}_\color{orchid}i =
{\color{dodgerblue}{\frac{x_\color{orchid}i}{{l_\text{eff}}_\color{orchid}i}}}
\cdot
\frac{1}{\sum_\color{tomato}j \color{dodgerblue}{\frac{x_\color{tomato}j}{{l_\text{eff}}_\color{tomato}j}}}
\cdot
\color{darkcyan}{10^6}
$$ where $\color{orchid}i$: transcript index, $x_i$: transcript raw count, $\color{tomato}j$ iterates over all (known) transcripts, $\color{dodgerblue}{\frac{x_k}{{l_\text{eff}}_k}}$: rate of fragment coverage per nucleobase ($l_\text{eff}$ being the effective length), $\color{darkcyan}{10^6}$: scaling factor (= “per million”). That said, FPKM can be calculated in R as follows. Note that most of the calculation happens in log-transformed number space, to avoid numerical instability: fpkm = function (counts, effective_lengths) {
exp(log(counts) - log(effective_lengths) - log(sum(counts)) + log(1E9))
} Here, the effective length is the transcript length minus the mean fragment length plus 1; that is, all the possible positions of an average fragment inside the transcript, which equals the number of all distinct fragments that can be sampled from a transcript. This function handles one library at a time. I (and others) argue that this is the way functions should be written. If you want to apply the code to multiple libraries, nothing is easier using ‹dplyr›: tidy_expression = tidy_expression %>%
group_by(Sample) %>%
mutate(FPKM = fpkm(Count, col_data$Lengths)) However, the data in the question isn’t in tidy data format, so we first need to transform it accordingly using ‹tidyr› : tidy_expression = expression %>%
gather(Sample, Count) This equation fails if all your counts are zero; instead of zeros you will get a vector of NaNs. You might want to account for that. And I mentioned that TPMs are superior, so here’s their function as well: tpm = function (counts, effective_lengths) {
rate = log(counts) - log(effective_lengths)
exp(rate - log(sum(exp(rate))) + log(1E6))
} | {
"source": [
"https://bioinformatics.stackexchange.com/questions/66",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/191/"
]
} |
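For readers working in Python rather than R, a direct, minimal transcription of the TPM formula above (keeping the log-space computation for numerical stability; numpy is assumed to be available). As with the R version, all-zero counts produce NaNs:
import numpy as np

def tpm(counts, effective_lengths):
    # Rate of fragment coverage per base, computed in log space.
    rate = np.log(counts) - np.log(effective_lengths)
    return np.exp(rate - np.log(np.sum(np.exp(rate))) + np.log(1e6))

# Toy example: three transcripts in one library; the TPMs sum to one million.
print(tpm(np.array([100.0, 200.0, 50.0]), np.array([1000.0, 2000.0, 500.0])))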
156 | Why do some assemblers like SOAPdenovo2 or Velvet require an odd-length k-mer size for the construction of the de Bruijn graph, while some other assemblers like ABySS are fine with even-length k-mers? | From the manual of Velvet: it must be an odd number, to avoid palindromes. If you put in an even
number, Velvet will just decrement it and proceed. In biology, palindromes are defined as reverse-complementary sequences. The problem with palindromes is explained in this review: Palindromes induce paths that fold back on themselves. At least one
assembler avoids these elegantly; Velvet requires K, the length of a
K-mer, to be odd. An odd-size K-mer cannot match its reverse
complement. It is possible to construct a graph with palindromes, but then the interpretation will be harder. Allowing only graphs of odd k-mers is just an elegant way to avoid writing code for the interpretation of a more complicated graph. | {
"source": [
"https://bioinformatics.stackexchange.com/questions/156",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/57/"
]
} |
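The "odd k cannot be its own reverse complement" argument is easy to verify by brute force; a small sketch (the middle base of an odd-length k-mer would have to be its own complement, which no DNA base is):
from itertools import product

comp = {"A": "T", "C": "G", "G": "C", "T": "A"}

def revcomp(kmer):
    return "".join(comp[b] for b in reversed(kmer))

# Count k-mers that equal their own reverse complement for small k.
for k in (2, 3, 4, 5):
    n = sum(1 for t in product("ACGT", repeat=k) if "".join(t) == revcomp("".join(t)))
    print(k, n)  # even k: 4**(k/2) palindromic k-mers; odd k: always 0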
225 | I am using a reference genome for mm10 mouse downloaded from NCBI, and would like to understand in greater detail the difference between lowercase and uppercase letters, which make up roughly equal parts of the genome. I understand that N is used for 'hard masking' (areas in the genome that could not be assembled) and lowercase letters for 'soft masking' in repeat regions. What does this soft masking actually mean? How confident can I be about the sequence in these regions? What does a lowercase n represent? | What does this soft masking actually mean? A lot of the sequence in genomes is repetitive. The human genome, for example, consists of (at least) two-thirds repetitive elements [1]. These repetitive elements are soft-masked by converting the upper-case letters to lower case. An important use case of these soft-masked bases is in homology searches: an atatatatatat will tend to appear in both human and mouse genomes but is likely non-homologous. How confident can I be about the sequence in these regions? As confident as you can be about non-soft-masked positions. Soft-masking is done after determining portions of the genome that are likely repetitive. There is no uncertainty about whether a particular base is 'A' or 'G', just that it is part of a repeat and hence should be represented as an 'a'. What does a lowercase n represent? UCSC uses Tandem Repeat Finder and RepeatMasker for soft-masking potential repeats. NCBI most likely uses TANTAN. 'N' represents that no sequence information is available for that base. It being replaced by 'n' is likely an artifact of the repeat-masking software, which soft-masks an 'N' to an 'n' to indicate that that portion of the genome is likely a repeat too. [1] http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1002384 | {
"source": [
"https://bioinformatics.stackexchange.com/questions/225",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/163/"
]
} |
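If you want to quantify the masking in your own copy of the reference, a small sketch that tallies soft-masked (lower-case) and N/n bases in a plain-text FASTA (the file name is a placeholder; decompress first if gzipped):
from collections import Counter

counts = Counter()
with open("mm10.fa") as fasta:  # placeholder path
    for line in fasta:
        if not line.startswith(">"):
            counts.update(line.strip())

total = sum(counts.values())
soft = sum(v for base, v in counts.items() if base.islower())
unknown = counts.get("N", 0) + counts.get("n", 0)
print(f"soft-masked: {soft / total:.1%}, N or n: {unknown / total:.1%}")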
361 | I used to work with publicly available genomic references, where basic statistics are usually available, and if they are not, you have to compute them only once, so there is no reason to worry about performance. Recently I started a sequencing project on a couple of different species with mid-sized genomes (~Gbp), and during testing of different assembly pipelines I had to compute the number of unknown nucleotides many times in both raw reads (in fastq) and assembly scaffolds (in fasta); therefore I thought that I would like to optimize the computation. For me it is reasonable to expect 4-line formatted fastq files, but a general solution is still preferred. It would be nice if the solution worked on gzipped files as well. Q: What is the fastest way (performance-wise) to compute the number of unknown nucleotides (Ns) in fasta and fastq files? | For FASTQ: seqtk fqchk in.fq | head -2 It gives you the percentage of "N" bases, not the exact count, though. For FASTA: seqtk comp in.fa | awk '{x+=$9}END{print x}' This command line also works with FASTQ, but it will be slower, as awk is slow. EDIT: ok, based on @BaCH's reminder, here we go (you need kseq.h to compile): // to compile: gcc -O2 -o count-N this-prog.c -lz
#include <zlib.h>
#include <stdio.h>
#include <stdint.h>
#include "kseq.h"
KSEQ_INIT(gzFile, gzread)
unsigned char dna5tbl[256] = {
0, 1, 2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 0, 4, 1, 4, 4, 4, 2, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 0, 4, 1, 4, 4, 4, 2, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4
};
int main(int argc, char *argv[]) {
long i, n_n = 0, n_acgt = 0, n_gap = 0;
gzFile fp;
kseq_t *seq;
if (argc == 1) {
fprintf(stderr, "Usage: count-N <in.fa>\n");
return 1;
}
if ((fp = gzopen(argv[1], "r")) == 0) {
fprintf(stderr, "ERROR: fail to open the input file\n");
return 1;
}
seq = kseq_init(fp);
while (kseq_read(seq) >= 0) {
for (i = 0; i < seq->seq.l; ++i) {
int c = dna5tbl[(unsigned char)seq->seq.s[i]];
if (c < 4) ++n_acgt;
else if (c == 4) ++n_n;
else ++n_gap;
}
}
kseq_destroy(seq);
gzclose(fp);
printf("%ld\t%ld\t%ld\n", n_acgt, n_n, n_gap);
return 0;
} It works for both FASTA/Q and gzip'ed FASTA/Q. The following uses SeqAn: #include <seqan/seq_io.h>
using namespace seqan;
int main(int argc, char *argv[]) {
if (argc == 1) {
std::cerr << "Usage: count-N <in.fastq>" << std::endl;
return 1;
}
std::ios::sync_with_stdio(false);
CharString id;
Dna5String seq;
SeqFileIn seqFileIn(argv[1]);
long i, n_n = 0, n_acgt = 0;
while (!atEnd(seqFileIn)) {
readRecord(id, seq, seqFileIn);
for (i = beginPosition(seq); i < endPosition(seq); ++i)
if (seq[i] < 4) ++n_acgt;
else ++n_n;
}
std::cout << n_acgt << '\t' << n_n << std::endl;
return 0;
} On a FASTQ with 4-million 150bp reads: The C version: ~0.74 sec The C++ version: ~2.15 sec An older C version without a lookup table (see the previous edit ): ~2.65 sec | {
"source": [
"https://bioinformatics.stackexchange.com/questions/361",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/57/"
]
} |
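For completeness, a dependency-free Python sketch of the same count, far slower than the seqtk or C solutions above but handy as a cross-check; it assumes 4-line FASTQ records (as the question allows) and detects gzip by file extension:
import gzip
import sys

def count_n(path):
    opener = gzip.open if path.endswith(".gz") else open
    n = 0
    with opener(path, "rt") as handle:
        first = handle.readline()
        if first.startswith("@"):  # FASTQ: a sequence line follows each header
            for i, line in enumerate(handle):
                if i % 4 == 0:  # offsets 0, 4, 8, ... are sequence lines here
                    n += line.count("N") + line.count("n")
        else:  # FASTA: count on every non-header line
            for line in handle:
                if not line.startswith(">"):
                    n += line.count("N") + line.count("n")
    return n

if __name__ == "__main__":
    print(count_n(sys.argv[1]))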
2,216 | A bit of a historical question on a number, 30 times coverage, that's become so familiar in the field: why do we sequence the human genome at 30x coverage? My question has two parts: Who came up with the 30x value and why? Does the value need to be updated to reflect today's state-of-the-art? In summary, if the 30x value is a number that was based on the old Solexa GAIIx 2x35bp reads and error rates, and the current standard Illumina sequencing is 2x150bp, does the 30x value need updating? | The earliest mention of the 30x paradigm I could find is in the original Illumina whole-genome sequencing paper: Bentley, 2008 . Specifically, in Figure 5, they show that most SNPs have been found, and that there are few uncovered/uncalled bases by the time you reach 30x: These days, 30x is still a common standard, but large-scale germline sequencing projects are often pushing down closer to 25x and finding it adequate. Every group doing this seriously has done power calculations based on specifics of their machines and prep (things like error rates and read lengths matter!). Cancer genomics is going in the other direction. When you have to contend with purity, ploidy, and subclonal populations, much more coverage than 30x is needed. Our group showed in this 2015 paper that even 300x whole-genome coverage of a tumor was likely missing real rare variants in a tumor. On the whole, the sequence coverage you need really depends on what questions you're asking, and I'd recommend that anyone designing a sequencing experiment consult with both a sequencing expert and a statistician beforehand (and it's even better if those are the same person!) | {
"source": [
"https://bioinformatics.stackexchange.com/questions/2216",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/180/"
]
} |
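A back-of-the-envelope version of the power calculation mentioned in the answer: under the classic Lander-Waterman model, per-base depth is approximately Poisson-distributed, so you can estimate what fraction of a genome falls below a minimum usable depth at a given mean coverage. This sketch ignores real-world effects such as mappability and GC bias, which is exactly why the experts should still be consulted:
from math import exp, factorial

def frac_below(mean_coverage, min_depth):
    # P(depth < min_depth) for a Poisson(mean_coverage) distribution.
    return sum(mean_coverage ** k * exp(-mean_coverage) / factorial(k)
               for k in range(min_depth))

for c in (15, 25, 30):
    print(c, frac_below(c, 10))  # fraction of bases seen by fewer than 10 reads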
3,583 | I am writing a python script that requires a reverse complement function to be called on DNA strings of length 1 through around length 30. Line profiling programs indicate that my functions spend a lot of time getting the reverse complements, so I am looking to optimize. What is the fastest way to get the reverse complement of a sequence in python? I am posting my skeleton program to test different implementations below with DNA string size 17 as an example. #!/usr/bin/env python
import random
import timeit
global complement
complement = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A'}
DNAlength = 17
#randomly generate 500k random DNA strings
int_to_basemap = {1: 'A', 2: 'C', 3: 'G', 4: 'T'}
num_strings = 500000
random.seed(90210)
DNAstrings = ["".join([int_to_basemap[random.randint(1,4)] for i in range(DNAlength)])
for j in range(num_strings)]
#get an idea of what the DNAstrings look like
print(DNAstrings[0:5])
def reverse_complement_naive(seq):
this_complement = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A'}
return "".join(this_complement.get(base, base) for base in reversed(seq))
def reverse_complement(seq):
return "".join(complement.get(base, base) for base in reversed(seq))
tic=timeit.default_timer()
rcs = [reverse_complement_naive(seq) for seq in DNAstrings]
toc=timeit.default_timer()
baseline = toc - tic
namefunc = {"naive implementation": reverse_complement_naive,
"global dict implementation": reverse_complement}
for function_name in namefunc:
func = namefunc[function_name]
tic=timeit.default_timer()
rcs = [func(seq) for seq in DNAstrings]
toc=timeit.default_timer()
walltime = toc-tic
print("""{}
{:.5f}s total,
{:.1f} strings per second
{:.1f}% increase over baseline""".format(
function_name,
walltime,
num_strings/walltime,
100- ((walltime/baseline)*100) )) By the way, I get output like this. It varies by the call, of course! naive implementation
1.83880s total,
271916.7 strings per second
-0.7% increase over baseline
global dict implementation
1.74645s total,
286294.3 strings per second
4.3% increase over baseline Edit: Great answers, everyone! When I get a chance in a day or two I will add all of these to a test file for the final run. When I asked the question, I had not considered whether I would allow for Cython or C extensions when selecting the final answer. What do you all think? Edit 2: Here are the results of the final simulation with everyone's implementations. I am going to accept the highest-scoring pure Python code with no Cython/C. For my own sake I ended up using user172818's C implementation. If you feel like contributing to this in the future, check out the GitHub page I made for this question. The runtime of reverse complement implementations:
10000 strings and 250 repetitions
╔══════════════════════════════════════════════════════╗
║ name %inc s total str per s ║
╠══════════════════════════════════════════════════════╣
║ user172818 seqpy.c 93.7% 0.002344 4266961.4 ║
║ alexreynolds Cython (v2) 93.4% 0.002468 4051583.1 ║
║ alexreynolds Cython (v1) 90.4% 0.003596 2780512.1 ║
║ devonryan string 86.1% 0.005204 1921515.6 ║
║ jackaidley bytes 84.7% 0.005716 1749622.2 ║
║ jackaidley bytesstring 83.0% 0.006352 1574240.6 ║
║ global dict 5.4% 0.035330 283046.7 ║
║ revcomp_translateSO 45.9% 0.020202 494999.4 ║
║ string_replace 37.5% 0.023345 428364.9 ║
║ revcom from SO 28.0% 0.026904 371694.5 ║
║ naive (baseline) 1.5% 0.036804 271711.5 ║
║ lambda from SO -39.9% 0.052246 191401.3 ║
║ biopython seq then rc -32.0% 0.049293 202869.7 ║
╚══════════════════════════════════════════════════════╝ | I don't know if it's the fastest, but the following provides an approximately 10x speed up over your functions: import string
tab = string.maketrans("ACTG", "TGAC")
def reverse_complement_table(seq):
return seq.translate(tab)[::-1] The thing with hashing is that it adds a good bit of overhead for a replacement set this small. For what it's worth, I added that to your code as "with a translation table" and here is what I got on my workstation: global dict implementation
1.37599s total,
363374.8 strings per second
3.3% increase over baseline
naive implementation
1.44126s total,
346919.4 strings per second
-1.3% increase over baseline
with a translation table
0.16780s total,
2979755.6 strings per second
88.2% increase over baseline If you need python 3 rather than python 2, then substitute tab = str.maketrans("ACTG", "TGAC") for tab = string.maketrans("ACTG", "TGAC") , since maketrans is now a static method on the str type. For those wondering, using biopython is slower for this (~50% slower than the naive implementation), presumably due to the overhead of converting the strings to Seq objects. If one were already reading sequences in using biopython, though, I wouldn't be surprised if the performance was much different. | {
"source": [
"https://bioinformatics.stackexchange.com/questions/3583",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/2085/"
]
} |
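For reference, a reconstruction (not necessarily the exact benchmarked code) of the bytes-based idea from the results table: operating on bytes objects skips Python 3's Unicode machinery and pairs naturally with the translation-table trick from the accepted answer.
TAB = bytes.maketrans(b"ACTG", b"TGAC")

def reverse_complement_bytes(seq):
    # Translate then reverse, exactly as in the str version, but on bytes.
    return seq.translate(TAB)[::-1]

print(reverse_complement_bytes(b"ACCTTGAAA"))  # b'TTTCAAGGT'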
7,458 | I have a DNA sequence for which I would like to quickly find the reverse complement. Is there a quick way of doing this on the bash command line using only GNU tools? | Thanks to Manu Tamminen for this solution: echo ACCTTGAAA | tr ACGTacgt TGCAtgca | rev | {
"source": [
"https://bioinformatics.stackexchange.com/questions/7458",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/492/"
]
} |
11,227 | The SARS-CoV-2 coronavirus's genome was released, and is now available on GenBank. Looking at it... 1 attaaaggtt tataccttcc caggtaacaa accaaccaac tttcgatctc ttgtagatct
61 gttctctaaa cgaactttaa aatctgtgtg gctgtcactc ggctgcatgc ttagtgcact
121 cacgcagtat aattaataac taattactgt cgttgacagg acacgagtaa ctcgtctatc
...
29761 acagtgaaca atgctaggga gagctgccta tatggaagag ccctaatgtg taaaattaat
29821 tttagtagtg ctatccccat gtgattttaa tagcttctta ggagaatgac aaaaaaaaaa
aaaaaaaaaa aaaaaaaaaa aaa Wuhan seafood market pneumonia virus isolate Wuhan-Hu-1, complete genome, GenBank. Geeze, that's a lot of 'a' nucleotides---I don't think that's just random. I would guess that it's either an artifact of the sequencing process, or there is some underlying biological reason. Question: Why does the SARS-CoV-2 coronavirus genome end in 33 a's? | Good observation! The 3' poly(A) tail is actually a very common feature of positive-strand RNA viruses, including coronaviruses and picornaviruses. For coronaviruses in particular, we know that the poly(A) tail is required for replication, functioning in conjunction with the 3' untranslated region (UTR) as a cis-acting signal for negative strand synthesis and attachment to the ribosome during translation. Mutants lacking the poly(A) tail are severely compromised in replication. Jeannie Spagnolo and Brenda Hogue report: The 3′ poly(A) tail plays an important, but as yet undefined role in Coronavirus genome replication. To further examine the requirement for the Coronavirus poly(A) tail, we created truncated poly(A) mutant defective interfering (DI) RNAs and observed the effects on replication. Bovine Coronavirus (BCV) and mouse hepatitis Coronavirus A59 (MHV-A59) DI RNAs with tails of 5 or 10 A residues were replicated, albeit at delayed kinetics as compared to DI RNAs with wild type tail lengths (>50 A residues). A BCV DI RNA lacking a poly(A) tail was unable to replicate; however, a MHV DI lacking a tail did replicate following multiple virus passages. Poly(A) tail extension/repair was concurrent with robust replication of the tail mutants. Binding of the host factor poly(A)-binding protein (PABP) appeared to correlate with the ability of DI RNAs to be replicated. Poly(A) tail mutants that were compromised for replication, or that were unable to replicate at all, exhibited less in vitro PABP interaction. The data support the importance of the poly(A) tail in Coronavirus replication and further delineate the minimal requirements for viral genome propagation. Spagnolo J.F., Hogue B.G. (2001) Requirement of the Poly(A) Tail in Coronavirus Genome Replication. In: Lavi E., Weiss S.R., Hingley S.T. (eds) The Nidoviruses. Advances in Experimental Medicine and Biology, vol 494. Springer, Boston, MA Yu-Hui Peng et al. also report that the length of the poly(A) tail is regulated during infection: Similar to eukaryotic mRNA, the positive-strand coronavirus genome of ~30 kilobases is 5’-capped and 3’-polyadenylated. It has been demonstrated that the length of the coronaviral poly(A) tail is not static but regulated during infection; however, little is known regarding the factors involved in coronaviral polyadenylation and its regulation. Here, we show that during infection, the level of coronavirus poly(A) tail lengthening depends on the initial length upon infection and that the minimum length to initiate lengthening may lie between 5 and 9 nucleotides. By mutagenesis analysis, it was found that (i) the hexamer AGUAAA and poly(A) tail are two important elements responsible for synthesis of the coronavirus poly(A) tail and may function in concert to accomplish polyadenylation and (ii) the function of the hexamer AGUAAA in coronaviral polyadenylation is position dependent. Based on these findings, we propose a process for how the coronaviral poly(A) tail is synthesized and undergoes variation. Our results provide the first genetic evidence to gain insight into coronaviral polyadenylation.
Peng Y-H, Lin C-H, Lin C-N, Lo C-Y, Tsai T-L, Wu H-Y (2016) Characterization of the Role of Hexamer AGUAAA and Poly(A) Tail in Coronavirus Polyadenylation. PLoS ONE 11(10): e0165077 This builds upon prior work by Hung-Yi Wu et al , which showed that the coronaviral 3' poly(A) tail is approximately 65 nucleotides in length in both genomic and sgmRNAs at peak viral RNA synthesis, and also observed that the precise length varied throughout infection. Most interestingly, they report: Functional analyses of poly(A) tail length on specific viral RNA species, furthermore, revealed that translation, in vivo, of RNAs with the longer poly(A) tail was enhanced over those with the shorter poly(A). Although the mechanisms by which the tail lengths vary is unknown, experimental results together suggest that the length of the poly(A) and poly(U) tails is regulated. One potential function of regulated poly(A) tail length might be that for the coronavirus genome a longer poly(A) favors translation. The regulation of coronavirus translation by poly(A) tail length resembles that during embryonal development suggesting there may be mechanistic parallels. Wu HY, Ke TY, Liao WY, Chang NY. Regulation of coronaviral poly(A) tail length during infection. PLoS One. 2013;8(7):e70548. Published 2013 Jul 29. doi:10.1371/journal.pone.0070548 It's also worth pointing out that poly(A) tails at the 3' end of RNA are not an unusual feature of viruses. Eukaryotic mRNA almost always contains poly(A) tails, which are added post-transcriptionally in a process known as polyadenylation. It should not therefore be surprising that positive-strand RNA viruses would have poly(A) tails as well. In eukaryotic mRNA, the central sequence motif for identifying a polyadenylation region is AAUAAA, identified way back in the 1970s, with more recent research confirming its ubiquity. Proudfoot 2011 is a nice review article on poly(A) signals in eukaryotic mRNA. | {
"source": [
"https://bioinformatics.stackexchange.com/questions/11227",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/1451/"
]
} |
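Incidentally, the count of 33 in the question is easy to verify programmatically from the quoted record; a throwaway sketch:
import re

# End of the GenBank record as quoted in the question, coordinates removed.
tail = "ggagaatgac aaaaaaaaaa" + "aaaaaaaaaa aaaaaaaaaa aaa"
tail = tail.replace(" ", "")
print(len(re.search(r"a+$", tail).group()))  # prints 33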
11,353 | I'm looking at a genome sequence for 2019-nCoV on NCBI . The FASTA sequence looks like this: >MN988713.1 Wuhan seafood market pneumonia virus isolate 2019-nCoV/USA-IL1/2020, complete genome
ATTAAAGGTTTATACCTTCCCAGGTAACAAACCAACCAACTTTCGATCTCTTGTAGATCTGTTCTCTAAA
CGAACTTTAAAATCTGTGTGGCTGTCACTCGGCTGCATGCTTAGTGCACTCACGCAGTATAATTAATAAC
TAATTACTGTCGTTGACAGGACACGAGTAACTCGTCTATCTTCTGCAGGCTGCTTACGGTTTCGTCCGTG
...
...
TTAATCAGTGTGTAACATTAGGGAGGACTTGAAAGAGCCACCACATTTTCACCGAGGCCACGCGGAGTAC
GATCGAGTGTACAGTGAACAATGCTAGGGAGAGCTGCCTATATGGAAGAGCCCTAATGTGTAAAATTAAT
TTTAGTAGTGCTATCCCCATGTGATTTTAATAGCTTCTTAGGAGAATGACAAAAAAAAAAAA Coronavirus is an RNA virus, so I was expecting the sequence to consist of AUGC characters. But the letters here are ATGC, which looks like DNA! I found a possible answer, that this is the sequence of a "complementary DNA". I read that The term cDNA is also used, typically in a bioinformatics context, to refer to an mRNA transcript's sequence, expressed as DNA bases (GCAT) rather than RNA bases (GCAU). However, I don't believe this theory that I'm looking at a cDNA. If this were true, the end of the true mRNA sequence would be ...UCUUACUGUUUUUUUUUUUU, or a "poly(U)" tail. But I believe the coronavirus has a poly(A) tail. I also found that all highlighted genes begin with the sequence ATG. This is the DNA equivalent of the RNA start codon AUG. So, I believe what I'm looking at is the true mRNA, in 5'→3' direction, but with all U converted to T. So, is this really what I'm looking at? Is this some formatting/representation issue? Or does 2019-nCoV really contain DNA, rather than RNA? | That is the correct sequence for 2019-nCoV. Coronavirus is of course an RNA virus and in fact, to my knowledge, every RNA virus in GenBank is present as cDNA (AGCT, i.e. thymine) and not RNA (AGCU, i.e. uracil). The reason is simple: we never sequence directly from RNA because RNA is too unstable and easily degraded by RNase. Instead the genome is reverse transcribed, either by targeted reverse transcription or random amplification, and thus converted to cDNA. cDNA is stable and is essentially reverse-transcribed RNA. The cDNA is either sequenced directly or further amplified by PCR and then sequenced. Hence the sequence we observe is the cDNA rather than the RNA; thus we observe thymine rather than uracil, and that is how it is reported. | {
"source": [
"https://bioinformatics.stackexchange.com/questions/11353",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/6910/"
]
} |
11,354 | I've got some doubts on the hisat2 --rna-strandness option and its output for downstream analysis. Is it expected to see a difference in alignment and counting results given the default usage of --rna-strandness in hisat2 followed by htseq-count -s reverse on a strand-specific assay? Please see below. I understand that the --rna-strandness option produces an XS tag to indicate where a transcript is from (on the + or - strand) for downstream transcriptome assembly analysis. I have a paired-end stranded sequencing library that was aligned to the genome using hisat2 without specifying the --rna-strandness (in other words, the unstranded default was used). Following this, the reads were assigned to genes using htseq-count, and this time -s reverse was specified given the strand-specific sequencing assay type. Since --rna-strandness is for transcriptome assembly using the XS tags generated, and htseq does not use XS tags for counting, I presume there should be no practical impact from the above. Could you also shed light on this, in case I have been overlooking other facets of how these tools are used? To help verify the above, I re-aligned and counted the reads from 2 samples by switching on --rna-strandness RF in hisat2. I attach the alignment and feature-count info below for assessment. Overall alignment rate of Sample 1: 94.52% (--rna-strandness RF)
94.12% (--rna-strandness unstranded) Overall alignment rate of Sample 2: 94.57% (--rna-strandness RF)
94.15% (--rna-strandness unstranded) Feature counts of Sample 1 (following --rna-strandness RF + -s reverse): __no_feature 6327294
__ambiguous 2954776
__too_low_aQual 3784481
__not_aligned 688856
__alignment_not_unique 4858182 Feature counts of Sample 1 (following --rna-strandness unstranded + -s reverse): __no_feature 6291151
__ambiguous 2911298
__too_low_aQual 4075017
__not_aligned 754400
__alignment_not_unique 16136045 Feature counts of Sample 2 (following --rna-strandness RF + -s reverse): __no_feature 5417882
__ambiguous 1708510
__too_low_aQual 3532352
__not_aligned 564596
__alignment_not_unique 2859501 Feature counts of Sample 2 (following --rna-strandness unstranded + -s reverse): __no_feature 5359434
__ambiguous 1676091
__too_low_aQual 3813344
__not_aligned 623122
__alignment_not_unique 2891792 These results look comparable to me across pipelines. Thanks
Guan | That is the correct sequence for 2019-nCoV. Coronavirus is of course an RNA virus and in fact, to my knowledge, every RNA virus in GenBank is present as cDNA (AGCT, i.e. thymine) and not RNA (AGCU, i.e. uracil). The reason is simple: we never sequence directly from RNA because RNA is too unstable and easily degraded by RNase. Instead the genome is reverse transcribed, either by targeted reverse transcription or random amplification, and thus converted to cDNA. cDNA is stable and is essentially reverse-transcribed RNA. The cDNA is either sequenced directly or further amplified by PCR and then sequenced. Hence the sequence we observe is the cDNA rather than the RNA; thus we observe thymine rather than uracil, and that is how it is reported. | {
"source": [
"https://bioinformatics.stackexchange.com/questions/11354",
"https://bioinformatics.stackexchange.com",
"https://bioinformatics.stackexchange.com/users/7167/"
]
} |
7 | For people not familiar with this name -- MathJax is a plugin which converts LaTeX math markup like $X_1^2$ into properly rendered math notation. On one hand, equations may appear too rare here to justify it; on the other hand, it might be useful for chemistry. Maybe it is not a desired use of this system, but it is easier to type $H_2O$ than to fight with HTML subscripts. | Yes, because to be frank, typing out any math without some sort of notation system in the Stack Exchange software is a pain. And I don't necessarily think it's "overkill". While some users might not use it a great deal, most of my work is either mathematical biology or mathematical epidemiology, and talking about predator-prey models, epidemic models, etc. - which don't really have a home to talk about their substance rather than their implementation outside of this site - is essentially impossible without typing out some math. More to the point, I think deciding not to have it stakes out a somewhat dangerous position: that this site isn't interested in questions regarding biology that require heading into any math more complex than high school algebra. | {
"source": [
"https://biology.meta.stackexchange.com/questions/7",
"https://biology.meta.stackexchange.com",
"https://biology.meta.stackexchange.com/users/-1/"
]
} |
784 | I have only really been active here for about 3 months, so I’m not very familiar with this site’s policies and ambitions. On some sites I participate in, there is a vision that seems to be shared, reflected partly by the comments certain kinds of questions get, by how quickly they're put on hold, etc. I don't have a good sense of what the vision of this site is. Sometimes I see questions that are of poor quality, probably because of a lack of understanding of science in general or biology in particular, as with this question and this one. Sometimes it’s both, or there is some other reason to object to the question. For example, incomprehensibility. Sometimes a question concerns me because I know the OP can answer it himself with a bit more thought, or because an OP posts a flurry of similar questions about slightly different drugs that have the same or similar mechanisms of action. I see some good questions closed, and I don't understand the reason. Sometimes, I see comments that lead me to wonder what is expected in the level of expertise of users here. (I think there is more expertise here than is sometimes recognized.) What is the vision this site has for itself? I'm clearly missing something here. | The audience I wanted this site to have when I committed to the A51 proposal was undergraduates, PhD students and PostDocs in biological sciences. I'm obviously biased as I'm part of that group, but even then I think that this group should be our core audience. They have the knowledge we need, and also still have enough questions to ask. I'm generally skeptical of targeting a site too high: most professors are extremely busy and it would be exceedingly difficult to get them onto a site like this to answer questions (unless we achieved the kind of professional standing a site like Math Overflow has). PhD students and PostDocs are the users that in my opinion could benefit the most from this site, and at the same time have the knowledge necessary to provide great answers, and more likely also the necessary time and inclination. This site was targeted towards all users in the A51 proposal. So of course we received a large number of questions from users that are not professionals. I did ask about this on meta early in our beta phase, but there was no real conclusion to that. While I'd personally like to lift the minimum entry barrier a bit, I'm also hesitant to actually propose this as I fear that this could get out of hand quickly. If there are no clear lines, this kind of decision gets rather arbitrary, and we end up with a pretty hostile site. I think that this is one of the topics we have to seriously discuss at some point, but I doubt there is an easy answer. | {
"source": [
"https://biology.meta.stackexchange.com/questions/784",
"https://biology.meta.stackexchange.com",
"https://biology.meta.stackexchange.com/users/5198/"
]
} |
3,439 | @Terdon made an interesting observation in the chat room recently: The simple truth is that we've never been a site for biologists. Those
of us who answer are usually biologists or in similar fields but in my
experience, the vast majority of questions have always come from
laymen. This is supported by this figure, showing the relationship between rep and questions asked: There was also a response from @James noting that graduate-level questions generally get fewer upvotes and less attention, although I acknowledge that's only anecdotal. Recently, this proposal was closed due to an apparent substantial overlap with Biology.SE. When I asked the proposer about this, their reasoning appeared to be that questions on Biology.SE were mostly by laymen and they wanted to create a community for more 'hardcore' questions. Since the proposal was closed, the originator of this proposal has not become a contributor to Biology.SE and I'm not aware of any others from that potential community who joined, so the SE network lost a few dozen professional biologists. I think both Biology.SE and the SE model in general are great. However, it seems that the fact that a high proportion of the community using Biology.SE consists of laymen could be discouraging professional biologists from joining. This does not affect the value of Biology.SE as a tool for public engagement but could limit its value as a tool for knowledge exchange. Does anyone else think this might be the case, and if so have any thoughts about what, if anything, we need to do about it? For example, one solution (although I don't really think it's a good one, it's just the first one I can think of) would be to set up a separate SE for biology professionals (ProfBio.SE?). I have only arrived here recently, but it looks to me as though this has been a recurring issue: Shouldn't we be more tolerant with newcomers and non-biologists?, Should we encourage the relevant questions from non-professionals? | Good question. It's always a good idea to now and then reflect on what our purpose and target audience are, as they often change over time. Basically, I agree with the OP that Biology SE is not really a site for professionals, at least not in the sense that professionals can come here and resolve actual research problems. But I'm not sure that will ever be feasible, and I don't think it means we need to change the site, because it does serve a function as it is. I'm a cell biology / biochemistry professional and joined Biology SE a year ago. At the time I was curious if it might be a good network to shoot some out-of-the-box questions to other scientists, broaden my horizons a bit and discuss issues outside my own field of expertise. I have got some interesting feedback on my own broad-audience research questions, but mainly I have been engaged in answering more basic questions from students and the general public. And I think this is fine --- today I mainly view Biology SE as an outreach activity, which I feel is a very important but often neglected task for academics. I don't think we have critical mass as a forum for research-level questions from scientists, and I doubt we ever will achieve that. Biology is too large a research field, and fragmented into so many subfields, that asking a research-level question in any of them requires access to the handful of world experts who really know what they're talking about. Most professionals ask those questions within their own network --- if I have a question on, say, the mechanism of transcriptional regulation of the glutaminase enzyme by c-myc, I pick up the phone and call the local c-myc expert at the department next door. I don't expect to get an informed answer by shooting a question to a web forum or googling Wikipedia.
Most of the interesting answers (or rather advice and discussions) are not found in the published literature. A StackExchange site for the actual scientists might work for mathematics and computer science, but I think biology as an empirical discipline is quite different. (I think this is what Koustav Pal is saying as well in the other answer.) This doesn't mean that Biology SE is not useful for scientists. While it is not a good channel for my own research questions, it is a good site for asking more general-purpose biology questions outside my own field, where I don't have expertise, out of pure curiosity. For example, I'm delighted to learn about weird species like the seemingly immortal hydras , or that some women have a fourth color receptor in their retina. On Biology SE I encounter biology facts and ideas I otherwise would not have come across, and I think that is important for professional biologists today, as the subfields become ever more specialized and people risk losing sight of the larger questions. But again, mostly this site handles rather basic questions from students and the general public, setting facts straight and clearing up fundamental misconceptions. And I think that's a worthwhile activity, and we should continue doing it. I think the description from the tour page that Koustav Pal is quoting, that Biology SE is "for biology researchers, academics, and students", is not really accurate in describing who is asking the most questions (that would be students and laymen, as the graph in the question suggests). But it is probably fairly accurate if it describes who is giving the most answers . And perhaps that's more important to describe, as it signals to a new would-be member that this is a site where he/she can get some informed answers to his/her questions. And most of the time, they do get well informed answers --- better than they would find otherwise --- and for free. That's not a bad outcome. | {
"source": [
"https://biology.meta.stackexchange.com/questions/3439",
"https://biology.meta.stackexchange.com",
"https://biology.meta.stackexchange.com/users/22628/"
]
} |
32 | What do the strain designations for flu mean? For example, avian flu is classified as H5N1; what do the letters H, N and the numbers 5, 1 mean? Is it more than a simple string identifier? | The sub-type is named for the broad classes of the hemagglutinin (HA) or neuraminidase (NA) surface proteins sticking through the viral envelope. There are 16 HA sub-types (designated H1 - H16) and 9 NA sub-types (designated N1 - N9). All of the possible combinations of these influenza A subtypes infect birds, but only those containing the H1, H2, H3, H5, H7 and H9 and the N1, N2 and N7 surface proteins infect humans, and of these, so far, only H1, H2, H3 and N1 and N2 do so to any extent. Read more: http://www.fluwiki.info/pmwiki.php?n=Science.NamingInfluenzaViruses | {
"source": [
"https://biology.stackexchange.com/questions/32",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/61/"
]
} |
90 | I'm by no means an expert in the field, merely a curious visitor, but I've been thinking about this and Google isn't of much help. Do we know of any lifeforms that don't have the conventional double-helix DNA as we know it? Have any serious alternatives been theorized? | To follow up what mbq said, there have been a number of "origin of life" studies which suggest that RNA was a precursor to DNA, the so-called "RNA world" (1), since RNA can carry out both of the roles which DNA and proteins perform today. Further speculations suggest that something like a peptide nucleic acid (PNA) may have preceded RNA, and so on. Catalytic molecules and genetic molecules are generally required to have different features. For example, catalytic molecules should be able to fold and have many building blocks (for catalytic action), whereas genetic molecules should not fold (for template synthesis) and have few building blocks (for high copy fidelity). This puts a lot of demands on one molecule. Also, catalytic biopolymers can (potentially) catalyse their own destruction. RNA seems to be able to balance these demands, but then the difficulty is in making RNA prebiotically - so far this has not been achieved. This has led to interest in "metabolism first" models where early life has no genetic biopolymer and somehow gives rise to genetic inheritance. However, so far this seems to have been little explored and largely unsuccessful (2). edit I just saw this popular article in New Scientist which also discusses TNA (threose nucleic acid) and gives some background reading for PNA, GNA (glycol nucleic acid) and ANA (amyloid nucleic acid). (1) Gilbert, W., 1986, Nature, 319, 618, "Origin of life: The RNA world" (2) Copley et al., 2007, Bioorg Chem, 35, 430, "The origin of the RNA world: co-evolution of genes and metabolism." | {
"source": [
"https://biology.stackexchange.com/questions/90",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/26/"
]
} |
91 | The information between the brain and peripheral nerves is sent via electrical pulses or signals. How, then, does a non-metallic human cell manage to conduct an electrical
signal? | This is quite a big question! I'll try to outline the basic view. First, let's review how neurons signal to each other. The canonical way for a neuron to send a signal to a downstream neuron is by generating an action potential , the "electrical impulse" you have heard of. This action potential causes the release of neurotransmitter at a point where the two cells are very close to each other, called a synapse . The downstream postsynaptic cell receives the neurotransmitter signal and converts it into a small electrical signal. If enough of these small electrical signals happen in a short time, they sum together and are likely to initiate an action potential in the second cell, and the cycle repeats all along the circuit. How is the electrical signal generated? The basics of how this works were worked out most famously by Hodgkin and Huxley in 1952. The short story is that the plasma membrane is selectively permeable to ions . Let's build the concept from the ground up. The toolbox Imagine a sphere of plasma membrane that represents a simple neuron. For starters, we assume that this membrane is bare lipid with no membrane-associated proteins. Because of the hydrophobicity of the bilayer, charged particles cannot diffuse through the membrane. The cell is bathed, inside and outside, in a solution containing many ions (charged atoms), including sodium (Na+), potassium (K+), chloride (Cl-), and calcium (Ca2+). As we noted above, these ions cannot go through the membrane without "help". Now we add an ion pump protein into the membrane which will pump sodium ions out and potassium ions in. This particular pump, the Na-K ATPase , creates an excess of sodium ions outside the cell and an excess of potassium ions inside. Now we add a potassium ion channel to the membrane. This protein creates a pore in the membrane that only allows potassium ions through. This particular protein's pore is always open. Now things start getting exciting... What do the potassium ions do now that they can go through the membrane? Ions will move based on the forces created by their electrochemical gradients . The pump created a chemical gradient by putting excess K+ inside, so the K+ ions start to flow out through the ion channels. But K+ ions are positively charged, so when they flow out, positive charge starts building up outside and negative charge builds up inside. This electrical gradient opposes the chemical gradient, tending to pull the K+ ions into the cell while the chemical gradient pulls K+ ions out. The influx and efflux reach an equilibrium at the Nernst potential , where the electrical and chemical forces balance out. For physiological concentrations of K+ ions, the K+ equilibrium potential is about -80mV or -90mV. This means that K+ ions will flow until the outside of the cell is 80-90mV more positive than the inside of the cell. We started at 0mV, so K+ ions mostly flow out. We now have a membrane potential , a difference in electrical potential between the inside and the outside of the cell, at about -80mV (usually closer to -70mV or -60mV in "real life"). In particular, this membrane potential is the resting potential that exists when the cell is not active. We can simplify for now and think of the resting potential as being set by a resting permeability of the membrane to potassium ions, but not to sodium ions.
We call this membrane polarized, and thus depolarization is when the membrane potential becomes more positive, and hyperpolarization is when the membrane potential becomes more negative. Now, we add to the membrane a voltage-gated sodium channel , an ion channel that passes only sodium ions but is usually closed. The voltage-gating means that this ion channel is sensitive to the membrane potential. At the resting potential, the pore is closed and the membrane is still impermeable to sodium ions. When the membrane potential becomes slightly more positive, the channel opens and sodium ions can flow. This channel is also inactivating , so that when it opens it only opens for a short period of time, letting in a limited amount of sodium. Which way will sodium flow when we open this channel? Because of the negative resting potential (-70mV) and the excess of sodium ions outside due to the pump, both the electrical and chemical gradients will drive sodium ions into the cell. The sodium equilibrium potential is usually around +60mV. To complete the machinery for generating an action potential, we also add a voltage-gated potassium channel to the membrane. It works just like the voltage-gated sodium channel: it is also closed at rest and opens when the membrane potential becomes more positive. This channel opens a bit more slowly than the sodium channel does, but it does not inactivate. Generating an action potential Ok, so how do these parts come together to create an electrical impulse? The cell sits at its resting membrane potential, with all of its voltage-gated channels closed. It receives a signal from an upstream cell that causes a slight depolarization. The action potential will initiate when the membrane potential hits the threshold potential . At the threshold potential, the voltage-gated sodium channels open, letting sodium ions flow into the cell. The sodium flux pulls the membrane from the resting potential (-70mV) towards the sodium equilibrium potential (+60mV). These values are far apart, so the driving force is large and the membrane depolarizes rapidly. This is the action potential upstroke . The depolarization also activates the (slightly slower) voltage-gated potassium channels. The potassium ions flow out and drive the depolarized membrane (about +20mV at the action potential peak) back towards the potassium equilibrium potential (-80mV). At the same time, the sodium channels are inactivating so that sodium is no longer depolarizing the membrane. The repolarization rate is usually slower than the depolarization rate. This is the action potential downstroke . The whole process of the action potential depolarization/repolarization cycle takes about 2-3 milliseconds in an "average" neuron. Once the cell reaches the resting potential again, the membrane is basically reset. The voltage-gated channels are turned off. The ion pump moves back the potassium ions that flowed out and the sodium ions that flowed in. That patch of membrane is ready to fire another action potential! As a final note, I'll mention that the voltage-gated sodium channel provides a mechanism for the action potential to propagate down the axon. The action potential is initiated in one location of the cell, and creates a depolarization. This depolarization causes the voltage-gated sodium channels in neighbouring regions of the membrane to open and generate an action potential cycle of their own. This is how an action potential travels down axons (and sometimes dendrites too). | {
"source": [
"https://biology.stackexchange.com/questions/91",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/135/"
]
} |
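The Nernst potentials quoted in the answer above are easy to reproduce; a minimal sketch at 37 °C using illustrative textbook ion concentrations (the exact values vary by cell type):
from math import log

R = 8.314    # gas constant, J / (mol K)
F = 96485.0  # Faraday constant, C / mol
T = 310.0    # absolute temperature, K (about 37 degrees C)

def nernst_mV(conc_out, conc_in, z):
    # Equilibrium potential in millivolts for an ion of valence z.
    return 1000.0 * (R * T) / (z * F) * log(conc_out / conc_in)

# Typical textbook concentrations in mM: K+ 5 out / 140 in, Na+ 145 out / 12 in.
print(f"E_K  = {nernst_mV(5, 140, +1):.0f} mV")   # about -89 mV
print(f"E_Na = {nernst_mV(145, 12, +1):.0f} mV")  # about +67 mV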
171 | What is the advantage gained by the substitution of thymine for uracil in DNA? I have read previously that it is due to thymine being "better protected" and therefore more suited to the storage role of DNA, which seems fine in theory, but why does the addition of a simple methyl group make the base better protected? | One major problem with using uracil as a base is that cytosine can be deaminated , which converts it into uracil. This is not a rare reaction; it happens around 100 times per cell, per day. This is no major problem when using thymine, as the cell can easily recognize that the uracil doesn't belong there and can repair it by substituting it by a cytosine again. There is an enzyme, uracil DNA glycosylase , that does exactly that; it excises uracil bases from double-stranded DNA. It can safely do that as uracil is not supposed to be present in the DNA and has to be the result of a base modification. Now, if we would use uracil in DNA it would not be so easy to decide how to repair that error. It would prevent the usage of this important repair pathway. The inability to repair such damage doesn't matter for RNA as the mRNA is comparatively short-lived and any potential errors don't lead to any lasting damage. It matters a lot for DNA as the errors are continued through every replication. Now, this explains why there is an advantage to using thymine in DNA, it doesn't explain why RNA uses uracil. I'd guess it just evolved that way and there was no significant drawback that could be selected against, but there might be a better reason (more difficult biosynthesis of thymine, maybe?). You'll find a bit more information on that in "Molecular Biology of the Cell" from Bruce Alberts et al. in the chapter about DNA repair (from page 267 on in the 4th edition). | {
"source": [
"https://biology.stackexchange.com/questions/171",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/69/"
]
} |
175 | I am currently doing an experiment on cells to test the internalization of a protein.
Normally, I seeded my cells the day before the incubation. This worked well for HeLa, CHL or PANC1 cells. However, when I did the same with INS1-E and MIN6 (both beta-cells), after the incubation and the washing step the majority of the cells were gone. This was better in the control where I did not put in the compound, but still a lot of cells were detached. Therefore, I wonder if I should seed the INS1-E and MIN6 cells earlier, more like 2-3 days before. Do these cell lines need more time after splitting to attach again? | One major problem with using uracil as a base is that cytosine can be deaminated , which converts it into uracil. This is not a rare reaction; it happens around 100 times per cell, per day. This is no major problem when using thymine, as the cell can easily recognize that the uracil doesn't belong there and can repair it by substituting it by a cytosine again. There is an enzyme, uracil DNA glycosylase , that does exactly that; it excises uracil bases from double-stranded DNA. It can safely do that as uracil is not supposed to be present in the DNA and has to be the result of a base modification. Now, if we would use uracil in DNA it would not be so easy to decide how to repair that error. It would prevent the usage of this important repair pathway. The inability to repair such damage doesn't matter for RNA as the mRNA is comparatively short-lived and any potential errors don't lead to any lasting damage. It matters a lot for DNA as the errors are continued through every replication. Now, this explains why there is an advantage to using thymine in DNA, it doesn't explain why RNA uses uracil. I'd guess it just evolved that way and there was no significant drawback that could be selected against, but there might be a better reason (more difficult biosynthesis of thymine, maybe?). You'll find a bit more information on that in "Molecular Biology of the Cell" from Bruce Alberts et al. in the chapter about DNA repair (from page 267 on in the 4th edition). | {
"source": [
"https://biology.stackexchange.com/questions/175",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/126/"
]
} |
222 | Many plants (e.g. roses, palms) can be protected from frost during the winter if shielded with an appropriate coat that can be bought in garden shops. Do plants produce any heat that can be kept inside with these "clothes"? | Cellular respiration in plants is slightly different than in other eukaryotes because the electron transport chain contains an additional enzyme called Alternative Oxidase (AOX). AOX takes some electrons out of the pathway prematurely - basically the energy is used to generate heat instead of ATP. The exact purpose of AOX in plants is still unclear. Plants will make more AOX in response to cold, wounding, and oxidative stress. We know of at least one plant (skunk cabbage) that exploits this pathway to generate enough heat to melt snow. This link gives a pretty good overview. (AOX is dear to my heart, since my first 3 years working in a laboratory were spent studying this gene <3) | {
"source": [
"https://biology.stackexchange.com/questions/222",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/164/"
]
} |
239 | Darwin suggested that sexual selection, especially by female choice, may counter natural selection. Theoretical models, such as a Fisherian runaway process, suggest that evolution of preference and preferred phenotypes may drive each other at ever-increasing speed. Because one male may fertilize many females, one could imagine that natural selection against preferred but energetically costly phenotypes may be weak, and the whole process may not slow down fast enough (i.e., be sufficiently self-limiting). If male mortality is high and their number is low, random fluctuations may easily cause the extinction of the population. Is there any fossil or experimental evidence that this may really happen? | TL;DR: There is a dearth of actual experimental evidence. However, there is at least one study that confirmed the process ([STUDY #7] - Myxococcus xanthus; by Fiegna and Velicer, 2003). Other studies experimentally confirmed a higher extinction risk as well ([STUDY #8] - Paul F. Doherty's study of dimorphic bird species, and [STUDY #9] - Denson K. McLain's). Theoretical studies produce somewhat unsettled results - some models support evolutionary suicide and some do not - and the major difference seems to be the variability of environmental pressures. Also, if you include human predation based solely on a sexually selected trait, examples definitely exist, e.g. the Arabian Oryx.
First of all, this may be cheating, but one example is extinction caused by a predator species that selects its prey specifically because of a sexually selected feature. The most obvious case is when the predator species is human. As a random example, the Arabian Oryx was nearly hunted to extinction specifically because of its horns.
Please note that this is NOT a simple question - for example, the example of the Irish Elk, often cited in unscientific literature as having gone extinct due to its antler size, may not be a crystal-clear case. For a very thorough analysis, see: "Sexy to die for? Sexual selection and risk of extinction" by Hanna Kokko and Robert Brooks, Ann. Zool. Fennici 40: 207-219. [STUDY #1] They specifically find that evolutionary "suicide" is unlikely in deterministic environments, at least if the costs of the feature are borne by the individual organism itself.
Another study with a negative result was "Sexual selection and the risk of extinction in mammals", Edward H. Morrow and Claudia Fricke; The Royal Society Proceedings: Biological Sciences, published online 4 November 2004, pp. 2395-2401. [STUDY #2] The aim of this study was therefore to examine whether the level of sexual selection (measured as residual testes mass and sexual size dimorphism) was related to the risk of extinction that mammals are currently experiencing. We found no evidence for a relationship between these factors, although our analyses may have been confounded by the possible dominating effect of contemporary anthropogenic factors.
However, if one takes changes in the environment into consideration, extinction becomes theoretically possible. From "Runaway Evolution to Self-Extinction Under Asymmetrical Competition" - Hiroyuki Matsuda and Peter A. Abrams; Evolution Vol. 48, No. 6 (Dec., 1994), pp. 1764-1772: [STUDY #3] We show that purely intraspecific competition can cause evolution of extreme competitive abilities that ultimately result in extinction, without any influence from other species. The only change in the model required for this outcome is the assumption of a nonnormal distribution of resources of different sizes measured on a logarithmic scale. This suggests that taxon cycles, if they exist, may be driven by within- rather than between-species competition. Self-extinction does not occur when the advantage conferred by a large value of the competitive trait (e.g., size) is relatively small, or when the carrying capacity decreases at a comparatively rapid rate with increases in trait value. The evidence regarding these assumptions is discussed. The results suggest a need for more data on resource distributions and size-advantage in order to understand the evolution of competitive traits such as body size.
As far as supporting evidence, some studies are listed in "Can adaptation lead to extinction?" by Daniel J. Rankin and Andrés López-Sepulcre, OIKOS 111:3 (2005). [STUDY #4] They cite three: The first example is a study on the Japanese medaka fish Oryzias latipes (Muir and Howard 1999 - [STUDY #5]). Transgenic males which had been modified to include a salmon growth-hormone gene are larger than their wild-type counterparts, although their offspring have a lower fecundity (Muir and Howard 1999). Females prefer to mate with larger males, giving the larger transgenic males a fitness advantage over wild-type males. However, offspring produced with transgenic males have a lower fecundity, and hence average female fecundity will decrease. As long as females preferentially mate with larger males, the population density will decline. Models of this system have predicted that, if the transgenic fish were released into a wild-type population, the transgene would spread due to its mating advantage over wild-type males, and the population would go extinct (Muir and Howard 1999). A recent extension of the model has shown that alternative mating tactics by wild-type males could reduce the rate of transgene spread, but that this is still not sufficient to prevent population extinction (Howard et al. 2004). Although evolutionary suicide was predicted from extrapolation, rather than observed in nature, this constitutes the first study making such a prediction from empirical data.
In cod, Gadus morhua, the commercial fishing of large individuals has resulted in selection towards earlier maturation and smaller body sizes (Conover and Munch 2002 [STUDY #6]). Under exploitation, high mortality decreases the benefits of delayed maturation. As a result of this, smaller adults, which mature faster, have a higher fitness relative to their larger, slow-maturing counterparts (Olsen et al. 2004). Despite being more successful relative to slow-maturing individuals, the fast-maturing adults produce fewer offspring, on average. This adaptation, driven by the selective pressure imposed by harvesting, seems to have pre-empted a fishery collapse off the Atlantic coast of Canada (Olsen et al. 2004). As the cod evolved to be fast-maturing, population size was gradually reduced until it became inviable and vulnerable to stochastic processes.
The only strictly experimental evidence for evolutionary suicide comes from microbiology. In the social bacterium Myxococcus xanthus, individuals can develop cooperatively into complex fruiting structures (Fiegna and Velicer 2003 - [STUDY #7]). Individuals in the fruiting body are then released as spores to form new colonies. Artificially selected cheater strains produce a higher number of spores than wild types. These cheaters were found to invade wild-type strains, eventually causing extinction of the entire population (Fiegna and Velicer 2003). The cheaters invade the wild-type population because they have a higher relative fitness, but as they spread through the population, they decrease the overall density, thus driving themselves, and the population in which they reside, to extinction.
Another experimental study was "Sexual selection affects local extinction and turnover in bird communities" - Paul F. Doherty, Jr., Gabriele Sorci, et al.; 5858-5862, PNAS May 13, 2003, vol. 100, no. 10. [STUDY #8] Populations under strong sexual selection experience a number of costs ranging from increased predation and parasitism to enhanced sensitivity to environmental and demographic stochasticity. These findings have led to the prediction that local extinction rates should be higher for species/populations with intense sexual selection. We tested this prediction by analyzing the dynamics of natural bird communities at a continental scale over a period of 21 years (1975-1996), using relevant statistical tools. In agreement with the theoretical prediction, we found that sexual selection increased risks of local extinction (dichromatic birds had on average a 23% higher local extinction rate than monochromatic species). However, despite higher local extinction probabilities, the number of dichromatic species did not decrease over the period considered in this study. This pattern was caused by higher local turnover rates of dichromatic species, resulting in relatively stable communities for both groups of species. Our results suggest that these communities function as metacommunities, with frequent local extinctions followed by colonization.
This result is similar to another bird-centered study, "Sexual Selection and the Risk of Extinction of Introduced Birds on Oceanic Islands": Denson K. McLain, Michael P. Moulton and Todd P. Redfearn. OIKOS Vol. 74, No. 1 (Oct., 1995), pp. 27-34. [STUDY #9] We test the hypothesis that response to sexual selection increases the risk of extinction by examining the fate of plumage-monomorphic versus plumage-dimorphic bird species introduced to the tropical islands of Oahu and Tahiti. We assume that plumage dimorphism is a response to sexual selection and we assume that the males of plumage-dimorphic species experience stronger sexual selection pressures than males of monomorphic species. On Oahu, the extinction rate for dimorphic species, 59%, is significantly greater than for monomorphic species, 23%. On Tahiti, only 7% of the introduced dimorphic species have persisted, compared to 22% for the introduced monomorphic species. ... Plumage is significantly associated with increased risk of extinction for passerids but insignificantly associated for fringillids. Thus, the hypothesis that response to sexual selection increases the risk of extinction is supported for passerids and for the data set as a whole. The probability of extinction was correlated with the number of species already introduced. Thus, species that have responded to sexual selection may be poorer interspecific competitors when their communities contain many other species. | {
"source": [
"https://biology.stackexchange.com/questions/239",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/68/"
]
} |
305 | What is the smallest viable reproducing population, such as in a human population? By viable I mean a population which keeps genetic defects low (enough). A very strongly related question: what is the expected number of generations a given population can survive? | The conservation biology literature has a great deal of information, particularly with reference to developing species survival plans (e.g., Traill et al. [2007] report that a minimum effective population size of ~4,000 will give a 99% persistence probability over 40 generations). Because the question specifically mentions human populations, I'll focus my answer on the genetics of small human populations, though considerably less information is available. Hamerton et al. (1965; Nature 206:1232-1234) studied chromosome abnormalities in 201 individuals from a total population size of 268 on the small island of Tristan da Cunha . These authors report increasing chromosome abnormalities ( aneuploidy ; hypo- or hyperdiploidy) with age and suggest that it may result in decreased mitotic efficiency. This population is thought to have developed from a founder population of only 15. According to Mantle and Pepys (2006; Clin Exp Allergy 4:161-170), approximately two or three of the original settlers were asthmatic, which has led to a very high prevalence (32%) in the current population. Kaessmann et al. (2002; Am J Hum Genet 70:673-685) present a more modern study of linkage disequilibrium in two small human populations (Evenki and Saami; ~58,000 and ~60,000 population sizes, respectively) compared to two large populations (Finns and Swedes; ~5 and ~9 million). The authors find significant LD in 60% of the Evenki population and 48% of the Saami, but only 29% in Finns and Swedes. Lieberman et al. (2007; Nature 445:727-731) discuss the potential for human kin detection to avoid inbreeding. Such mechanisms have been found in other species, "from social amoebas, social insects and shrimp, to birds, aphids, plants, rodents and primates." Lieberman et al. propose mechanisms contributing to sibling detection in humans, including "maternal perinatal association" and "coresidence duration." Beyond these behavioral cues, the authors also suggest physiological cues such as the major histocompatibility complex as playing a role. (A toy simulation of genetic drift, one reason small populations lose diversity, follows this entry.) | {
"source": [
"https://biology.stackexchange.com/questions/305",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/84/"
]
} |
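As an illustrative aside to the answer above (my own sketch, not from the cited papers): part of why small populations fare poorly is genetic drift. Allele frequencies fluctuate randomly between generations, and the smaller the population, the sooner variation is fixed or lost. A minimal Wright-Fisher sketch in Python, with arbitrary population sizes and run counts:

```python
from random import random

def generations_to_absorption(pop_size, p=0.5):
    """Resample a neutral allele's frequency until it is fixed (p=1) or lost (p=0)."""
    gens = 0
    while 0 < p < 1:
        # each of the 2N gene copies in the next generation is an independent
        # draw from the current allele frequency p (Wright-Fisher model)
        p = sum(random() < p for _ in range(2 * pop_size)) / (2 * pop_size)
        gens += 1
    return gens

for n in (20, 100, 500):
    runs = [generations_to_absorption(n) for _ in range(20)]
    print(n, round(sum(runs) / len(runs)))  # mean time to lose variation grows with N
```

On average the time to fixation or loss grows roughly linearly with population size, so a population of 20 shuffles away its variation an order of magnitude faster than one of 500.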
320 | My teachers growing up told me it was impossible to decode the maize genome. But yet it has been done. Why was decoding the genome so significant, and what made it so difficult? | The short answer is that the corn genome is large and has a huge number of duplication events. Around 80% of the genome is repeated. It is hard to assemble genomes with a large amount of duplication because our sequencing technology practically gives, at best, reads of ~500 base pairs. Figuring out the ordering of duplicated regions relies on scaffold sequences or on comparative assembly against the rice genome. (A toy illustration of why repeats defeat short reads follows this entry.) | {
"source": [
"https://biology.stackexchange.com/questions/320",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/238/"
]
} |
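To make the read-length point above concrete, here is a small sketch (my own illustration, not from the answer): when a repeat is longer than the reads, two genomes that differ only in the order of their unique segments yield exactly the same set of reads, so no assembler can tell them apart from the reads alone.

```python
def reads(genome, k):
    """The set of all length-k substrings ("reads") of a genome."""
    return {genome[i:i + k] for i in range(len(genome) - k + 1)}

R = "X" * 10                                   # a repeat longer than any read
g1 = "AAAAAAAA" + R + "BBBBBBBB" + R + "CCCCCCCC" + R + "DDDDDDDD"
g2 = "AAAAAAAA" + R + "CCCCCCCC" + R + "BBBBBBBB" + R + "DDDDDDDD"

k = 8                                          # read length < repeat length
print(reads(g1, k) == reads(g2, k))            # True: the reads cannot distinguish them
```

With ~80% of the maize genome repeated, vast stretches are ambiguous in exactly this way, which is why longer-range information (scaffolds, or a related genome such as rice as a guide) is needed.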
324 | The process of sleep seems to be very disadvantageous to an organism, as it is extremely vulnerable to predation for several hours at a time. Why is sleep necessary in so many animals? What advantage did it give the individuals that evolved to have it as an adaptation? When and how did it likely occur in the evolutionary path of animals? | This good non-scholarly article covers some of the usual advantages (rest/regeneration). One of the research papers they mentioned (they linked to the press release) was Conservation of Sleep: Insights from Non-Mammalian Model Systems by John E. Zimmerman, Ph.D.; Trends Neurosci. 2008 July; 31(7): 371–376. Published online 2008 June 5. doi: 10.1016/j.tins.2008.05.001; NIHMSID: NIHMS230885 . To quote from the press release: Because the time of lethargus coincides with a time in the roundworms' life cycle when synaptic changes occur in the nervous system, they propose that sleep is a state required for nervous system plasticity. In other words, in order for the nervous system to grow and change, there must be down time of active behavior. Other researchers at Penn have shown that, in mammals, synaptic changes occur during sleep and that deprivation of sleep results in a disruption of these synaptic changes. | {
"source": [
"https://biology.stackexchange.com/questions/324",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/62/"
]
} |
339 | I'm a biology amateur, but it seems like sexual selection is almost always performed based on physical characteristics, the outcome of physical contests, or some sort of elaborate courtship. But do any non-Homo sapiens perform sexual selection based on intelligence factors, like problem-solving abilities? If so, how does the species accomplish this? I know natural selection as a whole would definitely favor intelligent individuals, but I'm curious if any species actually takes this into account when choosing mates. | Very interesting question. The problem is that animal intelligence is hard to measure, not only for scientists but probably also for the potential mate. Paradoxically, that is why selection for intelligence, if it occurred, may be very strong. One has to be smart in order to recognise smart behaviour, so preference and preferred feature are strongly connected. But that's only my opinion. Boogert et al., 2011 1 review the current knowledge about animal preferences for cognitive skills. They conclude that there is very little data on this subject. The given examples are: 1) Preference for elaborate bird songs (as songs are not inborn and have to be learned) 2) Spatial abilities: In meadow voles (Microtus pennsylvanicus), males with better spatial learning and memory abilities were not only found to have larger home ranges and to locate more females in the field (Spritzer, Solomon, et al. 2005 2 ) but were also preferred by females in mate-choice tests, even though the females did not observe males' performance on spatial tests (Spritzer, Meikle, et al. 2005 3 ). In guppies (Poecilia reticulata), males that learned faster to swim through mazes to gain a food reward were found to be more attractive to females (Shohet and Watt 2009 4 ). However, females were not able to see the males' performance in the mazes. Although male learning ability was weakly correlated with the saturation of the orange patches on his body (a sexually selected trait (...)), orange saturation surprisingly did not correlate with female preferences. Thus, the cues leading female guppies to prefer faster learners are unknown. It is possible that females base their choice on some factor that correlates with cognitive skills, or on overall condition, which may depend on intelligence. 3) Bowerbirds' abilities to build bowers (courtship constructions): Comparative studies across bowerbird species have shown that relative brain size is larger in species that build bowers than in closely related nonbuilding species (Madden 2001 5 ). In addition, relative brain size increases with the species-typical complexity of the bower (Madden 2001 5 ), and a comparative study on the relative size of specific brain regions showed that species with more complex bowers have a relatively larger cerebellum (Day et al. 2005 6 ). 4) Foraging performance: A recent experiment by Snowberg and Benkman (2009) 7 using red crossbills (Loxia curvirostra) showed that, after observing 2 males extracting seeds from conifer cones, females associated preferentially with the more efficient forager of the 2. The authors were able to exclude female choice for correlated traits by experimentally manipulating foraging efficiency, such that fewer seeds were available in the cones of one of the males. The males were also swapped between treatments (i.e., slow vs. fast forager) so that male identity could not explain the females' preferences for the most efficient forager.
Another way that intelligence may be favored by sexual selection is "cheating" during courtship. For example, most frog species call to attract females. But this signal may also attract aggressive rivals or predators. Some males, especially the weaker ones, do not call but stay near a calling individual. This allows them to avoid confrontation and wait for approaching females [8]. The success of this strategy may depend on how "smart" the individual is (only my opinion). [1] Boogert, N. J., Fawcett, T. W., & Lefebvre, L. (2011). Mate choice for cognitive traits: a review of the evidence in nonhuman vertebrates. Behavioral Ecology, 22(3), 447-459. [2] Spritzer MD, Solomon NG, Meikle DB. 2005. Influence of scramble competition for mates upon the spatial ability of male meadow voles. Anim Behav. 69:375-386. [3] Spritzer MD, Meikle DB, Solomon NG. 2005. Female choice based on male spatial ability and aggressiveness among meadow voles. Anim Behav. 69:1121-1130. [4] Shohet AJ, Watt PJ. 2009. Female guppies Poecilia reticulata prefer males that can learn fast. J Fish Biol. 75:1323-1330. [5] Madden J. 2001. Sex, bowers and brains. Proc R Soc Lond B Biol Sci. 268:833-838. [6] Day LB, Westcott DA, Olster DH. 2005. Evolution of bower complexity and cerebellum size in bowerbirds. Brain Behav Evol. 66:62-72. [7] Snowberg LK, Benkman CW. 2009. Mate choice based on a key ecological performance trait. J Evol Biol. 22:762-769. [8] Bateson P. 1985. Mate choice. Cambridge University Press. 181-210. | {
"source": [
"https://biology.stackexchange.com/questions/339",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/62/"
]
} |
344 | There seem to be a number of ideas about why we age. Hypotheses include the gradual accumulation of cell metabolic products affecting organism function and the reduction of telomere length during cell division.
My hand-wavey idea would be "wear and tear". Are we anywhere near a consensus theory of senescence? | The 'wear and tear' argument is most likely true, but it is also interesting to reason about ageing as inevitable from the evolutionary point of view. To set up the argument, we need two things:
First, each individual has a 'reproductive potential' which is realised throughout life. This means that a deleterious mutation which has an effect in early life will affect reproductive value more than a mutation which manifests itself in later life, after the individual has already had offspring. Thus selection will act more strongly on genes which are expressed in early life than on those which are expressed later. For that reason, there's no strong selection against diseases such as diabetes or cancer. This argument can be applied not only to the occurrence of disease but also to the decay of ordinary functions of the body. Secondly, cells in the body are constantly renewed and defects such as telomeric breaks are repaired. Mutations in the soma are taken care of by the immune system and can in principle be avoided. The fact that they tend to accumulate in later life can be explained by the first point: selection is too weak to oppose telomeric breaks and mutations in later life . I was trying to be brief here, but there are more sides to the argument (e.g. Williams' antagonistic pleiotropy). Modular Evolution (Vinicius, CUP 2010) provides a good overview of the evolutionary aspect of the theory of senescence (and many other interesting evolutionary arguments). | {
"source": [
"https://biology.stackexchange.com/questions/344",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/81/"
]
} |
345 | The standard protocol for a person experiencing chest pains is to chew a 300 mg aspirin tablet, the argument being that chewing rather than swallowing the tablet results in the aspirin entering the blood stream faster. From a biological standpoint, why is this the case? Given that the stomach and GI tract are tissues specialised to allow for maximum diffusion, why would it be faster to pass aspirin across the gums (buccal administration?), tongue and cheeks, which are not specialised for this purpose? It is not just a special case for aspirin either, as Hypostop TM /Glucogel TM (an acute treatment for hypoglycaemic shock, essentially concentrated sugar) is applied directly to the gums or cheek with a similar argument that in critical situations it is faster. The only suggestion I could find was a very vague one from the " Merck Manual ": The stomach has a relatively large epithelial surface, but its thick mucous layer and short transit time limit absorption.
Which I assume could mean that it is the reduced absorption rate in the stomach that makes the oral membranes faster, yet it also says that the delay in the stomach is brief. I'd be really interested to know the biology behind this! | There are several issues here: 1) Any mucous membrane is a specialized tissue for absorption. Mucous membranes are indeed not so good at passive diffusion, but that makes them absolutely perfect tools for the active absorption of certain substances, almost independently of the membrane type. To provide some examples: many drugs like cocaine are inhaled and absorbed in the nasal cavities, whereas the rectum is also a well-known and favored delivery route for medicines. Generally, the absorptive power of a mucous membrane depends on how well it is vascularized, for after going through the basal membrane the substance directly enters the blood flow and quickly travels away; thus the concentration gradient on both sides of the basal membrane -- the main barrier in mucosa -- remains relatively high. See Bhat P. 1995. The limiting role of mucus in drug absorption: Drug permeation through mucus solution, for an experimental model for this statement. 2) Mouth mucosa can absorb many substances. There is a special term, "oral absorption", to describe the rapid absorption of drugs into the blood flow from the mouth cavity. The mucous membrane is not specialized here, but small molecules are able to permeate through all the barriers. 3) Advantages of oral absorption. There are some: The mucous membrane in the mouth cavity is very highly vascularized. The whole mouth can be seen as a bundle of skeletal muscles, and every muscle requires a lot of energy and oxygen; therefore these tissues have one of the highest vascularity rates, with much less distance between single capillaries. Blood flow is higher in the walls of the mouth cavity than in other inner organs . This is mostly because muscle contractions (during chewing) lead to increased propulsion of blood through the capillaries and small vessels here. Any substance that enters the blood flow here bypasses the hepatic portal system . This means that the substance does not have to wait until it is filtered by our liver; it is immediately distributed through the whole body. There are special reflexes from the oral mucosa to inner organs. Even if not so important in the case of aspirin, this is very important for some placebo drugs like methyl valerate (known as "Validol" in many countries), used a lot for treating angina pectoris, whose only action is to activate the cold receptors in the mouth, thereby reflexively dilating the cardiac vessels. This is why many drugs, for example loperamide, are administered only as sub-lingual tablets. And this also explains why in emergency medicine many remedies are injected directly into the tongue. | {
"source": [
"https://biology.stackexchange.com/questions/345",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/69/"
]
} |
371 | Can anyone summarize the mechanism by which, when an object of a given temperature is placed in contact with, say, the skin on a human fingertip, the average speed of the particles of the object is converted into nerve signals to the brain? If you can answer that, how about the format in which the temperature is encoded in the nerve signals? | In the periphery (e.g. on our fingertips), our body senses external temperature through nerve terminals expressing certain TRP channels . These are ion channels that are sensitive to temperature (note that TRP channels can be sensitive to several things, such as pH, light, and stretch) and allow entrance of cations into the cell when the temperature is higher or lower than a certain threshold. Six TRP channels have been described as being involved in sensing different temperature ranges 1 , 2 : TRPV1 is activated at >43 °C, TRPV2 at >52 °C, TRPV3 at ~>34-38 °C, TRPV4 at ~>27-35 °C, TRPM8 at ~<25-28 °C, and TRPA1 at ~<17 °C. Not surprisingly, TRPV1 and TRPV2 are also involved in nociception (= pain perception). The exact molecular mechanisms by which different temperatures open different TRP channels are unclear, although some biophysical models have been proposed. 3 I am not sure of the exact "format" in which sensory fibers encode different temperatures, but I would assume that the neuron fires faster the further the temperature is from the specific threshold (hot or cold). (A small lookup-table sketch of these thresholds follows this entry.) 1 Thermosensation and pain. - J Neurobiol. 2004 Oct;61(1):3-12. 2 Sensing hot and cold with TRP channels. - Int J Hyperthermia. 2011;27(4):388-98. 3 Thermal gating of TRP ion channels: food for thought? - Sci STKE. 2006 Mar 14;2006(326):pe12. | {
"source": [
"https://biology.stackexchange.com/questions/371",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/263/"
]
} |
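As a purely illustrative aside to the answer above (my own sketch; the published ranges are approximate and overlapping, so each is simplified here to a single cut-off): the listed channels tile temperature space, so a given skin temperature activates a characteristic subset of them.

```python
# Simplified single-value thresholds; the real ranges overlap and are approximate.
TRP_THRESHOLDS = [
    ("TRPA1", lambda t: t < 17),   # noxious cold
    ("TRPM8", lambda t: t < 25),   # cool
    ("TRPV4", lambda t: t > 27),   # warm
    ("TRPV3", lambda t: t > 34),   # warm
    ("TRPV1", lambda t: t > 43),   # painful heat
    ("TRPV2", lambda t: t > 52),   # painful heat
]

def active_channels(temp_c):
    """Channels whose (simplified) threshold a given temperature crosses."""
    return [name for name, crossed in TRP_THRESHOLDS if crossed(temp_c)]

print(active_channels(45))   # ['TRPV4', 'TRPV3', 'TRPV1']
print(active_channels(10))   # ['TRPA1', 'TRPM8']
```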
378 | I am very interested in the evolution of the evolution process itself. There are of course a lot of things that influence how evolution works, but for this question I am interested in things that relate only to the evolution process. Examples could be an increased chance of mutations in newborns, a change in reproduction age, and similar. I am specifically interested in observations where the evolution process itself has adapted to a change in the environment. | Bacteria such as E. coli are known to increase their mutation rate (by switching to a more error-prone polymerase, among other things) when under stress. This can mean being placed in a medium where they are not adapted to grow ( http://www.micab.umn.edu/courses/8002/Rosenberg.pdf ) or being treated with antibiotics ( http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1088971/?tool=pmcentrez ). | {
"source": [
"https://biology.stackexchange.com/questions/378",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/205/"
]
} |
406 | This particular question has been of a great deal of interest to me, especially since it dives at the heart of abiogenesis. | In 2010, Dr. Craig Venter actually used a bacterial shell and wrote DNA for it . Scientists have created the world's first synthetic life form in a landmark experiment that paves the way for designer organisms that are built rather than evolved. (Snip) The new organism is based on an existing bacterium that causes mastitis in goats, but at its core is an entirely synthetic genome that was constructed from chemicals in the laboratory. Keep in mind, this is only a synthetic genome , not a truly unique organism created from scratch, although I am confident that the technology will become available in the future. As has been pointed out, the entire genome wasn't built de novo , but rather most of it was copied from a baseline which was built up from the base chemicals with no biological processes, and then the watermarks were added (still damn impressive, since they took inorganic matter and made a living cell function with it). But they are working on building a totally unique genome from scratch (PDF). This is actually quite an emerging field, so much so that the MIT press has set up an entire series of journals for this. As for the purpose of these artificial organisms, most research funded by companies is meant to be for specific purposes that biology hasn't solved yet (such as a bacterium that eats a particular toxic waste). However, a lot of people are concerned about scientists venturing into the domain of theology. In terms of abiogenesis, there are many resources to learn more about this. Here is a list of 88 papers that discuss the natural mechanisms of abiogenesis (this list is a little old, so I am sure that there are many, many more papers at this time). I also found this list of links and resources for artificial life. I cannot verify the usefulness of this since the field is a bit outside my area of expertise. However, it does seem quite extensive. EDIT TO ADD : Now we have "XNA" (totally synthetic genetic material) on the way. | {
"source": [
"https://biology.stackexchange.com/questions/406",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/279/"
]
} |
445 | The most annoying thing for me about being cold is a runny nose. Is there an advantage to having a runny nose when cold? What does having a runny nose achieve? | There are two reasons for this: Nasal mucus helps warm inhaled air before it reaches the lungs. In cold weather, the mucus tends to dry out, so the membranes increase their production. At the same time, exhaled air is warmer than the surrounding air, so it contains more moisture than the outside air can hold. This moisture condenses around the tip of the nose. Explanation found here . So there's no particular advantage to getting a runny nose; it's just a normal reaction occurring in extreme conditions. | {
"source": [
"https://biology.stackexchange.com/questions/445",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/158/"
]
} |
450 | I know plants are green due to chlorophyll. Surely it would be more beneficial for plants to be red than green, as by being green they reflect green light and do not absorb it, even though green light has more energy than red light. Is there no alternative to chlorophyll? Or is it something else? | Surely it would be even more beneficial for plants to be black instead of red or green, from an energy-absorption point of view. And solar cells are indeed pretty dark. But, as Rory indicated , higher-energy photons will only produce heat. This is because the chemical reactions powered by photosynthesis require only a certain amount of energy, and any excess delivered by higher-energy photons cannot simply be used for another reaction 1 but will yield heat. I don't know how much trouble that actually causes, but there is another point: As explained, what determines the efficiency of solar energy conversion is not the energy per photon, but the number of photons available. So you should take a look at the sunlight spectrum : The irradiance is an energy density; however, we are interested in photon density, so you have to divide this curve by the energy per photon, which means multiplying it by λ/(hc) (that is, longer wavelengths need more photons to achieve the same irradiance). If you compare that curve integrated over the high-energy photons (say, λ < 580 nm) to the integration over the low-energy ones, you'll notice that despite the atmospheric losses (the red curve is what is left of the sunlight at sea level) there are a lot more "red" photons than "green" ones, so making leaves red would waste a lot of potentially converted energy 2 . (A short numerical sketch of the photon count per unit energy follows this entry.) Of course, this is still no explanation of why leaves are not simply black - absorbing all light is surely even more effective, no? I don't know enough about organic chemistry, but my guess would be that there are no organic substances with such a broad absorption spectrum, and adding another kind of pigment might not pay off. 3 1) Theoretically that is possible, but it's a highly non-linear process and thus too unlikely to be of real use (in plant medium, at least). 2) Since water absorbs red light more strongly than green and blue light, deep-sea plants are indeed better off being red, as Marta Cz-C mentioned . 3) And other alternatives, like the semiconductors used in solar cells, are rather unlikely to be encountered in plants... Additional reading, proposed by Dave Jarvis : http://pcp.oxfordjournals.org/content/50/4/684.full http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3691134/ http://www.life.illinois.edu/govindjee/photosynBook/Chapter11.pdf https://www.heliospectra.com/sites/default/files/general/What%20light%20do%20plants%20need_5.pdf | {
"source": [
"https://biology.stackexchange.com/questions/450",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/299/"
]
} |
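A quick numerical sketch of the λ/(hc) point above (my own numbers, using the standard physical constants): for the same energy flux, longer wavelengths deliver proportionally more photons.

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photons_per_joule(wavelength_nm):
    """Photon count per joule of light at the given wavelength: lambda / (h*c)."""
    return (wavelength_nm * 1e-9) / (H * C)

for nm in (450, 550, 680):
    print(f"{nm} nm: {photons_per_joule(nm):.3e} photons/J")
# 680 nm light carries ~1.5x as many photons per joule as 450 nm light,
# so for driving photochemistry the "weaker" red photons add up.
```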
452 | My biology teachers never explained why animals need to breathe oxygen, just that we organisms die if we don't get oxygen for too long. Maybe one of them happened to mention that it's used to make ATP. Now in my AP Biology class we finally learned the specifics of how oxygen is used in the electron transport chain due to its high electronegativity. But I assume this probably isn't the only reason we need oxygen. What other purposes does the oxygen we take in through respiration serve? Does oxygen deprivation result in death just due to the halting of ATP production, or is there some other reason as well? What percentage of the oxygen we take in through respiration is expelled later through the breath as carbon dioxide? | Oxygen is actually highly toxic to cells and organisms - reactive oxygen species cause oxidative stress , essentially cell damage, and contribute to cell ageing. A lot of anaerobic organisms have never learned to cope with this and die almost immediately when exposed to oxygen. One classical example of this is C. botulinum . Oxygen is incorporated into several molecules in the cell (for instance riboses and certain amino acids), but as far as I know, all of this comes into the cell as metabolic products, not in the form of pure oxygen. The oxygen ( $\ce{O2}$ ) we breathe is completely used up during aerobic respiration. The stoichiometry of this is given by the following simplified equation (an atom-balance check follows this entry): $$\ce{C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O + heat}$$ WYSIWYG's answer goes into more detail. | {
"source": [
"https://biology.stackexchange.com/questions/452",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/62/"
]
} |
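As a small appended check (my own sketch, not from the answer): the simplified respiration equation balances exactly, which is also why the moles of CO2 exhaled match the moles of O2 consumed for pure glucose (a respiratory quotient of 1).

```python
from collections import Counter

def atoms(formula_counts, coefficient=1):
    """Scale a molecule's atom counts by its stoichiometric coefficient."""
    return Counter({el: n * coefficient for el, n in formula_counts.items()})

glucose = {"C": 6, "H": 12, "O": 6}
o2, co2, h2o = {"O": 2}, {"C": 1, "O": 2}, {"H": 2, "O": 1}

left = atoms(glucose) + atoms(o2, 6)      # C6H12O6 + 6 O2
right = atoms(co2, 6) + atoms(h2o, 6)     # 6 CO2 + 6 H2O
print(left == right, dict(left))          # True {'C': 6, 'H': 12, 'O': 18}
```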
477 | DNA replication goes in the 5' to 3' direction because DNA polymerase acts on the 3'-OH of the existing strand when adding free nucleotides. Is there any biochemical reason why all organisms evolved to go from 5' to 3'? Are there any energetic/resource advantages to using 5' to 3'? Is using the 3'-OH of the existing strand to attach the phosphate of the free nucleotide more energetically favorable than using the 3'-OH of the free nucleotide to attach the phosphate of the existing strand? Does it take more resources to create a 3' to 5' polymerase? | Prof. Allen Gathman has a great 10-minute video on Youtube, explaining the reaction of adding a nucleotide in the 5' to 3' direction, and why it doesn't work the other way. Briefly, the energy for the formation of the phosphodiester bond comes from the dNTP, which has to be added. dNTP is a nucleotide which has two additional phosphates attached to its 5' end. In order to join the 3'-OH group with the phosphate of the next nucleotide, one oxygen has to be removed from this phosphate group. This oxygen is also attached to two extra phosphates, which are in turn attached to a Mg++. The Mg++ pulls up the electrons of the oxygen, which weakens this bond, and the so-called nucleophilic attack of the oxygen from the 3'-OH succeeds, thus forming the phosphodiester bond. If you try to join the dNTP's 3'-OH group to the 5' phosphate of the next nucleotide, there won't be enough energy to weaken the bond between the oxygen connected to the 5' phosphorus (the other two phosphates of the dNTP are on the 5' end, not on the 3' end), which makes the nucleophilic attack harder. Watch the video; it is better explained there. | {
"source": [
"https://biology.stackexchange.com/questions/477",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/90/"
]
} |
510 | The accepted range of wavelengths of light that the human eye can detect is roughly between 400 nm and 700 nm. Is it a coincidence that these wavelengths are identical to those in the Photosynthetically Active Radiation (PAR) range (the wavelengths of light used for normal photosynthesis)? Alternatively, is there something special about photons with those energy levels that is leading to stabilising selection in multiple species as diverse as humans and plants? | Good question. If you look at the spectral energy distribution in the accepted answer here , we see that photons with wavelengths less than ~300 nm are absorbed by species such as ozone. Much beyond 750 nm, infrared radiation is largely absorbed by species such as water and carbon dioxide. Therefore the vast majority of solar photons reaching the surface have wavelengths that lie between these two extremes. Consequently, I would suggest that surface organisms will have adapted to use these wavelengths of light, whether in photoreceptors or in photosynthesis, since these are the wavelengths available; i.e., organisms have adapted to use these wavelengths of light, rather than these wavelengths being special per se (although in the specific case of photosynthesis there is a photon energy sweet spot). For example, this study suggests that some fungi might actually be able to utilize ionizing radiation in metabolism. This suggests that hypothetical organisms on a world bathed in ionizing radiation may evolve mechanisms to utilize this energy. | {
"source": [
"https://biology.stackexchange.com/questions/510",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/69/"
]
} |
546 | This is a question that has been in my mind since I was a kid. I'm not a doctor, nor even a biology student, just a curious person. What are the minimum and maximum temperatures a human body can stand without dying or suffering severe consequences (e.g. a burn or a freeze)? While on this subject, how much more global warming will the human body be able to take? Seeing as temperatures keep on rising, I'm just wondering how much longer until the temperature starts having drastic effects. In my country the temperature is about 35-40-45 degrees Celsius in mid-summer (I live in Romania, eastern Europe, and the climate is supposedly ideal here), which is very unhealthy. Does the human body suffer more and more as the temperatures change? | Hypothermia (when the body is too cold) is said to occur when the core body temperature of an individual has dropped below 35 °C. Normal core body temperature is 37 °C. ( 1 ) Hypothermia is then further subdivided into levels of seriousness ( 2 ) (although all can be damaging to health if left for an extended period of time): Mild, 35–32 °C: shivering, vasoconstriction, liver failure (which would eventually be fatal) or hypo/hyper-glycemia (problems maintaining healthy blood sugar levels, both of which could eventually be fatal). Moderate, 32–28 °C: pronounced shivering, sufficient vasoconstriction to induce shock, cyanosis in extremities & lips (i.e. they turn blue), muscle mis-coordination becomes more apparent. Severe, 28–20 °C: this is where your body starts to rapidly give up. Heart rate, respiratory rate and blood pressure fall to dangerous levels (a HR of 30 bpm would not be uncommon - normally it is around 70-100). Multiple organs fail and clinical death (where the heart stops beating and breathing ceases) soon occurs. However, as with most things in human biology, there is wide scope for variation between individuals. The Swedish media report the case of a seven-year-old girl recovering from hypothermia of 13 °C ( 3 ) (though children are often more resilient than adults). Hyperthermia (when the body is too hot - known in its acute form as heatstroke) is medically defined as a core body temperature from 37.5–38.3 °C (4) . A body temperature above 40 °C is likely to be fatal due to the damage done to enzymes in critical biochemical pathways (e.g. respiratory enzymes). As you mentioned burns , I will go into these too. Burns are a result of contact with a hot object or with infra-red (heat) radiation. Contact with hot liquid is referred to as a scald rather than a burn. Tests on animals showed that burns from hot objects start to take effect when the object is at least 50 °C and the heat is applied for over a minute. ( 5 ) Freeze-burn/frostbite , which is harder to heal than heat burns ( 6 ), occurs when vasoconstriction progresses to the degree where blood flow to affected areas is virtually nil. The affected tissue will eventually literally freeze, causing cell destruction. ( 7 ) Similarly to hypothermia, frostbite is divided into four degrees (which can be viewed on Wikipedia). As to the matter of global warming cooking us to death, I would imagine that it would be more indirect changes that got us first. If the average temperature had risen to the 40 °C necessary to cause heatstroke, sea levels would have risen hugely due to the melting of the polar ice caps. Crops and other food sources would likely be affected too; therefore I don't think that global warming is overly likely to directly kill humans. | {
"source": [
"https://biology.stackexchange.com/questions/546",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/324/"
]
} |
574 | If the aim of evolution is to allow an organism to better compete against rivals, why would stabilising selection ever happen? If you're not selecting the most highly adapted competitors at either end of the spectrum, then how would a species progress? | It occurs when a beneficial characteristic has been developed over time and it would be harmful to stray from it. In these cases it is not the individuals at the fringe (as you put it, the most adapted ) who are the best adapted. I think it may help if I answer with an example that is widely promoted by AQA in their A2 Biology syllabus. Stabilising selection in human birth weight: It is harmful for an infant to be born with a very low birth weight. They are much more vulnerable to heat loss due to their high surface area to volume ratio, and consequently their respiratory demands are very high. Pre-term babies (which account for 67% of low-birthweight infants (1) ) are particularly susceptible to respiratory problems (lack of surfactant in the lungs), cardiac problems (patent ductus arteriosus - the lungs are still bypassed when the umbilical cord has been cut) and dangerous intestinal problems (necrotizing enterocolitis), among many other conditions; all can be fatal ( further information on mentioned conditions ) and are reflected in high mortality rates at these low birth weights. It is therefore not beneficial to be at the extreme low end of birth weight. Similarly, delivering a child of too high a birth weight can cause complications with delivery if the head and shoulders are too wide to pass through the mother's hips. Therefore the other extreme of high birth weight is also not beneficial and will not be selected towards. This leads to selective pressures in both directions, stabilising towards a mean birth weight (the original answer illustrated this with a figure of the birth-weight distribution; a toy simulation follows this entry). This is an example of evolution not pushing a species forward but ensuring that individuals have the best chance of getting to reproductive age themselves. (1) Martin, J.A., et al. (2007). Births: Final Data for 2005. National Vital Statistics Reports, 56(6). | {
"source": [
"https://biology.stackexchange.com/questions/574",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/318/"
]
} |
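As an appended illustration of the answer above (my own toy model; all numbers are hypothetical): under a Gaussian fitness function centred on an optimum birth weight, the population mean stays put while the variance shrinks, which is the signature of stabilising selection.

```python
import math
import random

OPTIMUM, WIDTH = 3.4, 0.5   # hypothetical optimum (kg) and tolerance around it
pop = [random.gauss(OPTIMUM, 0.8) for _ in range(10_000)]

for _ in range(15):
    # survival probability falls off with distance from the optimum
    survivors = [x for x in pop
                 if random.random() < math.exp(-((x - OPTIMUM) / WIDTH) ** 2)]
    # offspring resemble a randomly chosen surviving parent, plus a little noise
    pop = [random.gauss(random.choice(survivors), 0.05) for _ in range(10_000)]

mean = sum(pop) / len(pop)
variance = sum((x - mean) ** 2 for x in pop) / len(pop)
print(f"mean {mean:.2f} kg, variance {variance:.3f}")  # mean ~3.4, variance well below the initial 0.64
```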
594 | Why are nearly all amino acids in organisms left-handed (the exception is glycine, which has no isomer) when abiotic samples typically have an even mix of left- and right-handed molecules? | I know that you are referring to the commonly ribosome-translated L-proteins, but I can't help but add that there are some peptides, called nonribosomal peptides, which are not dependent on the mRNA and can incorporate D-amino acids. They have very important pharmaceutical properties. I recommend this (1) review article if you are interested in the subject. It is also worth mentioning that D-alanine and D-glutamine are incorporated into the peptidoglycan of bacteria. I read several papers (2, 3, 4) that discuss the problem of chirality, but all of them conclude that there is no apparent reason why we live in the L-world. The L-amino acids should not have chemical advantages over the D-amino acids, as biocs already pointed out. Reasons for the occurrence of the twenty coded protein amino acids (2) has an informative and interesting outline. This is the paragraph on the topic of chirality: This is related to the question of the origin of optical activity in living organisms, on which there is a very large literature (Bonner 1972; Norden 1978; Brack and Spach 1980). We do not propose to deal with this question here, except to note that arguments presented in this paper would apply to organisms constructed from either D or L amino acids. It might be possible that both L and D life were present (L/D-amino acids, L/D-enzymes recognizing L/D-substrates), but, by random chance, the L-world outcompeted the D-world. I also found the same question in a forum where one of the answers seems intriguing. I cannot comment on the reliability of the answer, but hopefully someone will have the expertise to do so: One, our galaxy has a chiral spin and a magnetic orientation, which causes cosmic dust particles to polarize starlight as circularly polarized in one direction only. This circularly polarized light degrades D enantiomers of amino acids more than L enantiomers, and this effect is clear when analyzing the amino acids found on comets and meteors. This explains why, at least in the Milky Way, L enantiomers are preferred. Two, although gravity, electromagnetism, and the strong nuclear force are achiral, the weak nuclear force (radioactive decay) is chiral. During beta decay, the emitted electrons preferentially favor one kind of spin. That's right, the parity of the universe is not conserved in nuclear decay. These chiral electrons once again preferentially degrade D amino acids vs. L amino acids. Thus, due to the chirality of sunlight and the chirality of nuclear radiation, L amino acids are the more stable enantiomers and are therefore favored for abiogenesis. BIOSYNTHESIS OF NONRIBOSOMAL PEPTIDES Reasons for the occurrence of the twenty coded protein amino acids Molecular Basis for Chiral Selection in RNA Aminoacylation How nature deals with stereoisomers The adaptation of diastereomeric S-prolyl dipeptide derivatives to the quantitative estimation of R- and S-leucine enantiomers. Bonner WA, 1972 The asymmetry of life. Nordén B, 1978 Beta-Structures of polypeptides with L- and D-residues. Part III. Experimental evidences for enrichment in enantiomer. Brack A, Spach G, 1980 | {
"source": [
"https://biology.stackexchange.com/questions/594",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/81/"
]
} |
623 | From what I can tell, and what thus far all people with whom I discussed this subject confirmed, time appears to "accelerate" as we age. Digging a little, most explanations I found basically reduced this to two reasons: As we age physically, a time frame of constant length becomes ever smaller in contrast to the time we have spent living. As we age socially, we are burdened with an increasing amount of responsibility and thus an increasing influx of information, which impairs our perception of the present. To be honest, neither sounds entirely convincing to me because: In my perception "local time" (short time frames that I don't even bother to measure on the scale of my lifetime) is also accelerating. Just as an example: When I wait for the bus, time goes by reasonably fast, as opposed to my childhood tortures of having to wait an eternity for those five minutes to pass. Even after making a great effort to cut myself off from society and consciously trying to focus on the moment, the perceived speed of time didn't really change. (Although I did have a great time :)) Which leads me to a simple question (and a few corollaries): Am I just in denial of two perfectly plausible and sufficient explanations, or are there actual biological effects (e.g. changes in brain chemistry) in place that cause (or at least significantly influence) this? Is there a mechanism that "stretches out" time for the young brain so that the weight of an immense boredom forces it to benefit from its learning ability, while it "shrinks" time as the brain "matures" and must now act based on what it has learned, which often involves a lot of patience? If there is such a mechanism, are there any available means to counter it? (Not sure I'd really want to, but I'd like to know whether I could.) | This is not really a biological answer, but a psychological one: One important fact to consider is that the perception of time is essentially a recollection of past experience, rather than perception of the present. Researchers who study autobiographical memory have suggested that part of this effect may be explained by the number of recallable memories during a particular time period. During one's adolescence, one typically has a large number of salient memories, due to the distinctness of events. People often make new friends, move frequently, attend different schools, and have several jobs. As each of these memories is unique, recollection of these (many) memories gives the impression that the time span was large. In contrast, older adults have fewer unique experiences. They tend to work a single job, live in a single place, and have set routines which they may follow for years. For this reason, memories are less distinct, and are often blurred together or consolidated. Upon recollection, it seems like time went by quickly because we can't remember what actually happened. In other words, it can be considered a special case of the availability heuristic : people judge a time span in which there are more salient/unique events to be longer. Incidentally (and to at least mention biology), episodic memory has been shown to be neurally distinct from semantic memory in the brain. In particular, a double dissociation has been shown for amnesics who suffer impairment of semantic or episodic memory, but not both. My apologies for the lack of citations, but a good bit about autobiographical memories can be found in: Eysenck, M.W., & Keane, M.T. (2010). Cognitive Psychology: A Student's Handbook. You may also be interested in some responses or references to a related question on the Cognitive Science StackExchange: Perception of time as a function of age | {
"source": [
"https://biology.stackexchange.com/questions/623",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/340/"
]
} |
653 | This question got me thinking about amino acids and the ambiguity in the genetic code. With 4 nucleotides in RNA and 3 per codon, there are 64 codons. However, these 64 codons only code for 20 amino acids (or 22 if you include selenocysteine and pyrrolysine ), so many of the amino acids are coded for by multiple codons. Is there any hypothesis as to why there are only 22 amino acids and not 64? Is it possible that there were 64 (or at least more than 22) at an earlier time? | Brian Hayes wrote a very interesting article from a mathematical point of view: http://www.americanscientist.org/issues/pub/the-invention-of-the-genetic-code - see especially the "Reality intrudes" section. Basically, people had created fancy mathematical reasons why it had to be exactly 20. Nature, being nature, does not follow the reasoning, but has its own ideas. In other words, there was nothing especially special about 20. In fact, there seems to be a slow grafting-in of a 21st amino acid, selenocysteine, using the codon UGA. Pyrrolysine is likewise considered the 22nd. The last section suggests that the code was originally a doublet code, so it coded for fewer than 16 amino acids. This can partly explain why the third base in each codon is not as discriminating. (A short enumeration of the 64 codons and their degeneracy follows this entry.) So perhaps in the year 2002012 someone will be asking on biology.stackexchange why there are only 40 amino acids. | {
"source": [
"https://biology.stackexchange.com/questions/653",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/43/"
]
} |
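To make the counting in the answer above concrete, here is a small sketch using the standard genetic code (NCBI translation table 1; the table string is the conventional TCAG-ordered encoding):

```python
from itertools import product
from collections import Counter

# Standard genetic code, codons ordered with bases T, C, A, G
# (third base varying fastest); '*' marks the three stop codons.
AA_TABLE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASES = "TCAG"

codons = ["".join(c) for c in product(BASES, repeat=3)]   # 4**3 = 64 codons
code = dict(zip(codons, AA_TABLE))
degeneracy = Counter(code.values())

print(len(codons), "codons,", len(degeneracy) - 1, "amino acids")  # 64 codons, 20 amino acids
print(degeneracy["L"], degeneracy["M"], degeneracy["*"])  # Leu: 6 codons, Met: 1, stops: 3
```

The uneven degeneracy (six codons for leucine, one for methionine) and the weak discrimination of the third position are exactly what the doublet-code hypothesis mentioned above tries to explain.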
714 | All humans can be grouped into ABO and Rh+/- blood groups (at a minimum). Is there any advantage at all to one group or the other? This article hints that there are some pathogens that display a preference for a blood type (for example, Schistosomiasis apparently being more common in people with blood group A, although it could be that more people have type A in the areas that the parasite inhabits). Is there any literature out there to support or refute this claim or provide similar examples? Beyond ABO-Rh, is there any advantage or disadvantage (excluding the obvious difficulties in finding a donor after accident/trauma) in the 30 other blood type suffixes recognised by the International Society of Blood Transfusions ( ISBT )? I'd imagine not (or at least very minimal), but it would be interesting to find out if anyone knows more. | I've been doing a little more digging myself and have found a couple of other advantages: Risk of venous thromboembolism ( deep vein thrombosis/pulmonary embolism (1) ). Blood group O individuals are at lower risk of the above conditions due to reduced levels of von Willebrand factor (2) and factor VIII clotting factors. Cholera infection susceptibility & severity. Individuals with blood group O are less susceptible to some strains of cholera ( O1 ) but are more likely to suffer severe effects from the disease if infected (3) . E. coli infection susceptibility & severity. A study in Scotland indicated that those with the O blood group showed higher than expected infection rates with E. coli O157 and significantly higher fatality rates (78.5% of fatalities had blood group O). (4) Peptic ulcers caused by Helicobacter pylori , which can also lead to gastric cancer. Group O are again more susceptible to strains of H. pylori (5) . Whether or not blood group antigens are displayed on other body cells has been linked to increased or decreased susceptibility to many diseases, notably norovirus and HIV. This is fully explained in the article I was summarising above - " The relationship between blood group and disease " - in addition to extended descriptions of the other two answers. | {
"source": [
"https://biology.stackexchange.com/questions/714",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/69/"
]
} |
762 | Humans have, in a relatively short amount of time, evolved from apes on the African plains to upright brainiacs with nukes, computers, and space travel. Meanwhile, a lion is still a lion and a beetle is still a beetle. Is there a specific reason for this? Do we have a particular part of the brain that no other animal has? | We have the Human Accelerated Regions (HARs), which are among the most rapidly evolving regions of the human genome. While heavily conserved across other vertebrates, these regions have changed rapidly in humans and are linked with neurodevelopment. Thanks to @Nico, the following paper compares the human genome with that of the chimp and identifies genetic regions that show accelerated evolution. The most extreme, HAR1, is expressed mainly in Cajal-Retzius neurons, which hints at its important role in human cortical development. Pollard KS, Salama SR, Lambert N, Lambot M-A, Coppens S, Pedersen JS, Katzman S, King B, Onodera C, Siepel A, et al. 2006. An RNA gene expressed during cortical development evolved rapidly in humans. Nature 443: 167–72. | {
"source": [
"https://biology.stackexchange.com/questions/762",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/393/"
]
} |
775 | Background I am a computer programmer who is fascinated by artificial intelligence and artificial neural networks, and I am becoming more curious about how biological neural networks work. Context & what I think I understand In digesting all I have been reading, I am beginning to understand that there are layers to neural networks. A front-line layer of neurons may receive, for example, a visual stimulus such as a bright light. That stimulus is taken in by the front-line neurons, each of which produce a weighted electro-chemical response that results in a binary decision to pass an electrical charge through its axon to the dendrites of the tens of thousands of neurons to which it is connected. This process repeats through layers channeling the electrical signals and focusing them based on their permutations until ultimately a charge is passed to a focused response mechanism such as the nerves that control shrinking of the pupils. Hopefully I got that correct. Preamble to the question ;) Assuming that I am not completely off-base with my basic understanding of how a biological neural network operates, I am beginning to grasp how an input (stimulus) results in an output (response) such as motor movement or reflexes. That would just seem to be basic electricity of open and closed circuits. HOWEVER, what confuzzles me still is how a memory is stored. The analogy to an electrical circuit breaks down here, for in a circuit I can't really stop the flow of electrons unless I dam up said electrons in a capacitor. If I do that, once the electrons are released (accessed), they are gone forever whereas a memory endures. So. . . How the heck are memories constructed and stored in the human brain? Are they stored in a specific region? If so, where? | Unfortunately, we are all still "confuzzled" by how memory works. We are far from a complete understanding of how memory is stored and recalled. Nonetheless, we do know a little , so read on. Your understanding of basic neural function is almost correct. First, an individual neuron will signal through its single axon onto the dendrites of many downstream neurons, not the other way around. Second, I am not sure what you mean by "focusing them based on their permutations," but it is true that neural information can undergo many transformations as it propagates through a circuit. Third, if there is a behavioral outcome of the network activity like a muscle response or hormone release, those effects are mediated by nerves communicating with muscles and hormone-releasing cells. I'm not sure if that is what you meant by "focused response mechanism." Finally, as you have discovered, the analogy of neural circuits to electrical circuits is relatively poor at any reasonably sophisticated level of analysis. My opinion is that biological systems are often poorly served by being framed as engineering problems. Others will disagree with that, but I think understanding a biological system on its own terms makes many things much clearer. The key thing missing from the electrical circuit analogy turns out to be one of the keys to understanding information storage in neural circuits-- the synapse , the site where one neuron communicates with another. The synapse transforms the electrical signal from the upstream neuron into a chemical signal. That chemical signal is then converted back into an electrical signal by the downstream neuron. 
The strength of the synapse can be adjusted in a long-term way by changing the level of protein expression; this is called long-term potentiation (LTP) or long-term depression (LTD). LTP and LTD can therefore regulate the ease with which information flows along a particular path. As a basic example (that should not be taken too seriously), imagine a set of neurons that represents "New York City" and another set of neurons that represents "My Friend John." If you then happen to be in New York City with your friend John, both of those groups of neurons will be active, and synapses between these two networks will be strengthened because they are co-active (see Hebbian plasticity; a toy version of this rule is sketched after this entry). In this way, the idea of NYC and the idea of John are now bound together. Where are these neurons that represent NYC and John? We are still not totally clear on this, and the question is complicated because there are many different types of memory. For instance, your memory of how to ride a bike (procedural memory) is not treated the same as your memory of what you ate for breakfast (episodic memory). However, a best current answer is that the hippocampus and its associated regions are important for the initial encoding of memories, and the neocortex is where longer-term memories are stored. There is substantial communication between these two areas so that memories can be effectively adjusted over time. Update: In response to Jule's comment asking for some resources, I realize it is important to make the point that the Hebbian model I outlined hasn't been definitively shown. As with all aspects of neuroscience, there is a lot of good work at the molecular and cellular level and good work at the behavioral level, but the causal link between the two is not so clear. Nonetheless, Hebb's idea is still the mainstream working model for how memory works. Some reading might include: 1) Neves, G., Cooke, S.F., Bliss, T.V.P., 2008. Synaptic plasticity, memory and the hippocampus: a neural network approach to causality. Nature Reviews Neuroscience 9, 65–75. A review on hippocampal memory and its relation to LTP/LTD and Hebbian theory. Notes the general difficulty of proving the theory and some ways for experiments to move forward. 2) Lisman, J., Grace, A.A., Duzel, E., 2011. A neoHebbian framework for episodic memory; role of dopamine-dependent late LTP. Trends in Neurosciences 34, 536–547. A review proposing an elaboration of the Hebbian model that includes neuromodulatory influence on plasticity and memory processes. 3) Johansen, J.P., Cain, C.K., Ostroff, L.E., LeDoux, J.E., 2011. Molecular Mechanisms of Fear Learning and Memory. Cell 147, 509–524. An excellent review on fear learning and memory with an extensive section on Hebbian theory. 4) Liu, X., Ramirez, S., Pang, P.T., Puryear, C.B., Govindarajan, A., Deisseroth, K., Tonegawa, S., 2012. Optogenetic stimulation of a hippocampal engram activates fear memory recall. Nature. A research article which is perhaps a realization of some of the suggestions in the Neves et al. review. They use light to reactivate a fear memory. This suggests that activation of the hippocampal network that was active during memory formation is sufficient to elicit the memory. | {
"source": [
"https://biology.stackexchange.com/questions/775",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/393/"
]
} |
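The "NYC + John" example in the answer above is essentially Hebb's rule, which is compact enough to sketch in a few lines. Everything below is an illustrative toy, not a model of real LTP: the ensemble sizes, firing probabilities and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n_nyc, n_john = 8, 8              # hypothetical neuron ensembles
W = np.zeros((n_nyc, n_john))     # synaptic weights between the ensembles
eta = 0.05                        # learning rate (arbitrary)

for visit in range(30):           # repeated co-activation, e.g. trips to NYC with John
    nyc_active = (rng.random(n_nyc) < 0.8).astype(float)
    john_active = (rng.random(n_john) < 0.8).astype(float)
    # Hebb's rule: strengthen a synapse when pre- and post-neuron fire together.
    W += eta * np.outer(nyc_active, john_active)

print(W.round(2))  # co-active pairs accumulate weight, binding the two ideas
```

A real synapse also needs depression (LTD) and weight bounds to stay stable; the sketch omits both, which is why pure Hebbian growth would run away if left unchecked.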
832 | A student asked me this the other day and I thought that I would ask it again here. If one organism is said to be "more evolved" than another, what exactly does this mean? | "More evolved" is actually meaningless in all contexts. See terdon's answer for a good explanation. In the strictest sense, an organism can be said to be more divergent than another when comparing both to an outgroup, such that there is an inferred most recent common ancestor against which to make the comparison. In this case, one organism is more divergent if there are more changes to it than to the other, relative to that reference point (a toy version of this count is given after this entry). However, when speaking, many people get lazy and use "more evolved" as shorthand, wishing it to mean something like "more divergent". Even "more divergent" is meaningless in the following contexts: when there is no outgroup understood; when describing increasing complexity (obligate parasites have lost complexity and have had more evolutionary changes than their non-parasitic relatives); and when the outgroup is poorly chosen (mammal vs reptile comparisons should not, in general, use prokaryotes as the outgroup). Edited 2013/12/06 to reflect the precision in the answer by terdon. | {
"source": [
"https://biology.stackexchange.com/questions/832",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/81/"
]
} |
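The "more divergent relative to an outgroup" definition in the answer above can be made concrete with a toy comparison. The aligned sequences below are invented for illustration; real analyses use proper substitution models rather than raw mismatch counts.

```python
def changes_vs_outgroup(seq: str, outgroup: str) -> int:
    """Count aligned sites where seq differs from the outgroup state,
    used here as a crude proxy for changes since the common ancestor."""
    assert len(seq) == len(outgroup), "sequences must be aligned"
    return sum(a != b for a, b in zip(seq, outgroup))

outgroup = "ACGTACGTAC"   # hypothetical reference lineage
taxon_a  = "ACGTACGTTT"   # 2 differences
taxon_b  = "TCGAACGTTT"   # 4 differences

print(changes_vs_outgroup(taxon_a, outgroup))  # 2
print(changes_vs_outgroup(taxon_b, outgroup))  # 4 -> taxon B is more divergent
```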
926 | Is it possible for a virus to live symbiotically with its host?
Is the human body plagued with viral infections that do negligible harm, or even serve a beneficial role? | It is possible for viruses to live in mutualistic relationships with their hosts; these associations are often overlooked because of the devastating effects that many viruses can have. To give an example in humans, when HIV-1-infected patients are also infected with hepatitis G virus, progression to AIDS is slowed significantly (Heringlake et al., 1998; Tillmann et al., 2001). Hepatitis A infection can also suppress hepatitis C infection (Deterding et al., 2006). There are many other notable examples within plants, fungi, insects, and other animals, reviewed by Shen (2009) and Roossinck (2011) in two excellent papers. The table below summarises some beneficial viruses across all organisms and is taken from Roossinck (2011). References Deterding, K. et al., 2006. Hepatitis A virus infection suppresses hepatitis C virus replication and may lead to clearance of HCV. Journal of Hepatology, 45(6), pp.770-778. Heringlake, S. et al., 1998. GB Virus C/Hepatitis G Virus Infection: A Favorable Prognostic Factor in Human Immunodeficiency Virus-Infected Patients? Journal of Infectious Diseases, 177(6), pp.1723-1726. Roossinck, M.J., 2011. The good viruses: viral mutualistic symbioses. Nature Reviews Microbiology, 9(2), pp.99–108. Shen, H.-H., 2009. The challenge of discovering beneficial viruses. Journal of Medical Microbiology, 58(4), pp.531-532. Tillmann, H.L. et al., 2001. Infection with GB Virus C and Reduced Mortality among HIV-Infected Patients. New England Journal of Medicine, 345(10), pp.715-724. | {
"source": [
"https://biology.stackexchange.com/questions/926",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/238/"
]
} |
935 | Polyploidy is the multiplication of the number of chromosome sets from 2n to 3n (triploidy), 4n (tetraploidy) and so on. It is quite common in plants, for example in many crops such as wheat or Brassica forms. It seems to be rarer in animals, but it is still present in some amphibian species such as Xenopus. As far as I know, in mammals polyploidy is lethal (I don't mean tissue-limited polyploidy). I understand that triploidy is harmful due to the stronger influence of maternal or paternal epigenetic traits, which causes abnormal development of the placenta, but why are there no tetraploid mammals? | Great question, and one about which there has historically been a lot of speculation, and about which there is currently a lot of misinformation. I will first address the two answers given by other users, which are both incorrect but have historically been suggested by scientists. Then I will try to explain the current understanding (which is not simple or complete). My answer is derived directly from the literature, and in particular from Mable (2004), which in turn is part of the 2004 special issue of the Biological Journal of the Linnean Society tackling the subject. The 'sex' answer... In 1925 HJ Muller addressed this question in a famous paper, "Why polyploidy is rarer in animals than in plants" (Muller, 1925). Muller briefly described the phenomenon that polyploidy was frequently observed in plants, but rarely in animals. The explanation, he said, was simple (and approximates that described in Matthew Piziak's answer): animals usually have two sexes which are differentiated by means of a process involving the diploid mechanism of segregation and combination, whereas plants (at least the higher plants) are usually hermaphroditic. Muller then elaborated with three explanations of the mechanism: (1) He assumed that triploidy was usually the intermediate step in chromosome duplication. This would cause problems, because if most animals' sex was determined by the ratios of chromosomes (as in Drosophila), triploidy would lead to sterility. (2) In the rare cases when a tetraploid was accidentally created, it would have to breed with diploids, and this would result in a (presumably sterile) triploid (this gamete arithmetic is sketched after this entry). (3) If, by chance, two tetraploids were to arise and mate, they would be at a disadvantage because, he said, they would be randomly allocated sex chromosomes and this would lead to a higher proportion of non-viable offspring, and thus the polyploid line would be outcompeted by the diploid. Unfortunately, whilst the first two points are valid facts about polyploids, the third point is incorrect. A major flaw with Muller's explanation is that it only applies to animals with chromosomal ratio-based sex determination, which we have since discovered is actually relatively few animals. In 1925 there was comparatively little systematic study of life, so we really didn't know what proportion of plant or animal taxa showed polyploidy. Muller's answer doesn't explain why most animals, e.g. those with Y-dominant sex determination, exhibit relatively little polyploidy. Another line of evidence disproving Muller's answer is that polyploidy is in fact very common among dioecious plants (those with separate male and female plants; e.g. Westergaard, 1958), while Muller's theory predicts that prevalence in this group should be as low as in animals. The 'complexity' answer... Another answer with some historical clout is the one given by Daniel Standage in his answer, and it has been given by various scientists over the years (e.g. Stebbins, 1950).
This answer states that animals are more complex than plants, so complex that their molecular machinery is much more finely balanced and is disturbed by having multiple genome copies. This answer has been soundly rejected (e.g. by Orr, 1990) on the basis of two key facts. Firstly, whilst polyploidy is unusual in animals, it does occur. Various animals with hermaphroditic or parthenogenetic modes of reproduction frequently show polyploidy. There are also examples of mammalian polyploidy (e.g. Gallardo et al., 2004). In addition, polyploidy can be artificially induced in a wide range of animal species with no deleterious effects (in fact it often causes something akin to hybrid vigour; Jackson, 1976). It's also worth noting here that since the 1960s Susumu Ohno (e.g. Ohno et al. 1968; Ohno 1970; Ohno 1999) has been proposing that vertebrate evolution involved multiple whole-genome duplication events (in addition to smaller duplications). There is now significant evidence to support this idea, reviewed in Furlong & Holland (2004). If true, it further highlights that animals being more complex (itself a large, and in my view false, assumption) does not preclude polyploidy. The modern synthesis... And so to the present day. As reviewed in Mable (2004), it is now thought that: Polyploidy is an important evolutionary mechanism which was, and is, probably responsible for a great deal of biological diversity. Polyploidy arises easily in both animals and plants, but in certain circumstances reproductive strategies, rather than any reduction in fitness resulting from the genome duplication, might prevent it from propagating. Polyploidy may be more prevalent in animals than previously expected, and the imbalance in the data arises from the fact that cytogenetics (i.e. chromosome counting) of large populations of wild specimens is a very common practice in botany, and very uncommon in zoology. In addition, there are now several newly suspected factors involved in ploidy which are currently being investigated: Polyploidy is more common in species from high latitudes (temperate climates) and high altitudes (Soltis & Soltis, 1999). Polyploidy frequently occurs by the production of unreduced gametes (through meiotic non-disjunction), and it has been shown that unreduced gametes are produced with higher frequency in response to environmental fluctuations. This predicts that polyploidy should be more likely to occur in the first place in fluctuating environments (which are more common at higher latitudes and altitudes). Triploid individuals, the most likely initial result of a genome duplication event, often die in both animals and plants before reaching sexual maturity, or have low fertility. However, if triploid individuals do reproduce, there is a chance of even-ploid (fertile) individuals resulting. This probability is increased if the species produces large numbers of both male and female gametes, or has some mechanism of bypassing the triploid individual stage. This may largely explain why many species with 'alternative' sexual modes (apomictic, automictic, unisexual, or gynogenetic) show polyploidy, as they can keep replicating tetraploids, thus increasing the chance that eventually a sexual encounter with another tetraploid will create a new polyploid line. In this way, non-sexual species may be a crucial evolutionary intermediate in generating sexual polyploid species.
Species with external fertilisation are more likely to establish polyploid lines: a greater proportion of gametes are involved in fertilisation events, and therefore two tetraploid gametes are more likely to meet. Finally, polyploidy is more likely to occur in species with assortative mating. That is, when a tetraploid individual is formed, if the genome duplication somehow affects the individual so as to make it more likely to be fertilised by another tetraploid, then it is more likely that a polyploid line will be established. Thus it may be partly down to evolutionary chance as to how easily a species' reproductive traits are affected. For example, in plants, tetraploids often have larger flowers or other organs, and thus are preferentially attractive to pollinators. In frogs, genome duplication leads to changes in the vocal apparatus which can lead to immediate reproductive isolation of polyploids. References Furlong, R.F. & Holland, P.W.H. (2004) Polyploidy in vertebrate ancestry: Ohno and beyond. Biological Journal of the Linnean Society. 82 (4), 425–430. Gallardo, M.H., Kausel, G., Jiménez, A., Bacquet, C., González, C., Figueroa, J., Köhler, N. & Ojeda, R. (2004) Whole-genome duplications in South American desert rodents (Octodontidae). Biological Journal of the Linnean Society. 82 (4), 443–451. Jackson, R.C. (1976) Evolution and Systematic Significance of Polyploidy. Annual Review of Ecology and Systematics. 7, 209–234. Mable, B.K. (2004) 'Why polyploidy is rarer in animals than in plants': myths and mechanisms. Biological Journal of the Linnean Society. 82 (4), 453–466. Muller, H.J. (1925) Why Polyploidy is Rarer in Animals Than in Plants. The American Naturalist. 59 (663), 346–353. Ohno, S. (1970) Evolution by gene duplication. Ohno, S. (1999) Gene duplication and the uniqueness of vertebrate genomes circa 1970–1999. Seminars in Cell & Developmental Biology. 10 (5), 517–522. Ohno, S., Wolf, U. & Atkin, N.B. (1968) Evolution from fish to mammals by gene duplication. Hereditas. 59 (1), 169–187. Orr, H.A. (1990) 'Why Polyploidy is Rarer in Animals Than in Plants' Revisited. The American Naturalist. 136 (6), 759–770. Soltis, D.E. & Soltis, P.S. (1999) Polyploidy: recurrent formation and genome evolution. Trends in Ecology & Evolution. 14 (9), 348–352. Stebbins, G.L. (1950) Variation and evolution in plants.
Westergaard, M. (1958) The Mechanism of Sex Determination in Dioecious Flowering Plants. In: Advances in Genetics. Academic Press. pp. 217–281. (I'll come back and add links to the references later) | {
"source": [
"https://biology.stackexchange.com/questions/935",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/68/"
]
} |
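Muller's triploid-block argument from the entry above reduces to simple gamete arithmetic. The sketch below assumes normal meiosis (each gamete carries half of the parental chromosome sets) and ignores unreduced gametes, which, as the answer notes, are one of nature's routes around the block.

```python
def offspring_ploidy(parent_a: int, parent_b: int) -> int:
    """Offspring ploidy when each parent contributes a normally reduced gamete."""
    for p in (parent_a, parent_b):
        if p % 2:  # odd ploidy cannot segregate evenly at meiosis
            raise ValueError(f"{p}n parent is expected to be (largely) sterile")
    return parent_a // 2 + parent_b // 2

print(offspring_ploidy(2, 2))  # 2 -> ordinary diploid cross
print(offspring_ploidy(4, 2))  # 3 -> the sterile triploid dead end
print(offspring_ploidy(4, 4))  # 4 -> two tetraploids found a fertile polyploid line
```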
937 | Evolution is often mistakenly depicted as linear in popular culture. One main feature of this depiction in popular culture, but even in science popularisation, is that some ocean-dwelling animal sheds its scales and fins and crawls onto land. Of course, this showcases only one ancestral lineage for one specific species (Homo sapiens). My question is: where else did life evolve out of water onto land? Intuitively, this seems like a huge leap to take (adapting to a fundamentally alien environment), but it still must have happened several times (separately at least for plants, insects and chordates, since their respective most recent common ancestors were sea-dwelling). In fact, the more I think of it the more examples I find. | I doubt we know the precise number, or even anywhere near it. But there are several well-supported theorised colonisations which might interest you and help to build up a picture of just how common it was for life to transition to land. We can also use known facts about when different evolutionary lineages diverged, along with knowledge about the earlier colonisations of land, to work some events out for ourselves. I've done it here for broad taxonomic clades at different scales; if interested, you could do the same thing again for lower sub-clades. As you rightly point out, there must have been at least one colonisation event for each lineage present on land which diverged from other land-present lineages before the colonisation of land. Using the evidence and reasoning I give below, at the very least the following independent colonisations occurred: bacteria, cyanobacteria, archaea, protists, fungi, algae, plants, nematodes, arthropods, and vertebrates. Bacterial and archaean colonisation: the first evidence of life on land seems to originate from 2.6 (Watanabe et al., 2000) to 3.1 (Battistuzzi et al., 2004) billion years ago. Since molecular evidence points to bacteria and archaea diverging between 3.2 and 3.8 billion years ago (Feng et al., 1997, a classic paper), and since both bacteria and archaea are found on land (e.g. Taketani & Tsai, 2010), they must have colonised land independently. I would suggest there would have been many different bacterial colonisations, too. One at least is certain: cyanobacteria must have colonised independently from some other forms, since they evolved after the first bacterial colonisation (Tomitani et al., 2006), and are now found on land, e.g. in lichens. Protistan, fungal, algal, plant and animal colonisation: protists are a polyphyletic group of simple eukaryotes, and since fungal divergence from them (Wang et al., 1999, another classic) predates fungal emergence from the ocean (Taylor & Osborn, 1996), the two groups must have emerged separately. Then, since plants and fungi diverged whilst fungi were still in the ocean (Wang et al., 1999), plants must have colonised separately. Actually, it has been explicitly discovered in various ways (e.g. molecular clock methods, Heckman et al., 2001) that plants must have left the ocean separately from fungi, but probably relied upon them to be able to do it (Brundrett, 2002; see the note at the bottom about this paper). Next, simple animals... Arthropods colonised the land independently (Pisani et al., 2004), and since nematodes diverged before arthropods (Wang et al., 1999), they too must have independently found land. Then, lumbering along at the end, came the tetrapods (Long & Gordon, 2004). Note about the Brundrett paper: it has OVER 300 REFERENCES!
That guy must have been hoping for some sort of prize. References Battistuzzi FU, Feijao A, Hedges SB. 2004. A genomic timescale of prokaryote evolution: insights into the origin of methanogenesis, phototrophy, and the colonization of land. BMC Evol Biol 4: 44. Brundrett MC. 2002. Coevolution of roots and mycorrhizas of land plants. New Phytologist 154: 275–304. Feng D-F, Cho G, Doolittle RF. 1997. Determining divergence times with a protein clock: Update and reevaluation. Proceedings of the National Academy of Sciences 94: 13028–13033. Heckman DS, Geiser DM, Eidell BR, Stauffer RL, Kardos NL, Hedges SB. 2001. Molecular Evidence for the Early Colonization of Land by Fungi and Plants. Science 293: 1129–1133. Long JA, Gordon MS. 2004. The Greatest Step in Vertebrate History: A Paleobiological Review of the Fish‐Tetrapod Transition. Physiological and Biochemical Zoology 77: 700–719. Pisani D, Poling LL, Lyons-Weiler M, Hedges SB. 2004. The colonization of land by animals: molecular phylogeny and divergence times among arthropods. BMC Biol 2: 1. Taketani RG, Tsai SM. 2010. The influence of different land uses on the structure of archaeal communities in Amazonian anthrosols based on 16S rRNA and amoA genes. Microb Ecol 59: 734–743. Taylor TN, Osborn JM. 1996. The importance of fungi in shaping the paleoecosystem. Review of Palaeobotany and Palynology 90: 249–262. Tomitani A, Knoll AH, Cavanaugh CM, Ohno T. 2006. The evolutionary diversification of cyanobacteria: molecular-phylogenetic and paleontological perspectives. Proceedings of the National Academy of Sciences 103: 5442–5447. Wang DY, Kumar S, Hedges SB. 1999. Divergence time estimates for the early history of animal phyla and the origin of plants, animals and fungi. Proc Biol Sci 266: 163–171. Watanabe Y, Martini JEJ, Ohmoto H. 2000. Geochemical evidence for terrestrial ecosystems 2.6 billion years ago. Nature 408: 574–578.
"source": [
"https://biology.stackexchange.com/questions/937",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/166/"
]
} |
1,037 | How did the red blood cell in humans get to lose its nucleus (and other organelles)? Does the bone marrow just not put the nucleus in, or is it stripped out at some stage in the construction of the cell? | Red blood cells are initially produced in the bone marrow with a nucleus. They then undergo a process known as enucleation in which their nucleus is removed. Enucleation occurs roughly when the cell has reached maturity. According to one study (Ji et al., 2008), the way this occurs in mice is that a ring of actin filaments surrounds the cell and then contracts. This pinches off a segment of the cell containing the nucleus, which is then swallowed by a macrophage. Enucleation in humans most likely follows a very similar mechanism. The absence of a nucleus is an adaptation of the red blood cell for its role. It allows the red blood cell to contain more hemoglobin and, therefore, carry more oxygen molecules. It also allows the cell to have its distinctive biconcave shape, which aids diffusion. This shape would not be possible if the cell had a nucleus in the way. Because of the advantages it gives, it is easy to see why evolution would cause this to occur. However, since little is known about the genes that control enucleation, it is still not a fully understood process. | {
"source": [
"https://biology.stackexchange.com/questions/1037",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/84/"
]
} |
1,038 | In the antibody-mediated immune response, when the helper T cell gets activated by the costimulus (IL-2 and TNF-α secreted by the APC) which in turn produces IL-2, IL-2 acts in an autocrine manner. I'm just wondering why does IL-2 have to be secreted? Why doesn't it just exert an effect while it's already inside the helper T cell? What's the point of autocrine signalling? I hope the answer isn't going to be, "Well, that's just the way it is..." because paracrine and endocrine make sense and have advantages, but autocrine just seems a bit extra. | {
"source": [
"https://biology.stackexchange.com/questions/1038",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/496/"
]
} |
1,041 | During photosynthesis, a plant translates CO2, water and light into O2. I assume the carbon C is further used for the growing process. I wonder how the plant grows before photosynthesis is possible, i.e. before there are even leaves, in which photosynthesis occurs. To what extent does the plant grow from the seed/the minerals in the ground? How much carbon is relevant for which parts of the plant, and at what stages of the plant's development? I don't know if these are the only relevant life periods of the plant, i.e. whether there are other major biochemistry-of-growth changes besides the difference between before and after the plant has leaves. So are there other relevant aspects to this? If there are leaves present, does the rigid structure of the plant then come only from the CO2 in the air and not from the ground anymore? The answer will probably depend on the plant. So here is another formulation of the question: what are typical characteristics of different plants in this regard? I.e., how do common species of plants manage their C consumption before (and after) the development of leaves? | There are quite a few questions and thoughts in there; I'll try to cover them all. First, to correct your initial word equation: during photosynthesis, a plant translates CO2 and water into O2 and carbon compounds, using energy from light (photons) (the balanced equation is checked in the snippet after this entry). You are correct to assume the C is further used for the growing process; it is used to make sugars, which store energy in their bonds. That energy is then released when required to power other reactions, which is how a plant lives and grows. C is also incorporated into all the organic molecules in the plant. Plants require several things to live: CO2, light, water and minerals. If any of those things is missing for a sustained period, growth will suffer. Most molecules in a plant require some carbon, which comes originally from CO2, and also an assortment of other elements, which come from the mineral nutrients in the soil. So the plant is completely reliant on minerals. Most plants, before a leaf is established or roots develop, grow using energy and nutrients stored in the endosperm and cotyledons of the seed. I whipped up a rough diagram below. Cotyledons are primitive leaves inside the seed. The endosperm is a starchy tissue used only for storage of nutrients and energy. The radicle is the juvenile root. The embryo is the baby plant. When the seed germinates the embryo elongates, the endosperm depletes, the testa ruptures, and the cotyledons emerge from the seed. The cotyledons are green and, like leaves, can photosynthesise, so as soon as they are in the light the plant is able to make carbon compounds. The radicle elongates at the same time and becomes the root, so the plant is very quickly able to obtain fresh nutrients from the soil (or whatever it's growing in). At all stages of a plant's life it is using both energy stored in carbon compounds (from CO2) and nutrients which it took up via its roots. At no point does the plant start to depend solely on the CO2 in the air for its growth. You are right that the way in which plants acquire energy and nutrients prior to leaves and roots being established varies between plants. Above I outlined the strategy most plants use. But there are lots of variations. For example, orchids have very tiny seeds, some barely visible to the naked eye, like specks of dust. They have no endosperm or storage tissue, so they have to rely on a symbiotic mycorrhizal fungus to get carbon and nutrients.
The fungus grows through the coat of the orchid seed, then provides everything the growing plant needs until it has its own leaves. Then the orchid repays the fungus by providing sugars. There are lots of other examples, but we could go on all day! | {
"source": [
"https://biology.stackexchange.com/questions/1041",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/498/"
]
} |
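The corrected word equation in the answer above corresponds to the familiar balanced form 6 CO2 + 6 H2O -> C6H12O6 + 6 O2. A quick atom-count check, purely illustrative, confirms the stoichiometry:

```python
from collections import Counter

def total_atoms(species):
    """Sum atom counts over (coefficient, composition) pairs."""
    atoms = Counter()
    for coeff, composition in species:
        for element, n in composition.items():
            atoms[element] += coeff * n
    return atoms

reactants = [(6, {"C": 1, "O": 2}),            # 6 CO2
             (6, {"H": 2, "O": 1})]            # 6 H2O
products  = [(1, {"C": 6, "H": 12, "O": 6}),   # C6H12O6 (glucose)
             (6, {"O": 2})]                    # 6 O2

assert total_atoms(reactants) == total_atoms(products)
print(total_atoms(reactants))  # Counter({'O': 18, 'H': 12, 'C': 6})
```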
1,090 | From what I understand, bacteria have circular DNA. What advantages does it have over the linear strands of eukaryotes? Do there exist bacteria with more than one ring of DNA? | Vibrio cholerae is known to have two circular chromosomes. Bacterial cell division is a lot simpler and more efficient than eukaryotic cell division, due in part to the nature of their chromosomes. They don't have to undergo mitosis: condensation of chromosomes, segregation, spindle fibre formation, attachment and the like aren't involved in bacterial cell division. Circular DNA also circumvents the Hayflick limit (thus allowing it to be "immortal"), which is the number of times a cell population can divide before it stops, presumably due to the shortening of telomeres, the sequences at the ends of the chromosomes. Since circular DNA lacks telomeres, it does not get shorter with each replication cycle (a toy version of this countdown is sketched after this entry). Circular DNA can also facilitate horizontal gene transfer such as Hfr-mediated conjugation. Remember, conjugation is analogous to a "rolling-circle" type of replication, which is of course only possible on circular pieces of DNA. | {
"source": [
"https://biology.stackexchange.com/questions/1090",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/84/"
]
} |
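The Hayflick-limit point in the answer above can be caricatured in a few lines. The numbers below are placeholders rather than measured values; the only point is that a linear chromosome has a countdown and a circular one does not.

```python
def divisions_before_senescence(telomere_bp: int, loss_per_division_bp: int) -> int:
    """Toy Hayflick counter: divisions until the telomere buffer is used up."""
    divisions = 0
    while telomere_bp >= loss_per_division_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions

# Illustrative figures only; real telomere lengths and loss rates vary widely.
print(divisions_before_senescence(telomere_bp=10_000, loss_per_division_bp=150))  # 66

# A circular chromosome has no ends to erode, so in this toy picture
# no such counter applies and its division count is unbounded.
```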
1,382 | Which animal/plant/anything has the smallest genome? | Since you said plant/animal/anything, I offer the smallest genomes in various categories... (Kb means kilobases, Mb means megabases; 1 Kb = 1,000 base pairs, 1 Mb = 1,000 Kb.) Smallest plant genome: Genlisea margaretae at 63Mb (Greilhuber et al., 2006). Smallest animal genome: Pratylenchus coffeae (nematode worm) at 20Mb (Animal Genome Size DB). Smallest vertebrate genome: Tetraodon nigroviridis (pufferfish) at 385Mb (Jaillon et al., 2004). Smallest eukaryote genome: Encephalitozoon cuniculi (microsporidian) at 2.9Mb (Vivarès & Méténier, 2004). Smallest archaeal genome: Nanoarchaeum equitans (symbiont) at 491Kb (Waters et al., 2003). Smallest bacterial genome: Carsonella ruddii (endosymbiont) at 160Kb (Nakabachi et al., 2006). Smallest genome of anything: Circovirus at 1.8Kb (only 2 proteins!!) (Chen et al., 2003). (These figures are normalised and sorted in the snippet after this entry.) Refs... Chen, C.-L., Chang, P.-C., Lee, M.-S., Shien, J.-H., Ou, S.-J. & Shieh, H.K. (2003) Nucleotide sequences of goose circovirus isolated in Taiwan. Avian Pathology: Journal of the W.V.P.A. 32 (2), 165–171. Greilhuber, J., Borsch, T., Müller, K., Worberg, A., Porembski, S. & Barthlott, W. (2006) Smallest Angiosperm Genomes Found in Lentibulariaceae, with Chromosomes of Bacterial Size. Plant Biology. 8 (6), 770–777. Jaillon, O., Aury, J.-M., Brunet, F., Petit, J.-L., Stange-Thomann, N., Mauceli, E., Bouneau, L., Fischer, C., Ozouf-Costaz, C., Bernot, A., Nicaud, S., Jaffe, D., Fisher, S., Lutfalla, G., et al. (2004) Genome duplication in the teleost fish Tetraodon nigroviridis reveals the early vertebrate proto-karyotype. Nature. 431 (7011), 946–957. Nakabachi, A., Yamashita, A., Toh, H., Ishikawa, H., Dunbar, H.E., Moran, N.A. & Hattori, M. (2006) The 160-Kilobase Genome of the Bacterial Endosymbiont Carsonella. Science. 314 (5797), 267–267. Vivarès, C.P. & Méténier, G. (2004) Opportunistic Infections: Toxoplasma, Sarcocystis, and Microsporidia. In: World Class Parasites. Springer US. pp. 215–242. Waters, E., Hohn, M.J., Ahel, I., Graham, D.E., Adams, M.D., Barnstead, M., Beeson, K.Y., Bibbs, L., Bolanos, R., Keller, M., Kretz, K., Lin, X., Mathur, E., Ni, J., et al. (2003) The genome of Nanoarchaeum equitans: insights into early archaeal evolution and derived parasitism. Proceedings of the National Academy of Sciences of the United States of America. 100 (22), 12984–12988. | {
"source": [
"https://biology.stackexchange.com/questions/1382",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/610/"
]
} |
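The sizes listed in the answer above are easier to compare once normalised to base pairs. The snippet simply transcribes the answer's figures (labelling Nanoarchaeum equitans as the archaeal entry, in line with the Waters et al. reference) and sorts them:

```python
KB, MB = 1_000, 1_000_000   # 1 Kb = 1,000 bp; 1 Mb = 1,000 Kb

genome_sizes_bp = {          # figures as quoted in the answer above
    "Genlisea margaretae (smallest plant)": 63 * MB,
    "Pratylenchus coffeae (smallest animal)": 20 * MB,
    "Tetraodon nigroviridis (smallest vertebrate)": 385 * MB,
    "Encephalitozoon cuniculi (smallest eukaryote)": int(2.9 * MB),
    "Nanoarchaeum equitans (smallest archaeon)": 491 * KB,
    "Carsonella ruddii (smallest bacterium, endosymbiont)": 160 * KB,
    "Circovirus (smallest genome of anything)": int(1.8 * KB),
}

for name, bp in sorted(genome_sizes_bp.items(), key=lambda item: item[1]):
    print(f"{bp:>12,} bp  {name}")
```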
1,446 | I don't know if this question applies only to humans, but why can cones see much greater detail than rods? Is it possible to have a rod that can detect light intensity and color? | The spectral sensitivity of the photopigment expressed in each photoreceptor type is the key to color vision. See the figure below for the sensitivity of the three types of cone cells (S, M, L) and of rod cells (R, dashed line). From this figure, one might say rod cells provide information about the "blue-greenness" of vision. However, despite their spectral sensitivity, it seems that in human vision rod cells do not contribute to color vision, because they are highly sensitive to intensity and are thus mostly saturated (unable to modulate the response of downstream bipolar cells) under normal daylight conditions. Rod cells specialize in night vision (scotopic conditions), which is crucial for survival, and under these conditions the cone cells are pretty much useless. | {
"source": [
"https://biology.stackexchange.com/questions/1446",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/238/"
]
} |
1,448 | CbpA is a DNA-binding protein found in E. coli that binds non-specifically to curved DNA (Cosgriff et al., 2010) when the bacterium is in the stationary phase of growth. The use of "curved DNA" confuses me. Is the term "curved DNA" essentially the same as "circular DNA"? Cosgriff, S. et al. Dimerization and DNA-dependent aggregation of the Escherichia coli nucleoid protein and chaperone CbpA. Mol. Microbiol. 77, 1289–1300 (2010). | {
"source": [
"https://biology.stackexchange.com/questions/1448",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/158/"
]
} |
1,495 | I know that death and cancer don't hurt humans' reproductive success. They're not helping it either. Why do we die? Why is dying common to all humans? What's the point of dying? | Death is not only for humans. All "complicated enough" organisms die (with the notable exception of Hydra, though you may argue over its complexity). It is easier to create a new organism from scratch than to repair damage from both internal factors (free radicals, metabolic by-products, ...) and external ones (physical damage, exposure to toxins, ...). The underlying causes of death can actually be evolutionarily beneficial. For example, the shortening of telomeres offers protection against cancer (at the cellular level), but also bounds lifespan. So there may actually be evolutionary competition (within the same species) between the young and the old: mutations that help the young but harm the old may be favoured over the opposite ones. | {
"source": [
"https://biology.stackexchange.com/questions/1495",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/554/"
]
} |