q_id (string, 6 chars) | title (string, 4-294 chars) | selftext (string, 0-2.48k chars) | category (string, 1 unique value) | subreddit (string, 1 unique value) | answers (dict) | title_urls (sequence, length 1) | selftext_urls (sequence, length 1) |
---|---|---|---|---|---|---|---|
5p3w46 | What are the pros/cons between putting a video game on a cartridge vs a disc? | The release of the Switch has me curious | Technology | explainlikeimfive | {
"a_id": [
"dco6jjc"
],
"text": [
"Optical media has long held an advantage for stationary consoles because it is dirt cheap and holds a lot of data. However, it is not practical for mobile devices (moving parts) and suffers from long loading times. Solid state media, like cartridges, is more expensive to make (although the price has fallen a lot over the past decade), but it benefits from fast access times and can tolerate being used in a moving device (no moving parts). Cartridges are also generally a lot more durable than optical discs."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5p419h | Why do internet data caps exist? | Technology | explainlikeimfive | {
"a_id": [
"dco6x95"
],
"text": [
"In the case of both wired and wireless connections (cable and cell towers), it's a tactic by the internet provider to try and slow the growth of bandwidth use. People using more bandwidth means they need to spend a lot of money upgrading their back end infrastructure to support the connections. That equipment is expensive and hard to upgrade, and the companies don't want to spend that money unless they have to."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5p4j04 | Why do gas stations hide my debit card pin for security, yet leave my credit card zip code visible? | If it's a security measure, why would they not obscure the zip code? This has never, ever made sense to me. | Technology | explainlikeimfive | {
"a_id": [
"dcoaviy"
],
"text": [
"> This has never, ever made sense to me. How about why do they even ask for the zip code if losing your credit card often would involve also losing your driver's license that has your home zip code on it anyway? The reason is that asking for the zip code makes it more difficult for someone to just copy your card number with a skimmer and use it because they need additional information. They don't really care about someone looking over your shoulder and figuring out your zip code because, hey, *phone books exist.*"
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5p540m | How does your phone know where you are tapping the screen through a plastic/glass screen protector? | Technology | explainlikeimfive | {
"a_id": [
"dcog6qr"
],
"text": [
"Phones use something called \"capacitance\" to detect contact. Basically, your phone has a weak electrical current running through the screen. When something conductive gets close to the screen (doesn't actually have to make physical contact), some of the current travels through that instead of the screen. The phone can detect this change in current and, if it matches the kind of change caused by a human finger (which is pretty consistent), it registers that as input. A plastic cover is generally very thin and 'electrically transparent', which basically just means it has similar conductive properties to air. As such, the current will still change in the same way when your finger (or a stylus with similar electrical properties to your finger) gets close to the screen."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
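
The answer above describes touch detection as watching for a characteristic change in capacitance when a finger (or finger-like conductor) approaches the screen. Below is a minimal, hypothetical Python sketch of that idea: the baseline, delta, and tolerance values are made-up numbers chosen for illustration, not real touch-controller firmware.

```python
# Hypothetical illustration of capacitive touch detection: a controller
# compares each electrode's measured capacitance against a calibrated
# baseline and registers a touch when the change looks "finger-sized".

BASELINE_PF = 10.0      # calibrated no-touch capacitance (picofarads), made-up value
FINGER_DELTA_PF = 1.5   # typical change caused by a finger, made-up value
TOLERANCE_PF = 0.5      # how far from "finger-like" the change may stray

def is_touch(measured_pf: float) -> bool:
    """Return True if the measured capacitance change looks like a finger."""
    delta = measured_pf - BASELINE_PF
    return abs(delta - FINGER_DELTA_PF) <= TOLERANCE_PF

# A thin screen protector barely alters the coupling, so the same
# threshold still fires; a thick insulating glove does not.
print(is_touch(11.4))   # True  -> finger-like change (~1.4 pF)
print(is_touch(10.1))   # False -> nothing conductive nearby
```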
5p70z8 | Why can't computers/operating systems detect when they are about to "freeze"? | Technology | explainlikeimfive | {
"a_id": [
"dcow27h",
"dcowxml"
],
"text": [
"You could easily set up something that alerts you if the memory capacity reaches a certain point. However, just halting it early if it *might* freeze would be even more ineffective and would be basically the same thing as just freezing early. Most times when the computer freezes it's because of a memory leak or some kind of operation that is looping in a way that doesn't end. But there's no way for the computer to *know* the looping will never end, since the only way for it to know is to keep looping and just hope. It can't even just say \"If I loop a hundred times, cancel what I was doing\" since some actions *would* need to loop a hundred times, so it's up to the individual program to do this for the operating system. Many programs *do* halt automatically if certain conditions are met, and typically pop out an error message to the user in those situations. But if something goes wrong and no failsafe had been built in, there's no way for it to *know* that's the case. Hence why things freeze and you have to manually force quit them.",
"You've got it a little backwards. A computer operating at 100% CPU usage does not cause it to freeze, so watching for high usage as a warning wouldn't help. In fact your computer *regularly* jumps from 0-100 and back. Try it yourself: open Task Manager, check the CPU graph and start doing stuff. It'll spike then drop back low. The CPU works hard to do what you need it to then mostly idles waiting. Rather, **stuck code** causes the CPU to shoot to 100% and stay there. If a program gets stuck in an infinite loop, that can lock up the CPU and effectively freeze the computer, because nothing else is able to get some CPU time to run. This includes the scheduler and any other utilities that could potentially stop the errant app. If a computer is running at 100% *finishing a task that isn't stuck, just time consuming* then the other system components may have a hard time getting a byte in edgewise, causing it to appear temporarily locked up because none of the graphics-related code gets to run for a little while."
],
"score": [
5,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
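
The first answer above notes that a program can only guard against runaway loops by imposing its own limits ("if I loop a hundred times, cancel"), because the operating system cannot know in advance whether a loop will ever finish. Below is a minimal sketch of that self-imposed safeguard, using an iteration cap and a wall-clock timeout; the specific limits are arbitrary example values.

```python
import time

class StuckError(RuntimeError):
    """Raised when work exceeds its self-imposed budget."""

def run_with_budget(step, max_iterations=100, max_seconds=2.0):
    """Run step() repeatedly until it reports it is done, but bail out if it
    loops too many times or takes too long -- the program itself must pick
    these limits, because the OS cannot guess them."""
    start = time.monotonic()
    for i in range(max_iterations):
        if step():                      # step() returns True when finished
            return i
        if time.monotonic() - start > max_seconds:
            raise StuckError("timed out -- possibly stuck")
    raise StuckError("hit iteration cap -- possibly an infinite loop")

# Example: a task that never finishes triggers the failsafe instead of freezing.
try:
    run_with_budget(lambda: False)
except StuckError as e:
    print("aborted:", e)
```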
5p8f61 | How do calculators work? | Technology | explainlikeimfive | {
"a_id": [
"dcpe7br",
"dcpatyr"
],
"text": [
"Well im pretty drunk, friday and all, so ill just cover addition. you'd have to understand a lot of computer architecture to understand the full... thinggg. we first have to cover XOR, it's pretty easy. XOR is a bit operation thats used for addition and it works like this, a 0 and 0 = 0, 1 and 1 = 0, and 1 and 0 = URL_0 1100 XOR 1110 = 0010 at a closer look: 0 XOR 0 = 0 (our furthest right bit result) 0 XOR 1 = 1 1 XOR 1 = 0 1 XOR 1 = 0 getting us 0010 so if we add 2 and 9 we have 2 = 0010, 9 = 1001, then 0010 XOR 1001 = 1011 = 11(eleven) and the calculator just added those numbers. take 1 + 1 though, we have 1 = 0001, then 0001 XOR 0001 = 0000, which isn't right. so when two 1's are XOR'd a carry flag would be set (or something similar) then that flag would be included in the XOR for the next bit. so heres an attempt at showing what it would look like 0001 XOR 0001 1 XOR 1 = 0 (but the carry flag is then set to 1) 0 XOR 0 = 0 but the carry flag is set so that result is XOR'd with the carry flag so 0 XOR 1(carry) = 1 0 XOR 0 = 0 0 XOR 0 = 0 so our end result is 0010 = 2",
"Binary logic is used to carry out simple arithmetic, such as addition, subtraction, multiplication and division. I cannot delve into the programmatic details. Transcendental functions like sine, cosine, inverse trig functions, integrals and so on are calculated using a series expansion called a Taylor Series. In short, every \"elementary function\" has an equivalent expression in the form of an infinite sum in polynomial form that's called a Taylor series. Using a known Taylor series, a calculator can carry out simple addition and multiplication operations, accurate to a number of decimal places that your calculator truncates at in order to calculate the value of an otherwise complex operation (trig functions, logarithms, etc.). For example, I believe a TI 83 truncates at 12-16 decimal places or so. MATLAB truncates at 16. An infinite number of terms isn't necessary on a calculator because it has finite memory to store decimal places. EDIT: Clarity"
],
"score": [
10,
6
],
"text_urls": [
[
"1.so"
],
[]
]
} | [
"url"
] | [
"url"
] |
|
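
The first answer above walks through binary addition with XOR plus a carry flag, though its worked example is somewhat garbled. Below is a minimal Python sketch of the same idea, a ripple-carry adder built only from bitwise operations: XOR produces the sum bits, AND produces the carries.

```python
def add(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise ops, the way an
    adder circuit does: XOR gives the sum bits, AND gives the bits that
    must carry into the next column."""
    while b:
        carry = (a & b) << 1   # positions where both bits are 1 carry left
        a = a ^ b              # sum without carries
        b = carry              # feed the carries back in
    return a

print(add(2, 9))   # 11
print(add(1, 1))   # 2  -- the case that needs the carry
```

The second answer describes evaluating functions like sine from a truncated Taylor series; a sketch with a handful of terms already agrees with the library value to many decimal places:

```python
import math

def sine(x: float, terms: int = 10) -> float:
    """Truncated Taylor series: sin(x) = x - x^3/3! + x^5/5! - ..."""
    total, sign = 0.0, 1.0
    for n in range(terms):
        total += sign * x ** (2 * n + 1) / math.factorial(2 * n + 1)
        sign = -sign
    return total

print(sine(1.0), math.sin(1.0))   # agree to ~15 decimal places
```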
5pau4l | is it really true that 10 strategic atomic bombs could wipe out the human life on earth? | it's something someone told me when I was younger. But when I've used the nuke map to compare sizes of nukes compared to cities I can't seriously believe that 10 nukes, even really big would be able to take out whole human lifef on earth. How many nukes would really be needed? | Technology | explainlikeimfive | {
"a_id": [
"dcqcxgs",
"dcpqrdb",
"dcpppf5",
"dcpq5w6",
"dcpprc4",
"dcrb2a1"
],
"text": [
"Well, I know this is going to get buried but I'll answer as best I am able. One of the side effects is actually forest fires. I read a paper a while back that if Pakistan launched all their nukes and India launched just an equal number than the resulting forest fires would create enough Ash to create winter for four years. This is equal to the time period of what wiped out the dinosaurs. It's way more than 10, but hopefully helps you to understand some of the side effects of such a catastrophe.",
"Well, for one thing, we've detonated more than ten strategic-class (megaton-range) bombs, and that 58 Mt monster, and we're still around. The concept of a nuclear exchange between two superpowers somehow killing *everyone on the planet* is a product of supposed climate change as a result of mass nuclear detonations - usually a \"nuclear winter\" due to all the dust and ash being thrown up, obscuring sunlight, causing a global cold thaw and wrecking agriculture worldwide. Problem is, the concept is largely propagandistic, and there is no consensus that it would even happen, let alone how many nukes it would take. There are circa 400-500 targets for a *countervalue* strike in the US (i.e. if you're not trying to take out US nukes - counterforce - but try to just kill Americans); roughly the same for Russia or China. Current nuclear arsenals are in the thousands; 95% of the nukes or more would reach their targets, because pretty much the only place that has a decent defense capable of repelling a massed nuclear attack is Moscow. Once you take out the major cities, people in the countryside are likely to die off in a few years. Any of the three nuclear superpowers getting blown off the map would wreck the global economy, but it would hardly lead to the extinction of humanity.",
"What type of nukes were you comparing? I know the ICBM's of today pack a much delivery than what was dropped on Japan.",
"Well, the Soviets detonated a 57 MT bomb and as far as I know, it didn't cause any significant effects outside its blast radius. It would take more than 10. The whole idea of nuclear winter is controversial in and of itself.",
"The thing with nukes, is that the devastation continues long after the initial blast. Those who aren't killed in the blast wave would be subjected to radiation poisoning. I'm still not sure 10 would be enough, but with modern day yields in the megaton, who knows?",
"Hi — I made the NUKEMAP. I'm glad it was useful to you in thinking about this question. One can, I think, categorically say that it would take more than 10 bombs of any size actually ever built to wipe out all of human life. Ten \"Tsar Bomba\" bombs at 100 Mt apiece, while very large, still could only produce only so much destruction, even taking into account the possibilities of nuclear fallout, and nuclear winter. (The latter would be the only possible way — I think 1,000 Mt in only ten locations would probably not generate enough to generate significant climate change, but it might be, depending on how much burning one is estimating you'd have. 1,000 Mt distributed over a much more diffuse area — e.g., in 100 kt intervals — would definitely get you around the area to have significant global consequences, under some of the models. Would it \"wipe out life on Earth\"? I don't know.) Now, if you don't limit yourself to bombs that were actually built, you can make them pretty much as large as you want. Let's just limit ourselves to bombs that were actually _contemplated_ when considering the high-end of the megaton scale. There were plans during the 1950s for bombs as large as 10,000 Megatons. Ten 10,000 Mt bombs might get you into \"wild disruptions of climate\" (each could set fire to [an area the size of Texas or France]( URL_0 )), and impressive amounts of fission product dispersals if they were detonated on the ground (which would not be ideal if you were trying to set fire to large areas). Would that wipe out human life on Earth? Maybe, maybe not — there are a lot of people and we're a pretty tough species to kill out. But it would be pretty terrible; the world would not be the same one as existed before the detonations in question."
],
"score": [
6,
6,
4,
4,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[
"https://twitter.com/wellerstein/status/641973383429795841"
]
]
} | [
"url"
] | [
"url"
] |
5pbcnj | How exactly does Morse Code work and how was it developed? | Technology | explainlikeimfive | {
"a_id": [
"dcpuqtf"
],
"text": [
"It was developed at a time where we could not transmit speech yet. A simple \"current on or off\" is easier to transmit. But how do you convert messages into \"on/off\" patterns? You invent a pattern for every letter. The sender sends them via pressing a key, and the receiver has a speaker making a sound when the key is pressed by the sender. Examples: * \"e\" is a single short signal. * \"t\" is a single long signal * \"a\" is a short signal, followed by a long signal * \"s\" is short-short-short * \"o\" is long-long-long Between each letter you have to leave some break to make clear that a letter is done (otherwise \"a\" looks like \"et\"). \"SOS\" is \"short-short-short----long-long-long-----short-short-short\", for example. The translation between letters and sequences is arbitrary. Samuel Morse used short sequences for frequent letters (like e and t) and longer sequences for less common letters (like Q: long-long-short-long), that makes transmissions faster."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
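
The answer above explains Morse code as an arbitrary letter-to-pattern table, with short codes for frequent letters and pauses separating letters so that "a" is not confused with "et". A minimal sketch of such an encoder, using only the handful of letters given in the answer (the full table covers every letter and digit):

```python
# Partial Morse table from the examples in the answer: "." = short, "-" = long.
MORSE = {
    "E": ".", "T": "-", "A": ".-", "S": "...", "O": "---",
    "Q": "--.-",
}

def encode(text: str) -> str:
    # A space between letters stands in for the pause that keeps
    # ".-" (A) distinct from ". -" (E then T).
    return " ".join(MORSE[ch] for ch in text.upper())

print(encode("sos"))   # ... --- ...
print(encode("at"))    # .- -
```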
5pbdex | Why can't/won't we use SVG's for textures in PC games? | Technology | explainlikeimfive | {
"a_id": [
"dcpvks4"
],
"text": [
"When a graphics card is trying to determine what color a screen pixel should be, it finds out what shader is represented there. To find a base color (barring the extra processing), it can quickly figure out a grid location associated to the relative location on the polygon it hit. The fastest way to figure out the reference color for any texture at that point is to look it up in a table. A bitmap or raster image is just that: a table of colors or values the card needs. An SVG (or any vector-based image format) is a mathematical representation of how to generate that grid of data. It requires processing to determine the desired colour at a specific point, based off of the geometric data used to generate that point. Every time a vector image is displayed, it's rasterized to be shown at the desired resolution. A graphics card could be made to use a vector image, but it would still have to rasterize the data before displaying on your screen. If it had to do it for every pixel, it would push computations very high, dropping your framerate dramatically. To make it faster, the card could remember that computed image as a bitmap and keep it for just the single frame or remember it for the time it's being used. In either case, there is still the overhead of making that rasterized image, x 3 or more for every image type (diffuse, specular, normal, metallic, emission, etc) and for each element on the screen (barrel's texture set, gun's texture set, vehicle's texture set, terrain, etc). That requires a lot of memory and puts computation time up as well (every single SVG has to be converted first, remember). Your 60+ fps game will drop dramatically when this has to happen, such as every time a new unit comes on screen or a new special effect triggers, or you have to have enough memory to store all of that generated data on top of the initial SVG. Game design strives to use as few clock cycles per frame as possible and use a graphics card's memory as efficiently as possible. An SVG requires more clock cycles than a bitmap or more memory than conventional bitmaps, so we just cut out the middle man and generate a bitmap instead. Anecdotally, the \"game loading\" process also optimizes the existing bitmaps into a more efficient format and laid out in the graphics card's memory as tightly as possible and could use SVGs to source that data for that process, but at the expense of longer load times."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
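
The answer above argues that a vector texture would have to be rasterized before the GPU could sample it, so an engine either pays that cost repeatedly or caches the rasterized bitmap and spends the memory. A small, hypothetical Python sketch of that trade-off, caching by (asset, resolution); `rasterize` here is a stand-in for whatever real SVG renderer would do the expensive work.

```python
from functools import lru_cache

def rasterize(svg_path: str, width: int, height: int) -> bytes:
    """Stand-in for an expensive vector -> bitmap conversion.
    A real engine would evaluate the SVG's geometry here."""
    print(f"rasterizing {svg_path} at {width}x{height} (expensive)")
    return bytes(width * height * 4)          # fake RGBA pixel grid

@lru_cache(maxsize=128)
def get_texture(svg_path: str, width: int, height: int) -> bytes:
    """Cache the rasterized result so each (asset, resolution) pair is
    converted once -- trading memory for per-frame compute."""
    return rasterize(svg_path, width, height)

get_texture("barrel.svg", 512, 512)   # pays the conversion cost
get_texture("barrel.svg", 512, 512)   # served from cache
```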
5pblm4 | Why do so many people stress protecting yourself on public wifi but no one says anything about mobile data towers shared by everyone? | Technology | explainlikeimfive | {
"a_id": [
"dcpw2gv",
"dcpx2i6"
],
"text": [
"Your phone is not designed to pick up other phone's traffic normally. While you could hack a device to do this it os not that easy by any means. On the other hand, wi-fi devices listen to all packets transmitted over their wi-fi connection but only process packets sent to them. It is a small matter, in a lot of cases, to put a device in a mode where you look at all packets. In truth, your data is safe nowhere. There are levels of security, however, and we try to not make it easy. As the saying goes, locks are made to keep honest people honest.",
"Its very easy for me to sit down in a Starbucks with my laptop, and set up a network called \"public wifi\". If you jump on it, now I can capture all your traffic. Its not nearly so easy to do the same with with a cell tower."
],
"score": [
6,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5pcx8d | How are we so sure that all the wireless communication is not harmful to us? | Living next to a pretty massive TV-tower I am wondering if and why we know that all this radiation from cellphones, wifi networks, radio stations and tv-networks is not harmful to us. Obviously it would be very long term harm but that makes it even harder to prove, doesn't it? | Technology | explainlikeimfive | {
"a_id": [
"dcq70k8",
"dcq70v3",
"dcq7bip",
"dcq8ese",
"dcq8fsu"
],
"text": [
"There are lots of different of kinds of radiation. There's alpha, beta, gamma, cosmic, all that, but the kind you're talking about is *electromagnetic* radiation - physically speaking, the same stuff as visible light, a particle/wave called a photon, but whose frequency is in another part of the spectrum. Broadly speaking, you can divide all the types of radiation into two groups: ionizing and non-ionizing. Ionizing radiation is any radiation energetic enough to knock the electrons off an atom, thus making it an *ion*. For electromagnetic radiation, gamma rays, x-rays, and the high end of the ultraviolet part of the spectrum are ionizing, and those rays are the dangerous ones. Exposure to enough ionizing radiation can cause cell damage, cancer, death by radiation poisoning, the condition known as \"hot dog fingers\", children born with the head of a golden retriever, all sorts. However, ultraviolet and nastier kinds of electromagnetic radiation are all \"high frequency\" or \"short wavelength\" radiation - whereas the radio signals used by phones, wifi, radio stations, tv, etc, are all in the microwave (yes, like the oven) area, which is *non-ionizing* radiation. While intense microwave radiation *can* cause water molecules to heat up, which is how a microwave oven works, it *doesn't* do the kind of damage ionizing radiation can do, and anyway, the *amounts* and *intensity* of the radiation put out by those devices is relatively tiny - your router puts out nowhere *near* what it takes to warm up a room temperature chimichanga, let alone cook your eyeballs like a hard boiled egg or give you cancer.",
"Because we've done massive amounts of research, studies and scientific testing on it, and we've never found any evidence of any harm.",
"ELi5 version: Wireless technology uses frequencies of light that are harmless to humans, and only output an incredibly small amount of it at that.",
"Three different ways: theory, experiment, and long-term studies. Theory tells us that the kind of radiation used in wireless communication has too little energy to cause cancer or other such problems. We know how much concentrated energy it takes to mutate DNA, for example, and radio waves have less than that. It requires a higher frequency (such as X rays). Experiments tell us that animals exposed to extensive amounts of wireless communication signals are about as healthy as those unexposed. Long-term studies tell us that people who live near sources of radio waves are about as healthy as those who do not.",
"On top of what's been said already just to be clear why there isn't really much worry about it, The light you see with your eyes, is the same as what we use for wireless communication, it's still light, just light where the waves are not the right length for the eye to see, in our everyday lives and as humans, we see these different wavelengths or frequencies (the two are related because the shorter the length of a single wave the more you'd imagine to go past a single point in a given time, i.e higher frequency of waves), anyway, we see these as colors. [Here is an image of the electromagnetic spectrum;]( URL_0 ) Now we know visible light obviously doesn't cause any damage to us. To the right of visible light on the spectrum you see Ultra Violet - UV, you might recognize that this is what's talked about as a cause of skin cancer, the sun produces a massive range of waves along with Ultra Violet, these cause skin cancer because waves of that frequency have enough energy to knock electrons off atoms, so if that happens to be an atom that belongs to you, i.e part of your DNA is damaged, next time a cell uses your DNA in any way, perhaps to replicate, it will replicate this damaged DNA. Now finally, to the left of the visible light in the image, you see infrared, later microwaves, radio waves - these are waves that have less, even less and even way less energy than even normal visible light we see everyday, and so nowhere near enough to break atoms, these are the waves you'll recognise we use for wireless communication. To feel safe you can try looking at TV-Towers, Wi-FI routers etc as giant (or small) light bulbs, always blinking away in a pattern that our computers / radios etc have been designed to understand, using light waves that have less energy than the light coming from your bedroom light"
],
"score": [
20,
4,
4,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[
"http://earthobservatory.nasa.gov/Experiments/ICE/panama/Images/em_spectrum.gif"
]
]
} | [
"url"
] | [
"url"
] |
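
The answers above turn on one quantitative point: a radio photon carries far too little energy to ionize anything, unlike extreme UV or X-rays. A quick back-of-the-envelope check using E = h * f; the constants are standard and the frequencies are representative examples, with atomic ionization typically requiring on the order of several electronvolts or more.

```python
PLANCK = 6.626e-34        # J*s
EV = 1.602e-19            # joules per electronvolt

def photon_energy_ev(freq_hz: float) -> float:
    return PLANCK * freq_hz / EV

for name, f in [("FM radio (100 MHz)", 1e8),
                ("Wi-Fi (2.4 GHz)", 2.4e9),
                ("visible light (~540 THz)", 5.4e14),
                ("extreme UV (~100 nm)", 3.0e15)]:
    print(f"{name:26s} {photon_energy_ev(f):.2e} eV")

# Wi-Fi photons come out around 1e-5 eV, hundreds of thousands of times
# below the several-eV scale needed to knock electrons off atoms.
```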
5pdecr | Why we get shocked when we lick the top of a 9v Battery | Growing up, my mom told me to lick the top of a 9v to check and see if it had any charge left. Whilst that does work, it's a cruel joke to play on a 7 year old lol, but how does it work? | Technology | explainlikeimfive | {
"a_id": [
"dcqak7u",
"dcqdki4"
],
"text": [
"It works because you're completing the circuit, allowing electrons to pass from the positive terminal, through your tongue, to the negative terminal. As far as it being cruel, I've stuck my tongue to a fair share of 9v batts in my time and never felt anything more than a tickle or vibration feeling.",
"/u/MikeHunturtze has it mostly correct. Most of the electrical current is actually passing through your saliva, which is slightly conductive. If your tongue was dry, it would presumably be about as conductive as your skin. Which is not very. Also, electrons don't go from the positive terminal to the negative one...it's the other way around. Electrons are negatively charged, and go from the negative terminal to the positive one. We say that current flows from + to -, but electrons flow from - to +. It is a funky result of calling electrons negative charge carriers. A negative charge flowing backwards results in a positive current. And he either has very few nerve endings in his tongue, or he's never licked a fresh 9V. Try it with a brand new one, and really press it onto the middle of your tongue. It's a lot more than mild, although it does pale in comparison to grabbing mains voltage."
],
"score": [
6,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
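
The follow-up answer above attributes the tingle to current flowing through slightly conductive saliva. A rough Ohm's-law estimate makes the scale concrete; the resistance figures below are illustrative assumptions (a wet tongue is often quoted in the low-kilohm range, dry skin far higher), not measurements.

```python
VOLTS = 9.0

def current_ma(resistance_ohms: float) -> float:
    """I = V / R, reported in milliamps."""
    return VOLTS / resistance_ohms * 1000

print(f"wet tongue (~2 kOhm assumed): {current_ma(2_000):.1f} mA")    # ~4.5 mA: a sharp tingle
print(f"dry skin (~100 kOhm assumed): {current_ma(100_000):.2f} mA")  # ~0.09 mA: imperceptible
```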
5pfupl | What is net neutrality? | Technology | explainlikeimfive | {
"a_id": [
"dcr2jm5",
"dcqvqd9"
],
"text": [
"***All data is equal, and no data on the internet shall be treated differently than other data regardless of content, origin or intended purpose.*** Basically your ISP cannot artificially cut off or slow down your connection to Youtube because they want you to rather use Hulu. They can't promise Facebook that their data gets priority over data from reddit, and they cannot refuse to serve you a connection to google even if they personally own Bing (unless forced by law, but that's a different story), they cannot make you pay for a \"Social pack\" that only allows you access to Instagram, facebook, Twitter and tumblr while locking you out of everything else. all data is equal, no company on the internet is different than the rest. no ISP can control what or how you consume the internet. Net neutrality ensures that all you are paying for is access to the internet, the entirety of the internet. An analog would be \"Electricity neutrality\". your power company sells you power and you have full control over what you do with that power once in your home. they cannot dictate how you use your power, what you use your power for and the rate you consume specific electric devices. your company cannot charge you extra for a \"brighter bulb\" package to ensure your light bulbs have sufficient power at all times.",
"That all information, regardless of what it is, is transmitted over the internet through the same servers and at the same speed. Basically, \"noone owns the internet\", so whether you're looking up a scientific research paper, or just your daily wank material, the servers don't care. It's all processed the same way. It's just information."
],
"score": [
13,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5pgfaf | What is the difference between http and https? | Technology | explainlikeimfive | {
"a_id": [
"dcr0cer",
"dcr0nkx"
],
"text": [
"The S in httpS stands for \"secure\". What this means is basically two things - encryption and authentication. Encryption simply means that the data send between your computer and the server is encrypted. Anyone intercepting this communication should be unable to decrypt it. Authentication means that the server sends a digitally signed certificate which proves the server is indeed the entity that it claim it is. This way, for example, you can know for sure that the banking website that you just entered your credentials in is indeed your bank, and not a phishing website that wants to steal your password.",
"https is a *secure* version of http. It's probably easier to explain the weaknesses of http, rather than explaining what https does to protect you. With http, any of the following may happen: - someone may eavesdrop on everything you send and receive - someone with access to one of the routers your data travels over may redirect your requests to a different site (we can see you're requesting http:// URL_0 . Let's redirect that so the site you actually see is my shady Chinese clone, which looks exactly like Paypal, and even lets you carry out transactions. But it also steals your password so I can use that to log in to your real Paypal account at my leisure. - someone may modify your request, or the response you get (injecting malware into the response, or perhaps just letting your ISP insert additional ads on the page) https prevents all this. It offers authentication, so you can be sure that the site you're visiting is the real deal. That if you go to https:// URL_0 , you will *get* URL_0 . If you get redirected to an impostor, your browser will be able to see it, and drop the connection and show you an error instead. And it offers encryption, so an eavesdropper can't see what you send, what you receive, or even who you're sending *to*. (They can still see which IP address your request is going to, but they can't see the corresponding hostname, and they can't see which page on the site you're requesting)"
],
"score": [
6,
3
],
"text_urls": [
[],
[
"paypal.com",
"http://paypal.com",
"https://paypal.com"
]
]
} | [
"url"
] | [
"url"
] |
|
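
Both answers above come down to two properties of HTTPS: the connection is encrypted, and the server's certificate is checked against a trusted authority before any data is exchanged. A minimal sketch of the verification half using Python's standard library; paypal.com is simply the example hostname the second answer already uses.

```python
import socket
import ssl

hostname = "paypal.com"   # example hostname from the answer above

# create_default_context() loads the system's trusted CA certificates and
# turns on certificate and hostname verification.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    # The TLS handshake happens here: the server must present a certificate
    # for this hostname signed by a CA we trust, otherwise an
    # ssl.SSLCertVerificationError is raised and no application data is sent.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("negotiated", tls.version())
        print("server subject:", tls.getpeercert()["subject"])
```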
5pgfpl | Why is white a brighter color? | This question came to mind because sometimes at night I will put a white page on my second monitor to create a sort of 'lamp.' I understand that white is a lighter color than black or blue, but why is it brighter, especially when on a computer/phone/tv screen? | Technology | explainlikeimfive | {
"a_id": [
"dcr0def"
],
"text": [
"White is a combination of all the colors of the visible spectrum. In screens, all colors are mixed using red, green and blue subpixels. So to get white, you have to turn on all three at the same intensity. Thus you get three times as much light as you would for a solid blue color."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
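
The answer above says white is brighter because all three subpixels are driven at once, so roughly three times the light of a single primary reaches your eye. Perceived brightness is not split evenly between the primaries, though; the standard relative-luminance weights make that concrete:

```python
def relative_luminance(r: float, g: float, b: float) -> float:
    """Rec. 709 / sRGB luminance weights for linear RGB values in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(relative_luminance(1, 1, 1))   # 1.00  white: all subpixels on
print(relative_luminance(0, 0, 1))   # 0.07  pure blue looks far dimmer
print(relative_luminance(0, 0, 0))   # 0.00  black: no subpixels emitting
```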
5pgkzc | Why does a kettle boil water so much faster than a pan on the hob? | Technology | explainlikeimfive | {
"a_id": [
"dcr1ew4"
],
"text": [
"A kettle is designed to pump as close to 100% of the heat energy as possible into the water. The heating element is submerged, often surrounded on all sides by water. Whereas on a stove, the gas is heating the bottom of a pan, but a lot of the heat flows up, around the pan and away, as opposed to going towards heating the water."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
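
The answer above is really about how much of the generated heat actually ends up in the water. A quick energy calculation shows why that efficiency dominates; the wattages and efficiency figures below are typical illustrative assumptions, not measurements of any particular kettle or hob.

```python
def time_to_boil(liters: float, power_watts: float, efficiency: float,
                 start_c: float = 20.0, end_c: float = 100.0) -> float:
    """Seconds to heat water (1 litre ~ 1 kg): energy = mass * 4186 J/(kg*K)
    * temperature rise, divided by the power that actually reaches the water."""
    energy_j = liters * 4186 * (end_c - start_c)
    return energy_j / (power_watts * efficiency)

# 1 litre from 20 C to 100 C:
print(f"electric kettle (2.2 kW, ~85% into the water): {time_to_boil(1, 2200, 0.85):.0f} s")
print(f"pan on a gas hob (2.0 kW, ~35% into the water): {time_to_boil(1, 2000, 0.35):.0f} s")
```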
5phy6d | Technicolor | How does it work and how does it affect the visuals of a product when compared to other types of effects? | Technology | explainlikeimfive | {
"a_id": [
"dcrgb7f",
"dcrhf2o",
"dcrba6i"
],
"text": [
"Technicolor is a brand name, so it refers so the company, its photo lab processing services, and several different processing technologies over the years, some of which are obsolete. The most significant today is their positive print process. Color film involves three images, one each for red, green, and blue. To work for projection, the complementary colors, cyan, magenta, and yellow dyes are used. The main difference between Technicolor and other processes is how the dyes are put into the film during developing. Most processing uses chemicals called dye-couplers, which rely on chemical reactions to form the dye. Because this depends on specific chemical reactions, it limits the nature of the dye and the stability of the dye over time. Technicolor uses a dye-transfer process, where a substance such as gelatin can absorb the dye. This is more like using food coloring to color unflavored gelatin, whipped cream, etc. Because it's just a mixture without requiring a specific chemical reaction, there's much more flexibility in the choice of dyes. The result is that Technicolor prints are longer lasting than other processes, and thus still used for archival storage (where digital isn't wanted). In the 30s through 50s, Technicolor also referred to the film and camera system for making the negative. This used a combination of mirrors, prisms, and filters to expose three separate filmstrips. It was the best possible when introduced, but was very expensive, requiring special cameras, very bright lighting, and company specialists on the set to advise the cinematographer. This is one reason we still saw B & W movies being made in the 50s and 60s. Over time, systems involving just one filmstrip evolved, and they were much cheaper, so Technicolor process cameras are no longer used. TL;DR: Technicolor allows for the use of much more stable dyes, creating longer lasting prints.",
"I am a professional cinematographer and can answer this. Technicolor today is a digital color lab. When I shoot a movie digitally (almost all movies now a days) I end up with thousands of video clips that need to be organized and color corrected for the editor. Technicolor will do that for you. Many other companies will do that as well but Technicolor is considered the best and costs the most so they do the dailies and color work for big budget movies mostly. Once the movie is edited they will do another color correction pass to make sure all the shots match. This is very important because scenes don't always match color wise from shot to shot for a variety of reasons. Also some lighting problems (maybe the light on the actor is too dark or too bright) can be fixed with a good color pass. Since technicolor is big budget they tend to hire to best people to do that job.",
"Technicolor was a color film technology. Several different companies began experimenting with color processes and Technicolor became the most widely known because it produced highly saturated colors. Nowadays the Technicolor company acts as a post production house, processing digital raw files."
],
"score": [
6,
4,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
5pk6t6 | What is a PGP fingerprint and why are reporters putting it in their twitter bio? | Technology | explainlikeimfive | {
"a_id": [
"dcrxwto",
"dcrr3jd",
"dcrrc5r"
],
"text": [
"PGP is a form of \"public key\" cryptography, which lets people talk to the reporter without anyone listening in. Suppose I'm a government worker who wants to talk to a reporter about something horrible our government is doing. I have two problems: 1) How can I send my info to the reporter without my bosses finding out about it? 2) Once the reporter writes back, how can I be sure the person I'm talking to is really the reporter, rather than someone pretending to be him/her? PGP can solve both of these problems. It uses a special code that has two passwords: one secret, which the reporter knows, and one public, which is public knowledge (and is related to the \"fingerprint\"). A message encoded with the public password can only be read by someone with the secret password, and vice versa. So I email the reporter, and encode my email using his public password. I know only he can read it. He writes back, encoding his email with his *secret* password. I decode it with his public password, which means only he could have written it, and nobody could have messed with it along the way.",
"You can use it to encrypt a message and send it to the reporter. They have the key that will decrypt it. It's a way to send them a secure message only they can read.",
"PGP is a way of having a secure way of talking to another while not seeing each other. Imagine you want to write to your aunt but want to be sure only your aunt can read the letter. So you use a special glue (public pgp key of your aunt) to close the letter, knowing only your aunt has the right chemical to undo the glue. (This is her private pgp key). Now every person (read: email-adress) has their own pair of PGP keys. The PGP fingerprint contains the information about the key (the glue used to conceal the letter) and the owner (email-address). It is used to verify the identiy of a person. Journalists publish their pgp keys so everybody can write them super secret emails and know that they use the right clue to seal the letters. Edit: typos"
],
"score": [
7,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
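
The answers above describe two things: encrypting to a recipient with their public key so only the matching private key can decrypt, and publishing a fingerprint (a hash of the public key) so sources can check they have the right key. PGP itself is a specific format and toolchain (e.g. GnuPG); the sketch below illustrates the same two ideas with plain RSA from the `cryptography` package, so the "fingerprint" shown is just a SHA-256 hash for illustration, not a real OpenPGP fingerprint.

```python
import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The reporter generates a key pair and publishes only the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# A "fingerprint": a short hash of the public key that is easy to publish
# in a Twitter bio and easy for a source to compare against.
public_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print("fingerprint:", hashlib.sha256(public_bytes).hexdigest()[:32])

# A source encrypts with the public key; only the private key can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"meet me in the parking garage", oaep)
print(private_key.decrypt(ciphertext, oaep))
```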
5plv2h | Why can a coaxial cable carry hundreds of HD TV channels but a PC display cable can barely handle high resolutions? | Technology | explainlikeimfive | {
"a_id": [
"dcs69hc",
"dcs61bd"
],
"text": [
"Your DisplayPort cable actually has more total bandwidth, as much as 17 GHz. The difference is that this data is sent relatively *uncompressed,* that is, close to every pixel of every frame is transmitted. By contrast, cable TV is sent highly compressed, avoiding sending redundant data but requiring more compute power and high-speed memory on each end.",
"It's the data that the cables carry. The coaxial cable transfers a signal and information in a manner similar to an Ethernet cord from your Internet Provider, which could be data as video, data as a website link (etc.), while a Display Port cable carries specific instructions from the graphics card about how many pixels to place on the display, and where."
],
"score": [
9,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
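
The key point in the first answer is compressed versus uncompressed video. Some rough arithmetic shows how large that gap is; the compressed bitrate and coax capacity used below are ballpark illustrative figures, not a specification.

```python
def uncompressed_gbps(width: int, height: int, fps: int, bits_per_pixel: int = 24) -> float:
    """Raw bandwidth a display link must carry: every pixel of every frame."""
    return width * height * fps * bits_per_pixel / 1e9

print(f"1080p60 raw to a monitor: {uncompressed_gbps(1920, 1080, 60):.1f} Gbit/s")
print(f"4K60 raw to a monitor:    {uncompressed_gbps(3840, 2160, 60):.1f} Gbit/s")

# A broadcast HD channel arrives heavily compressed, on the order of 8 Mbit/s,
# so hundreds of channels fit in the several Gbit/s of usable RF spectrum
# on a cable plant (~5 Gbit/s assumed here for illustration).
channel_mbps = 8
print(f"HD channels per 5 Gbit/s of coax: {5000 // channel_mbps}")
```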
5pm1ij | How does my Amazon Alexa not react when a commercial for it comes on? | Technology | explainlikeimfive | {
"a_id": [
"dcsbk54"
],
"text": [
"I haven't seen an Alexa commercial honestly, but mine does react to my television sometimes. Since the voice recognition is really done on Amazon's servers, not in the device itself, they could have code to filter out audio clips from their own commercials."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5pm4ad | If Linux can't detect SD cards/eMMC at boot, then how do SBCs like the Raspberry Pi boot off them? | Technology | explainlikeimfive | {
"a_id": [
"dcs7b0t",
"dcs92fb"
],
"text": [
"The booting process happens long before Linux does anything (in fact, the booting process is what *starts* Linux). The thing that actually does the booting in a computer is something called the \"boot loader,\" and its sole job is to find an operating system and start it. The simplest \"answer\" I can give is just that the Raspberry Pi has a boot loader that is programmed to detect and read SD cards.",
"It can...sort of. It works like this: The BIOS/UEFI on the motherboard does the actual booting. It looks for drives, finds the OS kernel, and starts it. The BIOS/UEFI has to have support for SD cards and/or eMMC built in to boot from these drives. Linux doesn't immediately mount SD cards or eMMC because it considers them to be removable and likely used only for user files after everything is loaded. However, it can start from either type of drive just fine. If the BIOS/UEFI doesn't boot from these drives, the boot loader (the software that bridges the gap between the BIOS/UEFI, i.e. GRUB) can be put on something it will boot like a CD or hard drive, which then loads the rest of the OS from the SD or eMMC."
],
"score": [
9,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5pm91w | Why is it that video games have loading screens after deaths? | Okay, so I accept the need for loading screens in general. I don't get, however, why they're necessary after you **die**. It seems to me that if you have a save in the room/area you're fighting in when you die, the area should already be loaded (because you were JUST standing there) and all the computer has to do is refill everyone's health bars and put the enemies where they were. However in a game like The Witcher 3, it's faster to warp to an entirely different area of the map than it is to load a save or continue after death. What gives? | Technology | explainlikeimfive | {
"a_id": [
"dcs89g2"
],
"text": [
"Resetting just the things that changed back to their proper state can be harder than you think. The laziest way to program it is to just pretend you're loading the game for the first time, and re-initialize everything. It's inefficient, but effective."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5pnxde | Why do all remote controls have squishy rubber buttons instead of proper hard keys like a keyboard? | Technology | explainlikeimfive | {
"a_id": [
"dcsjece",
"dcsipz6",
"dct7s64",
"dcsit44",
"dcssf7z",
"dcstk1e",
"dcszx0d",
"dcsxkyn"
],
"text": [
"While hard keys like keyboard keys get produced one by one, the rubber keys of remote controlls are all made in one step. those rubber buttons are on one big pad per remote control so they are produced pretty fast and they are cheap. old cellphones do have the same kind of buttons. the drawback of those rubber buttons is that you cannot type as fast as with keyboard keys, but because u dont need to be as fast as that it is no problem. also the stupid humans would never recognize that they get fucking cheap remote controlls instead of high quality products",
"It's cheaper and easier to produce. This style of button has a plastic membrane, which when pushed down, completes the circuit and registers a button press. There isn't a physical button component soldered on for each button. A keyboard has more tactile buttons because it feels better to type on, however you can get cheap keyboards with the same type of buttons as a remote.",
"Something not mentioned is that they also do a pretty good job of waterproofing the switches. Because all the switches are usually on a single sheet of rubber, random splashes of water don't penetrate into the unit as much as they would with discrete keys. In fact, while the emphasis ITT seems to be cost, in reality I'd argue that the real reason is because it's a brilliant bit of design. You can pull it apart and clean the contacts easily, and in the worst case scenario where the conductive pad is worn, you can even paint on new contacts. If I was purchasing a 20 year old second hand device with heaps of switches, I know which style I'd prefer.",
"Maybe because remotes get dropped a lot, and many people have wood or tile floors? Hard keys might snap off and get lost, or break entirely. Source: watch a fair amount of TV, have hardwood floors, drop remote frequently.",
"Size and complexity which relates to cost. Going from simplest and cheapest to most complex and expensive it would go like this: 1. Plastic mushy button that presses down the circuit board like you see on a remote control. 2. Membrane keyboard where you have plastic mushy piece that pushes on the circuit board but is covered by a hard plastic key that your finger pushes. This is all cheap, non-mechanical keyboards. The vast majority of user keyboards. 3. Mechanical keyboards. A plastic key (of various materials if you like) that pushes down on a mechanical switch made of hard plastics and/or metals. All movements are precise and \"mechanical\" instead of relying on a squishy plastic to deform. So, the reason is essentially cost. The remote is generally the last thought for any device. When have you ever seen a nicely designed remote? Or one with decent materials? Never, or almost never I would bet. The amount of engineering and design and cost that it would take to make a small remote feel *nice* is beyond most consumer electronics that generally focus on low cost.",
"They used to. Personally I prefer soft keys, and judging by other comments, soft keys are cheaper and last longer.",
"If it had keys like a keyboard it would be way easier to accidentally press them. Put the remote down upside down, the channel changes, or the volume skyrockets, or the TV turns off. Same thing if you sit on it, etc.",
"Those squishy buttons are called membrane switches, and most keyboards use them, the same as a remote control. The key difference is the keyboard covers the rubber dimple with a plastic key. The reason membrane switches are used a lot, is because when the dimple is pressed in, it will rebound on it's own without needing a spring. So they are mechanically simple, yet robust. Without this self re-setting type of switch, you would need to have a spring under every switch which would make it less reliable and add to cost. Membrane keys on a remote are not all rubber tipped though, some are plastic on top, rubber on the bottom, such as like an Xbox button."
],
"score": [
199,
77,
18,
14,
4,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5po977 | Skyrim got a revamped version. It is the same game but with better graphics. One of the new features is the 64 bit engine. What did the programmers have to do? Rewrite the whole game? What is the difference between 32 bit and 64 bit from that perspective? | Technology | explainlikeimfive | {
"a_id": [
"dcsl6b4",
"dcsy10g",
"dcsph5f"
],
"text": [
"Answering one bit at a time: > What is the difference between 32 bit and 64 bit - When your CPU runs in 64-bit mode, it gains access to a number of new instructions, which, taken together, may allow certain computations to be performed more efficiently. (Much of it is to do with the width of data being operated on. As a simple example, if your 32-bit application needs to add two 64-bit integers together, it doesn't have a single instruction for that. It has to emulate it through several 32-bit additions). However, the effect of this is negligible for Skyrim, because the program didn't really have a need for 64-bit arithmetic in the first place. - It also gives the CPU access to twice as many registers (where data is stored while the CPU is working on it. A larger number of registers means less need to read/write data from/to memory), and that might speed up the game by a few percent. This probably won't have a huge impact, but it's something. - Finally, it allows the application to use more than 4GB of RAM (which is all you can normally access from a 32-bit application). This has the potential to dramatically cut down on loading times. In the 32-bit version as you moved around the world, different parts had to be unloaded from memory, in order to make room for the parts you were entering. Now that the game can use as much memory as it likes, the game doesn't need to do that at all. (Of course the data still have to be initially loaded from the disk, and depending on how much RAM you have, it may still not be possible to fit *everything* into RAM, but at least the game can hold a lot more than 4GB of data in memory at the same time. The effect of this is twofold: it can cut down on loading times as I described above, but it also gives them the breathing room they need to upgrade the graphics. Higher resolution textures or more detailed models all take up more memory. In the 32-bit version, that would have to push something else out, causing even *more* swapping data back and forth between memory and harddrive. But in the 64-bit version, there's room for this. > What did the programmers have to do? They *had* to do very little. It is quite easy to convert a 32-bit program into a 64-bit one. They likely needed to fix a few bits of code but assuming the source code is of half-decent quality most of it should *just work*, and then just compile the code again, telling the compiler to build a 64-bit application instead of a 32-bit one. So the mandatory work likely wasn't much. But then comes all the optional work, which I hinted at above: if they just did the lazy solution, they'd end up with a 64-bit application which had access to all the RAM in your machine, but which still *tried* to stay within 4GB. So they'd have to rewrite the code that loads/unloads assets to no longer be so strict about unloading assets. There may also have been some performance tuning, because the code performs a bit differently when it runs as 64-bit code. And then of course, any graphical enhancements probably required changes to their rendering engine, shaders would have to be rewritten and so on. But all of that is basically optional; not something they *had* to do in order for Skyrim to work as a 64-bit game, but rather rewriting bits of the game to take advantage of what 64-bit gives them and to make the game prettier and better.",
"It should also be noted that most game studios use revisions of the same engine for multiple games. [Creation Engine]( URL_0 ) was used for Skyrim and Fallout 4, so any improvements they made for Fallout 4 were certainly rolled into Skyrim SE.",
"They updated the lighting and water in the new game, but that's really about it graphically. The 64 bit engine allows for access to more memory, but perhaps more importantly, it gives Modders a much more stable experience with far less crashes compared to the 32 bit version."
],
"score": [
136,
8,
4
],
"text_urls": [
[],
[
"https://en.wikipedia.org/wiki/Creation_Engine"
],
[]
]
} | [
"url"
] | [
"url"
] |
|
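
The core practical point in the first answer is the 4 GB address-space ceiling of a 32-bit process versus the effectively unlimited (for a game) space of a 64-bit one. The arithmetic is simple enough to show directly:

```python
def addressable_gib(pointer_bits: int) -> float:
    """How much memory a flat pointer of this width can address, in GiB."""
    return 2 ** pointer_bits / 2 ** 30

print(f"32-bit process: {addressable_gib(32):,.0f} GiB addressable")   # 4 GiB
print(f"64-bit process: {addressable_gib(64):,.0f} GiB addressable")   # ~17 billion GiB

# In practice a 32-bit game saw even less than 4 GiB, since the OS reserves
# part of the address space, which is why assets had to be streamed in and
# out of memory so aggressively.
```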
5pokog | How did MS-DOS make Microsoft an OS power house? | Technology | explainlikeimfive | {
"a_id": [
"dcsnkf4",
"dcsrnf4",
"dcsmurj"
],
"text": [
"I'll have to leave most of your questions to others, but here's the answer to your first one: Microsoft's deal with IBM (to create/supply the operating system for the brand new IBM PC) allowed Microsoft to license the operating system to other computer manufacturers. When the makers of IBM compatible computers started selling their products, they could offer MS-DOS, which helped their computers run the same application programs as the IBM PCs, making the compatible computers more attractive to the market. Microsoft earned license fees from the computers that IBM sold, and also from the ones sold by the other manufacturers. The market for PC computers grew very large. IBM's revenue came from only a part of the market (computers sold by IBM), but Microsoft's revenue came from the entire market (computers sold by all the manufacturers). That was the start of Microsoft becoming a powerhouse.",
"> How did MS-DOS make Microsoft an OS power house. Bill Gates became the middle-man between IBM and another company. He bought the rights to DOS from a small computer company, renamed it, and then sold a license to IBM. IBM could have went to that same small computer company, and cut Bill Gates (and this Microsoft out of the picture). Instead, they went through Microsoft - which got a wad of cash from each computer IBM sold - and that is how they came to existence. > Also how did MS-DOS lead to DOS based Windows and eventually Windows NT. Slowly but surely. First you need to realize that DOS-based Windows and Windows NT are two separate beasts. Think of DOS-based Windows as a gasoline car, and Windows NT as diesel car. They both get you from A to B, but the engine is different. With *that* out of the way, imagine DOS as being a small little gasoline powered scooter. With every new version of DOS, they added more stuff to this scooter - flags, a honky horn, and chrome rims. *Eventually*, someone got an idea that this scooter could do more - let's add a basket and another seat. **BOOM.** Dos-based Windows was born. It's still a scooter at this point, but with an upgraded frame - it's a pretty bad-ass scooter. Now, Microsoft keeps refining and adding stuff to this scooter. Eventually, they realize that this scooter is horribly underpowered and is prone to crashing. There's not a whole lot that can be done about *that*, because the fundamental flaws with the scooter (small wheels, lack of shocks, etc) are based off of being a scooter in and of itself. You can mitigate some of the problems, but *cannot fix them*. So, with a perfectly functioning scooter to use as a reference plate, they start developing *a motorcycle*, using brand-new technologies and tools, using the scooter as a reference point. By starting over from scratch, they can gut a lot of bad design choices made during the development from the scooter. However, by making smart design choices on the motorcycle, some accessories from the scooter to be used on the motorcycle. Not all will work, not all will fit, but the accessories (the honky horn) that properly followed the specs for the scooter, will work perfectly on the motorcycle. Cheap junk that barely worked on the scooter will not work on the motorcycle at all. So now, we got DOS-based Windows scooter, and a Windows NT-based motorcycle. They keep both products in the storeroom floor, as some existing accessories will only work for the scooter. Eventually, accessory makers start making products for *both* the scooter and the motorcycle. Eventually, people start buying the motorcycle more because it's flashier - more accessories work on it - it's more stable - that the demand for the scooter dries up, leaving the motorcycle to be the last product remaining. > Finally, how was Microsoft BASIC (for 8 bit computers) effected by MS-DOS? BASIC at that point in the computer history, was an interpreted language. So, think of an \"interpreted\" language this way - you're in a foreign country with a friend, and you ask a local a question. You don't know the language, but your friend does. So you ask your friend the question, who then asks the local, who then responds back to your friend, who then gives you the answer. So with DOS in particular, what's going on is that the change in architecture (8-bit to 16-bit to 32-bit to 64-bit) is in ELI5 form, you going to different countries with different friends.",
"The *very* short version is something along the lines of IBM, the computer manufacturer, was releasing a new thing called a 'Personal Computer' to businesses, and needed an O/S, so they scouted around and Bill Gates saw an opportunity, bought in an existing O/S called Q-DOS, modified the source code to the Intel architecture to run on the x86 chipset and called it MS-DOS."
],
"score": [
11,
9,
5
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5ppu0b | How can game key aglorithms be simultaneously strict enough that they can't be guessed and vague enough that millions of games can generate millions of single-use keys? | And how do the people designing these algorithms can be sure that their key generator won't accidentally create a key for another existing game? | Technology | explainlikeimfive | {
"a_id": [
"dcsx1yg"
],
"text": [
"Assuming you have a 16 digit alphanumeric code (A-Z, 0-9), you have 36^16 or 7.96 x 10^24 different combinations. 7.96 x 10^24 combinations is roughly 8 *heptillion*, 8 million million million, unique keys. The chances of getting the same key twice, for any two given games is 64 x 10^48, or 64 *quindecillion*. In other words, for every 64 trillion trillion trillion keys generated, on average, 2 will be the same. That's well within acceptable overlap. That being said, *yes, hypothetically, two keys could be the same for two different games*, although there are most likely algorithms in place to prevent this."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
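
The answer above computes the size of the key space for a 16-character alphanumeric key and argues collisions are vanishingly rare, though its "64 x 10^48" collision figure is stated loosely. The sketch below redoes the arithmetic and shows the usual belt-and-braces safeguard it alludes to: generate keys randomly, but keep a record of issued keys so a duplicate can never actually be handed out.

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits   # 36 symbols
KEY_LENGTH = 16

print(f"key space: {len(ALPHABET) ** KEY_LENGTH:.2e} possible keys")   # ~7.96e+24

issued = set()   # the publisher's record of every key it has generated

def new_key() -> str:
    """Draw random keys until one is unused -- in practice the first draw is
    essentially always fresh; the loop is just the formal guarantee."""
    while True:
        key = "".join(secrets.choice(ALPHABET) for _ in range(KEY_LENGTH))
        if key not in issued:
            issued.add(key)
            return "-".join(key[i:i + 4] for i in range(0, KEY_LENGTH, 4))

print(new_key())   # e.g. 7QX2-9ZKD-M04A-TR8B
```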
5prhe8 | How does Facebook change its mobile layout and colors without having me update it on the App Store/play store? | Technology | explainlikeimfive | {
"a_id": [
"dctafja",
"dcta2et"
],
"text": [
"A bunch of the stuff you see in the FB app is done not with compiled native (Java/Swift) code but with HTML and Javascript. These assets can be downloaded and updated without changing the compiled app and thus don't require a new version in the Play/App Stores. It's a little risky on FB's part, because there are a ton of security checks FB ought to be doing to make sure a malefactor cannot inject their own Javascript into those updates. Imagine Comcast snooping your Internet traffic, recognizing a request for updated FB Javascript, and substituting their own response that tracks what you read and shows you their ads.",
"There are several ways this could be done, but they all include server-side action. So for example, facebook might develop a new layout and put it inside the app, but leave the old layout in as well and don't activate the new one yet. Maybe something on the server side is not ready yet for the new layout, or they want to roll it out to just some people first to see if it works properly. Then, when they think it's good, they can make the server tell the app to switch to the new layout the next time it goes online."
],
"score": [
5,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5prtil | Why do games need to restart in order to change certain graphics settings (e.g. resolution) but not certain other ones (shadows, etc.)? | Technology | explainlikeimfive | {
"a_id": [
"dctb8xi",
"dctg473"
],
"text": [
"Think of the software like a building. When you construct it, certain specifications (settings) need to be determined as the foundation of the building. If any of these specifications need to be changed, you may have to tear down the building and make a new one. However, the furniture/wiring/plumbing inside can be swapped without modifications to the foundation of the building. Depending on the game engine, some settings need to be defined upon starting the software. Other settings are more dynamic so they can be changed within the game.",
"They don't, always. some games can change resolution without restarting. They'll just flicker momentarily, and then use the new resolution. It is technically possible to write a game that lets you change *any* setting without ever restarting. But from the developer's point of view, it's often simpler to say \"alright, if the user wants to change this, we just exit the game and restart\". That gives them a clean slate, where the can make simplified assumptions such as \"the resolution is never going to change while the game is running\". That makes their life easier, so sometimes they do that. For any setting that can be changed while the game is running, the developer needs to make sure that all relevant parts of the game are notified about the change. For example, when changing the resolution, you need to make sure that all UI elements are redrawn in new positions. You may need to notify the logic that detects the mouse's position to correctly map screen coordinates, and you need to update the renderer to fit the new target. Perhaps, based on resolution, you'll want to swap out certain textures too, for more or less detailed ones. And if you get this wrong, if you forget to update one corner of the game, so that bit of code still thinks you're running with the old resolution, then that might potentially crash the game. So often, it's simpler to just say \"you want to change something that has big knock-on effects on the rest of the game? Fine, but we'll restart to make sure we get it right\"."
],
"score": [
4,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
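The trade-off described in the second answer above - either notify every subsystem when a setting changes, or restart for a clean slate - can be sketched with a simple observer pattern. The class and subsystem names here are hypothetical, not from any particular engine:

```python
class Settings:
    """Minimal observer-style settings store: subsystems register a callback
    and get told when a value changes (the 'no restart needed' path)."""
    def __init__(self, **values):
        self._values = dict(values)
        self._listeners = []

    def on_change(self, callback):
        self._listeners.append(callback)

    def set(self, key, value):
        self._values[key] = value
        for cb in self._listeners:
            cb(key, value)   # forget to register one subsystem and it keeps stale state

# Hypothetical subsystems reacting to a resolution change without a restart.
settings = Settings(resolution=(1280, 720), shadows="high")
settings.on_change(lambda k, v: k == "resolution" and print("renderer: rebuild framebuffer for", v))
settings.on_change(lambda k, v: k == "resolution" and print("UI: re-layout widgets for", v))
settings.set("resolution", (1920, 1080))
```

Restarting instead of doing this bookkeeping is exactly the "simpler to get right" shortcut the answer describes.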
5psjrs | When you press the unlock button on your keys, how does it only work on your car? | I'm guessing it unlocks it with a frequency value, but there must be something more? | Technology | explainlikeimfive | {
"a_id": [
"dctgzu7"
],
"text": [
 I'm guessing">
"> I'm guessing it unlocks it with a frequency value, but there must be something more? It's not just something more. It's everything more. There is no \"secret frequency\" that's used. Instead, the keyfob and car both have electronic circuits that create pseudo-random numbers. Both circuits are initialized with the same value, so they both produce the same sequence of numbers. Every time you press the unlock button on your remote, it sends the next number. The car keeps a list of the next hundred or so numbers in the sequence, and if it sees one of those numbers, it unlocks, and resets the sequence to the next hundred numbers from there (so that you have some leeway to accidentally press the button a few times on your remote without getting locked out of the car)."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
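A rough sketch of the rolling-code scheme described above, assuming SHA-256 as a stand-in for the dedicated rolling-code hardware real fobs use; the seed, window size, and code length are illustrative only:

```python
import hashlib

def next_code(state):
    """Derive the next pseudo-random code from the shared secret state."""
    new_state = hashlib.sha256(state).digest()
    return new_state, int.from_bytes(new_state[:4], "big")

class Car:
    WINDOW = 100  # how far ahead of the car the fob is allowed to have drifted

    def __init__(self, shared_seed):
        self.state = shared_seed

    def try_unlock(self, received_code):
        state = self.state
        for _ in range(self.WINDOW):
            state, code = next_code(state)
            if code == received_code:
                self.state = state   # resynchronise to just past the accepted code
                return True
        return False                 # not in the window: ignore it

class Fob:
    def __init__(self, shared_seed):
        self.state = shared_seed

    def press(self):
        self.state, code = next_code(self.state)
        return code

seed = b"factory-programmed shared secret"   # hypothetical seed
car, fob = Car(seed), Fob(seed)
fob.press(); fob.press()                     # presses the car never heard are still tolerated
print(car.try_unlock(fob.press()))           # True
print(car.try_unlock(123456))                # False: a random guess is rejected
```

Because the car advances its state past each accepted code, replaying an old recorded code no longer works.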
5psv8d | How do they produce TV shows so much faster than movies? | It seems they take so much longer to make a entire movie compared to one episode, but think of how long a TV show runtime is for an entire season. How do they do it so much faster? | Technology | explainlikeimfive | {
"a_id": [
"dctjvey"
],
"text": [
"I believe I can give some reasons * Fewer and more basic special effects. A Hollywood blockbuster can take months on end just creating all the CGI even using countless Indian thralls and huge render farms. * Everything is on a tight predetermined schedule so actors know exactly what they're doing when they're doing it well ahead. Everything is taken into account. * Sets and locations are continuously reused and rarely have to be adjusted, sometimes different series use share sets. A location setup for the pilot can be continuously used for years. * With the exception of some premium channel programming the scope is far smaller. While a film may span the world and have outrageous action a series will typically have few characters who stay in one city unless needed. * Scenes can be filmed concurrently (e.g. the heroes can be chatting at a pub and the villains can be scheming in a mansion and both can be recorded at the same time.) * Most series have multiple directors working at the same time and the screenwriters have already at least outlined the whole season"
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5ptdmg | What is the purpose of the turnable dial around the face of a watch? | Technology | explainlikeimfive | {
"a_id": [
"dctos3f",
"dctos4r",
"dctuy47",
"dctoubi"
],
"text": [
"The ring as we know it today on some watches, also called rotating bezel, actually dates to the first diving watches of the 1950's. At the time, using a stopwatch was impossible, because every additional pusher could further compromise water-resistance, so instead they relied upon the bezel as a basic timing apparatus. Right before a dive, the wearer aligns the zero mark on the minutes hand. The bezel then indicates the minutes passed since entering water. To add security, the bezel can only be turned counter clockwise, meaning that if it were accidentally rotated, the immersion time would appear longer and the diver would be compelled to return to the surface earlier. During the 1960's and 1970's, the US and British Ministries of Defence also incorporated the bezel into military standard, either to display dive time or hours.",
"It can be used as a lapse timer or a reminder for a specific time. Turn the point on the rotating bezel to the current minute then at some future time you can see calculate how much time has passed without having to keep up with what time you started. Turn the point on the bezel to a specific time in the future as a reminder of some event at that time.",
"On pilot's chronograph watches, the bezel is used as part of a slide rule, used for calculations. Aviators used to rely on these to handle range and other navigational computations.",
"it's a timing feature. you turn the dial so the minute hand is currently on the number of minutes you want to count down, and you know it's been the correct amount of minutes when the minute hand reaches zero. it's originally for diving before digital watches. an alarm doesn't make sense under water and the old school watches didn't have stuff like flashing alarms or digital timers. it's mostly a traditional thing today, but i still use it from time to time, and sometimes when you're diving you may not want your watch to start blinking."
],
"score": [
87,
4,
3,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5ptfy7 | How does Google Maps and other GPS calculate my estimated time of arrival? | I understand that Google has massive amounts of data available to them to calculate arrival time, but what do they use, exactly? For example, do they estimate as if you are going to drive the speed limit the entire drive or do they use the current data at hand and estimate that you will be driving the average speed of drivers currently on each road you will take? Further more, does it calculate for average slowdowns in certain areas on what time it is estimated that you will arrive in said area? For example, would it estimate for a rush hour scenario that you would hit 6 hours in the future if you were destined to drive through a populated city? If so, would it direct you around before you made it to that city? Sorry if I am not making sense. Edit: I'm on mobile and sorta high. Sorry for bad grammar and whatnot. | Technology | explainlikeimfive | {
"a_id": [
"dctrfj0"
],
"text": [
"It utilizes the phone signal from each device passing by the nearest towers and triangulates them to the closest ride. It it notices there are a lot of connected devices and none are connecting to the next tower then it will assume a traffic jam and average out the time it takes to move from one tower to the next. It does this for the most common and optimal routes to assume a time frame."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
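Whatever the data source (tower hand-offs as described above, or location traces from phones), the ETA itself boils down to summing "segment length divided by currently observed speed" over the route. A toy calculation with invented numbers:

```python
# Hypothetical route split into segments with lengths (km) and the average
# speed (km/h) currently observed on each one.
segments = [
    {"name": "highway",   "length_km": 42.0, "avg_kmh": 95.0},
    {"name": "city ring", "length_km": 12.0, "avg_kmh": 30.0},   # rush-hour crawl
    {"name": "downtown",  "length_km": 3.5,  "avg_kmh": 18.0},
]

eta_hours = sum(s["length_km"] / s["avg_kmh"] for s in segments)
print(f"ETA: {eta_hours * 60:.0f} minutes")   # ~62 minutes
```

If observed speeds on a segment drop (the "traffic jam" case), its term grows and the total ETA rises; a router can then compare this sum against alternative routes and suggest a detour.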
5ptmgo | Why are some redirecting urls like URL_0 just as long as the origanal urls? | Technology | explainlikeimfive | {
"a_id": [
"dctrcvl"
],
"text": [
"A YouTube video ID is always 11 characters. Anything appended to the end is likely a marker pointing to a specific time (\"?t=75\" directs you to 75 second), an indication that the video is included in some playlist, or some other instructions like video size or window size, etc."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
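To see the answer's point concretely, here is an example watch URL pulled apart with Python's standard urllib; the video ID stays 11 characters no matter what extra parameters ride along:

```python
from urllib.parse import urlparse, parse_qs

# Example URL: an 11-character video ID plus a "start at 75 seconds" parameter.
url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=75"

query = parse_qs(urlparse(url).query)
video_id = query["v"][0]
start = int(query.get("t", ["0"])[0])

print(video_id, len(video_id))            # dQw4w9WgXcQ 11
print("start playback at", start, "seconds")
```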
5pto1o | Why did each (manned) Apollo Mission have a different crew? | I'm doing research for a paper on the Space Race during the Cold War, and part of that has been researching the Apollo Missions. I noticed that each of the manned Apollo Missions had a different crew on board. For instance Apollo 10 was basically a full test for the moon landing that would occur with Apollo 11, but without the actual landing part. Why did Apollo 10 and Apollo 11 have different crews? Why not have Neil Armstrong, Buzz Aldrin, and Michael Collins crew the test mission as well as the real mission? | Technology | explainlikeimfive | {
"a_id": [
"dctrl5d"
],
"text": [
"Good question... They had different goals/objectives on each mission. Everyone in that program got \"a piece of the pie\" and had to study the shit out of it/make it happen. Also, just sort of FYI, the crews did rotate (some people flew mutliple times). For example, Mike Collins could've walked on the moon but chose to end his NASA career after the first flight. Said it was so much stress and time away from his family, he decided if he made it through the first mission, that would be enough. And James Lovell was the Command Module Pilot (guy who orbited around the moon while the other two walked on the moon) on the Apollo 8 mission, and would have walked on the moon during Apollo 13 had things not gone wrong. Also, no one knew if \"moon germs\" were going to be an issue. So they had to be in quarantine for quite some time after each mission. Just a bit of an educated guess/jumping off point here - good luck with your assignment, sounds fascinating."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5ptpyf | how do computers tell time? | Technology | explainlikeimfive | {
"a_id": [
"dctxr28",
"dctrk6r"
],
"text": [
"There are three main engineering challenges to digital time-telling: - (A) Create a circuit that uses some sort of natural physical phenomenon to track the passage of time. - (B) Maintain accurate time when power is removed from the circuit. - (C) Keep the clock from drifting off the correct time. For part (A), we use a thing called *crystal oscillators*, usually using quartz for the crystal because it's cheap, has the right properties, and is common and well-understood in this application. Because of some complicated quantum physics stuff, you can take a quartz crystal of the right size and shape attached to the right supporting electronics, put a steady voltage in and get a steady stream of high speed pulses out. Millions or billions of pulses per second depending on how exactly you set it up. Add a digital circuit to count the pulses, roll over the \"seconds\" number once every however many million pulses it goes in an actual second, and check the system with a regular clock (or maybe a super duper stupidly high precision atomic clock) to be sure the number of pulses per second is right. Extra hardware and software can be added to the design for functions like displaying the time to the user, or any other time-based tasks like an alarm clock function. Almost any digital system these days contains one or more CPU's (that's the Central Processing Unit, the main part of a computer that does the actual computing) -- things like PC's, laptops, cell phones or game consoles have a pretty powerful CPU that's often \"front and center\" of marketing aimed at technically inclined audiences, to the extent that you even see commercials for Intel CPU's on TV. But even simple devices like microwaves or thermostats often have a *microcontroller* (a complete computer system including CPU, memory and other supporting circuitry on a single chip with very low performance, but correspondingly small size, cost and power usage). CPU's require a crystal oscillator to operate anyway, and many modern CPU's and microcontrollers have one built in, so it's usually just a matter of adding a fairly small amount of software code to harness the existing oscillator for a general-purpose clock, or any other timekeeping functions. For part (B), maintaining the clock without power, one answer is to add a battery to the design specifically for the purpose of keeping the clock running when the main power supply is cut. That's what's done in traditional desktop computers. Some other systems, like laptop computers, cell phones, and cars, already have a battery. The clock uses very low power so it's usually kept running even when the system's \"off\", except when the battery's completely disconnected. Of course, using a battery to keep time means you also have to have a fallback system to set the time whenever the system is restarted for the first time after a complete and full power cut including battery removal. For many devices, traditionally the answer has been that the user must enter the time in this case. However in the modern digital age, many devices connect to the Internet using NTP (Network Time Protocol). The current time is also available from the GPS (Global Positioning System) satellite signals. And the current time is also available over the cellphone network. And another possible answer to problem (B) is to deny that it is actually a problem that needs solving. That is, the designer simply accepts that the system will \"forget\" the time when the power's lost. 
Often you'll see a bunch of electronic devices around your place -- microwaves, ovens and alarm clocks -- all reset to midnight after a power outage (and sometimes blink or otherwise alter their display to indicate the outage occurred). For part (C), how to keep the clock drifting over time. It's impossible to make quartz crystals all 100% precisely identical. In other words, there is some error (deviation from the ideal tick speed), due to imperfections in the manufacturing process, and issues in the crystal's usage (basically temperature and the ability of the supporting electronics to supply a precise voltage). This error is usually small fractions of a second, but it can add up over many days, weeks, months or years of timekeeping. The traditional solution has been requiring the user to notice the clock is wrong and manually enter the correct time. But again, for modern systems, in many cases the system will be connected to an external machine readable clock source (Internet/NTP, GPS, cell network) which can be used to automatically correct the clock a couple times a day or so, before the drift has accumulated enough to make a noticeable difference.",
"They don't really \"tell\" it, they're just given it or get it from the internet and then keep track of it. Computers use a quartz crystal to keep time, as do almost all watches. The main element in quartz crystal, called silicon dioxide, has a piezoelectric potential which means when heat, pressure or any type of impact is applied, the electrons in the silicon dioxide begin to jump from their orbit and release a mild electrical charge that can be harnessed. The electrical charge is an oscillating vibration that is so constant and accurate it is harnessed for many things including keeping time."
],
"score": [
5,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
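A tiny numeric illustration of the first answer's points (A) and (C): count crystal pulses to get seconds, and see how a slightly off-spec crystal drifts until an external reference corrects it. The error figure is invented for illustration:

```python
# A typical timekeeping crystal nominally ticks 32,768 times per second,
# but a real part is always slightly off-spec.
NOMINAL_HZ = 32_768
actual_hz = 32_768.4          # hypothetical manufacturing error (~12 parts per million fast)

def clock_seconds(elapsed_real_seconds):
    """Seconds the device *thinks* passed: pulses counted / nominal pulses per second."""
    pulses_counted = elapsed_real_seconds * actual_hz
    return pulses_counted / NOMINAL_HZ

drift_per_day = clock_seconds(86_400) - 86_400
print(f"drift after one day: {drift_per_day:.2f} s")   # about +1 s per day

# Which is why devices that can reach NTP, GPS or the cell network re-sync a
# couple of times a day instead of trusting the crystal forever.
```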
5ptrdc | what's the difference between triple A games and other types of games? | Technology | explainlikeimfive | {
"a_id": [
"dctrzl3"
],
"text": [
"It's not a purely technical definition - the definition will vary from person to person and from context. Generally a triple A game is one that receives a large amount funding, normally from a major publisher, which is often visible in the number and quality of assets in the game. This doesn't necessarily mean the game is good (a major criticism of AAA games is that because they can't take risks with that much money on the line so they are cookie cutter games - identical sequel after sequel), just that the visual and audio components that is uses are pretty and plentiful. Think of AAA games like blockbuster movies. You can *see* the money spent on the screen in the form of big scenes, impressive CGI and lots of explosions."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5pv8j8 | How do fishermen on those large trawlers with nets catch the exact fish they are looking for? | Technology | explainlikeimfive | {
"a_id": [
"dcu3me2"
],
"text": [
"The fishing spots are not randomly chosen. They are chosen because there is a lot of the desired fish there. Different fish will have specialized themselves to feed on different food, swim at different depths and handle different ocean conditions. So if you go to the place that best fits the fish you are after and throw your net you have a good chance of getting the right catch. The nets is also a complex piece of equipment designed to only catch fish of the right size and shape. Too big and the fish will not be able to enter the net, too small and it can flee through the net. Finally a lot of fish travel together in steams. These steams are visible on sonar and you can even see the size and shape of the fish to find out what kind of fish is in the steam. A trawler can then throw its nets and catch the entire steam at once."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5pwxbq | Why is quartz so useful in making watches? | Technology | explainlikeimfive | {
"a_id": [
"dcug21r",
"dcugam3"
],
"text": [
"Quartz vibrates when an electric current is passed through it. The watch counts the vibrations and uses it to accurately count seconds.",
"If you put an electric signal into a quartz crystal, it resonates in a very dependable way. Quartz watches contain a tiny [quartz tuning fork]( URL_0 ). The tuning fork is cut to a size where it will vibrate 32,768 (2^15) times per second. That's a high enough frequency that humans can't hear it. The vibrations drive a 15-bit digital counter, which reaches its maximum value exactly once per second, producing the pulse that drives the second hand."
],
"score": [
5,
4
],
"text_urls": [
[],
[
"https://en.wikipedia.org/wiki/File:Inside_QuartzCrystal-Tuningfork.jpg"
]
]
} | [
"url"
] | [
"url"
] |
|
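The 15-bit counter in the second answer can be simulated directly; every 32,768 counted vibrations is one tick of the second hand:

```python
# Simulate the counter in a quartz watch: the tuning fork vibrates
# 32,768 (2**15) times per second, and each counter rollover is one "tick"
# of the second hand.
VIBRATIONS_PER_SECOND = 2 ** 15

counter = 0
seconds = 0
for _ in range(VIBRATIONS_PER_SECOND * 3):   # three seconds' worth of vibrations
    counter += 1
    if counter == VIBRATIONS_PER_SECOND:     # the 15-bit counter rolls over
        counter = 0
        seconds += 1

print(seconds)   # 3
```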
5px40z | There are a surprising amount of people who believe any moon landings were a hoax. Why can't we use telescopes/powerful cameras to prove there was activity on the moon? | Are there no traces of the landings (flag, residual space ship parts, any other debris) that are visible using a high powered camera or telescope? I know even consumer-grade cameras are getting to be pretty good at taking high res photos from very far. I'm not saying that it is necessary to provide MORE evidence of a moon landing nor do I care to open this thread to a debate, but I am curious if this would be possible and if not, why not? | Technology | explainlikeimfive | {
"a_id": [
"dcuhq74",
"dcuksri",
"dcuhcgm",
"dcuhag7",
"dcuvx40"
],
"text": [
"The Lunar Rconnaissance Orbiter took pictures of one of the [Apollo landing sites]( URL_0 ). But to a conspiracy theorist, any contrary evidence is part of the conspiracy.",
"The lunar landers are so small (0.009km wide), and the Moon so far away (380,000km), that no Earth-based telescope can make them out. Not even Hubble. You'd need a *huge* telescope to do it. Like, \"hundreds of meters wide\" big. Or, you could use a camera mounted on a satellite orbiting the Moon, which is what NASA has done (the Lunar Reconnaissance Orbiter), as linked by /u/AzrgExplorers.",
"> Are there no traces of the landings (flag, residual space ship parts, any other debris) that are visible using a high powered camera or telescope? Not from Earth, and even if there were the conspiracy theorists would call it fake. It isn't worth trying.",
"Word is that it's too hard to hold a camera steady while zoomed in that much. I believe it. I tried holding a 72× zoom camera, and that was unwieldy enough with zoom.",
"In a myth busters episode they explained how they left a special reflector that can be detected with a laser. The surface of the moon scatters the laser beam but the reflector can send the beam back and so it can be detected. This was demonstrated on the episode but, again, conspiracy theorists will just think the show was in on it. There is plenty of evidence available for the average person to demonstrate that the Earth is round, but there are plenty that still believe otherwise."
],
"score": [
12,
6,
4,
3,
3
],
"text_urls": [
[
"https://www.nasa.gov/mission_pages/LRO/news/apollo-sites.html"
],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
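The "hundreds of meters wide" figure can be sanity-checked with the Rayleigh diffraction limit, using the distances quoted in the answers above; the numbers are rough:

```python
# Rough diffraction-limit estimate behind the "you'd need a huge telescope" claim.
wavelength = 550e-9     # green light, metres
distance = 3.84e8       # Earth-Moon distance, metres

def aperture_needed(feature_size_m):
    # Rayleigh criterion: smallest resolvable angle ~ 1.22 * wavelength / aperture,
    # so the aperture needed is 1.22 * wavelength / (feature size / distance).
    return 1.22 * wavelength * distance / feature_size_m

print(f"{aperture_needed(9.0):.0f} m mirror just to separate the ~9 m lander from the ground")
print(f"{aperture_needed(1.0):.0f} m mirror to make out ~1 m detail on it")
# Roughly 29 m and 258 m -- both far beyond Hubble's 2.4 m mirror, which is why
# only a camera in lunar orbit (the LRO) has photographed the landing sites.
```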
5px7np | What is Net Neutrality and why is there a lot of controversy surrounding it? | Technology | explainlikeimfive | {
"a_id": [
"dcuuy6j",
"dcukjk4",
"dcuibsa",
"dcuytdn",
"dcumw6h"
],
"text": [
"Say I run a factory building steel folding chairs. I have my steel imported by rail, on the one rail line that accesses my town. Now, railroads are expensive to lay down, and the ones we have were all built a while ago, and only because the government helped by providing large subsidies and seizing right-of-ways through eminent domain. That means there aren't gonna be any new railways coming into my town any time soon, and the railroad company in my town essentially has a monopoly on railroad traffic for the foreseeable future. Now, there are four steel refineries that I could purchase my steel from, and they all compete on price and quality. The biggest of these companies used to be the best, but it dropped off in quality a long time ago, and it's starting to lose business to its competitors. It has way more money in the bank than the other companies, though, so it hatches a plan. Instead of investing to make its steel better or less expensive, the big company instead pays a giant kickback to the railroad company for an \"exclusive contract,\" which requires that the railroad charge the other steel companies *double* to carry their steel and won't allow them to ship steel on the railroad's express trains. This makes it impossible for the three other companies to compete, and they eventually stop shipping steel to my town. Once that happens, the big company is free to jack up the price on its crappy steel. Many, many years ago, the government recognized that deals like the one between the railroad and the steel refinery are bad for competition and the free market, so they made a rule to prevent it. The government has long required that railroads are a \"common carrier,\" meaning they can't discriminate between customers and have to charge everyone the same for carrying freight. This ensures that, in my case, the market for steel in my town stays competitive and free instead of being taken over by a monopoly. \"Net neutrality\" does the same thing, establishing ISPs as common carriers for Internet data. Like railroads, Internet infrastructure was all laid down years ago with healthy government subsidies, and it's prohibitively expensive for a new company to come in an lay new lines. (RCN has tried to and found it very difficult to make overbuilding profitable.) This means that most localities only have one, maybe two ISPs to choose from, and there won't be any competitors in the near future. Say you're Comcast, the country's largest ISP provider, which also owns the Xfinity cable service brand. Xfinity has a streaming video service, and Comcast would much rather you watch that service (and the advertisements on it) than Netflix. Comcast would very much like to add a surcharge to Netflix data to discourage people from using it, or throttle Netflix transfer speeds so it has a lower resolution than Xfinity streaming. Net neutrality says they can't do so, and have to treat Xfinity and Netflix data the same. This means that Netflix and Xfinity have to compete on price and quality, not just who has a sweetheart deal with the local monopoly provider. Consumer advocates, startups, and smaller companies like net neutrality because it helps keep the Internet a free, competitive marketplace where the best product wins. Incumbent telecom giants hate net neutrality because in a free market, they might lose their position on top, and because they can't squeeze out more money for mediocre products.",
"The idea of Net Neutrality is that all internet traffic has equal access to bandwidth/speeds. Doing away with it would be akin to allowing for highway speed limit signs that read \"Speed Limit: 65, BMW Owners Speed Limit: 85\" if BMW decided to pay for such access as a selling point. Or conversely, it'd be like a state choosing to impose slower speeds against a car maker who jilts them and builds a plant in the neighboring state. On the internet, this could play out with ISPs being able to pick winners and losers by doing something like giving Hulu priority speeds and throttling Netflix, since Comcast owns NBC, which is a partner in Hulu. Or an ISP might speed up load times for right wing news sources while slowing access to left wing sources. The loss of Net Neutrality could mean the uneven flow of information, which goes against the basic beliefs of the internet being open to all.",
"It's the idea that internet service providers won't give priority access (more bandwidth, faster speeds, etc) to domains and sites that pay them premium access fees to do so. Net neutrality = everyone has equal access to the same information superhighway. It's controversial because some companies want to be able to pay more to be more visible, and ISPs want to get their hands on that extra revenue stream. The rest of us want the \"little guys\" to have an equal shot at internet bandwidth and access.",
"Internet access is provided mostly by large cable companies. Comcast, AT & T, Time Warner Cable, Verizon, Century Link, etc... Those companies want to make as much money as they can. They can make more money (on top of subscriber fees) by charging companies money to use bandwidth. Right now, they are required to let all companies use bandwidth and speeds equally, and they aren't allowed to charge extra to a company that uses a lot, like Netflix, or one that competes with their products, like Netflix. Net Neutrality is what requires them to keep bandwidths/speeds the same for all websites. These companies don't like that. They want more money. And they already have a lot of money, so they are pressuring politicians to change the rules to let them charge different users different prices.",
"In a world where you can/have to pay for a faster/bigger line to your customers, companies like Netflix, YouTube etc would have a very hard time if they were starting up new today. Net nutraility ensures that all traffic is treated the same."
],
"score": [
38,
16,
12,
4,
3
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5pxfou | Why can't touchscreens recognize a touch while a finger is being held on it in another location? | My toddler plays a learning game on my touchscreen phone but he doesn't understand that when he holds the phone (with this thumb touching the screen) he can't touch anything else on the screen. I cannot get him to move his thumb because otherwise he will not be able to hold onto the phone. | Technology | explainlikeimfive | {
"a_id": [
"dcukbg1"
],
"text": [
"It's not that the touch screen can't recognize the touch, it's that it doesn't know which one to pay attention to. It doesn't want to randomly click things if you grab the phone with your whole hand or if you touch something on the side of the screen with the hand holding it. Basically it's designed to be conservative and safe otherwise people would be accidentally clicking things all the time. My toddler had the same problem. Our solution was to get him his own tablet with a big thick rubbery case. When playing with our phones we put it on a hard surface like a table and tell him he can only touch it with one hand. It's a new life skill. If the game is fun enough, he'll learn soon enough!"
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5pxjv4 | Why is it that a 10 minute 720p video I record with my phone is 700mb, yet a 720p 1.5 hour long movie uses less data? | Thank you everyone for your answers. | Technology | explainlikeimfive | {
"a_id": [
"dcul68h"
],
"text": [
"Before movies are distributed, the data gets carefully compressed on full-sized computers that may have spent many minutes or even hours compressing it, after the recording was complete. The result is better compression, hence a smaller total file size."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
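Putting rough numbers on the answer: the difference is bitrate. Assuming, purely for illustration, a 90-minute movie file of the same 700 MB size:

```python
def mbit_per_s(size_mb, minutes):
    return size_mb * 8 / (minutes * 60)

phone_clip = mbit_per_s(700, 10)   # the 700 MB, 10-minute phone recording
movie = mbit_per_s(700, 90)        # a 90-minute movie assumed to be the same 700 MB

print(f"phone recording: ~{phone_clip:.1f} Mbit/s")   # ~9.3 Mbit/s
print(f"studio encode:   ~{movie:.1f} Mbit/s")        # ~1.0 Mbit/s
# The slower, more careful offline encoder squeezes acceptable 720p into
# roughly one ninth of the bitrate the phone needs in real time.
```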
5pxwa5 | How does an air fryer work? | Technology | explainlikeimfive | {
"a_id": [
"dcuoe21"
],
"text": [
"It's just a self contained convection oven. It uses fans to circulate the hot air so that it cooks things very evenly. Most ovens are not convection. They have heating elements that heat up the food but might not always do it evenly. A convection oven has a fan in it that moves the air around constantly this helps distribute the heat evenly around the space inside the oven which helps cook the food more evenly. This also cooks faster since it takes less time for each part of the food to reach the required temperature."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5py3mv | In the age of computers, why do traffic cops take ages to write a traffic ticket by hand? Why not have the system that scans your drivers license and creates a ticket for the next available court date? | Technology | explainlikeimfive | {
"a_id": [
"dcuqhzf",
"dcuqlrz"
],
"text": [
"Actually, there are some agencies that do just that; however, the underlying need for all of the relevant computer systems to be inter-connected does not exist everywhere.",
"Some do, I know from personal experience the California Highway Patrol has a system and doohickey where your ticket is printed up just like a receipt at a store."
],
"score": [
3,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5pz30y | How is this light effect created in 90s cartoon animation? | Technology | explainlikeimfive | {
"a_id": [
"dcuzqwt"
],
"text": [
"Its called backlit animation. Instead of the cells just being photographed on a table or whatever, it sits on a backlit lightbox that shine the light through translucent paint."
],
"score": [
23
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5q1xt1 | How does a hacker get caught? Couldn't they just go buy a $200 laptop, install their hacking tools from a cd or thumb drive, and then take it to a public wifi place (library, Starbucks, etc)? Then burn their stolen hacking data to a CD when they get home? | Technology | explainlikeimfive | {
"a_id": [
"dcvnl6j",
"dcvoh42",
"dcvnogi",
"dcvoc1v",
"dcvszg1",
"dcw0qa8"
],
"text": [
"First of all hacking takes time, you can't just sit in Starbucks all day. Also wifi isn't usually very fast in public places, it's inefficient.",
"Surprisingly, most hackers don't operate like this. Many of them sit home and cover their traced by hiding their personal information that could be used to identify them such as their IP address or really anything. Web traffic is anonymized through VPN services or tools such as TOR browser. This makes it harder for people to track down hackers by rerouting their web traffic to hide the origination which usually provides a geolocation. Hackers that get caught are usually sloppy or they're playing with the big boys (such as large hacking organizations or government agencies.) The Big boys can intercept and gather more data because they have more sources and ISPs (Internet service providers) they can tap data from. To give you an idea of how someone would get caught in the scenario you presented, I'll walk through. •$200 laptop, has a serial number, maybe the hacker got careless and set it up with some random shred of personal info. •The public wifi they're accessing is sure to have some oversight and monitoring. In fact, I wouldn't be surprised if ISP security teams more actively monitor traffic from sources such as this (and other places that offer free wifi) more actively than they would a residential Internet user. •That CD is physical evidence. If a raid were to be conducted on the said hackers house, the chances of this data being found is extremely probably. Hackers get caught by being sloppy or brave, yet it's usually a combo of the two when talking about the ones that got caught.",
"they get caught because they leak or leave details of their crime in places where law enforcement finds them. you don't hear about the hackers who don't get caught because the police never find them! being in a public wifi places doesn't prevent you from getting caught. starbucks still has security camera's with time indexes. oh did you buy a coffee with your credit card while you were on that 10 hour hackathon? bam..caught. did you finish up and then hailed a cab to take you home? bam...caught. did you brag about it to your online chat group that you just scored big? ohh...that new guy is a snitch. bam...caught",
"Most 'hacking' is done via social engineering. They gain people's confidence in order to either extract credentials to log into the system with the assets or gain physical access to the system or a network with the system. The act of interacting with people means the people who were duped have information that could aid in an investigation. Physical access leaves more information. Secondly even on public wifi you have to necessarily be in public, which leaves you open to people witnessing your presence.",
"Here's a tip. Use a laptop with no hard drive. Boot off of tails or JonDo OS on CD only. There will be No trace of your use other than the Mac address which can also be modified before connecting to the internet... Store all data that needs saving into an encrypted cloud drive using an encrypted zip container within a zip container using truecrypt with a nice long password. If you wanted to make sure no one will discover the contents of the zip drive, save the first portion of the container in a separate location. You could make this turtles all the way down. Use Kali Linux to encrypt your Disk. When asked for the password give them the self destruct code that wipes the key space and POOF all data is non recoverable..",
"Here is a story of someone getting caught at a library: URL_0 He's not really a \"hacker\" per se, but he's a cybercriminal who was caught while doing exactly what you describe. Using his laptop in a public place was actually one of the things that contributed to his capture: “The plan for the arrest…was to get him into a position where we could have him in a public setting, and I could initiate a chat with him,” Deryeghiayan said in response to questions from prosecutor Serrin Turner. “The purpose was that if indeed [the Dread Pirate Roberts] was Ross Ulbricht, we could get his computer in an open, unencrypted state.”"
],
"score": [
68,
51,
21,
10,
4,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[
"https://www.wired.com/2015/01/silk-road-trial-undercover-dhs-fbi-trap-ross-ulbricht/"
]
]
} | [
"url"
] | [
"url"
] |
|
5q27qs | Why do cars have two controls for parking? | I'm referring to the gearstick park and the parking brake (or hand brake). Would it not have been possible to incorporate both into a single control? | Technology | explainlikeimfive | {
"a_id": [
"dcvpjh7",
"dcvq2sf",
"dcvq9j4",
"dcvz5rh"
],
"text": [
"They're two different mechanisms. The parking brake is connected directly to the wheels, holding them in place. The gear shift locks the transmission, which is a part of the engine. There are electronic parking brakes which engage automatically when you move the gear shift to park.",
"The idea is to have two redundant systems so you have double the chance of one break working as intended. If you parked at a slope and had only one system and it failed...crash. If you have two separate systems the chance of both systems failing at the same time is much smaller. Say chance of failure is 1%. So in 1/100 cases your car rolls down the hill. Now both systems have 1% failure chance its 1/10000 cases that both systems fail. redundancy is the easiest way to better statistical errors.",
"Only automatic transmissions have a Park setting on the gearbox, manual transmission are generally left in neutral when parked up",
"Another reason is the emergency brake (the one not on the gear stick) can also be used in case of hydraulic brake (foot brake) failure since it's a separate system. The emergency brake system is more reliable since it's just a cable that runs to the back wheels. If your foot brake goes out you can use the handbrake to slow you down, just do it slowly or you'll loose control. You can't do this with the gear stick parking btw it won't work unless you're stopped."
],
"score": [
13,
10,
4,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
5q2uto | How do random number generators work on the most fundamental levels in programming or otherwise? | I was trying to think how I would come up with it but couldn't find anything that was very clean or efficient. Thanks for reading | Technology | explainlikeimfive | {
"a_id": [
"dcvwftv",
"dcw1k8f"
],
"text": [
"Without an additional hardware device, called a \"true random number generator\" (TRNG), there simply is no real random, but we use pseudo randomness. To clarify that, what does \"random\" actually mean? Simplified, randomness means the fact of having a sequence of numbers that we can't find any pattern in, so the next number in the sequence cannot be determined knowing the whole prior sequence. Furthermore this implies that the elements of the sequence as a whole are equally distributed on a given interval. We can now emulate that behaviour by using pseudo-random generators: Those are mathematical functions or algorithms that show the required behaviour of equal distribution, but that have a period (the amount of 'random' numbers you can get out of it before we get the same random numbers as before a second time) that is far greater than the amount of pseudo-random numbers we actually get out of it. The most famous example might be the [Mersenne Twister]( URL_0 ) with a period of 2^19937 - 1. Let me be clear: The amount of atoms in the visible universe is [estimated to be roughly 10^80]( URL_1 ), what estimates to 2^266. When you now initialize the Mersenne Twister with some variables that only youself are aware of (e.g. the current system time in microseconds, maybe joined with some memory contents), you got your own, personal source of pseudo-randomness.",
"These guys have pretty much nailed it. To make it more simple, however... There are two options - 1. Take an input from something your computer is monitoring. Some good ones: Temperature, current RAM usage, hard drive spin RPM, etc. Apply some math to those to get them to fit into the random number range you're generating. I.e. if the hard drive is spinning at 3217 RPMs, and you need a number between 1-10, take the last 2 digits, divide by 2, and round down. 2. Pick from a big, predetermined list. You can start at different places on the list, also by taking an input."
],
"score": [
15,
3
],
"text_urls": [
[
"https://en.wikipedia.org/wiki/Mersenne_Twister",
"https://en.wikipedia.org/wiki/Observable_universe#Matter_content"
],
[]
]
} | [
"url"
] | [
"url"
] |
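Python's standard random module happens to be an implementation of exactly the Mersenne Twister mentioned above, which makes the "same seed, same sequence" point easy to demonstrate:

```python
import os
import random

# Python's `random` module is a Mersenne Twister. Seed two instances with the
# same value and they produce the same "random" sequence every time -- that's
# what makes it *pseudo*-random.
a = random.Random(12345)
b = random.Random(12345)
print([a.randint(0, 99) for _ in range(5)])
print([b.randint(0, 99) for _ in range(5)])   # identical list

# For unpredictable seeds, programs pull entropy from the operating system,
# which mixes in hard-to-guess events (timings, device noise, etc.).
print(os.urandom(8).hex())
```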
5q3z6w | What exactly will an Exascale supercomputer be used for, and how will it benefit humanity? | Technology | explainlikeimfive | {
"a_id": [
"dcw5kr5"
],
"text": [
"Ever wonder why a video game has to simulate unit stats instead of working out the physics of swords hitting shields? Simulating each particle requires a TON of computing power. Engineering software is the same way. We can model parts of systems with particular materials and simulate how they interact with one another. This allows us to stretch the limits of materials and design, which is HUGE for high-end projects like space travel, industrial plant design, and micro-processors. However, we can only model PARTS of each system, and have to spoof in external factors. Even then those models refer to generalizations from industry codebooks, not actual particle science formulas. To do the whole system would require massive computing power, and programming particle physics into miles of pipeline can be done, but no reasonable computer could run it. Also, every day more computer algorithms are being developed on paper that enhance security or process MASSIVE amounts of fuzzy data to put together patterns. DNA and genetics research is largely statistics based. Why did you inherit your grandmother's parkinson's, but it skipped your mom? We can throw everyone's genetic data into this computer and have it sort out what makes you so special. Using the same logic, we could sort out how the universe moves within each other by throwing daily star coordinates into the data pile. The computer can look at every star in the sky, compare it to every other star in the sky, and try to spot patterns in their movements. This can give us a greater understanding of our place in the galaxy and maybe even spot a Death Star headed our way. There's a ton of things that various industries are trying to do, but are limited by computer power. Most of science and engineering is driven by industry codes that use generalizations that are't accurate but \"close enough\". If we are to truly understand the world, we have to keep sharpening our pencils, and the Exascale is a really big one."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5q6x4m | How To Get Reliable Internet | Technology | explainlikeimfive | {
"a_id": [
"dcwswum"
],
"text": [
"You want to get a second opinon from another internet provider. It's possible your current provider may not have good service in your area"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5q7i9b | Where does an electron "go" once it enters a piece of electronics? Whathappens to it? | Technology | explainlikeimfive | {
"a_id": [
"dcx0tsx",
"dcwy1mt"
],
"text": [
"They don't really 'go' very far. In DC currents, it can take hours for an actual single electron to travel an inch. In AC currents, they just oscillate back and forth - not travelling anywhere. Electricity - as what we commonly think about - is really an electromagnetic field that propegates really fast down a conductor. A very simple definition for voltage is the difference of intensity between two points of an EM field. A not so great analogy is think about yelling at a person across a field. You are generating a pressure wave that propegates through air molecules a long distance. The actual air you exhaled in doing to isn't what the person on the other side experienced. There is no \"flow\" of air between you two. The air isn't going anywhere, it's the wave that was generated that is travelling",
"It keeps moving down the wire until eventually it moves out of the device and back into the power source. Electrons are just little particles that we can convince to move through a wire. We take advantage of that movement to either heat something or cause something else to move. But electrons don't get burned up or used up, they just get pushed around."
],
"score": [
10,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5q8eqo | Why do Apple consumers stay devoted to Apple products and have a hard time switching to anything else? | Technology | explainlikeimfive | {
"a_id": [
"dcx6psg",
"dcx717p"
],
"text": [
"I've only had an automatic red gasoline car my whole life and I know everything about it. Now I have a manual blue disel truck and know very little about it. Most of the same things can be done like getting from Point A to Point B but it is all different. The color (UI) is different which makes identifying it in public (using the programs/applications) harder, I have to learn things that I didn't before (stick shift).",
"In my experience, a lot of people get an iPhone because that's the brand they hear about most. Or a MacBook because it looks cool. They get used to it and have a hard time using something else, so they stick with what they know. Or, if they're feeling adventurous, they'll buy a $250.00 Windows laptop or a $75.00 Android tablet and find out that it's a useless piece of crap compared to their $1500.00 MacBook or $800 iPad and infer that all Windows or Android devices are crap."
],
"score": [
7,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5q8io0 | How does Twitter censorship work? And how are governments involved? | Technology | explainlikeimfive | {
"a_id": [
"dcxdrzj"
],
"text": [
"Judging by Wikipedia, Twitter has the ability to limit messages based on geographic region. And apparently has a mechanism by which Twitter can be asked to hide messages. Basically, if Twitter gets a legal order telling Twitter to hide things, they can comply. How the government is involved depends also on whether you consider the judiciary part of government."
],
"score": [
18
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5q8t1n | each time a new wireless standard comes out, it seems better and faster than before. Any reason we couldn't have accomplished this sooner? What are the enablers we now have that we didn't have before? | I'm asking because I happened to be reading about Bluetooth 5. This is also applicable to wifi etc. Did we discover new encoding / compression algorithms or what? | Technology | explainlikeimfive | {
"a_id": [
"dcxbrbc",
"dcxg45q",
"dcxgpe1"
],
"text": [
"Every engineering problem comes down to a trade-off between cost and capability. A wireless standard is limited by what the cost effective electronics of the day can accomplish. As time goes on, processing power gets cheaper, so you can do more at a target price point.",
"Good answer about cost effective solutions to meet the current needs, where both the needs and costs change over time, but your point about discovering new techniques should be addressed. New, better, coding techniques (for errors correction and recovery) allow you to send more bits in a given symbol. Essentially, if you and I are talking and we want to communicate the most complete sentences with the fewest words, we can agree on a code book. Something like: heyu = how are you?, hiya = I am fine, etc. but in binary. If I just send those four letters, but you only get h\\_y\\_ you cannot decode my message. I can add extra letters to the message (redundancy) with some type of agreed coding so if you lose some of the letters you might be able to figure out which were missing, up to some point. In communication systems, this can extend to 256 bits or more in a single symbol (sent as a single transmission waveform), and with coding techniques available I will chose how to encode my data and what kind if symbols to use based on how likely it is that there will be errors (how much noise there is on the channel). At the same time that processing power has gotten better and cheaper, so we can process long messages to decode them quickly, and we have moved from hardware processing to software, people have also discovered new codes which can give me as much or more ability to decode the original message with better chances of success. In some cases the math was simple (turbo codes), it just took someone having an Aha! moment, in others (LDPC) the math gets pretty out there. Also, new codes are great on paper, but not always possible to process in the time required for real two way real-time communication. Beyond an ELI5 post as well, new antenna designs, multiple input multiple output antenna arrays and cross antenna interference cancellation techniques, as well as new more efficient multiple access strategies, have been developed that would not have been reasonable, or in some cases possible, with older systems.",
"Wireless N draft was 2007, released 2009. Since then, we haven't had any strictly better standards for 2.4 GHz. ac is an improvement over short distances, but not over longer distances. It is not strictly better. ac wireless has several advantages: 1) Does not use 2.4 GHz. 2.4GHz spectrum is massively congested now, since wifi is everywhere. 5 GHz has more room, and less things on it. This is an advantage which in 2007 really wasn't all that big, because there were less wifi devices everywhere (the iphone only came out in 2007, no smart watches, etc.). Also, less interference from things like microwave ovens. 2) Higher frequency = more speed. ac starts at 450 Mbits per second. Not really much call for that in 2007. 3) Beamforming / MIMO. This uses processing power to increase the throughput by focusing the signal. Wireless n was already a significantly higher throughput than most people needed, and if you needed really high throughput you'd use ethernet. 4) Extended battery life. This advantage has improved over time, as the power drain of other items such as CPU has gone down, laptops have gotten thinner, and batteries smaller as a consequence. However, there are downsides: a) More processing power. Which means more expensive, a cost which goes down over time as chips get faster. b) Using 5 GHz means you need antennae for two frequencies, since otherwise you would not be able to use 2.4 GHz networks. More cost. c) Less range on the 5 GHz. With a higher frequencies, it does not go as far, and is stopped by walls, etc. more easily. d) You need to upgrade everything to ac to get the benefits. So, I'm seeing quite a few disadvantages, for advantages which are mostly increased throughput which no one needed."
],
"score": [
53,
5,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
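The redundancy idea in the second answer, shown with the simplest possible error-correcting code - repeat every bit three times and majority-vote at the receiver. Real Wi-Fi/LTE codes (turbo, LDPC) are far more efficient, so treat this purely as an illustration:

```python
def encode(bits):
    """Send every bit three times (a (3,1) repetition code)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority-vote each group of three, so any single flipped bit per group is corrected."""
    out = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

message = [1, 0, 1, 1]
sent = encode(message)              # 12 bits on the air for 4 bits of data
sent[4] ^= 1                        # noise flips one bit in transit
print(decode(sent) == message)      # True: the error was corrected
```

Modern codes get the same protection with far less overhead, which is part of how each new wireless standard pushes more useful bits through the same noisy channel.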
5qapjx | How do ad blockers work? | How do they differentiate between native content and ads? Edit: also how do ad blocker blockers work? | Technology | explainlikeimfive | {
"a_id": [
"dcxnubu",
"dcxtd58"
],
"text": [
"> How do they differentiate between native content and ads? Ads are hosted from a different address than the native content. An ad will be an area of the page blocked out for \"whatever the ad server wants to put there\" so the creator of the site really doesn't know what ultimately will be there. Ad blockers keep a big list of ad provider servers and just filter out content pointing to those addresses.",
"Ad blockers use a list of known ad companies and block their information. Ad blocker blockers have a number of mechanisms, but mostly, the browser sends all kinds of information to the server. Often, that includes extensions and settings. The server can be set up to respond to that information, like with a popup that says \"IE 4 isn't a secure browser and our site won't work on it.\" Or \"we see you're using an ad blocker.\" The next step will eventually be an ad blocker that is more passive."
],
"score": [
5,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
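A stripped-down version of the blocklist check the first answer describes; the blocked domains here are made up, and real blockers use much richer community-maintained filter rules:

```python
from urllib.parse import urlparse

# Tiny stand-in for the big filter lists ad blockers actually ship with.
BLOCKED_HOSTS = {"ads.example.com", "tracker.example.net"}   # hypothetical domains

def should_block(request_url):
    host = urlparse(request_url).hostname or ""
    # Block the listed host itself and any of its subdomains.
    return any(host == b or host.endswith("." + b) for b in BLOCKED_HOSTS)

print(should_block("https://ads.example.com/banner.js"))       # True  -> request never sent
print(should_block("https://news.example.org/article.html"))   # False -> page content loads
```

Because the native page content and the ads come from different addresses, dropping requests to the listed hosts removes the ads without touching the article itself.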
5qe0zg | why is it that people use underscore "_" instead of "-" when programming | On the internet (URL's) you rarely see the "-" while in programming languages i've never seen them, why is that? | Technology | explainlikeimfive | {
"a_id": [
"dcyh0zm",
"dcyiqnv",
"dcyhrgk",
"dcyijzr"
],
"text": [
"In many languages \"-\" means \"minus\" which means it breaks things when you just use it as a separator.",
"\"my-variable\" is \"subtract 'variable' from 'my' \" (in many programming languages) \"my variable\" is often a syntax error, because \"my\" and \"variable\" are both seen as variable (or words with special meaning) and the computer has no idea what to do with the two variables (add them? multiply?) \"MyVariable\" and \"my_variable\" are both treated as one variable.",
"When using an \"_\" it's used for a space. While in programming \"-\" can be misinterpreted as subtraction.",
"A \"-\" can mean a minus sign, or a hyphen or dash. A \"_\" is kind of a leftover character from typewriters, where to underline you would type a bunch of underscores, then backspace over and type letters on top of them. Obviously computers work differently than that, so it kind of lost its meaning. It took up a new one as \"a space somewhere I'm not allow to put an actual space\"."
],
"score": [
11,
5,
3,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
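The point is easy to demonstrate in Python (the same applies in most languages):

```python
my_value = 10          # underscore: one identifier, perfectly legal
my = 3
value = 4
print(my-value)        # hyphen: parsed as "my minus value", prints -1, not a lookup of "my-value"

# If `my` or `value` didn't exist, "my-value" would raise a NameError -- which
# is why identifiers use "_" while URLs, where "-" has no arithmetic meaning,
# happily use hyphens.
```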
5qe3xy | How does chat applications knows when the other person is typing before sending the message? | Technology | explainlikeimfive | {
"a_id": [
"dcyiyuo",
"dcyp7oe"
],
"text": [
"Easy, once you type at least 1 character, the program detects that and sends that info to whomever the conversation is with, it's super simple.",
"There is a tiny bird inside your phone. When you start typing, it flies over to the other phone real fast and tells the other bird (the one in that phone)."
],
"score": [
7,
7
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
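A sketch of how a chat client might do this, with the network transport reduced to a stub function; the event names and the idle timeout are made up:

```python
import time

def send_event(name):
    """Stand-in for pushing a tiny message over the chat connection (websocket, etc.)."""
    print(f"-> server: {name}")

class TypingNotifier:
    IDLE_SECONDS = 3.0   # how long after a notification before we'd notify again

    def __init__(self):
        self.last_sent = 0.0

    def on_keypress(self):
        now = time.monotonic()
        # Only tell the server occasionally, not on every single keystroke.
        if now - self.last_sent > self.IDLE_SECONDS:
            send_event("user_started_typing")
            self.last_sent = now

    def on_message_sent(self):
        send_event("user_stopped_typing")

n = TypingNotifier()
for _ in range(5):
    n.on_keypress()      # only the first keystroke actually notifies the server
n.on_message_sent()
```

The server then relays the "typing" event to the other participant, whose client shows the indicator until a "stopped typing" event or a timeout.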
5qg4ol | If web browsers treat Javascript consistently, why can't they do the same with CSS? | Technology | explainlikeimfive | {
"a_id": [
"dcz1imd",
"dcz7m05"
],
"text": [
"Browsers **don't** treat Javascript consistently. They're better than they used to be but they all have little quirks. Why? Because JS & CSS are complex specifications and nobody does it perfectly & nobody implements 100% of it.",
"Browsers don't treat javascript consistently. That's why libraries like jQuery are so popular. They essentially standardise tasks by executing them in browser specific ways behind the scenes. HTLM and CSS isn't standardised because different vendors have different priorities. Think of it like this: * Apple is big on selling media and as a result they're big on standards that support licensing and other anti piracy measures as demanded by their business partners. * Mozilla and Opera wave the open source / open standards flag. They want the opposite of what Apple wants. They're not in favour of media related HTML tags that are based on proprietary or licence based technology for instance. * Microsoft traditionally wanted their browser to integrate strongly with their own technology and software. They prioritised their own solutions over those of others. Which gives them a different approach than either Apple or the open source browsers. A standardised approach to how browsers interpret the web is next to impossible because each browser vendor has goals that conflict with those of other browser vendors. [XKCD sums up the problem quite nicely.]( URL_0 )"
],
"score": [
6,
3
],
"text_urls": [
[],
[
"https://xkcd.com/927/"
]
]
} | [
"url"
] | [
"url"
] |
|
5qg97v | How is that my cable box can record up to 4 HD TV shows at once whilst playing 5th one directly but my internet, provided with a package cannot cope to smoothly stream HD anything | Technology | explainlikeimfive | {
"a_id": [
"dcz023j",
"dczakfs",
"dcz570m"
],
"text": [
"Because you get way more bandwidth on a TV cable. The reason for this is because they only have to send out each channel once. So if there are 400 channels, the cable TV provider just sends out 400 channels of content which people who are subscribed can request. This means even if there are 2000 people connected to that provider, it still only sends out 400 channels. On the other hand, the internet is designed to be *dynamic*. Each person who wants a file - such as a bit of a video stream - requests it specifically and gets a specific response back. So if there are 2000 people asking for various parts of various videos, it uses bandwidth for all of them. This is why you are getting much higher bandwidth via your cable tv line than your internet line. Coax cable I think with all channels combined can transfer about 4200 mbps? Though I might be misremembering. Still, the limit on your internet speed isn't in the cabling, usually, but instead on the fact that *somewhere* down the line there's just too many different things going on to get it all through at once.",
"First, the streams that you record are streamed out to everyone as part of the cable package you get. It has dedicated bandwidth. The cable box has 5 tuners in it. Second, when you stream an on-demand program it has to go to you and only you. If too many people want to stream on-demand at once you could get a slowdown, because you share a \"last mile\" segment with neighbors. You may have another problem though. What is your advertised internet speed and are you getting it? Try using wired internet if possible. Many times people's slow speeds are due to wi-fi problems.",
"I worked tech support for a big IPTV company. The pipe going into your house is big enough to support 5 HD channels and whatever tier of internet you have but always has the bandwidth required for the TV reserved for TV. So if you are watching one TV show and not recording anything the bandwidth that can carry 4 HD streams just sits idle it doesn't get allocated to the internet. This is done for two reasons. People complain way more about bad TV service then they do about slower internet. And this big IPTV company wants you watching their TV content which is way more profitable for them then Netflix which they don't make money from."
],
"score": [
137,
4,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5qgq1c | What does "/16 or /24" after an IP Address mean? | Hello all - I have seen the /16, /24, etc multiple times after an IP Address and I just don't understand it! How do I know which number after the slash belongs to my home network? Any help/guidance an explaining this would be very much appreciate! Thanks in advance! EDIT 1: Thank you everyone who replied and for not making me feel so dumb! haha I now have a much better understanding of what this means! :) | Technology | explainlikeimfive | {
"a_id": [
"dcz2vfh"
],
"text": [
"It is a decimal representation of your subnet mask. Your computer has an IP address. Something like: 10.1.1.2 But this address represents two things: a) your computer; and b) the local network your computer is on. And it needs a way of knowing which part is which.^* To do this it uses what is known as a subnet mask which can be represented in a number of ways. The \"slash\" notation, followed by a number, is how many 1 bits are in the subnet mask, in the case of /24, that would be 24 1 bits. Remember that computers deal in binary, so... 10.1.1.2 Is actually: 00001010.00000001.00000001.00000010 A subnet mask of 24 would be: 11111111.11111111.11111111.00000000 Each bit of the IP address that lines up with a 1 bit in the mask is part of the network (in this case 10.1.1.x). Each bit of the IP address that lines up with a 0 bit in the mask is a host address (in this case x.x.x.2). Your computer uses this information to determine if communication is destined for the local network (and doesn't need to go through a gateway) or is destined for a remote network (and does). In this case, everything from 10.1.1.1 to 10.1.1.254 would be considered local. All other IP addresses would be remote. \\* - Originally, they designed a scheme where IP addresses would fall into certain classes that defined which part was the network part and which part was the host part. For example, if it began with a 10, then the first octet was automatically the network part and the last three would be the host part. This turned out to be insufficient for modern networking needs, so they developed the concept of subnetworking of which the subnet(work) mask allows us to chop up the address any way we want."
],
"score": [
13
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5qh9y4 | How do transportation apps like Moovit or Navigation in Google Maps know exactly when buses are arriving late, where exactly are they, at what time will they serve, etc? | Technology | explainlikeimfive | {
"a_id": [
"dcz868h"
],
"text": [
"Through public or private APIs. The transport company almost always has a way of knowing where their transports are. This data can then be shared with other organisations. Some of the APIs are available for free and sometimes they will pay for access."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5qhbrw | How come Digg died but Reddit thrived? | Technology | explainlikeimfive | {
"a_id": [
"dczasnp",
"dcz8z96"
],
"text": [
"I came here from Digg when the collapse came. Before that day, Digg had a far superior look to it.. it was “Web 2.0” CSS – rounded buttons, soft edges. Reddit was a “Wall of text” and muddled with data. On my normal day, I would open URL_0 and scroll through the first few pages of stories to see what I had missed over the night. Then I would head to the submitted articles section and see what was worth ‘suggesting’ to other people. Bury the story if it was stupid, or just spam or trying to sell stuff we didn’t want. But Digg needed money. They couldn’t figure out how to turn the website into a cash cow, so they decided to have advertising websites (like Tech crunch or cnet) just automatically feed their articles into Digg like an RSS feed. You didn’t get any imaginary points for submitting it yourself! You couldn’t bury sponsored articles! Control and curation of content were no longer something users felt we controlled. (Perhaps you’d say we never had the control, but we had the PERCEPTION of it at the LEAST) We were being shown/told what to like by marketers. The exact opposite of the core system behind why people enjoyed Digg. Reddit, the ugly step brother, still offered us control over content. So we submitted our content to Reddit, got all the content we wanted to see, then posted Reddit to Digg. One day the entire front page of links on Digg directed to Stories ON Reddit. ( < 3 ). Soon after, everyone just stayed on Reddit, and the crappy design of Reddit started to grow on you like Moss.. or shingles..",
"Change of layout. Digg changed layout that people loved, into something that looked like news websites. People hated it, and somebody posted about reddit there. Thus, the exodus. I was there when it happened. And yes, I hated the new layout too"
],
"score": [
8,
4
],
"text_urls": [
[
"Digg.com"
],
[]
]
} | [
"url"
] | [
"url"
] |
|
5qhebi | How do most companies monitor their employee's internet activity? | I recently left the start-up business world and started working for a large corporation. At my old jobs, I used to use social media, do my banking, and handle personal things online at work in my downtime. This is pretty much frowned upon and not allowed at my new job so most "non-work" sites are blocked. I understand all companies are unique, but I'm curious, how do some companies monitor what their employees are doing? Is someone getting notified if I try to access a blocked website? Is somebody monitoring my emails and getting notified if there are certain words being used? Can my manager just choose to watch my screen all day, or would most companies require some kind of "just cause?" | Technology | explainlikeimfive | {
"a_id": [
"dcz7oaj"
],
"text": [
"It can vary widely, depending on the company. Most corporations do have a \"you're supposed to only use work equipment for work\" policy, but smaller organizations may not have much to monitor their employees besides some simple blocks and antivirus/security software. I work for a large company and we do have some pretty significant barriers in place. If you access something that's immediately dangerous or absolutely forbidden, someone will get an alert in IT. But managers can monitor absolutely everything, see your screen and hear your calls and be alerted to keywords if they choose to; this is the norm for a lot of companies where the ability is there, but only used if an employee gives cause. A caveat is that some business areas, like call centers, will pretty much have every single thing monitored at all times. And, of course, the more sensitive the information, the heavier the scrutiny, typically."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5qhr27 | Why is Social Media so addictive? | Technology | explainlikeimfive | {
"a_id": [
"dczao7k"
],
"text": [
"Good ol' dopamine. Social media is a constant stream of potentially new and interesting information, or funny memes and such. This triggers your dopamine response and like many things that trigger dopamine it can become addictive, especially since it takes little effort. It's actually very similar to smoking. Have you ever noticed yourself checking social media without thinking? Maybe exiting out of Facebook or Reddit only to go back to it a few seconds later? It gets to a point where you're not even doing it because you want to but rather because you're bored, thinking vainly that something new might have showed up (trying to get your dopamine fix). It's a habit that's hard to break similar to smoking."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5qhz1i | why must I first dial 1 before calling this number? | Computers can land a satellite on a meteor, but can't add the 1 themselves? | Technology | explainlikeimfive | {
"a_id": [
"dczcp3m",
"dczdroe"
],
"text": [
"The 1 is part of the number. It's the country code, and indicates that the number is based in the US or Canada. The system can't add the 1 itself, because it has no way of knowing what country you want to call unless you tell it.",
"They do. See why you can dial local numbers without the area code. Source: 8 years running softpbx's"
],
"score": [
16,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
5qjn9g | Why is Apple suing Qualcomm? | Technology | explainlikeimfive | {
"a_id": [
"dcztiyb"
],
"text": [
"Qualcomm developed a lot of the technologies behind LTE, and have patents on them. When a company's technology gets incorporated into a standard like LTE, they have to promise to let other people use those patents under \"fair, reasonable, and non-discriminatory\" terms (which makes sense, but isn't exactly well defined). Apple is claiming that the terms Qualcomm wants aren't fair- they're charging too much and making them pay extra for patents they don't want in order to get the ones that they need in order to make phones with LTE. They're also claiming this as an anti-trust issue: Qualcomm makes their own processors (if you're in the US, pretty much any flagship phone other than the iPhone uses Qualcomm chipsets), so Apple is claiming that they're doing this in order to make it more expensive to use their competitors."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5qkjw7 | Why can't huge companies like Google and YouTube beat ad blockers? | Technology | explainlikeimfive | {
"a_id": [
"dd00vhz",
"dd00eis",
"dd007l3"
],
"text": [
"Google doesn't go to great lengths to prevent ad blockers for the same reason that Walmart and Target don't use draconian measures to stop petty theft. Sure, a big store could eliminate *all* petty theft if they wanted to - but not without inconveniencing and annoying legitimate honest customers. Same with ad blockers. It's not worth going to great lengths to fight ad blockers when it will just piss off the average user.",
"Ad blockers work in one of two ways. * The \"Take it and dump it\" method. * The \"Refuse to ask for it\" method. The majority of ad blockers work on the \"Refuse to ask for it\" method. Basically when a page is loaded, your ad blocker consults a list of known ad serving addresses to check if the content loading on a page is part of an advert (Most adverts are loaded from third party domains). It also consults an algorithm to check if something on a page could be an advert (For example most adverts come in industry standardized width and heights in pixels) and for locations on the page it suspects might be an advert. It will also examine the cookies being placed on your PC by the site. Once the ad blocker has examined the page and determined what is and is not an advert, it will continue to request the bits of the page that is not advertising (Such as photos and images that belong to the sites design), and not request any of the advert components of the page (Like advert graphics/audio, cookies), which it then hides from the sites design so it doesn't look funny. This saves you loading times and bandwidth. Unfortunately, the developers of the site can write detection routines using Javascript to detect that your adblocker has not requested say for example, the graphic of an advert on the page or other advertising components. This is how they detect you are blocking ads and how those annoying \"We know you are blocking our adverts, will you please reconsider?\" messages appear. To prevent this, the other method of blocking ads can be used. The \"Take it and dump it\" method downloads everything, adverts, advertising cookies, the whole lot. However once they have downloaded to your computer, the ad blocker then deletes them almost immediately and carries on hiding them from the page. Though you still do not see the adverts, this is much slower as it still has to download all the advertising components of the page. It is however, nearly undetectable by the site operator. The reason it is not used more frequently at the moment is because it's often more liable to break the working of the site, can sometimes be functionally messy and at the moment, it hasn't yet hit the point where they have to resort to this method so the bandwidth savings are worth it. Detecting ad blockers requires some considerable input and at this point the arms race hasn't yet got to that stage. There is actually something of a cold war armaments race going on between the developers of ad driven sites and ad blocking software developers. As ways are found to beat the ad blocker, the ad blocker is updated, repeat, repeat, repeat. In the end though, it's always going to be a lose for the ad companies because an ad blocker when given total control of a browsers output, it's always going to win. The ad companies just don't have that much control over your browser. To that end, it's likely online advertising is going to change or the paywall is going to become more common in the future.",
"I'd argue that at least a portion of the reason Alphabet (nee Google) doesn't do so is because of market forces. They really want to maintain their browser market share. Firefox was big and it offered extensions (and specifically adblockers), and so Chrome also ended up supporting them. They have all the necessary power to prevent ad blockers from being effective - they could remove the functionality from Chrome, remove those extensions from the Chrome Store, or even bake-in a workaround specifically for their own ads. But I think the fear of losing market share at least partially hinders that move. I'd also speculate that there would be significant legal repercussions, as well as a revolt in at least part of the developer community."
],
"score": [
7,
7,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5ql0z0 | why water completely damages a cell phone when submerged. | Technology | explainlikeimfive | {
"a_id": [
"dd03bf6"
],
"text": [
"Electronic circuits are designed to only allow electricity to pass through certain parts at certain times. That's how your phone works. It's a set of boolean functions (1 or 0/true or false). Electricity passes through chip, and it makes a decision such as \"and/or\". If it's 'and', it sends the signal one way, and if it's 'or', it sends it another way. After it does that, this step is repeated through other logic gates that have other functions that aren't and/or (not/or or any of the many other variants). Once you submerge it into water, it doesn't follow this designed 'trail', and the phone short-circuits. Because water is conductive, the electrical signals go wherever they can, and electronics can't handle that. To make something of a comparison; It's the same reason you get in a line when you're shopping. Imagine if all the customers just threw all their items onto the counter at the same time and talked over each other. The cashier wouldn't know what to do. That's what the submersion is."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5qly78 | If monitors only have 3 colored pixels: red, green, and blue, how do we see colors like yellow on our computers/phones? | So, as I understand it, the basic monitor has pixels and those pixels can be colored in as red, green or blue. If those are the only colors on the monitor, how can we see colors like yellow, or even different shades of yellow? | Technology | explainlikeimfive | {
"a_id": [
"dd0bb0n"
],
"text": [
"Your eyes can only *detect* red, green, and blue. Yellow light has a wavelength between that of red and green, so when yellow light hits our eyes, it stimulates the red and the green receptors just a little bit (rather than stimulating the red receptor a lot and the green receptor not at all, the way red light would, or stimulating the green receptor a lot and the red receptor not at all, the way a green light would). So we actually have absolutely no way of telling the difference between light that itself is yellow, and a combination of smaller amounts of red and green light that together stimulate the red and green receptors that same amount. (Yes, this means that some objects that appear yellow to us might not actually reflect yellow light - they might reflect both red and green light, instead! This means they're indistinguishable to us, but *some* creatures have more color-sensing types of cells, and might be able to distinguish them!)"
],
"score": [
14
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5qoepv | DRM and how it actually stops pirates instead of just making it more difficult for me to play my movies on my many devices, etc... | DRM has been around the block, and eventually removed from music in pretty much every market place, but only increased in invasiveness with movies. Despite this, pirates can circumvent enough DRM schemes that pirated movies are common. Are there quantifiable ways in which it's deceased movie piracy, or is the inconvenience just on me as a purchaser? | Technology | explainlikeimfive | {
"a_id": [
"dd0tekl",
"dd1474m"
],
"text": [
"DRM on movies *is* dumb. Worst case, people can just record their screen while watching a movie legally. Best case, people just remove it. It's only an inconvenience for the legal buyer. Same goes for games (aside from Denuvo, which hasn't been cracked yet and actually has stopped piracy, but it'll be cracked at some point, which even the Denuvo devs admit). EDIT: I'd like to add there was no big increase in legal sales for Denuvo games.",
"Usually, DRM is not added with the expectation that it will never be broken. It is more commonly used as deterrent to make people less likely to bother getting around it. It increases the level of effort required to get it illegally, and the idea is that it'll increase the effort enough for a significant percentage of potential customers to just buy it instead. It is also intended to increase the time between the release of a work, and when it's available illegally. This is often the case for games. The developers know the protection will eventually be broken, but games are the most valuable when they're new, so the longer it takes until the DRM is broken, the more people will just give up waiting for a \"crack\" and buy it instead. Eventually, the price of the game decreases by a lot, and a significant portion of those who are willing to pay for games have already bought the game. At this point, it doesn't matter too much for the developer that the DRM has been broken, as the people pirating it at this point are often people who would never have bought the game anyway. Some developers have said that having a game that's not possible to pirate for just the first two weeks after release makes a very big difference. The DRM doesn't have to be very long lived for it to be worth the cost of licensing it. Game developers have a business to run, after all. If the cost of DRM leads to a net loss of money compared to not using it, I don't think they would bother."
],
"score": [
6,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
5qohy8 | Why do every (most) games start with one having to "press a button" to get to the game menu? | Technology | explainlikeimfive | {
"a_id": [
"dd0zifw"
],
"text": [
"most games have to load the main menu into memory, so it is waiting for you to press a button to command it to load it. otherwise most games go into a demo mode while its waiting"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5qovvo | What would happen if I followed one of those "Shoot the target to win a new iPhone" pop-ups on a webpage to the very end? | Technology | explainlikeimfive | {
"a_id": [
"dd0y09n"
],
"text": [
"Either never get to the end or end up on a site where you have to complete 2 offers from each of 3 tiers and the last tier is usually signing up for pricey services. But it's set up so that they get all kinds of personal info from you and have you signed up for spammy crap before you realize you'll have to sign up for a timeshare and an expensive wannabe AAA to get a free iPad. If you do buy the time share and fake AAA you'll have to jump through hoops and probably still won't get the prize. Then, when you try calling MCA (shitty AAA), you won't be able to get it cancelled within an hour. Then the next month when you get charged again, you'll try to cancel again. The month after that you'll spend your time finding out where their headquarters are at. When you go fire bomb the place you'll have probably crosses state lines. So now you'll go to jail on federal terrorism charges. Tldr: If you complete it, you'll probably go to jail for terrorism."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5qpcgc | How do mobile companies like Verizon and Sprint decide on data charges? Does using more data really cost them more money? | Technology | explainlikeimfive | {
"a_id": [
"dd11a2z",
"dd11rep"
],
"text": [
"Yes, carrying more traffic costs them more money. Much of that is in upgrades - e.g. adding more antennas on towers and adding more backhaul capacity from those to the core of their network, upgrading circuits in the core of their network and adding transit and peering capacity. But they'll charge you whatever they can get you to pay, which is lots higher than what it costs them to carrying that traffic.",
"There is a limited amount of data that can be transmitted per tower. If too many people are using their maximum bandwidth, it will be capped out and it would effect everyone's throughout. Think of everyone having a straw. That represents how much data they can receive over the air. Now think of a larger pipe. This represents how much data the tower's data connection can receive. Everyone crams their straw in the pipe, but it can only hold so many straws before they start to squish, limiting the amount each straw can hold. The above doesn't answer the question at all, but it's important to understand that. Now, a company can either be known for having crappy data, or they can add more pipes, or increase the size of their pipes. This costs money. They could pay out of their normal budget, or they could charge everyone extra. A third option is to charge people who use more data more for their usage. This way, people who don't use data as much pay for less, and people who use data more pay for more. An individual person using more data on a network that is not being fully flooded doesn't cost a company more for that data (or if they do, it would be negligible power and wholesale data costs). It does cost the company to upgrade their networks. To answer the first question second: if I were a company of any sort, I would charge as much as I could without the cost reducing the subscriber base beyond what the increased profits are from increased prices. In essence, I think that's how all prices are determined for anything."
],
"score": [
10,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5qpo45 | Why do internet browsers fail to prevent "add extension" advert pages from locking up the browser? | Technology | explainlikeimfive | {
"a_id": [
"dd191k3"
],
"text": [
"There's lots of Javascript and HTML(5) APIs around that allow websites to provide cool content such as web apps and games. Sadly, since the authors of these APIs forget that there are evil developers around, they end up giving the APIs a lot of control over the browser that loads the page. This is also how some shady ads are capable of vibrating your phone, taking you to the app store, etc. Improper sandboxing and APIs with too much power. You can reduce this by installing an Adblocker and NoScript, which makes pages incapable of doing much without your consent."
],
"score": [
10
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5qpq34 | Why do different parts of the world have different emergency numbers? | Eg. Currently living in Switzerland where the emergency number is 117, whereas in most European countries it's 112, etc. | Technology | explainlikeimfive | {
"a_id": [
"dd13nty"
],
"text": [
"Interesting question! In the USA, AT & T recommended 911 back in 1967 as the nationwide emergency number. AT & T chose 911 as it fit into the North American Dialing Plan N11 code system (dial 411 for directory assistance, 611 for repair telephone repair, etc.) and 9 hadn't been allocated at that time. Canada also adopted the 911 emergency number. 111 is the emergency number (adopted in 1958) in New Zealand whilst in the UK the emergency number is 999. This is because with the old rotary dial systems the number of \"pulses\" sent were reversed in NZ vis-a-vis the UK. In NZ dialing\"1\" on a rotary phone sent 9 pulses whilst in the UK dialing \"9\" on a rotary phone sent 9 pulses. So the equipment used in NZ and the UK for the phone exchanges worked for both 111 in NZ and 999 in the UK. Across the ditch in Australia the emergency number is 000. This is due to the historic need to dial two zeroes in remote Australian areas to reach an operator; adding one more zero for the national emergency number seemed like the smart thing to do. Australia briefly considered using 911 but those numbers were already in use for regular phone numbers."
],
"score": [
12
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5qqh1j | How do video game servers differ from web servers? | Technology | explainlikeimfive | {
"a_id": [
"dd1a1ln",
"dd1o0gc"
],
"text": [
"Oh boy, something I can explain! Generally, the two are very similar. (The software is the only difference) A server's job is to serve (hence the name *server*) content to a user that is asking for it, and receive and process input from the user. With a game server, depending on the game, the server will often act as a middle hub that all the players connect to, and the server handles the game. There are many types of game servers, such as matchmaker servers, multiplayer or world hosting servers, etc. For a matchmaker server, the server receives a request from a user logging into the game to be placed into the queue, which it will process. Then, based on the method coded into the server, it will dispatch the player to a game server, one that actually hosts the game world. Again, this may differ between games, with some games having the server do very little, and some making the server the focus of everything. The method that these servers usually use to communicate with the game client is usually UDP. A web server serves the purpose of serving web services to users, such as web apps or websites. Web servers host the content, and give it to users that request it by visiting the web site. For example, when you go to Google, Google's servers will send you the Google search page. If you then search something, it will send that search back to Google's servers to be logged and processed, and then the server will return the results to you. Web servers often use HTTP(S) or in some rare cases FTP to communicate with users. Attempting to connect to a game server with a browser will likely result in you being unable to connect. In some cases, it will launch the game and connect to the server for you. (Some valve games can launch from the browser via a steam: link.) Let me know if you have any more questions.",
"They both have the same general kind of interface: people send data to the servers, the servers do some stuff, and then send some data back in response. The actual specifics of what data is being transferred is different though. Web servers get requests like \"show me the page URL_0 \", then do work to put together the resulting webpage (e.g., getting a list of the top N posts on reddit), and returning the result. Video game servers get requests like \"I am moving forward\" or \"I am firing my weapon\" from different players, use this information to figure out what's happening in the game overall, and then respond with the actions of other players and how those actions affected the game."
],
"score": [
5,
3
],
"text_urls": [
[],
[
"http://reddit.com"
]
]
} | [
"url"
] | [
"url"
] |
|
5qrxdx | Can someone please explain how Jurassic Park (1993) was visually so ahead of its time, it seems almost comparable todays visual effects? | Technology | explainlikeimfive | {
"a_id": [
"dd1v3ms",
"dd1s1k3",
"dd1p8uk",
"dd1qp14",
"dd1yji5",
"dd1rr0i",
"dd1mauy",
"dd1v0lg",
"dd1mflo",
"dd1q4nl",
"dd1urcl",
"dd1pr3x",
"dd1scxb",
"dd1sbp5",
"dd1shk7",
"dd1s97u",
"dd1unq8",
"dd1qskm",
"dd1r0wi",
"dd23b72",
"dd1unqi",
"dd1u77g",
"dd1sfz7",
"dd20tno",
"dd20928",
"dd1xnf8",
"dd1sdb2",
"dd1s0rf",
"dd21s4o",
"dd1s37i",
"dd1t517"
],
"text": [
"They spared no expense. But seriously, there's a lot less CG in Jurassic Park than most people think. Today, it's quite common for the only live action element in many shots to be the actors. If JJ Abrams (hardly the most CG reliant director currently working) were to remake Jurassic Park today, he'd probably feel forced to engage in a full-on CG fuckathon for sequences where Spielberg simply did not have that option. Spielberg was, instead, forced to use every cinematic trick in the book to hide the short-comings of the comparatively awful CG elements he had available to him. Dark lighting, practical effects (models and animatronics), clever cutting... Jurassic Park is a tour de force in all of these things because the CG was still quite primitive. There are only a couple of full-CG shots that take place in brightly-lit daylight (e.g. the Brontosauraus reveal) and these are probably the weakest shots in the film. People today watch Jurassic Park and think they're seeing flawless, state-of-the-art CG, but what they're really seeing is mostly models and animatronics with only flashes of CG to tie the shots together. The CG was weak, but Spielberg hid it's weaknesses like the master he is. Bonus example: Consider the cup of water in the Jeep when the T-Rex is first approaching. > > One other issue that was initially seen as very small turned into a big puzzling problem. As the T-Rex approaches our main characters, we hear it before we see it. In particular, we see a cup of water start to form “rings” from the T-Rex’s approaching footsteps. Spielberg got the idea after listening to loud music in his car and seeing his rear-view mirror vibrate from the heavy bass. He quotes, “I was on my way to do storyboards for Jurassic Park and I never forgot what it looked like when the bass rhythm went off. I thought in the middle of storyboarding ‘hey wouldn’t it be cool if, when the T-Rex began to approach, the low-end vibration of all that tonnage hitting the ground was causing these little concentric circles.’” The crew thought the gag would be really simple to pull off. As it turned out, however, forming these “concentric circles” proved to be much more arduous than anticipated. Everyone on the production team was puzzled. Sound engineers, physics specialists, and wave tank generators were called in and used to try to achieve the circles, but to no avail. Finally, a solution was found. Michael Lantieri, who was part of the special effects team, quotes, “The night before the shot, I’m at home and I’m still playing around and I took a guitar that I had at home and set a glass of water on the guitar and plucked the string, and it vibrated and did it.” So in the end, the gag was achieved by feeding a guitar string from the cup through the tour car down to the ground, where a guy laying under the vehicle plucked the guitar string (Jurassic Park DVD). [Source]( URL_0 ) Directors often have a vision and go to great lengths to see it realized. However, I guarantee you that Michael Bay isn't going to call in physicists to help him make concentric rings in a cup of water (or to make realistic action scenes, apparently). He's just going to CG that shit and, most of the time, people won't even notice. CG has gotten really good at fixing seemingly small things that cause great pain for the production crew. 
I don't point this out to denigrate CG or directors today (except maybe Michael Bay), but rather, to illustrate how impressive it was for a director like Spielberg to pull off a film like Jurassic Park that still holds up so well, in spite of the technical limitations of his day.",
"One of the big primary factors that makes a CGI effect look real to the human eye is the way that light reflects off the generated object. The human eye is really good at noticing when an object is CGI when there is a lot of light illuminating it. The visual effects team working on Jurassic Park understood this, so to compensate for the issue they tried to put their CGI characters in the dark. Additionally, many of the scenes switch between puppet and CGI. For example, the T Rex scene switches between a giant T Rex puppet and a CGI object. The Raptors in the Kitchen scene is the best example. It's harder to notice the raptors are CGI because the room is dimly lit. And the character switches between CGI and puppet at different camera angles. The fast switching between the puppet raptors and the CGI raptors in the kitchen is very deliberate. Because it tricks your brain into not noticing the CGI character as much. The visual effects team was so cautious with CGI at the time that they avoided using it only in cases where a puppet character would be impossible. The hydraulic T Rex they built was supposed to be impossible to make, as no one had made hydraulics that could respond as quickly as they needed the character to move. But they were able to overcome some engineering hurdles by using a special fluid in the hydraulics. So that should be some indication of just how much effort went into producing puppet characters, before resorting to CGI. **Edit** Woke up this morning and found my top comment of all time is about Jurassic Park. I'm really super ok with this. Apparently some people think I sounded smart. I've never worked in the film or visual effects industry. I'm just a nerd with a huge love for Jurassic Park. And I've watched a LOT of behind the scenes for the movie. Many people have pointed out that there are obviously scenes where dinosaurs are in the broad day light. I didn't mention these because I don't feel like they hold up nearly as well. And I hadn't given those scenes as much thought. But as [Kaptain Kristian]( URL_0 ) points out in his nice little video, Spielberg had a lot of other ways to hide the weaknesses in Jurassic Park. He used cinematography to give the dinosaurs perspective and dimension so that they would seem believable. Not to mention that seeing the actors on screen react to seeing the dinosaurs makes us see them as more believable creatures.",
"For the computer graphics, Spielberg originally wanted to avoid them and use stop motion and puppetry instead. It took a lot to convince him. Even so, most of the dinos you see are puppetry, models, and robotics. Most of the CG dinos were intentionally kept in conditions that were difficult to see. Dinos in the rain, dinos in the distance, dinos running quickly or jumping out. There were very few scenes where the CG dinos were up close and personal. Thanks to that, those few scenes had the dedication of the entire crew. Modern movies have an enormous budget for computer graphics, but the cost per second is rather low because they are everywhere in the movie. Jurassic Park spent a fortune on computer graphics, but because they had so few scenes of them the investment per second is quite high. Each brief scene was meticulously reviewed and brought to exacting standards.",
"There were only a small handful of actual CG shots and there's only about 18 minutes of dinosaurs in the film total, and most of those were puppets. These days 9/10 of a movie is CG and on bonus material you see people say stuff like, \"well, we had 1,897 effects shots to do and render in the space of two months.\" Jurassic Park had maybe 20: - A couple of the brachiosaurus and then the watering hole. - The most during the T-Rex attacks (after breaking out of the fence and then the Jeep attack). - The gallimimus scene. - A couple wide shots with the raptors in the kitchen. - Another couple of the raptors near the end with the skeletons. - The big T-Rex finale. But I agree, it holds up so incredibly well. Edit: changed the spelling of \"gallimimus\" because I was on mobile and made a typo that people FREAKED THE FUCK OUT over. Good God, you people.",
"It's a combination of a number of factors. For starters, there was a great lack of faith in CGI's ability to produce realistic creature effects. CGI had been used in movies since at least the 70s but never to create convincing lifelike creatures. As a result an extraordinary amount of care was taken to hide the imperfections in the CGI. Secondly, Jurassic Park had a big name director and a big budget. It could afford to take the best from both worlds. It used physical effects for the shots where those would work best. In a lot of the close ups where you can't see the entire animal but just snapping jaws, lunging claws and stomping legs, animatronics were used. They have a real world presence which means they reflect real light, have real texture, real fluids and give the actors and environment something real to interact with. Vice versa, animatronics tend to be bad at convincing and lifelike motion. So for the full body shots where dinosaurs are walking, jumping and running in full view, CGI animation is used where the animator has much more control than someone using a mechanical puppet. And because the effects team was very concerned about making sure neither type of effect didn't stuck out like a sore thumb, a lot of care was taken to try and mask the way the effects blended with the environment. That's why the T-rex escapes it's paddock at night in the pouring rain for instance. Rain, fog, refracted light, shadows, lots of stuff to help hide any imperfection in the effects. But more than anything, what helps sell the effects is 'movie magic', nothing more than basic psychology. Jurassic Park went to incredible lengths to convince you that the dinosaurs are real animals both before the fx works starts and during. Let's look at the T-rex for instance: * Your first introduction to the T-rex is... nothing. The characters sitting bored in the car while the rex hides in the jungle. A very recognisable feeling for anyone who has visited the zoo. * There's a lot of minor interaction between the dinosaurs and their environment that isn't relevant to the story but extremely relevant in convincing the viewer that they're real. For example... * The T-rex eats, like any other animal. It eats a goat and just for good measure it drops a goat leg on the car. Physical interaction between the rex and it's environment. * You see the rex's little forearms testing the fence, pulling on the wires before you see the wires snap one by one as he pushes his body against it out of view. Again physical interaction with it's environment. * And they do this over and over. The vibration of it's step makes the glass of water ripple. It's feet sink into the soaked mud when it walks, leaving footprints that fill with water. It's pupils dilate when Timmy shines the flashlight into it's eyes. It's breath blows off Grant's hat. * The rex acts like an animal too. It get's distracted by the tire on the wrecked car. It follows the dominant motion of Ian's flare. It's not a monster, it's an animal. Over and over Jurassic Park reinforces the appearance that these animals are having a real physical impact on the world around them. It makes the viewer want to believe. Looking at monster movies in the decades after, most movies simply don't take this much work to establish their creatures are real. Occasionally monsters fight the protagonists and that's it, none of the build up. And part of movie magic is also that if you convince the viewer once, he'll want to believe later. 
Stan Winston once pointed out that in the gallimimus scene in broad daylight, the rex looks like a rubber toy. But after the amount of work put into the rain storm escape scene of the rex, the viewers already believed it was real and they didn't notice it looks a lot worse in the daylight scenes later in the movie.",
"You only notice CGI when it's bad. Or when it's the focus. Take a look here: URL_0 The Boardwalk Empire one with the boats, looks really well done. But it still seems a bit off. Because it's a major focal point. Take a look at the Life of Pi set. The one where he's standing on the boat seems a bit off, but that's from the tiger. The sky and sea look well done. That's because they're not the focus of the shot. Now, for Jurassic Park. There's some scenes where the cgi does fall flat. And falls just to the bad side of the uncanny valley. The brontosaurus in particular. But, the night scene with the trex looks great. But, if you look close there's not too much separating the quality of the cgi. What separates the two scenes is the intent behind the use of the cg. With the bronto meaning to wow the audience and the TRex showing us the fear of the characters. Also, the dim lighting helps. Another things Jurassic Park has going for it is the uncanny valley point is so shallow. We've never seen (at least most of us, looking at you roswell) a dinosaur in real life. So we can accept it more readily. However, faces, tigers, boats, water, etc. we've all seen at least video of. So the valley is set much deeper.",
"They used state-of-the-art practical effects for as much of the movie as possible. Things like the raptors coming out of the eggs, the sick triceratops, the brachiosaur that sneezed on Lex, were all lifesized robots made for the film. And that tends to be leaps and bounds above CGI animation.",
"A lot of people talking out of their butt on this one. OP, here's the actual answer you're looking for, posted a couple of years ago by u/teaguechrystie in a thread about the Jurassic World trailer. \"VFX artist here. [...] Aside from utilizing a whole slew of fairly basic (albeit smart) tricks that make it easier to look photoreal, Jurassic Park also had a few things going for it, historically speaking. As a thing to attempt doing, it was more or less unprecedented. Just a ton of work, a ton of question marks, unforeseen innovations were certain to be required, and custom scripts and software would have to be written. They knew what it had to look like, but they didn't know exactly how to get there. Their target was a look. They'd know it when they saw it. So, they started hammering away at it. There wasn't even a solid optimism that it was possible to pull off so much CG, at that level of quality, at that point in time — much less an absolute goddamned foregone conclusion that obviously it's possible to do twenty times as much CG at that level of quality — and so they benefited, a bit, from the exploratory nature of it. As far as executives and producers and studios and expectations go, the attempt to make that first CG dinosaur movie was akin to Apollo 11. \"Oh god, I hope this is fucking possible.\" When it actually worked, it was an accomplishment. That was the context for that CG work. These days, the context for the CG in, like, The Avengers, is akin to Southwest Flight 782, service from Oakland to Burbank. \"Oh god, I hope I'll be able to rent a red car when I obviously make it to Burbank.\" It became \"obvious\" (to the higher-ups) that we could do CG VFX. The process got figured out, the pipelines established, the groundwork laid, the procedures sorted... and now, the process of arriving at the end of the VFX process is seen as the goal. First you do your story art, then you do your modeling, then you do your layout, then you do your animation and sims, then you do your comp, then you render out the result. \"That's how ya do it.\" Once the process is complete, your VFX are complete. Congratulations, let's move on to the next movie. The problem — and distinction — is that, remember, Jurassic Park's goal was a look. They didn't know what the process would be, but they'd know it when they saw it. Now the goal is, largely, a process. Finish the process. Are we capable of delivering CG at the level of quality you see in Jurassic Park? Fucking absolutely. (And, \"duh,\" quite frankly. Most movies with big CG setpieces are actually at that level of quality.) When that doesn't happen, these days, it's because we're working under a very different set of limitations. For instance, way, way, way more shots, way more complex shots, way harder shots, an atmosphere of assumed possibility, a wee bit of studio apathy, less-and-less money, higher-and-higher rez, stereoscopic delivery... and, uh, not to put too fine a point on it... not much of a premium being placed on quality of life for the artists. (That's a whole separate thing.) In addition to that, like I said a few paragraphs ago, Jurassic Park also (smartly) utilized a handful of tricks to make life easier. In CG, realistic shiny things are easier than realistic matte things, so they made the T-Rex wet. They did the T-Rex scene at night. They did a tremendous number of hand-offs between the CG Tippet critters and the practical Winston critters. 
Not to mention, there's way fewer CG shots in that movie than you're probably remembering, and on and on. So. Yeah, it was twenty years ago, but they were also climbin' a different mountain. Now, it's important to note that Jurassic Park deserves every bit of the VFX credit it gets. (That Gallimimus sequence blows my mind.) It's outstanding work, it stands the test of time, it's great — I know I'm basically saying, \"yeah, good job with the fucking Coliseum, you guys, you scrappy group of rag-tag weirdos,\" but. I want to make sure it's clear that I'm not throwing shade at Jurassic Park. I love Jurassic Park. But, for being a trip to the moon with nothing but a tin can and a calculator — sorry, I'm very analogy-heavy this morning — for being just this impossible thing, it also managed to avoid some of the pitfalls of the modern CG experience. Expectations, mostly. Different flavors of expectations, at different points along the line. Being the first to do a very hard thing well isn't easy. For that matter, neither is being the 6000th to do a very hard thing well, when people are totally unimpressed with the assumption that you can do a very hard thing well. Like \"come on, knock it out. We're on a schedule here.\" Not that they weren't on a schedule, but. You know what I mean. I've rambled on long enough.\"",
"It's because they went to extreme lengths to get everything looking great on screen. They built massive robots and rigs to get movement and creatures looking even better then what cgi, especially for the time, could keep up with. They also made their own camera rigs and sets to make scenes look far better then just green screening everything in. It wasn't cheap, but it got the job done like nothing else could.",
"There's about 5 minutes of CG, done with a big budget, and the shots were all very carefully planned. Basically, no smoke or atmospherics that would make compositing terrible difficult. No shakey cam that would make tracking harder. No half-assed \"this would be neat, let's not plan it and hope it can be fixed in post\" documentary style shots. Plenty of time for the VFX crew to get lighting references and the like. A very forgiving schedule. And some very talented maniacs who knew their tools and their art as well as anybody on the planet.",
"Finally a topic I feel qualified to contribute to... and everyone else has already done it! One really key point - visual effects are what the filmmakers want them to be. If you want the photorealism of the Revenant, you can get it. It costs a certain amount requires extensive research and talented artists and technicians, but you can get it. However, many directors, producers, studios simply don't want that. They want to be bigger and more impressive than *insert rival studios summer tentpole* or *insert last summer's box office success*. So many times I've seen incredibly realistic vfx shots ruined because somebody who's been looking at it too long decides it needs to 'look prettier' and starts breaking physically correct and wonderfully invisible vfx. You can also get amazing vfx on a budget. But that requires very careful planning, sticking to the original vision, and careful use of practical effects to complement the digital work (ala Jurassic Park) That's why movies like Ex Machina, Kon Tiki or The Impossible can seem to come out of nowhere and really impress with their visuals. It's like wine. If you study regions carefully, make a sound selection, and enjoy it in moderation, you can have something great. Or you can grab the first bottle from the liquor store shelf, overindulge once you realise you quite like it, and end up shitfaced, with your head in the toilet bowl and waiting for the inevitable hangover to set in.",
"They had some of the most talented people ever to work on effects, in their primes with a good budget, a good director. And most importantly they chose the most effective places to deploy the dinosaurs. There are almost no gratuitous shots. Tou can do a lot with newer technology but you can't replace talent and good decisions.",
"Because it was one of the first. Arguably the very first to do what we now just casually call \"CG.\" Meaning CG that an audience can watch and not really pick out as CG; that they can see as just part of the film as it was shot. *Terminator 2*, and two years later *Jurassic Park*, proved that **photo-realistic** CG was a thing. Prior efforts to use CG had largely been either extremely obvious as CG, or had been very, very, very minor pieces of the film that audiences saw. *T2* built on what Cameron had been trying to do with the water aliens in *The Abyss*. The T-1000 was a transformative character made out of CG (when it's doing the liquid metal thing). It was something new, something that had never been seen before. And even as good as it was, most of the shots were still obviously limited. Better than anything prior, but still not quite all the way to believably artificial-without-being-identified as such. Then *Jurassic Park* arrived in '93 and it looked even better than the liquid metal T-1000. It took the next step. Sure a lot of the movie was animatronic, but the CG was used to blend between what Spielberg wanted to show and what the puppets couldn't do well enough. The result was ... I mean, you really need to look at the history of movies in the late 80s/early 90s. Reference (look at the wikipedia article) the comments Spielberg and others had when they decided to try the CG. And when they got the tests they ran back and realized how well it was working. *Jurassic Park* was just something that really had not been done on that scale, that well, ever before. It was golden age Spielberg, but even with that everyone was still talking about the \"digital dinosaurs.\" It was something spectacular because it was something film hadn't been able to do before. From the wiki article: > But despite go motion's attempts at motion blurs, Spielberg still found the end results unsatisfactory in terms of working in a live-action feature film. Muren declared to Spielberg that he thought the dinosaurs could be built through computer-generated imagery, and the director asked him to prove it. ILM animators Mark Dippé and Steve Williams developed a computer-generated walk cycle for the T. rex skeleton, and were approved to do more. When Spielberg and Tippett saw an animatic of the T. rex chasing a herd of Gallimimus, Spielberg said, \"You're out of a job,\" to which Tippett replied, \"Don't you mean extinct?\" It's not that *Jurassic Park* looks like today's films. It's that they largely look like it, because it was the first to do it. The tech has evolved a lot since, but there haven't been any seriously huge leaps ahead from what *Jurassic Park* offered, compared to what was available prior to *Jurassic Park*.",
"The puppetry has been repeated ad nauseam so I'll add something I don't see here yet. Computer graphics have a really hard time with light when it's diffused through skin, because it is slightly transparent. We're just now reaching the point where computer rendered skin looks realistic at all. In Jurassic Park they were very clever with their scene composition. Remember the famous up close scene with the T. Rex? It was raining, which makes its skin shiny. This allows for a much more simple reflection that still looks good up close. They used a lot of techniques like that. Limited light sources, low lighting in general, water to simplify the reflections, and when all else fails (like the sick stegosaurus) animatronics alone. Source: Some dude who answered a similar question in another thread long ago. I'd link him if I could.",
"> \"It seems almost comparable today's visual effects.\" No it doesn't. Effects today are so far beyond Jurrasic Park it's not even close. I wish there were more videos like this one explaining that people these days often haven't a clue what is / isn't CGI / effects - and how far composite, practical effects / CGI / hybrids have come: URL_3 You can spot every instance of CGI / visual effects work in Jurassic Park. Nobody can spot every instance of CGI / visual effects in Life of Pi - you can spot a few of the very obvious ones, but you'll miss tons 'cause they are so well done. Same goes for Mad Max, Gravity, most Marvel films, etc. Most people couldn't even tell Grand Moff Tarkin in Rogue One was entirely CGI. VFX / CGI reel for Mad Max Fury Road: URL_1 Hell - here's an effects reel from Boardwalk Empire...a TV show... from like 5+ years ago: URL_2 Game of Thrones Season 6 CGI reel: URL_0 Even for live people / animals, the Planet of Apes two reboot films complete blow Jurassic Park out of the water in every single way: textures, expressions (probably the biggest leap forward thanks to motion capture), movement, hair, skin, eyes, etc. and also practical / CG hybrid effects. This doesn't mean modern effects are perfect. Also doesn't mean the work in Jurassic Park wasn't spectacular (same could be said for T2 and The Matrix), but honestly, these are really dated examples by today's standards. If Jurassic Park came out today it would look SUPER dated compared to other heavy effects films.",
"Despite all the \"this is the answer\" posts, nobody's given the actual answer yet. The big difference between Jurassic Park's CGI and the CGI of modern movies is Jurassic Park's lack of an established workflow. They were inventing the methods and the tools to create exactly the visuals they needed, step by step, because it had never been done before. So it wasn't just a CGI studio adding a bunch of dinosaurs in post, but the entire crew working to pull it off together. Now that workflow already exists, the tools to create CGI imagery already exist, and it isn't designed for specific images but for everything. But now there's an expectation that it costs a certain amount of time and money to produce a certain amount of shots, and the workflow has to be bent to that task. Think of it like a master carpenter inventing an intricate piece of furniture, and then a factory mass producing it. Edit: I knew I'd read a great comment about this very subject once and I've tracked it down. This redditor describes the reason a lot better than I did and is well worth reading if you want the actual answer to this question: URL_0",
"They understood what the available technology could do, and they made the movie work within the limits of that technology. They didn't ask the visual effects people to promise more than they could deliver. At that level, you're dealing with the best people in the world. When they tell you \"No\", you need to listen. Clever beats high tech. You can go all the way back to silent films, and see crazy special effects that still look totally real. You can also go back and see a stop motion King Kong that looks like shit. And that was a blockbuster at the time. Try watching 2001. It was made in 1969, but it still looks so fucking real.",
"Stan Winston studios was at the forefront of animatronics and special effects and basically made them so if you were standing looking at them in the face, they were still incredibly life like (although, as another user pointed out, they used other camera effects such as lighting and rain to make it easier to hide the flaws). CGI is just catching up to that point where it can mimic the real world, and you can generally still pick it out with things that are complex and move (such as a dinosaur), whereas it's much easier to make something look real, that well is real.",
"A lot of other people have answered, but I think that it's important to note how incredible the puppetry was. There is a great YouTube series on making the TRex puppet. The companies supplying the parts didn't think it would be possible to even do. Edit: typo",
"Off topic, but what really blows my mind is Kubrick's tour de force in 2001: A Space Odyssey. From 1968, if you want to talk about being ahead of one's time.",
"Because it was done well & properly. I think a big part of this illusion is just the comparison to bad CG. In terms of CG effects, Jurassic Park was groundbreaking. But after its success, a lot of other movies wanted to use similar effects, but weren't willing to pay top dollar for it. Because of this, a ton of bad/cheap CG was used in movies in the 90's and early 2000's. That's why Jurassic Park aged so well - because a lot of films from the same era really didn't age well.",
"A lot of effects that people think are CGI in Jurassic Park are animatronics, puppets or man in suits. Among other things they build a [life sized hydraulic T-Rex]( URL_0 ) for the movie and used [man in suits for raptors]( URL_3 ). Generally speaking, CGI is only used for dinosauriers far away from the camera, seen in total and moving. Jurassic Park also mixes those different effects together constantly, so you get close up of a puppet, followed by a shot further out that is CGI. Thus giving you photo-realistic texture via the puppet and realistic movement via CGI, those two blend together in your mind to a realistic looking dinosaur and they compensate the shortcomings of each other. The CGI is also always used for only a small part of the scene, you always have a real background into which the CGI dino gets inserted, in modern movies it's often the other way around, the background is 100% CGI and you only have a few actors in front of a green screen. All the physical dino props they build for the movie also had another effect: It provided reference for the CGI people. So they could tweak their CGI graphics until they matched the physical prop. Without the reference it is much harder to create realistic results, since you don't know how it should look. That said, Jurassic Park CGI doesn't look perfect, you can for example see the [pixels in the textures of the Brachiosaurus]( URL_1 ), but that is one of the few obvious faults one can find and would be invisible on a lower resolution version of the movie. Effects aside, Jurassic Park dinosaurs also look realistic because that was their goal from the start . They made those dinos to behave like real animals and had experts around to give advice. A lot of other movies don't aim for realism to begin with, they want scary monsters, relatable cute animals or cool action set pieces. Thus even so the graphics themselves might look better than JP, the behaviour and action happening on screen makes it obviously it is fake. Most [outrageous example is the King Kong dino stampede]( URL_2 ), the compositing in that scene is horrible, but no amount of good CGI could fix that and make that scene believable, as the action itself is already completely removed from reality.",
"They made heavy use of rain and darkness to hide the limitations of cgi However take a fresh viewing and many of the effects look terrible, especially the brontasaurus scene",
"Because, just like with the original Star Wars, 99% of it wasn't visual effects. URL_0 Pretty much the only times they were actually using CGI was when you could barely see the dinos and it was night/raining. And Steven Spielberg knew what CGI could and couldn't do going into the project so he wasn't like \"we need 15 T-rex's with laser cannons on their head fighting magic fireball casting Raptors but it's all gotta look 100% real\". And what they did use is not even remotely close to today's CGI. The majority of big budget movies are mostly CGI and you can't even tell. Unless the actors are directly interacting with an object or walking on something it's a good chance it's not actually there. Especially things like what's going on in the background. Most movies don't even go shoot on location anymore it's all in a studio with greenscreen walls and a handful of actual objects in the room. For instance this is \"New York City\" in The Avengers URL_1",
"Rather than feed you a repeat of countless Cracked articles, I'll do the thing people hate. Everything that was amazing about Jurassic World's effects: -The detail level was insane. Every bit of flesh that could be accounted for on modern CGI tech was there, at a resolution scale that almost-certainly constitutes all of Jurassic Park's work in a few frames. -The way the flesh and teeth interacted with light, revealing their perceptible texture and substructures was amazing. Until independent human craftsmen are 3D printing real bone and tissue, this is the most amazing and least disconcerting art our species could hope for. -The detail of the I-Rex's mouth as it sniffs out Pratt is another great example of a disconcertingly....\"meaty\" sort of dino. I think there might have even been some flies buzzing around that steamy mess of a mouth it had. Again, silicon with paint and KY jelly on top looks like silicon with paint and KY jelly on top. -Animation. Good lord the animation. Where the intersection of human fact and myth starts to pay off visually. Most specifically, the way the I-Rex attacks the sphere-o with the exact timing and approach that an oviraptor (or any egg-eating bird/reptile) would. Heck, the scene that got people so riled up, where the lady gets snatched up by the pterodactyls, who attempt to drown her, and then both get chomped by the big beastie? Straight outta National Geographic through and through. That scene was so well done, people weren't talking about FX, they were imagining that the nature on display owed them some karmic balance sheet to explain what they just saw and reacted to with their own deep human fears. If FX came up? \"CGI.\" As if those words are a curse in and of themselves without any functional understanding of how to do this stuff. -If that one-shot in the finale doesn't justify CGI to you, you should read or watch more things about how this stuff is accomplished, and how films work, and why humans bother to make films in the first place. The flawless shifts in scale from dino to human and back. The modern choreography of massively detailed simulations being wrangled by factories of electron clouds buzzing.... Bonus: Blue: They turned a raptor from a clunky hand-off between \"real\" puppets with puppet-level emotions and a CGI super-animal....into a proper character. That hero shot was *flawless* in excess of anything Jurassic Park managed, including with the Amusement Park T-Rex. Not only visually convincing, but a well-earned bit of character work from a fully-digital character. That is what defined the CG in Jurassic World to me.. Its utter commitment and execution of genuine character work through this digital realm, with animal characters given modern respect as to their evolutionary cultural capabilities. They needed one old-school Speilberg monster, and they had to genetically engineer it in their narrative of animals being animals. The thing Spielberg always talked about without ever managing to portray with his clunky giant robots...he settled on horror-movie tropes about not seeing anything. Great for \"holding up\" visually, but not so much scientifically/intellectually/philosophically. In Jurassic World, we got a nature documentary so convincing, we hated the humans for being flawed humans and the dinosaurs for acting like dinosaurs at those humans we hated. Also, our brains are still clinging to the Uncanny Valley for dear life, and \"Pffft CGI\" is a spell to strengthen our hold. 
(Totally fair exaggeration of the Reddit Response :-P ) Bonus Bonus: You guys realize that Reddit comments are CGI right? And that when you all cluster around the one that gets the most positive attention, that's when its influence gets questionable? Groupthink is great for facts, but considering this thread asks for an explanation of an incorrect perception of art, the top comment seems like a fireside retelling of like .075% of all internet content. (Edit: I called Reddit itself CGI, and it got revenge with auto-formatting. I will not go back to change my numbers back to the numbers that I typed before a machine tried to correct me. Particularly on one of these Jurassic Park CGI threads. Food for thought/vomit.)",
"URL_0 A great video by the legendary u/kaptainkristian about the visual effects in Jurassic Park",
"Huh? A lot of the special effects in Jurassic Park look very dated. Source: I just watched it again two weeks ago.",
"Also note that you see a lot of VFX today that you have no idea was VFX. You only notice the bad stuff.",
"They used real dinosaurs. Sure, it takes a lot of takes to get the trained dinos to act perfectly, but it doesn't get more life like than the real deal.",
"2 reasons. 1. Mixed with 'real' practical effects. 3. You've never seen real dinosaurs so you have nothing to go on. Also, 63 million dollar budget in 1993. That's about 110 million today and movies today can be 100% CGI so you can imagine more time invested in less frames.",
"Most of the big trex scenes with cars at the paddock fence was a giant model. Same with the scene where the raptors are in the power room are all models* and you have to remember when Jurassic park was made the animatronic and props and sets were all real and kinda at there peak in film industry. The CGI used was expensive and leading technology at the time. Also the raptors in the kitchen, its just them rendered on a real background and no filters or effects like lense flare, shudder etc so in being minimal it gives a really good and believable effect. Today's CGI where everything's CGI and going for ultra realism can have the opposite effect where your brain just knows its bs, whilst Jurassic park CGI your brain is just questioning it as 90% of the scene is real with the CGI over the top. My 2 cents anyways :)"
],
"score": [
10914,
7536,
2488,
760,
338,
236,
214,
202,
162,
125,
44,
30,
30,
29,
23,
20,
10,
9,
7,
5,
5,
5,
4,
4,
4,
3,
3,
3,
3,
3,
3
],
"text_urls": [
[
"http://www.cinemablography.org/blog/behind-the-scenes-jurassic-park-t-rex-entrance-scene"
],
[
"https://www.youtube.com/watch?v=_rlr3Lzvqog"
],
[],
[],
[],
[
"https://digitalsynopsis.com/design/movies-before-after-green-screen-cgi/"
],
[],
[],
[],
[],
[],
[],
[],
[],
[
"https://www.youtube.com/watch?v=3fPRK92TtIY",
"https://www.youtube.com/watch?v=Cnb-5AZmzGE",
"https://www.youtube.com/watch?v=aFHKwaW4Um8",
"https://www.youtube.com/watch?v=bL6hp8BKB24"
],
[
"https://www.reddit.com/r/movies/comments/2ndx0r/the_full_jurassic_world_trailer/cmcs22y"
],
[],
[],
[],
[],
[],
[
"https://www.youtube.com/watch?v=4SK1qTnhHzI",
"http://imgur.com/C1cyb3i",
"https://www.youtube.com/watch?v=5PwOSFd0BBw",
"https://www.youtube.com/watch?v=jAzQr3Ml0UI"
],
[],
[
"https://www.youtube.com/watch?v=B4J9TBlFxAg",
"http://i.imgur.com/4DJdhb6.jpg"
],
[],
[
"https://youtu.be/_rlr3Lzvqog"
],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5qsh5y | Why is the YouTube video player so much better than other sites (For instance ESPN, Netflix, SlingTV, etc)? | I have found myself getting more and more frustrated with video players and streaming, especially on sites that I use frequently; ESPN being the most common. On seemingly every other video site with regards to jumping forward or back, or even just starting a video, there is a major issue with speed, buffering, and skipping, while YouTube is almost always instantaneous for me. Why is that? What is YouTube doing that is so much better than everyone else? | Technology | explainlikeimfive | {
"a_id": [
"dd1ru7r",
"dd1r9fi",
"dd1vxbm",
"dd1rana"
],
"text": [
"Youtube has a lot of servers, where they store the videos they send to you, in lots of places. This means that odds are better they'll have a server ready to respond to your request for a video, they'll have more bandwidth and/or a better connection to serve it to you, and that they'll be physically closer to you so the data doesn't have to travel between as many internet nodes and get slowed down. Another factor could be that the Youtube player code is just better written than whatever other websites use. I'm not sure how it works exactly, but consider how big Youtube is and how many engineers they can afford to have working on optimizing it for every conceivable circumstance. This could be things like taking advantage of data they've already buffered to your computer instead of discarding it and redownloading every time you skip around the video.",
"If it's issues with the speed your video loads at, it's probably just that youtube has better servers or servers that are closer to you. The quality of the player shouldn't impact the speed your video loads at, bar really badly designed players.",
"Part of the reason like /u/ataiwaochinchin said is that YouTube has the support of Google's massive server farms for sending videos to you super quickly. Another reason is that Google works hard to make the player as \"bare-metal\" as possible, meaning there's usually very little extra getting in the way of you and your video, besides the ads. It uses the technology already built into your browser (and if you're using Chrome, technology which they specifically design and maintain) to keep things as minimal as possible. Also thanks to Google's server farms it has the ability to \"transcode\" the videos into other formats (this is what the \"processing video\" part of the upload process usually is). Some video formats run better on certain browsers and devices, and since they can store all the different formats they can detect what you're running on and send you the most appropriate format. It's also in Google's best interest to make that player run as fast as possible, since the service absolutely requires that users watch the ads, and users are less likely to watch/endure the ads if the player is loading slowly.",
"Video streaming sites use something called bandwidth. This is (basically) analogous to water flowing through a pipe to a tap. Youtube has the equivalent of a standard faucet with bigger than average pipes and a giant super powerful water pump at the water station. Other sites might not have such a big pump such but a really shiny tap that has diamonds and is made is of solid gold. It looks pretty, but it's just a tap and there's hardly any water pressure. This can be fixed by reducing the pipe size (HD - > SD) or by getting a bigger pump. But because it's not their primary purpose they don't really need a bigger pump running all the time and it's cheaper for a smaller one even if it doesn't work quite as well. I know it's a bit rambly but I just finished a 7 hour shift 😴 if you need me to clarify just ask :)"
],
"score": [
35,
8,
7,
5
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
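
The "lots of servers in lots of places" point from the answers above can be sketched as a toy mirror-selection loop: probe the round-trip time to each candidate server and stream from the fastest one. The hostnames here are invented placeholders, and real CDNs steer clients with DNS and anycast rather than client-side probing, so treat this purely as an illustration of why geographic spread helps.

```python
import socket
import time

# Hypothetical edge-server hostnames -- purely illustrative, not real infrastructure.
MIRRORS = [
    "edge-us-east.example.com",
    "edge-eu-west.example.com",
    "edge-ap-south.example.com",
]

def probe_latency(host: str, port: int = 443, timeout: float = 1.0) -> float:
    """Return the TCP connect time to a host in seconds (infinity if unreachable)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def pick_fastest_mirror(mirrors):
    """Pick the mirror with the lowest measured latency."""
    return min(mirrors, key=probe_latency)

if __name__ == "__main__":
    print("Streaming from:", pick_fastest_mirror(MIRRORS))
```
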
5qt88d | How can malware get installed on your computer just by opening up a website, without the user getting notified about a download? | Technology | explainlikeimfive | {
"a_id": [
"dd1zzhp",
"dd1y24k",
"dd1yauv",
"dd240t0"
],
"text": [
"Malware is mostly spread through security vulnerabilities on your computer, for example in your browser or some plugins your browser uses. The most common ways for an infection are through JavaScript or the Flash plugin. Someone might find a bug in Adobe Flash to manipulate it into running arbitrary code. Using this bug, he downloads the real malware to your computer and executes it. Since this still requires you to visit a website that serves the exploit and quite sure you won't visit some site like URL_0 too often, there is often another step in the whole attack campaign like hacking a advertisement network and serving the exploit code via advertisements. This way every website that includes ads from the hacked network, will serve the malware (that's why we love ad blockers ;)). Another way to spread the exploit are so called Cross-Site-Scripting vulnerabilities in websites where a attacker can inject JavaScript code in a website that will get executed when a user visits the website. But using vulnerabilities in your browser or the plugins aren't the only ways to infect your computer. Another way that was described not to long ago but only in theory (never seen in the wild) was to infect Fedora Linux using the fact that Google Chrome will download files without prompting the user and Fedora will automatically index the downloaded files and create for example preview images for pictures or video files to show in your file manager. A attacker could cause Chrome to download a manipulated media file, that would be stored on the disk and processed by the multimedia engine used by Fedora (called gstreamer). Due to a bug in gstreamer, when processing the manipulated media file, the attacker could execute code on the computer. All in all, to infect a computer without user interaction the attacker needs a vulnerability in any program you use or multiple minor bugs combined in a clever way.",
"in general it just shouldn't be possible, everybody knows that this is a big security risk. Usually JavaScript and everything that allows your browser to run code on a homepage is the source of all evil. You just don't know exactly what this code will do on your computer. People try hard to make it save but programs have bugs and programmers don't always think about all possibles uses of the tools they develop. A basic approach of hacking is sending a string to your browser that gets executed and in some way nobody considered yet this code gets more rights to execute and write stuff on your computer than it should have. And i have to give credit to the hackers who develop those things. It is really hard and requires a lot of knowledge about a browser to do things like that. And as soon as other people notice how the code works they usually improve the browser to prevent something like it from happening again. But nowadays most malware works by abusing the human mind \"click here for free iphones!\" or an email that says \"man, look at this picture i send you!\" or \"check out this cool game!\" are a lot easier to create and use.",
"When you load a webpage there are things going on in the background to make that possible. For example images are downloaded and displayed, scripts are executed, plugins loaded etc. Every time a program gets an input and does something with it there is a chance that such input is malicious (not the kind of data that the program normally deals with) and this fact should be accounted for by the programmer and dealt with in a graceful manner. However programmers do make mistakes and sometimes such malicious inputs aren't dealt with correctly and and the program can be derailed in its execution by such malicious code. For example, the browser takes some picture in a website and decodes it so it can display it to you and the programmer didn't make sure to check that only precisely the size of the image can be loaded into memory, not a byte less not a byte more. An attacker can use that fact to create a particular image file that tricks the software into loading in memory more than it should which in turn causes it to write in an area of memory that enable the attacker to execute code. Now in a scenario like this it means that just by loading an image some code, aka a program, would be executed without any further action by you. Fortunately most of these issues come from plugins like java or flash, which means that disabling them greatly reduces the risk of being victims of such attacks.",
"I answered this [yesterday]( URL_4 ). Here's a copy of my answer. > A modern web browser is a complicated beast, with lots of different capabilities. It can do pretty much everything, from displaying complex graphics with custom fonts and playing audio and video in a variety of formats, to showing PDFs and doing all sorts of computationally intensive tasks. Browsers can also have addons and plugins, such as Flash, Java and Acrobat Reader. Websites can use your webcam and microphone, and access your local files, although only if you give it permission. > > Having the aforementioned capabilities means that browsers have a lot of complicated components, each with a lot of code. More code means more bugs, and some bugs can be abused by an attacker to take over your computer or steal your data. > > Getting infected by just visiting a website isn't that common these days, but it's still entirely possible, especially on shadier sites. Browser developers are pretty fast at fixing known exploits, but sometimes hackers use [zero-day vulnerabilities]( URL_5 ). Keep your browser up to date and pay close attention to which websites you visit, and you should be safe. > [part 2] > > Yes, normally web pages can only save certain kinds of data, but certain bugs can lead into [arbitrary code execution]( URL_0 ), meaning a carefully crafted web page can execute any code the attacker wants on your computer. > > Image, video, document and font formats can be quite complicated. For instance, two years ago [Google engineers discovered]( URL_3 ) that a bug in Windows's font handling enabled carefully crafted font files to run arbitrary code on your machine. Since web sites can embed custom fonts, any website could've abused this. > > Plugins are also a common source of exploits. [Here's an example from this week.]( URL_1 ) Cisco has a popular browser plugin called WebEx which is used for video calls. The plugin has to communicate with programs installed to your PC. Due to the incompetency of Cisco's programmers it's not limited to just communicating with their program, or only being usable from their website - any website can do anything to your computer. This applies to any browser the plugin is available for. > > Websites use Javascript as the programming language. Once again, things are complicated, so a serious bug in the language implementation can be exploited for who knows what. Almost every page already executes some code on your machine, and while it's limited to only certain things, breaking out of that [sandbox]( URL_2 ) is not impossible."
],
"score": [
13,
5,
3,
3
],
"text_urls": [
[
"askdhfawej92nd09a32nd.com"
],
[],
[],
[
"https://en.wikipedia.org/wiki/Arbitrary_code_execution",
"https://bugs.chromium.org/p/project-zero/issues/detail?id=1096",
"https://en.wikipedia.org/wiki/Sandbox_\\(computer_security\\)",
"https://googleprojectzero.blogspot.fi/2015/07/one-font-vulnerability-to-rule-them-all.html",
"https://www.reddit.com/r/explainlikeimfive/comments/5qigkm/eli5_how_could_simply_opening_an_3mail_or/dczhqhg/",
"https://en.wikipedia.org/wiki/Zero-day_\\(computing\\)"
]
]
} | [
"url"
] | [
"url"
] |
|
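
The Cross-Site Scripting idea mentioned in the answers above boils down to a page pasting untrusted text straight into its HTML. A minimal sketch (the page template and attack string are invented for illustration):

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Dangerous: the user's text is pasted straight into the page,
    # so "<script>...</script>" would run in every visitor's browser.
    return "<div class='comment'>" + comment + "</div>"

def render_comment_safe(comment: str) -> str:
    # Escaping turns the markup into harmless text before it reaches the page.
    return "<div class='comment'>" + html.escape(comment) + "</div>"

attack = "<script>stealCookies()</script>"
print(render_comment_unsafe(attack))  # the script tag survives -> it would execute
print(render_comment_safe(attack))    # &lt;script&gt;... -> displayed as text, not executed
```
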
5qufpa | Why do US telcos still use CDMA technology, when the majority of the world uses GSM for communication? | What's the benefit of using CDMA in the US? | Technology | explainlikeimfive | {
"a_id": [
"dd294jr",
"dd2eyxc"
],
"text": [
"The main reasons are a matter of timing, corporate greed and legacy. Back when the US networks where starting to form, the switch from analogue to digital cellular technology was also happening and CDMA had some interesting advantages over GSM. One of the most appealing features at the time (And still continues to be) was that it is easier to lock a CDMA user into the network that provides the phone than it is with GSM technology whose spec demands that they be interoperable between networks. CDMA makes it harder for a user to leave a network for another one and take the phone with them (In some cases it's impossible). There where other benefits to CDMA as well such as greater capacity on the network, a questionable theory that call quality was better and so forth but GSM caught up very quickly and eventually leapfrogged CDMA in the quality and feature departments. Now of course, some of those network operators have folded into the big players you see today and frankly switching from CDMA to GSM is a BIG commitment those network operators don't really wish to undertake. CDMA as a technology outside of the USA and small parts of Russia is dead with the advent of 4G. GSM has been taken up by most of the world, mostly driven by Europe's mass uptake of it. Though 3G briefly was based on a variance of CDMA, 4G uses a technology called LTE which is a further refinement of GSM technology.",
"The selection of CDMA over GSM was mostly based on the distances and number of users supported by an antenna. CDMA was initially superior to GSM on both, therefore cellular networks could have better coverage with fewer macro cells (towers). However with the adoption of LTE, as well as refinements to 4G over GSM (contrary to popular belief, modifications to both CDMA and GSM were allowed to call themselves 4G without supporting LTE), and subsequent future migration to 5G, the differences have become moot. But as the US was an early adopter of cellular technology, and Qualcomm was the leading provider of CDMA technology to both Verizon's predecessors and cell phone manufacturers."
],
"score": [
11,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
5quj61 | What is the difference between Chrome & Chromium, and who owns Chromium? | Technology | explainlikeimfive | {
"a_id": [
"dd26t8w"
],
"text": [
"Chrome is a browser created by Google. Chromium is an open source browser based on Chrome. Chrome OS is an operating system made by Google, where the Chrome browser is the primary user interface. It is designed for lightweight devices that are primarily used to access the internet. Chromium OS is an open source version of Chrome. Chromebook as a laptop sold by Google with the Chrome OS preinstalled."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5qw1o3 | Smartphone CPUs, compared to PC ones | What are the best, the worst, and the in-between? I've heard good smartphones use Snapdragon, does that mean they are the i7 of smartphones? | Technology | explainlikeimfive | {
"a_id": [
"dd2iuab",
"dd39l9q"
],
"text": [
"There are many different brands and models of mobile processors. It's not like Intel vs AMD. The most common ones are from Qualcomm, who are responsible for the Snapdragon. Their flagship processors can be identified by the 8 as the first number. They are currently on the 835. Their mid ranges begin with a 6, and the low end at 4. Other notable brands are Samsung, Mediatek, Huawei, Intel, etc.",
"Just think about cars, we have two main types used today: cars with diesel engines and cars with gasoline engines. They both do the same thing (power a car) but the engines are different and each have their strengths and weaknesses. CPUs are the same way. Currently there are two major types: ARM CPUs and x64 CPUs. ARM is a lot newer and more power efficient while x64 is older, more power-hungry but generally considered to be more powerful. 99% of mobile phones use ARM CPUs while basically all PCs use x64 CPUs for obvious reasons. Now going back to the car analogy, there are a bunch of companies that make engines. Companies like Honda, Ford, VW, etc... make either diesel or gas engines for their cars or sometimes for other car companies! In the CPU world there really aren't so many companies actually designing and building chips. x64 CPUs are only made by Intel or AMD. ARM CPUs are a bit more complicated. Companies can buy \"premade schematics\" from ARM and then hire a manufacturer to actually make the CPUs off the schematic. Qualcomm and Apple are special, they hire their own engineers to take pre-existing schematics and heavily modify because they wanted an edge on the competition (there are actually a lot more reasons). Since neither Qualcomm or Apple actually own any chip factories they also need to find companies to actually build the things from their custom schematics. Samsung, Taiwan Semiconductor (TWSC) are examples of some companies that manufactor ARM chips"
],
"score": [
3,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
5qyp7f | What is ActiveX control? And what does it do? | I tried searching the internet, but I couldn't find anything that I can understand. | Technology | explainlikeimfive | {
"a_id": [
"dd35kuy"
],
"text": [
"Before the advent of HTML5, it was difficult for developers to build sophisticated web apps or multimedia sites because the HTML and JavaScript features supported by web browsers at the time were more limited. There also wasn't a standard way to build cross-platform compatible browser plug-ins to extend the functionality of the browser. Basically ActiveX was a proprietary 'Object Linking and Embedding' technology developed by Microsoft which helped faciliate the exchanging of data/content between applications and the embedding of data/content from one application into another. ActiveX was well known because it was integrated into Internet Explorer. It effectively allowed developers to build more advanced web applications (that could run inside a user's web browser and be embedded into webpages), but the problem was that the web applications were only compatible with Internet Explorer. ActiveX controls also became infamous because they introduced all sorts of security risks by allowing developers to run potentially dangerous and malicious code in a user's web browser simply by a user visiting a webpage."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5qzpcq | Why aren't busses aerodynamic? | I know that some busses are aerodynamic but most of them are not. Why? | Technology | explainlikeimfive | {
"a_id": [
"dd3cn07",
"dd3dee0",
"dd3ecw4",
"dd3ipay"
],
"text": [
"Because they're economical when it comes to space. Buses aren't designed to go fast, they're made to carry as many people as possible; as efficiently as possible.",
"I can only speak for my country (Germany) here. We have a maximum length depending on the number of axles. There is 13.5m for 2 axles, 15m for 3 or more axles and 18,75m for a bus with a hinge. If you were to make the bus more aerodynamic then you'd loose at least 2 seats, maybe more. Same goes for trucks. At some point in the past a German politician (can't remember who) wanted to give trains the advantage over trucks and introduced a maximum length for them so they couldn't carry as much cargo. The industry's response was to move the driver's cabin from behind the motor to above the motor to create more space for cargo.",
"aerodynamics dont really apply or have much benefit when the vehicle is designed primarily to go 20 - 30 mph and stop and start very often. at high speeds it makes a difference",
"We mostly want buses to be able to carry lots of people and adding features that make a bus aerodynamic may cut in on that ability. To keep the same passenger capacity you'd need a longer bus which would make it more difficult to drive on city streets where aerodynamics aren't really necessary when you are dealing with traffic."
],
"score": [
32,
15,
13,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
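
The "aerodynamics barely matter at city speeds" point in the answers above can be checked with the standard drag-power formula, P = 1/2 * rho * Cd * A * v^3. The drag coefficient and frontal area below are rough guesses for a boxy city bus, so the absolute numbers are order-of-magnitude only; the interesting part is that drag power grows with the cube of speed.

```python
RHO = 1.2   # air density, kg/m^3
CD = 0.7    # assumed drag coefficient for a boxy bus (rough guess)
AREA = 7.0  # assumed frontal area in m^2 (rough guess)

def drag_power_kw(speed_mph: float) -> float:
    """Power needed just to push through the air, in kilowatts."""
    v = speed_mph * 0.44704  # mph -> m/s
    return 0.5 * RHO * CD * AREA * v ** 3 / 1000

for mph in (25, 65):
    print(f"{mph} mph: ~{drag_power_kw(mph):.1f} kW spent on drag")

# Roughly 4 kW at 25 mph versus roughly 70 kW at 65 mph -- about 17x more --
# which is why streamlining pays off for coaches on highways but buys very
# little on stop-and-go city routes.
```
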
5r34rq | How are rechargeable batteries different from regular ones? | Basically - how are rechargeable batteries able to be charged, but other batteries aren't? | Technology | explainlikeimfive | {
"a_id": [
"dd41iq8"
],
"text": [
"Batteries normally store their power through chemical energy. When power is drawn from them, a chemical is broken down that produces the electricity needed for whatever the battery is powering. However, the reaction cannot go on forever, turning whatever chemical is in your battery from one chemical into it's base elements/molecules. A rechargeable battery has a certain set of chemicals that, when depleted, will convert the broken down elements/molecules back into whatever chemical your battery originally used. The non-rechargeable batteries have a chemical that is essentially impossible to rebuild with electricity alone."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5r3dme | How come calculators are the only computers we've commonly adapted solar panels into? Why haven't we integrated them into things like laptops or cellphones? | Technology | explainlikeimfive | {
"a_id": [
"dd4357g",
"dd436dj",
"dd47ezn"
],
"text": [
"Handheld calculators require very little power; tiny solar panels can power them. You'd need massive panels to power a smart phone. The power consumption difference is more than you'd think.",
"They don't require nearly as much power. Even more powerful calculators like he Ti-whatever need batteries",
"Calculators only need to power a few things. On demand. Like the LCD screen which doesn't require much power at all. Or the logic units in its brain. These units can be kept off and when they are on, they work very little while you are using the calculator. Think about how often you press the \"=\" sign. That's peak power consumption. With a smatphone or a laptop, its very different. You have a high resolution color LCD display. Many, many more pixels than the one in calculators. These devices are always broadcasting or receiving signals over the air. This requires a lot of power. These devices run complex operating systems that manage a huge set of resources. All of this requires a substantially large amount of power. tl;dr: the vastly different feature sets offered by the two devices are responsible for the difference in power sources."
],
"score": [
19,
4,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
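
The power gap the answers above describe is several orders of magnitude, which a quick back-of-the-envelope calculation makes concrete. The wattages, light levels and efficiency below are rough, assumed figures, not measurements.

```python
# Rough, assumed figures (order of magnitude only).
CALCULATOR_WATTS = 0.0001   # ~0.1 mW for a simple LCD calculator
PHONE_WATTS = 2.0           # a smartphone with the screen on
PANEL_EFFICIENCY = 0.15     # a decent solar cell
FULL_SUN_W_PER_M2 = 1000    # direct sunlight
INDOOR_W_PER_M2 = 10        # typical indoor lighting is ~100x weaker

def panel_area_cm2(load_watts: float, light_w_per_m2: float) -> float:
    """Panel area needed to run a load continuously under the given light."""
    return load_watts / (light_w_per_m2 * PANEL_EFFICIENCY) * 10_000  # m^2 -> cm^2

print(f"Calculator indoors: {panel_area_cm2(CALCULATOR_WATTS, INDOOR_W_PER_M2):.1f} cm^2")
print(f"Phone in full sun:  {panel_area_cm2(PHONE_WATTS, FULL_SUN_W_PER_M2):.0f} cm^2")
print(f"Phone indoors:      {panel_area_cm2(PHONE_WATTS, INDOOR_W_PER_M2):.0f} cm^2")

# Under these assumptions, well under 1 cm^2 of cell runs the calculator under
# office lights, while the phone would need on the order of 100+ cm^2 of panel
# in direct sunlight (and ~100x that indoors) -- bigger than the phone itself.
```
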
5r5ezb | How do pilots and truck drivers communicate over the radio? | For example, a plane is flying over a restricted area, so a military aircraft says "Turn around"... bla bla - how does one plane find the radio signal of another plane? And the same with truck drivers: for example, one truck passes another truck on the route, and sometimes they pick up the radio and say something to each other - how do they do that? Sorry for my bad london, i have an extra chromosome | Technology | explainlikeimfive | {
"a_id": [
"dd4nbpb"
],
"text": [
"YouTube has interesting conversations between tower and planes. Search for \"ATC recordings.\" A guy named 'Kennedy Steve' is famous because of his delivery/humor. You could just search for that as well."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
5r6j5e | Why do some card machines ask for your pin number when using your debit card while some machines don't? | Additional question, why do card machines now require you to use the chip on your card instead of swiping? | Technology | explainlikeimfive | {
"a_id": [
"dd50k2e"
],
"text": [
"Chip cards use tokenization, meaning a 1-use code is used for every transaction. Your card number also tell machines that it's a chip card, so a clone can't be swiped. So if someone somehow stole your number and 1-use code, it would be worthless as that code no longer works, and the card can't be swiped. However, online transactions of course still work. This is why Apple Pay (with NFC and mobile transaction) is currently the most secure format, and the most private. As for why you wouldn't be asked for PIN, because the business ran it as a credit. Credit cards and debit cards get run through different networks, and since any debit can be ran as credit, some businesses opt to only pay to run credit cards, as the ability to run debit cards is an separate payment. I know places like Walmart only accepted credit chip cards for months before adding debit chip capability, so it may be something other than cost as well."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
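
The "one-use code per transaction" idea in the answer above can be illustrated with a toy counter-based code, similar in spirit to HOTP. This is not the real EMV protocol (actual cards use certified hardware and a much richer message format); it only shows why a sniffed code is worthless the second time around.

```python
import hashlib
import hmac

SECRET = b"key-stored-inside-the-chip"  # in a real card this never leaves the chip

def one_time_code(counter: int) -> str:
    """Derive a short code from the secret key and a per-transaction counter."""
    msg = counter.to_bytes(8, "big")
    digest = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return digest[:8]

# Card and bank both track the counter, so every transaction gets a fresh code.
print(one_time_code(41))  # code used for purchase #41
print(one_time_code(42))  # purchase #42 -- a replayed code from #41 no longer matches
```
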
5r6jt3 | Why is it that in most depictions of UFO's, their vehicle has some kind of rotating piece | ELI5: Why is it that in most depictions of UFO's, their vehicle has some kind of rotating piece? | Technology | explainlikeimfive | {
"a_id": [
"dd4vlp4"
],
"text": [
"For the same reason that many depictions of Jesus Christ are of a white Italian esque guy rather than middle eastern, it's become mainstream. However, entertaining the question, some have argued that a UFO would need to spin or else it would risk tumbling around in the sky. Kind of like when you throw a frisbee and it spins, this spinning would give the object angular momentum, which increases stability. In other words, it basically makes the spinning object a gyroscope. In regards to the saucer shape, in assuming some bright yet misguided people thought that this would be the optimum shape for this flying saucer to create lift as well as remain stable in flight. Resources: \"How does a frisbee fly?\". (2012). Retrieved from URL_0"
],
"score": [
12
],
"text_urls": [
[
"http://howthingsfly.si.edu/ask-an-explainer/how-does-frisbee-fly"
]
]
} | [
"url"
] | [
"url"
] |
5r6qpo | What's the difference between cmd.exe and Windows PowerShell? | Technology | explainlikeimfive | {
"a_id": [
"dd4x189"
],
"text": [
"Ok the ELI5 version. Both are programs that (now this is simplified) present interfaces where the user can interact with the computer using a keyboard for input and see text as output. You type a line (some text + enter key) and program interprets what you typed as a command. This is why these interfaces are called Command Line Interfaces (CLI). Anyway in the old days the only way to interact with a computer to give commands was either to type them in a CLI or to use another program that would pass on those commands (they presented graphics and allowed other input methods like mice). In time some of those other programs became so good that people preferred them. These days these other programs are mostly used as interfaces and are called Graphical User Interfaces (GUI). Microsoft had a CLI in the old days called *command*. And it was the primary interface for people use in the Microsoft OS of the day (MS DOS). It was simply called \"command\" as this was name of the program that was run by the OS when it booted so you could interact with it. e Eventually another MS program called Windows (a GUI) became the most popular and MS made the \"command\" program live inside Windows (this is a simplification - I know all about different mode 16/32 real protected etc but it's not pertinent to this explanation). instead. The original program was called command.exe and MS produced new version with each OS version but renamed it \"cmd\" as a shorthand (again there is more but it's not important). This is cmd.exe. It has all the same features and compatibility basically with the the original command program. The biggest issue is that it's got decades of legacy to support so the commands are not consistent and not complete in function or form etc. In addition the CLIs of that vintage (cmd and the various linux ones) are character (text) oriented. So commands read text and write text out. In the Linux world the CLIs are much more comprehensive than the ones like cmd. Linux operators can do practically everything from the command line. However over the years MS sort of pushed Windows people into using GUIs for everything. As Linux servers slowly rose in popularity, Windows administrators also asked for better CLI tools to manage Windows systems. So eventually MS reengineered a new CLI from the ground up (using something called Lamda Calculus incidentally) and this is was originally called Monad.exe. It's biggest difference from cmd was that it wasn't character/text oriented (even though you interacted with it using keyboard and screen). Rather it uses object oriented/based technology that is similar to the one that is provided for developers (.Net). This allows much more powerful and comprehensive commands to created and they can interact with each other much more intelligently. Unfortunately though MS tried, they couldn't make monad 100% compatible with cmd but they made a damn good effort. After 2 versions monad was renamed to Powershell. Over time more and more of Windows was made controllable through Powershell and it's recommended that unless you need to use legacy commands in cmd, that you use Powershell. Hope this helps."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5r6uyw | Why can't we charge batteries very quickly or even instantly? | Technology | explainlikeimfive | {
"a_id": [
"dd4wwjw",
"dd4xf01"
],
"text": [
"Batteries are in effect repeatable reversible chemical reactions. Think of, for example, how water turns to ice when you cool it (take energy out) and turns to liquid when you heat it (put energy in). That's basically what a battery is, you load it up with the potential to create a reaction and generate electricity, and then you discharge the energy. When you charge it up again, that reaction isn't instantaneous, and you are limited by a number of physical limitations, such as heat and maximum voltage, but primarily by the fact that being too aggressive can ruin the battery, affecting its ability to reverse the reaction. This is why I'm still holding out for removable batteries.",
"Analogy time: filling a standard kitchen sink with water. You have a couple ways you can fill the sink. The first way is to turn on a tap and let it fill up slowly. This takes a flow of water and dumps it into the sink in a contained fashion. You turn the tap off after a minute or so, and there's your filled sink, nice and contained and ready for some dishes or whatnot. Or you can get a yuge bucket full of water and just upend 'er to dump it all into the sink... and then spend the next half hour mopping up the mess. Or you can bring in the high power firehose and spray it full tilt. Again... mess. Clearly, the better way is to have a controlled inbound flow of water to fill the sink. It doesn't have any risk of soaking the place at the expense of a few minutes of time. BUT... maybe you build a better sink. Put a piece of plywood over it with a hole through it to contain the splashes as you blast it full quickly. Changing the sink is certainly an option, although an expensive one, at least at first. Rechargeable batteries are like that. There's a capacity limit to how much they can absorb when electrical power flows into them and gets converted into chemical storage. We're working with new and novel materials like graphene to improve how much \"flow\" our system can absorb at once, but your bog-standard rechargeable batteries just can't take and process the inbound flow of electricity fast enough to instantly, or even quickly, recharge. That would overload the system and cause splashes and puddles... uh.. sparks and heat."
],
"score": [
57,
17
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
5r84zy | What is the difference between Shell, Bash, Zsh and terminal? | Technology | explainlikeimfive | {
"a_id": [
"dd54r9k"
],
"text": [
"* shell - a program that provides a command line interface to the operating system, especially with Unix/Linux * sh, csh, ksh, bash, zsh - examples of specific shell programs * terminal - used to mean a keyboard and monitor that could connect to a remote computer...now it usually means a program that runs a shell in a window"
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
5r9jsb | Since smartphones are basically tiny computers, why aren't people assembling their own phones like we do for computers? | Is it just a matter of everything being too small, and the parts being too specialized? Don't most phones have similar processors and chips etc? | Technology | explainlikeimfive | {
"a_id": [
"dd5g1l4",
"dd5gtdh"
],
"text": [
"you got it. same is true of laptops and AIO^*edit PCs. to gain the nescessary density, the components are uniquely designed to shoehorn together into the specially sized case. all of the bones of a phone are proprietary. Even modular parts like sensors and memory chips are largely soldered in place because slots and ports take too much space.",
"People have been hooking up telephone modems to computers for many decades, and that's all that makes a \"smartphone.\" But the trouble is that people want them to be very small and, like laptops, they have to fit a particular form factor. These design requirements make it harder to keep components accessible, and manufacturers have no real incentive to do so. This makes it a lot harder to assemble your own smartphone compared to a desktop computer with a modem. And accordingly, because most people have neither the desire nor skill to do so, there is large consumer market for the components in the way there is for desktops."
],
"score": [
6,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
5r9ojc | How are vocals removed from songs to make instrumentals? | Sometimes you can hear a little bit of the vocals still but how do they completely remove them without changing the rest of the sounds? | Technology | explainlikeimfive | {
"a_id": [
"dd5hy9s",
"dd5i8qq"
],
"text": [
"The vocals were never there in the first place. You're just hearing the instrumental track before the vocals were mixed in. The little bit of vocals you're hearing is because a mic for an instrument is picking it up. Technically, it *is* possible to take vocals out by mixing in just the vocal track at the same exact volume, and invert the phase of the waveform. This causes a cancelation in the wave, and you *almost* don't hear the vocal track anymore, which leaves the instrumental. You'll see this referred to as \"DIY a capella or instrumental.\" In most cases, if the a capella track is available from the studio, then more than likely the instrumental track is available from the studio, also. So you would rarely have a case where you need to do the DIY method for vocals. Edit: Explained better.",
"When songs are recorded professionally, every instruement and voice is usually recorded as a separate audio track, for easy editing. It's only combined into one when it's time to put it up for sale. In this case, you just remove the one track that's the vocalist and you're done. They usually leave backup vocals intact because it 'sounds empty' without em. When songs have vocals removed *after they've been released* (as in, by some random person editing the audio file that came off the CD or online music store), they have a few filtering options which work... okayish. When songs are originally created, different instruments are \"panned\" more to the left or right speaker, which gives you a nicer-sounding stereo experience. Usually however, they leave the main vocals right in the middle. By filtering out just the \"middle\" audio and leaving what was going more to the sides, they can cut the vocals. Kinda. Usually they miss stuff, and if the instruements aren't panned the way the filter expects it might cut something else."
],
"score": [
7,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
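
Both tricks described in the answers above - inverting an a cappella or instrumental against the full mix, and cancelling whatever is panned dead centre - are just sample-wise subtraction. A minimal numpy sketch, with random arrays standing in for a real stereo recording:

```python
import numpy as np

# Placeholder data standing in for one second of a real song at 44.1 kHz.
# In practice you would load the left/right channels from a WAV file.
left = np.random.randn(44100)
right = np.random.randn(44100)

# "DIY instrumental": anything identical in both channels (usually the lead
# vocal, which is panned to the centre) cancels out when one channel is
# subtracted from the other. Side-panned instruments survive, which is why
# the result sounds hollow rather than like a clean instrumental.
center_cancelled = left - right

# "DIY a cappella" works the same way when an official instrumental exists:
# full_mix - instrumental leaves (approximately) just the vocal, provided both
# tracks are sample-aligned and at exactly the same volume.
```
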
5ravno | How come websites like Google and Amazon are never down "for maintenance?" | Technology | explainlikeimfive | {
"a_id": [
"dd5s7wx",
"dd5s4ir",
"dd5rjuv",
"dd6o58w"
],
"text": [
"Think of a big website like it's a house with an address: 200 Web Street. Everyone knows to go to 200 Web Street to get to, say, the Amazon family's house. The Amazon family's house is getting pretty worn out, though, and lots of things are broken. And the family wants to make some pretty major design changes. They need to build a new house, but they don't want to miss a friend coming to visit while they're putting up that new house. (They have friends coming by pretty non-stop! They're a popular family!) So they build a house down the block at 204 Web Street, and don't tell anyone about it. While that house is being built, everyone's still coming to 200 Web Street. Here's the secret: when the new house is built and the Amazon family is all moved in and ready for people to start visiting them at the new house, _they put the old house number on the new house!!!_ Now when all their friends visit 200 Web Street, they all show up at the new house, _not_ the old house. (See, their friends only know how to get to the house via an app -- kinda like being led by Google Maps in the car -- so they just go to whatever house has the right address. Silly friends!) Now that they're moved into the new house and all their friends are visiting them there, the Amazon family can tear down the old, broken house without missing any visits from their friends! Yay! The End",
"A friend of mine works for Facebook, and for a while she was on the team that handles their backup systems. They have \"transparent fail-over\" setups, if the main servers go down the backup ones can *immediately* kick in and users notice no difference. Facebook classifies their server incidents from Sev5 to Sev1, with Sev1 being the worst*, with \"the site doesn't work.\" Sev5-2 happen with varying regularity, but Sev1 is almost unheard of since the backup of the backup would have to break. They also, like almost all tech companies, use the \"testing, staging, production\" server setup. (Or an even fancier version I dunno all the secrets). Basically, this is when there's three versions of your website. The public can only access production. Testing is where you build and test, of course. Staging and Production shoudl ALMOST always be identical. You put your \"this should work\" code on Staging first, test it like crazy, then copy-paste over to Production so you can be sure it's identical and functional. \\* = there was an incident referred to as \"Sev 0, we broke the entire internet\" jokingly, but that's another story",
"Because they have thousands of servers and the content you receive is served by the ones that are \"up\". If they need to do maintenance somewhere they have infrastructure to replace it before taking it down.",
"If you're careful enough and have enough machines you can just do maintenance on some of the machines while others are running."
],
"score": [
344,
15,
4,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
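
The house-number swap in the top answer above is, in software terms, an atomic pointer switch: requests always go to whatever backend the "address" currently points at, so the new version can be built and tested off to the side before traffic is flipped over. A tiny sketch with invented backend names:

```python
class Router:
    """Sends every request to whichever backend is currently 'live'."""

    def __init__(self, backend: str):
        self.live = backend

    def handle(self, request: str) -> str:
        return f"{self.live} answered {request}"

    def switch_to(self, new_backend: str) -> None:
        # One atomic assignment: traffic moves to the new version instantly,
        # with no window in which nobody is serving requests.
        self.live = new_backend

router = Router("old-servers-v1")
print(router.handle("GET /"))       # served by the old fleet
router.switch_to("new-servers-v2")  # the upgrade happened out of sight
print(router.handle("GET /"))       # served by the new fleet; users saw no downtime
```
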
73j5i7 | Why can’t video games make walking/running look realistic? | Video games have seen major innovations in recent years, with incredible graphics, new forms of interaction like VR, and more complicated algorithms. But while playing FIFA 18, I noticed that players running around still have the illusion that their feet are sliding around. It seems that their feet are never actually planted, and the game just coordinates their movements and positions rather than have the player realistically run without the feet sliding around when they change direction. tl;dr: Why can’t video games like FIFA, 2K, Madden make running look realistic? | Technology | explainlikeimfive | {
"a_id": [
"dnqpp3y",
"dnqplba"
],
"text": [
"A lot of movement animation is captured live with peoples' actual body movements ([Example]( URL_0 )) and then these are applied to the character's animation when moving in-game. The problem comes because many states/actions can make the player run faster/slower (such as sprinting vs. walking, as well as changing directions in many different angles) and the studio can only capture so many animations to account for that. After that it comes down to using mathematics to calculate where body parts should be, or trying to find the \"most appropriate\" animation for whatever action a player is doing and a lot of the time this isn't perfect, and thus things like running animations are not perfect with the speed that the feet move in, etc.",
"there are generally 2 ways to do this: - Physics simulation: use the momentum of the character to calculate where the legs/feet should go in order to provide the proper change in direction. In real time. - Create a seperate animation for every single possible leg position, body position and movement change that the physical body is capable of doing, as well as create a system that determines the correct animation to play. (Pretty much all games use a severely cut down version of this, as animations take a long time to make) A lot of work for something most people wont care about."
],
"score": [
10,
3
],
"text_urls": [
[
"https://youtu.be/Rpr1SIvL4Gg"
],
[]
]
} | [
"url"
] | [
"url"
] |
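
The foot-sliding the question describes falls out of the second approach above: the engine picks the pre-recorded clip whose captured speed is closest to the character's current speed, and any mismatch shows up as feet skating over the ground. A toy clip selector, with made-up clip speeds:

```python
# Captured clips and the ground speed (m/s) each was recorded at -- invented numbers.
CLIPS = {"idle": 0.0, "walk": 1.5, "jog": 3.5, "sprint": 7.0}

def pick_clip(current_speed: float) -> str:
    """Choose the clip whose recorded speed is closest to the character's speed."""
    return min(CLIPS, key=lambda name: abs(CLIPS[name] - current_speed))

speed = 2.6  # the character is currently moving at 2.6 m/s
clip = pick_clip(speed)
mismatch = abs(CLIPS[clip] - speed)
print(f"Playing '{clip}': feet animate for {CLIPS[clip]} m/s while the body covers {speed} m/s")
print(f"A mismatch of {mismatch:.1f} m/s is what reads on screen as foot sliding")

# Real engines blend neighbouring clips and warp their playback rate to shrink
# this mismatch, but with a finite clip library it never disappears entirely.
```
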
73l5cq | Why do some websites prevent me from putting a number at the start of my username? | I've noticed a handful of websites I sign up for don't let me start my username with a number. Does it have something to do with attack prevention? | Technology | explainlikeimfive | {
"a_id": [
"dnra4o0",
"dnre59z"
],
"text": [
"Many programming languages and systems require certain types of identifiers to start with a letter, and programmers often blindly copy these conventions even though there is no technical need for that. Since in a well designed database your username will always be treated as data and quoted as such in queries, there is no technical reason for them not to allow numbers. An exception is if the username is supposed to be used in an e-mail address - I think e-mail addresses *might* have a \"letter first\" requirement but I'd have to check the RFC. For Unix usernames, the reason behind the restrictions was that you can easily distinguish a number (e.g. a numerical user ID) from a username by looking at the first character. This is relevant because they are command that accept both the name and the ID in the same place. This does not apply to modern web applications.",
"Old Unix systems required a letter for the first character. People shy away to avoid breaking legacy code."
],
"score": [
21,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
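
The convention the answers above describe is easy to see in a validator: a Unix-style rule rejects a leading digit so a name can never be confused with a numeric user ID, while a web signup form has no such technical need as long as the username is passed to the database as a bound parameter rather than pasted into the query text. Both patterns below are illustrative, not any particular site's actual rules.

```python
import re

UNIX_STYLE = re.compile(r"^[a-z_][a-z0-9_-]*$")  # must start with a letter or underscore
WEB_STYLE = re.compile(r"^[A-Za-z0-9_]{3,20}$")  # digits allowed anywhere, including first

for name in ("alice", "4ndy", "_daemon"):
    print(name, "unix:", bool(UNIX_STYLE.match(name)), "web:", bool(WEB_STYLE.match(name)))

# Treating the username as data (a bound parameter) is what keeps a digit-first
# name harmless in SQL; illustrative sqlite-style call:
#   cursor.execute("SELECT id FROM users WHERE username = ?", (name,))
```
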
73lgga | How did we "solve" the Y2K problem? Was our solution completely thorough? | Technology | explainlikeimfive | {
"a_id": [
"dnr6ale",
"dnrfkfn",
"dnr5ese",
"dnrb774",
"dnr5olh",
"dnrvbc8",
"dnr7okf",
"dnr7xxg",
"dnr71lp"
],
"text": [
"We didn't \"solve it\" as much as \"address it\" one piece of software at a time, by whatever organization owned/operated the affected system. Critical systems were mostly likely prioritized, and I'm sure a lot of systems were just left broken, to be dealt with as needed. (That's what my company did.) The Big Scary Y2K stuff that people were worried about had at least some potential to happen, but, thankfully, nothing dramatic happened. To some degree, being scared about it led people to deal with it, which can look like The Boy Who Cried Wolf. It is kind of the definition of a thankless job.",
"I worked on Y2K. Hell, anyone in IT at the time worked on Y2K. In reality, it was a lot of work to find and fix any programs that used a date in a 6 digit format (MMDDYY) and change it to an 8 digit format (MMDDYYYY). One issue was that software is not always well documented and lots of code doesn't get touched after it's initially written. That made it hard to find all the dates in millions and millions of lines of code. Another issue was that there were many companies using legacy software, i.e. software that had been purchased/created 20 years previous and no one had looked at it or touched it in a long time. Most of the time, companies just replaced that software with something newer and supported, but there were companies who would not replace the software because it was so customized to their exact needs. Those companies hired programmers to review every line of code and find any dates. Y2K also resulted in better programming discipline. Instead of embedding constants (dates) into code, more and more shops insisted on using variables declared at the top of the program and then assigning values to those variables as the first step of the program, which made the code easy to maintain. In my case, I worked for a subsidiary of GE Capital. My boss made the entire IT department work on New Year's Eve in case we'd missed anything in our software. We didn't. He was a dick.",
"There was never really a problem. It was all sensationalistic journalism combined with a few people 'in the know' not really knowing. anything, almost everyone in IT at the time knew it was not going to be an issue The 'next' Y2k will be in about 20 years, but that will be another non issue to those who know",
"We solved it by giving us more room to write the date and the solution was not complete. There are many different ways to write down the date. In the early day of computing space to write things down was expensive so lazy programmers and those who didn't expect their creations to still be around in 2000 simply just wrote down the last 2 digits of the year. Humans have used this space saving format informally for quite some time. Like you sometimes say that something happened in '74 or in the '80s and it is assumed that it is clear from context that you meant **19**74 and the **19**80s. With humans one could expect, that when they got a date like '00 that they would understand from context that this now meant 2000 rather 1900. Computer weren't smart enough to make that guess. They were programmed to assume that the first two digits were always 19 and they went with that. At some point people realized that the programs written in decades past were still around and that computers that used this method would still be around by the years 2000 and that this might cause them to act as if they were a hundred years in the past and cause all sorts of bugs. The media liked to paint pictures of airplanes falling out of the sky but professionals were concerned with much more mundane issues. A world wide effort to find and fix the problem in systems that had it was undertaken. For the most part this was a huge success. The general public which had been promised Armageddon by the media was quite miffed that the problem had been fixed without causing he fall of human civilization and declared that the warnings must have been wrong rather than that engineers had worked hard to see that it didn't come to pass. So the Y2K problem was mostly averted. Here and there some systems that had been overlooked created some funny error messages but nothing big happened. However keeping accurate date/time records in computers is a very, very complicated issues (mostly because the way we tell time as humans is much, much less regular than the average person thinks) and there are a number of problems similar to the Y2K problem that will become an issue at some point. UNIX based system tell time by counting how many seconds have passed since the beginning of January 1st 1970 UTC (called the unix epoch) and then they take this number of seconds and use a complicated system involving looking up timezone and adding anything from leap days to the occasional leap second to figure out what time and date that is in the local time. Currently we are at 1506870661 seconds since the Unix Epoch. The space the computers store that number of seconds in has room for up to 2147483647 seconds, this corresponds to a date in mid January 2038. When that time is reached computers will try to add another second to that number, not have enough room and end up creating a date with -2147483648 which is as far in the past of 1970 as 1938 is in its future. This will obviously result in problems for anyone trying to make use of these dates. Already some computersystems that work with stuff more than 20 years in the future, may encounter the problem. Everyone hopes that all the old system that still use this way of telling time will no longer be in use 20 years from now or will have been upgraded with patches. *fun fact*: Don't try to set you phone a electronic toy to any date before 1970 or after 2037 to avoid risking breaking it. Nobody expect these things to last that long so they are not currently able to handle such dates. 
 There are a number of similar problems in many different systems that handle dates, and almost all of them will at some point run out of space to write down the date. Almost nobody who was involved in fixing the Y2K mess by switching from 2 digits for the year to 4 digits seemed to care about what would happen on December 31st 9999, when we would end up needing 5 digits for the year, if we make it that long and keep using their fix until then. We are mostly just pushing the problem down for future generations of humanity to fix, if they manage to survive that long.",
"I remember that as we approached the Year 2000, there was a programme on TV about it. A presenter stood in front of a huge map of the world and was going to report on all the huge issues occurring as they happened. She kept having to go say, \"Nothing to report so far...\"",
"If you want to learn about the next Y2K-ish problem it's called the End of Unix Time. This video by Computerphil really sums it up in an easy to understand way. URL_0",
"An old friend said his dad was part of this and was working none stop on servers changing the software before the clock on the servers hit the next year. He said his dad had to work non stop to change a bunch of them because they all needed updated before the new year or a bunch of medical records would be ruined. His dad also liked to take acid so who knows what really happened Edit: Grammer",
"As far as the solution being completely thorough, probably not. A common way to store dates/times is to store the number of seconds since January 1st, 1970. Well, eventually that number will get too big for the storage chosen at the time. For example, we may have an issue in 2038 - see URL_0 . Luckily that's far off enough than none of us programmers will have to deal with it, which is the same thing people said about the year 2000 in the 70s. :)",
"The proper way to view that problem (and similar problems in future) is as technical debt - many organizations had taken a shortcut in many pieces of software that would work for dates up to 2000, and so when that came closer, that debt had to be paid one way or another if they want these systems to function correctly. It's worth noting that many of them would function almost correctly anyways, with just cosmetic problems - e.g. showing a 1900 in some places where you can just instruct the employees looking at it to treat it as year 2000 if you don't want to bother updating the software. Nothing much happened on the actual 1st january 2000 because a lot happened before that - pretty much everybody who had accumulated such debt and considered those systems important spent a bunch of time and effort to correct their stuff. There's no single solution, because there's no single problem but a class of very many similar shortcuts/design flaws/intentional tradeoffs. I wouldn't be surprised if there'd be some systems still in operation where they simply replaced all occurrences of a hardcoded \"19\"+two-digit year with a hardcoded \"20\"+two-digit year; so that they'd have to repeat the change in year 2100 if that system still is alive at the time."
],
"score": [
111,
70,
45,
26,
23,
4,
4,
4,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[
"https://www.youtube.com/watch?v=QJQ691PTKsA"
],
[],
[
"https://en.wikipedia.org/wiki/Year_2038_problem"
],
[]
]
} | [
"url"
] | [
"url"
] |
|
73m6la | Why do newly released blockbuster films only circulate on the internet when they have been released on DVD? | I find it strange that when a film is released to cinemas worldwide, it never leaves their systems. It doesn't get hacked. Angry employees don't leak it. It is tightly contained within the bounds of where the filmmakers want it and quite literally does not leave that spot until they want it to, and I'm not quite sure how they achieve this. | Technology | explainlikeimfive | {
"a_id": [
"dnrc57j",
"dnravuz",
"dnrodvn",
"dnrfkeg"
],
"text": [
"First of all, cinemas don't get their movies on DVDs (or Bluray for that matter). They used to get movies on movie reels back in the days of analog film. These days cinemas receive a Digital Cinema Package, or [DCP]( URL_0 ), which is basically a computer hard drive in a special enclosure which contains the movie. So, as long as a movie is still only in theaters, versions of it in the wild on optical media should be few and far between. A proverbial handful of 'screeners' is likely in the hands of journalists and critics around the world though, in the form of DVDs usually, for reviewing purposes. Security on these is really, really strict. They are usually unique for each person that receives one and contains secret watermarks and other security features. As for the previously mentioned DCP's: the format the movies are in on these devices is very unusual, consisting of hundreds of thousands of individual files (as mentioned in the linked article). Besides that, it probably has additional security to prevent copying, if it can be done at all. So, in a nutshell, that's why we have to wait until the retail release for good quality releases of movies.",
"Sometimes they are, but it's rare. Before DVD release pirates have to rely on cams, which are of questionable quality. Also, it's not really worth the time and effort to steal one of the theaters copies. Getting caught is too easy.",
"These days, movies are all sent digitally to movie theaters. The data is strongly encrypted in such a way that it can only be played back on a specific projector. the enclosures are tamper-proof and log every time the film is played. Even if somebody were to steal one, it'd be nearly impossible to get the movie off of it. Beyond that, movie pirates don't want to commit actual felonies by breaking into a theater and stealing one in order to get movies. Piracy is only copyright infringement, a civil issue, something you can't go to jail for.",
"One thing I haven't seen mentioned is that studios started watermarking the videos they send out, so if it gets leaked they know which copy it was."
],
"score": [
22,
6,
5,
4
],
"text_urls": [
[
"https://www.quora.com/What-is-the-video-format-played-in-digital-cinemas"
],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
73p6uv | Why is Windows 8.1 Considered the 'Worst' Microsoft OS? | Technology | explainlikeimfive | {
"a_id": [
"dns1bdm",
"dns1e9k",
"dns2ffp",
"dns2za7",
"dns3its",
"dns7lay",
"dns59hf",
"dns6wmq"
],
"text": [
"It's far from the worst, it's just the most recent unpopular one. Windows ME is undeniably the biggest turd ever shat out by Redmond. While Win8 wasn't perfect, a lot of the are just the result of people resisting change.",
"So coming from someone who's used XP, 7, 8, 8.1, and 10 and knows their way around technology pretty competently (CSIT major), I'll say this: it feels like Microsoft tried to merge a tablet OS and a desktop OS and got the absolute worst of both worlds. Confusing touch interface for touch devices, and confusing click interface for non-touch devices (e.g. having to swipe your cursor to the top right to access the \"charms\" menu instead of just having a proper start menu). It was just an absolute mess and I feel like they've struck a better balance for 10: don't force people to use the touch interface if they don't want to, but have it there as an option. That said, I've heard much worse about ME, but I never got a chance to use it and will likely never want to.",
"Maybe worst in your lifetime... Let me tell you the legend of Windows Millennium Edition...",
"Because they were born recently and never had to use Windows 8, Windows Vista and Windows Me? Windows 8.1 was a significant improvement over 8.",
"The only reason Windows 8.1 would be considered the \"worst\" is because someone had not heard of or used Windows Me. That operating system was terrible. It crashed all the time, nothing worked with it, and it was simply terrible. There were a few features that were incorporated into it that survived into XP, but it mostly sucked.",
"It's not. Windows Me and Vista are more disliked. 8.1 was a way for MS to appeal to the desktop market after 8 was extremely tablet/mobile focused.",
"Have had my MCSE since Win 2000 and have been using Windows since 95. Without getting into technical specifics, first outings are usually the worst form of a Microsoft product. 95 was a better iteration of 3.1. 98 was problematic and hated by most. 98 SE was a refinement. ME was a bad OS at the beginning but was later patched to be “okay.” Windows XP was a refinement of ME and Windows 2000 Pro. A lot of people forget that XP wasn’t well received at the beginning, but stayed around so long that it was patched to be very good. Vista suffered like ME, being the first of the “new wave” of Microsoft OS. Window 7 was the refinement. Windows 8 was the first of another wave and 8.1 made it much better. Windows 10 is actually built on the lessons learned of XP through 8.1.",
"It's not. Windows 8 is significantly worse than Windows 8.1. I have not used ME but based on all reports, it is by far the worst OS ever released. Not worst Microsoft OS, just worst OS at all."
],
"score": [
30,
25,
14,
11,
8,
5,
5,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |